Jan 27 07:36:02 np0005597378 kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 27 07:36:02 np0005597378 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 27 07:36:02 np0005597378 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 07:36:02 np0005597378 kernel: BIOS-provided physical RAM map:
Jan 27 07:36:02 np0005597378 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 27 07:36:02 np0005597378 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 27 07:36:02 np0005597378 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 27 07:36:02 np0005597378 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 27 07:36:02 np0005597378 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 27 07:36:02 np0005597378 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 27 07:36:02 np0005597378 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 27 07:36:02 np0005597378 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 27 07:36:02 np0005597378 kernel: NX (Execute Disable) protection: active
Jan 27 07:36:02 np0005597378 kernel: APIC: Static calls initialized
Jan 27 07:36:02 np0005597378 kernel: SMBIOS 2.8 present.
Jan 27 07:36:02 np0005597378 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 27 07:36:02 np0005597378 kernel: Hypervisor detected: KVM
Jan 27 07:36:02 np0005597378 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 27 07:36:02 np0005597378 kernel: kvm-clock: using sched offset of 6198710804 cycles
Jan 27 07:36:02 np0005597378 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 27 07:36:02 np0005597378 kernel: tsc: Detected 2799.998 MHz processor
Jan 27 07:36:02 np0005597378 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 27 07:36:02 np0005597378 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 27 07:36:02 np0005597378 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 27 07:36:02 np0005597378 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 27 07:36:02 np0005597378 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 27 07:36:02 np0005597378 kernel: Using GB pages for direct mapping
Jan 27 07:36:02 np0005597378 kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 27 07:36:02 np0005597378 kernel: ACPI: Early table checksum verification disabled
Jan 27 07:36:02 np0005597378 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 27 07:36:02 np0005597378 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 07:36:02 np0005597378 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 07:36:02 np0005597378 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 07:36:02 np0005597378 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 27 07:36:02 np0005597378 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 07:36:02 np0005597378 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 07:36:02 np0005597378 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 27 07:36:02 np0005597378 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 27 07:36:02 np0005597378 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 27 07:36:02 np0005597378 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 27 07:36:02 np0005597378 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 27 07:36:02 np0005597378 kernel: No NUMA configuration found
Jan 27 07:36:02 np0005597378 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 27 07:36:02 np0005597378 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 27 07:36:02 np0005597378 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 27 07:36:02 np0005597378 kernel: Zone ranges:
Jan 27 07:36:02 np0005597378 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 27 07:36:02 np0005597378 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 27 07:36:02 np0005597378 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 27 07:36:02 np0005597378 kernel:  Device   empty
Jan 27 07:36:02 np0005597378 kernel: Movable zone start for each node
Jan 27 07:36:02 np0005597378 kernel: Early memory node ranges
Jan 27 07:36:02 np0005597378 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 27 07:36:02 np0005597378 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 27 07:36:02 np0005597378 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 27 07:36:02 np0005597378 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 27 07:36:02 np0005597378 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 27 07:36:02 np0005597378 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 27 07:36:02 np0005597378 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 27 07:36:02 np0005597378 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 27 07:36:02 np0005597378 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 27 07:36:02 np0005597378 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 27 07:36:02 np0005597378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 27 07:36:02 np0005597378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 27 07:36:02 np0005597378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 27 07:36:02 np0005597378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 27 07:36:02 np0005597378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 27 07:36:02 np0005597378 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 27 07:36:02 np0005597378 kernel: TSC deadline timer available
Jan 27 07:36:02 np0005597378 kernel: CPU topo: Max. logical packages:   8
Jan 27 07:36:02 np0005597378 kernel: CPU topo: Max. logical dies:       8
Jan 27 07:36:02 np0005597378 kernel: CPU topo: Max. dies per package:   1
Jan 27 07:36:02 np0005597378 kernel: CPU topo: Max. threads per core:   1
Jan 27 07:36:02 np0005597378 kernel: CPU topo: Num. cores per package:     1
Jan 27 07:36:02 np0005597378 kernel: CPU topo: Num. threads per package:   1
Jan 27 07:36:02 np0005597378 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 27 07:36:02 np0005597378 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 27 07:36:02 np0005597378 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 27 07:36:02 np0005597378 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 27 07:36:02 np0005597378 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 27 07:36:02 np0005597378 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 27 07:36:02 np0005597378 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 27 07:36:02 np0005597378 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 27 07:36:02 np0005597378 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 27 07:36:02 np0005597378 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 27 07:36:02 np0005597378 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 27 07:36:02 np0005597378 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 27 07:36:02 np0005597378 kernel: Booting paravirtualized kernel on KVM
Jan 27 07:36:02 np0005597378 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 27 07:36:02 np0005597378 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 27 07:36:02 np0005597378 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 27 07:36:02 np0005597378 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 27 07:36:02 np0005597378 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 07:36:02 np0005597378 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 27 07:36:02 np0005597378 kernel: random: crng init done
Jan 27 07:36:02 np0005597378 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 27 07:36:02 np0005597378 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 27 07:36:02 np0005597378 kernel: Fallback order for Node 0: 0 
Jan 27 07:36:02 np0005597378 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 27 07:36:02 np0005597378 kernel: Policy zone: Normal
Jan 27 07:36:02 np0005597378 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 27 07:36:02 np0005597378 kernel: software IO TLB: area num 8.
Jan 27 07:36:02 np0005597378 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 27 07:36:02 np0005597378 kernel: ftrace: allocating 49417 entries in 194 pages
Jan 27 07:36:02 np0005597378 kernel: ftrace: allocated 194 pages with 3 groups
Jan 27 07:36:02 np0005597378 kernel: Dynamic Preempt: voluntary
Jan 27 07:36:02 np0005597378 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 27 07:36:02 np0005597378 kernel: rcu: 	RCU event tracing is enabled.
Jan 27 07:36:02 np0005597378 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 27 07:36:02 np0005597378 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 27 07:36:02 np0005597378 kernel: 	Rude variant of Tasks RCU enabled.
Jan 27 07:36:02 np0005597378 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 27 07:36:02 np0005597378 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 27 07:36:02 np0005597378 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 27 07:36:02 np0005597378 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 07:36:02 np0005597378 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 07:36:02 np0005597378 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 07:36:02 np0005597378 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 27 07:36:02 np0005597378 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 27 07:36:02 np0005597378 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 27 07:36:02 np0005597378 kernel: Console: colour VGA+ 80x25
Jan 27 07:36:02 np0005597378 kernel: printk: console [ttyS0] enabled
Jan 27 07:36:02 np0005597378 kernel: ACPI: Core revision 20230331
Jan 27 07:36:02 np0005597378 kernel: APIC: Switch to symmetric I/O mode setup
Jan 27 07:36:02 np0005597378 kernel: x2apic enabled
Jan 27 07:36:02 np0005597378 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 27 07:36:02 np0005597378 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 27 07:36:02 np0005597378 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 27 07:36:02 np0005597378 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 27 07:36:02 np0005597378 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 27 07:36:02 np0005597378 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 27 07:36:02 np0005597378 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 27 07:36:02 np0005597378 kernel: Spectre V2 : Mitigation: Retpolines
Jan 27 07:36:02 np0005597378 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 27 07:36:02 np0005597378 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 27 07:36:02 np0005597378 kernel: RETBleed: Mitigation: untrained return thunk
Jan 27 07:36:02 np0005597378 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 27 07:36:02 np0005597378 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 27 07:36:02 np0005597378 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 27 07:36:02 np0005597378 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 27 07:36:02 np0005597378 kernel: x86/bugs: return thunk changed
Jan 27 07:36:02 np0005597378 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 27 07:36:02 np0005597378 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 27 07:36:02 np0005597378 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 27 07:36:02 np0005597378 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 27 07:36:02 np0005597378 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 27 07:36:02 np0005597378 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 27 07:36:02 np0005597378 kernel: Freeing SMP alternatives memory: 40K
Jan 27 07:36:02 np0005597378 kernel: pid_max: default: 32768 minimum: 301
Jan 27 07:36:02 np0005597378 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 27 07:36:02 np0005597378 kernel: landlock: Up and running.
Jan 27 07:36:02 np0005597378 kernel: Yama: becoming mindful.
Jan 27 07:36:02 np0005597378 kernel: SELinux:  Initializing.
Jan 27 07:36:02 np0005597378 kernel: LSM support for eBPF active
Jan 27 07:36:02 np0005597378 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 27 07:36:02 np0005597378 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 27 07:36:02 np0005597378 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 27 07:36:02 np0005597378 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 27 07:36:02 np0005597378 kernel: ... version:                0
Jan 27 07:36:02 np0005597378 kernel: ... bit width:              48
Jan 27 07:36:02 np0005597378 kernel: ... generic registers:      6
Jan 27 07:36:02 np0005597378 kernel: ... value mask:             0000ffffffffffff
Jan 27 07:36:02 np0005597378 kernel: ... max period:             00007fffffffffff
Jan 27 07:36:02 np0005597378 kernel: ... fixed-purpose events:   0
Jan 27 07:36:02 np0005597378 kernel: ... event mask:             000000000000003f
Jan 27 07:36:02 np0005597378 kernel: signal: max sigframe size: 1776
Jan 27 07:36:02 np0005597378 kernel: rcu: Hierarchical SRCU implementation.
Jan 27 07:36:02 np0005597378 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 27 07:36:02 np0005597378 kernel: smp: Bringing up secondary CPUs ...
Jan 27 07:36:02 np0005597378 kernel: smpboot: x86: Booting SMP configuration:
Jan 27 07:36:02 np0005597378 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 27 07:36:02 np0005597378 kernel: smp: Brought up 1 node, 8 CPUs
Jan 27 07:36:02 np0005597378 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 27 07:36:02 np0005597378 kernel: node 0 deferred pages initialised in 10ms
Jan 27 07:36:02 np0005597378 kernel: Memory: 7763824K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618356K reserved, 0K cma-reserved)
Jan 27 07:36:02 np0005597378 kernel: devtmpfs: initialized
Jan 27 07:36:02 np0005597378 kernel: x86/mm: Memory block size: 128MB
Jan 27 07:36:02 np0005597378 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 27 07:36:02 np0005597378 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 27 07:36:02 np0005597378 kernel: pinctrl core: initialized pinctrl subsystem
Jan 27 07:36:02 np0005597378 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 27 07:36:02 np0005597378 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 27 07:36:02 np0005597378 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 27 07:36:02 np0005597378 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 27 07:36:02 np0005597378 kernel: audit: initializing netlink subsys (disabled)
Jan 27 07:36:02 np0005597378 kernel: audit: type=2000 audit(1769517360.960:1): state=initialized audit_enabled=0 res=1
Jan 27 07:36:02 np0005597378 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 27 07:36:02 np0005597378 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 27 07:36:02 np0005597378 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 27 07:36:02 np0005597378 kernel: cpuidle: using governor menu
Jan 27 07:36:02 np0005597378 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 27 07:36:02 np0005597378 kernel: PCI: Using configuration type 1 for base access
Jan 27 07:36:02 np0005597378 kernel: PCI: Using configuration type 1 for extended access
Jan 27 07:36:02 np0005597378 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 27 07:36:02 np0005597378 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 27 07:36:02 np0005597378 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 27 07:36:02 np0005597378 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 27 07:36:02 np0005597378 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 27 07:36:02 np0005597378 kernel: Demotion targets for Node 0: null
Jan 27 07:36:02 np0005597378 kernel: cryptd: max_cpu_qlen set to 1000
Jan 27 07:36:02 np0005597378 kernel: ACPI: Added _OSI(Module Device)
Jan 27 07:36:02 np0005597378 kernel: ACPI: Added _OSI(Processor Device)
Jan 27 07:36:02 np0005597378 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 27 07:36:02 np0005597378 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 27 07:36:02 np0005597378 kernel: ACPI: Interpreter enabled
Jan 27 07:36:02 np0005597378 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 27 07:36:02 np0005597378 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 27 07:36:02 np0005597378 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 27 07:36:02 np0005597378 kernel: PCI: Using E820 reservations for host bridge windows
Jan 27 07:36:02 np0005597378 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 27 07:36:02 np0005597378 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 27 07:36:02 np0005597378 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [3] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [4] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [5] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [6] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [7] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [8] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [9] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [10] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [11] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [12] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [13] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [14] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [15] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [16] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [17] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [18] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [19] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [20] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [21] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [22] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [23] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [24] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [25] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [26] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [27] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [28] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [29] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [30] registered
Jan 27 07:36:02 np0005597378 kernel: acpiphp: Slot [31] registered
Jan 27 07:36:02 np0005597378 kernel: PCI host bridge to bus 0000:00
Jan 27 07:36:02 np0005597378 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 27 07:36:02 np0005597378 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 27 07:36:02 np0005597378 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 27 07:36:02 np0005597378 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 27 07:36:02 np0005597378 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 27 07:36:02 np0005597378 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 27 07:36:02 np0005597378 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 27 07:36:02 np0005597378 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 27 07:36:02 np0005597378 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 27 07:36:02 np0005597378 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 27 07:36:02 np0005597378 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 27 07:36:02 np0005597378 kernel: iommu: Default domain type: Translated
Jan 27 07:36:02 np0005597378 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 27 07:36:02 np0005597378 kernel: SCSI subsystem initialized
Jan 27 07:36:02 np0005597378 kernel: ACPI: bus type USB registered
Jan 27 07:36:02 np0005597378 kernel: usbcore: registered new interface driver usbfs
Jan 27 07:36:02 np0005597378 kernel: usbcore: registered new interface driver hub
Jan 27 07:36:02 np0005597378 kernel: usbcore: registered new device driver usb
Jan 27 07:36:02 np0005597378 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 27 07:36:02 np0005597378 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 27 07:36:02 np0005597378 kernel: PTP clock support registered
Jan 27 07:36:02 np0005597378 kernel: EDAC MC: Ver: 3.0.0
Jan 27 07:36:02 np0005597378 kernel: NetLabel: Initializing
Jan 27 07:36:02 np0005597378 kernel: NetLabel:  domain hash size = 128
Jan 27 07:36:02 np0005597378 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 27 07:36:02 np0005597378 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 27 07:36:02 np0005597378 kernel: PCI: Using ACPI for IRQ routing
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 27 07:36:02 np0005597378 kernel: vgaarb: loaded
Jan 27 07:36:02 np0005597378 kernel: clocksource: Switched to clocksource kvm-clock
Jan 27 07:36:02 np0005597378 kernel: VFS: Disk quotas dquot_6.6.0
Jan 27 07:36:02 np0005597378 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 27 07:36:02 np0005597378 kernel: pnp: PnP ACPI init
Jan 27 07:36:02 np0005597378 kernel: pnp: PnP ACPI: found 5 devices
Jan 27 07:36:02 np0005597378 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 27 07:36:02 np0005597378 kernel: NET: Registered PF_INET protocol family
Jan 27 07:36:02 np0005597378 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 27 07:36:02 np0005597378 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 27 07:36:02 np0005597378 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 27 07:36:02 np0005597378 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 27 07:36:02 np0005597378 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 27 07:36:02 np0005597378 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 27 07:36:02 np0005597378 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 27 07:36:02 np0005597378 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 27 07:36:02 np0005597378 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 27 07:36:02 np0005597378 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 27 07:36:02 np0005597378 kernel: NET: Registered PF_XDP protocol family
Jan 27 07:36:02 np0005597378 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 27 07:36:02 np0005597378 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 27 07:36:02 np0005597378 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 27 07:36:02 np0005597378 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 27 07:36:02 np0005597378 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 27 07:36:02 np0005597378 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 27 07:36:02 np0005597378 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 88872 usecs
Jan 27 07:36:02 np0005597378 kernel: PCI: CLS 0 bytes, default 64
Jan 27 07:36:02 np0005597378 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 27 07:36:02 np0005597378 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 27 07:36:02 np0005597378 kernel: ACPI: bus type thunderbolt registered
Jan 27 07:36:02 np0005597378 kernel: Trying to unpack rootfs image as initramfs...
Jan 27 07:36:02 np0005597378 kernel: Initialise system trusted keyrings
Jan 27 07:36:02 np0005597378 kernel: Key type blacklist registered
Jan 27 07:36:02 np0005597378 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 27 07:36:02 np0005597378 kernel: zbud: loaded
Jan 27 07:36:02 np0005597378 kernel: integrity: Platform Keyring initialized
Jan 27 07:36:02 np0005597378 kernel: integrity: Machine keyring initialized
Jan 27 07:36:02 np0005597378 kernel: Freeing initrd memory: 87956K
Jan 27 07:36:02 np0005597378 kernel: NET: Registered PF_ALG protocol family
Jan 27 07:36:02 np0005597378 kernel: xor: automatically using best checksumming function   avx       
Jan 27 07:36:02 np0005597378 kernel: Key type asymmetric registered
Jan 27 07:36:02 np0005597378 kernel: Asymmetric key parser 'x509' registered
Jan 27 07:36:02 np0005597378 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 27 07:36:02 np0005597378 kernel: io scheduler mq-deadline registered
Jan 27 07:36:02 np0005597378 kernel: io scheduler kyber registered
Jan 27 07:36:02 np0005597378 kernel: io scheduler bfq registered
Jan 27 07:36:02 np0005597378 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 27 07:36:02 np0005597378 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 27 07:36:02 np0005597378 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 27 07:36:02 np0005597378 kernel: ACPI: button: Power Button [PWRF]
Jan 27 07:36:02 np0005597378 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 27 07:36:02 np0005597378 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 27 07:36:02 np0005597378 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 27 07:36:02 np0005597378 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 27 07:36:02 np0005597378 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 27 07:36:02 np0005597378 kernel: Non-volatile memory driver v1.3
Jan 27 07:36:02 np0005597378 kernel: rdac: device handler registered
Jan 27 07:36:02 np0005597378 kernel: hp_sw: device handler registered
Jan 27 07:36:02 np0005597378 kernel: emc: device handler registered
Jan 27 07:36:02 np0005597378 kernel: alua: device handler registered
Jan 27 07:36:02 np0005597378 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 27 07:36:02 np0005597378 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 27 07:36:02 np0005597378 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 27 07:36:02 np0005597378 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 27 07:36:02 np0005597378 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 27 07:36:02 np0005597378 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 27 07:36:02 np0005597378 kernel: usb usb1: Product: UHCI Host Controller
Jan 27 07:36:02 np0005597378 kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 27 07:36:02 np0005597378 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 27 07:36:02 np0005597378 kernel: hub 1-0:1.0: USB hub found
Jan 27 07:36:02 np0005597378 kernel: hub 1-0:1.0: 2 ports detected
Jan 27 07:36:02 np0005597378 kernel: usbcore: registered new interface driver usbserial_generic
Jan 27 07:36:02 np0005597378 kernel: usbserial: USB Serial support registered for generic
Jan 27 07:36:02 np0005597378 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 27 07:36:02 np0005597378 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 27 07:36:02 np0005597378 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 27 07:36:02 np0005597378 kernel: mousedev: PS/2 mouse device common for all mice
Jan 27 07:36:02 np0005597378 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 27 07:36:02 np0005597378 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 27 07:36:02 np0005597378 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 27 07:36:02 np0005597378 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 27 07:36:02 np0005597378 kernel: rtc_cmos 00:04: registered as rtc0
Jan 27 07:36:02 np0005597378 kernel: rtc_cmos 00:04: setting system clock to 2026-01-27T12:36:01 UTC (1769517361)
Jan 27 07:36:02 np0005597378 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 27 07:36:02 np0005597378 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 27 07:36:02 np0005597378 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 27 07:36:02 np0005597378 kernel: usbcore: registered new interface driver usbhid
Jan 27 07:36:02 np0005597378 kernel: usbhid: USB HID core driver
Jan 27 07:36:02 np0005597378 kernel: drop_monitor: Initializing network drop monitor service
Jan 27 07:36:02 np0005597378 kernel: Initializing XFRM netlink socket
Jan 27 07:36:02 np0005597378 kernel: NET: Registered PF_INET6 protocol family
Jan 27 07:36:02 np0005597378 kernel: Segment Routing with IPv6
Jan 27 07:36:02 np0005597378 kernel: NET: Registered PF_PACKET protocol family
Jan 27 07:36:02 np0005597378 kernel: mpls_gso: MPLS GSO support
Jan 27 07:36:02 np0005597378 kernel: IPI shorthand broadcast: enabled
Jan 27 07:36:02 np0005597378 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 27 07:36:02 np0005597378 kernel: AES CTR mode by8 optimization enabled
Jan 27 07:36:02 np0005597378 kernel: sched_clock: Marking stable (1224002536, 157789133)->(1513438841, -131647172)
Jan 27 07:36:02 np0005597378 kernel: registered taskstats version 1
Jan 27 07:36:02 np0005597378 kernel: Loading compiled-in X.509 certificates
Jan 27 07:36:02 np0005597378 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 27 07:36:02 np0005597378 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 27 07:36:02 np0005597378 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 27 07:36:02 np0005597378 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 27 07:36:02 np0005597378 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 27 07:36:02 np0005597378 kernel: Demotion targets for Node 0: null
Jan 27 07:36:02 np0005597378 kernel: page_owner is disabled
Jan 27 07:36:02 np0005597378 kernel: Key type .fscrypt registered
Jan 27 07:36:02 np0005597378 kernel: Key type fscrypt-provisioning registered
Jan 27 07:36:02 np0005597378 kernel: Key type big_key registered
Jan 27 07:36:02 np0005597378 kernel: Key type encrypted registered
Jan 27 07:36:02 np0005597378 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 27 07:36:02 np0005597378 kernel: Loading compiled-in module X.509 certificates
Jan 27 07:36:02 np0005597378 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 27 07:36:02 np0005597378 kernel: ima: Allocated hash algorithm: sha256
Jan 27 07:36:02 np0005597378 kernel: ima: No architecture policies found
Jan 27 07:36:02 np0005597378 kernel: evm: Initialising EVM extended attributes:
Jan 27 07:36:02 np0005597378 kernel: evm: security.selinux
Jan 27 07:36:02 np0005597378 kernel: evm: security.SMACK64 (disabled)
Jan 27 07:36:02 np0005597378 kernel: evm: security.SMACK64EXEC (disabled)
Jan 27 07:36:02 np0005597378 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 27 07:36:02 np0005597378 kernel: evm: security.SMACK64MMAP (disabled)
Jan 27 07:36:02 np0005597378 kernel: evm: security.apparmor (disabled)
Jan 27 07:36:02 np0005597378 kernel: evm: security.ima
Jan 27 07:36:02 np0005597378 kernel: evm: security.capability
Jan 27 07:36:02 np0005597378 kernel: evm: HMAC attrs: 0x1
Jan 27 07:36:02 np0005597378 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 27 07:36:02 np0005597378 kernel: Running certificate verification RSA selftest
Jan 27 07:36:02 np0005597378 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 27 07:36:02 np0005597378 kernel: Running certificate verification ECDSA selftest
Jan 27 07:36:02 np0005597378 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 27 07:36:02 np0005597378 kernel: clk: Disabling unused clocks
Jan 27 07:36:02 np0005597378 kernel: Freeing unused decrypted memory: 2028K
Jan 27 07:36:02 np0005597378 kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 27 07:36:02 np0005597378 kernel: Write protecting the kernel read-only data: 30720k
Jan 27 07:36:02 np0005597378 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 27 07:36:02 np0005597378 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 27 07:36:02 np0005597378 kernel: Run /init as init process
Jan 27 07:36:02 np0005597378 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 27 07:36:02 np0005597378 systemd: Detected virtualization kvm.
Jan 27 07:36:02 np0005597378 systemd: Detected architecture x86-64.
Jan 27 07:36:02 np0005597378 systemd: Running in initrd.
Jan 27 07:36:02 np0005597378 systemd: No hostname configured, using default hostname.
Jan 27 07:36:02 np0005597378 systemd: Hostname set to <localhost>.
Jan 27 07:36:02 np0005597378 systemd: Initializing machine ID from VM UUID.
Jan 27 07:36:02 np0005597378 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 27 07:36:02 np0005597378 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 27 07:36:02 np0005597378 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 27 07:36:02 np0005597378 kernel: usb 1-1: Manufacturer: QEMU
Jan 27 07:36:02 np0005597378 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 27 07:36:02 np0005597378 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 27 07:36:02 np0005597378 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 27 07:36:02 np0005597378 systemd: Queued start job for default target Initrd Default Target.
Jan 27 07:36:02 np0005597378 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 27 07:36:02 np0005597378 systemd: Reached target Local Encrypted Volumes.
Jan 27 07:36:02 np0005597378 systemd: Reached target Initrd /usr File System.
Jan 27 07:36:02 np0005597378 systemd: Reached target Local File Systems.
Jan 27 07:36:02 np0005597378 systemd: Reached target Path Units.
Jan 27 07:36:02 np0005597378 systemd: Reached target Slice Units.
Jan 27 07:36:02 np0005597378 systemd: Reached target Swaps.
Jan 27 07:36:02 np0005597378 systemd: Reached target Timer Units.
Jan 27 07:36:02 np0005597378 systemd: Listening on D-Bus System Message Bus Socket.
Jan 27 07:36:02 np0005597378 systemd: Listening on Journal Socket (/dev/log).
Jan 27 07:36:02 np0005597378 systemd: Listening on Journal Socket.
Jan 27 07:36:02 np0005597378 systemd: Listening on udev Control Socket.
Jan 27 07:36:02 np0005597378 systemd: Listening on udev Kernel Socket.
Jan 27 07:36:02 np0005597378 systemd: Reached target Socket Units.
Jan 27 07:36:02 np0005597378 systemd: Starting Create List of Static Device Nodes...
Jan 27 07:36:02 np0005597378 systemd: Starting Journal Service...
Jan 27 07:36:02 np0005597378 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 27 07:36:02 np0005597378 systemd: Starting Apply Kernel Variables...
Jan 27 07:36:02 np0005597378 systemd: Starting Create System Users...
Jan 27 07:36:02 np0005597378 systemd: Starting Setup Virtual Console...
Jan 27 07:36:02 np0005597378 systemd: Finished Create List of Static Device Nodes.
Jan 27 07:36:02 np0005597378 systemd: Finished Apply Kernel Variables.
Jan 27 07:36:02 np0005597378 systemd-journald[308]: Journal started
Jan 27 07:36:02 np0005597378 systemd-journald[308]: Runtime Journal (/run/log/journal/3df1c84e23994242b9b70012ac6a93e5) is 8.0M, max 153.6M, 145.6M free.
Jan 27 07:36:02 np0005597378 systemd-sysusers[312]: Creating group 'users' with GID 100.
Jan 27 07:36:02 np0005597378 systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Jan 27 07:36:02 np0005597378 systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 27 07:36:02 np0005597378 systemd: Started Journal Service.
Jan 27 07:36:02 np0005597378 systemd[1]: Finished Create System Users.
Jan 27 07:36:02 np0005597378 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 27 07:36:02 np0005597378 systemd[1]: Starting Create Volatile Files and Directories...
Jan 27 07:36:02 np0005597378 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 27 07:36:02 np0005597378 systemd[1]: Finished Create Volatile Files and Directories.
Jan 27 07:36:02 np0005597378 systemd[1]: Finished Setup Virtual Console.
Jan 27 07:36:02 np0005597378 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 27 07:36:02 np0005597378 systemd[1]: Starting dracut cmdline hook...
Jan 27 07:36:02 np0005597378 dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Jan 27 07:36:02 np0005597378 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 07:36:02 np0005597378 systemd[1]: Finished dracut cmdline hook.
Jan 27 07:36:02 np0005597378 systemd[1]: Starting dracut pre-udev hook...
Jan 27 07:36:02 np0005597378 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 27 07:36:02 np0005597378 kernel: device-mapper: uevent: version 1.0.3
Jan 27 07:36:02 np0005597378 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 27 07:36:02 np0005597378 kernel: RPC: Registered named UNIX socket transport module.
Jan 27 07:36:02 np0005597378 kernel: RPC: Registered udp transport module.
Jan 27 07:36:02 np0005597378 kernel: RPC: Registered tcp transport module.
Jan 27 07:36:02 np0005597378 kernel: RPC: Registered tcp-with-tls transport module.
Jan 27 07:36:02 np0005597378 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 27 07:36:02 np0005597378 rpc.statd[444]: Version 2.5.4 starting
Jan 27 07:36:02 np0005597378 rpc.statd[444]: Initializing NSM state
Jan 27 07:36:02 np0005597378 rpc.idmapd[449]: Setting log level to 0
Jan 27 07:36:02 np0005597378 systemd[1]: Finished dracut pre-udev hook.
Jan 27 07:36:02 np0005597378 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 27 07:36:02 np0005597378 systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Jan 27 07:36:02 np0005597378 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 27 07:36:02 np0005597378 systemd[1]: Starting dracut pre-trigger hook...
Jan 27 07:36:02 np0005597378 systemd[1]: Finished dracut pre-trigger hook.
Jan 27 07:36:02 np0005597378 systemd[1]: Starting Coldplug All udev Devices...
Jan 27 07:36:02 np0005597378 systemd[1]: Created slice Slice /system/modprobe.
Jan 27 07:36:02 np0005597378 systemd[1]: Starting Load Kernel Module configfs...
Jan 27 07:36:02 np0005597378 systemd[1]: Finished Coldplug All udev Devices.
Jan 27 07:36:02 np0005597378 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 27 07:36:02 np0005597378 systemd[1]: Reached target Network.
Jan 27 07:36:02 np0005597378 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 27 07:36:02 np0005597378 systemd[1]: Starting dracut initqueue hook...
Jan 27 07:36:02 np0005597378 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 07:36:02 np0005597378 systemd[1]: Finished Load Kernel Module configfs.
Jan 27 07:36:03 np0005597378 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 27 07:36:03 np0005597378 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 27 07:36:03 np0005597378 kernel: vda: vda1
Jan 27 07:36:03 np0005597378 systemd-udevd[520]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 07:36:03 np0005597378 kernel: scsi host0: ata_piix
Jan 27 07:36:03 np0005597378 kernel: scsi host1: ata_piix
Jan 27 07:36:03 np0005597378 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 27 07:36:03 np0005597378 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 27 07:36:03 np0005597378 systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 27 07:36:03 np0005597378 systemd[1]: Reached target Initrd Root Device.
Jan 27 07:36:03 np0005597378 systemd[1]: Mounting Kernel Configuration File System...
Jan 27 07:36:03 np0005597378 systemd[1]: Mounted Kernel Configuration File System.
Jan 27 07:36:03 np0005597378 systemd[1]: Reached target System Initialization.
Jan 27 07:36:03 np0005597378 systemd[1]: Reached target Basic System.
Jan 27 07:36:03 np0005597378 kernel: ata1: found unknown device (class 0)
Jan 27 07:36:03 np0005597378 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 27 07:36:03 np0005597378 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 27 07:36:03 np0005597378 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 27 07:36:03 np0005597378 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 27 07:36:03 np0005597378 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 27 07:36:03 np0005597378 systemd[1]: Finished dracut initqueue hook.
Jan 27 07:36:03 np0005597378 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 27 07:36:03 np0005597378 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 27 07:36:03 np0005597378 systemd[1]: Reached target Remote File Systems.
Jan 27 07:36:03 np0005597378 systemd[1]: Starting dracut pre-mount hook...
Jan 27 07:36:03 np0005597378 systemd[1]: Finished dracut pre-mount hook.
Jan 27 07:36:03 np0005597378 systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 27 07:36:03 np0005597378 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Jan 27 07:36:03 np0005597378 systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 27 07:36:03 np0005597378 systemd[1]: Mounting /sysroot...
Jan 27 07:36:04 np0005597378 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 27 07:36:04 np0005597378 kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 27 07:36:04 np0005597378 kernel: XFS (vda1): Ending clean mount
Jan 27 07:36:04 np0005597378 systemd[1]: Mounted /sysroot.
Jan 27 07:36:04 np0005597378 systemd[1]: Reached target Initrd Root File System.
Jan 27 07:36:04 np0005597378 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 27 07:36:04 np0005597378 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 27 07:36:04 np0005597378 systemd[1]: Reached target Initrd File Systems.
Jan 27 07:36:04 np0005597378 systemd[1]: Reached target Initrd Default Target.
Jan 27 07:36:04 np0005597378 systemd[1]: Starting dracut mount hook...
Jan 27 07:36:04 np0005597378 systemd[1]: Finished dracut mount hook.
Jan 27 07:36:04 np0005597378 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 27 07:36:04 np0005597378 rpc.idmapd[449]: exiting on signal 15
Jan 27 07:36:04 np0005597378 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 27 07:36:04 np0005597378 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Network.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Timer Units.
Jan 27 07:36:04 np0005597378 systemd[1]: dbus.socket: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 27 07:36:04 np0005597378 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Initrd Default Target.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Basic System.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Initrd Root Device.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Initrd /usr File System.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Path Units.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Remote File Systems.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Slice Units.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Socket Units.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target System Initialization.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Local File Systems.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Swaps.
Jan 27 07:36:04 np0005597378 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped dracut mount hook.
Jan 27 07:36:04 np0005597378 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped dracut pre-mount hook.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 27 07:36:04 np0005597378 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 27 07:36:04 np0005597378 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped dracut initqueue hook.
Jan 27 07:36:04 np0005597378 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped Apply Kernel Variables.
Jan 27 07:36:04 np0005597378 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 27 07:36:04 np0005597378 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped Coldplug All udev Devices.
Jan 27 07:36:04 np0005597378 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped dracut pre-trigger hook.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 27 07:36:04 np0005597378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped Setup Virtual Console.
Jan 27 07:36:04 np0005597378 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 27 07:36:04 np0005597378 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 27 07:36:04 np0005597378 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Closed udev Control Socket.
Jan 27 07:36:04 np0005597378 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Closed udev Kernel Socket.
Jan 27 07:36:04 np0005597378 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped dracut pre-udev hook.
Jan 27 07:36:04 np0005597378 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped dracut cmdline hook.
Jan 27 07:36:04 np0005597378 systemd[1]: Starting Cleanup udev Database...
Jan 27 07:36:04 np0005597378 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 27 07:36:04 np0005597378 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 27 07:36:04 np0005597378 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Stopped Create System Users.
Jan 27 07:36:04 np0005597378 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 27 07:36:04 np0005597378 systemd[1]: Finished Cleanup udev Database.
Jan 27 07:36:04 np0005597378 systemd[1]: Reached target Switch Root.
Jan 27 07:36:04 np0005597378 systemd[1]: Starting Switch Root...
Jan 27 07:36:04 np0005597378 systemd[1]: Switching root.
Jan 27 07:36:04 np0005597378 systemd-journald[308]: Journal stopped
Jan 27 07:36:05 np0005597378 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 27 07:36:05 np0005597378 kernel: audit: type=1404 audit(1769517364.889:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 27 07:36:05 np0005597378 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 07:36:05 np0005597378 kernel: SELinux:  policy capability open_perms=1
Jan 27 07:36:05 np0005597378 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 07:36:05 np0005597378 kernel: SELinux:  policy capability always_check_network=0
Jan 27 07:36:05 np0005597378 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 07:36:05 np0005597378 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 07:36:05 np0005597378 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 07:36:05 np0005597378 kernel: audit: type=1403 audit(1769517365.061:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 27 07:36:05 np0005597378 systemd: Successfully loaded SELinux policy in 176.228ms.
Jan 27 07:36:05 np0005597378 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.832ms.
Jan 27 07:36:05 np0005597378 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 27 07:36:05 np0005597378 systemd: Detected virtualization kvm.
Jan 27 07:36:05 np0005597378 systemd: Detected architecture x86-64.
Jan 27 07:36:05 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 07:36:05 np0005597378 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 27 07:36:05 np0005597378 systemd: Stopped Switch Root.
Jan 27 07:36:05 np0005597378 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 27 07:36:05 np0005597378 systemd: Created slice Slice /system/getty.
Jan 27 07:36:05 np0005597378 systemd: Created slice Slice /system/serial-getty.
Jan 27 07:36:05 np0005597378 systemd: Created slice Slice /system/sshd-keygen.
Jan 27 07:36:05 np0005597378 systemd: Created slice User and Session Slice.
Jan 27 07:36:05 np0005597378 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 27 07:36:05 np0005597378 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 27 07:36:05 np0005597378 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 27 07:36:05 np0005597378 systemd: Reached target Local Encrypted Volumes.
Jan 27 07:36:05 np0005597378 systemd: Stopped target Switch Root.
Jan 27 07:36:05 np0005597378 systemd: Stopped target Initrd File Systems.
Jan 27 07:36:05 np0005597378 systemd: Stopped target Initrd Root File System.
Jan 27 07:36:05 np0005597378 systemd: Reached target Local Integrity Protected Volumes.
Jan 27 07:36:05 np0005597378 systemd: Reached target Path Units.
Jan 27 07:36:05 np0005597378 systemd: Reached target rpc_pipefs.target.
Jan 27 07:36:05 np0005597378 systemd: Reached target Slice Units.
Jan 27 07:36:05 np0005597378 systemd: Reached target Swaps.
Jan 27 07:36:05 np0005597378 systemd: Reached target Local Verity Protected Volumes.
Jan 27 07:36:05 np0005597378 systemd: Listening on RPCbind Server Activation Socket.
Jan 27 07:36:05 np0005597378 systemd: Reached target RPC Port Mapper.
Jan 27 07:36:05 np0005597378 systemd: Listening on Process Core Dump Socket.
Jan 27 07:36:05 np0005597378 systemd: Listening on initctl Compatibility Named Pipe.
Jan 27 07:36:05 np0005597378 systemd: Listening on udev Control Socket.
Jan 27 07:36:05 np0005597378 systemd: Listening on udev Kernel Socket.
Jan 27 07:36:05 np0005597378 systemd: Mounting Huge Pages File System...
Jan 27 07:36:05 np0005597378 systemd: Mounting POSIX Message Queue File System...
Jan 27 07:36:05 np0005597378 systemd: Mounting Kernel Debug File System...
Jan 27 07:36:05 np0005597378 systemd: Mounting Kernel Trace File System...
Jan 27 07:36:05 np0005597378 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 27 07:36:05 np0005597378 systemd: Starting Create List of Static Device Nodes...
Jan 27 07:36:05 np0005597378 systemd: Starting Load Kernel Module configfs...
Jan 27 07:36:05 np0005597378 systemd: Starting Load Kernel Module drm...
Jan 27 07:36:05 np0005597378 systemd: Starting Load Kernel Module efi_pstore...
Jan 27 07:36:05 np0005597378 systemd: Starting Load Kernel Module fuse...
Jan 27 07:36:05 np0005597378 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 27 07:36:05 np0005597378 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 27 07:36:05 np0005597378 systemd: Stopped File System Check on Root Device.
Jan 27 07:36:05 np0005597378 systemd: Stopped Journal Service.
Jan 27 07:36:05 np0005597378 kernel: fuse: init (API version 7.37)
Jan 27 07:36:05 np0005597378 systemd: Starting Journal Service...
Jan 27 07:36:05 np0005597378 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 27 07:36:05 np0005597378 systemd: Starting Generate network units from Kernel command line...
Jan 27 07:36:05 np0005597378 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 07:36:05 np0005597378 systemd: Starting Remount Root and Kernel File Systems...
Jan 27 07:36:05 np0005597378 systemd-journald[678]: Journal started
Jan 27 07:36:05 np0005597378 systemd-journald[678]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 27 07:36:05 np0005597378 systemd[1]: Queued start job for default target Multi-User System.
Jan 27 07:36:05 np0005597378 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 27 07:36:05 np0005597378 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 27 07:36:05 np0005597378 systemd: Starting Apply Kernel Variables...
Jan 27 07:36:05 np0005597378 systemd: Starting Coldplug All udev Devices...
Jan 27 07:36:05 np0005597378 systemd: Started Journal Service.
Jan 27 07:36:05 np0005597378 systemd[1]: Mounted Huge Pages File System.
Jan 27 07:36:05 np0005597378 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 27 07:36:05 np0005597378 kernel: ACPI: bus type drm_connector registered
Jan 27 07:36:05 np0005597378 systemd[1]: Mounted POSIX Message Queue File System.
Jan 27 07:36:05 np0005597378 systemd[1]: Mounted Kernel Debug File System.
Jan 27 07:36:05 np0005597378 systemd[1]: Mounted Kernel Trace File System.
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Create List of Static Device Nodes.
Jan 27 07:36:05 np0005597378 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Load Kernel Module configfs.
Jan 27 07:36:05 np0005597378 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Load Kernel Module drm.
Jan 27 07:36:05 np0005597378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 27 07:36:05 np0005597378 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Load Kernel Module fuse.
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Generate network units from Kernel command line.
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Apply Kernel Variables.
Jan 27 07:36:05 np0005597378 systemd[1]: Mounting FUSE Control File System...
Jan 27 07:36:05 np0005597378 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 27 07:36:05 np0005597378 systemd[1]: Starting Rebuild Hardware Database...
Jan 27 07:36:05 np0005597378 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 27 07:36:05 np0005597378 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 27 07:36:05 np0005597378 systemd[1]: Starting Load/Save OS Random Seed...
Jan 27 07:36:05 np0005597378 systemd[1]: Starting Create System Users...
Jan 27 07:36:05 np0005597378 systemd[1]: Mounted FUSE Control File System.
Jan 27 07:36:05 np0005597378 systemd-journald[678]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 27 07:36:05 np0005597378 systemd-journald[678]: Received client request to flush runtime journal.
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Load/Save OS Random Seed.
Jan 27 07:36:05 np0005597378 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Create System Users.
Jan 27 07:36:05 np0005597378 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Coldplug All udev Devices.
Jan 27 07:36:05 np0005597378 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 27 07:36:05 np0005597378 systemd[1]: Reached target Preparation for Local File Systems.
Jan 27 07:36:05 np0005597378 systemd[1]: Reached target Local File Systems.
Jan 27 07:36:06 np0005597378 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 27 07:36:06 np0005597378 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 27 07:36:06 np0005597378 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 27 07:36:06 np0005597378 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 27 07:36:06 np0005597378 systemd[1]: Starting Automatic Boot Loader Update...
Jan 27 07:36:06 np0005597378 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 27 07:36:06 np0005597378 systemd[1]: Starting Create Volatile Files and Directories...
Jan 27 07:36:06 np0005597378 bootctl[698]: Couldn't find EFI system partition, skipping.
Jan 27 07:36:06 np0005597378 systemd[1]: Finished Automatic Boot Loader Update.
Jan 27 07:36:06 np0005597378 systemd[1]: Finished Create Volatile Files and Directories.
Jan 27 07:36:06 np0005597378 systemd[1]: Starting Security Auditing Service...
Jan 27 07:36:06 np0005597378 systemd[1]: Starting RPC Bind...
Jan 27 07:36:06 np0005597378 systemd[1]: Starting Rebuild Journal Catalog...
Jan 27 07:36:06 np0005597378 auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 27 07:36:06 np0005597378 auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 27 07:36:06 np0005597378 systemd[1]: Finished Rebuild Journal Catalog.
Jan 27 07:36:06 np0005597378 systemd[1]: Started RPC Bind.
Jan 27 07:36:06 np0005597378 augenrules[709]: /sbin/augenrules: No change
Jan 27 07:36:06 np0005597378 augenrules[724]: No rules
Jan 27 07:36:06 np0005597378 augenrules[724]: enabled 1
Jan 27 07:36:06 np0005597378 augenrules[724]: failure 1
Jan 27 07:36:06 np0005597378 augenrules[724]: pid 704
Jan 27 07:36:06 np0005597378 augenrules[724]: rate_limit 0
Jan 27 07:36:06 np0005597378 augenrules[724]: backlog_limit 8192
Jan 27 07:36:06 np0005597378 augenrules[724]: lost 0
Jan 27 07:36:06 np0005597378 augenrules[724]: backlog 3
Jan 27 07:36:06 np0005597378 augenrules[724]: backlog_wait_time 60000
Jan 27 07:36:06 np0005597378 augenrules[724]: backlog_wait_time_actual 0
Jan 27 07:36:06 np0005597378 augenrules[724]: enabled 1
Jan 27 07:36:06 np0005597378 augenrules[724]: failure 1
Jan 27 07:36:06 np0005597378 augenrules[724]: pid 704
Jan 27 07:36:06 np0005597378 augenrules[724]: rate_limit 0
Jan 27 07:36:06 np0005597378 augenrules[724]: backlog_limit 8192
Jan 27 07:36:06 np0005597378 augenrules[724]: lost 0
Jan 27 07:36:06 np0005597378 augenrules[724]: backlog 0
Jan 27 07:36:06 np0005597378 augenrules[724]: backlog_wait_time 60000
Jan 27 07:36:06 np0005597378 augenrules[724]: backlog_wait_time_actual 0
Jan 27 07:36:06 np0005597378 augenrules[724]: enabled 1
Jan 27 07:36:06 np0005597378 augenrules[724]: failure 1
Jan 27 07:36:06 np0005597378 augenrules[724]: pid 704
Jan 27 07:36:06 np0005597378 augenrules[724]: rate_limit 0
Jan 27 07:36:06 np0005597378 augenrules[724]: backlog_limit 8192
Jan 27 07:36:06 np0005597378 augenrules[724]: lost 0
Jan 27 07:36:06 np0005597378 augenrules[724]: backlog 1
Jan 27 07:36:06 np0005597378 augenrules[724]: backlog_wait_time 60000
Jan 27 07:36:06 np0005597378 augenrules[724]: backlog_wait_time_actual 0
Jan 27 07:36:06 np0005597378 systemd[1]: Started Security Auditing Service.
Jan 27 07:36:06 np0005597378 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 27 07:36:06 np0005597378 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 27 07:36:06 np0005597378 systemd[1]: Finished Rebuild Hardware Database.
Jan 27 07:36:06 np0005597378 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 27 07:36:06 np0005597378 systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Jan 27 07:36:06 np0005597378 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 27 07:36:06 np0005597378 systemd[1]: Starting Update is Completed...
Jan 27 07:36:06 np0005597378 systemd[1]: Finished Update is Completed.
Jan 27 07:36:06 np0005597378 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 27 07:36:06 np0005597378 systemd[1]: Reached target System Initialization.
Jan 27 07:36:06 np0005597378 systemd[1]: Started dnf makecache --timer.
Jan 27 07:36:06 np0005597378 systemd[1]: Started Daily rotation of log files.
Jan 27 07:36:06 np0005597378 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 27 07:36:06 np0005597378 systemd[1]: Reached target Timer Units.
Jan 27 07:36:06 np0005597378 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 27 07:36:06 np0005597378 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 27 07:36:06 np0005597378 systemd[1]: Reached target Socket Units.
Jan 27 07:36:06 np0005597378 systemd[1]: Starting D-Bus System Message Bus...
Jan 27 07:36:06 np0005597378 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 07:36:06 np0005597378 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 27 07:36:06 np0005597378 systemd[1]: Starting Load Kernel Module configfs...
Jan 27 07:36:06 np0005597378 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 07:36:06 np0005597378 systemd[1]: Finished Load Kernel Module configfs.
Jan 27 07:36:06 np0005597378 systemd-udevd[741]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 07:36:06 np0005597378 systemd[1]: Started D-Bus System Message Bus.
Jan 27 07:36:06 np0005597378 systemd[1]: Reached target Basic System.
Jan 27 07:36:06 np0005597378 dbus-broker-lau[761]: Ready
Jan 27 07:36:06 np0005597378 systemd[1]: Starting NTP client/server...
Jan 27 07:36:06 np0005597378 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 27 07:36:06 np0005597378 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 27 07:36:06 np0005597378 systemd[1]: Starting IPv4 firewall with iptables...
Jan 27 07:36:06 np0005597378 systemd[1]: Started irqbalance daemon.
Jan 27 07:36:06 np0005597378 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 27 07:36:06 np0005597378 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 07:36:06 np0005597378 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 07:36:06 np0005597378 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 07:36:06 np0005597378 systemd[1]: Reached target sshd-keygen.target.
Jan 27 07:36:06 np0005597378 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 27 07:36:06 np0005597378 systemd[1]: Reached target User and Group Name Lookups.
Jan 27 07:36:06 np0005597378 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 27 07:36:06 np0005597378 systemd[1]: Starting User Login Management...
Jan 27 07:36:06 np0005597378 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 27 07:36:06 np0005597378 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 27 07:36:06 np0005597378 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 27 07:36:06 np0005597378 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 27 07:36:06 np0005597378 chronyd[796]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 27 07:36:06 np0005597378 chronyd[796]: Loaded 0 symmetric keys
Jan 27 07:36:06 np0005597378 chronyd[796]: Using right/UTC timezone to obtain leap second data
Jan 27 07:36:06 np0005597378 chronyd[796]: Loaded seccomp filter (level 2)
Jan 27 07:36:06 np0005597378 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 27 07:36:06 np0005597378 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 27 07:36:06 np0005597378 systemd-logind[786]: New seat seat0.
Jan 27 07:36:06 np0005597378 systemd[1]: Started NTP client/server.
Jan 27 07:36:06 np0005597378 systemd[1]: Started User Login Management.
Jan 27 07:36:06 np0005597378 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 27 07:36:06 np0005597378 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 27 07:36:06 np0005597378 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 27 07:36:06 np0005597378 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 27 07:36:06 np0005597378 kernel: Console: switching to colour dummy device 80x25
Jan 27 07:36:06 np0005597378 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 27 07:36:06 np0005597378 kernel: [drm] features: -context_init
Jan 27 07:36:06 np0005597378 kernel: [drm] number of scanouts: 1
Jan 27 07:36:06 np0005597378 kernel: [drm] number of cap sets: 0
Jan 27 07:36:06 np0005597378 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 27 07:36:06 np0005597378 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 27 07:36:06 np0005597378 kernel: Console: switching to colour frame buffer device 128x48
Jan 27 07:36:06 np0005597378 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 27 07:36:06 np0005597378 kernel: kvm_amd: TSC scaling supported
Jan 27 07:36:06 np0005597378 kernel: kvm_amd: Nested Virtualization enabled
Jan 27 07:36:06 np0005597378 kernel: kvm_amd: Nested Paging enabled
Jan 27 07:36:06 np0005597378 kernel: kvm_amd: LBR virtualization supported
Jan 27 07:36:07 np0005597378 iptables.init[779]: iptables: Applying firewall rules: [  OK  ]
Jan 27 07:36:07 np0005597378 systemd[1]: Finished IPv4 firewall with iptables.
Jan 27 07:36:07 np0005597378 cloud-init[840]: Cloud-init v. 24.4-8.el9 running 'init-local' at Tue, 27 Jan 2026 12:36:07 +0000. Up 7.12 seconds.
Jan 27 07:36:07 np0005597378 systemd[1]: run-cloud\x2dinit-tmp-tmpccwtvpdw.mount: Deactivated successfully.
Jan 27 07:36:07 np0005597378 systemd[1]: Starting Hostname Service...
Jan 27 07:36:07 np0005597378 systemd[1]: Started Hostname Service.
Jan 27 07:36:07 np0005597378 systemd-hostnamed[854]: Hostname set to <np0005597378.novalocal> (static)
Jan 27 07:36:07 np0005597378 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 27 07:36:07 np0005597378 systemd[1]: Reached target Preparation for Network.
Jan 27 07:36:07 np0005597378 systemd[1]: Starting Network Manager...
Jan 27 07:36:07 np0005597378 NetworkManager[858]: <info>  [1769517367.9884] NetworkManager (version 1.54.3-2.el9) is starting... (boot:fdf4cf63-50a8-4d95-a9b7-7837ebf6d82a)
Jan 27 07:36:07 np0005597378 NetworkManager[858]: <info>  [1769517367.9889] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0045] manager[0x5598e7eb8000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0087] hostname: hostname: using hostnamed
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0088] hostname: static hostname changed from (none) to "np0005597378.novalocal"
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0093] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0203] manager[0x5598e7eb8000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0204] manager[0x5598e7eb8000]: rfkill: WWAN hardware radio set enabled
Jan 27 07:36:08 np0005597378 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0290] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0290] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0291] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0291] manager: Networking is enabled by state file
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0293] settings: Loaded settings plugin: keyfile (internal)
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0331] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0356] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0373] dhcp: init: Using DHCP client 'internal'
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0376] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0390] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0402] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0414] device (lo): Activation: starting connection 'lo' (2703350d-1698-4e5c-a1cb-b77d40fc5e70)
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0424] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0427] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 07:36:08 np0005597378 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 07:36:08 np0005597378 systemd[1]: Started Network Manager.
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0473] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0491] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0494] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0496] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0497] device (eth0): carrier: link connected
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0500] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0505] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0512] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0516] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0517] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0519] manager: NetworkManager state is now CONNECTING
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0520] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0526] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 07:36:08 np0005597378 systemd[1]: Reached target Network.
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0529] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0576] dhcp4 (eth0): state changed new lease, address=38.102.83.129
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0584] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 07:36:08 np0005597378 systemd[1]: Starting Network Manager Wait Online...
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0609] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 07:36:08 np0005597378 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 27 07:36:08 np0005597378 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0690] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0693] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0698] device (lo): Activation: successful, device activated.
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0716] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0718] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0721] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0725] device (eth0): Activation: successful, device activated.
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0731] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 07:36:08 np0005597378 NetworkManager[858]: <info>  [1769517368.0735] manager: startup complete
Jan 27 07:36:08 np0005597378 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 27 07:36:08 np0005597378 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 27 07:36:08 np0005597378 systemd[1]: Reached target NFS client services.
Jan 27 07:36:08 np0005597378 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 27 07:36:08 np0005597378 systemd[1]: Reached target Remote File Systems.
Jan 27 07:36:08 np0005597378 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 07:36:08 np0005597378 systemd[1]: Finished Network Manager Wait Online.
Jan 27 07:36:08 np0005597378 systemd[1]: Starting Cloud-init: Network Stage...
Jan 27 07:36:08 np0005597378 cloud-init[921]: Cloud-init v. 24.4-8.el9 running 'init' at Tue, 27 Jan 2026 12:36:08 +0000. Up 8.10 seconds.
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.129         | 255.255.255.0 | global | fa:16:3e:cd:29:ef |
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fecd:29ef/64 |       .       |  link  | fa:16:3e:cd:29:ef |
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 27 07:36:08 np0005597378 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 07:36:09 np0005597378 cloud-init[921]: Generating public/private rsa key pair.
Jan 27 07:36:09 np0005597378 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 27 07:36:09 np0005597378 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 27 07:36:09 np0005597378 cloud-init[921]: The key fingerprint is:
Jan 27 07:36:09 np0005597378 cloud-init[921]: SHA256:zfQ4ynYm7Nr4gqpDWuDnugqPYIWIu6OWzUWP/N8+FU4 root@np0005597378.novalocal
Jan 27 07:36:09 np0005597378 cloud-init[921]: The key's randomart image is:
Jan 27 07:36:09 np0005597378 cloud-init[921]: +---[RSA 3072]----+
Jan 27 07:36:09 np0005597378 cloud-init[921]: |                 |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |                 |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |          .      |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |+ .  .   + o E   |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |+o .o o S = + .  |
Jan 27 07:36:09 np0005597378 cloud-init[921]: | +o. + + . . o   |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |*o* ... * o .    |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |B* +. .* +..     |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |X*=.  o+=..o.    |
Jan 27 07:36:09 np0005597378 cloud-init[921]: +----[SHA256]-----+
Jan 27 07:36:09 np0005597378 cloud-init[921]: Generating public/private ecdsa key pair.
Jan 27 07:36:09 np0005597378 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 27 07:36:09 np0005597378 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 27 07:36:09 np0005597378 cloud-init[921]: The key fingerprint is:
Jan 27 07:36:09 np0005597378 cloud-init[921]: SHA256:de3BawpK8UKB1uH+Xb6Mzyzw5wYaEoz7ULVr88aDOYw root@np0005597378.novalocal
Jan 27 07:36:09 np0005597378 cloud-init[921]: The key's randomart image is:
Jan 27 07:36:09 np0005597378 cloud-init[921]: +---[ECDSA 256]---+
Jan 27 07:36:09 np0005597378 cloud-init[921]: |       oo.       |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |      o....  o   |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |     . o+.... +  |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |      .o++.. . o |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |       oS.o.  =  |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |      o..+*o.=   |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |       o.=.@o..  |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |        E * B=o. |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |           o.BB  |
Jan 27 07:36:09 np0005597378 cloud-init[921]: +----[SHA256]-----+
Jan 27 07:36:09 np0005597378 cloud-init[921]: Generating public/private ed25519 key pair.
Jan 27 07:36:09 np0005597378 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 27 07:36:09 np0005597378 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 27 07:36:09 np0005597378 cloud-init[921]: The key fingerprint is:
Jan 27 07:36:09 np0005597378 cloud-init[921]: SHA256:FeZdkFsRX2fC0viD34PMMRmK1vAuj1qlFt9+QZO0qtk root@np0005597378.novalocal
Jan 27 07:36:09 np0005597378 cloud-init[921]: The key's randomart image is:
Jan 27 07:36:09 np0005597378 cloud-init[921]: +--[ED25519 256]--+
Jan 27 07:36:09 np0005597378 cloud-init[921]: |          o .*=o+|
Jan 27 07:36:09 np0005597378 cloud-init[921]: |         + o+o+=o|
Jan 27 07:36:09 np0005597378 cloud-init[921]: |          B o*+ +|
Jan 27 07:36:09 np0005597378 cloud-init[921]: |         + +o+o= |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |        S...o.*o.|
Jan 27 07:36:09 np0005597378 cloud-init[921]: |         .=..=.o.|
Jan 27 07:36:09 np0005597378 cloud-init[921]: |         ++.+.  o|
Jan 27 07:36:09 np0005597378 cloud-init[921]: |        o. +.E . |
Jan 27 07:36:09 np0005597378 cloud-init[921]: |       ..    ..  |
Jan 27 07:36:09 np0005597378 cloud-init[921]: +----[SHA256]-----+
Jan 27 07:36:10 np0005597378 systemd[1]: Finished Cloud-init: Network Stage.
Jan 27 07:36:10 np0005597378 systemd[1]: Reached target Cloud-config availability.
Jan 27 07:36:10 np0005597378 systemd[1]: Reached target Network is Online.
Jan 27 07:36:10 np0005597378 systemd[1]: Starting Cloud-init: Config Stage...
Jan 27 07:36:10 np0005597378 systemd[1]: Starting Crash recovery kernel arming...
Jan 27 07:36:10 np0005597378 systemd[1]: Starting Notify NFS peers of a restart...
Jan 27 07:36:10 np0005597378 systemd[1]: Starting System Logging Service...
Jan 27 07:36:10 np0005597378 systemd[1]: Starting OpenSSH server daemon...
Jan 27 07:36:10 np0005597378 sm-notify[1005]: Version 2.5.4 starting
Jan 27 07:36:10 np0005597378 systemd[1]: Starting Permit User Sessions...
Jan 27 07:36:10 np0005597378 systemd[1]: Started Notify NFS peers of a restart.
Jan 27 07:36:10 np0005597378 systemd[1]: Started OpenSSH server daemon.
Jan 27 07:36:10 np0005597378 systemd[1]: Finished Permit User Sessions.
Jan 27 07:36:10 np0005597378 systemd[1]: Started Command Scheduler.
Jan 27 07:36:10 np0005597378 systemd[1]: Started Getty on tty1.
Jan 27 07:36:10 np0005597378 systemd[1]: Started Serial Getty on ttyS0.
Jan 27 07:36:10 np0005597378 systemd[1]: Reached target Login Prompts.
Jan 27 07:36:10 np0005597378 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Jan 27 07:36:10 np0005597378 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 27 07:36:10 np0005597378 systemd[1]: Started System Logging Service.
Jan 27 07:36:10 np0005597378 systemd[1]: Reached target Multi-User System.
Jan 27 07:36:10 np0005597378 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 27 07:36:10 np0005597378 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 27 07:36:10 np0005597378 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 27 07:36:10 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 07:36:10 np0005597378 kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Jan 27 07:36:10 np0005597378 kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 27 07:36:10 np0005597378 cloud-init[1108]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Tue, 27 Jan 2026 12:36:10 +0000. Up 9.98 seconds.
Jan 27 07:36:10 np0005597378 systemd[1]: Finished Cloud-init: Config Stage.
Jan 27 07:36:10 np0005597378 systemd[1]: Starting Cloud-init: Final Stage...
Jan 27 07:36:10 np0005597378 dracut[1286]: dracut-057-102.git20250818.el9
Jan 27 07:36:10 np0005597378 cloud-init[1302]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Tue, 27 Jan 2026 12:36:10 +0000. Up 10.42 seconds.
Jan 27 07:36:10 np0005597378 cloud-init[1305]: #############################################################
Jan 27 07:36:10 np0005597378 cloud-init[1307]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 27 07:36:10 np0005597378 cloud-init[1311]: 256 SHA256:de3BawpK8UKB1uH+Xb6Mzyzw5wYaEoz7ULVr88aDOYw root@np0005597378.novalocal (ECDSA)
Jan 27 07:36:10 np0005597378 cloud-init[1318]: 256 SHA256:FeZdkFsRX2fC0viD34PMMRmK1vAuj1qlFt9+QZO0qtk root@np0005597378.novalocal (ED25519)
Jan 27 07:36:10 np0005597378 dracut[1288]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 27 07:36:10 np0005597378 cloud-init[1323]: 3072 SHA256:zfQ4ynYm7Nr4gqpDWuDnugqPYIWIu6OWzUWP/N8+FU4 root@np0005597378.novalocal (RSA)
Jan 27 07:36:10 np0005597378 cloud-init[1326]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 27 07:36:10 np0005597378 cloud-init[1329]: #############################################################
Jan 27 07:36:10 np0005597378 cloud-init[1302]: Cloud-init v. 24.4-8.el9 finished at Tue, 27 Jan 2026 12:36:10 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.60 seconds
Jan 27 07:36:10 np0005597378 systemd[1]: Finished Cloud-init: Final Stage.
Jan 27 07:36:10 np0005597378 systemd[1]: Reached target Cloud-init target.
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 27 07:36:11 np0005597378 dracut[1288]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: memstrack is not available
Jan 27 07:36:12 np0005597378 dracut[1288]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 27 07:36:12 np0005597378 dracut[1288]: memstrack is not available
Jan 27 07:36:12 np0005597378 dracut[1288]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 27 07:36:12 np0005597378 dracut[1288]: *** Including module: systemd ***
Jan 27 07:36:12 np0005597378 dracut[1288]: *** Including module: fips ***
Jan 27 07:36:12 np0005597378 chronyd[796]: Selected source 148.113.173.205 (2.centos.pool.ntp.org)
Jan 27 07:36:12 np0005597378 chronyd[796]: System clock TAI offset set to 37 seconds
Jan 27 07:36:13 np0005597378 dracut[1288]: *** Including module: systemd-initrd ***
Jan 27 07:36:13 np0005597378 dracut[1288]: *** Including module: i18n ***
Jan 27 07:36:13 np0005597378 dracut[1288]: *** Including module: drm ***
Jan 27 07:36:13 np0005597378 dracut[1288]: *** Including module: prefixdevname ***
Jan 27 07:36:13 np0005597378 dracut[1288]: *** Including module: kernel-modules ***
Jan 27 07:36:14 np0005597378 kernel: block vda: the capability attribute has been deprecated.
Jan 27 07:36:14 np0005597378 dracut[1288]: *** Including module: kernel-modules-extra ***
Jan 27 07:36:14 np0005597378 dracut[1288]: *** Including module: qemu ***
Jan 27 07:36:14 np0005597378 dracut[1288]: *** Including module: fstab-sys ***
Jan 27 07:36:14 np0005597378 dracut[1288]: *** Including module: rootfs-block ***
Jan 27 07:36:14 np0005597378 dracut[1288]: *** Including module: terminfo ***
Jan 27 07:36:14 np0005597378 dracut[1288]: *** Including module: udev-rules ***
Jan 27 07:36:15 np0005597378 dracut[1288]: Skipping udev rule: 91-permissions.rules
Jan 27 07:36:15 np0005597378 dracut[1288]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 27 07:36:15 np0005597378 dracut[1288]: *** Including module: virtiofs ***
Jan 27 07:36:15 np0005597378 dracut[1288]: *** Including module: dracut-systemd ***
Jan 27 07:36:15 np0005597378 dracut[1288]: *** Including module: usrmount ***
Jan 27 07:36:15 np0005597378 dracut[1288]: *** Including module: base ***
Jan 27 07:36:16 np0005597378 dracut[1288]: *** Including module: fs-lib ***
Jan 27 07:36:16 np0005597378 dracut[1288]: *** Including module: kdumpbase ***
Jan 27 07:36:16 np0005597378 dracut[1288]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 27 07:36:16 np0005597378 dracut[1288]:  microcode_ctl module: mangling fw_dir
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: configuration "intel" is ignored
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 27 07:36:16 np0005597378 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 27 07:36:17 np0005597378 dracut[1288]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 27 07:36:17 np0005597378 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 27 07:36:17 np0005597378 dracut[1288]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 27 07:36:17 np0005597378 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 27 07:36:17 np0005597378 dracut[1288]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 27 07:36:17 np0005597378 dracut[1288]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 27 07:36:17 np0005597378 dracut[1288]: *** Including module: openssl ***
Jan 27 07:36:17 np0005597378 dracut[1288]: *** Including module: shutdown ***
Jan 27 07:36:17 np0005597378 dracut[1288]: *** Including module: squash ***
Jan 27 07:36:17 np0005597378 dracut[1288]: *** Including modules done ***
Jan 27 07:36:17 np0005597378 dracut[1288]: *** Installing kernel module dependencies ***
Jan 27 07:36:17 np0005597378 irqbalance[780]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 27 07:36:17 np0005597378 irqbalance[780]: IRQ 25 affinity is now unmanaged
Jan 27 07:36:17 np0005597378 irqbalance[780]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 27 07:36:17 np0005597378 irqbalance[780]: IRQ 31 affinity is now unmanaged
Jan 27 07:36:17 np0005597378 irqbalance[780]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 27 07:36:17 np0005597378 irqbalance[780]: IRQ 28 affinity is now unmanaged
Jan 27 07:36:17 np0005597378 irqbalance[780]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 27 07:36:17 np0005597378 irqbalance[780]: IRQ 32 affinity is now unmanaged
Jan 27 07:36:17 np0005597378 irqbalance[780]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 27 07:36:17 np0005597378 irqbalance[780]: IRQ 30 affinity is now unmanaged
Jan 27 07:36:17 np0005597378 irqbalance[780]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 27 07:36:17 np0005597378 irqbalance[780]: IRQ 29 affinity is now unmanaged
Jan 27 07:36:18 np0005597378 dracut[1288]: *** Installing kernel module dependencies done ***
Jan 27 07:36:18 np0005597378 dracut[1288]: *** Resolving executable dependencies ***
Jan 27 07:36:18 np0005597378 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 07:36:20 np0005597378 dracut[1288]: *** Resolving executable dependencies done ***
Jan 27 07:36:20 np0005597378 dracut[1288]: *** Generating early-microcode cpio image ***
Jan 27 07:36:20 np0005597378 dracut[1288]: *** Store current command line parameters ***
Jan 27 07:36:20 np0005597378 dracut[1288]: Stored kernel commandline:
Jan 27 07:36:20 np0005597378 dracut[1288]: No dracut internal kernel commandline stored in the initramfs
Jan 27 07:36:20 np0005597378 dracut[1288]: *** Install squash loader ***
Jan 27 07:36:21 np0005597378 dracut[1288]: *** Squashing the files inside the initramfs ***
Jan 27 07:36:22 np0005597378 dracut[1288]: *** Squashing the files inside the initramfs done ***
Jan 27 07:36:22 np0005597378 dracut[1288]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 27 07:36:22 np0005597378 dracut[1288]: *** Hardlinking files ***
Jan 27 07:36:22 np0005597378 dracut[1288]: *** Hardlinking files done ***
Jan 27 07:36:22 np0005597378 dracut[1288]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 27 07:36:23 np0005597378 kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Jan 27 07:36:23 np0005597378 kdumpctl[1018]: kdump: Starting kdump: [OK]
Jan 27 07:36:23 np0005597378 systemd[1]: Finished Crash recovery kernel arming.
Jan 27 07:36:23 np0005597378 systemd[1]: Startup finished in 1.587s (kernel) + 2.974s (initrd) + 18.546s (userspace) = 23.108s.
Jan 27 07:36:27 np0005597378 systemd[1]: Created slice User Slice of UID 1000.
Jan 27 07:36:27 np0005597378 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 27 07:36:27 np0005597378 systemd-logind[786]: New session 1 of user zuul.
Jan 27 07:36:27 np0005597378 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 27 07:36:27 np0005597378 systemd[1]: Starting User Manager for UID 1000...
Jan 27 07:36:27 np0005597378 systemd[4307]: Queued start job for default target Main User Target.
Jan 27 07:36:27 np0005597378 systemd[4307]: Created slice User Application Slice.
Jan 27 07:36:27 np0005597378 systemd[4307]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 27 07:36:27 np0005597378 systemd[4307]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 07:36:27 np0005597378 systemd[4307]: Reached target Paths.
Jan 27 07:36:27 np0005597378 systemd[4307]: Reached target Timers.
Jan 27 07:36:27 np0005597378 systemd[4307]: Starting D-Bus User Message Bus Socket...
Jan 27 07:36:27 np0005597378 systemd[4307]: Starting Create User's Volatile Files and Directories...
Jan 27 07:36:27 np0005597378 systemd[4307]: Finished Create User's Volatile Files and Directories.
Jan 27 07:36:27 np0005597378 systemd[4307]: Listening on D-Bus User Message Bus Socket.
Jan 27 07:36:27 np0005597378 systemd[4307]: Reached target Sockets.
Jan 27 07:36:27 np0005597378 systemd[4307]: Reached target Basic System.
Jan 27 07:36:27 np0005597378 systemd[4307]: Reached target Main User Target.
Jan 27 07:36:27 np0005597378 systemd[4307]: Startup finished in 152ms.
Jan 27 07:36:27 np0005597378 systemd[1]: Started User Manager for UID 1000.
Jan 27 07:36:27 np0005597378 systemd[1]: Started Session 1 of User zuul.
Jan 27 07:36:28 np0005597378 python3[4389]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 07:36:30 np0005597378 python3[4417]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 07:36:37 np0005597378 python3[4475]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 07:36:37 np0005597378 python3[4515]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 27 07:36:38 np0005597378 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 07:36:39 np0005597378 python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLHZklLDl28by7t3GmOwLWKaZpOmiHXm5F1TJjDo5E3cG3HOa6kgxYOuCy8ijqHkNbKzUdRoa6oZ+fh3CntQza/7dkuyDkKoTrINY3Imuom8kqUTrsSjTqH1ysByYPmK8a5fZrALQyQfD7+JOPNB0xhxO61u+Qn4cK39MUVkXEdn8SjGJcIdSrDh+0DR7dMPSEvaQK7ovb5rlmuxshFx00yo8GlbcUN3+7Kw2w0thpAVdI4xM8o279q6LggJMhAyHBOKgBGIYUdFb1rqj+OAInuKlQk73Zrt+GMRS+XhN5oSxgLvx7vrzsy8UAjz4zI/QoYRKq0N9uUojbmvAxthaTZ5sq7//5v4FOywdtR77JWfdOjOQL3wdAFXBbTCtc66sTg56kw3EMpLY4y7cGlJA39uhmyYyEWORVxa3G2cAELl56rHuZxMByUjKRirrDbhjaiuC+tPtqKbQJXcRjgU19miB6nKKu1wYDf1Y57OSD8XPTqJEnyzXWrR8+6BCBq90= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:40 np0005597378 python3[4567]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:36:41 np0005597378 python3[4666]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:36:41 np0005597378 python3[4737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769517400.7310152-207-67295279840635/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=f49cf2ac94e64a12b95d02e3b0298e70_id_rsa follow=False checksum=ebf409ff557cf383026c99abe63f11dd20d54d90 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:36:41 np0005597378 python3[4860]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:36:42 np0005597378 python3[4931]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769517401.6898904-240-221880955093428/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=f49cf2ac94e64a12b95d02e3b0298e70_id_rsa.pub follow=False checksum=37c6cee9e1e716942f824e1941a71d762163b0f5 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:36:43 np0005597378 python3[4979]: ansible-ping Invoked with data=pong
Jan 27 07:36:45 np0005597378 python3[5003]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 07:36:46 np0005597378 python3[5061]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 27 07:36:47 np0005597378 python3[5093]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:36:47 np0005597378 python3[5117]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:36:48 np0005597378 python3[5141]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:36:48 np0005597378 python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:36:48 np0005597378 python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:36:49 np0005597378 python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:36:50 np0005597378 python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:36:51 np0005597378 python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:36:51 np0005597378 python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769517410.9849772-21-157428616351485/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:36:52 np0005597378 python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:52 np0005597378 python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:53 np0005597378 python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:53 np0005597378 python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:53 np0005597378 python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:54 np0005597378 python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:54 np0005597378 python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:54 np0005597378 python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:54 np0005597378 python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:55 np0005597378 python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:55 np0005597378 python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:55 np0005597378 python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:55 np0005597378 python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:56 np0005597378 python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:56 np0005597378 python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:56 np0005597378 python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:57 np0005597378 python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:57 np0005597378 python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:57 np0005597378 python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:58 np0005597378 python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:58 np0005597378 python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:58 np0005597378 python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:59 np0005597378 python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:59 np0005597378 python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:36:59 np0005597378 python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:37:00 np0005597378 python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:37:02 np0005597378 python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 27 07:37:02 np0005597378 systemd[1]: Starting Time & Date Service...
Jan 27 07:37:02 np0005597378 systemd[1]: Started Time & Date Service.
Jan 27 07:37:02 np0005597378 systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Jan 27 07:37:03 np0005597378 python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:37:03 np0005597378 python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:37:03 np0005597378 python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769517423.3734488-153-236890961202960/source _original_basename=tmp0_je62kh follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:37:04 np0005597378 python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:37:04 np0005597378 python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769517424.3035994-183-276669291721362/source _original_basename=tmpro5ihfvk follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:37:05 np0005597378 python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:37:06 np0005597378 python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769517425.437508-231-5171562946924/source _original_basename=tmp6v1qhtg3 follow=False checksum=92ed7a44cd9375cbe6e214048ac6da9938435472 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:37:06 np0005597378 python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:37:07 np0005597378 python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:37:07 np0005597378 python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:37:08 np0005597378 python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769517427.4089763-273-252829275607470/source _original_basename=tmpdgoxd1gd follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:37:09 np0005597378 python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-08b2-8060-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:37:09 np0005597378 python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-08b2-8060-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 27 07:37:11 np0005597378 python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:37:31 np0005597378 python3[6949]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:37:32 np0005597378 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 07:38:05 np0005597378 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 27 07:38:05 np0005597378 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 27 07:38:05 np0005597378 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 27 07:38:05 np0005597378 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 27 07:38:05 np0005597378 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 27 07:38:05 np0005597378 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 27 07:38:05 np0005597378 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 27 07:38:05 np0005597378 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 27 07:38:05 np0005597378 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 27 07:38:05 np0005597378 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 27 07:38:05 np0005597378 NetworkManager[858]: <info>  [1769517485.1426] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 07:38:05 np0005597378 systemd-udevd[6953]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 07:38:05 np0005597378 NetworkManager[858]: <info>  [1769517485.1594] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 07:38:05 np0005597378 NetworkManager[858]: <info>  [1769517485.1627] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 27 07:38:05 np0005597378 NetworkManager[858]: <info>  [1769517485.1631] device (eth1): carrier: link connected
Jan 27 07:38:05 np0005597378 NetworkManager[858]: <info>  [1769517485.1634] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 27 07:38:05 np0005597378 NetworkManager[858]: <info>  [1769517485.1640] policy: auto-activating connection 'Wired connection 1' (eeeb07e2-82af-3685-be38-c00364f40632)
Jan 27 07:38:05 np0005597378 NetworkManager[858]: <info>  [1769517485.1645] device (eth1): Activation: starting connection 'Wired connection 1' (eeeb07e2-82af-3685-be38-c00364f40632)
Jan 27 07:38:05 np0005597378 NetworkManager[858]: <info>  [1769517485.1646] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 07:38:05 np0005597378 NetworkManager[858]: <info>  [1769517485.1648] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 07:38:05 np0005597378 NetworkManager[858]: <info>  [1769517485.1653] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 07:38:05 np0005597378 NetworkManager[858]: <info>  [1769517485.1658] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 07:38:06 np0005597378 python3[6979]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-59ed-d301-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:38:16 np0005597378 python3[7059]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:38:16 np0005597378 python3[7132]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769517496.0170596-102-172812911861284/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=ef189d15bf491ae41db87f2ef09d24bc335823f2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:38:17 np0005597378 python3[7182]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 07:38:17 np0005597378 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 27 07:38:17 np0005597378 systemd[1]: Stopped Network Manager Wait Online.
Jan 27 07:38:17 np0005597378 systemd[1]: Stopping Network Manager Wait Online...
Jan 27 07:38:17 np0005597378 NetworkManager[858]: <info>  [1769517497.6721] caught SIGTERM, shutting down normally.
Jan 27 07:38:17 np0005597378 systemd[1]: Stopping Network Manager...
Jan 27 07:38:17 np0005597378 NetworkManager[858]: <info>  [1769517497.6727] dhcp4 (eth0): canceled DHCP transaction
Jan 27 07:38:17 np0005597378 NetworkManager[858]: <info>  [1769517497.6727] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 07:38:17 np0005597378 NetworkManager[858]: <info>  [1769517497.6728] dhcp4 (eth0): state changed no lease
Jan 27 07:38:17 np0005597378 NetworkManager[858]: <info>  [1769517497.6729] manager: NetworkManager state is now CONNECTING
Jan 27 07:38:17 np0005597378 NetworkManager[858]: <info>  [1769517497.6874] dhcp4 (eth1): canceled DHCP transaction
Jan 27 07:38:17 np0005597378 NetworkManager[858]: <info>  [1769517497.6875] dhcp4 (eth1): state changed no lease
Jan 27 07:38:17 np0005597378 NetworkManager[858]: <info>  [1769517497.6925] exiting (success)
Jan 27 07:38:17 np0005597378 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 07:38:17 np0005597378 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 27 07:38:17 np0005597378 systemd[1]: Stopped Network Manager.
Jan 27 07:38:17 np0005597378 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 07:38:17 np0005597378 systemd[1]: Starting Network Manager...
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.7464] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:fdf4cf63-50a8-4d95-a9b7-7837ebf6d82a)
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.7468] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.7521] manager[0x563b9b2ff000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 07:38:17 np0005597378 systemd[1]: Starting Hostname Service...
Jan 27 07:38:17 np0005597378 systemd[1]: Started Hostname Service.
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8451] hostname: hostname: using hostnamed
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8452] hostname: static hostname changed from (none) to "np0005597378.novalocal"
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8457] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8461] manager[0x563b9b2ff000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8462] manager[0x563b9b2ff000]: rfkill: WWAN hardware radio set enabled
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8494] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8495] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8495] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8495] manager: Networking is enabled by state file
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8498] settings: Loaded settings plugin: keyfile (internal)
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8502] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8531] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8541] dhcp: init: Using DHCP client 'internal'
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8544] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8550] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8555] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8562] device (lo): Activation: starting connection 'lo' (2703350d-1698-4e5c-a1cb-b77d40fc5e70)
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8570] device (eth0): carrier: link connected
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8574] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8580] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8581] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8588] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8594] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8599] device (eth1): carrier: link connected
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8603] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8607] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (eeeb07e2-82af-3685-be38-c00364f40632) (indicated)
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8607] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8614] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8620] device (eth1): Activation: starting connection 'Wired connection 1' (eeeb07e2-82af-3685-be38-c00364f40632)
Jan 27 07:38:17 np0005597378 systemd[1]: Started Network Manager.
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8626] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8630] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8632] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8634] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8636] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8639] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8641] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8643] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8645] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8653] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8656] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8665] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8668] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8685] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8690] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8697] device (lo): Activation: successful, device activated.
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8705] dhcp4 (eth0): state changed new lease, address=38.102.83.129
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8712] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 07:38:17 np0005597378 systemd[1]: Starting Network Manager Wait Online...
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8811] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8827] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8830] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8834] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8837] device (eth0): Activation: successful, device activated.
Jan 27 07:38:17 np0005597378 NetworkManager[7191]: <info>  [1769517497.8842] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 07:38:18 np0005597378 python3[7266]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-59ed-d301-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:38:27 np0005597378 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 07:38:47 np0005597378 systemd[4307]: Starting Mark boot as successful...
Jan 27 07:38:47 np0005597378 systemd[4307]: Finished Mark boot as successful.
Jan 27 07:38:47 np0005597378 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.3191] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 07:39:03 np0005597378 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 07:39:03 np0005597378 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.5930] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.5932] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.5938] device (eth1): Activation: successful, device activated.
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.5943] manager: startup complete
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.5945] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <warn>  [1769517543.5948] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.5952] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 27 07:39:03 np0005597378 systemd[1]: Finished Network Manager Wait Online.
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.6044] dhcp4 (eth1): canceled DHCP transaction
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.6044] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.6044] dhcp4 (eth1): state changed no lease
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.6057] policy: auto-activating connection 'ci-private-network' (3a1f488f-a0fc-56be-8c4b-062dadc69ad0)
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.6061] device (eth1): Activation: starting connection 'ci-private-network' (3a1f488f-a0fc-56be-8c4b-062dadc69ad0)
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.6062] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.6064] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.6070] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.6078] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.6156] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.6159] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 07:39:03 np0005597378 NetworkManager[7191]: <info>  [1769517543.6164] device (eth1): Activation: successful, device activated.
Jan 27 07:39:13 np0005597378 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 07:39:17 np0005597378 python3[7372]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:39:18 np0005597378 python3[7445]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769517557.4610052-267-49745993549944/source _original_basename=tmp4k7qkwhp follow=False checksum=c874f12866423a504fbb1df678a6ab4e0d0920d1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:40:18 np0005597378 systemd-logind[786]: Session 1 logged out. Waiting for processes to exit.
Jan 27 07:41:47 np0005597378 systemd[4307]: Created slice User Background Tasks Slice.
Jan 27 07:41:47 np0005597378 systemd[4307]: Starting Cleanup of User's Temporary Files and Directories...
Jan 27 07:41:47 np0005597378 systemd[4307]: Finished Cleanup of User's Temporary Files and Directories.
Jan 27 07:45:49 np0005597378 systemd-logind[786]: New session 3 of user zuul.
Jan 27 07:45:49 np0005597378 systemd[1]: Started Session 3 of User zuul.
Jan 27 07:45:49 np0005597378 python3[7506]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-3036-bd0a-000000002165-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:45:50 np0005597378 python3[7535]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:45:50 np0005597378 python3[7561]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:45:50 np0005597378 python3[7587]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:45:50 np0005597378 python3[7613]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:45:51 np0005597378 python3[7639]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:45:52 np0005597378 python3[7717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:45:52 np0005597378 python3[7790]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769517951.931359-493-178035010553231/source _original_basename=tmphr9gr6dw follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:45:53 np0005597378 python3[7840]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 07:45:53 np0005597378 systemd[1]: Reloading.
Jan 27 07:45:53 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 07:45:55 np0005597378 python3[7895]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 27 07:45:55 np0005597378 python3[7921]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:45:56 np0005597378 python3[7949]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:45:56 np0005597378 python3[7977]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:45:56 np0005597378 python3[8005]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:45:57 np0005597378 python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-3036-bd0a-00000000216c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:45:57 np0005597378 python3[8062]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 07:45:59 np0005597378 systemd-logind[786]: Session 3 logged out. Waiting for processes to exit.
Jan 27 07:45:59 np0005597378 systemd[1]: session-3.scope: Deactivated successfully.
Jan 27 07:45:59 np0005597378 systemd[1]: session-3.scope: Consumed 4.069s CPU time.
Jan 27 07:45:59 np0005597378 systemd-logind[786]: Removed session 3.
Jan 27 07:46:01 np0005597378 systemd-logind[786]: New session 4 of user zuul.
Jan 27 07:46:01 np0005597378 systemd[1]: Started Session 4 of User zuul.
Jan 27 07:46:01 np0005597378 python3[8096]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 27 07:46:12 np0005597378 setsebool[8135]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 27 07:46:12 np0005597378 setsebool[8135]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 27 07:46:25 np0005597378 kernel: SELinux:  Converting 385 SID table entries...
Jan 27 07:46:25 np0005597378 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 07:46:25 np0005597378 kernel: SELinux:  policy capability open_perms=1
Jan 27 07:46:25 np0005597378 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 07:46:25 np0005597378 kernel: SELinux:  policy capability always_check_network=0
Jan 27 07:46:25 np0005597378 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 07:46:25 np0005597378 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 07:46:25 np0005597378 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 07:46:36 np0005597378 kernel: SELinux:  Converting 388 SID table entries...
Jan 27 07:46:36 np0005597378 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 07:46:36 np0005597378 kernel: SELinux:  policy capability open_perms=1
Jan 27 07:46:36 np0005597378 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 07:46:36 np0005597378 kernel: SELinux:  policy capability always_check_network=0
Jan 27 07:46:36 np0005597378 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 07:46:36 np0005597378 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 07:46:36 np0005597378 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 07:46:58 np0005597378 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 27 07:46:58 np0005597378 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 07:46:58 np0005597378 systemd[1]: Starting man-db-cache-update.service...
Jan 27 07:46:58 np0005597378 systemd[1]: Reloading.
Jan 27 07:46:58 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 07:46:58 np0005597378 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 07:47:00 np0005597378 python3[10294]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-c5eb-a4c4-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:47:01 np0005597378 kernel: evm: overlay not supported
Jan 27 07:47:02 np0005597378 systemd[4307]: Starting D-Bus User Message Bus...
Jan 27 07:47:02 np0005597378 dbus-broker-launch[11522]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 27 07:47:02 np0005597378 dbus-broker-launch[11522]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 27 07:47:02 np0005597378 systemd[4307]: Started D-Bus User Message Bus.
Jan 27 07:47:02 np0005597378 dbus-broker-lau[11522]: Ready
Jan 27 07:47:02 np0005597378 systemd[4307]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 27 07:47:02 np0005597378 systemd[4307]: Created slice Slice /user.
Jan 27 07:47:02 np0005597378 systemd[4307]: podman-11293.scope: unit configures an IP firewall, but not running as root.
Jan 27 07:47:02 np0005597378 systemd[4307]: (This warning is only shown for the first unit using IP firewalling.)
Jan 27 07:47:02 np0005597378 systemd[4307]: Started podman-11293.scope.
Jan 27 07:47:02 np0005597378 systemd[4307]: Started podman-pause-76a877da.scope.
Jan 27 07:47:02 np0005597378 python3[11961]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.83:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.83:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:47:02 np0005597378 python3[11961]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 27 07:47:03 np0005597378 systemd[1]: session-4.scope: Deactivated successfully.
Jan 27 07:47:03 np0005597378 systemd[1]: session-4.scope: Consumed 47.219s CPU time.
Jan 27 07:47:03 np0005597378 systemd-logind[786]: Session 4 logged out. Waiting for processes to exit.
Jan 27 07:47:03 np0005597378 systemd-logind[786]: Removed session 4.
Jan 27 07:47:17 np0005597378 irqbalance[780]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 27 07:47:17 np0005597378 irqbalance[780]: IRQ 27 affinity is now unmanaged
Jan 27 07:47:31 np0005597378 systemd-logind[786]: New session 5 of user zuul.
Jan 27 07:47:31 np0005597378 systemd[1]: Started Session 5 of User zuul.
Jan 27 07:47:32 np0005597378 python3[22404]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPXLd4XfSlhhGCQLMf0CAqXNYiNX/kNL1Af6fdaPjiJP47P5PjX123P3G3abnDChxBKw74UPUUYsAMgj7XAm8Eg= zuul@np0005597377.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:47:32 np0005597378 python3[22670]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPXLd4XfSlhhGCQLMf0CAqXNYiNX/kNL1Af6fdaPjiJP47P5PjX123P3G3abnDChxBKw74UPUUYsAMgj7XAm8Eg= zuul@np0005597377.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:47:33 np0005597378 python3[23020]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005597378.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 27 07:47:34 np0005597378 python3[23290]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPXLd4XfSlhhGCQLMf0CAqXNYiNX/kNL1Af6fdaPjiJP47P5PjX123P3G3abnDChxBKw74UPUUYsAMgj7XAm8Eg= zuul@np0005597377.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 07:47:34 np0005597378 python3[23530]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:47:35 np0005597378 python3[23804]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769518054.281608-135-201823171562958/source _original_basename=tmpud6nzgb6 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:47:35 np0005597378 python3[24102]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 27 07:47:36 np0005597378 systemd[1]: Starting Hostname Service...
Jan 27 07:47:36 np0005597378 systemd[1]: Started Hostname Service.
Jan 27 07:47:36 np0005597378 systemd-hostnamed[24154]: Changed pretty hostname to 'compute-0'
Jan 27 07:47:36 np0005597378 systemd-hostnamed[24154]: Hostname set to <compute-0> (static)
Jan 27 07:47:36 np0005597378 NetworkManager[7191]: <info>  [1769518056.1197] hostname: static hostname changed from "np0005597378.novalocal" to "compute-0"
Jan 27 07:47:36 np0005597378 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 07:47:36 np0005597378 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 07:47:36 np0005597378 systemd[1]: session-5.scope: Deactivated successfully.
Jan 27 07:47:36 np0005597378 systemd[1]: session-5.scope: Consumed 2.192s CPU time.
Jan 27 07:47:36 np0005597378 systemd-logind[786]: Session 5 logged out. Waiting for processes to exit.
Jan 27 07:47:36 np0005597378 systemd-logind[786]: Removed session 5.
Jan 27 07:47:46 np0005597378 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 07:47:53 np0005597378 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 07:47:53 np0005597378 systemd[1]: Finished man-db-cache-update.service.
Jan 27 07:47:53 np0005597378 systemd[1]: man-db-cache-update.service: Consumed 54.870s CPU time.
Jan 27 07:47:53 np0005597378 systemd[1]: run-rafc42a9f843441dbaff38815debf85a5.service: Deactivated successfully.
Jan 27 07:48:06 np0005597378 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 07:51:47 np0005597378 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 27 07:51:47 np0005597378 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 27 07:51:47 np0005597378 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 27 07:51:47 np0005597378 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 27 07:51:50 np0005597378 systemd-logind[786]: New session 6 of user zuul.
Jan 27 07:51:50 np0005597378 systemd[1]: Started Session 6 of User zuul.
Jan 27 07:51:50 np0005597378 python3[29995]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 07:51:52 np0005597378 python3[30111]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:51:52 np0005597378 python3[30184]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769518311.9062355-33614-53435541889153/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:51:52 np0005597378 python3[30210]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:51:53 np0005597378 python3[30283]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769518311.9062355-33614-53435541889153/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:51:53 np0005597378 python3[30309]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:51:53 np0005597378 python3[30382]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769518311.9062355-33614-53435541889153/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:51:54 np0005597378 python3[30408]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:51:54 np0005597378 python3[30481]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769518311.9062355-33614-53435541889153/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:51:54 np0005597378 python3[30507]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:51:55 np0005597378 python3[30580]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769518311.9062355-33614-53435541889153/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:51:55 np0005597378 python3[30606]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:51:55 np0005597378 python3[30679]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769518311.9062355-33614-53435541889153/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:51:56 np0005597378 python3[30705]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 07:51:56 np0005597378 python3[30778]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769518311.9062355-33614-53435541889153/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 07:51:57 np0005597378 irqbalance[780]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 27 07:51:57 np0005597378 irqbalance[780]: IRQ 26 affinity is now unmanaged
Jan 27 07:52:13 np0005597378 python3[30836]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 07:57:13 np0005597378 systemd-logind[786]: Session 6 logged out. Waiting for processes to exit.
Jan 27 07:57:13 np0005597378 systemd[1]: session-6.scope: Deactivated successfully.
Jan 27 07:57:13 np0005597378 systemd[1]: session-6.scope: Consumed 4.938s CPU time.
Jan 27 07:57:13 np0005597378 systemd-logind[786]: Removed session 6.
Jan 27 08:04:05 np0005597378 systemd-logind[786]: New session 7 of user zuul.
Jan 27 08:04:05 np0005597378 systemd[1]: Started Session 7 of User zuul.
Jan 27 08:04:06 np0005597378 python3.9[31016]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:04:07 np0005597378 python3.9[31197]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:04:17 np0005597378 systemd[1]: session-7.scope: Deactivated successfully.
Jan 27 08:04:17 np0005597378 systemd[1]: session-7.scope: Consumed 8.131s CPU time.
Jan 27 08:04:17 np0005597378 systemd-logind[786]: Session 7 logged out. Waiting for processes to exit.
Jan 27 08:04:17 np0005597378 systemd-logind[786]: Removed session 7.
Jan 27 08:04:36 np0005597378 systemd-logind[786]: New session 8 of user zuul.
Jan 27 08:04:36 np0005597378 systemd[1]: Started Session 8 of User zuul.
Jan 27 08:04:37 np0005597378 python3.9[31408]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 27 08:04:38 np0005597378 python3.9[31582]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:04:39 np0005597378 python3.9[31734]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:04:40 np0005597378 python3.9[31887]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:04:41 np0005597378 python3.9[32039]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:04:42 np0005597378 python3.9[32191]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:04:42 np0005597378 python3.9[32314]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769519081.690585-68-178980056288708/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:04:43 np0005597378 python3.9[32466]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:04:44 np0005597378 python3.9[32622]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:04:45 np0005597378 python3.9[32774]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:04:46 np0005597378 python3.9[32924]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:04:50 np0005597378 python3.9[33178]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:04:50 np0005597378 python3.9[33328]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:04:51 np0005597378 python3.9[33482]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:04:53 np0005597378 python3.9[33640]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:04:54 np0005597378 python3.9[33724]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:05:45 np0005597378 systemd[1]: Reloading.
Jan 27 08:05:45 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:05:46 np0005597378 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 27 08:05:46 np0005597378 systemd[1]: Reloading.
Jan 27 08:05:46 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:05:46 np0005597378 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 27 08:05:46 np0005597378 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 27 08:05:46 np0005597378 systemd[1]: Reloading.
Jan 27 08:05:46 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:05:47 np0005597378 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 27 08:05:47 np0005597378 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 27 08:05:47 np0005597378 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 27 08:05:47 np0005597378 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 27 08:06:47 np0005597378 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=5 res=1
Jan 27 08:06:47 np0005597378 systemd[1]: Starting dnf makecache...
Jan 27 08:06:47 np0005597378 dnf[34216]: Failed determining last makecache time.
Jan 27 08:06:47 np0005597378 dnf[34216]: delorean-openstack-barbican-42b4c41831408a8e323 112 kB/s | 3.0 kB     00:00
Jan 27 08:06:47 np0005597378 dnf[34216]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 145 kB/s | 3.0 kB     00:00
Jan 27 08:06:47 np0005597378 dnf[34216]: delorean-openstack-cinder-1c00d6490d88e436f26ef 165 kB/s | 3.0 kB     00:00
Jan 27 08:06:47 np0005597378 dnf[34216]: delorean-python-stevedore-c4acc5639fd2329372142 163 kB/s | 3.0 kB     00:00
Jan 27 08:06:47 np0005597378 dnf[34216]: delorean-python-cloudkitty-tests-tempest-2c80f8 154 kB/s | 3.0 kB     00:00
Jan 27 08:06:47 np0005597378 dnf[34216]: delorean-os-refresh-config-9bfc52b5049be2d8de61 157 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 160 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: delorean-python-designate-tests-tempest-347fdbc 185 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: delorean-openstack-glance-1fd12c29b339f30fe823e 189 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 168 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: delorean-openstack-manila-3c01b7181572c95dac462 142 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: delorean-python-whitebox-neutron-tests-tempest- 161 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: delorean-openstack-octavia-ba397f07a7331190208c 155 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: delorean-openstack-watcher-c014f81a8647287f6dcc 113 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: delorean-ansible-config_template-5ccaa22121a7ff 117 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 133 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: delorean-openstack-swift-dc98a8463506ac520c469a 130 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: delorean-python-tempestconf-8515371b7cceebd4282 156 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: delorean-openstack-heat-ui-013accbfd179753bc3f0 125 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: CentOS Stream 9 - BaseOS                         56 kB/s | 6.7 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: CentOS Stream 9 - AppStream                      65 kB/s | 6.8 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: CentOS Stream 9 - CRB                            61 kB/s | 6.6 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: CentOS Stream 9 - Extras packages                72 kB/s | 7.3 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: dlrn-antelope-testing                           161 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: dlrn-antelope-build-deps                        174 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: centos9-rabbitmq                                113 kB/s | 3.0 kB     00:00
Jan 27 08:06:48 np0005597378 dnf[34216]: centos9-storage                                 120 kB/s | 3.0 kB     00:00
Jan 27 08:06:49 np0005597378 dnf[34216]: centos9-opstools                                117 kB/s | 3.0 kB     00:00
Jan 27 08:06:49 np0005597378 dnf[34216]: NFV SIG OpenvSwitch                              73 kB/s | 3.0 kB     00:00
Jan 27 08:06:49 np0005597378 dnf[34216]: repo-setup-centos-appstream                     100 kB/s | 4.4 kB     00:00
Jan 27 08:06:49 np0005597378 kernel: SELinux:  Converting 2724 SID table entries...
Jan 27 08:06:49 np0005597378 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 08:06:49 np0005597378 kernel: SELinux:  policy capability open_perms=1
Jan 27 08:06:49 np0005597378 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 08:06:49 np0005597378 kernel: SELinux:  policy capability always_check_network=0
Jan 27 08:06:49 np0005597378 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 08:06:49 np0005597378 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 08:06:49 np0005597378 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 08:06:49 np0005597378 dnf[34216]: repo-setup-centos-baseos                        157 kB/s | 3.9 kB     00:00
Jan 27 08:06:49 np0005597378 dnf[34216]: repo-setup-centos-highavailability              178 kB/s | 3.9 kB     00:00
Jan 27 08:06:49 np0005597378 dnf[34216]: repo-setup-centos-powertools                     62 kB/s | 4.3 kB     00:00
Jan 27 08:06:49 np0005597378 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 27 08:06:49 np0005597378 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:06:49 np0005597378 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:06:49 np0005597378 systemd[1]: Reloading.
Jan 27 08:06:49 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:06:49 np0005597378 dnf[34216]: Extra Packages for Enterprise Linux 9 - x86_64   75 kB/s |  23 kB     00:00
Jan 27 08:06:49 np0005597378 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:06:50 np0005597378 dnf[34216]: Metadata cache created.
Jan 27 08:06:50 np0005597378 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 27 08:06:50 np0005597378 systemd[1]: Finished dnf makecache.
Jan 27 08:06:50 np0005597378 systemd[1]: dnf-makecache.service: Consumed 1.680s CPU time.
Jan 27 08:06:50 np0005597378 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:06:50 np0005597378 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:06:50 np0005597378 systemd[1]: man-db-cache-update.service: Consumed 1.026s CPU time.
Jan 27 08:06:50 np0005597378 systemd[1]: run-rf5168f0d73c2452fa1081e8a966e4ffe.service: Deactivated successfully.
Jan 27 08:06:50 np0005597378 python3.9[35280]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:06:53 np0005597378 python3.9[35561]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 27 08:06:53 np0005597378 python3.9[35713]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 27 08:06:56 np0005597378 python3.9[35866]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:06:57 np0005597378 python3.9[36021]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 27 08:06:58 np0005597378 python3.9[36173]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:06:59 np0005597378 python3.9[36325]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:07:00 np0005597378 python3.9[36448]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769519219.0511453-231-128644055769447/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=97a46cb0fadae87f68ea27d8bade7fcd25874cec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:07:00 np0005597378 python3.9[36600]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:07:01 np0005597378 python3.9[36752]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:07:02 np0005597378 python3.9[36905]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:07:03 np0005597378 python3.9[37057]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 27 08:07:03 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:07:04 np0005597378 python3.9[37211]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 08:07:08 np0005597378 python3.9[37370]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 08:07:08 np0005597378 python3.9[37530]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 27 08:07:09 np0005597378 python3.9[37683]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 08:07:10 np0005597378 python3.9[37841]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 27 08:07:10 np0005597378 python3.9[37993]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:07:13 np0005597378 python3.9[38146]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:07:13 np0005597378 python3.9[38298]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:07:14 np0005597378 python3.9[38421]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769519233.2573376-350-19344068584293/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:07:15 np0005597378 python3.9[38573]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:07:15 np0005597378 systemd[1]: Starting Load Kernel Modules...
Jan 27 08:07:15 np0005597378 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 27 08:07:15 np0005597378 kernel: Bridge firewalling registered
Jan 27 08:07:15 np0005597378 systemd-modules-load[38577]: Inserted module 'br_netfilter'
Jan 27 08:07:15 np0005597378 systemd[1]: Finished Load Kernel Modules.
Jan 27 08:07:16 np0005597378 python3.9[38733]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:07:16 np0005597378 python3.9[38856]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769519235.9033134-373-60485194793760/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:07:17 np0005597378 python3.9[39008]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:07:20 np0005597378 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 27 08:07:21 np0005597378 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 27 08:07:21 np0005597378 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:07:21 np0005597378 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:07:21 np0005597378 systemd[1]: Reloading.
Jan 27 08:07:21 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:07:21 np0005597378 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:07:22 np0005597378 python3.9[40191]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:07:23 np0005597378 python3.9[41206]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 27 08:07:24 np0005597378 python3.9[41976]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:07:24 np0005597378 python3.9[42852]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:07:24 np0005597378 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 27 08:07:25 np0005597378 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:07:25 np0005597378 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:07:25 np0005597378 systemd[1]: man-db-cache-update.service: Consumed 4.798s CPU time.
Jan 27 08:07:25 np0005597378 systemd[1]: run-r8434f64c30654d8d94b4783b1cea1f05.service: Deactivated successfully.
Jan 27 08:07:25 np0005597378 systemd[1]: Starting Authorization Manager...
Jan 27 08:07:25 np0005597378 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 27 08:07:25 np0005597378 polkitd[43394]: Started polkitd version 0.117
Jan 27 08:07:25 np0005597378 systemd[1]: Started Authorization Manager.
Jan 27 08:07:26 np0005597378 python3.9[43564]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:07:26 np0005597378 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 27 08:07:26 np0005597378 systemd[1]: tuned.service: Deactivated successfully.
Jan 27 08:07:26 np0005597378 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 27 08:07:26 np0005597378 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 27 08:07:26 np0005597378 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 27 08:07:27 np0005597378 python3.9[43725]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 27 08:07:29 np0005597378 python3.9[43877]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:07:29 np0005597378 systemd[1]: Reloading.
Jan 27 08:07:29 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:07:30 np0005597378 python3.9[44066]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:07:30 np0005597378 systemd[1]: Reloading.
Jan 27 08:07:30 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:07:31 np0005597378 python3.9[44254]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:07:32 np0005597378 python3.9[44407]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:07:32 np0005597378 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 27 08:07:32 np0005597378 python3.9[44560]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:07:34 np0005597378 python3.9[44722]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:07:35 np0005597378 python3.9[44875]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:07:35 np0005597378 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 27 08:07:35 np0005597378 systemd[1]: Stopped Apply Kernel Variables.
Jan 27 08:07:35 np0005597378 systemd[1]: Stopping Apply Kernel Variables...
Jan 27 08:07:35 np0005597378 systemd[1]: Starting Apply Kernel Variables...
Jan 27 08:07:35 np0005597378 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 27 08:07:35 np0005597378 systemd[1]: Finished Apply Kernel Variables.
Jan 27 08:07:36 np0005597378 systemd[1]: session-8.scope: Deactivated successfully.
Jan 27 08:07:36 np0005597378 systemd[1]: session-8.scope: Consumed 2min 13.214s CPU time.
Jan 27 08:07:36 np0005597378 systemd-logind[786]: Session 8 logged out. Waiting for processes to exit.
Jan 27 08:07:36 np0005597378 systemd-logind[786]: Removed session 8.
Jan 27 08:07:41 np0005597378 systemd-logind[786]: New session 9 of user zuul.
Jan 27 08:07:41 np0005597378 systemd[1]: Started Session 9 of User zuul.
Jan 27 08:07:42 np0005597378 python3.9[45059]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:07:44 np0005597378 python3.9[45217]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 27 08:07:44 np0005597378 python3.9[45370]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 08:07:45 np0005597378 python3.9[45528]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 08:07:46 np0005597378 python3.9[45688]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:07:47 np0005597378 python3.9[45772]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 08:07:50 np0005597378 python3.9[45936]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:08:01 np0005597378 kernel: SELinux:  Converting 2736 SID table entries...
Jan 27 08:08:01 np0005597378 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 08:08:01 np0005597378 kernel: SELinux:  policy capability open_perms=1
Jan 27 08:08:01 np0005597378 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 08:08:01 np0005597378 kernel: SELinux:  policy capability always_check_network=0
Jan 27 08:08:01 np0005597378 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 08:08:01 np0005597378 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 08:08:01 np0005597378 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 08:08:02 np0005597378 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 27 08:08:02 np0005597378 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 27 08:08:03 np0005597378 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:08:03 np0005597378 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:08:03 np0005597378 systemd[1]: Reloading.
Jan 27 08:08:03 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:08:03 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:08:03 np0005597378 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:08:04 np0005597378 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:08:04 np0005597378 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:08:04 np0005597378 systemd[1]: run-rc46ab3a2ba7345ad871abd266247fa5f.service: Deactivated successfully.
Jan 27 08:08:05 np0005597378 python3.9[47033]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:08:05 np0005597378 systemd[1]: Reloading.
Jan 27 08:08:05 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:08:05 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:08:05 np0005597378 systemd[1]: Starting Open vSwitch Database Unit...
Jan 27 08:08:05 np0005597378 chown[47075]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 27 08:08:05 np0005597378 ovs-ctl[47080]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 27 08:08:05 np0005597378 ovs-ctl[47080]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 27 08:08:05 np0005597378 ovs-ctl[47080]: Starting ovsdb-server [  OK  ]
Jan 27 08:08:05 np0005597378 ovs-vsctl[47129]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 27 08:08:05 np0005597378 ovs-vsctl[47148]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"65761215-e4d7-402d-90c8-18b025613da8\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 27 08:08:05 np0005597378 ovs-ctl[47080]: Configuring Open vSwitch system IDs [  OK  ]
Jan 27 08:08:05 np0005597378 ovs-ctl[47080]: Enabling remote OVSDB managers [  OK  ]
Jan 27 08:08:05 np0005597378 systemd[1]: Started Open vSwitch Database Unit.
Jan 27 08:08:05 np0005597378 ovs-vsctl[47154]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 27 08:08:05 np0005597378 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 27 08:08:05 np0005597378 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 27 08:08:05 np0005597378 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 27 08:08:05 np0005597378 kernel: openvswitch: Open vSwitch switching datapath
Jan 27 08:08:05 np0005597378 ovs-ctl[47199]: Inserting openvswitch module [  OK  ]
Jan 27 08:08:06 np0005597378 ovs-ctl[47168]: Starting ovs-vswitchd [  OK  ]
Jan 27 08:08:06 np0005597378 ovs-vsctl[47216]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 27 08:08:06 np0005597378 ovs-ctl[47168]: Enabling remote OVSDB managers [  OK  ]
Jan 27 08:08:06 np0005597378 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 27 08:08:06 np0005597378 systemd[1]: Starting Open vSwitch...
Jan 27 08:08:06 np0005597378 systemd[1]: Finished Open vSwitch.
Jan 27 08:08:06 np0005597378 python3.9[47368]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:08:07 np0005597378 python3.9[47520]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 27 08:08:09 np0005597378 kernel: SELinux:  Converting 2750 SID table entries...
Jan 27 08:08:09 np0005597378 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 08:08:09 np0005597378 kernel: SELinux:  policy capability open_perms=1
Jan 27 08:08:09 np0005597378 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 08:08:09 np0005597378 kernel: SELinux:  policy capability always_check_network=0
Jan 27 08:08:09 np0005597378 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 08:08:09 np0005597378 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 08:08:09 np0005597378 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 08:08:10 np0005597378 python3.9[47676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:08:10 np0005597378 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 27 08:08:11 np0005597378 python3.9[47834]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:08:13 np0005597378 python3.9[47987]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:08:14 np0005597378 python3.9[48274]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 27 08:08:15 np0005597378 python3.9[48424]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:08:16 np0005597378 python3.9[48578]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:08:17 np0005597378 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:08:17 np0005597378 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:08:17 np0005597378 systemd[1]: Reloading.
Jan 27 08:08:18 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:08:18 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:08:18 np0005597378 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:08:18 np0005597378 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:08:18 np0005597378 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:08:18 np0005597378 systemd[1]: run-r78213f2c1fab48ec8fb498c6004872df.service: Deactivated successfully.
Jan 27 08:08:19 np0005597378 python3.9[48895]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:08:19 np0005597378 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 27 08:08:19 np0005597378 systemd[1]: Stopped Network Manager Wait Online.
Jan 27 08:08:19 np0005597378 systemd[1]: Stopping Network Manager Wait Online...
Jan 27 08:08:19 np0005597378 systemd[1]: Stopping Network Manager...
Jan 27 08:08:19 np0005597378 NetworkManager[7191]: <info>  [1769519299.4021] caught SIGTERM, shutting down normally.
Jan 27 08:08:19 np0005597378 NetworkManager[7191]: <info>  [1769519299.4037] dhcp4 (eth0): canceled DHCP transaction
Jan 27 08:08:19 np0005597378 NetworkManager[7191]: <info>  [1769519299.4037] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 08:08:19 np0005597378 NetworkManager[7191]: <info>  [1769519299.4037] dhcp4 (eth0): state changed no lease
Jan 27 08:08:19 np0005597378 NetworkManager[7191]: <info>  [1769519299.4039] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 08:08:19 np0005597378 NetworkManager[7191]: <info>  [1769519299.4097] exiting (success)
Jan 27 08:08:19 np0005597378 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 08:08:19 np0005597378 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 27 08:08:19 np0005597378 systemd[1]: Stopped Network Manager.
Jan 27 08:08:19 np0005597378 systemd[1]: NetworkManager.service: Consumed 12.328s CPU time, 4.1M memory peak, read 0B from disk, written 33.5K to disk.
Jan 27 08:08:19 np0005597378 systemd[1]: Starting Network Manager...
Jan 27 08:08:19 np0005597378 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.4645] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:fdf4cf63-50a8-4d95-a9b7-7837ebf6d82a)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.4645] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.4694] manager[0x55ea40ccb000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 08:08:19 np0005597378 systemd[1]: Starting Hostname Service...
Jan 27 08:08:19 np0005597378 systemd[1]: Started Hostname Service.
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5470] hostname: hostname: using hostnamed
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5471] hostname: static hostname changed from (none) to "compute-0"
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5475] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5479] manager[0x55ea40ccb000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5480] manager[0x55ea40ccb000]: rfkill: WWAN hardware radio set enabled
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5500] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5509] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5509] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5509] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5510] manager: Networking is enabled by state file
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5512] settings: Loaded settings plugin: keyfile (internal)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5515] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5535] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5544] dhcp: init: Using DHCP client 'internal'
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5546] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5550] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5555] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5561] device (lo): Activation: starting connection 'lo' (2703350d-1698-4e5c-a1cb-b77d40fc5e70)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5567] device (eth0): carrier: link connected
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5570] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5574] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5575] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5581] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5587] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5592] device (eth1): carrier: link connected
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5595] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5599] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (3a1f488f-a0fc-56be-8c4b-062dadc69ad0) (indicated)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5599] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5604] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5609] device (eth1): Activation: starting connection 'ci-private-network' (3a1f488f-a0fc-56be-8c4b-062dadc69ad0)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5614] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 08:08:19 np0005597378 systemd[1]: Started Network Manager.
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5621] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5634] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5636] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5655] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5658] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5659] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5661] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5665] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5670] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5673] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5680] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5695] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5701] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5703] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5708] device (lo): Activation: successful, device activated.
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5715] dhcp4 (eth0): state changed new lease, address=38.102.83.129
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5721] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 08:08:19 np0005597378 systemd[1]: Starting Network Manager Wait Online...
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5772] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5777] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5778] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5780] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5782] device (eth1): Activation: successful, device activated.
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5797] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5798] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5801] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5803] device (eth0): Activation: successful, device activated.
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5807] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 08:08:19 np0005597378 NetworkManager[48904]: <info>  [1769519299.5809] manager: startup complete
Jan 27 08:08:19 np0005597378 systemd[1]: Finished Network Manager Wait Online.
Jan 27 08:08:20 np0005597378 python3.9[49121]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:08:24 np0005597378 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:08:24 np0005597378 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:08:24 np0005597378 systemd[1]: Reloading.
Jan 27 08:08:25 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:08:25 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:08:25 np0005597378 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:08:25 np0005597378 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:08:25 np0005597378 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:08:25 np0005597378 systemd[1]: run-r9f8ff91c0b70446796a2bbe287518365.service: Deactivated successfully.
Jan 27 08:08:26 np0005597378 python3.9[49579]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:08:27 np0005597378 python3.9[49731]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:08:28 np0005597378 python3.9[49885]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:08:29 np0005597378 python3.9[50037]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:08:29 np0005597378 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 08:08:29 np0005597378 python3.9[50189]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:08:30 np0005597378 python3.9[50341]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:08:31 np0005597378 python3.9[50493]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:08:31 np0005597378 python3.9[50616]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769519310.629421-224-110540543000773/.source _original_basename=.2uiu13op follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:08:32 np0005597378 python3.9[50768]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:08:33 np0005597378 python3.9[50920]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 27 08:08:34 np0005597378 python3.9[51072]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:08:36 np0005597378 python3.9[51499]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 27 08:08:37 np0005597378 ansible-async_wrapper.py[51674]: Invoked with j762337131821 300 /home/zuul/.ansible/tmp/ansible-tmp-1769519316.6549685-290-199189419304979/AnsiballZ_edpm_os_net_config.py _
Jan 27 08:08:37 np0005597378 ansible-async_wrapper.py[51677]: Starting module and watcher
Jan 27 08:08:37 np0005597378 ansible-async_wrapper.py[51677]: Start watching 51678 (300)
Jan 27 08:08:37 np0005597378 ansible-async_wrapper.py[51678]: Start module (51678)
Jan 27 08:08:37 np0005597378 ansible-async_wrapper.py[51674]: Return async_wrapper task started.
Jan 27 08:08:37 np0005597378 python3.9[51679]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 27 08:08:38 np0005597378 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 27 08:08:38 np0005597378 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 27 08:08:38 np0005597378 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 27 08:08:38 np0005597378 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 27 08:08:38 np0005597378 kernel: cfg80211: failed to load regulatory.db
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.5345] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.5365] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6090] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6092] audit: op="connection-add" uuid="a6ed6f7c-2d10-4b31-9475-8c73e3c60934" name="br-ex-br" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6111] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6112] audit: op="connection-add" uuid="a6acf2cb-82f8-48e1-aad3-889d99c91f39" name="br-ex-port" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6127] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6128] audit: op="connection-add" uuid="f70fcae6-3d40-4e77-8406-ec8f00564089" name="eth1-port" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6142] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6144] audit: op="connection-add" uuid="4e61cc0a-13d6-4046-9bb6-a3ac88bb8292" name="vlan20-port" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6157] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6158] audit: op="connection-add" uuid="eba542bc-92a7-4e73-9629-2d7b802a0727" name="vlan21-port" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6171] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6172] audit: op="connection-add" uuid="bfc7dbde-32ad-449a-a606-266f52005672" name="vlan22-port" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6185] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6187] audit: op="connection-add" uuid="d5818279-15e6-4bda-ae78-7c16d56e2b1e" name="vlan23-port" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6207] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6225] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6227] audit: op="connection-add" uuid="6da716d6-c5f5-4395-9a46-3c699b8a1784" name="br-ex-if" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6268] audit: op="connection-update" uuid="3a1f488f-a0fc-56be-8c4b-062dadc69ad0" name="ci-private-network" args="ovs-external-ids.data,ipv6.addr-gen-mode,ipv6.addresses,ipv6.dns,ipv6.method,ipv6.routing-rules,ipv6.routes,connection.master,connection.timestamp,connection.controller,connection.port-type,connection.slave-type,ipv4.never-default,ipv4.addresses,ipv4.dns,ipv4.method,ipv4.routing-rules,ipv4.routes,ovs-interface.type" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6286] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6288] audit: op="connection-add" uuid="f2724d21-a771-443c-93e3-a6bf2cb98f6e" name="vlan20-if" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6305] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6306] audit: op="connection-add" uuid="2a846863-7d73-4fb9-a923-af7600501c0a" name="vlan21-if" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6323] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6324] audit: op="connection-add" uuid="44eb01c4-b996-42af-a154-92b0d64dd546" name="vlan22-if" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6342] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6344] audit: op="connection-add" uuid="1b612ad9-c97d-416e-97dc-6d476df1d90b" name="vlan23-if" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6359] audit: op="connection-delete" uuid="eeeb07e2-82af-3685-be38-c00364f40632" name="Wired connection 1" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6372] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <warn>  [1769519319.6375] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6381] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6385] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (a6ed6f7c-2d10-4b31-9475-8c73e3c60934)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6386] audit: op="connection-activate" uuid="a6ed6f7c-2d10-4b31-9475-8c73e3c60934" name="br-ex-br" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6387] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <warn>  [1769519319.6388] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6393] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6396] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a6acf2cb-82f8-48e1-aad3-889d99c91f39)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6398] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <warn>  [1769519319.6399] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6403] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6407] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (f70fcae6-3d40-4e77-8406-ec8f00564089)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6409] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <warn>  [1769519319.6410] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6415] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6418] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (4e61cc0a-13d6-4046-9bb6-a3ac88bb8292)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6420] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <warn>  [1769519319.6421] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6426] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6429] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (eba542bc-92a7-4e73-9629-2d7b802a0727)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6431] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <warn>  [1769519319.6432] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6436] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6440] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (bfc7dbde-32ad-449a-a606-266f52005672)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6441] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <warn>  [1769519319.6442] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6447] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6452] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (d5818279-15e6-4bda-ae78-7c16d56e2b1e)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6453] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6455] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6457] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6464] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <warn>  [1769519319.6465] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6467] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6472] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (6da716d6-c5f5-4395-9a46-3c699b8a1784)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6472] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6476] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6477] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6478] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6479] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6491] device (eth1): disconnecting for new activation request.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6505] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6536] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6539] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6541] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6544] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <warn>  [1769519319.6546] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6550] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6556] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (f2724d21-a771-443c-93e3-a6bf2cb98f6e)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6557] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6560] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6563] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6565] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6569] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <warn>  [1769519319.6571] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6575] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6581] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (2a846863-7d73-4fb9-a923-af7600501c0a)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6582] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6586] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6588] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6590] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6594] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <warn>  [1769519319.6595] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6599] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6605] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (44eb01c4-b996-42af-a154-92b0d64dd546)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6606] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6611] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6614] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6615] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6618] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <warn>  [1769519319.6620] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6624] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6630] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (1b612ad9-c97d-416e-97dc-6d476df1d90b)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6630] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6634] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6636] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6640] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6642] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6660] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6665] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6668] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6671] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6680] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6684] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6688] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 kernel: ovs-system: entered promiscuous mode
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6693] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6695] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6700] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6704] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6719] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6721] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6726] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6730] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6732] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 kernel: Timeout policy base is empty
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6735] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 systemd-udevd[51684]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6739] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6744] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6757] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6761] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6765] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6769] dhcp4 (eth0): canceled DHCP transaction
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6770] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6770] dhcp4 (eth0): state changed no lease
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6771] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6780] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6784] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51680 uid=0 result="fail" reason="Device is not activated"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6837] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6845] dhcp4 (eth0): state changed new lease, address=38.102.83.129
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6854] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 27 08:08:39 np0005597378 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6898] device (eth1): disconnecting for new activation request.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6899] audit: op="connection-activate" uuid="3a1f488f-a0fc-56be-8c4b-062dadc69ad0" name="ci-private-network" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6903] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6909] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6913] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6990] device (eth1): Activation: starting connection 'ci-private-network' (3a1f488f-a0fc-56be-8c4b-062dadc69ad0)
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6995] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.6996] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7009] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7011] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7016] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7019] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7024] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7028] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7032] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7033] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7035] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7037] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7038] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7039] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51680 uid=0 result="success"
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7041] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7046] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7049] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7051] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7054] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7057] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7060] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 08:08:39 np0005597378 kernel: br-ex: entered promiscuous mode
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7064] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7068] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7071] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7074] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7079] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7082] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7122] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7124] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7129] device (eth1): Activation: successful, device activated.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7167] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7189] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 kernel: vlan22: entered promiscuous mode
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7210] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7213] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7217] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 08:08:39 np0005597378 kernel: vlan23: entered promiscuous mode
Jan 27 08:08:39 np0005597378 systemd-udevd[51686]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:08:39 np0005597378 kernel: vlan21: entered promiscuous mode
Jan 27 08:08:39 np0005597378 systemd-udevd[51685]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7346] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7357] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7376] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7380] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7385] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7389] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7396] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 kernel: vlan20: entered promiscuous mode
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7429] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7434] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7438] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 08:08:39 np0005597378 systemd-udevd[51797]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7447] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7456] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7486] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7487] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7491] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7533] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7544] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7559] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7560] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 08:08:39 np0005597378 NetworkManager[48904]: <info>  [1769519319.7564] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 08:08:40 np0005597378 NetworkManager[48904]: <info>  [1769519320.8728] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51680 uid=0 result="success"
Jan 27 08:08:41 np0005597378 NetworkManager[48904]: <info>  [1769519321.0291] checkpoint[0x55ea40c9f950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 27 08:08:41 np0005597378 NetworkManager[48904]: <info>  [1769519321.0293] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51680 uid=0 result="success"
Jan 27 08:08:41 np0005597378 python3.9[52039]: ansible-ansible.legacy.async_status Invoked with jid=j762337131821.51674 mode=status _async_dir=/root/.ansible_async
Jan 27 08:08:41 np0005597378 NetworkManager[48904]: <info>  [1769519321.3070] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51680 uid=0 result="success"
Jan 27 08:08:41 np0005597378 NetworkManager[48904]: <info>  [1769519321.3081] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51680 uid=0 result="success"
Jan 27 08:08:41 np0005597378 NetworkManager[48904]: <info>  [1769519321.5700] audit: op="networking-control" arg="global-dns-configuration" pid=51680 uid=0 result="success"
Jan 27 08:08:41 np0005597378 NetworkManager[48904]: <info>  [1769519321.5732] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 27 08:08:41 np0005597378 NetworkManager[48904]: <info>  [1769519321.5763] audit: op="networking-control" arg="global-dns-configuration" pid=51680 uid=0 result="success"
Jan 27 08:08:41 np0005597378 NetworkManager[48904]: <info>  [1769519321.5788] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51680 uid=0 result="success"
Jan 27 08:08:41 np0005597378 NetworkManager[48904]: <info>  [1769519321.7354] checkpoint[0x55ea40c9fa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 27 08:08:41 np0005597378 NetworkManager[48904]: <info>  [1769519321.7357] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51680 uid=0 result="success"
Jan 27 08:08:41 np0005597378 ansible-async_wrapper.py[51678]: Module complete (51678)
Jan 27 08:08:42 np0005597378 ansible-async_wrapper.py[51677]: Done in kid B.
Jan 27 08:08:44 np0005597378 python3.9[52146]: ansible-ansible.legacy.async_status Invoked with jid=j762337131821.51674 mode=status _async_dir=/root/.ansible_async
Jan 27 08:08:45 np0005597378 python3.9[52246]: ansible-ansible.legacy.async_status Invoked with jid=j762337131821.51674 mode=cleanup _async_dir=/root/.ansible_async
Jan 27 08:08:46 np0005597378 python3.9[52398]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:08:46 np0005597378 python3.9[52521]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769519325.5476856-317-31290309983851/.source.returncode _original_basename=.mcxaout6 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:08:47 np0005597378 python3.9[52673]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:08:48 np0005597378 python3.9[52796]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769519326.9351225-333-63188809029533/.source.cfg _original_basename=.zekieo6s follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:08:48 np0005597378 python3.9[52949]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:08:49 np0005597378 systemd[1]: Reloading Network Manager...
Jan 27 08:08:49 np0005597378 NetworkManager[48904]: <info>  [1769519329.0674] audit: op="reload" arg="0" pid=52953 uid=0 result="success"
Jan 27 08:08:49 np0005597378 NetworkManager[48904]: <info>  [1769519329.0680] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 27 08:08:49 np0005597378 systemd[1]: Reloaded Network Manager.
Jan 27 08:08:49 np0005597378 systemd[1]: session-9.scope: Deactivated successfully.
Jan 27 08:08:49 np0005597378 systemd[1]: session-9.scope: Consumed 48.902s CPU time.
Jan 27 08:08:49 np0005597378 systemd-logind[786]: Session 9 logged out. Waiting for processes to exit.
Jan 27 08:08:49 np0005597378 systemd-logind[786]: Removed session 9.
Jan 27 08:08:49 np0005597378 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 08:08:54 np0005597378 systemd-logind[786]: New session 10 of user zuul.
Jan 27 08:08:54 np0005597378 systemd[1]: Started Session 10 of User zuul.
Jan 27 08:08:55 np0005597378 python3.9[53140]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:08:56 np0005597378 python3.9[53294]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:08:57 np0005597378 python3.9[53487]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:08:57 np0005597378 systemd[1]: session-10.scope: Deactivated successfully.
Jan 27 08:08:57 np0005597378 systemd[1]: session-10.scope: Consumed 2.493s CPU time.
Jan 27 08:08:57 np0005597378 systemd-logind[786]: Session 10 logged out. Waiting for processes to exit.
Jan 27 08:08:57 np0005597378 systemd-logind[786]: Removed session 10.
Jan 27 08:08:59 np0005597378 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 08:09:03 np0005597378 systemd-logind[786]: New session 11 of user zuul.
Jan 27 08:09:03 np0005597378 systemd[1]: Started Session 11 of User zuul.
Jan 27 08:09:04 np0005597378 python3.9[53670]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:09:05 np0005597378 python3.9[53824]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:09:06 np0005597378 python3.9[53980]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:09:07 np0005597378 python3.9[54064]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:09:09 np0005597378 python3.9[54218]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:09:10 np0005597378 python3.9[54414]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:09:11 np0005597378 python3.9[54566]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:09:11 np0005597378 podman[54567]: 2026-01-27 13:09:11.471462058 +0000 UTC m=+0.063695543 system refresh
Jan 27 08:09:12 np0005597378 python3.9[54729]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:09:12 np0005597378 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:09:13 np0005597378 python3.9[54852]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769519351.6983478-74-161059999770231/.source.json follow=False _original_basename=podman_network_config.j2 checksum=b74e9ec0b6f62b329f19fa3c89ee416dbd3dda93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:09:13 np0005597378 python3.9[55004]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:09:14 np0005597378 python3.9[55127]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769519353.3451264-89-5288614293367/.source.conf follow=False _original_basename=registries.conf.j2 checksum=97513ee69a4b3dc3c4fd06acbbcaa9a991e77aee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:09:15 np0005597378 python3.9[55279]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:09:15 np0005597378 python3.9[55431]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:09:16 np0005597378 python3.9[55583]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:09:17 np0005597378 python3.9[55735]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:09:17 np0005597378 python3.9[55887]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:09:20 np0005597378 python3.9[56040]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:09:20 np0005597378 python3.9[56194]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:09:21 np0005597378 python3.9[56346]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:09:22 np0005597378 python3.9[56498]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:09:23 np0005597378 python3.9[56651]: ansible-service_facts Invoked
Jan 27 08:09:23 np0005597378 network[56668]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:09:23 np0005597378 network[56669]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:09:23 np0005597378 network[56670]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:09:28 np0005597378 python3.9[57122]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:09:31 np0005597378 python3.9[57275]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 27 08:09:32 np0005597378 python3.9[57427]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:09:33 np0005597378 python3.9[57552]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769519371.8195002-233-38306231950855/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:09:33 np0005597378 python3.9[57706]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:09:34 np0005597378 python3.9[57831]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769519373.3282275-248-109681180634446/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:09:35 np0005597378 python3.9[57985]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:09:36 np0005597378 python3.9[58139]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:09:37 np0005597378 python3.9[58223]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:09:38 np0005597378 python3.9[58377]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:09:39 np0005597378 python3.9[58461]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:09:39 np0005597378 chronyd[796]: chronyd exiting
Jan 27 08:09:39 np0005597378 systemd[1]: Stopping NTP client/server...
Jan 27 08:09:39 np0005597378 systemd[1]: chronyd.service: Deactivated successfully.
Jan 27 08:09:39 np0005597378 systemd[1]: Stopped NTP client/server.
Jan 27 08:09:39 np0005597378 systemd[1]: Starting NTP client/server...
Jan 27 08:09:39 np0005597378 chronyd[58470]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 27 08:09:39 np0005597378 chronyd[58470]: Frequency -26.447 +/- 0.286 ppm read from /var/lib/chrony/drift
Jan 27 08:09:39 np0005597378 chronyd[58470]: Loaded seccomp filter (level 2)
Jan 27 08:09:39 np0005597378 systemd[1]: Started NTP client/server.
Jan 27 08:09:40 np0005597378 systemd[1]: session-11.scope: Deactivated successfully.
Jan 27 08:09:40 np0005597378 systemd[1]: session-11.scope: Consumed 25.439s CPU time.
Jan 27 08:09:40 np0005597378 systemd-logind[786]: Session 11 logged out. Waiting for processes to exit.
Jan 27 08:09:40 np0005597378 systemd-logind[786]: Removed session 11.
Jan 27 08:09:45 np0005597378 systemd-logind[786]: New session 12 of user zuul.
Jan 27 08:09:45 np0005597378 systemd[1]: Started Session 12 of User zuul.
Jan 27 08:09:46 np0005597378 python3.9[58651]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:09:47 np0005597378 python3.9[58803]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:09:48 np0005597378 python3.9[58926]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769519386.929969-29-65507615461794/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:09:48 np0005597378 systemd[1]: session-12.scope: Deactivated successfully.
Jan 27 08:09:48 np0005597378 systemd[1]: session-12.scope: Consumed 1.940s CPU time.
Jan 27 08:09:48 np0005597378 systemd-logind[786]: Session 12 logged out. Waiting for processes to exit.
Jan 27 08:09:48 np0005597378 systemd-logind[786]: Removed session 12.
Jan 27 08:09:55 np0005597378 systemd-logind[786]: New session 13 of user zuul.
Jan 27 08:09:55 np0005597378 systemd[1]: Started Session 13 of User zuul.
Jan 27 08:09:56 np0005597378 python3.9[59104]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:09:57 np0005597378 python3.9[59260]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:09:58 np0005597378 python3.9[59435]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:09:59 np0005597378 python3.9[59560]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769519398.223055-36-144570056429551/.source.json _original_basename=.6ho9xmlc follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:00 np0005597378 python3.9[59712]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:01 np0005597378 python3.9[59835]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769519400.2382324-59-131404058913210/.source _original_basename=.gwp3d75b follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:01 np0005597378 python3.9[59987]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:10:02 np0005597378 python3.9[60139]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:03 np0005597378 python3.9[60262]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769519402.2964623-83-229492267174234/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:10:04 np0005597378 python3.9[60414]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:04 np0005597378 python3.9[60537]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769519403.57089-83-141079727164989/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:10:05 np0005597378 python3.9[60689]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:06 np0005597378 python3.9[60841]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:06 np0005597378 python3.9[60964]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769519405.5627005-120-174895766033472/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:07 np0005597378 python3.9[61116]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:08 np0005597378 python3.9[61239]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769519407.264514-135-174283920071399/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:09 np0005597378 python3.9[61391]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:10:09 np0005597378 systemd[1]: Reloading.
Jan 27 08:10:09 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:10:09 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:10:09 np0005597378 systemd[1]: Reloading.
Jan 27 08:10:09 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:10:09 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:10:09 np0005597378 systemd[1]: Starting EDPM Container Shutdown...
Jan 27 08:10:09 np0005597378 systemd[1]: Finished EDPM Container Shutdown.
Jan 27 08:10:10 np0005597378 python3.9[61619]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:10 np0005597378 python3.9[61742]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769519410.048174-158-89716925157632/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:11 np0005597378 python3.9[61894]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:12 np0005597378 python3.9[62017]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769519411.1309605-173-258031749633766/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:12 np0005597378 python3.9[62169]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:10:12 np0005597378 systemd[1]: Reloading.
Jan 27 08:10:12 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:10:12 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:10:13 np0005597378 systemd[1]: Reloading.
Jan 27 08:10:13 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:10:13 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:10:13 np0005597378 systemd[1]: Starting Create netns directory...
Jan 27 08:10:13 np0005597378 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 08:10:13 np0005597378 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 08:10:13 np0005597378 systemd[1]: Finished Create netns directory.
Jan 27 08:10:14 np0005597378 python3.9[62395]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:10:14 np0005597378 network[62412]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:10:14 np0005597378 network[62413]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:10:14 np0005597378 network[62414]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:10:18 np0005597378 python3.9[62676]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:10:18 np0005597378 systemd[1]: Reloading.
Jan 27 08:10:18 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:10:18 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:10:19 np0005597378 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 27 08:10:19 np0005597378 iptables.init[62716]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 27 08:10:19 np0005597378 iptables.init[62716]: iptables: Flushing firewall rules: [  OK  ]
Jan 27 08:10:19 np0005597378 systemd[1]: iptables.service: Deactivated successfully.
Jan 27 08:10:19 np0005597378 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 27 08:10:20 np0005597378 python3.9[62912]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:10:21 np0005597378 python3.9[63066]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:10:21 np0005597378 systemd[1]: Reloading.
Jan 27 08:10:21 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:10:21 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:10:21 np0005597378 systemd[1]: Starting Netfilter Tables...
Jan 27 08:10:21 np0005597378 systemd[1]: Finished Netfilter Tables.
Jan 27 08:10:22 np0005597378 python3.9[63258]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:10:23 np0005597378 python3.9[63411]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:23 np0005597378 python3.9[63536]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769519422.569321-242-220430233616897/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:24 np0005597378 python3.9[63689]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:10:24 np0005597378 systemd[1]: Reloading OpenSSH server daemon...
Jan 27 08:10:24 np0005597378 systemd[1]: Reloaded OpenSSH server daemon.
Jan 27 08:10:25 np0005597378 python3.9[63845]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:26 np0005597378 python3.9[63997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:26 np0005597378 python3.9[64120]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769519425.6462593-273-187868017319730/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:27 np0005597378 python3.9[64272]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 27 08:10:27 np0005597378 systemd[1]: Starting Time & Date Service...
Jan 27 08:10:27 np0005597378 systemd[1]: Started Time & Date Service.
Jan 27 08:10:28 np0005597378 python3.9[64428]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:28 np0005597378 python3.9[64580]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:29 np0005597378 python3.9[64703]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769519428.472516-308-28377968470828/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:30 np0005597378 python3.9[64855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:30 np0005597378 python3.9[64978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769519429.6646347-323-20680908682582/.source.yaml _original_basename=.l89mfk3k follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:31 np0005597378 python3.9[65130]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:32 np0005597378 python3.9[65253]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769519430.961893-338-122254580781749/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:32 np0005597378 python3.9[65405]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:10:33 np0005597378 python3.9[65558]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:10:34 np0005597378 python3[65711]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 08:10:35 np0005597378 python3.9[65863]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:35 np0005597378 python3.9[65986]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769519434.8038666-377-7826920629253/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:36 np0005597378 python3.9[66138]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:37 np0005597378 python3.9[66261]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769519436.122125-392-44700684062122/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:37 np0005597378 python3.9[66413]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:38 np0005597378 python3.9[66537]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769519437.1979308-407-278526181779169/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:41 np0005597378 python3.9[66689]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:45 np0005597378 python3.9[66812]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769519440.6307044-422-138567000461673/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:45 np0005597378 python3.9[66964]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:10:46 np0005597378 python3.9[67087]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769519445.4583068-437-197096889142655/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:47 np0005597378 python3.9[67239]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:47 np0005597378 python3.9[67391]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:10:48 np0005597378 python3.9[67550]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:49 np0005597378 python3.9[67703]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:49 np0005597378 python3.9[67855]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:10:50 np0005597378 python3.9[68007]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 08:10:51 np0005597378 python3.9[68160]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 08:10:52 np0005597378 systemd[1]: session-13.scope: Deactivated successfully.
Jan 27 08:10:52 np0005597378 systemd[1]: session-13.scope: Consumed 34.454s CPU time.
Jan 27 08:10:52 np0005597378 systemd-logind[786]: Session 13 logged out. Waiting for processes to exit.
Jan 27 08:10:52 np0005597378 systemd-logind[786]: Removed session 13.
Jan 27 08:10:57 np0005597378 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 08:10:57 np0005597378 systemd-logind[786]: New session 14 of user zuul.
Jan 27 08:10:57 np0005597378 systemd[1]: Started Session 14 of User zuul.
Jan 27 08:10:58 np0005597378 python3.9[68343]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 27 08:10:59 np0005597378 python3.9[68495]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:11:00 np0005597378 python3.9[68647]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:11:01 np0005597378 python3.9[68799]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaCAaHmoxOQtBxrLj6sOYqfJ38HHCN1WHb9NSlOr664dGULo40YBM5+Bp/CGRllxwKlP/vqV191qSZR6oHUF3BNPAo9QAbYuQP+EfzS0WhCgrtPNloflsWb7IiB7KVzJLYF+Cifd/SFwEL2gpEK3UbK9dOqM/m0HeaznPyx4ILCiDJUsLGyNKkhvz/OP3twKij+g1uhhTO3ANNFVDPpzwQ0ISXAucJSxMvSIzjvmK5DN2a4mV8Y9mphdak/4VrXaqVk3jnlkB3yC25iAQag/DV3UL2KcLDWS+BB6StMLdx5JvPEM9faZ11bFWtK1uz8OPP2iojE84H9Y1SRZ/l8kmX/jCfo5plNhiXVDOsuVvMsm9ZRsvnmrRI6K38jEZpwl03Rs8LUXl/7OnX8hjwsgOzO3aDQJwuDvumc4m6uUUmYdHEAxvf7LttfF9D5hRM38yYDdPPaw79orh7juX8cJsHuxEAJQObeiKfPSVU69K1Wh5UOXRwdjExAXyxTwAtmyE=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB1PFwCS+7a6RUIsPBWIZprjQyynEnTLsqUopEPTlz7Q#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLY/Q33X8GSkQiJHflskwY+OHS0Yva0wW27rCFHMBIpjFySd5HNHu05T8+TVxYbMxsoGAm1JnSEWJNufae5pANE=#012 create=True mode=0644 path=/tmp/ansible.vukhe9ur state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:11:02 np0005597378 python3.9[68953]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.vukhe9ur' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:11:02 np0005597378 python3.9[69107]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.vukhe9ur state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:11:03 np0005597378 systemd[1]: session-14.scope: Deactivated successfully.
Jan 27 08:11:03 np0005597378 systemd[1]: session-14.scope: Consumed 3.038s CPU time.
Jan 27 08:11:03 np0005597378 systemd-logind[786]: Session 14 logged out. Waiting for processes to exit.
Jan 27 08:11:03 np0005597378 systemd-logind[786]: Removed session 14.
Jan 27 08:11:08 np0005597378 systemd-logind[786]: New session 15 of user zuul.
Jan 27 08:11:08 np0005597378 systemd[1]: Started Session 15 of User zuul.
Jan 27 08:11:09 np0005597378 python3.9[69285]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:11:11 np0005597378 python3.9[69441]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 27 08:11:11 np0005597378 python3.9[69595]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:11:12 np0005597378 python3.9[69748]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:11:13 np0005597378 python3.9[69901]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:11:14 np0005597378 python3.9[70055]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:11:14 np0005597378 python3.9[70210]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:11:15 np0005597378 systemd[1]: session-15.scope: Deactivated successfully.
Jan 27 08:11:15 np0005597378 systemd[1]: session-15.scope: Consumed 4.123s CPU time.
Jan 27 08:11:15 np0005597378 systemd-logind[786]: Session 15 logged out. Waiting for processes to exit.
Jan 27 08:11:15 np0005597378 systemd-logind[786]: Removed session 15.
Jan 27 08:11:21 np0005597378 systemd-logind[786]: New session 16 of user zuul.
Jan 27 08:11:21 np0005597378 systemd[1]: Started Session 16 of User zuul.
Jan 27 08:11:22 np0005597378 python3.9[70388]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:11:23 np0005597378 python3.9[70544]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:11:24 np0005597378 python3.9[70628]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 08:11:26 np0005597378 python3.9[70779]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:11:27 np0005597378 python3.9[70930]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 08:11:28 np0005597378 python3.9[71080]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:11:28 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:11:29 np0005597378 python3.9[71231]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:11:29 np0005597378 systemd[1]: session-16.scope: Deactivated successfully.
Jan 27 08:11:29 np0005597378 systemd[1]: session-16.scope: Consumed 5.671s CPU time.
Jan 27 08:11:29 np0005597378 systemd-logind[786]: Session 16 logged out. Waiting for processes to exit.
Jan 27 08:11:29 np0005597378 systemd-logind[786]: Removed session 16.
Jan 27 08:11:37 np0005597378 systemd-logind[786]: New session 17 of user zuul.
Jan 27 08:11:37 np0005597378 systemd[1]: Started Session 17 of User zuul.
Jan 27 08:11:43 np0005597378 python3[71997]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:11:45 np0005597378 python3[72092]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 27 08:11:46 np0005597378 python3[72119]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:11:47 np0005597378 python3[72145]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:11:47 np0005597378 kernel: loop: module loaded
Jan 27 08:11:47 np0005597378 kernel: loop3: detected capacity change from 0 to 41943040
Jan 27 08:11:47 np0005597378 python3[72180]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:11:47 np0005597378 lvm[72183]: PV /dev/loop3 not used.
Jan 27 08:11:47 np0005597378 lvm[72185]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:11:47 np0005597378 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 27 08:11:47 np0005597378 lvm[72191]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 27 08:11:47 np0005597378 lvm[72195]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:11:47 np0005597378 lvm[72195]: VG ceph_vg0 finished
Jan 27 08:11:47 np0005597378 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 27 08:11:48 np0005597378 python3[72275]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:11:48 np0005597378 python3[72348]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769519507.9101493-36248-80721523877690/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:11:49 np0005597378 python3[72398]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:11:49 np0005597378 systemd[1]: Reloading.
Jan 27 08:11:49 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:11:49 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:11:49 np0005597378 chronyd[58470]: Selected source 207.34.48.31 (pool.ntp.org)
Jan 27 08:11:49 np0005597378 systemd[1]: Starting Ceph OSD losetup...
Jan 27 08:11:49 np0005597378 bash[72438]: /dev/loop3: [64513]:4329741 (/var/lib/ceph-osd-0.img)
Jan 27 08:11:49 np0005597378 systemd[1]: Finished Ceph OSD losetup.
Jan 27 08:11:49 np0005597378 lvm[72439]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:11:49 np0005597378 lvm[72439]: VG ceph_vg0 finished
Jan 27 08:11:50 np0005597378 python3[72465]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 27 08:11:51 np0005597378 python3[72492]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:11:52 np0005597378 python3[72518]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:11:52 np0005597378 kernel: loop4: detected capacity change from 0 to 41943040
Jan 27 08:11:59 np0005597378 python3[72550]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:11:59 np0005597378 lvm[72555]: PV /dev/loop4 not used.
Jan 27 08:11:59 np0005597378 lvm[72565]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:11:59 np0005597378 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 27 08:11:59 np0005597378 lvm[72567]:  1 logical volume(s) in volume group "ceph_vg1" now active
Jan 27 08:11:59 np0005597378 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 27 08:12:00 np0005597378 python3[72645]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:12:00 np0005597378 python3[72718]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769519519.7541983-36275-99901717477150/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:12:01 np0005597378 python3[72768]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:12:01 np0005597378 systemd[1]: Reloading.
Jan 27 08:12:01 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:12:01 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:12:01 np0005597378 systemd[1]: Starting Ceph OSD losetup...
Jan 27 08:12:01 np0005597378 bash[72807]: /dev/loop4: [64513]:4329742 (/var/lib/ceph-osd-1.img)
Jan 27 08:12:01 np0005597378 systemd[1]: Finished Ceph OSD losetup.
Jan 27 08:12:01 np0005597378 lvm[72808]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:12:01 np0005597378 lvm[72808]: VG ceph_vg1 finished
Jan 27 08:12:01 np0005597378 python3[72834]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 27 08:12:03 np0005597378 python3[72861]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:12:04 np0005597378 python3[72887]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:12:04 np0005597378 kernel: loop5: detected capacity change from 0 to 41943040
Jan 27 08:12:04 np0005597378 python3[72919]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:12:04 np0005597378 lvm[72922]: PV /dev/loop5 not used.
Jan 27 08:12:04 np0005597378 lvm[72924]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:12:04 np0005597378 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 27 08:12:04 np0005597378 lvm[72931]:  1 logical volume(s) in volume group "ceph_vg2" now active
Jan 27 08:12:04 np0005597378 lvm[72935]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:12:04 np0005597378 lvm[72935]: VG ceph_vg2 finished
Jan 27 08:12:04 np0005597378 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 27 08:12:05 np0005597378 python3[73013]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:12:05 np0005597378 python3[73086]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769519524.8663356-36302-61323573647318/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:12:05 np0005597378 python3[73136]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:12:06 np0005597378 systemd[1]: Reloading.
Jan 27 08:12:06 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:12:06 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:12:06 np0005597378 systemd[1]: Starting Ceph OSD losetup...
Jan 27 08:12:06 np0005597378 bash[73176]: /dev/loop5: [64513]:4329743 (/var/lib/ceph-osd-2.img)
Jan 27 08:12:06 np0005597378 systemd[1]: Finished Ceph OSD losetup.
Jan 27 08:12:06 np0005597378 lvm[73177]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:12:06 np0005597378 lvm[73177]: VG ceph_vg2 finished
Jan 27 08:12:08 np0005597378 python3[73201]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:12:10 np0005597378 python3[73294]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 27 08:12:12 np0005597378 python3[73353]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 27 08:12:15 np0005597378 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:12:15 np0005597378 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:12:16 np0005597378 python3[73471]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:12:16 np0005597378 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:12:16 np0005597378 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:12:16 np0005597378 systemd[1]: run-r8792a3fb2cb641c98816d44098d110c0.service: Deactivated successfully.
Jan 27 08:12:16 np0005597378 python3[73500]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:12:17 np0005597378 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:12:17 np0005597378 python3[73540]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:12:18 np0005597378 python3[73566]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:12:18 np0005597378 python3[73644]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:12:19 np0005597378 python3[73717]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769519538.645709-36450-41966542011651/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:12:20 np0005597378 python3[73819]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:12:20 np0005597378 python3[73892]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769519539.880495-36468-24974208109864/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:12:21 np0005597378 python3[73942]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:12:21 np0005597378 python3[73970]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:12:21 np0005597378 python3[73998]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:12:22 np0005597378 python3[74024]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:12:22 np0005597378 python3[74050]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:12:22 np0005597378 systemd[1]: Created slice User Slice of UID 42477.
Jan 27 08:12:22 np0005597378 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 27 08:12:22 np0005597378 systemd-logind[786]: New session 18 of user ceph-admin.
Jan 27 08:12:22 np0005597378 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 27 08:12:22 np0005597378 systemd[1]: Starting User Manager for UID 42477...
Jan 27 08:12:23 np0005597378 systemd[74058]: Queued start job for default target Main User Target.
Jan 27 08:12:23 np0005597378 systemd[74058]: Created slice User Application Slice.
Jan 27 08:12:23 np0005597378 systemd[74058]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 27 08:12:23 np0005597378 systemd[74058]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 08:12:23 np0005597378 systemd[74058]: Reached target Paths.
Jan 27 08:12:23 np0005597378 systemd[74058]: Reached target Timers.
Jan 27 08:12:23 np0005597378 systemd[74058]: Starting D-Bus User Message Bus Socket...
Jan 27 08:12:23 np0005597378 systemd[74058]: Starting Create User's Volatile Files and Directories...
Jan 27 08:12:23 np0005597378 systemd[74058]: Listening on D-Bus User Message Bus Socket.
Jan 27 08:12:23 np0005597378 systemd[74058]: Reached target Sockets.
Jan 27 08:12:23 np0005597378 systemd[74058]: Finished Create User's Volatile Files and Directories.
Jan 27 08:12:23 np0005597378 systemd[74058]: Reached target Basic System.
Jan 27 08:12:23 np0005597378 systemd[74058]: Reached target Main User Target.
Jan 27 08:12:23 np0005597378 systemd[74058]: Startup finished in 170ms.
Jan 27 08:12:23 np0005597378 systemd[1]: Started User Manager for UID 42477.
Jan 27 08:12:23 np0005597378 systemd[1]: Started Session 18 of User ceph-admin.
Jan 27 08:12:23 np0005597378 systemd[1]: session-18.scope: Deactivated successfully.
Jan 27 08:12:23 np0005597378 systemd-logind[786]: Session 18 logged out. Waiting for processes to exit.
Jan 27 08:12:23 np0005597378 systemd-logind[786]: Removed session 18.
Jan 27 08:12:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:12:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:12:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-compat4045510339-merged.mount: Deactivated successfully.
Jan 27 08:12:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-compat4045510339-lower\x2dmapped.mount: Deactivated successfully.
Jan 27 08:12:33 np0005597378 systemd[1]: Stopping User Manager for UID 42477...
Jan 27 08:12:33 np0005597378 systemd[74058]: Activating special unit Exit the Session...
Jan 27 08:12:33 np0005597378 systemd[74058]: Stopped target Main User Target.
Jan 27 08:12:33 np0005597378 systemd[74058]: Stopped target Basic System.
Jan 27 08:12:33 np0005597378 systemd[74058]: Stopped target Paths.
Jan 27 08:12:33 np0005597378 systemd[74058]: Stopped target Sockets.
Jan 27 08:12:33 np0005597378 systemd[74058]: Stopped target Timers.
Jan 27 08:12:33 np0005597378 systemd[74058]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 27 08:12:33 np0005597378 systemd[74058]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 27 08:12:33 np0005597378 systemd[74058]: Closed D-Bus User Message Bus Socket.
Jan 27 08:12:33 np0005597378 systemd[74058]: Stopped Create User's Volatile Files and Directories.
Jan 27 08:12:33 np0005597378 systemd[74058]: Removed slice User Application Slice.
Jan 27 08:12:33 np0005597378 systemd[74058]: Reached target Shutdown.
Jan 27 08:12:33 np0005597378 systemd[74058]: Finished Exit the Session.
Jan 27 08:12:33 np0005597378 systemd[74058]: Reached target Exit the Session.
Jan 27 08:12:33 np0005597378 systemd[1]: user@42477.service: Deactivated successfully.
Jan 27 08:12:33 np0005597378 systemd[1]: Stopped User Manager for UID 42477.
Jan 27 08:12:33 np0005597378 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 27 08:12:33 np0005597378 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 27 08:12:33 np0005597378 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 27 08:12:33 np0005597378 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 27 08:12:33 np0005597378 systemd[1]: Removed slice User Slice of UID 42477.
Jan 27 08:12:46 np0005597378 podman[74152]: 2026-01-27 13:12:46.679839858 +0000 UTC m=+23.134805883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:46 np0005597378 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:12:46 np0005597378 podman[74214]: 2026-01-27 13:12:46.721910001 +0000 UTC m=+0.021147586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:46 np0005597378 podman[74214]: 2026-01-27 13:12:46.947295921 +0000 UTC m=+0.246533476 container create 8347780f189bbc5619086b078282bab17b3839e6ec8c1b9d737b453dd5ef2a90 (image=quay.io/ceph/ceph:v20, name=interesting_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 08:12:47 np0005597378 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 27 08:12:47 np0005597378 systemd[1]: Started libpod-conmon-8347780f189bbc5619086b078282bab17b3839e6ec8c1b9d737b453dd5ef2a90.scope.
Jan 27 08:12:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:12:47 np0005597378 podman[74214]: 2026-01-27 13:12:47.139097832 +0000 UTC m=+0.438335437 container init 8347780f189bbc5619086b078282bab17b3839e6ec8c1b9d737b453dd5ef2a90 (image=quay.io/ceph/ceph:v20, name=interesting_chebyshev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:12:47 np0005597378 podman[74214]: 2026-01-27 13:12:47.147880641 +0000 UTC m=+0.447118236 container start 8347780f189bbc5619086b078282bab17b3839e6ec8c1b9d737b453dd5ef2a90 (image=quay.io/ceph/ceph:v20, name=interesting_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:12:47 np0005597378 podman[74214]: 2026-01-27 13:12:47.217954264 +0000 UTC m=+0.517191819 container attach 8347780f189bbc5619086b078282bab17b3839e6ec8c1b9d737b453dd5ef2a90 (image=quay.io/ceph/ceph:v20, name=interesting_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:12:47 np0005597378 interesting_chebyshev[74230]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 27 08:12:47 np0005597378 systemd[1]: libpod-8347780f189bbc5619086b078282bab17b3839e6ec8c1b9d737b453dd5ef2a90.scope: Deactivated successfully.
Jan 27 08:12:47 np0005597378 podman[74214]: 2026-01-27 13:12:47.254063265 +0000 UTC m=+0.553300830 container died 8347780f189bbc5619086b078282bab17b3839e6ec8c1b9d737b453dd5ef2a90 (image=quay.io/ceph/ceph:v20, name=interesting_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 08:12:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c50a28f7571327dae977eb1aad7efcaa7583b0f50207d591ef7b4583815b48e4-merged.mount: Deactivated successfully.
Jan 27 08:12:47 np0005597378 podman[74214]: 2026-01-27 13:12:47.578540567 +0000 UTC m=+0.877778122 container remove 8347780f189bbc5619086b078282bab17b3839e6ec8c1b9d737b453dd5ef2a90 (image=quay.io/ceph/ceph:v20, name=interesting_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:12:47 np0005597378 systemd[1]: libpod-conmon-8347780f189bbc5619086b078282bab17b3839e6ec8c1b9d737b453dd5ef2a90.scope: Deactivated successfully.
Jan 27 08:12:47 np0005597378 podman[74249]: 2026-01-27 13:12:47.634135328 +0000 UTC m=+0.030259104 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:47 np0005597378 podman[74249]: 2026-01-27 13:12:47.795625763 +0000 UTC m=+0.191749479 container create c572fbaa6dcb6c14609f0008f68322dea5eb7ff1b0b1a523b35e8def3642d218 (image=quay.io/ceph/ceph:v20, name=angry_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:12:47 np0005597378 systemd[1]: Started libpod-conmon-c572fbaa6dcb6c14609f0008f68322dea5eb7ff1b0b1a523b35e8def3642d218.scope.
Jan 27 08:12:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:12:48 np0005597378 podman[74249]: 2026-01-27 13:12:48.011686622 +0000 UTC m=+0.407810388 container init c572fbaa6dcb6c14609f0008f68322dea5eb7ff1b0b1a523b35e8def3642d218 (image=quay.io/ceph/ceph:v20, name=angry_williams, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:12:48 np0005597378 podman[74249]: 2026-01-27 13:12:48.024380857 +0000 UTC m=+0.420504623 container start c572fbaa6dcb6c14609f0008f68322dea5eb7ff1b0b1a523b35e8def3642d218 (image=quay.io/ceph/ceph:v20, name=angry_williams, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:12:48 np0005597378 angry_williams[74266]: 167 167
Jan 27 08:12:48 np0005597378 systemd[1]: libpod-c572fbaa6dcb6c14609f0008f68322dea5eb7ff1b0b1a523b35e8def3642d218.scope: Deactivated successfully.
Jan 27 08:12:48 np0005597378 podman[74249]: 2026-01-27 13:12:48.161713287 +0000 UTC m=+0.557837103 container attach c572fbaa6dcb6c14609f0008f68322dea5eb7ff1b0b1a523b35e8def3642d218 (image=quay.io/ceph/ceph:v20, name=angry_williams, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 08:12:48 np0005597378 podman[74249]: 2026-01-27 13:12:48.162997262 +0000 UTC m=+0.559121038 container died c572fbaa6dcb6c14609f0008f68322dea5eb7ff1b0b1a523b35e8def3642d218 (image=quay.io/ceph/ceph:v20, name=angry_williams, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:12:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f41e7249d4d4666dff1ce2e4b6f631a1854f6c803bc2410318659ba4df213751-merged.mount: Deactivated successfully.
Jan 27 08:12:48 np0005597378 podman[74249]: 2026-01-27 13:12:48.771037497 +0000 UTC m=+1.167161233 container remove c572fbaa6dcb6c14609f0008f68322dea5eb7ff1b0b1a523b35e8def3642d218 (image=quay.io/ceph/ceph:v20, name=angry_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:12:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:12:48 np0005597378 systemd[1]: libpod-conmon-c572fbaa6dcb6c14609f0008f68322dea5eb7ff1b0b1a523b35e8def3642d218.scope: Deactivated successfully.
Jan 27 08:12:48 np0005597378 podman[74283]: 2026-01-27 13:12:48.816049839 +0000 UTC m=+0.024607139 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:48 np0005597378 podman[74283]: 2026-01-27 13:12:48.950175211 +0000 UTC m=+0.158732491 container create a1d98ce22ef0f7701a8f1692939b772bbe3da780f0e8f7f7c48f737153bae667 (image=quay.io/ceph/ceph:v20, name=crazy_swanson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 08:12:49 np0005597378 systemd[1]: Started libpod-conmon-a1d98ce22ef0f7701a8f1692939b772bbe3da780f0e8f7f7c48f737153bae667.scope.
Jan 27 08:12:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:12:49 np0005597378 podman[74283]: 2026-01-27 13:12:49.165496801 +0000 UTC m=+0.374054101 container init a1d98ce22ef0f7701a8f1692939b772bbe3da780f0e8f7f7c48f737153bae667 (image=quay.io/ceph/ceph:v20, name=crazy_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:12:49 np0005597378 podman[74283]: 2026-01-27 13:12:49.170915078 +0000 UTC m=+0.379472358 container start a1d98ce22ef0f7701a8f1692939b772bbe3da780f0e8f7f7c48f737153bae667 (image=quay.io/ceph/ceph:v20, name=crazy_swanson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:12:49 np0005597378 crazy_swanson[74299]: AQDRuXhpCqhGCxAAODSy9mqYZggJzB2aMkdUIQ==
Jan 27 08:12:49 np0005597378 systemd[1]: libpod-a1d98ce22ef0f7701a8f1692939b772bbe3da780f0e8f7f7c48f737153bae667.scope: Deactivated successfully.
Jan 27 08:12:49 np0005597378 podman[74283]: 2026-01-27 13:12:49.238553005 +0000 UTC m=+0.447110315 container attach a1d98ce22ef0f7701a8f1692939b772bbe3da780f0e8f7f7c48f737153bae667 (image=quay.io/ceph/ceph:v20, name=crazy_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 08:12:49 np0005597378 podman[74283]: 2026-01-27 13:12:49.238936725 +0000 UTC m=+0.447494005 container died a1d98ce22ef0f7701a8f1692939b772bbe3da780f0e8f7f7c48f737153bae667 (image=quay.io/ceph/ceph:v20, name=crazy_swanson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:12:49 np0005597378 podman[74283]: 2026-01-27 13:12:49.45264489 +0000 UTC m=+0.661202170 container remove a1d98ce22ef0f7701a8f1692939b772bbe3da780f0e8f7f7c48f737153bae667 (image=quay.io/ceph/ceph:v20, name=crazy_swanson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:12:49 np0005597378 systemd[1]: libpod-conmon-a1d98ce22ef0f7701a8f1692939b772bbe3da780f0e8f7f7c48f737153bae667.scope: Deactivated successfully.
Jan 27 08:12:49 np0005597378 podman[74318]: 2026-01-27 13:12:49.55941863 +0000 UTC m=+0.090332194 container create 75f3ea77e7383f2d64af79f8b357ae2acf6751f621a6d4c633a57949db3f2da8 (image=quay.io/ceph/ceph:v20, name=priceless_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Jan 27 08:12:49 np0005597378 podman[74318]: 2026-01-27 13:12:49.490711184 +0000 UTC m=+0.021624798 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:49 np0005597378 systemd[1]: Started libpod-conmon-75f3ea77e7383f2d64af79f8b357ae2acf6751f621a6d4c633a57949db3f2da8.scope.
Jan 27 08:12:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:12:49 np0005597378 podman[74318]: 2026-01-27 13:12:49.71550487 +0000 UTC m=+0.246418454 container init 75f3ea77e7383f2d64af79f8b357ae2acf6751f621a6d4c633a57949db3f2da8 (image=quay.io/ceph/ceph:v20, name=priceless_spence, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:12:49 np0005597378 podman[74318]: 2026-01-27 13:12:49.721096922 +0000 UTC m=+0.252010486 container start 75f3ea77e7383f2d64af79f8b357ae2acf6751f621a6d4c633a57949db3f2da8 (image=quay.io/ceph/ceph:v20, name=priceless_spence, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:12:49 np0005597378 priceless_spence[74334]: AQDRuXhpyNMQLBAAJi0DBkxLiw/eoJ6PPXM92Q==
Jan 27 08:12:49 np0005597378 systemd[1]: libpod-75f3ea77e7383f2d64af79f8b357ae2acf6751f621a6d4c633a57949db3f2da8.scope: Deactivated successfully.
Jan 27 08:12:49 np0005597378 podman[74318]: 2026-01-27 13:12:49.754268782 +0000 UTC m=+0.285182376 container attach 75f3ea77e7383f2d64af79f8b357ae2acf6751f621a6d4c633a57949db3f2da8 (image=quay.io/ceph/ceph:v20, name=priceless_spence, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 08:12:49 np0005597378 podman[74318]: 2026-01-27 13:12:49.754792977 +0000 UTC m=+0.285706551 container died 75f3ea77e7383f2d64af79f8b357ae2acf6751f621a6d4c633a57949db3f2da8 (image=quay.io/ceph/ceph:v20, name=priceless_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:12:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b36fbdce8cad85a71912fa3254586837305c42306a64c041f0f25792179a4572-merged.mount: Deactivated successfully.
Jan 27 08:12:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ae5daa7f98ee9328fcc77c92078c2c028dee4f274ed36f1fdf246d4890ae1bd6-merged.mount: Deactivated successfully.
Jan 27 08:12:50 np0005597378 podman[74318]: 2026-01-27 13:12:50.088184452 +0000 UTC m=+0.619098026 container remove 75f3ea77e7383f2d64af79f8b357ae2acf6751f621a6d4c633a57949db3f2da8 (image=quay.io/ceph/ceph:v20, name=priceless_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 27 08:12:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:12:50 np0005597378 systemd[1]: libpod-conmon-75f3ea77e7383f2d64af79f8b357ae2acf6751f621a6d4c633a57949db3f2da8.scope: Deactivated successfully.
Jan 27 08:12:50 np0005597378 podman[74355]: 2026-01-27 13:12:50.182578605 +0000 UTC m=+0.070657989 container create 963a997b4cc46fbbd3b517bf27f0c7e202daed974c3d93f9b828851e123e6cd1 (image=quay.io/ceph/ceph:v20, name=naughty_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 08:12:50 np0005597378 podman[74355]: 2026-01-27 13:12:50.132913107 +0000 UTC m=+0.020992491 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:50 np0005597378 systemd[1]: Started libpod-conmon-963a997b4cc46fbbd3b517bf27f0c7e202daed974c3d93f9b828851e123e6cd1.scope.
Jan 27 08:12:50 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:12:50 np0005597378 podman[74355]: 2026-01-27 13:12:50.353665262 +0000 UTC m=+0.241744646 container init 963a997b4cc46fbbd3b517bf27f0c7e202daed974c3d93f9b828851e123e6cd1 (image=quay.io/ceph/ceph:v20, name=naughty_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 08:12:50 np0005597378 podman[74355]: 2026-01-27 13:12:50.358273287 +0000 UTC m=+0.246352661 container start 963a997b4cc46fbbd3b517bf27f0c7e202daed974c3d93f9b828851e123e6cd1 (image=quay.io/ceph/ceph:v20, name=naughty_yalow, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:12:50 np0005597378 naughty_yalow[74371]: AQDSuXhpdVppFhAAitZjm/MsQ9ob4RSWYbh+gw==
Jan 27 08:12:50 np0005597378 systemd[1]: libpod-963a997b4cc46fbbd3b517bf27f0c7e202daed974c3d93f9b828851e123e6cd1.scope: Deactivated successfully.
Jan 27 08:12:50 np0005597378 podman[74355]: 2026-01-27 13:12:50.383934204 +0000 UTC m=+0.272013628 container attach 963a997b4cc46fbbd3b517bf27f0c7e202daed974c3d93f9b828851e123e6cd1 (image=quay.io/ceph/ceph:v20, name=naughty_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 08:12:50 np0005597378 podman[74355]: 2026-01-27 13:12:50.38487159 +0000 UTC m=+0.272950994 container died 963a997b4cc46fbbd3b517bf27f0c7e202daed974c3d93f9b828851e123e6cd1 (image=quay.io/ceph/ceph:v20, name=naughty_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:12:50 np0005597378 podman[74355]: 2026-01-27 13:12:50.682208866 +0000 UTC m=+0.570288250 container remove 963a997b4cc46fbbd3b517bf27f0c7e202daed974c3d93f9b828851e123e6cd1 (image=quay.io/ceph/ceph:v20, name=naughty_yalow, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:12:50 np0005597378 systemd[1]: libpod-conmon-963a997b4cc46fbbd3b517bf27f0c7e202daed974c3d93f9b828851e123e6cd1.scope: Deactivated successfully.
Jan 27 08:12:50 np0005597378 podman[74392]: 2026-01-27 13:12:50.749255797 +0000 UTC m=+0.048094797 container create de2386171e2eb09f52ca71e1454f802131a4f8cd7ed3d36346c57c05f427f41b (image=quay.io/ceph/ceph:v20, name=hopeful_ellis, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:12:50 np0005597378 systemd[1]: Started libpod-conmon-de2386171e2eb09f52ca71e1454f802131a4f8cd7ed3d36346c57c05f427f41b.scope.
Jan 27 08:12:50 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:12:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457cbac419c40be2502154ecd67af8933ebb8171306dd3f1f204cf193d9bbeb5/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:50 np0005597378 podman[74392]: 2026-01-27 13:12:50.815922247 +0000 UTC m=+0.114761267 container init de2386171e2eb09f52ca71e1454f802131a4f8cd7ed3d36346c57c05f427f41b (image=quay.io/ceph/ceph:v20, name=hopeful_ellis, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:12:50 np0005597378 podman[74392]: 2026-01-27 13:12:50.723760495 +0000 UTC m=+0.022599515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:50 np0005597378 podman[74392]: 2026-01-27 13:12:50.821966492 +0000 UTC m=+0.120805492 container start de2386171e2eb09f52ca71e1454f802131a4f8cd7ed3d36346c57c05f427f41b (image=quay.io/ceph/ceph:v20, name=hopeful_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:12:50 np0005597378 podman[74392]: 2026-01-27 13:12:50.832765305 +0000 UTC m=+0.131604305 container attach de2386171e2eb09f52ca71e1454f802131a4f8cd7ed3d36346c57c05f427f41b (image=quay.io/ceph/ceph:v20, name=hopeful_ellis, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:12:50 np0005597378 hopeful_ellis[74409]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 27 08:12:50 np0005597378 hopeful_ellis[74409]: setting min_mon_release = tentacle
Jan 27 08:12:50 np0005597378 hopeful_ellis[74409]: /usr/bin/monmaptool: set fsid to 4d8fd694-f443-5fb1-b612-70034b2f3c6e
Jan 27 08:12:50 np0005597378 hopeful_ellis[74409]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 27 08:12:50 np0005597378 podman[74392]: 2026-01-27 13:12:50.859171022 +0000 UTC m=+0.158010032 container died de2386171e2eb09f52ca71e1454f802131a4f8cd7ed3d36346c57c05f427f41b (image=quay.io/ceph/ceph:v20, name=hopeful_ellis, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:12:50 np0005597378 systemd[1]: libpod-de2386171e2eb09f52ca71e1454f802131a4f8cd7ed3d36346c57c05f427f41b.scope: Deactivated successfully.
Jan 27 08:12:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay-457cbac419c40be2502154ecd67af8933ebb8171306dd3f1f204cf193d9bbeb5-merged.mount: Deactivated successfully.
Jan 27 08:12:50 np0005597378 podman[74392]: 2026-01-27 13:12:50.962119379 +0000 UTC m=+0.260958369 container remove de2386171e2eb09f52ca71e1454f802131a4f8cd7ed3d36346c57c05f427f41b (image=quay.io/ceph/ceph:v20, name=hopeful_ellis, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:12:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:12:50 np0005597378 systemd[1]: libpod-conmon-de2386171e2eb09f52ca71e1454f802131a4f8cd7ed3d36346c57c05f427f41b.scope: Deactivated successfully.
Jan 27 08:12:51 np0005597378 podman[74426]: 2026-01-27 13:12:51.027534746 +0000 UTC m=+0.044128931 container create 2506c9967cc1224566622bb0a4efad01eaacccd71bc525c62d1a3560cf98bb88 (image=quay.io/ceph/ceph:v20, name=bold_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 08:12:51 np0005597378 systemd[1]: Started libpod-conmon-2506c9967cc1224566622bb0a4efad01eaacccd71bc525c62d1a3560cf98bb88.scope.
Jan 27 08:12:51 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:12:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3bef1a18d0e82b451c8889737e2c3ded697515eb83e48dfe809d7677ba7a0b/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3bef1a18d0e82b451c8889737e2c3ded697515eb83e48dfe809d7677ba7a0b/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:51 np0005597378 podman[74426]: 2026-01-27 13:12:51.004255653 +0000 UTC m=+0.020849868 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3bef1a18d0e82b451c8889737e2c3ded697515eb83e48dfe809d7677ba7a0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3bef1a18d0e82b451c8889737e2c3ded697515eb83e48dfe809d7677ba7a0b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:51 np0005597378 podman[74426]: 2026-01-27 13:12:51.125406994 +0000 UTC m=+0.142001179 container init 2506c9967cc1224566622bb0a4efad01eaacccd71bc525c62d1a3560cf98bb88 (image=quay.io/ceph/ceph:v20, name=bold_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:12:51 np0005597378 podman[74426]: 2026-01-27 13:12:51.131507829 +0000 UTC m=+0.148102034 container start 2506c9967cc1224566622bb0a4efad01eaacccd71bc525c62d1a3560cf98bb88 (image=quay.io/ceph/ceph:v20, name=bold_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 08:12:51 np0005597378 podman[74426]: 2026-01-27 13:12:51.150573666 +0000 UTC m=+0.167167851 container attach 2506c9967cc1224566622bb0a4efad01eaacccd71bc525c62d1a3560cf98bb88 (image=quay.io/ceph/ceph:v20, name=bold_kilby, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 08:12:51 np0005597378 systemd[1]: libpod-2506c9967cc1224566622bb0a4efad01eaacccd71bc525c62d1a3560cf98bb88.scope: Deactivated successfully.
Jan 27 08:12:51 np0005597378 podman[74426]: 2026-01-27 13:12:51.354850415 +0000 UTC m=+0.371444630 container died 2506c9967cc1224566622bb0a4efad01eaacccd71bc525c62d1a3560cf98bb88 (image=quay.io/ceph/ceph:v20, name=bold_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 08:12:51 np0005597378 podman[74426]: 2026-01-27 13:12:51.420878079 +0000 UTC m=+0.437472264 container remove 2506c9967cc1224566622bb0a4efad01eaacccd71bc525c62d1a3560cf98bb88 (image=quay.io/ceph/ceph:v20, name=bold_kilby, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:12:51 np0005597378 systemd[1]: libpod-conmon-2506c9967cc1224566622bb0a4efad01eaacccd71bc525c62d1a3560cf98bb88.scope: Deactivated successfully.
Jan 27 08:12:51 np0005597378 systemd[1]: Reloading.
Jan 27 08:12:51 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:12:51 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:12:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:12:51 np0005597378 systemd[1]: Reloading.
Jan 27 08:12:51 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:12:51 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:12:52 np0005597378 systemd[1]: Reached target All Ceph clusters and services.
Jan 27 08:12:52 np0005597378 systemd[1]: Reloading.
Jan 27 08:12:52 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:12:52 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:12:52 np0005597378 systemd[1]: Reached target Ceph cluster 4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:12:52 np0005597378 systemd[1]: Reloading.
Jan 27 08:12:52 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:12:52 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:12:52 np0005597378 systemd[1]: Reloading.
Jan 27 08:12:52 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:12:52 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:12:53 np0005597378 systemd[1]: Created slice Slice /system/ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:12:53 np0005597378 systemd[1]: Reached target System Time Set.
Jan 27 08:12:53 np0005597378 systemd[1]: Reached target System Time Synchronized.
Jan 27 08:12:53 np0005597378 systemd[1]: Starting Ceph mon.compute-0 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e...
Jan 27 08:12:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:12:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:12:53 np0005597378 podman[74718]: 2026-01-27 13:12:53.254632205 +0000 UTC m=+0.047464680 container create 40fd13e319f9f5470358a0df0cb1ce170a0737e993d88d9c96e9075a5ededb20 (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 08:12:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4189fac332d4ffa483f5d74f95f8e4d4e8d8bb5d32374a9eb2fbadbebfc72e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4189fac332d4ffa483f5d74f95f8e4d4e8d8bb5d32374a9eb2fbadbebfc72e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4189fac332d4ffa483f5d74f95f8e4d4e8d8bb5d32374a9eb2fbadbebfc72e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4189fac332d4ffa483f5d74f95f8e4d4e8d8bb5d32374a9eb2fbadbebfc72e7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:53 np0005597378 podman[74718]: 2026-01-27 13:12:53.228473045 +0000 UTC m=+0.021305570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:53 np0005597378 podman[74718]: 2026-01-27 13:12:53.325454959 +0000 UTC m=+0.118287454 container init 40fd13e319f9f5470358a0df0cb1ce170a0737e993d88d9c96e9075a5ededb20 (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 08:12:53 np0005597378 podman[74718]: 2026-01-27 13:12:53.331360889 +0000 UTC m=+0.124193364 container start 40fd13e319f9f5470358a0df0cb1ce170a0737e993d88d9c96e9075a5ededb20 (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:12:53 np0005597378 bash[74718]: 40fd13e319f9f5470358a0df0cb1ce170a0737e993d88d9c96e9075a5ededb20
Jan 27 08:12:53 np0005597378 systemd[1]: Started Ceph mon.compute-0 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: pidfile_write: ignore empty --pid-file
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: load: jerasure load: lrc 
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: RocksDB version: 7.9.2
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Git sha 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: DB SUMMARY
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: DB Session ID:  OTEY9MDDPP598PEIRKOM
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: CURRENT file:  CURRENT
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: IDENTITY file:  IDENTITY
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                         Options.error_if_exists: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                       Options.create_if_missing: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                         Options.paranoid_checks: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                                     Options.env: 0x55e2b9360440
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                                Options.info_log: 0x55e2ba7f13e0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                Options.max_file_opening_threads: 16
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                              Options.statistics: (nil)
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                               Options.use_fsync: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                       Options.max_log_file_size: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                         Options.allow_fallocate: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                        Options.use_direct_reads: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:          Options.create_missing_column_families: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                              Options.db_log_dir: 
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                                 Options.wal_dir: 
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                   Options.advise_random_on_open: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                    Options.write_buffer_manager: 0x55e2ba770140
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                            Options.rate_limiter: (nil)
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                  Options.unordered_write: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                               Options.row_cache: None
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                              Options.wal_filter: None
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.allow_ingest_behind: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.two_write_queues: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.manual_wal_flush: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.wal_compression: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.atomic_flush: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                 Options.log_readahead_size: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.allow_data_in_errors: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.db_host_id: __hostname__
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.max_background_jobs: 2
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.max_background_compactions: -1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.max_subcompactions: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.max_total_wal_size: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                          Options.max_open_files: -1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                          Options.bytes_per_sync: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:       Options.compaction_readahead_size: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                  Options.max_background_flushes: -1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Compression algorithms supported:
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: 	kZSTD supported: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: 	kXpressCompression supported: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: 	kBZip2Compression supported: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: 	kLZ4Compression supported: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: 	kZlibCompression supported: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: 	kSnappyCompression supported: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:           Options.merge_operator: 
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:        Options.compaction_filter: None
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2ba77c600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e2ba7618d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:        Options.write_buffer_size: 33554432
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:  Options.max_write_buffer_number: 2
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:          Options.compression: NoCompression
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.num_levels: 7
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5fa55fde-2af1-4194-a177-64db194a2554
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519573381763, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519573404195, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "OTEY9MDDPP598PEIRKOM", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519573404369, "job": 1, "event": "recovery_finished"}
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 27 08:12:53 np0005597378 podman[74738]: 2026-01-27 13:12:53.447440482 +0000 UTC m=+0.077062064 container create 038cc2049637201060ede3182f5d990ec11f8b445378f0c70ae99cd12abaa824 (image=quay.io/ceph/ceph:v20, name=stoic_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 08:12:53 np0005597378 podman[74738]: 2026-01-27 13:12:53.3939763 +0000 UTC m=+0.023597912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e2ba78ee00
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: DB pointer 0x55e2ba8da000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e2ba7618d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@-1(???) e0 preinit fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 27 08:12:53 np0005597378 systemd[1]: Started libpod-conmon-038cc2049637201060ede3182f5d990ec11f8b445378f0c70ae99cd12abaa824.scope.
Jan 27 08:12:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:12:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d535207912ecaedf60ce9a80c9ec7fc3fe0f45393478a3a8ddb45b7d2831686f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d535207912ecaedf60ce9a80c9ec7fc3fe0f45393478a3a8ddb45b7d2831686f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d535207912ecaedf60ce9a80c9ec7fc3fe0f45393478a3a8ddb45b7d2831686f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 27 08:12:53 np0005597378 podman[74738]: 2026-01-27 13:12:53.677780788 +0000 UTC m=+0.307402400 container init 038cc2049637201060ede3182f5d990ec11f8b445378f0c70ae99cd12abaa824 (image=quay.io/ceph/ceph:v20, name=stoic_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 27 08:12:53 np0005597378 podman[74738]: 2026-01-27 13:12:53.685899619 +0000 UTC m=+0.315521201 container start 038cc2049637201060ede3182f5d990ec11f8b445378f0c70ae99cd12abaa824 (image=quay.io/ceph/ceph:v20, name=stoic_archimedes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: log_channel(cluster) log [DBG] : fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: log_channel(cluster) log [DBG] : last_changed 2026-01-27T13:12:50.854365+0000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: log_channel(cluster) log [DBG] : created 2026-01-27T13:12:50.854365+0000
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-27T13:12:51.174775Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:12:53 np0005597378 podman[74738]: 2026-01-27 13:12:53.736344599 +0000 UTC m=+0.365966181 container attach 038cc2049637201060ede3182f5d990ec11f8b445378f0c70ae99cd12abaa824 (image=quay.io/ceph/ceph:v20, name=stoic_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).mds e1 new map
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2026-01-27T13:12:53:733380+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: log_channel(cluster) log [DBG] : fsmap 
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mkfs 4d8fd694-f443-5fb1-b612-70034b2f3c6e
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 27 08:12:53 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader) e1 handle_auth_request failed to assign global_id
Jan 27 08:12:54 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 27 08:12:54 np0005597378 ceph-mon[74737]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1059386518' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]:  cluster:
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]:    id:     4d8fd694-f443-5fb1-b612-70034b2f3c6e
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]:    health: HEALTH_OK
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]: 
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]:  services:
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]:    mon: 1 daemons, quorum compute-0 (age 0.408808s) [leader: compute-0]
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]:    mgr: no daemons active
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]:    osd: 0 osds: 0 up, 0 in
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]: 
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]:  data:
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]:    pools:   0 pools, 0 pgs
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]:    objects: 0 objects, 0 B
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]:    usage:   0 B used, 0 B / 0 B avail
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]:    pgs:     
Jan 27 08:12:54 np0005597378 stoic_archimedes[74792]: 
Jan 27 08:12:54 np0005597378 systemd[1]: libpod-038cc2049637201060ede3182f5d990ec11f8b445378f0c70ae99cd12abaa824.scope: Deactivated successfully.
Jan 27 08:12:54 np0005597378 podman[74738]: 2026-01-27 13:12:54.111501078 +0000 UTC m=+0.741122660 container died 038cc2049637201060ede3182f5d990ec11f8b445378f0c70ae99cd12abaa824 (image=quay.io/ceph/ceph:v20, name=stoic_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 08:12:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d535207912ecaedf60ce9a80c9ec7fc3fe0f45393478a3a8ddb45b7d2831686f-merged.mount: Deactivated successfully.
Jan 27 08:12:54 np0005597378 podman[74738]: 2026-01-27 13:12:54.294571711 +0000 UTC m=+0.924193293 container remove 038cc2049637201060ede3182f5d990ec11f8b445378f0c70ae99cd12abaa824 (image=quay.io/ceph/ceph:v20, name=stoic_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:12:54 np0005597378 systemd[1]: libpod-conmon-038cc2049637201060ede3182f5d990ec11f8b445378f0c70ae99cd12abaa824.scope: Deactivated successfully.
Jan 27 08:12:54 np0005597378 podman[74832]: 2026-01-27 13:12:54.359681359 +0000 UTC m=+0.046384031 container create d7a3d31a3d30ea86d80789aac8ab5785596f012fe10fc86534f407db0e1816de (image=quay.io/ceph/ceph:v20, name=charming_taussig, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:12:54 np0005597378 systemd[1]: Started libpod-conmon-d7a3d31a3d30ea86d80789aac8ab5785596f012fe10fc86534f407db0e1816de.scope.
Jan 27 08:12:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:12:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e079c53ce70ef7cb5206cfe2b776bf48ba852a3eddf0dd833861ecfa999a3cbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e079c53ce70ef7cb5206cfe2b776bf48ba852a3eddf0dd833861ecfa999a3cbb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e079c53ce70ef7cb5206cfe2b776bf48ba852a3eddf0dd833861ecfa999a3cbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e079c53ce70ef7cb5206cfe2b776bf48ba852a3eddf0dd833861ecfa999a3cbb/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:54 np0005597378 podman[74832]: 2026-01-27 13:12:54.332527532 +0000 UTC m=+0.019230224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:54 np0005597378 podman[74832]: 2026-01-27 13:12:54.442691694 +0000 UTC m=+0.129394386 container init d7a3d31a3d30ea86d80789aac8ab5785596f012fe10fc86534f407db0e1816de (image=quay.io/ceph/ceph:v20, name=charming_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:12:54 np0005597378 podman[74832]: 2026-01-27 13:12:54.448604645 +0000 UTC m=+0.135307317 container start d7a3d31a3d30ea86d80789aac8ab5785596f012fe10fc86534f407db0e1816de (image=quay.io/ceph/ceph:v20, name=charming_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:12:54 np0005597378 podman[74832]: 2026-01-27 13:12:54.456677924 +0000 UTC m=+0.143380596 container attach d7a3d31a3d30ea86d80789aac8ab5785596f012fe10fc86534f407db0e1816de (image=quay.io/ceph/ceph:v20, name=charming_taussig, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:12:54 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 27 08:12:54 np0005597378 ceph-mon[74737]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3317143423' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 27 08:12:54 np0005597378 ceph-mon[74737]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3317143423' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 27 08:12:54 np0005597378 charming_taussig[74848]: 
Jan 27 08:12:54 np0005597378 charming_taussig[74848]: [global]
Jan 27 08:12:54 np0005597378 charming_taussig[74848]: #011fsid = 4d8fd694-f443-5fb1-b612-70034b2f3c6e
Jan 27 08:12:54 np0005597378 charming_taussig[74848]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 27 08:12:54 np0005597378 charming_taussig[74848]: #011osd_crush_chooseleaf_type = 0
Jan 27 08:12:54 np0005597378 systemd[1]: libpod-d7a3d31a3d30ea86d80789aac8ab5785596f012fe10fc86534f407db0e1816de.scope: Deactivated successfully.
Jan 27 08:12:54 np0005597378 podman[74832]: 2026-01-27 13:12:54.727097649 +0000 UTC m=+0.413800321 container died d7a3d31a3d30ea86d80789aac8ab5785596f012fe10fc86534f407db0e1816de (image=quay.io/ceph/ceph:v20, name=charming_taussig, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:12:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e079c53ce70ef7cb5206cfe2b776bf48ba852a3eddf0dd833861ecfa999a3cbb-merged.mount: Deactivated successfully.
Jan 27 08:12:54 np0005597378 ceph-mon[74737]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 27 08:12:54 np0005597378 ceph-mon[74737]: from='client.? 192.168.122.100:0/3317143423' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 27 08:12:54 np0005597378 ceph-mon[74737]: from='client.? 192.168.122.100:0/3317143423' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 27 08:12:54 np0005597378 podman[74832]: 2026-01-27 13:12:54.890459031 +0000 UTC m=+0.577161703 container remove d7a3d31a3d30ea86d80789aac8ab5785596f012fe10fc86534f407db0e1816de (image=quay.io/ceph/ceph:v20, name=charming_taussig, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:12:54 np0005597378 systemd[1]: libpod-conmon-d7a3d31a3d30ea86d80789aac8ab5785596f012fe10fc86534f407db0e1816de.scope: Deactivated successfully.
Jan 27 08:12:55 np0005597378 podman[74887]: 2026-01-27 13:12:54.932483616 +0000 UTC m=+0.022966156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:55 np0005597378 podman[74887]: 2026-01-27 13:12:55.066980556 +0000 UTC m=+0.157463066 container create 7d9d1f21c5ee295a0d64dd4556f29acd0c9102d9f3a6963c7b74cf2386f7f977 (image=quay.io/ceph/ceph:v20, name=vigorous_sutherland, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:12:55 np0005597378 systemd[1]: Started libpod-conmon-7d9d1f21c5ee295a0d64dd4556f29acd0c9102d9f3a6963c7b74cf2386f7f977.scope.
Jan 27 08:12:55 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:12:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b1950e6f232ea1c89d4105c7502353fde73ae8048f8e2169c5d9fdcc903920/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b1950e6f232ea1c89d4105c7502353fde73ae8048f8e2169c5d9fdcc903920/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b1950e6f232ea1c89d4105c7502353fde73ae8048f8e2169c5d9fdcc903920/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b1950e6f232ea1c89d4105c7502353fde73ae8048f8e2169c5d9fdcc903920/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:55 np0005597378 podman[74887]: 2026-01-27 13:12:55.212902671 +0000 UTC m=+0.303385181 container init 7d9d1f21c5ee295a0d64dd4556f29acd0c9102d9f3a6963c7b74cf2386f7f977 (image=quay.io/ceph/ceph:v20, name=vigorous_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:12:55 np0005597378 podman[74887]: 2026-01-27 13:12:55.218979251 +0000 UTC m=+0.309461761 container start 7d9d1f21c5ee295a0d64dd4556f29acd0c9102d9f3a6963c7b74cf2386f7f977 (image=quay.io/ceph/ceph:v20, name=vigorous_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:12:55 np0005597378 podman[74887]: 2026-01-27 13:12:55.275282925 +0000 UTC m=+0.365765435 container attach 7d9d1f21c5ee295a0d64dd4556f29acd0c9102d9f3a6963c7b74cf2386f7f977 (image=quay.io/ceph/ceph:v20, name=vigorous_sutherland, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 08:12:55 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:12:55 np0005597378 ceph-mon[74737]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919143802' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:12:55 np0005597378 systemd[1]: libpod-7d9d1f21c5ee295a0d64dd4556f29acd0c9102d9f3a6963c7b74cf2386f7f977.scope: Deactivated successfully.
Jan 27 08:12:55 np0005597378 podman[74887]: 2026-01-27 13:12:55.424374699 +0000 UTC m=+0.514857209 container died 7d9d1f21c5ee295a0d64dd4556f29acd0c9102d9f3a6963c7b74cf2386f7f977 (image=quay.io/ceph/ceph:v20, name=vigorous_sutherland, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 08:12:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e9b1950e6f232ea1c89d4105c7502353fde73ae8048f8e2169c5d9fdcc903920-merged.mount: Deactivated successfully.
Jan 27 08:12:56 np0005597378 podman[74887]: 2026-01-27 13:12:56.598047187 +0000 UTC m=+1.688529697 container remove 7d9d1f21c5ee295a0d64dd4556f29acd0c9102d9f3a6963c7b74cf2386f7f977 (image=quay.io/ceph/ceph:v20, name=vigorous_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:12:56 np0005597378 systemd[1]: Stopping Ceph mon.compute-0 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e...
Jan 27 08:12:56 np0005597378 systemd[1]: libpod-conmon-7d9d1f21c5ee295a0d64dd4556f29acd0c9102d9f3a6963c7b74cf2386f7f977.scope: Deactivated successfully.
Jan 27 08:12:56 np0005597378 ceph-mon[74737]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 27 08:12:56 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 27 08:12:56 np0005597378 ceph-mon[74737]: mon.compute-0@0(leader) e1 shutdown
Jan 27 08:12:56 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0[74733]: 2026-01-27T13:12:56.801+0000 7f04b9fb4640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 27 08:12:56 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0[74733]: 2026-01-27T13:12:56.801+0000 7f04b9fb4640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 27 08:12:56 np0005597378 ceph-mon[74737]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 27 08:12:56 np0005597378 ceph-mon[74737]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 27 08:12:57 np0005597378 podman[74970]: 2026-01-27 13:12:57.006635226 +0000 UTC m=+0.241203420 container died 40fd13e319f9f5470358a0df0cb1ce170a0737e993d88d9c96e9075a5ededb20 (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 27 08:12:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c4189fac332d4ffa483f5d74f95f8e4d4e8d8bb5d32374a9eb2fbadbebfc72e7-merged.mount: Deactivated successfully.
Jan 27 08:12:57 np0005597378 podman[74970]: 2026-01-27 13:12:57.070951272 +0000 UTC m=+0.305519346 container remove 40fd13e319f9f5470358a0df0cb1ce170a0737e993d88d9c96e9075a5ededb20 (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 27 08:12:57 np0005597378 bash[74970]: ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0
Jan 27 08:12:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 08:12:57 np0005597378 systemd[1]: ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e@mon.compute-0.service: Deactivated successfully.
Jan 27 08:12:57 np0005597378 systemd[1]: Stopped Ceph mon.compute-0 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:12:57 np0005597378 systemd[1]: Starting Ceph mon.compute-0 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e...
Jan 27 08:12:57 np0005597378 podman[75071]: 2026-01-27 13:12:57.383968399 +0000 UTC m=+0.039194826 container create da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:12:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d068fb0a15885919ad122aeb86985bff3d84b5a23149c31dd267c3804d943a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d068fb0a15885919ad122aeb86985bff3d84b5a23149c31dd267c3804d943a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d068fb0a15885919ad122aeb86985bff3d84b5a23149c31dd267c3804d943a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d068fb0a15885919ad122aeb86985bff3d84b5a23149c31dd267c3804d943a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:57 np0005597378 podman[75071]: 2026-01-27 13:12:57.433194461 +0000 UTC m=+0.088420888 container init da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:12:57 np0005597378 podman[75071]: 2026-01-27 13:12:57.439858464 +0000 UTC m=+0.095084891 container start da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:12:57 np0005597378 bash[75071]: da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c
Jan 27 08:12:57 np0005597378 podman[75071]: 2026-01-27 13:12:57.366068594 +0000 UTC m=+0.021295051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:57 np0005597378 systemd[1]: Started Ceph mon.compute-0 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: pidfile_write: ignore empty --pid-file
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: load: jerasure load: lrc 
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: RocksDB version: 7.9.2
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Git sha 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: DB SUMMARY
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: DB Session ID:  RO2N8YJLBKD38EI8MLHU
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: CURRENT file:  CURRENT
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: IDENTITY file:  IDENTITY
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 61645 ; 
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                         Options.error_if_exists: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                       Options.create_if_missing: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                         Options.paranoid_checks: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                                     Options.env: 0x55ec4cd3a440
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                                Options.info_log: 0x55ec4e497e80
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                Options.max_file_opening_threads: 16
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                              Options.statistics: (nil)
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                               Options.use_fsync: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                       Options.max_log_file_size: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                         Options.allow_fallocate: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                        Options.use_direct_reads: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:          Options.create_missing_column_families: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                              Options.db_log_dir: 
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                                 Options.wal_dir: 
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                   Options.advise_random_on_open: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                    Options.write_buffer_manager: 0x55ec4e4e2140
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                            Options.rate_limiter: (nil)
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                  Options.unordered_write: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                               Options.row_cache: None
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                              Options.wal_filter: None
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.allow_ingest_behind: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.two_write_queues: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.manual_wal_flush: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.wal_compression: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.atomic_flush: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                 Options.log_readahead_size: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.allow_data_in_errors: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.db_host_id: __hostname__
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.max_background_jobs: 2
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.max_background_compactions: -1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.max_subcompactions: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.max_total_wal_size: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                          Options.max_open_files: -1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                          Options.bytes_per_sync: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:       Options.compaction_readahead_size: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                  Options.max_background_flushes: -1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Compression algorithms supported:
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: 	kZSTD supported: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: 	kXpressCompression supported: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: 	kBZip2Compression supported: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: 	kLZ4Compression supported: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: 	kZlibCompression supported: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: 	kSnappyCompression supported: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:           Options.merge_operator: 
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:        Options.compaction_filter: None
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ec4e4eea00)
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   cache_index_and_filter_blocks: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   pin_top_level_index_and_filter: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   index_type: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   data_block_index_type: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   index_shortening: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   checksum: 4
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   no_block_cache: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   block_cache: 0x55ec4e4d38d0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   block_cache_name: BinnedLRUCache
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   block_cache_options:
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:     capacity : 536870912
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:     num_shard_bits : 4
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:     strict_capacity_limit : 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:     high_pri_pool_ratio: 0.000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   block_cache_compressed: (nil)
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   persistent_cache: (nil)
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   block_size: 4096
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   block_size_deviation: 10
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   block_restart_interval: 16
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   index_block_restart_interval: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   metadata_block_size: 4096
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   partition_filters: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   use_delta_encoding: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   filter_policy: bloomfilter
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   whole_key_filtering: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   verify_compression: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   read_amp_bytes_per_bit: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   format_version: 5
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   enable_index_compression: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   block_align: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   max_auto_readahead_size: 262144
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   prepopulate_block_cache: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   initial_auto_readahead_size: 8192
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   num_file_reads_for_auto_readahead: 2
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:        Options.write_buffer_size: 33554432
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:  Options.max_write_buffer_number: 2
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:          Options.compression: NoCompression
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.num_levels: 7
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5fa55fde-2af1-4194-a177-64db194a2554
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519577478622, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519577488954, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 61222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 150, "table_properties": {"data_size": 59701, "index_size": 163, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3459, "raw_average_key_size": 30, "raw_value_size": 57000, "raw_average_value_size": 504, "num_data_blocks": 9, "num_entries": 113, "num_filter_entries": 113, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519577, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519577489146, "job": 1, "event": "recovery_finished"}
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ec4e500e00
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: DB pointer 0x55ec4e64a000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   61.69 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0#012 Sum      2/0   61.69 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.52 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.52 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 27 08:12:57 np0005597378 podman[75091]: 2026-01-27 13:12:57.49394635 +0000 UTC m=+0.022651499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0@-1(???) e1 preinit fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0@-1(???).mds e1 new map
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2026-01-27T13:12:53:733380+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 27 08:12:57 np0005597378 podman[75091]: 2026-01-27 13:12:57.593297571 +0000 UTC m=+0.122002730 container create 1da1ff14c8027f6571ef28601fbde1d7449ea8505835ffc64c3dc0734823e6c0 (image=quay.io/ceph/ceph:v20, name=pedantic_aryabhata, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : last_changed 2026-01-27T13:12:50.854365+0000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : created 2026-01-27T13:12:50.854365+0000
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : fsmap 
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 27 08:12:57 np0005597378 systemd[1]: Started libpod-conmon-1da1ff14c8027f6571ef28601fbde1d7449ea8505835ffc64c3dc0734823e6c0.scope.
Jan 27 08:12:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:12:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5154b171d4b18bfd0617389f7b8e4cd76b1eab68527027f58f52c270d24cd87c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5154b171d4b18bfd0617389f7b8e4cd76b1eab68527027f58f52c270d24cd87c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5154b171d4b18bfd0617389f7b8e4cd76b1eab68527027f58f52c270d24cd87c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:57 np0005597378 podman[75091]: 2026-01-27 13:12:57.691552319 +0000 UTC m=+0.220257438 container init 1da1ff14c8027f6571ef28601fbde1d7449ea8505835ffc64c3dc0734823e6c0 (image=quay.io/ceph/ceph:v20, name=pedantic_aryabhata, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 27 08:12:57 np0005597378 podman[75091]: 2026-01-27 13:12:57.699082542 +0000 UTC m=+0.227787661 container start 1da1ff14c8027f6571ef28601fbde1d7449ea8505835ffc64c3dc0734823e6c0 (image=quay.io/ceph/ceph:v20, name=pedantic_aryabhata, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 08:12:57 np0005597378 podman[75091]: 2026-01-27 13:12:57.719866559 +0000 UTC m=+0.248571678 container attach 1da1ff14c8027f6571ef28601fbde1d7449ea8505835ffc64c3dc0734823e6c0 (image=quay.io/ceph/ceph:v20, name=pedantic_aryabhata, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:12:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 27 08:12:57 np0005597378 systemd[1]: libpod-1da1ff14c8027f6571ef28601fbde1d7449ea8505835ffc64c3dc0734823e6c0.scope: Deactivated successfully.
Jan 27 08:12:57 np0005597378 podman[75091]: 2026-01-27 13:12:57.912837409 +0000 UTC m=+0.441542588 container died 1da1ff14c8027f6571ef28601fbde1d7449ea8505835ffc64c3dc0734823e6c0 (image=quay.io/ceph/ceph:v20, name=pedantic_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 27 08:12:57 np0005597378 podman[75091]: 2026-01-27 13:12:57.994880198 +0000 UTC m=+0.523585317 container remove 1da1ff14c8027f6571ef28601fbde1d7449ea8505835ffc64c3dc0734823e6c0 (image=quay.io/ceph/ceph:v20, name=pedantic_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 08:12:58 np0005597378 systemd[1]: libpod-conmon-1da1ff14c8027f6571ef28601fbde1d7449ea8505835ffc64c3dc0734823e6c0.scope: Deactivated successfully.
Jan 27 08:12:58 np0005597378 podman[75183]: 2026-01-27 13:12:58.070490468 +0000 UTC m=+0.055926317 container create 7ab5ff1934c57e1f8c20bdfbb05f125b13c0af30d45cd924374d1e215745cb6a (image=quay.io/ceph/ceph:v20, name=unruffled_aryabhata, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:12:58 np0005597378 systemd[1]: Started libpod-conmon-7ab5ff1934c57e1f8c20bdfbb05f125b13c0af30d45cd924374d1e215745cb6a.scope.
Jan 27 08:12:58 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:12:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8360be0e41622218a9bce6790c20dcc8bf26b450186767ccc0ac1588db0c40af/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8360be0e41622218a9bce6790c20dcc8bf26b450186767ccc0ac1588db0c40af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8360be0e41622218a9bce6790c20dcc8bf26b450186767ccc0ac1588db0c40af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:58 np0005597378 podman[75183]: 2026-01-27 13:12:58.036804851 +0000 UTC m=+0.022240720 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:58 np0005597378 podman[75183]: 2026-01-27 13:12:58.164978354 +0000 UTC m=+0.150414223 container init 7ab5ff1934c57e1f8c20bdfbb05f125b13c0af30d45cd924374d1e215745cb6a (image=quay.io/ceph/ceph:v20, name=unruffled_aryabhata, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 08:12:58 np0005597378 podman[75183]: 2026-01-27 13:12:58.169980652 +0000 UTC m=+0.155416501 container start 7ab5ff1934c57e1f8c20bdfbb05f125b13c0af30d45cd924374d1e215745cb6a (image=quay.io/ceph/ceph:v20, name=unruffled_aryabhata, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 08:12:58 np0005597378 podman[75183]: 2026-01-27 13:12:58.180656361 +0000 UTC m=+0.166092240 container attach 7ab5ff1934c57e1f8c20bdfbb05f125b13c0af30d45cd924374d1e215745cb6a (image=quay.io/ceph/ceph:v20, name=unruffled_aryabhata, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:12:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 27 08:12:58 np0005597378 systemd[1]: libpod-7ab5ff1934c57e1f8c20bdfbb05f125b13c0af30d45cd924374d1e215745cb6a.scope: Deactivated successfully.
Jan 27 08:12:58 np0005597378 podman[75183]: 2026-01-27 13:12:58.357383032 +0000 UTC m=+0.342818881 container died 7ab5ff1934c57e1f8c20bdfbb05f125b13c0af30d45cd924374d1e215745cb6a (image=quay.io/ceph/ceph:v20, name=unruffled_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:12:58 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8360be0e41622218a9bce6790c20dcc8bf26b450186767ccc0ac1588db0c40af-merged.mount: Deactivated successfully.
Jan 27 08:12:58 np0005597378 podman[75183]: 2026-01-27 13:12:58.476624252 +0000 UTC m=+0.462060101 container remove 7ab5ff1934c57e1f8c20bdfbb05f125b13c0af30d45cd924374d1e215745cb6a (image=quay.io/ceph/ceph:v20, name=unruffled_aryabhata, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 08:12:58 np0005597378 systemd[1]: libpod-conmon-7ab5ff1934c57e1f8c20bdfbb05f125b13c0af30d45cd924374d1e215745cb6a.scope: Deactivated successfully.
Jan 27 08:12:58 np0005597378 systemd[1]: Reloading.
Jan 27 08:12:58 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:12:58 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:12:58 np0005597378 systemd[1]: Reloading.
Jan 27 08:12:58 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:12:58 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:12:59 np0005597378 systemd[1]: Starting Ceph mgr.compute-0.uujfpe for 4d8fd694-f443-5fb1-b612-70034b2f3c6e...
Jan 27 08:12:59 np0005597378 podman[75365]: 2026-01-27 13:12:59.319459039 +0000 UTC m=+0.113695951 container create 01727bd4ff0bb5eec906a6c0bc1173499614eff58701da1b2806e2c941f7d807 (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:12:59 np0005597378 podman[75365]: 2026-01-27 13:12:59.227427736 +0000 UTC m=+0.021664668 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8664c50d2887717ed51c268d70da4fa3bcd6d3ded8a6303099cb4860f709ea9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8664c50d2887717ed51c268d70da4fa3bcd6d3ded8a6303099cb4860f709ea9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8664c50d2887717ed51c268d70da4fa3bcd6d3ded8a6303099cb4860f709ea9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8664c50d2887717ed51c268d70da4fa3bcd6d3ded8a6303099cb4860f709ea9/merged/var/lib/ceph/mgr/ceph-compute-0.uujfpe supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:59 np0005597378 podman[75365]: 2026-01-27 13:12:59.46148444 +0000 UTC m=+0.255721392 container init 01727bd4ff0bb5eec906a6c0bc1173499614eff58701da1b2806e2c941f7d807 (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:12:59 np0005597378 podman[75365]: 2026-01-27 13:12:59.465744542 +0000 UTC m=+0.259981464 container start 01727bd4ff0bb5eec906a6c0bc1173499614eff58701da1b2806e2c941f7d807 (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:12:59 np0005597378 ceph-mgr[75385]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:12:59 np0005597378 ceph-mgr[75385]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 27 08:12:59 np0005597378 ceph-mgr[75385]: pidfile_write: ignore empty --pid-file
Jan 27 08:12:59 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'alerts'
Jan 27 08:12:59 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'balancer'
Jan 27 08:12:59 np0005597378 bash[75365]: 01727bd4ff0bb5eec906a6c0bc1173499614eff58701da1b2806e2c941f7d807
Jan 27 08:12:59 np0005597378 systemd[1]: Started Ceph mgr.compute-0.uujfpe for 4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:12:59 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'cephadm'
Jan 27 08:12:59 np0005597378 podman[75406]: 2026-01-27 13:12:59.801711734 +0000 UTC m=+0.063608521 container create f3e9a17c05fc033f72a4457c150cbc29afe9dd8654aa85769ed5c944fdd36583 (image=quay.io/ceph/ceph:v20, name=intelligent_sanderson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:12:59 np0005597378 systemd[1]: Started libpod-conmon-f3e9a17c05fc033f72a4457c150cbc29afe9dd8654aa85769ed5c944fdd36583.scope.
Jan 27 08:12:59 np0005597378 podman[75406]: 2026-01-27 13:12:59.76347535 +0000 UTC m=+0.025372197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:12:59 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:12:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d0358006a1024e6602e9d4c948a7d3d091db2dc2a02d9f3beb8b3dfcdb2012/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d0358006a1024e6602e9d4c948a7d3d091db2dc2a02d9f3beb8b3dfcdb2012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:12:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d0358006a1024e6602e9d4c948a7d3d091db2dc2a02d9f3beb8b3dfcdb2012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:00 np0005597378 podman[75406]: 2026-01-27 13:13:00.049683959 +0000 UTC m=+0.311580766 container init f3e9a17c05fc033f72a4457c150cbc29afe9dd8654aa85769ed5c944fdd36583 (image=quay.io/ceph/ceph:v20, name=intelligent_sanderson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:00 np0005597378 podman[75406]: 2026-01-27 13:13:00.057725733 +0000 UTC m=+0.319622500 container start f3e9a17c05fc033f72a4457c150cbc29afe9dd8654aa85769ed5c944fdd36583 (image=quay.io/ceph/ceph:v20, name=intelligent_sanderson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 08:13:00 np0005597378 podman[75406]: 2026-01-27 13:13:00.160459967 +0000 UTC m=+0.422356754 container attach f3e9a17c05fc033f72a4457c150cbc29afe9dd8654aa85769ed5c944fdd36583 (image=quay.io/ceph/ceph:v20, name=intelligent_sanderson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 08:13:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 27 08:13:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2828912912' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]: 
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]: {
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    "fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    "health": {
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "status": "HEALTH_OK",
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "checks": {},
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "mutes": []
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    },
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    "election_epoch": 5,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    "quorum": [
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        0
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    ],
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    "quorum_names": [
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "compute-0"
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    ],
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    "quorum_age": 2,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    "monmap": {
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "epoch": 1,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "min_mon_release_name": "tentacle",
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "num_mons": 1
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    },
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    "osdmap": {
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "epoch": 1,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "num_osds": 0,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "num_up_osds": 0,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "osd_up_since": 0,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "num_in_osds": 0,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "osd_in_since": 0,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "num_remapped_pgs": 0
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    },
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    "pgmap": {
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "pgs_by_state": [],
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "num_pgs": 0,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "num_pools": 0,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "num_objects": 0,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "data_bytes": 0,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "bytes_used": 0,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "bytes_avail": 0,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "bytes_total": 0
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    },
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    "fsmap": {
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "epoch": 1,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "btime": "2026-01-27T13:12:53:733380+0000",
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "by_rank": [],
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "up:standby": 0
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    },
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    "mgrmap": {
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "available": false,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "num_standbys": 0,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "modules": [
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:            "iostat",
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:            "nfs"
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        ],
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "services": {}
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    },
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    "servicemap": {
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "epoch": 1,
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "modified": "2026-01-27T13:12:53.736512+0000",
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:        "services": {}
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    },
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]:    "progress_events": {}
Jan 27 08:13:00 np0005597378 intelligent_sanderson[75422]: }
Jan 27 08:13:00 np0005597378 systemd[1]: libpod-f3e9a17c05fc033f72a4457c150cbc29afe9dd8654aa85769ed5c944fdd36583.scope: Deactivated successfully.
Jan 27 08:13:00 np0005597378 podman[75406]: 2026-01-27 13:13:00.262814154 +0000 UTC m=+0.524710921 container died f3e9a17c05fc033f72a4457c150cbc29afe9dd8654aa85769ed5c944fdd36583 (image=quay.io/ceph/ceph:v20, name=intelligent_sanderson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:13:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay-12d0358006a1024e6602e9d4c948a7d3d091db2dc2a02d9f3beb8b3dfcdb2012-merged.mount: Deactivated successfully.
Jan 27 08:13:00 np0005597378 podman[75406]: 2026-01-27 13:13:00.338538946 +0000 UTC m=+0.600435713 container remove f3e9a17c05fc033f72a4457c150cbc29afe9dd8654aa85769ed5c944fdd36583 (image=quay.io/ceph/ceph:v20, name=intelligent_sanderson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:00 np0005597378 systemd[1]: libpod-conmon-f3e9a17c05fc033f72a4457c150cbc29afe9dd8654aa85769ed5c944fdd36583.scope: Deactivated successfully.
Jan 27 08:13:00 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'crash'
Jan 27 08:13:00 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'dashboard'
Jan 27 08:13:01 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'devicehealth'
Jan 27 08:13:01 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'diskprediction_local'
Jan 27 08:13:01 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 27 08:13:01 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 27 08:13:01 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]:  from numpy import show_config as show_numpy_config
Jan 27 08:13:01 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'influx'
Jan 27 08:13:01 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'insights'
Jan 27 08:13:01 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'iostat'
Jan 27 08:13:01 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'k8sevents'
Jan 27 08:13:02 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'localpool'
Jan 27 08:13:02 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'mds_autoscaler'
Jan 27 08:13:02 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'mirroring'
Jan 27 08:13:02 np0005597378 podman[75470]: 2026-01-27 13:13:02.417487388 +0000 UTC m=+0.050427158 container create be0706d3acb336bab553ed48b84f908b2664fd53f6859ac5865a067795983e27 (image=quay.io/ceph/ceph:v20, name=clever_goldwasser, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:13:02 np0005597378 systemd[1]: Started libpod-conmon-be0706d3acb336bab553ed48b84f908b2664fd53f6859ac5865a067795983e27.scope.
Jan 27 08:13:02 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ad7b151be8e189ae5dec11f0677cd402007484f33095d76c4299b624d24175/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ad7b151be8e189ae5dec11f0677cd402007484f33095d76c4299b624d24175/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ad7b151be8e189ae5dec11f0677cd402007484f33095d76c4299b624d24175/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:02 np0005597378 podman[75470]: 2026-01-27 13:13:02.395807781 +0000 UTC m=+0.028747551 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:02 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'nfs'
Jan 27 08:13:02 np0005597378 podman[75470]: 2026-01-27 13:13:02.498237109 +0000 UTC m=+0.131176859 container init be0706d3acb336bab553ed48b84f908b2664fd53f6859ac5865a067795983e27 (image=quay.io/ceph/ceph:v20, name=clever_goldwasser, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 08:13:02 np0005597378 podman[75470]: 2026-01-27 13:13:02.507410517 +0000 UTC m=+0.140350267 container start be0706d3acb336bab553ed48b84f908b2664fd53f6859ac5865a067795983e27 (image=quay.io/ceph/ceph:v20, name=clever_goldwasser, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Jan 27 08:13:02 np0005597378 podman[75470]: 2026-01-27 13:13:02.512417525 +0000 UTC m=+0.145357255 container attach be0706d3acb336bab553ed48b84f908b2664fd53f6859ac5865a067795983e27 (image=quay.io/ceph/ceph:v20, name=clever_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 08:13:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 27 08:13:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602902399' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]: 
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]: {
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    "fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    "health": {
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "status": "HEALTH_OK",
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "checks": {},
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "mutes": []
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    },
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    "election_epoch": 5,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    "quorum": [
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        0
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    ],
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    "quorum_names": [
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "compute-0"
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    ],
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    "quorum_age": 5,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    "monmap": {
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "epoch": 1,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "min_mon_release_name": "tentacle",
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "num_mons": 1
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    },
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    "osdmap": {
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "epoch": 1,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "num_osds": 0,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "num_up_osds": 0,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "osd_up_since": 0,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "num_in_osds": 0,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "osd_in_since": 0,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "num_remapped_pgs": 0
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    },
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    "pgmap": {
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "pgs_by_state": [],
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "num_pgs": 0,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "num_pools": 0,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "num_objects": 0,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "data_bytes": 0,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "bytes_used": 0,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "bytes_avail": 0,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "bytes_total": 0
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    },
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    "fsmap": {
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "epoch": 1,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "btime": "2026-01-27T13:12:53:733380+0000",
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "by_rank": [],
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "up:standby": 0
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    },
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    "mgrmap": {
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "available": false,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "num_standbys": 0,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "modules": [
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:            "iostat",
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:            "nfs"
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        ],
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "services": {}
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    },
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    "servicemap": {
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "epoch": 1,
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "modified": "2026-01-27T13:12:53.736512+0000",
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:        "services": {}
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    },
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]:    "progress_events": {}
Jan 27 08:13:02 np0005597378 clever_goldwasser[75486]: }
Jan 27 08:13:02 np0005597378 systemd[1]: libpod-be0706d3acb336bab553ed48b84f908b2664fd53f6859ac5865a067795983e27.scope: Deactivated successfully.
Jan 27 08:13:02 np0005597378 podman[75470]: 2026-01-27 13:13:02.713778405 +0000 UTC m=+0.346718145 container died be0706d3acb336bab553ed48b84f908b2664fd53f6859ac5865a067795983e27 (image=quay.io/ceph/ceph:v20, name=clever_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 08:13:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d4ad7b151be8e189ae5dec11f0677cd402007484f33095d76c4299b624d24175-merged.mount: Deactivated successfully.
Jan 27 08:13:02 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'orchestrator'
Jan 27 08:13:02 np0005597378 podman[75470]: 2026-01-27 13:13:02.761850611 +0000 UTC m=+0.394790341 container remove be0706d3acb336bab553ed48b84f908b2664fd53f6859ac5865a067795983e27 (image=quay.io/ceph/ceph:v20, name=clever_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:13:02 np0005597378 systemd[1]: libpod-conmon-be0706d3acb336bab553ed48b84f908b2664fd53f6859ac5865a067795983e27.scope: Deactivated successfully.
Jan 27 08:13:02 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'osd_perf_query'
Jan 27 08:13:03 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'osd_support'
Jan 27 08:13:03 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'pg_autoscaler'
Jan 27 08:13:03 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'progress'
Jan 27 08:13:03 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'prometheus'
Jan 27 08:13:03 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'rbd_support'
Jan 27 08:13:03 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'rgw'
Jan 27 08:13:04 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'rook'
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'selftest'
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'smb'
Jan 27 08:13:05 np0005597378 podman[75525]: 2026-01-27 13:13:04.805852791 +0000 UTC m=+0.021853412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'snap_schedule'
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'stats'
Jan 27 08:13:05 np0005597378 podman[75525]: 2026-01-27 13:13:05.091442037 +0000 UTC m=+0.307442648 container create b63e27df9fe7a76293db0ee43b6bfc170d0c6f1b6777e3b1c3a5ae393a53dd50 (image=quay.io/ceph/ceph:v20, name=affectionate_feistel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'status'
Jan 27 08:13:05 np0005597378 systemd[1]: Started libpod-conmon-b63e27df9fe7a76293db0ee43b6bfc170d0c6f1b6777e3b1c3a5ae393a53dd50.scope.
Jan 27 08:13:05 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34478257126ffa609aaa1e7ec43c3d9214fb06daa53782f3b19b57d0ff701d71/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34478257126ffa609aaa1e7ec43c3d9214fb06daa53782f3b19b57d0ff701d71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34478257126ffa609aaa1e7ec43c3d9214fb06daa53782f3b19b57d0ff701d71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:05 np0005597378 podman[75525]: 2026-01-27 13:13:05.18393426 +0000 UTC m=+0.399934931 container init b63e27df9fe7a76293db0ee43b6bfc170d0c6f1b6777e3b1c3a5ae393a53dd50 (image=quay.io/ceph/ceph:v20, name=affectionate_feistel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:05 np0005597378 podman[75525]: 2026-01-27 13:13:05.191357501 +0000 UTC m=+0.407358092 container start b63e27df9fe7a76293db0ee43b6bfc170d0c6f1b6777e3b1c3a5ae393a53dd50 (image=quay.io/ceph/ceph:v20, name=affectionate_feistel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:05 np0005597378 podman[75525]: 2026-01-27 13:13:05.197566804 +0000 UTC m=+0.413567455 container attach b63e27df9fe7a76293db0ee43b6bfc170d0c6f1b6777e3b1c3a5ae393a53dd50 (image=quay.io/ceph/ceph:v20, name=affectionate_feistel, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'telegraf'
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'telemetry'
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2743165206' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]: 
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]: {
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    "fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    "health": {
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "status": "HEALTH_OK",
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "checks": {},
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "mutes": []
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    },
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    "election_epoch": 5,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    "quorum": [
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        0
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    ],
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    "quorum_names": [
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "compute-0"
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    ],
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    "quorum_age": 7,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    "monmap": {
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "epoch": 1,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "min_mon_release_name": "tentacle",
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "num_mons": 1
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    },
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    "osdmap": {
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "epoch": 1,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "num_osds": 0,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "num_up_osds": 0,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "osd_up_since": 0,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "num_in_osds": 0,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "osd_in_since": 0,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "num_remapped_pgs": 0
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    },
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    "pgmap": {
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "pgs_by_state": [],
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "num_pgs": 0,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "num_pools": 0,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "num_objects": 0,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "data_bytes": 0,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "bytes_used": 0,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "bytes_avail": 0,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "bytes_total": 0
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    },
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    "fsmap": {
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "epoch": 1,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "btime": "2026-01-27T13:12:53.733380+0000",
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "by_rank": [],
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "up:standby": 0
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    },
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    "mgrmap": {
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "available": false,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "num_standbys": 0,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "modules": [
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:            "iostat",
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:            "nfs"
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        ],
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "services": {}
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    },
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    "servicemap": {
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "epoch": 1,
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "modified": "2026-01-27T13:12:53.736512+0000",
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:        "services": {}
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    },
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]:    "progress_events": {}
Jan 27 08:13:05 np0005597378 affectionate_feistel[75541]: }
Jan 27 08:13:05 np0005597378 systemd[1]: libpod-b63e27df9fe7a76293db0ee43b6bfc170d0c6f1b6777e3b1c3a5ae393a53dd50.scope: Deactivated successfully.
Jan 27 08:13:05 np0005597378 podman[75525]: 2026-01-27 13:13:05.384435253 +0000 UTC m=+0.600435904 container died b63e27df9fe7a76293db0ee43b6bfc170d0c6f1b6777e3b1c3a5ae393a53dd50 (image=quay.io/ceph/ceph:v20, name=affectionate_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:05 np0005597378 systemd[1]: var-lib-containers-storage-overlay-34478257126ffa609aaa1e7ec43c3d9214fb06daa53782f3b19b57d0ff701d71-merged.mount: Deactivated successfully.
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'test_orchestrator'
Jan 27 08:13:05 np0005597378 podman[75525]: 2026-01-27 13:13:05.663546179 +0000 UTC m=+0.879546780 container remove b63e27df9fe7a76293db0ee43b6bfc170d0c6f1b6777e3b1c3a5ae393a53dd50 (image=quay.io/ceph/ceph:v20, name=affectionate_feistel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'volumes'
Jan 27 08:13:05 np0005597378 systemd[1]: libpod-conmon-b63e27df9fe7a76293db0ee43b6bfc170d0c6f1b6777e3b1c3a5ae393a53dd50.scope: Deactivated successfully.
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: ms_deliver_dispatch: unhandled message 0x55caae9a1860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.uujfpe
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr handle_mgr_map Activating!
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr handle_mgr_map I am now activating
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.uujfpe(active, starting, since 0.0105349s)
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' cmd={"prefix": "mds metadata"} : dispatch
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).mds e1 all = 1
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata"} : dispatch
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' cmd={"prefix": "mon metadata"} : dispatch
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.uujfpe", "id": "compute-0.uujfpe"} v 0)
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' cmd={"prefix": "mgr metadata", "who": "compute-0.uujfpe", "id": "compute-0.uujfpe"} : dispatch
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: balancer
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [balancer INFO root] Starting
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: crash
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:13:05
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : Manager daemon compute-0.uujfpe is now available
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [balancer INFO root] No pools available
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: devicehealth
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: iostat
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] Starting
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: nfs
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: orchestrator
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: pg_autoscaler
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: progress
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [progress INFO root] Loading...
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [progress INFO root] No stored events to load
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [progress INFO root] Loaded [] historic events
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [progress INFO root] Loaded OSDMap, ready.
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] recovery thread starting
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] starting setup
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: rbd_support
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: status
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: telemetry
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uujfpe/mirror_snapshot_schedule"} v 0)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uujfpe/mirror_snapshot_schedule"} : dispatch
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] PerfHandler: starting
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TaskHandler: starting
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uujfpe/trash_purge_schedule"} v 0)
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uujfpe/trash_purge_schedule"} : dispatch
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] setup complete
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 27 08:13:05 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: volumes
Jan 27 08:13:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:06 np0005597378 ceph-mon[75090]: Activating manager daemon compute-0.uujfpe
Jan 27 08:13:06 np0005597378 ceph-mon[75090]: Manager daemon compute-0.uujfpe is now available
Jan 27 08:13:06 np0005597378 ceph-mon[75090]: from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uujfpe/mirror_snapshot_schedule"} : dispatch
Jan 27 08:13:06 np0005597378 ceph-mon[75090]: from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uujfpe/trash_purge_schedule"} : dispatch
Jan 27 08:13:06 np0005597378 ceph-mon[75090]: from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:06 np0005597378 ceph-mon[75090]: from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:06 np0005597378 ceph-mon[75090]: from='mgr.14102 192.168.122.100:0/2405003994' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:07 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.uujfpe(active, since 1.29614s)
Jan 27 08:13:07 np0005597378 podman[75659]: 2026-01-27 13:13:07.719033585 +0000 UTC m=+0.033118925 container create c5d2ec86ed165fa3a302ce0ea61a1cd62fd728da2d6b4ab52a13a22c1d6a0637 (image=quay.io/ceph/ceph:v20, name=tender_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 08:13:07 np0005597378 systemd[1]: Started libpod-conmon-c5d2ec86ed165fa3a302ce0ea61a1cd62fd728da2d6b4ab52a13a22c1d6a0637.scope.
Jan 27 08:13:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/414f7c101af22c9bc5fd8ba42fb54eae02325c1f33c00b4dfe559829b5c9a603/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/414f7c101af22c9bc5fd8ba42fb54eae02325c1f33c00b4dfe559829b5c9a603/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/414f7c101af22c9bc5fd8ba42fb54eae02325c1f33c00b4dfe559829b5c9a603/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:07 np0005597378 podman[75659]: 2026-01-27 13:13:07.795514854 +0000 UTC m=+0.109600214 container init c5d2ec86ed165fa3a302ce0ea61a1cd62fd728da2d6b4ab52a13a22c1d6a0637 (image=quay.io/ceph/ceph:v20, name=tender_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:07 np0005597378 podman[75659]: 2026-01-27 13:13:07.800101763 +0000 UTC m=+0.114187113 container start c5d2ec86ed165fa3a302ce0ea61a1cd62fd728da2d6b4ab52a13a22c1d6a0637 (image=quay.io/ceph/ceph:v20, name=tender_bhaskara, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:07 np0005597378 podman[75659]: 2026-01-27 13:13:07.703769776 +0000 UTC m=+0.017855136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:07 np0005597378 podman[75659]: 2026-01-27 13:13:07.805970639 +0000 UTC m=+0.120055979 container attach c5d2ec86ed165fa3a302ce0ea61a1cd62fd728da2d6b4ab52a13a22c1d6a0637 (image=quay.io/ceph/ceph:v20, name=tender_bhaskara, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:13:07 np0005597378 ceph-mgr[75385]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:13:07 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:08 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.uujfpe(active, since 2s)
Jan 27 08:13:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 27 08:13:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3193626524' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]: 
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]: {
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    "fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    "health": {
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "status": "HEALTH_OK",
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "checks": {},
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "mutes": []
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    },
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    "election_epoch": 5,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    "quorum": [
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        0
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    ],
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    "quorum_names": [
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "compute-0"
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    ],
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    "quorum_age": 10,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    "monmap": {
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "epoch": 1,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "min_mon_release_name": "tentacle",
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "num_mons": 1
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    },
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    "osdmap": {
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "epoch": 1,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "num_osds": 0,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "num_up_osds": 0,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "osd_up_since": 0,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "num_in_osds": 0,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "osd_in_since": 0,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "num_remapped_pgs": 0
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    },
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    "pgmap": {
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "pgs_by_state": [],
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "num_pgs": 0,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "num_pools": 0,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "num_objects": 0,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "data_bytes": 0,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "bytes_used": 0,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "bytes_avail": 0,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "bytes_total": 0
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    },
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    "fsmap": {
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "epoch": 1,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "btime": "2026-01-27T13:12:53.733380+0000",
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "by_rank": [],
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "up:standby": 0
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    },
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    "mgrmap": {
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "available": true,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "num_standbys": 0,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "modules": [
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:            "iostat",
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:            "nfs"
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        ],
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "services": {}
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    },
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    "servicemap": {
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "epoch": 1,
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "modified": "2026-01-27T13:12:53.736512+0000",
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:        "services": {}
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    },
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]:    "progress_events": {}
Jan 27 08:13:08 np0005597378 tender_bhaskara[75676]: }
Jan 27 08:13:08 np0005597378 systemd[1]: libpod-c5d2ec86ed165fa3a302ce0ea61a1cd62fd728da2d6b4ab52a13a22c1d6a0637.scope: Deactivated successfully.
Jan 27 08:13:08 np0005597378 podman[75659]: 2026-01-27 13:13:08.296745808 +0000 UTC m=+0.610831148 container died c5d2ec86ed165fa3a302ce0ea61a1cd62fd728da2d6b4ab52a13a22c1d6a0637 (image=quay.io/ceph/ceph:v20, name=tender_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:13:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-414f7c101af22c9bc5fd8ba42fb54eae02325c1f33c00b4dfe559829b5c9a603-merged.mount: Deactivated successfully.
Jan 27 08:13:08 np0005597378 podman[75659]: 2026-01-27 13:13:08.345661472 +0000 UTC m=+0.659746822 container remove c5d2ec86ed165fa3a302ce0ea61a1cd62fd728da2d6b4ab52a13a22c1d6a0637 (image=quay.io/ceph/ceph:v20, name=tender_bhaskara, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:08 np0005597378 systemd[1]: libpod-conmon-c5d2ec86ed165fa3a302ce0ea61a1cd62fd728da2d6b4ab52a13a22c1d6a0637.scope: Deactivated successfully.
Jan 27 08:13:08 np0005597378 podman[75712]: 2026-01-27 13:13:08.430515631 +0000 UTC m=+0.062083689 container create b6299041337ef3499609054d9889c03d86575edb33e4fc1c20abc11569f95cbc (image=quay.io/ceph/ceph:v20, name=hopeful_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:08 np0005597378 systemd[1]: Started libpod-conmon-b6299041337ef3499609054d9889c03d86575edb33e4fc1c20abc11569f95cbc.scope.
Jan 27 08:13:08 np0005597378 podman[75712]: 2026-01-27 13:13:08.395818983 +0000 UTC m=+0.027387131 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:08 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b6d240baed48519d58c41cf01437791cfab357722efe391f06e885e1dc2d11/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b6d240baed48519d58c41cf01437791cfab357722efe391f06e885e1dc2d11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b6d240baed48519d58c41cf01437791cfab357722efe391f06e885e1dc2d11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b6d240baed48519d58c41cf01437791cfab357722efe391f06e885e1dc2d11/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:08 np0005597378 podman[75712]: 2026-01-27 13:13:08.523928335 +0000 UTC m=+0.155496423 container init b6299041337ef3499609054d9889c03d86575edb33e4fc1c20abc11569f95cbc (image=quay.io/ceph/ceph:v20, name=hopeful_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:13:08 np0005597378 podman[75712]: 2026-01-27 13:13:08.533318387 +0000 UTC m=+0.164886435 container start b6299041337ef3499609054d9889c03d86575edb33e4fc1c20abc11569f95cbc (image=quay.io/ceph/ceph:v20, name=hopeful_booth, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:13:08 np0005597378 podman[75712]: 2026-01-27 13:13:08.540788498 +0000 UTC m=+0.172356546 container attach b6299041337ef3499609054d9889c03d86575edb33e4fc1c20abc11569f95cbc (image=quay.io/ceph/ceph:v20, name=hopeful_booth, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 27 08:13:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3028246980' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 27 08:13:08 np0005597378 hopeful_booth[75729]: 
Jan 27 08:13:08 np0005597378 hopeful_booth[75729]: [global]
Jan 27 08:13:08 np0005597378 hopeful_booth[75729]: #011fsid = 4d8fd694-f443-5fb1-b612-70034b2f3c6e
Jan 27 08:13:08 np0005597378 hopeful_booth[75729]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 27 08:13:08 np0005597378 hopeful_booth[75729]: #011osd_crush_chooseleaf_type = 0
Jan 27 08:13:08 np0005597378 systemd[1]: libpod-b6299041337ef3499609054d9889c03d86575edb33e4fc1c20abc11569f95cbc.scope: Deactivated successfully.
Jan 27 08:13:08 np0005597378 podman[75712]: 2026-01-27 13:13:08.947604037 +0000 UTC m=+0.579172075 container died b6299041337ef3499609054d9889c03d86575edb33e4fc1c20abc11569f95cbc (image=quay.io/ceph/ceph:v20, name=hopeful_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:13:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-09b6d240baed48519d58c41cf01437791cfab357722efe391f06e885e1dc2d11-merged.mount: Deactivated successfully.
Jan 27 08:13:08 np0005597378 podman[75712]: 2026-01-27 13:13:08.994863876 +0000 UTC m=+0.626431944 container remove b6299041337ef3499609054d9889c03d86575edb33e4fc1c20abc11569f95cbc (image=quay.io/ceph/ceph:v20, name=hopeful_booth, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:09 np0005597378 systemd[1]: libpod-conmon-b6299041337ef3499609054d9889c03d86575edb33e4fc1c20abc11569f95cbc.scope: Deactivated successfully.
Jan 27 08:13:09 np0005597378 podman[75767]: 2026-01-27 13:13:09.068966563 +0000 UTC m=+0.050362216 container create 41e6e60d3a5a78a2f30458e68ec062434b51c9455a4aa0c22a06962ba679d397 (image=quay.io/ceph/ceph:v20, name=mystifying_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 08:13:09 np0005597378 systemd[1]: Started libpod-conmon-41e6e60d3a5a78a2f30458e68ec062434b51c9455a4aa0c22a06962ba679d397.scope.
Jan 27 08:13:09 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0824582371b90d831e1a4fbf5ddf2e343e2922bb5061744b0c185a9aaefe91f2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0824582371b90d831e1a4fbf5ddf2e343e2922bb5061744b0c185a9aaefe91f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0824582371b90d831e1a4fbf5ddf2e343e2922bb5061744b0c185a9aaefe91f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:09 np0005597378 podman[75767]: 2026-01-27 13:13:09.131990762 +0000 UTC m=+0.113386425 container init 41e6e60d3a5a78a2f30458e68ec062434b51c9455a4aa0c22a06962ba679d397 (image=quay.io/ceph/ceph:v20, name=mystifying_northcutt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:09 np0005597378 podman[75767]: 2026-01-27 13:13:09.137281706 +0000 UTC m=+0.118677359 container start 41e6e60d3a5a78a2f30458e68ec062434b51c9455a4aa0c22a06962ba679d397 (image=quay.io/ceph/ceph:v20, name=mystifying_northcutt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:09 np0005597378 podman[75767]: 2026-01-27 13:13:09.141221561 +0000 UTC m=+0.122617274 container attach 41e6e60d3a5a78a2f30458e68ec062434b51c9455a4aa0c22a06962ba679d397 (image=quay.io/ceph/ceph:v20, name=mystifying_northcutt, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 08:13:09 np0005597378 podman[75767]: 2026-01-27 13:13:09.048499812 +0000 UTC m=+0.029895555 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:09 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3028246980' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 27 08:13:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 27 08:13:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3100543274' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 27 08:13:09 np0005597378 ceph-mgr[75385]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:13:09 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:10 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3100543274' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 27 08:13:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3100543274' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 27 08:13:10 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.uujfpe(active, since 4s)
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn  1: '-n'
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn  2: 'mgr.compute-0.uujfpe'
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn  3: '-f'
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn  4: '--setuser'
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn  5: 'ceph'
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn  6: '--setgroup'
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn  7: 'ceph'
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn  8: '--default-log-to-file=false'
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn  9: '--default-log-to-journald=true'
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr respawn  exe_path /proc/self/exe
Jan 27 08:13:10 np0005597378 systemd[1]: libpod-41e6e60d3a5a78a2f30458e68ec062434b51c9455a4aa0c22a06962ba679d397.scope: Deactivated successfully.
Jan 27 08:13:10 np0005597378 podman[75767]: 2026-01-27 13:13:10.302410951 +0000 UTC m=+1.283806614 container died 41e6e60d3a5a78a2f30458e68ec062434b51c9455a4aa0c22a06962ba679d397 (image=quay.io/ceph/ceph:v20, name=mystifying_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:13:10 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0824582371b90d831e1a4fbf5ddf2e343e2922bb5061744b0c185a9aaefe91f2-merged.mount: Deactivated successfully.
Jan 27 08:13:10 np0005597378 podman[75767]: 2026-01-27 13:13:10.337512987 +0000 UTC m=+1.318908640 container remove 41e6e60d3a5a78a2f30458e68ec062434b51c9455a4aa0c22a06962ba679d397 (image=quay.io/ceph/ceph:v20, name=mystifying_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 27 08:13:10 np0005597378 systemd[1]: libpod-conmon-41e6e60d3a5a78a2f30458e68ec062434b51c9455a4aa0c22a06962ba679d397.scope: Deactivated successfully.
Jan 27 08:13:10 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: ignoring --setuser ceph since I am not root
Jan 27 08:13:10 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: ignoring --setgroup ceph since I am not root
Jan 27 08:13:10 np0005597378 podman[75821]: 2026-01-27 13:13:10.392049153 +0000 UTC m=+0.035397504 container create bd737d77ccc20162a1b5aab65c0f4d41117def45139b6ad427b286c7dceedfc9 (image=quay.io/ceph/ceph:v20, name=flamboyant_neumann, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: pidfile_write: ignore empty --pid-file
Jan 27 08:13:10 np0005597378 systemd[1]: Started libpod-conmon-bd737d77ccc20162a1b5aab65c0f4d41117def45139b6ad427b286c7dceedfc9.scope.
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'alerts'
Jan 27 08:13:10 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a13efc162a8d613029c411bbf6fb8ab1d41671ef49c0879385613b39ff094ec4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a13efc162a8d613029c411bbf6fb8ab1d41671ef49c0879385613b39ff094ec4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a13efc162a8d613029c411bbf6fb8ab1d41671ef49c0879385613b39ff094ec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:10 np0005597378 podman[75821]: 2026-01-27 13:13:10.453983748 +0000 UTC m=+0.097332179 container init bd737d77ccc20162a1b5aab65c0f4d41117def45139b6ad427b286c7dceedfc9 (image=quay.io/ceph/ceph:v20, name=flamboyant_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:10 np0005597378 podman[75821]: 2026-01-27 13:13:10.459447346 +0000 UTC m=+0.102795707 container start bd737d77ccc20162a1b5aab65c0f4d41117def45139b6ad427b286c7dceedfc9 (image=quay.io/ceph/ceph:v20, name=flamboyant_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:10 np0005597378 podman[75821]: 2026-01-27 13:13:10.462387619 +0000 UTC m=+0.105736020 container attach bd737d77ccc20162a1b5aab65c0f4d41117def45139b6ad427b286c7dceedfc9 (image=quay.io/ceph/ceph:v20, name=flamboyant_neumann, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:10 np0005597378 podman[75821]: 2026-01-27 13:13:10.376710392 +0000 UTC m=+0.020058773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'balancer'
Jan 27 08:13:10 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'cephadm'
Jan 27 08:13:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 27 08:13:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/635497908' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 27 08:13:10 np0005597378 flamboyant_neumann[75858]: {
Jan 27 08:13:10 np0005597378 flamboyant_neumann[75858]:    "epoch": 5,
Jan 27 08:13:10 np0005597378 flamboyant_neumann[75858]:    "available": true,
Jan 27 08:13:10 np0005597378 flamboyant_neumann[75858]:    "active_name": "compute-0.uujfpe",
Jan 27 08:13:10 np0005597378 flamboyant_neumann[75858]:    "num_standby": 0
Jan 27 08:13:10 np0005597378 flamboyant_neumann[75858]: }
Jan 27 08:13:10 np0005597378 systemd[1]: libpod-bd737d77ccc20162a1b5aab65c0f4d41117def45139b6ad427b286c7dceedfc9.scope: Deactivated successfully.
Jan 27 08:13:10 np0005597378 podman[75821]: 2026-01-27 13:13:10.974444927 +0000 UTC m=+0.617793308 container died bd737d77ccc20162a1b5aab65c0f4d41117def45139b6ad427b286c7dceedfc9 (image=quay.io/ceph/ceph:v20, name=flamboyant_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:11 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'crash'
Jan 27 08:13:11 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'dashboard'
Jan 27 08:13:12 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'devicehealth'
Jan 27 08:13:12 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'diskprediction_local'
Jan 27 08:13:12 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3100543274' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 27 08:13:12 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 27 08:13:12 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 27 08:13:12 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]:  from numpy import show_config as show_numpy_config
Jan 27 08:13:12 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'influx'
Jan 27 08:13:12 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'insights'
Jan 27 08:13:12 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'iostat'
Jan 27 08:13:12 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'k8sevents'
Jan 27 08:13:12 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'localpool'
Jan 27 08:13:13 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'mds_autoscaler'
Jan 27 08:13:13 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'mirroring'
Jan 27 08:13:13 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'nfs'
Jan 27 08:13:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a13efc162a8d613029c411bbf6fb8ab1d41671ef49c0879385613b39ff094ec4-merged.mount: Deactivated successfully.
Jan 27 08:13:13 np0005597378 podman[75821]: 2026-01-27 13:13:13.5057475 +0000 UTC m=+3.149095891 container remove bd737d77ccc20162a1b5aab65c0f4d41117def45139b6ad427b286c7dceedfc9 (image=quay.io/ceph/ceph:v20, name=flamboyant_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 08:13:13 np0005597378 systemd[1]: libpod-conmon-bd737d77ccc20162a1b5aab65c0f4d41117def45139b6ad427b286c7dceedfc9.scope: Deactivated successfully.
Jan 27 08:13:13 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'orchestrator'
Jan 27 08:13:13 np0005597378 podman[75909]: 2026-01-27 13:13:13.627416563 +0000 UTC m=+0.094705683 container create 96aedd575214ba58c358fa74d869804295afcaf0f8fcb5bd743addb6c15ad426 (image=quay.io/ceph/ceph:v20, name=sharp_archimedes, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:13 np0005597378 podman[75909]: 2026-01-27 13:13:13.572142551 +0000 UTC m=+0.039431681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:13 np0005597378 systemd[1]: Started libpod-conmon-96aedd575214ba58c358fa74d869804295afcaf0f8fcb5bd743addb6c15ad426.scope.
Jan 27 08:13:13 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b6900205d6d6b92a82622c86a93b6bbcf0d7f3c9120e9cf08eaa3dccd9644f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b6900205d6d6b92a82622c86a93b6bbcf0d7f3c9120e9cf08eaa3dccd9644f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b6900205d6d6b92a82622c86a93b6bbcf0d7f3c9120e9cf08eaa3dccd9644f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:13 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'osd_perf_query'
Jan 27 08:13:13 np0005597378 podman[75909]: 2026-01-27 13:13:13.82399049 +0000 UTC m=+0.291279640 container init 96aedd575214ba58c358fa74d869804295afcaf0f8fcb5bd743addb6c15ad426 (image=quay.io/ceph/ceph:v20, name=sharp_archimedes, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:13 np0005597378 podman[75909]: 2026-01-27 13:13:13.833657918 +0000 UTC m=+0.300947038 container start 96aedd575214ba58c358fa74d869804295afcaf0f8fcb5bd743addb6c15ad426 (image=quay.io/ceph/ceph:v20, name=sharp_archimedes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:13:13 np0005597378 podman[75909]: 2026-01-27 13:13:13.837855259 +0000 UTC m=+0.305144419 container attach 96aedd575214ba58c358fa74d869804295afcaf0f8fcb5bd743addb6c15ad426 (image=quay.io/ceph/ceph:v20, name=sharp_archimedes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:13 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'osd_support'
Jan 27 08:13:13 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'pg_autoscaler'
Jan 27 08:13:14 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'progress'
Jan 27 08:13:14 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'prometheus'
Jan 27 08:13:14 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'rbd_support'
Jan 27 08:13:14 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'rgw'
Jan 27 08:13:14 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'rook'
Jan 27 08:13:15 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'selftest'
Jan 27 08:13:15 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'smb'
Jan 27 08:13:15 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'snap_schedule'
Jan 27 08:13:15 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'stats'
Jan 27 08:13:15 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'status'
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'telegraf'
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'telemetry'
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'test_orchestrator'
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: mgr[py] Loading python module 'volumes'
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : Active manager daemon compute-0.uujfpe restarted
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.uujfpe
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: ms_deliver_dispatch: unhandled message 0x557bed07e000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: mgr handle_mgr_map Activating!
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.uujfpe(active, starting, since 0.0779899s)
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: mgr handle_mgr_map I am now activating
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: Active manager daemon compute-0.uujfpe restarted
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: Activating manager daemon compute-0.uujfpe
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.uujfpe", "id": "compute-0.uujfpe"} v 0)
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "mgr metadata", "who": "compute-0.uujfpe", "id": "compute-0.uujfpe"} : dispatch
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "mds metadata"} : dispatch
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).mds e1 all = 1
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata"} : dispatch
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "mon metadata"} : dispatch
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: balancer
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Starting
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:13:16
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:13:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] No pools available
Jan 27 08:13:16 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : Manager daemon compute-0.uujfpe is now available
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019921286 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: cephadm
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: crash
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: devicehealth
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: iostat
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] Starting
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: nfs
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: orchestrator
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: pg_autoscaler
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: progress
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [progress INFO root] Loading...
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [progress INFO root] No stored events to load
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [progress INFO root] Loaded [] historic events
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [progress INFO root] Loaded OSDMap, ready.
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] recovery thread starting
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] starting setup
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: rbd_support
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: status
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: telemetry
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uujfpe/mirror_snapshot_schedule"} v 0)
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uujfpe/mirror_snapshot_schedule"} : dispatch
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] PerfHandler: starting
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TaskHandler: starting
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uujfpe/trash_purge_schedule"} v 0)
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uujfpe/trash_purge_schedule"} : dispatch
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] setup complete
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: mgr load Constructed class from module: volumes
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: Manager daemon compute-0.uujfpe is now available
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uujfpe/mirror_snapshot_schedule"} : dispatch
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uujfpe/trash_purge_schedule"} : dispatch
Jan 27 08:13:17 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.uujfpe(active, since 1.11022s)
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 27 08:13:17 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 27 08:13:17 np0005597378 sharp_archimedes[75925]: {
Jan 27 08:13:17 np0005597378 sharp_archimedes[75925]:    "mgrmap_epoch": 7,
Jan 27 08:13:17 np0005597378 sharp_archimedes[75925]:    "initialized": true
Jan 27 08:13:17 np0005597378 sharp_archimedes[75925]: }
Jan 27 08:13:17 np0005597378 systemd[1]: libpod-96aedd575214ba58c358fa74d869804295afcaf0f8fcb5bd743addb6c15ad426.scope: Deactivated successfully.
Jan 27 08:13:17 np0005597378 podman[75909]: 2026-01-27 13:13:17.964074341 +0000 UTC m=+4.431363441 container died 96aedd575214ba58c358fa74d869804295afcaf0f8fcb5bd743addb6c15ad426 (image=quay.io/ceph/ceph:v20, name=sharp_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:17 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c8b6900205d6d6b92a82622c86a93b6bbcf0d7f3c9120e9cf08eaa3dccd9644f-merged.mount: Deactivated successfully.
Jan 27 08:13:18 np0005597378 podman[75909]: 2026-01-27 13:13:18.012440284 +0000 UTC m=+4.479729374 container remove 96aedd575214ba58c358fa74d869804295afcaf0f8fcb5bd743addb6c15ad426 (image=quay.io/ceph/ceph:v20, name=sharp_archimedes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:18 np0005597378 systemd[1]: libpod-conmon-96aedd575214ba58c358fa74d869804295afcaf0f8fcb5bd743addb6c15ad426.scope: Deactivated successfully.
Jan 27 08:13:18 np0005597378 podman[76074]: 2026-01-27 13:13:18.068730297 +0000 UTC m=+0.036014407 container create 983ec11ea7867122c07223707b497ea0c1d577dad1d8d298ca547bd581511579 (image=quay.io/ceph/ceph:v20, name=blissful_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 08:13:18 np0005597378 systemd[1]: Started libpod-conmon-983ec11ea7867122c07223707b497ea0c1d577dad1d8d298ca547bd581511579.scope.
Jan 27 08:13:18 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9227525bb0e0116ddf2d5e5726d4c04425b856ff22c96fe9d5857af1d66df117/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9227525bb0e0116ddf2d5e5726d4c04425b856ff22c96fe9d5857af1d66df117/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9227525bb0e0116ddf2d5e5726d4c04425b856ff22c96fe9d5857af1d66df117/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:18 np0005597378 podman[76074]: 2026-01-27 13:13:18.130241503 +0000 UTC m=+0.097525663 container init 983ec11ea7867122c07223707b497ea0c1d577dad1d8d298ca547bd581511579 (image=quay.io/ceph/ceph:v20, name=blissful_allen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 08:13:18 np0005597378 podman[76074]: 2026-01-27 13:13:18.135956096 +0000 UTC m=+0.103240206 container start 983ec11ea7867122c07223707b497ea0c1d577dad1d8d298ca547bd581511579 (image=quay.io/ceph/ceph:v20, name=blissful_allen, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:13:18 np0005597378 podman[76074]: 2026-01-27 13:13:18.139150085 +0000 UTC m=+0.106434285 container attach 983ec11ea7867122c07223707b497ea0c1d577dad1d8d298ca547bd581511579 (image=quay.io/ceph/ceph:v20, name=blissful_allen, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:18 np0005597378 podman[76074]: 2026-01-27 13:13:18.053166552 +0000 UTC m=+0.020450682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:18 np0005597378 ceph-mgr[75385]: [cephadm INFO cherrypy.error] [27/Jan/2026:13:13:18] ENGINE Bus STARTING
Jan 27 08:13:18 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : [27/Jan/2026:13:13:18] ENGINE Bus STARTING
Jan 27 08:13:18 np0005597378 ceph-mgr[75385]: [cephadm INFO cherrypy.error] [27/Jan/2026:13:13:18] ENGINE Serving on https://192.168.122.100:7150
Jan 27 08:13:18 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : [27/Jan/2026:13:13:18] ENGINE Serving on https://192.168.122.100:7150
Jan 27 08:13:18 np0005597378 ceph-mgr[75385]: [cephadm INFO cherrypy.error] [27/Jan/2026:13:13:18] ENGINE Client ('192.168.122.100', 40144) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 27 08:13:18 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : [27/Jan/2026:13:13:18] ENGINE Client ('192.168.122.100', 40144) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 27 08:13:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 27 08:13:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/8225523' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 27 08:13:18 np0005597378 ceph-mgr[75385]: [cephadm INFO cherrypy.error] [27/Jan/2026:13:13:18] ENGINE Serving on http://192.168.122.100:8765
Jan 27 08:13:18 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : [27/Jan/2026:13:13:18] ENGINE Serving on http://192.168.122.100:8765
Jan 27 08:13:18 np0005597378 ceph-mgr[75385]: [cephadm INFO cherrypy.error] [27/Jan/2026:13:13:18] ENGINE Bus STARTED
Jan 27 08:13:18 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : [27/Jan/2026:13:13:18] ENGINE Bus STARTED
Jan 27 08:13:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 27 08:13:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 27 08:13:18 np0005597378 ceph-mgr[75385]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:13:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/8225523' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 27 08:13:18 np0005597378 blissful_allen[76090]: module 'orchestrator' is already enabled (always-on)
Jan 27 08:13:18 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.uujfpe(active, since 2s)
Jan 27 08:13:18 np0005597378 ceph-mon[75090]: Found migration_current of "None". Setting to last migration.
Jan 27 08:13:18 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/8225523' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 27 08:13:18 np0005597378 systemd[1]: libpod-983ec11ea7867122c07223707b497ea0c1d577dad1d8d298ca547bd581511579.scope: Deactivated successfully.
Jan 27 08:13:18 np0005597378 conmon[76090]: conmon 983ec11ea7867122c072 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-983ec11ea7867122c07223707b497ea0c1d577dad1d8d298ca547bd581511579.scope/container/memory.events
Jan 27 08:13:18 np0005597378 podman[76074]: 2026-01-27 13:13:18.968613015 +0000 UTC m=+0.935897135 container died 983ec11ea7867122c07223707b497ea0c1d577dad1d8d298ca547bd581511579 (image=quay.io/ceph/ceph:v20, name=blissful_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 08:13:18 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9227525bb0e0116ddf2d5e5726d4c04425b856ff22c96fe9d5857af1d66df117-merged.mount: Deactivated successfully.
Jan 27 08:13:19 np0005597378 podman[76074]: 2026-01-27 13:13:19.012636823 +0000 UTC m=+0.979920943 container remove 983ec11ea7867122c07223707b497ea0c1d577dad1d8d298ca547bd581511579 (image=quay.io/ceph/ceph:v20, name=blissful_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:19 np0005597378 systemd[1]: libpod-conmon-983ec11ea7867122c07223707b497ea0c1d577dad1d8d298ca547bd581511579.scope: Deactivated successfully.
Jan 27 08:13:19 np0005597378 podman[76151]: 2026-01-27 13:13:19.098398312 +0000 UTC m=+0.063567961 container create e664ae02a36763f7b79210acc726d92c880188ff90563fbeadb868f50b946d93 (image=quay.io/ceph/ceph:v20, name=xenodochial_wescoff, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:19 np0005597378 systemd[1]: Started libpod-conmon-e664ae02a36763f7b79210acc726d92c880188ff90563fbeadb868f50b946d93.scope.
Jan 27 08:13:19 np0005597378 podman[76151]: 2026-01-27 13:13:19.067115498 +0000 UTC m=+0.032285217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:19 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76244b3422ca8e881af003578e1bfef215b26a79dd04185a01f6f40eebfba9ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76244b3422ca8e881af003578e1bfef215b26a79dd04185a01f6f40eebfba9ab/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76244b3422ca8e881af003578e1bfef215b26a79dd04185a01f6f40eebfba9ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:19 np0005597378 podman[76151]: 2026-01-27 13:13:19.193983243 +0000 UTC m=+0.159152932 container init e664ae02a36763f7b79210acc726d92c880188ff90563fbeadb868f50b946d93 (image=quay.io/ceph/ceph:v20, name=xenodochial_wescoff, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:19 np0005597378 podman[76151]: 2026-01-27 13:13:19.203987318 +0000 UTC m=+0.169156967 container start e664ae02a36763f7b79210acc726d92c880188ff90563fbeadb868f50b946d93 (image=quay.io/ceph/ceph:v20, name=xenodochial_wescoff, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 08:13:19 np0005597378 podman[76151]: 2026-01-27 13:13:19.207284689 +0000 UTC m=+0.172454378 container attach e664ae02a36763f7b79210acc726d92c880188ff90563fbeadb868f50b946d93 (image=quay.io/ceph/ceph:v20, name=xenodochial_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 08:13:19 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:13:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 27 08:13:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 27 08:13:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 27 08:13:19 np0005597378 systemd[1]: libpod-e664ae02a36763f7b79210acc726d92c880188ff90563fbeadb868f50b946d93.scope: Deactivated successfully.
Jan 27 08:13:19 np0005597378 podman[76151]: 2026-01-27 13:13:19.692415127 +0000 UTC m=+0.657584776 container died e664ae02a36763f7b79210acc726d92c880188ff90563fbeadb868f50b946d93 (image=quay.io/ceph/ceph:v20, name=xenodochial_wescoff, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:19 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay-76244b3422ca8e881af003578e1bfef215b26a79dd04185a01f6f40eebfba9ab-merged.mount: Deactivated successfully.
Jan 27 08:13:20 np0005597378 ceph-mon[75090]: [27/Jan/2026:13:13:18] ENGINE Bus STARTING
Jan 27 08:13:20 np0005597378 ceph-mon[75090]: [27/Jan/2026:13:13:18] ENGINE Serving on https://192.168.122.100:7150
Jan 27 08:13:20 np0005597378 ceph-mon[75090]: [27/Jan/2026:13:13:18] ENGINE Client ('192.168.122.100', 40144) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 27 08:13:20 np0005597378 ceph-mon[75090]: [27/Jan/2026:13:13:18] ENGINE Serving on http://192.168.122.100:8765
Jan 27 08:13:20 np0005597378 ceph-mon[75090]: [27/Jan/2026:13:13:18] ENGINE Bus STARTED
Jan 27 08:13:20 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/8225523' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 27 08:13:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:20 np0005597378 podman[76151]: 2026-01-27 13:13:20.126926423 +0000 UTC m=+1.092096062 container remove e664ae02a36763f7b79210acc726d92c880188ff90563fbeadb868f50b946d93 (image=quay.io/ceph/ceph:v20, name=xenodochial_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:13:20 np0005597378 systemd[1]: libpod-conmon-e664ae02a36763f7b79210acc726d92c880188ff90563fbeadb868f50b946d93.scope: Deactivated successfully.
Jan 27 08:13:20 np0005597378 podman[76209]: 2026-01-27 13:13:20.302250141 +0000 UTC m=+0.138973886 container create be86028ed50ef9c8740b66732b654be8d2660cdd61075a243022769b8e4c9059 (image=quay.io/ceph/ceph:v20, name=priceless_austin, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:20 np0005597378 podman[76209]: 2026-01-27 13:13:20.21172584 +0000 UTC m=+0.048449575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:20 np0005597378 systemd[1]: Started libpod-conmon-be86028ed50ef9c8740b66732b654be8d2660cdd61075a243022769b8e4c9059.scope.
Jan 27 08:13:20 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5639f06d6d2a6ddff16658ff2383307323ac2e33137d4335c22d7c6d0df145/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5639f06d6d2a6ddff16658ff2383307323ac2e33137d4335c22d7c6d0df145/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5639f06d6d2a6ddff16658ff2383307323ac2e33137d4335c22d7c6d0df145/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:20 np0005597378 podman[76209]: 2026-01-27 13:13:20.442876522 +0000 UTC m=+0.279600327 container init be86028ed50ef9c8740b66732b654be8d2660cdd61075a243022769b8e4c9059 (image=quay.io/ceph/ceph:v20, name=priceless_austin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 08:13:20 np0005597378 podman[76209]: 2026-01-27 13:13:20.451030298 +0000 UTC m=+0.287754003 container start be86028ed50ef9c8740b66732b654be8d2660cdd61075a243022769b8e4c9059 (image=quay.io/ceph/ceph:v20, name=priceless_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 08:13:20 np0005597378 podman[76209]: 2026-01-27 13:13:20.456387044 +0000 UTC m=+0.293110869 container attach be86028ed50ef9c8740b66732b654be8d2660cdd61075a243022769b8e4c9059 (image=quay.io/ceph/ceph:v20, name=priceless_austin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Jan 27 08:13:20 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:13:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 27 08:13:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:20 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Set ssh ssh_user
Jan 27 08:13:20 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 27 08:13:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 27 08:13:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:20 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Set ssh ssh_config
Jan 27 08:13:20 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 27 08:13:20 np0005597378 ceph-mgr[75385]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 27 08:13:20 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 27 08:13:20 np0005597378 priceless_austin[76225]: ssh user set to ceph-admin. sudo will be used
Jan 27 08:13:20 np0005597378 systemd[1]: libpod-be86028ed50ef9c8740b66732b654be8d2660cdd61075a243022769b8e4c9059.scope: Deactivated successfully.
Jan 27 08:13:20 np0005597378 podman[76209]: 2026-01-27 13:13:20.898106976 +0000 UTC m=+0.734830701 container died be86028ed50ef9c8740b66732b654be8d2660cdd61075a243022769b8e4c9059 (image=quay.io/ceph/ceph:v20, name=priceless_austin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:20 np0005597378 ceph-mgr[75385]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:13:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2c5639f06d6d2a6ddff16658ff2383307323ac2e33137d4335c22d7c6d0df145-merged.mount: Deactivated successfully.
Jan 27 08:13:20 np0005597378 podman[76209]: 2026-01-27 13:13:20.941971161 +0000 UTC m=+0.778694876 container remove be86028ed50ef9c8740b66732b654be8d2660cdd61075a243022769b8e4c9059 (image=quay.io/ceph/ceph:v20, name=priceless_austin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 08:13:20 np0005597378 systemd[1]: libpod-conmon-be86028ed50ef9c8740b66732b654be8d2660cdd61075a243022769b8e4c9059.scope: Deactivated successfully.
Jan 27 08:13:21 np0005597378 podman[76264]: 2026-01-27 13:13:21.017988189 +0000 UTC m=+0.050141072 container create 912985b33a246d8c6c6dc8b007cc6bfa1d187343329975e1b08f5a9958c67a35 (image=quay.io/ceph/ceph:v20, name=xenodochial_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:13:21 np0005597378 systemd[1]: Started libpod-conmon-912985b33a246d8c6c6dc8b007cc6bfa1d187343329975e1b08f5a9958c67a35.scope.
Jan 27 08:13:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f31eff78b4ccb6804483e421792cb9bf24a8ea02ccf7763e2a0572040ff474d/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:21 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f31eff78b4ccb6804483e421792cb9bf24a8ea02ccf7763e2a0572040ff474d/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f31eff78b4ccb6804483e421792cb9bf24a8ea02ccf7763e2a0572040ff474d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:21 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f31eff78b4ccb6804483e421792cb9bf24a8ea02ccf7763e2a0572040ff474d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f31eff78b4ccb6804483e421792cb9bf24a8ea02ccf7763e2a0572040ff474d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:21 np0005597378 podman[76264]: 2026-01-27 13:13:21.000955602 +0000 UTC m=+0.033108485 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:21 np0005597378 podman[76264]: 2026-01-27 13:13:21.109279697 +0000 UTC m=+0.141432550 container init 912985b33a246d8c6c6dc8b007cc6bfa1d187343329975e1b08f5a9958c67a35 (image=quay.io/ceph/ceph:v20, name=xenodochial_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:21 np0005597378 podman[76264]: 2026-01-27 13:13:21.115631924 +0000 UTC m=+0.147784777 container start 912985b33a246d8c6c6dc8b007cc6bfa1d187343329975e1b08f5a9958c67a35 (image=quay.io/ceph/ceph:v20, name=xenodochial_bassi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:13:21 np0005597378 podman[76264]: 2026-01-27 13:13:21.119108959 +0000 UTC m=+0.151261822 container attach 912985b33a246d8c6c6dc8b007cc6bfa1d187343329975e1b08f5a9958c67a35 (image=quay.io/ceph/ceph:v20, name=xenodochial_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:21 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:13:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 27 08:13:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:21 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 27 08:13:21 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 27 08:13:21 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Set ssh private key
Jan 27 08:13:21 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 27 08:13:21 np0005597378 systemd[1]: libpod-912985b33a246d8c6c6dc8b007cc6bfa1d187343329975e1b08f5a9958c67a35.scope: Deactivated successfully.
Jan 27 08:13:21 np0005597378 podman[76264]: 2026-01-27 13:13:21.551723974 +0000 UTC m=+0.583876827 container died 912985b33a246d8c6c6dc8b007cc6bfa1d187343329975e1b08f5a9958c67a35 (image=quay.io/ceph/ceph:v20, name=xenodochial_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 08:13:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7f31eff78b4ccb6804483e421792cb9bf24a8ea02ccf7763e2a0572040ff474d-merged.mount: Deactivated successfully.
Jan 27 08:13:21 np0005597378 podman[76264]: 2026-01-27 13:13:21.595207372 +0000 UTC m=+0.627360225 container remove 912985b33a246d8c6c6dc8b007cc6bfa1d187343329975e1b08f5a9958c67a35 (image=quay.io/ceph/ceph:v20, name=xenodochial_bassi, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:21 np0005597378 systemd[1]: libpod-conmon-912985b33a246d8c6c6dc8b007cc6bfa1d187343329975e1b08f5a9958c67a35.scope: Deactivated successfully.
Jan 27 08:13:21 np0005597378 podman[76319]: 2026-01-27 13:13:21.655717526 +0000 UTC m=+0.040665317 container create 8ed50b939812af9d74313dd63f388c180c6f317dc122ae08725c6955d70e971a (image=quay.io/ceph/ceph:v20, name=brave_liskov, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 08:13:21 np0005597378 systemd[1]: Started libpod-conmon-8ed50b939812af9d74313dd63f388c180c6f317dc122ae08725c6955d70e971a.scope.
Jan 27 08:13:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5832c151856a5684505a630de87c42e000f90889f8c59c64d28e07bfb6c5c287/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5832c151856a5684505a630de87c42e000f90889f8c59c64d28e07bfb6c5c287/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5832c151856a5684505a630de87c42e000f90889f8c59c64d28e07bfb6c5c287/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5832c151856a5684505a630de87c42e000f90889f8c59c64d28e07bfb6c5c287/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5832c151856a5684505a630de87c42e000f90889f8c59c64d28e07bfb6c5c287/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:21 np0005597378 podman[76319]: 2026-01-27 13:13:21.728528576 +0000 UTC m=+0.113476387 container init 8ed50b939812af9d74313dd63f388c180c6f317dc122ae08725c6955d70e971a (image=quay.io/ceph/ceph:v20, name=brave_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:21 np0005597378 podman[76319]: 2026-01-27 13:13:21.635441359 +0000 UTC m=+0.020389160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:21 np0005597378 podman[76319]: 2026-01-27 13:13:21.735195919 +0000 UTC m=+0.120143710 container start 8ed50b939812af9d74313dd63f388c180c6f317dc122ae08725c6955d70e971a (image=quay.io/ceph/ceph:v20, name=brave_liskov, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 08:13:21 np0005597378 podman[76319]: 2026-01-27 13:13:21.738147943 +0000 UTC m=+0.123095734 container attach 8ed50b939812af9d74313dd63f388c180c6f317dc122ae08725c6955d70e971a (image=quay.io/ceph/ceph:v20, name=brave_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:21 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:22 np0005597378 ceph-mon[75090]: Set ssh ssh_user
Jan 27 08:13:22 np0005597378 ceph-mon[75090]: Set ssh ssh_config
Jan 27 08:13:22 np0005597378 ceph-mon[75090]: ssh user set to ceph-admin. sudo will be used
Jan 27 08:13:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:22 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:13:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 27 08:13:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:22 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 27 08:13:22 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 27 08:13:22 np0005597378 systemd[1]: libpod-8ed50b939812af9d74313dd63f388c180c6f317dc122ae08725c6955d70e971a.scope: Deactivated successfully.
Jan 27 08:13:22 np0005597378 podman[76319]: 2026-01-27 13:13:22.197403843 +0000 UTC m=+0.582351664 container died 8ed50b939812af9d74313dd63f388c180c6f317dc122ae08725c6955d70e971a (image=quay.io/ceph/ceph:v20, name=brave_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 08:13:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5832c151856a5684505a630de87c42e000f90889f8c59c64d28e07bfb6c5c287-merged.mount: Deactivated successfully.
Jan 27 08:13:22 np0005597378 podman[76319]: 2026-01-27 13:13:22.240053872 +0000 UTC m=+0.625001683 container remove 8ed50b939812af9d74313dd63f388c180c6f317dc122ae08725c6955d70e971a (image=quay.io/ceph/ceph:v20, name=brave_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:13:22 np0005597378 systemd[1]: libpod-conmon-8ed50b939812af9d74313dd63f388c180c6f317dc122ae08725c6955d70e971a.scope: Deactivated successfully.
Jan 27 08:13:22 np0005597378 podman[76372]: 2026-01-27 13:13:22.303838867 +0000 UTC m=+0.044037511 container create 7b9a14948d3db72a9b4107dc706e092ecb658cbb63d64a437768a4c810b7ac89 (image=quay.io/ceph/ceph:v20, name=goofy_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:22 np0005597378 systemd[1]: Started libpod-conmon-7b9a14948d3db72a9b4107dc706e092ecb658cbb63d64a437768a4c810b7ac89.scope.
Jan 27 08:13:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5922c2718fc6c8a977c0ca82682fc8b1ce88426afbfb1c3de6897bbbbcc92ebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5922c2718fc6c8a977c0ca82682fc8b1ce88426afbfb1c3de6897bbbbcc92ebd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5922c2718fc6c8a977c0ca82682fc8b1ce88426afbfb1c3de6897bbbbcc92ebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:22 np0005597378 podman[76372]: 2026-01-27 13:13:22.370456622 +0000 UTC m=+0.110655286 container init 7b9a14948d3db72a9b4107dc706e092ecb658cbb63d64a437768a4c810b7ac89 (image=quay.io/ceph/ceph:v20, name=goofy_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 08:13:22 np0005597378 podman[76372]: 2026-01-27 13:13:22.375391509 +0000 UTC m=+0.115590153 container start 7b9a14948d3db72a9b4107dc706e092ecb658cbb63d64a437768a4c810b7ac89 (image=quay.io/ceph/ceph:v20, name=goofy_goldberg, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:13:22 np0005597378 podman[76372]: 2026-01-27 13:13:22.282615049 +0000 UTC m=+0.022813723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:22 np0005597378 podman[76372]: 2026-01-27 13:13:22.378922725 +0000 UTC m=+0.119121499 container attach 7b9a14948d3db72a9b4107dc706e092ecb658cbb63d64a437768a4c810b7ac89 (image=quay.io/ceph/ceph:v20, name=goofy_goldberg, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052893 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:13:22 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:13:22 np0005597378 goofy_goldberg[76388]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClCc1VAESzJ9ODdDVBhoE5QvzWWhApYNs+pyPJBHvL6ES2f7pbkMd95ydX66blbwPAZsyGyvD/nzLeVF/29glMRkwprYEXQF0sj8IlBbJk6KNrUCgAQaQyHtmZyBK5n0Hv5XA5mrlFiQMtzJCBgSkcmwir3v3380y7h5N/+shdH+XiF5oQQQD/X41uqGkFERxiloOvQojw5GIn7JzkzQlewo6Jd0sE/lKeSIyjNkp0vkjKnO4ufLAqT9tXpWCi122rDZUoi4HyJfaXeZiyITDPLO2gu74bHZcr7uuqT8p7eTrR5aFkruQ9PKPsSSGhL6eH7PT+fhiqDfOICdSaQeJDIQ9Rt+i2McgaknrawFvkBax+e1NVW/uXs9icYjiHSY01D0roFQMuuli9i9o6lOdWm7XDQH8/j/WsRabQhodEujZIlmlefRR9SwWqVaCgeQaVoyYwH/6VnVbksDDkVcVWV1JLZoRV00lOzM062vW8amqEdoF3MjAgFD9sEgFYB48= zuul@controller
Jan 27 08:13:22 np0005597378 systemd[1]: libpod-7b9a14948d3db72a9b4107dc706e092ecb658cbb63d64a437768a4c810b7ac89.scope: Deactivated successfully.
Jan 27 08:13:22 np0005597378 podman[76372]: 2026-01-27 13:13:22.815347082 +0000 UTC m=+0.555545736 container died 7b9a14948d3db72a9b4107dc706e092ecb658cbb63d64a437768a4c810b7ac89 (image=quay.io/ceph/ceph:v20, name=goofy_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 08:13:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5922c2718fc6c8a977c0ca82682fc8b1ce88426afbfb1c3de6897bbbbcc92ebd-merged.mount: Deactivated successfully.
Jan 27 08:13:22 np0005597378 podman[76372]: 2026-01-27 13:13:22.852283738 +0000 UTC m=+0.592482382 container remove 7b9a14948d3db72a9b4107dc706e092ecb658cbb63d64a437768a4c810b7ac89 (image=quay.io/ceph/ceph:v20, name=goofy_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 08:13:22 np0005597378 systemd[1]: libpod-conmon-7b9a14948d3db72a9b4107dc706e092ecb658cbb63d64a437768a4c810b7ac89.scope: Deactivated successfully.
Jan 27 08:13:22 np0005597378 podman[76425]: 2026-01-27 13:13:22.907650362 +0000 UTC m=+0.039650045 container create be5f61cd8f605ce74a4b7c1b1c633ce58ed834f88f4ace969aff1c222d1ebd38 (image=quay.io/ceph/ceph:v20, name=pedantic_brown, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:13:22 np0005597378 ceph-mgr[75385]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:13:22 np0005597378 systemd[1]: Started libpod-conmon-be5f61cd8f605ce74a4b7c1b1c633ce58ed834f88f4ace969aff1c222d1ebd38.scope.
Jan 27 08:13:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef179171e6c0e31cde76dd2748ccf046dc8014ef58522fd7909730be068c025a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef179171e6c0e31cde76dd2748ccf046dc8014ef58522fd7909730be068c025a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef179171e6c0e31cde76dd2748ccf046dc8014ef58522fd7909730be068c025a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:22 np0005597378 podman[76425]: 2026-01-27 13:13:22.973471761 +0000 UTC m=+0.105471464 container init be5f61cd8f605ce74a4b7c1b1c633ce58ed834f88f4ace969aff1c222d1ebd38 (image=quay.io/ceph/ceph:v20, name=pedantic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:22 np0005597378 podman[76425]: 2026-01-27 13:13:22.977725333 +0000 UTC m=+0.109725016 container start be5f61cd8f605ce74a4b7c1b1c633ce58ed834f88f4ace969aff1c222d1ebd38 (image=quay.io/ceph/ceph:v20, name=pedantic_brown, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:22 np0005597378 podman[76425]: 2026-01-27 13:13:22.980830709 +0000 UTC m=+0.112830422 container attach be5f61cd8f605ce74a4b7c1b1c633ce58ed834f88f4ace969aff1c222d1ebd38 (image=quay.io/ceph/ceph:v20, name=pedantic_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 08:13:22 np0005597378 podman[76425]: 2026-01-27 13:13:22.886433435 +0000 UTC m=+0.018433138 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:23 np0005597378 ceph-mon[75090]: Set ssh ssh_identity_key
Jan 27 08:13:23 np0005597378 ceph-mon[75090]: Set ssh private key
Jan 27 08:13:23 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:23 np0005597378 ceph-mon[75090]: Set ssh ssh_identity_pub
Jan 27 08:13:23 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:13:23 np0005597378 systemd[1]: Created slice User Slice of UID 42477.
Jan 27 08:13:23 np0005597378 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 27 08:13:23 np0005597378 systemd-logind[786]: New session 20 of user ceph-admin.
Jan 27 08:13:23 np0005597378 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 27 08:13:23 np0005597378 systemd[1]: Starting User Manager for UID 42477...
Jan 27 08:13:23 np0005597378 systemd[76472]: Queued start job for default target Main User Target.
Jan 27 08:13:23 np0005597378 systemd[76472]: Created slice User Application Slice.
Jan 27 08:13:23 np0005597378 systemd[76472]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 27 08:13:23 np0005597378 systemd[76472]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 08:13:23 np0005597378 systemd[76472]: Reached target Paths.
Jan 27 08:13:23 np0005597378 systemd[76472]: Reached target Timers.
Jan 27 08:13:23 np0005597378 systemd[76472]: Starting D-Bus User Message Bus Socket...
Jan 27 08:13:23 np0005597378 systemd[76472]: Starting Create User's Volatile Files and Directories...
Jan 27 08:13:23 np0005597378 systemd[76472]: Finished Create User's Volatile Files and Directories.
Jan 27 08:13:23 np0005597378 systemd[76472]: Listening on D-Bus User Message Bus Socket.
Jan 27 08:13:23 np0005597378 systemd[76472]: Reached target Sockets.
Jan 27 08:13:23 np0005597378 systemd[76472]: Reached target Basic System.
Jan 27 08:13:23 np0005597378 systemd[76472]: Reached target Main User Target.
Jan 27 08:13:23 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:23 np0005597378 systemd[76472]: Startup finished in 116ms.
Jan 27 08:13:23 np0005597378 systemd[1]: Started User Manager for UID 42477.
Jan 27 08:13:23 np0005597378 systemd[1]: Started Session 20 of User ceph-admin.
Jan 27 08:13:23 np0005597378 systemd-logind[786]: New session 22 of user ceph-admin.
Jan 27 08:13:23 np0005597378 systemd[1]: Started Session 22 of User ceph-admin.
Jan 27 08:13:24 np0005597378 systemd-logind[786]: New session 23 of user ceph-admin.
Jan 27 08:13:24 np0005597378 systemd[1]: Started Session 23 of User ceph-admin.
Jan 27 08:13:24 np0005597378 systemd-logind[786]: New session 24 of user ceph-admin.
Jan 27 08:13:24 np0005597378 systemd[1]: Started Session 24 of User ceph-admin.
Jan 27 08:13:24 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 27 08:13:24 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 27 08:13:24 np0005597378 systemd-logind[786]: New session 25 of user ceph-admin.
Jan 27 08:13:24 np0005597378 systemd[1]: Started Session 25 of User ceph-admin.
Jan 27 08:13:24 np0005597378 ceph-mgr[75385]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:13:25 np0005597378 systemd-logind[786]: New session 26 of user ceph-admin.
Jan 27 08:13:25 np0005597378 systemd[1]: Started Session 26 of User ceph-admin.
Jan 27 08:13:25 np0005597378 systemd-logind[786]: New session 27 of user ceph-admin.
Jan 27 08:13:25 np0005597378 systemd[1]: Started Session 27 of User ceph-admin.
Jan 27 08:13:25 np0005597378 ceph-mon[75090]: Deploying cephadm binary to compute-0
Jan 27 08:13:25 np0005597378 systemd-logind[786]: New session 28 of user ceph-admin.
Jan 27 08:13:25 np0005597378 systemd[1]: Started Session 28 of User ceph-admin.
Jan 27 08:13:25 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:26 np0005597378 systemd-logind[786]: New session 29 of user ceph-admin.
Jan 27 08:13:26 np0005597378 systemd[1]: Started Session 29 of User ceph-admin.
Jan 27 08:13:26 np0005597378 systemd-logind[786]: New session 30 of user ceph-admin.
Jan 27 08:13:26 np0005597378 systemd[1]: Started Session 30 of User ceph-admin.
Jan 27 08:13:26 np0005597378 ceph-mgr[75385]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:13:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054706 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:13:27 np0005597378 systemd-logind[786]: New session 31 of user ceph-admin.
Jan 27 08:13:27 np0005597378 systemd[1]: Started Session 31 of User ceph-admin.
Jan 27 08:13:27 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:28 np0005597378 systemd-logind[786]: New session 32 of user ceph-admin.
Jan 27 08:13:28 np0005597378 systemd[1]: Started Session 32 of User ceph-admin.
Jan 27 08:13:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 27 08:13:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:28 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Added host compute-0
Jan 27 08:13:28 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 27 08:13:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 27 08:13:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 27 08:13:28 np0005597378 pedantic_brown[76442]: Added host 'compute-0' with addr '192.168.122.100'
Jan 27 08:13:28 np0005597378 systemd[1]: libpod-be5f61cd8f605ce74a4b7c1b1c633ce58ed834f88f4ace969aff1c222d1ebd38.scope: Deactivated successfully.
Jan 27 08:13:28 np0005597378 podman[76425]: 2026-01-27 13:13:28.481725744 +0000 UTC m=+5.613725437 container died be5f61cd8f605ce74a4b7c1b1c633ce58ed834f88f4ace969aff1c222d1ebd38 (image=quay.io/ceph/ceph:v20, name=pedantic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 27 08:13:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ef179171e6c0e31cde76dd2748ccf046dc8014ef58522fd7909730be068c025a-merged.mount: Deactivated successfully.
Jan 27 08:13:28 np0005597378 podman[76425]: 2026-01-27 13:13:28.52144421 +0000 UTC m=+5.653443893 container remove be5f61cd8f605ce74a4b7c1b1c633ce58ed834f88f4ace969aff1c222d1ebd38 (image=quay.io/ceph/ceph:v20, name=pedantic_brown, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:28 np0005597378 systemd[1]: libpod-conmon-be5f61cd8f605ce74a4b7c1b1c633ce58ed834f88f4ace969aff1c222d1ebd38.scope: Deactivated successfully.
Jan 27 08:13:28 np0005597378 podman[76863]: 2026-01-27 13:13:28.593904362 +0000 UTC m=+0.048647600 container create 34a10114e3a407fc2410c4043559de92c6c5c96d8a1ee7564b566a68322b4495 (image=quay.io/ceph/ceph:v20, name=quizzical_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:13:28 np0005597378 systemd[1]: Started libpod-conmon-34a10114e3a407fc2410c4043559de92c6c5c96d8a1ee7564b566a68322b4495.scope.
Jan 27 08:13:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc92a6b4dd7e9f509a5131c8a62a8a2723b201a6beabb9c6671815ea0351d95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc92a6b4dd7e9f509a5131c8a62a8a2723b201a6beabb9c6671815ea0351d95/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc92a6b4dd7e9f509a5131c8a62a8a2723b201a6beabb9c6671815ea0351d95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:28 np0005597378 podman[76863]: 2026-01-27 13:13:28.654646621 +0000 UTC m=+0.109389949 container init 34a10114e3a407fc2410c4043559de92c6c5c96d8a1ee7564b566a68322b4495 (image=quay.io/ceph/ceph:v20, name=quizzical_cori, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 08:13:28 np0005597378 podman[76863]: 2026-01-27 13:13:28.661632381 +0000 UTC m=+0.116375609 container start 34a10114e3a407fc2410c4043559de92c6c5c96d8a1ee7564b566a68322b4495 (image=quay.io/ceph/ceph:v20, name=quizzical_cori, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 08:13:28 np0005597378 podman[76863]: 2026-01-27 13:13:28.668686483 +0000 UTC m=+0.123429761 container attach 34a10114e3a407fc2410c4043559de92c6c5c96d8a1ee7564b566a68322b4495 (image=quay.io/ceph/ceph:v20, name=quizzical_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:28 np0005597378 podman[76863]: 2026-01-27 13:13:28.572688114 +0000 UTC m=+0.027431412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:28 np0005597378 ceph-mgr[75385]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:13:29 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:13:29 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 27 08:13:29 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 27 08:13:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 27 08:13:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:29 np0005597378 quizzical_cori[76904]: Scheduled mon update...
Jan 27 08:13:29 np0005597378 systemd[1]: libpod-34a10114e3a407fc2410c4043559de92c6c5c96d8a1ee7564b566a68322b4495.scope: Deactivated successfully.
Jan 27 08:13:29 np0005597378 podman[76863]: 2026-01-27 13:13:29.091791494 +0000 UTC m=+0.546534722 container died 34a10114e3a407fc2410c4043559de92c6c5c96d8a1ee7564b566a68322b4495 (image=quay.io/ceph/ceph:v20, name=quizzical_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:13:29 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7fc92a6b4dd7e9f509a5131c8a62a8a2723b201a6beabb9c6671815ea0351d95-merged.mount: Deactivated successfully.
Jan 27 08:13:29 np0005597378 podman[76863]: 2026-01-27 13:13:29.278849135 +0000 UTC m=+0.733592373 container remove 34a10114e3a407fc2410c4043559de92c6c5c96d8a1ee7564b566a68322b4495 (image=quay.io/ceph/ceph:v20, name=quizzical_cori, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:29 np0005597378 systemd[1]: libpod-conmon-34a10114e3a407fc2410c4043559de92c6c5c96d8a1ee7564b566a68322b4495.scope: Deactivated successfully.
Jan 27 08:13:29 np0005597378 podman[76968]: 2026-01-27 13:13:29.353982075 +0000 UTC m=+0.053014384 container create e4af7cf6f278062bf7136d8a075334b73c340173166ccf5d60b38780b34b46bd (image=quay.io/ceph/ceph:v20, name=xenodochial_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:29 np0005597378 systemd[1]: Started libpod-conmon-e4af7cf6f278062bf7136d8a075334b73c340173166ccf5d60b38780b34b46bd.scope.
Jan 27 08:13:29 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f48f34ec8c4c66ed4fcb6494953293045c15982cb1d0ba0e34c0bb62d681d9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f48f34ec8c4c66ed4fcb6494953293045c15982cb1d0ba0e34c0bb62d681d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f48f34ec8c4c66ed4fcb6494953293045c15982cb1d0ba0e34c0bb62d681d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:29 np0005597378 podman[76938]: 2026-01-27 13:13:29.413394075 +0000 UTC m=+0.584375656 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:29 np0005597378 podman[76968]: 2026-01-27 13:13:29.328301622 +0000 UTC m=+0.027334021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:29 np0005597378 podman[76968]: 2026-01-27 13:13:29.515294811 +0000 UTC m=+0.214327170 container init e4af7cf6f278062bf7136d8a075334b73c340173166ccf5d60b38780b34b46bd (image=quay.io/ceph/ceph:v20, name=xenodochial_bohr, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:29 np0005597378 podman[76968]: 2026-01-27 13:13:29.52822526 +0000 UTC m=+0.227257599 container start e4af7cf6f278062bf7136d8a075334b73c340173166ccf5d60b38780b34b46bd (image=quay.io/ceph/ceph:v20, name=xenodochial_bohr, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:13:29 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:29 np0005597378 ceph-mon[75090]: Added host compute-0
Jan 27 08:13:29 np0005597378 ceph-mon[75090]: Saving service mon spec with placement count:5
Jan 27 08:13:29 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:29 np0005597378 podman[76968]: 2026-01-27 13:13:29.631218321 +0000 UTC m=+0.330250640 container attach e4af7cf6f278062bf7136d8a075334b73c340173166ccf5d60b38780b34b46bd (image=quay.io/ceph/ceph:v20, name=xenodochial_bohr, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 08:13:29 np0005597378 podman[77004]: 2026-01-27 13:13:29.740468875 +0000 UTC m=+0.068966617 container create a880be0a5df0da3fdba13f1a2d7b404cc578547fd05f2cfa87fd9c2aa6bc8fe8 (image=quay.io/ceph/ceph:v20, name=vigorous_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:29 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:29 np0005597378 systemd[1]: Started libpod-conmon-a880be0a5df0da3fdba13f1a2d7b404cc578547fd05f2cfa87fd9c2aa6bc8fe8.scope.
Jan 27 08:13:29 np0005597378 podman[77004]: 2026-01-27 13:13:29.700707749 +0000 UTC m=+0.029205541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:29 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:29 np0005597378 podman[77004]: 2026-01-27 13:13:29.834319898 +0000 UTC m=+0.162817610 container init a880be0a5df0da3fdba13f1a2d7b404cc578547fd05f2cfa87fd9c2aa6bc8fe8 (image=quay.io/ceph/ceph:v20, name=vigorous_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:13:29 np0005597378 podman[77004]: 2026-01-27 13:13:29.839513631 +0000 UTC m=+0.168011343 container start a880be0a5df0da3fdba13f1a2d7b404cc578547fd05f2cfa87fd9c2aa6bc8fe8 (image=quay.io/ceph/ceph:v20, name=vigorous_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:29 np0005597378 podman[77004]: 2026-01-27 13:13:29.847038972 +0000 UTC m=+0.175536694 container attach a880be0a5df0da3fdba13f1a2d7b404cc578547fd05f2cfa87fd9c2aa6bc8fe8 (image=quay.io/ceph/ceph:v20, name=vigorous_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 08:13:29 np0005597378 vigorous_varahamihira[77039]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 27 08:13:29 np0005597378 podman[77004]: 2026-01-27 13:13:29.93971125 +0000 UTC m=+0.268208952 container died a880be0a5df0da3fdba13f1a2d7b404cc578547fd05f2cfa87fd9c2aa6bc8fe8 (image=quay.io/ceph/ceph:v20, name=vigorous_varahamihira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:29 np0005597378 systemd[1]: libpod-a880be0a5df0da3fdba13f1a2d7b404cc578547fd05f2cfa87fd9c2aa6bc8fe8.scope: Deactivated successfully.
Jan 27 08:13:30 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:13:30 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 27 08:13:30 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 27 08:13:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 27 08:13:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-879370504f67ad5d75a06e626755da4611b9d4099b42813ad9e4028a30713d03-merged.mount: Deactivated successfully.
Jan 27 08:13:30 np0005597378 xenodochial_bohr[76985]: Scheduled mgr update...
Jan 27 08:13:30 np0005597378 systemd[1]: libpod-e4af7cf6f278062bf7136d8a075334b73c340173166ccf5d60b38780b34b46bd.scope: Deactivated successfully.
Jan 27 08:13:30 np0005597378 podman[76968]: 2026-01-27 13:13:30.089866377 +0000 UTC m=+0.788898736 container died e4af7cf6f278062bf7136d8a075334b73c340173166ccf5d60b38780b34b46bd (image=quay.io/ceph/ceph:v20, name=xenodochial_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 08:13:30 np0005597378 podman[77004]: 2026-01-27 13:13:30.113078217 +0000 UTC m=+0.441575949 container remove a880be0a5df0da3fdba13f1a2d7b404cc578547fd05f2cfa87fd9c2aa6bc8fe8 (image=quay.io/ceph/ceph:v20, name=vigorous_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-21f48f34ec8c4c66ed4fcb6494953293045c15982cb1d0ba0e34c0bb62d681d9-merged.mount: Deactivated successfully.
Jan 27 08:13:30 np0005597378 systemd[1]: libpod-conmon-a880be0a5df0da3fdba13f1a2d7b404cc578547fd05f2cfa87fd9c2aa6bc8fe8.scope: Deactivated successfully.
Jan 27 08:13:30 np0005597378 podman[76968]: 2026-01-27 13:13:30.134186432 +0000 UTC m=+0.833218741 container remove e4af7cf6f278062bf7136d8a075334b73c340173166ccf5d60b38780b34b46bd (image=quay.io/ceph/ceph:v20, name=xenodochial_bohr, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 27 08:13:30 np0005597378 systemd[1]: libpod-conmon-e4af7cf6f278062bf7136d8a075334b73c340173166ccf5d60b38780b34b46bd.scope: Deactivated successfully.
Jan 27 08:13:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:30 np0005597378 podman[77070]: 2026-01-27 13:13:30.195661997 +0000 UTC m=+0.045399499 container create fa0b9e8d66070bc12ea9474492f603c1010cfe163b49a1484010e434abdad330 (image=quay.io/ceph/ceph:v20, name=fervent_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:13:30 np0005597378 systemd[1]: Started libpod-conmon-fa0b9e8d66070bc12ea9474492f603c1010cfe163b49a1484010e434abdad330.scope.
Jan 27 08:13:30 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99fcbef38bae90e62d24727ee3fa9a6e929746f5caeb9239ee4c69c24789963/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99fcbef38bae90e62d24727ee3fa9a6e929746f5caeb9239ee4c69c24789963/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99fcbef38bae90e62d24727ee3fa9a6e929746f5caeb9239ee4c69c24789963/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:30 np0005597378 podman[77070]: 2026-01-27 13:13:30.261142908 +0000 UTC m=+0.110880430 container init fa0b9e8d66070bc12ea9474492f603c1010cfe163b49a1484010e434abdad330 (image=quay.io/ceph/ceph:v20, name=fervent_colden, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:30 np0005597378 podman[77070]: 2026-01-27 13:13:30.266031623 +0000 UTC m=+0.115769125 container start fa0b9e8d66070bc12ea9474492f603c1010cfe163b49a1484010e434abdad330 (image=quay.io/ceph/ceph:v20, name=fervent_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:13:30 np0005597378 podman[77070]: 2026-01-27 13:13:30.175461582 +0000 UTC m=+0.025199104 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:30 np0005597378 podman[77070]: 2026-01-27 13:13:30.270348076 +0000 UTC m=+0.120085598 container attach fa0b9e8d66070bc12ea9474492f603c1010cfe163b49a1484010e434abdad330 (image=quay.io/ceph/ceph:v20, name=fervent_colden, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:13:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:30 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:13:30 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Saving service crash spec with placement *
Jan 27 08:13:30 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 27 08:13:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 27 08:13:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:30 np0005597378 fervent_colden[77115]: Scheduled crash update...
Jan 27 08:13:30 np0005597378 systemd[1]: libpod-fa0b9e8d66070bc12ea9474492f603c1010cfe163b49a1484010e434abdad330.scope: Deactivated successfully.
Jan 27 08:13:30 np0005597378 podman[77070]: 2026-01-27 13:13:30.684999785 +0000 UTC m=+0.534737287 container died fa0b9e8d66070bc12ea9474492f603c1010cfe163b49a1484010e434abdad330 (image=quay.io/ceph/ceph:v20, name=fervent_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:13:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e99fcbef38bae90e62d24727ee3fa9a6e929746f5caeb9239ee4c69c24789963-merged.mount: Deactivated successfully.
Jan 27 08:13:30 np0005597378 podman[77070]: 2026-01-27 13:13:30.727216365 +0000 UTC m=+0.576953867 container remove fa0b9e8d66070bc12ea9474492f603c1010cfe163b49a1484010e434abdad330 (image=quay.io/ceph/ceph:v20, name=fervent_colden, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 08:13:30 np0005597378 systemd[1]: libpod-conmon-fa0b9e8d66070bc12ea9474492f603c1010cfe163b49a1484010e434abdad330.scope: Deactivated successfully.
Jan 27 08:13:30 np0005597378 podman[77243]: 2026-01-27 13:13:30.7886917 +0000 UTC m=+0.041137348 container create 52e9bed5363b580bbc2894e4b5084d31edaecdbebc32a658d047b0c9abc02839 (image=quay.io/ceph/ceph:v20, name=angry_boyd, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:30 np0005597378 systemd[1]: Started libpod-conmon-52e9bed5363b580bbc2894e4b5084d31edaecdbebc32a658d047b0c9abc02839.scope.
Jan 27 08:13:30 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/896df23e85ec0f22c5493837a949950438e8e48dbffafb738d10218a56e79a43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/896df23e85ec0f22c5493837a949950438e8e48dbffafb738d10218a56e79a43/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/896df23e85ec0f22c5493837a949950438e8e48dbffafb738d10218a56e79a43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:30 np0005597378 podman[77243]: 2026-01-27 13:13:30.862938121 +0000 UTC m=+0.115383799 container init 52e9bed5363b580bbc2894e4b5084d31edaecdbebc32a658d047b0c9abc02839 (image=quay.io/ceph/ceph:v20, name=angry_boyd, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:30 np0005597378 podman[77243]: 2026-01-27 13:13:30.771151402 +0000 UTC m=+0.023597050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:30 np0005597378 podman[77243]: 2026-01-27 13:13:30.869785088 +0000 UTC m=+0.122230766 container start 52e9bed5363b580bbc2894e4b5084d31edaecdbebc32a658d047b0c9abc02839 (image=quay.io/ceph/ceph:v20, name=angry_boyd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:13:30 np0005597378 podman[77243]: 2026-01-27 13:13:30.877037775 +0000 UTC m=+0.129483433 container attach 52e9bed5363b580bbc2894e4b5084d31edaecdbebc32a658d047b0c9abc02839 (image=quay.io/ceph/ceph:v20, name=angry_boyd, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:13:30 np0005597378 ceph-mgr[75385]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:13:31 np0005597378 ceph-mon[75090]: Saving service mgr spec with placement count:2
Jan 27 08:13:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:31 np0005597378 podman[77326]: 2026-01-27 13:13:31.145135023 +0000 UTC m=+0.055607659 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:13:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 27 08:13:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3044109179' entity='client.admin' 
Jan 27 08:13:31 np0005597378 podman[77326]: 2026-01-27 13:13:31.267819278 +0000 UTC m=+0.178291884 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Jan 27 08:13:31 np0005597378 systemd[1]: libpod-52e9bed5363b580bbc2894e4b5084d31edaecdbebc32a658d047b0c9abc02839.scope: Deactivated successfully.
Jan 27 08:13:31 np0005597378 podman[77243]: 2026-01-27 13:13:31.288060304 +0000 UTC m=+0.540505962 container died 52e9bed5363b580bbc2894e4b5084d31edaecdbebc32a658d047b0c9abc02839 (image=quay.io/ceph/ceph:v20, name=angry_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-896df23e85ec0f22c5493837a949950438e8e48dbffafb738d10218a56e79a43-merged.mount: Deactivated successfully.
Jan 27 08:13:31 np0005597378 podman[77243]: 2026-01-27 13:13:31.336845435 +0000 UTC m=+0.589291083 container remove 52e9bed5363b580bbc2894e4b5084d31edaecdbebc32a658d047b0c9abc02839 (image=quay.io/ceph/ceph:v20, name=angry_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:31 np0005597378 systemd[1]: libpod-conmon-52e9bed5363b580bbc2894e4b5084d31edaecdbebc32a658d047b0c9abc02839.scope: Deactivated successfully.
Jan 27 08:13:31 np0005597378 podman[77383]: 2026-01-27 13:13:31.40384212 +0000 UTC m=+0.045722307 container create 401a411f3076e65ab291b5dd3d231338f0714ee987bdb3e203a881108b32047c (image=quay.io/ceph/ceph:v20, name=upbeat_wu, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:31 np0005597378 systemd[1]: Started libpod-conmon-401a411f3076e65ab291b5dd3d231338f0714ee987bdb3e203a881108b32047c.scope.
Jan 27 08:13:31 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806aaee46905cba75c9f2072623ffe7f3bb904f233bd36a97deba7347693be7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806aaee46905cba75c9f2072623ffe7f3bb904f233bd36a97deba7347693be7b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806aaee46905cba75c9f2072623ffe7f3bb904f233bd36a97deba7347693be7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:31 np0005597378 podman[77383]: 2026-01-27 13:13:31.382145502 +0000 UTC m=+0.024025689 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:31 np0005597378 podman[77383]: 2026-01-27 13:13:31.478964569 +0000 UTC m=+0.120844786 container init 401a411f3076e65ab291b5dd3d231338f0714ee987bdb3e203a881108b32047c (image=quay.io/ceph/ceph:v20, name=upbeat_wu, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:31 np0005597378 podman[77383]: 2026-01-27 13:13:31.483900395 +0000 UTC m=+0.125780572 container start 401a411f3076e65ab291b5dd3d231338f0714ee987bdb3e203a881108b32047c (image=quay.io/ceph/ceph:v20, name=upbeat_wu, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 08:13:31 np0005597378 podman[77383]: 2026-01-27 13:13:31.487969853 +0000 UTC m=+0.129850050 container attach 401a411f3076e65ab291b5dd3d231338f0714ee987bdb3e203a881108b32047c (image=quay.io/ceph/ceph:v20, name=upbeat_wu, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:31 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:31 np0005597378 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77519 (sysctl)
Jan 27 08:13:31 np0005597378 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 27 08:13:31 np0005597378 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 27 08:13:31 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:13:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 27 08:13:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:31 np0005597378 systemd[1]: libpod-401a411f3076e65ab291b5dd3d231338f0714ee987bdb3e203a881108b32047c.scope: Deactivated successfully.
Jan 27 08:13:31 np0005597378 conmon[77416]: conmon 401a411f3076e65ab291 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-401a411f3076e65ab291b5dd3d231338f0714ee987bdb3e203a881108b32047c.scope/container/memory.events
Jan 27 08:13:31 np0005597378 podman[77383]: 2026-01-27 13:13:31.937938682 +0000 UTC m=+0.579818859 container died 401a411f3076e65ab291b5dd3d231338f0714ee987bdb3e203a881108b32047c (image=quay.io/ceph/ceph:v20, name=upbeat_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 08:13:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-806aaee46905cba75c9f2072623ffe7f3bb904f233bd36a97deba7347693be7b-merged.mount: Deactivated successfully.
Jan 27 08:13:31 np0005597378 podman[77383]: 2026-01-27 13:13:31.989375351 +0000 UTC m=+0.631255528 container remove 401a411f3076e65ab291b5dd3d231338f0714ee987bdb3e203a881108b32047c (image=quay.io/ceph/ceph:v20, name=upbeat_wu, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 08:13:32 np0005597378 systemd[1]: libpod-conmon-401a411f3076e65ab291b5dd3d231338f0714ee987bdb3e203a881108b32047c.scope: Deactivated successfully.
Jan 27 08:13:32 np0005597378 podman[77541]: 2026-01-27 13:13:32.059885411 +0000 UTC m=+0.048647290 container create 8bd3b344604b0597511ea874022f37fe468df77cab5a2223c69294e151108838 (image=quay.io/ceph/ceph:v20, name=tender_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:32 np0005597378 systemd[1]: Started libpod-conmon-8bd3b344604b0597511ea874022f37fe468df77cab5a2223c69294e151108838.scope.
Jan 27 08:13:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:32 np0005597378 podman[77541]: 2026-01-27 13:13:32.039683835 +0000 UTC m=+0.028445764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86688700b8943422b2efdaf2bb33c1d38abbc4bb2f2ad7d1647f8985b254516/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86688700b8943422b2efdaf2bb33c1d38abbc4bb2f2ad7d1647f8985b254516/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86688700b8943422b2efdaf2bb33c1d38abbc4bb2f2ad7d1647f8985b254516/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:32 np0005597378 podman[77541]: 2026-01-27 13:13:32.168029392 +0000 UTC m=+0.156791291 container init 8bd3b344604b0597511ea874022f37fe468df77cab5a2223c69294e151108838 (image=quay.io/ceph/ceph:v20, name=tender_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:32 np0005597378 podman[77541]: 2026-01-27 13:13:32.17488752 +0000 UTC m=+0.163649399 container start 8bd3b344604b0597511ea874022f37fe468df77cab5a2223c69294e151108838 (image=quay.io/ceph/ceph:v20, name=tender_swirles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:32 np0005597378 podman[77541]: 2026-01-27 13:13:32.178066038 +0000 UTC m=+0.166827937 container attach 8bd3b344604b0597511ea874022f37fe468df77cab5a2223c69294e151108838 (image=quay.io/ceph/ceph:v20, name=tender_swirles, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:32 np0005597378 ceph-mon[75090]: Saving service crash spec with placement *
Jan 27 08:13:32 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3044109179' entity='client.admin' 
Jan 27 08:13:32 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:32 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:32 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:13:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 27 08:13:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:32 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Added label _admin to host compute-0
Jan 27 08:13:32 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 27 08:13:32 np0005597378 tender_swirles[77560]: Added label _admin to host compute-0
Jan 27 08:13:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:32 np0005597378 systemd[1]: libpod-8bd3b344604b0597511ea874022f37fe468df77cab5a2223c69294e151108838.scope: Deactivated successfully.
Jan 27 08:13:32 np0005597378 podman[77541]: 2026-01-27 13:13:32.582961696 +0000 UTC m=+0.571723605 container died 8bd3b344604b0597511ea874022f37fe468df77cab5a2223c69294e151108838 (image=quay.io/ceph/ceph:v20, name=tender_swirles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:13:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c86688700b8943422b2efdaf2bb33c1d38abbc4bb2f2ad7d1647f8985b254516-merged.mount: Deactivated successfully.
Jan 27 08:13:32 np0005597378 podman[77541]: 2026-01-27 13:13:32.766492202 +0000 UTC m=+0.755254071 container remove 8bd3b344604b0597511ea874022f37fe468df77cab5a2223c69294e151108838 (image=quay.io/ceph/ceph:v20, name=tender_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 08:13:32 np0005597378 systemd[1]: libpod-conmon-8bd3b344604b0597511ea874022f37fe468df77cab5a2223c69294e151108838.scope: Deactivated successfully.
Jan 27 08:13:32 np0005597378 podman[77730]: 2026-01-27 13:13:32.849425739 +0000 UTC m=+0.056332145 container create a2ef9c11308c362130a3c4f426ca9bf16dfb23029428955b8608a512518eb5be (image=quay.io/ceph/ceph:v20, name=awesome_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 08:13:32 np0005597378 systemd[1]: Started libpod-conmon-a2ef9c11308c362130a3c4f426ca9bf16dfb23029428955b8608a512518eb5be.scope.
Jan 27 08:13:32 np0005597378 ceph-mgr[75385]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:13:32 np0005597378 podman[77730]: 2026-01-27 13:13:32.82253718 +0000 UTC m=+0.029443686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b01be6e3e592878788c97597ee4c51a816ff1845855243cf6febade125c6f50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b01be6e3e592878788c97597ee4c51a816ff1845855243cf6febade125c6f50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b01be6e3e592878788c97597ee4c51a816ff1845855243cf6febade125c6f50/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:32 np0005597378 podman[77730]: 2026-01-27 13:13:32.939769387 +0000 UTC m=+0.146675813 container init a2ef9c11308c362130a3c4f426ca9bf16dfb23029428955b8608a512518eb5be (image=quay.io/ceph/ceph:v20, name=awesome_benz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:32 np0005597378 podman[77730]: 2026-01-27 13:13:32.949936546 +0000 UTC m=+0.156842942 container start a2ef9c11308c362130a3c4f426ca9bf16dfb23029428955b8608a512518eb5be (image=quay.io/ceph/ceph:v20, name=awesome_benz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:32 np0005597378 podman[77730]: 2026-01-27 13:13:32.953772499 +0000 UTC m=+0.160678895 container attach a2ef9c11308c362130a3c4f426ca9bf16dfb23029428955b8608a512518eb5be (image=quay.io/ceph/ceph:v20, name=awesome_benz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:13:32 np0005597378 podman[77763]: 2026-01-27 13:13:32.993070266 +0000 UTC m=+0.043101100 container create 0500a4d49b7de38f96b3bfcd67ce6c9ed08b2986bcbd39608b4b70809f2d3e36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 08:13:33 np0005597378 systemd[1]: Started libpod-conmon-0500a4d49b7de38f96b3bfcd67ce6c9ed08b2986bcbd39608b4b70809f2d3e36.scope.
Jan 27 08:13:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:33 np0005597378 podman[77763]: 2026-01-27 13:13:32.972958843 +0000 UTC m=+0.022989717 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:13:33 np0005597378 podman[77763]: 2026-01-27 13:13:33.074875689 +0000 UTC m=+0.124906513 container init 0500a4d49b7de38f96b3bfcd67ce6c9ed08b2986bcbd39608b4b70809f2d3e36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:33 np0005597378 podman[77763]: 2026-01-27 13:13:33.081640285 +0000 UTC m=+0.131671149 container start 0500a4d49b7de38f96b3bfcd67ce6c9ed08b2986bcbd39608b4b70809f2d3e36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 08:13:33 np0005597378 bold_kare[77782]: 167 167
Jan 27 08:13:33 np0005597378 systemd[1]: libpod-0500a4d49b7de38f96b3bfcd67ce6c9ed08b2986bcbd39608b4b70809f2d3e36.scope: Deactivated successfully.
Jan 27 08:13:33 np0005597378 podman[77763]: 2026-01-27 13:13:33.086277936 +0000 UTC m=+0.136308770 container attach 0500a4d49b7de38f96b3bfcd67ce6c9ed08b2986bcbd39608b4b70809f2d3e36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:33 np0005597378 podman[77763]: 2026-01-27 13:13:33.089308291 +0000 UTC m=+0.139339125 container died 0500a4d49b7de38f96b3bfcd67ce6c9ed08b2986bcbd39608b4b70809f2d3e36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4fd2f78b595d9c6e65e3ba3f7aee9ddb81b830a6ee7ac45e3df4a340a39fa962-merged.mount: Deactivated successfully.
Jan 27 08:13:33 np0005597378 podman[77763]: 2026-01-27 13:13:33.133914332 +0000 UTC m=+0.183945156 container remove 0500a4d49b7de38f96b3bfcd67ce6c9ed08b2986bcbd39608b4b70809f2d3e36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:13:33 np0005597378 systemd[1]: libpod-conmon-0500a4d49b7de38f96b3bfcd67ce6c9ed08b2986bcbd39608b4b70809f2d3e36.scope: Deactivated successfully.
Jan 27 08:13:33 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:33 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 27 08:13:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1529414251' entity='client.admin' 
Jan 27 08:13:33 np0005597378 awesome_benz[77747]: set mgr/dashboard/cluster/status
Jan 27 08:13:33 np0005597378 systemd[1]: libpod-a2ef9c11308c362130a3c4f426ca9bf16dfb23029428955b8608a512518eb5be.scope: Deactivated successfully.
Jan 27 08:13:33 np0005597378 conmon[77747]: conmon a2ef9c11308c362130a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a2ef9c11308c362130a3c4f426ca9bf16dfb23029428955b8608a512518eb5be.scope/container/memory.events
Jan 27 08:13:33 np0005597378 podman[77730]: 2026-01-27 13:13:33.549272895 +0000 UTC m=+0.756179291 container died a2ef9c11308c362130a3c4f426ca9bf16dfb23029428955b8608a512518eb5be (image=quay.io/ceph/ceph:v20, name=awesome_benz, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5b01be6e3e592878788c97597ee4c51a816ff1845855243cf6febade125c6f50-merged.mount: Deactivated successfully.
Jan 27 08:13:33 np0005597378 podman[77730]: 2026-01-27 13:13:33.598821294 +0000 UTC m=+0.805727690 container remove a2ef9c11308c362130a3c4f426ca9bf16dfb23029428955b8608a512518eb5be (image=quay.io/ceph/ceph:v20, name=awesome_benz, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:13:33 np0005597378 systemd[1]: libpod-conmon-a2ef9c11308c362130a3c4f426ca9bf16dfb23029428955b8608a512518eb5be.scope: Deactivated successfully.
Jan 27 08:13:33 np0005597378 systemd[1]: Reloading.
Jan 27 08:13:33 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:13:33 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:13:33 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:34 np0005597378 podman[77878]: 2026-01-27 13:13:34.143688708 +0000 UTC m=+0.052000931 container create 8d3d309ad7a544fab21dda1539973fc070c899550a05369240707ccfef0607f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:34 np0005597378 systemd[1]: Started libpod-conmon-8d3d309ad7a544fab21dda1539973fc070c899550a05369240707ccfef0607f4.scope.
Jan 27 08:13:34 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:34 np0005597378 podman[77878]: 2026-01-27 13:13:34.121221374 +0000 UTC m=+0.029533647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:13:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b84676ac9247be2202b9d6b597bc6136e1b6111e376bbd4a0abec332b8a15c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b84676ac9247be2202b9d6b597bc6136e1b6111e376bbd4a0abec332b8a15c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b84676ac9247be2202b9d6b597bc6136e1b6111e376bbd4a0abec332b8a15c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b84676ac9247be2202b9d6b597bc6136e1b6111e376bbd4a0abec332b8a15c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:34 np0005597378 podman[77878]: 2026-01-27 13:13:34.249320405 +0000 UTC m=+0.157632658 container init 8d3d309ad7a544fab21dda1539973fc070c899550a05369240707ccfef0607f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:34 np0005597378 podman[77878]: 2026-01-27 13:13:34.256186273 +0000 UTC m=+0.164498496 container start 8d3d309ad7a544fab21dda1539973fc070c899550a05369240707ccfef0607f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:13:34 np0005597378 podman[77878]: 2026-01-27 13:13:34.261714932 +0000 UTC m=+0.170027155 container attach 8d3d309ad7a544fab21dda1539973fc070c899550a05369240707ccfef0607f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jepsen, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 27 08:13:34 np0005597378 ceph-mon[75090]: Added label _admin to host compute-0
Jan 27 08:13:34 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1529414251' entity='client.admin' 
Jan 27 08:13:34 np0005597378 python3[77924]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:13:34 np0005597378 podman[77931]: 2026-01-27 13:13:34.677059955 +0000 UTC m=+0.089581742 container create a44fd26161e242cd8b331285d0e0d1ca63d1445bd21209c21edf4ea7f4813767 (image=quay.io/ceph/ceph:v20, name=blissful_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:34 np0005597378 podman[77931]: 2026-01-27 13:13:34.609000568 +0000 UTC m=+0.021522375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]: [
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:    {
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:        "available": false,
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:        "being_replaced": false,
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:        "ceph_device_lvm": false,
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:        "lsm_data": {},
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:        "lvs": [],
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:        "path": "/dev/sr0",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:        "rejected_reasons": [
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "Has a FileSystem",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "Insufficient space (<5GB)"
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:        ],
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:        "sys_api": {
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "actuators": null,
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "device_nodes": [
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:                "sr0"
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            ],
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "devname": "sr0",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "human_readable_size": "482.00 KB",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "id_bus": "ata",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "model": "QEMU DVD-ROM",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "nr_requests": "2",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "parent": "/dev/sr0",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "partitions": {},
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "path": "/dev/sr0",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "removable": "1",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "rev": "2.5+",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "ro": "0",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "rotational": "1",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "sas_address": "",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "sas_device_handle": "",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "scheduler_mode": "mq-deadline",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "sectors": 0,
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "sectorsize": "2048",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "size": 493568.0,
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "support_discard": "2048",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "type": "disk",
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:            "vendor": "QEMU"
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:        }
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]:    }
Jan 27 08:13:34 np0005597378 elegant_jepsen[77894]: ]
Jan 27 08:13:34 np0005597378 systemd[1]: Started libpod-conmon-a44fd26161e242cd8b331285d0e0d1ca63d1445bd21209c21edf4ea7f4813767.scope.
Jan 27 08:13:34 np0005597378 systemd[1]: libpod-8d3d309ad7a544fab21dda1539973fc070c899550a05369240707ccfef0607f4.scope: Deactivated successfully.
Jan 27 08:13:34 np0005597378 podman[77878]: 2026-01-27 13:13:34.79701847 +0000 UTC m=+0.705330703 container died 8d3d309ad7a544fab21dda1539973fc070c899550a05369240707ccfef0607f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jepsen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:34 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e62a60b1df738e84a5e01354ae6967f1cd44d1a8b82244373212a35bbd1c82/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e62a60b1df738e84a5e01354ae6967f1cd44d1a8b82244373212a35bbd1c82/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:34 np0005597378 podman[77931]: 2026-01-27 13:13:34.872273593 +0000 UTC m=+0.284795430 container init a44fd26161e242cd8b331285d0e0d1ca63d1445bd21209c21edf4ea7f4813767 (image=quay.io/ceph/ceph:v20, name=blissful_hofstadter, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 08:13:34 np0005597378 podman[77931]: 2026-01-27 13:13:34.886304355 +0000 UTC m=+0.298826142 container start a44fd26161e242cd8b331285d0e0d1ca63d1445bd21209c21edf4ea7f4813767 (image=quay.io/ceph/ceph:v20, name=blissful_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:13:34 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b0b84676ac9247be2202b9d6b597bc6136e1b6111e376bbd4a0abec332b8a15c-merged.mount: Deactivated successfully.
Jan 27 08:13:34 np0005597378 podman[77931]: 2026-01-27 13:13:34.896429633 +0000 UTC m=+0.308951460 container attach a44fd26161e242cd8b331285d0e0d1ca63d1445bd21209c21edf4ea7f4813767 (image=quay.io/ceph/ceph:v20, name=blissful_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:34 np0005597378 ceph-mgr[75385]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 27 08:13:34 np0005597378 podman[77878]: 2026-01-27 13:13:34.942631639 +0000 UTC m=+0.850943872 container remove 8d3d309ad7a544fab21dda1539973fc070c899550a05369240707ccfef0607f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jepsen, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:13:34 np0005597378 systemd[1]: libpod-conmon-8d3d309ad7a544fab21dda1539973fc070c899550a05369240707ccfef0607f4.scope: Deactivated successfully.
Jan 27 08:13:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:13:35 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 27 08:13:35 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 27 08:13:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/96080923' entity='client.admin' 
Jan 27 08:13:35 np0005597378 systemd[1]: libpod-a44fd26161e242cd8b331285d0e0d1ca63d1445bd21209c21edf4ea7f4813767.scope: Deactivated successfully.
Jan 27 08:13:35 np0005597378 podman[77931]: 2026-01-27 13:13:35.321766552 +0000 UTC m=+0.734288339 container died a44fd26161e242cd8b331285d0e0d1ca63d1445bd21209c21edf4ea7f4813767 (image=quay.io/ceph/ceph:v20, name=blissful_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:13:35 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a8e62a60b1df738e84a5e01354ae6967f1cd44d1a8b82244373212a35bbd1c82-merged.mount: Deactivated successfully.
Jan 27 08:13:35 np0005597378 podman[77931]: 2026-01-27 13:13:35.357779388 +0000 UTC m=+0.770301175 container remove a44fd26161e242cd8b331285d0e0d1ca63d1445bd21209c21edf4ea7f4813767 (image=quay.io/ceph/ceph:v20, name=blissful_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 08:13:35 np0005597378 systemd[1]: libpod-conmon-a44fd26161e242cd8b331285d0e0d1ca63d1445bd21209c21edf4ea7f4813767.scope: Deactivated successfully.
Jan 27 08:13:35 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/config/ceph.conf
Jan 27 08:13:35 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/config/ceph.conf
Jan 27 08:13:35 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 08:13:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:13:36 np0005597378 ceph-mon[75090]: Updating compute-0:/etc/ceph/ceph.conf
Jan 27 08:13:36 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/96080923' entity='client.admin' 
Jan 27 08:13:36 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 27 08:13:36 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 27 08:13:36 np0005597378 ansible-async_wrapper.py[79233]: Invoked with j218621281057 30 /home/zuul/.ansible/tmp/ansible-tmp-1769519615.6864686-36516-42844397684890/AnsiballZ_command.py _
Jan 27 08:13:36 np0005597378 ansible-async_wrapper.py[79290]: Starting module and watcher
Jan 27 08:13:36 np0005597378 ansible-async_wrapper.py[79290]: Start watching 79295 (30)
Jan 27 08:13:36 np0005597378 ansible-async_wrapper.py[79295]: Start module (79295)
Jan 27 08:13:36 np0005597378 ansible-async_wrapper.py[79233]: Return async_wrapper task started.
Jan 27 08:13:36 np0005597378 python3[79302]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:13:36 np0005597378 podman[79358]: 2026-01-27 13:13:36.502563765 +0000 UTC m=+0.046830351 container create f829f66e34fedf4c05193b24a1fc94e4a64351fca49e188dc92dad164c43348a (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:36 np0005597378 systemd[1]: Started libpod-conmon-f829f66e34fedf4c05193b24a1fc94e4a64351fca49e188dc92dad164c43348a.scope.
Jan 27 08:13:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a863af88e588ce6474cf6a3a246e5c088d81a0ecee4e2174f5059f857654dfe5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a863af88e588ce6474cf6a3a246e5c088d81a0ecee4e2174f5059f857654dfe5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:36 np0005597378 podman[79358]: 2026-01-27 13:13:36.483533884 +0000 UTC m=+0.027800480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:36 np0005597378 podman[79358]: 2026-01-27 13:13:36.582053828 +0000 UTC m=+0.126320454 container init f829f66e34fedf4c05193b24a1fc94e4a64351fca49e188dc92dad164c43348a (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 08:13:36 np0005597378 podman[79358]: 2026-01-27 13:13:36.589795525 +0000 UTC m=+0.134062111 container start f829f66e34fedf4c05193b24a1fc94e4a64351fca49e188dc92dad164c43348a (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:13:36 np0005597378 podman[79358]: 2026-01-27 13:13:36.593416002 +0000 UTC m=+0.137682608 container attach f829f66e34fedf4c05193b24a1fc94e4a64351fca49e188dc92dad164c43348a (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:36 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/config/ceph.client.admin.keyring
Jan 27 08:13:36 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/config/ceph.client.admin.keyring
Jan 27 08:13:36 np0005597378 ceph-mgr[75385]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 27 08:13:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:13:36 np0005597378 ceph-mon[75090]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: Updating compute-0:/var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/config/ceph.conf
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 27 08:13:37 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:13:37 np0005597378 friendly_torvalds[79407]: 
Jan 27 08:13:37 np0005597378 friendly_torvalds[79407]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 27 08:13:37 np0005597378 systemd[1]: libpod-f829f66e34fedf4c05193b24a1fc94e4a64351fca49e188dc92dad164c43348a.scope: Deactivated successfully.
Jan 27 08:13:37 np0005597378 podman[79358]: 2026-01-27 13:13:37.053768056 +0000 UTC m=+0.598034632 container died f829f66e34fedf4c05193b24a1fc94e4a64351fca49e188dc92dad164c43348a (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:37 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a863af88e588ce6474cf6a3a246e5c088d81a0ecee4e2174f5059f857654dfe5-merged.mount: Deactivated successfully.
Jan 27 08:13:37 np0005597378 podman[79358]: 2026-01-27 13:13:37.088036715 +0000 UTC m=+0.632303281 container remove f829f66e34fedf4c05193b24a1fc94e4a64351fca49e188dc92dad164c43348a (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:37 np0005597378 systemd[1]: libpod-conmon-f829f66e34fedf4c05193b24a1fc94e4a64351fca49e188dc92dad164c43348a.scope: Deactivated successfully.
Jan 27 08:13:37 np0005597378 ansible-async_wrapper.py[79295]: Module complete (79295)
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:37 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev 89c53073-d338-4829-ad5a-bfe515a2bcff (Updating crash deployment (+1 -> 1))
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:13:37 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 27 08:13:37 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 27 08:13:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:13:37 np0005597378 python3[79855]: ansible-ansible.legacy.async_status Invoked with jid=j218621281057.79233 mode=status _async_dir=/root/.ansible_async
Jan 27 08:13:37 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:37 np0005597378 podman[79945]: 2026-01-27 13:13:37.97066515 +0000 UTC m=+0.040251609 container create 247376dcab25c50d2b46648eca6ccbfd47952ec16873f7c41768384978d5ea1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_darwin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 08:13:38 np0005597378 systemd[1]: Started libpod-conmon-247376dcab25c50d2b46648eca6ccbfd47952ec16873f7c41768384978d5ea1e.scope.
Jan 27 08:13:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:38 np0005597378 podman[79945]: 2026-01-27 13:13:37.951291692 +0000 UTC m=+0.020878171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:13:38 np0005597378 podman[79945]: 2026-01-27 13:13:38.053001635 +0000 UTC m=+0.122588104 container init 247376dcab25c50d2b46648eca6ccbfd47952ec16873f7c41768384978d5ea1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_darwin, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:38 np0005597378 python3[79946]: ansible-ansible.legacy.async_status Invoked with jid=j218621281057.79233 mode=cleanup _async_dir=/root/.ansible_async
Jan 27 08:13:38 np0005597378 podman[79945]: 2026-01-27 13:13:38.058183327 +0000 UTC m=+0.127769776 container start 247376dcab25c50d2b46648eca6ccbfd47952ec16873f7c41768384978d5ea1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_darwin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:13:38 np0005597378 eager_darwin[79962]: 167 167
Jan 27 08:13:38 np0005597378 systemd[1]: libpod-247376dcab25c50d2b46648eca6ccbfd47952ec16873f7c41768384978d5ea1e.scope: Deactivated successfully.
Jan 27 08:13:38 np0005597378 conmon[79962]: conmon 247376dcab25c50d2b46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-247376dcab25c50d2b46648eca6ccbfd47952ec16873f7c41768384978d5ea1e.scope/container/memory.events
Jan 27 08:13:38 np0005597378 podman[79945]: 2026-01-27 13:13:38.063802558 +0000 UTC m=+0.133389037 container attach 247376dcab25c50d2b46648eca6ccbfd47952ec16873f7c41768384978d5ea1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_darwin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:38 np0005597378 podman[79945]: 2026-01-27 13:13:38.064293068 +0000 UTC m=+0.133879537 container died 247376dcab25c50d2b46648eca6ccbfd47952ec16873f7c41768384978d5ea1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_darwin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4833cf57c70764e7406a4a3e150e6b27533e6223c2f22a5dba79d7bee9123fcc-merged.mount: Deactivated successfully.
Jan 27 08:13:38 np0005597378 podman[79945]: 2026-01-27 13:13:38.097843831 +0000 UTC m=+0.167430300 container remove 247376dcab25c50d2b46648eca6ccbfd47952ec16873f7c41768384978d5ea1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:13:38 np0005597378 systemd[1]: libpod-conmon-247376dcab25c50d2b46648eca6ccbfd47952ec16873f7c41768384978d5ea1e.scope: Deactivated successfully.
Jan 27 08:13:38 np0005597378 systemd[1]: Reloading.
Jan 27 08:13:38 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:13:38 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:13:38 np0005597378 ceph-mon[75090]: Updating compute-0:/var/lib/ceph/4d8fd694-f443-5fb1-b612-70034b2f3c6e/config/ceph.client.admin.keyring
Jan 27 08:13:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 27 08:13:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 27 08:13:38 np0005597378 systemd[1]: Reloading.
Jan 27 08:13:38 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:13:38 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:13:38 np0005597378 systemd[1]: Starting Ceph crash.compute-0 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e...
Jan 27 08:13:38 np0005597378 python3[80084]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 08:13:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:13:38 np0005597378 podman[80133]: 2026-01-27 13:13:38.961444107 +0000 UTC m=+0.048313653 container create c82a6acd43c4110a458b97856a765bd053e9f013644e19ace7730ddc1b53d8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fccea8cd00a4a3ed0fc03492ec0c69c536c55abe595f3c85c4cf2fd65178b84a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fccea8cd00a4a3ed0fc03492ec0c69c536c55abe595f3c85c4cf2fd65178b84a/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fccea8cd00a4a3ed0fc03492ec0c69c536c55abe595f3c85c4cf2fd65178b84a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fccea8cd00a4a3ed0fc03492ec0c69c536c55abe595f3c85c4cf2fd65178b84a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:39 np0005597378 podman[80133]: 2026-01-27 13:13:39.025797074 +0000 UTC m=+0.112666640 container init c82a6acd43c4110a458b97856a765bd053e9f013644e19ace7730ddc1b53d8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 08:13:39 np0005597378 podman[80133]: 2026-01-27 13:13:38.940494305 +0000 UTC m=+0.027363901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:13:39 np0005597378 podman[80133]: 2026-01-27 13:13:39.038373495 +0000 UTC m=+0.125243031 container start c82a6acd43c4110a458b97856a765bd053e9f013644e19ace7730ddc1b53d8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-crash-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:13:39 np0005597378 bash[80133]: c82a6acd43c4110a458b97856a765bd053e9f013644e19ace7730ddc1b53d8ef
Jan 27 08:13:39 np0005597378 systemd[1]: Started Ceph crash.compute-0 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:39 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-crash-compute-0[80148]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:39 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev 89c53073-d338-4829-ad5a-bfe515a2bcff (Updating crash deployment (+1 -> 1))
Jan 27 08:13:39 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 89c53073-d338-4829-ad5a-bfe515a2bcff (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:39 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev cc90a19d-1193-489b-bd77-ce51890d82dd (Updating mgr deployment (+1 -> 2))
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.pmdbwg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.pmdbwg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pmdbwg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "mgr services"} : dispatch
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:13:39 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.pmdbwg on compute-0
Jan 27 08:13:39 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.pmdbwg on compute-0
Jan 27 08:13:39 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-crash-compute-0[80148]: 2026-01-27T13:13:39.208+0000 7f0b625e6640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 27 08:13:39 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-crash-compute-0[80148]: 2026-01-27T13:13:39.208+0000 7f0b625e6640 -1 AuthRegistry(0x7f0b5c052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 27 08:13:39 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-crash-compute-0[80148]: 2026-01-27T13:13:39.209+0000 7f0b625e6640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 27 08:13:39 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-crash-compute-0[80148]: 2026-01-27T13:13:39.209+0000 7f0b625e6640 -1 AuthRegistry(0x7f0b625e4fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 27 08:13:39 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-crash-compute-0[80148]: 2026-01-27T13:13:39.209+0000 7f0b5bfff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 27 08:13:39 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-crash-compute-0[80148]: 2026-01-27T13:13:39.210+0000 7f0b625e6640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 27 08:13:39 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-crash-compute-0[80148]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 27 08:13:39 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-crash-compute-0[80148]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 27 08:13:39 np0005597378 python3[80190]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:13:39 np0005597378 podman[80241]: 2026-01-27 13:13:39.35779372 +0000 UTC m=+0.042641871 container create c13b80e05c63c73b67b31dd9e15673ed3ed088cabc669d140f34875040c9fdee (image=quay.io/ceph/ceph:v20, name=nifty_curran, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: Deploying daemon crash.compute-0 on compute-0
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.pmdbwg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 27 08:13:39 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pmdbwg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 27 08:13:39 np0005597378 systemd[1]: Started libpod-conmon-c13b80e05c63c73b67b31dd9e15673ed3ed088cabc669d140f34875040c9fdee.scope.
Jan 27 08:13:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c683719442afe644594eea011cb6727fa7cf3a5dd090377a7f82716454f73992/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c683719442afe644594eea011cb6727fa7cf3a5dd090377a7f82716454f73992/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c683719442afe644594eea011cb6727fa7cf3a5dd090377a7f82716454f73992/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:39 np0005597378 podman[80241]: 2026-01-27 13:13:39.339261951 +0000 UTC m=+0.024110122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:39 np0005597378 podman[80241]: 2026-01-27 13:13:39.446715276 +0000 UTC m=+0.131563457 container init c13b80e05c63c73b67b31dd9e15673ed3ed088cabc669d140f34875040c9fdee (image=quay.io/ceph/ceph:v20, name=nifty_curran, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:39 np0005597378 podman[80241]: 2026-01-27 13:13:39.454784921 +0000 UTC m=+0.139633062 container start c13b80e05c63c73b67b31dd9e15673ed3ed088cabc669d140f34875040c9fdee (image=quay.io/ceph/ceph:v20, name=nifty_curran, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:13:39 np0005597378 podman[80241]: 2026-01-27 13:13:39.457920348 +0000 UTC m=+0.142768549 container attach c13b80e05c63c73b67b31dd9e15673ed3ed088cabc669d140f34875040c9fdee (image=quay.io/ceph/ceph:v20, name=nifty_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:13:39 np0005597378 podman[80320]: 2026-01-27 13:13:39.613385949 +0000 UTC m=+0.034298750 container create 9cc41804365d2b8ee58872121cf201795336baf8681667d8f9f2138e2b2e320b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 08:13:39 np0005597378 systemd[1]: Started libpod-conmon-9cc41804365d2b8ee58872121cf201795336baf8681667d8f9f2138e2b2e320b.scope.
Jan 27 08:13:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:39 np0005597378 podman[80320]: 2026-01-27 13:13:39.683316306 +0000 UTC m=+0.104229127 container init 9cc41804365d2b8ee58872121cf201795336baf8681667d8f9f2138e2b2e320b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_thompson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:39 np0005597378 podman[80320]: 2026-01-27 13:13:39.688825125 +0000 UTC m=+0.109737936 container start 9cc41804365d2b8ee58872121cf201795336baf8681667d8f9f2138e2b2e320b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:39 np0005597378 loving_thompson[80336]: 167 167
Jan 27 08:13:39 np0005597378 systemd[1]: libpod-9cc41804365d2b8ee58872121cf201795336baf8681667d8f9f2138e2b2e320b.scope: Deactivated successfully.
Jan 27 08:13:39 np0005597378 podman[80320]: 2026-01-27 13:13:39.692599427 +0000 UTC m=+0.113512248 container attach 9cc41804365d2b8ee58872121cf201795336baf8681667d8f9f2138e2b2e320b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 08:13:39 np0005597378 podman[80320]: 2026-01-27 13:13:39.693863114 +0000 UTC m=+0.114775915 container died 9cc41804365d2b8ee58872121cf201795336baf8681667d8f9f2138e2b2e320b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 08:13:39 np0005597378 podman[80320]: 2026-01-27 13:13:39.596664169 +0000 UTC m=+0.017576990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:13:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f0e9352f10b9039db29bab312268c34c4edae4b3bac6913f0c060b2d1b9d2d60-merged.mount: Deactivated successfully.
Jan 27 08:13:39 np0005597378 podman[80320]: 2026-01-27 13:13:39.735577214 +0000 UTC m=+0.156490015 container remove 9cc41804365d2b8ee58872121cf201795336baf8681667d8f9f2138e2b2e320b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_thompson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:39 np0005597378 systemd[1]: libpod-conmon-9cc41804365d2b8ee58872121cf201795336baf8681667d8f9f2138e2b2e320b.scope: Deactivated successfully.
Jan 27 08:13:39 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:39 np0005597378 systemd[1]: Reloading.
Jan 27 08:13:39 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:13:39 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:13:39 np0005597378 nifty_curran[80257]: 
Jan 27 08:13:39 np0005597378 nifty_curran[80257]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 27 08:13:39 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:13:39 np0005597378 podman[80241]: 2026-01-27 13:13:39.897311069 +0000 UTC m=+0.582159210 container died c13b80e05c63c73b67b31dd9e15673ed3ed088cabc669d140f34875040c9fdee (image=quay.io/ceph/ceph:v20, name=nifty_curran, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:40 np0005597378 systemd[1]: libpod-c13b80e05c63c73b67b31dd9e15673ed3ed088cabc669d140f34875040c9fdee.scope: Deactivated successfully.
Jan 27 08:13:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c683719442afe644594eea011cb6727fa7cf3a5dd090377a7f82716454f73992-merged.mount: Deactivated successfully.
Jan 27 08:13:40 np0005597378 podman[80241]: 2026-01-27 13:13:40.080681352 +0000 UTC m=+0.765529493 container remove c13b80e05c63c73b67b31dd9e15673ed3ed088cabc669d140f34875040c9fdee (image=quay.io/ceph/ceph:v20, name=nifty_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:13:40 np0005597378 systemd[1]: libpod-conmon-c13b80e05c63c73b67b31dd9e15673ed3ed088cabc669d140f34875040c9fdee.scope: Deactivated successfully.
Jan 27 08:13:40 np0005597378 systemd[1]: Reloading.
Jan 27 08:13:40 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:13:40 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:13:40 np0005597378 systemd[1]: Starting Ceph mgr.compute-0.pmdbwg for 4d8fd694-f443-5fb1-b612-70034b2f3c6e...
Jan 27 08:13:40 np0005597378 ceph-mon[75090]: Deploying daemon mgr.compute-0.pmdbwg on compute-0
Jan 27 08:13:40 np0005597378 podman[80518]: 2026-01-27 13:13:40.558113673 +0000 UTC m=+0.037926948 container create 539bda77bf592a3cea911cbe3737c1b7a277c0967e41137903a3ad809ea6ba78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-pmdbwg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:13:40 np0005597378 python3[80507]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:13:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfac9035980e34cd7f6b202c5ee2d491e3b8f66d7ba2ba912766238deec8cc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfac9035980e34cd7f6b202c5ee2d491e3b8f66d7ba2ba912766238deec8cc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfac9035980e34cd7f6b202c5ee2d491e3b8f66d7ba2ba912766238deec8cc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfac9035980e34cd7f6b202c5ee2d491e3b8f66d7ba2ba912766238deec8cc1/merged/var/lib/ceph/mgr/ceph-compute-0.pmdbwg supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:40 np0005597378 podman[80518]: 2026-01-27 13:13:40.627356586 +0000 UTC m=+0.107169891 container init 539bda77bf592a3cea911cbe3737c1b7a277c0967e41137903a3ad809ea6ba78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-pmdbwg, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:40 np0005597378 podman[80518]: 2026-01-27 13:13:40.539643395 +0000 UTC m=+0.019456680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:13:40 np0005597378 podman[80518]: 2026-01-27 13:13:40.636628956 +0000 UTC m=+0.116442231 container start 539bda77bf592a3cea911cbe3737c1b7a277c0967e41137903a3ad809ea6ba78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-pmdbwg, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:40 np0005597378 bash[80518]: 539bda77bf592a3cea911cbe3737c1b7a277c0967e41137903a3ad809ea6ba78
Jan 27 08:13:40 np0005597378 systemd[1]: Started Ceph mgr.compute-0.pmdbwg for 4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:13:40 np0005597378 ceph-mgr[80548]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:13:40 np0005597378 ceph-mgr[80548]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 27 08:13:40 np0005597378 ceph-mgr[80548]: pidfile_write: ignore empty --pid-file
Jan 27 08:13:40 np0005597378 podman[80536]: 2026-01-27 13:13:40.680638005 +0000 UTC m=+0.059747579 container create 36020486c27ba540d3ed3735ef13f0f0cf455400eaac04e30edc83e7b3582ee6 (image=quay.io/ceph/ceph:v20, name=serene_meitner, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 08:13:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:13:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:40 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'alerts'
Jan 27 08:13:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 27 08:13:40 np0005597378 systemd[1]: Started libpod-conmon-36020486c27ba540d3ed3735ef13f0f0cf455400eaac04e30edc83e7b3582ee6.scope.
Jan 27 08:13:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:40 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev cc90a19d-1193-489b-bd77-ce51890d82dd (Updating mgr deployment (+1 -> 2))
Jan 27 08:13:40 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event cc90a19d-1193-489b-bd77-ce51890d82dd (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 27 08:13:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 27 08:13:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:40 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da55b132e1e0637b8a616329f2508c66c3e28db0a52152dafcc4d2f96f48cc0f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da55b132e1e0637b8a616329f2508c66c3e28db0a52152dafcc4d2f96f48cc0f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da55b132e1e0637b8a616329f2508c66c3e28db0a52152dafcc4d2f96f48cc0f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:40 np0005597378 podman[80536]: 2026-01-27 13:13:40.663198478 +0000 UTC m=+0.042308072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:40 np0005597378 podman[80536]: 2026-01-27 13:13:40.766778251 +0000 UTC m=+0.145887845 container init 36020486c27ba540d3ed3735ef13f0f0cf455400eaac04e30edc83e7b3582ee6 (image=quay.io/ceph/ceph:v20, name=serene_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 08:13:40 np0005597378 podman[80536]: 2026-01-27 13:13:40.773030076 +0000 UTC m=+0.152139650 container start 36020486c27ba540d3ed3735ef13f0f0cf455400eaac04e30edc83e7b3582ee6 (image=quay.io/ceph/ceph:v20, name=serene_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 08:13:40 np0005597378 podman[80536]: 2026-01-27 13:13:40.776089592 +0000 UTC m=+0.155199166 container attach 36020486c27ba540d3ed3735ef13f0f0cf455400eaac04e30edc83e7b3582ee6 (image=quay.io/ceph/ceph:v20, name=serene_meitner, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 27 08:13:40 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'balancer'
Jan 27 08:13:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:13:40 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'cephadm'
Jan 27 08:13:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 27 08:13:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/113128379' entity='client.admin' 
Jan 27 08:13:41 np0005597378 systemd[1]: libpod-36020486c27ba540d3ed3735ef13f0f0cf455400eaac04e30edc83e7b3582ee6.scope: Deactivated successfully.
Jan 27 08:13:41 np0005597378 podman[80536]: 2026-01-27 13:13:41.247996134 +0000 UTC m=+0.627105728 container died 36020486c27ba540d3ed3735ef13f0f0cf455400eaac04e30edc83e7b3582ee6 (image=quay.io/ceph/ceph:v20, name=serene_meitner, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:13:41 np0005597378 ansible-async_wrapper.py[79290]: Done in kid B.
Jan 27 08:13:41 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:41 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:41 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:41 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:41 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/113128379' entity='client.admin' 
Jan 27 08:13:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-da55b132e1e0637b8a616329f2508c66c3e28db0a52152dafcc4d2f96f48cc0f-merged.mount: Deactivated successfully.
Jan 27 08:13:41 np0005597378 podman[80536]: 2026-01-27 13:13:41.637181253 +0000 UTC m=+1.016290827 container remove 36020486c27ba540d3ed3735ef13f0f0cf455400eaac04e30edc83e7b3582ee6 (image=quay.io/ceph/ceph:v20, name=serene_meitner, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:41 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'crash'
Jan 27 08:13:41 np0005597378 systemd[1]: libpod-conmon-36020486c27ba540d3ed3735ef13f0f0cf455400eaac04e30edc83e7b3582ee6.scope: Deactivated successfully.
Jan 27 08:13:41 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:41 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'dashboard'
Jan 27 08:13:41 np0005597378 podman[80713]: 2026-01-27 13:13:41.920072921 +0000 UTC m=+0.659803333 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:13:41 np0005597378 python3[80771]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:13:42 np0005597378 podman[80778]: 2026-01-27 13:13:42.027980557 +0000 UTC m=+0.035156289 container create 1b0295a466948f3d746fc275833f4c8aa4d74571f8251f4179a5829f8dd28fcb (image=quay.io/ceph/ceph:v20, name=eager_gould, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:42 np0005597378 podman[80713]: 2026-01-27 13:13:42.02860591 +0000 UTC m=+0.768336322 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:42 np0005597378 systemd[1]: Started libpod-conmon-1b0295a466948f3d746fc275833f4c8aa4d74571f8251f4179a5829f8dd28fcb.scope.
Jan 27 08:13:42 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fdddb506fa05f732dcb67af65a1bc7c88b4fcb02c2bfc1f3f2b18c2f3643c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fdddb506fa05f732dcb67af65a1bc7c88b4fcb02c2bfc1f3f2b18c2f3643c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fdddb506fa05f732dcb67af65a1bc7c88b4fcb02c2bfc1f3f2b18c2f3643c9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:42 np0005597378 podman[80778]: 2026-01-27 13:13:42.01094137 +0000 UTC m=+0.018117122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:42 np0005597378 podman[80778]: 2026-01-27 13:13:42.113069311 +0000 UTC m=+0.120245073 container init 1b0295a466948f3d746fc275833f4c8aa4d74571f8251f4179a5829f8dd28fcb (image=quay.io/ceph/ceph:v20, name=eager_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:13:42 np0005597378 podman[80778]: 2026-01-27 13:13:42.119091681 +0000 UTC m=+0.126267413 container start 1b0295a466948f3d746fc275833f4c8aa4d74571f8251f4179a5829f8dd28fcb (image=quay.io/ceph/ceph:v20, name=eager_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:13:42 np0005597378 podman[80778]: 2026-01-27 13:13:42.122257289 +0000 UTC m=+0.129433041 container attach 1b0295a466948f3d746fc275833f4c8aa4d74571f8251f4179a5829f8dd28fcb (image=quay.io/ceph/ceph:v20, name=eager_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:42 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'devicehealth'
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3525328356' entity='client.admin' 
Jan 27 08:13:42 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 27 08:13:42 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:13:42 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 27 08:13:42 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 27 08:13:42 np0005597378 podman[80778]: 2026-01-27 13:13:42.575103551 +0000 UTC m=+0.582279283 container died 1b0295a466948f3d746fc275833f4c8aa4d74571f8251f4179a5829f8dd28fcb (image=quay.io/ceph/ceph:v20, name=eager_gould, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:42 np0005597378 systemd[1]: libpod-1b0295a466948f3d746fc275833f4c8aa4d74571f8251f4179a5829f8dd28fcb.scope: Deactivated successfully.
Jan 27 08:13:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay-23fdddb506fa05f732dcb67af65a1bc7c88b4fcb02c2bfc1f3f2b18c2f3643c9-merged.mount: Deactivated successfully.
Jan 27 08:13:42 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'diskprediction_local'
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:13:42 np0005597378 podman[80778]: 2026-01-27 13:13:42.617587216 +0000 UTC m=+0.624762948 container remove 1b0295a466948f3d746fc275833f4c8aa4d74571f8251f4179a5829f8dd28fcb (image=quay.io/ceph/ceph:v20, name=eager_gould, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 08:13:42 np0005597378 systemd[1]: libpod-conmon-1b0295a466948f3d746fc275833f4c8aa4d74571f8251f4179a5829f8dd28fcb.scope: Deactivated successfully.
Jan 27 08:13:42 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-pmdbwg[80533]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 27 08:13:42 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-pmdbwg[80533]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 27 08:13:42 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-pmdbwg[80533]:  from numpy import show_config as show_numpy_config
Jan 27 08:13:42 np0005597378 ceph-mgr[75385]: [progress INFO root] Writing back 2 completed events
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 27 08:13:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:42 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'influx'
Jan 27 08:13:42 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'insights'
Jan 27 08:13:42 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'iostat'
Jan 27 08:13:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:13:42 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'k8sevents'
Jan 27 08:13:43 np0005597378 podman[81038]: 2026-01-27 13:13:43.002797009 +0000 UTC m=+0.043553019 container create b695a034d3af989091d74270a30f810c30adaad20a455e502d9ae9a11adf10f6 (image=quay.io/ceph/ceph:v20, name=funny_buck, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:43 np0005597378 systemd[1]: Started libpod-conmon-b695a034d3af989091d74270a30f810c30adaad20a455e502d9ae9a11adf10f6.scope.
Jan 27 08:13:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:43 np0005597378 python3[81029]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:13:43 np0005597378 podman[81038]: 2026-01-27 13:13:42.981360918 +0000 UTC m=+0.022116978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:43 np0005597378 podman[81038]: 2026-01-27 13:13:43.077691154 +0000 UTC m=+0.118447184 container init b695a034d3af989091d74270a30f810c30adaad20a455e502d9ae9a11adf10f6 (image=quay.io/ceph/ceph:v20, name=funny_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:43 np0005597378 podman[81038]: 2026-01-27 13:13:43.087518936 +0000 UTC m=+0.128274956 container start b695a034d3af989091d74270a30f810c30adaad20a455e502d9ae9a11adf10f6 (image=quay.io/ceph/ceph:v20, name=funny_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 08:13:43 np0005597378 funny_buck[81054]: 167 167
Jan 27 08:13:43 np0005597378 systemd[1]: libpod-b695a034d3af989091d74270a30f810c30adaad20a455e502d9ae9a11adf10f6.scope: Deactivated successfully.
Jan 27 08:13:43 np0005597378 podman[81038]: 2026-01-27 13:13:43.092999174 +0000 UTC m=+0.133755204 container attach b695a034d3af989091d74270a30f810c30adaad20a455e502d9ae9a11adf10f6 (image=quay.io/ceph/ceph:v20, name=funny_buck, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:13:43 np0005597378 podman[81038]: 2026-01-27 13:13:43.093507405 +0000 UTC m=+0.134263415 container died b695a034d3af989091d74270a30f810c30adaad20a455e502d9ae9a11adf10f6 (image=quay.io/ceph/ceph:v20, name=funny_buck, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-950cb16329dbd9bb44acc855b92958e5f9afef4e3047e93793ad643b17085916-merged.mount: Deactivated successfully.
Jan 27 08:13:43 np0005597378 podman[81038]: 2026-01-27 13:13:43.12852972 +0000 UTC m=+0.169285730 container remove b695a034d3af989091d74270a30f810c30adaad20a455e502d9ae9a11adf10f6 (image=quay.io/ceph/ceph:v20, name=funny_buck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 27 08:13:43 np0005597378 podman[81057]: 2026-01-27 13:13:43.149966901 +0000 UTC m=+0.061587678 container create 3a2a4383c9fcc8568be10eab0b2c8681e6087c07aff99742d5a905b58c5d853c (image=quay.io/ceph/ceph:v20, name=romantic_poincare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:43 np0005597378 systemd[1]: libpod-conmon-b695a034d3af989091d74270a30f810c30adaad20a455e502d9ae9a11adf10f6.scope: Deactivated successfully.
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:13:43 np0005597378 systemd[1]: Started libpod-conmon-3a2a4383c9fcc8568be10eab0b2c8681e6087c07aff99742d5a905b58c5d853c.scope.
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:43 np0005597378 podman[81057]: 2026-01-27 13:13:43.11231056 +0000 UTC m=+0.023931357 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:43 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.uujfpe (unknown last config time)...
Jan 27 08:13:43 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.uujfpe (unknown last config time)...
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.uujfpe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.uujfpe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "mgr services"} : dispatch
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:13:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:43 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.uujfpe on compute-0
Jan 27 08:13:43 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.uujfpe on compute-0
Jan 27 08:13:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26bf5d1f014f3ae5d8dca3a0843a11c6fab57c2e38c2d75a5894e4c1ff307383/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26bf5d1f014f3ae5d8dca3a0843a11c6fab57c2e38c2d75a5894e4c1ff307383/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26bf5d1f014f3ae5d8dca3a0843a11c6fab57c2e38c2d75a5894e4c1ff307383/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:43 np0005597378 podman[81057]: 2026-01-27 13:13:43.232891529 +0000 UTC m=+0.144512316 container init 3a2a4383c9fcc8568be10eab0b2c8681e6087c07aff99742d5a905b58c5d853c (image=quay.io/ceph/ceph:v20, name=romantic_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 08:13:43 np0005597378 podman[81057]: 2026-01-27 13:13:43.238580162 +0000 UTC m=+0.150200939 container start 3a2a4383c9fcc8568be10eab0b2c8681e6087c07aff99742d5a905b58c5d853c (image=quay.io/ceph/ceph:v20, name=romantic_poincare, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 08:13:43 np0005597378 podman[81057]: 2026-01-27 13:13:43.259364979 +0000 UTC m=+0.170985776 container attach 3a2a4383c9fcc8568be10eab0b2c8681e6087c07aff99742d5a905b58c5d853c (image=quay.io/ceph/ceph:v20, name=romantic_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 27 08:13:43 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'localpool'
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3525328356' entity='client.admin' 
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: Reconfiguring mgr.compute-0.uujfpe (unknown last config time)...
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.uujfpe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: Reconfiguring daemon mgr.compute-0.uujfpe on compute-0
Jan 27 08:13:43 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'mds_autoscaler'
Jan 27 08:13:43 np0005597378 podman[81173]: 2026-01-27 13:13:43.624301346 +0000 UTC m=+0.035225190 container create 24dc11fda0c5d24ef944a109083e89b763f8a6e3eb1658a76dfb3bef20bc581c (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1946990580' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 27 08:13:43 np0005597378 systemd[1]: Started libpod-conmon-24dc11fda0c5d24ef944a109083e89b763f8a6e3eb1658a76dfb3bef20bc581c.scope.
Jan 27 08:13:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:43 np0005597378 podman[81173]: 2026-01-27 13:13:43.681354906 +0000 UTC m=+0.092278750 container init 24dc11fda0c5d24ef944a109083e89b763f8a6e3eb1658a76dfb3bef20bc581c (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 08:13:43 np0005597378 podman[81173]: 2026-01-27 13:13:43.686289872 +0000 UTC m=+0.097213716 container start 24dc11fda0c5d24ef944a109083e89b763f8a6e3eb1658a76dfb3bef20bc581c (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:43 np0005597378 gifted_ritchie[81191]: 167 167
Jan 27 08:13:43 np0005597378 systemd[1]: libpod-24dc11fda0c5d24ef944a109083e89b763f8a6e3eb1658a76dfb3bef20bc581c.scope: Deactivated successfully.
Jan 27 08:13:43 np0005597378 podman[81173]: 2026-01-27 13:13:43.691064746 +0000 UTC m=+0.101988590 container attach 24dc11fda0c5d24ef944a109083e89b763f8a6e3eb1658a76dfb3bef20bc581c (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 08:13:43 np0005597378 podman[81173]: 2026-01-27 13:13:43.691318361 +0000 UTC m=+0.102242215 container died 24dc11fda0c5d24ef944a109083e89b763f8a6e3eb1658a76dfb3bef20bc581c (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Jan 27 08:13:43 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'mirroring'
Jan 27 08:13:43 np0005597378 podman[81173]: 2026-01-27 13:13:43.609736403 +0000 UTC m=+0.020660267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1685b1798e57fc92094d78d61a9cc35732c2604d3c9d9e33c651c64a92ae0824-merged.mount: Deactivated successfully.
Jan 27 08:13:43 np0005597378 podman[81173]: 2026-01-27 13:13:43.728607195 +0000 UTC m=+0.139531039 container remove 24dc11fda0c5d24ef944a109083e89b763f8a6e3eb1658a76dfb3bef20bc581c (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:43 np0005597378 systemd[1]: libpod-conmon-24dc11fda0c5d24ef944a109083e89b763f8a6e3eb1658a76dfb3bef20bc581c.scope: Deactivated successfully.
Jan 27 08:13:43 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:43 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'nfs'
Jan 27 08:13:44 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'orchestrator'
Jan 27 08:13:44 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'osd_perf_query'
Jan 27 08:13:44 np0005597378 podman[81304]: 2026-01-27 13:13:44.30228166 +0000 UTC m=+0.045913961 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Jan 27 08:13:44 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'osd_support'
Jan 27 08:13:44 np0005597378 podman[81304]: 2026-01-27 13:13:44.417678598 +0000 UTC m=+0.161310809 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:44 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'pg_autoscaler'
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1946990580' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1946990580' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 27 08:13:44 np0005597378 romantic_poincare[81085]: set require_min_compat_client to mimic
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 27 08:13:44 np0005597378 systemd[1]: libpod-3a2a4383c9fcc8568be10eab0b2c8681e6087c07aff99742d5a905b58c5d853c.scope: Deactivated successfully.
Jan 27 08:13:44 np0005597378 podman[81057]: 2026-01-27 13:13:44.485570641 +0000 UTC m=+1.397191498 container died 3a2a4383c9fcc8568be10eab0b2c8681e6087c07aff99742d5a905b58c5d853c (image=quay.io/ceph/ceph:v20, name=romantic_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:13:44 np0005597378 systemd[1]: var-lib-containers-storage-overlay-26bf5d1f014f3ae5d8dca3a0843a11c6fab57c2e38c2d75a5894e4c1ff307383-merged.mount: Deactivated successfully.
Jan 27 08:13:44 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'progress'
Jan 27 08:13:44 np0005597378 podman[81057]: 2026-01-27 13:13:44.522825115 +0000 UTC m=+1.434445892 container remove 3a2a4383c9fcc8568be10eab0b2c8681e6087c07aff99742d5a905b58c5d853c (image=quay.io/ceph/ceph:v20, name=romantic_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:44 np0005597378 systemd[1]: libpod-conmon-3a2a4383c9fcc8568be10eab0b2c8681e6087c07aff99742d5a905b58c5d853c.scope: Deactivated successfully.
Jan 27 08:13:44 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'prometheus'
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:13:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:13:44 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'rbd_support'
Jan 27 08:13:45 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'rgw'
Jan 27 08:13:45 np0005597378 python3[81482]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:13:45 np0005597378 podman[81483]: 2026-01-27 13:13:45.214048633 +0000 UTC m=+0.051962100 container create 55d583b2488adf86a65aae1e461dbcb1c1f7343243c20de6c7890b586a7556b3 (image=quay.io/ceph/ceph:v20, name=epic_ptolemy, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:45 np0005597378 systemd[1]: Started libpod-conmon-55d583b2488adf86a65aae1e461dbcb1c1f7343243c20de6c7890b586a7556b3.scope.
Jan 27 08:13:45 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:45 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'rook'
Jan 27 08:13:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e120d8ebb07fc04cd380958a8c900b4f68ef55441b9afab6a520c717fcd4a8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e120d8ebb07fc04cd380958a8c900b4f68ef55441b9afab6a520c717fcd4a8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e120d8ebb07fc04cd380958a8c900b4f68ef55441b9afab6a520c717fcd4a8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:45 np0005597378 podman[81483]: 2026-01-27 13:13:45.189715579 +0000 UTC m=+0.027629086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:45 np0005597378 podman[81483]: 2026-01-27 13:13:45.300766953 +0000 UTC m=+0.138680480 container init 55d583b2488adf86a65aae1e461dbcb1c1f7343243c20de6c7890b586a7556b3 (image=quay.io/ceph/ceph:v20, name=epic_ptolemy, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:45 np0005597378 podman[81483]: 2026-01-27 13:13:45.308127972 +0000 UTC m=+0.146041429 container start 55d583b2488adf86a65aae1e461dbcb1c1f7343243c20de6c7890b586a7556b3 (image=quay.io/ceph/ceph:v20, name=epic_ptolemy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 08:13:45 np0005597378 podman[81483]: 2026-01-27 13:13:45.311346941 +0000 UTC m=+0.149260388 container attach 55d583b2488adf86a65aae1e461dbcb1c1f7343243c20de6c7890b586a7556b3 (image=quay.io/ceph/ceph:v20, name=epic_ptolemy, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:13:45 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1946990580' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 27 08:13:45 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:45 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:45 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:13:45 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:45 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:13:45 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:45 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'selftest'
Jan 27 08:13:45 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'smb'
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Added host compute-0
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev 22504ac5-93b6-4a06-bf29-cab5b26eded3 (Updating mgr deployment (-1 -> 1))
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.pmdbwg from compute-0 -- ports [8765]
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.pmdbwg from compute-0 -- ports [8765]
Jan 27 08:13:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:46 np0005597378 epic_ptolemy[81498]: Added host 'compute-0' with addr '192.168.122.100'
Jan 27 08:13:46 np0005597378 epic_ptolemy[81498]: Scheduled mon update...
Jan 27 08:13:46 np0005597378 epic_ptolemy[81498]: Scheduled mgr update...
Jan 27 08:13:46 np0005597378 epic_ptolemy[81498]: Scheduled osd.default_drive_group update...
Jan 27 08:13:46 np0005597378 systemd[1]: libpod-55d583b2488adf86a65aae1e461dbcb1c1f7343243c20de6c7890b586a7556b3.scope: Deactivated successfully.
Jan 27 08:13:46 np0005597378 podman[81483]: 2026-01-27 13:13:46.219678513 +0000 UTC m=+1.057591970 container died 55d583b2488adf86a65aae1e461dbcb1c1f7343243c20de6c7890b586a7556b3 (image=quay.io/ceph/ceph:v20, name=epic_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:13:46 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b7e120d8ebb07fc04cd380958a8c900b4f68ef55441b9afab6a520c717fcd4a8-merged.mount: Deactivated successfully.
Jan 27 08:13:46 np0005597378 podman[81483]: 2026-01-27 13:13:46.253198574 +0000 UTC m=+1.091112031 container remove 55d583b2488adf86a65aae1e461dbcb1c1f7343243c20de6c7890b586a7556b3 (image=quay.io/ceph/ceph:v20, name=epic_ptolemy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:46 np0005597378 systemd[1]: libpod-conmon-55d583b2488adf86a65aae1e461dbcb1c1f7343243c20de6c7890b586a7556b3.scope: Deactivated successfully.
Jan 27 08:13:46 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'snap_schedule'
Jan 27 08:13:46 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'stats'
Jan 27 08:13:46 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'status'
Jan 27 08:13:46 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'telegraf'
Jan 27 08:13:46 np0005597378 systemd[1]: Stopping Ceph mgr.compute-0.pmdbwg for 4d8fd694-f443-5fb1-b612-70034b2f3c6e...
Jan 27 08:13:46 np0005597378 ceph-mgr[80548]: mgr[py] Loading python module 'telemetry'
Jan 27 08:13:46 np0005597378 python3[81698]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:13:46 np0005597378 podman[81724]: 2026-01-27 13:13:46.745614137 +0000 UTC m=+0.082714407 container died 539bda77bf592a3cea911cbe3737c1b7a277c0967e41137903a3ad809ea6ba78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-pmdbwg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:13:46 np0005597378 podman[81740]: 2026-01-27 13:13:46.7490041 +0000 UTC m=+0.034488058 container create 8dbac12baa6dbc69b213aa48d7dd4bb2e811b3020bb1af134eecf7766c2259d6 (image=quay.io/ceph/ceph:v20, name=sweet_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:46 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5dfac9035980e34cd7f6b202c5ee2d491e3b8f66d7ba2ba912766238deec8cc1-merged.mount: Deactivated successfully.
Jan 27 08:13:46 np0005597378 podman[81724]: 2026-01-27 13:13:46.803917901 +0000 UTC m=+0.141018161 container remove 539bda77bf592a3cea911cbe3737c1b7a277c0967e41137903a3ad809ea6ba78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-pmdbwg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 08:13:46 np0005597378 bash[81724]: ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-pmdbwg
Jan 27 08:13:46 np0005597378 systemd[1]: Started libpod-conmon-8dbac12baa6dbc69b213aa48d7dd4bb2e811b3020bb1af134eecf7766c2259d6.scope.
Jan 27 08:13:46 np0005597378 systemd[1]: ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e@mgr.compute-0.pmdbwg.service: Main process exited, code=exited, status=143/n/a
Jan 27 08:13:46 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378ec3a97679d26dcc53721b5d79a5dfcd275ce35b4fb84ca4a1a9068249a7b2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378ec3a97679d26dcc53721b5d79a5dfcd275ce35b4fb84ca4a1a9068249a7b2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378ec3a97679d26dcc53721b5d79a5dfcd275ce35b4fb84ca4a1a9068249a7b2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:46 np0005597378 podman[81740]: 2026-01-27 13:13:46.734337391 +0000 UTC m=+0.019821369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:13:46 np0005597378 podman[81740]: 2026-01-27 13:13:46.837818862 +0000 UTC m=+0.123302840 container init 8dbac12baa6dbc69b213aa48d7dd4bb2e811b3020bb1af134eecf7766c2259d6 (image=quay.io/ceph/ceph:v20, name=sweet_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 08:13:46 np0005597378 podman[81740]: 2026-01-27 13:13:46.844231606 +0000 UTC m=+0.129715564 container start 8dbac12baa6dbc69b213aa48d7dd4bb2e811b3020bb1af134eecf7766c2259d6 (image=quay.io/ceph/ceph:v20, name=sweet_gauss, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:46 np0005597378 podman[81740]: 2026-01-27 13:13:46.848318747 +0000 UTC m=+0.133802745 container attach 8dbac12baa6dbc69b213aa48d7dd4bb2e811b3020bb1af134eecf7766c2259d6 (image=quay.io/ceph/ceph:v20, name=sweet_gauss, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:13:46 np0005597378 systemd[1]: ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e@mgr.compute-0.pmdbwg.service: Failed with result 'exit-code'.
Jan 27 08:13:46 np0005597378 systemd[1]: Stopped Ceph mgr.compute-0.pmdbwg for 4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:13:46 np0005597378 systemd[1]: ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e@mgr.compute-0.pmdbwg.service: Consumed 6.790s CPU time, 414.0M memory peak, read 0B from disk, written 186.5K to disk.
Jan 27 08:13:46 np0005597378 systemd[1]: Reloading.
Jan 27 08:13:47 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:13:47 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: Added host compute-0
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: Saving service mon spec with placement compute-0
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: Saving service mgr spec with placement compute-0
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: Saving service osd.default_drive_group spec with placement compute-0
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: Removing daemon mgr.compute-0.pmdbwg from compute-0 -- ports [8765]
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:47 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.pmdbwg
Jan 27 08:13:47 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.pmdbwg
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.pmdbwg"} v 0)
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.pmdbwg"} : dispatch
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.pmdbwg"}]': finished
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:47 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev 22504ac5-93b6-4a06-bf29-cab5b26eded3 (Updating mgr deployment (-1 -> 1))
Jan 27 08:13:47 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 22504ac5-93b6-4a06-bf29-cab5b26eded3 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2997924803' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 27 08:13:47 np0005597378 sweet_gauss[81768]: 
Jan 27 08:13:47 np0005597378 sweet_gauss[81768]: {"fsid":"4d8fd694-f443-5fb1-b612-70034b2f3c6e","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":49,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-27T13:12:53:733380+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-27T13:12:53.736512+0000","services":{}},"progress_events":{"22504ac5-93b6-4a06-bf29-cab5b26eded3":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 27 08:13:47 np0005597378 systemd[1]: libpod-8dbac12baa6dbc69b213aa48d7dd4bb2e811b3020bb1af134eecf7766c2259d6.scope: Deactivated successfully.
Jan 27 08:13:47 np0005597378 podman[81740]: 2026-01-27 13:13:47.365432982 +0000 UTC m=+0.650916960 container died 8dbac12baa6dbc69b213aa48d7dd4bb2e811b3020bb1af134eecf7766c2259d6 (image=quay.io/ceph/ceph:v20, name=sweet_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-378ec3a97679d26dcc53721b5d79a5dfcd275ce35b4fb84ca4a1a9068249a7b2-merged.mount: Deactivated successfully.
Jan 27 08:13:47 np0005597378 podman[81740]: 2026-01-27 13:13:47.417779683 +0000 UTC m=+0.703263641 container remove 8dbac12baa6dbc69b213aa48d7dd4bb2e811b3020bb1af134eecf7766c2259d6 (image=quay.io/ceph/ceph:v20, name=sweet_gauss, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:47 np0005597378 systemd[1]: libpod-conmon-8dbac12baa6dbc69b213aa48d7dd4bb2e811b3020bb1af134eecf7766c2259d6.scope: Deactivated successfully.
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:13:47 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:47 np0005597378 ceph-mgr[75385]: [progress INFO root] Writing back 3 completed events
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 27 08:13:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:13:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:13:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:13:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:13:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:13:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:13:47 np0005597378 podman[81996]: 2026-01-27 13:13:47.830930124 +0000 UTC m=+0.049250068 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:47 np0005597378 podman[81996]: 2026-01-27 13:13:47.920704384 +0000 UTC m=+0.139024328 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: Removing key for mgr.compute-0.pmdbwg
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.pmdbwg"} : dispatch
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.pmdbwg"}]': finished
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:13:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:13:48 np0005597378 podman[82154]: 2026-01-27 13:13:48.695593839 +0000 UTC m=+0.041355924 container create a76e21d68492080b9a84ad70c3f8a0009b79085baddeb65a40f0344b847385c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_easley, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:13:48 np0005597378 systemd[1]: Started libpod-conmon-a76e21d68492080b9a84ad70c3f8a0009b79085baddeb65a40f0344b847385c5.scope.
Jan 27 08:13:48 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:48 np0005597378 podman[82154]: 2026-01-27 13:13:48.677116758 +0000 UTC m=+0.022878873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:13:48 np0005597378 podman[82154]: 2026-01-27 13:13:48.775564721 +0000 UTC m=+0.121326806 container init a76e21d68492080b9a84ad70c3f8a0009b79085baddeb65a40f0344b847385c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_easley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:13:48 np0005597378 podman[82154]: 2026-01-27 13:13:48.783336693 +0000 UTC m=+0.129098758 container start a76e21d68492080b9a84ad70c3f8a0009b79085baddeb65a40f0344b847385c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 08:13:48 np0005597378 podman[82154]: 2026-01-27 13:13:48.786892009 +0000 UTC m=+0.132654124 container attach a76e21d68492080b9a84ad70c3f8a0009b79085baddeb65a40f0344b847385c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:13:48 np0005597378 lucid_easley[82170]: 167 167
Jan 27 08:13:48 np0005597378 systemd[1]: libpod-a76e21d68492080b9a84ad70c3f8a0009b79085baddeb65a40f0344b847385c5.scope: Deactivated successfully.
Jan 27 08:13:48 np0005597378 conmon[82170]: conmon a76e21d68492080b9a84 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a76e21d68492080b9a84ad70c3f8a0009b79085baddeb65a40f0344b847385c5.scope/container/memory.events
Jan 27 08:13:48 np0005597378 podman[82154]: 2026-01-27 13:13:48.789201272 +0000 UTC m=+0.134963347 container died a76e21d68492080b9a84ad70c3f8a0009b79085baddeb65a40f0344b847385c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_easley, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 08:13:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0261e4616fd933ac1441b4b870b35d8c873839370d4705128233d00ae4c72836-merged.mount: Deactivated successfully.
Jan 27 08:13:48 np0005597378 podman[82154]: 2026-01-27 13:13:48.820679056 +0000 UTC m=+0.166441131 container remove a76e21d68492080b9a84ad70c3f8a0009b79085baddeb65a40f0344b847385c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_easley, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:48 np0005597378 systemd[1]: libpod-conmon-a76e21d68492080b9a84ad70c3f8a0009b79085baddeb65a40f0344b847385c5.scope: Deactivated successfully.
Jan 27 08:13:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:13:48 np0005597378 podman[82195]: 2026-01-27 13:13:48.975992375 +0000 UTC m=+0.048580300 container create 71785b9ceabe529e7eeea779bb773b04b81bba8daa7e61c88a3475e7082e93d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:49 np0005597378 systemd[1]: Started libpod-conmon-71785b9ceabe529e7eeea779bb773b04b81bba8daa7e61c88a3475e7082e93d3.scope.
Jan 27 08:13:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ed2abee385f0401c1427a43df7d2b0722bcbf461ba9895ef6a13ae28914bfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ed2abee385f0401c1427a43df7d2b0722bcbf461ba9895ef6a13ae28914bfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ed2abee385f0401c1427a43df7d2b0722bcbf461ba9895ef6a13ae28914bfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ed2abee385f0401c1427a43df7d2b0722bcbf461ba9895ef6a13ae28914bfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ed2abee385f0401c1427a43df7d2b0722bcbf461ba9895ef6a13ae28914bfd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:49 np0005597378 podman[82195]: 2026-01-27 13:13:48.952651342 +0000 UTC m=+0.025239267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:13:49 np0005597378 podman[82195]: 2026-01-27 13:13:49.052981346 +0000 UTC m=+0.125569271 container init 71785b9ceabe529e7eeea779bb773b04b81bba8daa7e61c88a3475e7082e93d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 08:13:49 np0005597378 podman[82195]: 2026-01-27 13:13:49.061444266 +0000 UTC m=+0.134032171 container start 71785b9ceabe529e7eeea779bb773b04b81bba8daa7e61c88a3475e7082e93d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_dubinsky, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 08:13:49 np0005597378 podman[82195]: 2026-01-27 13:13:49.064795627 +0000 UTC m=+0.137383532 container attach 71785b9ceabe529e7eeea779bb773b04b81bba8daa7e61c88a3475e7082e93d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:13:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:13:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:13:49 np0005597378 jovial_dubinsky[82211]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:13:49 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:13:49 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:49 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:13:49 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 7401de7e-4bb5-49b0-a16c-bddf5aaf400a
Jan 27 08:13:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a"} v 0)
Jan 27 08:13:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3724632087' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a"} : dispatch
Jan 27 08:13:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 27 08:13:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:13:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3724632087' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a"}]': finished
Jan 27 08:13:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 27 08:13:50 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 27 08:13:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 27 08:13:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 27 08:13:50 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:13:50 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3724632087' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a"} : dispatch
Jan 27 08:13:50 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3724632087' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a"}]': finished
Jan 27 08:13:50 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 27 08:13:50 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 27 08:13:50 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 27 08:13:50 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 27 08:13:50 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 27 08:13:50 np0005597378 lvm[82304]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:13:50 np0005597378 lvm[82304]: VG ceph_vg0 finished
Jan 27 08:13:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:13:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 27 08:13:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1677025670' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 27 08:13:50 np0005597378 jovial_dubinsky[82211]: stderr: got monmap epoch 1
Jan 27 08:13:50 np0005597378 jovial_dubinsky[82211]: --> Creating keyring file for osd.0
Jan 27 08:13:50 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 27 08:13:51 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 27 08:13:51 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 7401de7e-4bb5-49b0-a16c-bddf5aaf400a --setuser ceph --setgroup ceph
Jan 27 08:13:51 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 27 08:13:51 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 27 08:13:51 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:51 np0005597378 jovial_dubinsky[82211]: stderr: 2026-01-27T13:13:51.054+0000 7f4db5f668c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 27 08:13:51 np0005597378 jovial_dubinsky[82211]: stderr: 2026-01-27T13:13:51.078+0000 7f4db5f668c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 27 08:13:51 np0005597378 jovial_dubinsky[82211]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 534ad76f-0fe2-4925-988a-e0878f02e0e5
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: Cluster is now healthy
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "534ad76f-0fe2-4925-988a-e0878f02e0e5"} v 0)
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2735145355' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "534ad76f-0fe2-4925-988a-e0878f02e0e5"} : dispatch
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2735145355' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "534ad76f-0fe2-4925-988a-e0878f02e0e5"}]': finished
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:13:52 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:13:52 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:13:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:13:52 np0005597378 lvm[83260]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:13:52 np0005597378 lvm[83260]: VG ceph_vg1 finished
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 27 08:13:52 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 27 08:13:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:13:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 27 08:13:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1202636581' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 27 08:13:53 np0005597378 jovial_dubinsky[82211]: stderr: got monmap epoch 1
Jan 27 08:13:53 np0005597378 jovial_dubinsky[82211]: --> Creating keyring file for osd.1
Jan 27 08:13:53 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 27 08:13:53 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 27 08:13:53 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 534ad76f-0fe2-4925-988a-e0878f02e0e5 --setuser ceph --setgroup ceph
Jan 27 08:13:53 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/2735145355' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "534ad76f-0fe2-4925-988a-e0878f02e0e5"} : dispatch
Jan 27 08:13:53 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/2735145355' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "534ad76f-0fe2-4925-988a-e0878f02e0e5"}]': finished
Jan 27 08:13:53 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: stderr: 2026-01-27T13:13:53.257+0000 7f62f20e68c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: stderr: 2026-01-27T13:13:53.283+0000 7f62f20e68c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:13:54 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 37f85830-e66d-4c55-9f5f-5b8a8c68c8a4
Jan 27 08:13:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4"} v 0)
Jan 27 08:13:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4080423007' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4"} : dispatch
Jan 27 08:13:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 27 08:13:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:13:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4080423007' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4"}]': finished
Jan 27 08:13:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 27 08:13:54 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 27 08:13:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 27 08:13:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 27 08:13:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:13:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:13:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:13:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:13:54 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:13:54 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:13:54 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:13:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:13:55 np0005597378 lvm[84220]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:13:55 np0005597378 lvm[84220]: VG ceph_vg2 finished
Jan 27 08:13:55 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 27 08:13:55 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 27 08:13:55 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 27 08:13:55 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 27 08:13:55 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 27 08:13:55 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/4080423007' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4"} : dispatch
Jan 27 08:13:55 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/4080423007' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4"}]': finished
Jan 27 08:13:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 27 08:13:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875972226' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 27 08:13:55 np0005597378 jovial_dubinsky[82211]: stderr: got monmap epoch 1
Jan 27 08:13:55 np0005597378 jovial_dubinsky[82211]: --> Creating keyring file for osd.2
Jan 27 08:13:55 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 27 08:13:55 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 27 08:13:55 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 37f85830-e66d-4c55-9f5f-5b8a8c68c8a4 --setuser ceph --setgroup ceph
Jan 27 08:13:55 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:56 np0005597378 jovial_dubinsky[82211]: stderr: 2026-01-27T13:13:55.681+0000 7fcd4dd318c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 27 08:13:56 np0005597378 jovial_dubinsky[82211]: stderr: 2026-01-27T13:13:55.702+0000 7fcd4dd318c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 27 08:13:56 np0005597378 jovial_dubinsky[82211]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 27 08:13:56 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 27 08:13:56 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 27 08:13:56 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 27 08:13:56 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 27 08:13:56 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 27 08:13:56 np0005597378 jovial_dubinsky[82211]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 27 08:13:56 np0005597378 jovial_dubinsky[82211]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 27 08:13:56 np0005597378 jovial_dubinsky[82211]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 27 08:13:56 np0005597378 systemd[1]: libpod-71785b9ceabe529e7eeea779bb773b04b81bba8daa7e61c88a3475e7082e93d3.scope: Deactivated successfully.
Jan 27 08:13:56 np0005597378 systemd[1]: libpod-71785b9ceabe529e7eeea779bb773b04b81bba8daa7e61c88a3475e7082e93d3.scope: Consumed 5.551s CPU time.
Jan 27 08:13:56 np0005597378 podman[85143]: 2026-01-27 13:13:56.685008324 +0000 UTC m=+0.025432352 container died 71785b9ceabe529e7eeea779bb773b04b81bba8daa7e61c88a3475e7082e93d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_dubinsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay-88ed2abee385f0401c1427a43df7d2b0722bcbf461ba9895ef6a13ae28914bfd-merged.mount: Deactivated successfully.
Jan 27 08:13:56 np0005597378 podman[85143]: 2026-01-27 13:13:56.7312376 +0000 UTC m=+0.071661628 container remove 71785b9ceabe529e7eeea779bb773b04b81bba8daa7e61c88a3475e7082e93d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_dubinsky, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:13:56 np0005597378 systemd[1]: libpod-conmon-71785b9ceabe529e7eeea779bb773b04b81bba8daa7e61c88a3475e7082e93d3.scope: Deactivated successfully.
Jan 27 08:13:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:13:57 np0005597378 podman[85221]: 2026-01-27 13:13:57.196296331 +0000 UTC m=+0.054687937 container create 267c7c7e74cad56fdeaa8435609ea52bc4a738f2e55f90c4a27c92adf62532bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_galois, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:13:57 np0005597378 systemd[1]: Started libpod-conmon-267c7c7e74cad56fdeaa8435609ea52bc4a738f2e55f90c4a27c92adf62532bf.scope.
Jan 27 08:13:57 np0005597378 podman[85221]: 2026-01-27 13:13:57.170641454 +0000 UTC m=+0.029033150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:13:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:57 np0005597378 podman[85221]: 2026-01-27 13:13:57.279181242 +0000 UTC m=+0.137572868 container init 267c7c7e74cad56fdeaa8435609ea52bc4a738f2e55f90c4a27c92adf62532bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:13:57 np0005597378 podman[85221]: 2026-01-27 13:13:57.285673688 +0000 UTC m=+0.144065294 container start 267c7c7e74cad56fdeaa8435609ea52bc4a738f2e55f90c4a27c92adf62532bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_galois, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 08:13:57 np0005597378 podman[85221]: 2026-01-27 13:13:57.288127635 +0000 UTC m=+0.146519271 container attach 267c7c7e74cad56fdeaa8435609ea52bc4a738f2e55f90c4a27c92adf62532bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_galois, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:13:57 np0005597378 funny_galois[85237]: 167 167
Jan 27 08:13:57 np0005597378 systemd[1]: libpod-267c7c7e74cad56fdeaa8435609ea52bc4a738f2e55f90c4a27c92adf62532bf.scope: Deactivated successfully.
Jan 27 08:13:57 np0005597378 podman[85221]: 2026-01-27 13:13:57.293427949 +0000 UTC m=+0.151819555 container died 267c7c7e74cad56fdeaa8435609ea52bc4a738f2e55f90c4a27c92adf62532bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_galois, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 08:13:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-24545290f6fcdfd47f92da038bafa58cc8b311345292015d48a239dd19f155a6-merged.mount: Deactivated successfully.
Jan 27 08:13:57 np0005597378 podman[85221]: 2026-01-27 13:13:57.325244243 +0000 UTC m=+0.183635849 container remove 267c7c7e74cad56fdeaa8435609ea52bc4a738f2e55f90c4a27c92adf62532bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_galois, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 08:13:57 np0005597378 systemd[1]: libpod-conmon-267c7c7e74cad56fdeaa8435609ea52bc4a738f2e55f90c4a27c92adf62532bf.scope: Deactivated successfully.
Jan 27 08:13:57 np0005597378 podman[85261]: 2026-01-27 13:13:57.460540798 +0000 UTC m=+0.039619027 container create 0164e749841e6d3e412b78bfde6b92cde175ddc00da543806b0e9f21f21fa4d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 08:13:57 np0005597378 systemd[1]: Started libpod-conmon-0164e749841e6d3e412b78bfde6b92cde175ddc00da543806b0e9f21f21fa4d5.scope.
Jan 27 08:13:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84b92aff632ce41ffbf828cef3b2d9a20cec0599ad0bf958c97571eb922dad89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84b92aff632ce41ffbf828cef3b2d9a20cec0599ad0bf958c97571eb922dad89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84b92aff632ce41ffbf828cef3b2d9a20cec0599ad0bf958c97571eb922dad89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84b92aff632ce41ffbf828cef3b2d9a20cec0599ad0bf958c97571eb922dad89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:57 np0005597378 podman[85261]: 2026-01-27 13:13:57.517258169 +0000 UTC m=+0.096336418 container init 0164e749841e6d3e412b78bfde6b92cde175ddc00da543806b0e9f21f21fa4d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 08:13:57 np0005597378 podman[85261]: 2026-01-27 13:13:57.523308783 +0000 UTC m=+0.102387032 container start 0164e749841e6d3e412b78bfde6b92cde175ddc00da543806b0e9f21f21fa4d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 08:13:57 np0005597378 podman[85261]: 2026-01-27 13:13:57.526496999 +0000 UTC m=+0.105575218 container attach 0164e749841e6d3e412b78bfde6b92cde175ddc00da543806b0e9f21f21fa4d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 08:13:57 np0005597378 podman[85261]: 2026-01-27 13:13:57.445579271 +0000 UTC m=+0.024657510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:13:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:13:57 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]: {
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:    "0": [
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:        {
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "devices": [
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "/dev/loop3"
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            ],
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_name": "ceph_lv0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_size": "21470642176",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "name": "ceph_lv0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "tags": {
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.cluster_name": "ceph",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.crush_device_class": "",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.encrypted": "0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.objectstore": "bluestore",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.osd_id": "0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.type": "block",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.vdo": "0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.with_tpm": "0"
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            },
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "type": "block",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "vg_name": "ceph_vg0"
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:        }
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:    ],
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:    "1": [
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:        {
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "devices": [
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "/dev/loop4"
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            ],
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_name": "ceph_lv1",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_size": "21470642176",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "name": "ceph_lv1",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "tags": {
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.cluster_name": "ceph",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.crush_device_class": "",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.encrypted": "0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.objectstore": "bluestore",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.osd_id": "1",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.type": "block",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.vdo": "0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.with_tpm": "0"
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            },
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "type": "block",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "vg_name": "ceph_vg1"
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:        }
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:    ],
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:    "2": [
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:        {
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "devices": [
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "/dev/loop5"
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            ],
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_name": "ceph_lv2",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_size": "21470642176",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "name": "ceph_lv2",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "tags": {
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.cluster_name": "ceph",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.crush_device_class": "",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.encrypted": "0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.objectstore": "bluestore",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.osd_id": "2",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.type": "block",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.vdo": "0",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:                "ceph.with_tpm": "0"
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            },
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "type": "block",
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:            "vg_name": "ceph_vg2"
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:        }
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]:    ]
Jan 27 08:13:57 np0005597378 relaxed_jones[85278]: }
Jan 27 08:13:57 np0005597378 systemd[1]: libpod-0164e749841e6d3e412b78bfde6b92cde175ddc00da543806b0e9f21f21fa4d5.scope: Deactivated successfully.
Jan 27 08:13:57 np0005597378 podman[85261]: 2026-01-27 13:13:57.800360008 +0000 UTC m=+0.379438247 container died 0164e749841e6d3e412b78bfde6b92cde175ddc00da543806b0e9f21f21fa4d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-84b92aff632ce41ffbf828cef3b2d9a20cec0599ad0bf958c97571eb922dad89-merged.mount: Deactivated successfully.
Jan 27 08:13:58 np0005597378 podman[85261]: 2026-01-27 13:13:58.044560669 +0000 UTC m=+0.623638898 container remove 0164e749841e6d3e412b78bfde6b92cde175ddc00da543806b0e9f21f21fa4d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:58 np0005597378 systemd[1]: libpod-conmon-0164e749841e6d3e412b78bfde6b92cde175ddc00da543806b0e9f21f21fa4d5.scope: Deactivated successfully.
Jan 27 08:13:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 27 08:13:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 27 08:13:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:13:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:13:58 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 27 08:13:58 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 27 08:13:58 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 27 08:13:58 np0005597378 podman[85389]: 2026-01-27 13:13:58.55451575 +0000 UTC m=+0.047192162 container create de7cd586c389bce7e505f01368147469aea8251a9b85cb76ba4e216a488420a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:58 np0005597378 systemd[1]: Started libpod-conmon-de7cd586c389bce7e505f01368147469aea8251a9b85cb76ba4e216a488420a8.scope.
Jan 27 08:13:58 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:58 np0005597378 podman[85389]: 2026-01-27 13:13:58.531585858 +0000 UTC m=+0.024262300 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:13:58 np0005597378 podman[85389]: 2026-01-27 13:13:58.627942765 +0000 UTC m=+0.120619177 container init de7cd586c389bce7e505f01368147469aea8251a9b85cb76ba4e216a488420a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mestorf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 08:13:58 np0005597378 podman[85389]: 2026-01-27 13:13:58.63440653 +0000 UTC m=+0.127082942 container start de7cd586c389bce7e505f01368147469aea8251a9b85cb76ba4e216a488420a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:13:58 np0005597378 podman[85389]: 2026-01-27 13:13:58.637772782 +0000 UTC m=+0.130449174 container attach de7cd586c389bce7e505f01368147469aea8251a9b85cb76ba4e216a488420a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:13:58 np0005597378 trusting_mestorf[85404]: 167 167
Jan 27 08:13:58 np0005597378 systemd[1]: libpod-de7cd586c389bce7e505f01368147469aea8251a9b85cb76ba4e216a488420a8.scope: Deactivated successfully.
Jan 27 08:13:58 np0005597378 podman[85389]: 2026-01-27 13:13:58.642481559 +0000 UTC m=+0.135157961 container died de7cd586c389bce7e505f01368147469aea8251a9b85cb76ba4e216a488420a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 08:13:58 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4b72d5aec92dd994a71767a320a9d3416494cbdef397394eb60d9f67b9d36096-merged.mount: Deactivated successfully.
Jan 27 08:13:58 np0005597378 podman[85389]: 2026-01-27 13:13:58.677912062 +0000 UTC m=+0.170588444 container remove de7cd586c389bce7e505f01368147469aea8251a9b85cb76ba4e216a488420a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:13:58 np0005597378 systemd[1]: libpod-conmon-de7cd586c389bce7e505f01368147469aea8251a9b85cb76ba4e216a488420a8.scope: Deactivated successfully.
Jan 27 08:13:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:13:58 np0005597378 podman[85433]: 2026-01-27 13:13:58.926261227 +0000 UTC m=+0.042477894 container create 298be59af70ae3ffbfc15e6f1d22c6c367a8c4f7702658c24a88a807c88d0b1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate-test, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 08:13:58 np0005597378 systemd[1]: Started libpod-conmon-298be59af70ae3ffbfc15e6f1d22c6c367a8c4f7702658c24a88a807c88d0b1a.scope.
Jan 27 08:13:58 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:13:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8549f9b7052c068897ad649d454894c867cecf5e25025c424422b409a2d443/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8549f9b7052c068897ad649d454894c867cecf5e25025c424422b409a2d443/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8549f9b7052c068897ad649d454894c867cecf5e25025c424422b409a2d443/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8549f9b7052c068897ad649d454894c867cecf5e25025c424422b409a2d443/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8549f9b7052c068897ad649d454894c867cecf5e25025c424422b409a2d443/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:13:59 np0005597378 podman[85433]: 2026-01-27 13:13:58.907507947 +0000 UTC m=+0.023724654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:13:59 np0005597378 podman[85433]: 2026-01-27 13:13:59.112900057 +0000 UTC m=+0.229116734 container init 298be59af70ae3ffbfc15e6f1d22c6c367a8c4f7702658c24a88a807c88d0b1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:13:59 np0005597378 podman[85433]: 2026-01-27 13:13:59.118522193 +0000 UTC m=+0.234738850 container start 298be59af70ae3ffbfc15e6f1d22c6c367a8c4f7702658c24a88a807c88d0b1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:13:59 np0005597378 podman[85433]: 2026-01-27 13:13:59.228301311 +0000 UTC m=+0.344517968 container attach 298be59af70ae3ffbfc15e6f1d22c6c367a8c4f7702658c24a88a807c88d0b1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:13:59 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate-test[85449]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 27 08:13:59 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate-test[85449]:                            [--no-systemd] [--no-tmpfs]
Jan 27 08:13:59 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate-test[85449]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 27 08:13:59 np0005597378 systemd[1]: libpod-298be59af70ae3ffbfc15e6f1d22c6c367a8c4f7702658c24a88a807c88d0b1a.scope: Deactivated successfully.
Jan 27 08:13:59 np0005597378 podman[85433]: 2026-01-27 13:13:59.303267512 +0000 UTC m=+0.419484199 container died 298be59af70ae3ffbfc15e6f1d22c6c367a8c4f7702658c24a88a807c88d0b1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:13:59 np0005597378 ceph-mon[75090]: Deploying daemon osd.0 on compute-0
Jan 27 08:13:59 np0005597378 systemd[1]: var-lib-containers-storage-overlay-6e8549f9b7052c068897ad649d454894c867cecf5e25025c424422b409a2d443-merged.mount: Deactivated successfully.
Jan 27 08:13:59 np0005597378 podman[85433]: 2026-01-27 13:13:59.616720431 +0000 UTC m=+0.732937078 container remove 298be59af70ae3ffbfc15e6f1d22c6c367a8c4f7702658c24a88a807c88d0b1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate-test, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 08:13:59 np0005597378 systemd[1]: libpod-conmon-298be59af70ae3ffbfc15e6f1d22c6c367a8c4f7702658c24a88a807c88d0b1a.scope: Deactivated successfully.
Jan 27 08:13:59 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:14:00 np0005597378 systemd[1]: Reloading.
Jan 27 08:14:00 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:14:00 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:14:00 np0005597378 systemd[1]: Reloading.
Jan 27 08:14:00 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:14:00 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:14:00 np0005597378 systemd[1]: Starting Ceph osd.0 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e...
Jan 27 08:14:00 np0005597378 podman[85607]: 2026-01-27 13:14:00.894638858 +0000 UTC m=+0.105258120 container create 6a91e8802d3041820e00bb32124c8cf45d68cf6ed12e1bb70ebffe2aa4dfc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 08:14:00 np0005597378 podman[85607]: 2026-01-27 13:14:00.80868842 +0000 UTC m=+0.019307702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:14:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746393491e08ea197d37737ffaa6e13870c00d34a0a177c5203f162e51b90ab8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746393491e08ea197d37737ffaa6e13870c00d34a0a177c5203f162e51b90ab8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746393491e08ea197d37737ffaa6e13870c00d34a0a177c5203f162e51b90ab8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746393491e08ea197d37737ffaa6e13870c00d34a0a177c5203f162e51b90ab8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746393491e08ea197d37737ffaa6e13870c00d34a0a177c5203f162e51b90ab8/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:01 np0005597378 podman[85607]: 2026-01-27 13:14:01.189989057 +0000 UTC m=+0.400608419 container init 6a91e8802d3041820e00bb32124c8cf45d68cf6ed12e1bb70ebffe2aa4dfc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Jan 27 08:14:01 np0005597378 podman[85607]: 2026-01-27 13:14:01.196767594 +0000 UTC m=+0.407386876 container start 6a91e8802d3041820e00bb32124c8cf45d68cf6ed12e1bb70ebffe2aa4dfc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 27 08:14:01 np0005597378 podman[85607]: 2026-01-27 13:14:01.289122057 +0000 UTC m=+0.499741319 container attach 6a91e8802d3041820e00bb32124c8cf45d68cf6ed12e1bb70ebffe2aa4dfc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:01 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate[85623]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:01 np0005597378 bash[85607]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:01 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate[85623]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:01 np0005597378 bash[85607]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:01 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:14:02 np0005597378 lvm[85706]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:14:02 np0005597378 lvm[85706]: VG ceph_vg0 finished
Jan 27 08:14:02 np0005597378 lvm[85709]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:14:02 np0005597378 lvm[85709]: VG ceph_vg1 finished
Jan 27 08:14:02 np0005597378 lvm[85711]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:14:02 np0005597378 lvm[85711]: VG ceph_vg2 finished
Jan 27 08:14:02 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate[85623]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 27 08:14:02 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate[85623]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:02 np0005597378 bash[85607]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 27 08:14:02 np0005597378 bash[85607]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:02 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate[85623]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:02 np0005597378 bash[85607]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:02 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate[85623]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 27 08:14:02 np0005597378 bash[85607]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 27 08:14:02 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate[85623]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 27 08:14:02 np0005597378 bash[85607]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 27 08:14:02 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate[85623]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 bash[85607]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate[85623]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 bash[85607]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate[85623]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 27 08:14:02 np0005597378 bash[85607]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 27 08:14:02 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate[85623]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 27 08:14:02 np0005597378 bash[85607]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 27 08:14:02 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate[85623]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 27 08:14:02 np0005597378 bash[85607]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 27 08:14:02 np0005597378 systemd[1]: libpod-6a91e8802d3041820e00bb32124c8cf45d68cf6ed12e1bb70ebffe2aa4dfc264.scope: Deactivated successfully.
Jan 27 08:14:02 np0005597378 systemd[1]: libpod-6a91e8802d3041820e00bb32124c8cf45d68cf6ed12e1bb70ebffe2aa4dfc264.scope: Consumed 1.448s CPU time.
Jan 27 08:14:02 np0005597378 podman[85607]: 2026-01-27 13:14:02.361906714 +0000 UTC m=+1.572525986 container died 6a91e8802d3041820e00bb32124c8cf45d68cf6ed12e1bb70ebffe2aa4dfc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 08:14:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-746393491e08ea197d37737ffaa6e13870c00d34a0a177c5203f162e51b90ab8-merged.mount: Deactivated successfully.
Jan 27 08:14:02 np0005597378 podman[85607]: 2026-01-27 13:14:02.407381048 +0000 UTC m=+1.618000310 container remove 6a91e8802d3041820e00bb32124c8cf45d68cf6ed12e1bb70ebffe2aa4dfc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 27 08:14:02 np0005597378 podman[85877]: 2026-01-27 13:14:02.592110846 +0000 UTC m=+0.041598684 container create 1159b902fe1f1272655d79de0998646d359667949ed463f0e41503a8ae6fc464 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:14:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c93f17d0932df8f0007d308e980f96189f2415ad641d277f5e126f11e0eee0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c93f17d0932df8f0007d308e980f96189f2415ad641d277f5e126f11e0eee0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c93f17d0932df8f0007d308e980f96189f2415ad641d277f5e126f11e0eee0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c93f17d0932df8f0007d308e980f96189f2415ad641d277f5e126f11e0eee0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c93f17d0932df8f0007d308e980f96189f2415ad641d277f5e126f11e0eee0/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:02 np0005597378 podman[85877]: 2026-01-27 13:14:02.640945688 +0000 UTC m=+0.090433546 container init 1159b902fe1f1272655d79de0998646d359667949ed463f0e41503a8ae6fc464 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:02 np0005597378 podman[85877]: 2026-01-27 13:14:02.648275968 +0000 UTC m=+0.097763806 container start 1159b902fe1f1272655d79de0998646d359667949ed463f0e41503a8ae6fc464 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:02 np0005597378 bash[85877]: 1159b902fe1f1272655d79de0998646d359667949ed463f0e41503a8ae6fc464
Jan 27 08:14:02 np0005597378 podman[85877]: 2026-01-27 13:14:02.57647592 +0000 UTC m=+0.025963778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:02 np0005597378 systemd[1]: Started Ceph osd.0 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: pidfile_write: ignore empty --pid-file
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:14:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:14:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 27 08:14:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 27 08:14:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:14:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:14:02 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 27 08:14:02 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a400 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484a000 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: load: jerasure load: lrc 
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:14:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:02 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bdev(0x56263484bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bdev(0x5626354eb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bdev(0x5626354eb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bdev(0x5626354eb800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bdev(0x5626354eb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount shared_bdev_used = 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: RocksDB version: 7.9.2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Git sha 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: DB SUMMARY
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: DB Session ID:  OAMF42VYVV872RASWSG3
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: CURRENT file:  CURRENT
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: IDENTITY file:  IDENTITY
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                         Options.error_if_exists: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.create_if_missing: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                         Options.paranoid_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                                     Options.env: 0x5626346dbea0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                                Options.info_log: 0x5626357368a0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_file_opening_threads: 16
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                              Options.statistics: (nil)
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.use_fsync: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.max_log_file_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                         Options.allow_fallocate: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.use_direct_reads: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.create_missing_column_families: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                              Options.db_log_dir: 
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                                 Options.wal_dir: db.wal
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.advise_random_on_open: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.write_buffer_manager: 0x5626355dcb40
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                            Options.rate_limiter: (nil)
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.unordered_write: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.row_cache: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                              Options.wal_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.allow_ingest_behind: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.two_write_queues: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.manual_wal_flush: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.wal_compression: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.atomic_flush: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.log_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.allow_data_in_errors: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.db_host_id: __hostname__
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.max_background_jobs: 4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.max_background_compactions: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.max_subcompactions: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.max_open_files: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.bytes_per_sync: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.max_background_flushes: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Compression algorithms supported:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kZSTD supported: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kXpressCompression supported: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kBZip2Compression supported: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kLZ4Compression supported: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kZlibCompression supported: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kSnappyCompression supported: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5626346df8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5626346df8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5626346df8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5626346df8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5626346df8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5626346df8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5626346df8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5626346dfa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5626346dfa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5626346dfa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 354f3b77-0a1a-41e9-932f-fdde8b6872d5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519643050519, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519643052050, "job": 1, "event": "recovery_finished"}
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: freelist init
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: freelist _read_cfg
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs umount
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bdev(0x5626354eb800 /var/lib/ceph/osd/ceph-0/block) close
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bdev(0x5626354eb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bdev(0x5626354eb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bdev(0x5626354eb800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bdev(0x5626354eb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluefs mount shared_bdev_used = 27262976
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: RocksDB version: 7.9.2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Git sha 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: DB SUMMARY
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: DB Session ID:  OAMF42VYVV872RASWSG2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: CURRENT file:  CURRENT
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: IDENTITY file:  IDENTITY
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                         Options.error_if_exists: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.create_if_missing: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                         Options.paranoid_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                                     Options.env: 0x562635906a80
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                                Options.info_log: 0x562635736a20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_file_opening_threads: 16
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                              Options.statistics: (nil)
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.use_fsync: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.max_log_file_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                         Options.allow_fallocate: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.use_direct_reads: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.create_missing_column_families: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                              Options.db_log_dir: 
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                                 Options.wal_dir: db.wal
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.advise_random_on_open: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.write_buffer_manager: 0x5626355dd900
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                            Options.rate_limiter: (nil)
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.unordered_write: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.row_cache: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                              Options.wal_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.allow_ingest_behind: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.two_write_queues: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.manual_wal_flush: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.wal_compression: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.atomic_flush: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.log_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.allow_data_in_errors: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.db_host_id: __hostname__
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.max_background_jobs: 4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.max_background_compactions: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.max_subcompactions: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.max_open_files: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.bytes_per_sync: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.max_background_flushes: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Compression algorithms supported:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kZSTD supported: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kXpressCompression supported: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kBZip2Compression supported: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kLZ4Compression supported: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kZlibCompression supported: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: 	kSnappyCompression supported: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5626346df8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5626346df8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5626346df8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5626346df8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5626346df8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5626346df8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562635736bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5626346df8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5626357370c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5626346dfa30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5626357370c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5626346dfa30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5626357370c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5626346dfa30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 354f3b77-0a1a-41e9-932f-fdde8b6872d5
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519643100040, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519643106447, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519643, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "354f3b77-0a1a-41e9-932f-fdde8b6872d5", "db_session_id": "OAMF42VYVV872RASWSG2", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519643108911, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519643, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "354f3b77-0a1a-41e9-932f-fdde8b6872d5", "db_session_id": "OAMF42VYVV872RASWSG2", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519643112492, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519643, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "354f3b77-0a1a-41e9-932f-fdde8b6872d5", "db_session_id": "OAMF42VYVV872RASWSG2", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519643114089, "job": 1, "event": "recovery_finished"}
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56263591a000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: DB pointer 0x5626358f0000
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 460.80 MB usag
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: _get_class not permitted to load lua
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: _get_class not permitted to load sdk
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: osd.0 0 load_pgs
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: osd.0 0 load_pgs opened 0 pgs
Jan 27 08:14:03 np0005597378 ceph-osd[85897]: osd.0 0 log_to_monitors true
Jan 27 08:14:03 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0[85893]: 2026-01-27T13:14:03.149+0000 7f80699908c0 -1 osd.0 0 log_to_monitors true
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2225645683,v1:192.168.122.100:6803/2225645683]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 27 08:14:03 np0005597378 podman[86440]: 2026-01-27 13:14:03.250804164 +0000 UTC m=+0.039112209 container create 992bf06b396d22a19e5495dced281fad97f30042c1fde6e6aeee987e3c8c1480 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_heisenberg, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 08:14:03 np0005597378 systemd[1]: Started libpod-conmon-992bf06b396d22a19e5495dced281fad97f30042c1fde6e6aeee987e3c8c1480.scope.
Jan 27 08:14:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:03 np0005597378 podman[86440]: 2026-01-27 13:14:03.325300473 +0000 UTC m=+0.113608538 container init 992bf06b396d22a19e5495dced281fad97f30042c1fde6e6aeee987e3c8c1480 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:14:03 np0005597378 podman[86440]: 2026-01-27 13:14:03.332146011 +0000 UTC m=+0.120454056 container start 992bf06b396d22a19e5495dced281fad97f30042c1fde6e6aeee987e3c8c1480 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 08:14:03 np0005597378 podman[86440]: 2026-01-27 13:14:03.235080114 +0000 UTC m=+0.023388179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:03 np0005597378 podman[86440]: 2026-01-27 13:14:03.335056197 +0000 UTC m=+0.123364262 container attach 992bf06b396d22a19e5495dced281fad97f30042c1fde6e6aeee987e3c8c1480 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:14:03 np0005597378 condescending_heisenberg[86456]: 167 167
Jan 27 08:14:03 np0005597378 systemd[1]: libpod-992bf06b396d22a19e5495dced281fad97f30042c1fde6e6aeee987e3c8c1480.scope: Deactivated successfully.
Jan 27 08:14:03 np0005597378 podman[86440]: 2026-01-27 13:14:03.33902412 +0000 UTC m=+0.127332175 container died 992bf06b396d22a19e5495dced281fad97f30042c1fde6e6aeee987e3c8c1480 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 08:14:03 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d57787513d9e7c772bd6877a973e6c02f08636489aedcba7218c092a32e23b13-merged.mount: Deactivated successfully.
Jan 27 08:14:03 np0005597378 podman[86440]: 2026-01-27 13:14:03.37322793 +0000 UTC m=+0.161535975 container remove 992bf06b396d22a19e5495dced281fad97f30042c1fde6e6aeee987e3c8c1480 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_heisenberg, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:03 np0005597378 systemd[1]: libpod-conmon-992bf06b396d22a19e5495dced281fad97f30042c1fde6e6aeee987e3c8c1480.scope: Deactivated successfully.
Jan 27 08:14:03 np0005597378 podman[86486]: 2026-01-27 13:14:03.594723216 +0000 UTC m=+0.038651136 container create dce6c97fd56135383b5ed034d6cfdbec576612ea04040bf5843384b1593c48af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:03 np0005597378 systemd[1]: Started libpod-conmon-dce6c97fd56135383b5ed034d6cfdbec576612ea04040bf5843384b1593c48af.scope.
Jan 27 08:14:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cadb5cbc7d3efc0b8f2dbd2789d77dc4f4e76bd06797ebeafc520dd310b7930/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cadb5cbc7d3efc0b8f2dbd2789d77dc4f4e76bd06797ebeafc520dd310b7930/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cadb5cbc7d3efc0b8f2dbd2789d77dc4f4e76bd06797ebeafc520dd310b7930/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cadb5cbc7d3efc0b8f2dbd2789d77dc4f4e76bd06797ebeafc520dd310b7930/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cadb5cbc7d3efc0b8f2dbd2789d77dc4f4e76bd06797ebeafc520dd310b7930/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:03 np0005597378 podman[86486]: 2026-01-27 13:14:03.665778646 +0000 UTC m=+0.109706556 container init dce6c97fd56135383b5ed034d6cfdbec576612ea04040bf5843384b1593c48af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate-test, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 08:14:03 np0005597378 podman[86486]: 2026-01-27 13:14:03.671544076 +0000 UTC m=+0.115471986 container start dce6c97fd56135383b5ed034d6cfdbec576612ea04040bf5843384b1593c48af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate-test, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:14:03 np0005597378 podman[86486]: 2026-01-27 13:14:03.578175825 +0000 UTC m=+0.022103755 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:03 np0005597378 podman[86486]: 2026-01-27 13:14:03.675015587 +0000 UTC m=+0.118943497 container attach dce6c97fd56135383b5ed034d6cfdbec576612ea04040bf5843384b1593c48af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: Deploying daemon osd.1 on compute-0
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: from='osd.0 [v2:192.168.122.100:6802/2225645683,v1:192.168.122.100:6803/2225645683]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2225645683,v1:192.168.122.100:6803/2225645683]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2225645683,v1:192.168.122.100:6803/2225645683]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:03 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:14:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:03 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:03 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:03 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:14:03 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate-test[86501]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 27 08:14:03 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate-test[86501]:                            [--no-systemd] [--no-tmpfs]
Jan 27 08:14:03 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate-test[86501]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 27 08:14:03 np0005597378 systemd[1]: libpod-dce6c97fd56135383b5ed034d6cfdbec576612ea04040bf5843384b1593c48af.scope: Deactivated successfully.
Jan 27 08:14:03 np0005597378 podman[86486]: 2026-01-27 13:14:03.852373813 +0000 UTC m=+0.296301723 container died dce6c97fd56135383b5ed034d6cfdbec576612ea04040bf5843384b1593c48af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:03 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2cadb5cbc7d3efc0b8f2dbd2789d77dc4f4e76bd06797ebeafc520dd310b7930-merged.mount: Deactivated successfully.
Jan 27 08:14:03 np0005597378 podman[86486]: 2026-01-27 13:14:03.885965478 +0000 UTC m=+0.329893398 container remove dce6c97fd56135383b5ed034d6cfdbec576612ea04040bf5843384b1593c48af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:03 np0005597378 systemd[1]: libpod-conmon-dce6c97fd56135383b5ed034d6cfdbec576612ea04040bf5843384b1593c48af.scope: Deactivated successfully.
Jan 27 08:14:04 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 27 08:14:04 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2225645683,v1:192.168.122.100:6803/2225645683]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 27 08:14:04 np0005597378 ceph-osd[85897]: osd.0 0 done with init, starting boot process
Jan 27 08:14:04 np0005597378 ceph-osd[85897]: osd.0 0 start_boot
Jan 27 08:14:04 np0005597378 ceph-osd[85897]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 27 08:14:04 np0005597378 ceph-osd[85897]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 27 08:14:04 np0005597378 ceph-osd[85897]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 27 08:14:04 np0005597378 ceph-osd[85897]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 27 08:14:04 np0005597378 ceph-osd[85897]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:04 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:14:04 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:04 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:04 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2225645683; not ready for session (expect reconnect)
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 27 08:14:04 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: from='osd.0 [v2:192.168.122.100:6802/2225645683,v1:192.168.122.100:6803/2225645683]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 27 08:14:04 np0005597378 ceph-mon[75090]: from='osd.0 [v2:192.168.122.100:6802/2225645683,v1:192.168.122.100:6803/2225645683]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 27 08:14:04 np0005597378 systemd[1]: Reloading.
Jan 27 08:14:04 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:14:04 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:14:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:14:05 np0005597378 systemd[1]: Reloading.
Jan 27 08:14:05 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:14:05 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:14:05 np0005597378 systemd[1]: Starting Ceph osd.1 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e...
Jan 27 08:14:05 np0005597378 podman[86661]: 2026-01-27 13:14:05.651527109 +0000 UTC m=+0.067411106 container create 9b8bd1d18c22e2304902a4a2bc842d79dfa7174b364ce4aea4e0d7c35bb64370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:14:05 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba320fd75cb0511a7fb4dbda43a2dbe62daa23069d3d826ac1e2b4b2d1e422d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba320fd75cb0511a7fb4dbda43a2dbe62daa23069d3d826ac1e2b4b2d1e422d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba320fd75cb0511a7fb4dbda43a2dbe62daa23069d3d826ac1e2b4b2d1e422d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba320fd75cb0511a7fb4dbda43a2dbe62daa23069d3d826ac1e2b4b2d1e422d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba320fd75cb0511a7fb4dbda43a2dbe62daa23069d3d826ac1e2b4b2d1e422d9/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:05 np0005597378 podman[86661]: 2026-01-27 13:14:05.617190675 +0000 UTC m=+0.033074702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:05 np0005597378 podman[86661]: 2026-01-27 13:14:05.729516929 +0000 UTC m=+0.145400946 container init 9b8bd1d18c22e2304902a4a2bc842d79dfa7174b364ce4aea4e0d7c35bb64370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:05 np0005597378 podman[86661]: 2026-01-27 13:14:05.736019419 +0000 UTC m=+0.151903416 container start 9b8bd1d18c22e2304902a4a2bc842d79dfa7174b364ce4aea4e0d7c35bb64370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:14:05 np0005597378 podman[86661]: 2026-01-27 13:14:05.747635621 +0000 UTC m=+0.163519648 container attach 9b8bd1d18c22e2304902a4a2bc842d79dfa7174b364ce4aea4e0d7c35bb64370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:05 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:14:05 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2225645683; not ready for session (expect reconnect)
Jan 27 08:14:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 27 08:14:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 27 08:14:05 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:14:05 np0005597378 ceph-mon[75090]: from='osd.0 [v2:192.168.122.100:6802/2225645683,v1:192.168.122.100:6803/2225645683]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 27 08:14:05 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate[86677]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:05 np0005597378 bash[86661]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:05 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate[86677]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:05 np0005597378 bash[86661]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:06 np0005597378 lvm[86761]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:14:06 np0005597378 lvm[86761]: VG ceph_vg0 finished
Jan 27 08:14:06 np0005597378 lvm[86763]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:14:06 np0005597378 lvm[86763]: VG ceph_vg1 finished
Jan 27 08:14:06 np0005597378 lvm[86765]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:14:06 np0005597378 lvm[86765]: VG ceph_vg2 finished
Jan 27 08:14:06 np0005597378 lvm[86766]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:14:06 np0005597378 lvm[86766]: VG ceph_vg1 finished
Jan 27 08:14:06 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate[86677]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 27 08:14:06 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate[86677]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:06 np0005597378 bash[86661]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 27 08:14:06 np0005597378 bash[86661]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:06 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate[86677]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:06 np0005597378 bash[86661]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:06 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate[86677]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 27 08:14:06 np0005597378 bash[86661]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 27 08:14:06 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate[86677]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 27 08:14:06 np0005597378 bash[86661]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 27 08:14:06 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2225645683; not ready for session (expect reconnect)
Jan 27 08:14:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 27 08:14:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 27 08:14:06 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:14:06 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate[86677]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:06 np0005597378 bash[86661]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:06 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate[86677]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:06 np0005597378 bash[86661]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:06 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate[86677]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 27 08:14:06 np0005597378 bash[86661]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 27 08:14:06 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate[86677]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 27 08:14:06 np0005597378 bash[86661]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 27 08:14:06 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate[86677]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 27 08:14:06 np0005597378 bash[86661]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 27 08:14:06 np0005597378 systemd[1]: libpod-9b8bd1d18c22e2304902a4a2bc842d79dfa7174b364ce4aea4e0d7c35bb64370.scope: Deactivated successfully.
Jan 27 08:14:06 np0005597378 systemd[1]: libpod-9b8bd1d18c22e2304902a4a2bc842d79dfa7174b364ce4aea4e0d7c35bb64370.scope: Consumed 1.598s CPU time.
Jan 27 08:14:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:14:06 np0005597378 podman[86869]: 2026-01-27 13:14:06.926593551 +0000 UTC m=+0.026463420 container died 9b8bd1d18c22e2304902a4a2bc842d79dfa7174b364ce4aea4e0d7c35bb64370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:14:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ba320fd75cb0511a7fb4dbda43a2dbe62daa23069d3d826ac1e2b4b2d1e422d9-merged.mount: Deactivated successfully.
Jan 27 08:14:07 np0005597378 podman[86869]: 2026-01-27 13:14:07.012794314 +0000 UTC m=+0.112664173 container remove 9b8bd1d18c22e2304902a4a2bc842d79dfa7174b364ce4aea4e0d7c35bb64370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:14:07 np0005597378 podman[86921]: 2026-01-27 13:14:07.27073748 +0000 UTC m=+0.109485642 container create 7225f8e2277aea0d1108434920826311809889b2fc216364360132e1a743d366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:07 np0005597378 podman[86921]: 2026-01-27 13:14:07.180910841 +0000 UTC m=+0.019659033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865519af69ed7b7c3a888c0ee49740068d815efd92d3fa5856732e5cc35fa7b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865519af69ed7b7c3a888c0ee49740068d815efd92d3fa5856732e5cc35fa7b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865519af69ed7b7c3a888c0ee49740068d815efd92d3fa5856732e5cc35fa7b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865519af69ed7b7c3a888c0ee49740068d815efd92d3fa5856732e5cc35fa7b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865519af69ed7b7c3a888c0ee49740068d815efd92d3fa5856732e5cc35fa7b0/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:07 np0005597378 podman[86921]: 2026-01-27 13:14:07.422073429 +0000 UTC m=+0.260821631 container init 7225f8e2277aea0d1108434920826311809889b2fc216364360132e1a743d366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:07 np0005597378 podman[86921]: 2026-01-27 13:14:07.427952982 +0000 UTC m=+0.266701144 container start 7225f8e2277aea0d1108434920826311809889b2fc216364360132e1a743d366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: pidfile_write: ignore empty --pid-file
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) close
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) close
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) close
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) close
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) close
Jan 27 08:14:07 np0005597378 bash[86921]: 7225f8e2277aea0d1108434920826311809889b2fc216364360132e1a743d366
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 systemd[1]: Started Ceph osd.1 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642400 /var/lib/ceph/osd/ceph-1/block) close
Jan 27 08:14:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5642000 /var/lib/ceph/osd/ceph-1/block) close
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: load: jerasure load: lrc 
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 27 08:14:07 np0005597378 ceph-mgr[75385]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 27 08:14:07 np0005597378 ceph-osd[85897]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 14.024 iops: 3590.041 elapsed_sec: 0.836
Jan 27 08:14:07 np0005597378 ceph-osd[85897]: log_channel(cluster) log [WRN] : OSD bench result of 3590.041475 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 27 08:14:07 np0005597378 ceph-osd[85897]: osd.0 0 waiting for initial osdmap
Jan 27 08:14:07 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0[85893]: 2026-01-27T13:14:07.777+0000 7f8066124640 -1 osd.0 0 waiting for initial osdmap
Jan 27 08:14:07 np0005597378 ceph-osd[85897]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 27 08:14:07 np0005597378 ceph-osd[85897]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 27 08:14:07 np0005597378 ceph-osd[85897]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 27 08:14:07 np0005597378 ceph-osd[85897]: osd.0 8 check_osdmap_features require_osd_release unknown -> tentacle
Jan 27 08:14:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:07 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2225645683; not ready for session (expect reconnect)
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b5643c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b62d9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b62d9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b62d9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b62d9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount shared_bdev_used = 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: RocksDB version: 7.9.2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Git sha 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: DB SUMMARY
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: DB Session ID:  M74EW9399UL1EW88SBU1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: CURRENT file:  CURRENT
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: IDENTITY file:  IDENTITY
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                         Options.error_if_exists: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.create_if_missing: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                         Options.paranoid_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                                     Options.env: 0x5640b54d3ea0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                                Options.info_log: 0x5640b65248a0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_file_opening_threads: 16
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                              Options.statistics: (nil)
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.use_fsync: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.max_log_file_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                         Options.allow_fallocate: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.use_direct_reads: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.create_missing_column_families: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                              Options.db_log_dir: 
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                                 Options.wal_dir: db.wal
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.advise_random_on_open: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.write_buffer_manager: 0x5640b5538b40
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                            Options.rate_limiter: (nil)
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.unordered_write: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.row_cache: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                              Options.wal_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.allow_ingest_behind: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.two_write_queues: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.manual_wal_flush: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.wal_compression: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.atomic_flush: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.log_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.allow_data_in_errors: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.db_host_id: __hostname__
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.max_background_jobs: 4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.max_background_compactions: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.max_subcompactions: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.max_open_files: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.bytes_per_sync: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.max_background_flushes: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Compression algorithms supported:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kZSTD supported: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kXpressCompression supported: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kBZip2Compression supported: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kLZ4Compression supported: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kZlibCompression supported: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kSnappyCompression supported: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640b54d78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640b54d78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640b54d78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640b54d78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640b54d78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640b54d78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640b54d78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640b54d7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640b54d7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640b54d7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5bb25e5c-ccf4-495b-81fb-1643eb1fb2a5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519647822054, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519647823489, "job": 1, "event": "recovery_finished"}
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: freelist init
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: freelist _read_cfg
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs umount
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b62d9800 /var/lib/ceph/osd/ceph-1/block) close
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b62d9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b62d9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b62d9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bdev(0x5640b62d9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluefs mount shared_bdev_used = 27262976
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: RocksDB version: 7.9.2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Git sha 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: DB SUMMARY
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: DB Session ID:  M74EW9399UL1EW88SBU0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: CURRENT file:  CURRENT
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: IDENTITY file:  IDENTITY
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                         Options.error_if_exists: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.create_if_missing: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                         Options.paranoid_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                                     Options.env: 0x5640b54d3ce0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                                Options.info_log: 0x5640b6524960
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_file_opening_threads: 16
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                              Options.statistics: (nil)
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.use_fsync: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.max_log_file_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                         Options.allow_fallocate: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.use_direct_reads: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.create_missing_column_families: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                              Options.db_log_dir: 
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                                 Options.wal_dir: db.wal
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.advise_random_on_open: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.write_buffer_manager: 0x5640b5538b40
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                            Options.rate_limiter: (nil)
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.unordered_write: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.row_cache: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                              Options.wal_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.allow_ingest_behind: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.two_write_queues: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.manual_wal_flush: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.wal_compression: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.atomic_flush: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.log_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.allow_data_in_errors: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.db_host_id: __hostname__
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.max_background_jobs: 4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.max_background_compactions: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.max_subcompactions: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.max_open_files: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.bytes_per_sync: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.max_background_flushes: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Compression algorithms supported:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kZSTD supported: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kXpressCompression supported: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kBZip2Compression supported: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kLZ4Compression supported: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kZlibCompression supported: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: 	kSnappyCompression supported: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640b54d78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640b54d78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640b54d78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640b54d78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640b54d78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640b54d78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b6524bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640b54d78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b65250c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640b54d7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b65250c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5640b54d7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 27 08:14:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640b65250c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5640b54d7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5bb25e5c-ccf4-495b-81fb-1643eb1fb2a5
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519647861497, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 27 08:14:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519647889283, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519647, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bb25e5c-ccf4-495b-81fb-1643eb1fb2a5", "db_session_id": "M74EW9399UL1EW88SBU0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:14:07 np0005597378 ceph-osd[85897]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 27 08:14:07 np0005597378 ceph-osd[85897]: osd.0 8 set_numa_affinity not setting numa affinity
Jan 27 08:14:07 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-0[85893]: 2026-01-27T13:14:07.889+0000 7f8060717640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 27 08:14:07 np0005597378 ceph-osd[85897]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 27 08:14:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519647922145, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519647, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bb25e5c-ccf4-495b-81fb-1643eb1fb2a5", "db_session_id": "M74EW9399UL1EW88SBU0", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519647956351, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519647, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bb25e5c-ccf4-495b-81fb-1643eb1fb2a5", "db_session_id": "M74EW9399UL1EW88SBU0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519647996555, "job": 1, "event": "recovery_finished"}
Jan 27 08:14:07 np0005597378 ceph-osd[86941]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 27 08:14:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:14:08 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 27 08:14:08 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5640b673e000
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: rocksdb: DB pointer 0x5640b66de000
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.3 total, 0.3 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 460.80 MB usag
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: _get_class not permitted to load lua
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: _get_class not permitted to load sdk
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: osd.1 0 load_pgs
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: osd.1 0 load_pgs opened 0 pgs
Jan 27 08:14:08 np0005597378 ceph-osd[86941]: osd.1 0 log_to_monitors true
Jan 27 08:14:08 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1[86937]: 2026-01-27T13:14:08.191+0000 7f47e82678c0 -1 osd.1 0 log_to_monitors true
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/957634717,v1:192.168.122.100:6807/957634717]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 27 08:14:08 np0005597378 podman[87486]: 2026-01-27 13:14:08.526395996 +0000 UTC m=+0.040010402 container create 90baa87e28f4e0ec16385d98146e29da7221c9df5c3ee3ac17d2f47d1882f42d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cerf, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:14:08 np0005597378 systemd[1]: Started libpod-conmon-90baa87e28f4e0ec16385d98146e29da7221c9df5c3ee3ac17d2f47d1882f42d.scope.
Jan 27 08:14:08 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:08 np0005597378 podman[87486]: 2026-01-27 13:14:08.597441416 +0000 UTC m=+0.111055802 container init 90baa87e28f4e0ec16385d98146e29da7221c9df5c3ee3ac17d2f47d1882f42d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cerf, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 08:14:08 np0005597378 podman[87486]: 2026-01-27 13:14:08.605903286 +0000 UTC m=+0.119517662 container start 90baa87e28f4e0ec16385d98146e29da7221c9df5c3ee3ac17d2f47d1882f42d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cerf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:08 np0005597378 podman[87486]: 2026-01-27 13:14:08.50998357 +0000 UTC m=+0.023597956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:08 np0005597378 podman[87486]: 2026-01-27 13:14:08.609028397 +0000 UTC m=+0.122642783 container attach 90baa87e28f4e0ec16385d98146e29da7221c9df5c3ee3ac17d2f47d1882f42d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 08:14:08 np0005597378 flamboyant_cerf[87503]: 167 167
Jan 27 08:14:08 np0005597378 systemd[1]: libpod-90baa87e28f4e0ec16385d98146e29da7221c9df5c3ee3ac17d2f47d1882f42d.scope: Deactivated successfully.
Jan 27 08:14:08 np0005597378 conmon[87503]: conmon 90baa87e28f4e0ec1638 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-90baa87e28f4e0ec16385d98146e29da7221c9df5c3ee3ac17d2f47d1882f42d.scope/container/memory.events
Jan 27 08:14:08 np0005597378 podman[87486]: 2026-01-27 13:14:08.611556713 +0000 UTC m=+0.125171069 container died 90baa87e28f4e0ec16385d98146e29da7221c9df5c3ee3ac17d2f47d1882f42d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cerf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-17c4c084592615605739da7efc4825417e395545dc1ea5ca1fee936666866ca9-merged.mount: Deactivated successfully.
Jan 27 08:14:08 np0005597378 podman[87486]: 2026-01-27 13:14:08.651416122 +0000 UTC m=+0.165030488 container remove 90baa87e28f4e0ec16385d98146e29da7221c9df5c3ee3ac17d2f47d1882f42d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cerf, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:14:08 np0005597378 systemd[1]: libpod-conmon-90baa87e28f4e0ec16385d98146e29da7221c9df5c3ee3ac17d2f47d1882f42d.scope: Deactivated successfully.
Jan 27 08:14:08 np0005597378 ceph-osd[85897]: osd.0 8 tick checking mon for new map
Jan 27 08:14:08 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2225645683; not ready for session (expect reconnect)
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 27 08:14:08 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 27 08:14:08 np0005597378 podman[87533]: 2026-01-27 13:14:08.882379664 +0000 UTC m=+0.054726336 container create f73d0872a3708306d876315ad9ab869be7bc37b3ff17e61eddd500b3cb564495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate-test, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/957634717,v1:192.168.122.100:6807/957634717]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Jan 27 08:14:08 np0005597378 ceph-osd[85897]: osd.0 9 state: booting -> active
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2225645683,v1:192.168.122.100:6803/2225645683] boot
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 27 08:14:08 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:08 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/957634717,v1:192.168.122.100:6807/957634717]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: OSD bench result of 3590.041475 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: Deploying daemon osd.2 on compute-0
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: from='osd.1 [v2:192.168.122.100:6806/957634717,v1:192.168.122.100:6807/957634717]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: from='osd.1 [v2:192.168.122.100:6806/957634717,v1:192.168.122.100:6807/957634717]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: osd.0 [v2:192.168.122.100:6802/2225645683,v1:192.168.122.100:6803/2225645683] boot
Jan 27 08:14:08 np0005597378 ceph-mon[75090]: from='osd.1 [v2:192.168.122.100:6806/957634717,v1:192.168.122.100:6807/957634717]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 27 08:14:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 27 08:14:08 np0005597378 systemd[1]: Started libpod-conmon-f73d0872a3708306d876315ad9ab869be7bc37b3ff17e61eddd500b3cb564495.scope.
Jan 27 08:14:08 np0005597378 podman[87533]: 2026-01-27 13:14:08.861799308 +0000 UTC m=+0.034146010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:08 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fcc38d2fc12835fff93db62c987e55ac0e96fb22b7fa4eb629c5db88faadec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fcc38d2fc12835fff93db62c987e55ac0e96fb22b7fa4eb629c5db88faadec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fcc38d2fc12835fff93db62c987e55ac0e96fb22b7fa4eb629c5db88faadec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fcc38d2fc12835fff93db62c987e55ac0e96fb22b7fa4eb629c5db88faadec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fcc38d2fc12835fff93db62c987e55ac0e96fb22b7fa4eb629c5db88faadec/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:08 np0005597378 podman[87533]: 2026-01-27 13:14:08.999096382 +0000 UTC m=+0.171443084 container init f73d0872a3708306d876315ad9ab869be7bc37b3ff17e61eddd500b3cb564495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate-test, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:14:09 np0005597378 podman[87533]: 2026-01-27 13:14:09.00554549 +0000 UTC m=+0.177892182 container start f73d0872a3708306d876315ad9ab869be7bc37b3ff17e61eddd500b3cb564495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:14:09 np0005597378 podman[87533]: 2026-01-27 13:14:09.009930384 +0000 UTC m=+0.182277036 container attach f73d0872a3708306d876315ad9ab869be7bc37b3ff17e61eddd500b3cb564495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate-test, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:09 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 27 08:14:09 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 27 08:14:09 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate-test[87550]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 27 08:14:09 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate-test[87550]:                            [--no-systemd] [--no-tmpfs]
Jan 27 08:14:09 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate-test[87550]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 27 08:14:09 np0005597378 systemd[1]: libpod-f73d0872a3708306d876315ad9ab869be7bc37b3ff17e61eddd500b3cb564495.scope: Deactivated successfully.
Jan 27 08:14:09 np0005597378 podman[87533]: 2026-01-27 13:14:09.231866611 +0000 UTC m=+0.404213273 container died f73d0872a3708306d876315ad9ab869be7bc37b3ff17e61eddd500b3cb564495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate-test, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:09 np0005597378 systemd[1]: var-lib-containers-storage-overlay-78fcc38d2fc12835fff93db62c987e55ac0e96fb22b7fa4eb629c5db88faadec-merged.mount: Deactivated successfully.
Jan 27 08:14:09 np0005597378 podman[87533]: 2026-01-27 13:14:09.28980349 +0000 UTC m=+0.462150142 container remove f73d0872a3708306d876315ad9ab869be7bc37b3ff17e61eddd500b3cb564495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:14:09 np0005597378 systemd[1]: libpod-conmon-f73d0872a3708306d876315ad9ab869be7bc37b3ff17e61eddd500b3cb564495.scope: Deactivated successfully.
Jan 27 08:14:09 np0005597378 systemd[1]: Reloading.
Jan 27 08:14:09 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:14:09 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:14:09 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] creating mgr pool
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 27 08:14:09 np0005597378 systemd[1]: Reloading.
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/957634717,v1:192.168.122.100:6807/957634717]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Jan 27 08:14:09 np0005597378 ceph-osd[86941]: osd.1 0 done with init, starting boot process
Jan 27 08:14:09 np0005597378 ceph-osd[86941]: osd.1 0 start_boot
Jan 27 08:14:09 np0005597378 ceph-osd[86941]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 27 08:14:09 np0005597378 ceph-osd[86941]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 27 08:14:09 np0005597378 ceph-osd[86941]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 27 08:14:09 np0005597378 ceph-osd[86941]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 27 08:14:09 np0005597378 ceph-osd[86941]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 27 08:14:09 np0005597378 ceph-osd[85897]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 27 08:14:09 np0005597378 ceph-osd[85897]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 27 08:14:09 np0005597378 ceph-osd[85897]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:09 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:09 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 27 08:14:09 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/957634717; not ready for session (expect reconnect)
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:09 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:09 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:14:09 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: from='osd.1 [v2:192.168.122.100:6806/957634717,v1:192.168.122.100:6807/957634717]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 27 08:14:09 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 27 08:14:10 np0005597378 systemd[1]: Starting Ceph osd.2 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e...
Jan 27 08:14:10 np0005597378 podman[87712]: 2026-01-27 13:14:10.347716419 +0000 UTC m=+0.062489167 container create d93879e82af8a134953f594da6387ddb340c1a4816249327b1f1678607d81efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 08:14:10 np0005597378 podman[87712]: 2026-01-27 13:14:10.305070129 +0000 UTC m=+0.019842927 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:10 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd998f9e92f337f77d249245246c209d731407e3e7d77feb90357f924f59915/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd998f9e92f337f77d249245246c209d731407e3e7d77feb90357f924f59915/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd998f9e92f337f77d249245246c209d731407e3e7d77feb90357f924f59915/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd998f9e92f337f77d249245246c209d731407e3e7d77feb90357f924f59915/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd998f9e92f337f77d249245246c209d731407e3e7d77feb90357f924f59915/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:10 np0005597378 podman[87712]: 2026-01-27 13:14:10.491077661 +0000 UTC m=+0.205850439 container init d93879e82af8a134953f594da6387ddb340c1a4816249327b1f1678607d81efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:10 np0005597378 podman[87712]: 2026-01-27 13:14:10.496777289 +0000 UTC m=+0.211550047 container start d93879e82af8a134953f594da6387ddb340c1a4816249327b1f1678607d81efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:14:10 np0005597378 podman[87712]: 2026-01-27 13:14:10.631810964 +0000 UTC m=+0.346583722 container attach d93879e82af8a134953f594da6387ddb340c1a4816249327b1f1678607d81efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:14:10 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate[87728]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:10 np0005597378 bash[87712]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:10 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate[87728]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:10 np0005597378 bash[87712]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 27 08:14:10 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/957634717; not ready for session (expect reconnect)
Jan 27 08:14:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v28: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 27 08:14:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:11 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 27 08:14:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 27 08:14:11 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 27 08:14:11 np0005597378 lvm[87811]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:14:11 np0005597378 lvm[87811]: VG ceph_vg0 finished
Jan 27 08:14:11 np0005597378 lvm[87814]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:14:11 np0005597378 lvm[87814]: VG ceph_vg1 finished
Jan 27 08:14:11 np0005597378 lvm[87816]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:14:11 np0005597378 lvm[87816]: VG ceph_vg2 finished
Jan 27 08:14:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:11 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:11 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:11 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate[87728]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 27 08:14:11 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate[87728]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:11 np0005597378 bash[87712]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 27 08:14:11 np0005597378 bash[87712]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:11 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate[87728]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:11 np0005597378 bash[87712]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 27 08:14:11 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate[87728]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 27 08:14:11 np0005597378 bash[87712]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 27 08:14:11 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate[87728]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 27 08:14:11 np0005597378 bash[87712]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 27 08:14:11 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate[87728]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:11 np0005597378 bash[87712]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:11 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate[87728]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:11 np0005597378 bash[87712]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:11 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate[87728]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 27 08:14:11 np0005597378 bash[87712]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 27 08:14:11 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate[87728]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 27 08:14:11 np0005597378 bash[87712]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 27 08:14:11 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate[87728]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 27 08:14:11 np0005597378 bash[87712]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 27 08:14:11 np0005597378 systemd[1]: libpod-d93879e82af8a134953f594da6387ddb340c1a4816249327b1f1678607d81efc.scope: Deactivated successfully.
Jan 27 08:14:11 np0005597378 systemd[1]: libpod-d93879e82af8a134953f594da6387ddb340c1a4816249327b1f1678607d81efc.scope: Consumed 1.522s CPU time.
Jan 27 08:14:11 np0005597378 podman[87712]: 2026-01-27 13:14:11.61620707 +0000 UTC m=+1.330979828 container died d93879e82af8a134953f594da6387ddb340c1a4816249327b1f1678607d81efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:11 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4fd998f9e92f337f77d249245246c209d731407e3e7d77feb90357f924f59915-merged.mount: Deactivated successfully.
Jan 27 08:14:11 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/957634717; not ready for session (expect reconnect)
Jan 27 08:14:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:11 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:11 np0005597378 podman[87712]: 2026-01-27 13:14:11.955710558 +0000 UTC m=+1.670483316 container remove d93879e82af8a134953f594da6387ddb340c1a4816249327b1f1678607d81efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:14:12 np0005597378 podman[87985]: 2026-01-27 13:14:12.119747458 +0000 UTC m=+0.019465198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:12 np0005597378 podman[87985]: 2026-01-27 13:14:12.262310499 +0000 UTC m=+0.162028259 container create afdea70c1a449391c06ba650aa28dec887290641c3f22dc3c84d234fc2f52238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:14:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1772b04f48d42cb8ee0a6c62b6aa13d2f0e19b050484a1c4fd94d8fbf526c065/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1772b04f48d42cb8ee0a6c62b6aa13d2f0e19b050484a1c4fd94d8fbf526c065/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1772b04f48d42cb8ee0a6c62b6aa13d2f0e19b050484a1c4fd94d8fbf526c065/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1772b04f48d42cb8ee0a6c62b6aa13d2f0e19b050484a1c4fd94d8fbf526c065/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1772b04f48d42cb8ee0a6c62b6aa13d2f0e19b050484a1c4fd94d8fbf526c065/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 27 08:14:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:14:12 np0005597378 podman[87985]: 2026-01-27 13:14:12.846531517 +0000 UTC m=+0.746249237 container init afdea70c1a449391c06ba650aa28dec887290641c3f22dc3c84d234fc2f52238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:14:12 np0005597378 podman[87985]: 2026-01-27 13:14:12.852546403 +0000 UTC m=+0.752264123 container start afdea70c1a449391c06ba650aa28dec887290641c3f22dc3c84d234fc2f52238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: pidfile_write: ignore empty --pid-file
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) close
Jan 27 08:14:12 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/957634717; not ready for session (expect reconnect)
Jan 27 08:14:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:12 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v30: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) close
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) close
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:12 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) close
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) close
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74400 /var/lib/ceph/osd/ceph-2/block) close
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e74000 /var/lib/ceph/osd/ceph-2/block) close
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: load: jerasure load: lrc 
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 27 08:14:13 np0005597378 bash[87985]: afdea70c1a449391c06ba650aa28dec887290641c3f22dc3c84d234fc2f52238
Jan 27 08:14:13 np0005597378 systemd[1]: Started Ceph osd.2 for 4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc8e75c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc9b0b800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc9b0b800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc9b0b800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc9b0b800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount shared_bdev_used = 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: RocksDB version: 7.9.2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Git sha 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: DB SUMMARY
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: DB Session ID:  2WQ7EHV3F5HAL2QSIGS8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: CURRENT file:  CURRENT
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: IDENTITY file:  IDENTITY
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                         Options.error_if_exists: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.create_if_missing: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                         Options.paranoid_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                                     Options.env: 0x564bc8d05ea0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                                Options.info_log: 0x564bc9d568a0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_file_opening_threads: 16
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                              Options.statistics: (nil)
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.use_fsync: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.max_log_file_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                         Options.allow_fallocate: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.use_direct_reads: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.create_missing_column_families: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                              Options.db_log_dir: 
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                                 Options.wal_dir: db.wal
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.advise_random_on_open: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.write_buffer_manager: 0x564bc8d6ab40
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                            Options.rate_limiter: (nil)
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.unordered_write: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.row_cache: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                              Options.wal_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.allow_ingest_behind: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.two_write_queues: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.manual_wal_flush: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.wal_compression: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.atomic_flush: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.log_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.allow_data_in_errors: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.db_host_id: __hostname__
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.max_background_jobs: 4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.max_background_compactions: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.max_subcompactions: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.max_open_files: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.bytes_per_sync: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.max_background_flushes: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Compression algorithms supported:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kZSTD supported: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kXpressCompression supported: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kBZip2Compression supported: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kLZ4Compression supported: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kZlibCompression supported: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kSnappyCompression supported: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564bc8d098d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d098d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d098d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d098d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d098d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d098d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d098d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d09a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d09a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d09a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: afceabae-9b13-40d0-bcf4-d58b218ccdaf
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519653245015, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519653246536, "job": 1, "event": "recovery_finished"}
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: freelist init
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: freelist _read_cfg
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs umount
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc9b0b800 /var/lib/ceph/osd/ceph-2/block) close
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc9b0b800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc9b0b800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc9b0b800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bdev(0x564bc9b0b800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluefs mount shared_bdev_used = 27262976
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: RocksDB version: 7.9.2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Git sha 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: DB SUMMARY
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: DB Session ID:  2WQ7EHV3F5HAL2QSIGS9
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: CURRENT file:  CURRENT
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: IDENTITY file:  IDENTITY
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                         Options.error_if_exists: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.create_if_missing: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                         Options.paranoid_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                                     Options.env: 0x564bc9f26a80
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                                Options.info_log: 0x564bc9d56960
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_file_opening_threads: 16
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                              Options.statistics: (nil)
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.use_fsync: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.max_log_file_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                         Options.allow_fallocate: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.use_direct_reads: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.create_missing_column_families: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                              Options.db_log_dir: 
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                                 Options.wal_dir: db.wal
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.advise_random_on_open: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.write_buffer_manager: 0x564bc8d6b900
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                            Options.rate_limiter: (nil)
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.unordered_write: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.row_cache: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                              Options.wal_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.allow_ingest_behind: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.two_write_queues: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.manual_wal_flush: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.wal_compression: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.atomic_flush: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.log_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.allow_data_in_errors: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.db_host_id: __hostname__
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.max_background_jobs: 4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.max_background_compactions: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.max_subcompactions: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.max_open_files: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.bytes_per_sync: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.max_background_flushes: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Compression algorithms supported:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kZSTD supported: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kXpressCompression supported: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kBZip2Compression supported: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kLZ4Compression supported: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kZlibCompression supported: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: 	kSnappyCompression supported: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564bc8d098d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d098d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d098d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d098d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564bc8d098d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564bc8d098d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d56bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d098d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d570c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d09a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d570c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d09a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:           Options.merge_operator: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.compaction_filter_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.sst_partitioner_factory: None
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bc9d570c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564bc8d09a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.write_buffer_size: 16777216
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.max_write_buffer_number: 64
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.compression: LZ4
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.num_levels: 7
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.level: 32767
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.compression_opts.strategy: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                  Options.compression_opts.enabled: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.arena_block_size: 1048576
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.disable_auto_compactions: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.inplace_update_support: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.bloom_locality: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                    Options.max_successive_merges: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.paranoid_file_checks: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.force_consistency_checks: 1
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.report_bg_io_stats: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                               Options.ttl: 2592000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                       Options.enable_blob_files: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                           Options.min_blob_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                          Options.blob_file_size: 268435456
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb:                Options.blob_file_starting_level: 0
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: afceabae-9b13-40d0-bcf4-d58b218ccdaf
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519653296198, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 27 08:14:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519653466019, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519653, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "afceabae-9b13-40d0-bcf4-d58b218ccdaf", "db_session_id": "2WQ7EHV3F5HAL2QSIGS9", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:14:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519653840614, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519653, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "afceabae-9b13-40d0-bcf4-d58b218ccdaf", "db_session_id": "2WQ7EHV3F5HAL2QSIGS9", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:14:13 np0005597378 ceph-osd[88005]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519653881129, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519653, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "afceabae-9b13-40d0-bcf4-d58b218ccdaf", "db_session_id": "2WQ7EHV3F5HAL2QSIGS9", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:14:13 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/957634717; not ready for session (expect reconnect)
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519654058712, "job": 1, "event": "recovery_finished"}
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 27 08:14:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:14 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564bc9f70000
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: rocksdb: DB pointer 0x564bc9f10000
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1.0 total, 1.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.0 total, 1.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.0 total, 1.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.0 total, 1.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 460.80 MB usag
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: _get_class not permitted to load lua
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: _get_class not permitted to load sdk
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: osd.2 0 load_pgs
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: osd.2 0 load_pgs opened 0 pgs
Jan 27 08:14:14 np0005597378 ceph-osd[88005]: osd.2 0 log_to_monitors true
Jan 27 08:14:14 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2[88001]: 2026-01-27T13:14:14.300+0000 7f2473f8e8c0 -1 osd.2 0 log_to_monitors true
Jan 27 08:14:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 27 08:14:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/566339909,v1:192.168.122.100:6811/566339909]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 27 08:14:14 np0005597378 podman[88517]: 2026-01-27 13:14:14.493516022 +0000 UTC m=+0.054945321 container create b0d39489a0d97ffe1f2eeb5b03c858a6b82c2ad287530a2863aa091ff3500e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 08:14:14 np0005597378 podman[88517]: 2026-01-27 13:14:14.457396212 +0000 UTC m=+0.018825501 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:14 np0005597378 systemd[1]: Started libpod-conmon-b0d39489a0d97ffe1f2eeb5b03c858a6b82c2ad287530a2863aa091ff3500e27.scope.
Jan 27 08:14:14 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:14 np0005597378 podman[88517]: 2026-01-27 13:14:14.763574872 +0000 UTC m=+0.325004151 container init b0d39489a0d97ffe1f2eeb5b03c858a6b82c2ad287530a2863aa091ff3500e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:14 np0005597378 podman[88517]: 2026-01-27 13:14:14.772051693 +0000 UTC m=+0.333480962 container start b0d39489a0d97ffe1f2eeb5b03c858a6b82c2ad287530a2863aa091ff3500e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:14 np0005597378 friendly_jones[88533]: 167 167
Jan 27 08:14:14 np0005597378 systemd[1]: libpod-b0d39489a0d97ffe1f2eeb5b03c858a6b82c2ad287530a2863aa091ff3500e27.scope: Deactivated successfully.
Jan 27 08:14:14 np0005597378 conmon[88533]: conmon b0d39489a0d97ffe1f2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0d39489a0d97ffe1f2eeb5b03c858a6b82c2ad287530a2863aa091ff3500e27.scope/container/memory.events
Jan 27 08:14:14 np0005597378 podman[88517]: 2026-01-27 13:14:14.877142469 +0000 UTC m=+0.438571758 container attach b0d39489a0d97ffe1f2eeb5b03c858a6b82c2ad287530a2863aa091ff3500e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:14 np0005597378 podman[88517]: 2026-01-27 13:14:14.877578291 +0000 UTC m=+0.439007560 container died b0d39489a0d97ffe1f2eeb5b03c858a6b82c2ad287530a2863aa091ff3500e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:14 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/957634717; not ready for session (expect reconnect)
Jan 27 08:14:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:14 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v31: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 27 08:14:14 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:14 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:14 np0005597378 ceph-mon[75090]: from='osd.2 [v2:192.168.122.100:6810/566339909,v1:192.168.122.100:6811/566339909]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 27 08:14:15 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7b27b30ca49d84c24f0c6eef7bc37fa5d5296d8f24170d3fd92e52a0cfa8c7f9-merged.mount: Deactivated successfully.
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/566339909,v1:192.168.122.100:6811/566339909]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 27 08:14:15 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 27 08:14:15 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/566339909,v1:192.168.122.100:6811/566339909]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:15 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:15 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:15 np0005597378 podman[88517]: 2026-01-27 13:14:15.675422999 +0000 UTC m=+1.236852268 container remove b0d39489a0d97ffe1f2eeb5b03c858a6b82c2ad287530a2863aa091ff3500e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:15 np0005597378 systemd[1]: libpod-conmon-b0d39489a0d97ffe1f2eeb5b03c858a6b82c2ad287530a2863aa091ff3500e27.scope: Deactivated successfully.
Jan 27 08:14:15 np0005597378 podman[88560]: 2026-01-27 13:14:15.809072829 +0000 UTC m=+0.021844630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:15 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/957634717; not ready for session (expect reconnect)
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:15 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:15 np0005597378 podman[88560]: 2026-01-27 13:14:15.967045622 +0000 UTC m=+0.179817423 container create 9f8fc41d1f890084a68c509d6d9d718c15d9ae47d2acf4ee11418b0fd9f24bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: from='osd.2 [v2:192.168.122.100:6810/566339909,v1:192.168.122.100:6811/566339909]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 27 08:14:15 np0005597378 ceph-mon[75090]: from='osd.2 [v2:192.168.122.100:6810/566339909,v1:192.168.122.100:6811/566339909]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 27 08:14:16 np0005597378 systemd[1]: Started libpod-conmon-9f8fc41d1f890084a68c509d6d9d718c15d9ae47d2acf4ee11418b0fd9f24bdc.scope.
Jan 27 08:14:16 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e65e574d0778218ed56753b638c43ee055ed4ba41f3d5d5c37fdcf8b8f9c027/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e65e574d0778218ed56753b638c43ee055ed4ba41f3d5d5c37fdcf8b8f9c027/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e65e574d0778218ed56753b638c43ee055ed4ba41f3d5d5c37fdcf8b8f9c027/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e65e574d0778218ed56753b638c43ee055ed4ba41f3d5d5c37fdcf8b8f9c027/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 27 08:14:16 np0005597378 podman[88560]: 2026-01-27 13:14:16.24512407 +0000 UTC m=+0.457895881 container init 9f8fc41d1f890084a68c509d6d9d718c15d9ae47d2acf4ee11418b0fd9f24bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/566339909,v1:192.168.122.100:6811/566339909]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 27 08:14:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 27 08:14:16 np0005597378 ceph-osd[88005]: osd.2 0 done with init, starting boot process
Jan 27 08:14:16 np0005597378 ceph-osd[88005]: osd.2 0 start_boot
Jan 27 08:14:16 np0005597378 ceph-osd[88005]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 27 08:14:16 np0005597378 ceph-osd[88005]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 27 08:14:16 np0005597378 ceph-osd[88005]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 27 08:14:16 np0005597378 ceph-osd[88005]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 27 08:14:16 np0005597378 ceph-osd[88005]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 27 08:14:16 np0005597378 podman[88560]: 2026-01-27 13:14:16.25815481 +0000 UTC m=+0.470926651 container start 9f8fc41d1f890084a68c509d6d9d718c15d9ae47d2acf4ee11418b0fd9f24bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 08:14:16 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 27 08:14:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:16 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:16 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:16 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/566339909; not ready for session (expect reconnect)
Jan 27 08:14:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:16 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:16 np0005597378 podman[88560]: 2026-01-27 13:14:16.496604597 +0000 UTC m=+0.709376428 container attach 9f8fc41d1f890084a68c509d6d9d718c15d9ae47d2acf4ee11418b0fd9f24bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:14:16 np0005597378 lvm[88655]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:14:16 np0005597378 lvm[88654]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:14:16 np0005597378 lvm[88655]: VG ceph_vg1 finished
Jan 27 08:14:16 np0005597378 lvm[88654]: VG ceph_vg0 finished
Jan 27 08:14:16 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/957634717; not ready for session (expect reconnect)
Jan 27 08:14:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:16 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 27 08:14:16 np0005597378 lvm[88657]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:14:16 np0005597378 lvm[88657]: VG ceph_vg2 finished
Jan 27 08:14:16 np0005597378 lvm[88658]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:14:16 np0005597378 lvm[88658]: VG ceph_vg0 finished
Jan 27 08:14:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:14:16
Jan 27 08:14:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:14:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 27 08:14:17 np0005597378 romantic_lumiere[88576]: {}
Jan 27 08:14:17 np0005597378 systemd[1]: libpod-9f8fc41d1f890084a68c509d6d9d718c15d9ae47d2acf4ee11418b0fd9f24bdc.scope: Deactivated successfully.
Jan 27 08:14:17 np0005597378 systemd[1]: libpod-9f8fc41d1f890084a68c509d6d9d718c15d9ae47d2acf4ee11418b0fd9f24bdc.scope: Consumed 1.235s CPU time.
Jan 27 08:14:17 np0005597378 podman[88661]: 2026-01-27 13:14:17.120100687 +0000 UTC m=+0.021943252 container died 9f8fc41d1f890084a68c509d6d9d718c15d9ae47d2acf4ee11418b0fd9f24bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 08:14:17 np0005597378 ceph-mon[75090]: from='osd.2 [v2:192.168.122.100:6810/566339909,v1:192.168.122.100:6811/566339909]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/566339909; not ready for session (expect reconnect)
Jan 27 08:14:17 np0005597378 python3[88699]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 21470642176
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:14:17 np0005597378 podman[88701]: 2026-01-27 13:14:17.743199228 +0000 UTC m=+0.020640459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:14:17 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1e65e574d0778218ed56753b638c43ee055ed4ba41f3d5d5c37fdcf8b8f9c027-merged.mount: Deactivated successfully.
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/957634717; not ready for session (expect reconnect)
Jan 27 08:14:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:17 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:17 np0005597378 podman[88661]: 2026-01-27 13:14:17.952943948 +0000 UTC m=+0.854786483 container remove 9f8fc41d1f890084a68c509d6d9d718c15d9ae47d2acf4ee11418b0fd9f24bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 08:14:17 np0005597378 systemd[1]: libpod-conmon-9f8fc41d1f890084a68c509d6d9d718c15d9ae47d2acf4ee11418b0fd9f24bdc.scope: Deactivated successfully.
Jan 27 08:14:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:18 np0005597378 podman[88701]: 2026-01-27 13:14:18.037473078 +0000 UTC m=+0.314914289 container create c683b04218526770ae9a9fc13a908aa3a835777d36a3f01133f75efc4b12197c (image=quay.io/ceph/ceph:v20, name=suspicious_wescoff, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:14:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:18 np0005597378 systemd[1]: Started libpod-conmon-c683b04218526770ae9a9fc13a908aa3a835777d36a3f01133f75efc4b12197c.scope.
Jan 27 08:14:18 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f1336fa8a7a7c1cd37b30e8681954f05ec15915c33d465873f1914b4c76629/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f1336fa8a7a7c1cd37b30e8681954f05ec15915c33d465873f1914b4c76629/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f1336fa8a7a7c1cd37b30e8681954f05ec15915c33d465873f1914b4c76629/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:18 np0005597378 podman[88701]: 2026-01-27 13:14:18.204140887 +0000 UTC m=+0.481582118 container init c683b04218526770ae9a9fc13a908aa3a835777d36a3f01133f75efc4b12197c (image=quay.io/ceph/ceph:v20, name=suspicious_wescoff, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:14:18 np0005597378 podman[88701]: 2026-01-27 13:14:18.215112023 +0000 UTC m=+0.492553234 container start c683b04218526770ae9a9fc13a908aa3a835777d36a3f01133f75efc4b12197c (image=quay.io/ceph/ceph:v20, name=suspicious_wescoff, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 27 08:14:18 np0005597378 podman[88701]: 2026-01-27 13:14:18.246361127 +0000 UTC m=+0.523802358 container attach c683b04218526770ae9a9fc13a908aa3a835777d36a3f01133f75efc4b12197c (image=quay.io/ceph/ceph:v20, name=suspicious_wescoff, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:18 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/566339909; not ready for session (expect reconnect)
Jan 27 08:14:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:18 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 27 08:14:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2608339574' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 27 08:14:18 np0005597378 suspicious_wescoff[88733]: 
Jan 27 08:14:18 np0005597378 suspicious_wescoff[88733]: {"fsid":"4d8fd694-f443-5fb1-b612-70034b2f3c6e","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":81,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":13,"num_osds":3,"num_up_osds":1,"osd_up_since":1769519648,"num_in_osds":3,"osd_in_since":1769519634,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":447033344,"bytes_avail":21023608832,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-27T13:12:53:733380+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-27T13:12:53.736512+0000","services":{}},"progress_events":{}}
Jan 27 08:14:18 np0005597378 podman[88862]: 2026-01-27 13:14:18.774375322 +0000 UTC m=+0.113300210 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:14:18 np0005597378 systemd[1]: libpod-c683b04218526770ae9a9fc13a908aa3a835777d36a3f01133f75efc4b12197c.scope: Deactivated successfully.
Jan 27 08:14:18 np0005597378 podman[88701]: 2026-01-27 13:14:18.809487386 +0000 UTC m=+1.086928597 container died c683b04218526770ae9a9fc13a908aa3a835777d36a3f01133f75efc4b12197c (image=quay.io/ceph/ceph:v20, name=suspicious_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:18 np0005597378 systemd[1]: var-lib-containers-storage-overlay-33f1336fa8a7a7c1cd37b30e8681954f05ec15915c33d465873f1914b4c76629-merged.mount: Deactivated successfully.
Jan 27 08:14:18 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/957634717; not ready for session (expect reconnect)
Jan 27 08:14:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:18 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 27 08:14:19 np0005597378 podman[88884]: 2026-01-27 13:14:19.006446092 +0000 UTC m=+0.215393227 container remove c683b04218526770ae9a9fc13a908aa3a835777d36a3f01133f75efc4b12197c (image=quay.io/ceph/ceph:v20, name=suspicious_wescoff, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:19 np0005597378 systemd[1]: libpod-conmon-c683b04218526770ae9a9fc13a908aa3a835777d36a3f01133f75efc4b12197c.scope: Deactivated successfully.
Jan 27 08:14:19 np0005597378 podman[88862]: 2026-01-27 13:14:19.023852026 +0000 UTC m=+0.362776824 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 08:14:19 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/566339909; not ready for session (expect reconnect)
Jan 27 08:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:19 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:19 np0005597378 python3[88999]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:19 np0005597378 podman[89036]: 2026-01-27 13:14:19.639294287 +0000 UTC m=+0.063939535 container create 4fa4df4336099f6de4a3e4ab85e5a6117fa86818f16c1b2502f89c0b618158a7 (image=quay.io/ceph/ceph:v20, name=jolly_einstein, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:19 np0005597378 podman[89036]: 2026-01-27 13:14:19.596415081 +0000 UTC m=+0.021060349 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:19 np0005597378 systemd[1]: Started libpod-conmon-4fa4df4336099f6de4a3e4ab85e5a6117fa86818f16c1b2502f89c0b618158a7.scope.
Jan 27 08:14:19 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f46b6bb06240dce70a4957db5c68df1fe1ced6480d9e85fb4ee34c2c68cc6b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f46b6bb06240dce70a4957db5c68df1fe1ced6480d9e85fb4ee34c2c68cc6b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:19 np0005597378 podman[89036]: 2026-01-27 13:14:19.796481399 +0000 UTC m=+0.221126697 container init 4fa4df4336099f6de4a3e4ab85e5a6117fa86818f16c1b2502f89c0b618158a7 (image=quay.io/ceph/ceph:v20, name=jolly_einstein, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:19 np0005597378 podman[89036]: 2026-01-27 13:14:19.802609608 +0000 UTC m=+0.227254856 container start 4fa4df4336099f6de4a3e4ab85e5a6117fa86818f16c1b2502f89c0b618158a7 (image=quay.io/ceph/ceph:v20, name=jolly_einstein, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:19 np0005597378 podman[89036]: 2026-01-27 13:14:19.832318501 +0000 UTC m=+0.256963749 container attach 4fa4df4336099f6de4a3e4ab85e5a6117fa86818f16c1b2502f89c0b618158a7 (image=quay.io/ceph/ceph:v20, name=jolly_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:19 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/957634717; not ready for session (expect reconnect)
Jan 27 08:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:19 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 27 08:14:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1170926055' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 27 08:14:20 np0005597378 podman[89152]: 2026-01-27 13:14:20.229616674 +0000 UTC m=+0.017782054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:20 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/566339909; not ready for session (expect reconnect)
Jan 27 08:14:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:20 np0005597378 podman[89152]: 2026-01-27 13:14:20.499531901 +0000 UTC m=+0.287697251 container create 971a8786281b4970569ed0785da981ec6a1693182cec1d2a868babd0e9ca8e35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_visvesvaraya, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:14:20 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:20 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1170926055' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 27 08:14:20 np0005597378 systemd[1]: Started libpod-conmon-971a8786281b4970569ed0785da981ec6a1693182cec1d2a868babd0e9ca8e35.scope.
Jan 27 08:14:20 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 27 08:14:20 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/957634717; not ready for session (expect reconnect)
Jan 27 08:14:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 27 08:14:20 np0005597378 podman[89152]: 2026-01-27 13:14:20.967079882 +0000 UTC m=+0.755245242 container init 971a8786281b4970569ed0785da981ec6a1693182cec1d2a868babd0e9ca8e35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 08:14:20 np0005597378 podman[89152]: 2026-01-27 13:14:20.975949213 +0000 UTC m=+0.764114563 container start 971a8786281b4970569ed0785da981ec6a1693182cec1d2a868babd0e9ca8e35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:14:20 np0005597378 upbeat_visvesvaraya[89171]: 167 167
Jan 27 08:14:20 np0005597378 systemd[1]: libpod-971a8786281b4970569ed0785da981ec6a1693182cec1d2a868babd0e9ca8e35.scope: Deactivated successfully.
Jan 27 08:14:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:20 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1170926055' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:14:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e14 e14: 3 total, 1 up, 3 in
Jan 27 08:14:21 np0005597378 podman[89152]: 2026-01-27 13:14:21.023894031 +0000 UTC m=+0.812059411 container attach 971a8786281b4970569ed0785da981ec6a1693182cec1d2a868babd0e9ca8e35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:21 np0005597378 jolly_einstein[89067]: pool 'vms' created
Jan 27 08:14:21 np0005597378 podman[89152]: 2026-01-27 13:14:21.024908157 +0000 UTC m=+0.813073507 container died 971a8786281b4970569ed0785da981ec6a1693182cec1d2a868babd0e9ca8e35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_visvesvaraya, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:21 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 1 up, 3 in
Jan 27 08:14:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:21 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:21 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:21 np0005597378 systemd[1]: libpod-4fa4df4336099f6de4a3e4ab85e5a6117fa86818f16c1b2502f89c0b618158a7.scope: Deactivated successfully.
Jan 27 08:14:21 np0005597378 ceph-osd[86941]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 7.089 iops: 1814.781 elapsed_sec: 1.653
Jan 27 08:14:21 np0005597378 ceph-osd[86941]: log_channel(cluster) log [WRN] : OSD bench result of 1814.781158 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 27 08:14:21 np0005597378 ceph-osd[86941]: osd.1 0 waiting for initial osdmap
Jan 27 08:14:21 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1[86937]: 2026-01-27T13:14:21.055+0000 7f47e49fb640 -1 osd.1 0 waiting for initial osdmap
Jan 27 08:14:21 np0005597378 podman[89036]: 2026-01-27 13:14:21.063229094 +0000 UTC m=+1.487874352 container died 4fa4df4336099f6de4a3e4ab85e5a6117fa86818f16c1b2502f89c0b618158a7 (image=quay.io/ceph/ceph:v20, name=jolly_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:21 np0005597378 ceph-osd[86941]: osd.1 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 27 08:14:21 np0005597378 ceph-osd[86941]: osd.1 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 27 08:14:21 np0005597378 ceph-osd[86941]: osd.1 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 27 08:14:21 np0005597378 ceph-osd[86941]: osd.1 14 check_osdmap_features require_osd_release unknown -> tentacle
Jan 27 08:14:21 np0005597378 ceph-osd[86941]: osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 27 08:14:21 np0005597378 ceph-osd[86941]: osd.1 14 set_numa_affinity not setting numa affinity
Jan 27 08:14:21 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-1[86937]: 2026-01-27T13:14:21.124+0000 7f47defee640 -1 osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 27 08:14:21 np0005597378 ceph-osd[86941]: osd.1 14 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 27 08:14:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-69a5a0081d579cb765c9ef979753882f03103124c9d86acdb020cfbd4fea29e1-merged.mount: Deactivated successfully.
Jan 27 08:14:21 np0005597378 podman[89152]: 2026-01-27 13:14:21.206137825 +0000 UTC m=+0.994303175 container remove 971a8786281b4970569ed0785da981ec6a1693182cec1d2a868babd0e9ca8e35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:21 np0005597378 systemd[1]: libpod-conmon-971a8786281b4970569ed0785da981ec6a1693182cec1d2a868babd0e9ca8e35.scope: Deactivated successfully.
Jan 27 08:14:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a3f46b6bb06240dce70a4957db5c68df1fe1ced6480d9e85fb4ee34c2c68cc6b-merged.mount: Deactivated successfully.
Jan 27 08:14:21 np0005597378 podman[89036]: 2026-01-27 13:14:21.286732933 +0000 UTC m=+1.711378181 container remove 4fa4df4336099f6de4a3e4ab85e5a6117fa86818f16c1b2502f89c0b618158a7 (image=quay.io/ceph/ceph:v20, name=jolly_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:14:21 np0005597378 systemd[1]: libpod-conmon-4fa4df4336099f6de4a3e4ab85e5a6117fa86818f16c1b2502f89c0b618158a7.scope: Deactivated successfully.
Jan 27 08:14:21 np0005597378 podman[89211]: 2026-01-27 13:14:21.363227755 +0000 UTC m=+0.048613097 container create 0e7a1383264121a1e28cadeb40e2e3dd1ace3d2f3e9f655d7eb4296dd3c8b504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_colden, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:21 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/566339909; not ready for session (expect reconnect)
Jan 27 08:14:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:21 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:21 np0005597378 systemd[1]: Started libpod-conmon-0e7a1383264121a1e28cadeb40e2e3dd1ace3d2f3e9f655d7eb4296dd3c8b504.scope.
Jan 27 08:14:21 np0005597378 podman[89211]: 2026-01-27 13:14:21.334907367 +0000 UTC m=+0.020292719 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddfed1513a82ae7e70eed667c0851dfd01434c8ca67cc98963d3e2204a017b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddfed1513a82ae7e70eed667c0851dfd01434c8ca67cc98963d3e2204a017b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddfed1513a82ae7e70eed667c0851dfd01434c8ca67cc98963d3e2204a017b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ddfed1513a82ae7e70eed667c0851dfd01434c8ca67cc98963d3e2204a017b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:21 np0005597378 podman[89211]: 2026-01-27 13:14:21.453075664 +0000 UTC m=+0.138461016 container init 0e7a1383264121a1e28cadeb40e2e3dd1ace3d2f3e9f655d7eb4296dd3c8b504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_colden, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 08:14:21 np0005597378 podman[89211]: 2026-01-27 13:14:21.460074256 +0000 UTC m=+0.145459598 container start 0e7a1383264121a1e28cadeb40e2e3dd1ace3d2f3e9f655d7eb4296dd3c8b504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:14:21 np0005597378 podman[89211]: 2026-01-27 13:14:21.480312312 +0000 UTC m=+0.165697654 container attach 0e7a1383264121a1e28cadeb40e2e3dd1ace3d2f3e9f655d7eb4296dd3c8b504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_colden, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:14:21 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1170926055' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:14:21 np0005597378 ceph-mon[75090]: OSD bench result of 1814.781158 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 27 08:14:21 np0005597378 python3[89257]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:21 np0005597378 podman[89258]: 2026-01-27 13:14:21.676269543 +0000 UTC m=+0.048097212 container create b408f61ee90529bd743baa44af07bf52be652154faef342353739620bdcd442d (image=quay.io/ceph/ceph:v20, name=pedantic_elgamal, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:14:21 np0005597378 systemd[1]: Started libpod-conmon-b408f61ee90529bd743baa44af07bf52be652154faef342353739620bdcd442d.scope.
Jan 27 08:14:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:21 np0005597378 podman[89258]: 2026-01-27 13:14:21.649115767 +0000 UTC m=+0.020943456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42c616a823e8e6e4aa635448adc8c91b2f879bd3e0a2ba359ea43fc03fe8e8f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42c616a823e8e6e4aa635448adc8c91b2f879bd3e0a2ba359ea43fc03fe8e8f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:21 np0005597378 podman[89258]: 2026-01-27 13:14:21.761697207 +0000 UTC m=+0.133524886 container init b408f61ee90529bd743baa44af07bf52be652154faef342353739620bdcd442d (image=quay.io/ceph/ceph:v20, name=pedantic_elgamal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:21 np0005597378 podman[89258]: 2026-01-27 13:14:21.76759111 +0000 UTC m=+0.139418779 container start b408f61ee90529bd743baa44af07bf52be652154faef342353739620bdcd442d (image=quay.io/ceph/ceph:v20, name=pedantic_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:21 np0005597378 podman[89258]: 2026-01-27 13:14:21.786702168 +0000 UTC m=+0.158529837 container attach b408f61ee90529bd743baa44af07bf52be652154faef342353739620bdcd442d (image=quay.io/ceph/ceph:v20, name=pedantic_elgamal, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:21 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/957634717; not ready for session (expect reconnect)
Jan 27 08:14:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:21 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]: [
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:    {
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:        "available": false,
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:        "being_replaced": false,
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:        "ceph_device_lvm": false,
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:        "lsm_data": {},
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:        "lvs": [],
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:        "path": "/dev/sr0",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:        "rejected_reasons": [
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "Has a FileSystem",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "Insufficient space (<5GB)"
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:        ],
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:        "sys_api": {
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "actuators": null,
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "device_nodes": [
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:                "sr0"
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            ],
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "devname": "sr0",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "human_readable_size": "482.00 KB",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "id_bus": "ata",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "model": "QEMU DVD-ROM",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "nr_requests": "2",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "parent": "/dev/sr0",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "partitions": {},
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "path": "/dev/sr0",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "removable": "1",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "rev": "2.5+",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "ro": "0",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "rotational": "1",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "sas_address": "",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "sas_device_handle": "",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "scheduler_mode": "mq-deadline",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "sectors": 0,
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "sectorsize": "2048",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "size": 493568.0,
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "support_discard": "2048",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "type": "disk",
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:            "vendor": "QEMU"
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:        }
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]:    }
Jan 27 08:14:21 np0005597378 wonderful_colden[89228]: ]
Jan 27 08:14:22 np0005597378 systemd[1]: libpod-0e7a1383264121a1e28cadeb40e2e3dd1ace3d2f3e9f655d7eb4296dd3c8b504.scope: Deactivated successfully.
Jan 27 08:14:22 np0005597378 podman[89211]: 2026-01-27 13:14:22.019721595 +0000 UTC m=+0.705106937 container died 0e7a1383264121a1e28cadeb40e2e3dd1ace3d2f3e9f655d7eb4296dd3c8b504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_colden, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/957634717,v1:192.168.122.100:6807/957634717] boot
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:22 np0005597378 ceph-osd[86941]: osd.1 15 state: booting -> active
Jan 27 08:14:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[10,15)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:14:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-48ddfed1513a82ae7e70eed667c0851dfd01434c8ca67cc98963d3e2204a017b-merged.mount: Deactivated successfully.
Jan 27 08:14:22 np0005597378 podman[89211]: 2026-01-27 13:14:22.098457094 +0000 UTC m=+0.783842436 container remove 0e7a1383264121a1e28cadeb40e2e3dd1ace3d2f3e9f655d7eb4296dd3c8b504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_colden, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:22 np0005597378 systemd[1]: libpod-conmon-0e7a1383264121a1e28cadeb40e2e3dd1ace3d2f3e9f655d7eb4296dd3c8b504.scope: Deactivated successfully.
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43689k
Jan 27 08:14:22 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43689k
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 27 08:14:22 np0005597378 ceph-mgr[75385]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Jan 27 08:14:22 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3789236934' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-osd[88005]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 36.450 iops: 9331.139 elapsed_sec: 0.322
Jan 27 08:14:22 np0005597378 ceph-osd[88005]: log_channel(cluster) log [WRN] : OSD bench result of 9331.138720 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 27 08:14:22 np0005597378 ceph-osd[88005]: osd.2 0 waiting for initial osdmap
Jan 27 08:14:22 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2[88001]: 2026-01-27T13:14:22.318+0000 7f246ff10640 -1 osd.2 0 waiting for initial osdmap
Jan 27 08:14:22 np0005597378 ceph-osd[88005]: osd.2 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 27 08:14:22 np0005597378 ceph-osd[88005]: osd.2 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 27 08:14:22 np0005597378 ceph-osd[88005]: osd.2 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 27 08:14:22 np0005597378 ceph-osd[88005]: osd.2 15 check_osdmap_features require_osd_release unknown -> tentacle
Jan 27 08:14:22 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-osd-2[88001]: 2026-01-27T13:14:22.338+0000 7f246ad15640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 27 08:14:22 np0005597378 ceph-osd[88005]: osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 27 08:14:22 np0005597378 ceph-osd[88005]: osd.2 15 set_numa_affinity not setting numa affinity
Jan 27 08:14:22 np0005597378 ceph-osd[88005]: osd.2 15 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 27 08:14:22 np0005597378 ceph-mgr[75385]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/566339909; not ready for session (expect reconnect)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mgr[75385]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 27 08:14:22 np0005597378 podman[90158]: 2026-01-27 13:14:22.579985679 +0000 UTC m=+0.042251750 container create 9b601a481d9c4f3c621531e2ff118e611f5f169a6e18a8066da2040138070950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:22 np0005597378 systemd[1]: Started libpod-conmon-9b601a481d9c4f3c621531e2ff118e611f5f169a6e18a8066da2040138070950.scope.
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: osd.1 [v2:192.168.122.100:6806/957634717,v1:192.168.122.100:6807/957634717] boot
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: Adjusting osd_memory_target on compute-0 to 43689k
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3789236934' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:14:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:22 np0005597378 podman[90158]: 2026-01-27 13:14:22.655214037 +0000 UTC m=+0.117480108 container init 9b601a481d9c4f3c621531e2ff118e611f5f169a6e18a8066da2040138070950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:22 np0005597378 podman[90158]: 2026-01-27 13:14:22.561604801 +0000 UTC m=+0.023870852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:22 np0005597378 podman[90158]: 2026-01-27 13:14:22.662868777 +0000 UTC m=+0.125134808 container start 9b601a481d9c4f3c621531e2ff118e611f5f169a6e18a8066da2040138070950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nash, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:22 np0005597378 podman[90158]: 2026-01-27 13:14:22.666189883 +0000 UTC m=+0.128455964 container attach 9b601a481d9c4f3c621531e2ff118e611f5f169a6e18a8066da2040138070950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nash, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Jan 27 08:14:22 np0005597378 priceless_nash[90174]: 167 167
Jan 27 08:14:22 np0005597378 systemd[1]: libpod-9b601a481d9c4f3c621531e2ff118e611f5f169a6e18a8066da2040138070950.scope: Deactivated successfully.
Jan 27 08:14:22 np0005597378 podman[90158]: 2026-01-27 13:14:22.668570385 +0000 UTC m=+0.130836436 container died 9b601a481d9c4f3c621531e2ff118e611f5f169a6e18a8066da2040138070950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7f13ccf904fae5ee1b3155c27721bea2834729d9ad5348785712308bf7bcb3ea-merged.mount: Deactivated successfully.
Jan 27 08:14:22 np0005597378 podman[90158]: 2026-01-27 13:14:22.712129789 +0000 UTC m=+0.174395830 container remove 9b601a481d9c4f3c621531e2ff118e611f5f169a6e18a8066da2040138070950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:14:22 np0005597378 systemd[1]: libpod-conmon-9b601a481d9c4f3c621531e2ff118e611f5f169a6e18a8066da2040138070950.scope: Deactivated successfully.
Jan 27 08:14:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:14:22 np0005597378 podman[90198]: 2026-01-27 13:14:22.862344739 +0000 UTC m=+0.041307296 container create 159e113727f7d0c2a8f4fe4fa80a4fb8b3e6c6fd15157b8d42627ca2f77c32c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:14:22 np0005597378 systemd[1]: Started libpod-conmon-159e113727f7d0c2a8f4fe4fa80a4fb8b3e6c6fd15157b8d42627ca2f77c32c1.scope.
Jan 27 08:14:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v39: 2 pgs: 1 creating+peering, 1 unknown; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 27 08:14:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b01358f8cca1c49a3c73a2661932838918ea416bead2abd066f8d23667d92b70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b01358f8cca1c49a3c73a2661932838918ea416bead2abd066f8d23667d92b70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b01358f8cca1c49a3c73a2661932838918ea416bead2abd066f8d23667d92b70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b01358f8cca1c49a3c73a2661932838918ea416bead2abd066f8d23667d92b70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b01358f8cca1c49a3c73a2661932838918ea416bead2abd066f8d23667d92b70/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:22 np0005597378 podman[90198]: 2026-01-27 13:14:22.840976883 +0000 UTC m=+0.019939450 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:22 np0005597378 podman[90198]: 2026-01-27 13:14:22.949839047 +0000 UTC m=+0.128801634 container init 159e113727f7d0c2a8f4fe4fa80a4fb8b3e6c6fd15157b8d42627ca2f77c32c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dubinsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:22 np0005597378 podman[90198]: 2026-01-27 13:14:22.962277291 +0000 UTC m=+0.141239868 container start 159e113727f7d0c2a8f4fe4fa80a4fb8b3e6c6fd15157b8d42627ca2f77c32c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:22 np0005597378 podman[90198]: 2026-01-27 13:14:22.966745517 +0000 UTC m=+0.145708084 container attach 159e113727f7d0c2a8f4fe4fa80a4fb8b3e6c6fd15157b8d42627ca2f77c32c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dubinsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 27 08:14:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3789236934' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:14:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Jan 27 08:14:23 np0005597378 pedantic_elgamal[89278]: pool 'volumes' created
Jan 27 08:14:23 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/566339909,v1:192.168.122.100:6811/566339909] boot
Jan 27 08:14:23 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Jan 27 08:14:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 27 08:14:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 27 08:14:23 np0005597378 ceph-osd[88005]: osd.2 16 state: booting -> active
Jan 27 08:14:23 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 16 pg[2.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=16) [2] r=0 lpr=16 pi=[14,16)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:14:23 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:14:23 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=15/16 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[10,15)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:14:23 np0005597378 systemd[1]: libpod-b408f61ee90529bd743baa44af07bf52be652154faef342353739620bdcd442d.scope: Deactivated successfully.
Jan 27 08:14:23 np0005597378 podman[89258]: 2026-01-27 13:14:23.060641071 +0000 UTC m=+1.432468750 container died b408f61ee90529bd743baa44af07bf52be652154faef342353739620bdcd442d (image=quay.io/ceph/ceph:v20, name=pedantic_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:14:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b42c616a823e8e6e4aa635448adc8c91b2f879bd3e0a2ba359ea43fc03fe8e8f-merged.mount: Deactivated successfully.
Jan 27 08:14:23 np0005597378 podman[89258]: 2026-01-27 13:14:23.107516541 +0000 UTC m=+1.479344230 container remove b408f61ee90529bd743baa44af07bf52be652154faef342353739620bdcd442d (image=quay.io/ceph/ceph:v20, name=pedantic_elgamal, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:23 np0005597378 systemd[1]: libpod-conmon-b408f61ee90529bd743baa44af07bf52be652154faef342353739620bdcd442d.scope: Deactivated successfully.
Jan 27 08:14:23 np0005597378 relaxed_dubinsky[90214]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:14:23 np0005597378 relaxed_dubinsky[90214]: --> All data devices are unavailable
Jan 27 08:14:23 np0005597378 python3[90264]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:23 np0005597378 systemd[1]: libpod-159e113727f7d0c2a8f4fe4fa80a4fb8b3e6c6fd15157b8d42627ca2f77c32c1.scope: Deactivated successfully.
Jan 27 08:14:23 np0005597378 podman[90274]: 2026-01-27 13:14:23.477320459 +0000 UTC m=+0.025027833 container died 159e113727f7d0c2a8f4fe4fa80a4fb8b3e6c6fd15157b8d42627ca2f77c32c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:23 np0005597378 podman[90273]: 2026-01-27 13:14:23.48930807 +0000 UTC m=+0.044868889 container create 397b5c879fbc8dea5958519fe365ba1f713e085082da5491d80a675064a60bed (image=quay.io/ceph/ceph:v20, name=charming_hoover, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:14:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b01358f8cca1c49a3c73a2661932838918ea416bead2abd066f8d23667d92b70-merged.mount: Deactivated successfully.
Jan 27 08:14:23 np0005597378 podman[90274]: 2026-01-27 13:14:23.529130607 +0000 UTC m=+0.076837961 container remove 159e113727f7d0c2a8f4fe4fa80a4fb8b3e6c6fd15157b8d42627ca2f77c32c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:23 np0005597378 systemd[1]: Started libpod-conmon-397b5c879fbc8dea5958519fe365ba1f713e085082da5491d80a675064a60bed.scope.
Jan 27 08:14:23 np0005597378 systemd[1]: libpod-conmon-159e113727f7d0c2a8f4fe4fa80a4fb8b3e6c6fd15157b8d42627ca2f77c32c1.scope: Deactivated successfully.
Jan 27 08:14:23 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e746757f18d513f0fba6651a5d23a38b8dff4dcb97fc65354f4e415a9e373f5f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e746757f18d513f0fba6651a5d23a38b8dff4dcb97fc65354f4e415a9e373f5f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:23 np0005597378 podman[90273]: 2026-01-27 13:14:23.470622464 +0000 UTC m=+0.026183303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:23 np0005597378 podman[90273]: 2026-01-27 13:14:23.569570279 +0000 UTC m=+0.125131158 container init 397b5c879fbc8dea5958519fe365ba1f713e085082da5491d80a675064a60bed (image=quay.io/ceph/ceph:v20, name=charming_hoover, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 08:14:23 np0005597378 podman[90273]: 2026-01-27 13:14:23.574857227 +0000 UTC m=+0.130418046 container start 397b5c879fbc8dea5958519fe365ba1f713e085082da5491d80a675064a60bed (image=quay.io/ceph/ceph:v20, name=charming_hoover, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:14:23 np0005597378 podman[90273]: 2026-01-27 13:14:23.57919797 +0000 UTC m=+0.134758809 container attach 397b5c879fbc8dea5958519fe365ba1f713e085082da5491d80a675064a60bed (image=quay.io/ceph/ceph:v20, name=charming_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:14:23 np0005597378 podman[90388]: 2026-01-27 13:14:23.952317324 +0000 UTC m=+0.054202053 container create de2b6a5175372ba455f32f76eb0ab6fd3f9e8e9f8aa7dca96789e62e2495d43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 27 08:14:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3077025925' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 27 08:14:23 np0005597378 systemd[1]: Started libpod-conmon-de2b6a5175372ba455f32f76eb0ab6fd3f9e8e9f8aa7dca96789e62e2495d43a.scope.
Jan 27 08:14:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:24 np0005597378 podman[90388]: 2026-01-27 13:14:24.018582958 +0000 UTC m=+0.120467727 container init de2b6a5175372ba455f32f76eb0ab6fd3f9e8e9f8aa7dca96789e62e2495d43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 27 08:14:24 np0005597378 podman[90388]: 2026-01-27 13:14:24.025087328 +0000 UTC m=+0.126972067 container start de2b6a5175372ba455f32f76eb0ab6fd3f9e8e9f8aa7dca96789e62e2495d43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_yonath, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:24 np0005597378 podman[90388]: 2026-01-27 13:14:23.931174613 +0000 UTC m=+0.033059362 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:24 np0005597378 modest_yonath[90407]: 167 167
Jan 27 08:14:24 np0005597378 podman[90388]: 2026-01-27 13:14:24.029281437 +0000 UTC m=+0.131166216 container attach de2b6a5175372ba455f32f76eb0ab6fd3f9e8e9f8aa7dca96789e62e2495d43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_yonath, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 08:14:24 np0005597378 systemd[1]: libpod-de2b6a5175372ba455f32f76eb0ab6fd3f9e8e9f8aa7dca96789e62e2495d43a.scope: Deactivated successfully.
Jan 27 08:14:24 np0005597378 podman[90388]: 2026-01-27 13:14:24.029672507 +0000 UTC m=+0.131557236 container died de2b6a5175372ba455f32f76eb0ab6fd3f9e8e9f8aa7dca96789e62e2495d43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:14:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 27 08:14:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3077025925' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:14:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Jan 27 08:14:24 np0005597378 charming_hoover[90303]: pool 'backups' created
Jan 27 08:14:24 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Jan 27 08:14:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:14:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=16/17 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=16) [2] r=0 lpr=16 pi=[14,16)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:14:24 np0005597378 ceph-mon[75090]: OSD bench result of 9331.138720 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 27 08:14:24 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3789236934' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:14:24 np0005597378 ceph-mon[75090]: osd.2 [v2:192.168.122.100:6810/566339909,v1:192.168.122.100:6811/566339909] boot
Jan 27 08:14:24 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3077025925' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 27 08:14:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f54140a54e0511fd99ec1dbfd017aeef7b8f9295d6ae784dca3011b8cb8ee84c-merged.mount: Deactivated successfully.
Jan 27 08:14:24 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:14:24 np0005597378 systemd[1]: libpod-397b5c879fbc8dea5958519fe365ba1f713e085082da5491d80a675064a60bed.scope: Deactivated successfully.
Jan 27 08:14:24 np0005597378 podman[90388]: 2026-01-27 13:14:24.079988947 +0000 UTC m=+0.181873676 container remove de2b6a5175372ba455f32f76eb0ab6fd3f9e8e9f8aa7dca96789e62e2495d43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:24 np0005597378 podman[90273]: 2026-01-27 13:14:24.085750527 +0000 UTC m=+0.641311346 container died 397b5c879fbc8dea5958519fe365ba1f713e085082da5491d80a675064a60bed (image=quay.io/ceph/ceph:v20, name=charming_hoover, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:14:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e746757f18d513f0fba6651a5d23a38b8dff4dcb97fc65354f4e415a9e373f5f-merged.mount: Deactivated successfully.
Jan 27 08:14:24 np0005597378 systemd[1]: libpod-conmon-de2b6a5175372ba455f32f76eb0ab6fd3f9e8e9f8aa7dca96789e62e2495d43a.scope: Deactivated successfully.
Jan 27 08:14:24 np0005597378 podman[90273]: 2026-01-27 13:14:24.121114097 +0000 UTC m=+0.676674916 container remove 397b5c879fbc8dea5958519fe365ba1f713e085082da5491d80a675064a60bed (image=quay.io/ceph/ceph:v20, name=charming_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 08:14:24 np0005597378 systemd[1]: libpod-conmon-397b5c879fbc8dea5958519fe365ba1f713e085082da5491d80a675064a60bed.scope: Deactivated successfully.
Jan 27 08:14:24 np0005597378 podman[90442]: 2026-01-27 13:14:24.235579017 +0000 UTC m=+0.043835682 container create bdef7c243495bceacc9dcf7606783b5dd799cd0d52d6707e08a1b3ed88b9eab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_curie, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:14:24 np0005597378 systemd[1]: Started libpod-conmon-bdef7c243495bceacc9dcf7606783b5dd799cd0d52d6707e08a1b3ed88b9eab5.scope.
Jan 27 08:14:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74f22c170646f08f26c4333347cd2263db768fe27721b13fcd06fa1d6c2f8b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:24 np0005597378 podman[90442]: 2026-01-27 13:14:24.215870584 +0000 UTC m=+0.024127269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74f22c170646f08f26c4333347cd2263db768fe27721b13fcd06fa1d6c2f8b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74f22c170646f08f26c4333347cd2263db768fe27721b13fcd06fa1d6c2f8b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74f22c170646f08f26c4333347cd2263db768fe27721b13fcd06fa1d6c2f8b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:24 np0005597378 podman[90442]: 2026-01-27 13:14:24.328173167 +0000 UTC m=+0.136429852 container init bdef7c243495bceacc9dcf7606783b5dd799cd0d52d6707e08a1b3ed88b9eab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:24 np0005597378 podman[90442]: 2026-01-27 13:14:24.338866856 +0000 UTC m=+0.147123521 container start bdef7c243495bceacc9dcf7606783b5dd799cd0d52d6707e08a1b3ed88b9eab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:14:24 np0005597378 podman[90442]: 2026-01-27 13:14:24.343465075 +0000 UTC m=+0.151721740 container attach bdef7c243495bceacc9dcf7606783b5dd799cd0d52d6707e08a1b3ed88b9eab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:24 np0005597378 python3[90487]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:24 np0005597378 podman[90490]: 2026-01-27 13:14:24.48844741 +0000 UTC m=+0.041983864 container create 0a7450ba271969ae07f08bdf27213b762ff7e6ca1ba6fa0d34e174fc16187515 (image=quay.io/ceph/ceph:v20, name=crazy_roentgen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:14:24 np0005597378 systemd[1]: Started libpod-conmon-0a7450ba271969ae07f08bdf27213b762ff7e6ca1ba6fa0d34e174fc16187515.scope.
Jan 27 08:14:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d2e1f73709b704cdcc796a21411f8dba501bb2f4d7e769addb5f5cc18c3d7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d2e1f73709b704cdcc796a21411f8dba501bb2f4d7e769addb5f5cc18c3d7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:24 np0005597378 podman[90490]: 2026-01-27 13:14:24.552951609 +0000 UTC m=+0.106488073 container init 0a7450ba271969ae07f08bdf27213b762ff7e6ca1ba6fa0d34e174fc16187515 (image=quay.io/ceph/ceph:v20, name=crazy_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:24 np0005597378 podman[90490]: 2026-01-27 13:14:24.558312989 +0000 UTC m=+0.111849453 container start 0a7450ba271969ae07f08bdf27213b762ff7e6ca1ba6fa0d34e174fc16187515 (image=quay.io/ceph/ceph:v20, name=crazy_roentgen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 08:14:24 np0005597378 podman[90490]: 2026-01-27 13:14:24.561169363 +0000 UTC m=+0.114705847 container attach 0a7450ba271969ae07f08bdf27213b762ff7e6ca1ba6fa0d34e174fc16187515 (image=quay.io/ceph/ceph:v20, name=crazy_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:24 np0005597378 podman[90490]: 2026-01-27 13:14:24.471651953 +0000 UTC m=+0.025188437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]: {
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:    "0": [
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:        {
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "devices": [
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "/dev/loop3"
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            ],
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_name": "ceph_lv0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_size": "21470642176",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "name": "ceph_lv0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "tags": {
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.cluster_name": "ceph",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.crush_device_class": "",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.encrypted": "0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.objectstore": "bluestore",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.osd_id": "0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.type": "block",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.vdo": "0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.with_tpm": "0"
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            },
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "type": "block",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "vg_name": "ceph_vg0"
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:        }
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:    ],
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:    "1": [
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:        {
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "devices": [
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "/dev/loop4"
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            ],
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_name": "ceph_lv1",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_size": "21470642176",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "name": "ceph_lv1",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "tags": {
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.cluster_name": "ceph",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.crush_device_class": "",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.encrypted": "0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.objectstore": "bluestore",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.osd_id": "1",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.type": "block",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.vdo": "0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.with_tpm": "0"
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            },
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "type": "block",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "vg_name": "ceph_vg1"
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:        }
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:    ],
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:    "2": [
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:        {
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "devices": [
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "/dev/loop5"
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            ],
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_name": "ceph_lv2",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_size": "21470642176",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "name": "ceph_lv2",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "tags": {
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.cluster_name": "ceph",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.crush_device_class": "",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.encrypted": "0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.objectstore": "bluestore",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.osd_id": "2",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.type": "block",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.vdo": "0",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:                "ceph.with_tpm": "0"
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            },
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "type": "block",
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:            "vg_name": "ceph_vg2"
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:        }
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]:    ]
Jan 27 08:14:24 np0005597378 ecstatic_curie[90483]: }
Jan 27 08:14:24 np0005597378 systemd[1]: libpod-bdef7c243495bceacc9dcf7606783b5dd799cd0d52d6707e08a1b3ed88b9eab5.scope: Deactivated successfully.
Jan 27 08:14:24 np0005597378 podman[90442]: 2026-01-27 13:14:24.656219588 +0000 UTC m=+0.464476243 container died bdef7c243495bceacc9dcf7606783b5dd799cd0d52d6707e08a1b3ed88b9eab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 08:14:24 np0005597378 podman[90442]: 2026-01-27 13:14:24.698504188 +0000 UTC m=+0.506760853 container remove bdef7c243495bceacc9dcf7606783b5dd799cd0d52d6707e08a1b3ed88b9eab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_curie, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:14:24 np0005597378 systemd[1]: libpod-conmon-bdef7c243495bceacc9dcf7606783b5dd799cd0d52d6707e08a1b3ed88b9eab5.scope: Deactivated successfully.
Jan 27 08:14:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v42: 4 pgs: 3 creating+peering, 1 unknown; 0 B data, 879 MiB used, 59 GiB / 60 GiB avail
Jan 27 08:14:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b74f22c170646f08f26c4333347cd2263db768fe27721b13fcd06fa1d6c2f8b9-merged.mount: Deactivated successfully.
Jan 27 08:14:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 27 08:14:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/958708226' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 27 08:14:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 27 08:14:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/958708226' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:14:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Jan 27 08:14:25 np0005597378 crazy_roentgen[90506]: pool 'images' created
Jan 27 08:14:25 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Jan 27 08:14:25 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:14:25 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3077025925' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:14:25 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/958708226' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 27 08:14:25 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/958708226' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:14:25 np0005597378 systemd[1]: libpod-0a7450ba271969ae07f08bdf27213b762ff7e6ca1ba6fa0d34e174fc16187515.scope: Deactivated successfully.
Jan 27 08:14:25 np0005597378 podman[90611]: 2026-01-27 13:14:25.115632577 +0000 UTC m=+0.028347480 container died 0a7450ba271969ae07f08bdf27213b762ff7e6ca1ba6fa0d34e174fc16187515 (image=quay.io/ceph/ceph:v20, name=crazy_roentgen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:25 np0005597378 podman[90612]: 2026-01-27 13:14:25.134890448 +0000 UTC m=+0.041492701 container create 7fc5e4fa55a2142b94861a6191582db117749a7c7673910c96e49457883cf956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 08:14:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a9d2e1f73709b704cdcc796a21411f8dba501bb2f4d7e769addb5f5cc18c3d7b-merged.mount: Deactivated successfully.
Jan 27 08:14:25 np0005597378 podman[90611]: 2026-01-27 13:14:25.154125419 +0000 UTC m=+0.066840302 container remove 0a7450ba271969ae07f08bdf27213b762ff7e6ca1ba6fa0d34e174fc16187515 (image=quay.io/ceph/ceph:v20, name=crazy_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 08:14:25 np0005597378 systemd[1]: libpod-conmon-0a7450ba271969ae07f08bdf27213b762ff7e6ca1ba6fa0d34e174fc16187515.scope: Deactivated successfully.
Jan 27 08:14:25 np0005597378 systemd[1]: Started libpod-conmon-7fc5e4fa55a2142b94861a6191582db117749a7c7673910c96e49457883cf956.scope.
Jan 27 08:14:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:25 np0005597378 podman[90612]: 2026-01-27 13:14:25.118224675 +0000 UTC m=+0.024826948 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:25 np0005597378 podman[90612]: 2026-01-27 13:14:25.221241826 +0000 UTC m=+0.127844089 container init 7fc5e4fa55a2142b94861a6191582db117749a7c7673910c96e49457883cf956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:14:25 np0005597378 podman[90612]: 2026-01-27 13:14:25.229271535 +0000 UTC m=+0.135873788 container start 7fc5e4fa55a2142b94861a6191582db117749a7c7673910c96e49457883cf956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:14:25 np0005597378 podman[90612]: 2026-01-27 13:14:25.232195501 +0000 UTC m=+0.138797754 container attach 7fc5e4fa55a2142b94861a6191582db117749a7c7673910c96e49457883cf956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:25 np0005597378 frosty_shaw[90642]: 167 167
Jan 27 08:14:25 np0005597378 systemd[1]: libpod-7fc5e4fa55a2142b94861a6191582db117749a7c7673910c96e49457883cf956.scope: Deactivated successfully.
Jan 27 08:14:25 np0005597378 podman[90612]: 2026-01-27 13:14:25.234216253 +0000 UTC m=+0.140818506 container died 7fc5e4fa55a2142b94861a6191582db117749a7c7673910c96e49457883cf956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:14:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e55fc3cd45ae2e506ccd656d1645a868f3c942c7df96a3c90db9551d1ae4dbf5-merged.mount: Deactivated successfully.
Jan 27 08:14:25 np0005597378 podman[90612]: 2026-01-27 13:14:25.272436939 +0000 UTC m=+0.179039192 container remove 7fc5e4fa55a2142b94861a6191582db117749a7c7673910c96e49457883cf956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:14:25 np0005597378 systemd[1]: libpod-conmon-7fc5e4fa55a2142b94861a6191582db117749a7c7673910c96e49457883cf956.scope: Deactivated successfully.
Jan 27 08:14:25 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 18 pg[5.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:14:25 np0005597378 podman[90691]: 2026-01-27 13:14:25.426667544 +0000 UTC m=+0.042552140 container create eeaf23e4eb5d2284d69fe770318e3b1e9dc89f00b353f7538a298f4f42e6dc53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elbakyan, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 08:14:25 np0005597378 python3[90685]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:25 np0005597378 systemd[1]: Started libpod-conmon-eeaf23e4eb5d2284d69fe770318e3b1e9dc89f00b353f7538a298f4f42e6dc53.scope.
Jan 27 08:14:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7af40a4b8f2d4939f04823a4c8065f1dae965a155f89a372c2a26b9343da850/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7af40a4b8f2d4939f04823a4c8065f1dae965a155f89a372c2a26b9343da850/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7af40a4b8f2d4939f04823a4c8065f1dae965a155f89a372c2a26b9343da850/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7af40a4b8f2d4939f04823a4c8065f1dae965a155f89a372c2a26b9343da850/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:25 np0005597378 podman[90691]: 2026-01-27 13:14:25.409968509 +0000 UTC m=+0.025853115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:25 np0005597378 podman[90708]: 2026-01-27 13:14:25.514670155 +0000 UTC m=+0.046753639 container create dd79ef21fe7b0cbd86cdaffbea304ce3002078ace37833a04420c566702f0e70 (image=quay.io/ceph/ceph:v20, name=hopeful_ardinghelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:25 np0005597378 podman[90691]: 2026-01-27 13:14:25.534789648 +0000 UTC m=+0.150674254 container init eeaf23e4eb5d2284d69fe770318e3b1e9dc89f00b353f7538a298f4f42e6dc53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elbakyan, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:14:25 np0005597378 podman[90691]: 2026-01-27 13:14:25.541596505 +0000 UTC m=+0.157481091 container start eeaf23e4eb5d2284d69fe770318e3b1e9dc89f00b353f7538a298f4f42e6dc53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:25 np0005597378 podman[90691]: 2026-01-27 13:14:25.545049865 +0000 UTC m=+0.160934471 container attach eeaf23e4eb5d2284d69fe770318e3b1e9dc89f00b353f7538a298f4f42e6dc53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elbakyan, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:25 np0005597378 systemd[1]: Started libpod-conmon-dd79ef21fe7b0cbd86cdaffbea304ce3002078ace37833a04420c566702f0e70.scope.
Jan 27 08:14:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b2a41ef576e7d72e1dde863f7ba07dd1656e09110da92847a5ab95fdd6226e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b2a41ef576e7d72e1dde863f7ba07dd1656e09110da92847a5ab95fdd6226e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:25 np0005597378 podman[90708]: 2026-01-27 13:14:25.491004018 +0000 UTC m=+0.023087532 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:25 np0005597378 podman[90708]: 2026-01-27 13:14:25.590185961 +0000 UTC m=+0.122269475 container init dd79ef21fe7b0cbd86cdaffbea304ce3002078ace37833a04420c566702f0e70 (image=quay.io/ceph/ceph:v20, name=hopeful_ardinghelli, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 08:14:25 np0005597378 podman[90708]: 2026-01-27 13:14:25.59593294 +0000 UTC m=+0.128016444 container start dd79ef21fe7b0cbd86cdaffbea304ce3002078ace37833a04420c566702f0e70 (image=quay.io/ceph/ceph:v20, name=hopeful_ardinghelli, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 27 08:14:25 np0005597378 podman[90708]: 2026-01-27 13:14:25.603805034 +0000 UTC m=+0.135888588 container attach dd79ef21fe7b0cbd86cdaffbea304ce3002078ace37833a04420c566702f0e70 (image=quay.io/ceph/ceph:v20, name=hopeful_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 08:14:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 27 08:14:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/688245030' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 27 08:14:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 27 08:14:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/688245030' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:14:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 27 08:14:26 np0005597378 hopeful_ardinghelli[90728]: pool 'cephfs.cephfs.meta' created
Jan 27 08:14:26 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 27 08:14:26 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:14:26 np0005597378 systemd[1]: libpod-dd79ef21fe7b0cbd86cdaffbea304ce3002078ace37833a04420c566702f0e70.scope: Deactivated successfully.
Jan 27 08:14:26 np0005597378 podman[90708]: 2026-01-27 13:14:26.070064652 +0000 UTC m=+0.602148146 container died dd79ef21fe7b0cbd86cdaffbea304ce3002078ace37833a04420c566702f0e70 (image=quay.io/ceph/ceph:v20, name=hopeful_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 08:14:26 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/688245030' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 27 08:14:26 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/688245030' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:14:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay-47b2a41ef576e7d72e1dde863f7ba07dd1656e09110da92847a5ab95fdd6226e-merged.mount: Deactivated successfully.
Jan 27 08:14:26 np0005597378 podman[90708]: 2026-01-27 13:14:26.113562964 +0000 UTC m=+0.645646448 container remove dd79ef21fe7b0cbd86cdaffbea304ce3002078ace37833a04420c566702f0e70 (image=quay.io/ceph/ceph:v20, name=hopeful_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:26 np0005597378 systemd[1]: libpod-conmon-dd79ef21fe7b0cbd86cdaffbea304ce3002078ace37833a04420c566702f0e70.scope: Deactivated successfully.
Jan 27 08:14:26 np0005597378 lvm[90840]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:14:26 np0005597378 lvm[90841]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:14:26 np0005597378 lvm[90840]: VG ceph_vg0 finished
Jan 27 08:14:26 np0005597378 lvm[90841]: VG ceph_vg1 finished
Jan 27 08:14:26 np0005597378 lvm[90843]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:14:26 np0005597378 lvm[90843]: VG ceph_vg2 finished
Jan 27 08:14:26 np0005597378 lvm[90844]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:14:26 np0005597378 lvm[90844]: VG ceph_vg0 finished
Jan 27 08:14:26 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 19 pg[6.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:14:26 np0005597378 nervous_elbakyan[90709]: {}
Jan 27 08:14:26 np0005597378 podman[90691]: 2026-01-27 13:14:26.316932079 +0000 UTC m=+0.932816675 container died eeaf23e4eb5d2284d69fe770318e3b1e9dc89f00b353f7538a298f4f42e6dc53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elbakyan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:26 np0005597378 systemd[1]: libpod-eeaf23e4eb5d2284d69fe770318e3b1e9dc89f00b353f7538a298f4f42e6dc53.scope: Deactivated successfully.
Jan 27 08:14:26 np0005597378 systemd[1]: libpod-eeaf23e4eb5d2284d69fe770318e3b1e9dc89f00b353f7538a298f4f42e6dc53.scope: Consumed 1.223s CPU time.
Jan 27 08:14:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c7af40a4b8f2d4939f04823a4c8065f1dae965a155f89a372c2a26b9343da850-merged.mount: Deactivated successfully.
Jan 27 08:14:26 np0005597378 podman[90691]: 2026-01-27 13:14:26.354697952 +0000 UTC m=+0.970582538 container remove eeaf23e4eb5d2284d69fe770318e3b1e9dc89f00b353f7538a298f4f42e6dc53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elbakyan, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:14:26 np0005597378 systemd[1]: libpod-conmon-eeaf23e4eb5d2284d69fe770318e3b1e9dc89f00b353f7538a298f4f42e6dc53.scope: Deactivated successfully.
Jan 27 08:14:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:26 np0005597378 python3[90872]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:26 np0005597378 podman[90885]: 2026-01-27 13:14:26.472747565 +0000 UTC m=+0.044504020 container create 12f5f8f4b91df303be91ee842470792e3646b11b1cce96626f63810f92984020 (image=quay.io/ceph/ceph:v20, name=epic_jang, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:26 np0005597378 systemd[1]: Started libpod-conmon-12f5f8f4b91df303be91ee842470792e3646b11b1cce96626f63810f92984020.scope.
Jan 27 08:14:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ead4cdd678f2533d6b2e14499042dce4ef04bee29da2c0fe6de628767fa5100/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ead4cdd678f2533d6b2e14499042dce4ef04bee29da2c0fe6de628767fa5100/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:26 np0005597378 podman[90885]: 2026-01-27 13:14:26.454531021 +0000 UTC m=+0.026287496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:26 np0005597378 podman[90885]: 2026-01-27 13:14:26.558512537 +0000 UTC m=+0.130269042 container init 12f5f8f4b91df303be91ee842470792e3646b11b1cce96626f63810f92984020 (image=quay.io/ceph/ceph:v20, name=epic_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:26 np0005597378 podman[90885]: 2026-01-27 13:14:26.570687775 +0000 UTC m=+0.142444230 container start 12f5f8f4b91df303be91ee842470792e3646b11b1cce96626f63810f92984020 (image=quay.io/ceph/ceph:v20, name=epic_jang, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Jan 27 08:14:26 np0005597378 podman[90885]: 2026-01-27 13:14:26.575555061 +0000 UTC m=+0.147311566 container attach 12f5f8f4b91df303be91ee842470792e3646b11b1cce96626f63810f92984020 (image=quay.io/ceph/ceph:v20, name=epic_jang, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v45: 6 pgs: 3 creating+peering, 3 unknown; 0 B data, 879 MiB used, 59 GiB / 60 GiB avail
Jan 27 08:14:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 27 08:14:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2697470374' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 27 08:14:26 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] creating main.db for devicehealth
Jan 27 08:14:27 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2697470374' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 27 08:14:27 np0005597378 epic_jang[90925]: pool 'cephfs.cephfs.data' created
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 27 08:14:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 20 pg[7.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:14:27 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 20 pg[6.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:14:27 np0005597378 systemd[1]: libpod-12f5f8f4b91df303be91ee842470792e3646b11b1cce96626f63810f92984020.scope: Deactivated successfully.
Jan 27 08:14:27 np0005597378 podman[90885]: 2026-01-27 13:14:27.08490276 +0000 UTC m=+0.656659225 container died 12f5f8f4b91df303be91ee842470792e3646b11b1cce96626f63810f92984020 (image=quay.io/ceph/ceph:v20, name=epic_jang, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:27 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9ead4cdd678f2533d6b2e14499042dce4ef04bee29da2c0fe6de628767fa5100-merged.mount: Deactivated successfully.
Jan 27 08:14:27 np0005597378 podman[90885]: 2026-01-27 13:14:27.122463448 +0000 UTC m=+0.694219903 container remove 12f5f8f4b91df303be91ee842470792e3646b11b1cce96626f63810f92984020 (image=quay.io/ceph/ceph:v20, name=epic_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:27 np0005597378 systemd[1]: libpod-conmon-12f5f8f4b91df303be91ee842470792e3646b11b1cce96626f63810f92984020.scope: Deactivated successfully.
Jan 27 08:14:27 np0005597378 python3[91004]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/2697470374' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/2697470374' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 27 08:14:27 np0005597378 podman[91005]: 2026-01-27 13:14:27.466602947 +0000 UTC m=+0.035602929 container create 13407a8b6a1ad500802654a6e89a159b7449c2910ef3720137681ea2fdd9de5e (image=quay.io/ceph/ceph:v20, name=focused_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:27 np0005597378 systemd[1]: Started libpod-conmon-13407a8b6a1ad500802654a6e89a159b7449c2910ef3720137681ea2fdd9de5e.scope.
Jan 27 08:14:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12166ce375d3dbcc1482744393e454892e4f8fd07d615be7fb3abf277eb54454/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12166ce375d3dbcc1482744393e454892e4f8fd07d615be7fb3abf277eb54454/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:27 np0005597378 podman[91005]: 2026-01-27 13:14:27.520393487 +0000 UTC m=+0.089393489 container init 13407a8b6a1ad500802654a6e89a159b7449c2910ef3720137681ea2fdd9de5e (image=quay.io/ceph/ceph:v20, name=focused_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:27 np0005597378 podman[91005]: 2026-01-27 13:14:27.525380887 +0000 UTC m=+0.094380869 container start 13407a8b6a1ad500802654a6e89a159b7449c2910ef3720137681ea2fdd9de5e (image=quay.io/ceph/ceph:v20, name=focused_torvalds, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:14:27 np0005597378 podman[91005]: 2026-01-27 13:14:27.529097343 +0000 UTC m=+0.098097335 container attach 13407a8b6a1ad500802654a6e89a159b7449c2910ef3720137681ea2fdd9de5e (image=quay.io/ceph/ceph:v20, name=focused_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 08:14:27 np0005597378 podman[91005]: 2026-01-27 13:14:27.452531491 +0000 UTC m=+0.021531483 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 27 08:14:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/486669781' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 27 08:14:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 27 08:14:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/486669781' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 27 08:14:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 27 08:14:28 np0005597378 focused_torvalds[91020]: enabled application 'rbd' on pool 'vms'
Jan 27 08:14:28 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 27 08:14:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 21 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:14:28 np0005597378 systemd[1]: libpod-13407a8b6a1ad500802654a6e89a159b7449c2910ef3720137681ea2fdd9de5e.scope: Deactivated successfully.
Jan 27 08:14:28 np0005597378 podman[91005]: 2026-01-27 13:14:28.091386401 +0000 UTC m=+0.660386383 container died 13407a8b6a1ad500802654a6e89a159b7449c2910ef3720137681ea2fdd9de5e (image=quay.io/ceph/ceph:v20, name=focused_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-12166ce375d3dbcc1482744393e454892e4f8fd07d615be7fb3abf277eb54454-merged.mount: Deactivated successfully.
Jan 27 08:14:28 np0005597378 podman[91005]: 2026-01-27 13:14:28.124415791 +0000 UTC m=+0.693415773 container remove 13407a8b6a1ad500802654a6e89a159b7449c2910ef3720137681ea2fdd9de5e (image=quay.io/ceph/ceph:v20, name=focused_torvalds, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 08:14:28 np0005597378 systemd[1]: libpod-conmon-13407a8b6a1ad500802654a6e89a159b7449c2910ef3720137681ea2fdd9de5e.scope: Deactivated successfully.
Jan 27 08:14:28 np0005597378 python3[91083]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:28 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/486669781' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 27 08:14:28 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/486669781' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 27 08:14:28 np0005597378 podman[91084]: 2026-01-27 13:14:28.461546527 +0000 UTC m=+0.038037942 container create a60b6b1bd414e9782086dbf1380af76378cd847e3507afcb2c0f6ee2235eeca2 (image=quay.io/ceph/ceph:v20, name=clever_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 27 08:14:28 np0005597378 systemd[1]: Started libpod-conmon-a60b6b1bd414e9782086dbf1380af76378cd847e3507afcb2c0f6ee2235eeca2.scope.
Jan 27 08:14:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf3a9ed1e785193a85384d2ca7cd6fef805998677f2e83dec8c09b4b4d4ba03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf3a9ed1e785193a85384d2ca7cd6fef805998677f2e83dec8c09b4b4d4ba03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:28 np0005597378 podman[91084]: 2026-01-27 13:14:28.444102833 +0000 UTC m=+0.020594268 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:28 np0005597378 podman[91084]: 2026-01-27 13:14:28.54077901 +0000 UTC m=+0.117270495 container init a60b6b1bd414e9782086dbf1380af76378cd847e3507afcb2c0f6ee2235eeca2 (image=quay.io/ceph/ceph:v20, name=clever_gates, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 08:14:28 np0005597378 podman[91084]: 2026-01-27 13:14:28.54812368 +0000 UTC m=+0.124615095 container start a60b6b1bd414e9782086dbf1380af76378cd847e3507afcb2c0f6ee2235eeca2 (image=quay.io/ceph/ceph:v20, name=clever_gates, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:14:28 np0005597378 podman[91084]: 2026-01-27 13:14:28.551661793 +0000 UTC m=+0.128153238 container attach a60b6b1bd414e9782086dbf1380af76378cd847e3507afcb2c0f6ee2235eeca2 (image=quay.io/ceph/ceph:v20, name=clever_gates, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v48: 7 pgs: 4 active+clean, 2 creating+peering, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:14:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 27 08:14:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1363574714' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 27 08:14:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 27 08:14:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1363574714' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 27 08:14:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 27 08:14:29 np0005597378 clever_gates[91100]: enabled application 'rbd' on pool 'volumes'
Jan 27 08:14:29 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 27 08:14:29 np0005597378 systemd[1]: libpod-a60b6b1bd414e9782086dbf1380af76378cd847e3507afcb2c0f6ee2235eeca2.scope: Deactivated successfully.
Jan 27 08:14:29 np0005597378 podman[91084]: 2026-01-27 13:14:29.094510644 +0000 UTC m=+0.671002059 container died a60b6b1bd414e9782086dbf1380af76378cd847e3507afcb2c0f6ee2235eeca2 (image=quay.io/ceph/ceph:v20, name=clever_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:14:29 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3cf3a9ed1e785193a85384d2ca7cd6fef805998677f2e83dec8c09b4b4d4ba03-merged.mount: Deactivated successfully.
Jan 27 08:14:29 np0005597378 podman[91084]: 2026-01-27 13:14:29.126795304 +0000 UTC m=+0.703286719 container remove a60b6b1bd414e9782086dbf1380af76378cd847e3507afcb2c0f6ee2235eeca2 (image=quay.io/ceph/ceph:v20, name=clever_gates, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:29 np0005597378 systemd[1]: libpod-conmon-a60b6b1bd414e9782086dbf1380af76378cd847e3507afcb2c0f6ee2235eeca2.scope: Deactivated successfully.
Jan 27 08:14:29 np0005597378 python3[91162]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:29 np0005597378 podman[91163]: 2026-01-27 13:14:29.432605075 +0000 UTC m=+0.042109307 container create 0fc8a044dd5d0481b6472449cdc5592b099958f08413c6b051ac695f93e93c1a (image=quay.io/ceph/ceph:v20, name=reverent_nash, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 27 08:14:29 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.uujfpe(active, since 72s)
Jan 27 08:14:29 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1363574714' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 27 08:14:29 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1363574714' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 27 08:14:29 np0005597378 systemd[1]: Started libpod-conmon-0fc8a044dd5d0481b6472449cdc5592b099958f08413c6b051ac695f93e93c1a.scope.
Jan 27 08:14:29 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f30e26f7f50a73501e1ca25ee631fe92d37667c987aed0d2dfebc8634b1f060/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f30e26f7f50a73501e1ca25ee631fe92d37667c987aed0d2dfebc8634b1f060/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:29 np0005597378 podman[91163]: 2026-01-27 13:14:29.506436108 +0000 UTC m=+0.115940370 container init 0fc8a044dd5d0481b6472449cdc5592b099958f08413c6b051ac695f93e93c1a (image=quay.io/ceph/ceph:v20, name=reverent_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:29 np0005597378 podman[91163]: 2026-01-27 13:14:29.414652698 +0000 UTC m=+0.024156950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:29 np0005597378 podman[91163]: 2026-01-27 13:14:29.51231655 +0000 UTC m=+0.121820782 container start 0fc8a044dd5d0481b6472449cdc5592b099958f08413c6b051ac695f93e93c1a (image=quay.io/ceph/ceph:v20, name=reverent_nash, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 08:14:29 np0005597378 podman[91163]: 2026-01-27 13:14:29.51536682 +0000 UTC m=+0.124871072 container attach 0fc8a044dd5d0481b6472449cdc5592b099958f08413c6b051ac695f93e93c1a (image=quay.io/ceph/ceph:v20, name=reverent_nash, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 08:14:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 27 08:14:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1658151068' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 27 08:14:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 27 08:14:30 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1658151068' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 27 08:14:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1658151068' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 27 08:14:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 27 08:14:30 np0005597378 reverent_nash[91178]: enabled application 'rbd' on pool 'backups'
Jan 27 08:14:30 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 27 08:14:30 np0005597378 systemd[1]: libpod-0fc8a044dd5d0481b6472449cdc5592b099958f08413c6b051ac695f93e93c1a.scope: Deactivated successfully.
Jan 27 08:14:30 np0005597378 podman[91163]: 2026-01-27 13:14:30.497158458 +0000 UTC m=+1.106662710 container died 0fc8a044dd5d0481b6472449cdc5592b099958f08413c6b051ac695f93e93c1a (image=quay.io/ceph/ceph:v20, name=reverent_nash, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:14:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1f30e26f7f50a73501e1ca25ee631fe92d37667c987aed0d2dfebc8634b1f060-merged.mount: Deactivated successfully.
Jan 27 08:14:30 np0005597378 podman[91163]: 2026-01-27 13:14:30.537883188 +0000 UTC m=+1.147387420 container remove 0fc8a044dd5d0481b6472449cdc5592b099958f08413c6b051ac695f93e93c1a (image=quay.io/ceph/ceph:v20, name=reverent_nash, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:30 np0005597378 systemd[1]: libpod-conmon-0fc8a044dd5d0481b6472449cdc5592b099958f08413c6b051ac695f93e93c1a.scope: Deactivated successfully.
Jan 27 08:14:30 np0005597378 python3[91239]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:30 np0005597378 podman[91240]: 2026-01-27 13:14:30.884095141 +0000 UTC m=+0.041507412 container create f63b71fcfeb23d489231429c7947c549c1f0927c40cf4b60992b0ebe7c5fcbfd (image=quay.io/ceph/ceph:v20, name=adoring_noether, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:30 np0005597378 systemd[1]: Started libpod-conmon-f63b71fcfeb23d489231429c7947c549c1f0927c40cf4b60992b0ebe7c5fcbfd.scope.
Jan 27 08:14:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v51: 7 pgs: 6 active+clean, 1 creating+peering; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:14:30 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e211d28cc4c06358f520293ef51bcb934332a92c38a95e6c913e9d26bfaa7de2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e211d28cc4c06358f520293ef51bcb934332a92c38a95e6c913e9d26bfaa7de2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:30 np0005597378 podman[91240]: 2026-01-27 13:14:30.862985872 +0000 UTC m=+0.020398183 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:30 np0005597378 podman[91240]: 2026-01-27 13:14:30.961817464 +0000 UTC m=+0.119229765 container init f63b71fcfeb23d489231429c7947c549c1f0927c40cf4b60992b0ebe7c5fcbfd (image=quay.io/ceph/ceph:v20, name=adoring_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:14:30 np0005597378 podman[91240]: 2026-01-27 13:14:30.975981832 +0000 UTC m=+0.133394123 container start f63b71fcfeb23d489231429c7947c549c1f0927c40cf4b60992b0ebe7c5fcbfd (image=quay.io/ceph/ceph:v20, name=adoring_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:30 np0005597378 podman[91240]: 2026-01-27 13:14:30.98012183 +0000 UTC m=+0.137534131 container attach f63b71fcfeb23d489231429c7947c549c1f0927c40cf4b60992b0ebe7c5fcbfd (image=quay.io/ceph/ceph:v20, name=adoring_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 08:14:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 27 08:14:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/905526924' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 27 08:14:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 27 08:14:31 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1658151068' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 27 08:14:31 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/905526924' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 27 08:14:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/905526924' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 27 08:14:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 27 08:14:31 np0005597378 adoring_noether[91255]: enabled application 'rbd' on pool 'images'
Jan 27 08:14:31 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 27 08:14:31 np0005597378 systemd[1]: libpod-f63b71fcfeb23d489231429c7947c549c1f0927c40cf4b60992b0ebe7c5fcbfd.scope: Deactivated successfully.
Jan 27 08:14:31 np0005597378 podman[91240]: 2026-01-27 13:14:31.492784986 +0000 UTC m=+0.650197307 container died f63b71fcfeb23d489231429c7947c549c1f0927c40cf4b60992b0ebe7c5fcbfd (image=quay.io/ceph/ceph:v20, name=adoring_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:14:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e211d28cc4c06358f520293ef51bcb934332a92c38a95e6c913e9d26bfaa7de2-merged.mount: Deactivated successfully.
Jan 27 08:14:31 np0005597378 podman[91240]: 2026-01-27 13:14:31.532764037 +0000 UTC m=+0.690176318 container remove f63b71fcfeb23d489231429c7947c549c1f0927c40cf4b60992b0ebe7c5fcbfd (image=quay.io/ceph/ceph:v20, name=adoring_noether, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:31 np0005597378 systemd[1]: libpod-conmon-f63b71fcfeb23d489231429c7947c549c1f0927c40cf4b60992b0ebe7c5fcbfd.scope: Deactivated successfully.
Jan 27 08:14:31 np0005597378 python3[91317]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:31 np0005597378 podman[91318]: 2026-01-27 13:14:31.883476026 +0000 UTC m=+0.051433179 container create 9bf94563b78a39dc364dd9fbf7a3b517d08a0c87d98fcf5ad91d30fe249995f2 (image=quay.io/ceph/ceph:v20, name=objective_dijkstra, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:31 np0005597378 systemd[1]: Started libpod-conmon-9bf94563b78a39dc364dd9fbf7a3b517d08a0c87d98fcf5ad91d30fe249995f2.scope.
Jan 27 08:14:31 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f0a6dfde5f450a6d78e34cf0f4792cedf79359cdd5447fb82d1ad6093d6e39/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f0a6dfde5f450a6d78e34cf0f4792cedf79359cdd5447fb82d1ad6093d6e39/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:31 np0005597378 podman[91318]: 2026-01-27 13:14:31.858187608 +0000 UTC m=+0.026144781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:31 np0005597378 podman[91318]: 2026-01-27 13:14:31.968817488 +0000 UTC m=+0.136774661 container init 9bf94563b78a39dc364dd9fbf7a3b517d08a0c87d98fcf5ad91d30fe249995f2 (image=quay.io/ceph/ceph:v20, name=objective_dijkstra, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:31 np0005597378 podman[91318]: 2026-01-27 13:14:31.978697615 +0000 UTC m=+0.146654788 container start 9bf94563b78a39dc364dd9fbf7a3b517d08a0c87d98fcf5ad91d30fe249995f2 (image=quay.io/ceph/ceph:v20, name=objective_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:31 np0005597378 podman[91318]: 2026-01-27 13:14:31.981902309 +0000 UTC m=+0.149859512 container attach 9bf94563b78a39dc364dd9fbf7a3b517d08a0c87d98fcf5ad91d30fe249995f2 (image=quay.io/ceph/ceph:v20, name=objective_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 08:14:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 27 08:14:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3806592102' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 27 08:14:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 27 08:14:32 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/905526924' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 27 08:14:32 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3806592102' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 27 08:14:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3806592102' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 27 08:14:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 27 08:14:32 np0005597378 objective_dijkstra[91333]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 27 08:14:32 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 27 08:14:32 np0005597378 systemd[1]: libpod-9bf94563b78a39dc364dd9fbf7a3b517d08a0c87d98fcf5ad91d30fe249995f2.scope: Deactivated successfully.
Jan 27 08:14:32 np0005597378 podman[91318]: 2026-01-27 13:14:32.505236451 +0000 UTC m=+0.673193604 container died 9bf94563b78a39dc364dd9fbf7a3b517d08a0c87d98fcf5ad91d30fe249995f2 (image=quay.io/ceph/ceph:v20, name=objective_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:14:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d5f0a6dfde5f450a6d78e34cf0f4792cedf79359cdd5447fb82d1ad6093d6e39-merged.mount: Deactivated successfully.
Jan 27 08:14:32 np0005597378 podman[91318]: 2026-01-27 13:14:32.538157549 +0000 UTC m=+0.706114702 container remove 9bf94563b78a39dc364dd9fbf7a3b517d08a0c87d98fcf5ad91d30fe249995f2 (image=quay.io/ceph/ceph:v20, name=objective_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:32 np0005597378 systemd[1]: libpod-conmon-9bf94563b78a39dc364dd9fbf7a3b517d08a0c87d98fcf5ad91d30fe249995f2.scope: Deactivated successfully.
Jan 27 08:14:32 np0005597378 python3[91395]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:14:32 np0005597378 podman[91396]: 2026-01-27 13:14:32.887738969 +0000 UTC m=+0.046757099 container create 3681ce54c5589c8d97e68d58e1bcaf629090069653e3bb9a920414c40b1b3bdc (image=quay.io/ceph/ceph:v20, name=hungry_leavitt, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:14:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v54: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:14:32 np0005597378 systemd[1]: Started libpod-conmon-3681ce54c5589c8d97e68d58e1bcaf629090069653e3bb9a920414c40b1b3bdc.scope.
Jan 27 08:14:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64663d943338aef0992fc45894ebb5e5e93c49b4c5e03166bc4d8ae1f6d41d2a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64663d943338aef0992fc45894ebb5e5e93c49b4c5e03166bc4d8ae1f6d41d2a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:32 np0005597378 podman[91396]: 2026-01-27 13:14:32.869131644 +0000 UTC m=+0.028149784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:32 np0005597378 podman[91396]: 2026-01-27 13:14:32.974687513 +0000 UTC m=+0.133705643 container init 3681ce54c5589c8d97e68d58e1bcaf629090069653e3bb9a920414c40b1b3bdc (image=quay.io/ceph/ceph:v20, name=hungry_leavitt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 08:14:32 np0005597378 podman[91396]: 2026-01-27 13:14:32.980775681 +0000 UTC m=+0.139793811 container start 3681ce54c5589c8d97e68d58e1bcaf629090069653e3bb9a920414c40b1b3bdc (image=quay.io/ceph/ceph:v20, name=hungry_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:32 np0005597378 podman[91396]: 2026-01-27 13:14:32.984348374 +0000 UTC m=+0.143366514 container attach 3681ce54c5589c8d97e68d58e1bcaf629090069653e3bb9a920414c40b1b3bdc (image=quay.io/ceph/ceph:v20, name=hungry_leavitt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 27 08:14:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3297071212' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 27 08:14:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 27 08:14:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3297071212' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 27 08:14:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 27 08:14:33 np0005597378 hungry_leavitt[91411]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 27 08:14:33 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3806592102' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 27 08:14:33 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3297071212' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 27 08:14:33 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 27 08:14:33 np0005597378 systemd[1]: libpod-3681ce54c5589c8d97e68d58e1bcaf629090069653e3bb9a920414c40b1b3bdc.scope: Deactivated successfully.
Jan 27 08:14:33 np0005597378 podman[91396]: 2026-01-27 13:14:33.51967709 +0000 UTC m=+0.678695270 container died 3681ce54c5589c8d97e68d58e1bcaf629090069653e3bb9a920414c40b1b3bdc (image=quay.io/ceph/ceph:v20, name=hungry_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-64663d943338aef0992fc45894ebb5e5e93c49b4c5e03166bc4d8ae1f6d41d2a-merged.mount: Deactivated successfully.
Jan 27 08:14:33 np0005597378 podman[91396]: 2026-01-27 13:14:33.5692788 +0000 UTC m=+0.728296960 container remove 3681ce54c5589c8d97e68d58e1bcaf629090069653e3bb9a920414c40b1b3bdc (image=quay.io/ceph/ceph:v20, name=hungry_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:33 np0005597378 systemd[1]: libpod-conmon-3681ce54c5589c8d97e68d58e1bcaf629090069653e3bb9a920414c40b1b3bdc.scope: Deactivated successfully.
Jan 27 08:14:34 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/3297071212' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 27 08:14:34 np0005597378 python3[91523]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:14:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v56: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:14:34 np0005597378 python3[91594]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769519674.2262506-36630-94828920164200/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:14:35 np0005597378 python3[91696]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:14:35 np0005597378 python3[91771]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769519675.2586005-36644-60638131610145/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=19abfefca5be5a2ed6cfb4eaebc3e9de25101fbe backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:14:36 np0005597378 python3[91821]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:36 np0005597378 podman[91822]: 2026-01-27 13:14:36.322515763 +0000 UTC m=+0.057143578 container create 5654914a6e3dc435fccabe953566f69786ee38a16a8ec99518d6720b24833f88 (image=quay.io/ceph/ceph:v20, name=eloquent_shaw, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:36 np0005597378 systemd[1]: Started libpod-conmon-5654914a6e3dc435fccabe953566f69786ee38a16a8ec99518d6720b24833f88.scope.
Jan 27 08:14:36 np0005597378 podman[91822]: 2026-01-27 13:14:36.28742623 +0000 UTC m=+0.022054025 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0220c9f3fa0a7dd952b1e0f1fa4ea4822fe221b767a773c04bca534a7fdd270e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0220c9f3fa0a7dd952b1e0f1fa4ea4822fe221b767a773c04bca534a7fdd270e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0220c9f3fa0a7dd952b1e0f1fa4ea4822fe221b767a773c04bca534a7fdd270e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:36 np0005597378 podman[91822]: 2026-01-27 13:14:36.412100215 +0000 UTC m=+0.146728020 container init 5654914a6e3dc435fccabe953566f69786ee38a16a8ec99518d6720b24833f88 (image=quay.io/ceph/ceph:v20, name=eloquent_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:36 np0005597378 podman[91822]: 2026-01-27 13:14:36.419460117 +0000 UTC m=+0.154087902 container start 5654914a6e3dc435fccabe953566f69786ee38a16a8ec99518d6720b24833f88 (image=quay.io/ceph/ceph:v20, name=eloquent_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:14:36 np0005597378 podman[91822]: 2026-01-27 13:14:36.423560263 +0000 UTC m=+0.158188088 container attach 5654914a6e3dc435fccabe953566f69786ee38a16a8ec99518d6720b24833f88 (image=quay.io/ceph/ceph:v20, name=eloquent_shaw, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 08:14:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 27 08:14:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1328727925' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 27 08:14:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1328727925' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 27 08:14:36 np0005597378 eloquent_shaw[91837]: 
Jan 27 08:14:36 np0005597378 eloquent_shaw[91837]: [global]
Jan 27 08:14:36 np0005597378 eloquent_shaw[91837]: #011fsid = 4d8fd694-f443-5fb1-b612-70034b2f3c6e
Jan 27 08:14:36 np0005597378 eloquent_shaw[91837]: #011mon_host = 192.168.122.100
Jan 27 08:14:36 np0005597378 eloquent_shaw[91837]: #011rgw_keystone_api_version = 3
Jan 27 08:14:36 np0005597378 systemd[1]: libpod-5654914a6e3dc435fccabe953566f69786ee38a16a8ec99518d6720b24833f88.scope: Deactivated successfully.
Jan 27 08:14:36 np0005597378 podman[91822]: 2026-01-27 13:14:36.823532356 +0000 UTC m=+0.558160201 container died 5654914a6e3dc435fccabe953566f69786ee38a16a8ec99518d6720b24833f88 (image=quay.io/ceph/ceph:v20, name=eloquent_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 08:14:36 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1328727925' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 27 08:14:36 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1328727925' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 27 08:14:36 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0220c9f3fa0a7dd952b1e0f1fa4ea4822fe221b767a773c04bca534a7fdd270e-merged.mount: Deactivated successfully.
Jan 27 08:14:36 np0005597378 podman[91822]: 2026-01-27 13:14:36.872249523 +0000 UTC m=+0.606877298 container remove 5654914a6e3dc435fccabe953566f69786ee38a16a8ec99518d6720b24833f88 (image=quay.io/ceph/ceph:v20, name=eloquent_shaw, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:14:36 np0005597378 systemd[1]: libpod-conmon-5654914a6e3dc435fccabe953566f69786ee38a16a8ec99518d6720b24833f88.scope: Deactivated successfully.
Jan 27 08:14:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:14:37 np0005597378 python3[91951]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:37 np0005597378 podman[91995]: 2026-01-27 13:14:37.342614348 +0000 UTC m=+0.048341619 container create 39a2a95dd3fdf77fe12cf03abccfd62b5e0f4f0a368a696815b44255ecce98b2 (image=quay.io/ceph/ceph:v20, name=reverent_swartz, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:14:37 np0005597378 podman[91997]: 2026-01-27 13:14:37.361960002 +0000 UTC m=+0.059575542 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:37 np0005597378 systemd[1]: Started libpod-conmon-39a2a95dd3fdf77fe12cf03abccfd62b5e0f4f0a368a696815b44255ecce98b2.scope.
Jan 27 08:14:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/929c220abc09678c7358c59c26df93af8339ac083bb61688b487c6ec2348b3a0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/929c220abc09678c7358c59c26df93af8339ac083bb61688b487c6ec2348b3a0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/929c220abc09678c7358c59c26df93af8339ac083bb61688b487c6ec2348b3a0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:37 np0005597378 podman[91995]: 2026-01-27 13:14:37.316071327 +0000 UTC m=+0.021798628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:37 np0005597378 podman[91995]: 2026-01-27 13:14:37.420302881 +0000 UTC m=+0.126030152 container init 39a2a95dd3fdf77fe12cf03abccfd62b5e0f4f0a368a696815b44255ecce98b2 (image=quay.io/ceph/ceph:v20, name=reverent_swartz, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 08:14:37 np0005597378 podman[91995]: 2026-01-27 13:14:37.426493561 +0000 UTC m=+0.132220832 container start 39a2a95dd3fdf77fe12cf03abccfd62b5e0f4f0a368a696815b44255ecce98b2 (image=quay.io/ceph/ceph:v20, name=reverent_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:14:37 np0005597378 podman[91995]: 2026-01-27 13:14:37.429690105 +0000 UTC m=+0.135417386 container attach 39a2a95dd3fdf77fe12cf03abccfd62b5e0f4f0a368a696815b44255ecce98b2 (image=quay.io/ceph/ceph:v20, name=reverent_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:14:37 np0005597378 podman[91997]: 2026-01-27 13:14:37.456705048 +0000 UTC m=+0.154320598 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:14:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 27 08:14:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2343252790' entity='client.admin' 
Jan 27 08:14:37 np0005597378 reverent_swartz[92029]: set ssl_option
Jan 27 08:14:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:38 np0005597378 systemd[1]: libpod-39a2a95dd3fdf77fe12cf03abccfd62b5e0f4f0a368a696815b44255ecce98b2.scope: Deactivated successfully.
Jan 27 08:14:38 np0005597378 podman[91995]: 2026-01-27 13:14:38.009122208 +0000 UTC m=+0.714849479 container died 39a2a95dd3fdf77fe12cf03abccfd62b5e0f4f0a368a696815b44255ecce98b2 (image=quay.io/ceph/ceph:v20, name=reverent_swartz, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 08:14:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-929c220abc09678c7358c59c26df93af8339ac083bb61688b487c6ec2348b3a0-merged.mount: Deactivated successfully.
Jan 27 08:14:38 np0005597378 podman[91995]: 2026-01-27 13:14:38.049682814 +0000 UTC m=+0.755410075 container remove 39a2a95dd3fdf77fe12cf03abccfd62b5e0f4f0a368a696815b44255ecce98b2 (image=quay.io/ceph/ceph:v20, name=reverent_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 08:14:38 np0005597378 systemd[1]: libpod-conmon-39a2a95dd3fdf77fe12cf03abccfd62b5e0f4f0a368a696815b44255ecce98b2.scope: Deactivated successfully.
Jan 27 08:14:38 np0005597378 python3[92270]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:38 np0005597378 podman[92286]: 2026-01-27 13:14:38.435929369 +0000 UTC m=+0.059942002 container create 23a64438f15ffae387734161606fc7ed4e98a194974c14c21f310b0690bf051a (image=quay.io/ceph/ceph:v20, name=interesting_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:14:38 np0005597378 systemd[1]: Started libpod-conmon-23a64438f15ffae387734161606fc7ed4e98a194974c14c21f310b0690bf051a.scope.
Jan 27 08:14:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00d9012b4641dde3d9ea13a6b703bdb7fdc588b2716919d91e3c15dfad7efe2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00d9012b4641dde3d9ea13a6b703bdb7fdc588b2716919d91e3c15dfad7efe2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00d9012b4641dde3d9ea13a6b703bdb7fdc588b2716919d91e3c15dfad7efe2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:38 np0005597378 podman[92286]: 2026-01-27 13:14:38.419182213 +0000 UTC m=+0.043194876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:38 np0005597378 podman[92286]: 2026-01-27 13:14:38.521553078 +0000 UTC m=+0.145565721 container init 23a64438f15ffae387734161606fc7ed4e98a194974c14c21f310b0690bf051a (image=quay.io/ceph/ceph:v20, name=interesting_gould, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 08:14:38 np0005597378 podman[92286]: 2026-01-27 13:14:38.530075129 +0000 UTC m=+0.154087762 container start 23a64438f15ffae387734161606fc7ed4e98a194974c14c21f310b0690bf051a (image=quay.io/ceph/ceph:v20, name=interesting_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:14:38 np0005597378 podman[92286]: 2026-01-27 13:14:38.533857088 +0000 UTC m=+0.157869721 container attach 23a64438f15ffae387734161606fc7ed4e98a194974c14c21f310b0690bf051a (image=quay.io/ceph/ceph:v20, name=interesting_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:14:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:14:38 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:14:38 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Jan 27 08:14:38 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:38 np0005597378 interesting_gould[92302]: Scheduled rgw.rgw update...
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/2343252790' entity='client.admin' 
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:14:39 np0005597378 systemd[1]: libpod-23a64438f15ffae387734161606fc7ed4e98a194974c14c21f310b0690bf051a.scope: Deactivated successfully.
Jan 27 08:14:39 np0005597378 podman[92286]: 2026-01-27 13:14:39.004044368 +0000 UTC m=+0.628057011 container died 23a64438f15ffae387734161606fc7ed4e98a194974c14c21f310b0690bf051a (image=quay.io/ceph/ceph:v20, name=interesting_gould, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 08:14:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d00d9012b4641dde3d9ea13a6b703bdb7fdc588b2716919d91e3c15dfad7efe2-merged.mount: Deactivated successfully.
Jan 27 08:14:39 np0005597378 podman[92286]: 2026-01-27 13:14:39.049745118 +0000 UTC m=+0.673757751 container remove 23a64438f15ffae387734161606fc7ed4e98a194974c14c21f310b0690bf051a (image=quay.io/ceph/ceph:v20, name=interesting_gould, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:39 np0005597378 systemd[1]: libpod-conmon-23a64438f15ffae387734161606fc7ed4e98a194974c14c21f310b0690bf051a.scope: Deactivated successfully.
Jan 27 08:14:39 np0005597378 podman[92415]: 2026-01-27 13:14:39.137944524 +0000 UTC m=+0.039790027 container create 7535a9445208406103f552ccbbddff30139d21dbaa6adb3847215738465bc28e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:14:39 np0005597378 systemd[1]: Started libpod-conmon-7535a9445208406103f552ccbbddff30139d21dbaa6adb3847215738465bc28e.scope.
Jan 27 08:14:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:39 np0005597378 podman[92415]: 2026-01-27 13:14:39.215321898 +0000 UTC m=+0.117167421 container init 7535a9445208406103f552ccbbddff30139d21dbaa6adb3847215738465bc28e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:39 np0005597378 podman[92415]: 2026-01-27 13:14:39.121512896 +0000 UTC m=+0.023358419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:39 np0005597378 podman[92415]: 2026-01-27 13:14:39.223207013 +0000 UTC m=+0.125052516 container start 7535a9445208406103f552ccbbddff30139d21dbaa6adb3847215738465bc28e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:14:39 np0005597378 kind_faraday[92432]: 167 167
Jan 27 08:14:39 np0005597378 systemd[1]: libpod-7535a9445208406103f552ccbbddff30139d21dbaa6adb3847215738465bc28e.scope: Deactivated successfully.
Jan 27 08:14:39 np0005597378 podman[92415]: 2026-01-27 13:14:39.228942693 +0000 UTC m=+0.130788206 container attach 7535a9445208406103f552ccbbddff30139d21dbaa6adb3847215738465bc28e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:39 np0005597378 podman[92415]: 2026-01-27 13:14:39.229433506 +0000 UTC m=+0.131278999 container died 7535a9445208406103f552ccbbddff30139d21dbaa6adb3847215738465bc28e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay-82da66b4aba3b864ae88f1319c437ec4e54815c81d2ca43f3587cd6c81beb206-merged.mount: Deactivated successfully.
Jan 27 08:14:39 np0005597378 podman[92415]: 2026-01-27 13:14:39.268447671 +0000 UTC m=+0.170293174 container remove 7535a9445208406103f552ccbbddff30139d21dbaa6adb3847215738465bc28e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 08:14:39 np0005597378 systemd[1]: libpod-conmon-7535a9445208406103f552ccbbddff30139d21dbaa6adb3847215738465bc28e.scope: Deactivated successfully.
Jan 27 08:14:39 np0005597378 podman[92456]: 2026-01-27 13:14:39.443421016 +0000 UTC m=+0.051193384 container create 75db79ce0dae14f7e8aace6d57566bb7b90c7dd27888c3c1c2d682c0dece219b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:14:39 np0005597378 systemd[1]: Started libpod-conmon-75db79ce0dae14f7e8aace6d57566bb7b90c7dd27888c3c1c2d682c0dece219b.scope.
Jan 27 08:14:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2185bf50baa30da90826a18e445257c02ae44c9c870a8293e5ef42d38e6992b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2185bf50baa30da90826a18e445257c02ae44c9c870a8293e5ef42d38e6992b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2185bf50baa30da90826a18e445257c02ae44c9c870a8293e5ef42d38e6992b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2185bf50baa30da90826a18e445257c02ae44c9c870a8293e5ef42d38e6992b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2185bf50baa30da90826a18e445257c02ae44c9c870a8293e5ef42d38e6992b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:39 np0005597378 podman[92456]: 2026-01-27 13:14:39.420840498 +0000 UTC m=+0.028612906 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:39 np0005597378 podman[92456]: 2026-01-27 13:14:39.520754089 +0000 UTC m=+0.128526467 container init 75db79ce0dae14f7e8aace6d57566bb7b90c7dd27888c3c1c2d682c0dece219b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_euclid, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:39 np0005597378 podman[92456]: 2026-01-27 13:14:39.52655261 +0000 UTC m=+0.134324968 container start 75db79ce0dae14f7e8aace6d57566bb7b90c7dd27888c3c1c2d682c0dece219b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_euclid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:14:39 np0005597378 podman[92456]: 2026-01-27 13:14:39.529391514 +0000 UTC m=+0.137163872 container attach 75db79ce0dae14f7e8aace6d57566bb7b90c7dd27888c3c1c2d682c0dece219b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:14:39 np0005597378 ceph-mon[75090]: Saving service rgw.rgw spec with placement compute-0
Jan 27 08:14:39 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:40 np0005597378 eloquent_euclid[92473]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:14:40 np0005597378 eloquent_euclid[92473]: --> All data devices are unavailable
Jan 27 08:14:40 np0005597378 systemd[1]: libpod-75db79ce0dae14f7e8aace6d57566bb7b90c7dd27888c3c1c2d682c0dece219b.scope: Deactivated successfully.
Jan 27 08:14:40 np0005597378 podman[92456]: 2026-01-27 13:14:40.081698462 +0000 UTC m=+0.689470850 container died 75db79ce0dae14f7e8aace6d57566bb7b90c7dd27888c3c1c2d682c0dece219b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_euclid, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2185bf50baa30da90826a18e445257c02ae44c9c870a8293e5ef42d38e6992b1-merged.mount: Deactivated successfully.
Jan 27 08:14:40 np0005597378 podman[92456]: 2026-01-27 13:14:40.129340602 +0000 UTC m=+0.737112960 container remove 75db79ce0dae14f7e8aace6d57566bb7b90c7dd27888c3c1c2d682c0dece219b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 08:14:40 np0005597378 systemd[1]: libpod-conmon-75db79ce0dae14f7e8aace6d57566bb7b90c7dd27888c3c1c2d682c0dece219b.scope: Deactivated successfully.
Jan 27 08:14:40 np0005597378 python3[92565]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:14:40 np0005597378 python3[92702]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769519679.856742-36685-226748494720579/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:14:40 np0005597378 podman[92716]: 2026-01-27 13:14:40.548039161 +0000 UTC m=+0.038010280 container create 0dd4d4d3dc750e58e4b1674c1d58b46621e9e4969ead06112e796102aaad28d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:40 np0005597378 systemd[1]: Started libpod-conmon-0dd4d4d3dc750e58e4b1674c1d58b46621e9e4969ead06112e796102aaad28d2.scope.
Jan 27 08:14:40 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:40 np0005597378 podman[92716]: 2026-01-27 13:14:40.61870341 +0000 UTC m=+0.108674559 container init 0dd4d4d3dc750e58e4b1674c1d58b46621e9e4969ead06112e796102aaad28d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 27 08:14:40 np0005597378 podman[92716]: 2026-01-27 13:14:40.625648031 +0000 UTC m=+0.115619150 container start 0dd4d4d3dc750e58e4b1674c1d58b46621e9e4969ead06112e796102aaad28d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:40 np0005597378 podman[92716]: 2026-01-27 13:14:40.530476954 +0000 UTC m=+0.020448083 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:40 np0005597378 podman[92716]: 2026-01-27 13:14:40.628471265 +0000 UTC m=+0.118442404 container attach 0dd4d4d3dc750e58e4b1674c1d58b46621e9e4969ead06112e796102aaad28d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:40 np0005597378 cool_shaw[92756]: 167 167
Jan 27 08:14:40 np0005597378 systemd[1]: libpod-0dd4d4d3dc750e58e4b1674c1d58b46621e9e4969ead06112e796102aaad28d2.scope: Deactivated successfully.
Jan 27 08:14:40 np0005597378 podman[92716]: 2026-01-27 13:14:40.62982268 +0000 UTC m=+0.119793799 container died 0dd4d4d3dc750e58e4b1674c1d58b46621e9e4969ead06112e796102aaad28d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-16ce6c5fc562114b60fcbbf4f17ee9f0074e6e04730023290df23f599310f815-merged.mount: Deactivated successfully.
Jan 27 08:14:40 np0005597378 podman[92716]: 2026-01-27 13:14:40.663361693 +0000 UTC m=+0.153332802 container remove 0dd4d4d3dc750e58e4b1674c1d58b46621e9e4969ead06112e796102aaad28d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:40 np0005597378 systemd[1]: libpod-conmon-0dd4d4d3dc750e58e4b1674c1d58b46621e9e4969ead06112e796102aaad28d2.scope: Deactivated successfully.
Jan 27 08:14:40 np0005597378 podman[92778]: 2026-01-27 13:14:40.815955955 +0000 UTC m=+0.048699749 container create b4100ac5aeff526b1bbb95c3f80286da846986a544f226a60c53f1f0c1f5a0e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_vaughan, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:40 np0005597378 systemd[1]: Started libpod-conmon-b4100ac5aeff526b1bbb95c3f80286da846986a544f226a60c53f1f0c1f5a0e4.scope.
Jan 27 08:14:40 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dff1fad0f4a3fa40ff4b8cb31e3ed5e0767d5611f7828db4fb89f2b84421b23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dff1fad0f4a3fa40ff4b8cb31e3ed5e0767d5611f7828db4fb89f2b84421b23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dff1fad0f4a3fa40ff4b8cb31e3ed5e0767d5611f7828db4fb89f2b84421b23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:40 np0005597378 podman[92778]: 2026-01-27 13:14:40.794566439 +0000 UTC m=+0.027310313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dff1fad0f4a3fa40ff4b8cb31e3ed5e0767d5611f7828db4fb89f2b84421b23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:40 np0005597378 podman[92778]: 2026-01-27 13:14:40.900993029 +0000 UTC m=+0.133736823 container init b4100ac5aeff526b1bbb95c3f80286da846986a544f226a60c53f1f0c1f5a0e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:14:40 np0005597378 podman[92778]: 2026-01-27 13:14:40.90988329 +0000 UTC m=+0.142627104 container start b4100ac5aeff526b1bbb95c3f80286da846986a544f226a60c53f1f0c1f5a0e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_vaughan, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:40 np0005597378 podman[92778]: 2026-01-27 13:14:40.913763132 +0000 UTC m=+0.146506946 container attach b4100ac5aeff526b1bbb95c3f80286da846986a544f226a60c53f1f0c1f5a0e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_vaughan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:14:41 np0005597378 python3[92823]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:41 np0005597378 podman[92826]: 2026-01-27 13:14:41.098252484 +0000 UTC m=+0.035159126 container create bdaab04720b0f39cb733ee7735db7555202140aa5bd04e3f18b95b4295d24961 (image=quay.io/ceph/ceph:v20, name=determined_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:14:41 np0005597378 systemd[1]: Started libpod-conmon-bdaab04720b0f39cb733ee7735db7555202140aa5bd04e3f18b95b4295d24961.scope.
Jan 27 08:14:41 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9faf286b5a034eb2e83f9bde6ad8dc72acc3cbcd2d0fed6c7baf20570b9a7f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9faf286b5a034eb2e83f9bde6ad8dc72acc3cbcd2d0fed6c7baf20570b9a7f2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9faf286b5a034eb2e83f9bde6ad8dc72acc3cbcd2d0fed6c7baf20570b9a7f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:41 np0005597378 podman[92826]: 2026-01-27 13:14:41.167481936 +0000 UTC m=+0.104388658 container init bdaab04720b0f39cb733ee7735db7555202140aa5bd04e3f18b95b4295d24961 (image=quay.io/ceph/ceph:v20, name=determined_hopper, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 08:14:41 np0005597378 podman[92826]: 2026-01-27 13:14:41.173857643 +0000 UTC m=+0.110764285 container start bdaab04720b0f39cb733ee7735db7555202140aa5bd04e3f18b95b4295d24961 (image=quay.io/ceph/ceph:v20, name=determined_hopper, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:14:41 np0005597378 podman[92826]: 2026-01-27 13:14:41.177449136 +0000 UTC m=+0.114355858 container attach bdaab04720b0f39cb733ee7735db7555202140aa5bd04e3f18b95b4295d24961 (image=quay.io/ceph/ceph:v20, name=determined_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 08:14:41 np0005597378 podman[92826]: 2026-01-27 13:14:41.083953051 +0000 UTC m=+0.020859713 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]: {
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:    "0": [
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:        {
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "devices": [
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "/dev/loop3"
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            ],
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_name": "ceph_lv0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_size": "21470642176",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "name": "ceph_lv0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "tags": {
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.cluster_name": "ceph",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.crush_device_class": "",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.encrypted": "0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.objectstore": "bluestore",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.osd_id": "0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.type": "block",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.vdo": "0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.with_tpm": "0"
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            },
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "type": "block",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "vg_name": "ceph_vg0"
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:        }
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:    ],
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:    "1": [
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:        {
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "devices": [
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "/dev/loop4"
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            ],
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_name": "ceph_lv1",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_size": "21470642176",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "name": "ceph_lv1",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "tags": {
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.cluster_name": "ceph",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.crush_device_class": "",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.encrypted": "0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.objectstore": "bluestore",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.osd_id": "1",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.type": "block",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.vdo": "0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.with_tpm": "0"
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            },
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "type": "block",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "vg_name": "ceph_vg1"
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:        }
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:    ],
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:    "2": [
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:        {
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "devices": [
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "/dev/loop5"
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            ],
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_name": "ceph_lv2",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_size": "21470642176",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "name": "ceph_lv2",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "tags": {
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.cluster_name": "ceph",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.crush_device_class": "",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.encrypted": "0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.objectstore": "bluestore",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.osd_id": "2",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.type": "block",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.vdo": "0",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:                "ceph.with_tpm": "0"
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            },
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "type": "block",
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:            "vg_name": "ceph_vg2"
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:        }
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]:    ]
Jan 27 08:14:41 np0005597378 stoic_vaughan[92819]: }
Jan 27 08:14:41 np0005597378 systemd[1]: libpod-b4100ac5aeff526b1bbb95c3f80286da846986a544f226a60c53f1f0c1f5a0e4.scope: Deactivated successfully.
Jan 27 08:14:41 np0005597378 conmon[92819]: conmon b4100ac5aeff526b1bbb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4100ac5aeff526b1bbb95c3f80286da846986a544f226a60c53f1f0c1f5a0e4.scope/container/memory.events
Jan 27 08:14:41 np0005597378 podman[92778]: 2026-01-27 13:14:41.222431156 +0000 UTC m=+0.455175020 container died b4100ac5aeff526b1bbb95c3f80286da846986a544f226a60c53f1f0c1f5a0e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_vaughan, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0dff1fad0f4a3fa40ff4b8cb31e3ed5e0767d5611f7828db4fb89f2b84421b23-merged.mount: Deactivated successfully.
Jan 27 08:14:41 np0005597378 podman[92778]: 2026-01-27 13:14:41.266508424 +0000 UTC m=+0.499252238 container remove b4100ac5aeff526b1bbb95c3f80286da846986a544f226a60c53f1f0c1f5a0e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_vaughan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:41 np0005597378 systemd[1]: libpod-conmon-b4100ac5aeff526b1bbb95c3f80286da846986a544f226a60c53f1f0c1f5a0e4.scope: Deactivated successfully.
Jan 27 08:14:41 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 27 08:14:41 np0005597378 ceph-mgr[75385]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 27 08:14:41 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0[75086]: 2026-01-27T13:14:41.619+0000 7f20d5625640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).mds e2 new map
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2026-01-27T13:14:41:619740+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-27T13:14:41.619547+0000#012modified#0112026-01-27T13:14:41.619547+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 27 08:14:41 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 27 08:14:41 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 27 08:14:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:41 np0005597378 ceph-mgr[75385]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 27 08:14:41 np0005597378 systemd[1]: libpod-bdaab04720b0f39cb733ee7735db7555202140aa5bd04e3f18b95b4295d24961.scope: Deactivated successfully.
Jan 27 08:14:41 np0005597378 podman[92943]: 2026-01-27 13:14:41.669479345 +0000 UTC m=+0.034007837 container create d713e5888703bfe757d328e81ab821f52be049081f7441f54c46d3e2d301b487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_stonebraker, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:14:41 np0005597378 podman[92958]: 2026-01-27 13:14:41.697819942 +0000 UTC m=+0.021712127 container died bdaab04720b0f39cb733ee7735db7555202140aa5bd04e3f18b95b4295d24961 (image=quay.io/ceph/ceph:v20, name=determined_hopper, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:41 np0005597378 systemd[1]: Started libpod-conmon-d713e5888703bfe757d328e81ab821f52be049081f7441f54c46d3e2d301b487.scope.
Jan 27 08:14:41 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f9faf286b5a034eb2e83f9bde6ad8dc72acc3cbcd2d0fed6c7baf20570b9a7f2-merged.mount: Deactivated successfully.
Jan 27 08:14:41 np0005597378 podman[92943]: 2026-01-27 13:14:41.749410405 +0000 UTC m=+0.113938897 container init d713e5888703bfe757d328e81ab821f52be049081f7441f54c46d3e2d301b487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_stonebraker, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:41 np0005597378 podman[92943]: 2026-01-27 13:14:41.653790625 +0000 UTC m=+0.018319137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:41 np0005597378 podman[92943]: 2026-01-27 13:14:41.754919499 +0000 UTC m=+0.119447991 container start d713e5888703bfe757d328e81ab821f52be049081f7441f54c46d3e2d301b487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_stonebraker, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:41 np0005597378 podman[92943]: 2026-01-27 13:14:41.759155439 +0000 UTC m=+0.123683981 container attach d713e5888703bfe757d328e81ab821f52be049081f7441f54c46d3e2d301b487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_stonebraker, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:41 np0005597378 gracious_stonebraker[92971]: 167 167
Jan 27 08:14:41 np0005597378 systemd[1]: libpod-d713e5888703bfe757d328e81ab821f52be049081f7441f54c46d3e2d301b487.scope: Deactivated successfully.
Jan 27 08:14:41 np0005597378 podman[92958]: 2026-01-27 13:14:41.764515919 +0000 UTC m=+0.088408084 container remove bdaab04720b0f39cb733ee7735db7555202140aa5bd04e3f18b95b4295d24961 (image=quay.io/ceph/ceph:v20, name=determined_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 08:14:41 np0005597378 podman[92943]: 2026-01-27 13:14:41.768534412 +0000 UTC m=+0.133062904 container died d713e5888703bfe757d328e81ab821f52be049081f7441f54c46d3e2d301b487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_stonebraker, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:14:41 np0005597378 systemd[1]: libpod-conmon-bdaab04720b0f39cb733ee7735db7555202140aa5bd04e3f18b95b4295d24961.scope: Deactivated successfully.
Jan 27 08:14:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1f64411bce99d53f4f6513a114643dfe76b2268ed7b9824a156c9503b7a9b479-merged.mount: Deactivated successfully.
Jan 27 08:14:41 np0005597378 podman[92943]: 2026-01-27 13:14:41.805014672 +0000 UTC m=+0.169543174 container remove d713e5888703bfe757d328e81ab821f52be049081f7441f54c46d3e2d301b487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:14:41 np0005597378 systemd[1]: libpod-conmon-d713e5888703bfe757d328e81ab821f52be049081f7441f54c46d3e2d301b487.scope: Deactivated successfully.
Jan 27 08:14:41 np0005597378 podman[93005]: 2026-01-27 13:14:41.95817085 +0000 UTC m=+0.049089639 container create f93eeafd6d7afd40405e8357565543ce3862e634d720b9fc9bde2977d358733d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:14:41 np0005597378 systemd[1]: Started libpod-conmon-f93eeafd6d7afd40405e8357565543ce3862e634d720b9fc9bde2977d358733d.scope.
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:42 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb17f69457a8fc26e6e0df9cd1d6b50ad333b0baa64a3e19ce3fea79ff76c4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb17f69457a8fc26e6e0df9cd1d6b50ad333b0baa64a3e19ce3fea79ff76c4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb17f69457a8fc26e6e0df9cd1d6b50ad333b0baa64a3e19ce3fea79ff76c4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb17f69457a8fc26e6e0df9cd1d6b50ad333b0baa64a3e19ce3fea79ff76c4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:42 np0005597378 podman[93005]: 2026-01-27 13:14:41.939800542 +0000 UTC m=+0.030719361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:42 np0005597378 podman[93005]: 2026-01-27 13:14:42.044560509 +0000 UTC m=+0.135479318 container init f93eeafd6d7afd40405e8357565543ce3862e634d720b9fc9bde2977d358733d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cori, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:42 np0005597378 podman[93005]: 2026-01-27 13:14:42.052149916 +0000 UTC m=+0.143068705 container start f93eeafd6d7afd40405e8357565543ce3862e634d720b9fc9bde2977d358733d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cori, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:42 np0005597378 podman[93005]: 2026-01-27 13:14:42.055213966 +0000 UTC m=+0.146132765 container attach f93eeafd6d7afd40405e8357565543ce3862e634d720b9fc9bde2977d358733d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 08:14:42 np0005597378 python3[93036]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:42 np0005597378 podman[93047]: 2026-01-27 13:14:42.132907618 +0000 UTC m=+0.034795697 container create 468ff1d2df73dd71e828ab83d9f3cf2338bf5d674f9ac83dd7cb3e13db448217 (image=quay.io/ceph/ceph:v20, name=youthful_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:42 np0005597378 systemd[1]: Started libpod-conmon-468ff1d2df73dd71e828ab83d9f3cf2338bf5d674f9ac83dd7cb3e13db448217.scope.
Jan 27 08:14:42 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f0b83df952f41bde9e8b1e67f760d81e092ae624c6bb12e5328aacdd02c98a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f0b83df952f41bde9e8b1e67f760d81e092ae624c6bb12e5328aacdd02c98a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f0b83df952f41bde9e8b1e67f760d81e092ae624c6bb12e5328aacdd02c98a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:42 np0005597378 podman[93047]: 2026-01-27 13:14:42.190722643 +0000 UTC m=+0.092610721 container init 468ff1d2df73dd71e828ab83d9f3cf2338bf5d674f9ac83dd7cb3e13db448217 (image=quay.io/ceph/ceph:v20, name=youthful_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:42 np0005597378 podman[93047]: 2026-01-27 13:14:42.196212876 +0000 UTC m=+0.098100954 container start 468ff1d2df73dd71e828ab83d9f3cf2338bf5d674f9ac83dd7cb3e13db448217 (image=quay.io/ceph/ceph:v20, name=youthful_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:42 np0005597378 podman[93047]: 2026-01-27 13:14:42.198989429 +0000 UTC m=+0.100877507 container attach 468ff1d2df73dd71e828ab83d9f3cf2338bf5d674f9ac83dd7cb3e13db448217 (image=quay.io/ceph/ceph:v20, name=youthful_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 08:14:42 np0005597378 podman[93047]: 2026-01-27 13:14:42.118110153 +0000 UTC m=+0.019998261 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:42 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 08:14:42 np0005597378 ceph-mgr[75385]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 27 08:14:42 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:42 np0005597378 youthful_chatterjee[93062]: Scheduled mds.cephfs update...
Jan 27 08:14:42 np0005597378 systemd[1]: libpod-468ff1d2df73dd71e828ab83d9f3cf2338bf5d674f9ac83dd7cb3e13db448217.scope: Deactivated successfully.
Jan 27 08:14:42 np0005597378 conmon[93062]: conmon 468ff1d2df73dd71e828 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-468ff1d2df73dd71e828ab83d9f3cf2338bf5d674f9ac83dd7cb3e13db448217.scope/container/memory.events
Jan 27 08:14:42 np0005597378 podman[93047]: 2026-01-27 13:14:42.663638194 +0000 UTC m=+0.565526272 container died 468ff1d2df73dd71e828ab83d9f3cf2338bf5d674f9ac83dd7cb3e13db448217 (image=quay.io/ceph/ceph:v20, name=youthful_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay-18f0b83df952f41bde9e8b1e67f760d81e092ae624c6bb12e5328aacdd02c98a-merged.mount: Deactivated successfully.
Jan 27 08:14:42 np0005597378 lvm[93160]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:14:42 np0005597378 lvm[93162]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:14:42 np0005597378 lvm[93162]: VG ceph_vg1 finished
Jan 27 08:14:42 np0005597378 lvm[93160]: VG ceph_vg0 finished
Jan 27 08:14:42 np0005597378 podman[93047]: 2026-01-27 13:14:42.698918353 +0000 UTC m=+0.600806431 container remove 468ff1d2df73dd71e828ab83d9f3cf2338bf5d674f9ac83dd7cb3e13db448217 (image=quay.io/ceph/ceph:v20, name=youthful_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:14:42 np0005597378 lvm[93171]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:14:42 np0005597378 lvm[93171]: VG ceph_vg2 finished
Jan 27 08:14:42 np0005597378 systemd[1]: libpod-conmon-468ff1d2df73dd71e828ab83d9f3cf2338bf5d674f9ac83dd7cb3e13db448217.scope: Deactivated successfully.
Jan 27 08:14:42 np0005597378 great_cori[93041]: {}
Jan 27 08:14:42 np0005597378 systemd[1]: libpod-f93eeafd6d7afd40405e8357565543ce3862e634d720b9fc9bde2977d358733d.scope: Deactivated successfully.
Jan 27 08:14:42 np0005597378 systemd[1]: libpod-f93eeafd6d7afd40405e8357565543ce3862e634d720b9fc9bde2977d358733d.scope: Consumed 1.219s CPU time.
Jan 27 08:14:42 np0005597378 podman[93005]: 2026-01-27 13:14:42.81252357 +0000 UTC m=+0.903442379 container died f93eeafd6d7afd40405e8357565543ce3862e634d720b9fc9bde2977d358733d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:14:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ebb17f69457a8fc26e6e0df9cd1d6b50ad333b0baa64a3e19ce3fea79ff76c4c-merged.mount: Deactivated successfully.
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:14:42 np0005597378 podman[93005]: 2026-01-27 13:14:42.852778968 +0000 UTC m=+0.943697757 container remove f93eeafd6d7afd40405e8357565543ce3862e634d720b9fc9bde2977d358733d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cori, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:42 np0005597378 systemd[1]: libpod-conmon-f93eeafd6d7afd40405e8357565543ce3862e634d720b9fc9bde2977d358733d.scope: Deactivated successfully.
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:14:43 np0005597378 ceph-mon[75090]: Saving service mds.cephfs spec with placement compute-0
Jan 27 08:14:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:43 np0005597378 podman[93307]: 2026-01-27 13:14:43.579004792 +0000 UTC m=+0.134998995 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:14:43 np0005597378 podman[93307]: 2026-01-27 13:14:43.692737874 +0000 UTC m=+0.248732067 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:14:43 np0005597378 python3[93405]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: Saving service mds.cephfs spec with placement compute-0
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:44 np0005597378 python3[93566]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769519683.5199816-36737-178823366680947/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=b2765b5995bc6c569d1eddba58f26aa05958b691 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:14:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:14:44 np0005597378 podman[93716]: 2026-01-27 13:14:44.611141141 +0000 UTC m=+0.048250657 container create 277f5d3df957246c2b0d731b55a8f9c3e4427a5592b8997686f8f804a13fef36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lehmann, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 08:14:44 np0005597378 systemd[1]: Started libpod-conmon-277f5d3df957246c2b0d731b55a8f9c3e4427a5592b8997686f8f804a13fef36.scope.
Jan 27 08:14:44 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:44 np0005597378 python3[93713]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:44 np0005597378 podman[93716]: 2026-01-27 13:14:44.590074423 +0000 UTC m=+0.027183949 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:44 np0005597378 podman[93716]: 2026-01-27 13:14:44.686734989 +0000 UTC m=+0.123844495 container init 277f5d3df957246c2b0d731b55a8f9c3e4427a5592b8997686f8f804a13fef36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lehmann, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:44 np0005597378 podman[93716]: 2026-01-27 13:14:44.695565709 +0000 UTC m=+0.132675185 container start 277f5d3df957246c2b0d731b55a8f9c3e4427a5592b8997686f8f804a13fef36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:14:44 np0005597378 infallible_lehmann[93733]: 167 167
Jan 27 08:14:44 np0005597378 systemd[1]: libpod-277f5d3df957246c2b0d731b55a8f9c3e4427a5592b8997686f8f804a13fef36.scope: Deactivated successfully.
Jan 27 08:14:44 np0005597378 podman[93716]: 2026-01-27 13:14:44.700429235 +0000 UTC m=+0.137538711 container attach 277f5d3df957246c2b0d731b55a8f9c3e4427a5592b8997686f8f804a13fef36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:44 np0005597378 podman[93716]: 2026-01-27 13:14:44.701604816 +0000 UTC m=+0.138714292 container died 277f5d3df957246c2b0d731b55a8f9c3e4427a5592b8997686f8f804a13fef36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lehmann, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:44 np0005597378 systemd[1]: var-lib-containers-storage-overlay-56546b10d9b7a6be4036d19533f0a665124829ca1d4beae3a071388ddb7040db-merged.mount: Deactivated successfully.
Jan 27 08:14:44 np0005597378 podman[93716]: 2026-01-27 13:14:44.75361428 +0000 UTC m=+0.190723756 container remove 277f5d3df957246c2b0d731b55a8f9c3e4427a5592b8997686f8f804a13fef36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:14:44 np0005597378 systemd[1]: libpod-conmon-277f5d3df957246c2b0d731b55a8f9c3e4427a5592b8997686f8f804a13fef36.scope: Deactivated successfully.
Jan 27 08:14:44 np0005597378 podman[93736]: 2026-01-27 13:14:44.776985958 +0000 UTC m=+0.075421744 container create 02e86cf9afafcac206c41bb44fe1ec9260af17742887782a12776be26923030d (image=quay.io/ceph/ceph:v20, name=sad_volhard, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:44 np0005597378 systemd[1]: Started libpod-conmon-02e86cf9afafcac206c41bb44fe1ec9260af17742887782a12776be26923030d.scope.
Jan 27 08:14:44 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a217695b640bb9bbf0a2cf25db5d080f578952611d48f62e941748a6a1e28162/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a217695b640bb9bbf0a2cf25db5d080f578952611d48f62e941748a6a1e28162/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:44 np0005597378 podman[93736]: 2026-01-27 13:14:44.738088325 +0000 UTC m=+0.036524021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:44 np0005597378 podman[93736]: 2026-01-27 13:14:44.839570958 +0000 UTC m=+0.138006644 container init 02e86cf9afafcac206c41bb44fe1ec9260af17742887782a12776be26923030d (image=quay.io/ceph/ceph:v20, name=sad_volhard, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 27 08:14:44 np0005597378 podman[93736]: 2026-01-27 13:14:44.84541736 +0000 UTC m=+0.143853026 container start 02e86cf9afafcac206c41bb44fe1ec9260af17742887782a12776be26923030d (image=quay.io/ceph/ceph:v20, name=sad_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:14:44 np0005597378 podman[93736]: 2026-01-27 13:14:44.84851264 +0000 UTC m=+0.146948326 container attach 02e86cf9afafcac206c41bb44fe1ec9260af17742887782a12776be26923030d (image=quay.io/ceph/ceph:v20, name=sad_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:44 np0005597378 podman[93774]: 2026-01-27 13:14:44.906037928 +0000 UTC m=+0.041685047 container create a89fe7b8418d70085c745a30d32464fec7746de2c84729d096fea4a88d4839e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:14:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:14:44 np0005597378 systemd[1]: Started libpod-conmon-a89fe7b8418d70085c745a30d32464fec7746de2c84729d096fea4a88d4839e5.scope.
Jan 27 08:14:44 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd58aada22ceeb39a33272b21f6f89224a7df2b3743ad780773185861a6148f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd58aada22ceeb39a33272b21f6f89224a7df2b3743ad780773185861a6148f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd58aada22ceeb39a33272b21f6f89224a7df2b3743ad780773185861a6148f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd58aada22ceeb39a33272b21f6f89224a7df2b3743ad780773185861a6148f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd58aada22ceeb39a33272b21f6f89224a7df2b3743ad780773185861a6148f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:44 np0005597378 podman[93774]: 2026-01-27 13:14:44.985618809 +0000 UTC m=+0.121265958 container init a89fe7b8418d70085c745a30d32464fec7746de2c84729d096fea4a88d4839e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 08:14:44 np0005597378 podman[93774]: 2026-01-27 13:14:44.88842888 +0000 UTC m=+0.024076009 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:44 np0005597378 podman[93774]: 2026-01-27 13:14:44.991953065 +0000 UTC m=+0.127600214 container start a89fe7b8418d70085c745a30d32464fec7746de2c84729d096fea4a88d4839e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:44 np0005597378 podman[93774]: 2026-01-27 13:14:44.995961818 +0000 UTC m=+0.131608937 container attach a89fe7b8418d70085c745a30d32464fec7746de2c84729d096fea4a88d4839e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hopper, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 08:14:45 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:45 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:45 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:14:45 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:45 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:14:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 27 08:14:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/9543956' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 27 08:14:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/9543956' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 27 08:14:45 np0005597378 systemd[1]: libpod-02e86cf9afafcac206c41bb44fe1ec9260af17742887782a12776be26923030d.scope: Deactivated successfully.
Jan 27 08:14:45 np0005597378 podman[93736]: 2026-01-27 13:14:45.365286953 +0000 UTC m=+0.663722619 container died 02e86cf9afafcac206c41bb44fe1ec9260af17742887782a12776be26923030d (image=quay.io/ceph/ceph:v20, name=sad_volhard, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:14:45 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a217695b640bb9bbf0a2cf25db5d080f578952611d48f62e941748a6a1e28162-merged.mount: Deactivated successfully.
Jan 27 08:14:45 np0005597378 podman[93736]: 2026-01-27 13:14:45.409206727 +0000 UTC m=+0.707642403 container remove 02e86cf9afafcac206c41bb44fe1ec9260af17742887782a12776be26923030d (image=quay.io/ceph/ceph:v20, name=sad_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:14:45 np0005597378 systemd[1]: libpod-conmon-02e86cf9afafcac206c41bb44fe1ec9260af17742887782a12776be26923030d.scope: Deactivated successfully.
Jan 27 08:14:45 np0005597378 eloquent_hopper[93794]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:14:45 np0005597378 eloquent_hopper[93794]: --> All data devices are unavailable
Jan 27 08:14:45 np0005597378 systemd[1]: libpod-a89fe7b8418d70085c745a30d32464fec7746de2c84729d096fea4a88d4839e5.scope: Deactivated successfully.
Jan 27 08:14:45 np0005597378 conmon[93794]: conmon a89fe7b8418d70085c74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a89fe7b8418d70085c745a30d32464fec7746de2c84729d096fea4a88d4839e5.scope/container/memory.events
Jan 27 08:14:45 np0005597378 podman[93774]: 2026-01-27 13:14:45.523088461 +0000 UTC m=+0.658735580 container died a89fe7b8418d70085c745a30d32464fec7746de2c84729d096fea4a88d4839e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hopper, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 08:14:45 np0005597378 podman[93774]: 2026-01-27 13:14:45.560175196 +0000 UTC m=+0.695822305 container remove a89fe7b8418d70085c745a30d32464fec7746de2c84729d096fea4a88d4839e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:14:45 np0005597378 systemd[1]: libpod-conmon-a89fe7b8418d70085c745a30d32464fec7746de2c84729d096fea4a88d4839e5.scope: Deactivated successfully.
Jan 27 08:14:45 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5bd58aada22ceeb39a33272b21f6f89224a7df2b3743ad780773185861a6148f-merged.mount: Deactivated successfully.
Jan 27 08:14:45 np0005597378 podman[93917]: 2026-01-27 13:14:45.977771177 +0000 UTC m=+0.043357470 container create 0951beea843e02146515a2e94c103e52d17f72fb74cf3af7376c44e9cc3eeefe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 08:14:46 np0005597378 systemd[1]: Started libpod-conmon-0951beea843e02146515a2e94c103e52d17f72fb74cf3af7376c44e9cc3eeefe.scope.
Jan 27 08:14:46 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:46 np0005597378 podman[93917]: 2026-01-27 13:14:45.960826376 +0000 UTC m=+0.026412699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:46 np0005597378 podman[93917]: 2026-01-27 13:14:46.067454862 +0000 UTC m=+0.133041175 container init 0951beea843e02146515a2e94c103e52d17f72fb74cf3af7376c44e9cc3eeefe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:14:46 np0005597378 podman[93917]: 2026-01-27 13:14:46.072836292 +0000 UTC m=+0.138422585 container start 0951beea843e02146515a2e94c103e52d17f72fb74cf3af7376c44e9cc3eeefe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:46 np0005597378 agitated_colden[93959]: 167 167
Jan 27 08:14:46 np0005597378 systemd[1]: libpod-0951beea843e02146515a2e94c103e52d17f72fb74cf3af7376c44e9cc3eeefe.scope: Deactivated successfully.
Jan 27 08:14:46 np0005597378 podman[93917]: 2026-01-27 13:14:46.077663947 +0000 UTC m=+0.143250270 container attach 0951beea843e02146515a2e94c103e52d17f72fb74cf3af7376c44e9cc3eeefe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:46 np0005597378 podman[93917]: 2026-01-27 13:14:46.077926414 +0000 UTC m=+0.143512727 container died 0951beea843e02146515a2e94c103e52d17f72fb74cf3af7376c44e9cc3eeefe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 08:14:46 np0005597378 systemd[1]: var-lib-containers-storage-overlay-918319c7d695cf24679b50c285494748f3d9927c2fd8d8ad1dd0bba72e952bac-merged.mount: Deactivated successfully.
Jan 27 08:14:46 np0005597378 podman[93917]: 2026-01-27 13:14:46.116592861 +0000 UTC m=+0.182179164 container remove 0951beea843e02146515a2e94c103e52d17f72fb74cf3af7376c44e9cc3eeefe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_colden, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:14:46 np0005597378 systemd[1]: libpod-conmon-0951beea843e02146515a2e94c103e52d17f72fb74cf3af7376c44e9cc3eeefe.scope: Deactivated successfully.
Jan 27 08:14:46 np0005597378 python3[93961]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:46 np0005597378 podman[93980]: 2026-01-27 13:14:46.200406963 +0000 UTC m=+0.036923643 container create 585b28fa3400c20bda713794f15a93f8484de340f1fdd3b82fe922955ac7224f (image=quay.io/ceph/ceph:v20, name=agitated_feynman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:46 np0005597378 systemd[1]: Started libpod-conmon-585b28fa3400c20bda713794f15a93f8484de340f1fdd3b82fe922955ac7224f.scope.
Jan 27 08:14:46 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/9543956' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 27 08:14:46 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/9543956' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 27 08:14:46 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dacd8c8b42c2baf463b2fd4c1ef6b2aa142f7321af997479169923a50bfea10a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dacd8c8b42c2baf463b2fd4c1ef6b2aa142f7321af997479169923a50bfea10a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:46 np0005597378 podman[93980]: 2026-01-27 13:14:46.265812125 +0000 UTC m=+0.102328815 container init 585b28fa3400c20bda713794f15a93f8484de340f1fdd3b82fe922955ac7224f (image=quay.io/ceph/ceph:v20, name=agitated_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:46 np0005597378 podman[93980]: 2026-01-27 13:14:46.270874447 +0000 UTC m=+0.107391127 container start 585b28fa3400c20bda713794f15a93f8484de340f1fdd3b82fe922955ac7224f (image=quay.io/ceph/ceph:v20, name=agitated_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:46 np0005597378 podman[93980]: 2026-01-27 13:14:46.273787853 +0000 UTC m=+0.110304533 container attach 585b28fa3400c20bda713794f15a93f8484de340f1fdd3b82fe922955ac7224f (image=quay.io/ceph/ceph:v20, name=agitated_feynman, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:14:46 np0005597378 podman[94000]: 2026-01-27 13:14:46.280040516 +0000 UTC m=+0.046218434 container create dcfadf9faf9eb2b9ac30e6437715940c3d4a320b77ebfb5bbcfeb91c7538ed00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_blackwell, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:46 np0005597378 podman[93980]: 2026-01-27 13:14:46.184410026 +0000 UTC m=+0.020926726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:46 np0005597378 systemd[1]: Started libpod-conmon-dcfadf9faf9eb2b9ac30e6437715940c3d4a320b77ebfb5bbcfeb91c7538ed00.scope.
Jan 27 08:14:46 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522804a7770dcd6114b18edd1d4deab7879fb9f1ba7bd8595b38beeb40c54fb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522804a7770dcd6114b18edd1d4deab7879fb9f1ba7bd8595b38beeb40c54fb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522804a7770dcd6114b18edd1d4deab7879fb9f1ba7bd8595b38beeb40c54fb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522804a7770dcd6114b18edd1d4deab7879fb9f1ba7bd8595b38beeb40c54fb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:46 np0005597378 podman[94000]: 2026-01-27 13:14:46.260029144 +0000 UTC m=+0.026207082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:46 np0005597378 podman[94000]: 2026-01-27 13:14:46.495128334 +0000 UTC m=+0.261306282 container init dcfadf9faf9eb2b9ac30e6437715940c3d4a320b77ebfb5bbcfeb91c7538ed00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_blackwell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:14:46 np0005597378 podman[94000]: 2026-01-27 13:14:46.501574893 +0000 UTC m=+0.267752811 container start dcfadf9faf9eb2b9ac30e6437715940c3d4a320b77ebfb5bbcfeb91c7538ed00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 08:14:46 np0005597378 podman[94000]: 2026-01-27 13:14:46.619909163 +0000 UTC m=+0.386087141 container attach dcfadf9faf9eb2b9ac30e6437715940c3d4a320b77ebfb5bbcfeb91c7538ed00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]: {
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:    "0": [
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:        {
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "devices": [
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "/dev/loop3"
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            ],
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_name": "ceph_lv0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_size": "21470642176",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "name": "ceph_lv0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "tags": {
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.cluster_name": "ceph",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.crush_device_class": "",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.encrypted": "0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.objectstore": "bluestore",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.osd_id": "0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.type": "block",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.vdo": "0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.with_tpm": "0"
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            },
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "type": "block",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "vg_name": "ceph_vg0"
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:        }
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:    ],
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:    "1": [
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:        {
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "devices": [
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "/dev/loop4"
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            ],
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_name": "ceph_lv1",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_size": "21470642176",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "name": "ceph_lv1",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "tags": {
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.cluster_name": "ceph",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.crush_device_class": "",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.encrypted": "0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.objectstore": "bluestore",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.osd_id": "1",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.type": "block",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.vdo": "0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.with_tpm": "0"
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            },
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "type": "block",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "vg_name": "ceph_vg1"
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:        }
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:    ],
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:    "2": [
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:        {
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "devices": [
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "/dev/loop5"
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            ],
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_name": "ceph_lv2",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_size": "21470642176",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "name": "ceph_lv2",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "tags": {
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.cluster_name": "ceph",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.crush_device_class": "",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.encrypted": "0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.objectstore": "bluestore",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.osd_id": "2",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.type": "block",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.vdo": "0",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:                "ceph.with_tpm": "0"
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            },
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "type": "block",
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:            "vg_name": "ceph_vg2"
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:        }
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]:    ]
Jan 27 08:14:46 np0005597378 thirsty_blackwell[94020]: }
Jan 27 08:14:46 np0005597378 systemd[1]: libpod-dcfadf9faf9eb2b9ac30e6437715940c3d4a320b77ebfb5bbcfeb91c7538ed00.scope: Deactivated successfully.
Jan 27 08:14:46 np0005597378 podman[94000]: 2026-01-27 13:14:46.802789674 +0000 UTC m=+0.568967602 container died dcfadf9faf9eb2b9ac30e6437715940c3d4a320b77ebfb5bbcfeb91c7538ed00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_blackwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:14:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 27 08:14:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2604935266' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 27 08:14:46 np0005597378 agitated_feynman[94002]: 
Jan 27 08:14:46 np0005597378 agitated_feynman[94002]: {"fsid":"4d8fd694-f443-5fb1-b612-70034b2f3c6e","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":109,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":27,"num_osds":3,"num_up_osds":3,"osd_up_since":1769519663,"num_in_osds":3,"osd_in_since":1769519634,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83746816,"bytes_avail":64328179712,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-27T13:14:41:619740+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-27T13:14:18.922578+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 27 08:14:46 np0005597378 systemd[1]: libpod-585b28fa3400c20bda713794f15a93f8484de340f1fdd3b82fe922955ac7224f.scope: Deactivated successfully.
Jan 27 08:14:46 np0005597378 conmon[94002]: conmon 585b28fa3400c20bda71 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-585b28fa3400c20bda713794f15a93f8484de340f1fdd3b82fe922955ac7224f.scope/container/memory.events
Jan 27 08:14:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:14:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-522804a7770dcd6114b18edd1d4deab7879fb9f1ba7bd8595b38beeb40c54fb8-merged.mount: Deactivated successfully.
Jan 27 08:14:47 np0005597378 podman[94000]: 2026-01-27 13:14:47.138555055 +0000 UTC m=+0.904732973 container remove dcfadf9faf9eb2b9ac30e6437715940c3d4a320b77ebfb5bbcfeb91c7538ed00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_blackwell, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:47 np0005597378 podman[93980]: 2026-01-27 13:14:47.167119618 +0000 UTC m=+1.003636298 container died 585b28fa3400c20bda713794f15a93f8484de340f1fdd3b82fe922955ac7224f (image=quay.io/ceph/ceph:v20, name=agitated_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 08:14:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-dacd8c8b42c2baf463b2fd4c1ef6b2aa142f7321af997479169923a50bfea10a-merged.mount: Deactivated successfully.
Jan 27 08:14:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:14:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:14:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:14:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:14:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:14:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:14:47 np0005597378 podman[94062]: 2026-01-27 13:14:47.84827686 +0000 UTC m=+0.991465872 container remove 585b28fa3400c20bda713794f15a93f8484de340f1fdd3b82fe922955ac7224f (image=quay.io/ceph/ceph:v20, name=agitated_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 08:14:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:14:47 np0005597378 systemd[1]: libpod-conmon-585b28fa3400c20bda713794f15a93f8484de340f1fdd3b82fe922955ac7224f.scope: Deactivated successfully.
Jan 27 08:14:47 np0005597378 systemd[1]: libpod-conmon-dcfadf9faf9eb2b9ac30e6437715940c3d4a320b77ebfb5bbcfeb91c7538ed00.scope: Deactivated successfully.
Jan 27 08:14:47 np0005597378 podman[94138]: 2026-01-27 13:14:47.95276065 +0000 UTC m=+0.065777304 container create 77787f8a8b0d24f894f34e73394adb912bee0e7a1b108e1cf6c48452e23cb62c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:14:48 np0005597378 podman[94138]: 2026-01-27 13:14:47.919999617 +0000 UTC m=+0.033016291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:48 np0005597378 systemd[1]: Started libpod-conmon-77787f8a8b0d24f894f34e73394adb912bee0e7a1b108e1cf6c48452e23cb62c.scope.
Jan 27 08:14:48 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:48 np0005597378 podman[94138]: 2026-01-27 13:14:48.131143374 +0000 UTC m=+0.244160018 container init 77787f8a8b0d24f894f34e73394adb912bee0e7a1b108e1cf6c48452e23cb62c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:14:48 np0005597378 podman[94138]: 2026-01-27 13:14:48.13910009 +0000 UTC m=+0.252116754 container start 77787f8a8b0d24f894f34e73394adb912bee0e7a1b108e1cf6c48452e23cb62c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_black, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:14:48 np0005597378 lucid_black[94179]: 167 167
Jan 27 08:14:48 np0005597378 systemd[1]: libpod-77787f8a8b0d24f894f34e73394adb912bee0e7a1b108e1cf6c48452e23cb62c.scope: Deactivated successfully.
Jan 27 08:14:48 np0005597378 podman[94138]: 2026-01-27 13:14:48.163290341 +0000 UTC m=+0.276306965 container attach 77787f8a8b0d24f894f34e73394adb912bee0e7a1b108e1cf6c48452e23cb62c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_black, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:48 np0005597378 podman[94138]: 2026-01-27 13:14:48.163786713 +0000 UTC m=+0.276803337 container died 77787f8a8b0d24f894f34e73394adb912bee0e7a1b108e1cf6c48452e23cb62c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_black, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:14:48 np0005597378 python3[94181]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay-721fdf3b9453bb9fcbf9bb0f291a47bf9e0117c71f5f16b9a72ed3eb3b61e536-merged.mount: Deactivated successfully.
Jan 27 08:14:48 np0005597378 podman[94138]: 2026-01-27 13:14:48.366914031 +0000 UTC m=+0.479930695 container remove 77787f8a8b0d24f894f34e73394adb912bee0e7a1b108e1cf6c48452e23cb62c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_black, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:14:48 np0005597378 systemd[1]: libpod-conmon-77787f8a8b0d24f894f34e73394adb912bee0e7a1b108e1cf6c48452e23cb62c.scope: Deactivated successfully.
Jan 27 08:14:48 np0005597378 podman[94197]: 2026-01-27 13:14:48.483547417 +0000 UTC m=+0.249938357 container create 4704d67fbc0e0cb36a68bfecc1a325951d136fa10933024cabdb5161c36602b4 (image=quay.io/ceph/ceph:v20, name=stupefied_mayer, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:48 np0005597378 podman[94197]: 2026-01-27 13:14:48.391709087 +0000 UTC m=+0.158100117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:48 np0005597378 systemd[1]: Started libpod-conmon-4704d67fbc0e0cb36a68bfecc1a325951d136fa10933024cabdb5161c36602b4.scope.
Jan 27 08:14:48 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b1ddd2b786f9dbfcf0d56cd3fa5dd42ed626003a02643632c034d0c704379af/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b1ddd2b786f9dbfcf0d56cd3fa5dd42ed626003a02643632c034d0c704379af/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:48 np0005597378 podman[94217]: 2026-01-27 13:14:48.625525643 +0000 UTC m=+0.072454937 container create 27542a8fdc4ed58f0e445fcc9661fe95ef480eb826df6dc0f9c1131e4fd8faca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:14:48 np0005597378 podman[94197]: 2026-01-27 13:14:48.651515129 +0000 UTC m=+0.417906119 container init 4704d67fbc0e0cb36a68bfecc1a325951d136fa10933024cabdb5161c36602b4 (image=quay.io/ceph/ceph:v20, name=stupefied_mayer, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:48 np0005597378 podman[94197]: 2026-01-27 13:14:48.659158949 +0000 UTC m=+0.425549929 container start 4704d67fbc0e0cb36a68bfecc1a325951d136fa10933024cabdb5161c36602b4 (image=quay.io/ceph/ceph:v20, name=stupefied_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:14:48 np0005597378 podman[94197]: 2026-01-27 13:14:48.664721944 +0000 UTC m=+0.431112914 container attach 4704d67fbc0e0cb36a68bfecc1a325951d136fa10933024cabdb5161c36602b4 (image=quay.io/ceph/ceph:v20, name=stupefied_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 08:14:48 np0005597378 systemd[1]: Started libpod-conmon-27542a8fdc4ed58f0e445fcc9661fe95ef480eb826df6dc0f9c1131e4fd8faca.scope.
Jan 27 08:14:48 np0005597378 podman[94217]: 2026-01-27 13:14:48.590961403 +0000 UTC m=+0.037890797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:48 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b0f5898afe2c4f670c5c2add4a5f48e591b7751aa6b933626f01353e90ad19c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b0f5898afe2c4f670c5c2add4a5f48e591b7751aa6b933626f01353e90ad19c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b0f5898afe2c4f670c5c2add4a5f48e591b7751aa6b933626f01353e90ad19c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b0f5898afe2c4f670c5c2add4a5f48e591b7751aa6b933626f01353e90ad19c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:48 np0005597378 podman[94217]: 2026-01-27 13:14:48.70608602 +0000 UTC m=+0.153015314 container init 27542a8fdc4ed58f0e445fcc9661fe95ef480eb826df6dc0f9c1131e4fd8faca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_khayyam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:48 np0005597378 podman[94217]: 2026-01-27 13:14:48.715761172 +0000 UTC m=+0.162690466 container start 27542a8fdc4ed58f0e445fcc9661fe95ef480eb826df6dc0f9c1131e4fd8faca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:14:48 np0005597378 podman[94217]: 2026-01-27 13:14:48.719562021 +0000 UTC m=+0.166491345 container attach 27542a8fdc4ed58f0e445fcc9661fe95ef480eb826df6dc0f9c1131e4fd8faca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_khayyam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:14:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:14:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/937408495' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:14:49 np0005597378 stupefied_mayer[94233]: 
Jan 27 08:14:49 np0005597378 stupefied_mayer[94233]: {"epoch":1,"fsid":"4d8fd694-f443-5fb1-b612-70034b2f3c6e","modified":"2026-01-27T13:12:50.854365Z","created":"2026-01-27T13:12:50.854365Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 27 08:14:49 np0005597378 stupefied_mayer[94233]: dumped monmap epoch 1
Jan 27 08:14:49 np0005597378 systemd[1]: libpod-4704d67fbc0e0cb36a68bfecc1a325951d136fa10933024cabdb5161c36602b4.scope: Deactivated successfully.
Jan 27 08:14:49 np0005597378 podman[94197]: 2026-01-27 13:14:49.157198093 +0000 UTC m=+0.923589033 container died 4704d67fbc0e0cb36a68bfecc1a325951d136fa10933024cabdb5161c36602b4 (image=quay.io/ceph/ceph:v20, name=stupefied_mayer, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0b1ddd2b786f9dbfcf0d56cd3fa5dd42ed626003a02643632c034d0c704379af-merged.mount: Deactivated successfully.
Jan 27 08:14:49 np0005597378 podman[94197]: 2026-01-27 13:14:49.195907311 +0000 UTC m=+0.962298251 container remove 4704d67fbc0e0cb36a68bfecc1a325951d136fa10933024cabdb5161c36602b4 (image=quay.io/ceph/ceph:v20, name=stupefied_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:49 np0005597378 systemd[1]: libpod-conmon-4704d67fbc0e0cb36a68bfecc1a325951d136fa10933024cabdb5161c36602b4.scope: Deactivated successfully.
Jan 27 08:14:49 np0005597378 lvm[94350]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:14:49 np0005597378 lvm[94351]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:14:49 np0005597378 lvm[94350]: VG ceph_vg0 finished
Jan 27 08:14:49 np0005597378 lvm[94351]: VG ceph_vg1 finished
Jan 27 08:14:49 np0005597378 lvm[94353]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:14:49 np0005597378 lvm[94353]: VG ceph_vg2 finished
Jan 27 08:14:49 np0005597378 modest_khayyam[94240]: {}
Jan 27 08:14:49 np0005597378 systemd[1]: libpod-27542a8fdc4ed58f0e445fcc9661fe95ef480eb826df6dc0f9c1131e4fd8faca.scope: Deactivated successfully.
Jan 27 08:14:49 np0005597378 podman[94217]: 2026-01-27 13:14:49.542211226 +0000 UTC m=+0.989140520 container died 27542a8fdc4ed58f0e445fcc9661fe95ef480eb826df6dc0f9c1131e4fd8faca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:14:49 np0005597378 systemd[1]: libpod-27542a8fdc4ed58f0e445fcc9661fe95ef480eb826df6dc0f9c1131e4fd8faca.scope: Consumed 1.296s CPU time.
Jan 27 08:14:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-6b0f5898afe2c4f670c5c2add4a5f48e591b7751aa6b933626f01353e90ad19c-merged.mount: Deactivated successfully.
Jan 27 08:14:49 np0005597378 podman[94217]: 2026-01-27 13:14:49.620548115 +0000 UTC m=+1.067477409 container remove 27542a8fdc4ed58f0e445fcc9661fe95ef480eb826df6dc0f9c1131e4fd8faca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 08:14:49 np0005597378 systemd[1]: libpod-conmon-27542a8fdc4ed58f0e445fcc9661fe95ef480eb826df6dc0f9c1131e4fd8faca.scope: Deactivated successfully.
Jan 27 08:14:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:49 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev 5d07a6be-c3bf-4091-af5b-08ccbbc528fd (Updating rgw.rgw deployment (+1 -> 1))
Jan 27 08:14:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dfjhvm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 27 08:14:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dfjhvm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 27 08:14:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dfjhvm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 27 08:14:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 27 08:14:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:14:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:14:49 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.dfjhvm on compute-0
Jan 27 08:14:49 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.dfjhvm on compute-0
Jan 27 08:14:49 np0005597378 python3[94391]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:49 np0005597378 podman[94440]: 2026-01-27 13:14:49.833709695 +0000 UTC m=+0.043830793 container create c9554ec884508e8559ecce3682f9a30107428f0cc4ee49cc6da44705dcf2f7f6 (image=quay.io/ceph/ceph:v20, name=hopeful_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:49 np0005597378 systemd[1]: Started libpod-conmon-c9554ec884508e8559ecce3682f9a30107428f0cc4ee49cc6da44705dcf2f7f6.scope.
Jan 27 08:14:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9033aa83e4096e59b8bd53ca71a7a5f6f4d235c296a9a554283ea14634a5c22/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9033aa83e4096e59b8bd53ca71a7a5f6f4d235c296a9a554283ea14634a5c22/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:49 np0005597378 podman[94440]: 2026-01-27 13:14:49.81431173 +0000 UTC m=+0.024432818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:49 np0005597378 podman[94440]: 2026-01-27 13:14:49.910803071 +0000 UTC m=+0.120924159 container init c9554ec884508e8559ecce3682f9a30107428f0cc4ee49cc6da44705dcf2f7f6 (image=quay.io/ceph/ceph:v20, name=hopeful_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:14:49 np0005597378 podman[94440]: 2026-01-27 13:14:49.918392779 +0000 UTC m=+0.128513847 container start c9554ec884508e8559ecce3682f9a30107428f0cc4ee49cc6da44705dcf2f7f6 (image=quay.io/ceph/ceph:v20, name=hopeful_austin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:49 np0005597378 podman[94440]: 2026-01-27 13:14:49.92844556 +0000 UTC m=+0.138566658 container attach c9554ec884508e8559ecce3682f9a30107428f0cc4ee49cc6da44705dcf2f7f6 (image=quay.io/ceph/ceph:v20, name=hopeful_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:50 np0005597378 podman[94522]: 2026-01-27 13:14:50.210255857 +0000 UTC m=+0.042738384 container create c80d64f4dc6fc6fff8a28c9e23ff3e1998962048c9f6c414f1ff73cfb9038b07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kowalevski, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:50 np0005597378 systemd[1]: Started libpod-conmon-c80d64f4dc6fc6fff8a28c9e23ff3e1998962048c9f6c414f1ff73cfb9038b07.scope.
Jan 27 08:14:50 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:50 np0005597378 podman[94522]: 2026-01-27 13:14:50.2714969 +0000 UTC m=+0.103979427 container init c80d64f4dc6fc6fff8a28c9e23ff3e1998962048c9f6c414f1ff73cfb9038b07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:14:50 np0005597378 podman[94522]: 2026-01-27 13:14:50.277634591 +0000 UTC m=+0.110117118 container start c80d64f4dc6fc6fff8a28c9e23ff3e1998962048c9f6c414f1ff73cfb9038b07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kowalevski, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 08:14:50 np0005597378 flamboyant_kowalevski[94538]: 167 167
Jan 27 08:14:50 np0005597378 podman[94522]: 2026-01-27 13:14:50.280375162 +0000 UTC m=+0.112857709 container attach c80d64f4dc6fc6fff8a28c9e23ff3e1998962048c9f6c414f1ff73cfb9038b07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:50 np0005597378 podman[94522]: 2026-01-27 13:14:50.281216753 +0000 UTC m=+0.113699280 container died c80d64f4dc6fc6fff8a28c9e23ff3e1998962048c9f6c414f1ff73cfb9038b07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:50 np0005597378 systemd[1]: libpod-c80d64f4dc6fc6fff8a28c9e23ff3e1998962048c9f6c414f1ff73cfb9038b07.scope: Deactivated successfully.
Jan 27 08:14:50 np0005597378 podman[94522]: 2026-01-27 13:14:50.191311043 +0000 UTC m=+0.023793590 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay-61605b1793c42092facf62474a47260837d7e723d31b267d618fe0019d9bce10-merged.mount: Deactivated successfully.
Jan 27 08:14:50 np0005597378 podman[94522]: 2026-01-27 13:14:50.313657948 +0000 UTC m=+0.146140465 container remove c80d64f4dc6fc6fff8a28c9e23ff3e1998962048c9f6c414f1ff73cfb9038b07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:14:50 np0005597378 systemd[1]: libpod-conmon-c80d64f4dc6fc6fff8a28c9e23ff3e1998962048c9f6c414f1ff73cfb9038b07.scope: Deactivated successfully.
Jan 27 08:14:50 np0005597378 systemd[1]: Reloading.
Jan 27 08:14:50 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:14:50 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:14:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 27 08:14:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1534766206' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 27 08:14:50 np0005597378 hopeful_austin[94457]: [client.openstack]
Jan 27 08:14:50 np0005597378 hopeful_austin[94457]: #011key = AQCpuXhpAAAAABAAvIlPmSdqWzxTdE6O8VTMdg==
Jan 27 08:14:50 np0005597378 hopeful_austin[94457]: #011caps mgr = "allow *"
Jan 27 08:14:50 np0005597378 hopeful_austin[94457]: #011caps mon = "profile rbd"
Jan 27 08:14:50 np0005597378 hopeful_austin[94457]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 27 08:14:50 np0005597378 podman[94440]: 2026-01-27 13:14:50.463408006 +0000 UTC m=+0.673529084 container died c9554ec884508e8559ecce3682f9a30107428f0cc4ee49cc6da44705dcf2f7f6 (image=quay.io/ceph/ceph:v20, name=hopeful_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 08:14:50 np0005597378 systemd[1]: libpod-c9554ec884508e8559ecce3682f9a30107428f0cc4ee49cc6da44705dcf2f7f6.scope: Deactivated successfully.
Jan 27 08:14:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d9033aa83e4096e59b8bd53ca71a7a5f6f4d235c296a9a554283ea14634a5c22-merged.mount: Deactivated successfully.
Jan 27 08:14:50 np0005597378 podman[94440]: 2026-01-27 13:14:50.612496718 +0000 UTC m=+0.822617786 container remove c9554ec884508e8559ecce3682f9a30107428f0cc4ee49cc6da44705dcf2f7f6 (image=quay.io/ceph/ceph:v20, name=hopeful_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 27 08:14:50 np0005597378 systemd[1]: libpod-conmon-c9554ec884508e8559ecce3682f9a30107428f0cc4ee49cc6da44705dcf2f7f6.scope: Deactivated successfully.
Jan 27 08:14:50 np0005597378 systemd[1]: Reloading.
Jan 27 08:14:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dfjhvm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 27 08:14:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dfjhvm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 27 08:14:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:50 np0005597378 ceph-mon[75090]: Deploying daemon rgw.rgw.compute-0.dfjhvm on compute-0
Jan 27 08:14:50 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1534766206' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 27 08:14:50 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:14:50 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:14:50 np0005597378 systemd[1]: Starting Ceph rgw.rgw.compute-0.dfjhvm for 4d8fd694-f443-5fb1-b612-70034b2f3c6e...
Jan 27 08:14:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:14:51 np0005597378 podman[94693]: 2026-01-27 13:14:51.12925483 +0000 UTC m=+0.049316455 container create cb472c3daf6324ccad457d1e3232495095921d7cbbc9c7b922bb28df44fade59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-rgw-rgw-compute-0-dfjhvm, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b903e129905a64d5265334659b224e108e1b11348950dc9ca5605af4732300a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b903e129905a64d5265334659b224e108e1b11348950dc9ca5605af4732300a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b903e129905a64d5265334659b224e108e1b11348950dc9ca5605af4732300a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b903e129905a64d5265334659b224e108e1b11348950dc9ca5605af4732300a/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.dfjhvm supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:51 np0005597378 podman[94693]: 2026-01-27 13:14:51.187007263 +0000 UTC m=+0.107068908 container init cb472c3daf6324ccad457d1e3232495095921d7cbbc9c7b922bb28df44fade59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-rgw-rgw-compute-0-dfjhvm, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:51 np0005597378 podman[94693]: 2026-01-27 13:14:51.192433635 +0000 UTC m=+0.112495270 container start cb472c3daf6324ccad457d1e3232495095921d7cbbc9c7b922bb28df44fade59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-rgw-rgw-compute-0-dfjhvm, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:51 np0005597378 podman[94693]: 2026-01-27 13:14:51.101886877 +0000 UTC m=+0.021948592 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:51 np0005597378 bash[94693]: cb472c3daf6324ccad457d1e3232495095921d7cbbc9c7b922bb28df44fade59
Jan 27 08:14:51 np0005597378 systemd[1]: Started Ceph rgw.rgw.compute-0.dfjhvm for 4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:14:51 np0005597378 radosgw[94713]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:14:51 np0005597378 radosgw[94713]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Jan 27 08:14:51 np0005597378 radosgw[94713]: framework: beast
Jan 27 08:14:51 np0005597378 radosgw[94713]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 27 08:14:51 np0005597378 radosgw[94713]: init_numa not setting numa affinity
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:51 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev 5d07a6be-c3bf-4091-af5b-08ccbbc528fd (Updating rgw.rgw deployment (+1 -> 1))
Jan 27 08:14:51 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 5d07a6be-c3bf-4091-af5b-08ccbbc528fd (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Jan 27 08:14:51 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Jan 27 08:14:51 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:51 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev 3a217c9f-665b-4430-b117-5f367acd11b1 (Updating mds.cephfs deployment (+1 -> 1))
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ukpmyo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ukpmyo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ukpmyo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:14:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:14:51 np0005597378 ceph-mgr[75385]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.ukpmyo on compute-0
Jan 27 08:14:51 np0005597378 ceph-mgr[75385]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.ukpmyo on compute-0
Jan 27 08:14:51 np0005597378 podman[94897]: 2026-01-27 13:14:51.753963042 +0000 UTC m=+0.036343627 container create 00aaf05983ba338cf41e60bfe86b5fc8f6ddd6f81bed0fdab04063e1dee01963 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:51 np0005597378 systemd[1]: Started libpod-conmon-00aaf05983ba338cf41e60bfe86b5fc8f6ddd6f81bed0fdab04063e1dee01963.scope.
Jan 27 08:14:51 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:51 np0005597378 podman[94897]: 2026-01-27 13:14:51.81226984 +0000 UTC m=+0.094650465 container init 00aaf05983ba338cf41e60bfe86b5fc8f6ddd6f81bed0fdab04063e1dee01963 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 27 08:14:51 np0005597378 podman[94897]: 2026-01-27 13:14:51.818565294 +0000 UTC m=+0.100945889 container start 00aaf05983ba338cf41e60bfe86b5fc8f6ddd6f81bed0fdab04063e1dee01963 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 08:14:51 np0005597378 podman[94897]: 2026-01-27 13:14:51.821619523 +0000 UTC m=+0.104000148 container attach 00aaf05983ba338cf41e60bfe86b5fc8f6ddd6f81bed0fdab04063e1dee01963 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 08:14:51 np0005597378 tender_noether[94944]: 167 167
Jan 27 08:14:51 np0005597378 systemd[1]: libpod-00aaf05983ba338cf41e60bfe86b5fc8f6ddd6f81bed0fdab04063e1dee01963.scope: Deactivated successfully.
Jan 27 08:14:51 np0005597378 conmon[94944]: conmon 00aaf05983ba338cf41e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00aaf05983ba338cf41e60bfe86b5fc8f6ddd6f81bed0fdab04063e1dee01963.scope/container/memory.events
Jan 27 08:14:51 np0005597378 podman[94897]: 2026-01-27 13:14:51.823505903 +0000 UTC m=+0.105886498 container died 00aaf05983ba338cf41e60bfe86b5fc8f6ddd6f81bed0fdab04063e1dee01963 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:51 np0005597378 podman[94897]: 2026-01-27 13:14:51.738606572 +0000 UTC m=+0.020987187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ad234c84282717d5f6f93bcb1b3581334f3975261ace24e76ce81dfa416bdc0b-merged.mount: Deactivated successfully.
Jan 27 08:14:51 np0005597378 podman[94897]: 2026-01-27 13:14:51.863535524 +0000 UTC m=+0.145916119 container remove 00aaf05983ba338cf41e60bfe86b5fc8f6ddd6f81bed0fdab04063e1dee01963 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:51 np0005597378 systemd[1]: libpod-conmon-00aaf05983ba338cf41e60bfe86b5fc8f6ddd6f81bed0fdab04063e1dee01963.scope: Deactivated successfully.
Jan 27 08:14:51 np0005597378 systemd[1]: Reloading.
Jan 27 08:14:51 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:14:51 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:14:52 np0005597378 systemd[1]: Reloading.
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: Saving service rgw.rgw spec with placement compute-0
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ukpmyo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ukpmyo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: Deploying daemon mds.cephfs.compute-0.ukpmyo on compute-0
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1150100252' entity='client.rgw.rgw.compute-0.dfjhvm' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 27 08:14:52 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 28 pg[8.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:14:52 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:14:52 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:14:52 np0005597378 ansible-async_wrapper.py[95050]: Invoked with j490264282565 30 /home/zuul/.ansible/tmp/ansible-tmp-1769519691.5939164-36809-58524106844590/AnsiballZ_command.py _
Jan 27 08:14:52 np0005597378 ansible-async_wrapper.py[95091]: Starting module and watcher
Jan 27 08:14:52 np0005597378 ansible-async_wrapper.py[95091]: Start watching 95092 (30)
Jan 27 08:14:52 np0005597378 ansible-async_wrapper.py[95092]: Start module (95092)
Jan 27 08:14:52 np0005597378 ansible-async_wrapper.py[95050]: Return async_wrapper task started.
Jan 27 08:14:52 np0005597378 python3[95093]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:52 np0005597378 systemd[1]: Starting Ceph mds.cephfs.compute-0.ukpmyo for 4d8fd694-f443-5fb1-b612-70034b2f3c6e...
Jan 27 08:14:52 np0005597378 podman[95094]: 2026-01-27 13:14:52.497937879 +0000 UTC m=+0.042234781 container create 976884308ccace233b6301f21e9ae7fe097f1d6dc5929a17b8840ff8c51976f0 (image=quay.io/ceph/ceph:v20, name=stoic_beaver, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:14:52 np0005597378 systemd[1]: Started libpod-conmon-976884308ccace233b6301f21e9ae7fe097f1d6dc5929a17b8840ff8c51976f0.scope.
Jan 27 08:14:52 np0005597378 podman[95094]: 2026-01-27 13:14:52.480754202 +0000 UTC m=+0.025051124 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:52 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e759312834e96878a68787b2dcd451516428400bee4530678a28f985ecaa3e8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e759312834e96878a68787b2dcd451516428400bee4530678a28f985ecaa3e8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:52 np0005597378 podman[95094]: 2026-01-27 13:14:52.600451208 +0000 UTC m=+0.144748140 container init 976884308ccace233b6301f21e9ae7fe097f1d6dc5929a17b8840ff8c51976f0 (image=quay.io/ceph/ceph:v20, name=stoic_beaver, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 08:14:52 np0005597378 podman[95094]: 2026-01-27 13:14:52.612762538 +0000 UTC m=+0.157059450 container start 976884308ccace233b6301f21e9ae7fe097f1d6dc5929a17b8840ff8c51976f0 (image=quay.io/ceph/ceph:v20, name=stoic_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:52 np0005597378 podman[95094]: 2026-01-27 13:14:52.616390782 +0000 UTC m=+0.160687704 container attach 976884308ccace233b6301f21e9ae7fe097f1d6dc5929a17b8840ff8c51976f0 (image=quay.io/ceph/ceph:v20, name=stoic_beaver, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:52 np0005597378 podman[95161]: 2026-01-27 13:14:52.685303636 +0000 UTC m=+0.038482682 container create dd1c53a2b9af2dbb75028fb530926bd6bb9c0b3fa49c95f0f46e11301417f441 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mds-cephfs-compute-0-ukpmyo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:14:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716e702c8b9c6f03edbf4be3530871b484d3b96d069c4aef468ed604ebf10835/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716e702c8b9c6f03edbf4be3530871b484d3b96d069c4aef468ed604ebf10835/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716e702c8b9c6f03edbf4be3530871b484d3b96d069c4aef468ed604ebf10835/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716e702c8b9c6f03edbf4be3530871b484d3b96d069c4aef468ed604ebf10835/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.ukpmyo supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:52 np0005597378 podman[95161]: 2026-01-27 13:14:52.751439139 +0000 UTC m=+0.104618225 container init dd1c53a2b9af2dbb75028fb530926bd6bb9c0b3fa49c95f0f46e11301417f441 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mds-cephfs-compute-0-ukpmyo, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:52 np0005597378 podman[95161]: 2026-01-27 13:14:52.757424574 +0000 UTC m=+0.110603630 container start dd1c53a2b9af2dbb75028fb530926bd6bb9c0b3fa49c95f0f46e11301417f441 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mds-cephfs-compute-0-ukpmyo, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:14:52 np0005597378 bash[95161]: dd1c53a2b9af2dbb75028fb530926bd6bb9c0b3fa49c95f0f46e11301417f441
Jan 27 08:14:52 np0005597378 podman[95161]: 2026-01-27 13:14:52.66892335 +0000 UTC m=+0.022102426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:52 np0005597378 systemd[1]: Started Ceph mds.cephfs.compute-0.ukpmyo for 4d8fd694-f443-5fb1-b612-70034b2f3c6e.
Jan 27 08:14:52 np0005597378 ceph-mgr[75385]: [progress INFO root] Writing back 4 completed events
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:52 np0005597378 ceph-mgr[75385]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: main not setting numa affinity
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: pidfile_write: ignore empty --pid-file
Jan 27 08:14:52 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mds-cephfs-compute-0-ukpmyo[95187]: starting mds.cephfs.compute-0.ukpmyo at 
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo Updating MDS map to version 2 from mon.0
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.100:6814/2623538353,v1:192.168.122.100:6815/2623538353] as mds.0
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ukpmyo assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:52 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev 3a217c9f-665b-4430-b117-5f367acd11b1 (Updating mds.cephfs deployment (+1 -> 1))
Jan 27 08:14:52 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 3a217c9f-665b-4430-b117-5f367acd11b1 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).mds e3 new map
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2026-01-27T13:14:52:862959+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0113#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-27T13:14:41.619547+0000#012modified#0112026-01-27T13:14:52.862952+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14253}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.ukpmyo{0:14253} state up:creating seq 1 addr [v2:192.168.122.100:6814/2623538353,v1:192.168.122.100:6815/2623538353] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo Updating MDS map to version 3 from mon.0
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.3 handle_mds_map I am now mds.0.3
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.cache creating system inode with ino:0x1
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.cache creating system inode with ino:0x100
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.cache creating system inode with ino:0x600
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.cache creating system inode with ino:0x601
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.cache creating system inode with ino:0x602
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.cache creating system inode with ino:0x603
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.cache creating system inode with ino:0x604
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.cache creating system inode with ino:0x605
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.cache creating system inode with ino:0x606
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.cache creating system inode with ino:0x607
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.cache creating system inode with ino:0x608
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.cache creating system inode with ino:0x609
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2623538353,v1:192.168.122.100:6815/2623538353] up:boot
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ukpmyo=up:creating}
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ukpmyo"} v 0)
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.ukpmyo"} : dispatch
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).mds e3 all = 0
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v67: 8 pgs: 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:14:52 np0005597378 ceph-mds[95200]: mds.0.3 creating_done
Jan 27 08:14:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ukpmyo is now active in filesystem cephfs as rank 0
Jan 27 08:14:53 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:14:53 np0005597378 stoic_beaver[95132]: 
Jan 27 08:14:53 np0005597378 stoic_beaver[95132]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 27 08:14:53 np0005597378 systemd[1]: libpod-976884308ccace233b6301f21e9ae7fe097f1d6dc5929a17b8840ff8c51976f0.scope: Deactivated successfully.
Jan 27 08:14:53 np0005597378 podman[95094]: 2026-01-27 13:14:53.042974598 +0000 UTC m=+0.587271500 container died 976884308ccace233b6301f21e9ae7fe097f1d6dc5929a17b8840ff8c51976f0 (image=quay.io/ceph/ceph:v20, name=stoic_beaver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 08:14:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7e759312834e96878a68787b2dcd451516428400bee4530678a28f985ecaa3e8-merged.mount: Deactivated successfully.
Jan 27 08:14:53 np0005597378 podman[95094]: 2026-01-27 13:14:53.09264051 +0000 UTC m=+0.636937452 container remove 976884308ccace233b6301f21e9ae7fe097f1d6dc5929a17b8840ff8c51976f0 (image=quay.io/ceph/ceph:v20, name=stoic_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:53 np0005597378 systemd[1]: libpod-conmon-976884308ccace233b6301f21e9ae7fe097f1d6dc5929a17b8840ff8c51976f0.scope: Deactivated successfully.
Jan 27 08:14:53 np0005597378 ansible-async_wrapper.py[95092]: Module complete (95092)
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1150100252' entity='client.rgw.rgw.compute-0.dfjhvm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1150100252' entity='client.rgw.rgw.compute-0.dfjhvm' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: daemon mds.cephfs.compute-0.ukpmyo assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: Cluster is now healthy
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: daemon mds.cephfs.compute-0.ukpmyo is now active in filesystem cephfs as rank 0
Jan 27 08:14:53 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 29 pg[8.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:14:53 np0005597378 podman[95972]: 2026-01-27 13:14:53.571547437 +0000 UTC m=+0.080990490 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:53 np0005597378 python3[95975]: ansible-ansible.legacy.async_status Invoked with jid=j490264282565.95050 mode=status _async_dir=/root/.ansible_async
Jan 27 08:14:53 np0005597378 podman[95972]: 2026-01-27 13:14:53.660807291 +0000 UTC m=+0.170250314 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:53 np0005597378 python3[96081]: ansible-ansible.legacy.async_status Invoked with jid=j490264282565.95050 mode=cleanup _async_dir=/root/.ansible_async
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).mds e4 new map
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-01-27T13:14:53:926941+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-27T13:14:41.619547+0000#012modified#0112026-01-27T13:14:53.926940+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14253}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14253 members: 14253#012[mds.cephfs.compute-0.ukpmyo{0:14253} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2623538353,v1:192.168.122.100:6815/2623538353] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 27 08:14:53 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo Updating MDS map to version 4 from mon.0
Jan 27 08:14:53 np0005597378 ceph-mds[95200]: mds.0.3 handle_mds_map I am now mds.0.3
Jan 27 08:14:53 np0005597378 ceph-mds[95200]: mds.0.3 handle_mds_map state change up:creating --> up:active
Jan 27 08:14:53 np0005597378 ceph-mds[95200]: mds.0.3 recovery_done -- successful recovery!
Jan 27 08:14:53 np0005597378 ceph-mds[95200]: mds.0.3 active_start
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2623538353,v1:192.168.122.100:6815/2623538353] up:active
Jan 27 08:14:53 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ukpmyo=up:active}
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/1150100252' entity='client.rgw.rgw.compute-0.dfjhvm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:14:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:14:54 np0005597378 python3[96237]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:54 np0005597378 podman[96252]: 2026-01-27 13:14:54.590056741 +0000 UTC m=+0.041583474 container create 8a204a99e5f421c6498c501c5a19215baa0da31d0a2a1b34551d4bbc40b1ebe2 (image=quay.io/ceph/ceph:v20, name=optimistic_hellman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:54 np0005597378 systemd[1]: Started libpod-conmon-8a204a99e5f421c6498c501c5a19215baa0da31d0a2a1b34551d4bbc40b1ebe2.scope.
Jan 27 08:14:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64f4896c76e4266d785184d083c8e91c9115293cdedd4b4ae70849cfa1a7ee9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64f4896c76e4266d785184d083c8e91c9115293cdedd4b4ae70849cfa1a7ee9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:54 np0005597378 podman[96252]: 2026-01-27 13:14:54.56699867 +0000 UTC m=+0.018525433 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:54 np0005597378 podman[96252]: 2026-01-27 13:14:54.668603896 +0000 UTC m=+0.120130649 container init 8a204a99e5f421c6498c501c5a19215baa0da31d0a2a1b34551d4bbc40b1ebe2 (image=quay.io/ceph/ceph:v20, name=optimistic_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 08:14:54 np0005597378 podman[96252]: 2026-01-27 13:14:54.6745418 +0000 UTC m=+0.126068533 container start 8a204a99e5f421c6498c501c5a19215baa0da31d0a2a1b34551d4bbc40b1ebe2 (image=quay.io/ceph/ceph:v20, name=optimistic_hellman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:54 np0005597378 podman[96252]: 2026-01-27 13:14:54.677409295 +0000 UTC m=+0.128936048 container attach 8a204a99e5f421c6498c501c5a19215baa0da31d0a2a1b34551d4bbc40b1ebe2 (image=quay.io/ceph/ceph:v20, name=optimistic_hellman, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:54 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 30 pg[9.0( empty local-lis/les=0/0 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:14:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v70: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s wr, 7 op/s
Jan 27 08:14:54 np0005597378 podman[96336]: 2026-01-27 13:14:54.939555139 +0000 UTC m=+0.058947786 container create abe8acbcc5209aba41cc9fefff3fbf11aecb8732ed10a087e9ff9b8079fbf006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:54 np0005597378 systemd[1]: Started libpod-conmon-abe8acbcc5209aba41cc9fefff3fbf11aecb8732ed10a087e9ff9b8079fbf006.scope.
Jan 27 08:14:55 np0005597378 podman[96336]: 2026-01-27 13:14:54.909035664 +0000 UTC m=+0.028428331 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:55 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:55 np0005597378 podman[96336]: 2026-01-27 13:14:55.041138493 +0000 UTC m=+0.160531180 container init abe8acbcc5209aba41cc9fefff3fbf11aecb8732ed10a087e9ff9b8079fbf006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Jan 27 08:14:55 np0005597378 podman[96336]: 2026-01-27 13:14:55.052000137 +0000 UTC m=+0.171392794 container start abe8acbcc5209aba41cc9fefff3fbf11aecb8732ed10a087e9ff9b8079fbf006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 08:14:55 np0005597378 sweet_mclean[96351]: 167 167
Jan 27 08:14:55 np0005597378 systemd[1]: libpod-abe8acbcc5209aba41cc9fefff3fbf11aecb8732ed10a087e9ff9b8079fbf006.scope: Deactivated successfully.
Jan 27 08:14:55 np0005597378 podman[96336]: 2026-01-27 13:14:55.059694526 +0000 UTC m=+0.179087183 container attach abe8acbcc5209aba41cc9fefff3fbf11aecb8732ed10a087e9ff9b8079fbf006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 08:14:55 np0005597378 podman[96336]: 2026-01-27 13:14:55.060204609 +0000 UTC m=+0.179597266 container died abe8acbcc5209aba41cc9fefff3fbf11aecb8732ed10a087e9ff9b8079fbf006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 08:14:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-95bb00b0a40a7e7272cb379458e5ce7176bdf9ffa8c1401b88ecc69bb904f31d-merged.mount: Deactivated successfully.
Jan 27 08:14:55 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:14:55 np0005597378 optimistic_hellman[96301]: 
Jan 27 08:14:55 np0005597378 optimistic_hellman[96301]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 27 08:14:55 np0005597378 podman[96336]: 2026-01-27 13:14:55.111118135 +0000 UTC m=+0.230510792 container remove abe8acbcc5209aba41cc9fefff3fbf11aecb8732ed10a087e9ff9b8079fbf006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 08:14:55 np0005597378 systemd[1]: libpod-conmon-abe8acbcc5209aba41cc9fefff3fbf11aecb8732ed10a087e9ff9b8079fbf006.scope: Deactivated successfully.
Jan 27 08:14:55 np0005597378 systemd[1]: libpod-8a204a99e5f421c6498c501c5a19215baa0da31d0a2a1b34551d4bbc40b1ebe2.scope: Deactivated successfully.
Jan 27 08:14:55 np0005597378 podman[96252]: 2026-01-27 13:14:55.128189769 +0000 UTC m=+0.579716512 container died 8a204a99e5f421c6498c501c5a19215baa0da31d0a2a1b34551d4bbc40b1ebe2 (image=quay.io/ceph/ceph:v20, name=optimistic_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 08:14:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a64f4896c76e4266d785184d083c8e91c9115293cdedd4b4ae70849cfa1a7ee9-merged.mount: Deactivated successfully.
Jan 27 08:14:55 np0005597378 podman[96252]: 2026-01-27 13:14:55.176467076 +0000 UTC m=+0.627993819 container remove 8a204a99e5f421c6498c501c5a19215baa0da31d0a2a1b34551d4bbc40b1ebe2 (image=quay.io/ceph/ceph:v20, name=optimistic_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:55 np0005597378 systemd[1]: libpod-conmon-8a204a99e5f421c6498c501c5a19215baa0da31d0a2a1b34551d4bbc40b1ebe2.scope: Deactivated successfully.
Jan 27 08:14:55 np0005597378 podman[96390]: 2026-01-27 13:14:55.253631295 +0000 UTC m=+0.029481468 container create 8d86cf5a10778027b804490e39b59a3084d8a2f85a1297a12ddebb93e3196bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_davinci, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:55 np0005597378 systemd[1]: Started libpod-conmon-8d86cf5a10778027b804490e39b59a3084d8a2f85a1297a12ddebb93e3196bc7.scope.
Jan 27 08:14:55 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8d8efbd452547105c21251cddca3db10901ec06b9943d48c268f28e764c71f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8d8efbd452547105c21251cddca3db10901ec06b9943d48c268f28e764c71f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8d8efbd452547105c21251cddca3db10901ec06b9943d48c268f28e764c71f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8d8efbd452547105c21251cddca3db10901ec06b9943d48c268f28e764c71f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8d8efbd452547105c21251cddca3db10901ec06b9943d48c268f28e764c71f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:55 np0005597378 podman[96390]: 2026-01-27 13:14:55.326359859 +0000 UTC m=+0.102210032 container init 8d86cf5a10778027b804490e39b59a3084d8a2f85a1297a12ddebb93e3196bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_davinci, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 08:14:55 np0005597378 podman[96390]: 2026-01-27 13:14:55.333966446 +0000 UTC m=+0.109816599 container start 8d86cf5a10778027b804490e39b59a3084d8a2f85a1297a12ddebb93e3196bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_davinci, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 08:14:55 np0005597378 podman[96390]: 2026-01-27 13:14:55.337227861 +0000 UTC m=+0.113078014 container attach 8d86cf5a10778027b804490e39b59a3084d8a2f85a1297a12ddebb93e3196bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_davinci, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:55 np0005597378 podman[96390]: 2026-01-27 13:14:55.240566804 +0000 UTC m=+0.016416997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 27 08:14:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 27 08:14:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 27 08:14:55 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 27 08:14:55 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 31 pg[9.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:14:55 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 27 08:14:55 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:55 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:55 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:14:55 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:55 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:14:55 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 27 08:14:55 np0005597378 funny_davinci[96406]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:14:55 np0005597378 funny_davinci[96406]: --> All data devices are unavailable
Jan 27 08:14:55 np0005597378 systemd[1]: libpod-8d86cf5a10778027b804490e39b59a3084d8a2f85a1297a12ddebb93e3196bc7.scope: Deactivated successfully.
Jan 27 08:14:55 np0005597378 podman[96390]: 2026-01-27 13:14:55.873252725 +0000 UTC m=+0.649102968 container died 8d86cf5a10778027b804490e39b59a3084d8a2f85a1297a12ddebb93e3196bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 08:14:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0e8d8efbd452547105c21251cddca3db10901ec06b9943d48c268f28e764c71f-merged.mount: Deactivated successfully.
Jan 27 08:14:55 np0005597378 podman[96390]: 2026-01-27 13:14:55.931871101 +0000 UTC m=+0.707721254 container remove 8d86cf5a10778027b804490e39b59a3084d8a2f85a1297a12ddebb93e3196bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_davinci, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 08:14:55 np0005597378 systemd[1]: libpod-conmon-8d86cf5a10778027b804490e39b59a3084d8a2f85a1297a12ddebb93e3196bc7.scope: Deactivated successfully.
Jan 27 08:14:56 np0005597378 python3[96453]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:56 np0005597378 podman[96489]: 2026-01-27 13:14:56.075117359 +0000 UTC m=+0.041745687 container create a2bb2f7cc2e49a4eb53863a7e8fb49b8b35821dde5484d1e140a9dee11f1902b (image=quay.io/ceph/ceph:v20, name=amazing_almeida, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:56 np0005597378 systemd[1]: Started libpod-conmon-a2bb2f7cc2e49a4eb53863a7e8fb49b8b35821dde5484d1e140a9dee11f1902b.scope.
Jan 27 08:14:56 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbfe55d58f5d2493b14e4f166a56d577b1636cecb5517359447dbcddbb70e88e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbfe55d58f5d2493b14e4f166a56d577b1636cecb5517359447dbcddbb70e88e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:56 np0005597378 podman[96489]: 2026-01-27 13:14:56.056126235 +0000 UTC m=+0.022754593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:56 np0005597378 podman[96489]: 2026-01-27 13:14:56.159183818 +0000 UTC m=+0.125812166 container init a2bb2f7cc2e49a4eb53863a7e8fb49b8b35821dde5484d1e140a9dee11f1902b (image=quay.io/ceph/ceph:v20, name=amazing_almeida, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 08:14:56 np0005597378 podman[96489]: 2026-01-27 13:14:56.170454911 +0000 UTC m=+0.137083239 container start a2bb2f7cc2e49a4eb53863a7e8fb49b8b35821dde5484d1e140a9dee11f1902b (image=quay.io/ceph/ceph:v20, name=amazing_almeida, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:56 np0005597378 podman[96489]: 2026-01-27 13:14:56.173526142 +0000 UTC m=+0.140154490 container attach a2bb2f7cc2e49a4eb53863a7e8fb49b8b35821dde5484d1e140a9dee11f1902b (image=quay.io/ceph/ceph:v20, name=amazing_almeida, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:14:56 np0005597378 podman[96566]: 2026-01-27 13:14:56.391113786 +0000 UTC m=+0.041383509 container create 849184ec5cb9dd7f0870546c2f03b950e1f06e1327cdd66a715e89ec21315a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:14:56 np0005597378 systemd[1]: Started libpod-conmon-849184ec5cb9dd7f0870546c2f03b950e1f06e1327cdd66a715e89ec21315a70.scope.
Jan 27 08:14:56 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 27 08:14:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 27 08:14:56 np0005597378 podman[96566]: 2026-01-27 13:14:56.458641083 +0000 UTC m=+0.108910836 container init 849184ec5cb9dd7f0870546c2f03b950e1f06e1327cdd66a715e89ec21315a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_gould, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 27 08:14:56 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 27 08:14:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 27 08:14:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 27 08:14:56 np0005597378 podman[96566]: 2026-01-27 13:14:56.46581516 +0000 UTC m=+0.116084903 container start 849184ec5cb9dd7f0870546c2f03b950e1f06e1327cdd66a715e89ec21315a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_gould, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:56 np0005597378 adoring_gould[96582]: 167 167
Jan 27 08:14:56 np0005597378 podman[96566]: 2026-01-27 13:14:56.469681751 +0000 UTC m=+0.119951574 container attach 849184ec5cb9dd7f0870546c2f03b950e1f06e1327cdd66a715e89ec21315a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_gould, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:56 np0005597378 podman[96566]: 2026-01-27 13:14:56.374990525 +0000 UTC m=+0.025260278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:56 np0005597378 systemd[1]: libpod-849184ec5cb9dd7f0870546c2f03b950e1f06e1327cdd66a715e89ec21315a70.scope: Deactivated successfully.
Jan 27 08:14:56 np0005597378 podman[96566]: 2026-01-27 13:14:56.485300447 +0000 UTC m=+0.135570210 container died 849184ec5cb9dd7f0870546c2f03b950e1f06e1327cdd66a715e89ec21315a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_gould, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 08:14:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay-86a17f6bad2b1934b2f2a6980933347195f305ec318d104d5316796da3271d74-merged.mount: Deactivated successfully.
Jan 27 08:14:56 np0005597378 podman[96566]: 2026-01-27 13:14:56.538118792 +0000 UTC m=+0.188388555 container remove 849184ec5cb9dd7f0870546c2f03b950e1f06e1327cdd66a715e89ec21315a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 08:14:56 np0005597378 systemd[1]: libpod-conmon-849184ec5cb9dd7f0870546c2f03b950e1f06e1327cdd66a715e89ec21315a70.scope: Deactivated successfully.
Jan 27 08:14:56 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:14:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 08:14:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 08:14:56 np0005597378 amazing_almeida[96531]: 
Jan 27 08:14:56 np0005597378 amazing_almeida[96531]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Jan 27 08:14:56 np0005597378 systemd[1]: libpod-a2bb2f7cc2e49a4eb53863a7e8fb49b8b35821dde5484d1e140a9dee11f1902b.scope: Deactivated successfully.
Jan 27 08:14:56 np0005597378 conmon[96531]: conmon a2bb2f7cc2e49a4eb538 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a2bb2f7cc2e49a4eb53863a7e8fb49b8b35821dde5484d1e140a9dee11f1902b.scope/container/memory.events
Jan 27 08:14:56 np0005597378 podman[96489]: 2026-01-27 13:14:56.630304952 +0000 UTC m=+0.596933300 container died a2bb2f7cc2e49a4eb53863a7e8fb49b8b35821dde5484d1e140a9dee11f1902b (image=quay.io/ceph/ceph:v20, name=amazing_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cbfe55d58f5d2493b14e4f166a56d577b1636cecb5517359447dbcddbb70e88e-merged.mount: Deactivated successfully.
Jan 27 08:14:56 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 32 pg[10.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [2] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:14:56 np0005597378 podman[96489]: 2026-01-27 13:14:56.676218998 +0000 UTC m=+0.642847326 container remove a2bb2f7cc2e49a4eb53863a7e8fb49b8b35821dde5484d1e140a9dee11f1902b (image=quay.io/ceph/ceph:v20, name=amazing_almeida, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:14:56 np0005597378 systemd[1]: libpod-conmon-a2bb2f7cc2e49a4eb53863a7e8fb49b8b35821dde5484d1e140a9dee11f1902b.scope: Deactivated successfully.
Jan 27 08:14:56 np0005597378 podman[96617]: 2026-01-27 13:14:56.720758726 +0000 UTC m=+0.052233140 container create 7f29575a876d8b44c7f701a5603b78dc15500678d6b787fceba3d1fe0d8eb182 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 08:14:56 np0005597378 systemd[1]: Started libpod-conmon-7f29575a876d8b44c7f701a5603b78dc15500678d6b787fceba3d1fe0d8eb182.scope.
Jan 27 08:14:56 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc9f485735a567ddff18a2c82363b9b9e6b4b06c6e738af827a38261a3f89ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc9f485735a567ddff18a2c82363b9b9e6b4b06c6e738af827a38261a3f89ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc9f485735a567ddff18a2c82363b9b9e6b4b06c6e738af827a38261a3f89ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc9f485735a567ddff18a2c82363b9b9e6b4b06c6e738af827a38261a3f89ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:56 np0005597378 podman[96617]: 2026-01-27 13:14:56.787573186 +0000 UTC m=+0.119047610 container init 7f29575a876d8b44c7f701a5603b78dc15500678d6b787fceba3d1fe0d8eb182 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:56 np0005597378 podman[96617]: 2026-01-27 13:14:56.697115122 +0000 UTC m=+0.028589586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:56 np0005597378 podman[96617]: 2026-01-27 13:14:56.794180298 +0000 UTC m=+0.125654712 container start 7f29575a876d8b44c7f701a5603b78dc15500678d6b787fceba3d1fe0d8eb182 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_shannon, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:56 np0005597378 podman[96617]: 2026-01-27 13:14:56.797228817 +0000 UTC m=+0.128703231 container attach 7f29575a876d8b44c7f701a5603b78dc15500678d6b787fceba3d1fe0d8eb182 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_shannon, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 27 08:14:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v73: 10 pgs: 2 unknown, 1 creating+peering, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]: {
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:    "0": [
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:        {
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "devices": [
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "/dev/loop3"
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            ],
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_name": "ceph_lv0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_size": "21470642176",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "name": "ceph_lv0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "tags": {
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.cluster_name": "ceph",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.crush_device_class": "",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.encrypted": "0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.objectstore": "bluestore",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.osd_id": "0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.type": "block",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.vdo": "0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.with_tpm": "0"
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            },
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "type": "block",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "vg_name": "ceph_vg0"
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:        }
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:    ],
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:    "1": [
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:        {
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "devices": [
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "/dev/loop4"
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            ],
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_name": "ceph_lv1",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_size": "21470642176",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "name": "ceph_lv1",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "tags": {
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.cluster_name": "ceph",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.crush_device_class": "",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.encrypted": "0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.objectstore": "bluestore",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.osd_id": "1",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.type": "block",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.vdo": "0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.with_tpm": "0"
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            },
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "type": "block",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "vg_name": "ceph_vg1"
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:        }
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:    ],
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:    "2": [
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:        {
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "devices": [
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "/dev/loop5"
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            ],
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_name": "ceph_lv2",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_size": "21470642176",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "name": "ceph_lv2",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "tags": {
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.cluster_name": "ceph",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.crush_device_class": "",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.encrypted": "0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.objectstore": "bluestore",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.osd_id": "2",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.type": "block",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.vdo": "0",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:                "ceph.with_tpm": "0"
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            },
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "type": "block",
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:            "vg_name": "ceph_vg2"
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:        }
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]:    ]
Jan 27 08:14:57 np0005597378 lucid_shannon[96637]: }
Jan 27 08:14:57 np0005597378 systemd[1]: libpod-7f29575a876d8b44c7f701a5603b78dc15500678d6b787fceba3d1fe0d8eb182.scope: Deactivated successfully.
Jan 27 08:14:57 np0005597378 conmon[96637]: conmon 7f29575a876d8b44c7f7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f29575a876d8b44c7f701a5603b78dc15500678d6b787fceba3d1fe0d8eb182.scope/container/memory.events
Jan 27 08:14:57 np0005597378 podman[96617]: 2026-01-27 13:14:57.095212175 +0000 UTC m=+0.426686629 container died 7f29575a876d8b44c7f701a5603b78dc15500678d6b787fceba3d1fe0d8eb182 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_shannon, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:14:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5cc9f485735a567ddff18a2c82363b9b9e6b4b06c6e738af827a38261a3f89ee-merged.mount: Deactivated successfully.
Jan 27 08:14:57 np0005597378 podman[96617]: 2026-01-27 13:14:57.144834246 +0000 UTC m=+0.476308690 container remove 7f29575a876d8b44c7f701a5603b78dc15500678d6b787fceba3d1fe0d8eb182 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_shannon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:14:57 np0005597378 systemd[1]: libpod-conmon-7f29575a876d8b44c7f701a5603b78dc15500678d6b787fceba3d1fe0d8eb182.scope: Deactivated successfully.
Jan 27 08:14:57 np0005597378 ansible-async_wrapper.py[95091]: Done in kid B.
Jan 27 08:14:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 27 08:14:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 27 08:14:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 27 08:14:57 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 33 pg[10.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [2] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:14:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 27 08:14:57 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 27 08:14:57 np0005597378 podman[96751]: 2026-01-27 13:14:57.640121859 +0000 UTC m=+0.040355571 container create af9244108aa93d89d3f920603cdfd741e7c12c4495090badb6f7be5a5e7c03c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kalam, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:14:57 np0005597378 systemd[1]: Started libpod-conmon-af9244108aa93d89d3f920603cdfd741e7c12c4495090badb6f7be5a5e7c03c2.scope.
Jan 27 08:14:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:57 np0005597378 python3[96742]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:57 np0005597378 podman[96751]: 2026-01-27 13:14:57.712618347 +0000 UTC m=+0.112852059 container init af9244108aa93d89d3f920603cdfd741e7c12c4495090badb6f7be5a5e7c03c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:14:57 np0005597378 podman[96751]: 2026-01-27 13:14:57.620080248 +0000 UTC m=+0.020314000 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:57 np0005597378 podman[96751]: 2026-01-27 13:14:57.719528207 +0000 UTC m=+0.119761929 container start af9244108aa93d89d3f920603cdfd741e7c12c4495090badb6f7be5a5e7c03c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:14:57 np0005597378 podman[96751]: 2026-01-27 13:14:57.723053768 +0000 UTC m=+0.123287490 container attach af9244108aa93d89d3f920603cdfd741e7c12c4495090badb6f7be5a5e7c03c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kalam, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:14:57 np0005597378 sweet_kalam[96767]: 167 167
Jan 27 08:14:57 np0005597378 systemd[1]: libpod-af9244108aa93d89d3f920603cdfd741e7c12c4495090badb6f7be5a5e7c03c2.scope: Deactivated successfully.
Jan 27 08:14:57 np0005597378 podman[96751]: 2026-01-27 13:14:57.724570407 +0000 UTC m=+0.124804109 container died af9244108aa93d89d3f920603cdfd741e7c12c4495090badb6f7be5a5e7c03c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 08:14:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-71686486eb26287fd7a55e8755b2e6c4b40c83e7714d3a71cd6db27391fa6aa0-merged.mount: Deactivated successfully.
Jan 27 08:14:57 np0005597378 podman[96751]: 2026-01-27 13:14:57.761603422 +0000 UTC m=+0.161837144 container remove af9244108aa93d89d3f920603cdfd741e7c12c4495090badb6f7be5a5e7c03c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 08:14:57 np0005597378 podman[96770]: 2026-01-27 13:14:57.772422224 +0000 UTC m=+0.056210335 container create dadd41f9ce7a99b94954c8a88d698bb2d86f5ef419a8d14cc6ac7b80954c9b00 (image=quay.io/ceph/ceph:v20, name=relaxed_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:14:57 np0005597378 systemd[1]: libpod-conmon-af9244108aa93d89d3f920603cdfd741e7c12c4495090badb6f7be5a5e7c03c2.scope: Deactivated successfully.
Jan 27 08:14:57 np0005597378 ceph-mgr[75385]: [progress INFO root] Writing back 5 completed events
Jan 27 08:14:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 27 08:14:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:57 np0005597378 systemd[1]: Started libpod-conmon-dadd41f9ce7a99b94954c8a88d698bb2d86f5ef419a8d14cc6ac7b80954c9b00.scope.
Jan 27 08:14:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419402dfc88558c0f7ba4e1daefb49e87e75fd25b899ac808c6f77e4d85b8022/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419402dfc88558c0f7ba4e1daefb49e87e75fd25b899ac808c6f77e4d85b8022/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:57 np0005597378 podman[96770]: 2026-01-27 13:14:57.84066413 +0000 UTC m=+0.124452231 container init dadd41f9ce7a99b94954c8a88d698bb2d86f5ef419a8d14cc6ac7b80954c9b00 (image=quay.io/ceph/ceph:v20, name=relaxed_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:14:57 np0005597378 podman[96770]: 2026-01-27 13:14:57.748129291 +0000 UTC m=+0.031917412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:57 np0005597378 podman[96770]: 2026-01-27 13:14:57.848141645 +0000 UTC m=+0.131929756 container start dadd41f9ce7a99b94954c8a88d698bb2d86f5ef419a8d14cc6ac7b80954c9b00 (image=quay.io/ceph/ceph:v20, name=relaxed_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 08:14:57 np0005597378 podman[96770]: 2026-01-27 13:14:57.851920143 +0000 UTC m=+0.135708294 container attach dadd41f9ce7a99b94954c8a88d698bb2d86f5ef419a8d14cc6ac7b80954c9b00 (image=quay.io/ceph/ceph:v20, name=relaxed_mayer, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:14:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:14:57 np0005597378 ceph-mds[95200]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 27 08:14:57 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mds-cephfs-compute-0-ukpmyo[95187]: 2026-01-27T13:14:57.902+0000 7efebd806640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 27 08:14:57 np0005597378 podman[96811]: 2026-01-27 13:14:57.924234095 +0000 UTC m=+0.037750873 container create e88c78ba69e3dc99f4e9653422a5a42d6e1c72b0a74cae7f055fc41e2a94044c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williamson, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:57 np0005597378 systemd[1]: Started libpod-conmon-e88c78ba69e3dc99f4e9653422a5a42d6e1c72b0a74cae7f055fc41e2a94044c.scope.
Jan 27 08:14:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d06aa3e38fe24744621576548d2dc7cfcf04c696f3366d1b27c66c50db70c02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d06aa3e38fe24744621576548d2dc7cfcf04c696f3366d1b27c66c50db70c02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d06aa3e38fe24744621576548d2dc7cfcf04c696f3366d1b27c66c50db70c02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d06aa3e38fe24744621576548d2dc7cfcf04c696f3366d1b27c66c50db70c02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:58 np0005597378 podman[96811]: 2026-01-27 13:14:57.906820433 +0000 UTC m=+0.020337221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:14:58 np0005597378 podman[96811]: 2026-01-27 13:14:58.066608232 +0000 UTC m=+0.180125000 container init e88c78ba69e3dc99f4e9653422a5a42d6e1c72b0a74cae7f055fc41e2a94044c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:14:58 np0005597378 podman[96811]: 2026-01-27 13:14:58.072179247 +0000 UTC m=+0.185696015 container start e88c78ba69e3dc99f4e9653422a5a42d6e1c72b0a74cae7f055fc41e2a94044c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williamson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:14:58 np0005597378 podman[96811]: 2026-01-27 13:14:58.076876649 +0000 UTC m=+0.190393417 container attach e88c78ba69e3dc99f4e9653422a5a42d6e1c72b0a74cae7f055fc41e2a94044c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 08:14:58 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 27 08:14:58 np0005597378 relaxed_mayer[96802]: 
Jan 27 08:14:58 np0005597378 relaxed_mayer[96802]: [{"container_id": "c82a6acd43c4", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.24%", "created": "2026-01-27T13:13:39.050544Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-27T13:13:39.122791Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-27T13:14:54.341155Z", "memory_usage": 7782531, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-27T13:13:38.946259Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e@crash.compute-0", "version": "20.2.0"}, {"container_id": "dd1c53a2b9af", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "6.31%", "created": "2026-01-27T13:14:52.770394Z", "daemon_id": "cephfs.compute-0.ukpmyo", "daemon_name": "mds.cephfs.compute-0.ukpmyo", "daemon_type": "mds", "events": ["2026-01-27T13:14:52.849158Z daemon:mds.cephfs.compute-0.ukpmyo [INFO] \"Deployed mds.cephfs.compute-0.ukpmyo on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-27T13:14:54.341567Z", "memory_usage": 15571353, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-01-27T13:14:52.672834Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e@mds.cephfs.compute-0.ukpmyo", "version": "20.2.0"}, {"container_id": "01727bd4ff0b", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "17.19%", "created": "2026-01-27T13:12:59.639722Z", "daemon_id": "compute-0.uujfpe", "daemon_name": "mgr.compute-0.uujfpe", "daemon_type": "mgr", "events": ["2026-01-27T13:13:43.784925Z daemon:mgr.compute-0.uujfpe [INFO] \"Reconfigured mgr.compute-0.uujfpe on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-27T13:14:54.341080Z", "memory_usage": 547146956, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-27T13:12:59.231165Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e@mgr.compute-0.uujfpe", "version": "20.2.0"}, {"container_id": "da35e91e4dd6", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.60%", "created": "2026-01-27T13:12:53.345584Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-27T13:13:43.212578Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-27T13:14:54.340976Z", "memory_request": 2147483648, "memory_usage": 39856373, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-27T13:12:57.369892Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e@mon.compute-0", "version": "20.2.0"}, {"container_id": "1159b902fe1f", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.58%", "created": "2026-01-27T13:14:02.663126Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-27T13:14:02.734838Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-27T13:14:54.341230Z", "memory_request": 4294967296, "memory_usage": 56696504, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-27T13:14:02.580298Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e@osd.0", "version": "20.2.0"}, {"container_id": "7225f8e2277a", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.84%", "created": "2026-01-27T13:14:07.590250Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-27T13:14:08.000051Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-27T13:14:54.341300Z", "memory_request": 4294967296, "memory_usage": 58038681, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-27T13:14:07.186470Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e@osd.1", "version": "20.2.0"}, {"container_id": "afdea70c1a44", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.85%", "created": "2026-01-27T13:14:13.146391Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-27T13:14:14.068981Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-27T13:14:54.341398Z", "memory_request": 4294967296, "memory_usage": 56088330, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-27T13:14:12.124993Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e@osd.2", "version": "20.2.0"}, {"container_id": "cb472c3daf63", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac68
Jan 27 08:14:58 np0005597378 systemd[1]: libpod-dadd41f9ce7a99b94954c8a88d698bb2d86f5ef419a8d14cc6ac7b80954c9b00.scope: Deactivated successfully.
Jan 27 08:14:58 np0005597378 conmon[96802]: conmon dadd41f9ce7a99b94954 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dadd41f9ce7a99b94954c8a88d698bb2d86f5ef419a8d14cc6ac7b80954c9b00.scope/container/memory.events
Jan 27 08:14:58 np0005597378 podman[96770]: 2026-01-27 13:14:58.272757448 +0000 UTC m=+0.556545559 container died dadd41f9ce7a99b94954c8a88d698bb2d86f5ef419a8d14cc6ac7b80954c9b00 (image=quay.io/ceph/ceph:v20, name=relaxed_mayer, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:58 np0005597378 systemd[1]: var-lib-containers-storage-overlay-419402dfc88558c0f7ba4e1daefb49e87e75fd25b899ac808c6f77e4d85b8022-merged.mount: Deactivated successfully.
Jan 27 08:14:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 27 08:14:58 np0005597378 podman[96770]: 2026-01-27 13:14:58.476509902 +0000 UTC m=+0.760298003 container remove dadd41f9ce7a99b94954c8a88d698bb2d86f5ef419a8d14cc6ac7b80954c9b00 (image=quay.io/ceph/ceph:v20, name=relaxed_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 27 08:14:58 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 27 08:14:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 27 08:14:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 27 08:14:58 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 27 08:14:58 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:58 np0005597378 systemd[1]: libpod-conmon-dadd41f9ce7a99b94954c8a88d698bb2d86f5ef419a8d14cc6ac7b80954c9b00.scope: Deactivated successfully.
Jan 27 08:14:58 np0005597378 rsyslogd[1006]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "c82a6acd43c4", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 08:14:58 np0005597378 lvm[96940]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:14:58 np0005597378 lvm[96940]: VG ceph_vg1 finished
Jan 27 08:14:58 np0005597378 lvm[96939]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:14:58 np0005597378 lvm[96939]: VG ceph_vg0 finished
Jan 27 08:14:58 np0005597378 lvm[96942]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:14:58 np0005597378 lvm[96942]: VG ceph_vg2 finished
Jan 27 08:14:58 np0005597378 nostalgic_williamson[96847]: {}
Jan 27 08:14:58 np0005597378 systemd[1]: libpod-e88c78ba69e3dc99f4e9653422a5a42d6e1c72b0a74cae7f055fc41e2a94044c.scope: Deactivated successfully.
Jan 27 08:14:58 np0005597378 systemd[1]: libpod-e88c78ba69e3dc99f4e9653422a5a42d6e1c72b0a74cae7f055fc41e2a94044c.scope: Consumed 1.277s CPU time.
Jan 27 08:14:58 np0005597378 podman[96811]: 2026-01-27 13:14:58.903064716 +0000 UTC m=+1.016581484 container died e88c78ba69e3dc99f4e9653422a5a42d6e1c72b0a74cae7f055fc41e2a94044c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:14:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v76: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.2 KiB/s wr, 4 op/s
Jan 27 08:14:58 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2d06aa3e38fe24744621576548d2dc7cfcf04c696f3366d1b27c66c50db70c02-merged.mount: Deactivated successfully.
Jan 27 08:14:58 np0005597378 podman[96811]: 2026-01-27 13:14:58.961266022 +0000 UTC m=+1.074782830 container remove e88c78ba69e3dc99f4e9653422a5a42d6e1c72b0a74cae7f055fc41e2a94044c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williamson, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:14:58 np0005597378 systemd[1]: libpod-conmon-e88c78ba69e3dc99f4e9653422a5a42d6e1c72b0a74cae7f055fc41e2a94044c.scope: Deactivated successfully.
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:59 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 34 pg[11.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:14:59 np0005597378 python3[97057]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:14:59 np0005597378 podman[97060]: 2026-01-27 13:14:59.47482643 +0000 UTC m=+0.051708436 container create a515712d3adb496055409d26a2a56ef1f59d7c4d2ecb5807bfdba707d167832e (image=quay.io/ceph/ceph:v20, name=ecstatic_faraday, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 27 08:14:59 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 35 pg[11.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 27 08:14:59 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 27 08:14:59 np0005597378 systemd[1]: Started libpod-conmon-a515712d3adb496055409d26a2a56ef1f59d7c4d2ecb5807bfdba707d167832e.scope.
Jan 27 08:14:59 np0005597378 podman[97060]: 2026-01-27 13:14:59.451077722 +0000 UTC m=+0.027959768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:14:59 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:14:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe4cc02288155d23cbd43b95767d409ae6f010a2a7371e4062fea762008ab89/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe4cc02288155d23cbd43b95767d409ae6f010a2a7371e4062fea762008ab89/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:14:59 np0005597378 podman[97060]: 2026-01-27 13:14:59.572183035 +0000 UTC m=+0.149065091 container init a515712d3adb496055409d26a2a56ef1f59d7c4d2ecb5807bfdba707d167832e (image=quay.io/ceph/ceph:v20, name=ecstatic_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:14:59 np0005597378 podman[97060]: 2026-01-27 13:14:59.582184245 +0000 UTC m=+0.159066251 container start a515712d3adb496055409d26a2a56ef1f59d7c4d2ecb5807bfdba707d167832e (image=quay.io/ceph/ceph:v20, name=ecstatic_faraday, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:14:59 np0005597378 podman[97060]: 2026-01-27 13:14:59.585288236 +0000 UTC m=+0.162170262 container attach a515712d3adb496055409d26a2a56ef1f59d7c4d2ecb5807bfdba707d167832e (image=quay.io/ceph/ceph:v20, name=ecstatic_faraday, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:14:59 np0005597378 podman[97123]: 2026-01-27 13:14:59.7317966 +0000 UTC m=+0.063529645 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:14:59 np0005597378 podman[97123]: 2026-01-27 13:14:59.824873142 +0000 UTC m=+0.156606207 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332721033' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 27 08:15:00 np0005597378 ecstatic_faraday[97089]: 
Jan 27 08:15:00 np0005597378 ecstatic_faraday[97089]: {"fsid":"4d8fd694-f443-5fb1-b612-70034b2f3c6e","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":122,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":35,"num_osds":3,"num_up_osds":3,"osd_up_since":1769519663,"num_in_osds":3,"osd_in_since":1769519634,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":10},{"state_name":"unknown","count":1}],"num_pgs":11,"num_pools":11,"num_objects":39,"data_bytes":463572,"bytes_used":83984384,"bytes_avail":64327942144,"bytes_total":64411926528,"unknown_pgs_ratio":0.090909093618392944,"read_bytes_sec":1535,"write_bytes_sec":2303,"read_op_per_sec":1,"write_op_per_sec":3},"fsmap":{"epoch":4,"btime":"2026-01-27T13:14:53:926941+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.ukpmyo","status":"up:active","gid":14253}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-27T13:14:18.922578+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"895557fa-2d2a-43cb-b382-90e0a3ecc699":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 27 08:15:00 np0005597378 systemd[1]: libpod-a515712d3adb496055409d26a2a56ef1f59d7c4d2ecb5807bfdba707d167832e.scope: Deactivated successfully.
Jan 27 08:15:00 np0005597378 podman[97060]: 2026-01-27 13:15:00.090184189 +0000 UTC m=+0.667066195 container died a515712d3adb496055409d26a2a56ef1f59d7c4d2ecb5807bfdba707d167832e (image=quay.io/ceph/ceph:v20, name=ecstatic_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:15:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5fe4cc02288155d23cbd43b95767d409ae6f010a2a7371e4062fea762008ab89-merged.mount: Deactivated successfully.
Jan 27 08:15:00 np0005597378 podman[97060]: 2026-01-27 13:15:00.134251006 +0000 UTC m=+0.711133012 container remove a515712d3adb496055409d26a2a56ef1f59d7c4d2ecb5807bfdba707d167832e (image=quay.io/ceph/ceph:v20, name=ecstatic_faraday, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:15:00 np0005597378 systemd[1]: libpod-conmon-a515712d3adb496055409d26a2a56ef1f59d7c4d2ecb5807bfdba707d167832e.scope: Deactivated successfully.
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: from='client.? 192.168.122.100:0/2201669510' entity='client.rgw.rgw.compute-0.dfjhvm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:15:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:15:00 np0005597378 radosgw[94713]: v1 topic migration: starting v1 topic migration..
Jan 27 08:15:00 np0005597378 radosgw[94713]: v1 topic migration: finished v1 topic migration
Jan 27 08:15:00 np0005597378 radosgw[94713]: framework: beast
Jan 27 08:15:00 np0005597378 radosgw[94713]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 27 08:15:00 np0005597378 radosgw[94713]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 27 08:15:00 np0005597378 radosgw[94713]: starting handler: beast
Jan 27 08:15:00 np0005597378 radosgw[94713]: set uid:gid to 167:167 (ceph:ceph)
Jan 27 08:15:00 np0005597378 radosgw[94713]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.dfjhvm,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=0cc700cc-5820-4482-8dea-57e5d47000c8,zone_name=default,zonegroup_id=a1f14646-4c49-4783-851f-34f3f4b38dff,zonegroup_name=default}
Jan 27 08:15:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v79: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.2 KiB/s wr, 5 op/s
Jan 27 08:15:00 np0005597378 podman[97444]: 2026-01-27 13:15:00.985448094 +0000 UTC m=+0.039629222 container create 57ffb25aaabe28ee16cdec416fb69479bfb0ca76c030700e4efdf55f701897f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:15:01 np0005597378 systemd[1]: Started libpod-conmon-57ffb25aaabe28ee16cdec416fb69479bfb0ca76c030700e4efdf55f701897f6.scope.
Jan 27 08:15:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:15:01 np0005597378 podman[97444]: 2026-01-27 13:15:00.967117167 +0000 UTC m=+0.021298315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:15:01 np0005597378 podman[97444]: 2026-01-27 13:15:01.069869753 +0000 UTC m=+0.124050901 container init 57ffb25aaabe28ee16cdec416fb69479bfb0ca76c030700e4efdf55f701897f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:15:01 np0005597378 podman[97444]: 2026-01-27 13:15:01.079592136 +0000 UTC m=+0.133773264 container start 57ffb25aaabe28ee16cdec416fb69479bfb0ca76c030700e4efdf55f701897f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:15:01 np0005597378 agitated_stonebraker[97481]: 167 167
Jan 27 08:15:01 np0005597378 systemd[1]: libpod-57ffb25aaabe28ee16cdec416fb69479bfb0ca76c030700e4efdf55f701897f6.scope: Deactivated successfully.
Jan 27 08:15:01 np0005597378 podman[97444]: 2026-01-27 13:15:01.08552933 +0000 UTC m=+0.139710498 container attach 57ffb25aaabe28ee16cdec416fb69479bfb0ca76c030700e4efdf55f701897f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:15:01 np0005597378 podman[97444]: 2026-01-27 13:15:01.086276969 +0000 UTC m=+0.140458097 container died 57ffb25aaabe28ee16cdec416fb69479bfb0ca76c030700e4efdf55f701897f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:15:01 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fdf9910e2ca9c8be431eec56730cc78862ec29f3bc3ea30a70a3bac7a7f26d44-merged.mount: Deactivated successfully.
Jan 27 08:15:01 np0005597378 podman[97444]: 2026-01-27 13:15:01.138360096 +0000 UTC m=+0.192541224 container remove 57ffb25aaabe28ee16cdec416fb69479bfb0ca76c030700e4efdf55f701897f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 08:15:01 np0005597378 systemd[1]: libpod-conmon-57ffb25aaabe28ee16cdec416fb69479bfb0ca76c030700e4efdf55f701897f6.scope: Deactivated successfully.
Jan 27 08:15:01 np0005597378 python3[97488]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:15:01 np0005597378 podman[97506]: 2026-01-27 13:15:01.284618153 +0000 UTC m=+0.045923357 container create 6e05c66245317d17404049d66dbecfe1028ef343f923f5e95c97dc86fe38813f (image=quay.io/ceph/ceph:v20, name=sleepy_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:15:01 np0005597378 podman[97516]: 2026-01-27 13:15:01.309024677 +0000 UTC m=+0.049667893 container create ad0c28befe6f4340588dac95c21dee089c3eb330b5b71238acad1b71d3cbbb00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:15:01 np0005597378 systemd[1]: Started libpod-conmon-6e05c66245317d17404049d66dbecfe1028ef343f923f5e95c97dc86fe38813f.scope.
Jan 27 08:15:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:15:01 np0005597378 systemd[1]: Started libpod-conmon-ad0c28befe6f4340588dac95c21dee089c3eb330b5b71238acad1b71d3cbbb00.scope.
Jan 27 08:15:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b4e32bbbf80b31680f2fa0e21f663e0478cf174155201b69dc5de3b05c2919/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b4e32bbbf80b31680f2fa0e21f663e0478cf174155201b69dc5de3b05c2919/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:01 np0005597378 podman[97506]: 2026-01-27 13:15:01.265008802 +0000 UTC m=+0.026314016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:15:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:15:01 np0005597378 podman[97506]: 2026-01-27 13:15:01.366811863 +0000 UTC m=+0.128117087 container init 6e05c66245317d17404049d66dbecfe1028ef343f923f5e95c97dc86fe38813f (image=quay.io/ceph/ceph:v20, name=sleepy_tharp, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:15:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/538c9074988d9c4b0699237cfbd2543c4d99ebb6fb99e355460907e906dfc746/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/538c9074988d9c4b0699237cfbd2543c4d99ebb6fb99e355460907e906dfc746/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/538c9074988d9c4b0699237cfbd2543c4d99ebb6fb99e355460907e906dfc746/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/538c9074988d9c4b0699237cfbd2543c4d99ebb6fb99e355460907e906dfc746/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/538c9074988d9c4b0699237cfbd2543c4d99ebb6fb99e355460907e906dfc746/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:01 np0005597378 podman[97506]: 2026-01-27 13:15:01.37630859 +0000 UTC m=+0.137613784 container start 6e05c66245317d17404049d66dbecfe1028ef343f923f5e95c97dc86fe38813f (image=quay.io/ceph/ceph:v20, name=sleepy_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:15:01 np0005597378 podman[97516]: 2026-01-27 13:15:01.282903458 +0000 UTC m=+0.023546724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:15:01 np0005597378 podman[97506]: 2026-01-27 13:15:01.383122756 +0000 UTC m=+0.144427960 container attach 6e05c66245317d17404049d66dbecfe1028ef343f923f5e95c97dc86fe38813f (image=quay.io/ceph/ceph:v20, name=sleepy_tharp, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:15:01 np0005597378 podman[97516]: 2026-01-27 13:15:01.393311712 +0000 UTC m=+0.133954928 container init ad0c28befe6f4340588dac95c21dee089c3eb330b5b71238acad1b71d3cbbb00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 08:15:01 np0005597378 podman[97516]: 2026-01-27 13:15:01.39862046 +0000 UTC m=+0.139263676 container start ad0c28befe6f4340588dac95c21dee089c3eb330b5b71238acad1b71d3cbbb00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 08:15:01 np0005597378 podman[97516]: 2026-01-27 13:15:01.404804721 +0000 UTC m=+0.145448147 container attach ad0c28befe6f4340588dac95c21dee089c3eb330b5b71238acad1b71d3cbbb00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 08:15:01 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:01 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:01 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:15:01 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:01 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:15:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 27 08:15:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/572520284' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 27 08:15:01 np0005597378 sleepy_tharp[97539]: 
Jan 27 08:15:01 np0005597378 sleepy_tharp[97539]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.dfjhvm","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 27 08:15:01 np0005597378 systemd[1]: libpod-6e05c66245317d17404049d66dbecfe1028ef343f923f5e95c97dc86fe38813f.scope: Deactivated successfully.
Jan 27 08:15:01 np0005597378 podman[97506]: 2026-01-27 13:15:01.756831625 +0000 UTC m=+0.518136839 container died 6e05c66245317d17404049d66dbecfe1028ef343f923f5e95c97dc86fe38813f (image=quay.io/ceph/ceph:v20, name=sleepy_tharp, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:15:01 np0005597378 systemd[1]: var-lib-containers-storage-overlay-35b4e32bbbf80b31680f2fa0e21f663e0478cf174155201b69dc5de3b05c2919-merged.mount: Deactivated successfully.
Jan 27 08:15:01 np0005597378 podman[97506]: 2026-01-27 13:15:01.79428102 +0000 UTC m=+0.555586214 container remove 6e05c66245317d17404049d66dbecfe1028ef343f923f5e95c97dc86fe38813f (image=quay.io/ceph/ceph:v20, name=sleepy_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 08:15:01 np0005597378 systemd[1]: libpod-conmon-6e05c66245317d17404049d66dbecfe1028ef343f923f5e95c97dc86fe38813f.scope: Deactivated successfully.
Jan 27 08:15:01 np0005597378 fervent_greider[97545]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:15:01 np0005597378 fervent_greider[97545]: --> All data devices are unavailable
Jan 27 08:15:01 np0005597378 systemd[1]: libpod-ad0c28befe6f4340588dac95c21dee089c3eb330b5b71238acad1b71d3cbbb00.scope: Deactivated successfully.
Jan 27 08:15:01 np0005597378 podman[97598]: 2026-01-27 13:15:01.872131457 +0000 UTC m=+0.020864774 container died ad0c28befe6f4340588dac95c21dee089c3eb330b5b71238acad1b71d3cbbb00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 08:15:01 np0005597378 systemd[1]: var-lib-containers-storage-overlay-538c9074988d9c4b0699237cfbd2543c4d99ebb6fb99e355460907e906dfc746-merged.mount: Deactivated successfully.
Jan 27 08:15:01 np0005597378 podman[97598]: 2026-01-27 13:15:01.915424434 +0000 UTC m=+0.064157741 container remove ad0c28befe6f4340588dac95c21dee089c3eb330b5b71238acad1b71d3cbbb00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:15:01 np0005597378 systemd[1]: libpod-conmon-ad0c28befe6f4340588dac95c21dee089c3eb330b5b71238acad1b71d3cbbb00.scope: Deactivated successfully.
Jan 27 08:15:02 np0005597378 podman[97675]: 2026-01-27 13:15:02.30714083 +0000 UTC m=+0.034728274 container create 1013545d92a634673cf8f81c0e9d02af58ad277a5c37ce812f9ab816b65517fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:15:02 np0005597378 systemd[1]: Started libpod-conmon-1013545d92a634673cf8f81c0e9d02af58ad277a5c37ce812f9ab816b65517fe.scope.
Jan 27 08:15:02 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:15:02 np0005597378 podman[97675]: 2026-01-27 13:15:02.386106117 +0000 UTC m=+0.113693581 container init 1013545d92a634673cf8f81c0e9d02af58ad277a5c37ce812f9ab816b65517fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hofstadter, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:15:02 np0005597378 podman[97675]: 2026-01-27 13:15:02.292182702 +0000 UTC m=+0.019770166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:15:02 np0005597378 podman[97675]: 2026-01-27 13:15:02.392213805 +0000 UTC m=+0.119801249 container start 1013545d92a634673cf8f81c0e9d02af58ad277a5c37ce812f9ab816b65517fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hofstadter, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:15:02 np0005597378 podman[97675]: 2026-01-27 13:15:02.395311006 +0000 UTC m=+0.122898480 container attach 1013545d92a634673cf8f81c0e9d02af58ad277a5c37ce812f9ab816b65517fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hofstadter, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:15:02 np0005597378 kind_hofstadter[97691]: 167 167
Jan 27 08:15:02 np0005597378 systemd[1]: libpod-1013545d92a634673cf8f81c0e9d02af58ad277a5c37ce812f9ab816b65517fe.scope: Deactivated successfully.
Jan 27 08:15:02 np0005597378 conmon[97691]: conmon 1013545d92a634673cf8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1013545d92a634673cf8f81c0e9d02af58ad277a5c37ce812f9ab816b65517fe.scope/container/memory.events
Jan 27 08:15:02 np0005597378 podman[97675]: 2026-01-27 13:15:02.397470682 +0000 UTC m=+0.125058126 container died 1013545d92a634673cf8f81c0e9d02af58ad277a5c37ce812f9ab816b65517fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hofstadter, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:15:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e49de3e9bb49a00ee44b6dbd4b3d1331aae3a9732888d58959fc611e12b74d69-merged.mount: Deactivated successfully.
Jan 27 08:15:02 np0005597378 podman[97675]: 2026-01-27 13:15:02.429863945 +0000 UTC m=+0.157451389 container remove 1013545d92a634673cf8f81c0e9d02af58ad277a5c37ce812f9ab816b65517fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hofstadter, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:15:02 np0005597378 systemd[1]: libpod-conmon-1013545d92a634673cf8f81c0e9d02af58ad277a5c37ce812f9ab816b65517fe.scope: Deactivated successfully.
Jan 27 08:15:02 np0005597378 podman[97715]: 2026-01-27 13:15:02.574689975 +0000 UTC m=+0.047656241 container create e225cc1a23a46929f219100f965f34908b5f7e12cbac1e855201f78f81ca34ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:15:02 np0005597378 systemd[1]: Started libpod-conmon-e225cc1a23a46929f219100f965f34908b5f7e12cbac1e855201f78f81ca34ab.scope.
Jan 27 08:15:02 np0005597378 podman[97715]: 2026-01-27 13:15:02.54989318 +0000 UTC m=+0.022859546 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:15:02 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:15:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20838fdcbf593b104a60f2a016ce987f2103d3fd4acab45e9c90cac7e620c80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20838fdcbf593b104a60f2a016ce987f2103d3fd4acab45e9c90cac7e620c80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20838fdcbf593b104a60f2a016ce987f2103d3fd4acab45e9c90cac7e620c80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20838fdcbf593b104a60f2a016ce987f2103d3fd4acab45e9c90cac7e620c80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:02 np0005597378 podman[97715]: 2026-01-27 13:15:02.661478164 +0000 UTC m=+0.134444460 container init e225cc1a23a46929f219100f965f34908b5f7e12cbac1e855201f78f81ca34ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:15:02 np0005597378 podman[97715]: 2026-01-27 13:15:02.669713899 +0000 UTC m=+0.142680165 container start e225cc1a23a46929f219100f965f34908b5f7e12cbac1e855201f78f81ca34ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:15:02 np0005597378 podman[97715]: 2026-01-27 13:15:02.672549853 +0000 UTC m=+0.145516139 container attach e225cc1a23a46929f219100f965f34908b5f7e12cbac1e855201f78f81ca34ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:15:02 np0005597378 python3[97758]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:15:02 np0005597378 podman[97763]: 2026-01-27 13:15:02.806720476 +0000 UTC m=+0.042547579 container create 6e09a37f1ddba95bfbce8e80cb0bde83357ba03b062b839c60ae3b6b77b3d1e5 (image=quay.io/ceph/ceph:v20, name=tender_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:15:02 np0005597378 systemd[1]: Started libpod-conmon-6e09a37f1ddba95bfbce8e80cb0bde83357ba03b062b839c60ae3b6b77b3d1e5.scope.
Jan 27 08:15:02 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:15:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:15:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b99994102510833b0413783b7b776fc6c787300140cd8be7a0c7257fb9ecbaad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b99994102510833b0413783b7b776fc6c787300140cd8be7a0c7257fb9ecbaad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:02 np0005597378 podman[97763]: 2026-01-27 13:15:02.786782827 +0000 UTC m=+0.022609920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:15:02 np0005597378 podman[97763]: 2026-01-27 13:15:02.888250218 +0000 UTC m=+0.124077381 container init 6e09a37f1ddba95bfbce8e80cb0bde83357ba03b062b839c60ae3b6b77b3d1e5 (image=quay.io/ceph/ceph:v20, name=tender_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 08:15:02 np0005597378 podman[97763]: 2026-01-27 13:15:02.896854722 +0000 UTC m=+0.132681795 container start 6e09a37f1ddba95bfbce8e80cb0bde83357ba03b062b839c60ae3b6b77b3d1e5 (image=quay.io/ceph/ceph:v20, name=tender_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:15:02 np0005597378 podman[97763]: 2026-01-27 13:15:02.90021512 +0000 UTC m=+0.136042213 container attach 6e09a37f1ddba95bfbce8e80cb0bde83357ba03b062b839c60ae3b6b77b3d1e5 (image=quay.io/ceph/ceph:v20, name=tender_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 08:15:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v80: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 13 KiB/s wr, 263 op/s
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]: {
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:    "0": [
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:        {
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "devices": [
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "/dev/loop3"
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            ],
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_name": "ceph_lv0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_size": "21470642176",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "name": "ceph_lv0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "tags": {
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.cluster_name": "ceph",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.crush_device_class": "",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.encrypted": "0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.objectstore": "bluestore",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.osd_id": "0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.type": "block",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.vdo": "0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.with_tpm": "0"
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            },
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "type": "block",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "vg_name": "ceph_vg0"
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:        }
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:    ],
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:    "1": [
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:        {
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "devices": [
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "/dev/loop4"
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            ],
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_name": "ceph_lv1",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_size": "21470642176",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "name": "ceph_lv1",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "tags": {
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.cluster_name": "ceph",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.crush_device_class": "",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.encrypted": "0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.objectstore": "bluestore",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.osd_id": "1",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.type": "block",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.vdo": "0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.with_tpm": "0"
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            },
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "type": "block",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "vg_name": "ceph_vg1"
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:        }
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:    ],
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:    "2": [
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:        {
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "devices": [
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "/dev/loop5"
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            ],
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_name": "ceph_lv2",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_size": "21470642176",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "name": "ceph_lv2",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "tags": {
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.cluster_name": "ceph",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.crush_device_class": "",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.encrypted": "0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.objectstore": "bluestore",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.osd_id": "2",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.type": "block",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.vdo": "0",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:                "ceph.with_tpm": "0"
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            },
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "type": "block",
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:            "vg_name": "ceph_vg2"
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:        }
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]:    ]
Jan 27 08:15:02 np0005597378 charming_dijkstra[97757]: }
Jan 27 08:15:02 np0005597378 systemd[1]: libpod-e225cc1a23a46929f219100f965f34908b5f7e12cbac1e855201f78f81ca34ab.scope: Deactivated successfully.
Jan 27 08:15:03 np0005597378 podman[97786]: 2026-01-27 13:15:03.029658909 +0000 UTC m=+0.023734979 container died e225cc1a23a46929f219100f965f34908b5f7e12cbac1e855201f78f81ca34ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:15:03 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e20838fdcbf593b104a60f2a016ce987f2103d3fd4acab45e9c90cac7e620c80-merged.mount: Deactivated successfully.
Jan 27 08:15:03 np0005597378 podman[97786]: 2026-01-27 13:15:03.067738171 +0000 UTC m=+0.061814231 container remove e225cc1a23a46929f219100f965f34908b5f7e12cbac1e855201f78f81ca34ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 08:15:03 np0005597378 systemd[1]: libpod-conmon-e225cc1a23a46929f219100f965f34908b5f7e12cbac1e855201f78f81ca34ab.scope: Deactivated successfully.
Jan 27 08:15:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 27 08:15:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4133451331' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 27 08:15:03 np0005597378 tender_hamilton[97780]: mimic
Jan 27 08:15:03 np0005597378 systemd[1]: libpod-6e09a37f1ddba95bfbce8e80cb0bde83357ba03b062b839c60ae3b6b77b3d1e5.scope: Deactivated successfully.
Jan 27 08:15:03 np0005597378 podman[97763]: 2026-01-27 13:15:03.387121915 +0000 UTC m=+0.622948998 container died 6e09a37f1ddba95bfbce8e80cb0bde83357ba03b062b839c60ae3b6b77b3d1e5 (image=quay.io/ceph/ceph:v20, name=tender_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:15:03 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b99994102510833b0413783b7b776fc6c787300140cd8be7a0c7257fb9ecbaad-merged.mount: Deactivated successfully.
Jan 27 08:15:03 np0005597378 podman[97763]: 2026-01-27 13:15:03.431581632 +0000 UTC m=+0.667408715 container remove 6e09a37f1ddba95bfbce8e80cb0bde83357ba03b062b839c60ae3b6b77b3d1e5 (image=quay.io/ceph/ceph:v20, name=tender_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:15:03 np0005597378 systemd[1]: libpod-conmon-6e09a37f1ddba95bfbce8e80cb0bde83357ba03b062b839c60ae3b6b77b3d1e5.scope: Deactivated successfully.
Jan 27 08:15:03 np0005597378 podman[97897]: 2026-01-27 13:15:03.510168288 +0000 UTC m=+0.037223390 container create 9774ee65687bd1b99ba9a3151d67b16f53686200c9336741b97667f707008b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_morse, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:15:03 np0005597378 systemd[1]: Started libpod-conmon-9774ee65687bd1b99ba9a3151d67b16f53686200c9336741b97667f707008b7b.scope.
Jan 27 08:15:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:15:03 np0005597378 podman[97897]: 2026-01-27 13:15:03.577059579 +0000 UTC m=+0.104114701 container init 9774ee65687bd1b99ba9a3151d67b16f53686200c9336741b97667f707008b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_morse, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:15:03 np0005597378 podman[97897]: 2026-01-27 13:15:03.58167066 +0000 UTC m=+0.108725762 container start 9774ee65687bd1b99ba9a3151d67b16f53686200c9336741b97667f707008b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_morse, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:15:03 np0005597378 keen_morse[97913]: 167 167
Jan 27 08:15:03 np0005597378 systemd[1]: libpod-9774ee65687bd1b99ba9a3151d67b16f53686200c9336741b97667f707008b7b.scope: Deactivated successfully.
Jan 27 08:15:03 np0005597378 podman[97897]: 2026-01-27 13:15:03.586250788 +0000 UTC m=+0.113305890 container attach 9774ee65687bd1b99ba9a3151d67b16f53686200c9336741b97667f707008b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_morse, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:15:03 np0005597378 podman[97897]: 2026-01-27 13:15:03.586550016 +0000 UTC m=+0.113605118 container died 9774ee65687bd1b99ba9a3151d67b16f53686200c9336741b97667f707008b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_morse, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 08:15:03 np0005597378 podman[97897]: 2026-01-27 13:15:03.494248813 +0000 UTC m=+0.021303935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:15:03 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e418f9bb992c138ce58b086f9e3538b82f75480810b7b9a35c1bbc51ce87e56f-merged.mount: Deactivated successfully.
Jan 27 08:15:03 np0005597378 podman[97897]: 2026-01-27 13:15:03.631594929 +0000 UTC m=+0.158650031 container remove 9774ee65687bd1b99ba9a3151d67b16f53686200c9336741b97667f707008b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:15:03 np0005597378 systemd[1]: libpod-conmon-9774ee65687bd1b99ba9a3151d67b16f53686200c9336741b97667f707008b7b.scope: Deactivated successfully.
Jan 27 08:15:03 np0005597378 podman[97938]: 2026-01-27 13:15:03.770437413 +0000 UTC m=+0.039012306 container create 92551488ec534f6b6cbd811420001c81a4ab238c09ca5bca9f1bd431fc4b591b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:15:03 np0005597378 systemd[1]: Started libpod-conmon-92551488ec534f6b6cbd811420001c81a4ab238c09ca5bca9f1bd431fc4b591b.scope.
Jan 27 08:15:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:15:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87fe87a73f9fbecd52ea4de3b4bf77d417033fca4e35f0c3a69512473d37ec42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87fe87a73f9fbecd52ea4de3b4bf77d417033fca4e35f0c3a69512473d37ec42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87fe87a73f9fbecd52ea4de3b4bf77d417033fca4e35f0c3a69512473d37ec42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87fe87a73f9fbecd52ea4de3b4bf77d417033fca4e35f0c3a69512473d37ec42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:03 np0005597378 podman[97938]: 2026-01-27 13:15:03.753214865 +0000 UTC m=+0.021789778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:15:03 np0005597378 podman[97938]: 2026-01-27 13:15:03.850367134 +0000 UTC m=+0.118942037 container init 92551488ec534f6b6cbd811420001c81a4ab238c09ca5bca9f1bd431fc4b591b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wescoff, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:15:03 np0005597378 podman[97938]: 2026-01-27 13:15:03.857644274 +0000 UTC m=+0.126219167 container start 92551488ec534f6b6cbd811420001c81a4ab238c09ca5bca9f1bd431fc4b591b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 08:15:03 np0005597378 podman[97938]: 2026-01-27 13:15:03.86096003 +0000 UTC m=+0.129534953 container attach 92551488ec534f6b6cbd811420001c81a4ab238c09ca5bca9f1bd431fc4b591b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wescoff, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:15:04 np0005597378 python3[98028]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:15:04 np0005597378 podman[98051]: 2026-01-27 13:15:04.491651941 +0000 UTC m=+0.040277714 container create 06d384d615f75ca3847c72c9a9bfa3efbce871e227d4924a70738b3a780f5194 (image=quay.io/ceph/ceph:v20, name=boring_colden, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 08:15:04 np0005597378 lvm[98070]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:15:04 np0005597378 lvm[98070]: VG ceph_vg0 finished
Jan 27 08:15:04 np0005597378 lvm[98072]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:15:04 np0005597378 lvm[98072]: VG ceph_vg1 finished
Jan 27 08:15:04 np0005597378 systemd[1]: Started libpod-conmon-06d384d615f75ca3847c72c9a9bfa3efbce871e227d4924a70738b3a780f5194.scope.
Jan 27 08:15:04 np0005597378 lvm[98079]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:15:04 np0005597378 lvm[98079]: VG ceph_vg2 finished
Jan 27 08:15:04 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:15:04 np0005597378 podman[98051]: 2026-01-27 13:15:04.47336086 +0000 UTC m=+0.021986653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:15:04 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b18695c637efc513b0231b33ffadd1ee364c1ee7137dc38e40a3f7e64f6635d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:04 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b18695c637efc513b0231b33ffadd1ee364c1ee7137dc38e40a3f7e64f6635d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:04 np0005597378 podman[98051]: 2026-01-27 13:15:04.585918861 +0000 UTC m=+0.134544654 container init 06d384d615f75ca3847c72c9a9bfa3efbce871e227d4924a70738b3a780f5194 (image=quay.io/ceph/ceph:v20, name=boring_colden, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:15:04 np0005597378 podman[98051]: 2026-01-27 13:15:04.593471841 +0000 UTC m=+0.142097614 container start 06d384d615f75ca3847c72c9a9bfa3efbce871e227d4924a70738b3a780f5194 (image=quay.io/ceph/ceph:v20, name=boring_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:15:04 np0005597378 podman[98051]: 2026-01-27 13:15:04.597432972 +0000 UTC m=+0.146058745 container attach 06d384d615f75ca3847c72c9a9bfa3efbce871e227d4924a70738b3a780f5194 (image=quay.io/ceph/ceph:v20, name=boring_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:15:04 np0005597378 nostalgic_wescoff[97954]: {}
Jan 27 08:15:04 np0005597378 systemd[1]: libpod-92551488ec534f6b6cbd811420001c81a4ab238c09ca5bca9f1bd431fc4b591b.scope: Deactivated successfully.
Jan 27 08:15:04 np0005597378 podman[97938]: 2026-01-27 13:15:04.681765495 +0000 UTC m=+0.950340378 container died 92551488ec534f6b6cbd811420001c81a4ab238c09ca5bca9f1bd431fc4b591b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wescoff, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 08:15:04 np0005597378 systemd[1]: libpod-92551488ec534f6b6cbd811420001c81a4ab238c09ca5bca9f1bd431fc4b591b.scope: Consumed 1.283s CPU time.
Jan 27 08:15:04 np0005597378 systemd[1]: var-lib-containers-storage-overlay-87fe87a73f9fbecd52ea4de3b4bf77d417033fca4e35f0c3a69512473d37ec42-merged.mount: Deactivated successfully.
Jan 27 08:15:04 np0005597378 podman[97938]: 2026-01-27 13:15:04.722820081 +0000 UTC m=+0.991394974 container remove 92551488ec534f6b6cbd811420001c81a4ab238c09ca5bca9f1bd431fc4b591b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wescoff, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:15:04 np0005597378 systemd[1]: libpod-conmon-92551488ec534f6b6cbd811420001c81a4ab238c09ca5bca9f1bd431fc4b591b.scope: Deactivated successfully.
Jan 27 08:15:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:15:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:15:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v81: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 9.9 KiB/s wr, 220 op/s
Jan 27 08:15:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 27 08:15:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3922356027' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 27 08:15:05 np0005597378 boring_colden[98075]: 
Jan 27 08:15:05 np0005597378 systemd[1]: libpod-06d384d615f75ca3847c72c9a9bfa3efbce871e227d4924a70738b3a780f5194.scope: Deactivated successfully.
Jan 27 08:15:05 np0005597378 conmon[98075]: conmon 06d384d615f75ca3847c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06d384d615f75ca3847c72c9a9bfa3efbce871e227d4924a70738b3a780f5194.scope/container/memory.events
Jan 27 08:15:05 np0005597378 boring_colden[98075]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Jan 27 08:15:05 np0005597378 podman[98051]: 2026-01-27 13:15:05.098565283 +0000 UTC m=+0.647191066 container died 06d384d615f75ca3847c72c9a9bfa3efbce871e227d4924a70738b3a780f5194 (image=quay.io/ceph/ceph:v20, name=boring_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:15:05 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2b18695c637efc513b0231b33ffadd1ee364c1ee7137dc38e40a3f7e64f6635d-merged.mount: Deactivated successfully.
Jan 27 08:15:05 np0005597378 podman[98051]: 2026-01-27 13:15:05.141235163 +0000 UTC m=+0.689860946 container remove 06d384d615f75ca3847c72c9a9bfa3efbce871e227d4924a70738b3a780f5194 (image=quay.io/ceph/ceph:v20, name=boring_colden, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:15:05 np0005597378 systemd[1]: libpod-conmon-06d384d615f75ca3847c72c9a9bfa3efbce871e227d4924a70738b3a780f5194.scope: Deactivated successfully.
Jan 27 08:15:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v82: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.0 KiB/s wr, 177 op/s
Jan 27 08:15:07 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 895557fa-2d2a-43cb-b382-90e0a3ecc699 (Global Recovery Event) in 15 seconds
Jan 27 08:15:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:15:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v83: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 6.8 KiB/s wr, 150 op/s
Jan 27 08:15:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v84: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 6.1 KiB/s wr, 135 op/s
Jan 27 08:15:12 np0005597378 ceph-mgr[75385]: [progress INFO root] Writing back 6 completed events
Jan 27 08:15:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 27 08:15:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:15:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v85: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Jan 27 08:15:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v86: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:15:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v87: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:15:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:15:16
Jan 27 08:15:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:15:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:15:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'images', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'backups', 'default.rgw.control']
Jan 27 08:15:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.654512962186803e-07 of space, bias 4.0, pg target 0.0007985415554624163 quantized to 16 (current 1)
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:15:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 27 08:15:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:15:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:15:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:15:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 27 08:15:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 27 08:15:18 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 27 08:15:18 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev fa82195d-eb07-40fc-8683-e9116b3d38f5 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 27 08:15:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 27 08:15:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:15:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 27 08:15:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 27 08:15:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 27 08:15:19 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 27 08:15:19 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev 89536cc5-ba64-4e36-802a-c8754adcfa59 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 27 08:15:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 27 08:15:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:19 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:19 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:19 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 27 08:15:20 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev 3d5560f6-ed42-43fe-a5c8-474191c8dc79 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=16/17 n=0 ec=14/14 lis/c=16/16 les/c/f=17/17/0 sis=38 pruub=8.019707680s) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active pruub 73.762268066s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.0( empty local-lis/les=16/17 n=0 ec=14/14 lis/c=16/16 les/c/f=17/17/0 sis=38 pruub=8.019707680s) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown pruub 73.762268066s@ mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.9( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.b( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.c( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.d( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.e( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.10( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.12( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.1( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.1b( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.1a( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.1c( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.1e( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.15( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.14( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.17( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.18( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.3( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=16/17 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v92: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 27 08:15:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 27 08:15:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 27 08:15:21 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 27 08:15:21 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev 2ce48677-367b-49cc-8510-48ea23a6108f (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 27 08:15:21 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40 pruub=8.012632370s) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active pruub 85.917800903s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 27 08:15:21 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:15:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 27 08:15:21 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:21 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:21 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:21 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:21 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:21 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40 pruub=8.012632370s) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown pruub 85.917800903s@ mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.1e( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.1( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.c( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=38/40 n=0 ec=14/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.e( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.10( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.12( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.14( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.1a( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=16/16 les/c/f=17/17/0 sis=38) [2] r=0 lpr=38 pi=[16,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:21 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=40 pruub=14.591603279s) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active pruub 87.875556946s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:21 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=40 pruub=14.591603279s) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown pruub 87.875556946s@ mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 27 08:15:22 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev 3c845cc9-8ebb-4b4b-881d-6a27bc65f6c7 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1c( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1a( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.19( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.18( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.b( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.4( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.c( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.2( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.d( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.e( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.f( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.12( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.13( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.14( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.10( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.17( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.16( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1c( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=40/41 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [0] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.19( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=40/41 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.2( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.14( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.13( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.4( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.17( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 41 pg[3.10( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [1] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 27 08:15:22 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 27 08:15:22 np0005597378 ceph-mgr[75385]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:15:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v95: 104 pgs: 62 unknown, 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 27 08:15:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 27 08:15:23 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev d5359287-f209-4ac7-89dd-d89c1b3c3393 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:23 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 42 pg[6.0( v 30'39 (0'0,30'39] local-lis/les=19/20 n=22 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42 pruub=8.025353432s) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 28'38 mlcod 28'38 active pruub 87.940910339s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:23 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 42 pg[6.0( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=1 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42 pruub=8.025353432s) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 28'38 mlcod 0'0 unknown pruub 87.940910339s@ mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:23 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:23 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42 pruub=14.885575294s) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active pruub 83.762825012s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:23 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42 pruub=14.885575294s) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown pruub 83.762825012s@ mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:23 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 27 08:15:23 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 27 08:15:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 27 08:15:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 27 08:15:24 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 27 08:15:24 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev 0c9ad5ab-f79c-43c7-8965-c26923dab1f4 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 27 08:15:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 27 08:15:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1f( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.10( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.17( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.8( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.a( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.b( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.a( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.9( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.5( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=2 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.4( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=2 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.8( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.6( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.7( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.b( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.6( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=2 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.1( v 30'39 (0'0,30'39] local-lis/les=19/20 n=2 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.3( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=2 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.2( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=2 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.c( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.e( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.d( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.e( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.d( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1c( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1b( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.f( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=19/20 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.a( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.8( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.5( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.b( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.4( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.7( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.1( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.10( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.8( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=42/43 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.17( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.6( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.0( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 28'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.6( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.3( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.9( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.c( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.e( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.d( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.2( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 43 pg[6.f( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [2] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:24 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:24 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 27 08:15:24 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 27 08:15:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v98: 150 pgs: 1 peering, 31 unknown, 118 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:15:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 27 08:15:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 27 08:15:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 27 08:15:25 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev 99b00e7b-dc37-4c87-8fed-afb31883a415 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:25 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:25 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 27 08:15:25 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 27 08:15:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 27 08:15:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 27 08:15:26 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 27 08:15:26 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev 98eb2293-1013-40de-84d4-3739bb4133e4 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 27 08:15:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 27 08:15:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44 pruub=14.011640549s) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active pruub 91.898544312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 44 pg[8.0( v 29'6 (0'0,29'6] local-lis/les=28/29 n=6 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=44 pruub=15.214392662s) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 29'5 mlcod 29'5 active pruub 93.101356506s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44 pruub=14.011640549s) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown pruub 91.898544312s@ mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.0( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=44 pruub=15.214392662s) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 29'5 mlcod 0'0 unknown pruub 93.101356506s@ mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.7( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.b( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.d( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.10( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.12( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.14( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.17( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.19( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.1d( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.1e( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.1d( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.1( v 29'6 (0'0,29'6] local-lis/les=28/29 n=1 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.3( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=1 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.2( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=1 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.4( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=1 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.5( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=1 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.6( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=1 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.7( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.8( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.9( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.a( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.b( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.c( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.d( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.e( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.f( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.10( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.11( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.12( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.13( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.14( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.15( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.1a( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.16( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.17( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.1c( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.1f( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.18( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.1e( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.19( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 45 pg[8.1b( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:26 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:26 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 27 08:15:26 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 27 08:15:26 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 27 08:15:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v101: 212 pgs: 1 peering, 93 unknown, 118 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:15:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 27 08:15:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 27 08:15:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] update: starting ev 49c18a4d-a153-4771-bee6-e48649c4003a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[9.0( v 36'483 (0'0,36'483] local-lis/les=30/31 n=210 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=46 pruub=8.385560036s) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 36'482 mlcod 36'482 active pruub 87.276626587s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev fa82195d-eb07-40fc-8683-e9116b3d38f5 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event fa82195d-eb07-40fc-8683-e9116b3d38f5 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev 89536cc5-ba64-4e36-802a-c8754adcfa59 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 89536cc5-ba64-4e36-802a-c8754adcfa59 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev 3d5560f6-ed42-43fe-a5c8-474191c8dc79 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 3d5560f6-ed42-43fe-a5c8-474191c8dc79 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev 2ce48677-367b-49cc-8510-48ea23a6108f (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 2ce48677-367b-49cc-8510-48ea23a6108f (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev 3c845cc9-8ebb-4b4b-881d-6a27bc65f6c7 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 3c845cc9-8ebb-4b4b-881d-6a27bc65f6c7 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev d5359287-f209-4ac7-89dd-d89c1b3c3393 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event d5359287-f209-4ac7-89dd-d89c1b3c3393 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev 0c9ad5ab-f79c-43c7-8965-c26923dab1f4 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 0c9ad5ab-f79c-43c7-8965-c26923dab1f4 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev 99b00e7b-dc37-4c87-8fed-afb31883a415 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 99b00e7b-dc37-4c87-8fed-afb31883a415 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev 98eb2293-1013-40de-84d4-3739bb4133e4 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 98eb2293-1013-40de-84d4-3739bb4133e4 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] complete: finished ev 49c18a4d-a153-4771-bee6-e48649c4003a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 49c18a4d-a153-4771-bee6-e48649c4003a (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.19( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.16( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.14( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.15( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.17( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.13( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.11( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.12( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.d( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.e( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.c( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.a( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.8( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.1( v 29'6 (0'0,29'6] local-lis/les=44/46 n=1 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.10( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.3( v 29'6 (0'0,29'6] local-lis/les=44/46 n=1 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.f( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=44/46 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.0( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 29'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.b( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.9( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.d( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.7( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.2( v 29'6 (0'0,29'6] local-lis/les=44/46 n=1 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.6( v 29'6 (0'0,29'6] local-lis/les=44/46 n=1 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.5( v 29'6 (0'0,29'6] local-lis/les=44/46 n=1 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.1b( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.1a( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.4( v 29'6 (0'0,29'6] local-lis/les=44/46 n=1 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.17( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.19( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.1f( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.1e( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.12( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.1d( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.1c( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [1] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[8.18( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=28/28 les/c/f=29/29/0 sis=44) [1] r=0 lpr=44 pi=[28,44)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 46 pg[9.0( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=46 pruub=8.385560036s) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 36'482 mlcod 0'0 unknown pruub 87.276626587s@ mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7924000 space 0x5640b7021a40 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7847380 space 0x5640b6b5dd40 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b78a3b80 space 0x5640b780fd40 0x0~98 clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b789da80 space 0x5640b71d6540 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b787bd00 space 0x5640b6b36840 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b78a2280 space 0x5640b71d7440 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7915e80 space 0x5640b70ce840 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7859680 space 0x5640b70ac240 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7859e80 space 0x5640b7244540 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7914400 space 0x5640b79b6840 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7858580 space 0x5640b70add40 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7847180 space 0x5640b7da5140 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b6bca180 space 0x5640b704ba40 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7859880 space 0x5640b7245740 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7924a80 space 0x5640b7cbd740 0x0~98 clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7914c00 space 0x5640b70c4540 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7837880 space 0x5640b7094e40 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7837d80 space 0x5640b7106240 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7836300 space 0x5640b7102240 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7858300 space 0x5640b70cf140 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7858b80 space 0x5640b70c7440 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7858780 space 0x5640b70c6240 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7846a00 space 0x5640b70fae40 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7859280 space 0x5640b70ad440 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b78a2300 space 0x5640b7cbcb40 0x0~98 clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7846800 space 0x5640b780e240 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7851c00 space 0x5640b7094840 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b789df00 space 0x5640b70a6b40 0x0~98 clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7850980 space 0x5640b6b09440 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7837680 space 0x5640b7106b40 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7851a80 space 0x5640b704b140 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b6b43a80 space 0x5640b70c8540 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b78cf600 space 0x5640b70bc540 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7925080 space 0x5640b7097740 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b6b43d00 space 0x5640b799eb40 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b787be80 space 0x5640b70fa240 0x0~98 clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7859c80 space 0x5640b7244e40 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b789df80 space 0x5640b70c9440 0x0~98 clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7924d00 space 0x5640b70bdd40 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7836a00 space 0x5640b6b04240 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7846600 space 0x5640b780eb40 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7859480 space 0x5640b70acb40 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7837f00 space 0x5640b6b37740 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7847b00 space 0x5640b70fb740 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7851780 space 0x5640b6b5d140 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7851180 space 0x5640b7020240 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7850800 space 0x5640b7cbc240 0x0~98 clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7924700 space 0x5640b70fa540 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7850b00 space 0x5640b7095a40 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7837f80 space 0x5640b70c5740 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7859c00 space 0x5640b7020840 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7858980 space 0x5640b70c6b40 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b6bca600 space 0x5640b6b5c540 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7846480 space 0x5640b70a6240 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b789de80 space 0x5640b7020e40 0x0~9a clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7859180 space 0x5640b7102b40 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7858100 space 0x5640b70cfa40 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7850700 space 0x5640b7096240 0x0~98 clean)
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5640b7eb3440) split_cache   moving buffer(0x5640b7836180 space 0x5640b70c4e40 0x0~6e clean)
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 27 08:15:27 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 27 08:15:27 np0005597378 ceph-mgr[75385]: [progress INFO root] Writing back 16 completed events
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:15:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 27 08:15:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 27 08:15:28 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.14( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.17( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.16( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.15( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.10( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.11( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.13( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.12( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.d( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.c( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.f( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.9( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.2( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.b( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.e( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.a( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.6( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.7( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.4( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.5( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1a( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1b( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.8( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.18( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.19( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1e( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1f( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1c( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1d( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.14( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.3( v 36'483 lc 0'0 (0'0,36'483] local-lis/les=30/31 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.10( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.17( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.11( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.13( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.d( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.c( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.f( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.12( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.0( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 36'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.9( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.e( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.a( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.2( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.b( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.6( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.5( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.7( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.4( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1a( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.8( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.18( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.19( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1b( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1d( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1c( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.1e( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 47 pg[9.3( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=36'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 46 pg[10.0( v 36'18 (0'0,36'18] local-lis/les=32/33 n=9 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=46 pruub=9.117191315s) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 36'17 mlcod 36'17 active pruub 83.179557800s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.0( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=46 pruub=9.117191315s) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 36'17 mlcod 0'0 unknown pruub 83.179557800s@ mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.1( v 36'18 (0'0,36'18] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.2( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.3( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.4( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.5( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.6( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.7( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.8( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.9( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.a( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.c( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.b( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.d( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.e( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.f( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.10( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.11( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.13( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.12( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.14( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.15( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.16( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.17( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.18( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.19( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.1a( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.1b( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.1c( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.1d( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.1e( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 47 pg[10.1f( v 36'18 lc 0'0 (0'0,36'18] local-lis/les=32/33 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 27 08:15:28 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 27 08:15:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v104: 274 pgs: 33 peering, 31 unknown, 210 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:15:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 27 08:15:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 27 08:15:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 27 08:15:29 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 27 08:15:29 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 48 pg[11.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48 pruub=10.354310036s) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active pruub 91.306900024s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:29 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 27 08:15:29 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 48 pg[11.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48 pruub=10.354310036s) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown pruub 91.306900024s@ mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.11( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.1f( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.12( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.10( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.1e( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.1d( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.1c( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.1b( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.1a( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.19( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.3( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.18( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.4( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.7( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.6( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.5( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.8( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.f( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.0( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 36'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.9( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.a( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.b( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.e( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.1( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.d( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.c( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.2( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.14( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.13( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.16( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.17( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 48 pg[10.15( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=32/32 les/c/f=33/33/0 sis=46) [2] r=0 lpr=46 pi=[32,46)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:29 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 27 08:15:29 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 27 08:15:29 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 27 08:15:29 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 27 08:15:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 27 08:15:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 27 08:15:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 27 08:15:30 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.16( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.14( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.13( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.12( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.11( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.10( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.e( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.f( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.d( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.17( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.9( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.b( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.2( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.15( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.3( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.c( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.8( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.a( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.5( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.4( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.6( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.7( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.18( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.19( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1a( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1b( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1c( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1d( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1e( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1f( empty local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.16( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.14( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.13( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.12( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.11( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.10( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.d( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.e( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.f( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.9( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=48/49 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.2( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.b( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.8( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.3( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.c( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.a( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.5( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.4( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.6( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.18( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.7( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1a( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1b( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.19( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1d( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1c( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1e( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.1f( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.17( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 49 pg[11.15( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v107: 305 pgs: 33 peering, 62 unknown, 210 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:15:31 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 27 08:15:31 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 27 08:15:32 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 27 08:15:32 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 27 08:15:32 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 27 08:15:32 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 27 08:15:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:15:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v108: 305 pgs: 32 peering, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:15:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v109: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 27 08:15:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.867611885s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.916229248s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.867573738s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.916229248s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.867572784s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.916275024s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.867518425s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.916275024s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.5( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.875465393s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 active pruub 104.924591064s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.5( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.875430107s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 104.924591064s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.867056847s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.916252136s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.867002487s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.916252136s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866959572s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.916275024s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866918564s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.916275024s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.867265701s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.916641235s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.867720604s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.917121887s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.867250443s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.916641235s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.867647171s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.917121887s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866767883s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.916305542s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.b( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.875036240s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 active pruub 104.924621582s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.7( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.875060081s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 active pruub 104.924659729s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.b( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.875019073s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 104.924621582s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866694450s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.916305542s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866978645s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.916633606s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866942406s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.916633606s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866892815s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.916656494s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.7( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.875028610s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 104.924659729s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866878510s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.916656494s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.1( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.874888420s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 active pruub 104.924736023s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.9( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.877340317s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 active pruub 104.926551819s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.3( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.876603127s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 active pruub 104.926483154s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.3( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.876589775s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 104.926483154s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.1( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.874855995s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 104.924736023s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.9( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.876666069s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 104.926551819s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866866112s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.916801453s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866843224s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.916801453s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.f( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.876497269s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 active pruub 104.926612854s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.f( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.876443863s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 104.926612854s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866576195s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.916770935s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866433144s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.916671753s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866543770s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.916770935s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866597176s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.916786194s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.d( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.876272202s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 active pruub 104.926582336s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866440773s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.916801453s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866479874s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.916786194s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866950989s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.917297363s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866426468s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.916801453s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[6.d( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.876247406s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 104.926582336s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866908073s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.917297363s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866412163s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.916671753s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866756439s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.917243958s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866632462s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.917129517s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866744995s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.917243958s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866676331s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.917289734s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866619110s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.917129517s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866660118s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.917289734s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866481781s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.917152405s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866449356s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.917152405s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866653442s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 102.917350769s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.866557121s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 102.917350769s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[4.18( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[4.10( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[4.1b( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[4.1a( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[4.12( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[4.14( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[4.e( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[4.1( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[4.a( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[4.13( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[4.11( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[4.1c( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[4.8( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[6.b( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[4.9( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.857151031s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.766090393s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.857127190s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.766090393s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.12( v 48'19 (0'0,48'19] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.949865341s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 36'18 active pruub 90.859169006s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.12( v 48'19 (0'0,48'19] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.949830055s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 36'18 unknown NOTIFY pruub 90.859169006s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.11( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.939722061s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.849288940s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.11( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.939700127s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.849288940s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.1e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.861932755s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.771598816s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.1e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.861917496s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.771598816s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.19( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.852448463s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.762161255s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.19( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.852425575s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.762161255s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[5.1e( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[2.19( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.10( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.949390411s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859184265s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[2.18( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[6.9( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[4.5( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.10( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.949378967s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859184265s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.18( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.852344513s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.762229919s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.17( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.852310181s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.762222290s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.18( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.852330208s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.762229919s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.17( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.852299690s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.762222290s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.1e( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.949167252s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859184265s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.1e( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.949152946s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859184265s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.16( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.852123260s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.762229919s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.16( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.852108955s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.762229919s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860647202s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.770774841s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860630035s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.770774841s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.15( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.852023125s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.762237549s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.15( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.852009773s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.762237549s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860786438s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.771049500s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860771179s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.771049500s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860486984s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.770889282s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860472679s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.770889282s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[2.16( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.13( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.851220131s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761657715s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.13( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.851205826s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761657715s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.14( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860219002s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.770782471s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.1a( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.948714256s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859298706s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.14( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860194206s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.770782471s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860282898s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.770896912s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860267639s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.770896912s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.1a( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.948677063s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859298706s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.11( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.850783348s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761520386s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.11( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.850769997s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761520386s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.19( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.948597908s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859321594s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.19( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.948496819s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859321594s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.7( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.948603630s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859489441s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.f( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.850584030s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761497498s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860518456s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.771415710s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.7( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.948434830s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859489441s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.f( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.850427628s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761497498s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.6( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.948519707s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859672546s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.859697342s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.770912170s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.d( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.850131035s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761352539s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.859680176s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.770912170s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.d( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.850115776s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761352539s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.6( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.948499680s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859672546s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860207558s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.771415710s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.4( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.948262215s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859642029s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.b( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.849956512s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761352539s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.4( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.948245049s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859642029s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.b( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.849943161s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761352539s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.859498024s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.770988464s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.8( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.948196411s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859703064s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.8( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.948136330s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859703064s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860958099s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.772560120s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.859408379s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.770988464s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.860935211s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.772560120s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.7( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.849667549s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761383057s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.7( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.849653244s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761383057s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.8( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.849586487s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761352539s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.8( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.849570274s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761352539s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.859577179s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.771392822s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[2.13( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.9( v 48'19 (0'0,48'19] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.947005272s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 36'18 active pruub 90.859733582s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.9( v 48'19 (0'0,48'19] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.946971893s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 36'18 unknown NOTIFY pruub 90.859733582s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.2( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.848449707s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761314392s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.859537125s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.771392822s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.2( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.848428726s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761314392s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.858601570s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.771598816s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.858579636s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.771598816s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.f( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.946640968s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859710693s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.b( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.946733475s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859756470s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.f( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.946625710s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859710693s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.b( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.946651459s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859756470s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[5.14( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.3( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.848131180s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761299133s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.3( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.848107338s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761299133s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.858144760s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.771492004s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.4( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.847843170s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761207581s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.858091354s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.771484375s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.858069420s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.771484375s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.858094215s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.771492004s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.d( v 48'19 (0'0,48'19] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.946349144s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 36'18 active pruub 90.859779358s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.5( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.847753525s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761199951s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[5.15( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.d( v 48'19 (0'0,48'19] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.946324348s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 36'18 unknown NOTIFY pruub 90.859779358s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.2( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.857916832s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.771492004s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.2( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.857900620s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.771492004s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.e( v 48'19 (0'0,48'19] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.946154594s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 36'18 active pruub 90.859771729s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.4( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.847516060s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761207581s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.5( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.847586632s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761199951s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[2.11( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.1( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.857834816s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.771690369s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.1( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.857815742s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.771690369s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.e( v 48'19 (0'0,48'19] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.945904732s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 36'18 unknown NOTIFY pruub 90.859771729s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.6( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.847297668s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761184692s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.1( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.945762634s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859779358s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.6( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.847263336s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761184692s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.1( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.945736885s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859779358s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.9( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.846989632s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761192322s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.9( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.846968651s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761192322s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.2( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.945520401s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859817505s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.2( v 36'18 (0'0,36'18] local-lis/les=46/48 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.945477486s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859817505s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.13( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.945481300s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859832764s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.13( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.945460320s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859832764s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.a( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.846712112s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761177063s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.1b( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.846671104s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761154175s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.a( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.846691132s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761177063s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.1b( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.846630096s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761154175s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[2.f( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.14( v 48'19 (0'0,48'19] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.944939613s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 36'18 active pruub 90.859825134s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.15( v 48'19 (0'0,48'19] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.944908142s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 36'18 active pruub 90.859855652s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.1c( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.839408875s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.754379272s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.15( v 48'19 (0'0,48'19] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.944883347s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 36'18 unknown NOTIFY pruub 90.859855652s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.1c( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.839385986s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.754379272s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.857586861s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.772636414s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[2.b( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.857572556s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.772636414s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.16( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.944760323s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859848022s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.16( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.944746017s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859848022s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.1d( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.839219093s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.754364014s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.19( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.857461929s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.772644043s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.14( v 48'19 (0'0,48'19] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.944668770s) [1] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 36'18 unknown NOTIFY pruub 90.859825134s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.19( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.857446671s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.772644043s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.17( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.944574356s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 active pruub 90.859855652s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.1d( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.839087486s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.754364014s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[5.7( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[10.17( v 36'18 (0'0,36'18] local-lis/les=46/48 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50 pruub=9.944549561s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 unknown NOTIFY pruub 90.859855652s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.1f( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.845593452s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 90.761169434s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[2.1f( empty local-lis/les=38/40 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50 pruub=9.845574379s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 90.761169434s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.856922150s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 93.772552490s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[5.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=12.856904984s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 93.772552490s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[2.8( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[6.7( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[6.5( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[2.2( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[4.7( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[6.1( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[5.5( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[6.f( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[5.3( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[4.d( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[4.f( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[6.d( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[4.4( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[4.2( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[6.3( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.834710121s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.868415833s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.858513832s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.892295837s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.858479500s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.892295837s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.834672928s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.868415833s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.14( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.858634949s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.892623901s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.14( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.858613968s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.892623901s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.900575638s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.934776306s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.900551796s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.934776306s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.839756012s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874114990s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.839740753s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874114990s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.15( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.858153343s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.892631531s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.15( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.858133316s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.892631531s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.857970238s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.892593384s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[3.1e( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.940877914s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.975509644s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.940863609s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.975509644s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.857943535s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.892593384s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.940814972s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.975517273s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.839510918s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874313354s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.940754890s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.975517273s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.839491844s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874313354s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.17( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.898910522s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.934020996s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.17( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.898875237s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.934020996s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[3.1d( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.856760979s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.892662048s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.937189102s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973083496s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[5.4( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.855440140s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.892662048s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.935669899s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973083496s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.836581230s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874183655s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.836532593s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874183655s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.10( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.854718208s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.892723083s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.10( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.854691505s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.892723083s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.11( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.895979881s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.934059143s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.11( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.895943642s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.934059143s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.934961319s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973182678s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.934933662s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973182678s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.11( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.854455948s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.892753601s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.11( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.854428291s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.892753601s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.934559822s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973220825s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.934535027s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973220825s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.12( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.853933334s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.892745972s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.12( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.853889465s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.892745972s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[3.1f( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.13( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.894646645s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.934272766s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.13( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.894618988s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.934272766s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.933101654s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973320007s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.933076859s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973320007s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.852239609s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.892707825s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[5.2( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[2.1c( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[2.1d( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[2.1f( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[3.1b( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.852219582s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.892707825s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.852149963s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.892784119s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.852136612s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.892784119s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.932838440s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973609924s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.833749771s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874168396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.932826996s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973609924s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.833354950s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874168396s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.851871490s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.892768860s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.851859093s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.892768860s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.833182335s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874168396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.833166122s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874168396s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.c( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.851737976s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.892799377s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.c( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.851721764s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.892799377s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.d( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.893174171s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.934280396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.d( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.893157959s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.934280396s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.932265282s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973518372s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.851516724s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.892807007s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.932238579s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973518372s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.832859993s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874176025s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.851498604s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.892807007s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.832840919s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874176025s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.d( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.851313591s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.892799377s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.d( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.851294518s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.892799377s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.931881905s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973426819s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.931868553s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973426819s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.832539558s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874176025s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.832524300s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874176025s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.e( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.851143837s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.892814636s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.e( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.851131439s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.892814636s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.f( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.892539978s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.934310913s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.931881905s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973670959s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.f( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.892526627s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.934310913s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.931870461s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973670959s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.832262993s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874183655s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.832244873s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874183655s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.9( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.892164230s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.934318542s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.931450844s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973625183s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.931434631s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973625183s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.9( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.892139435s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.934318542s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.831585884s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874191284s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850252151s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.892845154s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.831542969s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874191284s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850205421s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.892845154s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850183487s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.892852783s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[5.1d( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850150108s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.892852783s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850022316s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.892990112s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850002289s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.892990112s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.b( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.892066956s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.935058594s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.b( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.892026901s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.935058594s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[3.18( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.831083298s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874198914s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.930510521s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973655701s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.831060410s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874198914s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[3.7( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.930496216s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973655701s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.849649429s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.892967224s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850439072s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.893844604s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850421906s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.893844604s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.1( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.890896797s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.934371948s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.930194855s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973701477s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.1( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.890876770s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.934371948s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.930171013s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973701477s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.849508286s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.892967224s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.f( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850162506s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.893852234s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.f( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850147247s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.893852234s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.b( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850120544s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.893989563s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850090027s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.893951416s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.b( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850106239s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.893989563s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.850067139s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.893951416s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.849949837s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.893974304s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.9( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.849936485s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.894020081s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.849911690s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.893974304s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.9( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.849916458s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.894020081s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.929553986s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973815918s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.929537773s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973815918s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.929369926s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973678589s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.929319382s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973678589s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.829936028s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874389648s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.2( v 29'6 (0'0,29'6] local-lis/les=44/46 n=1 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.849481583s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.894004822s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.829897881s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874389648s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.2( v 29'6 (0'0,29'6] local-lis/les=44/46 n=1 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.849466324s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.894004822s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.3( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.893507004s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.938140869s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.929109573s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973785400s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.3( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.893442154s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.938140869s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.852976799s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.897720337s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.929034233s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973785400s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.852941513s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.897720337s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.828452110s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874267578s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.828423500s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874267578s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[3.5( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[3.6( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[3.3( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[3.1( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[2.17( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[5.11( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[2.15( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[5.12( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[5.13( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[3.9( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[2.d( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[3.8( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[5.9( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[5.16( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[5.c( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[2.7( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[5.f( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[3.a( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[3.c( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[2.3( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[2.4( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.820683479s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874465942s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.820662498s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874465942s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.840149879s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.894081116s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.840128899s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.894081116s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.7( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.883559227s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.937530518s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.919830322s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973823547s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.840007782s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.894004822s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.7( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.883532524s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.937530518s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.6( v 29'6 (0'0,29'6] local-lis/les=44/46 n=1 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.840000153s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.894027710s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.919775963s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973823547s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.820406914s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874496460s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.6( v 29'6 (0'0,29'6] local-lis/les=44/46 n=1 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.839970589s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.894027710s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.839931488s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.894004822s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.820391655s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874496460s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[2.5( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.819844246s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874488831s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.819828033s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874488831s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.882693291s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 36'483 active pruub 95.937469482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.4( v 29'6 (0'0,29'6] local-lis/les=44/46 n=1 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.842562675s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.897338867s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.919004440s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973831177s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.4( v 29'6 (0'0,29'6] local-lis/les=44/46 n=1 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.842528343s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.897338867s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.918985367s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973831177s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.882636070s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 36'483 unknown NOTIFY pruub 95.937469482s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.1b( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.839209557s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.894165039s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.918915749s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973884583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.1b( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.839185715s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.894165039s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.918859482s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973884583s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.842220306s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.897331238s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.842189789s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.897331238s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[5.1( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.11( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.818985939s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874557495s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.11( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.818968773s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874557495s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.1a( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.841668129s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.897331238s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.1b( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.882208824s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.937881470s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.1b( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.882196426s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.937881470s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.1a( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.841639519s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.897331238s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.918037415s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973854065s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.918023109s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973854065s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.818915367s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.874778748s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.818902969s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.874778748s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.917882919s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.973854065s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.917868614s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.973854065s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.18( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.841882706s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.897918701s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.18( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.841868401s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.897918701s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.19( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.881766319s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.937866211s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.19( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.881749153s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.937866211s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.919200897s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.975425720s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.919186592s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.975425720s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.1f( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.841117859s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.897453308s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.1f( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.841068268s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.897453308s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.881516457s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.937927246s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.881500244s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.937927246s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.840879440s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.897415161s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.840858459s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.897415161s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.821982384s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.878616333s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.821963310s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.878616333s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.821883202s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.878616333s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.821867943s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.878616333s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.1d( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.840952873s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.897743225s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.918628693s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.975494385s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.918600082s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.975494385s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.1d( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.840928078s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.897743225s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.1d( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.881059647s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 active pruub 95.937995911s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[9.1d( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=8.881040573s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 95.937995911s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.840905190s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 active pruub 102.897933960s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.840887070s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 unknown NOTIFY pruub 102.897933960s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.17( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.821506500s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 97.878608704s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[3.17( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=10.821475029s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 97.878608704s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.1c( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.840411186s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 active pruub 102.897888184s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.917926788s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 active pruub 97.975471497s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[8.1c( v 29'6 (0'0,29'6] local-lis/les=44/46 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50 pruub=15.840333939s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 unknown NOTIFY pruub 102.897888184s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=10.917904854s) [2] r=-1 lpr=50 pi=[48,50)/1 crt=0'0 unknown NOTIFY pruub 97.975471497s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[2.6( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[2.9( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[3.e( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[3.11( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[2.a( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[3.f( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[2.1b( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[3.16( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[5.1a( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[5.19( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 50 pg[5.18( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[3.12( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[3.15( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:35 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 50 pg[3.17( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[4.1c( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[3.1e( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[8.15( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[4.18( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[4.1b( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[4.1a( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[5.1e( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[2.18( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[2.19( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.19( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.19( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.1d( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.1d( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.1b( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.1b( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 36'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 36'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.7( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.7( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.3( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.3( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.1( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.1( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.b( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.9( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.b( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.9( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.f( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.f( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.d( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.d( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.11( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.11( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.17( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.17( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.13( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[9.13( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[8.11( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[3.1d( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[3.8( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[3.5( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[3.7( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[8.2( v 29'6 (0'0,29'6] local-lis/les=50/51 n=1 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[4.e( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[4.1( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[8.d( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[4.a( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[8.4( v 29'6 (0'0,29'6] local-lis/les=50/51 n=1 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[3.e( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[3.11( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[8.1b( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[8.1c( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[4.13( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[3.16( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[4.11( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[8.12( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[3.18( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [2] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[8.10( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[3.1b( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[3.f( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[10.9( v 48'19 lc 33'8 (0'0,48'19] local-lis/les=50/51 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=48'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[8.b( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[3.c( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[10.8( v 36'18 (0'0,36'18] local-lis/les=50/51 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[4.10( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[5.7( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[3.1( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[10.15( v 48'19 lc 33'3 (0'0,48'19] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=48'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[2.1d( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[10.4( v 36'18 (0'0,36'18] local-lis/les=50/51 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[8.6( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=50/51 n=1 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=29'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[5.4( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[8.9( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[10.7( v 36'18 (0'0,36'18] local-lis/les=50/51 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[2.f( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[2.1c( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[5.5( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[2.1f( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[10.17( v 36'18 (0'0,36'18] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[2.2( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[10.d( v 48'19 lc 33'5 (0'0,48'19] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=48'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[3.6( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[5.2( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[8.f( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=29'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[5.3( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[10.e( v 48'19 lc 33'4 (0'0,48'19] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=48'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[8.e( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[8.c( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[2.b( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[2.8( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[3.a( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[3.9( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[10.1( v 36'18 (0'0,36'18] local-lis/les=50/51 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[2.16( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[10.1e( v 36'18 (0'0,36'18] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[3.15( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[8.1d( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[8.1f( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[3.3( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[5.15( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[3.17( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[8.18( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[3.12( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[5.14( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[8.1a( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[3.1f( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[8.14( v 29'6 (0'0,29'6] local-lis/les=50/51 n=0 ec=44/28 lis/c=44/44 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[10.16( v 36'18 (0'0,36'18] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=50/51 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[2.13( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 51 pg[2.11( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[4.14( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[4.9( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[6.b( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=30'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[6.9( v 30'39 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[4.5( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[6.7( v 30'39 lc 28'20 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[4.12( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[4.7( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[4.8( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[2.a( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[4.d( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[6.f( v 30'39 lc 28'1 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[10.2( v 36'18 (0'0,36'18] local-lis/les=50/51 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[4.f( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[6.d( v 30'39 lc 28'13 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[2.9( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[4.4( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[4.2( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[6.3( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=50/51 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=30'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[5.1( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[10.13( v 36'18 (0'0,36'18] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[6.5( v 30'39 lc 28'11 (0'0,30'39] local-lis/les=50/51 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[6.1( v 30'39 (0'0,30'39] local-lis/les=50/51 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[5.1a( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[10.14( v 48'19 lc 33'7 (0'0,48'19] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=48'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[2.1b( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[5.18( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[5.19( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[2.4( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[2.6( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[10.b( v 36'18 (0'0,36'18] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[2.7( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[2.d( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[5.c( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[2.3( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[5.9( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[10.6( v 36'18 (0'0,36'18] local-lis/les=50/51 n=1 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[5.f( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[10.19( v 36'18 (0'0,36'18] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[5.16( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[10.f( v 36'18 (0'0,36'18] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[5.13( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[2.5( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[10.12( v 48'19 lc 36'17 (0'0,48'19] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=48'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[5.12( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[10.1a( v 36'18 (0'0,36'18] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[2.15( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[5.11( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[10.10( v 36'18 (0'0,36'18] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[5.1d( empty local-lis/les=50/51 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[10.11( v 36'18 (0'0,36'18] local-lis/les=50/51 n=0 ec=46/32 lis/c=46/46 les/c/f=48/48/0 sis=50) [1] r=0 lpr=50 pi=[46,50)/1 crt=36'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 51 pg[2.17( empty local-lis/les=50/51 n=0 ec=38/14 lis/c=38/38 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 27 08:15:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 27 08:15:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 27 08:15:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 27 08:15:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 27 08:15:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 27 08:15:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 27 08:15:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 27 08:15:37 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 52 pg[6.a( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.787698746s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=30'39 lcod 0'0 active pruub 104.924446106s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 52 pg[6.a( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.787643433s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 104.924446106s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 52 pg[6.6( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.788977623s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=30'39 lcod 0'0 active pruub 104.926467896s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 52 pg[6.6( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.788927078s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 104.926467896s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 52 pg[6.2( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.788765907s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=30'39 lcod 0'0 active pruub 104.926582336s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 52 pg[6.e( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.788722038s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=30'39 lcod 0'0 active pruub 104.926589966s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 52 pg[6.2( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.788714409s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 104.926582336s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 52 pg[6.e( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.788673401s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 104.926589966s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.1b( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.f( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.9( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.1( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.19( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.d( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.7( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.13( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.b( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.5( v 47'484 (0'0,47'484] local-lis/les=51/52 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=47'484 lcod 36'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.11( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.17( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.1d( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 52 pg[9.3( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-mgr[75385]: [progress INFO root] Completed event 860b3652-e5c0-4828-86df-62f8770d7ad5 (Global Recovery Event) in 15 seconds
Jan 27 08:15:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:15:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 27 08:15:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 27 08:15:37 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.17( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.397862434s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 active pruub 105.101715088s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.17( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.397787094s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.101715088s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.392918587s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 active pruub 105.097160339s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.13( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.397212029s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 active pruub 105.101585388s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.13( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.397087097s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.101585388s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.392572403s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.097160339s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.9( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.396593094s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 active pruub 105.101379395s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.9( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.396551132s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.101379395s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.b( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.396718979s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 active pruub 105.101654053s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.b( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.396691322s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.101654053s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.f( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.395898819s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 active pruub 105.101173401s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.f( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.395843506s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.101173401s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.1( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.395767212s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 active pruub 105.101242065s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.1( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.395538330s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.101242065s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.5( v 52'485 (0'0,52'485] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.395028114s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=47'484 lcod 47'484 active pruub 105.101661682s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.5( v 52'485 (0'0,52'485] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.394966125s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=47'484 lcod 47'484 unknown NOTIFY pruub 105.101661682s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.1b( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.390417099s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 active pruub 105.097274780s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.1b( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.390368462s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.097274780s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.19( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.394456863s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 active pruub 105.101478577s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.19( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.394397736s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.101478577s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[6.2( v 30'39 (0'0,30'39] local-lis/les=52/53 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.393465996s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 active pruub 105.101135254s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.393339157s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.101135254s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[6.6( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=52/53 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=30'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[6.e( v 30'39 lc 28'19 (0'0,30'39] local-lis/les=52/53 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=30'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.1( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.19( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.1( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.19( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.f( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.f( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.9( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.b( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.9( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.b( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.17( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.5( v 52'485 (0'0,52'485] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=47'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.5( v 52'485 (0'0,52'485] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=47'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.17( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.13( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.13( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.1b( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 53 pg[9.1b( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:37 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 53 pg[6.a( v 30'39 (0'0,30'39] local-lis/les=52/53 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 27 08:15:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 27 08:15:38 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 27 08:15:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 27 08:15:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 27 08:15:38 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 27 08:15:38 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 27 08:15:38 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 54 pg[9.11( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.382763863s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=36'483 lcod 0'0 active pruub 105.101699829s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.11( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.11( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.7( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.7( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:38 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 54 pg[9.d( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.382144928s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=36'483 lcod 0'0 active pruub 105.101486206s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:38 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 54 pg[9.d( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.382050514s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.101486206s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.3( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.1d( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.1d( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.3( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.d( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.d( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:38 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 54 pg[9.11( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.381459236s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.101699829s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:38 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 54 pg[9.3( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.380300522s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=36'483 lcod 0'0 active pruub 105.101280212s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:38 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 54 pg[9.7( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.380515099s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=36'483 lcod 0'0 active pruub 105.101547241s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:38 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 54 pg[9.3( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.380160332s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.101280212s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:38 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 54 pg[9.7( v 36'483 (0'0,36'483] local-lis/les=51/52 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.380001068s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.101547241s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:38 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 54 pg[9.1d( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.379340172s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=36'483 lcod 0'0 active pruub 105.101158142s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:38 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 54 pg[9.1d( v 36'483 (0'0,36'483] local-lis/les=51/52 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.379253387s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 105.101158142s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.13( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.5( v 52'485 (0'0,52'485] local-lis/les=53/54 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=52'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.b( v 36'483 (0'0,36'483] local-lis/les=53/54 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.17( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.9( v 36'483 (0'0,36'483] local-lis/les=53/54 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.f( v 36'483 (0'0,36'483] local-lis/les=53/54 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.19( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.1( v 36'483 (0'0,36'483] local-lis/les=53/54 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 54 pg[9.1b( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v116: 305 pgs: 5 active+recovery_wait+remapped, 1 active+recovery_wait+degraded, 1 active+recovering, 298 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1/249 objects degraded (0.402%); 30/249 objects misplaced (12.048%); 326 B/s, 2 keys/s, 5 objects/s recovering
Jan 27 08:15:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 27 08:15:39 np0005597378 ceph-mon[75090]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 1/249 objects degraded (0.402%), 1 pg degraded (PG_DEGRADED)
Jan 27 08:15:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 27 08:15:39 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 27 08:15:39 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 55 pg[9.11( v 36'483 (0'0,36'483] local-lis/les=54/55 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:39 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 55 pg[9.7( v 36'483 (0'0,36'483] local-lis/les=54/55 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:39 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 55 pg[9.d( v 36'483 (0'0,36'483] local-lis/les=54/55 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:39 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 55 pg[9.3( v 36'483 (0'0,36'483] local-lis/les=54/55 n=7 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:39 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 55 pg[9.1d( v 36'483 (0'0,36'483] local-lis/les=54/55 n=6 ec=46/30 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:40 np0005597378 ceph-mon[75090]: Health check failed: Degraded data redundancy: 1/249 objects degraded (0.402%), 1 pg degraded (PG_DEGRADED)
Jan 27 08:15:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 5 peering, 1 active+recovery_wait+degraded, 1 active+recovering, 298 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1/249 objects degraded (0.402%); 1/249 objects misplaced (0.402%); 1.3 KiB/s, 2 keys/s, 30 objects/s recovering
Jan 27 08:15:42 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 27 08:15:42 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 27 08:15:42 np0005597378 ceph-mgr[75385]: [progress INFO root] Writing back 17 completed events
Jan 27 08:15:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 27 08:15:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:15:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v119: 305 pgs: 5 peering, 300 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 907 B/s, 1 keys/s, 21 objects/s recovering
Jan 27 08:15:43 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 27 08:15:43 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 27 08:15:43 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 27 08:15:43 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 27 08:15:43 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/249 objects degraded (0.402%), 1 pg degraded)
Jan 27 08:15:43 np0005597378 ceph-mon[75090]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 27 08:15:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:15:44 np0005597378 ceph-mon[75090]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/249 objects degraded (0.402%), 1 pg degraded)
Jan 27 08:15:44 np0005597378 ceph-mon[75090]: Cluster is now healthy
Jan 27 08:15:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v120: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 733 B/s, 1 keys/s, 17 objects/s recovering
Jan 27 08:15:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 27 08:15:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 27 08:15:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 27 08:15:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 27 08:15:45 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 27 08:15:45 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 27 08:15:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 27 08:15:45 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 27 08:15:45 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 27 08:15:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 27 08:15:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 27 08:15:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 27 08:15:45 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 27 08:15:46 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 27 08:15:46 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 27 08:15:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 494 B/s, 12 objects/s recovering
Jan 27 08:15:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 27 08:15:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 27 08:15:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 27 08:15:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 27 08:15:47 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 27 08:15:47 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 27 08:15:47 np0005597378 systemd[76472]: Starting Mark boot as successful...
Jan 27 08:15:47 np0005597378 systemd[76472]: Finished Mark boot as successful.
Jan 27 08:15:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:15:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:15:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:15:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:15:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:15:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:15:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:15:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 27 08:15:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 27 08:15:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 27 08:15:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 27 08:15:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 27 08:15:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 27 08:15:47 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 27 08:15:48 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 56 pg[6.f( v 30'39 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=11.536466599s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=30'39 active pruub 112.069435120s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:48 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 57 pg[6.f( v 30'39 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=11.536387444s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=30'39 unknown NOTIFY pruub 112.069435120s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:48 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 56 pg[6.7( v 30'39 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=11.536008835s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=30'39 active pruub 112.069107056s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:48 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 56 pg[6.3( v 30'39 (0'0,30'39] local-lis/les=50/51 n=2 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=11.536713600s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=30'39 active pruub 112.069877625s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:48 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 57 pg[6.7( v 30'39 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=11.535519600s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=30'39 unknown NOTIFY pruub 112.069107056s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:48 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 57 pg[6.3( v 30'39 (0'0,30'39] local-lis/les=50/51 n=2 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=11.536108971s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=30'39 unknown NOTIFY pruub 112.069877625s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:48 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 56 pg[6.b( v 30'39 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=11.535050392s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=30'39 active pruub 112.068923950s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:48 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 57 pg[6.b( v 30'39 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=11.534884453s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=30'39 unknown NOTIFY pruub 112.068923950s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:48 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 57 pg[6.f( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=57 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:48 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 57 pg[6.3( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=57 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:48 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=57 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:48 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 57 pg[6.7( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=57 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:48 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 57 pg[6.4( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=15.342707634s) [1] r=-1 lpr=57 pi=[42,57)/1 crt=30'39 lcod 0'0 active pruub 120.925094604s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:48 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 57 pg[6.4( v 30'39 (0'0,30'39] local-lis/les=42/43 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=15.342446327s) [1] r=-1 lpr=57 pi=[42,57)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 120.925094604s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:48 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 57 pg[6.4( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:48 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 57 pg[6.c( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=15.343018532s) [1] r=-1 lpr=57 pi=[42,57)/1 crt=30'39 lcod 0'0 active pruub 120.926956177s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:48 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 57 pg[6.c( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=15.341960907s) [1] r=-1 lpr=57 pi=[42,57)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 120.926956177s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:48 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 57 pg[6.c( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 27 08:15:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 27 08:15:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 27 08:15:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 27 08:15:48 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 27 08:15:48 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 58 pg[6.b( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=56/58 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=57 pi=[50,56)/1 crt=30'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:48 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 58 pg[6.4( v 30'39 lc 28'15 (0'0,30'39] local-lis/les=57/58 n=2 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=30'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:48 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 58 pg[6.7( v 30'39 lc 28'20 (0'0,30'39] local-lis/les=56/58 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=57 pi=[50,56)/1 crt=30'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:48 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 58 pg[6.f( v 30'39 lc 28'1 (0'0,30'39] local-lis/les=56/58 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=57 pi=[50,56)/1 crt=30'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:48 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 58 pg[6.3( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=56/58 n=2 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=57 pi=[50,56)/1 crt=30'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:48 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 58 pg[6.c( v 30'39 lc 28'17 (0'0,30'39] local-lis/les=57/58 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=30'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v125: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:15:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 27 08:15:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 27 08:15:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 27 08:15:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 27 08:15:49 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 27 08:15:49 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 27 08:15:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 27 08:15:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 27 08:15:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 27 08:15:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 27 08:15:49 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 27 08:15:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 27 08:15:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 27 08:15:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 27 08:15:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 27 08:15:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v127: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 127 B/s, 1 keys/s, 1 objects/s recovering
Jan 27 08:15:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 27 08:15:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 27 08:15:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 27 08:15:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 27 08:15:51 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 59 pg[6.5( v 30'39 (0'0,30'39] local-lis/les=50/51 n=2 ec=42/19 lis/c=50/50 les/c/f=51/52/0 sis=59 pruub=9.231604576s) [0] r=-1 lpr=59 pi=[50,59)/1 crt=30'39 active pruub 112.070022583s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:51 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 59 pg[6.5( v 30'39 (0'0,30'39] local-lis/les=50/51 n=2 ec=42/19 lis/c=50/50 les/c/f=51/52/0 sis=59 pruub=9.231559753s) [0] r=-1 lpr=59 pi=[50,59)/1 crt=30'39 unknown NOTIFY pruub 112.070022583s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:51 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 59 pg[6.d( v 30'39 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=9.231099129s) [0] r=-1 lpr=59 pi=[50,59)/1 crt=30'39 active pruub 112.069633484s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:51 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 59 pg[6.d( v 30'39 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=9.231070518s) [0] r=-1 lpr=59 pi=[50,59)/1 crt=30'39 unknown NOTIFY pruub 112.069633484s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:51 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 59 pg[6.5( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=50/50 les/c/f=51/52/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:51 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 27 08:15:51 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 27 08:15:51 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 27 08:15:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 27 08:15:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 27 08:15:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 27 08:15:51 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 60 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.164998055s) [2] r=-1 lpr=60 pi=[46,60)/1 crt=36'483 lcod 0'0 active pruub 111.934326172s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:51 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 60 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.164929390s) [2] r=-1 lpr=60 pi=[46,60)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 111.934326172s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:51 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 60 pg[9.e( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.164560318s) [2] r=-1 lpr=60 pi=[46,60)/1 crt=36'483 lcod 0'0 active pruub 111.934791565s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:51 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 60 pg[9.e( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.164523125s) [2] r=-1 lpr=60 pi=[46,60)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 111.934791565s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:51 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=60) [2] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:51 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 60 pg[9.6( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.167145729s) [2] r=-1 lpr=60 pi=[46,60)/1 crt=36'483 lcod 0'0 active pruub 111.937751770s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:51 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 60 pg[9.6( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.167096138s) [2] r=-1 lpr=60 pi=[46,60)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 111.937751770s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:51 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 60 pg[9.1e( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.167349815s) [2] r=-1 lpr=60 pi=[46,60)/1 crt=36'483 lcod 0'0 active pruub 111.938583374s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:51 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 60 pg[9.1e( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.167328835s) [2] r=-1 lpr=60 pi=[46,60)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 111.938583374s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:51 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 27 08:15:51 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=60) [2] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:51 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=60) [2] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:51 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=60) [2] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:51 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 60 pg[6.5( v 30'39 lc 28'11 (0'0,30'39] local-lis/les=59/60 n=2 ec=42/19 lis/c=50/50 les/c/f=51/52/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=30'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:51 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 60 pg[6.d( v 30'39 lc 28'13 (0'0,30'39] local-lis/les=59/60 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=30'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:52 np0005597378 python3[98183]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:15:52 np0005597378 podman[98184]: 2026-01-27 13:15:52.792799703 +0000 UTC m=+0.049689528 container create c48d3092e8c711b6483f53d0f39729f6c8128a1e022baac9248013332d7fae9c (image=quay.io/ceph/ceph:v20, name=compassionate_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:15:52 np0005597378 systemd[1]: Started libpod-conmon-c48d3092e8c711b6483f53d0f39729f6c8128a1e022baac9248013332d7fae9c.scope.
Jan 27 08:15:52 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:15:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4378c041453c058e13468f354e389304d5f8914250f667189b82ec25d303297b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4378c041453c058e13468f354e389304d5f8914250f667189b82ec25d303297b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:52 np0005597378 podman[98184]: 2026-01-27 13:15:52.77228253 +0000 UTC m=+0.029172385 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:15:52 np0005597378 podman[98184]: 2026-01-27 13:15:52.872809425 +0000 UTC m=+0.129699290 container init c48d3092e8c711b6483f53d0f39729f6c8128a1e022baac9248013332d7fae9c (image=quay.io/ceph/ceph:v20, name=compassionate_joliot, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:15:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:15:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 27 08:15:52 np0005597378 podman[98184]: 2026-01-27 13:15:52.880006745 +0000 UTC m=+0.136896570 container start c48d3092e8c711b6483f53d0f39729f6c8128a1e022baac9248013332d7fae9c (image=quay.io/ceph/ceph:v20, name=compassionate_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 08:15:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 27 08:15:52 np0005597378 podman[98184]: 2026-01-27 13:15:52.885617002 +0000 UTC m=+0.142506827 container attach c48d3092e8c711b6483f53d0f39729f6c8128a1e022baac9248013332d7fae9c (image=quay.io/ceph/ceph:v20, name=compassionate_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:15:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 27 08:15:52 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 61 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=0 lpr=61 pi=[46,61)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:52 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 61 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=0 lpr=61 pi=[46,61)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:52 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 61 pg[9.e( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=0 lpr=61 pi=[46,61)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:52 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 61 pg[9.e( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=0 lpr=61 pi=[46,61)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:52 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 61 pg[9.6( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=0 lpr=61 pi=[46,61)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:52 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 61 pg[9.6( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=0 lpr=61 pi=[46,61)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:52 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 61 pg[9.1e( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=0 lpr=61 pi=[46,61)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:52 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 61 pg[9.1e( v 36'483 (0'0,36'483] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=0 lpr=61 pi=[46,61)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:52 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[46,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:52 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[46,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:52 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[46,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:52 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[46,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:52 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[46,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:52 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[46,61)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:52 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[46,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:52 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[46,61)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 4 remapped+peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 562 B/s, 2 keys/s, 2 objects/s recovering
Jan 27 08:15:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 27 08:15:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 27 08:15:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 27 08:15:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 27 08:15:53 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 27 08:15:53 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 62 pg[9.6( v 36'483 (0'0,36'483] local-lis/les=61/62 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] async=[2] r=0 lpr=61 pi=[46,61)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:53 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 62 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=61/62 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] async=[2] r=0 lpr=61 pi=[46,61)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:53 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 62 pg[9.e( v 36'483 (0'0,36'483] local-lis/les=61/62 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] async=[2] r=0 lpr=61 pi=[46,61)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:53 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 62 pg[9.1e( v 36'483 (0'0,36'483] local-lis/les=61/62 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=61) [2]/[1] async=[2] r=0 lpr=61 pi=[46,61)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:54 np0005597378 compassionate_joliot[98199]: could not fetch user info: no user info saved
Jan 27 08:15:54 np0005597378 systemd[1]: libpod-c48d3092e8c711b6483f53d0f39729f6c8128a1e022baac9248013332d7fae9c.scope: Deactivated successfully.
Jan 27 08:15:54 np0005597378 podman[98184]: 2026-01-27 13:15:54.057932678 +0000 UTC m=+1.314822503 container died c48d3092e8c711b6483f53d0f39729f6c8128a1e022baac9248013332d7fae9c (image=quay.io/ceph/ceph:v20, name=compassionate_joliot, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:15:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4378c041453c058e13468f354e389304d5f8914250f667189b82ec25d303297b-merged.mount: Deactivated successfully.
Jan 27 08:15:54 np0005597378 podman[98184]: 2026-01-27 13:15:54.09956639 +0000 UTC m=+1.356456205 container remove c48d3092e8c711b6483f53d0f39729f6c8128a1e022baac9248013332d7fae9c (image=quay.io/ceph/ceph:v20, name=compassionate_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:15:54 np0005597378 systemd[1]: libpod-conmon-c48d3092e8c711b6483f53d0f39729f6c8128a1e022baac9248013332d7fae9c.scope: Deactivated successfully.
Jan 27 08:15:54 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 27 08:15:54 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 27 08:15:54 np0005597378 python3[98321]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 4d8fd694-f443-5fb1-b612-70034b2f3c6e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:15:54 np0005597378 podman[98322]: 2026-01-27 13:15:54.454251955 +0000 UTC m=+0.043710670 container create 1456ae1fc836d0cf0daa2225b97ef14616c6118bb56ad47762af39ce20d71d69 (image=quay.io/ceph/ceph:v20, name=infallible_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:15:54 np0005597378 systemd[1]: Started libpod-conmon-1456ae1fc836d0cf0daa2225b97ef14616c6118bb56ad47762af39ce20d71d69.scope.
Jan 27 08:15:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:15:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a700df0209567d8a5c94f68ee0ba29fda91c89b06b417fb2b0129c6051bc5401/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a700df0209567d8a5c94f68ee0ba29fda91c89b06b417fb2b0129c6051bc5401/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:15:54 np0005597378 podman[98322]: 2026-01-27 13:15:54.434646698 +0000 UTC m=+0.024105453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 27 08:15:54 np0005597378 podman[98322]: 2026-01-27 13:15:54.533741303 +0000 UTC m=+0.123200038 container init 1456ae1fc836d0cf0daa2225b97ef14616c6118bb56ad47762af39ce20d71d69 (image=quay.io/ceph/ceph:v20, name=infallible_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 08:15:54 np0005597378 podman[98322]: 2026-01-27 13:15:54.5390172 +0000 UTC m=+0.128475915 container start 1456ae1fc836d0cf0daa2225b97ef14616c6118bb56ad47762af39ce20d71d69 (image=quay.io/ceph/ceph:v20, name=infallible_banach, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 08:15:54 np0005597378 podman[98322]: 2026-01-27 13:15:54.542821287 +0000 UTC m=+0.132280032 container attach 1456ae1fc836d0cf0daa2225b97ef14616c6118bb56ad47762af39ce20d71d69 (image=quay.io/ceph/ceph:v20, name=infallible_banach, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:15:54 np0005597378 infallible_banach[98338]: {
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "user_id": "openstack",
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "display_name": "openstack",
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "email": "",
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "suspended": 0,
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "max_buckets": 1000,
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "subusers": [],
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "keys": [
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:        {
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:            "user": "openstack",
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:            "access_key": "0RBBW2DPALJH1Q3217MA",
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:            "secret_key": "xCQhzcNtGfbl4ziBIv3DBTqf0fCiXJYtTqxnXfCu",
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:            "active": true,
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:            "create_date": "2026-01-27T13:15:54.730170Z"
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:        }
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    ],
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "swift_keys": [],
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "caps": [],
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "op_mask": "read, write, delete",
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "default_placement": "",
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "default_storage_class": "",
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "placement_tags": [],
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "bucket_quota": {
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:        "enabled": false,
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:        "check_on_raw": false,
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:        "max_size": -1,
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:        "max_size_kb": 0,
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:        "max_objects": -1
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    },
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "user_quota": {
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:        "enabled": false,
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:        "check_on_raw": false,
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:        "max_size": -1,
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:        "max_size_kb": 0,
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:        "max_objects": -1
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    },
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "temp_url_keys": [],
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "type": "rgw",
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "mfa_ids": [],
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "account_id": "",
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "path": "/",
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "create_date": "2026-01-27T13:15:54.729595Z",
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "tags": [],
Jan 27 08:15:54 np0005597378 infallible_banach[98338]:    "group_ids": []
Jan 27 08:15:54 np0005597378 infallible_banach[98338]: }
Jan 27 08:15:54 np0005597378 infallible_banach[98338]: 
Jan 27 08:15:54 np0005597378 systemd[1]: libpod-1456ae1fc836d0cf0daa2225b97ef14616c6118bb56ad47762af39ce20d71d69.scope: Deactivated successfully.
Jan 27 08:15:54 np0005597378 podman[98322]: 2026-01-27 13:15:54.75870824 +0000 UTC m=+0.348166955 container died 1456ae1fc836d0cf0daa2225b97ef14616c6118bb56ad47762af39ce20d71d69 (image=quay.io/ceph/ceph:v20, name=infallible_banach, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:15:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a700df0209567d8a5c94f68ee0ba29fda91c89b06b417fb2b0129c6051bc5401-merged.mount: Deactivated successfully.
Jan 27 08:15:54 np0005597378 podman[98322]: 2026-01-27 13:15:54.796182255 +0000 UTC m=+0.385640970 container remove 1456ae1fc836d0cf0daa2225b97ef14616c6118bb56ad47762af39ce20d71d69 (image=quay.io/ceph/ceph:v20, name=infallible_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 08:15:54 np0005597378 systemd[1]: libpod-conmon-1456ae1fc836d0cf0daa2225b97ef14616c6118bb56ad47762af39ce20d71d69.scope: Deactivated successfully.
Jan 27 08:15:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v132: 305 pgs: 4 remapped+peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 22 op/s; 354 B/s, 1 objects/s recovering
Jan 27 08:15:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 27 08:15:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 27 08:15:54 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 27 08:15:54 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 63 pg[9.e( v 62'489 (0'0,62'489] local-lis/les=0/0 n=7 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63) [2] r=0 lpr=63 pi=[46,63)/1 pct=0'0 crt=62'488 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:54 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 63 pg[9.e( v 62'489 (0'0,62'489] local-lis/les=0/0 n=7 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63) [2] r=0 lpr=63 pi=[46,63)/1 crt=62'488 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:54 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 63 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63) [2] r=0 lpr=63 pi=[46,63)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:54 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 63 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63) [2] r=0 lpr=63 pi=[46,63)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:54 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 63 pg[9.6( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63) [2] r=0 lpr=63 pi=[46,63)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:54 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 63 pg[9.6( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63) [2] r=0 lpr=63 pi=[46,63)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:54 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 63 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=0/0 n=6 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63) [2] r=0 lpr=63 pi=[46,63)/1 pct=0'0 crt=62'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:54 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 63 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=0/0 n=6 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63) [2] r=0 lpr=63 pi=[46,63)/1 crt=62'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:54 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 63 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=61/62 n=6 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63 pruub=14.935853004s) [2] async=[2] r=-1 lpr=63 pi=[46,63)/1 crt=62'484 lcod 62'484 active pruub 121.726577759s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:54 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 63 pg[9.6( v 36'483 (0'0,36'483] local-lis/les=61/62 n=7 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63 pruub=14.930058479s) [2] async=[2] r=-1 lpr=63 pi=[46,63)/1 crt=36'483 lcod 0'0 active pruub 121.720832825s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:54 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 63 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=61/62 n=6 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63 pruub=14.935789108s) [2] r=-1 lpr=63 pi=[46,63)/1 crt=62'484 lcod 62'484 unknown NOTIFY pruub 121.726577759s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:54 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 63 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=61/62 n=6 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63 pruub=14.932979584s) [2] async=[2] r=-1 lpr=63 pi=[46,63)/1 crt=36'483 lcod 0'0 active pruub 121.723777771s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:54 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 63 pg[9.e( v 62'489 (0'0,62'489] local-lis/les=61/62 n=7 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63 pruub=14.932995796s) [2] async=[2] r=-1 lpr=63 pi=[46,63)/1 crt=62'488 lcod 62'488 active pruub 121.723793030s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:54 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 63 pg[9.6( v 36'483 (0'0,36'483] local-lis/les=61/62 n=7 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63 pruub=14.929990768s) [2] r=-1 lpr=63 pi=[46,63)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 121.720832825s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:54 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 63 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=61/62 n=6 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63 pruub=14.932931900s) [2] r=-1 lpr=63 pi=[46,63)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 121.723777771s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:54 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 63 pg[9.e( v 62'489 (0'0,62'489] local-lis/les=61/62 n=7 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63 pruub=14.932921410s) [2] r=-1 lpr=63 pi=[46,63)/1 crt=62'488 lcod 62'488 unknown NOTIFY pruub 121.723793030s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 27 08:15:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 27 08:15:55 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 27 08:15:55 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 64 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=63/64 n=6 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63) [2] r=0 lpr=63 pi=[46,63)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:55 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 64 pg[9.e( v 62'489 (0'0,62'489] local-lis/les=63/64 n=7 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63) [2] r=0 lpr=63 pi=[46,63)/1 crt=62'489 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:55 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 64 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=63/64 n=6 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63) [2] r=0 lpr=63 pi=[46,63)/1 crt=62'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:55 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 64 pg[9.6( v 36'483 (0'0,36'483] local-lis/les=63/64 n=7 ec=46/30 lis/c=61/46 les/c/f=62/47/0 sis=63) [2] r=0 lpr=63 pi=[46,63)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:15:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 4 remapped+peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 27 op/s; 438 B/s, 2 objects/s recovering
Jan 27 08:15:57 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 27 08:15:57 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 27 08:15:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:15:58 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 27 08:15:58 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 27 08:15:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v136: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 341 B/s wr, 46 op/s; 164 B/s, 4 objects/s recovering
Jan 27 08:15:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 27 08:15:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 27 08:15:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 27 08:15:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 27 08:15:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 27 08:15:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 27 08:15:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 27 08:15:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 27 08:15:59 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 27 08:15:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 27 08:15:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 27 08:15:59 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 65 pg[9.7( v 62'487 (0'0,62'487] local-lis/les=54/55 n=7 ec=46/30 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=12.806687355s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=62'486 lcod 62'486 active pruub 128.788864136s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:59 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 65 pg[9.17( v 62'485 (0'0,62'485] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=11.785198212s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=62'484 lcod 62'484 active pruub 127.767425537s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:59 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 65 pg[9.7( v 62'487 (0'0,62'487] local-lis/les=54/55 n=7 ec=46/30 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=12.806622505s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=62'486 lcod 62'486 unknown NOTIFY pruub 128.788864136s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:59 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 65 pg[9.17( v 62'485 (0'0,62'485] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=11.785128593s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=62'484 lcod 62'484 unknown NOTIFY pruub 127.767425537s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:59 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 65 pg[9.f( v 62'485 (0'0,62'485] local-lis/les=53/54 n=7 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=11.785295486s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=62'484 lcod 62'484 active pruub 127.767860413s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:59 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 65 pg[9.f( v 62'485 (0'0,62'485] local-lis/les=53/54 n=7 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=11.785219193s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=62'484 lcod 62'484 unknown NOTIFY pruub 127.767860413s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:59 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 65 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=11.784882545s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=36'483 active pruub 127.767860413s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:15:59 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 65 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=11.784840584s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=36'483 unknown NOTIFY pruub 127.767860413s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:15:59 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=54/54 les/c/f=55/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:59 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:59 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:59 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:15:59 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 27 08:15:59 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 27 08:15:59 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 27 08:15:59 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 27 08:16:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 27 08:16:00 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 27 08:16:00 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 27 08:16:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 27 08:16:00 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 27 08:16:00 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[53,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:00 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[53,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:00 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[54,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:00 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[54,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:00 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[53,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:00 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[53,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:00 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[53,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:00 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[53,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:00 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 66 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=36'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:00 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 66 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=36'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:00 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 66 pg[9.f( v 62'485 (0'0,62'485] local-lis/les=53/54 n=7 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=62'484 lcod 62'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:00 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 66 pg[9.17( v 62'485 (0'0,62'485] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=62'484 lcod 62'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:00 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 66 pg[9.7( v 62'487 (0'0,62'487] local-lis/les=54/55 n=7 ec=46/30 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=62'486 lcod 62'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:00 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 66 pg[9.f( v 62'485 (0'0,62'485] local-lis/les=53/54 n=7 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=62'484 lcod 62'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:00 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 66 pg[9.17( v 62'485 (0'0,62'485] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=62'484 lcod 62'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:00 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 66 pg[9.7( v 62'487 (0'0,62'487] local-lis/les=54/55 n=7 ec=46/30 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=62'486 lcod 62'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:00 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 27 08:16:00 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 27 08:16:00 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 27 08:16:00 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 27 08:16:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v139: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 342 B/s wr, 28 op/s; 138 B/s, 4 objects/s recovering
Jan 27 08:16:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 27 08:16:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 27 08:16:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 27 08:16:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 27 08:16:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 27 08:16:01 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 27 08:16:01 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 27 08:16:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 27 08:16:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 27 08:16:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 27 08:16:01 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 67 pg[9.8( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=67 pruub=15.065641403s) [2] r=-1 lpr=67 pi=[46,67)/1 crt=36'483 lcod 0'0 active pruub 127.938667297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:01 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 67 pg[9.8( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=67 pruub=15.065486908s) [2] r=-1 lpr=67 pi=[46,67)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 127.938667297s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:01 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 67 pg[9.18( v 62'487 (0'0,62'487] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=67 pruub=15.065509796s) [2] r=-1 lpr=67 pi=[46,67)/1 crt=62'486 lcod 62'486 active pruub 127.938903809s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:01 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 67 pg[9.18( v 62'487 (0'0,62'487] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=67 pruub=15.065467834s) [2] r=-1 lpr=67 pi=[46,67)/1 crt=62'486 lcod 62'486 unknown NOTIFY pruub 127.938903809s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:01 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 27 08:16:01 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=67) [2] r=0 lpr=67 pi=[46,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:01 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 67 pg[6.8( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=67 pruub=11.001783371s) [2] r=-1 lpr=67 pi=[42,67)/1 crt=30'39 lcod 0'0 active pruub 128.925308228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:01 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 67 pg[6.8( v 30'39 (0'0,30'39] local-lis/les=42/43 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=67 pruub=11.001719475s) [2] r=-1 lpr=67 pi=[42,67)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 128.925308228s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:01 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=67) [2] r=0 lpr=67 pi=[46,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:01 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=67) [2] r=0 lpr=67 pi=[42,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:01 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 67 pg[9.17( v 62'485 (0'0,62'485] local-lis/les=66/67 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=62'485 lcod 62'484 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:01 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 67 pg[9.f( v 62'485 (0'0,62'485] local-lis/les=66/67 n=7 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=62'485 lcod 62'484 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:01 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 67 pg[9.7( v 62'487 (0'0,62'487] local-lis/les=66/67 n=7 ec=46/30 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=62'487 lcod 62'486 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:01 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 67 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=66/67 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=36'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:01 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 27 08:16:01 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 27 08:16:01 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 27 08:16:01 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 27 08:16:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 27 08:16:02 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 27 08:16:02 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 27 08:16:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 27 08:16:02 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 27 08:16:02 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 68 pg[9.8( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=0 lpr=68 pi=[46,68)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:02 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 68 pg[9.8( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=0 lpr=68 pi=[46,68)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:02 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 68 pg[9.18( v 62'487 (0'0,62'487] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=0 lpr=68 pi=[46,68)/1 crt=62'486 lcod 62'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:02 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 68 pg[9.18( v 62'487 (0'0,62'487] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=0 lpr=68 pi=[46,68)/1 crt=62'486 lcod 62'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 68 pg[9.17( v 62'485 (0'0,62'485] local-lis/les=0/0 n=6 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 pct=0'0 crt=62'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:02 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 68 pg[9.17( v 62'485 (0'0,62'485] local-lis/les=66/67 n=6 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.995712280s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=62'485 lcod 62'484 active pruub 133.930267334s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 68 pg[9.17( v 62'485 (0'0,62'485] local-lis/les=0/0 n=6 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=62'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:02 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 68 pg[9.7( v 62'487 (0'0,62'487] local-lis/les=66/67 n=7 ec=46/30 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=14.995637894s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=62'487 lcod 62'486 active pruub 133.930282593s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[46,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[46,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:02 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 68 pg[9.7( v 62'487 (0'0,62'487] local-lis/les=66/67 n=7 ec=46/30 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=14.995573044s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=62'487 lcod 62'486 unknown NOTIFY pruub 133.930282593s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:02 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 68 pg[9.f( v 62'485 (0'0,62'485] local-lis/les=66/67 n=7 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.995438576s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=62'485 lcod 62'484 active pruub 133.930282593s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:02 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 68 pg[9.17( v 62'485 (0'0,62'485] local-lis/les=66/67 n=6 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.995431900s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=62'485 lcod 62'484 unknown NOTIFY pruub 133.930267334s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 68 pg[9.f( v 62'485 (0'0,62'485] local-lis/les=0/0 n=7 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 pct=0'0 crt=62'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 68 pg[9.f( v 62'485 (0'0,62'485] local-lis/les=0/0 n=7 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=62'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:02 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 68 pg[9.f( v 62'485 (0'0,62'485] local-lis/les=66/67 n=7 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.995359421s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=62'485 lcod 62'484 unknown NOTIFY pruub 133.930282593s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[46,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[46,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:02 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 68 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=66/67 n=6 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996149063s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=36'483 active pruub 133.931411743s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:02 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 68 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=66/67 n=6 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996102333s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=36'483 unknown NOTIFY pruub 133.931411743s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 68 pg[9.7( v 62'487 (0'0,62'487] local-lis/les=0/0 n=7 ec=46/30 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 pct=0'0 crt=62'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 68 pg[9.7( v 62'487 (0'0,62'487] local-lis/les=0/0 n=7 ec=46/30 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=62'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 68 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 68 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 68 pg[6.8( v 30'39 (0'0,30'39] local-lis/les=67/68 n=1 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=67) [2] r=0 lpr=67 pi=[42,67)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:02 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 27 08:16:02 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 27 08:16:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:16:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v142: 305 pgs: 2 remapped+peering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:16:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 27 08:16:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 27 08:16:03 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 27 08:16:03 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 69 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=68/69 n=6 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:03 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 69 pg[9.7( v 62'487 (0'0,62'487] local-lis/les=68/69 n=7 ec=46/30 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=62'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:03 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 69 pg[9.17( v 62'485 (0'0,62'485] local-lis/les=68/69 n=6 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=62'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:03 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 69 pg[9.f( v 62'485 (0'0,62'485] local-lis/les=68/69 n=7 ec=46/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=62'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:03 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 69 pg[9.18( v 62'487 (0'0,62'487] local-lis/les=68/69 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[46,68)/1 crt=62'487 lcod 62'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:03 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 69 pg[9.8( v 36'483 (0'0,36'483] local-lis/les=68/69 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[46,68)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:03 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 27 08:16:03 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 27 08:16:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 27 08:16:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 27 08:16:04 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 27 08:16:04 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 70 pg[9.8( v 36'483 (0'0,36'483] local-lis/les=68/69 n=7 ec=46/30 lis/c=68/46 les/c/f=69/47/0 sis=70 pruub=15.006863594s) [2] async=[2] r=-1 lpr=70 pi=[46,70)/1 crt=36'483 lcod 0'0 active pruub 130.914764404s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:04 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 70 pg[9.8( v 36'483 (0'0,36'483] local-lis/les=68/69 n=7 ec=46/30 lis/c=68/46 les/c/f=69/47/0 sis=70 pruub=15.006768227s) [2] r=-1 lpr=70 pi=[46,70)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 130.914764404s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:04 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 70 pg[9.18( v 62'487 (0'0,62'487] local-lis/les=68/69 n=6 ec=46/30 lis/c=68/46 les/c/f=69/47/0 sis=70 pruub=15.006615639s) [2] async=[2] r=-1 lpr=70 pi=[46,70)/1 crt=62'487 lcod 62'486 active pruub 130.914764404s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:04 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 70 pg[9.18( v 62'487 (0'0,62'487] local-lis/les=68/69 n=6 ec=46/30 lis/c=68/46 les/c/f=69/47/0 sis=70 pruub=15.006548882s) [2] r=-1 lpr=70 pi=[46,70)/1 crt=62'487 lcod 62'486 unknown NOTIFY pruub 130.914764404s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:04 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 70 pg[9.8( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=68/46 les/c/f=69/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:04 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 70 pg[9.18( v 62'487 (0'0,62'487] local-lis/les=0/0 n=6 ec=46/30 lis/c=68/46 les/c/f=69/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 pct=0'0 crt=62'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:04 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 70 pg[9.8( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=68/46 les/c/f=69/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:04 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 70 pg[9.18( v 62'487 (0'0,62'487] local-lis/les=0/0 n=6 ec=46/30 lis/c=68/46 les/c/f=69/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 crt=62'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:04 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 27 08:16:04 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 27 08:16:04 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 27 08:16:04 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 27 08:16:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 2 remapped+peering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 290 B/s, 5 objects/s recovering
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 27 08:16:05 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 71 pg[9.8( v 36'483 (0'0,36'483] local-lis/les=70/71 n=7 ec=46/30 lis/c=68/46 les/c/f=69/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:05 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 71 pg[9.18( v 62'487 (0'0,62'487] local-lis/les=70/71 n=6 ec=46/30 lis/c=68/46 les/c/f=69/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 crt=62'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:16:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:16:05 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 27 08:16:05 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 27 08:16:05 np0005597378 podman[98577]: 2026-01-27 13:16:05.89027684 +0000 UTC m=+0.061580949 container create 8491aa31865b0999fad59ae1d0c7c899c18c9604fbe1e867f7788bae5e9eea8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haslett, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:16:05 np0005597378 systemd[1]: Started libpod-conmon-8491aa31865b0999fad59ae1d0c7c899c18c9604fbe1e867f7788bae5e9eea8d.scope.
Jan 27 08:16:05 np0005597378 podman[98577]: 2026-01-27 13:16:05.861808996 +0000 UTC m=+0.033113145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:16:05 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:16:05 np0005597378 podman[98577]: 2026-01-27 13:16:05.969629604 +0000 UTC m=+0.140933733 container init 8491aa31865b0999fad59ae1d0c7c899c18c9604fbe1e867f7788bae5e9eea8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haslett, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:16:05 np0005597378 podman[98577]: 2026-01-27 13:16:05.980707464 +0000 UTC m=+0.152011563 container start 8491aa31865b0999fad59ae1d0c7c899c18c9604fbe1e867f7788bae5e9eea8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haslett, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:16:05 np0005597378 podman[98577]: 2026-01-27 13:16:05.983880152 +0000 UTC m=+0.155184381 container attach 8491aa31865b0999fad59ae1d0c7c899c18c9604fbe1e867f7788bae5e9eea8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 08:16:05 np0005597378 exciting_haslett[98594]: 167 167
Jan 27 08:16:05 np0005597378 systemd[1]: libpod-8491aa31865b0999fad59ae1d0c7c899c18c9604fbe1e867f7788bae5e9eea8d.scope: Deactivated successfully.
Jan 27 08:16:05 np0005597378 conmon[98594]: conmon 8491aa31865b0999fad5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8491aa31865b0999fad59ae1d0c7c899c18c9604fbe1e867f7788bae5e9eea8d.scope/container/memory.events
Jan 27 08:16:05 np0005597378 podman[98577]: 2026-01-27 13:16:05.986538276 +0000 UTC m=+0.157842385 container died 8491aa31865b0999fad59ae1d0c7c899c18c9604fbe1e867f7788bae5e9eea8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haslett, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:16:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8efe06c41ad6b1d16343dcfd8d130a0f6e1689ea7d2d578302e283bd783bc1dc-merged.mount: Deactivated successfully.
Jan 27 08:16:06 np0005597378 podman[98577]: 2026-01-27 13:16:06.024916666 +0000 UTC m=+0.196220775 container remove 8491aa31865b0999fad59ae1d0c7c899c18c9604fbe1e867f7788bae5e9eea8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:16:06 np0005597378 systemd[1]: libpod-conmon-8491aa31865b0999fad59ae1d0c7c899c18c9604fbe1e867f7788bae5e9eea8d.scope: Deactivated successfully.
Jan 27 08:16:06 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:16:06 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:16:06 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:16:06 np0005597378 podman[98618]: 2026-01-27 13:16:06.195164847 +0000 UTC m=+0.038682951 container create f1396e805159a1d51a42f5f59093e62d8180123c9d968c36fcf541ce1166621f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:16:06 np0005597378 systemd[1]: Started libpod-conmon-f1396e805159a1d51a42f5f59093e62d8180123c9d968c36fcf541ce1166621f.scope.
Jan 27 08:16:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:16:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3aaa789a28ac8f013b7b319428ecbe45e503859fddd9726808c8642a1452bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:16:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3aaa789a28ac8f013b7b319428ecbe45e503859fddd9726808c8642a1452bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:16:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3aaa789a28ac8f013b7b319428ecbe45e503859fddd9726808c8642a1452bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:16:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3aaa789a28ac8f013b7b319428ecbe45e503859fddd9726808c8642a1452bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:16:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3aaa789a28ac8f013b7b319428ecbe45e503859fddd9726808c8642a1452bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:16:06 np0005597378 podman[98618]: 2026-01-27 13:16:06.17806422 +0000 UTC m=+0.021582334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:16:06 np0005597378 podman[98618]: 2026-01-27 13:16:06.278297455 +0000 UTC m=+0.121815559 container init f1396e805159a1d51a42f5f59093e62d8180123c9d968c36fcf541ce1166621f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:16:06 np0005597378 podman[98618]: 2026-01-27 13:16:06.291056341 +0000 UTC m=+0.134574475 container start f1396e805159a1d51a42f5f59093e62d8180123c9d968c36fcf541ce1166621f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 08:16:06 np0005597378 podman[98618]: 2026-01-27 13:16:06.295587269 +0000 UTC m=+0.139105403 container attach f1396e805159a1d51a42f5f59093e62d8180123c9d968c36fcf541ce1166621f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:16:06 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 27 08:16:06 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 27 08:16:06 np0005597378 sweet_burnell[98634]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:16:06 np0005597378 sweet_burnell[98634]: --> All data devices are unavailable
Jan 27 08:16:06 np0005597378 systemd[1]: libpod-f1396e805159a1d51a42f5f59093e62d8180123c9d968c36fcf541ce1166621f.scope: Deactivated successfully.
Jan 27 08:16:06 np0005597378 podman[98618]: 2026-01-27 13:16:06.824919456 +0000 UTC m=+0.668437550 container died f1396e805159a1d51a42f5f59093e62d8180123c9d968c36fcf541ce1166621f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:16:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-eb3aaa789a28ac8f013b7b319428ecbe45e503859fddd9726808c8642a1452bc-merged.mount: Deactivated successfully.
Jan 27 08:16:06 np0005597378 podman[98618]: 2026-01-27 13:16:06.865491088 +0000 UTC m=+0.709009182 container remove f1396e805159a1d51a42f5f59093e62d8180123c9d968c36fcf541ce1166621f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 08:16:06 np0005597378 systemd[1]: libpod-conmon-f1396e805159a1d51a42f5f59093e62d8180123c9d968c36fcf541ce1166621f.scope: Deactivated successfully.
Jan 27 08:16:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 2 remapped+peering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 237 B/s, 4 objects/s recovering
Jan 27 08:16:07 np0005597378 podman[98728]: 2026-01-27 13:16:07.297624974 +0000 UTC m=+0.050755357 container create 5d6d786033091fa6040b4068c0a5b94e436e3ad079febd3ddfc8c3473693da42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:16:07 np0005597378 systemd[1]: Started libpod-conmon-5d6d786033091fa6040b4068c0a5b94e436e3ad079febd3ddfc8c3473693da42.scope.
Jan 27 08:16:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:16:07 np0005597378 podman[98728]: 2026-01-27 13:16:07.278077368 +0000 UTC m=+0.031207781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:16:07 np0005597378 podman[98728]: 2026-01-27 13:16:07.375794685 +0000 UTC m=+0.128925148 container init 5d6d786033091fa6040b4068c0a5b94e436e3ad079febd3ddfc8c3473693da42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:16:07 np0005597378 podman[98728]: 2026-01-27 13:16:07.385571998 +0000 UTC m=+0.138702401 container start 5d6d786033091fa6040b4068c0a5b94e436e3ad079febd3ddfc8c3473693da42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 08:16:07 np0005597378 podman[98728]: 2026-01-27 13:16:07.389448266 +0000 UTC m=+0.142578739 container attach 5d6d786033091fa6040b4068c0a5b94e436e3ad079febd3ddfc8c3473693da42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:16:07 np0005597378 charming_carver[98744]: 167 167
Jan 27 08:16:07 np0005597378 systemd[1]: libpod-5d6d786033091fa6040b4068c0a5b94e436e3ad079febd3ddfc8c3473693da42.scope: Deactivated successfully.
Jan 27 08:16:07 np0005597378 podman[98728]: 2026-01-27 13:16:07.39317019 +0000 UTC m=+0.146300593 container died 5d6d786033091fa6040b4068c0a5b94e436e3ad079febd3ddfc8c3473693da42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 27 08:16:07 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f44231cae1c644e8151f5098f71b9b7f1308bb1fda9905c1b2022093c80c164d-merged.mount: Deactivated successfully.
Jan 27 08:16:07 np0005597378 podman[98728]: 2026-01-27 13:16:07.44157744 +0000 UTC m=+0.194707843 container remove 5d6d786033091fa6040b4068c0a5b94e436e3ad079febd3ddfc8c3473693da42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_carver, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:16:07 np0005597378 systemd[1]: libpod-conmon-5d6d786033091fa6040b4068c0a5b94e436e3ad079febd3ddfc8c3473693da42.scope: Deactivated successfully.
Jan 27 08:16:07 np0005597378 podman[98768]: 2026-01-27 13:16:07.650778436 +0000 UTC m=+0.068212353 container create 5ddf243208eee7e720345364de8c2cbac96c783fd6af1ea66225703ea047e18c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:16:07 np0005597378 systemd[1]: Started libpod-conmon-5ddf243208eee7e720345364de8c2cbac96c783fd6af1ea66225703ea047e18c.scope.
Jan 27 08:16:07 np0005597378 podman[98768]: 2026-01-27 13:16:07.622958181 +0000 UTC m=+0.040392168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:16:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:16:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a3162df445d55b67db7d8281e9a2a98cad4bb91c88f21fb61f21423298593/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:16:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a3162df445d55b67db7d8281e9a2a98cad4bb91c88f21fb61f21423298593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:16:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a3162df445d55b67db7d8281e9a2a98cad4bb91c88f21fb61f21423298593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:16:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a3162df445d55b67db7d8281e9a2a98cad4bb91c88f21fb61f21423298593/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:16:07 np0005597378 podman[98768]: 2026-01-27 13:16:07.743306329 +0000 UTC m=+0.160740296 container init 5ddf243208eee7e720345364de8c2cbac96c783fd6af1ea66225703ea047e18c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_varahamihira, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:16:07 np0005597378 podman[98768]: 2026-01-27 13:16:07.750430137 +0000 UTC m=+0.167864044 container start 5ddf243208eee7e720345364de8c2cbac96c783fd6af1ea66225703ea047e18c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 08:16:07 np0005597378 podman[98768]: 2026-01-27 13:16:07.754671476 +0000 UTC m=+0.172105403 container attach 5ddf243208eee7e720345364de8c2cbac96c783fd6af1ea66225703ea047e18c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 08:16:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]: {
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:    "0": [
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:        {
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "devices": [
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "/dev/loop3"
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            ],
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_name": "ceph_lv0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_size": "21470642176",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "name": "ceph_lv0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "tags": {
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.cluster_name": "ceph",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.crush_device_class": "",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.encrypted": "0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.objectstore": "bluestore",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.osd_id": "0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.type": "block",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.vdo": "0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.with_tpm": "0"
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            },
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "type": "block",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "vg_name": "ceph_vg0"
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:        }
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:    ],
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:    "1": [
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:        {
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "devices": [
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "/dev/loop4"
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            ],
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_name": "ceph_lv1",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_size": "21470642176",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "name": "ceph_lv1",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "tags": {
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.cluster_name": "ceph",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.crush_device_class": "",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.encrypted": "0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.objectstore": "bluestore",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.osd_id": "1",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.type": "block",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.vdo": "0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.with_tpm": "0"
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            },
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "type": "block",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "vg_name": "ceph_vg1"
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:        }
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:    ],
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:    "2": [
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:        {
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "devices": [
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "/dev/loop5"
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            ],
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_name": "ceph_lv2",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_size": "21470642176",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "name": "ceph_lv2",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "tags": {
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.cluster_name": "ceph",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.crush_device_class": "",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.encrypted": "0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.objectstore": "bluestore",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.osd_id": "2",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.type": "block",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.vdo": "0",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:                "ceph.with_tpm": "0"
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            },
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "type": "block",
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:            "vg_name": "ceph_vg2"
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:        }
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]:    ]
Jan 27 08:16:08 np0005597378 relaxed_varahamihira[98784]: }
Jan 27 08:16:08 np0005597378 systemd[1]: libpod-5ddf243208eee7e720345364de8c2cbac96c783fd6af1ea66225703ea047e18c.scope: Deactivated successfully.
Jan 27 08:16:08 np0005597378 podman[98768]: 2026-01-27 13:16:08.081589766 +0000 UTC m=+0.499023673 container died 5ddf243208eee7e720345364de8c2cbac96c783fd6af1ea66225703ea047e18c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 08:16:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cf7a3162df445d55b67db7d8281e9a2a98cad4bb91c88f21fb61f21423298593-merged.mount: Deactivated successfully.
Jan 27 08:16:08 np0005597378 podman[98768]: 2026-01-27 13:16:08.127029744 +0000 UTC m=+0.544463651 container remove 5ddf243208eee7e720345364de8c2cbac96c783fd6af1ea66225703ea047e18c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 08:16:08 np0005597378 systemd[1]: libpod-conmon-5ddf243208eee7e720345364de8c2cbac96c783fd6af1ea66225703ea047e18c.scope: Deactivated successfully.
Jan 27 08:16:08 np0005597378 podman[98866]: 2026-01-27 13:16:08.554748707 +0000 UTC m=+0.039615816 container create ccafb6411cc4149a5c34bcfd6ee2a489143b3a1868358bfbb722244368a09adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hertz, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:16:08 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 27 08:16:08 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 27 08:16:08 np0005597378 systemd[1]: Started libpod-conmon-ccafb6411cc4149a5c34bcfd6ee2a489143b3a1868358bfbb722244368a09adc.scope.
Jan 27 08:16:08 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:16:08 np0005597378 podman[98866]: 2026-01-27 13:16:08.632600039 +0000 UTC m=+0.117467168 container init ccafb6411cc4149a5c34bcfd6ee2a489143b3a1868358bfbb722244368a09adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:16:08 np0005597378 podman[98866]: 2026-01-27 13:16:08.5398357 +0000 UTC m=+0.024702829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:16:08 np0005597378 podman[98866]: 2026-01-27 13:16:08.640427587 +0000 UTC m=+0.125294696 container start ccafb6411cc4149a5c34bcfd6ee2a489143b3a1868358bfbb722244368a09adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 08:16:08 np0005597378 podman[98866]: 2026-01-27 13:16:08.643572495 +0000 UTC m=+0.128439634 container attach ccafb6411cc4149a5c34bcfd6ee2a489143b3a1868358bfbb722244368a09adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hertz, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 27 08:16:08 np0005597378 gracious_hertz[98882]: 167 167
Jan 27 08:16:08 np0005597378 systemd[1]: libpod-ccafb6411cc4149a5c34bcfd6ee2a489143b3a1868358bfbb722244368a09adc.scope: Deactivated successfully.
Jan 27 08:16:08 np0005597378 podman[98866]: 2026-01-27 13:16:08.645289713 +0000 UTC m=+0.130156882 container died ccafb6411cc4149a5c34bcfd6ee2a489143b3a1868358bfbb722244368a09adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hertz, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:16:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e75aa7c37902f78189b084a6e138fc4b6c16918463254bc40c195f78a150b16f-merged.mount: Deactivated successfully.
Jan 27 08:16:08 np0005597378 podman[98866]: 2026-01-27 13:16:08.683638023 +0000 UTC m=+0.168505132 container remove ccafb6411cc4149a5c34bcfd6ee2a489143b3a1868358bfbb722244368a09adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hertz, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:16:08 np0005597378 systemd[1]: libpod-conmon-ccafb6411cc4149a5c34bcfd6ee2a489143b3a1868358bfbb722244368a09adc.scope: Deactivated successfully.
Jan 27 08:16:08 np0005597378 podman[98906]: 2026-01-27 13:16:08.87028867 +0000 UTC m=+0.056193479 container create a6c3231d004c9b0ae77eb88b3b434e0f28059bd0e8c462c2b19209d72190d719 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 08:16:08 np0005597378 systemd[1]: Started libpod-conmon-a6c3231d004c9b0ae77eb88b3b434e0f28059bd0e8c462c2b19209d72190d719.scope.
Jan 27 08:16:08 np0005597378 podman[98906]: 2026-01-27 13:16:08.840065587 +0000 UTC m=+0.025970466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:16:08 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:16:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 281 B/s, 6 objects/s recovering
Jan 27 08:16:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 27 08:16:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 27 08:16:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 27 08:16:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 27 08:16:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b889dae47d37e164e1595adac3da7aecb1ffcd2171d7f4b8aa219be85b60a6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:16:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b889dae47d37e164e1595adac3da7aecb1ffcd2171d7f4b8aa219be85b60a6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:16:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b889dae47d37e164e1595adac3da7aecb1ffcd2171d7f4b8aa219be85b60a6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:16:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b889dae47d37e164e1595adac3da7aecb1ffcd2171d7f4b8aa219be85b60a6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:16:08 np0005597378 podman[98906]: 2026-01-27 13:16:08.985230977 +0000 UTC m=+0.171135816 container init a6c3231d004c9b0ae77eb88b3b434e0f28059bd0e8c462c2b19209d72190d719 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 08:16:09 np0005597378 podman[98906]: 2026-01-27 13:16:09.001096 +0000 UTC m=+0.187000809 container start a6c3231d004c9b0ae77eb88b3b434e0f28059bd0e8c462c2b19209d72190d719 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:16:09 np0005597378 podman[98906]: 2026-01-27 13:16:09.005216945 +0000 UTC m=+0.191121834 container attach a6c3231d004c9b0ae77eb88b3b434e0f28059bd0e8c462c2b19209d72190d719 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 08:16:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 27 08:16:09 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 27 08:16:09 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 27 08:16:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 27 08:16:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 27 08:16:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 27 08:16:09 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 27 08:16:09 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 27 08:16:09 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 27 08:16:09 np0005597378 lvm[99004]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:16:09 np0005597378 lvm[99004]: VG ceph_vg1 finished
Jan 27 08:16:09 np0005597378 lvm[99003]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:16:09 np0005597378 lvm[99003]: VG ceph_vg0 finished
Jan 27 08:16:09 np0005597378 lvm[99006]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:16:09 np0005597378 lvm[99006]: VG ceph_vg2 finished
Jan 27 08:16:09 np0005597378 systemd-logind[786]: New session 33 of user zuul.
Jan 27 08:16:09 np0005597378 systemd[1]: Started Session 33 of User zuul.
Jan 27 08:16:09 np0005597378 festive_austin[98923]: {}
Jan 27 08:16:09 np0005597378 systemd[1]: libpod-a6c3231d004c9b0ae77eb88b3b434e0f28059bd0e8c462c2b19209d72190d719.scope: Deactivated successfully.
Jan 27 08:16:09 np0005597378 systemd[1]: libpod-a6c3231d004c9b0ae77eb88b3b434e0f28059bd0e8c462c2b19209d72190d719.scope: Consumed 1.341s CPU time.
Jan 27 08:16:09 np0005597378 podman[98906]: 2026-01-27 13:16:09.823668255 +0000 UTC m=+1.009573054 container died a6c3231d004c9b0ae77eb88b3b434e0f28059bd0e8c462c2b19209d72190d719 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:16:09 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5b889dae47d37e164e1595adac3da7aecb1ffcd2171d7f4b8aa219be85b60a6a-merged.mount: Deactivated successfully.
Jan 27 08:16:09 np0005597378 podman[98906]: 2026-01-27 13:16:09.871024546 +0000 UTC m=+1.056929325 container remove a6c3231d004c9b0ae77eb88b3b434e0f28059bd0e8c462c2b19209d72190d719 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:16:09 np0005597378 systemd[1]: libpod-conmon-a6c3231d004c9b0ae77eb88b3b434e0f28059bd0e8c462c2b19209d72190d719.scope: Deactivated successfully.
Jan 27 08:16:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:16:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:16:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:16:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:16:10 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 27 08:16:10 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 27 08:16:10 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:16:10 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:16:10 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 72 pg[6.9( v 30'39 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=72 pruub=13.879007339s) [0] r=-1 lpr=72 pi=[50,72)/1 crt=30'39 lcod 0'0 active pruub 136.069396973s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:10 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 72 pg[6.9( v 30'39 (0'0,30'39] local-lis/les=50/51 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=72 pruub=13.878953934s) [0] r=-1 lpr=72 pi=[50,72)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 136.069396973s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:10 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 72 pg[6.9( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=72) [0] r=0 lpr=72 pi=[50,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:10 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 27 08:16:10 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 27 08:16:10 np0005597378 python3.9[99195]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:16:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 1 objects/s recovering
Jan 27 08:16:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 27 08:16:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 27 08:16:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 27 08:16:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 27 08:16:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 27 08:16:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 27 08:16:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 27 08:16:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 27 08:16:11 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 27 08:16:11 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 27 08:16:11 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 27 08:16:11 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 73 pg[6.9( v 30'39 (0'0,30'39] local-lis/les=72/73 n=1 ec=42/19 lis/c=50/50 les/c/f=51/51/0 sis=72) [0] r=0 lpr=72 pi=[50,72)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:11 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 27 08:16:11 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 27 08:16:11 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 27 08:16:11 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 73 pg[6.a( v 30'39 (0'0,30'39] local-lis/les=52/53 n=1 ec=42/19 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=14.163084984s) [0] r=-1 lpr=73 pi=[52,73)/1 crt=30'39 lcod 0'0 active pruub 137.713668823s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:11 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 73 pg[6.a( v 30'39 (0'0,30'39] local-lis/les=52/53 n=1 ec=42/19 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=14.163055420s) [0] r=-1 lpr=73 pi=[52,73)/1 crt=30'39 lcod 0'0 unknown NOTIFY pruub 137.713668823s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:11 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 73 pg[6.a( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=52/52 les/c/f=53/53/0 sis=73) [0] r=0 lpr=73 pi=[52,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:11 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 27 08:16:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 27 08:16:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 27 08:16:12 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 27 08:16:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 27 08:16:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 27 08:16:12 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 74 pg[6.a( v 30'39 (0'0,30'39] local-lis/les=73/74 n=1 ec=42/19 lis/c=52/52 les/c/f=53/53/0 sis=73) [0] r=0 lpr=73 pi=[52,73)/1 crt=30'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:12 np0005597378 python3.9[99413]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:16:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:16:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 2 objects/s recovering
Jan 27 08:16:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 27 08:16:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 27 08:16:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 27 08:16:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 27 08:16:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 27 08:16:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 27 08:16:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 27 08:16:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 27 08:16:13 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 27 08:16:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 27 08:16:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 27 08:16:13 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 27 08:16:13 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 27 08:16:13 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 75 pg[6.b( v 30'39 (0'0,30'39] local-lis/les=56/58 n=1 ec=42/19 lis/c=56/56 les/c/f=58/58/0 sis=75 pruub=15.045678139s) [1] r=-1 lpr=75 pi=[56,75)/1 crt=30'39 active pruub 145.778717041s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:13 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 75 pg[6.b( v 30'39 (0'0,30'39] local-lis/les=56/58 n=1 ec=42/19 lis/c=56/56 les/c/f=58/58/0 sis=75 pruub=15.045351982s) [1] r=-1 lpr=75 pi=[56,75)/1 crt=30'39 unknown NOTIFY pruub 145.778717041s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:13 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 75 pg[6.b( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=56/56 les/c/f=58/58/0 sis=75) [1] r=0 lpr=75 pi=[56,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 27 08:16:14 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 27 08:16:14 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 27 08:16:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 27 08:16:14 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 27 08:16:14 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 76 pg[6.b( v 30'39 lc 0'0 (0'0,30'39] local-lis/les=75/76 n=1 ec=42/19 lis/c=56/56 les/c/f=58/58/0 sis=75) [1] r=0 lpr=75 pi=[56,75)/1 crt=30'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:14 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 27 08:16:14 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 27 08:16:14 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 27 08:16:14 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 27 08:16:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v156: 305 pgs: 305 active+clean; 462 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:16:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 27 08:16:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 27 08:16:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 27 08:16:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 27 08:16:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 27 08:16:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 27 08:16:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 27 08:16:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 27 08:16:15 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 77 pg[9.c( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=77 pruub=8.878094673s) [2] r=-1 lpr=77 pi=[46,77)/1 crt=36'483 lcod 0'0 active pruub 135.935134888s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:15 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 77 pg[9.c( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=77 pruub=8.878053665s) [2] r=-1 lpr=77 pi=[46,77)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 135.935134888s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:15 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 27 08:16:15 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 77 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=77 pruub=8.880710602s) [2] r=-1 lpr=77 pi=[46,77)/1 crt=62'486 lcod 62'486 active pruub 135.939025879s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:15 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 77 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=77 pruub=8.880682945s) [2] r=-1 lpr=77 pi=[46,77)/1 crt=62'486 lcod 62'486 unknown NOTIFY pruub 135.939025879s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:15 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=77) [2] r=0 lpr=77 pi=[46,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:15 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=77) [2] r=0 lpr=77 pi=[46,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:15 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 27 08:16:15 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 27 08:16:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 27 08:16:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 27 08:16:16 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 78 pg[9.c( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[46,78)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:16 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 78 pg[9.c( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[46,78)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:16 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 78 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[46,78)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:16 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 78 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[46,78)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:16 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 27 08:16:16 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 78 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=78) [2]/[1] r=0 lpr=78 pi=[46,78)/1 crt=62'486 lcod 62'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:16 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 78 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=46/47 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=78) [2]/[1] r=0 lpr=78 pi=[46,78)/1 crt=62'486 lcod 62'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:16 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 78 pg[9.c( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=78) [2]/[1] r=0 lpr=78 pi=[46,78)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:16 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 78 pg[9.c( v 36'483 (0'0,36'483] local-lis/les=46/47 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=78) [2]/[1] r=0 lpr=78 pi=[46,78)/1 crt=36'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:16 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 27 08:16:16 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 27 08:16:16 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 27 08:16:16 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 27 08:16:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:16:16
Jan 27 08:16:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:16:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:16:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'vms', 'volumes', 'backups', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control']
Jan 27 08:16:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:16:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 305 active+clean; 462 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:16:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 27 08:16:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 27 08:16:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 27 08:16:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 27 08:16:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 27 08:16:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 27 08:16:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 27 08:16:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 27 08:16:17 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 27 08:16:17 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 79 pg[6.d( v 30'39 (0'0,30'39] local-lis/les=59/60 n=1 ec=42/19 lis/c=59/59 les/c/f=60/60/0 sis=79 pruub=14.686210632s) [1] r=-1 lpr=79 pi=[59,79)/1 crt=30'39 active pruub 148.825515747s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:17 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 79 pg[6.d( v 30'39 (0'0,30'39] local-lis/les=59/60 n=1 ec=42/19 lis/c=59/59 les/c/f=60/60/0 sis=79 pruub=14.686142921s) [1] r=-1 lpr=79 pi=[59,79)/1 crt=30'39 unknown NOTIFY pruub 148.825515747s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:17 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 79 pg[6.d( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=59/59 les/c/f=60/60/0 sis=79) [1] r=0 lpr=79 pi=[59,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 27 08:16:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 27 08:16:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 27 08:16:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 27 08:16:17 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 79 pg[9.c( v 36'483 (0'0,36'483] local-lis/les=78/79 n=7 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[46,78)/1 crt=36'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:17 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 79 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=78/79 n=6 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[46,78)/1 crt=62'487 lcod 62'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:16:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:16:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:16:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 27 08:16:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 27 08:16:17 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 27 08:16:17 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 80 pg[9.c( v 36'483 (0'0,36'483] local-lis/les=78/79 n=7 ec=46/30 lis/c=78/46 les/c/f=79/47/0 sis=80 pruub=15.716400146s) [2] async=[2] r=-1 lpr=80 pi=[46,80)/1 crt=36'483 lcod 0'0 active pruub 145.431716919s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:17 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 80 pg[9.c( v 36'483 (0'0,36'483] local-lis/les=78/79 n=7 ec=46/30 lis/c=78/46 les/c/f=79/47/0 sis=80 pruub=15.715875626s) [2] r=-1 lpr=80 pi=[46,80)/1 crt=36'483 lcod 0'0 unknown NOTIFY pruub 145.431716919s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:17 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 80 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=78/79 n=6 ec=46/30 lis/c=78/46 les/c/f=79/47/0 sis=80 pruub=15.715685844s) [2] async=[2] r=-1 lpr=80 pi=[46,80)/1 crt=62'487 lcod 62'486 active pruub 145.431777954s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:17 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 80 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=78/79 n=6 ec=46/30 lis/c=78/46 les/c/f=79/47/0 sis=80 pruub=15.715592384s) [2] r=-1 lpr=80 pi=[46,80)/1 crt=62'487 lcod 62'486 unknown NOTIFY pruub 145.431777954s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:17 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 80 pg[6.d( v 30'39 lc 28'13 (0'0,30'39] local-lis/les=79/80 n=1 ec=42/19 lis/c=59/59 les/c/f=60/60/0 sis=79) [1] r=0 lpr=79 pi=[59,79)/1 crt=30'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:17 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 80 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=0/0 n=6 ec=46/30 lis/c=78/46 les/c/f=79/47/0 sis=80) [2] r=0 lpr=80 pi=[46,80)/1 pct=0'0 crt=62'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:17 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 80 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=0/0 n=6 ec=46/30 lis/c=78/46 les/c/f=79/47/0 sis=80) [2] r=0 lpr=80 pi=[46,80)/1 crt=62'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:17 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 80 pg[9.c( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=78/46 les/c/f=79/47/0 sis=80) [2] r=0 lpr=80 pi=[46,80)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:17 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 80 pg[9.c( v 36'483 (0'0,36'483] local-lis/les=0/0 n=7 ec=46/30 lis/c=78/46 les/c/f=79/47/0 sis=80) [2] r=0 lpr=80 pi=[46,80)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:18 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 27 08:16:18 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 27 08:16:18 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 27 08:16:18 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 27 08:16:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 27 08:16:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 27 08:16:18 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 27 08:16:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 2 peering, 1 active+recovering, 302 active+clean; 462 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 1/251 objects misplaced (0.398%); 112 B/s, 4 objects/s recovering
Jan 27 08:16:18 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 81 pg[9.c( v 36'483 (0'0,36'483] local-lis/les=80/81 n=7 ec=46/30 lis/c=78/46 les/c/f=79/47/0 sis=80) [2] r=0 lpr=80 pi=[46,80)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:18 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 81 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=80/81 n=6 ec=46/30 lis/c=78/46 les/c/f=79/47/0 sis=80) [2] r=0 lpr=80 pi=[46,80)/1 crt=62'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:19 np0005597378 systemd[1]: session-33.scope: Deactivated successfully.
Jan 27 08:16:19 np0005597378 systemd[1]: session-33.scope: Consumed 8.014s CPU time.
Jan 27 08:16:19 np0005597378 systemd-logind[786]: Session 33 logged out. Waiting for processes to exit.
Jan 27 08:16:19 np0005597378 systemd-logind[786]: Removed session 33.
Jan 27 08:16:19 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 27 08:16:19 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 27 08:16:20 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 27 08:16:20 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 27 08:16:20 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 27 08:16:20 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 27 08:16:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 2 peering, 1 active+recovering, 302 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 1/251 objects misplaced (0.398%); 88 B/s, 3 objects/s recovering
Jan 27 08:16:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:16:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v165: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 81 B/s, 2 objects/s recovering
Jan 27 08:16:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 2 objects/s recovering
Jan 27 08:16:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 27 08:16:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 27 08:16:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 27 08:16:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 27 08:16:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 27 08:16:25 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 27 08:16:25 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 27 08:16:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 27 08:16:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 27 08:16:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 27 08:16:25 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 27 08:16:26 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 27 08:16:26 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 27 08:16:26 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 27 08:16:26 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 27 08:16:26 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 27 08:16:26 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 27 08:16:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 305 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Jan 27 08:16:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 27 08:16:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 27 08:16:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 27 08:16:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 27 08:16:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 27 08:16:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 27 08:16:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 27 08:16:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 27 08:16:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 27 08:16:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 27 08:16:27 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.477009074688113e-06 of space, bias 4.0, pg target 0.0017724108896257356 quantized to 16 (current 16)
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:16:27 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 27 08:16:27 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 27 08:16:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:16:28 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 27 08:16:28 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 27 08:16:28 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 27 08:16:28 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 27 08:16:28 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 27 08:16:28 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 27 08:16:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 305 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Jan 27 08:16:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 27 08:16:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 27 08:16:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 27 08:16:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 27 08:16:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 27 08:16:29 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 27 08:16:29 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 83 pg[6.f( v 30'39 (0'0,30'39] local-lis/les=56/58 n=1 ec=42/19 lis/c=56/56 les/c/f=58/58/0 sis=83 pruub=15.849114418s) [2] r=-1 lpr=83 pi=[56,83)/1 crt=30'39 active pruub 161.781982422s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:29 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 84 pg[6.f( v 30'39 (0'0,30'39] local-lis/les=56/58 n=1 ec=42/19 lis/c=56/56 les/c/f=58/58/0 sis=83 pruub=15.848934174s) [2] r=-1 lpr=83 pi=[56,83)/1 crt=30'39 unknown NOTIFY pruub 161.781982422s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:29 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 84 pg[6.f( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=56/56 les/c/f=58/58/0 sis=83) [2] r=0 lpr=84 pi=[56,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:29 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 27 08:16:29 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 27 08:16:29 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 27 08:16:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 27 08:16:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 27 08:16:30 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 27 08:16:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 27 08:16:30 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 85 pg[6.f( v 30'39 lc 28'1 (0'0,30'39] local-lis/les=83/85 n=1 ec=42/19 lis/c=56/56 les/c/f=58/58/0 sis=83) [2] r=0 lpr=84 pi=[56,83)/1 crt=30'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:30 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 27 08:16:30 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 27 08:16:30 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 27 08:16:30 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 27 08:16:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:16:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 27 08:16:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 27 08:16:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 27 08:16:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 27 08:16:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 27 08:16:31 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 27 08:16:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 27 08:16:31 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 27 08:16:31 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 27 08:16:32 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 27 08:16:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:16:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 305 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:16:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 27 08:16:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 27 08:16:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 27 08:16:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 27 08:16:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 27 08:16:33 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 27 08:16:33 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 27 08:16:34 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 27 08:16:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 0 objects/s recovering
Jan 27 08:16:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 27 08:16:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 27 08:16:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 27 08:16:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 27 08:16:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 27 08:16:35 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 27 08:16:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 27 08:16:35 np0005597378 systemd-logind[786]: New session 34 of user zuul.
Jan 27 08:16:35 np0005597378 systemd[1]: Started Session 34 of User zuul.
Jan 27 08:16:35 np0005597378 python3.9[99623]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 27 08:16:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 27 08:16:36 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 27 08:16:36 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 27 08:16:36 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 27 08:16:36 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 27 08:16:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 305 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 27 08:16:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 27 08:16:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 27 08:16:37 np0005597378 python3.9[99797]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:16:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 27 08:16:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 27 08:16:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 27 08:16:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 27 08:16:37 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 27 08:16:37 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 27 08:16:37 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 27 08:16:37 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 27 08:16:37 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 27 08:16:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:16:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 88 pg[9.13( v 62'485 (0'0,62'485] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=88 pruub=12.979897499s) [2] r=-1 lpr=88 pi=[53,88)/1 crt=62'484 lcod 62'484 active pruub 167.767089844s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:37 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 88 pg[9.13( v 62'485 (0'0,62'485] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=88 pruub=12.979838371s) [2] r=-1 lpr=88 pi=[53,88)/1 crt=62'484 lcod 62'484 unknown NOTIFY pruub 167.767089844s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:37 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 88 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=88) [2] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:38 np0005597378 python3.9[99955]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:16:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 27 08:16:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 27 08:16:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 27 08:16:38 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 27 08:16:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 90 pg[9.13( v 62'485 (0'0,62'485] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=90) [2]/[0] r=0 lpr=90 pi=[53,90)/1 crt=62'484 lcod 62'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:38 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 90 pg[9.13( v 62'485 (0'0,62'485] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=90) [2]/[0] r=0 lpr=90 pi=[53,90)/1 crt=62'484 lcod 62'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:38 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[53,90)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:38 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[53,90)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:38 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 27 08:16:38 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 27 08:16:38 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 27 08:16:38 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 27 08:16:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 462 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 0 objects/s recovering
Jan 27 08:16:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 27 08:16:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 27 08:16:38 np0005597378 python3.9[100108]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:16:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 27 08:16:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 27 08:16:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 27 08:16:39 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 27 08:16:39 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 91 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=91 pruub=11.726081848s) [1] r=-1 lpr=91 pi=[53,91)/1 crt=36'483 active pruub 167.768417358s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:39 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 91 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=91 pruub=11.725968361s) [1] r=-1 lpr=91 pi=[53,91)/1 crt=36'483 unknown NOTIFY pruub 167.768417358s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:39 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 27 08:16:39 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 91 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:39 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 27 08:16:39 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 27 08:16:39 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 27 08:16:39 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 91 pg[9.13( v 62'485 (0'0,62'485] local-lis/les=90/91 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=90) [2]/[0] async=[2] r=0 lpr=90 pi=[53,90)/1 crt=62'485 lcod 62'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:39 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 27 08:16:39 np0005597378 python3.9[100262]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:16:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 27 08:16:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 27 08:16:40 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 27 08:16:40 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 92 pg[9.13( v 62'485 (0'0,62'485] local-lis/les=0/0 n=6 ec=46/30 lis/c=90/53 les/c/f=91/54/0 sis=92) [2] r=0 lpr=92 pi=[53,92)/1 pct=0'0 crt=62'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:40 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 92 pg[9.13( v 62'485 (0'0,62'485] local-lis/les=0/0 n=6 ec=46/30 lis/c=90/53 les/c/f=91/54/0 sis=92) [2] r=0 lpr=92 pi=[53,92)/1 crt=62'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:40 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 92 pg[9.13( v 62'485 (0'0,62'485] local-lis/les=90/91 n=6 ec=46/30 lis/c=90/53 les/c/f=91/54/0 sis=92 pruub=15.243656158s) [2] async=[2] r=-1 lpr=92 pi=[53,92)/1 crt=62'485 lcod 62'484 active pruub 172.300186157s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:40 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 92 pg[9.13( v 62'485 (0'0,62'485] local-lis/les=90/91 n=6 ec=46/30 lis/c=90/53 les/c/f=91/54/0 sis=92 pruub=15.243572235s) [2] r=-1 lpr=92 pi=[53,92)/1 crt=62'485 lcod 62'484 unknown NOTIFY pruub 172.300186157s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:40 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 92 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=92) [1]/[0] r=0 lpr=92 pi=[53,92)/1 crt=36'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:40 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 92 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=92) [1]/[0] r=0 lpr=92 pi=[53,92)/1 crt=36'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:40 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 92 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=92) [1]/[0] r=-1 lpr=92 pi=[53,92)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:40 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 92 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=92) [1]/[0] r=-1 lpr=92 pi=[53,92)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 27 08:16:40 np0005597378 python3.9[100414]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:16:40 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 27 08:16:40 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 27 08:16:40 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 27 08:16:40 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 27 08:16:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 1 active+remapped, 1 active+clean+scrubbing, 303 active+clean; 462 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Jan 27 08:16:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 27 08:16:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 27 08:16:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 27 08:16:41 np0005597378 python3.9[100564]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:16:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 27 08:16:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 27 08:16:41 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 27 08:16:41 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 93 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=63/64 n=6 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=93 pruub=10.794535637s) [0] r=-1 lpr=93 pi=[63,93)/1 crt=36'483 active pruub 157.700790405s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:41 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 93 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=63/64 n=6 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=93 pruub=10.794378281s) [0] r=-1 lpr=93 pi=[63,93)/1 crt=36'483 unknown NOTIFY pruub 157.700790405s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:41 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=93) [0] r=0 lpr=93 pi=[63,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:41 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 93 pg[9.13( v 62'485 (0'0,62'485] local-lis/les=92/93 n=6 ec=46/30 lis/c=90/53 les/c/f=91/54/0 sis=92) [2] r=0 lpr=92 pi=[53,92)/1 crt=62'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:41 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 27 08:16:41 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 27 08:16:41 np0005597378 network[100581]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:16:41 np0005597378 network[100582]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:16:41 np0005597378 network[100583]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:16:41 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 27 08:16:41 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 27 08:16:41 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 27 08:16:41 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 27 08:16:41 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 93 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=92/93 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=92) [1]/[0] async=[1] r=0 lpr=92 pi=[53,92)/1 crt=36'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 27 08:16:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 27 08:16:42 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 27 08:16:42 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 94 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=63/64 n=6 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=94) [0]/[2] r=0 lpr=94 pi=[63,94)/1 crt=36'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:42 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 94 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=63/64 n=6 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=94) [0]/[2] r=0 lpr=94 pi=[63,94)/1 crt=36'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:42 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[63,94)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:42 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[63,94)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:42 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 94 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=92/93 n=6 ec=46/30 lis/c=92/53 les/c/f=93/54/0 sis=94 pruub=15.485583305s) [1] async=[1] r=-1 lpr=94 pi=[53,94)/1 crt=36'483 active pruub 174.555038452s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:42 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 94 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=92/93 n=6 ec=46/30 lis/c=92/53 les/c/f=93/54/0 sis=94 pruub=15.485515594s) [1] r=-1 lpr=94 pi=[53,94)/1 crt=36'483 unknown NOTIFY pruub 174.555038452s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:42 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 94 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=92/53 les/c/f=93/54/0 sis=94) [1] r=0 lpr=94 pi=[53,94)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:42 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 94 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=92/53 les/c/f=93/54/0 sis=94) [1] r=0 lpr=94 pi=[53,94)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:42 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 27 08:16:42 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 27 08:16:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:16:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 1 peering, 1 active+remapped, 1 active+clean+scrubbing, 302 active+clean; 462 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 92 B/s, 2 objects/s recovering
Jan 27 08:16:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 27 08:16:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 27 08:16:43 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 27 08:16:43 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 95 pg[9.15( v 36'483 (0'0,36'483] local-lis/les=94/95 n=6 ec=46/30 lis/c=92/53 les/c/f=93/54/0 sis=94) [1] r=0 lpr=94 pi=[53,94)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:43 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 95 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=94/95 n=6 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[63,94)/1 crt=36'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:43 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 27 08:16:43 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 27 08:16:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 27 08:16:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 27 08:16:44 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 27 08:16:44 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 96 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=94/95 n=6 ec=46/30 lis/c=94/63 les/c/f=95/64/0 sis=96 pruub=14.958088875s) [0] async=[0] r=-1 lpr=96 pi=[63,96)/1 crt=36'483 active pruub 164.926467896s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:44 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 96 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=94/95 n=6 ec=46/30 lis/c=94/63 les/c/f=95/64/0 sis=96 pruub=14.958012581s) [0] r=-1 lpr=96 pi=[63,96)/1 crt=36'483 unknown NOTIFY pruub 164.926467896s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:44 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 96 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=94/63 les/c/f=95/64/0 sis=96) [0] r=0 lpr=96 pi=[63,96)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:44 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 96 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=94/63 les/c/f=95/64/0 sis=96) [0] r=0 lpr=96 pi=[63,96)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:44 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 27 08:16:44 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 27 08:16:44 np0005597378 python3.9[100843]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:16:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Jan 27 08:16:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 27 08:16:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 27 08:16:45 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 27 08:16:45 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 97 pg[9.16( v 36'483 (0'0,36'483] local-lis/les=96/97 n=6 ec=46/30 lis/c=94/63 les/c/f=95/64/0 sis=96) [0] r=0 lpr=96 pi=[63,96)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:45 np0005597378 python3.9[100993]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:16:45 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 27 08:16:45 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 27 08:16:46 np0005597378 python3.9[101147]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:16:46 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 27 08:16:46 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 27 08:16:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 46 B/s, 1 objects/s recovering
Jan 27 08:16:47 np0005597378 python3.9[101305]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:16:47 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 27 08:16:47 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 27 08:16:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:16:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:16:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:16:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:16:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:16:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:16:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:16:48 np0005597378 python3.9[101389]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:16:48 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 27 08:16:48 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 27 08:16:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 27 08:16:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Jan 27 08:16:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 27 08:16:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 27 08:16:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 27 08:16:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 27 08:16:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 27 08:16:51 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 27 08:16:51 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 27 08:16:51 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 27 08:16:51 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 27 08:16:51 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 27 08:16:51 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 27 08:16:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 27 08:16:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:16:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:16:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 27 08:16:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 27 08:16:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 27 08:16:53 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 27 08:16:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 27 08:16:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 27 08:16:53 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 27 08:16:54 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 27 08:16:54 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 27 08:16:54 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 27 08:16:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:16:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 27 08:16:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 27 08:16:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 27 08:16:55 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 27 08:16:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 27 08:16:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 27 08:16:55 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 27 08:16:55 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 100 pg[9.19( v 62'487 (0'0,62'487] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=100 pruub=11.516548157s) [2] r=-1 lpr=100 pi=[53,100)/1 crt=62'486 lcod 62'486 active pruub 183.769332886s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:55 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 100 pg[9.19( v 62'487 (0'0,62'487] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=100 pruub=11.516499519s) [2] r=-1 lpr=100 pi=[53,100)/1 crt=62'486 lcod 62'486 unknown NOTIFY pruub 183.769332886s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:55 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=100) [2] r=0 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:55 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 27 08:16:55 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 27 08:16:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 27 08:16:56 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 27 08:16:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 27 08:16:56 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 27 08:16:56 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 101 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=101) [2]/[0] r=-1 lpr=101 pi=[53,101)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:56 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 101 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=101) [2]/[0] r=-1 lpr=101 pi=[53,101)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:56 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 101 pg[9.19( v 62'487 (0'0,62'487] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=101) [2]/[0] r=0 lpr=101 pi=[53,101)/1 crt=62'486 lcod 62'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:56 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 101 pg[9.19( v 62'487 (0'0,62'487] local-lis/les=53/54 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=101) [2]/[0] r=0 lpr=101 pi=[53,101)/1 crt=62'486 lcod 62'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:16:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:16:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 27 08:16:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 27 08:16:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 27 08:16:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 27 08:16:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 27 08:16:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 27 08:16:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 27 08:16:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:16:58 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 27 08:16:58 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 102 pg[9.19( v 62'487 (0'0,62'487] local-lis/les=101/102 n=6 ec=46/30 lis/c=53/53 les/c/f=54/54/0 sis=101) [2]/[0] async=[2] r=0 lpr=101 pi=[53,101)/1 crt=62'487 lcod 62'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:16:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:16:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 27 08:16:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 27 08:16:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 27 08:16:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 27 08:16:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 27 08:16:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 27 08:16:59 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 27 08:16:59 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 103 pg[9.19( v 62'487 (0'0,62'487] local-lis/les=101/102 n=6 ec=46/30 lis/c=101/53 les/c/f=102/54/0 sis=103 pruub=14.987945557s) [2] async=[2] r=-1 lpr=103 pi=[53,103)/1 crt=62'487 lcod 62'486 active pruub 191.292755127s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:59 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 103 pg[9.19( v 62'487 (0'0,62'487] local-lis/les=101/102 n=6 ec=46/30 lis/c=101/53 les/c/f=102/54/0 sis=103 pruub=14.986870766s) [2] r=-1 lpr=103 pi=[53,103)/1 crt=62'487 lcod 62'486 unknown NOTIFY pruub 191.292755127s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:16:59 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 103 pg[9.19( v 62'487 (0'0,62'487] local-lis/les=0/0 n=6 ec=46/30 lis/c=101/53 les/c/f=102/54/0 sis=103) [2] r=0 lpr=103 pi=[53,103)/1 pct=0'0 crt=62'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:16:59 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 103 pg[9.19( v 62'487 (0'0,62'487] local-lis/les=0/0 n=6 ec=46/30 lis/c=101/53 les/c/f=102/54/0 sis=103) [2] r=0 lpr=103 pi=[53,103)/1 crt=62'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:17:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 27 08:17:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 27 08:17:00 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 27 08:17:00 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 27 08:17:00 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 104 pg[9.19( v 62'487 (0'0,62'487] local-lis/les=103/104 n=6 ec=46/30 lis/c=101/53 les/c/f=102/54/0 sis=103) [2] r=0 lpr=103 pi=[53,103)/1 crt=62'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:17:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 27 08:17:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 27 08:17:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 27 08:17:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 27 08:17:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 27 08:17:01 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 27 08:17:01 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 105 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=80/81 n=6 ec=46/30 lis/c=80/80 les/c/f=81/81/0 sis=105 pruub=13.496417046s) [0] r=-1 lpr=105 pi=[80,105)/1 crt=62'487 active pruub 180.672668457s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:01 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 105 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=80/81 n=6 ec=46/30 lis/c=80/80 les/c/f=81/81/0 sis=105 pruub=13.496206284s) [0] r=-1 lpr=105 pi=[80,105)/1 crt=62'487 unknown NOTIFY pruub 180.672668457s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:17:01 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 27 08:17:01 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=80/80 les/c/f=81/81/0 sis=105) [0] r=0 lpr=105 pi=[80,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:17:02 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 27 08:17:02 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 27 08:17:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 27 08:17:02 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 27 08:17:02 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 27 08:17:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 27 08:17:02 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 27 08:17:02 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=80/80 les/c/f=81/81/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[80,106)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:02 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=80/80 les/c/f=81/81/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[80,106)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:17:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 106 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=80/81 n=6 ec=46/30 lis/c=80/80 les/c/f=81/81/0 sis=106) [0]/[2] r=0 lpr=106 pi=[80,106)/1 crt=62'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:02 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 106 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=80/81 n=6 ec=46/30 lis/c=80/80 les/c/f=81/81/0 sis=106) [0]/[2] r=0 lpr=106 pi=[80,106)/1 crt=62'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:17:02 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 27 08:17:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:17:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 27 08:17:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 27 08:17:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 27 08:17:03 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 27 08:17:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 27 08:17:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 27 08:17:03 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 27 08:17:03 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 27 08:17:03 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 27 08:17:03 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 107 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=106/107 n=6 ec=46/30 lis/c=80/80 les/c/f=81/81/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[80,106)/1 crt=62'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:17:03 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 27 08:17:03 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 27 08:17:04 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 27 08:17:04 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 27 08:17:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 27 08:17:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 27 08:17:04 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 27 08:17:04 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 108 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=106/107 n=6 ec=46/30 lis/c=106/80 les/c/f=107/81/0 sis=108 pruub=14.997726440s) [0] async=[0] r=-1 lpr=108 pi=[80,108)/1 crt=62'487 active pruub 185.229400635s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:04 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 108 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=106/107 n=6 ec=46/30 lis/c=106/80 les/c/f=107/81/0 sis=108 pruub=14.996887207s) [0] r=-1 lpr=108 pi=[80,108)/1 crt=62'487 unknown NOTIFY pruub 185.229400635s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:17:04 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 108 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=0/0 n=6 ec=46/30 lis/c=106/80 les/c/f=107/81/0 sis=108) [0] r=0 lpr=108 pi=[80,108)/1 pct=0'0 crt=62'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:04 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 108 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=0/0 n=6 ec=46/30 lis/c=106/80 les/c/f=107/81/0 sis=108) [0] r=0 lpr=108 pi=[80,108)/1 crt=62'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:17:04 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 27 08:17:04 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 27 08:17:04 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 27 08:17:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 4 objects/s recovering
Jan 27 08:17:05 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 27 08:17:05 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 27 08:17:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 27 08:17:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 27 08:17:05 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 27 08:17:05 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 109 pg[9.1c( v 62'487 (0'0,62'487] local-lis/les=108/109 n=6 ec=46/30 lis/c=106/80 les/c/f=107/81/0 sis=108) [0] r=0 lpr=108 pi=[80,108)/1 crt=62'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:17:05 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 27 08:17:05 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 27 08:17:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 186 B/s, 4 objects/s recovering
Jan 27 08:17:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:17:08 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 27 08:17:08 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 27 08:17:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 138 B/s, 2 objects/s recovering
Jan 27 08:17:09 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 27 08:17:09 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 27 08:17:10 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 27 08:17:10 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 27 08:17:10 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 27 08:17:10 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:17:10 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 27 08:17:10 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 27 08:17:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 55 B/s, 1 objects/s recovering
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 27 08:17:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 27 08:17:11 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:17:11 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:17:11 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:17:11 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 27 08:17:11 np0005597378 podman[101650]: 2026-01-27 13:17:11.12315282 +0000 UTC m=+0.045952023 container create b3ceec59dfd9384de7a3ae0a5cc6d61d445a6cc7f237ab0332adff2d79f17567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:17:11 np0005597378 systemd[1]: Started libpod-conmon-b3ceec59dfd9384de7a3ae0a5cc6d61d445a6cc7f237ab0332adff2d79f17567.scope.
Jan 27 08:17:11 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:17:11 np0005597378 podman[101650]: 2026-01-27 13:17:11.103790548 +0000 UTC m=+0.026589781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:17:11 np0005597378 podman[101650]: 2026-01-27 13:17:11.214505318 +0000 UTC m=+0.137304541 container init b3ceec59dfd9384de7a3ae0a5cc6d61d445a6cc7f237ab0332adff2d79f17567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 08:17:11 np0005597378 podman[101650]: 2026-01-27 13:17:11.223617028 +0000 UTC m=+0.146416231 container start b3ceec59dfd9384de7a3ae0a5cc6d61d445a6cc7f237ab0332adff2d79f17567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_wilson, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:17:11 np0005597378 elegant_wilson[101666]: 167 167
Jan 27 08:17:11 np0005597378 systemd[1]: libpod-b3ceec59dfd9384de7a3ae0a5cc6d61d445a6cc7f237ab0332adff2d79f17567.scope: Deactivated successfully.
Jan 27 08:17:11 np0005597378 podman[101650]: 2026-01-27 13:17:11.237544131 +0000 UTC m=+0.160343354 container attach b3ceec59dfd9384de7a3ae0a5cc6d61d445a6cc7f237ab0332adff2d79f17567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_wilson, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:17:11 np0005597378 podman[101650]: 2026-01-27 13:17:11.239521015 +0000 UTC m=+0.162320228 container died b3ceec59dfd9384de7a3ae0a5cc6d61d445a6cc7f237ab0332adff2d79f17567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:17:11 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0c27f7a0daa8a2ec263acf3f1b7fa9f7412bbc2b90a5eb83c616721ad601280c-merged.mount: Deactivated successfully.
Jan 27 08:17:11 np0005597378 podman[101650]: 2026-01-27 13:17:11.27794161 +0000 UTC m=+0.200740813 container remove b3ceec59dfd9384de7a3ae0a5cc6d61d445a6cc7f237ab0332adff2d79f17567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_wilson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:17:11 np0005597378 systemd[1]: libpod-conmon-b3ceec59dfd9384de7a3ae0a5cc6d61d445a6cc7f237ab0332adff2d79f17567.scope: Deactivated successfully.
Jan 27 08:17:11 np0005597378 podman[101689]: 2026-01-27 13:17:11.436784201 +0000 UTC m=+0.043042313 container create 0b1585f663294a0850b07fd65065e3c58e615a6c8db9a3f08723018611a1d4d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 08:17:11 np0005597378 systemd[1]: Started libpod-conmon-0b1585f663294a0850b07fd65065e3c58e615a6c8db9a3f08723018611a1d4d8.scope.
Jan 27 08:17:11 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:17:11 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d66d9254dc01796897d3b050aad88b5c2e780eabfa96412cbb3a986acc79a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:17:11 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d66d9254dc01796897d3b050aad88b5c2e780eabfa96412cbb3a986acc79a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:17:11 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d66d9254dc01796897d3b050aad88b5c2e780eabfa96412cbb3a986acc79a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:17:11 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d66d9254dc01796897d3b050aad88b5c2e780eabfa96412cbb3a986acc79a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:17:11 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d66d9254dc01796897d3b050aad88b5c2e780eabfa96412cbb3a986acc79a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:17:11 np0005597378 podman[101689]: 2026-01-27 13:17:11.417573743 +0000 UTC m=+0.023831885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:17:11 np0005597378 podman[101689]: 2026-01-27 13:17:11.519067371 +0000 UTC m=+0.125325513 container init 0b1585f663294a0850b07fd65065e3c58e615a6c8db9a3f08723018611a1d4d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:17:11 np0005597378 podman[101689]: 2026-01-27 13:17:11.526837344 +0000 UTC m=+0.133095466 container start 0b1585f663294a0850b07fd65065e3c58e615a6c8db9a3f08723018611a1d4d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:17:11 np0005597378 podman[101689]: 2026-01-27 13:17:11.535390439 +0000 UTC m=+0.141648571 container attach 0b1585f663294a0850b07fd65065e3c58e615a6c8db9a3f08723018611a1d4d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:17:11 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 27 08:17:11 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 27 08:17:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 27 08:17:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 27 08:17:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 27 08:17:11 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 27 08:17:11 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 110 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=63/64 n=6 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=110 pruub=12.308694839s) [0] r=-1 lpr=110 pi=[63,110)/1 crt=62'485 active pruub 189.702972412s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:11 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 110 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=63/64 n=6 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=110 pruub=12.307962418s) [0] r=-1 lpr=110 pi=[63,110)/1 crt=62'485 unknown NOTIFY pruub 189.702972412s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:17:11 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=110) [0] r=0 lpr=110 pi=[63,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:17:11 np0005597378 friendly_bardeen[101706]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:17:11 np0005597378 friendly_bardeen[101706]: --> All data devices are unavailable
Jan 27 08:17:12 np0005597378 systemd[1]: libpod-0b1585f663294a0850b07fd65065e3c58e615a6c8db9a3f08723018611a1d4d8.scope: Deactivated successfully.
Jan 27 08:17:12 np0005597378 podman[101689]: 2026-01-27 13:17:12.00636642 +0000 UTC m=+0.612624572 container died 0b1585f663294a0850b07fd65065e3c58e615a6c8db9a3f08723018611a1d4d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:17:12 np0005597378 systemd[1]: var-lib-containers-storage-overlay-92d66d9254dc01796897d3b050aad88b5c2e780eabfa96412cbb3a986acc79a8-merged.mount: Deactivated successfully.
Jan 27 08:17:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 27 08:17:12 np0005597378 podman[101689]: 2026-01-27 13:17:12.051980943 +0000 UTC m=+0.658239065 container remove 0b1585f663294a0850b07fd65065e3c58e615a6c8db9a3f08723018611a1d4d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_bardeen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:17:12 np0005597378 systemd[1]: libpod-conmon-0b1585f663294a0850b07fd65065e3c58e615a6c8db9a3f08723018611a1d4d8.scope: Deactivated successfully.
Jan 27 08:17:12 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 27 08:17:12 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 27 08:17:12 np0005597378 podman[101807]: 2026-01-27 13:17:12.465309732 +0000 UTC m=+0.038450317 container create 1d7f54ae33f0c3e57e19be2cf4917cf3b850fa3ee54708fcd8518060902cfe3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_nobel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:17:12 np0005597378 systemd[1]: Started libpod-conmon-1d7f54ae33f0c3e57e19be2cf4917cf3b850fa3ee54708fcd8518060902cfe3e.scope.
Jan 27 08:17:12 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:17:12 np0005597378 podman[101807]: 2026-01-27 13:17:12.531027266 +0000 UTC m=+0.104167891 container init 1d7f54ae33f0c3e57e19be2cf4917cf3b850fa3ee54708fcd8518060902cfe3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 08:17:12 np0005597378 podman[101807]: 2026-01-27 13:17:12.537562735 +0000 UTC m=+0.110703310 container start 1d7f54ae33f0c3e57e19be2cf4917cf3b850fa3ee54708fcd8518060902cfe3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:17:12 np0005597378 cranky_nobel[101823]: 167 167
Jan 27 08:17:12 np0005597378 systemd[1]: libpod-1d7f54ae33f0c3e57e19be2cf4917cf3b850fa3ee54708fcd8518060902cfe3e.scope: Deactivated successfully.
Jan 27 08:17:12 np0005597378 podman[101807]: 2026-01-27 13:17:12.542233704 +0000 UTC m=+0.115374279 container attach 1d7f54ae33f0c3e57e19be2cf4917cf3b850fa3ee54708fcd8518060902cfe3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:17:12 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 27 08:17:12 np0005597378 podman[101807]: 2026-01-27 13:17:12.448308465 +0000 UTC m=+0.021449060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:17:12 np0005597378 podman[101807]: 2026-01-27 13:17:12.544781314 +0000 UTC m=+0.117921889 container died 1d7f54ae33f0c3e57e19be2cf4917cf3b850fa3ee54708fcd8518060902cfe3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:17:12 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 27 08:17:12 np0005597378 systemd[1]: var-lib-containers-storage-overlay-510e630ab4830956c0f8f644b91f8b4b1289f72bca9b3ec805a48f41c89f4326-merged.mount: Deactivated successfully.
Jan 27 08:17:12 np0005597378 podman[101807]: 2026-01-27 13:17:12.578593172 +0000 UTC m=+0.151733747 container remove 1d7f54ae33f0c3e57e19be2cf4917cf3b850fa3ee54708fcd8518060902cfe3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_nobel, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:17:12 np0005597378 systemd[1]: libpod-conmon-1d7f54ae33f0c3e57e19be2cf4917cf3b850fa3ee54708fcd8518060902cfe3e.scope: Deactivated successfully.
Jan 27 08:17:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 27 08:17:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 27 08:17:12 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 111 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=63/64 n=6 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=111) [0]/[2] r=0 lpr=111 pi=[63,111)/1 crt=62'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:12 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 111 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=63/64 n=6 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=111) [0]/[2] r=0 lpr=111 pi=[63,111)/1 crt=62'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:17:12 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 27 08:17:12 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[63,111)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:12 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[63,111)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:17:12 np0005597378 podman[101846]: 2026-01-27 13:17:12.716103578 +0000 UTC m=+0.040134993 container create 34b473bc08de63eb539425bbca605b5bd72b6979774fedcd0e27b7ff03cf0d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:17:12 np0005597378 systemd[1]: Started libpod-conmon-34b473bc08de63eb539425bbca605b5bd72b6979774fedcd0e27b7ff03cf0d6c.scope.
Jan 27 08:17:12 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:17:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b273218ed3643c1b40ad3a60ca13f479c0939e19481d61cd2835ddedd93276a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:17:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b273218ed3643c1b40ad3a60ca13f479c0939e19481d61cd2835ddedd93276a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:17:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b273218ed3643c1b40ad3a60ca13f479c0939e19481d61cd2835ddedd93276a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:17:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b273218ed3643c1b40ad3a60ca13f479c0939e19481d61cd2835ddedd93276a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:17:12 np0005597378 podman[101846]: 2026-01-27 13:17:12.695653396 +0000 UTC m=+0.019684831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:17:12 np0005597378 podman[101846]: 2026-01-27 13:17:12.796456624 +0000 UTC m=+0.120488049 container init 34b473bc08de63eb539425bbca605b5bd72b6979774fedcd0e27b7ff03cf0d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:17:12 np0005597378 podman[101846]: 2026-01-27 13:17:12.80177024 +0000 UTC m=+0.125801655 container start 34b473bc08de63eb539425bbca605b5bd72b6979774fedcd0e27b7ff03cf0d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:17:12 np0005597378 podman[101846]: 2026-01-27 13:17:12.804782912 +0000 UTC m=+0.128814327 container attach 34b473bc08de63eb539425bbca605b5bd72b6979774fedcd0e27b7ff03cf0d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:17:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:17:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 27 08:17:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:17:13 np0005597378 elastic_raman[101862]: {
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:    "0": [
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:        {
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "devices": [
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "/dev/loop3"
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            ],
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_name": "ceph_lv0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_size": "21470642176",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "name": "ceph_lv0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "tags": {
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.cluster_name": "ceph",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.crush_device_class": "",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.encrypted": "0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.objectstore": "bluestore",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.osd_id": "0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.type": "block",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.vdo": "0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.with_tpm": "0"
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            },
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "type": "block",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "vg_name": "ceph_vg0"
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:        }
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:    ],
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:    "1": [
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:        {
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "devices": [
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "/dev/loop4"
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            ],
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_name": "ceph_lv1",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_size": "21470642176",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "name": "ceph_lv1",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "tags": {
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.cluster_name": "ceph",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.crush_device_class": "",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.encrypted": "0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.objectstore": "bluestore",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.osd_id": "1",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.type": "block",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.vdo": "0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.with_tpm": "0"
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            },
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "type": "block",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "vg_name": "ceph_vg1"
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:        }
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:    ],
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:    "2": [
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:        {
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "devices": [
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "/dev/loop5"
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            ],
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_name": "ceph_lv2",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_size": "21470642176",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "name": "ceph_lv2",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "tags": {
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.cluster_name": "ceph",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.crush_device_class": "",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.encrypted": "0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.objectstore": "bluestore",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.osd_id": "2",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.type": "block",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.vdo": "0",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:                "ceph.with_tpm": "0"
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            },
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "type": "block",
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:            "vg_name": "ceph_vg2"
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:        }
Jan 27 08:17:13 np0005597378 elastic_raman[101862]:    ]
Jan 27 08:17:13 np0005597378 elastic_raman[101862]: }
Jan 27 08:17:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 27 08:17:13 np0005597378 systemd[1]: libpod-34b473bc08de63eb539425bbca605b5bd72b6979774fedcd0e27b7ff03cf0d6c.scope: Deactivated successfully.
Jan 27 08:17:13 np0005597378 podman[101846]: 2026-01-27 13:17:13.083550896 +0000 UTC m=+0.407582311 container died 34b473bc08de63eb539425bbca605b5bd72b6979774fedcd0e27b7ff03cf0d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_raman, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 08:17:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b273218ed3643c1b40ad3a60ca13f479c0939e19481d61cd2835ddedd93276a7-merged.mount: Deactivated successfully.
Jan 27 08:17:13 np0005597378 podman[101846]: 2026-01-27 13:17:13.128203653 +0000 UTC m=+0.452235068 container remove 34b473bc08de63eb539425bbca605b5bd72b6979774fedcd0e27b7ff03cf0d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:17:13 np0005597378 systemd[1]: libpod-conmon-34b473bc08de63eb539425bbca605b5bd72b6979774fedcd0e27b7ff03cf0d6c.scope: Deactivated successfully.
Jan 27 08:17:13 np0005597378 podman[101953]: 2026-01-27 13:17:13.584841381 +0000 UTC m=+0.052940314 container create 47b4791ba27dbf459d84d68686ffd89734d90d2de20088f5dcec556e765afd1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_chatelet, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:17:13 np0005597378 systemd[1]: Started libpod-conmon-47b4791ba27dbf459d84d68686ffd89734d90d2de20088f5dcec556e765afd1c.scope.
Jan 27 08:17:13 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:17:13 np0005597378 podman[101953]: 2026-01-27 13:17:13.555975599 +0000 UTC m=+0.024074572 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:17:13 np0005597378 podman[101953]: 2026-01-27 13:17:13.654457463 +0000 UTC m=+0.122556436 container init 47b4791ba27dbf459d84d68686ffd89734d90d2de20088f5dcec556e765afd1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 08:17:13 np0005597378 podman[101953]: 2026-01-27 13:17:13.661031883 +0000 UTC m=+0.129130816 container start 47b4791ba27dbf459d84d68686ffd89734d90d2de20088f5dcec556e765afd1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_chatelet, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:17:13 np0005597378 podman[101953]: 2026-01-27 13:17:13.66383432 +0000 UTC m=+0.131933253 container attach 47b4791ba27dbf459d84d68686ffd89734d90d2de20088f5dcec556e765afd1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:17:13 np0005597378 sleepy_chatelet[101969]: 167 167
Jan 27 08:17:13 np0005597378 systemd[1]: libpod-47b4791ba27dbf459d84d68686ffd89734d90d2de20088f5dcec556e765afd1c.scope: Deactivated successfully.
Jan 27 08:17:13 np0005597378 podman[101953]: 2026-01-27 13:17:13.665622899 +0000 UTC m=+0.133721822 container died 47b4791ba27dbf459d84d68686ffd89734d90d2de20088f5dcec556e765afd1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 08:17:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a398cd8723242d5ab6d03c6bccd8f208f9afb06729fe7cb674ef2f16d6fb7a57-merged.mount: Deactivated successfully.
Jan 27 08:17:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 27 08:17:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:17:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 27 08:17:13 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 27 08:17:13 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 112 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=68/69 n=6 ec=46/30 lis/c=68/68 les/c/f=69/69/0 sis=112 pruub=9.381750107s) [1] r=-1 lpr=112 pi=[68,112)/1 crt=36'483 active pruub 188.799026489s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:13 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 112 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=68/69 n=6 ec=46/30 lis/c=68/68 les/c/f=69/69/0 sis=112 pruub=9.381711006s) [1] r=-1 lpr=112 pi=[68,112)/1 crt=36'483 unknown NOTIFY pruub 188.799026489s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:17:13 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=68/68 les/c/f=69/69/0 sis=112) [1] r=0 lpr=112 pi=[68,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:17:13 np0005597378 podman[101953]: 2026-01-27 13:17:13.714303116 +0000 UTC m=+0.182402059 container remove 47b4791ba27dbf459d84d68686ffd89734d90d2de20088f5dcec556e765afd1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_chatelet, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:17:13 np0005597378 systemd[1]: libpod-conmon-47b4791ba27dbf459d84d68686ffd89734d90d2de20088f5dcec556e765afd1c.scope: Deactivated successfully.
Jan 27 08:17:13 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 27 08:17:13 np0005597378 podman[102001]: 2026-01-27 13:17:13.909560587 +0000 UTC m=+0.079583326 container create 431da34e79dc62a8dbb0dd2877588ae7f98fc0a24c500a4bb04ceceb63e87e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 08:17:13 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 27 08:17:13 np0005597378 systemd[1]: Started libpod-conmon-431da34e79dc62a8dbb0dd2877588ae7f98fc0a24c500a4bb04ceceb63e87e9d.scope.
Jan 27 08:17:13 np0005597378 podman[102001]: 2026-01-27 13:17:13.852639045 +0000 UTC m=+0.022661804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:17:13 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:17:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e532e5c72f6dd2788aff02a0c0139dbdd89dab771d5e958f6777a4fa12f18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:17:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e532e5c72f6dd2788aff02a0c0139dbdd89dab771d5e958f6777a4fa12f18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:17:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e532e5c72f6dd2788aff02a0c0139dbdd89dab771d5e958f6777a4fa12f18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:17:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e532e5c72f6dd2788aff02a0c0139dbdd89dab771d5e958f6777a4fa12f18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:17:13 np0005597378 podman[102001]: 2026-01-27 13:17:13.992780263 +0000 UTC m=+0.162803022 container init 431da34e79dc62a8dbb0dd2877588ae7f98fc0a24c500a4bb04ceceb63e87e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:17:13 np0005597378 podman[102001]: 2026-01-27 13:17:13.998143831 +0000 UTC m=+0.168166560 container start 431da34e79dc62a8dbb0dd2877588ae7f98fc0a24c500a4bb04ceceb63e87e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_fermi, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:17:14 np0005597378 podman[102001]: 2026-01-27 13:17:14.003149137 +0000 UTC m=+0.173171876 container attach 431da34e79dc62a8dbb0dd2877588ae7f98fc0a24c500a4bb04ceceb63e87e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_fermi, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:17:14 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 27 08:17:14 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 27 08:17:14 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 27 08:17:14 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 112 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=111/112 n=6 ec=46/30 lis/c=63/63 les/c/f=64/64/0 sis=111) [0]/[2] async=[0] r=0 lpr=111 pi=[63,111)/1 crt=62'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:17:14 np0005597378 lvm[102095]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:17:14 np0005597378 lvm[102096]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:17:14 np0005597378 lvm[102095]: VG ceph_vg0 finished
Jan 27 08:17:14 np0005597378 lvm[102096]: VG ceph_vg1 finished
Jan 27 08:17:14 np0005597378 lvm[102098]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:17:14 np0005597378 lvm[102098]: VG ceph_vg2 finished
Jan 27 08:17:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 27 08:17:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 27 08:17:14 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 27 08:17:14 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 113 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=68/69 n=6 ec=46/30 lis/c=68/68 les/c/f=69/69/0 sis=113) [1]/[2] r=0 lpr=113 pi=[68,113)/1 crt=36'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:14 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 113 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=68/69 n=6 ec=46/30 lis/c=68/68 les/c/f=69/69/0 sis=113) [1]/[2] r=0 lpr=113 pi=[68,113)/1 crt=36'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 27 08:17:14 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 113 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=111/112 n=6 ec=46/30 lis/c=111/63 les/c/f=112/64/0 sis=113 pruub=15.817802429s) [0] async=[0] r=-1 lpr=113 pi=[63,113)/1 crt=62'485 active pruub 196.248031616s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:14 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 113 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=111/112 n=6 ec=46/30 lis/c=111/63 les/c/f=112/64/0 sis=113 pruub=15.817653656s) [0] r=-1 lpr=113 pi=[63,113)/1 crt=62'485 unknown NOTIFY pruub 196.248031616s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:17:14 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 113 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=68/68 les/c/f=69/69/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[68,113)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:14 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 113 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=68/68 les/c/f=69/69/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[68,113)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 27 08:17:14 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 113 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=0/0 n=6 ec=46/30 lis/c=111/63 les/c/f=112/64/0 sis=113) [0] r=0 lpr=113 pi=[63,113)/1 pct=0'0 crt=62'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:14 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 113 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=0/0 n=6 ec=46/30 lis/c=111/63 les/c/f=112/64/0 sis=113) [0] r=0 lpr=113 pi=[63,113)/1 crt=62'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:17:14 np0005597378 festive_fermi[102017]: {}
Jan 27 08:17:14 np0005597378 systemd[1]: libpod-431da34e79dc62a8dbb0dd2877588ae7f98fc0a24c500a4bb04ceceb63e87e9d.scope: Deactivated successfully.
Jan 27 08:17:14 np0005597378 systemd[1]: libpod-431da34e79dc62a8dbb0dd2877588ae7f98fc0a24c500a4bb04ceceb63e87e9d.scope: Consumed 1.222s CPU time.
Jan 27 08:17:14 np0005597378 podman[102001]: 2026-01-27 13:17:14.770150439 +0000 UTC m=+0.940173188 container died 431da34e79dc62a8dbb0dd2877588ae7f98fc0a24c500a4bb04ceceb63e87e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_fermi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:17:14 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c17e532e5c72f6dd2788aff02a0c0139dbdd89dab771d5e958f6777a4fa12f18-merged.mount: Deactivated successfully.
Jan 27 08:17:14 np0005597378 podman[102001]: 2026-01-27 13:17:14.823826223 +0000 UTC m=+0.993849002 container remove 431da34e79dc62a8dbb0dd2877588ae7f98fc0a24c500a4bb04ceceb63e87e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:17:14 np0005597378 systemd[1]: libpod-conmon-431da34e79dc62a8dbb0dd2877588ae7f98fc0a24c500a4bb04ceceb63e87e9d.scope: Deactivated successfully.
Jan 27 08:17:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:17:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:17:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:17:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:17:14 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 27 08:17:14 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 27 08:17:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Jan 27 08:17:15 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:17:15 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:17:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 27 08:17:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 27 08:17:15 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 27 08:17:15 np0005597378 ceph-osd[85897]: osd.0 pg_epoch: 114 pg[9.1e( v 62'485 (0'0,62'485] local-lis/les=113/114 n=6 ec=46/30 lis/c=111/63 les/c/f=112/64/0 sis=113) [0] r=0 lpr=113 pi=[63,113)/1 crt=62'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:17:15 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 114 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=113/114 n=6 ec=46/30 lis/c=68/68 les/c/f=69/69/0 sis=113) [1]/[2] async=[1] r=0 lpr=113 pi=[68,113)/1 crt=36'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:17:16 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 27 08:17:16 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 27 08:17:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 27 08:17:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:17:16
Jan 27 08:17:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:17:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Jan 27 08:17:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 35 B/s, 1 objects/s recovering
Jan 27 08:17:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 27 08:17:17 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 27 08:17:17 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 115 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=113/114 n=6 ec=46/30 lis/c=113/68 les/c/f=114/69/0 sis=115 pruub=14.872119904s) [1] async=[1] r=-1 lpr=115 pi=[68,115)/1 crt=36'483 active pruub 197.661422729s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:17 np0005597378 ceph-osd[88005]: osd.2 pg_epoch: 115 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=113/114 n=6 ec=46/30 lis/c=113/68 les/c/f=114/69/0 sis=115 pruub=14.871969223s) [1] r=-1 lpr=115 pi=[68,115)/1 crt=36'483 unknown NOTIFY pruub 197.661422729s@ mbc={}] state<Start>: transitioning to Stray
Jan 27 08:17:17 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 115 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=113/68 les/c/f=114/69/0 sis=115) [1] r=0 lpr=115 pi=[68,115)/1 pct=0'0 crt=36'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 27 08:17:17 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 115 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=0/0 n=6 ec=46/30 lis/c=113/68 les/c/f=114/69/0 sis=115) [1] r=0 lpr=115 pi=[68,115)/1 crt=36'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:17:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:17:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:17:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 27 08:17:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 27 08:17:18 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 27 08:17:18 np0005597378 ceph-osd[86941]: osd.1 pg_epoch: 116 pg[9.1f( v 36'483 (0'0,36'483] local-lis/les=115/116 n=6 ec=46/30 lis/c=113/68 les/c/f=114/69/0 sis=115) [1] r=0 lpr=115 pi=[68,115)/1 crt=36'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 27 08:17:18 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 27 08:17:18 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 27 08:17:18 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 27 08:17:18 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 27 08:17:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 2 objects/s recovering
Jan 27 08:17:19 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 27 08:17:19 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 27 08:17:19 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 27 08:17:19 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 27 08:17:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 1 peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 27 08:17:21 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 27 08:17:21 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 27 08:17:21 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 27 08:17:21 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 27 08:17:22 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 27 08:17:22 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 27 08:17:22 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 27 08:17:22 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 27 08:17:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:17:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Jan 27 08:17:23 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 27 08:17:23 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 27 08:17:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Jan 27 08:17:25 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 27 08:17:25 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 27 08:17:26 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 27 08:17:26 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 27 08:17:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:17:27 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 27 08:17:27 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 27 08:17:27 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 27 08:17:27 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 27 08:17:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:17:28 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 27 08:17:28 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 27 08:17:28 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 27 08:17:28 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 27 08:17:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:29 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 27 08:17:29 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 27 08:17:30 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 27 08:17:30 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 27 08:17:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:31 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 27 08:17:31 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 27 08:17:32 np0005597378 python3.9[102295]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:17:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:17:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:33 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 27 08:17:33 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 27 08:17:33 np0005597378 python3.9[102582]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 27 08:17:34 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 27 08:17:34 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 27 08:17:34 np0005597378 python3.9[102736]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 27 08:17:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:35 np0005597378 python3.9[102888]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:17:36 np0005597378 python3.9[103040]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 27 08:17:36 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 27 08:17:36 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 27 08:17:36 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 27 08:17:36 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 27 08:17:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:37 np0005597378 python3.9[103192]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:17:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:17:38 np0005597378 python3.9[103344]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:17:38 np0005597378 python3.9[103422]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:17:38 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 27 08:17:38 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 27 08:17:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:39 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 27 08:17:39 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 27 08:17:39 np0005597378 python3.9[103574]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:17:39 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 27 08:17:39 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 27 08:17:40 np0005597378 python3.9[103728]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 27 08:17:40 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 27 08:17:40 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 27 08:17:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:41 np0005597378 python3.9[103881]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 27 08:17:42 np0005597378 python3.9[104034]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 08:17:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:17:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:43 np0005597378 python3.9[104186]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 27 08:17:43 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 27 08:17:43 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 27 08:17:43 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 27 08:17:43 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 27 08:17:44 np0005597378 python3.9[104338]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:17:44 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 27 08:17:44 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 27 08:17:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:45 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 27 08:17:45 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 27 08:17:45 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 27 08:17:45 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 27 08:17:45 np0005597378 python3.9[104491]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:17:46 np0005597378 python3.9[104643]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:17:46 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 27 08:17:46 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 27 08:17:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:47 np0005597378 python3.9[104721]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:17:47 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 27 08:17:47 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 27 08:17:47 np0005597378 python3.9[104873]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:17:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:17:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:17:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:17:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:17:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:17:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:17:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:17:48 np0005597378 python3.9[104951]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:17:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:49 np0005597378 python3.9[105103]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:17:49 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 27 08:17:49 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 27 08:17:49 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 27 08:17:49 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 27 08:17:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:51 np0005597378 python3.9[105254]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:17:51 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 27 08:17:51 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 27 08:17:51 np0005597378 python3.9[105406]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 27 08:17:52 np0005597378 python3.9[105556]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:17:52 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 27 08:17:52 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 27 08:17:52 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 27 08:17:52 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 27 08:17:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:17:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:53 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 27 08:17:53 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 27 08:17:53 np0005597378 python3.9[105708]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:17:54 np0005597378 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 27 08:17:54 np0005597378 systemd[1]: tuned.service: Deactivated successfully.
Jan 27 08:17:54 np0005597378 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 27 08:17:54 np0005597378 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 27 08:17:54 np0005597378 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 27 08:17:54 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 27 08:17:54 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 27 08:17:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:55 np0005597378 python3.9[105870]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 27 08:17:55 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 27 08:17:55 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 27 08:17:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:57 np0005597378 python3.9[106022]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:17:57 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 27 08:17:57 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 27 08:17:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:17:58 np0005597378 python3.9[106176]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:17:58 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 27 08:17:58 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 27 08:17:58 np0005597378 systemd[1]: session-34.scope: Deactivated successfully.
Jan 27 08:17:58 np0005597378 systemd[1]: session-34.scope: Consumed 1min 2.948s CPU time.
Jan 27 08:17:58 np0005597378 systemd-logind[786]: Session 34 logged out. Waiting for processes to exit.
Jan 27 08:17:58 np0005597378 systemd-logind[786]: Removed session 34.
Jan 27 08:17:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:17:59 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 27 08:17:59 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 27 08:17:59 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 27 08:17:59 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 27 08:18:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:01 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 27 08:18:01 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 27 08:18:02 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 27 08:18:02 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 27 08:18:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:18:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:04 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 27 08:18:04 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 27 08:18:04 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 27 08:18:04 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 27 08:18:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:05 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 27 08:18:05 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 27 08:18:05 np0005597378 systemd-logind[786]: New session 35 of user zuul.
Jan 27 08:18:05 np0005597378 systemd[1]: Started Session 35 of User zuul.
Jan 27 08:18:06 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 27 08:18:06 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 27 08:18:06 np0005597378 python3.9[106356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:18:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:07 np0005597378 python3.9[106512]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 27 08:18:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:18:08 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 27 08:18:08 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 27 08:18:08 np0005597378 python3.9[106665]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:18:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:09 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 27 08:18:09 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 27 08:18:09 np0005597378 python3.9[106749]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 08:18:10 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 27 08:18:10 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 27 08:18:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:11 np0005597378 python3.9[106902]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:18:12 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 27 08:18:12 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 27 08:18:12 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 27 08:18:12 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 27 08:18:12 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 27 08:18:12 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 27 08:18:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:18:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:13 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 27 08:18:13 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 27 08:18:14 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 27 08:18:14 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 27 08:18:14 np0005597378 python3.9[107055]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:18:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:15 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 27 08:18:15 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 27 08:18:15 np0005597378 python3.9[107258]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:18:15 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 27 08:18:15 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 27 08:18:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:18:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:18:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:18:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:18:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:18:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:18:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:18:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:18:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:18:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:18:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:18:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:18:16 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:18:16 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:18:16 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:18:16 np0005597378 podman[107474]: 2026-01-27 13:18:16.09887331 +0000 UTC m=+0.084248185 container create 91776dfd17d56f65e4a222713b626aaea46b5e0097be3a9c463c00f6c862654a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:18:16 np0005597378 podman[107474]: 2026-01-27 13:18:16.035772817 +0000 UTC m=+0.021147712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:18:16 np0005597378 systemd[1]: Started libpod-conmon-91776dfd17d56f65e4a222713b626aaea46b5e0097be3a9c463c00f6c862654a.scope.
Jan 27 08:18:16 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:18:16 np0005597378 python3.9[107516]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 27 08:18:16 np0005597378 podman[107474]: 2026-01-27 13:18:16.474507875 +0000 UTC m=+0.459882750 container init 91776dfd17d56f65e4a222713b626aaea46b5e0097be3a9c463c00f6c862654a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 08:18:16 np0005597378 podman[107474]: 2026-01-27 13:18:16.483262085 +0000 UTC m=+0.468636950 container start 91776dfd17d56f65e4a222713b626aaea46b5e0097be3a9c463c00f6c862654a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sanderson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:18:16 np0005597378 systemd[1]: libpod-91776dfd17d56f65e4a222713b626aaea46b5e0097be3a9c463c00f6c862654a.scope: Deactivated successfully.
Jan 27 08:18:16 np0005597378 romantic_sanderson[107519]: 167 167
Jan 27 08:18:16 np0005597378 conmon[107519]: conmon 91776dfd17d56f65e4a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91776dfd17d56f65e4a222713b626aaea46b5e0097be3a9c463c00f6c862654a.scope/container/memory.events
Jan 27 08:18:16 np0005597378 podman[107474]: 2026-01-27 13:18:16.575693463 +0000 UTC m=+0.561068338 container attach 91776dfd17d56f65e4a222713b626aaea46b5e0097be3a9c463c00f6c862654a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:18:16 np0005597378 podman[107474]: 2026-01-27 13:18:16.577199664 +0000 UTC m=+0.562574579 container died 91776dfd17d56f65e4a222713b626aaea46b5e0097be3a9c463c00f6c862654a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sanderson, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:18:16 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7550e19b4c20bb5ceda2aa42bd4df48cbaeaeaf14bc2eb401f9e45e6f8f446ae-merged.mount: Deactivated successfully.
Jan 27 08:18:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:18:16
Jan 27 08:18:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:18:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:18:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta']
Jan 27 08:18:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:17 np0005597378 podman[107474]: 2026-01-27 13:18:17.282906412 +0000 UTC m=+1.268281287 container remove 91776dfd17d56f65e4a222713b626aaea46b5e0097be3a9c463c00f6c862654a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sanderson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:18:17 np0005597378 systemd[1]: libpod-conmon-91776dfd17d56f65e4a222713b626aaea46b5e0097be3a9c463c00f6c862654a.scope: Deactivated successfully.
Jan 27 08:18:17 np0005597378 python3.9[107685]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:18:17 np0005597378 podman[107693]: 2026-01-27 13:18:17.471797349 +0000 UTC m=+0.090563888 container create edd70e71c5fc41febad02c0e1f6228e6821130c1b891619e55c2cb5d5673bbbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_babbage, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:18:17 np0005597378 podman[107693]: 2026-01-27 13:18:17.401684534 +0000 UTC m=+0.020451093 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:18:17 np0005597378 systemd[1]: Started libpod-conmon-edd70e71c5fc41febad02c0e1f6228e6821130c1b891619e55c2cb5d5673bbbd.scope.
Jan 27 08:18:17 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:18:17 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02870e21f21b3a997acf744f3c7e8de6bdecdfdbca70e16c7a996cf6848954e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:18:17 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02870e21f21b3a997acf744f3c7e8de6bdecdfdbca70e16c7a996cf6848954e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:18:17 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02870e21f21b3a997acf744f3c7e8de6bdecdfdbca70e16c7a996cf6848954e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:18:17 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02870e21f21b3a997acf744f3c7e8de6bdecdfdbca70e16c7a996cf6848954e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:18:17 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02870e21f21b3a997acf744f3c7e8de6bdecdfdbca70e16c7a996cf6848954e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:18:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:18:17 np0005597378 podman[107693]: 2026-01-27 13:18:17.832193956 +0000 UTC m=+0.450960535 container init edd70e71c5fc41febad02c0e1f6228e6821130c1b891619e55c2cb5d5673bbbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_babbage, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:18:17 np0005597378 podman[107693]: 2026-01-27 13:18:17.84618328 +0000 UTC m=+0.464949819 container start edd70e71c5fc41febad02c0e1f6228e6821130c1b891619e55c2cb5d5673bbbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_babbage, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:18:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:18:17 np0005597378 podman[107693]: 2026-01-27 13:18:17.906843586 +0000 UTC m=+0.525610125 container attach edd70e71c5fc41febad02c0e1f6228e6821130c1b891619e55c2cb5d5673bbbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:18:18 np0005597378 hardcore_babbage[107716]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:18:18 np0005597378 hardcore_babbage[107716]: --> All data devices are unavailable
Jan 27 08:18:18 np0005597378 systemd[1]: libpod-edd70e71c5fc41febad02c0e1f6228e6821130c1b891619e55c2cb5d5673bbbd.scope: Deactivated successfully.
Jan 27 08:18:18 np0005597378 podman[107888]: 2026-01-27 13:18:18.397230806 +0000 UTC m=+0.032100469 container died edd70e71c5fc41febad02c0e1f6228e6821130c1b891619e55c2cb5d5673bbbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:18:18 np0005597378 python3.9[107879]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:18:18 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 27 08:18:18 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 27 08:18:18 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e02870e21f21b3a997acf744f3c7e8de6bdecdfdbca70e16c7a996cf6848954e-merged.mount: Deactivated successfully.
Jan 27 08:18:18 np0005597378 podman[107888]: 2026-01-27 13:18:18.501552086 +0000 UTC m=+0.136421749 container remove edd70e71c5fc41febad02c0e1f6228e6821130c1b891619e55c2cb5d5673bbbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_babbage, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:18:18 np0005597378 systemd[1]: libpod-conmon-edd70e71c5fc41febad02c0e1f6228e6821130c1b891619e55c2cb5d5673bbbd.scope: Deactivated successfully.
Jan 27 08:18:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:19 np0005597378 podman[107967]: 2026-01-27 13:18:19.012058134 +0000 UTC m=+0.039179268 container create 5c588fe9271a32248446de4eb5e376ad4ffb04540650815e5e9bf55737b10cc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 08:18:19 np0005597378 systemd[1]: Started libpod-conmon-5c588fe9271a32248446de4eb5e376ad4ffb04540650815e5e9bf55737b10cc0.scope.
Jan 27 08:18:19 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:18:19 np0005597378 podman[107967]: 2026-01-27 13:18:18.994922594 +0000 UTC m=+0.022043758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:18:19 np0005597378 podman[107967]: 2026-01-27 13:18:19.124065439 +0000 UTC m=+0.151186593 container init 5c588fe9271a32248446de4eb5e376ad4ffb04540650815e5e9bf55737b10cc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 08:18:19 np0005597378 podman[107967]: 2026-01-27 13:18:19.129659945 +0000 UTC m=+0.156781079 container start 5c588fe9271a32248446de4eb5e376ad4ffb04540650815e5e9bf55737b10cc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_franklin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 27 08:18:19 np0005597378 amazing_franklin[107984]: 167 167
Jan 27 08:18:19 np0005597378 systemd[1]: libpod-5c588fe9271a32248446de4eb5e376ad4ffb04540650815e5e9bf55737b10cc0.scope: Deactivated successfully.
Jan 27 08:18:19 np0005597378 podman[107967]: 2026-01-27 13:18:19.145957682 +0000 UTC m=+0.173078856 container attach 5c588fe9271a32248446de4eb5e376ad4ffb04540650815e5e9bf55737b10cc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 08:18:19 np0005597378 podman[107967]: 2026-01-27 13:18:19.146675761 +0000 UTC m=+0.173796955 container died 5c588fe9271a32248446de4eb5e376ad4ffb04540650815e5e9bf55737b10cc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_franklin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:18:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d262e554ccd1be2325f9349539b4c21cfb2a3dc38744ed4ec5af934bf85b4398-merged.mount: Deactivated successfully.
Jan 27 08:18:19 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 27 08:18:19 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 27 08:18:19 np0005597378 podman[107967]: 2026-01-27 13:18:19.447633164 +0000 UTC m=+0.474754298 container remove 5c588fe9271a32248446de4eb5e376ad4ffb04540650815e5e9bf55737b10cc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 08:18:19 np0005597378 systemd[1]: libpod-conmon-5c588fe9271a32248446de4eb5e376ad4ffb04540650815e5e9bf55737b10cc0.scope: Deactivated successfully.
Jan 27 08:18:19 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 27 08:18:19 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 27 08:18:19 np0005597378 podman[108007]: 2026-01-27 13:18:19.616253604 +0000 UTC m=+0.039729733 container create 0adf8c1e893abefbc21a1eea2622da0e5d83cdb34155ee1ac0cb857f9ccdfba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_neumann, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:18:19 np0005597378 systemd[1]: Started libpod-conmon-0adf8c1e893abefbc21a1eea2622da0e5d83cdb34155ee1ac0cb857f9ccdfba6.scope.
Jan 27 08:18:19 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:18:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746362c589dd37b628127ad1d511f8d6181823b96145f37a9ffac02f7cfbf108/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:18:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746362c589dd37b628127ad1d511f8d6181823b96145f37a9ffac02f7cfbf108/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:18:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746362c589dd37b628127ad1d511f8d6181823b96145f37a9ffac02f7cfbf108/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:18:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746362c589dd37b628127ad1d511f8d6181823b96145f37a9ffac02f7cfbf108/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:18:19 np0005597378 podman[108007]: 2026-01-27 13:18:19.598017844 +0000 UTC m=+0.021493983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:18:19 np0005597378 podman[108007]: 2026-01-27 13:18:19.706206332 +0000 UTC m=+0.129682481 container init 0adf8c1e893abefbc21a1eea2622da0e5d83cdb34155ee1ac0cb857f9ccdfba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:18:19 np0005597378 podman[108007]: 2026-01-27 13:18:19.713863116 +0000 UTC m=+0.137339245 container start 0adf8c1e893abefbc21a1eea2622da0e5d83cdb34155ee1ac0cb857f9ccdfba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_neumann, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:18:19 np0005597378 podman[108007]: 2026-01-27 13:18:19.732813476 +0000 UTC m=+0.156289625 container attach 0adf8c1e893abefbc21a1eea2622da0e5d83cdb34155ee1ac0cb857f9ccdfba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 08:18:19 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 27 08:18:19 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]: {
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:    "0": [
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:        {
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "devices": [
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "/dev/loop3"
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            ],
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_name": "ceph_lv0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_size": "21470642176",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "name": "ceph_lv0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "tags": {
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.cluster_name": "ceph",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.crush_device_class": "",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.encrypted": "0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.objectstore": "bluestore",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.osd_id": "0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.type": "block",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.vdo": "0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.with_tpm": "0"
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            },
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "type": "block",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "vg_name": "ceph_vg0"
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:        }
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:    ],
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:    "1": [
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:        {
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "devices": [
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "/dev/loop4"
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            ],
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_name": "ceph_lv1",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_size": "21470642176",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "name": "ceph_lv1",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "tags": {
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.cluster_name": "ceph",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.crush_device_class": "",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.encrypted": "0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.objectstore": "bluestore",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.osd_id": "1",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.type": "block",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.vdo": "0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.with_tpm": "0"
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            },
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "type": "block",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "vg_name": "ceph_vg1"
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:        }
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:    ],
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:    "2": [
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:        {
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "devices": [
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "/dev/loop5"
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            ],
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_name": "ceph_lv2",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_size": "21470642176",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "name": "ceph_lv2",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "tags": {
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.cluster_name": "ceph",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.crush_device_class": "",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.encrypted": "0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.objectstore": "bluestore",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.osd_id": "2",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.type": "block",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.vdo": "0",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:                "ceph.with_tpm": "0"
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            },
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "type": "block",
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:            "vg_name": "ceph_vg2"
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:        }
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]:    ]
Jan 27 08:18:19 np0005597378 optimistic_neumann[108023]: }
Jan 27 08:18:20 np0005597378 systemd[1]: libpod-0adf8c1e893abefbc21a1eea2622da0e5d83cdb34155ee1ac0cb857f9ccdfba6.scope: Deactivated successfully.
Jan 27 08:18:20 np0005597378 podman[108007]: 2026-01-27 13:18:20.015357494 +0000 UTC m=+0.438833633 container died 0adf8c1e893abefbc21a1eea2622da0e5d83cdb34155ee1ac0cb857f9ccdfba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:18:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay-746362c589dd37b628127ad1d511f8d6181823b96145f37a9ffac02f7cfbf108-merged.mount: Deactivated successfully.
Jan 27 08:18:20 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 27 08:18:20 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 27 08:18:20 np0005597378 python3.9[108194]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:18:20 np0005597378 podman[108007]: 2026-01-27 13:18:20.599070331 +0000 UTC m=+1.022546460 container remove 0adf8c1e893abefbc21a1eea2622da0e5d83cdb34155ee1ac0cb857f9ccdfba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_neumann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:18:20 np0005597378 systemd[1]: libpod-conmon-0adf8c1e893abefbc21a1eea2622da0e5d83cdb34155ee1ac0cb857f9ccdfba6.scope: Deactivated successfully.
Jan 27 08:18:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:21 np0005597378 podman[108393]: 2026-01-27 13:18:21.104087395 +0000 UTC m=+0.040020430 container create 7b20b555f5c991aa6a760ce10a3972b21d734e060773d7da63356dbb41fa924a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 08:18:21 np0005597378 systemd[1]: Started libpod-conmon-7b20b555f5c991aa6a760ce10a3972b21d734e060773d7da63356dbb41fa924a.scope.
Jan 27 08:18:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:18:21 np0005597378 podman[108393]: 2026-01-27 13:18:21.176881503 +0000 UTC m=+0.112814568 container init 7b20b555f5c991aa6a760ce10a3972b21d734e060773d7da63356dbb41fa924a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dhawan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:18:21 np0005597378 podman[108393]: 2026-01-27 13:18:21.08492619 +0000 UTC m=+0.020859245 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:18:21 np0005597378 podman[108393]: 2026-01-27 13:18:21.1846294 +0000 UTC m=+0.120562435 container start 7b20b555f5c991aa6a760ce10a3972b21d734e060773d7da63356dbb41fa924a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 27 08:18:21 np0005597378 podman[108393]: 2026-01-27 13:18:21.188608261 +0000 UTC m=+0.124541306 container attach 7b20b555f5c991aa6a760ce10a3972b21d734e060773d7da63356dbb41fa924a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:18:21 np0005597378 kind_dhawan[108410]: 167 167
Jan 27 08:18:21 np0005597378 systemd[1]: libpod-7b20b555f5c991aa6a760ce10a3972b21d734e060773d7da63356dbb41fa924a.scope: Deactivated successfully.
Jan 27 08:18:21 np0005597378 podman[108393]: 2026-01-27 13:18:21.191663227 +0000 UTC m=+0.127596262 container died 7b20b555f5c991aa6a760ce10a3972b21d734e060773d7da63356dbb41fa924a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:18:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-747c3940deea207611e2f38cf713904e34cd98a41792b40e2642c609a77c89b3-merged.mount: Deactivated successfully.
Jan 27 08:18:21 np0005597378 podman[108393]: 2026-01-27 13:18:21.230604087 +0000 UTC m=+0.166537122 container remove 7b20b555f5c991aa6a760ce10a3972b21d734e060773d7da63356dbb41fa924a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dhawan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:18:21 np0005597378 systemd[1]: libpod-conmon-7b20b555f5c991aa6a760ce10a3972b21d734e060773d7da63356dbb41fa924a.scope: Deactivated successfully.
Jan 27 08:18:21 np0005597378 podman[108481]: 2026-01-27 13:18:21.384954986 +0000 UTC m=+0.054691871 container create 03a9434328bfa6957944a94256378a75e01641aadd2a56f5eabf777b14667cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ramanujan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:18:21 np0005597378 systemd[1]: Started libpod-conmon-03a9434328bfa6957944a94256378a75e01641aadd2a56f5eabf777b14667cd8.scope.
Jan 27 08:18:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:18:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f33f31b42184194cccd39a35c0a2a1e1e7da50bbe414bbb89777f6247b1463c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:18:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f33f31b42184194cccd39a35c0a2a1e1e7da50bbe414bbb89777f6247b1463c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:18:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f33f31b42184194cccd39a35c0a2a1e1e7da50bbe414bbb89777f6247b1463c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:18:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f33f31b42184194cccd39a35c0a2a1e1e7da50bbe414bbb89777f6247b1463c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:18:21 np0005597378 podman[108481]: 2026-01-27 13:18:21.354212656 +0000 UTC m=+0.023949561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:18:21 np0005597378 podman[108481]: 2026-01-27 13:18:21.451027606 +0000 UTC m=+0.120764511 container init 03a9434328bfa6957944a94256378a75e01641aadd2a56f5eabf777b14667cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ramanujan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:18:21 np0005597378 podman[108481]: 2026-01-27 13:18:21.456667893 +0000 UTC m=+0.126404778 container start 03a9434328bfa6957944a94256378a75e01641aadd2a56f5eabf777b14667cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:18:21 np0005597378 podman[108481]: 2026-01-27 13:18:21.463625888 +0000 UTC m=+0.133362773 container attach 03a9434328bfa6957944a94256378a75e01641aadd2a56f5eabf777b14667cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:18:21 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 27 08:18:21 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 27 08:18:21 np0005597378 python3.9[108616]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 27 08:18:22 np0005597378 lvm[108714]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:18:22 np0005597378 lvm[108714]: VG ceph_vg0 finished
Jan 27 08:18:22 np0005597378 lvm[108725]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:18:22 np0005597378 lvm[108725]: VG ceph_vg1 finished
Jan 27 08:18:22 np0005597378 lvm[108731]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:18:22 np0005597378 lvm[108731]: VG ceph_vg2 finished
Jan 27 08:18:22 np0005597378 strange_ramanujan[108526]: {}
Jan 27 08:18:22 np0005597378 systemd[1]: libpod-03a9434328bfa6957944a94256378a75e01641aadd2a56f5eabf777b14667cd8.scope: Deactivated successfully.
Jan 27 08:18:22 np0005597378 systemd[1]: libpod-03a9434328bfa6957944a94256378a75e01641aadd2a56f5eabf777b14667cd8.scope: Consumed 1.239s CPU time.
Jan 27 08:18:22 np0005597378 podman[108481]: 2026-01-27 13:18:22.235023058 +0000 UTC m=+0.904759953 container died 03a9434328bfa6957944a94256378a75e01641aadd2a56f5eabf777b14667cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:18:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0f33f31b42184194cccd39a35c0a2a1e1e7da50bbe414bbb89777f6247b1463c-merged.mount: Deactivated successfully.
Jan 27 08:18:22 np0005597378 podman[108481]: 2026-01-27 13:18:22.277774555 +0000 UTC m=+0.947511440 container remove 03a9434328bfa6957944a94256378a75e01641aadd2a56f5eabf777b14667cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ramanujan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:18:22 np0005597378 systemd[1]: libpod-conmon-03a9434328bfa6957944a94256378a75e01641aadd2a56f5eabf777b14667cd8.scope: Deactivated successfully.
Jan 27 08:18:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:18:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:18:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:18:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:18:22 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 27 08:18:22 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 27 08:18:22 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 27 08:18:22 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 27 08:18:22 np0005597378 python3.9[108871]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:18:22 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 27 08:18:22 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 27 08:18:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:18:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:23 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:18:23 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:18:23 np0005597378 python3.9[109025]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:18:24 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 27 08:18:24 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 27 08:18:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:25 np0005597378 python3.9[109178]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:18:25 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 27 08:18:25 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 27 08:18:25 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 27 08:18:25 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 27 08:18:26 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 27 08:18:26 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:18:27 np0005597378 python3.9[109331]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:18:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:18:28 np0005597378 python3.9[109485]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 27 08:18:28 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 27 08:18:28 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 27 08:18:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:29 np0005597378 systemd-logind[786]: Session 35 logged out. Waiting for processes to exit.
Jan 27 08:18:29 np0005597378 systemd[1]: session-35.scope: Deactivated successfully.
Jan 27 08:18:29 np0005597378 systemd[1]: session-35.scope: Consumed 17.273s CPU time.
Jan 27 08:18:29 np0005597378 systemd-logind[786]: Removed session 35.
Jan 27 08:18:29 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 27 08:18:29 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 27 08:18:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:31 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 27 08:18:31 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 27 08:18:32 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 27 08:18:32 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 27 08:18:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:18:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:33 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 27 08:18:33 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 27 08:18:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:35 np0005597378 systemd-logind[786]: New session 36 of user zuul.
Jan 27 08:18:35 np0005597378 systemd[1]: Started Session 36 of User zuul.
Jan 27 08:18:36 np0005597378 python3.9[109663]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:18:36 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 27 08:18:36 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 27 08:18:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:37 np0005597378 python3.9[109817]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:18:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:18:38 np0005597378 python3.9[110010]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:18:38 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 27 08:18:38 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 27 08:18:38 np0005597378 systemd[1]: session-36.scope: Deactivated successfully.
Jan 27 08:18:38 np0005597378 systemd[1]: session-36.scope: Consumed 2.049s CPU time.
Jan 27 08:18:38 np0005597378 systemd-logind[786]: Session 36 logged out. Waiting for processes to exit.
Jan 27 08:18:38 np0005597378 systemd-logind[786]: Removed session 36.
Jan 27 08:18:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:39 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 27 08:18:39 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 27 08:18:39 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 27 08:18:39 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 27 08:18:40 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 27 08:18:40 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 27 08:18:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:41 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 27 08:18:41 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 27 08:18:41 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 27 08:18:41 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 27 08:18:42 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 27 08:18:42 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 27 08:18:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:18:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:44 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 27 08:18:44 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 27 08:18:44 np0005597378 systemd-logind[786]: New session 37 of user zuul.
Jan 27 08:18:44 np0005597378 systemd[1]: Started Session 37 of User zuul.
Jan 27 08:18:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:45 np0005597378 python3.9[110189]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:18:46 np0005597378 python3.9[110343]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:18:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:47 np0005597378 systemd[76472]: Created slice User Background Tasks Slice.
Jan 27 08:18:47 np0005597378 systemd[76472]: Starting Cleanup of User's Temporary Files and Directories...
Jan 27 08:18:47 np0005597378 systemd[76472]: Finished Cleanup of User's Temporary Files and Directories.
Jan 27 08:18:47 np0005597378 python3.9[110499]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:18:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:18:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:18:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:18:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:18:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:18:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:18:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:18:48 np0005597378 python3.9[110584]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:18:48 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 27 08:18:48 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 27 08:18:48 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 27 08:18:48 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 27 08:18:48 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 27 08:18:48 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 27 08:18:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:50 np0005597378 python3.9[110737]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:18:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:51 np0005597378 python3.9[110932]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:18:52 np0005597378 python3.9[111086]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:18:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:18:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:53 np0005597378 python3.9[111251]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:18:53 np0005597378 python3.9[111329]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:18:53 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 27 08:18:53 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 27 08:18:53 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 27 08:18:53 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 27 08:18:54 np0005597378 python3.9[111481]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:18:54 np0005597378 python3.9[111559]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:18:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:55 np0005597378 python3.9[111711]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:18:56 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 27 08:18:56 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 27 08:18:56 np0005597378 python3.9[111863]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:18:56 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 27 08:18:56 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 27 08:18:56 np0005597378 python3.9[112015]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:18:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:57 np0005597378 python3.9[112167]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:18:57 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 27 08:18:57 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 27 08:18:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:18:58 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 27 08:18:58 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 27 08:18:58 np0005597378 python3.9[112319]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:18:58 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 27 08:18:58 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 27 08:18:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:18:59 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 27 08:18:59 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 27 08:18:59 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 27 08:18:59 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 27 08:19:00 np0005597378 python3.9[112472]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:19:00 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 27 08:19:00 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 27 08:19:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:01 np0005597378 python3.9[112626]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:19:01 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 27 08:19:01 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 27 08:19:02 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 27 08:19:02 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 27 08:19:02 np0005597378 python3.9[112778]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:19:02 np0005597378 python3.9[112930]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:19:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:19:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:03 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 27 08:19:03 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 27 08:19:03 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 27 08:19:03 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 27 08:19:03 np0005597378 python3.9[113083]: ansible-service_facts Invoked
Jan 27 08:19:03 np0005597378 network[113100]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:19:03 np0005597378 network[113101]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:19:03 np0005597378 network[113102]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:19:04 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 27 08:19:04 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 27 08:19:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:05 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 27 08:19:05 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 27 08:19:06 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 27 08:19:06 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 27 08:19:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:07 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 27 08:19:07 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 27 08:19:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:19:08 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 27 08:19:08 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 27 08:19:08 np0005597378 python3.9[113554]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:19:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:11 np0005597378 python3.9[113707]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 27 08:19:11 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 27 08:19:11 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 27 08:19:12 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 27 08:19:12 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 27 08:19:12 np0005597378 python3.9[113859]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:12 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 27 08:19:12 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 27 08:19:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:19:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:13 np0005597378 python3.9[113937]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:13 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 27 08:19:13 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 27 08:19:13 np0005597378 python3.9[114089]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:13 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 27 08:19:13 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 27 08:19:14 np0005597378 python3.9[114167]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:15 np0005597378 python3.9[114319]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:16 np0005597378 python3.9[114472]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:19:16 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 27 08:19:16 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 27 08:19:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:19:16
Jan 27 08:19:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:19:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:19:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.rgw.root', 'default.rgw.log', '.mgr']
Jan 27 08:19:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:17 np0005597378 python3.9[114556]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:19:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:19:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:19:18 np0005597378 systemd[1]: session-37.scope: Deactivated successfully.
Jan 27 08:19:18 np0005597378 systemd[1]: session-37.scope: Consumed 22.108s CPU time.
Jan 27 08:19:18 np0005597378 systemd-logind[786]: Session 37 logged out. Waiting for processes to exit.
Jan 27 08:19:18 np0005597378 systemd-logind[786]: Removed session 37.
Jan 27 08:19:18 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 27 08:19:18 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 27 08:19:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:19 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 27 08:19:19 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 27 08:19:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:21 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 27 08:19:21 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 27 08:19:22 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 27 08:19:22 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 27 08:19:22 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 27 08:19:22 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 27 08:19:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:19:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:19:23 np0005597378 podman[114726]: 2026-01-27 13:19:23.494742935 +0000 UTC m=+0.021537604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:19:23 np0005597378 podman[114726]: 2026-01-27 13:19:23.649831215 +0000 UTC m=+0.176625864 container create ec526a32c805c3934a04ef7afb26a3a961bb256d4aaa74bafa973e9a35647e5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:19:23 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:19:23 np0005597378 systemd-logind[786]: New session 38 of user zuul.
Jan 27 08:19:23 np0005597378 systemd[1]: Started Session 38 of User zuul.
Jan 27 08:19:23 np0005597378 systemd[1]: Started libpod-conmon-ec526a32c805c3934a04ef7afb26a3a961bb256d4aaa74bafa973e9a35647e5b.scope.
Jan 27 08:19:23 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:19:23 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 27 08:19:23 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 27 08:19:23 np0005597378 podman[114726]: 2026-01-27 13:19:23.867981932 +0000 UTC m=+0.394776601 container init ec526a32c805c3934a04ef7afb26a3a961bb256d4aaa74bafa973e9a35647e5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 08:19:23 np0005597378 podman[114726]: 2026-01-27 13:19:23.874811264 +0000 UTC m=+0.401605913 container start ec526a32c805c3934a04ef7afb26a3a961bb256d4aaa74bafa973e9a35647e5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:19:23 np0005597378 systemd[1]: libpod-ec526a32c805c3934a04ef7afb26a3a961bb256d4aaa74bafa973e9a35647e5b.scope: Deactivated successfully.
Jan 27 08:19:23 np0005597378 clever_meninsky[114745]: 167 167
Jan 27 08:19:23 np0005597378 conmon[114745]: conmon ec526a32c805c3934a04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec526a32c805c3934a04ef7afb26a3a961bb256d4aaa74bafa973e9a35647e5b.scope/container/memory.events
Jan 27 08:19:23 np0005597378 podman[114726]: 2026-01-27 13:19:23.946205961 +0000 UTC m=+0.473000640 container attach ec526a32c805c3934a04ef7afb26a3a961bb256d4aaa74bafa973e9a35647e5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:19:23 np0005597378 podman[114726]: 2026-01-27 13:19:23.947211798 +0000 UTC m=+0.474006457 container died ec526a32c805c3934a04ef7afb26a3a961bb256d4aaa74bafa973e9a35647e5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:19:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3eb8709f28281d34bceed09de5a266c86523bc1e4865c5943da55f70efe406ef-merged.mount: Deactivated successfully.
Jan 27 08:19:24 np0005597378 podman[114726]: 2026-01-27 13:19:24.283442923 +0000 UTC m=+0.810237562 container remove ec526a32c805c3934a04ef7afb26a3a961bb256d4aaa74bafa973e9a35647e5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:19:24 np0005597378 systemd[1]: libpod-conmon-ec526a32c805c3934a04ef7afb26a3a961bb256d4aaa74bafa973e9a35647e5b.scope: Deactivated successfully.
Jan 27 08:19:24 np0005597378 podman[114922]: 2026-01-27 13:19:24.64159775 +0000 UTC m=+0.070900125 container create d0b6adbeaf914f9f0fa29c79f2862739574a111c76e87a4b290882ee4db6adbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wright, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:19:24 np0005597378 podman[114922]: 2026-01-27 13:19:24.593569023 +0000 UTC m=+0.022871418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:19:24 np0005597378 systemd[1]: Started libpod-conmon-d0b6adbeaf914f9f0fa29c79f2862739574a111c76e87a4b290882ee4db6adbe.scope.
Jan 27 08:19:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:19:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8aa4a327b63f3579772e0e824bb6cb0c7da3ad37a63e8da249f575986ecc01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:19:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8aa4a327b63f3579772e0e824bb6cb0c7da3ad37a63e8da249f575986ecc01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:19:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8aa4a327b63f3579772e0e824bb6cb0c7da3ad37a63e8da249f575986ecc01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:19:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8aa4a327b63f3579772e0e824bb6cb0c7da3ad37a63e8da249f575986ecc01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:19:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8aa4a327b63f3579772e0e824bb6cb0c7da3ad37a63e8da249f575986ecc01/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:19:24 np0005597378 python3.9[114916]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:24 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 27 08:19:24 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 27 08:19:24 np0005597378 podman[114922]: 2026-01-27 13:19:24.797573355 +0000 UTC m=+0.226875750 container init d0b6adbeaf914f9f0fa29c79f2862739574a111c76e87a4b290882ee4db6adbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 08:19:24 np0005597378 podman[114922]: 2026-01-27 13:19:24.805404803 +0000 UTC m=+0.234707178 container start d0b6adbeaf914f9f0fa29c79f2862739574a111c76e87a4b290882ee4db6adbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wright, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:19:24 np0005597378 podman[114922]: 2026-01-27 13:19:24.872953608 +0000 UTC m=+0.302256003 container attach d0b6adbeaf914f9f0fa29c79f2862739574a111c76e87a4b290882ee4db6adbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 08:19:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:25 np0005597378 admiring_wright[114938]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:19:25 np0005597378 admiring_wright[114938]: --> All data devices are unavailable
Jan 27 08:19:25 np0005597378 systemd[1]: libpod-d0b6adbeaf914f9f0fa29c79f2862739574a111c76e87a4b290882ee4db6adbe.scope: Deactivated successfully.
Jan 27 08:19:25 np0005597378 podman[114922]: 2026-01-27 13:19:25.265114729 +0000 UTC m=+0.694417134 container died d0b6adbeaf914f9f0fa29c79f2862739574a111c76e87a4b290882ee4db6adbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wright, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:19:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5f8aa4a327b63f3579772e0e824bb6cb0c7da3ad37a63e8da249f575986ecc01-merged.mount: Deactivated successfully.
Jan 27 08:19:25 np0005597378 python3.9[115120]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:25 np0005597378 podman[114922]: 2026-01-27 13:19:25.638073349 +0000 UTC m=+1.067375724 container remove d0b6adbeaf914f9f0fa29c79f2862739574a111c76e87a4b290882ee4db6adbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wright, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:19:25 np0005597378 systemd[1]: libpod-conmon-d0b6adbeaf914f9f0fa29c79f2862739574a111c76e87a4b290882ee4db6adbe.scope: Deactivated successfully.
Jan 27 08:19:25 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 27 08:19:25 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 27 08:19:25 np0005597378 python3.9[115244]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:26 np0005597378 podman[115264]: 2026-01-27 13:19:26.052913783 +0000 UTC m=+0.056349509 container create f56abc8552ec20f5611fdf0f3862fcacc710b2902cbbd5cf5479062493a8508c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:19:26 np0005597378 podman[115264]: 2026-01-27 13:19:26.020743039 +0000 UTC m=+0.024178795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:19:26 np0005597378 systemd[1]: Started libpod-conmon-f56abc8552ec20f5611fdf0f3862fcacc710b2902cbbd5cf5479062493a8508c.scope.
Jan 27 08:19:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:19:26 np0005597378 podman[115264]: 2026-01-27 13:19:26.244550745 +0000 UTC m=+0.247986501 container init f56abc8552ec20f5611fdf0f3862fcacc710b2902cbbd5cf5479062493a8508c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:19:26 np0005597378 podman[115264]: 2026-01-27 13:19:26.253407461 +0000 UTC m=+0.256843187 container start f56abc8552ec20f5611fdf0f3862fcacc710b2902cbbd5cf5479062493a8508c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 08:19:26 np0005597378 naughty_bartik[115301]: 167 167
Jan 27 08:19:26 np0005597378 systemd[1]: libpod-f56abc8552ec20f5611fdf0f3862fcacc710b2902cbbd5cf5479062493a8508c.scope: Deactivated successfully.
Jan 27 08:19:26 np0005597378 podman[115264]: 2026-01-27 13:19:26.268766309 +0000 UTC m=+0.272202065 container attach f56abc8552ec20f5611fdf0f3862fcacc710b2902cbbd5cf5479062493a8508c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:19:26 np0005597378 podman[115264]: 2026-01-27 13:19:26.270272819 +0000 UTC m=+0.273708565 container died f56abc8552ec20f5611fdf0f3862fcacc710b2902cbbd5cf5479062493a8508c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:19:26 np0005597378 systemd[1]: session-38.scope: Deactivated successfully.
Jan 27 08:19:26 np0005597378 systemd[1]: session-38.scope: Consumed 1.419s CPU time.
Jan 27 08:19:26 np0005597378 systemd-logind[786]: Session 38 logged out. Waiting for processes to exit.
Jan 27 08:19:26 np0005597378 systemd-logind[786]: Removed session 38.
Jan 27 08:19:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay-54856d675a1455ec8290d5b32c8e12626922531ef042b6a98f0a278f76b47f40-merged.mount: Deactivated successfully.
Jan 27 08:19:26 np0005597378 podman[115264]: 2026-01-27 13:19:26.772886864 +0000 UTC m=+0.776322590 container remove f56abc8552ec20f5611fdf0f3862fcacc710b2902cbbd5cf5479062493a8508c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_bartik, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:19:26 np0005597378 systemd[1]: libpod-conmon-f56abc8552ec20f5611fdf0f3862fcacc710b2902cbbd5cf5479062493a8508c.scope: Deactivated successfully.
Jan 27 08:19:26 np0005597378 podman[115326]: 2026-01-27 13:19:26.893069339 +0000 UTC m=+0.021112062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:19:27 np0005597378 podman[115326]: 2026-01-27 13:19:27.136668922 +0000 UTC m=+0.264711565 container create f3f1644ac19285a0cd9ed471353a6a133c8bbbdb5994d017de4e8a320de45868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_aryabhata, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:19:27 np0005597378 systemd[1]: Started libpod-conmon-f3f1644ac19285a0cd9ed471353a6a133c8bbbdb5994d017de4e8a320de45868.scope.
Jan 27 08:19:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:19:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff0639f70763fe9361cd8861fd4ed3c7195874b0bf4b32a616661f593dce8f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:19:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff0639f70763fe9361cd8861fd4ed3c7195874b0bf4b32a616661f593dce8f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:19:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff0639f70763fe9361cd8861fd4ed3c7195874b0bf4b32a616661f593dce8f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:19:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff0639f70763fe9361cd8861fd4ed3c7195874b0bf4b32a616661f593dce8f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:19:27 np0005597378 podman[115326]: 2026-01-27 13:19:27.440760532 +0000 UTC m=+0.568803225 container init f3f1644ac19285a0cd9ed471353a6a133c8bbbdb5994d017de4e8a320de45868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_aryabhata, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:19:27 np0005597378 podman[115326]: 2026-01-27 13:19:27.448034406 +0000 UTC m=+0.576077049 container start f3f1644ac19285a0cd9ed471353a6a133c8bbbdb5994d017de4e8a320de45868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:19:27 np0005597378 podman[115326]: 2026-01-27 13:19:27.478198568 +0000 UTC m=+0.606241211 container attach f3f1644ac19285a0cd9ed471353a6a133c8bbbdb5994d017de4e8a320de45868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]: {
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:    "0": [
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:        {
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "devices": [
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "/dev/loop3"
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            ],
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_name": "ceph_lv0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_size": "21470642176",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "name": "ceph_lv0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "tags": {
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.cluster_name": "ceph",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.crush_device_class": "",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.encrypted": "0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.objectstore": "bluestore",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.osd_id": "0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.type": "block",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.vdo": "0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.with_tpm": "0"
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            },
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "type": "block",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "vg_name": "ceph_vg0"
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:        }
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:    ],
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:    "1": [
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:        {
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "devices": [
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "/dev/loop4"
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            ],
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_name": "ceph_lv1",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_size": "21470642176",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "name": "ceph_lv1",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "tags": {
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.cluster_name": "ceph",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.crush_device_class": "",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.encrypted": "0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.objectstore": "bluestore",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.osd_id": "1",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.type": "block",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.vdo": "0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.with_tpm": "0"
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            },
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "type": "block",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "vg_name": "ceph_vg1"
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:        }
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:    ],
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:    "2": [
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:        {
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "devices": [
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "/dev/loop5"
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            ],
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_name": "ceph_lv2",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_size": "21470642176",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "name": "ceph_lv2",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "tags": {
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.cluster_name": "ceph",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.crush_device_class": "",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.encrypted": "0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.objectstore": "bluestore",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.osd_id": "2",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.type": "block",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.vdo": "0",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:                "ceph.with_tpm": "0"
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            },
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "type": "block",
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:            "vg_name": "ceph_vg2"
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:        }
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]:    ]
Jan 27 08:19:27 np0005597378 admiring_aryabhata[115343]: }
Jan 27 08:19:27 np0005597378 systemd[1]: libpod-f3f1644ac19285a0cd9ed471353a6a133c8bbbdb5994d017de4e8a320de45868.scope: Deactivated successfully.
Jan 27 08:19:27 np0005597378 podman[115326]: 2026-01-27 13:19:27.724313067 +0000 UTC m=+0.852355710 container died f3f1644ac19285a0cd9ed471353a6a133c8bbbdb5994d017de4e8a320de45868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_aryabhata, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:19:27 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 27 08:19:27 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 27 08:19:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:19:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-aff0639f70763fe9361cd8861fd4ed3c7195874b0bf4b32a616661f593dce8f9-merged.mount: Deactivated successfully.
Jan 27 08:19:28 np0005597378 podman[115326]: 2026-01-27 13:19:28.3998987 +0000 UTC m=+1.527941363 container remove f3f1644ac19285a0cd9ed471353a6a133c8bbbdb5994d017de4e8a320de45868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_aryabhata, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:19:28 np0005597378 systemd[1]: libpod-conmon-f3f1644ac19285a0cd9ed471353a6a133c8bbbdb5994d017de4e8a320de45868.scope: Deactivated successfully.
Jan 27 08:19:28 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 27 08:19:28 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 27 08:19:28 np0005597378 podman[115427]: 2026-01-27 13:19:28.916792655 +0000 UTC m=+0.086411947 container create f68f060aa8f70d3181d64650330c21f4595b0d0e027885ce50b2a132d0b02126 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 08:19:28 np0005597378 podman[115427]: 2026-01-27 13:19:28.856834942 +0000 UTC m=+0.026454254 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:19:28 np0005597378 systemd[1]: Started libpod-conmon-f68f060aa8f70d3181d64650330c21f4595b0d0e027885ce50b2a132d0b02126.scope.
Jan 27 08:19:29 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:19:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:29 np0005597378 podman[115427]: 2026-01-27 13:19:29.107271286 +0000 UTC m=+0.276890638 container init f68f060aa8f70d3181d64650330c21f4595b0d0e027885ce50b2a132d0b02126 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_fermi, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:19:29 np0005597378 podman[115427]: 2026-01-27 13:19:29.117743325 +0000 UTC m=+0.287362617 container start f68f060aa8f70d3181d64650330c21f4595b0d0e027885ce50b2a132d0b02126 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:19:29 np0005597378 podman[115427]: 2026-01-27 13:19:29.123179639 +0000 UTC m=+0.292798961 container attach f68f060aa8f70d3181d64650330c21f4595b0d0e027885ce50b2a132d0b02126 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:19:29 np0005597378 dreamy_fermi[115443]: 167 167
Jan 27 08:19:29 np0005597378 systemd[1]: libpod-f68f060aa8f70d3181d64650330c21f4595b0d0e027885ce50b2a132d0b02126.scope: Deactivated successfully.
Jan 27 08:19:29 np0005597378 podman[115427]: 2026-01-27 13:19:29.125216654 +0000 UTC m=+0.294835956 container died f68f060aa8f70d3181d64650330c21f4595b0d0e027885ce50b2a132d0b02126 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:19:29 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9533e027e60c2c53cf12e0670dd0fcf1408723fb469a03199bfb794bc0f29550-merged.mount: Deactivated successfully.
Jan 27 08:19:29 np0005597378 podman[115427]: 2026-01-27 13:19:29.466510623 +0000 UTC m=+0.636129915 container remove f68f060aa8f70d3181d64650330c21f4595b0d0e027885ce50b2a132d0b02126 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_fermi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:19:29 np0005597378 systemd[1]: libpod-conmon-f68f060aa8f70d3181d64650330c21f4595b0d0e027885ce50b2a132d0b02126.scope: Deactivated successfully.
Jan 27 08:19:29 np0005597378 podman[115468]: 2026-01-27 13:19:29.62820814 +0000 UTC m=+0.058584638 container create 63024b69152d7fb0c1855c2a293a26cc573c91b341b86b6b282d060ce75dcc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_johnson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:19:29 np0005597378 systemd[1]: Started libpod-conmon-63024b69152d7fb0c1855c2a293a26cc573c91b341b86b6b282d060ce75dcc28.scope.
Jan 27 08:19:29 np0005597378 podman[115468]: 2026-01-27 13:19:29.59057423 +0000 UTC m=+0.020950738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:19:29 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:19:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2b46004a08f03318f77c03374a4f73b6f142605dfd3a808cd19bbee8a8a8e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:19:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2b46004a08f03318f77c03374a4f73b6f142605dfd3a808cd19bbee8a8a8e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:19:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2b46004a08f03318f77c03374a4f73b6f142605dfd3a808cd19bbee8a8a8e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:19:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2b46004a08f03318f77c03374a4f73b6f142605dfd3a808cd19bbee8a8a8e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:19:29 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 27 08:19:29 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 27 08:19:29 np0005597378 podman[115468]: 2026-01-27 13:19:29.767613595 +0000 UTC m=+0.197990083 container init 63024b69152d7fb0c1855c2a293a26cc573c91b341b86b6b282d060ce75dcc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:19:29 np0005597378 podman[115468]: 2026-01-27 13:19:29.773394658 +0000 UTC m=+0.203771146 container start 63024b69152d7fb0c1855c2a293a26cc573c91b341b86b6b282d060ce75dcc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_johnson, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:19:29 np0005597378 podman[115468]: 2026-01-27 13:19:29.854708299 +0000 UTC m=+0.285084797 container attach 63024b69152d7fb0c1855c2a293a26cc573c91b341b86b6b282d060ce75dcc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Jan 27 08:19:30 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 27 08:19:30 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 27 08:19:30 np0005597378 lvm[115564]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:19:30 np0005597378 lvm[115564]: VG ceph_vg0 finished
Jan 27 08:19:30 np0005597378 lvm[115565]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:19:30 np0005597378 lvm[115565]: VG ceph_vg1 finished
Jan 27 08:19:30 np0005597378 lvm[115567]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:19:30 np0005597378 lvm[115567]: VG ceph_vg2 finished
Jan 27 08:19:30 np0005597378 priceless_johnson[115484]: {}
Jan 27 08:19:30 np0005597378 systemd[1]: libpod-63024b69152d7fb0c1855c2a293a26cc573c91b341b86b6b282d060ce75dcc28.scope: Deactivated successfully.
Jan 27 08:19:30 np0005597378 systemd[1]: libpod-63024b69152d7fb0c1855c2a293a26cc573c91b341b86b6b282d060ce75dcc28.scope: Consumed 1.244s CPU time.
Jan 27 08:19:30 np0005597378 podman[115468]: 2026-01-27 13:19:30.56486315 +0000 UTC m=+0.995239638 container died 63024b69152d7fb0c1855c2a293a26cc573c91b341b86b6b282d060ce75dcc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_johnson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:19:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ae2b46004a08f03318f77c03374a4f73b6f142605dfd3a808cd19bbee8a8a8e0-merged.mount: Deactivated successfully.
Jan 27 08:19:30 np0005597378 podman[115468]: 2026-01-27 13:19:30.648500613 +0000 UTC m=+1.078877101 container remove 63024b69152d7fb0c1855c2a293a26cc573c91b341b86b6b282d060ce75dcc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:19:30 np0005597378 systemd[1]: libpod-conmon-63024b69152d7fb0c1855c2a293a26cc573c91b341b86b6b282d060ce75dcc28.scope: Deactivated successfully.
Jan 27 08:19:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:19:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:19:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:19:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:19:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:31 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 27 08:19:31 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 27 08:19:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:19:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:19:31 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 27 08:19:31 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 27 08:19:32 np0005597378 systemd-logind[786]: New session 39 of user zuul.
Jan 27 08:19:32 np0005597378 systemd[1]: Started Session 39 of User zuul.
Jan 27 08:19:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:19:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:33 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 27 08:19:33 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 27 08:19:33 np0005597378 python3.9[115758]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:19:34 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 27 08:19:34 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 27 08:19:34 np0005597378 python3.9[115916]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:34 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 27 08:19:34 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 27 08:19:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:35 np0005597378 python3.9[116091]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:35 np0005597378 python3.9[116169]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.a19jjy61 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:36 np0005597378 python3.9[116321]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:19:36.490642) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519976490721, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7150, "num_deletes": 251, "total_data_size": 9738639, "memory_usage": 10056816, "flush_reason": "Manual Compaction"}
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519976566594, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7734527, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 153, "largest_seqno": 7300, "table_properties": {"data_size": 7708030, "index_size": 17358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 74294, "raw_average_key_size": 23, "raw_value_size": 7646062, "raw_average_value_size": 2381, "num_data_blocks": 764, "num_entries": 3211, "num_filter_entries": 3211, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519577, "oldest_key_time": 1769519577, "file_creation_time": 1769519976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 76025 microseconds, and 13733 cpu microseconds.
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:19:36.566668) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7734527 bytes OK
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:19:36.566690) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:19:36.574610) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:19:36.574650) EVENT_LOG_v1 {"time_micros": 1769519976574641, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:19:36.574686) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9707622, prev total WAL file size 9707622, number of live WAL files 2.
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:19:36.576915) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7553KB) 13(59KB) 8(1944B)]
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519976576991, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7797693, "oldest_snapshot_seqno": -1}
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3039 keys, 7749884 bytes, temperature: kUnknown
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519976629199, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7749884, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7723784, "index_size": 17414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7621, "raw_key_size": 72775, "raw_average_key_size": 23, "raw_value_size": 7663115, "raw_average_value_size": 2521, "num_data_blocks": 768, "num_entries": 3039, "num_filter_entries": 3039, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769519976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:19:36.629428) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7749884 bytes
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:19:36.632508) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.1 rd, 148.2 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.4, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3329, records dropped: 290 output_compression: NoCompression
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:19:36.632582) EVENT_LOG_v1 {"time_micros": 1769519976632555, "job": 4, "event": "compaction_finished", "compaction_time_micros": 52283, "compaction_time_cpu_micros": 16113, "output_level": 6, "num_output_files": 1, "total_output_size": 7749884, "num_input_records": 3329, "num_output_records": 3039, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519976635208, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519976635405, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769519976635473, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 27 08:19:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:19:36.576787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:19:36 np0005597378 python3.9[116400]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.4lu1xxw9 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:37 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 27 08:19:37 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 27 08:19:37 np0005597378 python3.9[116552]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:19:37 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 27 08:19:37 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 27 08:19:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:19:37 np0005597378 python3.9[116704]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:38 np0005597378 python3.9[116782]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:19:38 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 27 08:19:38 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 27 08:19:38 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 27 08:19:38 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 27 08:19:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:39 np0005597378 python3.9[116934]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:39 np0005597378 python3.9[117012]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:19:40 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 27 08:19:40 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 27 08:19:40 np0005597378 python3.9[117164]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:40 np0005597378 python3.9[117316]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:41 np0005597378 python3.9[117394]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:41 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 27 08:19:41 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 27 08:19:41 np0005597378 python3.9[117546]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:42 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 27 08:19:42 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 27 08:19:42 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 27 08:19:42 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 27 08:19:42 np0005597378 python3.9[117624]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:19:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:43 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 27 08:19:43 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 27 08:19:43 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 27 08:19:43 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 27 08:19:43 np0005597378 python3.9[117776]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:19:43 np0005597378 systemd[1]: Reloading.
Jan 27 08:19:43 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:19:43 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:19:44 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 27 08:19:44 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 27 08:19:44 np0005597378 python3.9[117967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:45 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 27 08:19:45 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 27 08:19:45 np0005597378 python3.9[118045]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:45 np0005597378 python3.9[118197]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:46 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 27 08:19:46 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 27 08:19:46 np0005597378 python3.9[118275]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:46 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 27 08:19:46 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 27 08:19:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:47 np0005597378 python3.9[118427]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:19:47 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 27 08:19:47 np0005597378 systemd[1]: Reloading.
Jan 27 08:19:47 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 27 08:19:47 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:19:47 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:19:47 np0005597378 systemd[1]: Starting Create netns directory...
Jan 27 08:19:47 np0005597378 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 08:19:47 np0005597378 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 08:19:47 np0005597378 systemd[1]: Finished Create netns directory.
Jan 27 08:19:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:19:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:19:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:19:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:19:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:19:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:19:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:19:48 np0005597378 python3.9[118618]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:19:48 np0005597378 network[118635]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:19:48 np0005597378 network[118636]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:19:48 np0005597378 network[118637]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:19:48 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 27 08:19:48 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 27 08:19:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:49 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 27 08:19:49 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 27 08:19:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:51 np0005597378 python3.9[118899]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:51 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 27 08:19:51 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 27 08:19:52 np0005597378 python3.9[118977]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:52 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 27 08:19:52 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 27 08:19:52 np0005597378 python3.9[119129]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:52 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 27 08:19:52 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 27 08:19:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:19:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:53 np0005597378 python3.9[119281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:53 np0005597378 python3.9[119359]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:53 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 27 08:19:53 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 27 08:19:54 np0005597378 python3.9[119511]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 27 08:19:54 np0005597378 systemd[1]: Starting Time & Date Service...
Jan 27 08:19:54 np0005597378 systemd[1]: Started Time & Date Service.
Jan 27 08:19:54 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 27 08:19:54 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 27 08:19:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:55 np0005597378 python3.9[119667]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:56 np0005597378 python3.9[119819]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:56 np0005597378 python3.9[119897]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:56 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 27 08:19:57 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 27 08:19:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:57 np0005597378 python3.9[120049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:57 np0005597378 python3.9[120127]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.37s59_62 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:19:58 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 27 08:19:58 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 27 08:19:58 np0005597378 python3.9[120279]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:19:58 np0005597378 python3.9[120357]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:19:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:19:59 np0005597378 python3.9[120509]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:19:59 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 27 08:19:59 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 27 08:20:00 np0005597378 python3[120662]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 08:20:00 np0005597378 python3.9[120814]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:20:00 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 27 08:20:00 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 27 08:20:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:01 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 27 08:20:01 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 27 08:20:01 np0005597378 python3.9[120892]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:20:01 np0005597378 python3.9[121044]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:20:02 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 27 08:20:02 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 27 08:20:02 np0005597378 python3.9[121169]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520001.3542418-308-209756436484893/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:20:02 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 27 08:20:02 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 27 08:20:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:20:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:03 np0005597378 python3.9[121321]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:20:03 np0005597378 python3.9[121399]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:20:04 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 27 08:20:04 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 27 08:20:04 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 27 08:20:04 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 27 08:20:04 np0005597378 python3.9[121551]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:20:04 np0005597378 python3.9[121629]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:20:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:05 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 27 08:20:05 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 27 08:20:05 np0005597378 python3.9[121781]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:20:05 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 27 08:20:05 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 27 08:20:06 np0005597378 python3.9[121859]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:20:06 np0005597378 python3.9[122011]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:20:07 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 27 08:20:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:07 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 27 08:20:07 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 27 08:20:07 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 27 08:20:07 np0005597378 python3.9[122166]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:20:07 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 27 08:20:07 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 27 08:20:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:20:08 np0005597378 python3.9[122318]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:20:08 np0005597378 python3.9[122470]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:20:08 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 27 08:20:08 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 27 08:20:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:09 np0005597378 python3.9[122622]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 08:20:10 np0005597378 python3.9[122774]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 08:20:10 np0005597378 systemd[1]: session-39.scope: Deactivated successfully.
Jan 27 08:20:10 np0005597378 systemd[1]: session-39.scope: Consumed 27.527s CPU time.
Jan 27 08:20:10 np0005597378 systemd-logind[786]: Session 39 logged out. Waiting for processes to exit.
Jan 27 08:20:10 np0005597378 systemd-logind[786]: Removed session 39.
Jan 27 08:20:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:11 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 27 08:20:11 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 27 08:20:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:20:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:14 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 27 08:20:14 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 27 08:20:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:15 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 27 08:20:15 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 27 08:20:16 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 27 08:20:16 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 27 08:20:16 np0005597378 systemd-logind[786]: New session 40 of user zuul.
Jan 27 08:20:16 np0005597378 systemd[1]: Started Session 40 of User zuul.
Jan 27 08:20:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:20:16
Jan 27 08:20:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:20:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:20:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'backups', '.mgr', 'vms', 'cephfs.cephfs.data']
Jan 27 08:20:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:17 np0005597378 python3.9[122954]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:20:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:20:17 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 27 08:20:17 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 27 08:20:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:20:18 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 27 08:20:18 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 27 08:20:18 np0005597378 python3.9[123106]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:20:18 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 27 08:20:18 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 27 08:20:18 np0005597378 python3.9[123260]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 27 08:20:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:19 np0005597378 python3.9[123412]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.jc7me16v follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:20:20 np0005597378 python3.9[123537]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.jc7me16v mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520019.0105522-44-199154956933706/.source.jc7me16v _original_basename=.upav1mgv follow=False checksum=e2d529bada8e55d4ff19930ddb4964424379a2cd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:20:20 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 27 08:20:20 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 27 08:20:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:21 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 27 08:20:21 np0005597378 python3.9[123689]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:20:21 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 27 08:20:21 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 27 08:20:21 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 27 08:20:21 np0005597378 python3.9[123841]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaCAaHmoxOQtBxrLj6sOYqfJ38HHCN1WHb9NSlOr664dGULo40YBM5+Bp/CGRllxwKlP/vqV191qSZR6oHUF3BNPAo9QAbYuQP+EfzS0WhCgrtPNloflsWb7IiB7KVzJLYF+Cifd/SFwEL2gpEK3UbK9dOqM/m0HeaznPyx4ILCiDJUsLGyNKkhvz/OP3twKij+g1uhhTO3ANNFVDPpzwQ0ISXAucJSxMvSIzjvmK5DN2a4mV8Y9mphdak/4VrXaqVk3jnlkB3yC25iAQag/DV3UL2KcLDWS+BB6StMLdx5JvPEM9faZ11bFWtK1uz8OPP2iojE84H9Y1SRZ/l8kmX/jCfo5plNhiXVDOsuVvMsm9ZRsvnmrRI6K38jEZpwl03Rs8LUXl/7OnX8hjwsgOzO3aDQJwuDvumc4m6uUUmYdHEAxvf7LttfF9D5hRM38yYDdPPaw79orh7juX8cJsHuxEAJQObeiKfPSVU69K1Wh5UOXRwdjExAXyxTwAtmyE=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB1PFwCS+7a6RUIsPBWIZprjQyynEnTLsqUopEPTlz7Q#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLY/Q33X8GSkQiJHflskwY+OHS0Yva0wW27rCFHMBIpjFySd5HNHu05T8+TVxYbMxsoGAm1JnSEWJNufae5pANE=#012 create=True mode=0644 path=/tmp/ansible.jc7me16v state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:20:22 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 27 08:20:22 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 27 08:20:22 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 27 08:20:22 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 27 08:20:22 np0005597378 python3.9[123993]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.jc7me16v' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:20:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:20:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:23 np0005597378 python3.9[124147]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.jc7me16v state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:20:23 np0005597378 systemd[1]: session-40.scope: Deactivated successfully.
Jan 27 08:20:23 np0005597378 systemd-logind[786]: Session 40 logged out. Waiting for processes to exit.
Jan 27 08:20:23 np0005597378 systemd[1]: session-40.scope: Consumed 4.682s CPU time.
Jan 27 08:20:23 np0005597378 systemd-logind[786]: Removed session 40.
Jan 27 08:20:24 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 27 08:20:24 np0005597378 ceph-osd[88005]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 27 08:20:24 np0005597378 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 08:20:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:25 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 27 08:20:25 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 27 08:20:25 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 27 08:20:25 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 27 08:20:26 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 27 08:20:26 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 27 08:20:26 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 27 08:20:26 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:20:27 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 27 08:20:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:20:27 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 27 08:20:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:29 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 27 08:20:29 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 27 08:20:29 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 27 08:20:29 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 27 08:20:30 np0005597378 systemd-logind[786]: New session 41 of user zuul.
Jan 27 08:20:30 np0005597378 systemd[1]: Started Session 41 of User zuul.
Jan 27 08:20:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:31 np0005597378 python3.9[124379]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:20:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:20:31 np0005597378 podman[124553]: 2026-01-27 13:20:31.94544528 +0000 UTC m=+0.063580287 container create da344a9aa4793ce9b1bc2e5de578bb2683389237ead1016247f5053cec8cbcc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_driscoll, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:20:31 np0005597378 systemd[1]: Started libpod-conmon-da344a9aa4793ce9b1bc2e5de578bb2683389237ead1016247f5053cec8cbcc5.scope.
Jan 27 08:20:32 np0005597378 podman[124553]: 2026-01-27 13:20:31.905745442 +0000 UTC m=+0.023880469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:20:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:20:32 np0005597378 podman[124553]: 2026-01-27 13:20:32.052558567 +0000 UTC m=+0.170693594 container init da344a9aa4793ce9b1bc2e5de578bb2683389237ead1016247f5053cec8cbcc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_driscoll, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 08:20:32 np0005597378 podman[124553]: 2026-01-27 13:20:32.060433872 +0000 UTC m=+0.178568879 container start da344a9aa4793ce9b1bc2e5de578bb2683389237ead1016247f5053cec8cbcc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 08:20:32 np0005597378 clever_driscoll[124570]: 167 167
Jan 27 08:20:32 np0005597378 systemd[1]: libpod-da344a9aa4793ce9b1bc2e5de578bb2683389237ead1016247f5053cec8cbcc5.scope: Deactivated successfully.
Jan 27 08:20:32 np0005597378 podman[124553]: 2026-01-27 13:20:32.096431299 +0000 UTC m=+0.214566336 container attach da344a9aa4793ce9b1bc2e5de578bb2683389237ead1016247f5053cec8cbcc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:20:32 np0005597378 podman[124553]: 2026-01-27 13:20:32.096754288 +0000 UTC m=+0.214889295 container died da344a9aa4793ce9b1bc2e5de578bb2683389237ead1016247f5053cec8cbcc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_driscoll, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:20:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-43e6cf659cb35924bb06d98029cf556e08738be665937e41ebc54267c4f880f4-merged.mount: Deactivated successfully.
Jan 27 08:20:32 np0005597378 podman[124553]: 2026-01-27 13:20:32.236618374 +0000 UTC m=+0.354753381 container remove da344a9aa4793ce9b1bc2e5de578bb2683389237ead1016247f5053cec8cbcc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_driscoll, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:20:32 np0005597378 systemd[1]: libpod-conmon-da344a9aa4793ce9b1bc2e5de578bb2683389237ead1016247f5053cec8cbcc5.scope: Deactivated successfully.
Jan 27 08:20:32 np0005597378 podman[124669]: 2026-01-27 13:20:32.440392725 +0000 UTC m=+0.094390073 container create c5957a86b2fbce8eb09934b176fb29e4e2506f09420fd8d49f2ea0a257455f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:20:32 np0005597378 podman[124669]: 2026-01-27 13:20:32.369513022 +0000 UTC m=+0.023510400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:20:32 np0005597378 python3.9[124661]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 27 08:20:32 np0005597378 systemd[1]: Started libpod-conmon-c5957a86b2fbce8eb09934b176fb29e4e2506f09420fd8d49f2ea0a257455f1b.scope.
Jan 27 08:20:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:20:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8611b8f6a06e0dd03098b901c03c88101cd83219e7d0f4889ad9bc97111fa5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:20:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8611b8f6a06e0dd03098b901c03c88101cd83219e7d0f4889ad9bc97111fa5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:20:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8611b8f6a06e0dd03098b901c03c88101cd83219e7d0f4889ad9bc97111fa5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:20:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8611b8f6a06e0dd03098b901c03c88101cd83219e7d0f4889ad9bc97111fa5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:20:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8611b8f6a06e0dd03098b901c03c88101cd83219e7d0f4889ad9bc97111fa5b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:20:32 np0005597378 podman[124669]: 2026-01-27 13:20:32.600938333 +0000 UTC m=+0.254935701 container init c5957a86b2fbce8eb09934b176fb29e4e2506f09420fd8d49f2ea0a257455f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:20:32 np0005597378 podman[124669]: 2026-01-27 13:20:32.609780423 +0000 UTC m=+0.263777771 container start c5957a86b2fbce8eb09934b176fb29e4e2506f09420fd8d49f2ea0a257455f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 08:20:32 np0005597378 podman[124669]: 2026-01-27 13:20:32.631506303 +0000 UTC m=+0.285503661 container attach c5957a86b2fbce8eb09934b176fb29e4e2506f09420fd8d49f2ea0a257455f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:20:32 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 27 08:20:32 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 27 08:20:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:20:33 np0005597378 vigilant_dirac[124689]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:20:33 np0005597378 vigilant_dirac[124689]: --> All data devices are unavailable
Jan 27 08:20:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:33 np0005597378 systemd[1]: libpod-c5957a86b2fbce8eb09934b176fb29e4e2506f09420fd8d49f2ea0a257455f1b.scope: Deactivated successfully.
Jan 27 08:20:33 np0005597378 podman[124669]: 2026-01-27 13:20:33.084963172 +0000 UTC m=+0.738960520 container died c5957a86b2fbce8eb09934b176fb29e4e2506f09420fd8d49f2ea0a257455f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:20:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e8611b8f6a06e0dd03098b901c03c88101cd83219e7d0f4889ad9bc97111fa5b-merged.mount: Deactivated successfully.
Jan 27 08:20:33 np0005597378 podman[124669]: 2026-01-27 13:20:33.307932944 +0000 UTC m=+0.961930292 container remove c5957a86b2fbce8eb09934b176fb29e4e2506f09420fd8d49f2ea0a257455f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:20:33 np0005597378 systemd[1]: libpod-conmon-c5957a86b2fbce8eb09934b176fb29e4e2506f09420fd8d49f2ea0a257455f1b.scope: Deactivated successfully.
Jan 27 08:20:33 np0005597378 python3.9[124857]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:20:33 np0005597378 podman[125013]: 2026-01-27 13:20:33.774139498 +0000 UTC m=+0.045404794 container create 7ccb8ac3d74446fa62a91f2b13f5939fbb229122edf61602ca3df0fc36aac4ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:20:33 np0005597378 systemd[1]: Started libpod-conmon-7ccb8ac3d74446fa62a91f2b13f5939fbb229122edf61602ca3df0fc36aac4ad.scope.
Jan 27 08:20:33 np0005597378 podman[125013]: 2026-01-27 13:20:33.751760771 +0000 UTC m=+0.023026087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:20:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:20:33 np0005597378 podman[125013]: 2026-01-27 13:20:33.875279973 +0000 UTC m=+0.146545299 container init 7ccb8ac3d74446fa62a91f2b13f5939fbb229122edf61602ca3df0fc36aac4ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 08:20:33 np0005597378 podman[125013]: 2026-01-27 13:20:33.882240802 +0000 UTC m=+0.153506098 container start 7ccb8ac3d74446fa62a91f2b13f5939fbb229122edf61602ca3df0fc36aac4ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 08:20:33 np0005597378 cranky_yonath[125029]: 167 167
Jan 27 08:20:33 np0005597378 systemd[1]: libpod-7ccb8ac3d74446fa62a91f2b13f5939fbb229122edf61602ca3df0fc36aac4ad.scope: Deactivated successfully.
Jan 27 08:20:33 np0005597378 podman[125013]: 2026-01-27 13:20:33.894178177 +0000 UTC m=+0.165443483 container attach 7ccb8ac3d74446fa62a91f2b13f5939fbb229122edf61602ca3df0fc36aac4ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:20:33 np0005597378 podman[125013]: 2026-01-27 13:20:33.894900746 +0000 UTC m=+0.166166042 container died 7ccb8ac3d74446fa62a91f2b13f5939fbb229122edf61602ca3df0fc36aac4ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:20:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d8f4ee6f3c1f6df86d5cab732921431a599d7907f417487ee3ab14201081bdff-merged.mount: Deactivated successfully.
Jan 27 08:20:33 np0005597378 podman[125013]: 2026-01-27 13:20:33.96912308 +0000 UTC m=+0.240388376 container remove 7ccb8ac3d74446fa62a91f2b13f5939fbb229122edf61602ca3df0fc36aac4ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:20:33 np0005597378 systemd[1]: libpod-conmon-7ccb8ac3d74446fa62a91f2b13f5939fbb229122edf61602ca3df0fc36aac4ad.scope: Deactivated successfully.
Jan 27 08:20:34 np0005597378 podman[125130]: 2026-01-27 13:20:34.133742199 +0000 UTC m=+0.047685745 container create c8374fb2e7ec50994c0a34232dfa1956f39c024c50fae0240875bcfa588d5a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 08:20:34 np0005597378 systemd[1]: Started libpod-conmon-c8374fb2e7ec50994c0a34232dfa1956f39c024c50fae0240875bcfa588d5a03.scope.
Jan 27 08:20:34 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:20:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3370d2c1ece632adabb207416d7501413e93719ad514df1c2d16e1018bc15615/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:20:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3370d2c1ece632adabb207416d7501413e93719ad514df1c2d16e1018bc15615/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:20:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3370d2c1ece632adabb207416d7501413e93719ad514df1c2d16e1018bc15615/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:20:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3370d2c1ece632adabb207416d7501413e93719ad514df1c2d16e1018bc15615/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:20:34 np0005597378 podman[125130]: 2026-01-27 13:20:34.111916196 +0000 UTC m=+0.025859762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:20:34 np0005597378 podman[125130]: 2026-01-27 13:20:34.222915109 +0000 UTC m=+0.136858685 container init c8374fb2e7ec50994c0a34232dfa1956f39c024c50fae0240875bcfa588d5a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_bartik, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:20:34 np0005597378 podman[125130]: 2026-01-27 13:20:34.230406433 +0000 UTC m=+0.144349969 container start c8374fb2e7ec50994c0a34232dfa1956f39c024c50fae0240875bcfa588d5a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 08:20:34 np0005597378 python3.9[125124]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:20:34 np0005597378 podman[125130]: 2026-01-27 13:20:34.237520686 +0000 UTC m=+0.151464262 container attach c8374fb2e7ec50994c0a34232dfa1956f39c024c50fae0240875bcfa588d5a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:20:34 np0005597378 keen_bartik[125147]: {
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:    "0": [
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:        {
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "devices": [
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "/dev/loop3"
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            ],
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_name": "ceph_lv0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_size": "21470642176",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "name": "ceph_lv0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "tags": {
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.cluster_name": "ceph",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.crush_device_class": "",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.encrypted": "0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.objectstore": "bluestore",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.osd_id": "0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.type": "block",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.vdo": "0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.with_tpm": "0"
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            },
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "type": "block",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "vg_name": "ceph_vg0"
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:        }
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:    ],
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:    "1": [
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:        {
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "devices": [
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "/dev/loop4"
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            ],
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_name": "ceph_lv1",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_size": "21470642176",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "name": "ceph_lv1",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "tags": {
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.cluster_name": "ceph",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.crush_device_class": "",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.encrypted": "0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.objectstore": "bluestore",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.osd_id": "1",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.type": "block",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.vdo": "0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.with_tpm": "0"
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            },
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "type": "block",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "vg_name": "ceph_vg1"
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:        }
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:    ],
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:    "2": [
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:        {
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "devices": [
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "/dev/loop5"
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            ],
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_name": "ceph_lv2",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_size": "21470642176",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "name": "ceph_lv2",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "tags": {
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.cluster_name": "ceph",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.crush_device_class": "",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.encrypted": "0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.objectstore": "bluestore",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.osd_id": "2",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.type": "block",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.vdo": "0",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:                "ceph.with_tpm": "0"
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            },
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "type": "block",
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:            "vg_name": "ceph_vg2"
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:        }
Jan 27 08:20:34 np0005597378 keen_bartik[125147]:    ]
Jan 27 08:20:34 np0005597378 keen_bartik[125147]: }
Jan 27 08:20:34 np0005597378 systemd[1]: libpod-c8374fb2e7ec50994c0a34232dfa1956f39c024c50fae0240875bcfa588d5a03.scope: Deactivated successfully.
Jan 27 08:20:34 np0005597378 podman[125130]: 2026-01-27 13:20:34.571248394 +0000 UTC m=+0.485191960 container died c8374fb2e7ec50994c0a34232dfa1956f39c024c50fae0240875bcfa588d5a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_bartik, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:20:34 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3370d2c1ece632adabb207416d7501413e93719ad514df1c2d16e1018bc15615-merged.mount: Deactivated successfully.
Jan 27 08:20:34 np0005597378 podman[125130]: 2026-01-27 13:20:34.617788047 +0000 UTC m=+0.531731593 container remove c8374fb2e7ec50994c0a34232dfa1956f39c024c50fae0240875bcfa588d5a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:20:34 np0005597378 systemd[1]: libpod-conmon-c8374fb2e7ec50994c0a34232dfa1956f39c024c50fae0240875bcfa588d5a03.scope: Deactivated successfully.
Jan 27 08:20:34 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 27 08:20:34 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 27 08:20:35 np0005597378 python3.9[125370]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:20:35 np0005597378 podman[125382]: 2026-01-27 13:20:35.040746689 +0000 UTC m=+0.034130938 container create 0a86412b2cb0bb89767ce52240935c7ff9c85e366d10f152a3973087627dd9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_darwin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:20:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:35 np0005597378 systemd[1]: Started libpod-conmon-0a86412b2cb0bb89767ce52240935c7ff9c85e366d10f152a3973087627dd9a9.scope.
Jan 27 08:20:35 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:20:35 np0005597378 podman[125382]: 2026-01-27 13:20:35.112103996 +0000 UTC m=+0.105488265 container init 0a86412b2cb0bb89767ce52240935c7ff9c85e366d10f152a3973087627dd9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:20:35 np0005597378 podman[125382]: 2026-01-27 13:20:35.117992025 +0000 UTC m=+0.111376274 container start 0a86412b2cb0bb89767ce52240935c7ff9c85e366d10f152a3973087627dd9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_darwin, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 08:20:35 np0005597378 epic_darwin[125398]: 167 167
Jan 27 08:20:35 np0005597378 systemd[1]: libpod-0a86412b2cb0bb89767ce52240935c7ff9c85e366d10f152a3973087627dd9a9.scope: Deactivated successfully.
Jan 27 08:20:35 np0005597378 podman[125382]: 2026-01-27 13:20:35.026239624 +0000 UTC m=+0.019623893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:20:35 np0005597378 conmon[125398]: conmon 0a86412b2cb0bb89767c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a86412b2cb0bb89767ce52240935c7ff9c85e366d10f152a3973087627dd9a9.scope/container/memory.events
Jan 27 08:20:35 np0005597378 podman[125382]: 2026-01-27 13:20:35.124856452 +0000 UTC m=+0.118240711 container attach 0a86412b2cb0bb89767ce52240935c7ff9c85e366d10f152a3973087627dd9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:20:35 np0005597378 podman[125382]: 2026-01-27 13:20:35.125833138 +0000 UTC m=+0.119217387 container died 0a86412b2cb0bb89767ce52240935c7ff9c85e366d10f152a3973087627dd9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_darwin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 08:20:35 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4297e374829847373219b9faab87c9df7650497aa6ca33a41c45f66c07bc21fe-merged.mount: Deactivated successfully.
Jan 27 08:20:35 np0005597378 podman[125382]: 2026-01-27 13:20:35.171526228 +0000 UTC m=+0.164910477 container remove 0a86412b2cb0bb89767ce52240935c7ff9c85e366d10f152a3973087627dd9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_darwin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 08:20:35 np0005597378 systemd[1]: libpod-conmon-0a86412b2cb0bb89767ce52240935c7ff9c85e366d10f152a3973087627dd9a9.scope: Deactivated successfully.
Jan 27 08:20:35 np0005597378 podman[125492]: 2026-01-27 13:20:35.329321482 +0000 UTC m=+0.038611500 container create a495c78df19833d7ce3cdda58dc2b91fce743bcbd1068b0f88ebeba85eb5efdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_villani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 08:20:35 np0005597378 systemd[1]: Started libpod-conmon-a495c78df19833d7ce3cdda58dc2b91fce743bcbd1068b0f88ebeba85eb5efdd.scope.
Jan 27 08:20:35 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:20:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/079db72ac4e6c2190ed7ea89e00fdef701b607434c17898d40a53c6394a18b99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:20:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/079db72ac4e6c2190ed7ea89e00fdef701b607434c17898d40a53c6394a18b99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:20:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/079db72ac4e6c2190ed7ea89e00fdef701b607434c17898d40a53c6394a18b99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:20:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/079db72ac4e6c2190ed7ea89e00fdef701b607434c17898d40a53c6394a18b99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:20:35 np0005597378 podman[125492]: 2026-01-27 13:20:35.407439651 +0000 UTC m=+0.116729699 container init a495c78df19833d7ce3cdda58dc2b91fce743bcbd1068b0f88ebeba85eb5efdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:20:35 np0005597378 podman[125492]: 2026-01-27 13:20:35.311418146 +0000 UTC m=+0.020708184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:20:35 np0005597378 podman[125492]: 2026-01-27 13:20:35.41768882 +0000 UTC m=+0.126978838 container start a495c78df19833d7ce3cdda58dc2b91fce743bcbd1068b0f88ebeba85eb5efdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_villani, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 08:20:35 np0005597378 podman[125492]: 2026-01-27 13:20:35.422177162 +0000 UTC m=+0.131467180 container attach a495c78df19833d7ce3cdda58dc2b91fce743bcbd1068b0f88ebeba85eb5efdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_villani, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:20:35 np0005597378 python3.9[125605]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:20:36 np0005597378 lvm[125692]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:20:36 np0005597378 lvm[125692]: VG ceph_vg0 finished
Jan 27 08:20:36 np0005597378 lvm[125695]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:20:36 np0005597378 lvm[125695]: VG ceph_vg1 finished
Jan 27 08:20:36 np0005597378 lvm[125697]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:20:36 np0005597378 lvm[125697]: VG ceph_vg2 finished
Jan 27 08:20:36 np0005597378 systemd[1]: session-41.scope: Deactivated successfully.
Jan 27 08:20:36 np0005597378 systemd[1]: session-41.scope: Consumed 3.820s CPU time.
Jan 27 08:20:36 np0005597378 systemd-logind[786]: Session 41 logged out. Waiting for processes to exit.
Jan 27 08:20:36 np0005597378 systemd-logind[786]: Removed session 41.
Jan 27 08:20:36 np0005597378 hopeful_villani[125515]: {}
Jan 27 08:20:36 np0005597378 systemd[1]: libpod-a495c78df19833d7ce3cdda58dc2b91fce743bcbd1068b0f88ebeba85eb5efdd.scope: Deactivated successfully.
Jan 27 08:20:36 np0005597378 systemd[1]: libpod-a495c78df19833d7ce3cdda58dc2b91fce743bcbd1068b0f88ebeba85eb5efdd.scope: Consumed 1.385s CPU time.
Jan 27 08:20:36 np0005597378 podman[125492]: 2026-01-27 13:20:36.301615123 +0000 UTC m=+1.010905171 container died a495c78df19833d7ce3cdda58dc2b91fce743bcbd1068b0f88ebeba85eb5efdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_villani, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:20:36 np0005597378 systemd[1]: var-lib-containers-storage-overlay-079db72ac4e6c2190ed7ea89e00fdef701b607434c17898d40a53c6394a18b99-merged.mount: Deactivated successfully.
Jan 27 08:20:36 np0005597378 podman[125492]: 2026-01-27 13:20:36.349312798 +0000 UTC m=+1.058602826 container remove a495c78df19833d7ce3cdda58dc2b91fce743bcbd1068b0f88ebeba85eb5efdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 08:20:36 np0005597378 systemd[1]: libpod-conmon-a495c78df19833d7ce3cdda58dc2b91fce743bcbd1068b0f88ebeba85eb5efdd.scope: Deactivated successfully.
Jan 27 08:20:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:20:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:20:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:20:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:20:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:20:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:20:37 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 27 08:20:37 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 27 08:20:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:20:37 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 27 08:20:37 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 27 08:20:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:40 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 27 08:20:40 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 27 08:20:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:41 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 27 08:20:41 np0005597378 ceph-osd[86941]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 27 08:20:42 np0005597378 systemd-logind[786]: New session 42 of user zuul.
Jan 27 08:20:42 np0005597378 systemd[1]: Started Session 42 of User zuul.
Jan 27 08:20:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:20:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:43 np0005597378 python3.9[125891]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:20:43 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 27 08:20:43 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 27 08:20:44 np0005597378 python3.9[126047]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:20:44 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 27 08:20:44 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 27 08:20:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:45 np0005597378 python3.9[126131]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 08:20:46 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 27 08:20:46 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 27 08:20:47 np0005597378 python3.9[126282]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:20:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:20:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:20:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:20:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:20:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:20:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:20:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:20:48 np0005597378 python3.9[126433]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 08:20:48 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 27 08:20:48 np0005597378 python3.9[126583]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:20:48 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 27 08:20:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:49 np0005597378 python3.9[126733]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:20:49 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 27 08:20:49 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 27 08:20:50 np0005597378 systemd[1]: session-42.scope: Deactivated successfully.
Jan 27 08:20:50 np0005597378 systemd[1]: session-42.scope: Consumed 5.449s CPU time.
Jan 27 08:20:50 np0005597378 systemd-logind[786]: Session 42 logged out. Waiting for processes to exit.
Jan 27 08:20:50 np0005597378 systemd-logind[786]: Removed session 42.
Jan 27 08:20:50 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 27 08:20:50 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 27 08:20:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:51 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 27 08:20:51 np0005597378 ceph-osd[85897]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 27 08:20:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:20:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:54 np0005597378 systemd[1]: session-17.scope: Deactivated successfully.
Jan 27 08:20:54 np0005597378 systemd[1]: session-17.scope: Consumed 1min 32.546s CPU time.
Jan 27 08:20:54 np0005597378 systemd-logind[786]: Session 17 logged out. Waiting for processes to exit.
Jan 27 08:20:54 np0005597378 systemd-logind[786]: Removed session 17.
Jan 27 08:20:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:57 np0005597378 systemd-logind[786]: New session 43 of user zuul.
Jan 27 08:20:57 np0005597378 systemd[1]: Started Session 43 of User zuul.
Jan 27 08:20:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:20:57 np0005597378 python3.9[126911]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:20:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:20:59 np0005597378 python3.9[127067]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:00 np0005597378 python3.9[127219]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:01 np0005597378 python3.9[127371]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:01 np0005597378 python3.9[127494]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520060.4644866-60-229659199620317/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=291c76c993cd36b2b210e54d3ae2c2656ac1a83d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:02 np0005597378 python3.9[127646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:02 np0005597378 python3.9[127769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520061.9664865-60-195175163997436/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=be022af6581cb08439429b15f1ef6e0a88d92a52 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:21:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:03 np0005597378 python3.9[127921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:04 np0005597378 python3.9[128044]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520063.046782-60-90666864849298/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7e425aefa656360315879ec2d4fe230436ea880d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:04 np0005597378 python3.9[128196]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:05 np0005597378 python3.9[128348]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:05 np0005597378 python3.9[128500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:06 np0005597378 python3.9[128623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520065.3937314-119-176635865580114/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5471f1d7afd3f71e064199fc18c12f1c2ef02892 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:06 np0005597378 python3.9[128775]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:07 np0005597378 python3.9[128898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520066.4645114-119-274778932062476/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=28a8f847b273c5a5e4cca9396d5c5b9a8983fa41 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:21:07 np0005597378 python3.9[129050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:08 np0005597378 python3.9[129173]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520067.505111-119-52571034523895/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=19813346cc572898b6d450a8cdee4651a06be3b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:09 np0005597378 python3.9[129327]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:09 np0005597378 python3.9[129479]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:10 np0005597378 python3.9[129631]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:10 np0005597378 python3.9[129754]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520069.7953084-178-202082410677575/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=92a8d9e3f209b6b761a7816b339cb40591eb8108 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:11 np0005597378 python3.9[129906]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:11 np0005597378 python3.9[130029]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520070.8933132-178-42634593911931/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=28a8f847b273c5a5e4cca9396d5c5b9a8983fa41 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:12 np0005597378 python3.9[130181]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:12 np0005597378 python3.9[130304]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520071.9758055-178-138839905244003/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ae01e8534082ee4ce0e8b059ea192c1373972702 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:21:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:13 np0005597378 python3.9[130456]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:14 np0005597378 python3.9[130608]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:15 np0005597378 python3.9[130731]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520074.1067562-246-110972391099305/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=97a46cb0fadae87f68ea27d8bade7fcd25874cec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:15 np0005597378 python3.9[130883]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:16 np0005597378 python3.9[131035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:16 np0005597378 python3.9[131158]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520075.8727436-270-253801364343920/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=97a46cb0fadae87f68ea27d8bade7fcd25874cec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:21:16
Jan 27 08:21:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:21:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:21:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['images', 'volumes', '.mgr', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'vms', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 27 08:21:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:17 np0005597378 python3.9[131310]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:21:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:21:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:21:18 np0005597378 python3.9[131462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:18 np0005597378 python3.9[131585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520077.6766047-294-28661161350724/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=97a46cb0fadae87f68ea27d8bade7fcd25874cec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:19 np0005597378 python3.9[131737]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:19 np0005597378 python3.9[131889]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:20 np0005597378 python3.9[132012]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520079.4700565-318-42104567100248/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=97a46cb0fadae87f68ea27d8bade7fcd25874cec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:21 np0005597378 python3.9[132164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:21 np0005597378 python3.9[132316]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:22 np0005597378 python3.9[132439]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520081.257343-342-60401264572391/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=97a46cb0fadae87f68ea27d8bade7fcd25874cec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:21:22 np0005597378 python3.9[132591]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:23 np0005597378 python3.9[132743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:24 np0005597378 python3.9[132866]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520083.158651-366-129630490166852/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=97a46cb0fadae87f68ea27d8bade7fcd25874cec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:24 np0005597378 systemd-logind[786]: Session 43 logged out. Waiting for processes to exit.
Jan 27 08:21:24 np0005597378 systemd[1]: session-43.scope: Deactivated successfully.
Jan 27 08:21:24 np0005597378 systemd[1]: session-43.scope: Consumed 20.540s CPU time.
Jan 27 08:21:24 np0005597378 systemd-logind[786]: Removed session 43.
Jan 27 08:21:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:21:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:21:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:30 np0005597378 systemd-logind[786]: New session 44 of user zuul.
Jan 27 08:21:30 np0005597378 systemd[1]: Started Session 44 of User zuul.
Jan 27 08:21:31 np0005597378 python3.9[133046]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:31 np0005597378 python3.9[133198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:32 np0005597378 python3.9[133321]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520091.167019-29-80835542950273/.source.conf _original_basename=ceph.conf follow=False checksum=74f72a6350979f747e6bbdcb12dd4b20855d0adc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:21:32 np0005597378 python3.9[133473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:33 np0005597378 python3.9[133596]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520092.5508163-29-138985914887211/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=b2765b5995bc6c569d1eddba58f26aa05958b691 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:33 np0005597378 systemd[1]: session-44.scope: Deactivated successfully.
Jan 27 08:21:33 np0005597378 systemd[1]: session-44.scope: Consumed 2.371s CPU time.
Jan 27 08:21:33 np0005597378 systemd-logind[786]: Session 44 logged out. Waiting for processes to exit.
Jan 27 08:21:33 np0005597378 systemd-logind[786]: Removed session 44.
Jan 27 08:21:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:21:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:21:37 np0005597378 podman[133763]: 2026-01-27 13:21:37.48196838 +0000 UTC m=+0.022273286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:21:37 np0005597378 podman[133763]: 2026-01-27 13:21:37.616810832 +0000 UTC m=+0.157115728 container create b89277da7fa80b2313c72768829ee07ebd55552798d549134753bbd7fc708e58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lewin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:21:37 np0005597378 systemd[1]: Started libpod-conmon-b89277da7fa80b2313c72768829ee07ebd55552798d549134753bbd7fc708e58.scope.
Jan 27 08:21:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:21:37 np0005597378 podman[133763]: 2026-01-27 13:21:37.835438704 +0000 UTC m=+0.375743610 container init b89277da7fa80b2313c72768829ee07ebd55552798d549134753bbd7fc708e58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lewin, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 08:21:37 np0005597378 podman[133763]: 2026-01-27 13:21:37.84354135 +0000 UTC m=+0.383846256 container start b89277da7fa80b2313c72768829ee07ebd55552798d549134753bbd7fc708e58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lewin, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:21:37 np0005597378 systemd[1]: libpod-b89277da7fa80b2313c72768829ee07ebd55552798d549134753bbd7fc708e58.scope: Deactivated successfully.
Jan 27 08:21:37 np0005597378 hardcore_lewin[133780]: 167 167
Jan 27 08:21:37 np0005597378 conmon[133780]: conmon b89277da7fa80b2313c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b89277da7fa80b2313c72768829ee07ebd55552798d549134753bbd7fc708e58.scope/container/memory.events
Jan 27 08:21:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:21:37 np0005597378 podman[133763]: 2026-01-27 13:21:37.937381148 +0000 UTC m=+0.477686084 container attach b89277da7fa80b2313c72768829ee07ebd55552798d549134753bbd7fc708e58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:21:37 np0005597378 podman[133763]: 2026-01-27 13:21:37.937891802 +0000 UTC m=+0.478196688 container died b89277da7fa80b2313c72768829ee07ebd55552798d549134753bbd7fc708e58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:21:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-87dca25c455469f7b1582fbce06eaba890f9483b5d82071e9a82dc0b819abd45-merged.mount: Deactivated successfully.
Jan 27 08:21:38 np0005597378 podman[133763]: 2026-01-27 13:21:38.199564193 +0000 UTC m=+0.739869079 container remove b89277da7fa80b2313c72768829ee07ebd55552798d549134753bbd7fc708e58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lewin, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:21:38 np0005597378 systemd[1]: libpod-conmon-b89277da7fa80b2313c72768829ee07ebd55552798d549134753bbd7fc708e58.scope: Deactivated successfully.
Jan 27 08:21:38 np0005597378 podman[133805]: 2026-01-27 13:21:38.387837754 +0000 UTC m=+0.058683109 container create 620c037de29c585df73ef31d8e77fccf0dc3f7640476e621f56c7957f71c8e4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_pare, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 08:21:38 np0005597378 systemd[1]: Started libpod-conmon-620c037de29c585df73ef31d8e77fccf0dc3f7640476e621f56c7957f71c8e4d.scope.
Jan 27 08:21:38 np0005597378 podman[133805]: 2026-01-27 13:21:38.354164654 +0000 UTC m=+0.025010009 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:21:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:21:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a182abf71f7d1b7e26402c7fd75e2927951c4438e6b7bd1509172cae42f9a04a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:21:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a182abf71f7d1b7e26402c7fd75e2927951c4438e6b7bd1509172cae42f9a04a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:21:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a182abf71f7d1b7e26402c7fd75e2927951c4438e6b7bd1509172cae42f9a04a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:21:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a182abf71f7d1b7e26402c7fd75e2927951c4438e6b7bd1509172cae42f9a04a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:21:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a182abf71f7d1b7e26402c7fd75e2927951c4438e6b7bd1509172cae42f9a04a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:21:38 np0005597378 podman[133805]: 2026-01-27 13:21:38.493565568 +0000 UTC m=+0.164410933 container init 620c037de29c585df73ef31d8e77fccf0dc3f7640476e621f56c7957f71c8e4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 08:21:38 np0005597378 podman[133805]: 2026-01-27 13:21:38.499352803 +0000 UTC m=+0.170198158 container start 620c037de29c585df73ef31d8e77fccf0dc3f7640476e621f56c7957f71c8e4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_pare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:21:38 np0005597378 podman[133805]: 2026-01-27 13:21:38.509513685 +0000 UTC m=+0.180359040 container attach 620c037de29c585df73ef31d8e77fccf0dc3f7640476e621f56c7957f71c8e4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:21:38 np0005597378 festive_pare[133821]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:21:38 np0005597378 festive_pare[133821]: --> All data devices are unavailable
Jan 27 08:21:38 np0005597378 systemd[1]: libpod-620c037de29c585df73ef31d8e77fccf0dc3f7640476e621f56c7957f71c8e4d.scope: Deactivated successfully.
Jan 27 08:21:38 np0005597378 podman[133805]: 2026-01-27 13:21:38.953250622 +0000 UTC m=+0.624095977 container died 620c037de29c585df73ef31d8e77fccf0dc3f7640476e621f56c7957f71c8e4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:21:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a182abf71f7d1b7e26402c7fd75e2927951c4438e6b7bd1509172cae42f9a04a-merged.mount: Deactivated successfully.
Jan 27 08:21:39 np0005597378 podman[133805]: 2026-01-27 13:21:39.012215397 +0000 UTC m=+0.683060752 container remove 620c037de29c585df73ef31d8e77fccf0dc3f7640476e621f56c7957f71c8e4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:21:39 np0005597378 systemd[1]: libpod-conmon-620c037de29c585df73ef31d8e77fccf0dc3f7640476e621f56c7957f71c8e4d.scope: Deactivated successfully.
Jan 27 08:21:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:39 np0005597378 systemd-logind[786]: New session 45 of user zuul.
Jan 27 08:21:39 np0005597378 systemd[1]: Started Session 45 of User zuul.
Jan 27 08:21:39 np0005597378 podman[133918]: 2026-01-27 13:21:39.443294005 +0000 UTC m=+0.040250917 container create fa95541982cd3002e891be8330c8ae3e590f5fc50ebf1dbd4622a375e6050fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_blackburn, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 08:21:39 np0005597378 systemd[1]: Started libpod-conmon-fa95541982cd3002e891be8330c8ae3e590f5fc50ebf1dbd4622a375e6050fca.scope.
Jan 27 08:21:39 np0005597378 podman[133918]: 2026-01-27 13:21:39.424197995 +0000 UTC m=+0.021154907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:21:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:21:39 np0005597378 podman[133918]: 2026-01-27 13:21:39.55463507 +0000 UTC m=+0.151592002 container init fa95541982cd3002e891be8330c8ae3e590f5fc50ebf1dbd4622a375e6050fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_blackburn, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 08:21:39 np0005597378 podman[133918]: 2026-01-27 13:21:39.562237033 +0000 UTC m=+0.159193945 container start fa95541982cd3002e891be8330c8ae3e590f5fc50ebf1dbd4622a375e6050fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_blackburn, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:21:39 np0005597378 infallible_blackburn[133977]: 167 167
Jan 27 08:21:39 np0005597378 systemd[1]: libpod-fa95541982cd3002e891be8330c8ae3e590f5fc50ebf1dbd4622a375e6050fca.scope: Deactivated successfully.
Jan 27 08:21:39 np0005597378 podman[133918]: 2026-01-27 13:21:39.569776894 +0000 UTC m=+0.166733836 container attach fa95541982cd3002e891be8330c8ae3e590f5fc50ebf1dbd4622a375e6050fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:21:39 np0005597378 podman[133918]: 2026-01-27 13:21:39.571450059 +0000 UTC m=+0.168406981 container died fa95541982cd3002e891be8330c8ae3e590f5fc50ebf1dbd4622a375e6050fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_blackburn, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 27 08:21:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay-633698da6cee2a59f18c6c5b603f0cc9ce7dc0b08f65c2606ffd7aa57a62222f-merged.mount: Deactivated successfully.
Jan 27 08:21:39 np0005597378 podman[133918]: 2026-01-27 13:21:39.656590924 +0000 UTC m=+0.253547836 container remove fa95541982cd3002e891be8330c8ae3e590f5fc50ebf1dbd4622a375e6050fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:21:39 np0005597378 systemd[1]: libpod-conmon-fa95541982cd3002e891be8330c8ae3e590f5fc50ebf1dbd4622a375e6050fca.scope: Deactivated successfully.
Jan 27 08:21:39 np0005597378 podman[134011]: 2026-01-27 13:21:39.809294724 +0000 UTC m=+0.048674922 container create a05e8908707beca983e786e89c655183ba1a283619fd4f63e975b08f1a9ecb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_blackwell, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:21:39 np0005597378 systemd[1]: Started libpod-conmon-a05e8908707beca983e786e89c655183ba1a283619fd4f63e975b08f1a9ecb3d.scope.
Jan 27 08:21:39 np0005597378 podman[134011]: 2026-01-27 13:21:39.782285813 +0000 UTC m=+0.021666031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:21:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:21:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d2c60565af2ff2ab0efdaea3abdac58dfaf3932397df5d7811a78a41854793/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:21:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d2c60565af2ff2ab0efdaea3abdac58dfaf3932397df5d7811a78a41854793/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:21:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d2c60565af2ff2ab0efdaea3abdac58dfaf3932397df5d7811a78a41854793/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:21:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d2c60565af2ff2ab0efdaea3abdac58dfaf3932397df5d7811a78a41854793/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:21:39 np0005597378 podman[134011]: 2026-01-27 13:21:39.903310656 +0000 UTC m=+0.142690874 container init a05e8908707beca983e786e89c655183ba1a283619fd4f63e975b08f1a9ecb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:21:39 np0005597378 podman[134011]: 2026-01-27 13:21:39.909866712 +0000 UTC m=+0.149246910 container start a05e8908707beca983e786e89c655183ba1a283619fd4f63e975b08f1a9ecb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 08:21:39 np0005597378 podman[134011]: 2026-01-27 13:21:39.918301997 +0000 UTC m=+0.157682195 container attach a05e8908707beca983e786e89c655183ba1a283619fd4f63e975b08f1a9ecb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]: {
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:    "0": [
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:        {
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "devices": [
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "/dev/loop3"
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            ],
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_name": "ceph_lv0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_size": "21470642176",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "name": "ceph_lv0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "tags": {
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.cluster_name": "ceph",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.crush_device_class": "",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.encrypted": "0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.objectstore": "bluestore",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.osd_id": "0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.type": "block",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.vdo": "0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.with_tpm": "0"
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            },
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "type": "block",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "vg_name": "ceph_vg0"
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:        }
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:    ],
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:    "1": [
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:        {
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "devices": [
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "/dev/loop4"
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            ],
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_name": "ceph_lv1",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_size": "21470642176",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "name": "ceph_lv1",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "tags": {
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.cluster_name": "ceph",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.crush_device_class": "",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.encrypted": "0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.objectstore": "bluestore",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.osd_id": "1",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.type": "block",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.vdo": "0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.with_tpm": "0"
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            },
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "type": "block",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "vg_name": "ceph_vg1"
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:        }
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:    ],
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:    "2": [
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:        {
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "devices": [
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "/dev/loop5"
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            ],
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_name": "ceph_lv2",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_size": "21470642176",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "name": "ceph_lv2",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "tags": {
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.cluster_name": "ceph",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.crush_device_class": "",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.encrypted": "0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.objectstore": "bluestore",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.osd_id": "2",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.type": "block",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.vdo": "0",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:                "ceph.with_tpm": "0"
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            },
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "type": "block",
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:            "vg_name": "ceph_vg2"
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:        }
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]:    ]
Jan 27 08:21:40 np0005597378 optimistic_blackwell[134052]: }
Jan 27 08:21:40 np0005597378 systemd[1]: libpod-a05e8908707beca983e786e89c655183ba1a283619fd4f63e975b08f1a9ecb3d.scope: Deactivated successfully.
Jan 27 08:21:40 np0005597378 podman[134011]: 2026-01-27 13:21:40.220600044 +0000 UTC m=+0.459980272 container died a05e8908707beca983e786e89c655183ba1a283619fd4f63e975b08f1a9ecb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_blackwell, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 08:21:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-79d2c60565af2ff2ab0efdaea3abdac58dfaf3932397df5d7811a78a41854793-merged.mount: Deactivated successfully.
Jan 27 08:21:40 np0005597378 podman[134011]: 2026-01-27 13:21:40.275117221 +0000 UTC m=+0.514497419 container remove a05e8908707beca983e786e89c655183ba1a283619fd4f63e975b08f1a9ecb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:21:40 np0005597378 systemd[1]: libpod-conmon-a05e8908707beca983e786e89c655183ba1a283619fd4f63e975b08f1a9ecb3d.scope: Deactivated successfully.
Jan 27 08:21:40 np0005597378 python3.9[134130]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:21:40 np0005597378 podman[134238]: 2026-01-27 13:21:40.715836936 +0000 UTC m=+0.054044204 container create 5b1a0d499e18c36b05a5615ec3fe3b62acad945cb1a147341b312065d0fa20c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_boyd, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 08:21:40 np0005597378 systemd[1]: Started libpod-conmon-5b1a0d499e18c36b05a5615ec3fe3b62acad945cb1a147341b312065d0fa20c9.scope.
Jan 27 08:21:40 np0005597378 podman[134238]: 2026-01-27 13:21:40.684807797 +0000 UTC m=+0.023015085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:21:40 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:21:40 np0005597378 podman[134238]: 2026-01-27 13:21:40.795031712 +0000 UTC m=+0.133239000 container init 5b1a0d499e18c36b05a5615ec3fe3b62acad945cb1a147341b312065d0fa20c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_boyd, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:21:40 np0005597378 podman[134238]: 2026-01-27 13:21:40.801053773 +0000 UTC m=+0.139261041 container start 5b1a0d499e18c36b05a5615ec3fe3b62acad945cb1a147341b312065d0fa20c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:21:40 np0005597378 sharp_boyd[134274]: 167 167
Jan 27 08:21:40 np0005597378 systemd[1]: libpod-5b1a0d499e18c36b05a5615ec3fe3b62acad945cb1a147341b312065d0fa20c9.scope: Deactivated successfully.
Jan 27 08:21:40 np0005597378 podman[134238]: 2026-01-27 13:21:40.821987402 +0000 UTC m=+0.160194690 container attach 5b1a0d499e18c36b05a5615ec3fe3b62acad945cb1a147341b312065d0fa20c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_boyd, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:21:40 np0005597378 podman[134238]: 2026-01-27 13:21:40.822531208 +0000 UTC m=+0.160738476 container died 5b1a0d499e18c36b05a5615ec3fe3b62acad945cb1a147341b312065d0fa20c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_boyd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:21:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-174d75470a0283f38717d2da30ad5792add97f8bcc381aa73636432533d8ab74-merged.mount: Deactivated successfully.
Jan 27 08:21:40 np0005597378 podman[134238]: 2026-01-27 13:21:40.898009424 +0000 UTC m=+0.236216692 container remove 5b1a0d499e18c36b05a5615ec3fe3b62acad945cb1a147341b312065d0fa20c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_boyd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 08:21:40 np0005597378 systemd[1]: libpod-conmon-5b1a0d499e18c36b05a5615ec3fe3b62acad945cb1a147341b312065d0fa20c9.scope: Deactivated successfully.
Jan 27 08:21:41 np0005597378 podman[134332]: 2026-01-27 13:21:41.051349121 +0000 UTC m=+0.045877127 container create 3028f08d3a67582a187429c7251fdf3e69069f26652223063d0fc1cf219ec589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 08:21:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:41 np0005597378 systemd[1]: Started libpod-conmon-3028f08d3a67582a187429c7251fdf3e69069f26652223063d0fc1cf219ec589.scope.
Jan 27 08:21:41 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:21:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30147a305c7ca4d4b79d8b82d16c1657355279bd121551042be2cb535d23ace/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:21:41 np0005597378 podman[134332]: 2026-01-27 13:21:41.027951596 +0000 UTC m=+0.022479632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:21:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30147a305c7ca4d4b79d8b82d16c1657355279bd121551042be2cb535d23ace/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:21:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30147a305c7ca4d4b79d8b82d16c1657355279bd121551042be2cb535d23ace/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:21:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30147a305c7ca4d4b79d8b82d16c1657355279bd121551042be2cb535d23ace/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:21:41 np0005597378 podman[134332]: 2026-01-27 13:21:41.135819858 +0000 UTC m=+0.130347864 container init 3028f08d3a67582a187429c7251fdf3e69069f26652223063d0fc1cf219ec589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 08:21:41 np0005597378 podman[134332]: 2026-01-27 13:21:41.145435675 +0000 UTC m=+0.139963681 container start 3028f08d3a67582a187429c7251fdf3e69069f26652223063d0fc1cf219ec589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 08:21:41 np0005597378 podman[134332]: 2026-01-27 13:21:41.150896741 +0000 UTC m=+0.145424777 container attach 3028f08d3a67582a187429c7251fdf3e69069f26652223063d0fc1cf219ec589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_banach, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:21:41 np0005597378 python3.9[134427]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:41 np0005597378 lvm[134652]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:21:41 np0005597378 lvm[134650]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:21:41 np0005597378 lvm[134652]: VG ceph_vg1 finished
Jan 27 08:21:41 np0005597378 lvm[134650]: VG ceph_vg0 finished
Jan 27 08:21:41 np0005597378 lvm[134656]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:21:41 np0005597378 lvm[134656]: VG ceph_vg2 finished
Jan 27 08:21:41 np0005597378 keen_banach[134370]: {}
Jan 27 08:21:42 np0005597378 systemd[1]: libpod-3028f08d3a67582a187429c7251fdf3e69069f26652223063d0fc1cf219ec589.scope: Deactivated successfully.
Jan 27 08:21:42 np0005597378 systemd[1]: libpod-3028f08d3a67582a187429c7251fdf3e69069f26652223063d0fc1cf219ec589.scope: Consumed 1.398s CPU time.
Jan 27 08:21:42 np0005597378 podman[134332]: 2026-01-27 13:21:42.006194764 +0000 UTC m=+1.000722770 container died 3028f08d3a67582a187429c7251fdf3e69069f26652223063d0fc1cf219ec589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_banach, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:21:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f30147a305c7ca4d4b79d8b82d16c1657355279bd121551042be2cb535d23ace-merged.mount: Deactivated successfully.
Jan 27 08:21:42 np0005597378 podman[134332]: 2026-01-27 13:21:42.069208878 +0000 UTC m=+1.063736894 container remove 3028f08d3a67582a187429c7251fdf3e69069f26652223063d0fc1cf219ec589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_banach, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 08:21:42 np0005597378 systemd[1]: libpod-conmon-3028f08d3a67582a187429c7251fdf3e69069f26652223063d0fc1cf219ec589.scope: Deactivated successfully.
Jan 27 08:21:42 np0005597378 python3.9[134654]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:21:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:21:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:21:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:21:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:21:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:21:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:21:42 np0005597378 python3.9[134845]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:21:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:21:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:43 np0005597378 python3.9[134997]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 27 08:21:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:45 np0005597378 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 27 08:21:45 np0005597378 python3.9[135155]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:21:46 np0005597378 python3.9[135239]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:21:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:21:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:21:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:21:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:21:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:21:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:21:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:21:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:49 np0005597378 python3.9[135392]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:21:50 np0005597378 python3[135547]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 27 08:21:50 np0005597378 python3.9[135701]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:51 np0005597378 python3.9[135853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:51 np0005597378 python3.9[135931]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:52 np0005597378 python3.9[136083]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:52 np0005597378 python3.9[136161]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.18yq5vlf recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:21:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:53 np0005597378 python3.9[136313]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:53 np0005597378 python3.9[136391]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:54 np0005597378 python3.9[136543]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:21:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:55 np0005597378 python3[136696]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 08:21:56 np0005597378 python3.9[136848]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:56 np0005597378 python3.9[136973]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520115.8020291-152-8064062555159/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:57 np0005597378 python3.9[137125]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:21:58 np0005597378 python3.9[137250]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520117.0589476-167-267013022378628/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:58 np0005597378 python3.9[137402]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:21:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:21:59 np0005597378 python3.9[137527]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520118.2102134-182-174634191122955/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:21:59 np0005597378 python3.9[137679]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:22:00 np0005597378 python3.9[137804]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520119.3165138-197-145515267452709/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:01 np0005597378 python3.9[137956]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:22:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:01 np0005597378 python3.9[138081]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520120.4439871-212-161727727855672/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:02 np0005597378 python3.9[138233]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:02 np0005597378 python3.9[138385]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:22:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:22:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:03 np0005597378 python3.9[138540]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:04 np0005597378 python3.9[138692]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:22:04 np0005597378 python3.9[138845]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:22:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:05 np0005597378 python3.9[138999]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:22:06 np0005597378 python3.9[139154]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:07 np0005597378 python3.9[139304]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:22:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.229700) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520128229765, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1659, "num_deletes": 251, "total_data_size": 2355572, "memory_usage": 2398104, "flush_reason": "Manual Compaction"}
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520128280516, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1388387, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7301, "largest_seqno": 8959, "table_properties": {"data_size": 1382888, "index_size": 2446, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16639, "raw_average_key_size": 21, "raw_value_size": 1369636, "raw_average_value_size": 1735, "num_data_blocks": 114, "num_entries": 789, "num_filter_entries": 789, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519977, "oldest_key_time": 1769519977, "file_creation_time": 1769520128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 50854 microseconds, and 4362 cpu microseconds.
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.280567) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1388387 bytes OK
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.280582) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.286196) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.286246) EVENT_LOG_v1 {"time_micros": 1769520128286237, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.286276) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2348076, prev total WAL file size 2348076, number of live WAL files 2.
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.287189) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1355KB)], [20(7568KB)]
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520128287219, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9138271, "oldest_snapshot_seqno": -1}
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3384 keys, 7075820 bytes, temperature: kUnknown
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520128411736, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7075820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7049988, "index_size": 16251, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 81029, "raw_average_key_size": 23, "raw_value_size": 6985644, "raw_average_value_size": 2064, "num_data_blocks": 720, "num_entries": 3384, "num_filter_entries": 3384, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769520128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.412026) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7075820 bytes
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.416831) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 73.3 rd, 56.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.4 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(11.7) write-amplify(5.1) OK, records in: 3828, records dropped: 444 output_compression: NoCompression
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.416867) EVENT_LOG_v1 {"time_micros": 1769520128416852, "job": 6, "event": "compaction_finished", "compaction_time_micros": 124604, "compaction_time_cpu_micros": 18064, "output_level": 6, "num_output_files": 1, "total_output_size": 7075820, "num_input_records": 3828, "num_output_records": 3384, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520128417232, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520128418617, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.287131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.418808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.418819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.418823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.418826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:22:08 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:22:08.418829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:22:08 np0005597378 python3.9[139457]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:22:08 np0005597378 ovs-vsctl[139458]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 27 08:22:09 np0005597378 python3.9[139610]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:22:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:09 np0005597378 python3.9[139765]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:22:09 np0005597378 ovs-vsctl[139766]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 27 08:22:10 np0005597378 python3.9[139916]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:22:10 np0005597378 python3.9[140070]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:22:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:11 np0005597378 python3.9[140222]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:22:12 np0005597378 python3.9[140300]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:22:12 np0005597378 python3.9[140452]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:22:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:13 np0005597378 python3.9[140530]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:22:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:22:13 np0005597378 python3.9[140682]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:14 np0005597378 python3.9[140834]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:22:14 np0005597378 python3.9[140912]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:15 np0005597378 python3.9[141064]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:22:15 np0005597378 python3.9[141142]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:16 np0005597378 python3.9[141294]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:22:16 np0005597378 systemd[1]: Reloading.
Jan 27 08:22:16 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:22:16 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:22:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:22:16
Jan 27 08:22:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:22:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:22:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 27 08:22:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:17 np0005597378 python3.9[141483]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:22:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:22:18 np0005597378 python3.9[141561]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:22:18 np0005597378 python3.9[141713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:22:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:19 np0005597378 python3.9[141791]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:19 np0005597378 python3.9[141943]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:22:19 np0005597378 systemd[1]: Reloading.
Jan 27 08:22:19 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:22:19 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:22:20 np0005597378 systemd[1]: Starting Create netns directory...
Jan 27 08:22:20 np0005597378 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 08:22:20 np0005597378 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 08:22:20 np0005597378 systemd[1]: Finished Create netns directory.
Jan 27 08:22:20 np0005597378 python3.9[142136]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:22:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:21 np0005597378 python3.9[142288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:22:21 np0005597378 python3.9[142411]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520140.9641962-463-29351669222005/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:22:22 np0005597378 python3.9[142563]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:22:23 np0005597378 python3.9[142715]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:22:23 np0005597378 python3.9[142867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:22:24 np0005597378 python3.9[142990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520143.4939435-496-29141880569618/.source.json _original_basename=.s1xrduqf follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:24 np0005597378 python3.9[143140]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:27 np0005597378 python3.9[143563]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:22:28 np0005597378 python3.9[143715]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 08:22:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:22:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:29 np0005597378 python3[143867]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 08:22:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:22:34 np0005597378 podman[143879]: 2026-01-27 13:22:34.859794545 +0000 UTC m=+5.630366392 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 08:22:34 np0005597378 podman[143996]: 2026-01-27 13:22:34.983059228 +0000 UTC m=+0.046644848 container create 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:22:34 np0005597378 podman[143996]: 2026-01-27 13:22:34.956825757 +0000 UTC m=+0.020411407 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 08:22:34 np0005597378 python3[143867]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 08:22:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:35 np0005597378 python3.9[144186]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:22:36 np0005597378 python3.9[144340]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:36 np0005597378 python3.9[144416]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:22:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:37 np0005597378 python3.9[144567]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769520156.9122024-574-226018139877306/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:38 np0005597378 python3.9[144643]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:22:38 np0005597378 systemd[1]: Reloading.
Jan 27 08:22:38 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:22:38 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:22:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:22:38 np0005597378 python3.9[144755]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:22:38 np0005597378 systemd[1]: Reloading.
Jan 27 08:22:39 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:22:39 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:22:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:39 np0005597378 systemd[1]: Starting ovn_controller container...
Jan 27 08:22:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:22:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae8912ad17ab7dc52fd830af920de977d64dec609ffbfa3662ea6777de1ee293/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:39 np0005597378 systemd[1]: Started /usr/bin/podman healthcheck run 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882.
Jan 27 08:22:39 np0005597378 podman[144797]: 2026-01-27 13:22:39.380479271 +0000 UTC m=+0.121983965 container init 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: + sudo -E kolla_set_configs
Jan 27 08:22:39 np0005597378 podman[144797]: 2026-01-27 13:22:39.405025756 +0000 UTC m=+0.146530450 container start 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:22:39 np0005597378 edpm-start-podman-container[144797]: ovn_controller
Jan 27 08:22:39 np0005597378 systemd[1]: Created slice User Slice of UID 0.
Jan 27 08:22:39 np0005597378 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 27 08:22:39 np0005597378 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 27 08:22:39 np0005597378 systemd[1]: Starting User Manager for UID 0...
Jan 27 08:22:39 np0005597378 edpm-start-podman-container[144796]: Creating additional drop-in dependency for "ovn_controller" (71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882)
Jan 27 08:22:39 np0005597378 podman[144819]: 2026-01-27 13:22:39.485465581 +0000 UTC m=+0.066003861 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 08:22:39 np0005597378 systemd[1]: 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882-787ead50f8d40382.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 08:22:39 np0005597378 systemd[1]: 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882-787ead50f8d40382.service: Failed with result 'exit-code'.
Jan 27 08:22:39 np0005597378 systemd[1]: Reloading.
Jan 27 08:22:39 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:22:39 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:22:39 np0005597378 systemd[144843]: Queued start job for default target Main User Target.
Jan 27 08:22:39 np0005597378 systemd[144843]: Created slice User Application Slice.
Jan 27 08:22:39 np0005597378 systemd[144843]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 27 08:22:39 np0005597378 systemd[144843]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 08:22:39 np0005597378 systemd[144843]: Reached target Paths.
Jan 27 08:22:39 np0005597378 systemd[144843]: Reached target Timers.
Jan 27 08:22:39 np0005597378 systemd[144843]: Starting D-Bus User Message Bus Socket...
Jan 27 08:22:39 np0005597378 systemd[144843]: Starting Create User's Volatile Files and Directories...
Jan 27 08:22:39 np0005597378 systemd[144843]: Listening on D-Bus User Message Bus Socket.
Jan 27 08:22:39 np0005597378 systemd[144843]: Finished Create User's Volatile Files and Directories.
Jan 27 08:22:39 np0005597378 systemd[144843]: Reached target Sockets.
Jan 27 08:22:39 np0005597378 systemd[144843]: Reached target Basic System.
Jan 27 08:22:39 np0005597378 systemd[144843]: Reached target Main User Target.
Jan 27 08:22:39 np0005597378 systemd[144843]: Startup finished in 131ms.
Jan 27 08:22:39 np0005597378 systemd[1]: Started User Manager for UID 0.
Jan 27 08:22:39 np0005597378 systemd[1]: Started ovn_controller container.
Jan 27 08:22:39 np0005597378 systemd[1]: Started Session c1 of User root.
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: INFO:__main__:Validating config file
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: INFO:__main__:Writing out command to execute
Jan 27 08:22:39 np0005597378 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: ++ cat /run_command
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: + ARGS=
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: + sudo kolla_copy_cacerts
Jan 27 08:22:39 np0005597378 systemd[1]: Started Session c2 of User root.
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: + [[ ! -n '' ]]
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: + . kolla_extend_start
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: + umask 0022
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 27 08:22:39 np0005597378 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 27 08:22:39 np0005597378 NetworkManager[48904]: <info>  [1769520159.9179] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 27 08:22:39 np0005597378 NetworkManager[48904]: <info>  [1769520159.9185] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:22:39 np0005597378 NetworkManager[48904]: <warn>  [1769520159.9187] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:22:39 np0005597378 NetworkManager[48904]: <info>  [1769520159.9192] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 27 08:22:39 np0005597378 NetworkManager[48904]: <info>  [1769520159.9196] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 27 08:22:39 np0005597378 NetworkManager[48904]: <info>  [1769520159.9198] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 27 08:22:39 np0005597378 kernel: br-int: entered promiscuous mode
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 27 08:22:39 np0005597378 systemd-udevd[144948]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 27 08:22:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:39Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 27 08:22:40 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:40Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 08:22:40 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:40Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 08:22:40 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:40Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 08:22:40 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:40Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 08:22:40 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:40Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 08:22:40 np0005597378 ovn_controller[144812]: 2026-01-27T13:22:40Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 08:22:40 np0005597378 NetworkManager[48904]: <info>  [1769520160.1059] manager: (ovn-2f8bbb-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 27 08:22:40 np0005597378 kernel: genev_sys_6081: entered promiscuous mode
Jan 27 08:22:40 np0005597378 systemd-udevd[144950]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:22:40 np0005597378 NetworkManager[48904]: <info>  [1769520160.1258] device (genev_sys_6081): carrier: link connected
Jan 27 08:22:40 np0005597378 NetworkManager[48904]: <info>  [1769520160.1261] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 27 08:22:40 np0005597378 python3.9[145078]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 08:22:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:41 np0005597378 python3.9[145230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:22:41 np0005597378 python3.9[145353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520160.9897184-619-100107287605168/.source.yaml _original_basename=.m1bhyw__ follow=False checksum=3b5d69051acb0559327bb5af213e6d42c4a25d3a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:22:42 np0005597378 python3.9[145555]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:22:42 np0005597378 ovs-vsctl[145571]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 27 08:22:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:22:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:22:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:22:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:22:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:22:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:22:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:22:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:22:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:22:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:22:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:22:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:22:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:22:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:22:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:22:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:22:43 np0005597378 podman[145794]: 2026-01-27 13:22:43.266300158 +0000 UTC m=+0.041364455 container create 339fb9dd01f8a0f56bf68e68fbadfded758bd14462d2e03afc358c9236618789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shamir, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:22:43 np0005597378 systemd[1]: Started libpod-conmon-339fb9dd01f8a0f56bf68e68fbadfded758bd14462d2e03afc358c9236618789.scope.
Jan 27 08:22:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:22:43 np0005597378 podman[145794]: 2026-01-27 13:22:43.251099182 +0000 UTC m=+0.026163499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:22:43 np0005597378 podman[145794]: 2026-01-27 13:22:43.356675388 +0000 UTC m=+0.131739705 container init 339fb9dd01f8a0f56bf68e68fbadfded758bd14462d2e03afc358c9236618789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shamir, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 08:22:43 np0005597378 podman[145794]: 2026-01-27 13:22:43.367052526 +0000 UTC m=+0.142116823 container start 339fb9dd01f8a0f56bf68e68fbadfded758bd14462d2e03afc358c9236618789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shamir, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 08:22:43 np0005597378 podman[145794]: 2026-01-27 13:22:43.370870087 +0000 UTC m=+0.145934384 container attach 339fb9dd01f8a0f56bf68e68fbadfded758bd14462d2e03afc358c9236618789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:22:43 np0005597378 hopeful_shamir[145819]: 167 167
Jan 27 08:22:43 np0005597378 systemd[1]: libpod-339fb9dd01f8a0f56bf68e68fbadfded758bd14462d2e03afc358c9236618789.scope: Deactivated successfully.
Jan 27 08:22:43 np0005597378 podman[145794]: 2026-01-27 13:22:43.373930929 +0000 UTC m=+0.148995236 container died 339fb9dd01f8a0f56bf68e68fbadfded758bd14462d2e03afc358c9236618789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shamir, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:22:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ad6c1278ef4f989797c611fe6374127c6fbdfc03a261870e32c8869a73186e3f-merged.mount: Deactivated successfully.
Jan 27 08:22:43 np0005597378 python3.9[145808]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:22:43 np0005597378 podman[145794]: 2026-01-27 13:22:43.415483577 +0000 UTC m=+0.190547874 container remove 339fb9dd01f8a0f56bf68e68fbadfded758bd14462d2e03afc358c9236618789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:22:43 np0005597378 ovs-vsctl[145837]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 27 08:22:43 np0005597378 systemd[1]: libpod-conmon-339fb9dd01f8a0f56bf68e68fbadfded758bd14462d2e03afc358c9236618789.scope: Deactivated successfully.
Jan 27 08:22:43 np0005597378 podman[145870]: 2026-01-27 13:22:43.575027213 +0000 UTC m=+0.043223044 container create 7220749a36790e276aa07bb55e128dad55f923ac28467345739271fb7fdd6905 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:22:43 np0005597378 systemd[1]: Started libpod-conmon-7220749a36790e276aa07bb55e128dad55f923ac28467345739271fb7fdd6905.scope.
Jan 27 08:22:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:22:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ee6572151caeadb835502297198b3a8ac74cfe5936a4866d3234b5f48693d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ee6572151caeadb835502297198b3a8ac74cfe5936a4866d3234b5f48693d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ee6572151caeadb835502297198b3a8ac74cfe5936a4866d3234b5f48693d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ee6572151caeadb835502297198b3a8ac74cfe5936a4866d3234b5f48693d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ee6572151caeadb835502297198b3a8ac74cfe5936a4866d3234b5f48693d5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:43 np0005597378 podman[145870]: 2026-01-27 13:22:43.648718889 +0000 UTC m=+0.116914740 container init 7220749a36790e276aa07bb55e128dad55f923ac28467345739271fb7fdd6905 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_merkle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 08:22:43 np0005597378 podman[145870]: 2026-01-27 13:22:43.556291083 +0000 UTC m=+0.024486934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:22:43 np0005597378 podman[145870]: 2026-01-27 13:22:43.661234163 +0000 UTC m=+0.129429994 container start 7220749a36790e276aa07bb55e128dad55f923ac28467345739271fb7fdd6905 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_merkle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:22:43 np0005597378 podman[145870]: 2026-01-27 13:22:43.670352236 +0000 UTC m=+0.138548067 container attach 7220749a36790e276aa07bb55e128dad55f923ac28467345739271fb7fdd6905 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:22:44 np0005597378 reverent_merkle[145886]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:22:44 np0005597378 reverent_merkle[145886]: --> All data devices are unavailable
Jan 27 08:22:44 np0005597378 systemd[1]: libpod-7220749a36790e276aa07bb55e128dad55f923ac28467345739271fb7fdd6905.scope: Deactivated successfully.
Jan 27 08:22:44 np0005597378 podman[146034]: 2026-01-27 13:22:44.159542326 +0000 UTC m=+0.022752069 container died 7220749a36790e276aa07bb55e128dad55f923ac28467345739271fb7fdd6905 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_merkle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:22:44 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a5ee6572151caeadb835502297198b3a8ac74cfe5936a4866d3234b5f48693d5-merged.mount: Deactivated successfully.
Jan 27 08:22:44 np0005597378 podman[146034]: 2026-01-27 13:22:44.200396355 +0000 UTC m=+0.063606078 container remove 7220749a36790e276aa07bb55e128dad55f923ac28467345739271fb7fdd6905 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 08:22:44 np0005597378 systemd[1]: libpod-conmon-7220749a36790e276aa07bb55e128dad55f923ac28467345739271fb7fdd6905.scope: Deactivated successfully.
Jan 27 08:22:44 np0005597378 python3.9[146029]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:22:44 np0005597378 ovs-vsctl[146049]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 27 08:22:44 np0005597378 podman[146136]: 2026-01-27 13:22:44.610615558 +0000 UTC m=+0.040302616 container create df4e47464fcee09c4b3c8907d1e44841b678625e0485baa2b6fb392bbf61b7b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_mcclintock, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:22:44 np0005597378 systemd[1]: Started libpod-conmon-df4e47464fcee09c4b3c8907d1e44841b678625e0485baa2b6fb392bbf61b7b2.scope.
Jan 27 08:22:44 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:22:44 np0005597378 podman[146136]: 2026-01-27 13:22:44.681772746 +0000 UTC m=+0.111459834 container init df4e47464fcee09c4b3c8907d1e44841b678625e0485baa2b6fb392bbf61b7b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:22:44 np0005597378 systemd[1]: session-45.scope: Deactivated successfully.
Jan 27 08:22:44 np0005597378 systemd[1]: session-45.scope: Consumed 53.388s CPU time.
Jan 27 08:22:44 np0005597378 podman[146136]: 2026-01-27 13:22:44.688424783 +0000 UTC m=+0.118111831 container start df4e47464fcee09c4b3c8907d1e44841b678625e0485baa2b6fb392bbf61b7b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_mcclintock, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:22:44 np0005597378 systemd-logind[786]: Session 45 logged out. Waiting for processes to exit.
Jan 27 08:22:44 np0005597378 podman[146136]: 2026-01-27 13:22:44.594660513 +0000 UTC m=+0.024347591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:22:44 np0005597378 systemd-logind[786]: Removed session 45.
Jan 27 08:22:44 np0005597378 podman[146136]: 2026-01-27 13:22:44.692275666 +0000 UTC m=+0.121962744 container attach df4e47464fcee09c4b3c8907d1e44841b678625e0485baa2b6fb392bbf61b7b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_mcclintock, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:22:44 np0005597378 crazy_mcclintock[146152]: 167 167
Jan 27 08:22:44 np0005597378 systemd[1]: libpod-df4e47464fcee09c4b3c8907d1e44841b678625e0485baa2b6fb392bbf61b7b2.scope: Deactivated successfully.
Jan 27 08:22:44 np0005597378 podman[146136]: 2026-01-27 13:22:44.69391039 +0000 UTC m=+0.123597448 container died df4e47464fcee09c4b3c8907d1e44841b678625e0485baa2b6fb392bbf61b7b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_mcclintock, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:22:44 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4e2a1ee2707b5647c31d6ac85160e8823f10f90639bf91ffa09a21f41169e931-merged.mount: Deactivated successfully.
Jan 27 08:22:44 np0005597378 podman[146136]: 2026-01-27 13:22:44.738659073 +0000 UTC m=+0.168346131 container remove df4e47464fcee09c4b3c8907d1e44841b678625e0485baa2b6fb392bbf61b7b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:22:44 np0005597378 systemd[1]: libpod-conmon-df4e47464fcee09c4b3c8907d1e44841b678625e0485baa2b6fb392bbf61b7b2.scope: Deactivated successfully.
Jan 27 08:22:44 np0005597378 podman[146174]: 2026-01-27 13:22:44.892410455 +0000 UTC m=+0.037711937 container create 12cff8337cdf3eea2f6751c8506062e5d3be067840e9d9b66eeb9d2ad53d5e2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:22:44 np0005597378 systemd[1]: Started libpod-conmon-12cff8337cdf3eea2f6751c8506062e5d3be067840e9d9b66eeb9d2ad53d5e2f.scope.
Jan 27 08:22:44 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:22:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81925459d42b0e9ced6b42b55f1123677577bb6f0c47b9f14be823aa84a6ffe0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81925459d42b0e9ced6b42b55f1123677577bb6f0c47b9f14be823aa84a6ffe0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81925459d42b0e9ced6b42b55f1123677577bb6f0c47b9f14be823aa84a6ffe0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81925459d42b0e9ced6b42b55f1123677577bb6f0c47b9f14be823aa84a6ffe0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:44 np0005597378 podman[146174]: 2026-01-27 13:22:44.876732167 +0000 UTC m=+0.022033679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:22:44 np0005597378 podman[146174]: 2026-01-27 13:22:44.978634215 +0000 UTC m=+0.123935727 container init 12cff8337cdf3eea2f6751c8506062e5d3be067840e9d9b66eeb9d2ad53d5e2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:22:44 np0005597378 podman[146174]: 2026-01-27 13:22:44.984503412 +0000 UTC m=+0.129804944 container start 12cff8337cdf3eea2f6751c8506062e5d3be067840e9d9b66eeb9d2ad53d5e2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:22:44 np0005597378 podman[146174]: 2026-01-27 13:22:44.988296023 +0000 UTC m=+0.133597545 container attach 12cff8337cdf3eea2f6751c8506062e5d3be067840e9d9b66eeb9d2ad53d5e2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_hamilton, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:22:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]: {
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:    "0": [
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:        {
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "devices": [
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "/dev/loop3"
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            ],
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_name": "ceph_lv0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_size": "21470642176",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "name": "ceph_lv0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "tags": {
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.cluster_name": "ceph",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.crush_device_class": "",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.encrypted": "0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.objectstore": "bluestore",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.osd_id": "0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.type": "block",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.vdo": "0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.with_tpm": "0"
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            },
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "type": "block",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "vg_name": "ceph_vg0"
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:        }
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:    ],
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:    "1": [
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:        {
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "devices": [
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "/dev/loop4"
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            ],
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_name": "ceph_lv1",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_size": "21470642176",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "name": "ceph_lv1",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "tags": {
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.cluster_name": "ceph",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.crush_device_class": "",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.encrypted": "0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.objectstore": "bluestore",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.osd_id": "1",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.type": "block",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.vdo": "0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.with_tpm": "0"
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            },
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "type": "block",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "vg_name": "ceph_vg1"
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:        }
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:    ],
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:    "2": [
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:        {
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "devices": [
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "/dev/loop5"
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            ],
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_name": "ceph_lv2",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_size": "21470642176",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "name": "ceph_lv2",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "tags": {
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.cluster_name": "ceph",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.crush_device_class": "",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.encrypted": "0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.objectstore": "bluestore",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.osd_id": "2",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.type": "block",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.vdo": "0",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:                "ceph.with_tpm": "0"
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            },
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "type": "block",
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:            "vg_name": "ceph_vg2"
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:        }
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]:    ]
Jan 27 08:22:45 np0005597378 trusting_hamilton[146191]: }
Jan 27 08:22:45 np0005597378 systemd[1]: libpod-12cff8337cdf3eea2f6751c8506062e5d3be067840e9d9b66eeb9d2ad53d5e2f.scope: Deactivated successfully.
Jan 27 08:22:45 np0005597378 podman[146174]: 2026-01-27 13:22:45.293895545 +0000 UTC m=+0.439197037 container died 12cff8337cdf3eea2f6751c8506062e5d3be067840e9d9b66eeb9d2ad53d5e2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:22:45 np0005597378 systemd[1]: var-lib-containers-storage-overlay-81925459d42b0e9ced6b42b55f1123677577bb6f0c47b9f14be823aa84a6ffe0-merged.mount: Deactivated successfully.
Jan 27 08:22:45 np0005597378 podman[146174]: 2026-01-27 13:22:45.338508995 +0000 UTC m=+0.483810487 container remove 12cff8337cdf3eea2f6751c8506062e5d3be067840e9d9b66eeb9d2ad53d5e2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 08:22:45 np0005597378 systemd[1]: libpod-conmon-12cff8337cdf3eea2f6751c8506062e5d3be067840e9d9b66eeb9d2ad53d5e2f.scope: Deactivated successfully.
Jan 27 08:22:45 np0005597378 podman[146274]: 2026-01-27 13:22:45.738849855 +0000 UTC m=+0.044736705 container create 243bf944800cff7f9e0cf0fa3b881560f1f2f5795fbf2b9ec01944becc3ad008 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:22:45 np0005597378 systemd[1]: Started libpod-conmon-243bf944800cff7f9e0cf0fa3b881560f1f2f5795fbf2b9ec01944becc3ad008.scope.
Jan 27 08:22:45 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:22:45 np0005597378 podman[146274]: 2026-01-27 13:22:45.809851048 +0000 UTC m=+0.115737978 container init 243bf944800cff7f9e0cf0fa3b881560f1f2f5795fbf2b9ec01944becc3ad008 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 08:22:45 np0005597378 podman[146274]: 2026-01-27 13:22:45.720588678 +0000 UTC m=+0.026475548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:22:45 np0005597378 podman[146274]: 2026-01-27 13:22:45.817443221 +0000 UTC m=+0.123330071 container start 243bf944800cff7f9e0cf0fa3b881560f1f2f5795fbf2b9ec01944becc3ad008 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 08:22:45 np0005597378 podman[146274]: 2026-01-27 13:22:45.82039085 +0000 UTC m=+0.126277730 container attach 243bf944800cff7f9e0cf0fa3b881560f1f2f5795fbf2b9ec01944becc3ad008 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:22:45 np0005597378 magical_zhukovsky[146291]: 167 167
Jan 27 08:22:45 np0005597378 systemd[1]: libpod-243bf944800cff7f9e0cf0fa3b881560f1f2f5795fbf2b9ec01944becc3ad008.scope: Deactivated successfully.
Jan 27 08:22:45 np0005597378 podman[146274]: 2026-01-27 13:22:45.821940021 +0000 UTC m=+0.127826881 container died 243bf944800cff7f9e0cf0fa3b881560f1f2f5795fbf2b9ec01944becc3ad008 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_zhukovsky, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:22:45 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7078f0626d2e641e469350fac772e5302d0a7ed9c6882773d53cc6d93b39c6d9-merged.mount: Deactivated successfully.
Jan 27 08:22:45 np0005597378 podman[146274]: 2026-01-27 13:22:45.856291398 +0000 UTC m=+0.162178248 container remove 243bf944800cff7f9e0cf0fa3b881560f1f2f5795fbf2b9ec01944becc3ad008 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_zhukovsky, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 08:22:45 np0005597378 systemd[1]: libpod-conmon-243bf944800cff7f9e0cf0fa3b881560f1f2f5795fbf2b9ec01944becc3ad008.scope: Deactivated successfully.
Jan 27 08:22:46 np0005597378 podman[146316]: 2026-01-27 13:22:46.011794186 +0000 UTC m=+0.040053850 container create 1db37623fbcc40e299f3de625c96e1bc45b4249e0bd7be7b1ef30f10280cb6c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:22:46 np0005597378 systemd[1]: Started libpod-conmon-1db37623fbcc40e299f3de625c96e1bc45b4249e0bd7be7b1ef30f10280cb6c6.scope.
Jan 27 08:22:46 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:22:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82bc571c29543b0bb9764e9297f3f5e26eee592e411cbf65b05a01a8bf84a477/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82bc571c29543b0bb9764e9297f3f5e26eee592e411cbf65b05a01a8bf84a477/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82bc571c29543b0bb9764e9297f3f5e26eee592e411cbf65b05a01a8bf84a477/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82bc571c29543b0bb9764e9297f3f5e26eee592e411cbf65b05a01a8bf84a477/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:22:46 np0005597378 podman[146316]: 2026-01-27 13:22:46.08317778 +0000 UTC m=+0.111437454 container init 1db37623fbcc40e299f3de625c96e1bc45b4249e0bd7be7b1ef30f10280cb6c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:22:46 np0005597378 podman[146316]: 2026-01-27 13:22:46.088994305 +0000 UTC m=+0.117253959 container start 1db37623fbcc40e299f3de625c96e1bc45b4249e0bd7be7b1ef30f10280cb6c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:22:46 np0005597378 podman[146316]: 2026-01-27 13:22:45.995068149 +0000 UTC m=+0.023327833 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:22:46 np0005597378 podman[146316]: 2026-01-27 13:22:46.092501728 +0000 UTC m=+0.120761472 container attach 1db37623fbcc40e299f3de625c96e1bc45b4249e0bd7be7b1ef30f10280cb6c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:22:46 np0005597378 lvm[146410]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:22:46 np0005597378 lvm[146410]: VG ceph_vg0 finished
Jan 27 08:22:46 np0005597378 lvm[146411]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:22:46 np0005597378 lvm[146411]: VG ceph_vg1 finished
Jan 27 08:22:46 np0005597378 lvm[146413]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:22:46 np0005597378 lvm[146413]: VG ceph_vg2 finished
Jan 27 08:22:46 np0005597378 lvm[146414]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:22:46 np0005597378 lvm[146414]: VG ceph_vg2 finished
Jan 27 08:22:46 np0005597378 sharp_edison[146332]: {}
Jan 27 08:22:46 np0005597378 systemd[1]: libpod-1db37623fbcc40e299f3de625c96e1bc45b4249e0bd7be7b1ef30f10280cb6c6.scope: Deactivated successfully.
Jan 27 08:22:46 np0005597378 systemd[1]: libpod-1db37623fbcc40e299f3de625c96e1bc45b4249e0bd7be7b1ef30f10280cb6c6.scope: Consumed 1.233s CPU time.
Jan 27 08:22:46 np0005597378 podman[146316]: 2026-01-27 13:22:46.878271789 +0000 UTC m=+0.906531453 container died 1db37623fbcc40e299f3de625c96e1bc45b4249e0bd7be7b1ef30f10280cb6c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:22:46 np0005597378 systemd[1]: var-lib-containers-storage-overlay-82bc571c29543b0bb9764e9297f3f5e26eee592e411cbf65b05a01a8bf84a477-merged.mount: Deactivated successfully.
Jan 27 08:22:46 np0005597378 podman[146316]: 2026-01-27 13:22:46.962953598 +0000 UTC m=+0.991213292 container remove 1db37623fbcc40e299f3de625c96e1bc45b4249e0bd7be7b1ef30f10280cb6c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 08:22:46 np0005597378 systemd[1]: libpod-conmon-1db37623fbcc40e299f3de625c96e1bc45b4249e0bd7be7b1ef30f10280cb6c6.scope: Deactivated successfully.
Jan 27 08:22:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:22:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:22:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:22:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:22:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:22:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:22:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:22:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:22:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:22:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:22:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:22:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:22:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:22:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:50 np0005597378 systemd[1]: Stopping User Manager for UID 0...
Jan 27 08:22:50 np0005597378 systemd[144843]: Activating special unit Exit the Session...
Jan 27 08:22:50 np0005597378 systemd[144843]: Stopped target Main User Target.
Jan 27 08:22:50 np0005597378 systemd[144843]: Stopped target Basic System.
Jan 27 08:22:50 np0005597378 systemd[144843]: Stopped target Paths.
Jan 27 08:22:50 np0005597378 systemd[144843]: Stopped target Sockets.
Jan 27 08:22:50 np0005597378 systemd[144843]: Stopped target Timers.
Jan 27 08:22:50 np0005597378 systemd[144843]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 27 08:22:50 np0005597378 systemd[144843]: Closed D-Bus User Message Bus Socket.
Jan 27 08:22:50 np0005597378 systemd[144843]: Stopped Create User's Volatile Files and Directories.
Jan 27 08:22:50 np0005597378 systemd[144843]: Removed slice User Application Slice.
Jan 27 08:22:50 np0005597378 systemd[144843]: Reached target Shutdown.
Jan 27 08:22:50 np0005597378 systemd[144843]: Finished Exit the Session.
Jan 27 08:22:50 np0005597378 systemd[144843]: Reached target Exit the Session.
Jan 27 08:22:50 np0005597378 systemd[1]: user@0.service: Deactivated successfully.
Jan 27 08:22:50 np0005597378 systemd[1]: Stopped User Manager for UID 0.
Jan 27 08:22:50 np0005597378 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 27 08:22:50 np0005597378 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 27 08:22:50 np0005597378 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 27 08:22:50 np0005597378 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 27 08:22:50 np0005597378 systemd[1]: Removed slice User Slice of UID 0.
Jan 27 08:22:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:52 np0005597378 systemd-logind[786]: New session 47 of user zuul.
Jan 27 08:22:52 np0005597378 systemd[1]: Started Session 47 of User zuul.
Jan 27 08:22:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:22:54 np0005597378 python3.9[146617]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:22:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:55 np0005597378 python3.9[146773]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:22:55 np0005597378 python3.9[146925]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:22:56 np0005597378 python3.9[147077]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:22:57 np0005597378 python3.9[147229]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:22:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:22:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:22:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 2079 writes, 9196 keys, 2079 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2079 writes, 2079 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2079 writes, 9196 keys, 2079 commit groups, 1.0 writes per commit group, ingest: 12.22 MB, 0.02 MB/s#012Interval WAL: 2079 writes, 2079 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     64.0      0.14              0.02         3    0.046       0      0       0.0       0.0#012  L6      1/0    6.75 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     91.3     79.9      0.18              0.03         2    0.088    7157    734       0.0       0.0#012 Sum      1/0    6.75 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     51.5     73.0      0.31              0.05         5    0.063    7157    734       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     53.2     75.2      0.30              0.05         4    0.076    7157    734       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     91.3     79.9      0.18              0.03         2    0.088    7157    734       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     68.6      0.13              0.02         2    0.063       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.009, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.3 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 308.00 MB usage: 641.20 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(37,554.42 KB,0.175788%) FilterBlock(6,27.61 KB,0.00875399%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 27 08:22:57 np0005597378 python3.9[147381]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:22:58 np0005597378 python3.9[147532]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:22:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:22:59 np0005597378 python3.9[147684]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 27 08:22:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:00 np0005597378 python3.9[147834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:01 np0005597378 python3.9[147955]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520180.010514-81-250681505134705/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:01 np0005597378 python3.9[148105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:02 np0005597378 python3.9[148226]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520181.4242702-96-69048069524424/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:03 np0005597378 python3.9[148378]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:23:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:23:04 np0005597378 python3.9[148462]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:23:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:06 np0005597378 python3.9[148615]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:23:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:07 np0005597378 python3.9[148768]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:07 np0005597378 python3.9[148889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520186.80545-133-159595824212811/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:08 np0005597378 python3.9[149039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:23:08 np0005597378 python3.9[149160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520187.8682585-133-208655206117070/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:23:09Z|00025|memory|INFO|16128 kB peak resident set size after 29.8 seconds
Jan 27 08:23:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:23:09Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 27 08:23:09 np0005597378 podman[149237]: 2026-01-27 13:23:09.750773155 +0000 UTC m=+0.089607122 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 08:23:09 np0005597378 python3.9[149336]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:10 np0005597378 python3.9[149457]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520189.54461-177-47050623215079/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:11 np0005597378 python3.9[149607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.284179) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520191284212, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 741, "num_deletes": 251, "total_data_size": 966167, "memory_usage": 979272, "flush_reason": "Manual Compaction"}
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520191298779, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 957717, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8960, "largest_seqno": 9700, "table_properties": {"data_size": 953872, "index_size": 1625, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8179, "raw_average_key_size": 18, "raw_value_size": 946238, "raw_average_value_size": 2145, "num_data_blocks": 76, "num_entries": 441, "num_filter_entries": 441, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769520128, "oldest_key_time": 1769520128, "file_creation_time": 1769520191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 14657 microseconds, and 3201 cpu microseconds.
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.298832) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 957717 bytes OK
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.298852) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.320918) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.320970) EVENT_LOG_v1 {"time_micros": 1769520191320960, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.320994) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 962385, prev total WAL file size 962385, number of live WAL files 2.
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.321569) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(935KB)], [23(6909KB)]
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520191321602, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8033537, "oldest_snapshot_seqno": -1}
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3311 keys, 6189304 bytes, temperature: kUnknown
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520191445702, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6189304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6165360, "index_size": 14555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80283, "raw_average_key_size": 24, "raw_value_size": 6103676, "raw_average_value_size": 1843, "num_data_blocks": 634, "num_entries": 3311, "num_filter_entries": 3311, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769520191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.445892) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6189304 bytes
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.454341) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 64.7 rd, 49.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.7 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(14.9) write-amplify(6.5) OK, records in: 3825, records dropped: 514 output_compression: NoCompression
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.454368) EVENT_LOG_v1 {"time_micros": 1769520191454356, "job": 8, "event": "compaction_finished", "compaction_time_micros": 124155, "compaction_time_cpu_micros": 14155, "output_level": 6, "num_output_files": 1, "total_output_size": 6189304, "num_input_records": 3825, "num_output_records": 3311, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520191454616, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520191455808, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.321529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.455857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.455861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.455863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.455893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:23:11 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:23:11.455895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:23:11 np0005597378 python3.9[149728]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520190.5996413-177-50551667222864/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:12 np0005597378 python3.9[149878]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:23:12 np0005597378 python3.9[150032]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:13 np0005597378 python3.9[150184]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:23:13 np0005597378 python3.9[150262]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:14 np0005597378 python3.9[150414]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:14 np0005597378 python3.9[150492]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:15 np0005597378 python3.9[150644]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:16 np0005597378 python3.9[150796]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:16 np0005597378 python3.9[150874]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:23:16
Jan 27 08:23:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:23:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:23:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.control', 'volumes', 'vms']
Jan 27 08:23:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:17 np0005597378 python3.9[151026]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:17 np0005597378 python3.9[151104]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:23:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:23:18 np0005597378 python3.9[151256]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:23:18 np0005597378 systemd[1]: Reloading.
Jan 27 08:23:18 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:23:18 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:23:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:23:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:19 np0005597378 python3.9[151446]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:19 np0005597378 python3.9[151524]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:20 np0005597378 python3.9[151676]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:20 np0005597378 python3.9[151754]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:21 np0005597378 python3.9[151906]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:23:21 np0005597378 systemd[1]: Reloading.
Jan 27 08:23:21 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:23:21 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:23:21 np0005597378 systemd[1]: Starting Create netns directory...
Jan 27 08:23:21 np0005597378 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 08:23:21 np0005597378 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 08:23:21 np0005597378 systemd[1]: Finished Create netns directory.
Jan 27 08:23:22 np0005597378 python3.9[152099]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:23 np0005597378 python3.9[152251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:23:23 np0005597378 python3.9[152374]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520202.5644128-328-238089599544272/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:24 np0005597378 python3.9[152526]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:25 np0005597378 python3.9[152678]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:23:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:25 np0005597378 python3.9[152832]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:26 np0005597378 python3.9[152955]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520205.1806939-361-164912687093279/.source.json _original_basename=.lic6on9q follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:26 np0005597378 python3.9[153105]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:23:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:23:28 np0005597378 python3.9[153528]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 27 08:23:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:29 np0005597378 python3.9[153680]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 08:23:30 np0005597378 python3[153832]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 08:23:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:23:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:23:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:39 np0005597378 podman[153844]: 2026-01-27 13:23:39.591137098 +0000 UTC m=+8.922185349 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:23:39 np0005597378 podman[153963]: 2026-01-27 13:23:39.701700447 +0000 UTC m=+0.022455520 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:23:40 np0005597378 podman[153963]: 2026-01-27 13:23:40.068703246 +0000 UTC m=+0.389458299 container create a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:23:40 np0005597378 python3[153832]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:23:40 np0005597378 podman[153997]: 2026-01-27 13:23:40.298493886 +0000 UTC m=+0.083110028 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 08:23:40 np0005597378 python3.9[154173]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:23:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:41 np0005597378 python3.9[154327]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:41 np0005597378 python3.9[154403]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:23:42 np0005597378 python3.9[154554]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769520221.8913884-439-20638056203232/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:43 np0005597378 python3.9[154630]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:23:43 np0005597378 systemd[1]: Reloading.
Jan 27 08:23:43 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:23:43 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:23:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:23:43 np0005597378 python3.9[154741]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:23:43 np0005597378 systemd[1]: Reloading.
Jan 27 08:23:43 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:23:43 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:23:44 np0005597378 systemd[1]: Starting ovn_metadata_agent container...
Jan 27 08:23:44 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:23:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76364115f4350d876673e81d9c32dda7ba3a131cb0bdbfa64f05fc8914396d7c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76364115f4350d876673e81d9c32dda7ba3a131cb0bdbfa64f05fc8914396d7c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:44 np0005597378 systemd[1]: Started /usr/bin/podman healthcheck run a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e.
Jan 27 08:23:44 np0005597378 podman[154782]: 2026-01-27 13:23:44.302931835 +0000 UTC m=+0.116841962 container init a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: + sudo -E kolla_set_configs
Jan 27 08:23:44 np0005597378 podman[154782]: 2026-01-27 13:23:44.336048486 +0000 UTC m=+0.149958603 container start a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 08:23:44 np0005597378 edpm-start-podman-container[154782]: ovn_metadata_agent
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Validating config file
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Copying service configuration files
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Writing out command to execute
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: ++ cat /run_command
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: + CMD=neutron-ovn-metadata-agent
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: + ARGS=
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: + sudo kolla_copy_cacerts
Jan 27 08:23:44 np0005597378 edpm-start-podman-container[154781]: Creating additional drop-in dependency for "ovn_metadata_agent" (a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e)
Jan 27 08:23:44 np0005597378 podman[154804]: 2026-01-27 13:23:44.402811272 +0000 UTC m=+0.055844978 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: + [[ ! -n '' ]]
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: + . kolla_extend_start
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: Running command: 'neutron-ovn-metadata-agent'
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: + umask 0022
Jan 27 08:23:44 np0005597378 ovn_metadata_agent[154797]: + exec neutron-ovn-metadata-agent
Jan 27 08:23:44 np0005597378 systemd[1]: Reloading.
Jan 27 08:23:44 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:23:44 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:23:44 np0005597378 systemd[1]: Started ovn_metadata_agent container.
Jan 27 08:23:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:45 np0005597378 python3.9[155034]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.221 154802 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.221 154802 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.221 154802 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.222 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.222 154802 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.222 154802 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.223 154802 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.223 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.223 154802 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.223 154802 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.224 154802 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.225 154802 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.225 154802 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.225 154802 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.225 154802 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.225 154802 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.225 154802 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.225 154802 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.226 154802 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.226 154802 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.226 154802 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.226 154802 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.226 154802 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.226 154802 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.226 154802 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.227 154802 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.227 154802 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.227 154802 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.227 154802 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.227 154802 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.227 154802 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.227 154802 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.228 154802 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.228 154802 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.228 154802 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.228 154802 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.228 154802 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.228 154802 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.228 154802 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.229 154802 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.229 154802 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.229 154802 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.229 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.229 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.229 154802 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.229 154802 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.229 154802 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.230 154802 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.230 154802 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.230 154802 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.230 154802 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.230 154802 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.230 154802 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.230 154802 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.230 154802 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.231 154802 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.231 154802 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.231 154802 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.231 154802 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.231 154802 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.231 154802 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.231 154802 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.232 154802 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.232 154802 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.232 154802 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.232 154802 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.232 154802 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.232 154802 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.232 154802 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.233 154802 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.233 154802 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.233 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.233 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.233 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.233 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.234 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.234 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.234 154802 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.234 154802 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.234 154802 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.234 154802 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.234 154802 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.234 154802 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.235 154802 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.235 154802 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.235 154802 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.235 154802 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.235 154802 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.235 154802 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.236 154802 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.236 154802 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.236 154802 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.236 154802 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.236 154802 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.236 154802 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.236 154802 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.237 154802 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.237 154802 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.237 154802 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.237 154802 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.237 154802 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.237 154802 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.237 154802 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.238 154802 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.238 154802 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.238 154802 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.238 154802 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.238 154802 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.238 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.238 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.239 154802 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.239 154802 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.239 154802 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.239 154802 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.239 154802 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.239 154802 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.240 154802 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.240 154802 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.240 154802 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.240 154802 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.240 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.240 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.241 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.241 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.241 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.241 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.241 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.242 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.242 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.242 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.242 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.242 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.242 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.243 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.243 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.243 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.243 154802 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.243 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.243 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.243 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.244 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.244 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.244 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.244 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.244 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.244 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.245 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.245 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.245 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.245 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.245 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.245 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.245 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.246 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.246 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.246 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.246 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.246 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.246 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.246 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.247 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.247 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.247 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.247 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.247 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.247 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.247 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.248 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.248 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.248 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.248 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.248 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.248 154802 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.248 154802 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.249 154802 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.249 154802 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.249 154802 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.249 154802 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.249 154802 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.249 154802 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.249 154802 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.249 154802 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.250 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.250 154802 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.250 154802 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.250 154802 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.250 154802 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.250 154802 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.250 154802 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.251 154802 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.251 154802 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.251 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.251 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.251 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.251 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.252 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.252 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.252 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.252 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.252 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.252 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.252 154802 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.253 154802 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.253 154802 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.253 154802 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.253 154802 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.253 154802 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.253 154802 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.253 154802 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.254 154802 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.254 154802 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.254 154802 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.254 154802 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.254 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.254 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.254 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.255 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.255 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.255 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.255 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.255 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.255 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.255 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.256 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.256 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.256 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.256 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.256 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.256 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.256 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.257 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.257 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.257 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.257 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.257 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.257 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.258 154802 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.258 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.258 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.258 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.258 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.258 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.258 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.259 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.259 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.259 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.259 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.259 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.259 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.259 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.260 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.260 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.260 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.260 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.260 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.260 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.261 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.261 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.261 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.261 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.261 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.261 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.261 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.262 154802 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.262 154802 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.262 154802 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.262 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.262 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.262 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.263 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.263 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.263 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.263 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.263 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.263 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.263 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.264 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.264 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.264 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.264 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.264 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.264 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.265 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.265 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.265 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.265 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.265 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.265 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.266 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.266 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.266 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.266 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.266 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.266 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.266 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.267 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.267 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.267 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.267 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.267 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.267 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.267 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.268 154802 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.268 154802 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.279 154802 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.279 154802 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.279 154802 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.280 154802 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.280 154802 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.293 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 65761215-e4d7-402d-90c8-18b025613da8 (UUID: 65761215-e4d7-402d-90c8-18b025613da8) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 27 08:23:46 np0005597378 python3.9[155186]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.322 154802 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.322 154802 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.322 154802 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.323 154802 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.325 154802 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.331 154802 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.335 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '65761215-e4d7-402d-90c8-18b025613da8'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], external_ids={}, name=65761215-e4d7-402d-90c8-18b025613da8, nb_cfg_timestamp=1769520167981, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.336 154802 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f5b443883d0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.337 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.337 154802 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.337 154802 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.337 154802 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.342 154802 DEBUG oslo_service.service [-] Started child 155189 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.345 154802 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp49qmhd1q/privsep.sock']#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.345 155189 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-362517'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.367 155189 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.368 155189 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.368 155189 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.371 155189 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.377 155189 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.384 155189 INFO eventlet.wsgi.server [-] (155189) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 27 08:23:46 np0005597378 python3.9[155316]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520225.8682034-484-214127203485389/.source.yaml _original_basename=.h4zkbl_0 follow=False checksum=869f40ea502abd7d02aefca7ee5607de92109d29 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:23:46 np0005597378 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.987 154802 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.988 154802 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp49qmhd1q/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.874 155324 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.878 155324 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.882 155324 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.882 155324 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155324#033[00m
Jan 27 08:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:46.992 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d27d209d-dbd2-4753-9454-2c21216e8516]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:23:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:47 np0005597378 systemd[1]: session-47.scope: Deactivated successfully.
Jan 27 08:23:47 np0005597378 systemd[1]: session-47.scope: Consumed 51.175s CPU time.
Jan 27 08:23:47 np0005597378 systemd-logind[786]: Session 47 logged out. Waiting for processes to exit.
Jan 27 08:23:47 np0005597378 systemd-logind[786]: Removed session 47.
Jan 27 08:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:47.503 155324 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:47.503 155324 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:47.504 155324 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:23:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:23:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:23:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:23:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:23:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:23:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:23:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:23:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:23:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:23:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.059 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b5aab003-e4c2-41b7-b5c1-791ece7decca]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.062 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, column=external_ids, values=({'neutron:ovn-metadata-id': 'c3266b26-a49c-5bf8-9032-120742b42ede'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.072 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.078 154802 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.078 154802 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.078 154802 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.079 154802 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.079 154802 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.079 154802 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.079 154802 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.079 154802 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.079 154802 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.079 154802 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.079 154802 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.079 154802 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.080 154802 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.080 154802 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.080 154802 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.080 154802 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.080 154802 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.081 154802 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.081 154802 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.081 154802 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.081 154802 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.081 154802 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.081 154802 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.082 154802 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.082 154802 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.082 154802 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.082 154802 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.082 154802 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.082 154802 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.083 154802 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.083 154802 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.083 154802 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.083 154802 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.083 154802 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.083 154802 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.083 154802 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.083 154802 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.084 154802 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.084 154802 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.084 154802 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.084 154802 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.084 154802 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.084 154802 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.084 154802 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.084 154802 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.084 154802 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.085 154802 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.085 154802 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.085 154802 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.085 154802 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.085 154802 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.085 154802 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.085 154802 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.085 154802 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.085 154802 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.085 154802 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.086 154802 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.086 154802 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.086 154802 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.086 154802 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.086 154802 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.086 154802 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.087 154802 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.087 154802 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.087 154802 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.087 154802 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.087 154802 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.087 154802 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.087 154802 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.087 154802 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.087 154802 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.088 154802 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.088 154802 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.088 154802 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.088 154802 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.088 154802 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.088 154802 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.088 154802 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.089 154802 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.089 154802 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.089 154802 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.089 154802 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.089 154802 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.089 154802 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.089 154802 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.089 154802 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.089 154802 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.090 154802 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.090 154802 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.090 154802 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.090 154802 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.090 154802 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.090 154802 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.090 154802 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.090 154802 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.090 154802 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.090 154802 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.091 154802 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.091 154802 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.091 154802 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.091 154802 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.091 154802 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.091 154802 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.091 154802 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.091 154802 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.091 154802 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.091 154802 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.092 154802 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.092 154802 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.092 154802 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.092 154802 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.092 154802 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.092 154802 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.093 154802 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.093 154802 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.093 154802 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.093 154802 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.093 154802 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.093 154802 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.093 154802 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.093 154802 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.094 154802 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.094 154802 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.094 154802 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.094 154802 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.094 154802 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.094 154802 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.094 154802 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.094 154802 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.094 154802 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.095 154802 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.095 154802 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.095 154802 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.095 154802 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.095 154802 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.095 154802 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.095 154802 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.095 154802 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.096 154802 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.096 154802 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.096 154802 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.096 154802 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.096 154802 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.096 154802 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.096 154802 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.096 154802 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.096 154802 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.096 154802 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.097 154802 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.097 154802 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.097 154802 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.097 154802 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.097 154802 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.097 154802 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.097 154802 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.097 154802 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.097 154802 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.098 154802 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.098 154802 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.098 154802 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.098 154802 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.098 154802 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.098 154802 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.098 154802 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.098 154802 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.098 154802 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.099 154802 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.099 154802 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.099 154802 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.099 154802 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.099 154802 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.099 154802 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.099 154802 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.099 154802 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.100 154802 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.100 154802 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.100 154802 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.100 154802 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.100 154802 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.100 154802 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.100 154802 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.100 154802 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.100 154802 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.101 154802 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.101 154802 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.101 154802 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.101 154802 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.101 154802 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.101 154802 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.101 154802 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.101 154802 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.102 154802 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.102 154802 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.102 154802 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.102 154802 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.102 154802 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.102 154802 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.102 154802 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.102 154802 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.102 154802 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.103 154802 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.103 154802 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.103 154802 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.103 154802 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.103 154802 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.103 154802 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.103 154802 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.103 154802 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.103 154802 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.103 154802 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.104 154802 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.104 154802 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.104 154802 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.104 154802 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.104 154802 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.104 154802 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.104 154802 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.104 154802 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.104 154802 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.105 154802 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.105 154802 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.105 154802 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.105 154802 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.105 154802 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.105 154802 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.105 154802 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.105 154802 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.105 154802 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.106 154802 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.106 154802 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.106 154802 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.106 154802 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.106 154802 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.106 154802 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.106 154802 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.106 154802 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.106 154802 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.107 154802 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.107 154802 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.107 154802 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.107 154802 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.107 154802 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.107 154802 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.107 154802 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.107 154802 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.108 154802 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.108 154802 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.108 154802 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.108 154802 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.108 154802 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.108 154802 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.108 154802 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.108 154802 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.108 154802 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.108 154802 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.109 154802 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.109 154802 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.109 154802 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.109 154802 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.109 154802 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.109 154802 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.109 154802 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.109 154802 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.109 154802 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.110 154802 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.110 154802 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.110 154802 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.110 154802 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.110 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.110 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.110 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.110 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.111 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.111 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.111 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.111 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.111 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.111 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.111 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.111 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.111 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.112 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.112 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.112 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.112 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.112 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.112 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.112 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.112 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.112 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.112 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.113 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.113 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.113 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.113 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.113 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.113 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.113 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.113 154802 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.113 154802 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.114 154802 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.114 154802 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.114 154802 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:23:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:23:48.114 154802 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:23:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:23:48 np0005597378 podman[155560]: 2026-01-27 13:23:48.523695173 +0000 UTC m=+0.037254319 container create e7f687f8367d4e85cfc218566fe0b7d68f942b2e0f4dca8fdb289b166d3f2146 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_dirac, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:23:48 np0005597378 systemd[1]: Started libpod-conmon-e7f687f8367d4e85cfc218566fe0b7d68f942b2e0f4dca8fdb289b166d3f2146.scope.
Jan 27 08:23:48 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:23:48 np0005597378 podman[155560]: 2026-01-27 13:23:48.506773162 +0000 UTC m=+0.020332328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:23:48 np0005597378 podman[155560]: 2026-01-27 13:23:48.604019925 +0000 UTC m=+0.117579111 container init e7f687f8367d4e85cfc218566fe0b7d68f942b2e0f4dca8fdb289b166d3f2146 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_dirac, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 08:23:48 np0005597378 podman[155560]: 2026-01-27 13:23:48.612498576 +0000 UTC m=+0.126057722 container start e7f687f8367d4e85cfc218566fe0b7d68f942b2e0f4dca8fdb289b166d3f2146 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 08:23:48 np0005597378 podman[155560]: 2026-01-27 13:23:48.616124779 +0000 UTC m=+0.129683915 container attach e7f687f8367d4e85cfc218566fe0b7d68f942b2e0f4dca8fdb289b166d3f2146 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_dirac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 08:23:48 np0005597378 quirky_dirac[155576]: 167 167
Jan 27 08:23:48 np0005597378 systemd[1]: libpod-e7f687f8367d4e85cfc218566fe0b7d68f942b2e0f4dca8fdb289b166d3f2146.scope: Deactivated successfully.
Jan 27 08:23:48 np0005597378 podman[155560]: 2026-01-27 13:23:48.618699322 +0000 UTC m=+0.132258478 container died e7f687f8367d4e85cfc218566fe0b7d68f942b2e0f4dca8fdb289b166d3f2146 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_dirac, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 08:23:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4b7de5c07796544dafe9f4c73547cea4b5eb505f549ef6317696c31d4611805a-merged.mount: Deactivated successfully.
Jan 27 08:23:48 np0005597378 podman[155560]: 2026-01-27 13:23:48.658926475 +0000 UTC m=+0.172485621 container remove e7f687f8367d4e85cfc218566fe0b7d68f942b2e0f4dca8fdb289b166d3f2146 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:23:48 np0005597378 systemd[1]: libpod-conmon-e7f687f8367d4e85cfc218566fe0b7d68f942b2e0f4dca8fdb289b166d3f2146.scope: Deactivated successfully.
Jan 27 08:23:48 np0005597378 podman[155601]: 2026-01-27 13:23:48.812121147 +0000 UTC m=+0.040360667 container create 437362701d11c662bc65232eece7a0d7e6bb1c7b5bcf8363d36b562dbe7ce58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_goldwasser, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:23:48 np0005597378 systemd[1]: Started libpod-conmon-437362701d11c662bc65232eece7a0d7e6bb1c7b5bcf8363d36b562dbe7ce58c.scope.
Jan 27 08:23:48 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:23:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128a9a0a6a685cdaa01d2392e2d6eed50ba01ddf59fe1284ca6d96876c7583d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:48 np0005597378 podman[155601]: 2026-01-27 13:23:48.79461439 +0000 UTC m=+0.022854000 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:23:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128a9a0a6a685cdaa01d2392e2d6eed50ba01ddf59fe1284ca6d96876c7583d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128a9a0a6a685cdaa01d2392e2d6eed50ba01ddf59fe1284ca6d96876c7583d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128a9a0a6a685cdaa01d2392e2d6eed50ba01ddf59fe1284ca6d96876c7583d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128a9a0a6a685cdaa01d2392e2d6eed50ba01ddf59fe1284ca6d96876c7583d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:48 np0005597378 podman[155601]: 2026-01-27 13:23:48.900834218 +0000 UTC m=+0.129073758 container init 437362701d11c662bc65232eece7a0d7e6bb1c7b5bcf8363d36b562dbe7ce58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:23:48 np0005597378 podman[155601]: 2026-01-27 13:23:48.907651052 +0000 UTC m=+0.135890572 container start 437362701d11c662bc65232eece7a0d7e6bb1c7b5bcf8363d36b562dbe7ce58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_goldwasser, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:23:48 np0005597378 podman[155601]: 2026-01-27 13:23:48.911650535 +0000 UTC m=+0.139890085 container attach 437362701d11c662bc65232eece7a0d7e6bb1c7b5bcf8363d36b562dbe7ce58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_goldwasser, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:23:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:49 np0005597378 recursing_goldwasser[155618]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:23:49 np0005597378 recursing_goldwasser[155618]: --> All data devices are unavailable
Jan 27 08:23:49 np0005597378 systemd[1]: libpod-437362701d11c662bc65232eece7a0d7e6bb1c7b5bcf8363d36b562dbe7ce58c.scope: Deactivated successfully.
Jan 27 08:23:49 np0005597378 podman[155601]: 2026-01-27 13:23:49.349914507 +0000 UTC m=+0.578154037 container died 437362701d11c662bc65232eece7a0d7e6bb1c7b5bcf8363d36b562dbe7ce58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_goldwasser, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:23:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-128a9a0a6a685cdaa01d2392e2d6eed50ba01ddf59fe1284ca6d96876c7583d2-merged.mount: Deactivated successfully.
Jan 27 08:23:49 np0005597378 podman[155601]: 2026-01-27 13:23:49.395502932 +0000 UTC m=+0.623742452 container remove 437362701d11c662bc65232eece7a0d7e6bb1c7b5bcf8363d36b562dbe7ce58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_goldwasser, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:23:49 np0005597378 systemd[1]: libpod-conmon-437362701d11c662bc65232eece7a0d7e6bb1c7b5bcf8363d36b562dbe7ce58c.scope: Deactivated successfully.
Jan 27 08:23:49 np0005597378 podman[155712]: 2026-01-27 13:23:49.784947977 +0000 UTC m=+0.035241862 container create 40909e12c690a7f20c1abdb90c6c1e5717ae0b878d9f7aa4da198d20d39a4447 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:23:49 np0005597378 systemd[1]: Started libpod-conmon-40909e12c690a7f20c1abdb90c6c1e5717ae0b878d9f7aa4da198d20d39a4447.scope.
Jan 27 08:23:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:23:49 np0005597378 podman[155712]: 2026-01-27 13:23:49.8353804 +0000 UTC m=+0.085674315 container init 40909e12c690a7f20c1abdb90c6c1e5717ae0b878d9f7aa4da198d20d39a4447 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 08:23:49 np0005597378 podman[155712]: 2026-01-27 13:23:49.841776022 +0000 UTC m=+0.092069907 container start 40909e12c690a7f20c1abdb90c6c1e5717ae0b878d9f7aa4da198d20d39a4447 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ritchie, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:23:49 np0005597378 podman[155712]: 2026-01-27 13:23:49.844793447 +0000 UTC m=+0.095087332 container attach 40909e12c690a7f20c1abdb90c6c1e5717ae0b878d9f7aa4da198d20d39a4447 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ritchie, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 08:23:49 np0005597378 epic_ritchie[155730]: 167 167
Jan 27 08:23:49 np0005597378 systemd[1]: libpod-40909e12c690a7f20c1abdb90c6c1e5717ae0b878d9f7aa4da198d20d39a4447.scope: Deactivated successfully.
Jan 27 08:23:49 np0005597378 podman[155712]: 2026-01-27 13:23:49.845991292 +0000 UTC m=+0.096285177 container died 40909e12c690a7f20c1abdb90c6c1e5717ae0b878d9f7aa4da198d20d39a4447 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:23:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-193813438d42cb73846cf8c50e7a3878994bd9f9d1c6e291f29cea18c7f8dee8-merged.mount: Deactivated successfully.
Jan 27 08:23:49 np0005597378 podman[155712]: 2026-01-27 13:23:49.769660822 +0000 UTC m=+0.019954727 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:23:49 np0005597378 podman[155712]: 2026-01-27 13:23:49.87937938 +0000 UTC m=+0.129673265 container remove 40909e12c690a7f20c1abdb90c6c1e5717ae0b878d9f7aa4da198d20d39a4447 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ritchie, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:23:49 np0005597378 systemd[1]: libpod-conmon-40909e12c690a7f20c1abdb90c6c1e5717ae0b878d9f7aa4da198d20d39a4447.scope: Deactivated successfully.
Jan 27 08:23:50 np0005597378 podman[155753]: 2026-01-27 13:23:50.027449207 +0000 UTC m=+0.038419823 container create 40ad829fe2cfc38a5f74b5b5792c04f10f58524f02ee93cc6382c1cb3cdae67e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gauss, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Jan 27 08:23:50 np0005597378 systemd[1]: Started libpod-conmon-40ad829fe2cfc38a5f74b5b5792c04f10f58524f02ee93cc6382c1cb3cdae67e.scope.
Jan 27 08:23:50 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:23:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a57206dda06c5b148ecdbc44361376f6ed76773b8c6eb19a927d7bc2d33882/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a57206dda06c5b148ecdbc44361376f6ed76773b8c6eb19a927d7bc2d33882/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a57206dda06c5b148ecdbc44361376f6ed76773b8c6eb19a927d7bc2d33882/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a57206dda06c5b148ecdbc44361376f6ed76773b8c6eb19a927d7bc2d33882/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:50 np0005597378 podman[155753]: 2026-01-27 13:23:50.01242141 +0000 UTC m=+0.023392046 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:23:50 np0005597378 podman[155753]: 2026-01-27 13:23:50.112699229 +0000 UTC m=+0.123669855 container init 40ad829fe2cfc38a5f74b5b5792c04f10f58524f02ee93cc6382c1cb3cdae67e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 08:23:50 np0005597378 podman[155753]: 2026-01-27 13:23:50.120954763 +0000 UTC m=+0.131925379 container start 40ad829fe2cfc38a5f74b5b5792c04f10f58524f02ee93cc6382c1cb3cdae67e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:23:50 np0005597378 podman[155753]: 2026-01-27 13:23:50.12402071 +0000 UTC m=+0.134991326 container attach 40ad829fe2cfc38a5f74b5b5792c04f10f58524f02ee93cc6382c1cb3cdae67e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gauss, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:23:50 np0005597378 serene_gauss[155770]: {
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:    "0": [
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:        {
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "devices": [
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "/dev/loop3"
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            ],
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_name": "ceph_lv0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_size": "21470642176",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "name": "ceph_lv0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "tags": {
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.cluster_name": "ceph",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.crush_device_class": "",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.encrypted": "0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.objectstore": "bluestore",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.osd_id": "0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.type": "block",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.vdo": "0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.with_tpm": "0"
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            },
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "type": "block",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "vg_name": "ceph_vg0"
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:        }
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:    ],
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:    "1": [
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:        {
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "devices": [
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "/dev/loop4"
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            ],
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_name": "ceph_lv1",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_size": "21470642176",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "name": "ceph_lv1",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "tags": {
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.cluster_name": "ceph",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.crush_device_class": "",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.encrypted": "0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.objectstore": "bluestore",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.osd_id": "1",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.type": "block",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.vdo": "0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.with_tpm": "0"
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            },
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "type": "block",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "vg_name": "ceph_vg1"
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:        }
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:    ],
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:    "2": [
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:        {
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "devices": [
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "/dev/loop5"
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            ],
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_name": "ceph_lv2",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_size": "21470642176",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "name": "ceph_lv2",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "tags": {
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.cluster_name": "ceph",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.crush_device_class": "",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.encrypted": "0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.objectstore": "bluestore",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.osd_id": "2",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.type": "block",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.vdo": "0",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:                "ceph.with_tpm": "0"
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            },
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "type": "block",
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:            "vg_name": "ceph_vg2"
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:        }
Jan 27 08:23:50 np0005597378 serene_gauss[155770]:    ]
Jan 27 08:23:50 np0005597378 serene_gauss[155770]: }
Jan 27 08:23:50 np0005597378 systemd[1]: libpod-40ad829fe2cfc38a5f74b5b5792c04f10f58524f02ee93cc6382c1cb3cdae67e.scope: Deactivated successfully.
Jan 27 08:23:50 np0005597378 conmon[155770]: conmon 40ad829fe2cfc38a5f74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40ad829fe2cfc38a5f74b5b5792c04f10f58524f02ee93cc6382c1cb3cdae67e.scope/container/memory.events
Jan 27 08:23:50 np0005597378 podman[155753]: 2026-01-27 13:23:50.47489745 +0000 UTC m=+0.485868066 container died 40ad829fe2cfc38a5f74b5b5792c04f10f58524f02ee93cc6382c1cb3cdae67e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:23:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e6a57206dda06c5b148ecdbc44361376f6ed76773b8c6eb19a927d7bc2d33882-merged.mount: Deactivated successfully.
Jan 27 08:23:50 np0005597378 podman[155753]: 2026-01-27 13:23:50.519021753 +0000 UTC m=+0.529992369 container remove 40ad829fe2cfc38a5f74b5b5792c04f10f58524f02ee93cc6382c1cb3cdae67e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gauss, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 08:23:50 np0005597378 systemd[1]: libpod-conmon-40ad829fe2cfc38a5f74b5b5792c04f10f58524f02ee93cc6382c1cb3cdae67e.scope: Deactivated successfully.
Jan 27 08:23:50 np0005597378 podman[155849]: 2026-01-27 13:23:50.953111537 +0000 UTC m=+0.036423726 container create 32842fc210647dff0a522f112f77c86a9ebe5eed31d9c8e3427b475df45d42d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:23:50 np0005597378 systemd[1]: Started libpod-conmon-32842fc210647dff0a522f112f77c86a9ebe5eed31d9c8e3427b475df45d42d5.scope.
Jan 27 08:23:51 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:23:51 np0005597378 podman[155849]: 2026-01-27 13:23:51.028357924 +0000 UTC m=+0.111670133 container init 32842fc210647dff0a522f112f77c86a9ebe5eed31d9c8e3427b475df45d42d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_visvesvaraya, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:23:51 np0005597378 podman[155849]: 2026-01-27 13:23:50.935167167 +0000 UTC m=+0.018479386 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:23:51 np0005597378 podman[155849]: 2026-01-27 13:23:51.036209997 +0000 UTC m=+0.119522186 container start 32842fc210647dff0a522f112f77c86a9ebe5eed31d9c8e3427b475df45d42d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_visvesvaraya, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:23:51 np0005597378 podman[155849]: 2026-01-27 13:23:51.038923355 +0000 UTC m=+0.122235534 container attach 32842fc210647dff0a522f112f77c86a9ebe5eed31d9c8e3427b475df45d42d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:23:51 np0005597378 flamboyant_visvesvaraya[155865]: 167 167
Jan 27 08:23:51 np0005597378 systemd[1]: libpod-32842fc210647dff0a522f112f77c86a9ebe5eed31d9c8e3427b475df45d42d5.scope: Deactivated successfully.
Jan 27 08:23:51 np0005597378 podman[155849]: 2026-01-27 13:23:51.041683043 +0000 UTC m=+0.124995232 container died 32842fc210647dff0a522f112f77c86a9ebe5eed31d9c8e3427b475df45d42d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:23:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay-700c1197fba24860e951da0ff8c5f58be3b8dfbabb68cf1fbbaade4e6d60d782-merged.mount: Deactivated successfully.
Jan 27 08:23:51 np0005597378 podman[155849]: 2026-01-27 13:23:51.100394441 +0000 UTC m=+0.183706630 container remove 32842fc210647dff0a522f112f77c86a9ebe5eed31d9c8e3427b475df45d42d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:23:51 np0005597378 systemd[1]: libpod-conmon-32842fc210647dff0a522f112f77c86a9ebe5eed31d9c8e3427b475df45d42d5.scope: Deactivated successfully.
Jan 27 08:23:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:51 np0005597378 podman[155888]: 2026-01-27 13:23:51.253702806 +0000 UTC m=+0.043547248 container create 0deb20c0f3823b0df401298cd797e25f10ac672f1c364bd514754cfcc351c392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:23:51 np0005597378 systemd[1]: Started libpod-conmon-0deb20c0f3823b0df401298cd797e25f10ac672f1c364bd514754cfcc351c392.scope.
Jan 27 08:23:51 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:23:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01dd7a42bce6080034e6b5a8168faecef7f5df0c19fa85537738db315fdff94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01dd7a42bce6080034e6b5a8168faecef7f5df0c19fa85537738db315fdff94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01dd7a42bce6080034e6b5a8168faecef7f5df0c19fa85537738db315fdff94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01dd7a42bce6080034e6b5a8168faecef7f5df0c19fa85537738db315fdff94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:23:51 np0005597378 podman[155888]: 2026-01-27 13:23:51.328362928 +0000 UTC m=+0.118207400 container init 0deb20c0f3823b0df401298cd797e25f10ac672f1c364bd514754cfcc351c392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 08:23:51 np0005597378 podman[155888]: 2026-01-27 13:23:51.233538854 +0000 UTC m=+0.023383316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:23:51 np0005597378 podman[155888]: 2026-01-27 13:23:51.334304357 +0000 UTC m=+0.124148799 container start 0deb20c0f3823b0df401298cd797e25f10ac672f1c364bd514754cfcc351c392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kirch, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:23:51 np0005597378 podman[155888]: 2026-01-27 13:23:51.337877378 +0000 UTC m=+0.127721820 container attach 0deb20c0f3823b0df401298cd797e25f10ac672f1c364bd514754cfcc351c392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 08:23:51 np0005597378 lvm[155983]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:23:51 np0005597378 lvm[155982]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:23:51 np0005597378 lvm[155983]: VG ceph_vg1 finished
Jan 27 08:23:51 np0005597378 lvm[155982]: VG ceph_vg0 finished
Jan 27 08:23:51 np0005597378 lvm[155985]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:23:51 np0005597378 lvm[155985]: VG ceph_vg2 finished
Jan 27 08:23:52 np0005597378 sweet_kirch[155904]: {}
Jan 27 08:23:52 np0005597378 systemd[1]: libpod-0deb20c0f3823b0df401298cd797e25f10ac672f1c364bd514754cfcc351c392.scope: Deactivated successfully.
Jan 27 08:23:52 np0005597378 systemd[1]: libpod-0deb20c0f3823b0df401298cd797e25f10ac672f1c364bd514754cfcc351c392.scope: Consumed 1.242s CPU time.
Jan 27 08:23:52 np0005597378 podman[155888]: 2026-01-27 13:23:52.099898929 +0000 UTC m=+0.889743381 container died 0deb20c0f3823b0df401298cd797e25f10ac672f1c364bd514754cfcc351c392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kirch, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 08:23:52 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e01dd7a42bce6080034e6b5a8168faecef7f5df0c19fa85537738db315fdff94-merged.mount: Deactivated successfully.
Jan 27 08:23:52 np0005597378 podman[155888]: 2026-01-27 13:23:52.1393734 +0000 UTC m=+0.929217842 container remove 0deb20c0f3823b0df401298cd797e25f10ac672f1c364bd514754cfcc351c392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:23:52 np0005597378 systemd[1]: libpod-conmon-0deb20c0f3823b0df401298cd797e25f10ac672f1c364bd514754cfcc351c392.scope: Deactivated successfully.
Jan 27 08:23:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:23:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:23:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:23:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:23:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:23:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:23:53 np0005597378 systemd-logind[786]: New session 48 of user zuul.
Jan 27 08:23:53 np0005597378 systemd[1]: Started Session 48 of User zuul.
Jan 27 08:23:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:23:54 np0005597378 python3.9[156180]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:23:55 np0005597378 python3.9[156336]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:23:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:56 np0005597378 python3.9[156501]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:23:56 np0005597378 systemd[1]: Reloading.
Jan 27 08:23:56 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:23:56 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:23:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:23:57 np0005597378 python3.9[156686]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:23:57 np0005597378 network[156703]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:23:57 np0005597378 network[156704]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:23:57 np0005597378 network[156705]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:23:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:23:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:00 np0005597378 python3.9[156967]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:24:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:01 np0005597378 python3.9[157120]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:24:02 np0005597378 python3.9[157273]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:24:02 np0005597378 python3.9[157426]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:24:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:24:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5596 writes, 24K keys, 5596 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5596 writes, 879 syncs, 6.37 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5596 writes, 24K keys, 5596 commit groups, 1.0 writes per commit group, ingest: 18.65 MB, 0.03 MB/s#012Interval WAL: 5596 writes, 879 syncs, 6.37 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 27 08:24:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:24:04 np0005597378 python3.9[157581]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:24:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:05 np0005597378 python3.9[157734]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:24:05 np0005597378 python3.9[157887]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:24:06 np0005597378 python3.9[158040]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:07 np0005597378 python3.9[158192]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:08 np0005597378 python3.9[158344]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:24:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.3 total, 600.0 interval#012Cumulative writes: 6940 writes, 28K keys, 6940 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6940 writes, 1303 syncs, 5.33 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6940 writes, 28K keys, 6940 commit groups, 1.0 writes per commit group, ingest: 19.68 MB, 0.03 MB/s#012Interval WAL: 6940 writes, 1303 syncs, 5.33 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 27 08:24:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:24:08 np0005597378 python3.9[158496]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:09 np0005597378 python3.9[158648]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:09 np0005597378 python3.9[158800]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:10 np0005597378 python3.9[158952]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:10 np0005597378 podman[159000]: 2026-01-27 13:24:10.849549334 +0000 UTC m=+0.193632143 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:24:11 np0005597378 python3.9[159130]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:11 np0005597378 python3.9[159283]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:12 np0005597378 python3.9[159435]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:12 np0005597378 python3.9[159587]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:13 np0005597378 python3.9[159739]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:24:14 np0005597378 python3.9[159891]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:24:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 601.0 total, 600.0 interval#012Cumulative writes: 5511 writes, 23K keys, 5511 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5511 writes, 805 syncs, 6.85 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5511 writes, 23K keys, 5511 commit groups, 1.0 writes per commit group, ingest: 18.38 MB, 0.03 MB/s#012Interval WAL: 5511 writes, 805 syncs, 6.85 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 27 08:24:14 np0005597378 python3.9[160043]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:24:14 np0005597378 podman[160068]: 2026-01-27 13:24:14.704037505 +0000 UTC m=+0.046419170 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 08:24:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:15 np0005597378 python3.9[160215]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:24:15 np0005597378 python3.9[160367]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 08:24:16 np0005597378 python3.9[160519]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:24:16 np0005597378 systemd[1]: Reloading.
Jan 27 08:24:16 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:24:16 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:24:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:24:16
Jan 27 08:24:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:24:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:24:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'vms']
Jan 27 08:24:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:17 np0005597378 python3.9[160706]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:24:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:24:18 np0005597378 python3.9[160859]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:24:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:24:18 np0005597378 python3.9[161012]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:24:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:19 np0005597378 python3.9[161165]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:24:20 np0005597378 python3.9[161318]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:24:20 np0005597378 python3.9[161471]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:24:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:21 np0005597378 python3.9[161624]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:24:22 np0005597378 python3.9[161777]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 27 08:24:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:23 np0005597378 python3.9[161930]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 08:24:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:24:24 np0005597378 python3.9[162088]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 08:24:24 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:24:24 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:24:25 np0005597378 python3.9[162249]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:24:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:25 np0005597378 python3.9[162333]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:24:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:24:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:24:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:24:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:41 np0005597378 podman[162519]: 2026-01-27 13:24:41.758107933 +0000 UTC m=+0.093208989 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 08:24:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:24:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:45 np0005597378 podman[162552]: 2026-01-27 13:24:45.714209362 +0000 UTC m=+0.052948115 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:24:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:24:46.269 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:24:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:24:46.270 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:24:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:24:46.270 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:24:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:24:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:24:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:24:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:24:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:24:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:24:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:24:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:24:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:24:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:53 np0005597378 podman[162713]: 2026-01-27 13:24:53.32415947 +0000 UTC m=+0.039776690 container create a5738c346bede94cbf372963c8a7e6731358344d3545057d50b701646dce2c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 08:24:53 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 08:24:53 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:24:53 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:24:53 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:24:53 np0005597378 systemd[1]: Started libpod-conmon-a5738c346bede94cbf372963c8a7e6731358344d3545057d50b701646dce2c9c.scope.
Jan 27 08:24:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:24:53 np0005597378 podman[162713]: 2026-01-27 13:24:53.304906724 +0000 UTC m=+0.020523994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:24:53 np0005597378 podman[162713]: 2026-01-27 13:24:53.406798528 +0000 UTC m=+0.122415778 container init a5738c346bede94cbf372963c8a7e6731358344d3545057d50b701646dce2c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 08:24:53 np0005597378 podman[162713]: 2026-01-27 13:24:53.413824664 +0000 UTC m=+0.129441884 container start a5738c346bede94cbf372963c8a7e6731358344d3545057d50b701646dce2c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:24:53 np0005597378 podman[162713]: 2026-01-27 13:24:53.41708346 +0000 UTC m=+0.132700710 container attach a5738c346bede94cbf372963c8a7e6731358344d3545057d50b701646dce2c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:24:53 np0005597378 vibrant_elgamal[162730]: 167 167
Jan 27 08:24:53 np0005597378 systemd[1]: libpod-a5738c346bede94cbf372963c8a7e6731358344d3545057d50b701646dce2c9c.scope: Deactivated successfully.
Jan 27 08:24:53 np0005597378 podman[162713]: 2026-01-27 13:24:53.420538281 +0000 UTC m=+0.136155501 container died a5738c346bede94cbf372963c8a7e6731358344d3545057d50b701646dce2c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 08:24:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5f0368f8211b9745766d755f5d2c908720d18e4819e6c88e816a64f18b21b330-merged.mount: Deactivated successfully.
Jan 27 08:24:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:24:53 np0005597378 podman[162713]: 2026-01-27 13:24:53.466981566 +0000 UTC m=+0.182598786 container remove a5738c346bede94cbf372963c8a7e6731358344d3545057d50b701646dce2c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 27 08:24:53 np0005597378 systemd[1]: libpod-conmon-a5738c346bede94cbf372963c8a7e6731358344d3545057d50b701646dce2c9c.scope: Deactivated successfully.
Jan 27 08:24:53 np0005597378 podman[162753]: 2026-01-27 13:24:53.623536255 +0000 UTC m=+0.048967560 container create d466ffb787b7916e8b4134e8f6b882944465f60162f6a944c20c6c1c2a132ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_rosalind, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:24:53 np0005597378 systemd[1]: Started libpod-conmon-d466ffb787b7916e8b4134e8f6b882944465f60162f6a944c20c6c1c2a132ced.scope.
Jan 27 08:24:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:24:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16310d5081219e2bc39c0e1558d30d69642198dd1b9bcb28b91f5aa1743343af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:24:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16310d5081219e2bc39c0e1558d30d69642198dd1b9bcb28b91f5aa1743343af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:24:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16310d5081219e2bc39c0e1558d30d69642198dd1b9bcb28b91f5aa1743343af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:24:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16310d5081219e2bc39c0e1558d30d69642198dd1b9bcb28b91f5aa1743343af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:24:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16310d5081219e2bc39c0e1558d30d69642198dd1b9bcb28b91f5aa1743343af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:24:53 np0005597378 podman[162753]: 2026-01-27 13:24:53.604262749 +0000 UTC m=+0.029694084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:24:53 np0005597378 podman[162753]: 2026-01-27 13:24:53.711895282 +0000 UTC m=+0.137326617 container init d466ffb787b7916e8b4134e8f6b882944465f60162f6a944c20c6c1c2a132ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:24:53 np0005597378 podman[162753]: 2026-01-27 13:24:53.719649729 +0000 UTC m=+0.145081034 container start d466ffb787b7916e8b4134e8f6b882944465f60162f6a944c20c6c1c2a132ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:24:53 np0005597378 podman[162753]: 2026-01-27 13:24:53.73978502 +0000 UTC m=+0.165216345 container attach d466ffb787b7916e8b4134e8f6b882944465f60162f6a944c20c6c1c2a132ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_rosalind, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 08:24:54 np0005597378 jolly_rosalind[162770]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:24:54 np0005597378 jolly_rosalind[162770]: --> All data devices are unavailable
Jan 27 08:24:54 np0005597378 systemd[1]: libpod-d466ffb787b7916e8b4134e8f6b882944465f60162f6a944c20c6c1c2a132ced.scope: Deactivated successfully.
Jan 27 08:24:54 np0005597378 podman[162791]: 2026-01-27 13:24:54.242353746 +0000 UTC m=+0.028094567 container died d466ffb787b7916e8b4134e8f6b882944465f60162f6a944c20c6c1c2a132ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:24:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-16310d5081219e2bc39c0e1558d30d69642198dd1b9bcb28b91f5aa1743343af-merged.mount: Deactivated successfully.
Jan 27 08:24:54 np0005597378 podman[162791]: 2026-01-27 13:24:54.278387614 +0000 UTC m=+0.064128405 container remove d466ffb787b7916e8b4134e8f6b882944465f60162f6a944c20c6c1c2a132ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_rosalind, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 08:24:54 np0005597378 systemd[1]: libpod-conmon-d466ffb787b7916e8b4134e8f6b882944465f60162f6a944c20c6c1c2a132ced.scope: Deactivated successfully.
Jan 27 08:24:54 np0005597378 podman[162871]: 2026-01-27 13:24:54.793349974 +0000 UTC m=+0.097529677 container create f3db550e451e40c5f33ea89c1e85d1f69ad81388b90d53f7f6031d682e73cc20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:24:54 np0005597378 podman[162871]: 2026-01-27 13:24:54.726261983 +0000 UTC m=+0.030441716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:24:54 np0005597378 systemd[1]: Started libpod-conmon-f3db550e451e40c5f33ea89c1e85d1f69ad81388b90d53f7f6031d682e73cc20.scope.
Jan 27 08:24:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:24:54 np0005597378 podman[162871]: 2026-01-27 13:24:54.909403254 +0000 UTC m=+0.213582977 container init f3db550e451e40c5f33ea89c1e85d1f69ad81388b90d53f7f6031d682e73cc20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bohr, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:24:54 np0005597378 podman[162871]: 2026-01-27 13:24:54.915802721 +0000 UTC m=+0.219982424 container start f3db550e451e40c5f33ea89c1e85d1f69ad81388b90d53f7f6031d682e73cc20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:24:54 np0005597378 dazzling_bohr[162888]: 167 167
Jan 27 08:24:54 np0005597378 systemd[1]: libpod-f3db550e451e40c5f33ea89c1e85d1f69ad81388b90d53f7f6031d682e73cc20.scope: Deactivated successfully.
Jan 27 08:24:54 np0005597378 podman[162871]: 2026-01-27 13:24:54.924893349 +0000 UTC m=+0.229073072 container attach f3db550e451e40c5f33ea89c1e85d1f69ad81388b90d53f7f6031d682e73cc20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 08:24:54 np0005597378 podman[162871]: 2026-01-27 13:24:54.925389503 +0000 UTC m=+0.229569206 container died f3db550e451e40c5f33ea89c1e85d1f69ad81388b90d53f7f6031d682e73cc20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:24:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d2b99cffb9452d9a3d10e5b0b68b010fd18ebfdcb04e8e9ad9121575aee91ff7-merged.mount: Deactivated successfully.
Jan 27 08:24:54 np0005597378 podman[162871]: 2026-01-27 13:24:54.968759687 +0000 UTC m=+0.272939390 container remove f3db550e451e40c5f33ea89c1e85d1f69ad81388b90d53f7f6031d682e73cc20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bohr, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:24:54 np0005597378 systemd[1]: libpod-conmon-f3db550e451e40c5f33ea89c1e85d1f69ad81388b90d53f7f6031d682e73cc20.scope: Deactivated successfully.
Jan 27 08:24:55 np0005597378 kernel: SELinux:  Converting 2774 SID table entries...
Jan 27 08:24:55 np0005597378 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 08:24:55 np0005597378 kernel: SELinux:  policy capability open_perms=1
Jan 27 08:24:55 np0005597378 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 08:24:55 np0005597378 kernel: SELinux:  policy capability always_check_network=0
Jan 27 08:24:55 np0005597378 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 08:24:55 np0005597378 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 08:24:55 np0005597378 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 08:24:55 np0005597378 podman[162915]: 2026-01-27 13:24:55.134405374 +0000 UTC m=+0.044086436 container create b2d2731d3ac8876be2c4f5a99751393888d65a05d84290a9cb17c4349ea9be65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:24:55 np0005597378 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 27 08:24:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:55 np0005597378 systemd[1]: Started libpod-conmon-b2d2731d3ac8876be2c4f5a99751393888d65a05d84290a9cb17c4349ea9be65.scope.
Jan 27 08:24:55 np0005597378 podman[162915]: 2026-01-27 13:24:55.117773656 +0000 UTC m=+0.027454738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:24:55 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:24:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1b7b31cc2d1ae5bc02cd0fff47efa82bd0904d0c006c623e46b9c564dbbae3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:24:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1b7b31cc2d1ae5bc02cd0fff47efa82bd0904d0c006c623e46b9c564dbbae3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:24:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1b7b31cc2d1ae5bc02cd0fff47efa82bd0904d0c006c623e46b9c564dbbae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:24:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1b7b31cc2d1ae5bc02cd0fff47efa82bd0904d0c006c623e46b9c564dbbae3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:24:55 np0005597378 podman[162915]: 2026-01-27 13:24:55.247436815 +0000 UTC m=+0.157117907 container init b2d2731d3ac8876be2c4f5a99751393888d65a05d84290a9cb17c4349ea9be65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lederberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:24:55 np0005597378 podman[162915]: 2026-01-27 13:24:55.254978847 +0000 UTC m=+0.164659909 container start b2d2731d3ac8876be2c4f5a99751393888d65a05d84290a9cb17c4349ea9be65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lederberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:24:55 np0005597378 podman[162915]: 2026-01-27 13:24:55.258876811 +0000 UTC m=+0.168557873 container attach b2d2731d3ac8876be2c4f5a99751393888d65a05d84290a9cb17c4349ea9be65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lederberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]: {
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:    "0": [
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:        {
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "devices": [
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "/dev/loop3"
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            ],
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_name": "ceph_lv0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_size": "21470642176",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "name": "ceph_lv0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "tags": {
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.cluster_name": "ceph",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.crush_device_class": "",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.encrypted": "0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.objectstore": "bluestore",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.osd_id": "0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.type": "block",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.vdo": "0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.with_tpm": "0"
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            },
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "type": "block",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "vg_name": "ceph_vg0"
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:        }
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:    ],
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:    "1": [
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:        {
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "devices": [
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "/dev/loop4"
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            ],
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_name": "ceph_lv1",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_size": "21470642176",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "name": "ceph_lv1",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "tags": {
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.cluster_name": "ceph",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.crush_device_class": "",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.encrypted": "0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.objectstore": "bluestore",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.osd_id": "1",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.type": "block",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.vdo": "0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.with_tpm": "0"
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            },
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "type": "block",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "vg_name": "ceph_vg1"
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:        }
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:    ],
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:    "2": [
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:        {
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "devices": [
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "/dev/loop5"
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            ],
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_name": "ceph_lv2",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_size": "21470642176",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "name": "ceph_lv2",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "tags": {
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.cluster_name": "ceph",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.crush_device_class": "",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.encrypted": "0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.objectstore": "bluestore",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.osd_id": "2",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.type": "block",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.vdo": "0",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:                "ceph.with_tpm": "0"
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            },
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "type": "block",
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:            "vg_name": "ceph_vg2"
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:        }
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]:    ]
Jan 27 08:24:55 np0005597378 vigorous_lederberg[162934]: }
Jan 27 08:24:55 np0005597378 systemd[1]: libpod-b2d2731d3ac8876be2c4f5a99751393888d65a05d84290a9cb17c4349ea9be65.scope: Deactivated successfully.
Jan 27 08:24:55 np0005597378 podman[162915]: 2026-01-27 13:24:55.548012276 +0000 UTC m=+0.457693348 container died b2d2731d3ac8876be2c4f5a99751393888d65a05d84290a9cb17c4349ea9be65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lederberg, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:24:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cc1b7b31cc2d1ae5bc02cd0fff47efa82bd0904d0c006c623e46b9c564dbbae3-merged.mount: Deactivated successfully.
Jan 27 08:24:55 np0005597378 podman[162915]: 2026-01-27 13:24:55.624812973 +0000 UTC m=+0.534494035 container remove b2d2731d3ac8876be2c4f5a99751393888d65a05d84290a9cb17c4349ea9be65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lederberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 08:24:55 np0005597378 systemd[1]: libpod-conmon-b2d2731d3ac8876be2c4f5a99751393888d65a05d84290a9cb17c4349ea9be65.scope: Deactivated successfully.
Jan 27 08:24:56 np0005597378 podman[163017]: 2026-01-27 13:24:56.087320401 +0000 UTC m=+0.056826711 container create 2ec91e51ab13fbc3348d557813ac22f781cf92573e6e456766df64eb88e46827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:24:56 np0005597378 systemd[1]: Started libpod-conmon-2ec91e51ab13fbc3348d557813ac22f781cf92573e6e456766df64eb88e46827.scope.
Jan 27 08:24:56 np0005597378 podman[163017]: 2026-01-27 13:24:56.049986464 +0000 UTC m=+0.019492794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:24:56 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:24:56 np0005597378 podman[163017]: 2026-01-27 13:24:56.165727204 +0000 UTC m=+0.135233534 container init 2ec91e51ab13fbc3348d557813ac22f781cf92573e6e456766df64eb88e46827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:24:56 np0005597378 podman[163017]: 2026-01-27 13:24:56.172083622 +0000 UTC m=+0.141589962 container start 2ec91e51ab13fbc3348d557813ac22f781cf92573e6e456766df64eb88e46827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:24:56 np0005597378 pedantic_newton[163033]: 167 167
Jan 27 08:24:56 np0005597378 systemd[1]: libpod-2ec91e51ab13fbc3348d557813ac22f781cf92573e6e456766df64eb88e46827.scope: Deactivated successfully.
Jan 27 08:24:56 np0005597378 podman[163017]: 2026-01-27 13:24:56.177976825 +0000 UTC m=+0.147483135 container attach 2ec91e51ab13fbc3348d557813ac22f781cf92573e6e456766df64eb88e46827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_newton, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:24:56 np0005597378 podman[163017]: 2026-01-27 13:24:56.178408477 +0000 UTC m=+0.147914797 container died 2ec91e51ab13fbc3348d557813ac22f781cf92573e6e456766df64eb88e46827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_newton, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 08:24:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5721db51a8452cbe4b8d7990d050b58f6132e3da8c09f8ed2d9880d50579debb-merged.mount: Deactivated successfully.
Jan 27 08:24:56 np0005597378 podman[163017]: 2026-01-27 13:24:56.216386843 +0000 UTC m=+0.185893153 container remove 2ec91e51ab13fbc3348d557813ac22f781cf92573e6e456766df64eb88e46827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_newton, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 08:24:56 np0005597378 systemd[1]: libpod-conmon-2ec91e51ab13fbc3348d557813ac22f781cf92573e6e456766df64eb88e46827.scope: Deactivated successfully.
Jan 27 08:24:56 np0005597378 podman[163057]: 2026-01-27 13:24:56.358807617 +0000 UTC m=+0.035168344 container create e9d19edb909ab786a57e334913ae56715dd95cbbd4192074e18001bbd868a0ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_saha, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:24:56 np0005597378 systemd[1]: Started libpod-conmon-e9d19edb909ab786a57e334913ae56715dd95cbbd4192074e18001bbd868a0ee.scope.
Jan 27 08:24:56 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:24:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18d91691e4c24e57752bdc5668512f727ad2fd80af31ccf5bb9bc79e80de67a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:24:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18d91691e4c24e57752bdc5668512f727ad2fd80af31ccf5bb9bc79e80de67a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:24:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18d91691e4c24e57752bdc5668512f727ad2fd80af31ccf5bb9bc79e80de67a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:24:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18d91691e4c24e57752bdc5668512f727ad2fd80af31ccf5bb9bc79e80de67a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:24:56 np0005597378 podman[163057]: 2026-01-27 13:24:56.34395253 +0000 UTC m=+0.020313257 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:24:56 np0005597378 podman[163057]: 2026-01-27 13:24:56.449804571 +0000 UTC m=+0.126165378 container init e9d19edb909ab786a57e334913ae56715dd95cbbd4192074e18001bbd868a0ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:24:56 np0005597378 podman[163057]: 2026-01-27 13:24:56.462649548 +0000 UTC m=+0.139010275 container start e9d19edb909ab786a57e334913ae56715dd95cbbd4192074e18001bbd868a0ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_saha, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:24:56 np0005597378 podman[163057]: 2026-01-27 13:24:56.480390159 +0000 UTC m=+0.156750916 container attach e9d19edb909ab786a57e334913ae56715dd95cbbd4192074e18001bbd868a0ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:24:57 np0005597378 lvm[163153]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:24:57 np0005597378 lvm[163154]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:24:57 np0005597378 lvm[163154]: VG ceph_vg1 finished
Jan 27 08:24:57 np0005597378 lvm[163153]: VG ceph_vg0 finished
Jan 27 08:24:57 np0005597378 lvm[163156]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:24:57 np0005597378 lvm[163156]: VG ceph_vg2 finished
Jan 27 08:24:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:24:57 np0005597378 nostalgic_saha[163073]: {}
Jan 27 08:24:57 np0005597378 systemd[1]: libpod-e9d19edb909ab786a57e334913ae56715dd95cbbd4192074e18001bbd868a0ee.scope: Deactivated successfully.
Jan 27 08:24:57 np0005597378 systemd[1]: libpod-e9d19edb909ab786a57e334913ae56715dd95cbbd4192074e18001bbd868a0ee.scope: Consumed 1.263s CPU time.
Jan 27 08:24:57 np0005597378 podman[163057]: 2026-01-27 13:24:57.222850183 +0000 UTC m=+0.899210910 container died e9d19edb909ab786a57e334913ae56715dd95cbbd4192074e18001bbd868a0ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_saha, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:24:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d18d91691e4c24e57752bdc5668512f727ad2fd80af31ccf5bb9bc79e80de67a-merged.mount: Deactivated successfully.
Jan 27 08:24:57 np0005597378 podman[163057]: 2026-01-27 13:24:57.263994171 +0000 UTC m=+0.940354898 container remove e9d19edb909ab786a57e334913ae56715dd95cbbd4192074e18001bbd868a0ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_saha, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:24:57 np0005597378 systemd[1]: libpod-conmon-e9d19edb909ab786a57e334913ae56715dd95cbbd4192074e18001bbd868a0ee.scope: Deactivated successfully.
Jan 27 08:24:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:24:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:24:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:24:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:24:58 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:24:58 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:24:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:24:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 27 08:25:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:25:04 np0005597378 kernel: SELinux:  Converting 2774 SID table entries...
Jan 27 08:25:04 np0005597378 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 08:25:04 np0005597378 kernel: SELinux:  policy capability open_perms=1
Jan 27 08:25:04 np0005597378 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 08:25:04 np0005597378 kernel: SELinux:  policy capability always_check_network=0
Jan 27 08:25:04 np0005597378 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 08:25:04 np0005597378 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 08:25:04 np0005597378 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 08:25:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 08:25:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 08:25:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:25:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 08:25:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 08:25:12 np0005597378 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 27 08:25:12 np0005597378 podman[163202]: 2026-01-27 13:25:12.773047817 +0000 UTC m=+0.107066907 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 08:25:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 08:25:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:25:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 27 08:25:16 np0005597378 podman[163228]: 2026-01-27 13:25:16.711412326 +0000 UTC m=+0.050985359 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 08:25:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:25:16
Jan 27 08:25:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:25:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:25:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'vms', '.mgr', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups']
Jan 27 08:25:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:25:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:25:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:25:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:25:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:25:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:25:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:25:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:25:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:25:43 np0005597378 podman[180103]: 2026-01-27 13:25:43.728235999 +0000 UTC m=+0.066614888 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:25:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:25:46.270 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:25:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:25:46.270 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:25:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:25:46.270 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:25:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:47 np0005597378 podman[180144]: 2026-01-27 13:25:47.736319336 +0000 UTC m=+0.078198898 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 08:25:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:25:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:25:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:25:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:25:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:25:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:25:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:25:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:25:54 np0005597378 kernel: SELinux:  Converting 2775 SID table entries...
Jan 27 08:25:54 np0005597378 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 08:25:54 np0005597378 kernel: SELinux:  policy capability open_perms=1
Jan 27 08:25:54 np0005597378 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 08:25:54 np0005597378 kernel: SELinux:  policy capability always_check_network=0
Jan 27 08:25:54 np0005597378 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 08:25:54 np0005597378 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 08:25:54 np0005597378 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 08:25:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:55 np0005597378 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 27 08:25:55 np0005597378 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 27 08:25:55 np0005597378 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 27 08:25:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:57 np0005597378 podman[180332]: 2026-01-27 13:25:57.948976924 +0000 UTC m=+0.085514091 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:25:58 np0005597378 podman[180332]: 2026-01-27 13:25:58.040785906 +0000 UTC m=+0.177323073 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:25:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:25:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:25:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:25:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:25:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:25:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:25:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:26:00 np0005597378 podman[180831]: 2026-01-27 13:26:00.019132187 +0000 UTC m=+0.044827143 container create 57fcc1cacbef3cb7570daf91d46937c292776c8c1655ccfc48901534ae2f7de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:26:00 np0005597378 systemd[1]: Started libpod-conmon-57fcc1cacbef3cb7570daf91d46937c292776c8c1655ccfc48901534ae2f7de7.scope.
Jan 27 08:26:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:26:00 np0005597378 podman[180831]: 2026-01-27 13:25:59.997855787 +0000 UTC m=+0.023550773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:26:00 np0005597378 podman[180831]: 2026-01-27 13:26:00.102706295 +0000 UTC m=+0.128401281 container init 57fcc1cacbef3cb7570daf91d46937c292776c8c1655ccfc48901534ae2f7de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:26:00 np0005597378 podman[180831]: 2026-01-27 13:26:00.110212029 +0000 UTC m=+0.135907015 container start 57fcc1cacbef3cb7570daf91d46937c292776c8c1655ccfc48901534ae2f7de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_engelbart, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:26:00 np0005597378 podman[180831]: 2026-01-27 13:26:00.113401486 +0000 UTC m=+0.139096452 container attach 57fcc1cacbef3cb7570daf91d46937c292776c8c1655ccfc48901534ae2f7de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_engelbart, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:26:00 np0005597378 laughing_engelbart[180847]: 167 167
Jan 27 08:26:00 np0005597378 systemd[1]: libpod-57fcc1cacbef3cb7570daf91d46937c292776c8c1655ccfc48901534ae2f7de7.scope: Deactivated successfully.
Jan 27 08:26:00 np0005597378 podman[180831]: 2026-01-27 13:26:00.117100617 +0000 UTC m=+0.142795593 container died 57fcc1cacbef3cb7570daf91d46937c292776c8c1655ccfc48901534ae2f7de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:26:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d33fe03305806a800d59dd572516e26843e98b341860d5b056b5a4289388f231-merged.mount: Deactivated successfully.
Jan 27 08:26:00 np0005597378 podman[180831]: 2026-01-27 13:26:00.15758319 +0000 UTC m=+0.183278156 container remove 57fcc1cacbef3cb7570daf91d46937c292776c8c1655ccfc48901534ae2f7de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_engelbart, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:26:00 np0005597378 systemd[1]: libpod-conmon-57fcc1cacbef3cb7570daf91d46937c292776c8c1655ccfc48901534ae2f7de7.scope: Deactivated successfully.
Jan 27 08:26:00 np0005597378 podman[180871]: 2026-01-27 13:26:00.312532312 +0000 UTC m=+0.043368943 container create 67bc48bd48941aaf9f6a76905683350ddbf7e38c9833a42c6cbcbdc65e86be64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:26:00 np0005597378 systemd[1]: Started libpod-conmon-67bc48bd48941aaf9f6a76905683350ddbf7e38c9833a42c6cbcbdc65e86be64.scope.
Jan 27 08:26:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:26:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07128e86ab2022a8ffcc798b4c9ab2cf326578f32c79149e9331c927d4ef70d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:26:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07128e86ab2022a8ffcc798b4c9ab2cf326578f32c79149e9331c927d4ef70d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:26:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07128e86ab2022a8ffcc798b4c9ab2cf326578f32c79149e9331c927d4ef70d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:26:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07128e86ab2022a8ffcc798b4c9ab2cf326578f32c79149e9331c927d4ef70d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:26:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07128e86ab2022a8ffcc798b4c9ab2cf326578f32c79149e9331c927d4ef70d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:26:00 np0005597378 podman[180871]: 2026-01-27 13:26:00.380011391 +0000 UTC m=+0.110848032 container init 67bc48bd48941aaf9f6a76905683350ddbf7e38c9833a42c6cbcbdc65e86be64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 08:26:00 np0005597378 podman[180871]: 2026-01-27 13:26:00.387990048 +0000 UTC m=+0.118826679 container start 67bc48bd48941aaf9f6a76905683350ddbf7e38c9833a42c6cbcbdc65e86be64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:26:00 np0005597378 podman[180871]: 2026-01-27 13:26:00.295054736 +0000 UTC m=+0.025891387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:26:00 np0005597378 podman[180871]: 2026-01-27 13:26:00.391622077 +0000 UTC m=+0.122458738 container attach 67bc48bd48941aaf9f6a76905683350ddbf7e38c9833a42c6cbcbdc65e86be64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:26:00 np0005597378 tender_brown[180888]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:26:00 np0005597378 tender_brown[180888]: --> All data devices are unavailable
Jan 27 08:26:00 np0005597378 systemd[1]: libpod-67bc48bd48941aaf9f6a76905683350ddbf7e38c9833a42c6cbcbdc65e86be64.scope: Deactivated successfully.
Jan 27 08:26:00 np0005597378 podman[180871]: 2026-01-27 13:26:00.833373625 +0000 UTC m=+0.564210256 container died 67bc48bd48941aaf9f6a76905683350ddbf7e38c9833a42c6cbcbdc65e86be64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:26:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay-07128e86ab2022a8ffcc798b4c9ab2cf326578f32c79149e9331c927d4ef70d2-merged.mount: Deactivated successfully.
Jan 27 08:26:01 np0005597378 podman[180871]: 2026-01-27 13:26:01.003287966 +0000 UTC m=+0.734124607 container remove 67bc48bd48941aaf9f6a76905683350ddbf7e38c9833a42c6cbcbdc65e86be64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:26:01 np0005597378 systemd[1]: libpod-conmon-67bc48bd48941aaf9f6a76905683350ddbf7e38c9833a42c6cbcbdc65e86be64.scope: Deactivated successfully.
Jan 27 08:26:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:01 np0005597378 podman[180983]: 2026-01-27 13:26:01.449305049 +0000 UTC m=+0.038835528 container create 1aafb06f160bb4f4b078b97ff7d86a7b2b6e92efbd473e146c7cb89e1dc12957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_benz, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:26:01 np0005597378 systemd[1]: Started libpod-conmon-1aafb06f160bb4f4b078b97ff7d86a7b2b6e92efbd473e146c7cb89e1dc12957.scope.
Jan 27 08:26:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:26:01 np0005597378 podman[180983]: 2026-01-27 13:26:01.522079042 +0000 UTC m=+0.111609521 container init 1aafb06f160bb4f4b078b97ff7d86a7b2b6e92efbd473e146c7cb89e1dc12957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:26:01 np0005597378 podman[180983]: 2026-01-27 13:26:01.434001602 +0000 UTC m=+0.023532081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:26:01 np0005597378 podman[180983]: 2026-01-27 13:26:01.529625199 +0000 UTC m=+0.119155698 container start 1aafb06f160bb4f4b078b97ff7d86a7b2b6e92efbd473e146c7cb89e1dc12957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_benz, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:26:01 np0005597378 podman[180983]: 2026-01-27 13:26:01.533499183 +0000 UTC m=+0.123029662 container attach 1aafb06f160bb4f4b078b97ff7d86a7b2b6e92efbd473e146c7cb89e1dc12957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:26:01 np0005597378 silly_benz[180999]: 167 167
Jan 27 08:26:01 np0005597378 systemd[1]: libpod-1aafb06f160bb4f4b078b97ff7d86a7b2b6e92efbd473e146c7cb89e1dc12957.scope: Deactivated successfully.
Jan 27 08:26:01 np0005597378 podman[180983]: 2026-01-27 13:26:01.53517176 +0000 UTC m=+0.124702229 container died 1aafb06f160bb4f4b078b97ff7d86a7b2b6e92efbd473e146c7cb89e1dc12957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:26:01 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ac32c56916d4e90b24f6989fbc1aee4bf2287548b8c8d9453d72b9d957077904-merged.mount: Deactivated successfully.
Jan 27 08:26:01 np0005597378 podman[180983]: 2026-01-27 13:26:01.571758686 +0000 UTC m=+0.161289155 container remove 1aafb06f160bb4f4b078b97ff7d86a7b2b6e92efbd473e146c7cb89e1dc12957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_benz, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:26:01 np0005597378 systemd[1]: libpod-conmon-1aafb06f160bb4f4b078b97ff7d86a7b2b6e92efbd473e146c7cb89e1dc12957.scope: Deactivated successfully.
Jan 27 08:26:01 np0005597378 podman[181023]: 2026-01-27 13:26:01.714789954 +0000 UTC m=+0.038896171 container create 1f6bc5fa7efd7acbbc82ab6d35a5c554674e5d989fda163ca1cfbec9de7f2cd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_aryabhata, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:26:01 np0005597378 systemd[1]: Started libpod-conmon-1f6bc5fa7efd7acbbc82ab6d35a5c554674e5d989fda163ca1cfbec9de7f2cd2.scope.
Jan 27 08:26:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:26:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecbbad4bfeb158680dba8a442cbf91fa44ce4d7dd6c5cd58f183072d7f40bba4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:26:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecbbad4bfeb158680dba8a442cbf91fa44ce4d7dd6c5cd58f183072d7f40bba4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:26:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecbbad4bfeb158680dba8a442cbf91fa44ce4d7dd6c5cd58f183072d7f40bba4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:26:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecbbad4bfeb158680dba8a442cbf91fa44ce4d7dd6c5cd58f183072d7f40bba4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:26:01 np0005597378 podman[181023]: 2026-01-27 13:26:01.787659379 +0000 UTC m=+0.111765596 container init 1f6bc5fa7efd7acbbc82ab6d35a5c554674e5d989fda163ca1cfbec9de7f2cd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:26:01 np0005597378 podman[181023]: 2026-01-27 13:26:01.795295847 +0000 UTC m=+0.119402064 container start 1f6bc5fa7efd7acbbc82ab6d35a5c554674e5d989fda163ca1cfbec9de7f2cd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 08:26:01 np0005597378 podman[181023]: 2026-01-27 13:26:01.698879261 +0000 UTC m=+0.022985478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:26:01 np0005597378 podman[181023]: 2026-01-27 13:26:01.798284169 +0000 UTC m=+0.122390386 container attach 1f6bc5fa7efd7acbbc82ab6d35a5c554674e5d989fda163ca1cfbec9de7f2cd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_aryabhata, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]: {
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:    "0": [
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:        {
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "devices": [
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "/dev/loop3"
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            ],
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_name": "ceph_lv0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_size": "21470642176",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "name": "ceph_lv0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "tags": {
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.cluster_name": "ceph",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.crush_device_class": "",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.encrypted": "0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.objectstore": "bluestore",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.osd_id": "0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.type": "block",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.vdo": "0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.with_tpm": "0"
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            },
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "type": "block",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "vg_name": "ceph_vg0"
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:        }
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:    ],
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:    "1": [
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:        {
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "devices": [
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "/dev/loop4"
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            ],
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_name": "ceph_lv1",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_size": "21470642176",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "name": "ceph_lv1",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "tags": {
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.cluster_name": "ceph",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.crush_device_class": "",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.encrypted": "0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.objectstore": "bluestore",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.osd_id": "1",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.type": "block",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.vdo": "0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.with_tpm": "0"
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            },
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "type": "block",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "vg_name": "ceph_vg1"
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:        }
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:    ],
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:    "2": [
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:        {
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "devices": [
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "/dev/loop5"
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            ],
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_name": "ceph_lv2",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_size": "21470642176",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "name": "ceph_lv2",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "tags": {
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.cluster_name": "ceph",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.crush_device_class": "",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.encrypted": "0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.objectstore": "bluestore",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.osd_id": "2",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.type": "block",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.vdo": "0",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:                "ceph.with_tpm": "0"
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            },
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "type": "block",
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:            "vg_name": "ceph_vg2"
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:        }
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]:    ]
Jan 27 08:26:02 np0005597378 crazy_aryabhata[181039]: }
Jan 27 08:26:02 np0005597378 systemd[1]: libpod-1f6bc5fa7efd7acbbc82ab6d35a5c554674e5d989fda163ca1cfbec9de7f2cd2.scope: Deactivated successfully.
Jan 27 08:26:02 np0005597378 podman[181023]: 2026-01-27 13:26:02.108223365 +0000 UTC m=+0.432329602 container died 1f6bc5fa7efd7acbbc82ab6d35a5c554674e5d989fda163ca1cfbec9de7f2cd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 08:26:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ecbbad4bfeb158680dba8a442cbf91fa44ce4d7dd6c5cd58f183072d7f40bba4-merged.mount: Deactivated successfully.
Jan 27 08:26:02 np0005597378 podman[181023]: 2026-01-27 13:26:02.164157439 +0000 UTC m=+0.488263656 container remove 1f6bc5fa7efd7acbbc82ab6d35a5c554674e5d989fda163ca1cfbec9de7f2cd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:26:02 np0005597378 systemd[1]: libpod-conmon-1f6bc5fa7efd7acbbc82ab6d35a5c554674e5d989fda163ca1cfbec9de7f2cd2.scope: Deactivated successfully.
Jan 27 08:26:02 np0005597378 podman[181308]: 2026-01-27 13:26:02.604481447 +0000 UTC m=+0.040523545 container create ff75d5bbc8cb606b3879bf7d88dcabb778ff548ef3f6da3057a8cfa8f9889108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khayyam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:26:02 np0005597378 systemd[1]: Started libpod-conmon-ff75d5bbc8cb606b3879bf7d88dcabb778ff548ef3f6da3057a8cfa8f9889108.scope.
Jan 27 08:26:02 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:26:02 np0005597378 podman[181308]: 2026-01-27 13:26:02.675583635 +0000 UTC m=+0.111625763 container init ff75d5bbc8cb606b3879bf7d88dcabb778ff548ef3f6da3057a8cfa8f9889108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khayyam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:26:02 np0005597378 podman[181308]: 2026-01-27 13:26:02.683161002 +0000 UTC m=+0.119203100 container start ff75d5bbc8cb606b3879bf7d88dcabb778ff548ef3f6da3057a8cfa8f9889108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khayyam, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 08:26:02 np0005597378 podman[181308]: 2026-01-27 13:26:02.586320092 +0000 UTC m=+0.022362210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:26:02 np0005597378 podman[181308]: 2026-01-27 13:26:02.687287474 +0000 UTC m=+0.123329662 container attach ff75d5bbc8cb606b3879bf7d88dcabb778ff548ef3f6da3057a8cfa8f9889108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khayyam, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:26:02 np0005597378 interesting_khayyam[181377]: 167 167
Jan 27 08:26:02 np0005597378 systemd[1]: libpod-ff75d5bbc8cb606b3879bf7d88dcabb778ff548ef3f6da3057a8cfa8f9889108.scope: Deactivated successfully.
Jan 27 08:26:02 np0005597378 podman[181308]: 2026-01-27 13:26:02.690249965 +0000 UTC m=+0.126292063 container died ff75d5bbc8cb606b3879bf7d88dcabb778ff548ef3f6da3057a8cfa8f9889108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:26:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2bf2d03a55e91c1bbdf0ea12546e5486c1f0a810d751808427ceb55adc1f80aa-merged.mount: Deactivated successfully.
Jan 27 08:26:02 np0005597378 podman[181308]: 2026-01-27 13:26:02.726692568 +0000 UTC m=+0.162734666 container remove ff75d5bbc8cb606b3879bf7d88dcabb778ff548ef3f6da3057a8cfa8f9889108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:26:02 np0005597378 systemd[1]: libpod-conmon-ff75d5bbc8cb606b3879bf7d88dcabb778ff548ef3f6da3057a8cfa8f9889108.scope: Deactivated successfully.
Jan 27 08:26:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:03 np0005597378 podman[181781]: 2026-01-27 13:26:03.431100413 +0000 UTC m=+0.042970032 container create 85d2cfa841d8aa93a11496f56fe0e10797e4f577eb9570115f606263b6ebfe62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclean, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:26:03 np0005597378 systemd[1]: Started libpod-conmon-85d2cfa841d8aa93a11496f56fe0e10797e4f577eb9570115f606263b6ebfe62.scope.
Jan 27 08:26:03 np0005597378 podman[181781]: 2026-01-27 13:26:03.411724935 +0000 UTC m=+0.023594584 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:26:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:26:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a4fb9d74c09105136f3bb744c05da6feba67ee8bbb09bbf4e5f6ebfe862d63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:26:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a4fb9d74c09105136f3bb744c05da6feba67ee8bbb09bbf4e5f6ebfe862d63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:26:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a4fb9d74c09105136f3bb744c05da6feba67ee8bbb09bbf4e5f6ebfe862d63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:26:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a4fb9d74c09105136f3bb744c05da6feba67ee8bbb09bbf4e5f6ebfe862d63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:26:03 np0005597378 podman[181781]: 2026-01-27 13:26:03.530634835 +0000 UTC m=+0.142504474 container init 85d2cfa841d8aa93a11496f56fe0e10797e4f577eb9570115f606263b6ebfe62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclean, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:26:03 np0005597378 systemd[1]: Stopping OpenSSH server daemon...
Jan 27 08:26:03 np0005597378 podman[181781]: 2026-01-27 13:26:03.540309359 +0000 UTC m=+0.152178978 container start 85d2cfa841d8aa93a11496f56fe0e10797e4f577eb9570115f606263b6ebfe62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclean, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 08:26:03 np0005597378 systemd[1]: sshd.service: Deactivated successfully.
Jan 27 08:26:03 np0005597378 systemd[1]: sshd.service: Unit process 181796 (sshd-session) remains running after unit stopped.
Jan 27 08:26:03 np0005597378 systemd[1]: sshd.service: Unit process 181802 (sshd-session) remains running after unit stopped.
Jan 27 08:26:03 np0005597378 systemd[1]: Stopped OpenSSH server daemon.
Jan 27 08:26:03 np0005597378 systemd[1]: sshd.service: Consumed 2.868s CPU time, 37.6M memory peak, read 32.0K from disk, written 104.0K to disk.
Jan 27 08:26:03 np0005597378 systemd[1]: Stopped target sshd-keygen.target.
Jan 27 08:26:03 np0005597378 systemd[1]: Stopping sshd-keygen.target...
Jan 27 08:26:03 np0005597378 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 08:26:03 np0005597378 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 08:26:03 np0005597378 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 08:26:03 np0005597378 systemd[1]: Reached target sshd-keygen.target.
Jan 27 08:26:03 np0005597378 podman[181781]: 2026-01-27 13:26:03.544081531 +0000 UTC m=+0.155951170 container attach 85d2cfa841d8aa93a11496f56fe0e10797e4f577eb9570115f606263b6ebfe62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:26:03 np0005597378 systemd[1]: Starting OpenSSH server daemon...
Jan 27 08:26:03 np0005597378 systemd[1]: Started OpenSSH server daemon.
Jan 27 08:26:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:26:04 np0005597378 lvm[181970]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:26:04 np0005597378 lvm[181970]: VG ceph_vg1 finished
Jan 27 08:26:04 np0005597378 lvm[181969]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:26:04 np0005597378 lvm[181969]: VG ceph_vg0 finished
Jan 27 08:26:04 np0005597378 lvm[181978]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:26:04 np0005597378 lvm[181978]: VG ceph_vg2 finished
Jan 27 08:26:04 np0005597378 keen_mclean[181800]: {}
Jan 27 08:26:04 np0005597378 systemd[1]: libpod-85d2cfa841d8aa93a11496f56fe0e10797e4f577eb9570115f606263b6ebfe62.scope: Deactivated successfully.
Jan 27 08:26:04 np0005597378 systemd[1]: libpod-85d2cfa841d8aa93a11496f56fe0e10797e4f577eb9570115f606263b6ebfe62.scope: Consumed 1.263s CPU time.
Jan 27 08:26:04 np0005597378 podman[181781]: 2026-01-27 13:26:04.371739986 +0000 UTC m=+0.983609605 container died 85d2cfa841d8aa93a11496f56fe0e10797e4f577eb9570115f606263b6ebfe62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:26:04 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c5a4fb9d74c09105136f3bb744c05da6feba67ee8bbb09bbf4e5f6ebfe862d63-merged.mount: Deactivated successfully.
Jan 27 08:26:04 np0005597378 podman[181781]: 2026-01-27 13:26:04.433772386 +0000 UTC m=+1.045642005 container remove 85d2cfa841d8aa93a11496f56fe0e10797e4f577eb9570115f606263b6ebfe62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 08:26:04 np0005597378 systemd[1]: libpod-conmon-85d2cfa841d8aa93a11496f56fe0e10797e4f577eb9570115f606263b6ebfe62.scope: Deactivated successfully.
Jan 27 08:26:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:26:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:26:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:26:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:26:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:05 np0005597378 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:26:05 np0005597378 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:26:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:26:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:26:05 np0005597378 systemd[1]: Reloading.
Jan 27 08:26:05 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:26:05 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:26:05 np0005597378 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:26:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:26:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:10 np0005597378 python3.9[187787]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:26:10 np0005597378 systemd[1]: Reloading.
Jan 27 08:26:10 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:26:10 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:26:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:11 np0005597378 python3.9[189130]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:26:11 np0005597378 systemd[1]: Reloading.
Jan 27 08:26:11 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:26:11 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:26:12 np0005597378 python3.9[190453]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:26:12 np0005597378 systemd[1]: Reloading.
Jan 27 08:26:12 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:26:12 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:26:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:26:13 np0005597378 python3.9[191299]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:26:13 np0005597378 systemd[1]: Reloading.
Jan 27 08:26:13 np0005597378 podman[191301]: 2026-01-27 13:26:13.877318907 +0000 UTC m=+0.091688350 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 08:26:13 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:26:13 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:26:14 np0005597378 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:26:14 np0005597378 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:26:14 np0005597378 systemd[1]: man-db-cache-update.service: Consumed 9.668s CPU time.
Jan 27 08:26:14 np0005597378 systemd[1]: run-rc38ebe98e0114e5e8835de5cd5309d1f.service: Deactivated successfully.
Jan 27 08:26:14 np0005597378 python3.9[191515]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:14 np0005597378 systemd[1]: Reloading.
Jan 27 08:26:15 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:26:15 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:26:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:15 np0005597378 python3.9[191704]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:16 np0005597378 systemd[1]: Reloading.
Jan 27 08:26:16 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:26:16 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:26:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:26:16
Jan 27 08:26:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:26:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:26:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.mgr']
Jan 27 08:26:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:26:17 np0005597378 python3.9[191895]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:17 np0005597378 systemd[1]: Reloading.
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:17 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:26:17 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:26:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:26:17 np0005597378 podman[192056]: 2026-01-27 13:26:17.874392299 +0000 UTC m=+0.067155792 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:26:18 np0005597378 python3.9[192101]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:26:18 np0005597378 python3.9[192258]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:19 np0005597378 systemd[1]: Reloading.
Jan 27 08:26:19 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:26:19 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:26:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:20 np0005597378 python3.9[192448]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 08:26:20 np0005597378 systemd[1]: Reloading.
Jan 27 08:26:20 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:26:20 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:26:20 np0005597378 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 27 08:26:20 np0005597378 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 27 08:26:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:21 np0005597378 python3.9[192644]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:22 np0005597378 python3.9[192799]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:22 np0005597378 python3.9[192954]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:26:23 np0005597378 python3.9[193109]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:24 np0005597378 python3.9[193264]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:25 np0005597378 python3.9[193419]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:26 np0005597378 python3.9[193574]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:26 np0005597378 python3.9[193729]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:26:27 np0005597378 python3.9[193884]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:28 np0005597378 python3.9[194039]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:26:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:29 np0005597378 python3.9[194194]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:30 np0005597378 python3.9[194349]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:30 np0005597378 python3.9[194504]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:31 np0005597378 python3.9[194659]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 08:26:32 np0005597378 python3.9[194814]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:26:33 np0005597378 python3.9[194966]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:26:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:33 np0005597378 python3.9[195118]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:26:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:26:34 np0005597378 python3.9[195270]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:26:34 np0005597378 python3.9[195422]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:26:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:35 np0005597378 python3.9[195574]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:26:36 np0005597378 python3.9[195724]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:26:36 np0005597378 python3.9[195876]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:26:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:37 np0005597378 python3.9[196001]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769520396.3711038-557-151560460683912/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:38 np0005597378 python3.9[196153]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:26:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:26:38 np0005597378 python3.9[196278]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769520397.826318-557-25076545016356/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:39 np0005597378 python3.9[196430]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:26:40 np0005597378 python3.9[196555]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769520398.9820213-557-210867126043589/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:40 np0005597378 python3.9[196707]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:26:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:41 np0005597378 python3.9[196832]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769520400.3461604-557-215326132908729/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:41 np0005597378 python3.9[196984]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:26:42 np0005597378 python3.9[197109]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769520401.5018513-557-215475318222190/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:43 np0005597378 python3.9[197261]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:26:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:26:43 np0005597378 python3.9[197386]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769520402.8797967-557-98597736702369/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:44 np0005597378 podman[197510]: 2026-01-27 13:26:44.445183163 +0000 UTC m=+0.098310650 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Jan 27 08:26:44 np0005597378 python3.9[197558]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:26:45 np0005597378 python3.9[197688]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769520404.0890758-557-280002348807966/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:45 np0005597378 python3.9[197840]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:26:46 np0005597378 python3.9[197965]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769520405.286211-557-52243990295134/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:26:46.271 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:26:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:26:46.272 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:26:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:26:46.272 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:26:46 np0005597378 python3.9[198117]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 27 08:26:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:47 np0005597378 python3.9[198270]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:26:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:26:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:26:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:26:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:26:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:26:47 np0005597378 podman[198394]: 2026-01-27 13:26:47.958238076 +0000 UTC m=+0.045331817 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 08:26:48 np0005597378 python3.9[198441]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:26:48 np0005597378 python3.9[198593]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:49 np0005597378 python3.9[198745]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:49 np0005597378 python3.9[198897]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:50 np0005597378 python3.9[199049]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:51 np0005597378 python3.9[199201]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:51 np0005597378 python3.9[199353]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:52 np0005597378 python3.9[199505]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:53 np0005597378 python3.9[199657]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:53 np0005597378 python3.9[199809]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:26:54 np0005597378 python3.9[199961]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:54 np0005597378 python3.9[200113]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:55 np0005597378 python3.9[200265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:56 np0005597378 python3.9[200417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:26:56 np0005597378 python3.9[200540]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520415.6491797-778-120910773723383/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:57 np0005597378 python3.9[200692]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:26:58 np0005597378 python3.9[200815]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520416.9208026-778-47857748338459/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:26:58 np0005597378 python3.9[200967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:26:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:26:59 np0005597378 python3.9[201090]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520418.253124-778-139409113916913/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:26:59 np0005597378 python3.9[201242]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:00 np0005597378 python3.9[201365]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520419.4175048-778-52121092664458/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:01 np0005597378 python3.9[201517]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:01 np0005597378 python3.9[201640]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520420.590911-778-84640114613606/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:02 np0005597378 python3.9[201792]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:02 np0005597378 python3.9[201915]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520421.763036-778-145781736176057/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:03 np0005597378 python3.9[202067]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:27:03 np0005597378 python3.9[202190]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520422.8784764-778-70154825982948/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:04 np0005597378 python3.9[202342]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:05 np0005597378 python3.9[202515]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520424.048925-778-270244928951416/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.414736) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520425414763, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2045, "num_deletes": 251, "total_data_size": 3560419, "memory_usage": 3610032, "flush_reason": "Manual Compaction"}
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520425435149, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3484037, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9701, "largest_seqno": 11745, "table_properties": {"data_size": 3474749, "index_size": 5910, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17895, "raw_average_key_size": 19, "raw_value_size": 3456325, "raw_average_value_size": 3760, "num_data_blocks": 268, "num_entries": 919, "num_filter_entries": 919, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769520192, "oldest_key_time": 1769520192, "file_creation_time": 1769520425, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 20538 microseconds, and 6696 cpu microseconds.
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.435272) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3484037 bytes OK
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.435343) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.436841) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.436859) EVENT_LOG_v1 {"time_micros": 1769520425436853, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.436877) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3551877, prev total WAL file size 3551877, number of live WAL files 2.
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.438166) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3402KB)], [26(6044KB)]
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520425438227, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9673341, "oldest_snapshot_seqno": -1}
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3716 keys, 8074612 bytes, temperature: kUnknown
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520425518476, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8074612, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8046092, "index_size": 18106, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 89253, "raw_average_key_size": 24, "raw_value_size": 7975368, "raw_average_value_size": 2146, "num_data_blocks": 785, "num_entries": 3716, "num_filter_entries": 3716, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769520425, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.518708) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8074612 bytes
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.525340) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.4 rd, 100.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4230, records dropped: 514 output_compression: NoCompression
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.525366) EVENT_LOG_v1 {"time_micros": 1769520425525354, "job": 10, "event": "compaction_finished", "compaction_time_micros": 80334, "compaction_time_cpu_micros": 17733, "output_level": 6, "num_output_files": 1, "total_output_size": 8074612, "num_input_records": 4230, "num_output_records": 3716, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520425526108, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520425527016, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.438061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.527060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.527067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.527069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.527071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:27:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:27:05.527072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:27:05 np0005597378 podman[202761]: 2026-01-27 13:27:05.602070658 +0000 UTC m=+0.044469344 container create 18eb617a4720a2ecf60eb23da7b9523e2ae0a57d664c130767675a7884c2b930 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Jan 27 08:27:05 np0005597378 systemd[1]: Started libpod-conmon-18eb617a4720a2ecf60eb23da7b9523e2ae0a57d664c130767675a7884c2b930.scope.
Jan 27 08:27:05 np0005597378 python3.9[202748]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:05 np0005597378 podman[202761]: 2026-01-27 13:27:05.578265379 +0000 UTC m=+0.020664095 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:27:05 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:27:05 np0005597378 podman[202761]: 2026-01-27 13:27:05.714997558 +0000 UTC m=+0.157396264 container init 18eb617a4720a2ecf60eb23da7b9523e2ae0a57d664c130767675a7884c2b930 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_austin, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:05 np0005597378 podman[202761]: 2026-01-27 13:27:05.72276416 +0000 UTC m=+0.165162846 container start 18eb617a4720a2ecf60eb23da7b9523e2ae0a57d664c130767675a7884c2b930 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:27:05 np0005597378 nifty_austin[202777]: 167 167
Jan 27 08:27:05 np0005597378 systemd[1]: libpod-18eb617a4720a2ecf60eb23da7b9523e2ae0a57d664c130767675a7884c2b930.scope: Deactivated successfully.
Jan 27 08:27:05 np0005597378 podman[202761]: 2026-01-27 13:27:05.731761526 +0000 UTC m=+0.174160222 container attach 18eb617a4720a2ecf60eb23da7b9523e2ae0a57d664c130767675a7884c2b930 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:27:05 np0005597378 podman[202761]: 2026-01-27 13:27:05.732452874 +0000 UTC m=+0.174851560 container died 18eb617a4720a2ecf60eb23da7b9523e2ae0a57d664c130767675a7884c2b930 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_austin, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 08:27:05 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0abef4bfa8ea77c5156e036e774ea49a42410cb6a04c828728de2c34c8ce9804-merged.mount: Deactivated successfully.
Jan 27 08:27:05 np0005597378 podman[202761]: 2026-01-27 13:27:05.857475364 +0000 UTC m=+0.299874050 container remove 18eb617a4720a2ecf60eb23da7b9523e2ae0a57d664c130767675a7884c2b930 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_austin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:05 np0005597378 systemd[1]: libpod-conmon-18eb617a4720a2ecf60eb23da7b9523e2ae0a57d664c130767675a7884c2b930.scope: Deactivated successfully.
Jan 27 08:27:06 np0005597378 podman[202916]: 2026-01-27 13:27:06.026502794 +0000 UTC m=+0.053445849 container create 7096fd1de619cb29a21d0cc5b6590c8ab8b1e39f466601642c71588aabd273ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_pike, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 08:27:06 np0005597378 podman[202916]: 2026-01-27 13:27:05.998708196 +0000 UTC m=+0.025651261 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:27:06 np0005597378 systemd[1]: Started libpod-conmon-7096fd1de619cb29a21d0cc5b6590c8ab8b1e39f466601642c71588aabd273ff.scope.
Jan 27 08:27:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:27:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b38fe340359bd5dcbb8b191d2026b72346c2196cd58f1f048c7e1b5def099d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b38fe340359bd5dcbb8b191d2026b72346c2196cd58f1f048c7e1b5def099d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b38fe340359bd5dcbb8b191d2026b72346c2196cd58f1f048c7e1b5def099d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b38fe340359bd5dcbb8b191d2026b72346c2196cd58f1f048c7e1b5def099d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b38fe340359bd5dcbb8b191d2026b72346c2196cd58f1f048c7e1b5def099d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:06 np0005597378 python3.9[202935]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520425.1607265-778-629409730257/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:06 np0005597378 podman[202916]: 2026-01-27 13:27:06.224740101 +0000 UTC m=+0.251683176 container init 7096fd1de619cb29a21d0cc5b6590c8ab8b1e39f466601642c71588aabd273ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 08:27:06 np0005597378 podman[202916]: 2026-01-27 13:27:06.231724022 +0000 UTC m=+0.258667087 container start 7096fd1de619cb29a21d0cc5b6590c8ab8b1e39f466601642c71588aabd273ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 08:27:06 np0005597378 podman[202916]: 2026-01-27 13:27:06.274000635 +0000 UTC m=+0.300943690 container attach 7096fd1de619cb29a21d0cc5b6590c8ab8b1e39f466601642c71588aabd273ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Jan 27 08:27:06 np0005597378 happy_pike[202941]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:27:06 np0005597378 happy_pike[202941]: --> All data devices are unavailable
Jan 27 08:27:06 np0005597378 systemd[1]: libpod-7096fd1de619cb29a21d0cc5b6590c8ab8b1e39f466601642c71588aabd273ff.scope: Deactivated successfully.
Jan 27 08:27:06 np0005597378 podman[202916]: 2026-01-27 13:27:06.740640182 +0000 UTC m=+0.767583237 container died 7096fd1de619cb29a21d0cc5b6590c8ab8b1e39f466601642c71588aabd273ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_pike, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:27:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b6b38fe340359bd5dcbb8b191d2026b72346c2196cd58f1f048c7e1b5def099d-merged.mount: Deactivated successfully.
Jan 27 08:27:06 np0005597378 python3.9[203108]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:06 np0005597378 podman[202916]: 2026-01-27 13:27:06.881930625 +0000 UTC m=+0.908873670 container remove 7096fd1de619cb29a21d0cc5b6590c8ab8b1e39f466601642c71588aabd273ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 27 08:27:06 np0005597378 systemd[1]: libpod-conmon-7096fd1de619cb29a21d0cc5b6590c8ab8b1e39f466601642c71588aabd273ff.scope: Deactivated successfully.
Jan 27 08:27:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:07 np0005597378 podman[203312]: 2026-01-27 13:27:07.342902828 +0000 UTC m=+0.099770342 container create 5ed7768226e050b5b438000174140facfdd1636c4390c0ef067ab42d9473cfa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:27:07 np0005597378 podman[203312]: 2026-01-27 13:27:07.264267373 +0000 UTC m=+0.021134917 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:27:07 np0005597378 python3.9[203311]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520426.3466814-778-97922504834025/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:07 np0005597378 systemd[1]: Started libpod-conmon-5ed7768226e050b5b438000174140facfdd1636c4390c0ef067ab42d9473cfa2.scope.
Jan 27 08:27:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:27:07 np0005597378 podman[203312]: 2026-01-27 13:27:07.538158564 +0000 UTC m=+0.295026108 container init 5ed7768226e050b5b438000174140facfdd1636c4390c0ef067ab42d9473cfa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_hypatia, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:27:07 np0005597378 podman[203312]: 2026-01-27 13:27:07.544158567 +0000 UTC m=+0.301026071 container start 5ed7768226e050b5b438000174140facfdd1636c4390c0ef067ab42d9473cfa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:27:07 np0005597378 bold_hypatia[203330]: 167 167
Jan 27 08:27:07 np0005597378 systemd[1]: libpod-5ed7768226e050b5b438000174140facfdd1636c4390c0ef067ab42d9473cfa2.scope: Deactivated successfully.
Jan 27 08:27:07 np0005597378 conmon[203330]: conmon 5ed7768226e050b5b438 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ed7768226e050b5b438000174140facfdd1636c4390c0ef067ab42d9473cfa2.scope/container/memory.events
Jan 27 08:27:07 np0005597378 podman[203312]: 2026-01-27 13:27:07.645204103 +0000 UTC m=+0.402071617 container attach 5ed7768226e050b5b438000174140facfdd1636c4390c0ef067ab42d9473cfa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 08:27:07 np0005597378 podman[203312]: 2026-01-27 13:27:07.645509601 +0000 UTC m=+0.402377115 container died 5ed7768226e050b5b438000174140facfdd1636c4390c0ef067ab42d9473cfa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_hypatia, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 08:27:07 np0005597378 systemd[1]: var-lib-containers-storage-overlay-850ee830b9929e12ea0b68b4facf45b8d0105e20986ce2cf9856f7c4600d19c5-merged.mount: Deactivated successfully.
Jan 27 08:27:08 np0005597378 python3.9[203497]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:08 np0005597378 podman[203312]: 2026-01-27 13:27:08.125678387 +0000 UTC m=+0.882545901 container remove 5ed7768226e050b5b438000174140facfdd1636c4390c0ef067ab42d9473cfa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:27:08 np0005597378 systemd[1]: libpod-conmon-5ed7768226e050b5b438000174140facfdd1636c4390c0ef067ab42d9473cfa2.scope: Deactivated successfully.
Jan 27 08:27:08 np0005597378 podman[203572]: 2026-01-27 13:27:08.294994876 +0000 UTC m=+0.054411966 container create b0c7fef5d6d9b3128b75205c3c183fd23cb1c994722058801a6f394dd552994e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:27:08 np0005597378 podman[203572]: 2026-01-27 13:27:08.263367824 +0000 UTC m=+0.022784934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:27:08 np0005597378 systemd[1]: Started libpod-conmon-b0c7fef5d6d9b3128b75205c3c183fd23cb1c994722058801a6f394dd552994e.scope.
Jan 27 08:27:08 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:27:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b23f7900a8b6ac676e9e0d51b51a8b128feecdaf4ccf48290d870a2a96af992a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b23f7900a8b6ac676e9e0d51b51a8b128feecdaf4ccf48290d870a2a96af992a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b23f7900a8b6ac676e9e0d51b51a8b128feecdaf4ccf48290d870a2a96af992a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b23f7900a8b6ac676e9e0d51b51a8b128feecdaf4ccf48290d870a2a96af992a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:08 np0005597378 podman[203572]: 2026-01-27 13:27:08.451637958 +0000 UTC m=+0.211055068 container init b0c7fef5d6d9b3128b75205c3c183fd23cb1c994722058801a6f394dd552994e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_colden, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:27:08 np0005597378 podman[203572]: 2026-01-27 13:27:08.458279889 +0000 UTC m=+0.217696999 container start b0c7fef5d6d9b3128b75205c3c183fd23cb1c994722058801a6f394dd552994e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_colden, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 08:27:08 np0005597378 podman[203572]: 2026-01-27 13:27:08.50412373 +0000 UTC m=+0.263540810 container attach b0c7fef5d6d9b3128b75205c3c183fd23cb1c994722058801a6f394dd552994e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:27:08 np0005597378 python3.9[203646]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520427.5547605-778-251282872110455/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]: {
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:    "0": [
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:        {
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "devices": [
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "/dev/loop3"
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            ],
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_name": "ceph_lv0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_size": "21470642176",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "name": "ceph_lv0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "tags": {
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.cluster_name": "ceph",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.crush_device_class": "",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.encrypted": "0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.objectstore": "bluestore",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.osd_id": "0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.type": "block",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.vdo": "0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.with_tpm": "0"
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            },
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "type": "block",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "vg_name": "ceph_vg0"
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:        }
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:    ],
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:    "1": [
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:        {
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "devices": [
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "/dev/loop4"
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            ],
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_name": "ceph_lv1",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_size": "21470642176",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "name": "ceph_lv1",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "tags": {
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.cluster_name": "ceph",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.crush_device_class": "",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.encrypted": "0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.objectstore": "bluestore",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.osd_id": "1",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.type": "block",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.vdo": "0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.with_tpm": "0"
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            },
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "type": "block",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "vg_name": "ceph_vg1"
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:        }
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:    ],
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:    "2": [
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:        {
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "devices": [
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "/dev/loop5"
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            ],
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_name": "ceph_lv2",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_size": "21470642176",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "name": "ceph_lv2",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "tags": {
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.cluster_name": "ceph",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.crush_device_class": "",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.encrypted": "0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.objectstore": "bluestore",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.osd_id": "2",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.type": "block",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.vdo": "0",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:                "ceph.with_tpm": "0"
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            },
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "type": "block",
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:            "vg_name": "ceph_vg2"
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:        }
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]:    ]
Jan 27 08:27:08 np0005597378 hardcore_colden[203642]: }
Jan 27 08:27:08 np0005597378 systemd[1]: libpod-b0c7fef5d6d9b3128b75205c3c183fd23cb1c994722058801a6f394dd552994e.scope: Deactivated successfully.
Jan 27 08:27:08 np0005597378 conmon[203642]: conmon b0c7fef5d6d9b3128b75 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0c7fef5d6d9b3128b75205c3c183fd23cb1c994722058801a6f394dd552994e.scope/container/memory.events
Jan 27 08:27:08 np0005597378 podman[203572]: 2026-01-27 13:27:08.767947745 +0000 UTC m=+0.527364835 container died b0c7fef5d6d9b3128b75205c3c183fd23cb1c994722058801a6f394dd552994e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:27:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b23f7900a8b6ac676e9e0d51b51a8b128feecdaf4ccf48290d870a2a96af992a-merged.mount: Deactivated successfully.
Jan 27 08:27:08 np0005597378 podman[203572]: 2026-01-27 13:27:08.879688083 +0000 UTC m=+0.639105183 container remove b0c7fef5d6d9b3128b75205c3c183fd23cb1c994722058801a6f394dd552994e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_colden, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:27:08 np0005597378 systemd[1]: libpod-conmon-b0c7fef5d6d9b3128b75205c3c183fd23cb1c994722058801a6f394dd552994e.scope: Deactivated successfully.
Jan 27 08:27:09 np0005597378 auditd[704]: Audit daemon rotating log files
Jan 27 08:27:09 np0005597378 python3.9[203841]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:09 np0005597378 podman[203897]: 2026-01-27 13:27:09.357196777 +0000 UTC m=+0.058538278 container create c29ffc7d7626d6eac213ac55d3aa03cfaba209aec2d4c2505c6401a8765af7ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 08:27:09 np0005597378 systemd[1]: Started libpod-conmon-c29ffc7d7626d6eac213ac55d3aa03cfaba209aec2d4c2505c6401a8765af7ad.scope.
Jan 27 08:27:09 np0005597378 podman[203897]: 2026-01-27 13:27:09.321369819 +0000 UTC m=+0.022711350 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:27:09 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:27:09 np0005597378 podman[203897]: 2026-01-27 13:27:09.450931393 +0000 UTC m=+0.152272914 container init c29ffc7d7626d6eac213ac55d3aa03cfaba209aec2d4c2505c6401a8765af7ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:27:09 np0005597378 podman[203897]: 2026-01-27 13:27:09.460182035 +0000 UTC m=+0.161523536 container start c29ffc7d7626d6eac213ac55d3aa03cfaba209aec2d4c2505c6401a8765af7ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackwell, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 08:27:09 np0005597378 awesome_blackwell[203956]: 167 167
Jan 27 08:27:09 np0005597378 systemd[1]: libpod-c29ffc7d7626d6eac213ac55d3aa03cfaba209aec2d4c2505c6401a8765af7ad.scope: Deactivated successfully.
Jan 27 08:27:09 np0005597378 podman[203897]: 2026-01-27 13:27:09.494850891 +0000 UTC m=+0.196192442 container attach c29ffc7d7626d6eac213ac55d3aa03cfaba209aec2d4c2505c6401a8765af7ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:27:09 np0005597378 podman[203897]: 2026-01-27 13:27:09.495485539 +0000 UTC m=+0.196827060 container died c29ffc7d7626d6eac213ac55d3aa03cfaba209aec2d4c2505c6401a8765af7ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 08:27:09 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e645f34553b650537556e4b1f317c0354b5f8bfc3b5b7b2d27f5b9fd7d9fc052-merged.mount: Deactivated successfully.
Jan 27 08:27:09 np0005597378 podman[203897]: 2026-01-27 13:27:09.741396175 +0000 UTC m=+0.442737686 container remove c29ffc7d7626d6eac213ac55d3aa03cfaba209aec2d4c2505c6401a8765af7ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackwell, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 08:27:09 np0005597378 python3.9[204034]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520428.722509-778-169508097436634/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:09 np0005597378 systemd[1]: libpod-conmon-c29ffc7d7626d6eac213ac55d3aa03cfaba209aec2d4c2505c6401a8765af7ad.scope: Deactivated successfully.
Jan 27 08:27:09 np0005597378 podman[204066]: 2026-01-27 13:27:09.937406351 +0000 UTC m=+0.063588115 container create a0970bf09b7f583524f818dcafd83a438c08963df27287d6f73cc9c8b8c4fdc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_greider, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 08:27:09 np0005597378 systemd[1]: Started libpod-conmon-a0970bf09b7f583524f818dcafd83a438c08963df27287d6f73cc9c8b8c4fdc5.scope.
Jan 27 08:27:09 np0005597378 podman[204066]: 2026-01-27 13:27:09.89813315 +0000 UTC m=+0.024314934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:27:10 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:27:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/725965e16eb4759ca57da0c10eebb6df6334b2bd685a9ce3e375768dec4104b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/725965e16eb4759ca57da0c10eebb6df6334b2bd685a9ce3e375768dec4104b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/725965e16eb4759ca57da0c10eebb6df6334b2bd685a9ce3e375768dec4104b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/725965e16eb4759ca57da0c10eebb6df6334b2bd685a9ce3e375768dec4104b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:27:10 np0005597378 podman[204066]: 2026-01-27 13:27:10.033075181 +0000 UTC m=+0.159256965 container init a0970bf09b7f583524f818dcafd83a438c08963df27287d6f73cc9c8b8c4fdc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_greider, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:27:10 np0005597378 podman[204066]: 2026-01-27 13:27:10.039588948 +0000 UTC m=+0.165770712 container start a0970bf09b7f583524f818dcafd83a438c08963df27287d6f73cc9c8b8c4fdc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_greider, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:27:10 np0005597378 podman[204066]: 2026-01-27 13:27:10.083757573 +0000 UTC m=+0.209939377 container attach a0970bf09b7f583524f818dcafd83a438c08963df27287d6f73cc9c8b8c4fdc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_greider, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:27:10 np0005597378 python3.9[204214]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:10 np0005597378 lvm[204401]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:27:10 np0005597378 lvm[204401]: VG ceph_vg0 finished
Jan 27 08:27:10 np0005597378 lvm[204408]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:27:10 np0005597378 lvm[204408]: VG ceph_vg1 finished
Jan 27 08:27:10 np0005597378 lvm[204414]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:27:10 np0005597378 lvm[204414]: VG ceph_vg2 finished
Jan 27 08:27:10 np0005597378 lvm[204415]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:27:10 np0005597378 lvm[204415]: VG ceph_vg1 finished
Jan 27 08:27:10 np0005597378 lvm[204416]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:27:10 np0005597378 lvm[204416]: VG ceph_vg2 finished
Jan 27 08:27:10 np0005597378 crazy_greider[204123]: {}
Jan 27 08:27:10 np0005597378 systemd[1]: libpod-a0970bf09b7f583524f818dcafd83a438c08963df27287d6f73cc9c8b8c4fdc5.scope: Deactivated successfully.
Jan 27 08:27:10 np0005597378 systemd[1]: libpod-a0970bf09b7f583524f818dcafd83a438c08963df27287d6f73cc9c8b8c4fdc5.scope: Consumed 1.287s CPU time.
Jan 27 08:27:10 np0005597378 podman[204066]: 2026-01-27 13:27:10.861737532 +0000 UTC m=+0.987919306 container died a0970bf09b7f583524f818dcafd83a438c08963df27287d6f73cc9c8b8c4fdc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:27:10 np0005597378 python3.9[204412]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520429.9364882-778-97946529327899/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:10 np0005597378 systemd[1]: var-lib-containers-storage-overlay-725965e16eb4759ca57da0c10eebb6df6334b2bd685a9ce3e375768dec4104b1-merged.mount: Deactivated successfully.
Jan 27 08:27:10 np0005597378 podman[204066]: 2026-01-27 13:27:10.979839363 +0000 UTC m=+1.106021167 container remove a0970bf09b7f583524f818dcafd83a438c08963df27287d6f73cc9c8b8c4fdc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:27:10 np0005597378 systemd[1]: libpod-conmon-a0970bf09b7f583524f818dcafd83a438c08963df27287d6f73cc9c8b8c4fdc5.scope: Deactivated successfully.
Jan 27 08:27:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:27:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:27:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:27:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:27:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:11 np0005597378 python3.9[204607]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:12 np0005597378 python3.9[204730]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520431.0620866-778-90679060895437/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:27:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:27:12 np0005597378 python3.9[204880]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:27:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:13 np0005597378 python3.9[205035]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 27 08:27:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:27:14 np0005597378 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 27 08:27:14 np0005597378 podman[205040]: 2026-01-27 13:27:14.758980047 +0000 UTC m=+0.089555103 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 27 08:27:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:16 np0005597378 python3.9[205217]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:16 np0005597378 python3.9[205369]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:27:16
Jan 27 08:27:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:27:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:27:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups']
Jan 27 08:27:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:17 np0005597378 python3.9[205521]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:27:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:27:18 np0005597378 python3.9[205673]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:18 np0005597378 podman[205797]: 2026-01-27 13:27:18.4860295 +0000 UTC m=+0.043504867 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 08:27:18 np0005597378 python3.9[205844]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:27:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:19 np0005597378 python3.9[205996]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:19 np0005597378 python3.9[206148]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:20 np0005597378 python3.9[206300]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:21 np0005597378 python3.9[206452]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:21 np0005597378 python3.9[206604]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:22 np0005597378 python3.9[206756]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:27:22 np0005597378 systemd[1]: Reloading.
Jan 27 08:27:22 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:22 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:22 np0005597378 systemd[1]: Starting libvirt logging daemon socket...
Jan 27 08:27:22 np0005597378 systemd[1]: Listening on libvirt logging daemon socket.
Jan 27 08:27:22 np0005597378 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 27 08:27:22 np0005597378 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 27 08:27:22 np0005597378 systemd[1]: Starting libvirt logging daemon...
Jan 27 08:27:22 np0005597378 systemd[1]: Started libvirt logging daemon.
Jan 27 08:27:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:27:23 np0005597378 python3.9[206948]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:27:23 np0005597378 systemd[1]: Reloading.
Jan 27 08:27:23 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:23 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:24 np0005597378 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 27 08:27:24 np0005597378 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 27 08:27:24 np0005597378 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 27 08:27:24 np0005597378 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 27 08:27:24 np0005597378 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 27 08:27:24 np0005597378 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 27 08:27:24 np0005597378 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 27 08:27:24 np0005597378 systemd[1]: Starting libvirt nodedev daemon...
Jan 27 08:27:24 np0005597378 systemd[1]: Started libvirt nodedev daemon.
Jan 27 08:27:24 np0005597378 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 27 08:27:24 np0005597378 python3.9[207166]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:27:24 np0005597378 systemd[1]: Reloading.
Jan 27 08:27:24 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:24 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:25 np0005597378 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 27 08:27:25 np0005597378 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 27 08:27:25 np0005597378 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 27 08:27:25 np0005597378 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 27 08:27:25 np0005597378 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 27 08:27:25 np0005597378 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 27 08:27:25 np0005597378 systemd[1]: Starting libvirt proxy daemon...
Jan 27 08:27:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:25 np0005597378 systemd[1]: Started libvirt proxy daemon.
Jan 27 08:27:25 np0005597378 python3.9[207385]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:27:26 np0005597378 systemd[1]: Reloading.
Jan 27 08:27:26 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:26 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:26 np0005597378 setroubleshoot[206986]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 4645a171-68ff-4427-a6b4-23e11fd297be
Jan 27 08:27:26 np0005597378 setroubleshoot[206986]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 27 08:27:26 np0005597378 systemd[1]: Listening on libvirt locking daemon socket.
Jan 27 08:27:26 np0005597378 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 27 08:27:26 np0005597378 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 27 08:27:26 np0005597378 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 27 08:27:26 np0005597378 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 27 08:27:26 np0005597378 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 27 08:27:26 np0005597378 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 27 08:27:26 np0005597378 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 27 08:27:26 np0005597378 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 27 08:27:26 np0005597378 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 27 08:27:26 np0005597378 systemd[1]: Starting libvirt QEMU daemon...
Jan 27 08:27:26 np0005597378 systemd[1]: Started libvirt QEMU daemon.
Jan 27 08:27:27 np0005597378 python3.9[207600]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:27:27 np0005597378 systemd[1]: Reloading.
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:27:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:27 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:27 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:27 np0005597378 systemd[1]: Starting libvirt secret daemon socket...
Jan 27 08:27:27 np0005597378 systemd[1]: Listening on libvirt secret daemon socket.
Jan 27 08:27:27 np0005597378 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 27 08:27:27 np0005597378 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 27 08:27:27 np0005597378 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 27 08:27:27 np0005597378 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 27 08:27:27 np0005597378 systemd[1]: Starting libvirt secret daemon...
Jan 27 08:27:27 np0005597378 systemd[1]: Started libvirt secret daemon.
Jan 27 08:27:28 np0005597378 python3.9[207812]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:27:28 np0005597378 python3.9[207964]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 08:27:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:29 np0005597378 python3.9[208116]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:27:30 np0005597378 python3.9[208270]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 08:27:31 np0005597378 python3.9[208420]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:31 np0005597378 python3.9[208541]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520450.6138165-1136-270398272151816/.source.xml follow=False _original_basename=secret.xml.j2 checksum=8c28a137ee2a1085931029cec5af08c23668b76e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:32 np0005597378 python3.9[208693]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 4d8fd694-f443-5fb1-b612-70034b2f3c6e#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:27:32 np0005597378 python3.9[208855]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:27:35 np0005597378 python3.9[209318]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:35 np0005597378 python3.9[209470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:36 np0005597378 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 27 08:27:36 np0005597378 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 27 08:27:36 np0005597378 python3.9[209593]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520455.3142493-1191-82799855693251/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:37 np0005597378 python3.9[209745]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:37 np0005597378 python3.9[209897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:38 np0005597378 python3.9[209975]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:27:38 np0005597378 python3.9[210127]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:39 np0005597378 python3.9[210205]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.rkgeo35f recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:39 np0005597378 python3.9[210357]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:40 np0005597378 python3.9[210435]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:41 np0005597378 python3.9[210587]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:27:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:41 np0005597378 python3[210740]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 08:27:42 np0005597378 python3.9[210892]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:42 np0005597378 python3.9[210970]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:43 np0005597378 python3.9[211122]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:27:44 np0005597378 python3.9[211247]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520463.090679-1280-247658562033862/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:44 np0005597378 python3.9[211399]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:45 np0005597378 podman[211449]: 2026-01-27 13:27:45.28042759 +0000 UTC m=+0.107901914 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 08:27:45 np0005597378 python3.9[211495]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:46 np0005597378 python3.9[211657]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:27:46.273 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:27:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:27:46.274 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:27:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:27:46.274 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:27:46 np0005597378 python3.9[211735]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:47 np0005597378 python3.9[211887]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:27:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:27:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:27:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:27:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:27:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:27:47 np0005597378 python3.9[212012]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769520466.777181-1319-93190077371312/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:48 np0005597378 python3.9[212164]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:27:48 np0005597378 podman[212165]: 2026-01-27 13:27:48.711162911 +0000 UTC m=+0.054390265 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:27:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:49 np0005597378 python3.9[212335]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:27:50 np0005597378 python3.9[212490]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:50 np0005597378 python3.9[212642]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:27:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:51 np0005597378 python3.9[212795]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:27:52 np0005597378 python3.9[212949]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:27:52 np0005597378 python3.9[213104]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:53 np0005597378 python3.9[213256]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:27:53 np0005597378 python3.9[213379]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520472.913474-1391-138074018130458/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:54 np0005597378 python3.9[213533]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:55 np0005597378 python3.9[213656]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520474.0957134-1406-176330865188464/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:55 np0005597378 python3.9[213808]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:27:56 np0005597378 python3.9[213931]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520475.2519538-1421-194266553305845/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:27:57 np0005597378 python3.9[214083]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:27:57 np0005597378 systemd[1]: Reloading.
Jan 27 08:27:57 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:57 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:57 np0005597378 systemd[1]: Reached target edpm_libvirt.target.
Jan 27 08:27:58 np0005597378 python3.9[214274]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 27 08:27:58 np0005597378 systemd[1]: Reloading.
Jan 27 08:27:58 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:58 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:58 np0005597378 systemd[1]: Reloading.
Jan 27 08:27:58 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:27:58 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:27:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:27:59 np0005597378 systemd[1]: session-48.scope: Deactivated successfully.
Jan 27 08:27:59 np0005597378 systemd[1]: session-48.scope: Consumed 3min 17.872s CPU time.
Jan 27 08:27:59 np0005597378 systemd-logind[786]: Session 48 logged out. Waiting for processes to exit.
Jan 27 08:27:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:27:59 np0005597378 systemd-logind[786]: Removed session 48.
Jan 27 08:28:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:04 np0005597378 systemd-logind[786]: New session 49 of user zuul.
Jan 27 08:28:04 np0005597378 systemd[1]: Started Session 49 of User zuul.
Jan 27 08:28:05 np0005597378 python3.9[214525]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:28:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:06 np0005597378 python3.9[214679]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:28:06 np0005597378 network[214696]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:28:06 np0005597378 network[214697]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:28:06 np0005597378 network[214698]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:28:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:10 np0005597378 python3.9[214970]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 08:28:11 np0005597378 python3.9[215054]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:28:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:28:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:28:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:28:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:28:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:28:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:28:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:28:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:28:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:28:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:28:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:28:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:28:12 np0005597378 podman[215199]: 2026-01-27 13:28:12.162053456 +0000 UTC m=+0.039973609 container create 2d059a0fde3eae04eaa6c758139260de410a6dd90cb942f5e8503f6b1c8b82a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:12 np0005597378 systemd[1]: Started libpod-conmon-2d059a0fde3eae04eaa6c758139260de410a6dd90cb942f5e8503f6b1c8b82a5.scope.
Jan 27 08:28:12 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:28:12 np0005597378 podman[215199]: 2026-01-27 13:28:12.14341184 +0000 UTC m=+0.021332013 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:28:12 np0005597378 podman[215199]: 2026-01-27 13:28:12.253531972 +0000 UTC m=+0.131452165 container init 2d059a0fde3eae04eaa6c758139260de410a6dd90cb942f5e8503f6b1c8b82a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_brown, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:28:12 np0005597378 podman[215199]: 2026-01-27 13:28:12.259351413 +0000 UTC m=+0.137271566 container start 2d059a0fde3eae04eaa6c758139260de410a6dd90cb942f5e8503f6b1c8b82a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:12 np0005597378 podman[215199]: 2026-01-27 13:28:12.262373767 +0000 UTC m=+0.140293940 container attach 2d059a0fde3eae04eaa6c758139260de410a6dd90cb942f5e8503f6b1c8b82a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:28:12 np0005597378 clever_brown[215215]: 167 167
Jan 27 08:28:12 np0005597378 systemd[1]: libpod-2d059a0fde3eae04eaa6c758139260de410a6dd90cb942f5e8503f6b1c8b82a5.scope: Deactivated successfully.
Jan 27 08:28:12 np0005597378 conmon[215215]: conmon 2d059a0fde3eae04eaa6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2d059a0fde3eae04eaa6c758139260de410a6dd90cb942f5e8503f6b1c8b82a5.scope/container/memory.events
Jan 27 08:28:12 np0005597378 podman[215199]: 2026-01-27 13:28:12.265163455 +0000 UTC m=+0.143083608 container died 2d059a0fde3eae04eaa6c758139260de410a6dd90cb942f5e8503f6b1c8b82a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:12 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cc6b0a56ced5a26c22e468123aa2ececfc3de3476bce99fe52832784517f72c8-merged.mount: Deactivated successfully.
Jan 27 08:28:12 np0005597378 podman[215199]: 2026-01-27 13:28:12.303394974 +0000 UTC m=+0.181315127 container remove 2d059a0fde3eae04eaa6c758139260de410a6dd90cb942f5e8503f6b1c8b82a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:28:12 np0005597378 systemd[1]: libpod-conmon-2d059a0fde3eae04eaa6c758139260de410a6dd90cb942f5e8503f6b1c8b82a5.scope: Deactivated successfully.
Jan 27 08:28:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:28:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:28:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:28:12 np0005597378 podman[215239]: 2026-01-27 13:28:12.450281436 +0000 UTC m=+0.038089407 container create 16c29747e8d9e04f4e93f03c54e8b27615febcde95656d1c5b71533933cad085 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:28:12 np0005597378 systemd[1]: Started libpod-conmon-16c29747e8d9e04f4e93f03c54e8b27615febcde95656d1c5b71533933cad085.scope.
Jan 27 08:28:12 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:28:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a057326a5c941f723f7c8dac972f31a8994e6ffba3469c1e206971f6d38eb9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a057326a5c941f723f7c8dac972f31a8994e6ffba3469c1e206971f6d38eb9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a057326a5c941f723f7c8dac972f31a8994e6ffba3469c1e206971f6d38eb9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a057326a5c941f723f7c8dac972f31a8994e6ffba3469c1e206971f6d38eb9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a057326a5c941f723f7c8dac972f31a8994e6ffba3469c1e206971f6d38eb9c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:12 np0005597378 podman[215239]: 2026-01-27 13:28:12.434830978 +0000 UTC m=+0.022638969 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:28:12 np0005597378 podman[215239]: 2026-01-27 13:28:12.536615039 +0000 UTC m=+0.124423030 container init 16c29747e8d9e04f4e93f03c54e8b27615febcde95656d1c5b71533933cad085 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:12 np0005597378 podman[215239]: 2026-01-27 13:28:12.544652692 +0000 UTC m=+0.132460663 container start 16c29747e8d9e04f4e93f03c54e8b27615febcde95656d1c5b71533933cad085 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 08:28:12 np0005597378 podman[215239]: 2026-01-27 13:28:12.548758686 +0000 UTC m=+0.136566687 container attach 16c29747e8d9e04f4e93f03c54e8b27615febcde95656d1c5b71533933cad085 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:28:12 np0005597378 keen_clarke[215255]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:28:12 np0005597378 keen_clarke[215255]: --> All data devices are unavailable
Jan 27 08:28:12 np0005597378 systemd[1]: libpod-16c29747e8d9e04f4e93f03c54e8b27615febcde95656d1c5b71533933cad085.scope: Deactivated successfully.
Jan 27 08:28:12 np0005597378 podman[215239]: 2026-01-27 13:28:12.987632462 +0000 UTC m=+0.575440443 container died 16c29747e8d9e04f4e93f03c54e8b27615febcde95656d1c5b71533933cad085 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 08:28:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3a057326a5c941f723f7c8dac972f31a8994e6ffba3469c1e206971f6d38eb9c-merged.mount: Deactivated successfully.
Jan 27 08:28:13 np0005597378 podman[215239]: 2026-01-27 13:28:13.040396104 +0000 UTC m=+0.628204075 container remove 16c29747e8d9e04f4e93f03c54e8b27615febcde95656d1c5b71533933cad085 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:28:13 np0005597378 systemd[1]: libpod-conmon-16c29747e8d9e04f4e93f03c54e8b27615febcde95656d1c5b71533933cad085.scope: Deactivated successfully.
Jan 27 08:28:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:13 np0005597378 podman[215348]: 2026-01-27 13:28:13.503694247 +0000 UTC m=+0.042040847 container create 9caf6209471bcd6c552046089ba51915383301feb911b2fbf90cf9ee914613cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_yalow, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:28:13 np0005597378 systemd[1]: Started libpod-conmon-9caf6209471bcd6c552046089ba51915383301feb911b2fbf90cf9ee914613cd.scope.
Jan 27 08:28:13 np0005597378 podman[215348]: 2026-01-27 13:28:13.484998889 +0000 UTC m=+0.023345519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:28:13 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:28:13 np0005597378 podman[215348]: 2026-01-27 13:28:13.594049142 +0000 UTC m=+0.132395772 container init 9caf6209471bcd6c552046089ba51915383301feb911b2fbf90cf9ee914613cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 08:28:13 np0005597378 podman[215348]: 2026-01-27 13:28:13.602698351 +0000 UTC m=+0.141044951 container start 9caf6209471bcd6c552046089ba51915383301feb911b2fbf90cf9ee914613cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_yalow, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:28:13 np0005597378 podman[215348]: 2026-01-27 13:28:13.6062769 +0000 UTC m=+0.144623520 container attach 9caf6209471bcd6c552046089ba51915383301feb911b2fbf90cf9ee914613cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_yalow, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 08:28:13 np0005597378 confident_yalow[215364]: 167 167
Jan 27 08:28:13 np0005597378 systemd[1]: libpod-9caf6209471bcd6c552046089ba51915383301feb911b2fbf90cf9ee914613cd.scope: Deactivated successfully.
Jan 27 08:28:13 np0005597378 podman[215348]: 2026-01-27 13:28:13.609107328 +0000 UTC m=+0.147453938 container died 9caf6209471bcd6c552046089ba51915383301feb911b2fbf90cf9ee914613cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_yalow, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 08:28:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5cb74cb7e7306f23a212f71d99d486d9411fc015e2cef6872ea22481f3c565d8-merged.mount: Deactivated successfully.
Jan 27 08:28:13 np0005597378 podman[215348]: 2026-01-27 13:28:13.650451545 +0000 UTC m=+0.188798145 container remove 9caf6209471bcd6c552046089ba51915383301feb911b2fbf90cf9ee914613cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:28:13 np0005597378 systemd[1]: libpod-conmon-9caf6209471bcd6c552046089ba51915383301feb911b2fbf90cf9ee914613cd.scope: Deactivated successfully.
Jan 27 08:28:13 np0005597378 podman[215388]: 2026-01-27 13:28:13.824676434 +0000 UTC m=+0.041462190 container create 83e5e60ae86314803c89c77e4dbc0acff2d9115cbae93e329c3a10bf37bd4722 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sinoussi, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 08:28:13 np0005597378 systemd[1]: Started libpod-conmon-83e5e60ae86314803c89c77e4dbc0acff2d9115cbae93e329c3a10bf37bd4722.scope.
Jan 27 08:28:13 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:28:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d9dea009f96eeb0d228265f2e20c3b928d644b50b8d0ac28167a03fa886f08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d9dea009f96eeb0d228265f2e20c3b928d644b50b8d0ac28167a03fa886f08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:13 np0005597378 podman[215388]: 2026-01-27 13:28:13.805571855 +0000 UTC m=+0.022357621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:28:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d9dea009f96eeb0d228265f2e20c3b928d644b50b8d0ac28167a03fa886f08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d9dea009f96eeb0d228265f2e20c3b928d644b50b8d0ac28167a03fa886f08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:13 np0005597378 podman[215388]: 2026-01-27 13:28:13.918223597 +0000 UTC m=+0.135009373 container init 83e5e60ae86314803c89c77e4dbc0acff2d9115cbae93e329c3a10bf37bd4722 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sinoussi, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:28:13 np0005597378 podman[215388]: 2026-01-27 13:28:13.925364525 +0000 UTC m=+0.142150271 container start 83e5e60ae86314803c89c77e4dbc0acff2d9115cbae93e329c3a10bf37bd4722 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:28:13 np0005597378 podman[215388]: 2026-01-27 13:28:13.929618933 +0000 UTC m=+0.146404699 container attach 83e5e60ae86314803c89c77e4dbc0acff2d9115cbae93e329c3a10bf37bd4722 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sinoussi, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 08:28:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]: {
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:    "0": [
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:        {
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "devices": [
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "/dev/loop3"
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            ],
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_name": "ceph_lv0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_size": "21470642176",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "name": "ceph_lv0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "tags": {
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.cluster_name": "ceph",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.crush_device_class": "",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.encrypted": "0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.objectstore": "bluestore",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.osd_id": "0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.type": "block",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.vdo": "0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.with_tpm": "0"
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            },
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "type": "block",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "vg_name": "ceph_vg0"
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:        }
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:    ],
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:    "1": [
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:        {
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "devices": [
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "/dev/loop4"
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            ],
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_name": "ceph_lv1",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_size": "21470642176",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "name": "ceph_lv1",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "tags": {
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.cluster_name": "ceph",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.crush_device_class": "",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.encrypted": "0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.objectstore": "bluestore",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.osd_id": "1",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.type": "block",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.vdo": "0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.with_tpm": "0"
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            },
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "type": "block",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "vg_name": "ceph_vg1"
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:        }
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:    ],
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:    "2": [
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:        {
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "devices": [
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "/dev/loop5"
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            ],
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_name": "ceph_lv2",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_size": "21470642176",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "name": "ceph_lv2",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "tags": {
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.cluster_name": "ceph",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.crush_device_class": "",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.encrypted": "0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.objectstore": "bluestore",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.osd_id": "2",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.type": "block",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.vdo": "0",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:                "ceph.with_tpm": "0"
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            },
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "type": "block",
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:            "vg_name": "ceph_vg2"
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:        }
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]:    ]
Jan 27 08:28:14 np0005597378 tender_sinoussi[215405]: }
Jan 27 08:28:14 np0005597378 systemd[1]: libpod-83e5e60ae86314803c89c77e4dbc0acff2d9115cbae93e329c3a10bf37bd4722.scope: Deactivated successfully.
Jan 27 08:28:14 np0005597378 podman[215388]: 2026-01-27 13:28:14.235007859 +0000 UTC m=+0.451793605 container died 83e5e60ae86314803c89c77e4dbc0acff2d9115cbae93e329c3a10bf37bd4722 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 08:28:14 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d2d9dea009f96eeb0d228265f2e20c3b928d644b50b8d0ac28167a03fa886f08-merged.mount: Deactivated successfully.
Jan 27 08:28:14 np0005597378 podman[215388]: 2026-01-27 13:28:14.357780272 +0000 UTC m=+0.574566018 container remove 83e5e60ae86314803c89c77e4dbc0acff2d9115cbae93e329c3a10bf37bd4722 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:28:14 np0005597378 systemd[1]: libpod-conmon-83e5e60ae86314803c89c77e4dbc0acff2d9115cbae93e329c3a10bf37bd4722.scope: Deactivated successfully.
Jan 27 08:28:14 np0005597378 podman[215488]: 2026-01-27 13:28:14.817765204 +0000 UTC m=+0.036724870 container create 295bc6d4356186dbaf20dac5b3f43e7b3878d58d61d5c58e9fefd773bb76b0f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:28:14 np0005597378 podman[215488]: 2026-01-27 13:28:14.800016111 +0000 UTC m=+0.018975797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:28:14 np0005597378 systemd[1]: Started libpod-conmon-295bc6d4356186dbaf20dac5b3f43e7b3878d58d61d5c58e9fefd773bb76b0f2.scope.
Jan 27 08:28:14 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:28:15 np0005597378 podman[215488]: 2026-01-27 13:28:15.041863166 +0000 UTC m=+0.260822862 container init 295bc6d4356186dbaf20dac5b3f43e7b3878d58d61d5c58e9fefd773bb76b0f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:28:15 np0005597378 podman[215488]: 2026-01-27 13:28:15.049101176 +0000 UTC m=+0.268060842 container start 295bc6d4356186dbaf20dac5b3f43e7b3878d58d61d5c58e9fefd773bb76b0f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elbakyan, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:28:15 np0005597378 affectionate_elbakyan[215504]: 167 167
Jan 27 08:28:15 np0005597378 systemd[1]: libpod-295bc6d4356186dbaf20dac5b3f43e7b3878d58d61d5c58e9fefd773bb76b0f2.scope: Deactivated successfully.
Jan 27 08:28:15 np0005597378 podman[215488]: 2026-01-27 13:28:15.101241341 +0000 UTC m=+0.320201027 container attach 295bc6d4356186dbaf20dac5b3f43e7b3878d58d61d5c58e9fefd773bb76b0f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elbakyan, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 08:28:15 np0005597378 podman[215488]: 2026-01-27 13:28:15.102170427 +0000 UTC m=+0.321130093 container died 295bc6d4356186dbaf20dac5b3f43e7b3878d58d61d5c58e9fefd773bb76b0f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elbakyan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 08:28:15 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0fdc4b20a14014fdff4389610cc73d68d1fc25ee15d853e20c857f08d148e6bd-merged.mount: Deactivated successfully.
Jan 27 08:28:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:15 np0005597378 podman[215488]: 2026-01-27 13:28:15.31152201 +0000 UTC m=+0.530481676 container remove 295bc6d4356186dbaf20dac5b3f43e7b3878d58d61d5c58e9fefd773bb76b0f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:28:15 np0005597378 systemd[1]: libpod-conmon-295bc6d4356186dbaf20dac5b3f43e7b3878d58d61d5c58e9fefd773bb76b0f2.scope: Deactivated successfully.
Jan 27 08:28:15 np0005597378 podman[215523]: 2026-01-27 13:28:15.456429327 +0000 UTC m=+0.102495252 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:28:15 np0005597378 podman[215550]: 2026-01-27 13:28:15.470786675 +0000 UTC m=+0.043904497 container create 5358ca09e4adc3dfdc3266be738f3f6dc82bc0161926e1652d6e101d744a27ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_shirley, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:28:15 np0005597378 systemd[1]: Started libpod-conmon-5358ca09e4adc3dfdc3266be738f3f6dc82bc0161926e1652d6e101d744a27ed.scope.
Jan 27 08:28:15 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:28:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd64ebb0bfb23327d29a1cbc93505f217fe0ccb7495feb0b7fa438a0c5337b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd64ebb0bfb23327d29a1cbc93505f217fe0ccb7495feb0b7fa438a0c5337b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:15 np0005597378 podman[215550]: 2026-01-27 13:28:15.450316717 +0000 UTC m=+0.023434559 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:28:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd64ebb0bfb23327d29a1cbc93505f217fe0ccb7495feb0b7fa438a0c5337b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd64ebb0bfb23327d29a1cbc93505f217fe0ccb7495feb0b7fa438a0c5337b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:28:15 np0005597378 podman[215550]: 2026-01-27 13:28:15.5611212 +0000 UTC m=+0.134239042 container init 5358ca09e4adc3dfdc3266be738f3f6dc82bc0161926e1652d6e101d744a27ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 08:28:15 np0005597378 podman[215550]: 2026-01-27 13:28:15.567511096 +0000 UTC m=+0.140628918 container start 5358ca09e4adc3dfdc3266be738f3f6dc82bc0161926e1652d6e101d744a27ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_shirley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:28:15 np0005597378 podman[215550]: 2026-01-27 13:28:15.570424077 +0000 UTC m=+0.143541919 container attach 5358ca09e4adc3dfdc3266be738f3f6dc82bc0161926e1652d6e101d744a27ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 08:28:16 np0005597378 lvm[215653]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:28:16 np0005597378 lvm[215652]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:28:16 np0005597378 lvm[215652]: VG ceph_vg0 finished
Jan 27 08:28:16 np0005597378 lvm[215653]: VG ceph_vg1 finished
Jan 27 08:28:16 np0005597378 lvm[215655]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:28:16 np0005597378 lvm[215655]: VG ceph_vg2 finished
Jan 27 08:28:16 np0005597378 naughty_shirley[215574]: {}
Jan 27 08:28:16 np0005597378 systemd[1]: libpod-5358ca09e4adc3dfdc3266be738f3f6dc82bc0161926e1652d6e101d744a27ed.scope: Deactivated successfully.
Jan 27 08:28:16 np0005597378 podman[215550]: 2026-01-27 13:28:16.39520303 +0000 UTC m=+0.968320852 container died 5358ca09e4adc3dfdc3266be738f3f6dc82bc0161926e1652d6e101d744a27ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:28:16 np0005597378 systemd[1]: libpod-5358ca09e4adc3dfdc3266be738f3f6dc82bc0161926e1652d6e101d744a27ed.scope: Consumed 1.424s CPU time.
Jan 27 08:28:16 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fdd64ebb0bfb23327d29a1cbc93505f217fe0ccb7495feb0b7fa438a0c5337b2-merged.mount: Deactivated successfully.
Jan 27 08:28:16 np0005597378 podman[215550]: 2026-01-27 13:28:16.432802233 +0000 UTC m=+1.005920055 container remove 5358ca09e4adc3dfdc3266be738f3f6dc82bc0161926e1652d6e101d744a27ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:28:16 np0005597378 systemd[1]: libpod-conmon-5358ca09e4adc3dfdc3266be738f3f6dc82bc0161926e1652d6e101d744a27ed.scope: Deactivated successfully.
Jan 27 08:28:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:28:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:28:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:28:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:28:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:28:16
Jan 27 08:28:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:28:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:28:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'vms', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Jan 27 08:28:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:17 np0005597378 python3.9[215846]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:28:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:28:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:28:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:28:18 np0005597378 python3.9[215998]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:28:18 np0005597378 podman[216123]: 2026-01-27 13:28:18.841161423 +0000 UTC m=+0.067400749 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 08:28:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:19 np0005597378 python3.9[216168]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:28:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:19 np0005597378 python3.9[216322]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:28:20 np0005597378 python3.9[216475]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:28:20 np0005597378 python3.9[216598]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520499.742422-90-54129759875259/.source.iscsi _original_basename=.gjowd7od follow=False checksum=e2c5990fdabebde83dd1cd75cdfc2ef5cdc3c189 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:21 np0005597378 python3.9[216750]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:22 np0005597378 python3.9[216902]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:23 np0005597378 python3.9[217054]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:28:23 np0005597378 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 27 08:28:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:24 np0005597378 python3.9[217210]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:28:24 np0005597378 systemd[1]: Reloading.
Jan 27 08:28:24 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:28:24 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:28:24 np0005597378 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 27 08:28:24 np0005597378 systemd[1]: Starting Open-iSCSI...
Jan 27 08:28:24 np0005597378 kernel: Loading iSCSI transport class v2.0-870.
Jan 27 08:28:24 np0005597378 systemd[1]: Started Open-iSCSI.
Jan 27 08:28:24 np0005597378 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 27 08:28:24 np0005597378 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 27 08:28:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:25 np0005597378 python3.9[217410]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:28:25 np0005597378 network[217427]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:28:25 np0005597378 network[217428]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:28:25 np0005597378 network[217429]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:28:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:29 np0005597378 python3.9[217701]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:28:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:32 np0005597378 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:28:32 np0005597378 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:28:32 np0005597378 systemd[1]: Reloading.
Jan 27 08:28:32 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:28:32 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:28:32 np0005597378 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:28:32 np0005597378 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:28:32 np0005597378 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:28:32 np0005597378 systemd[1]: run-r98b84859c17444d597a16a16db13e104.service: Deactivated successfully.
Jan 27 08:28:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:33 np0005597378 python3.9[218019]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 27 08:28:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:34 np0005597378 python3.9[218171]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 27 08:28:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:35 np0005597378 python3.9[218327]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:28:35 np0005597378 python3.9[218450]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520514.8484101-178-256979785917491/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:36 np0005597378 python3.9[218602]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:37 np0005597378 python3.9[218754]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:28:37 np0005597378 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 27 08:28:37 np0005597378 systemd[1]: Stopped Load Kernel Modules.
Jan 27 08:28:37 np0005597378 systemd[1]: Stopping Load Kernel Modules...
Jan 27 08:28:37 np0005597378 systemd[1]: Starting Load Kernel Modules...
Jan 27 08:28:37 np0005597378 systemd[1]: Finished Load Kernel Modules.
Jan 27 08:28:38 np0005597378 python3.9[218910]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:28:38 np0005597378 python3.9[219063]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:28:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:39 np0005597378 python3.9[219215]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:28:40 np0005597378 python3.9[219338]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520519.0996494-229-10369884553892/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:40 np0005597378 python3.9[219490]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:28:41 np0005597378 python3.9[219643]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:41 np0005597378 python3.9[219795]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:42 np0005597378 python3.9[219947]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:43 np0005597378 python3.9[220099]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:43 np0005597378 python3.9[220251]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:44 np0005597378 python3.9[220403]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:45 np0005597378 python3.9[220555]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:45 np0005597378 podman[220679]: 2026-01-27 13:28:45.608199222 +0000 UTC m=+0.080893642 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller)
Jan 27 08:28:45 np0005597378 python3.9[220727]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:28:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:28:46.275 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:28:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:28:46.275 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:28:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:28:46.275 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:28:46 np0005597378 python3.9[220887]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:28:47 np0005597378 python3.9[221040]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:28:47 np0005597378 systemd[1]: Listening on multipathd control socket.
Jan 27 08:28:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:28:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:28:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:28:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:28:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:28:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:28:47 np0005597378 python3.9[221196]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:28:47 np0005597378 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 27 08:28:47 np0005597378 udevadm[221201]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 27 08:28:47 np0005597378 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 27 08:28:47 np0005597378 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 27 08:28:48 np0005597378 multipathd[221204]: --------start up--------
Jan 27 08:28:48 np0005597378 multipathd[221204]: read /etc/multipath.conf
Jan 27 08:28:48 np0005597378 multipathd[221204]: path checkers start up
Jan 27 08:28:48 np0005597378 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 27 08:28:48 np0005597378 python3.9[221363]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 27 08:28:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:49 np0005597378 podman[221487]: 2026-01-27 13:28:49.401450503 +0000 UTC m=+0.051872469 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 27 08:28:49 np0005597378 python3.9[221534]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 27 08:28:49 np0005597378 kernel: Key type psk registered
Jan 27 08:28:50 np0005597378 python3.9[221697]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:28:50 np0005597378 python3.9[221820]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769520529.7882035-359-278316216010149/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:51 np0005597378 python3.9[221972]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:28:52 np0005597378 python3.9[222124]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:28:52 np0005597378 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 27 08:28:52 np0005597378 systemd[1]: Stopped Load Kernel Modules.
Jan 27 08:28:52 np0005597378 systemd[1]: Stopping Load Kernel Modules...
Jan 27 08:28:52 np0005597378 systemd[1]: Starting Load Kernel Modules...
Jan 27 08:28:52 np0005597378 systemd[1]: Finished Load Kernel Modules.
Jan 27 08:28:53 np0005597378 python3.9[222280]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 08:28:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:55 np0005597378 systemd[1]: Reloading.
Jan 27 08:28:55 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:28:55 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:28:55 np0005597378 systemd[1]: Reloading.
Jan 27 08:28:55 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:28:55 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:28:56 np0005597378 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 27 08:28:56 np0005597378 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 27 08:28:56 np0005597378 lvm[222395]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:28:56 np0005597378 lvm[222395]: VG ceph_vg2 finished
Jan 27 08:28:56 np0005597378 lvm[222396]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:28:56 np0005597378 lvm[222396]: VG ceph_vg0 finished
Jan 27 08:28:56 np0005597378 lvm[222393]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:28:56 np0005597378 lvm[222393]: VG ceph_vg1 finished
Jan 27 08:28:56 np0005597378 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 08:28:56 np0005597378 systemd[1]: Starting man-db-cache-update.service...
Jan 27 08:28:56 np0005597378 systemd[1]: Reloading.
Jan 27 08:28:56 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:28:56 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:28:57 np0005597378 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 08:28:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:57 np0005597378 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 08:28:57 np0005597378 systemd[1]: Finished man-db-cache-update.service.
Jan 27 08:28:57 np0005597378 systemd[1]: man-db-cache-update.service: Consumed 1.457s CPU time.
Jan 27 08:28:57 np0005597378 systemd[1]: run-r447f9de0fc804445b7634756da54dc6b.service: Deactivated successfully.
Jan 27 08:28:58 np0005597378 python3.9[223751]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:28:58 np0005597378 systemd[1]: Stopping Open-iSCSI...
Jan 27 08:28:58 np0005597378 iscsid[217250]: iscsid shutting down.
Jan 27 08:28:58 np0005597378 systemd[1]: iscsid.service: Deactivated successfully.
Jan 27 08:28:58 np0005597378 systemd[1]: Stopped Open-iSCSI.
Jan 27 08:28:58 np0005597378 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 27 08:28:58 np0005597378 systemd[1]: Starting Open-iSCSI...
Jan 27 08:28:58 np0005597378 systemd[1]: Started Open-iSCSI.
Jan 27 08:28:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:28:59 np0005597378 python3.9[223907]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:28:59 np0005597378 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 27 08:28:59 np0005597378 multipathd[221204]: exit (signal)
Jan 27 08:28:59 np0005597378 multipathd[221204]: --------shut down-------
Jan 27 08:28:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:28:59 np0005597378 systemd[1]: multipathd.service: Deactivated successfully.
Jan 27 08:28:59 np0005597378 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 27 08:28:59 np0005597378 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 27 08:28:59 np0005597378 multipathd[223913]: --------start up--------
Jan 27 08:28:59 np0005597378 multipathd[223913]: read /etc/multipath.conf
Jan 27 08:28:59 np0005597378 multipathd[223913]: path checkers start up
Jan 27 08:28:59 np0005597378 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 27 08:29:00 np0005597378 python3.9[224070]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 08:29:01 np0005597378 python3.9[224226]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:02 np0005597378 python3.9[224378]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:29:02 np0005597378 systemd[1]: Reloading.
Jan 27 08:29:02 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:29:02 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:29:03 np0005597378 python3.9[224563]: ansible-ansible.builtin.service_facts Invoked
Jan 27 08:29:03 np0005597378 network[224580]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 08:29:03 np0005597378 network[224581]: 'network-scripts' will be removed from distribution in near future.
Jan 27 08:29:03 np0005597378 network[224582]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 08:29:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:07 np0005597378 python3.9[224855]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:29:08 np0005597378 python3.9[225008]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:29:08 np0005597378 python3.9[225161]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:29:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:09 np0005597378 python3.9[225314]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:29:10 np0005597378 python3.9[225467]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:29:10 np0005597378 python3.9[225620]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:29:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:11 np0005597378 python3.9[225773]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:29:12 np0005597378 python3.9[225926]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:29:13 np0005597378 python3.9[226079]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:13 np0005597378 python3.9[226231]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:14 np0005597378 python3.9[226383]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:14 np0005597378 python3.9[226535]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:15 np0005597378 python3.9[226687]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:15 np0005597378 podman[226755]: 2026-01-27 13:29:15.759563167 +0000 UTC m=+0.090540851 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:29:16 np0005597378 python3.9[226866]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:16 np0005597378 python3.9[227018]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:29:16
Jan 27 08:29:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:29:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:29:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', 'volumes', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 27 08:29:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:29:17 np0005597378 python3.9[227234]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:29:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:29:17 np0005597378 podman[227465]: 2026-01-27 13:29:17.792524761 +0000 UTC m=+0.080139473 container create 91c885f32adad2510fb0cfe6994515d3041acfaf5d66ef501db1b3afd194d1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_almeida, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:29:17 np0005597378 podman[227465]: 2026-01-27 13:29:17.73411318 +0000 UTC m=+0.021727912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:29:17 np0005597378 systemd[1]: Started libpod-conmon-91c885f32adad2510fb0cfe6994515d3041acfaf5d66ef501db1b3afd194d1a7.scope.
Jan 27 08:29:17 np0005597378 python3.9[227452]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:17 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:29:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:29:17 np0005597378 podman[227465]: 2026-01-27 13:29:17.905709859 +0000 UTC m=+0.193324601 container init 91c885f32adad2510fb0cfe6994515d3041acfaf5d66ef501db1b3afd194d1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_almeida, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:29:17 np0005597378 podman[227465]: 2026-01-27 13:29:17.916398624 +0000 UTC m=+0.204013366 container start 91c885f32adad2510fb0cfe6994515d3041acfaf5d66ef501db1b3afd194d1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_almeida, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:29:17 np0005597378 sad_almeida[227481]: 167 167
Jan 27 08:29:17 np0005597378 systemd[1]: libpod-91c885f32adad2510fb0cfe6994515d3041acfaf5d66ef501db1b3afd194d1a7.scope: Deactivated successfully.
Jan 27 08:29:17 np0005597378 podman[227465]: 2026-01-27 13:29:17.928850371 +0000 UTC m=+0.216465103 container attach 91c885f32adad2510fb0cfe6994515d3041acfaf5d66ef501db1b3afd194d1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:29:17 np0005597378 podman[227465]: 2026-01-27 13:29:17.929533171 +0000 UTC m=+0.217147883 container died 91c885f32adad2510fb0cfe6994515d3041acfaf5d66ef501db1b3afd194d1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_almeida, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:29:18 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a419cb4ea2abbce1d3d024305a96e871cfb14af343f8732429c219ca9144ab97-merged.mount: Deactivated successfully.
Jan 27 08:29:18 np0005597378 podman[227465]: 2026-01-27 13:29:18.145144828 +0000 UTC m=+0.432759570 container remove 91c885f32adad2510fb0cfe6994515d3041acfaf5d66ef501db1b3afd194d1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_almeida, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:29:18 np0005597378 systemd[1]: libpod-conmon-91c885f32adad2510fb0cfe6994515d3041acfaf5d66ef501db1b3afd194d1a7.scope: Deactivated successfully.
Jan 27 08:29:18 np0005597378 podman[227657]: 2026-01-27 13:29:18.373307295 +0000 UTC m=+0.108518136 container create 2b3cff5b0fe1a24cfb61a800b48b88b369c846e989a5b391d054aa203a7d0be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jackson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:29:18 np0005597378 podman[227657]: 2026-01-27 13:29:18.288845128 +0000 UTC m=+0.024056019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:29:18 np0005597378 systemd[1]: Started libpod-conmon-2b3cff5b0fe1a24cfb61a800b48b88b369c846e989a5b391d054aa203a7d0be2.scope.
Jan 27 08:29:18 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:29:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f72390e6178003072baeca479552fa4f6a12a9c7e0567024a15823cbd182e61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f72390e6178003072baeca479552fa4f6a12a9c7e0567024a15823cbd182e61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f72390e6178003072baeca479552fa4f6a12a9c7e0567024a15823cbd182e61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f72390e6178003072baeca479552fa4f6a12a9c7e0567024a15823cbd182e61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f72390e6178003072baeca479552fa4f6a12a9c7e0567024a15823cbd182e61/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:18 np0005597378 python3.9[227659]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:18 np0005597378 podman[227657]: 2026-01-27 13:29:18.513068353 +0000 UTC m=+0.248279234 container init 2b3cff5b0fe1a24cfb61a800b48b88b369c846e989a5b391d054aa203a7d0be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 08:29:18 np0005597378 podman[227657]: 2026-01-27 13:29:18.521111042 +0000 UTC m=+0.256321893 container start 2b3cff5b0fe1a24cfb61a800b48b88b369c846e989a5b391d054aa203a7d0be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:29:18 np0005597378 podman[227657]: 2026-01-27 13:29:18.536281707 +0000 UTC m=+0.271492548 container attach 2b3cff5b0fe1a24cfb61a800b48b88b369c846e989a5b391d054aa203a7d0be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:29:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:18 np0005597378 adoring_jackson[227674]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:29:18 np0005597378 adoring_jackson[227674]: --> All data devices are unavailable
Jan 27 08:29:19 np0005597378 systemd[1]: libpod-2b3cff5b0fe1a24cfb61a800b48b88b369c846e989a5b391d054aa203a7d0be2.scope: Deactivated successfully.
Jan 27 08:29:19 np0005597378 podman[227657]: 2026-01-27 13:29:19.032916953 +0000 UTC m=+0.768127794 container died 2b3cff5b0fe1a24cfb61a800b48b88b369c846e989a5b391d054aa203a7d0be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jackson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:29:19 np0005597378 python3.9[227842]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5f72390e6178003072baeca479552fa4f6a12a9c7e0567024a15823cbd182e61-merged.mount: Deactivated successfully.
Jan 27 08:29:19 np0005597378 podman[227657]: 2026-01-27 13:29:19.270321244 +0000 UTC m=+1.005532075 container remove 2b3cff5b0fe1a24cfb61a800b48b88b369c846e989a5b391d054aa203a7d0be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:29:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:19 np0005597378 systemd[1]: libpod-conmon-2b3cff5b0fe1a24cfb61a800b48b88b369c846e989a5b391d054aa203a7d0be2.scope: Deactivated successfully.
Jan 27 08:29:19 np0005597378 podman[228032]: 2026-01-27 13:29:19.53408206 +0000 UTC m=+0.057312821 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:29:19 np0005597378 python3.9[228080]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:19 np0005597378 podman[228093]: 2026-01-27 13:29:19.771253764 +0000 UTC m=+0.071366523 container create b9532cd653a838eefde22095446da0945e2de3c794ae75464244f08af4b0f1e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dubinsky, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:29:19 np0005597378 podman[228093]: 2026-01-27 13:29:19.724311501 +0000 UTC m=+0.024424270 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:29:19 np0005597378 systemd[1]: Started libpod-conmon-b9532cd653a838eefde22095446da0945e2de3c794ae75464244f08af4b0f1e0.scope.
Jan 27 08:29:19 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:29:19 np0005597378 podman[228093]: 2026-01-27 13:29:19.933236807 +0000 UTC m=+0.233349566 container init b9532cd653a838eefde22095446da0945e2de3c794ae75464244f08af4b0f1e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dubinsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:29:19 np0005597378 podman[228093]: 2026-01-27 13:29:19.945025784 +0000 UTC m=+0.245138533 container start b9532cd653a838eefde22095446da0945e2de3c794ae75464244f08af4b0f1e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:29:19 np0005597378 eloquent_dubinsky[228156]: 167 167
Jan 27 08:29:19 np0005597378 systemd[1]: libpod-b9532cd653a838eefde22095446da0945e2de3c794ae75464244f08af4b0f1e0.scope: Deactivated successfully.
Jan 27 08:29:19 np0005597378 podman[228093]: 2026-01-27 13:29:19.988027005 +0000 UTC m=+0.288139774 container attach b9532cd653a838eefde22095446da0945e2de3c794ae75464244f08af4b0f1e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dubinsky, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:29:19 np0005597378 podman[228093]: 2026-01-27 13:29:19.98924726 +0000 UTC m=+0.289360019 container died b9532cd653a838eefde22095446da0945e2de3c794ae75464244f08af4b0f1e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:29:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5f4c70389a1ebc5389521670f13289e8d72a5cb808352d11a54e96bba89fd296-merged.mount: Deactivated successfully.
Jan 27 08:29:20 np0005597378 podman[228093]: 2026-01-27 13:29:20.16372064 +0000 UTC m=+0.463833389 container remove b9532cd653a838eefde22095446da0945e2de3c794ae75464244f08af4b0f1e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:29:20 np0005597378 systemd[1]: libpod-conmon-b9532cd653a838eefde22095446da0945e2de3c794ae75464244f08af4b0f1e0.scope: Deactivated successfully.
Jan 27 08:29:20 np0005597378 python3.9[228278]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:20 np0005597378 podman[228286]: 2026-01-27 13:29:20.386783191 +0000 UTC m=+0.098881919 container create 8ccb784017f9462c54bdce42a1e65261195ce37834c5d1a3cdceaa4230250b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_chebyshev, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:29:20 np0005597378 podman[228286]: 2026-01-27 13:29:20.321821733 +0000 UTC m=+0.033920491 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:29:20 np0005597378 systemd[1]: Started libpod-conmon-8ccb784017f9462c54bdce42a1e65261195ce37834c5d1a3cdceaa4230250b35.scope.
Jan 27 08:29:20 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:29:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd426e90bd2584cd9324fae2ca8289c8a0c2666cf58cd2705c3816a4395256ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd426e90bd2584cd9324fae2ca8289c8a0c2666cf58cd2705c3816a4395256ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd426e90bd2584cd9324fae2ca8289c8a0c2666cf58cd2705c3816a4395256ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd426e90bd2584cd9324fae2ca8289c8a0c2666cf58cd2705c3816a4395256ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:20 np0005597378 podman[228286]: 2026-01-27 13:29:20.579871895 +0000 UTC m=+0.291970643 container init 8ccb784017f9462c54bdce42a1e65261195ce37834c5d1a3cdceaa4230250b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_chebyshev, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:29:20 np0005597378 podman[228286]: 2026-01-27 13:29:20.585820185 +0000 UTC m=+0.297918923 container start 8ccb784017f9462c54bdce42a1e65261195ce37834c5d1a3cdceaa4230250b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:29:20 np0005597378 podman[228286]: 2026-01-27 13:29:20.623806101 +0000 UTC m=+0.335904879 container attach 8ccb784017f9462c54bdce42a1e65261195ce37834c5d1a3cdceaa4230250b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]: {
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:    "0": [
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:        {
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "devices": [
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "/dev/loop3"
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            ],
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_name": "ceph_lv0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_size": "21470642176",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "name": "ceph_lv0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "tags": {
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.cluster_name": "ceph",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.crush_device_class": "",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.encrypted": "0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.objectstore": "bluestore",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.osd_id": "0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.type": "block",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.vdo": "0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.with_tpm": "0"
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            },
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "type": "block",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "vg_name": "ceph_vg0"
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:        }
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:    ],
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:    "1": [
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:        {
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "devices": [
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "/dev/loop4"
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            ],
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_name": "ceph_lv1",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_size": "21470642176",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "name": "ceph_lv1",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "tags": {
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.cluster_name": "ceph",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.crush_device_class": "",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.encrypted": "0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.objectstore": "bluestore",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.osd_id": "1",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.type": "block",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.vdo": "0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.with_tpm": "0"
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            },
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "type": "block",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "vg_name": "ceph_vg1"
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:        }
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:    ],
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:    "2": [
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:        {
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "devices": [
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "/dev/loop5"
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            ],
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_name": "ceph_lv2",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_size": "21470642176",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "name": "ceph_lv2",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "tags": {
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.cluster_name": "ceph",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.crush_device_class": "",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.encrypted": "0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.objectstore": "bluestore",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.osd_id": "2",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.type": "block",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.vdo": "0",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:                "ceph.with_tpm": "0"
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            },
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "type": "block",
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:            "vg_name": "ceph_vg2"
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:        }
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]:    ]
Jan 27 08:29:20 np0005597378 gracious_chebyshev[228341]: }
Jan 27 08:29:20 np0005597378 systemd[1]: libpod-8ccb784017f9462c54bdce42a1e65261195ce37834c5d1a3cdceaa4230250b35.scope: Deactivated successfully.
Jan 27 08:29:20 np0005597378 podman[228286]: 2026-01-27 13:29:20.90552362 +0000 UTC m=+0.617622358 container died 8ccb784017f9462c54bdce42a1e65261195ce37834c5d1a3cdceaa4230250b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_chebyshev, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:29:20 np0005597378 python3.9[228459]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cd426e90bd2584cd9324fae2ca8289c8a0c2666cf58cd2705c3816a4395256ee-merged.mount: Deactivated successfully.
Jan 27 08:29:21 np0005597378 podman[228286]: 2026-01-27 13:29:21.128840558 +0000 UTC m=+0.840939296 container remove 8ccb784017f9462c54bdce42a1e65261195ce37834c5d1a3cdceaa4230250b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 08:29:21 np0005597378 systemd[1]: libpod-conmon-8ccb784017f9462c54bdce42a1e65261195ce37834c5d1a3cdceaa4230250b35.scope: Deactivated successfully.
Jan 27 08:29:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:21 np0005597378 podman[228689]: 2026-01-27 13:29:21.624026483 +0000 UTC m=+0.079955188 container create 7ee8902c5d45fea1c3cf65759070fea9b0fc70e373c5f6e1b9c8db0b5fbd9411 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_saha, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:29:21 np0005597378 python3.9[228676]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:21 np0005597378 podman[228689]: 2026-01-27 13:29:21.570460101 +0000 UTC m=+0.026388826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:29:21 np0005597378 systemd[1]: Started libpod-conmon-7ee8902c5d45fea1c3cf65759070fea9b0fc70e373c5f6e1b9c8db0b5fbd9411.scope.
Jan 27 08:29:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:29:21 np0005597378 podman[228689]: 2026-01-27 13:29:21.783108844 +0000 UTC m=+0.239037569 container init 7ee8902c5d45fea1c3cf65759070fea9b0fc70e373c5f6e1b9c8db0b5fbd9411 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_saha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 27 08:29:21 np0005597378 podman[228689]: 2026-01-27 13:29:21.793647776 +0000 UTC m=+0.249576481 container start 7ee8902c5d45fea1c3cf65759070fea9b0fc70e373c5f6e1b9c8db0b5fbd9411 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_saha, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 08:29:21 np0005597378 naughty_saha[228727]: 167 167
Jan 27 08:29:21 np0005597378 systemd[1]: libpod-7ee8902c5d45fea1c3cf65759070fea9b0fc70e373c5f6e1b9c8db0b5fbd9411.scope: Deactivated successfully.
Jan 27 08:29:21 np0005597378 podman[228689]: 2026-01-27 13:29:21.805646978 +0000 UTC m=+0.261575673 container attach 7ee8902c5d45fea1c3cf65759070fea9b0fc70e373c5f6e1b9c8db0b5fbd9411 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_saha, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:29:21 np0005597378 podman[228689]: 2026-01-27 13:29:21.805951307 +0000 UTC m=+0.261880012 container died 7ee8902c5d45fea1c3cf65759070fea9b0fc70e373c5f6e1b9c8db0b5fbd9411 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_saha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 08:29:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b68bb6a87d59674bd9ce9a99274b6ac87e3b8090bc9bf0456081a6f3b57e8bf2-merged.mount: Deactivated successfully.
Jan 27 08:29:21 np0005597378 podman[228689]: 2026-01-27 13:29:21.919206637 +0000 UTC m=+0.375135372 container remove 7ee8902c5d45fea1c3cf65759070fea9b0fc70e373c5f6e1b9c8db0b5fbd9411 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_saha, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 08:29:21 np0005597378 systemd[1]: libpod-conmon-7ee8902c5d45fea1c3cf65759070fea9b0fc70e373c5f6e1b9c8db0b5fbd9411.scope: Deactivated successfully.
Jan 27 08:29:22 np0005597378 podman[228884]: 2026-01-27 13:29:22.177724131 +0000 UTC m=+0.101465653 container create 5f5fe5b9677e81aa90f6fed8cc07cd197cb9c64acdf87608d6f2baadda1ab14b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:29:22 np0005597378 podman[228884]: 2026-01-27 13:29:22.106179046 +0000 UTC m=+0.029920598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:29:22 np0005597378 systemd[1]: Started libpod-conmon-5f5fe5b9677e81aa90f6fed8cc07cd197cb9c64acdf87608d6f2baadda1ab14b.scope.
Jan 27 08:29:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:29:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b599ee9634731ecbaf5012e37f666ce721f2595101a335c3ef9da36c5e17ce58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b599ee9634731ecbaf5012e37f666ce721f2595101a335c3ef9da36c5e17ce58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b599ee9634731ecbaf5012e37f666ce721f2595101a335c3ef9da36c5e17ce58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b599ee9634731ecbaf5012e37f666ce721f2595101a335c3ef9da36c5e17ce58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:29:22 np0005597378 python3.9[228897]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:22 np0005597378 podman[228884]: 2026-01-27 13:29:22.355471487 +0000 UTC m=+0.279213009 container init 5f5fe5b9677e81aa90f6fed8cc07cd197cb9c64acdf87608d6f2baadda1ab14b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:29:22 np0005597378 podman[228884]: 2026-01-27 13:29:22.370221988 +0000 UTC m=+0.293963510 container start 5f5fe5b9677e81aa90f6fed8cc07cd197cb9c64acdf87608d6f2baadda1ab14b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_elion, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 08:29:22 np0005597378 podman[228884]: 2026-01-27 13:29:22.424734478 +0000 UTC m=+0.348476000 container attach 5f5fe5b9677e81aa90f6fed8cc07cd197cb9c64acdf87608d6f2baadda1ab14b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 27 08:29:23 np0005597378 python3.9[229104]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:29:23 np0005597378 lvm[229135]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:29:23 np0005597378 lvm[229135]: VG ceph_vg1 finished
Jan 27 08:29:23 np0005597378 lvm[229134]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:29:23 np0005597378 lvm[229134]: VG ceph_vg0 finished
Jan 27 08:29:23 np0005597378 lvm[229137]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:29:23 np0005597378 lvm[229137]: VG ceph_vg2 finished
Jan 27 08:29:23 np0005597378 blissful_elion[228902]: {}
Jan 27 08:29:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:23 np0005597378 systemd[1]: libpod-5f5fe5b9677e81aa90f6fed8cc07cd197cb9c64acdf87608d6f2baadda1ab14b.scope: Deactivated successfully.
Jan 27 08:29:23 np0005597378 systemd[1]: libpod-5f5fe5b9677e81aa90f6fed8cc07cd197cb9c64acdf87608d6f2baadda1ab14b.scope: Consumed 1.483s CPU time.
Jan 27 08:29:23 np0005597378 podman[228884]: 2026-01-27 13:29:23.287195119 +0000 UTC m=+1.210936671 container died 5f5fe5b9677e81aa90f6fed8cc07cd197cb9c64acdf87608d6f2baadda1ab14b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_elion, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:29:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b599ee9634731ecbaf5012e37f666ce721f2595101a335c3ef9da36c5e17ce58-merged.mount: Deactivated successfully.
Jan 27 08:29:23 np0005597378 podman[228884]: 2026-01-27 13:29:23.407528091 +0000 UTC m=+1.331269613 container remove 5f5fe5b9677e81aa90f6fed8cc07cd197cb9c64acdf87608d6f2baadda1ab14b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_elion, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:29:23 np0005597378 systemd[1]: libpod-conmon-5f5fe5b9677e81aa90f6fed8cc07cd197cb9c64acdf87608d6f2baadda1ab14b.scope: Deactivated successfully.
Jan 27 08:29:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:29:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:29:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:29:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:29:23 np0005597378 python3.9[229326]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 08:29:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:24 np0005597378 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 27 08:29:24 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:29:24 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:29:24 np0005597378 python3.9[229479]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:29:24 np0005597378 systemd[1]: Reloading.
Jan 27 08:29:24 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:29:24 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:29:25 np0005597378 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 08:29:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:25 np0005597378 python3.9[229668]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:29:26 np0005597378 python3.9[229821]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:29:27 np0005597378 python3.9[229974]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:29:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:27 np0005597378 python3.9[230127]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:29:28 np0005597378 python3.9[230280]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:29:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:29 np0005597378 python3.9[230433]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:29:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:29 np0005597378 python3.9[230586]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:29:30 np0005597378 python3.9[230739]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 08:29:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:31 np0005597378 python3.9[230892]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:32 np0005597378 python3.9[231044]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:32 np0005597378 python3.9[231196]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:33 np0005597378 python3.9[231348]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:34 np0005597378 python3.9[231500]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:34 np0005597378 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 27 08:29:34 np0005597378 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 27 08:29:34 np0005597378 python3.9[231654]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:35 np0005597378 python3.9[231806]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:36 np0005597378 python3.9[231958]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:36 np0005597378 python3.9[232110]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:37 np0005597378 python3.9[232262]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:42 np0005597378 python3.9[232414]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 27 08:29:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:43 np0005597378 python3.9[232567]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 08:29:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:44 np0005597378 python3.9[232725]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 08:29:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:45 np0005597378 systemd-logind[786]: New session 50 of user zuul.
Jan 27 08:29:45 np0005597378 systemd[1]: Started Session 50 of User zuul.
Jan 27 08:29:45 np0005597378 systemd[1]: session-50.scope: Deactivated successfully.
Jan 27 08:29:45 np0005597378 systemd-logind[786]: Session 50 logged out. Waiting for processes to exit.
Jan 27 08:29:45 np0005597378 systemd-logind[786]: Removed session 50.
Jan 27 08:29:45 np0005597378 podman[232786]: 2026-01-27 13:29:45.911217731 +0000 UTC m=+0.085360553 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 08:29:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:29:46.276 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:29:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:29:46.277 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:29:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:29:46.277 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:29:46 np0005597378 python3.9[232938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:29:46 np0005597378 python3.9[233059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520585.9263759-986-252569877948397/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:47 np0005597378 python3.9[233209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:29:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:29:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:29:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:29:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:29:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:29:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:29:48 np0005597378 python3.9[233285]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:48 np0005597378 python3.9[233435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:29:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:49 np0005597378 python3.9[233556]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520588.166803-986-180686315116659/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:49 np0005597378 python3.9[233706]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:29:49 np0005597378 podman[233707]: 2026-01-27 13:29:49.736240267 +0000 UTC m=+0.077000584 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 08:29:50 np0005597378 python3.9[233846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520589.2189758-986-98518886595906/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:50 np0005597378 python3.9[233996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:29:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:51 np0005597378 python3.9[234117]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520590.4340851-986-51923207755770/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:51 np0005597378 python3.9[234267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:29:52 np0005597378 python3.9[234388]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520591.5471737-986-255902424628932/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:53 np0005597378 python3.9[234540]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:53 np0005597378 python3.9[234692]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:29:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:54 np0005597378 python3.9[234844]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:29:55 np0005597378 python3.9[234996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:29:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:55 np0005597378 python3.9[235119]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769520594.7261508-1093-243791696818799/.source _original_basename=.uyad_kju follow=False checksum=09d6d266b07c4741104f92169155a542aff04718 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 27 08:29:56 np0005597378 python3.9[235271]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:29:56 np0005597378 python3.9[235423]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:29:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:57 np0005597378 python3.9[235544]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520596.5669398-1119-121068884675004/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:58 np0005597378 python3.9[235694]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 08:29:58 np0005597378 python3.9[235816]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769520597.663076-1134-136145764678518/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 08:29:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:29:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:29:59 np0005597378 python3.9[235968]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 27 08:30:00 np0005597378 python3.9[236120]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 08:30:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:01 np0005597378 python3[236274]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 08:30:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Jan 27 08:30:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 27 08:30:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 27 08:30:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 27 08:30:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 27 08:30:11 np0005597378 podman[236287]: 2026-01-27 13:30:11.898501541 +0000 UTC m=+10.001016346 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 08:30:12 np0005597378 podman[236382]: 2026-01-27 13:30:12.032887655 +0000 UTC m=+0.050206858 container create bfeb127b2ad61e43e240305cb7caa352d1cdb906c316c1a8e8fb69dec2ddef01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init)
Jan 27 08:30:12 np0005597378 podman[236382]: 2026-01-27 13:30:12.005867272 +0000 UTC m=+0.023186495 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 08:30:12 np0005597378 python3[236274]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 27 08:30:12 np0005597378 python3.9[236572]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:30:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 27 08:30:13 np0005597378 python3.9[236726]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 27 08:30:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:14 np0005597378 python3.9[236878]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 08:30:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Jan 27 08:30:15 np0005597378 python3[237030]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 08:30:15 np0005597378 podman[237067]: 2026-01-27 13:30:15.664363135 +0000 UTC m=+0.048770626 container create bfdc56125819a1b705062c355187d75abeb3d753266fffad83df12d97ed069e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:15 np0005597378 podman[237067]: 2026-01-27 13:30:15.637870708 +0000 UTC m=+0.022278219 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 08:30:15 np0005597378 python3[237030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 27 08:30:16 np0005597378 podman[237229]: 2026-01-27 13:30:16.264016328 +0000 UTC m=+0.091136638 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:30:16 np0005597378 python3.9[237275]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:30:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:30:16
Jan 27 08:30:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:30:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:30:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'vms', '.mgr']
Jan 27 08:30:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:30:17 np0005597378 python3.9[237437]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:30:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:30:18 np0005597378 python3.9[237588]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769520617.1931179-1230-180524032967417/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 08:30:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:19 np0005597378 python3.9[237664]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 08:30:19 np0005597378 systemd[1]: Reloading.
Jan 27 08:30:19 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:30:19 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:30:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:19 np0005597378 python3.9[237774]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 08:30:20 np0005597378 systemd[1]: Reloading.
Jan 27 08:30:20 np0005597378 podman[237776]: 2026-01-27 13:30:20.062322811 +0000 UTC m=+0.084582481 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 08:30:20 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 08:30:20 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 08:30:20 np0005597378 systemd[1]: Starting nova_compute container...
Jan 27 08:30:20 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:30:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b845dc2344f7b96e40ed4bd3d29346682949dea58f761d2efc4d13e85c2a6d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b845dc2344f7b96e40ed4bd3d29346682949dea58f761d2efc4d13e85c2a6d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b845dc2344f7b96e40ed4bd3d29346682949dea58f761d2efc4d13e85c2a6d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b845dc2344f7b96e40ed4bd3d29346682949dea58f761d2efc4d13e85c2a6d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b845dc2344f7b96e40ed4bd3d29346682949dea58f761d2efc4d13e85c2a6d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:20 np0005597378 podman[237833]: 2026-01-27 13:30:20.455732445 +0000 UTC m=+0.096842512 container init bfdc56125819a1b705062c355187d75abeb3d753266fffad83df12d97ed069e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm)
Jan 27 08:30:20 np0005597378 podman[237833]: 2026-01-27 13:30:20.463981801 +0000 UTC m=+0.105091848 container start bfdc56125819a1b705062c355187d75abeb3d753266fffad83df12d97ed069e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, container_name=nova_compute)
Jan 27 08:30:20 np0005597378 podman[237833]: nova_compute
Jan 27 08:30:20 np0005597378 nova_compute[237849]: + sudo -E kolla_set_configs
Jan 27 08:30:20 np0005597378 systemd[1]: Started nova_compute container.
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Validating config file
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Copying service configuration files
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Deleting /etc/ceph
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Creating directory /etc/ceph
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /etc/ceph
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Writing out command to execute
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:30:20 np0005597378 nova_compute[237849]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 08:30:20 np0005597378 nova_compute[237849]: ++ cat /run_command
Jan 27 08:30:20 np0005597378 nova_compute[237849]: + CMD=nova-compute
Jan 27 08:30:20 np0005597378 nova_compute[237849]: + ARGS=
Jan 27 08:30:20 np0005597378 nova_compute[237849]: + sudo kolla_copy_cacerts
Jan 27 08:30:20 np0005597378 nova_compute[237849]: + [[ ! -n '' ]]
Jan 27 08:30:20 np0005597378 nova_compute[237849]: + . kolla_extend_start
Jan 27 08:30:20 np0005597378 nova_compute[237849]: Running command: 'nova-compute'
Jan 27 08:30:20 np0005597378 nova_compute[237849]: + echo 'Running command: '\''nova-compute'\'''
Jan 27 08:30:20 np0005597378 nova_compute[237849]: + umask 0022
Jan 27 08:30:20 np0005597378 nova_compute[237849]: + exec nova-compute
Jan 27 08:30:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:21 np0005597378 python3.9[238010]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:30:21 np0005597378 python3.9[238161]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:30:22 np0005597378 python3.9[238311]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 08:30:22 np0005597378 nova_compute[237849]: 2026-01-27 13:30:22.996 237853 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 27 08:30:22 np0005597378 nova_compute[237849]: 2026-01-27 13:30:22.996 237853 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 27 08:30:22 np0005597378 nova_compute[237849]: 2026-01-27 13:30:22.996 237853 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 27 08:30:22 np0005597378 nova_compute[237849]: 2026-01-27 13:30:22.997 237853 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 27 08:30:23 np0005597378 nova_compute[237849]: 2026-01-27 13:30:23.176 237853 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:30:23 np0005597378 nova_compute[237849]: 2026-01-27 13:30:23.197 237853 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:30:23 np0005597378 nova_compute[237849]: 2026-01-27 13:30:23.198 237853 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 27 08:30:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:23 np0005597378 python3.9[238467]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 27 08:30:23 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:30:23 np0005597378 nova_compute[237849]: 2026-01-27 13:30:23.803 237853 INFO nova.virt.driver [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 27 08:30:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:23 np0005597378 nova_compute[237849]: 2026-01-27 13:30:23.999 237853 INFO nova.compute.provider_config [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.017 237853 DEBUG oslo_concurrency.lockutils [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.017 237853 DEBUG oslo_concurrency.lockutils [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.017 237853 DEBUG oslo_concurrency.lockutils [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.018 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.018 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.018 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.018 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.018 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.019 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.019 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.019 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.019 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.019 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.020 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.020 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.020 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.020 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.021 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.021 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.021 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.021 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.021 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.022 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.022 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.022 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.022 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.023 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.023 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.023 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.023 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.023 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.024 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.024 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.024 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.024 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.025 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.025 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.025 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.025 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.025 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.026 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.026 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.026 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.026 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.027 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.027 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.027 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.027 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.027 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.028 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.028 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.028 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.028 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.028 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.029 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.029 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.029 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.029 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.029 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.030 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.030 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.030 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.030 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.031 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.031 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.031 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.031 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.031 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.032 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.032 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.032 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.032 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.033 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.033 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.033 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.033 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.033 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.034 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.034 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.034 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.034 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.034 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.035 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.035 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.035 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.035 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.035 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.036 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.036 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.036 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.036 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.036 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.037 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.037 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.037 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.037 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.037 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.038 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.038 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.038 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.038 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.038 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.039 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.039 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.039 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.039 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.039 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.040 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.040 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.040 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.040 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.040 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.040 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.041 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.041 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.041 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.041 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.042 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.042 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.042 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.042 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.042 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.043 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.043 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.043 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.043 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.043 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.044 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.044 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.044 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.044 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.045 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.045 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.045 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.045 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.046 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.046 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.046 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.046 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.046 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.047 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.047 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.047 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.047 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.048 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.048 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.048 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.048 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.049 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.049 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.049 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.049 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.049 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.050 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.050 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.050 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.050 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.050 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.051 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.051 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.051 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.051 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.051 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.051 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.052 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.052 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.052 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.052 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.052 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.053 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.053 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.053 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.053 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.053 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.054 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.054 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.054 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.054 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.054 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.055 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.055 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.055 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.055 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.056 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.056 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.056 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.056 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.056 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.057 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.057 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.057 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.057 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.057 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.058 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.058 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.058 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.058 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.059 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.059 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.059 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.059 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.059 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.059 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.060 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.060 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.060 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.060 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.060 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.061 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.061 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.061 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.061 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.061 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.062 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.062 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.062 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.062 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.062 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.063 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.063 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.063 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.063 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.063 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.064 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.064 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.064 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.064 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.064 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.064 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.065 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.065 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.065 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.065 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.065 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.066 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.066 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.066 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.066 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.066 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.067 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.067 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.067 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.067 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.067 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.068 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.068 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.068 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.068 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.068 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.069 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.069 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.069 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.069 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.069 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.070 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.070 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.070 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.070 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.071 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.071 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.071 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.071 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.072 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.072 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.072 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.072 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.073 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.073 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.073 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.073 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.074 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.074 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.074 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.074 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.074 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.075 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.075 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.075 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.075 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.075 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.076 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.076 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.076 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.076 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.077 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.077 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.077 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.077 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.077 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.078 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.078 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.078 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.078 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.079 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.079 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.079 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.079 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.079 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.080 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.080 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.080 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.080 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.080 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.081 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.081 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.081 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.081 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.081 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.081 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.082 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.082 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.082 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.083 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.084 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.084 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.084 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.085 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.085 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.085 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.085 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.085 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.086 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.086 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.086 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.086 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.086 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.086 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.087 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.087 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.087 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.087 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.087 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.088 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.088 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.088 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.088 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.088 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.088 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.089 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.089 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.089 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.089 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.090 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.090 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.090 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.090 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.090 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.090 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.091 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.091 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.091 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.091 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.091 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.091 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.092 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.092 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.092 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.092 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.093 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.093 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.093 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.093 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.093 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.093 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.094 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.094 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.094 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.094 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.094 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.094 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.095 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.095 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.095 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.095 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.095 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.095 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.096 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.096 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.096 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.096 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.096 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.097 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.097 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.097 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.097 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.097 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.097 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.098 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.098 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.098 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.098 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.098 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.098 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.099 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.099 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.099 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.099 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.099 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.099 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.099 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.100 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.100 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.100 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.100 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.100 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.100 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.100 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.100 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.101 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.101 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.101 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.101 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.101 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.101 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.102 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.102 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.102 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.102 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.102 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.102 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.102 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.103 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.103 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.103 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.103 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.103 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.103 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.104 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.104 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.104 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.104 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.104 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.104 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.105 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.105 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.105 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.105 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.105 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.105 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.105 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.106 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.106 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.106 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.106 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.106 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.107 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.107 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.107 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.107 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.107 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.108 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.108 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.108 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.109 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.109 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.110 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.110 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.110 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.110 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.111 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.111 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.111 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.111 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.111 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.112 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.112 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.112 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.112 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.112 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.113 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.113 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.113 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.113 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.113 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.114 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.114 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.114 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.114 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.114 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.115 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.115 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.115 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.115 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.115 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.115 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.115 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.116 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.116 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.116 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.116 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.116 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.117 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.117 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.117 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.117 237853 WARNING oslo_config.cfg [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 27 08:30:24 np0005597378 nova_compute[237849]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 27 08:30:24 np0005597378 nova_compute[237849]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 27 08:30:24 np0005597378 nova_compute[237849]: and ``live_migration_inbound_addr`` respectively.
Jan 27 08:30:24 np0005597378 nova_compute[237849]: ).  Its value may be silently ignored in the future.#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.117 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.118 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.118 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.118 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.118 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.118 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.119 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.119 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.119 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.119 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.119 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.120 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.120 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.120 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.120 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.120 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.120 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.120 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.121 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.rbd_secret_uuid        = 4d8fd694-f443-5fb1-b612-70034b2f3c6e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.121 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.121 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.121 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.121 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.121 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.122 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.122 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.122 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.122 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.122 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.122 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.123 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.123 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.123 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.123 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.123 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.123 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.124 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.124 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.124 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.124 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.124 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.124 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.124 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.125 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.125 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.125 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.125 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.125 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.125 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.126 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.126 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.126 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.126 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.126 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.126 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.126 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.127 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.127 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.127 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.127 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.127 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.127 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.128 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.128 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.128 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.128 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.128 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.128 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.129 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.129 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.129 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.129 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.129 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.129 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.129 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.130 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.130 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.130 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.130 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.130 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.130 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.131 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.131 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.131 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.131 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.131 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.131 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.132 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.132 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.132 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.132 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.132 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.132 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.132 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.133 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.133 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.133 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.133 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.133 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.133 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.133 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.134 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.134 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.134 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.134 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.134 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.134 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.135 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.135 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.135 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.135 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.135 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.135 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.136 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.136 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.136 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.136 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.136 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.137 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.137 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.137 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.137 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.137 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.137 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.137 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.138 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.138 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.138 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.138 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.138 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.138 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.138 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.139 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.139 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.139 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.139 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.139 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.139 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.139 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.140 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.140 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.140 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.140 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.140 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.141 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.141 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.141 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.141 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.141 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.141 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.141 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.142 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.142 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.142 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.142 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.142 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.142 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.143 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.143 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.143 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.143 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.143 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.143 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.143 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.144 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.144 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.144 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.144 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.144 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.144 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.144 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.145 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.145 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.145 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.145 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.145 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.145 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.145 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.146 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.146 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.146 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.146 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.146 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.147 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.147 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.147 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.147 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.147 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.147 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.147 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.148 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.148 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.148 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.148 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.148 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.149 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.149 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.149 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.149 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.149 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.150 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.150 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.150 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.150 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.150 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.150 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.151 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.151 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.151 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.151 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.151 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.151 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.151 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.152 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.152 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.152 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.152 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.152 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.152 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.152 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.152 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.153 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.153 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.153 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.153 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.153 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.153 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.153 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.154 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.154 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.154 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.154 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.154 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.154 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.154 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.155 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.155 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.155 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.155 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.155 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.155 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.155 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.156 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.156 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.156 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.156 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.156 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.156 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.157 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.157 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.157 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.157 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.158 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.158 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.158 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.158 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.158 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.159 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.159 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.159 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.159 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.159 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.159 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.159 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.160 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.160 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.160 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.160 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.160 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.160 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.160 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.161 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.161 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.161 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.161 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.161 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.161 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.161 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.162 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.162 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.162 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.162 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.162 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.162 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.163 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.163 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.163 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.163 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.163 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.163 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.164 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.164 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.164 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.164 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.164 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.164 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.165 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.165 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.165 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.165 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.165 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.165 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.165 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.165 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.166 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.166 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.166 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.166 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.166 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.166 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.166 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.167 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.167 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.167 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.167 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.167 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.167 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.168 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.168 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.168 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.168 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.168 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.168 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.168 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.169 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.169 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.169 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.169 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.169 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.169 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.169 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.170 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.170 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.170 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.170 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.170 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.170 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.171 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.171 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.171 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.171 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.171 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.171 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.171 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.172 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.172 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.172 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.172 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.172 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.172 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.172 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.173 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.173 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.173 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.173 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.173 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.173 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.173 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.174 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.174 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.174 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.174 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.174 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.174 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.174 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.174 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.175 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.175 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.175 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.175 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.175 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.175 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.175 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.176 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.176 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.176 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.176 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.176 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.176 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.176 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.177 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.177 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.177 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.177 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.177 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.177 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.177 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.178 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.178 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.178 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.178 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.178 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.178 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.178 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.179 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.179 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.179 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.179 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.179 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.179 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.180 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.180 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.180 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.180 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.180 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.180 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.180 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.181 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.181 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.181 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.181 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.181 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.181 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.181 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.182 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.182 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.182 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.182 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.182 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.182 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.182 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.182 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.183 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.183 237853 DEBUG oslo_service.service [None req-e9058960-41f3-42e3-9d4e-41b75e740d8e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.184 237853 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.204 237853 DEBUG nova.virt.libvirt.host [None req-06a5fd70-6e66-4624-a06d-e9eb24224776 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.205 237853 DEBUG nova.virt.libvirt.host [None req-06a5fd70-6e66-4624-a06d-e9eb24224776 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.205 237853 DEBUG nova.virt.libvirt.host [None req-06a5fd70-6e66-4624-a06d-e9eb24224776 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.206 237853 DEBUG nova.virt.libvirt.host [None req-06a5fd70-6e66-4624-a06d-e9eb24224776 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 27 08:30:24 np0005597378 systemd[1]: Starting libvirt QEMU daemon...
Jan 27 08:30:24 np0005597378 systemd[1]: Started libvirt QEMU daemon.
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.301 237853 DEBUG nova.virt.libvirt.host [None req-06a5fd70-6e66-4624-a06d-e9eb24224776 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd321078eb0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.303 237853 DEBUG nova.virt.libvirt.host [None req-06a5fd70-6e66-4624-a06d-e9eb24224776 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd321078eb0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.303 237853 INFO nova.virt.libvirt.driver [None req-06a5fd70-6e66-4624-a06d-e9eb24224776 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.321 237853 WARNING nova.virt.libvirt.driver [None req-06a5fd70-6e66-4624-a06d-e9eb24224776 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 27 08:30:24 np0005597378 nova_compute[237849]: 2026-01-27 13:30:24.321 237853 DEBUG nova.virt.libvirt.volume.mount [None req-06a5fd70-6e66-4624-a06d-e9eb24224776 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:30:24 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:30:24 np0005597378 python3.9[238767]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 08:30:24 np0005597378 systemd[1]: Stopping nova_compute container...
Jan 27 08:30:24 np0005597378 virtqemud[238711]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 27 08:30:24 np0005597378 systemd[1]: libpod-bfdc56125819a1b705062c355187d75abeb3d753266fffad83df12d97ed069e3.scope: Deactivated successfully.
Jan 27 08:30:24 np0005597378 virtqemud[238711]: hostname: compute-0
Jan 27 08:30:24 np0005597378 virtqemud[238711]: End of file while reading data: Input/output error
Jan 27 08:30:24 np0005597378 systemd[1]: libpod-bfdc56125819a1b705062c355187d75abeb3d753266fffad83df12d97ed069e3.scope: Consumed 2.867s CPU time.
Jan 27 08:30:24 np0005597378 podman[238832]: 2026-01-27 13:30:24.756384987 +0000 UTC m=+0.096762599 container died bfdc56125819a1b705062c355187d75abeb3d753266fffad83df12d97ed069e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 08:30:24 np0005597378 podman[238851]: 2026-01-27 13:30:24.739825583 +0000 UTC m=+0.053505981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:30:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bfdc56125819a1b705062c355187d75abeb3d753266fffad83df12d97ed069e3-userdata-shm.mount: Deactivated successfully.
Jan 27 08:30:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay-65b845dc2344f7b96e40ed4bd3d29346682949dea58f761d2efc4d13e85c2a6d-merged.mount: Deactivated successfully.
Jan 27 08:30:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:25 np0005597378 podman[238851]: 2026-01-27 13:30:25.325598299 +0000 UTC m=+0.639278677 container create 88cdf33b63f1417ee37890b8ff1a7dab5fbe24f5b8ca9e9bf6fd03691b5d20d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_galois, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:25 np0005597378 podman[238832]: 2026-01-27 13:30:25.848756474 +0000 UTC m=+1.189134096 container cleanup bfdc56125819a1b705062c355187d75abeb3d753266fffad83df12d97ed069e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 08:30:25 np0005597378 podman[238832]: nova_compute
Jan 27 08:30:25 np0005597378 systemd[1]: Started libpod-conmon-88cdf33b63f1417ee37890b8ff1a7dab5fbe24f5b8ca9e9bf6fd03691b5d20d0.scope.
Jan 27 08:30:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:30:25 np0005597378 podman[238894]: nova_compute
Jan 27 08:30:25 np0005597378 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 27 08:30:25 np0005597378 systemd[1]: Stopped nova_compute container.
Jan 27 08:30:25 np0005597378 podman[238851]: 2026-01-27 13:30:25.916556764 +0000 UTC m=+1.230237162 container init 88cdf33b63f1417ee37890b8ff1a7dab5fbe24f5b8ca9e9bf6fd03691b5d20d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:25 np0005597378 systemd[1]: Starting nova_compute container...
Jan 27 08:30:25 np0005597378 podman[238851]: 2026-01-27 13:30:25.924285685 +0000 UTC m=+1.237966063 container start 88cdf33b63f1417ee37890b8ff1a7dab5fbe24f5b8ca9e9bf6fd03691b5d20d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:30:25 np0005597378 podman[238851]: 2026-01-27 13:30:25.927828887 +0000 UTC m=+1.241509295 container attach 88cdf33b63f1417ee37890b8ff1a7dab5fbe24f5b8ca9e9bf6fd03691b5d20d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:30:25 np0005597378 festive_galois[238897]: 167 167
Jan 27 08:30:25 np0005597378 systemd[1]: libpod-88cdf33b63f1417ee37890b8ff1a7dab5fbe24f5b8ca9e9bf6fd03691b5d20d0.scope: Deactivated successfully.
Jan 27 08:30:25 np0005597378 podman[238851]: 2026-01-27 13:30:25.941395795 +0000 UTC m=+1.255076173 container died 88cdf33b63f1417ee37890b8ff1a7dab5fbe24f5b8ca9e9bf6fd03691b5d20d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:30:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2cd69b13bd6ea8e6fb29551a3d09f18dd1b8411991d71610c45cd190915223cb-merged.mount: Deactivated successfully.
Jan 27 08:30:25 np0005597378 podman[238851]: 2026-01-27 13:30:25.985199388 +0000 UTC m=+1.298879766 container remove 88cdf33b63f1417ee37890b8ff1a7dab5fbe24f5b8ca9e9bf6fd03691b5d20d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_galois, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 08:30:25 np0005597378 systemd[1]: libpod-conmon-88cdf33b63f1417ee37890b8ff1a7dab5fbe24f5b8ca9e9bf6fd03691b5d20d0.scope: Deactivated successfully.
Jan 27 08:30:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:30:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b845dc2344f7b96e40ed4bd3d29346682949dea58f761d2efc4d13e85c2a6d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b845dc2344f7b96e40ed4bd3d29346682949dea58f761d2efc4d13e85c2a6d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b845dc2344f7b96e40ed4bd3d29346682949dea58f761d2efc4d13e85c2a6d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b845dc2344f7b96e40ed4bd3d29346682949dea58f761d2efc4d13e85c2a6d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b845dc2344f7b96e40ed4bd3d29346682949dea58f761d2efc4d13e85c2a6d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:26 np0005597378 podman[238911]: 2026-01-27 13:30:26.051437152 +0000 UTC m=+0.123408301 container init bfdc56125819a1b705062c355187d75abeb3d753266fffad83df12d97ed069e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm)
Jan 27 08:30:26 np0005597378 podman[238911]: 2026-01-27 13:30:26.058546206 +0000 UTC m=+0.130517355 container start bfdc56125819a1b705062c355187d75abeb3d753266fffad83df12d97ed069e3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3)
Jan 27 08:30:26 np0005597378 podman[238911]: nova_compute
Jan 27 08:30:26 np0005597378 nova_compute[238941]: + sudo -E kolla_set_configs
Jan 27 08:30:26 np0005597378 systemd[1]: Started nova_compute container.
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Validating config file
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Copying service configuration files
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Deleting /etc/ceph
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Creating directory /etc/ceph
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /etc/ceph
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Writing out command to execute
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:30:26 np0005597378 nova_compute[238941]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 08:30:26 np0005597378 podman[238954]: 2026-01-27 13:30:26.140266284 +0000 UTC m=+0.044396381 container create ec8ec6c83cfa045d9d56b76b7dd79740187aaedceb24da8443ad4734a8e5a8a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:30:26 np0005597378 nova_compute[238941]: ++ cat /run_command
Jan 27 08:30:26 np0005597378 nova_compute[238941]: + CMD=nova-compute
Jan 27 08:30:26 np0005597378 nova_compute[238941]: + ARGS=
Jan 27 08:30:26 np0005597378 nova_compute[238941]: + sudo kolla_copy_cacerts
Jan 27 08:30:26 np0005597378 nova_compute[238941]: + [[ ! -n '' ]]
Jan 27 08:30:26 np0005597378 nova_compute[238941]: + . kolla_extend_start
Jan 27 08:30:26 np0005597378 nova_compute[238941]: + echo 'Running command: '\''nova-compute'\'''
Jan 27 08:30:26 np0005597378 nova_compute[238941]: Running command: 'nova-compute'
Jan 27 08:30:26 np0005597378 nova_compute[238941]: + umask 0022
Jan 27 08:30:26 np0005597378 nova_compute[238941]: + exec nova-compute
Jan 27 08:30:26 np0005597378 systemd[1]: Started libpod-conmon-ec8ec6c83cfa045d9d56b76b7dd79740187aaedceb24da8443ad4734a8e5a8a3.scope.
Jan 27 08:30:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:30:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57adbc515520fdb68809ec3ab273e58f25c89dfec02888719472a506bf8180fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57adbc515520fdb68809ec3ab273e58f25c89dfec02888719472a506bf8180fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57adbc515520fdb68809ec3ab273e58f25c89dfec02888719472a506bf8180fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57adbc515520fdb68809ec3ab273e58f25c89dfec02888719472a506bf8180fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57adbc515520fdb68809ec3ab273e58f25c89dfec02888719472a506bf8180fc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:26 np0005597378 podman[238954]: 2026-01-27 13:30:26.118559013 +0000 UTC m=+0.022689130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:30:26 np0005597378 podman[238954]: 2026-01-27 13:30:26.222130885 +0000 UTC m=+0.126261012 container init ec8ec6c83cfa045d9d56b76b7dd79740187aaedceb24da8443ad4734a8e5a8a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_visvesvaraya, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:26 np0005597378 podman[238954]: 2026-01-27 13:30:26.231884354 +0000 UTC m=+0.136014451 container start ec8ec6c83cfa045d9d56b76b7dd79740187aaedceb24da8443ad4734a8e5a8a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_visvesvaraya, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:26 np0005597378 podman[238954]: 2026-01-27 13:30:26.236133556 +0000 UTC m=+0.140263673 container attach ec8ec6c83cfa045d9d56b76b7dd79740187aaedceb24da8443ad4734a8e5a8a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 08:30:26 np0005597378 intelligent_visvesvaraya[238999]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:30:26 np0005597378 intelligent_visvesvaraya[238999]: --> All data devices are unavailable
Jan 27 08:30:26 np0005597378 systemd[1]: libpod-ec8ec6c83cfa045d9d56b76b7dd79740187aaedceb24da8443ad4734a8e5a8a3.scope: Deactivated successfully.
Jan 27 08:30:26 np0005597378 podman[238954]: 2026-01-27 13:30:26.682947007 +0000 UTC m=+0.587077114 container died ec8ec6c83cfa045d9d56b76b7dd79740187aaedceb24da8443ad4734a8e5a8a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay-57adbc515520fdb68809ec3ab273e58f25c89dfec02888719472a506bf8180fc-merged.mount: Deactivated successfully.
Jan 27 08:30:26 np0005597378 podman[238954]: 2026-01-27 13:30:26.736345884 +0000 UTC m=+0.640475981 container remove ec8ec6c83cfa045d9d56b76b7dd79740187aaedceb24da8443ad4734a8e5a8a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 08:30:26 np0005597378 systemd[1]: libpod-conmon-ec8ec6c83cfa045d9d56b76b7dd79740187aaedceb24da8443ad4734a8e5a8a3.scope: Deactivated successfully.
Jan 27 08:30:26 np0005597378 python3.9[239140]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 27 08:30:27 np0005597378 systemd[1]: Started libpod-conmon-bfeb127b2ad61e43e240305cb7caa352d1cdb906c316c1a8e8fb69dec2ddef01.scope.
Jan 27 08:30:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:30:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c776962589408bf135e179493cab66d670346ff6b091c8b3239cc8a9fd5f3ef/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c776962589408bf135e179493cab66d670346ff6b091c8b3239cc8a9fd5f3ef/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c776962589408bf135e179493cab66d670346ff6b091c8b3239cc8a9fd5f3ef/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:27 np0005597378 podman[239235]: 2026-01-27 13:30:27.059235151 +0000 UTC m=+0.107589349 container init bfeb127b2ad61e43e240305cb7caa352d1cdb906c316c1a8e8fb69dec2ddef01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 27 08:30:27 np0005597378 podman[239235]: 2026-01-27 13:30:27.069160215 +0000 UTC m=+0.117514383 container start bfeb127b2ad61e43e240305cb7caa352d1cdb906c316c1a8e8fb69dec2ddef01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 08:30:27 np0005597378 python3.9[239140]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Applying nova statedir ownership
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 27 08:30:27 np0005597378 nova_compute_init[239256]: INFO:nova_statedir:Nova statedir ownership complete
Jan 27 08:30:27 np0005597378 systemd[1]: libpod-bfeb127b2ad61e43e240305cb7caa352d1cdb906c316c1a8e8fb69dec2ddef01.scope: Deactivated successfully.
Jan 27 08:30:27 np0005597378 podman[239283]: 2026-01-27 13:30:27.165601354 +0000 UTC m=+0.031368608 container died bfeb127b2ad61e43e240305cb7caa352d1cdb906c316c1a8e8fb69dec2ddef01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Jan 27 08:30:27 np0005597378 podman[239279]: 2026-01-27 13:30:27.186570354 +0000 UTC m=+0.054843290 container create 77e170f70f9471c7df6d156328185d309aabb307346714c112512d6f11aa3f2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:27 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bfeb127b2ad61e43e240305cb7caa352d1cdb906c316c1a8e8fb69dec2ddef01-userdata-shm.mount: Deactivated successfully.
Jan 27 08:30:27 np0005597378 systemd[1]: var-lib-containers-storage-overlay-6c776962589408bf135e179493cab66d670346ff6b091c8b3239cc8a9fd5f3ef-merged.mount: Deactivated successfully.
Jan 27 08:30:27 np0005597378 podman[239283]: 2026-01-27 13:30:27.217944761 +0000 UTC m=+0.083712015 container cleanup bfeb127b2ad61e43e240305cb7caa352d1cdb906c316c1a8e8fb69dec2ddef01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 27 08:30:27 np0005597378 systemd[1]: Started libpod-conmon-77e170f70f9471c7df6d156328185d309aabb307346714c112512d6f11aa3f2a.scope.
Jan 27 08:30:27 np0005597378 systemd[1]: libpod-conmon-bfeb127b2ad61e43e240305cb7caa352d1cdb906c316c1a8e8fb69dec2ddef01.scope: Deactivated successfully.
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:30:27 np0005597378 podman[239279]: 2026-01-27 13:30:27.160860449 +0000 UTC m=+0.029133405 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:30:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:30:27 np0005597378 podman[239279]: 2026-01-27 13:30:27.274419957 +0000 UTC m=+0.142692903 container init 77e170f70f9471c7df6d156328185d309aabb307346714c112512d6f11aa3f2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lederberg, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:27 np0005597378 podman[239279]: 2026-01-27 13:30:27.281240542 +0000 UTC m=+0.149513468 container start 77e170f70f9471c7df6d156328185d309aabb307346714c112512d6f11aa3f2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:27 np0005597378 tender_lederberg[239323]: 167 167
Jan 27 08:30:27 np0005597378 systemd[1]: libpod-77e170f70f9471c7df6d156328185d309aabb307346714c112512d6f11aa3f2a.scope: Deactivated successfully.
Jan 27 08:30:27 np0005597378 podman[239279]: 2026-01-27 13:30:27.291909307 +0000 UTC m=+0.160182383 container attach 77e170f70f9471c7df6d156328185d309aabb307346714c112512d6f11aa3f2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:30:27 np0005597378 podman[239279]: 2026-01-27 13:30:27.293244335 +0000 UTC m=+0.161517281 container died 77e170f70f9471c7df6d156328185d309aabb307346714c112512d6f11aa3f2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:30:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:27 np0005597378 podman[239279]: 2026-01-27 13:30:27.351387419 +0000 UTC m=+0.219660345 container remove 77e170f70f9471c7df6d156328185d309aabb307346714c112512d6f11aa3f2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lederberg, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:30:27 np0005597378 systemd[1]: libpod-conmon-77e170f70f9471c7df6d156328185d309aabb307346714c112512d6f11aa3f2a.scope: Deactivated successfully.
Jan 27 08:30:27 np0005597378 podman[239372]: 2026-01-27 13:30:27.512704773 +0000 UTC m=+0.038726329 container create d89999da97d83349ff351109562a66bf4b3eeecf43ccc99602627d680f425493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 08:30:27 np0005597378 systemd[1]: Started libpod-conmon-d89999da97d83349ff351109562a66bf4b3eeecf43ccc99602627d680f425493.scope.
Jan 27 08:30:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:30:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5ff4025fea2c2f10f1ea3f2c6296f0f94ce7c24af7629f6f04aaa4d2b34922/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5ff4025fea2c2f10f1ea3f2c6296f0f94ce7c24af7629f6f04aaa4d2b34922/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5ff4025fea2c2f10f1ea3f2c6296f0f94ce7c24af7629f6f04aaa4d2b34922/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5ff4025fea2c2f10f1ea3f2c6296f0f94ce7c24af7629f6f04aaa4d2b34922/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:27 np0005597378 podman[239372]: 2026-01-27 13:30:27.581964404 +0000 UTC m=+0.107985990 container init d89999da97d83349ff351109562a66bf4b3eeecf43ccc99602627d680f425493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_roentgen, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:30:27 np0005597378 podman[239372]: 2026-01-27 13:30:27.58742779 +0000 UTC m=+0.113449346 container start d89999da97d83349ff351109562a66bf4b3eeecf43ccc99602627d680f425493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:27 np0005597378 podman[239372]: 2026-01-27 13:30:27.495547563 +0000 UTC m=+0.021569139 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:30:27 np0005597378 podman[239372]: 2026-01-27 13:30:27.593158204 +0000 UTC m=+0.119179790 container attach d89999da97d83349ff351109562a66bf4b3eeecf43ccc99602627d680f425493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_roentgen, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:27 np0005597378 systemd[1]: session-49.scope: Deactivated successfully.
Jan 27 08:30:27 np0005597378 systemd[1]: session-49.scope: Consumed 1min 57.925s CPU time.
Jan 27 08:30:27 np0005597378 systemd-logind[786]: Session 49 logged out. Waiting for processes to exit.
Jan 27 08:30:27 np0005597378 systemd-logind[786]: Removed session 49.
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]: {
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:    "0": [
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:        {
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "devices": [
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "/dev/loop3"
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            ],
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_name": "ceph_lv0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_size": "21470642176",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "name": "ceph_lv0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "tags": {
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.cluster_name": "ceph",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.crush_device_class": "",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.encrypted": "0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.objectstore": "bluestore",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.osd_id": "0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.type": "block",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.vdo": "0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.with_tpm": "0"
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            },
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "type": "block",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "vg_name": "ceph_vg0"
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:        }
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:    ],
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:    "1": [
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:        {
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "devices": [
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "/dev/loop4"
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            ],
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_name": "ceph_lv1",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_size": "21470642176",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "name": "ceph_lv1",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "tags": {
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.cluster_name": "ceph",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.crush_device_class": "",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.encrypted": "0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.objectstore": "bluestore",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.osd_id": "1",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.type": "block",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.vdo": "0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.with_tpm": "0"
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            },
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "type": "block",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "vg_name": "ceph_vg1"
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:        }
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:    ],
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:    "2": [
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:        {
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "devices": [
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "/dev/loop5"
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            ],
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_name": "ceph_lv2",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_size": "21470642176",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "name": "ceph_lv2",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "tags": {
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.cluster_name": "ceph",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.crush_device_class": "",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.encrypted": "0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.objectstore": "bluestore",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.osd_id": "2",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.type": "block",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.vdo": "0",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:                "ceph.with_tpm": "0"
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            },
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "type": "block",
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:            "vg_name": "ceph_vg2"
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:        }
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]:    ]
Jan 27 08:30:27 np0005597378 adoring_roentgen[239388]: }
Jan 27 08:30:27 np0005597378 systemd[1]: libpod-d89999da97d83349ff351109562a66bf4b3eeecf43ccc99602627d680f425493.scope: Deactivated successfully.
Jan 27 08:30:27 np0005597378 podman[239372]: 2026-01-27 13:30:27.882371928 +0000 UTC m=+0.408393484 container died d89999da97d83349ff351109562a66bf4b3eeecf43ccc99602627d680f425493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:30:27 np0005597378 podman[239372]: 2026-01-27 13:30:27.930170735 +0000 UTC m=+0.456192291 container remove d89999da97d83349ff351109562a66bf4b3eeecf43ccc99602627d680f425493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:27 np0005597378 systemd[1]: libpod-conmon-d89999da97d83349ff351109562a66bf4b3eeecf43ccc99602627d680f425493.scope: Deactivated successfully.
Jan 27 08:30:27 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5160d59e250bed1a434b22b76884a14ff753542e4621f70bd861dfc0e6b96b78-merged.mount: Deactivated successfully.
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.208 238945 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.209 238945 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.209 238945 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.209 238945 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.346 238945 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:30:28 np0005597378 podman[239472]: 2026-01-27 13:30:28.34916548 +0000 UTC m=+0.038659456 container create 7c771d9ca1a3df1b6201ed52173d2fa5a5eb4358a296488b3b58534584680898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_albattani, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.369 238945 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.369 238945 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 27 08:30:28 np0005597378 systemd[1]: Started libpod-conmon-7c771d9ca1a3df1b6201ed52173d2fa5a5eb4358a296488b3b58534584680898.scope.
Jan 27 08:30:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:30:28 np0005597378 podman[239472]: 2026-01-27 13:30:28.421571442 +0000 UTC m=+0.111065448 container init 7c771d9ca1a3df1b6201ed52173d2fa5a5eb4358a296488b3b58534584680898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_albattani, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:28 np0005597378 podman[239472]: 2026-01-27 13:30:28.427910842 +0000 UTC m=+0.117404818 container start 7c771d9ca1a3df1b6201ed52173d2fa5a5eb4358a296488b3b58534584680898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:30:28 np0005597378 podman[239472]: 2026-01-27 13:30:28.330860107 +0000 UTC m=+0.020354103 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:30:28 np0005597378 hopeful_albattani[239490]: 167 167
Jan 27 08:30:28 np0005597378 podman[239472]: 2026-01-27 13:30:28.431790314 +0000 UTC m=+0.121284310 container attach 7c771d9ca1a3df1b6201ed52173d2fa5a5eb4358a296488b3b58534584680898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_albattani, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:30:28 np0005597378 systemd[1]: libpod-7c771d9ca1a3df1b6201ed52173d2fa5a5eb4358a296488b3b58534584680898.scope: Deactivated successfully.
Jan 27 08:30:28 np0005597378 podman[239472]: 2026-01-27 13:30:28.433126342 +0000 UTC m=+0.122620338 container died 7c771d9ca1a3df1b6201ed52173d2fa5a5eb4358a296488b3b58534584680898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0aafcf3d949d9e484e51276fde30ee0357158a71a97523ea0885bdac1e9d4845-merged.mount: Deactivated successfully.
Jan 27 08:30:28 np0005597378 podman[239472]: 2026-01-27 13:30:28.464349955 +0000 UTC m=+0.153843931 container remove 7c771d9ca1a3df1b6201ed52173d2fa5a5eb4358a296488b3b58534584680898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Jan 27 08:30:28 np0005597378 systemd[1]: libpod-conmon-7c771d9ca1a3df1b6201ed52173d2fa5a5eb4358a296488b3b58534584680898.scope: Deactivated successfully.
Jan 27 08:30:28 np0005597378 podman[239514]: 2026-01-27 13:30:28.619708099 +0000 UTC m=+0.042124996 container create 702ca31c348f9fc90f8d1c8504c309cc90e7f650a3c1d539e4e9e4f3254d284f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 08:30:28 np0005597378 systemd[1]: Started libpod-conmon-702ca31c348f9fc90f8d1c8504c309cc90e7f650a3c1d539e4e9e4f3254d284f.scope.
Jan 27 08:30:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:30:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a97cef951afbdaf2c5e29b3af2c0ed856c00da913eb4fc851ceacdfc7ce43d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a97cef951afbdaf2c5e29b3af2c0ed856c00da913eb4fc851ceacdfc7ce43d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a97cef951afbdaf2c5e29b3af2c0ed856c00da913eb4fc851ceacdfc7ce43d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a97cef951afbdaf2c5e29b3af2c0ed856c00da913eb4fc851ceacdfc7ce43d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:30:28 np0005597378 podman[239514]: 2026-01-27 13:30:28.601429346 +0000 UTC m=+0.023846253 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:30:28 np0005597378 podman[239514]: 2026-01-27 13:30:28.703546577 +0000 UTC m=+0.125963474 container init 702ca31c348f9fc90f8d1c8504c309cc90e7f650a3c1d539e4e9e4f3254d284f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kowalevski, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:30:28 np0005597378 podman[239514]: 2026-01-27 13:30:28.708688685 +0000 UTC m=+0.131105582 container start 702ca31c348f9fc90f8d1c8504c309cc90e7f650a3c1d539e4e9e4f3254d284f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kowalevski, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:30:28 np0005597378 podman[239514]: 2026-01-27 13:30:28.712084151 +0000 UTC m=+0.134501078 container attach 702ca31c348f9fc90f8d1c8504c309cc90e7f650a3c1d539e4e9e4f3254d284f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.804 238945 INFO nova.virt.driver [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.905 238945 INFO nova.compute.provider_config [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 27 08:30:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.994 238945 DEBUG oslo_concurrency.lockutils [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.994 238945 DEBUG oslo_concurrency.lockutils [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.995 238945 DEBUG oslo_concurrency.lockutils [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.995 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.995 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.996 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.996 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.996 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.996 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.997 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.997 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.997 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.997 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.997 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.997 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.998 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.998 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.998 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.998 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.998 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.999 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.999 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:28 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.999 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:28.999 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.000 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.000 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.000 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.000 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.001 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.001 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.001 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.001 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.002 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.002 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.002 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.002 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.004 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.005 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.005 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.005 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.006 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.006 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.006 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.007 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.007 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.007 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.007 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.008 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.008 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.008 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.008 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.009 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.009 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.009 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.010 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.010 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.010 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.010 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.010 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.011 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.011 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.011 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.011 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.012 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.012 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.012 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.012 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.013 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.013 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.013 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.013 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.014 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.014 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.014 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.014 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.015 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.015 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.015 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.015 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.016 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.016 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.016 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.016 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.017 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.017 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.017 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.018 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.018 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.018 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.018 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.019 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.019 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.019 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.019 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.019 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.020 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.020 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.020 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.021 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.021 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.021 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.021 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.022 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.022 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.022 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.022431) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.022 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520629022474, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1779, "num_deletes": 250, "total_data_size": 3022236, "memory_usage": 3054200, "flush_reason": "Manual Compaction"}
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.022 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.022 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.023 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.023 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.023 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.023 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.023 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.023 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.023 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.024 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.024 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.024 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.024 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.024 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.024 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.024 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.025 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.025 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.025 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.025 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.025 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.025 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.026 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.026 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.026 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.026 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.026 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.026 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.026 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.026 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.027 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.027 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.027 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.027 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.027 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.027 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.027 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.028 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.028 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.028 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.028 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.028 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.028 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.028 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.029 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.029 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.029 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.029 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.029 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.029 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.029 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.030 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.030 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.030 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.030 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.030 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.030 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.030 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.031 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.031 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.031 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.031 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.031 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.031 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.031 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.032 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.032 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.032 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.032 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.032 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.032 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.032 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.033 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.033 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.033 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.033 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.033 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.033 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.033 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.034 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.034 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.034 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.034 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.034 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.034 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.034 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.034 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.035 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.035 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.035 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.035 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.035 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.035 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520629035766, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1705851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11746, "largest_seqno": 13524, "table_properties": {"data_size": 1700040, "index_size": 2886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14565, "raw_average_key_size": 20, "raw_value_size": 1687225, "raw_average_value_size": 2330, "num_data_blocks": 134, "num_entries": 724, "num_filter_entries": 724, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769520426, "oldest_key_time": 1769520426, "file_creation_time": 1769520629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 13365 microseconds, and 3791 cpu microseconds.
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.035 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.036 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.036 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.036 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.036 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.036 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.036 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.036 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.036 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.037 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.037 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.037 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.035800) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1705851 bytes OK
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.037 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.035817) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.036903) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.036918) EVENT_LOG_v1 {"time_micros": 1769520629036913, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.036934) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3014693, prev total WAL file size 3014693, number of live WAL files 2.
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.037 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.037 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.037 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.038 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.038 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.038 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.038 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.038 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.038 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.038122) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.038 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.038 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1665KB)], [29(7885KB)]
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.039 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520629038230, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9780463, "oldest_snapshot_seqno": -1}
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.039 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.039 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.039 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.039 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.039 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.039 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.040 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.040 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.040 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.040 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.040 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.040 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.040 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.040 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.041 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.041 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.041 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.041 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.041 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.041 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.042 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.042 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.042 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.042 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.042 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.042 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.042 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.042 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.043 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.043 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.043 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.043 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.043 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.043 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.043 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.044 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.044 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.044 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.044 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.044 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.044 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.044 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.045 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.045 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.045 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.045 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.045 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.045 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.045 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.046 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.046 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.046 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.046 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.046 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.046 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.046 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.047 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.047 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.047 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.047 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.047 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.048 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.048 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.048 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.048 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.048 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.048 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.048 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.049 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.049 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.049 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.049 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.049 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.049 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.049 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.050 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.050 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.050 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.050 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.050 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.050 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.050 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.051 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.051 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.051 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.051 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.051 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.051 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.051 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.051 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.052 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.052 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.052 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.052 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.052 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.052 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.052 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.053 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.053 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.053 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.053 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.053 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.053 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.054 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.054 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.054 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.054 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.054 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.054 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.054 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.055 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.055 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.055 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.055 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.055 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.055 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.055 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.056 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.056 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.056 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.056 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.056 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.056 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.057 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.057 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.057 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.057 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.057 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.057 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.057 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.057 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.058 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.058 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.058 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.058 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.058 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.059 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.059 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.059 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.059 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.059 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.059 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.059 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.060 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.060 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.060 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.060 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.060 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.060 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.060 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.061 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.061 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.061 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.061 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.061 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.061 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.061 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.061 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.062 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.062 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.062 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.062 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.062 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.062 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.062 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.063 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.063 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.063 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.063 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.063 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.063 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.063 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.064 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.064 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.064 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.064 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.064 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.064 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.065 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.065 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.065 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.065 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.065 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.065 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.065 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.066 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.066 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.066 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.066 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.066 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.066 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.066 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.067 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.067 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.067 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.067 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.067 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.067 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.068 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.068 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.068 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.068 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.068 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.069 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.069 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.069 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.069 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.069 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.069 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.070 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.070 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.070 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.070 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.070 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.070 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.071 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.071 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.071 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.071 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.071 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.071 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.072 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.072 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.072 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.072 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.073 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.073 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.073 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.073 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.073 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.073 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.074 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.074 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.074 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.074 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.074 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.074 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.075 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.075 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.075 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.075 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.075 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.076 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.076 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.076 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.076 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.076 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.076 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.077 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.077 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.077 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.077 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.077 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.077 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.077 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.078 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.078 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.078 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.078 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.078 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.078 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.079 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.079 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.079 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.079 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.079 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.079 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.079 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.080 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.080 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.080 238945 WARNING oslo_config.cfg [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 27 08:30:29 np0005597378 nova_compute[238941]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 27 08:30:29 np0005597378 nova_compute[238941]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 27 08:30:29 np0005597378 nova_compute[238941]: and ``live_migration_inbound_addr`` respectively.
Jan 27 08:30:29 np0005597378 nova_compute[238941]: ).  Its value may be silently ignored in the future.#033[00m
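The deprecation warning above indicates this deployment still sets `live_migration_uri` (shown two lines below as `qemu+tls://%s/system`) rather than the two replacement options. A minimal sketch of the recommended `nova.conf` form, assuming the same qemu+tls scheme and a hypothetical migration-network address, would be:

```ini
[libvirt]
# Replaces the deprecated live_migration_uri = qemu+tls://%s/system.
# live_migration_scheme selects the transport; qemu+tls here matches
# live_migration_with_native_tls = True seen in this config dump.
live_migration_scheme = tls
# Address (or hostname) on the destination host that inbound live
# migrations should target; 192.0.2.10 is a placeholder, not from the log.
live_migration_inbound_addr = 192.0.2.10
```

With both options set, Nova composes the target URI itself, so the `%s` placeholder from the old single-option form is no longer needed.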
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.080 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.081 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.081 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.081 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.081 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.081 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.081 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.082 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.082 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.082 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.082 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.082 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.082 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.083 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.083 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.083 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.083 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.083 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.083 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.rbd_secret_uuid        = 4d8fd694-f443-5fb1-b612-70034b2f3c6e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.083 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.084 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.084 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.084 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.084 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.084 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.084 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.085 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.085 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.085 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.085 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.085 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.085 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.086 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.086 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.086 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.086 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.086 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.086 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.087 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.087 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.087 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.087 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.087 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.087 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.088 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.088 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.088 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.088 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.088 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.088 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.089 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.089 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.089 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.089 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.089 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.089 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.090 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.090 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.090 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.090 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.090 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.091 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.091 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.091 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.091 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.091 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.092 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.092 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.092 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.092 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.092 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.093 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.093 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.093 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.093 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.093 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.093 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.094 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.094 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.094 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.094 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.094 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.095 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.095 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.095 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.095 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.095 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.095 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.095 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.096 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.096 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.096 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.096 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.096 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.096 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.097 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.097 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.097 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.097 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.097 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.097 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.097 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.098 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.098 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.098 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.098 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.098 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.098 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.099 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.099 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.099 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.099 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.099 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.099 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.099 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.100 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.100 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.100 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.100 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.100 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.100 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.100 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.101 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.101 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.101 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.101 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.101 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.101 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.101 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.102 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.102 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.102 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.102 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.102 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.103 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.103 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.103 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.103 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.103 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.104 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.104 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.104 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.104 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.104 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.105 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.105 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.105 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.105 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.105 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.105 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.105 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.106 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.106 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.106 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.106 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.106 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.107 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.107 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.107 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.107 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.107 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.107 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.107 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.108 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.108 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.108 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.108 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.108 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.108 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.109 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.109 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.109 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.109 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.109 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.109 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.110 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.110 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.110 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.110 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.110 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.111 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.111 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.111 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.111 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.111 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.112 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.112 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.112 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.112 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.112 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.112 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.113 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.113 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.113 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.113 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.113 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.113 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.114 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.114 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.114 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.114 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.114 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.114 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.115 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.115 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.115 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.115 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.115 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.116 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.116 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.116 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.116 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.116 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.116 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.117 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.117 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.117 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.117 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.117 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.117 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.118 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.118 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.118 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.118 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.118 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.119 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.119 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.119 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.119 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.119 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.120 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.120 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.120 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.120 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.120 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.120 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.121 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.121 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.121 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.121 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.121 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.121 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.122 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.122 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.122 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.122 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.122 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.122 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.123 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.123 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.123 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.123 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.123 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.124 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.124 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.124 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.124 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.124 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.124 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.125 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.125 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.125 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.125 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.125 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.126 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.126 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.126 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.126 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.126 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.127 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.127 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.127 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.127 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.127 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.128 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.128 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.128 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.128 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.128 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.129 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.129 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.129 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.129 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.129 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.129 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.130 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.130 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.130 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.130 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.130 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.131 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.131 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.131 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.131 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.131 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.132 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.132 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.132 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.132 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.132 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.133 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.133 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.133 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.133 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.133 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.134 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.134 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.134 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.134 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.134 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.135 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.135 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.135 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.135 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.136 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.136 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.136 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.136 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.136 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.136 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.137 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.137 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.137 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.137 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.137 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.138 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.138 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.138 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.138 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.138 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.139 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.139 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.139 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.139 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.139 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.140 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.140 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.140 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.140 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.141 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.141 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.141 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.141 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.141 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.142 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.142 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.142 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.142 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.142 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.142 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.143 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.143 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.143 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.143 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.143 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.144 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.144 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.144 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.144 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.145 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.145 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.145 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.145 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.145 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.146 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.146 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.146 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.146 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.146 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.147 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.147 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.147 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.147 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.147 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.148 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.148 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.148 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.148 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.148 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.148 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.149 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.149 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.149 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.149 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.149 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.149 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.150 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.150 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.150 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.150 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.150 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.150 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.150 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.151 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.151 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.151 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.151 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.151 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.151 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.152 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.152 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.152 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.152 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.152 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.153 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.153 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.153 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.153 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.153 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.153 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.154 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.154 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.154 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.154 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.154 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.155 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.155 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.155 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.155 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.155 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.156 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.156 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.156 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.156 238945 DEBUG oslo_service.service [None req-8740bdc3-edd6-438b-8f7a-be31279b296b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.157 238945 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4023 keys, 7703779 bytes, temperature: kUnknown
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520629181361, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7703779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7675137, "index_size": 17460, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 95730, "raw_average_key_size": 23, "raw_value_size": 7600872, "raw_average_value_size": 1889, "num_data_blocks": 761, "num_entries": 4023, "num_filter_entries": 4023, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769520629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.181622) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7703779 bytes
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.183039) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.3 rd, 53.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.7 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 4440, records dropped: 417 output_compression: NoCompression
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.183060) EVENT_LOG_v1 {"time_micros": 1769520629183050, "job": 12, "event": "compaction_finished", "compaction_time_micros": 143236, "compaction_time_cpu_micros": 20103, "output_level": 6, "num_output_files": 1, "total_output_size": 7703779, "num_input_records": 4440, "num_output_records": 4023, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520629183660, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520629185797, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.038074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.185866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.185870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.185872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.185874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:30:29.185876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.201 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.202 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.202 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.202 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.214 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f7bfc53fd30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.219 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f7bfc53fd30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.220 238945 INFO nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Connection event '1' reason 'None'
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.229 238945 INFO nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Libvirt host capabilities <capabilities>
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <host>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <uuid>3df1c84e-2399-4242-b9b7-0012ac6a93e5</uuid>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <cpu>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <arch>x86_64</arch>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model>EPYC-Rome-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <vendor>AMD</vendor>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <microcode version='16777317'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <signature family='23' model='49' stepping='0'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='x2apic'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='tsc-deadline'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='osxsave'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='hypervisor'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='tsc_adjust'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='spec-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='stibp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='arch-capabilities'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='cmp_legacy'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='topoext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='virt-ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='lbrv'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='tsc-scale'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='vmcb-clean'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='pause-filter'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='pfthreshold'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='svme-addr-chk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='rdctl-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='skip-l1dfl-vmentry'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='mds-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature name='pschange-mc-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <pages unit='KiB' size='4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <pages unit='KiB' size='2048'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <pages unit='KiB' size='1048576'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </cpu>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <power_management>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <suspend_mem/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </power_management>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <iommu support='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <migration_features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <live/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <uri_transports>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <uri_transport>tcp</uri_transport>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <uri_transport>rdma</uri_transport>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </uri_transports>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </migration_features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <topology>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <cells num='1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <cell id='0'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:          <memory unit='KiB'>7864316</memory>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:          <pages unit='KiB' size='4'>1966079</pages>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:          <pages unit='KiB' size='2048'>0</pages>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:          <distances>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:            <sibling id='0' value='10'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:          </distances>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:          <cpus num='8'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:          </cpus>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        </cell>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </cells>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </topology>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <cache>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </cache>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <secmodel>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model>selinux</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <doi>0</doi>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </secmodel>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <secmodel>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model>dac</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <doi>0</doi>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </secmodel>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </host>
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <guest>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <os_type>hvm</os_type>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <arch name='i686'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <wordsize>32</wordsize>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <domain type='qemu'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <domain type='kvm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </arch>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <pae/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <nonpae/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <acpi default='on' toggle='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <apic default='on' toggle='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <cpuselection/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <deviceboot/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <disksnapshot default='on' toggle='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <externalSnapshot/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </guest>
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <guest>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <os_type>hvm</os_type>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <arch name='x86_64'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <wordsize>64</wordsize>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <domain type='qemu'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <domain type='kvm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </arch>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <acpi default='on' toggle='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <apic default='on' toggle='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <cpuselection/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <deviceboot/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <disksnapshot default='on' toggle='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <externalSnapshot/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </guest>
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 
Jan 27 08:30:29 np0005597378 nova_compute[238941]: </capabilities>
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.232 238945 WARNING nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.232 238945 DEBUG nova.virt.libvirt.volume.mount [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.237 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.268 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 27 08:30:29 np0005597378 nova_compute[238941]: <domainCapabilities>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <path>/usr/libexec/qemu-kvm</path>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <domain>kvm</domain>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <arch>i686</arch>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <vcpu max='4096'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <iothreads supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <os supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <enum name='firmware'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <loader supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>rom</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pflash</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='readonly'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>yes</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>no</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='secure'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>no</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </loader>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <cpu>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='host-passthrough' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='hostPassthroughMigratable'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>on</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>off</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='maximum' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='maximumMigratable'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>on</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>off</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='host-model' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <vendor>AMD</vendor>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='x2apic'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='tsc-deadline'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='hypervisor'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='tsc_adjust'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='spec-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='stibp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='cmp_legacy'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='overflow-recov'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='succor'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='amd-ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='virt-ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='lbrv'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='tsc-scale'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='vmcb-clean'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='flushbyasid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='pause-filter'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='pfthreshold'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='svme-addr-chk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='disable' name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='custom' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='ClearwaterForest'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ddpd-u'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sha512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm3'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='ClearwaterForest-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ddpd-u'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sha512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm3'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cooperlake'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cooperlake-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cooperlake-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Dhyana-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Genoa'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Genoa-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Genoa-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fs-gs-base-ns'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='perfmon-v2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Turin'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vp2intersect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fs-gs-base-ns'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibpb-brtype'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='perfmon-v2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbpb'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='srso-user-kernel-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Turin-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vp2intersect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fs-gs-base-ns'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibpb-brtype'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='perfmon-v2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbpb'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='srso-user-kernel-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-128'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-256'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-128'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-256'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v6'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v7'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='KnightsMill'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4fmaps'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4vnniw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512er'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512pf'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='KnightsMill-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4fmaps'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4vnniw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512er'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512pf'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G4-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tbm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G5-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tbm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='athlon'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='athlon-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='core2duo'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='core2duo-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='coreduo'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='coreduo-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='n270'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='n270-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='phenom'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='phenom-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <memoryBacking supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <enum name='sourceType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>file</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>anonymous</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>memfd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </memoryBacking>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <disk supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='diskDevice'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>disk</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>cdrom</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>floppy</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>lun</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='bus'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>fdc</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>scsi</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>usb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>sata</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-non-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <graphics supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vnc</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>egl-headless</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>dbus</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <video supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='modelType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vga</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>cirrus</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>none</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>bochs</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>ramfb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <hostdev supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='mode'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>subsystem</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='startupPolicy'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>default</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>mandatory</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>requisite</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>optional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='subsysType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>usb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pci</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>scsi</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='capsType'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='pciBackend'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </hostdev>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <rng supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-non-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendModel'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>random</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>egd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>builtin</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <filesystem supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='driverType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>path</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>handle</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtiofs</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </filesystem>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <tpm supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tpm-tis</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tpm-crb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendModel'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>emulator</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>external</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendVersion'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>2.0</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </tpm>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <redirdev supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='bus'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>usb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </redirdev>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <channel supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pty</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>unix</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </channel>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <crypto supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>qemu</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendModel'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>builtin</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </crypto>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <interface supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>default</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>passt</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <panic supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>isa</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>hyperv</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </panic>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <console supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>null</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vc</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pty</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>dev</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>file</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pipe</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>stdio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>udp</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tcp</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>unix</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>qemu-vdagent</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>dbus</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </console>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <gic supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <vmcoreinfo supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <genid supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <backingStoreInput supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <backup supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <async-teardown supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <s390-pv supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <ps2 supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <tdx supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <sev supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <sgx supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <hyperv supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='features'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>relaxed</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vapic</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>spinlocks</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vpindex</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>runtime</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>synic</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>stimer</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>reset</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vendor_id</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>frequencies</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>reenlightenment</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tlbflush</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>ipi</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>avic</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>emsr_bitmap</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>xmm_input</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <defaults>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <spinlocks>4095</spinlocks>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <stimer_direct>on</stimer_direct>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <tlbflush_direct>on</tlbflush_direct>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <tlbflush_extended>on</tlbflush_extended>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </defaults>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </hyperv>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <launchSecurity supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]: </domainCapabilities>
Jan 27 08:30:29 np0005597378 nova_compute[238941]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.275 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 27 08:30:29 np0005597378 nova_compute[238941]: <domainCapabilities>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <path>/usr/libexec/qemu-kvm</path>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <domain>kvm</domain>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <arch>i686</arch>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <vcpu max='240'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <iothreads supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <os supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <enum name='firmware'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <loader supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>rom</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pflash</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='readonly'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>yes</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>no</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='secure'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>no</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </loader>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <cpu>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='host-passthrough' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='hostPassthroughMigratable'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>on</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>off</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='maximum' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='maximumMigratable'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>on</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>off</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='host-model' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <vendor>AMD</vendor>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='x2apic'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='tsc-deadline'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='hypervisor'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='tsc_adjust'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='spec-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='stibp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='cmp_legacy'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='overflow-recov'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='succor'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='amd-ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='virt-ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='lbrv'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='tsc-scale'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='vmcb-clean'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='flushbyasid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='pause-filter'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='pfthreshold'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='svme-addr-chk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='disable' name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='custom' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='ClearwaterForest'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ddpd-u'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sha512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm3'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='ClearwaterForest-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ddpd-u'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sha512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm3'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cooperlake'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cooperlake-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cooperlake-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Dhyana-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Genoa'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Genoa-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Genoa-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fs-gs-base-ns'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='perfmon-v2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Turin'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vp2intersect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fs-gs-base-ns'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibpb-brtype'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='perfmon-v2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbpb'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='srso-user-kernel-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Turin-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vp2intersect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fs-gs-base-ns'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibpb-brtype'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='perfmon-v2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbpb'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='srso-user-kernel-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-128'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-256'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-128'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-256'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 lvm[239630]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 lvm[239630]: VG ceph_vg0 finished
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v6'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v7'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='KnightsMill'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4fmaps'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4vnniw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512er'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512pf'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='KnightsMill-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4fmaps'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4vnniw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512er'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512pf'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 08:30:29 np0005597378 lvm[239633]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G4-v1'>
Jan 27 08:30:29 np0005597378 lvm[239633]: VG ceph_vg1 finished
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tbm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G5-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tbm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='athlon'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='athlon-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='core2duo'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='core2duo-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='coreduo'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='coreduo-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='n270'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='n270-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='phenom'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='phenom-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <memoryBacking supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <enum name='sourceType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>file</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>anonymous</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>memfd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </memoryBacking>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <disk supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='diskDevice'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>disk</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>cdrom</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>floppy</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>lun</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='bus'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>ide</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>fdc</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>scsi</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>usb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>sata</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-non-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <graphics supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vnc</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>egl-headless</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>dbus</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <video supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='modelType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vga</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>cirrus</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>none</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>bochs</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>ramfb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <hostdev supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='mode'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>subsystem</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='startupPolicy'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>default</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>mandatory</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>requisite</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>optional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='subsysType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>usb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pci</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>scsi</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='capsType'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='pciBackend'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </hostdev>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <rng supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-non-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendModel'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>random</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>egd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>builtin</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <filesystem supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='driverType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>path</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>handle</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtiofs</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </filesystem>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <tpm supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tpm-tis</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tpm-crb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendModel'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>emulator</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>external</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendVersion'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>2.0</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </tpm>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <redirdev supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='bus'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>usb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </redirdev>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <channel supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pty</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>unix</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </channel>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <crypto supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>qemu</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendModel'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>builtin</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </crypto>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <interface supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>default</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>passt</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <panic supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>isa</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>hyperv</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </panic>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <console supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>null</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vc</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pty</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>dev</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>file</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pipe</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>stdio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>udp</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tcp</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>unix</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>qemu-vdagent</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>dbus</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </console>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <gic supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <vmcoreinfo supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <genid supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <backingStoreInput supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <backup supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <async-teardown supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <s390-pv supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <ps2 supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <tdx supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <sev supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <sgx supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <hyperv supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='features'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>relaxed</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vapic</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>spinlocks</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vpindex</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>runtime</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>synic</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>stimer</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>reset</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vendor_id</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>frequencies</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>reenlightenment</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tlbflush</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>ipi</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>avic</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>emsr_bitmap</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>xmm_input</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <defaults>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <spinlocks>4095</spinlocks>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <stimer_direct>on</stimer_direct>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <tlbflush_direct>on</tlbflush_direct>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <tlbflush_extended>on</tlbflush_extended>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </defaults>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </hyperv>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <launchSecurity supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]: </domainCapabilities>
Jan 27 08:30:29 np0005597378 nova_compute[238941]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.335 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.340 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 27 08:30:29 np0005597378 nova_compute[238941]: <domainCapabilities>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <path>/usr/libexec/qemu-kvm</path>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <domain>kvm</domain>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <arch>x86_64</arch>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <vcpu max='240'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <iothreads supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <os supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <enum name='firmware'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <loader supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>rom</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pflash</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='readonly'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>yes</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>no</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='secure'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>no</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </loader>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <cpu>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='host-passthrough' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='hostPassthroughMigratable'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>on</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>off</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='maximum' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='maximumMigratable'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>on</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>off</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='host-model' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <vendor>AMD</vendor>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='x2apic'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='tsc-deadline'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='hypervisor'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='tsc_adjust'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='spec-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='stibp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='cmp_legacy'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='overflow-recov'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='succor'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='amd-ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='virt-ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='lbrv'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='tsc-scale'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='vmcb-clean'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='flushbyasid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='pause-filter'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='pfthreshold'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='svme-addr-chk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='disable' name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='custom' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='ClearwaterForest'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ddpd-u'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sha512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm3'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='ClearwaterForest-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ddpd-u'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sha512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm3'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cooperlake'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 lvm[239635]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 lvm[239635]: VG ceph_vg2 finished
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cooperlake-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cooperlake-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Dhyana-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Genoa'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Genoa-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Genoa-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fs-gs-base-ns'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='perfmon-v2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 lvm[239636]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 lvm[239636]: VG ceph_vg0 finished
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Turin'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vp2intersect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fs-gs-base-ns'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibpb-brtype'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='perfmon-v2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbpb'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='srso-user-kernel-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Turin-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vp2intersect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fs-gs-base-ns'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibpb-brtype'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='perfmon-v2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbpb'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='srso-user-kernel-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-128'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-256'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-128'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-256'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v6'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v7'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='KnightsMill'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4fmaps'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4vnniw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512er'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512pf'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='KnightsMill-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4fmaps'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4vnniw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512er'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512pf'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G4-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tbm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G5-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tbm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='athlon'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='athlon-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='core2duo'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='core2duo-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='coreduo'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='coreduo-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='n270'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='n270-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='phenom'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='phenom-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <memoryBacking supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <enum name='sourceType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>file</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>anonymous</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>memfd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </memoryBacking>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <disk supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='diskDevice'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>disk</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>cdrom</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>floppy</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>lun</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='bus'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>ide</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>fdc</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>scsi</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>usb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>sata</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-non-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <graphics supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vnc</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>egl-headless</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>dbus</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <video supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='modelType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vga</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>cirrus</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>none</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>bochs</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>ramfb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <hostdev supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='mode'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>subsystem</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='startupPolicy'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>default</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>mandatory</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>requisite</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>optional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='subsysType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>usb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pci</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>scsi</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='capsType'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='pciBackend'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </hostdev>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <rng supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-non-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendModel'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>random</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>egd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>builtin</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <filesystem supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='driverType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>path</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>handle</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtiofs</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </filesystem>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <tpm supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tpm-tis</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tpm-crb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendModel'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>emulator</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>external</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendVersion'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>2.0</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </tpm>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <redirdev supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='bus'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>usb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </redirdev>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <channel supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pty</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>unix</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </channel>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <crypto supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>qemu</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendModel'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>builtin</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </crypto>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <interface supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>default</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>passt</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <panic supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>isa</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>hyperv</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </panic>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <console supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>null</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vc</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pty</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>dev</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>file</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pipe</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>stdio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>udp</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tcp</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>unix</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>qemu-vdagent</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>dbus</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </console>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <gic supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <vmcoreinfo supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <genid supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <backingStoreInput supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <backup supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <async-teardown supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <s390-pv supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <ps2 supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <tdx supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <sev supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <sgx supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <hyperv supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='features'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>relaxed</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vapic</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>spinlocks</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vpindex</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>runtime</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>synic</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>stimer</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>reset</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vendor_id</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>frequencies</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>reenlightenment</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tlbflush</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>ipi</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>avic</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>emsr_bitmap</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>xmm_input</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <defaults>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <spinlocks>4095</spinlocks>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <stimer_direct>on</stimer_direct>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <tlbflush_direct>on</tlbflush_direct>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <tlbflush_extended>on</tlbflush_extended>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </defaults>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </hyperv>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <launchSecurity supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]: </domainCapabilities>
Jan 27 08:30:29 np0005597378 nova_compute[238941]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.434 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 27 08:30:29 np0005597378 nova_compute[238941]: <domainCapabilities>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <path>/usr/libexec/qemu-kvm</path>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <domain>kvm</domain>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <arch>x86_64</arch>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <vcpu max='4096'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <iothreads supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <os supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <enum name='firmware'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>efi</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <loader supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>rom</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pflash</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='readonly'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>yes</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>no</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='secure'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>yes</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>no</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </loader>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <cpu>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='host-passthrough' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='hostPassthroughMigratable'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>on</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>off</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='maximum' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='maximumMigratable'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>on</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>off</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='host-model' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <vendor>AMD</vendor>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='x2apic'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='tsc-deadline'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='hypervisor'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='tsc_adjust'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='spec-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='stibp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='cmp_legacy'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='overflow-recov'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='succor'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='amd-ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='virt-ssbd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='lbrv'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='tsc-scale'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='vmcb-clean'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='flushbyasid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='pause-filter'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='pfthreshold'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='svme-addr-chk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <feature policy='disable' name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <mode name='custom' supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Broadwell-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cascadelake-Server-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='ClearwaterForest'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ddpd-u'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sha512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm3'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='ClearwaterForest-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ddpd-u'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sha512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm3'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sm4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cooperlake'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cooperlake-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Cooperlake-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Denverton-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Dhyana-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Genoa'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Genoa-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Genoa-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fs-gs-base-ns'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='perfmon-v2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Milan-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Rome-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Turin'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vp2intersect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fs-gs-base-ns'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibpb-brtype'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='perfmon-v2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbpb'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='srso-user-kernel-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-Turin-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amd-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='auto-ibrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vp2intersect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fs-gs-base-ns'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibpb-brtype'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='no-nested-data-bp'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='null-sel-clr-base'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='perfmon-v2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbpb'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='srso-user-kernel-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='stibp-always-on'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='EPYC-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-128'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-256'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='GraniteRapids-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-128'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-256'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx10-512'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='prefetchiti'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Haswell-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-noTSX'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v6'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Icelake-Server-v7'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='IvyBridge-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='KnightsMill'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4fmaps'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4vnniw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512er'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512pf'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='KnightsMill-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4fmaps'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-4vnniw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512er'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512pf'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G4-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tbm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Opteron_G5-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fma4'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tbm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xop'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 systemd[1]: libpod-702ca31c348f9fc90f8d1c8504c309cc90e7f650a3c1d539e4e9e4f3254d284f.scope: Deactivated successfully.
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 podman[239514]: 2026-01-27 13:30:29.54680112 +0000 UTC m=+0.969218017 container died 702ca31c348f9fc90f8d1c8504c309cc90e7f650a3c1d539e4e9e4f3254d284f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kowalevski, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:30:29 np0005597378 systemd[1]: libpod-702ca31c348f9fc90f8d1c8504c309cc90e7f650a3c1d539e4e9e4f3254d284f.scope: Consumed 1.234s CPU time.
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SapphireRapids-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='amx-tile'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-bf16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-fp16'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512-vpopcntdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bitalg'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vbmi2'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrc'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fzrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='la57'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='taa-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='tsx-ldtrk'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='SierraForest-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ifma'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-ne-convert'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx-vnni-int8'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bhi-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='bus-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cmpccxadd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fbsdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='fsrs'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ibrs-all'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='intel-psfd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ipred-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='lam'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mcdt-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pbrsb-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='psdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rrsba-ctrl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='sbdr-ssdp-no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='serialize'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vaes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='vpclmulqdq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Client-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='hle'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='rtm'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Skylake-Server-v5'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512bw'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512cd'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512dq'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512f'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='avx512vl'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='invpcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pcid'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='pku'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='mpx'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v2'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v3'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='core-capability'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='split-lock-detect'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='Snowridge-v4'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='cldemote'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='erms'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='gfni'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdir64b'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='movdiri'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='xsaves'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='athlon'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='athlon-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='core2duo'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='core2duo-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='coreduo'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='coreduo-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='n270'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 systemd[1]: var-lib-containers-storage-overlay-95a97cef951afbdaf2c5e29b3af2c0ed856c00da913eb4fc851ceacdfc7ce43d-merged.mount: Deactivated successfully.
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='n270-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='ss'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='phenom'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <blockers model='phenom-v1'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnow'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <feature name='3dnowext'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </blockers>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </mode>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <memoryBacking supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <enum name='sourceType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>file</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>anonymous</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <value>memfd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </memoryBacking>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <disk supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='diskDevice'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>disk</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>cdrom</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>floppy</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>lun</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='bus'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>fdc</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>scsi</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>usb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>sata</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-non-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <graphics supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vnc</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>egl-headless</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>dbus</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <video supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='modelType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vga</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>cirrus</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>none</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>bochs</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>ramfb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <hostdev supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='mode'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>subsystem</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='startupPolicy'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>default</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>mandatory</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>requisite</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>optional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='subsysType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>usb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pci</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>scsi</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='capsType'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='pciBackend'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </hostdev>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <rng supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtio-non-transitional</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendModel'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>random</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>egd</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>builtin</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <filesystem supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='driverType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>path</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>handle</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>virtiofs</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </filesystem>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <tpm supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tpm-tis</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tpm-crb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendModel'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>emulator</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>external</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendVersion'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>2.0</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </tpm>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <redirdev supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='bus'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>usb</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </redirdev>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <channel supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pty</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>unix</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </channel>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <crypto supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>qemu</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendModel'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>builtin</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </crypto>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <interface supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='backendType'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>default</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>passt</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <panic supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='model'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>isa</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>hyperv</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </panic>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <console supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='type'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>null</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vc</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pty</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>dev</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>file</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>pipe</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>stdio</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>udp</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tcp</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>unix</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>qemu-vdagent</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>dbus</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </console>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <gic supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <vmcoreinfo supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <genid supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <backingStoreInput supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <backup supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <async-teardown supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <s390-pv supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <ps2 supported='yes'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <tdx supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <sev supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <sgx supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <hyperv supported='yes'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <enum name='features'>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>relaxed</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vapic</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>spinlocks</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vpindex</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>runtime</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>synic</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>stimer</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>reset</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>vendor_id</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>frequencies</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>reenlightenment</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>tlbflush</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>ipi</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>avic</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>emsr_bitmap</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <value>xmm_input</value>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </enum>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      <defaults>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <spinlocks>4095</spinlocks>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <stimer_direct>on</stimer_direct>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <tlbflush_direct>on</tlbflush_direct>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <tlbflush_extended>on</tlbflush_extended>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:      </defaults>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    </hyperv>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:    <launchSecurity supported='no'/>
Jan 27 08:30:29 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:30:29 np0005597378 nova_compute[238941]: </domainCapabilities>
Jan 27 08:30:29 np0005597378 nova_compute[238941]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.497 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.498 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.498 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.505 238945 INFO nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Secure Boot support detected#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.506 238945 INFO nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.507 238945 INFO nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.516 238945 DEBUG nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.560 238945 INFO nova.virt.node [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Determined node identity cc8b0052-0829-4cee-8aba-4745f236afe4 from /var/lib/nova/compute_id#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.586 238945 WARNING nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Compute nodes ['cc8b0052-0829-4cee-8aba-4745f236afe4'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 27 08:30:29 np0005597378 podman[239514]: 2026-01-27 13:30:29.600354151 +0000 UTC m=+1.022771048 container remove 702ca31c348f9fc90f8d1c8504c309cc90e7f650a3c1d539e4e9e4f3254d284f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:30:29 np0005597378 systemd[1]: libpod-conmon-702ca31c348f9fc90f8d1c8504c309cc90e7f650a3c1d539e4e9e4f3254d284f.scope: Deactivated successfully.
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.626 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.659 238945 WARNING nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.659 238945 DEBUG oslo_concurrency.lockutils [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.659 238945 DEBUG oslo_concurrency.lockutils [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.659 238945 DEBUG oslo_concurrency.lockutils [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.660 238945 DEBUG nova.compute.resource_tracker [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:30:29 np0005597378 nova_compute[238941]: 2026-01-27 13:30:29.660 238945 DEBUG oslo_concurrency.processutils [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:30:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:30:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:30:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:30:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:30:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860092428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:30:30 np0005597378 nova_compute[238941]: 2026-01-27 13:30:30.243 238945 DEBUG oslo_concurrency.processutils [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:30:30 np0005597378 systemd[1]: Starting libvirt nodedev daemon...
Jan 27 08:30:30 np0005597378 systemd[1]: Started libvirt nodedev daemon.
Jan 27 08:30:30 np0005597378 nova_compute[238941]: 2026-01-27 13:30:30.572 238945 WARNING nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:30:30 np0005597378 nova_compute[238941]: 2026-01-27 13:30:30.573 238945 DEBUG nova.compute.resource_tracker [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:30:30 np0005597378 nova_compute[238941]: 2026-01-27 13:30:30.574 238945 DEBUG oslo_concurrency.lockutils [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:30:30 np0005597378 nova_compute[238941]: 2026-01-27 13:30:30.574 238945 DEBUG oslo_concurrency.lockutils [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:30:30 np0005597378 nova_compute[238941]: 2026-01-27 13:30:30.602 238945 WARNING nova.compute.resource_tracker [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] No compute node record for compute-0.ctlplane.example.com:cc8b0052-0829-4cee-8aba-4745f236afe4: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host cc8b0052-0829-4cee-8aba-4745f236afe4 could not be found.#033[00m
Jan 27 08:30:30 np0005597378 nova_compute[238941]: 2026-01-27 13:30:30.623 238945 INFO nova.compute.resource_tracker [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: cc8b0052-0829-4cee-8aba-4745f236afe4#033[00m
Jan 27 08:30:30 np0005597378 nova_compute[238941]: 2026-01-27 13:30:30.689 238945 DEBUG nova.compute.resource_tracker [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:30:30 np0005597378 nova_compute[238941]: 2026-01-27 13:30:30.689 238945 DEBUG nova.compute.resource_tracker [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:30:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:31 np0005597378 nova_compute[238941]: 2026-01-27 13:30:31.606 238945 INFO nova.scheduler.client.report [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [req-2c739f62-b16e-439e-a56b-482073bdbd19] Created resource provider record via placement API for resource provider with UUID cc8b0052-0829-4cee-8aba-4745f236afe4 and name compute-0.ctlplane.example.com.#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.006 238945 DEBUG oslo_concurrency.processutils [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:30:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:30:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2795700706' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.553 238945 DEBUG oslo_concurrency.processutils [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.558 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 27 08:30:32 np0005597378 nova_compute[238941]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.558 238945 INFO nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] kernel doesn't support AMD SEV#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.559 238945 DEBUG nova.compute.provider_tree [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.560 238945 DEBUG nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.611 238945 DEBUG nova.scheduler.client.report [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Updated inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.611 238945 DEBUG nova.compute.provider_tree [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Updating resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.611 238945 DEBUG nova.compute.provider_tree [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.699 238945 DEBUG nova.compute.provider_tree [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Updating resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.718 238945 DEBUG nova.compute.resource_tracker [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.719 238945 DEBUG oslo_concurrency.lockutils [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.719 238945 DEBUG nova.service [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.798 238945 DEBUG nova.service [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Jan 27 08:30:32 np0005597378 nova_compute[238941]: 2026-01-27 13:30:32.798 238945 DEBUG nova.servicegroup.drivers.db [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Jan 27 08:30:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:30:46.278 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:30:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:30:46.278 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:30:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:30:46.278 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:30:46 np0005597378 podman[239744]: 2026-01-27 13:30:46.744085036 +0000 UTC m=+0.085745073 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:30:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:30:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:30:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:30:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:30:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:30:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:30:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:50 np0005597378 podman[239771]: 2026-01-27 13:30:50.745278892 +0000 UTC m=+0.087715760 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:30:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:30:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/90609947' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:30:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:30:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/90609947' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:30:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:30:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3924550881' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:30:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:30:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3924550881' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:30:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:30:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3110168320' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:30:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:30:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3110168320' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:30:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:30:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:30:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:14 np0005597378 nova_compute[238941]: 2026-01-27 13:31:14.800 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:31:14 np0005597378 nova_compute[238941]: 2026-01-27 13:31:14.826 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:31:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:31:16
Jan 27 08:31:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:31:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:31:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['.mgr', 'vms', 'images', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'backups']
Jan 27 08:31:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:17 np0005597378 podman[239790]: 2026-01-27 13:31:17.72230059 +0000 UTC m=+0.068801220 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:31:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:31:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:21 np0005597378 podman[239817]: 2026-01-27 13:31:21.708557456 +0000 UTC m=+0.054923158 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 08:31:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 27 08:31:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4112531831' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 27 08:31:24 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14328 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 27 08:31:24 np0005597378 ceph-mgr[75385]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 27 08:31:24 np0005597378 ceph-mgr[75385]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 27 08:31:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:31:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.395 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.395 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.395 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.396 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.396 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.396 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.396 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.397 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.397 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.419 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.419 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:31:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:31:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3127207124' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:31:28 np0005597378 nova_compute[238941]: 2026-01-27 13:31:28.961 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:31:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:29 np0005597378 nova_compute[238941]: 2026-01-27 13:31:29.108 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:31:29 np0005597378 nova_compute[238941]: 2026-01-27 13:31:29.109 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5149MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:31:29 np0005597378 nova_compute[238941]: 2026-01-27 13:31:29.109 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:31:29 np0005597378 nova_compute[238941]: 2026-01-27 13:31:29.110 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:31:29 np0005597378 nova_compute[238941]: 2026-01-27 13:31:29.212 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:31:29 np0005597378 nova_compute[238941]: 2026-01-27 13:31:29.213 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:31:29 np0005597378 nova_compute[238941]: 2026-01-27 13:31:29.234 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:31:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:31:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/872448905' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:31:29 np0005597378 nova_compute[238941]: 2026-01-27 13:31:29.741 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:31:29 np0005597378 nova_compute[238941]: 2026-01-27 13:31:29.748 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:31:29 np0005597378 nova_compute[238941]: 2026-01-27 13:31:29.796 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:31:29 np0005597378 nova_compute[238941]: 2026-01-27 13:31:29.828 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:31:29 np0005597378 nova_compute[238941]: 2026-01-27 13:31:29.828 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:31:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:31:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:31:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:31:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:31:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:31:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:31:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:31:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:31:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:31:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:31:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:31:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:31:30 np0005597378 podman[240023]: 2026-01-27 13:31:30.912478021 +0000 UTC m=+0.041615614 container create 5e2dc7904b583eb359c67626ad76904244177feb1fb4e1dbaef78ed304923561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:30 np0005597378 systemd[1]: Started libpod-conmon-5e2dc7904b583eb359c67626ad76904244177feb1fb4e1dbaef78ed304923561.scope.
Jan 27 08:31:30 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:31:30 np0005597378 podman[240023]: 2026-01-27 13:31:30.890944895 +0000 UTC m=+0.020082498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:31:30 np0005597378 podman[240023]: 2026-01-27 13:31:30.990641439 +0000 UTC m=+0.119779042 container init 5e2dc7904b583eb359c67626ad76904244177feb1fb4e1dbaef78ed304923561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:31:30 np0005597378 podman[240023]: 2026-01-27 13:31:30.99664193 +0000 UTC m=+0.125779513 container start 5e2dc7904b583eb359c67626ad76904244177feb1fb4e1dbaef78ed304923561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:31:31 np0005597378 podman[240023]: 2026-01-27 13:31:31.000470342 +0000 UTC m=+0.129607955 container attach 5e2dc7904b583eb359c67626ad76904244177feb1fb4e1dbaef78ed304923561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:31:31 np0005597378 hardcore_morse[240039]: 167 167
Jan 27 08:31:31 np0005597378 systemd[1]: libpod-5e2dc7904b583eb359c67626ad76904244177feb1fb4e1dbaef78ed304923561.scope: Deactivated successfully.
Jan 27 08:31:31 np0005597378 conmon[240039]: conmon 5e2dc7904b583eb359c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e2dc7904b583eb359c67626ad76904244177feb1fb4e1dbaef78ed304923561.scope/container/memory.events
Jan 27 08:31:31 np0005597378 podman[240023]: 2026-01-27 13:31:31.003465363 +0000 UTC m=+0.132602936 container died 5e2dc7904b583eb359c67626ad76904244177feb1fb4e1dbaef78ed304923561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:31:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:31:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:31:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:31:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-800c3eb12306ff1a351b042a5f65a0345ea0fbf9e9ed151a1abf636d81b8ceee-merged.mount: Deactivated successfully.
Jan 27 08:31:31 np0005597378 podman[240023]: 2026-01-27 13:31:31.046924114 +0000 UTC m=+0.176061697 container remove 5e2dc7904b583eb359c67626ad76904244177feb1fb4e1dbaef78ed304923561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 08:31:31 np0005597378 systemd[1]: libpod-conmon-5e2dc7904b583eb359c67626ad76904244177feb1fb4e1dbaef78ed304923561.scope: Deactivated successfully.
Jan 27 08:31:31 np0005597378 podman[240061]: 2026-01-27 13:31:31.230578242 +0000 UTC m=+0.061584307 container create 7fcd57c58e7537b07fe731c4179af4aee990a782dad7a4a3924fe74d71eac6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:31:31 np0005597378 systemd[1]: Started libpod-conmon-7fcd57c58e7537b07fe731c4179af4aee990a782dad7a4a3924fe74d71eac6b1.scope.
Jan 27 08:31:31 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:31:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c9fba4a463c2fb348b2263f1a3ce337d1dde6aaf8592f7901a02a2946bd0f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c9fba4a463c2fb348b2263f1a3ce337d1dde6aaf8592f7901a02a2946bd0f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c9fba4a463c2fb348b2263f1a3ce337d1dde6aaf8592f7901a02a2946bd0f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c9fba4a463c2fb348b2263f1a3ce337d1dde6aaf8592f7901a02a2946bd0f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c9fba4a463c2fb348b2263f1a3ce337d1dde6aaf8592f7901a02a2946bd0f6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:31 np0005597378 podman[240061]: 2026-01-27 13:31:31.200740875 +0000 UTC m=+0.031746990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:31:31 np0005597378 podman[240061]: 2026-01-27 13:31:31.315735588 +0000 UTC m=+0.146741663 container init 7fcd57c58e7537b07fe731c4179af4aee990a782dad7a4a3924fe74d71eac6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:31:31 np0005597378 podman[240061]: 2026-01-27 13:31:31.323486825 +0000 UTC m=+0.154492900 container start 7fcd57c58e7537b07fe731c4179af4aee990a782dad7a4a3924fe74d71eac6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:31 np0005597378 podman[240061]: 2026-01-27 13:31:31.335128597 +0000 UTC m=+0.166134672 container attach 7fcd57c58e7537b07fe731c4179af4aee990a782dad7a4a3924fe74d71eac6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chandrasekhar, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:31:31 np0005597378 fervent_chandrasekhar[240078]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:31:31 np0005597378 fervent_chandrasekhar[240078]: --> All data devices are unavailable
Jan 27 08:31:31 np0005597378 systemd[1]: libpod-7fcd57c58e7537b07fe731c4179af4aee990a782dad7a4a3924fe74d71eac6b1.scope: Deactivated successfully.
Jan 27 08:31:31 np0005597378 podman[240061]: 2026-01-27 13:31:31.761399869 +0000 UTC m=+0.592405924 container died 7fcd57c58e7537b07fe731c4179af4aee990a782dad7a4a3924fe74d71eac6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 08:31:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-38c9fba4a463c2fb348b2263f1a3ce337d1dde6aaf8592f7901a02a2946bd0f6-merged.mount: Deactivated successfully.
Jan 27 08:31:31 np0005597378 podman[240061]: 2026-01-27 13:31:31.804078709 +0000 UTC m=+0.635084764 container remove 7fcd57c58e7537b07fe731c4179af4aee990a782dad7a4a3924fe74d71eac6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chandrasekhar, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:31 np0005597378 systemd[1]: libpod-conmon-7fcd57c58e7537b07fe731c4179af4aee990a782dad7a4a3924fe74d71eac6b1.scope: Deactivated successfully.
Jan 27 08:31:32 np0005597378 podman[240172]: 2026-01-27 13:31:32.25083046 +0000 UTC m=+0.050680177 container create 1f1369c06703eb44fce7dfa07fc3a97ad9729f6c1a230676b7a77b2183f01e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_noyce, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:31:32 np0005597378 systemd[1]: Started libpod-conmon-1f1369c06703eb44fce7dfa07fc3a97ad9729f6c1a230676b7a77b2183f01e51.scope.
Jan 27 08:31:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:31:32 np0005597378 podman[240172]: 2026-01-27 13:31:32.227416744 +0000 UTC m=+0.027266491 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:31:32 np0005597378 podman[240172]: 2026-01-27 13:31:32.325041613 +0000 UTC m=+0.124891360 container init 1f1369c06703eb44fce7dfa07fc3a97ad9729f6c1a230676b7a77b2183f01e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_noyce, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 08:31:32 np0005597378 podman[240172]: 2026-01-27 13:31:32.331410733 +0000 UTC m=+0.131260450 container start 1f1369c06703eb44fce7dfa07fc3a97ad9729f6c1a230676b7a77b2183f01e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:31:32 np0005597378 podman[240172]: 2026-01-27 13:31:32.335236485 +0000 UTC m=+0.135086222 container attach 1f1369c06703eb44fce7dfa07fc3a97ad9729f6c1a230676b7a77b2183f01e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:32 np0005597378 sleepy_noyce[240189]: 167 167
Jan 27 08:31:32 np0005597378 systemd[1]: libpod-1f1369c06703eb44fce7dfa07fc3a97ad9729f6c1a230676b7a77b2183f01e51.scope: Deactivated successfully.
Jan 27 08:31:32 np0005597378 podman[240172]: 2026-01-27 13:31:32.33804348 +0000 UTC m=+0.137893197 container died 1f1369c06703eb44fce7dfa07fc3a97ad9729f6c1a230676b7a77b2183f01e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:31:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7c3ce890a1f41dc4d6b688dd4f1f55b9cc170fca8cb84dc4aeb9294e8f775be0-merged.mount: Deactivated successfully.
Jan 27 08:31:32 np0005597378 podman[240172]: 2026-01-27 13:31:32.374203947 +0000 UTC m=+0.174053654 container remove 1f1369c06703eb44fce7dfa07fc3a97ad9729f6c1a230676b7a77b2183f01e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_noyce, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:31:32 np0005597378 systemd[1]: libpod-conmon-1f1369c06703eb44fce7dfa07fc3a97ad9729f6c1a230676b7a77b2183f01e51.scope: Deactivated successfully.
Jan 27 08:31:32 np0005597378 podman[240213]: 2026-01-27 13:31:32.530115423 +0000 UTC m=+0.047323986 container create e3a1b69c2d0e2ff5658fd614ab25418f72283771c3ba620f9dc20001af3c6ed2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 08:31:32 np0005597378 systemd[1]: Started libpod-conmon-e3a1b69c2d0e2ff5658fd614ab25418f72283771c3ba620f9dc20001af3c6ed2.scope.
Jan 27 08:31:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:31:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa1c13ae19a520d036df8cdc95c0051fa38f4fff27a62edc4605cce56470adbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa1c13ae19a520d036df8cdc95c0051fa38f4fff27a62edc4605cce56470adbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa1c13ae19a520d036df8cdc95c0051fa38f4fff27a62edc4605cce56470adbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa1c13ae19a520d036df8cdc95c0051fa38f4fff27a62edc4605cce56470adbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:32 np0005597378 podman[240213]: 2026-01-27 13:31:32.504643153 +0000 UTC m=+0.021851796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:31:32 np0005597378 podman[240213]: 2026-01-27 13:31:32.612063274 +0000 UTC m=+0.129271837 container init e3a1b69c2d0e2ff5658fd614ab25418f72283771c3ba620f9dc20001af3c6ed2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:32 np0005597378 podman[240213]: 2026-01-27 13:31:32.61717361 +0000 UTC m=+0.134382173 container start e3a1b69c2d0e2ff5658fd614ab25418f72283771c3ba620f9dc20001af3c6ed2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:31:32 np0005597378 podman[240213]: 2026-01-27 13:31:32.630023423 +0000 UTC m=+0.147232016 container attach e3a1b69c2d0e2ff5658fd614ab25418f72283771c3ba620f9dc20001af3c6ed2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_satoshi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]: {
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:    "0": [
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:        {
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "devices": [
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "/dev/loop3"
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            ],
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_name": "ceph_lv0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_size": "21470642176",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "name": "ceph_lv0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "tags": {
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.cluster_name": "ceph",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.crush_device_class": "",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.encrypted": "0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.objectstore": "bluestore",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.osd_id": "0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.type": "block",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.vdo": "0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.with_tpm": "0"
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            },
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "type": "block",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "vg_name": "ceph_vg0"
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:        }
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:    ],
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:    "1": [
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:        {
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "devices": [
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "/dev/loop4"
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            ],
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_name": "ceph_lv1",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_size": "21470642176",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "name": "ceph_lv1",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "tags": {
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.cluster_name": "ceph",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.crush_device_class": "",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.encrypted": "0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.objectstore": "bluestore",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.osd_id": "1",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.type": "block",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.vdo": "0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.with_tpm": "0"
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            },
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "type": "block",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "vg_name": "ceph_vg1"
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:        }
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:    ],
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:    "2": [
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:        {
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "devices": [
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "/dev/loop5"
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            ],
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_name": "ceph_lv2",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_size": "21470642176",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "name": "ceph_lv2",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "tags": {
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.cluster_name": "ceph",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.crush_device_class": "",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.encrypted": "0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.objectstore": "bluestore",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.osd_id": "2",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.type": "block",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.vdo": "0",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:                "ceph.with_tpm": "0"
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            },
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "type": "block",
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:            "vg_name": "ceph_vg2"
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:        }
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]:    ]
Jan 27 08:31:32 np0005597378 mystifying_satoshi[240230]: }
Jan 27 08:31:32 np0005597378 systemd[1]: libpod-e3a1b69c2d0e2ff5658fd614ab25418f72283771c3ba620f9dc20001af3c6ed2.scope: Deactivated successfully.
Jan 27 08:31:32 np0005597378 podman[240213]: 2026-01-27 13:31:32.886060056 +0000 UTC m=+0.403268619 container died e3a1b69c2d0e2ff5658fd614ab25418f72283771c3ba620f9dc20001af3c6ed2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:31:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fa1c13ae19a520d036df8cdc95c0051fa38f4fff27a62edc4605cce56470adbe-merged.mount: Deactivated successfully.
Jan 27 08:31:32 np0005597378 podman[240213]: 2026-01-27 13:31:32.936574836 +0000 UTC m=+0.453783389 container remove e3a1b69c2d0e2ff5658fd614ab25418f72283771c3ba620f9dc20001af3c6ed2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:31:32 np0005597378 systemd[1]: libpod-conmon-e3a1b69c2d0e2ff5658fd614ab25418f72283771c3ba620f9dc20001af3c6ed2.scope: Deactivated successfully.
Jan 27 08:31:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:33 np0005597378 podman[240313]: 2026-01-27 13:31:33.380815309 +0000 UTC m=+0.037268747 container create 11542c75b8f884018de7fa8fd34d410e1b5f322d6916da81a268898a6bece2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 08:31:33 np0005597378 systemd[1]: Started libpod-conmon-11542c75b8f884018de7fa8fd34d410e1b5f322d6916da81a268898a6bece2c8.scope.
Jan 27 08:31:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:31:33 np0005597378 podman[240313]: 2026-01-27 13:31:33.456504952 +0000 UTC m=+0.112958400 container init 11542c75b8f884018de7fa8fd34d410e1b5f322d6916da81a268898a6bece2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bardeen, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 08:31:33 np0005597378 podman[240313]: 2026-01-27 13:31:33.365544671 +0000 UTC m=+0.021998119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:31:33 np0005597378 podman[240313]: 2026-01-27 13:31:33.461920866 +0000 UTC m=+0.118374294 container start 11542c75b8f884018de7fa8fd34d410e1b5f322d6916da81a268898a6bece2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bardeen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:31:33 np0005597378 cool_bardeen[240330]: 167 167
Jan 27 08:31:33 np0005597378 podman[240313]: 2026-01-27 13:31:33.465614425 +0000 UTC m=+0.122067873 container attach 11542c75b8f884018de7fa8fd34d410e1b5f322d6916da81a268898a6bece2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bardeen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 08:31:33 np0005597378 systemd[1]: libpod-11542c75b8f884018de7fa8fd34d410e1b5f322d6916da81a268898a6bece2c8.scope: Deactivated successfully.
Jan 27 08:31:33 np0005597378 podman[240313]: 2026-01-27 13:31:33.46651312 +0000 UTC m=+0.122966548 container died 11542c75b8f884018de7fa8fd34d410e1b5f322d6916da81a268898a6bece2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 08:31:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a41602c5c8e8b7e789b8c106ec904fb35702281c8300bd2b35fe81184f2b8a2d-merged.mount: Deactivated successfully.
Jan 27 08:31:33 np0005597378 podman[240313]: 2026-01-27 13:31:33.497836166 +0000 UTC m=+0.154289594 container remove 11542c75b8f884018de7fa8fd34d410e1b5f322d6916da81a268898a6bece2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bardeen, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:31:33 np0005597378 systemd[1]: libpod-conmon-11542c75b8f884018de7fa8fd34d410e1b5f322d6916da81a268898a6bece2c8.scope: Deactivated successfully.
Jan 27 08:31:33 np0005597378 podman[240354]: 2026-01-27 13:31:33.657581506 +0000 UTC m=+0.049431872 container create eeb8557611cbff1b2a63ef115f3b910e4d4d91be1e5f2dec9d2d2b62097095bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:31:33 np0005597378 systemd[1]: Started libpod-conmon-eeb8557611cbff1b2a63ef115f3b910e4d4d91be1e5f2dec9d2d2b62097095bc.scope.
Jan 27 08:31:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:31:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ad1cc47bf94a0a4adaad4041248f74691baba4f7e486d00c07a48921aa7de6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ad1cc47bf94a0a4adaad4041248f74691baba4f7e486d00c07a48921aa7de6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ad1cc47bf94a0a4adaad4041248f74691baba4f7e486d00c07a48921aa7de6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ad1cc47bf94a0a4adaad4041248f74691baba4f7e486d00c07a48921aa7de6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:31:33 np0005597378 podman[240354]: 2026-01-27 13:31:33.627593885 +0000 UTC m=+0.019444281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:31:33 np0005597378 podman[240354]: 2026-01-27 13:31:33.727420132 +0000 UTC m=+0.119270498 container init eeb8557611cbff1b2a63ef115f3b910e4d4d91be1e5f2dec9d2d2b62097095bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:31:33 np0005597378 podman[240354]: 2026-01-27 13:31:33.734265016 +0000 UTC m=+0.126115382 container start eeb8557611cbff1b2a63ef115f3b910e4d4d91be1e5f2dec9d2d2b62097095bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:31:33 np0005597378 podman[240354]: 2026-01-27 13:31:33.736922017 +0000 UTC m=+0.128772413 container attach eeb8557611cbff1b2a63ef115f3b910e4d4d91be1e5f2dec9d2d2b62097095bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_driscoll, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:31:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:34 np0005597378 lvm[240451]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:31:34 np0005597378 lvm[240450]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:31:34 np0005597378 lvm[240451]: VG ceph_vg2 finished
Jan 27 08:31:34 np0005597378 lvm[240450]: VG ceph_vg1 finished
Jan 27 08:31:34 np0005597378 lvm[240448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:31:34 np0005597378 lvm[240448]: VG ceph_vg0 finished
Jan 27 08:31:34 np0005597378 lucid_driscoll[240370]: {}
Jan 27 08:31:34 np0005597378 systemd[1]: libpod-eeb8557611cbff1b2a63ef115f3b910e4d4d91be1e5f2dec9d2d2b62097095bc.scope: Deactivated successfully.
Jan 27 08:31:34 np0005597378 podman[240354]: 2026-01-27 13:31:34.48612116 +0000 UTC m=+0.877971526 container died eeb8557611cbff1b2a63ef115f3b910e4d4d91be1e5f2dec9d2d2b62097095bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_driscoll, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:31:34 np0005597378 systemd[1]: libpod-eeb8557611cbff1b2a63ef115f3b910e4d4d91be1e5f2dec9d2d2b62097095bc.scope: Consumed 1.266s CPU time.
Jan 27 08:31:34 np0005597378 systemd[1]: var-lib-containers-storage-overlay-26ad1cc47bf94a0a4adaad4041248f74691baba4f7e486d00c07a48921aa7de6-merged.mount: Deactivated successfully.
Jan 27 08:31:34 np0005597378 podman[240354]: 2026-01-27 13:31:34.54263546 +0000 UTC m=+0.934485826 container remove eeb8557611cbff1b2a63ef115f3b910e4d4d91be1e5f2dec9d2d2b62097095bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:31:34 np0005597378 systemd[1]: libpod-conmon-eeb8557611cbff1b2a63ef115f3b910e4d4d91be1e5f2dec9d2d2b62097095bc.scope: Deactivated successfully.
Jan 27 08:31:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:31:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:31:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:31:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:31:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:31:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:31:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:31:46.279 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:31:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:31:46.280 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:31:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:31:46.280 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:31:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:31:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:31:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:31:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:31:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:31:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:31:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 27 08:31:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4196746059' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 27 08:31:48 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 27 08:31:48 np0005597378 ceph-mgr[75385]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 27 08:31:48 np0005597378 ceph-mgr[75385]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 27 08:31:48 np0005597378 podman[240490]: 2026-01-27 13:31:48.760107408 +0000 UTC m=+0.102743438 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 27 08:31:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:52 np0005597378 podman[240518]: 2026-01-27 13:31:52.697496168 +0000 UTC m=+0.043226926 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:31:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:31:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:31:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:31:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3005333418' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:31:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:31:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3005333418' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:32:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.045799) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520734045835, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1356, "num_deletes": 506, "total_data_size": 1634798, "memory_usage": 1679296, "flush_reason": "Manual Compaction"}
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520734057255, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1608093, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13525, "largest_seqno": 14880, "table_properties": {"data_size": 1602157, "index_size": 2756, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 15076, "raw_average_key_size": 18, "raw_value_size": 1588260, "raw_average_value_size": 1904, "num_data_blocks": 126, "num_entries": 834, "num_filter_entries": 834, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769520629, "oldest_key_time": 1769520629, "file_creation_time": 1769520734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 11835 microseconds, and 4937 cpu microseconds.
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.057631) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1608093 bytes OK
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.057751) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.059971) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.060007) EVENT_LOG_v1 {"time_micros": 1769520734060000, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.060028) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1627656, prev total WAL file size 1627656, number of live WAL files 2.
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.061276) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1570KB)], [32(7523KB)]
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520734061304, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9311872, "oldest_snapshot_seqno": -1}
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3832 keys, 7352227 bytes, temperature: kUnknown
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520734111953, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7352227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7324916, "index_size": 16641, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 93861, "raw_average_key_size": 24, "raw_value_size": 7253806, "raw_average_value_size": 1892, "num_data_blocks": 706, "num_entries": 3832, "num_filter_entries": 3832, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769520734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.112690) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7352227 bytes
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.114185) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.6 rd, 144.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.3 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(10.4) write-amplify(4.6) OK, records in: 4857, records dropped: 1025 output_compression: NoCompression
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.114219) EVENT_LOG_v1 {"time_micros": 1769520734114202, "job": 14, "event": "compaction_finished", "compaction_time_micros": 50728, "compaction_time_cpu_micros": 16697, "output_level": 6, "num_output_files": 1, "total_output_size": 7352227, "num_input_records": 4857, "num_output_records": 3832, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520734114949, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520734117370, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.061224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.117452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.117458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.117459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.117461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:32:14 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:32:14.117463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:32:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:32:16
Jan 27 08:32:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:32:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:32:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'volumes', 'backups', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'vms']
Jan 27 08:32:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:32:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:32:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:19 np0005597378 podman[240539]: 2026-01-27 13:32:19.746681344 +0000 UTC m=+0.079130516 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:32:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:23 np0005597378 podman[240569]: 2026-01-27 13:32:23.739152347 +0000 UTC m=+0.077381169 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 08:32:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:32:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:29 np0005597378 nova_compute[238941]: 2026-01-27 13:32:29.821 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:32:29 np0005597378 nova_compute[238941]: 2026-01-27 13:32:29.842 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:32:29 np0005597378 nova_compute[238941]: 2026-01-27 13:32:29.842 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:32:29 np0005597378 nova_compute[238941]: 2026-01-27 13:32:29.843 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:32:29 np0005597378 nova_compute[238941]: 2026-01-27 13:32:29.853 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 08:32:29 np0005597378 nova_compute[238941]: 2026-01-27 13:32:29.853 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:32:29 np0005597378 nova_compute[238941]: 2026-01-27 13:32:29.853 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:32:29 np0005597378 nova_compute[238941]: 2026-01-27 13:32:29.854 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:32:29 np0005597378 nova_compute[238941]: 2026-01-27 13:32:29.854 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:32:29 np0005597378 nova_compute[238941]: 2026-01-27 13:32:29.854 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:32:30 np0005597378 nova_compute[238941]: 2026-01-27 13:32:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:32:30 np0005597378 nova_compute[238941]: 2026-01-27 13:32:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:32:30 np0005597378 nova_compute[238941]: 2026-01-27 13:32:30.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:32:30 np0005597378 nova_compute[238941]: 2026-01-27 13:32:30.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:32:30 np0005597378 nova_compute[238941]: 2026-01-27 13:32:30.412 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:32:30 np0005597378 nova_compute[238941]: 2026-01-27 13:32:30.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:32:30 np0005597378 nova_compute[238941]: 2026-01-27 13:32:30.414 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:32:30 np0005597378 nova_compute[238941]: 2026-01-27 13:32:30.414 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:32:30 np0005597378 nova_compute[238941]: 2026-01-27 13:32:30.415 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:32:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:32:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285256962' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:32:31 np0005597378 nova_compute[238941]: 2026-01-27 13:32:31.001 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:32:31 np0005597378 nova_compute[238941]: 2026-01-27 13:32:31.157 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:32:31 np0005597378 nova_compute[238941]: 2026-01-27 13:32:31.158 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5148MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:32:31 np0005597378 nova_compute[238941]: 2026-01-27 13:32:31.158 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:32:31 np0005597378 nova_compute[238941]: 2026-01-27 13:32:31.158 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:32:31 np0005597378 nova_compute[238941]: 2026-01-27 13:32:31.222 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:32:31 np0005597378 nova_compute[238941]: 2026-01-27 13:32:31.222 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:32:31 np0005597378 nova_compute[238941]: 2026-01-27 13:32:31.239 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:32:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:32:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2866298060' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:32:31 np0005597378 nova_compute[238941]: 2026-01-27 13:32:31.793 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:32:31 np0005597378 nova_compute[238941]: 2026-01-27 13:32:31.797 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:32:31 np0005597378 nova_compute[238941]: 2026-01-27 13:32:31.812 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:32:31 np0005597378 nova_compute[238941]: 2026-01-27 13:32:31.813 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:32:31 np0005597378 nova_compute[238941]: 2026-01-27 13:32:31.813 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:32:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:32:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:32:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:32:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:32:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:32:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:32:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:32:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:32:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:32:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:32:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:32:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:32:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:32:36 np0005597378 podman[240775]: 2026-01-27 13:32:36.071563441 +0000 UTC m=+0.067431783 container create ae2dd18555fada0f2eb8b3df55e84c98653796a052592956c3db70b19986d418 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:32:36 np0005597378 podman[240775]: 2026-01-27 13:32:36.026268941 +0000 UTC m=+0.022137303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:32:36 np0005597378 systemd[1]: Started libpod-conmon-ae2dd18555fada0f2eb8b3df55e84c98653796a052592956c3db70b19986d418.scope.
Jan 27 08:32:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:32:36 np0005597378 podman[240775]: 2026-01-27 13:32:36.185598479 +0000 UTC m=+0.181466901 container init ae2dd18555fada0f2eb8b3df55e84c98653796a052592956c3db70b19986d418 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_morse, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:32:36 np0005597378 podman[240775]: 2026-01-27 13:32:36.19573465 +0000 UTC m=+0.191602992 container start ae2dd18555fada0f2eb8b3df55e84c98653796a052592956c3db70b19986d418 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_morse, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:32:36 np0005597378 crazy_morse[240791]: 167 167
Jan 27 08:32:36 np0005597378 systemd[1]: libpod-ae2dd18555fada0f2eb8b3df55e84c98653796a052592956c3db70b19986d418.scope: Deactivated successfully.
Jan 27 08:32:36 np0005597378 podman[240775]: 2026-01-27 13:32:36.206566179 +0000 UTC m=+0.202434591 container attach ae2dd18555fada0f2eb8b3df55e84c98653796a052592956c3db70b19986d418 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_morse, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 08:32:36 np0005597378 podman[240775]: 2026-01-27 13:32:36.207986408 +0000 UTC m=+0.203854760 container died ae2dd18555fada0f2eb8b3df55e84c98653796a052592956c3db70b19986d418 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:32:36 np0005597378 systemd[1]: var-lib-containers-storage-overlay-18ae4ad2fa95cec8aa4b46ec5cf848d64b212c76821584c614420db73694dc2b-merged.mount: Deactivated successfully.
Jan 27 08:32:36 np0005597378 podman[240775]: 2026-01-27 13:32:36.26607877 +0000 UTC m=+0.261947112 container remove ae2dd18555fada0f2eb8b3df55e84c98653796a052592956c3db70b19986d418 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:32:36 np0005597378 systemd[1]: libpod-conmon-ae2dd18555fada0f2eb8b3df55e84c98653796a052592956c3db70b19986d418.scope: Deactivated successfully.
Jan 27 08:32:36 np0005597378 podman[240817]: 2026-01-27 13:32:36.420544138 +0000 UTC m=+0.039777124 container create 01e2f3844b2b27f0c09924bab089c5ed4edad198a4d74c65ca0d554b562c1d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 08:32:36 np0005597378 systemd[1]: Started libpod-conmon-01e2f3844b2b27f0c09924bab089c5ed4edad198a4d74c65ca0d554b562c1d28.scope.
Jan 27 08:32:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:32:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ce21a69d8694963a26f17d2c8fbdbaba8c725117ebdd861fd03f40d09dc70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ce21a69d8694963a26f17d2c8fbdbaba8c725117ebdd861fd03f40d09dc70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ce21a69d8694963a26f17d2c8fbdbaba8c725117ebdd861fd03f40d09dc70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ce21a69d8694963a26f17d2c8fbdbaba8c725117ebdd861fd03f40d09dc70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ce21a69d8694963a26f17d2c8fbdbaba8c725117ebdd861fd03f40d09dc70/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:36 np0005597378 podman[240817]: 2026-01-27 13:32:36.497758431 +0000 UTC m=+0.116991447 container init 01e2f3844b2b27f0c09924bab089c5ed4edad198a4d74c65ca0d554b562c1d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 08:32:36 np0005597378 podman[240817]: 2026-01-27 13:32:36.40488683 +0000 UTC m=+0.024119836 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:32:36 np0005597378 podman[240817]: 2026-01-27 13:32:36.507409429 +0000 UTC m=+0.126642415 container start 01e2f3844b2b27f0c09924bab089c5ed4edad198a4d74c65ca0d554b562c1d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:32:36 np0005597378 podman[240817]: 2026-01-27 13:32:36.51489147 +0000 UTC m=+0.134124456 container attach 01e2f3844b2b27f0c09924bab089c5ed4edad198a4d74c65ca0d554b562c1d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_gauss, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:32:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:32:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:32:36 np0005597378 busy_gauss[240834]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:32:36 np0005597378 busy_gauss[240834]: --> All data devices are unavailable
Jan 27 08:32:36 np0005597378 systemd[1]: libpod-01e2f3844b2b27f0c09924bab089c5ed4edad198a4d74c65ca0d554b562c1d28.scope: Deactivated successfully.
Jan 27 08:32:36 np0005597378 podman[240817]: 2026-01-27 13:32:36.955404892 +0000 UTC m=+0.574637888 container died 01e2f3844b2b27f0c09924bab089c5ed4edad198a4d74c65ca0d554b562c1d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 27 08:32:36 np0005597378 systemd[1]: var-lib-containers-storage-overlay-877ce21a69d8694963a26f17d2c8fbdbaba8c725117ebdd861fd03f40d09dc70-merged.mount: Deactivated successfully.
Jan 27 08:32:37 np0005597378 podman[240817]: 2026-01-27 13:32:37.018311854 +0000 UTC m=+0.637544840 container remove 01e2f3844b2b27f0c09924bab089c5ed4edad198a4d74c65ca0d554b562c1d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_gauss, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 08:32:37 np0005597378 systemd[1]: libpod-conmon-01e2f3844b2b27f0c09924bab089c5ed4edad198a4d74c65ca0d554b562c1d28.scope: Deactivated successfully.
Jan 27 08:32:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:37 np0005597378 podman[240928]: 2026-01-27 13:32:37.4907363 +0000 UTC m=+0.041799438 container create b677eff648e5aa0fb88656cdde2b5b564ed08277a448c487d3a0f47efe88ac63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bhabha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:32:37 np0005597378 systemd[1]: Started libpod-conmon-b677eff648e5aa0fb88656cdde2b5b564ed08277a448c487d3a0f47efe88ac63.scope.
Jan 27 08:32:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:32:37 np0005597378 podman[240928]: 2026-01-27 13:32:37.548186436 +0000 UTC m=+0.099249594 container init b677eff648e5aa0fb88656cdde2b5b564ed08277a448c487d3a0f47efe88ac63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:32:37 np0005597378 podman[240928]: 2026-01-27 13:32:37.554240377 +0000 UTC m=+0.105303515 container start b677eff648e5aa0fb88656cdde2b5b564ed08277a448c487d3a0f47efe88ac63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bhabha, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 08:32:37 np0005597378 youthful_bhabha[240944]: 167 167
Jan 27 08:32:37 np0005597378 systemd[1]: libpod-b677eff648e5aa0fb88656cdde2b5b564ed08277a448c487d3a0f47efe88ac63.scope: Deactivated successfully.
Jan 27 08:32:37 np0005597378 podman[240928]: 2026-01-27 13:32:37.560220217 +0000 UTC m=+0.111283365 container attach b677eff648e5aa0fb88656cdde2b5b564ed08277a448c487d3a0f47efe88ac63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bhabha, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:32:37 np0005597378 podman[240928]: 2026-01-27 13:32:37.561037389 +0000 UTC m=+0.112100527 container died b677eff648e5aa0fb88656cdde2b5b564ed08277a448c487d3a0f47efe88ac63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:32:37 np0005597378 podman[240928]: 2026-01-27 13:32:37.474824244 +0000 UTC m=+0.025887402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:32:37 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1a93c96deb8b8bc0deedd5d4689e5956e9467700dc563d0f36051cef2f42f0d6-merged.mount: Deactivated successfully.
Jan 27 08:32:37 np0005597378 podman[240928]: 2026-01-27 13:32:37.603955146 +0000 UTC m=+0.155018284 container remove b677eff648e5aa0fb88656cdde2b5b564ed08277a448c487d3a0f47efe88ac63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:32:37 np0005597378 systemd[1]: libpod-conmon-b677eff648e5aa0fb88656cdde2b5b564ed08277a448c487d3a0f47efe88ac63.scope: Deactivated successfully.
Jan 27 08:32:37 np0005597378 podman[240968]: 2026-01-27 13:32:37.769904581 +0000 UTC m=+0.043068232 container create e7b9b8c35d87a4e641ebf716a33b79f8f5616c5b7dc495ba0649282d0eb08078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_colden, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:32:37 np0005597378 systemd[1]: Started libpod-conmon-e7b9b8c35d87a4e641ebf716a33b79f8f5616c5b7dc495ba0649282d0eb08078.scope.
Jan 27 08:32:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:32:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006782f0a665ec26c57c49ac3a3a28b452fd088f04282b4c90a1752aacf0d5ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006782f0a665ec26c57c49ac3a3a28b452fd088f04282b4c90a1752aacf0d5ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006782f0a665ec26c57c49ac3a3a28b452fd088f04282b4c90a1752aacf0d5ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006782f0a665ec26c57c49ac3a3a28b452fd088f04282b4c90a1752aacf0d5ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:37 np0005597378 podman[240968]: 2026-01-27 13:32:37.751895459 +0000 UTC m=+0.025059130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:32:37 np0005597378 podman[240968]: 2026-01-27 13:32:37.854757659 +0000 UTC m=+0.127921330 container init e7b9b8c35d87a4e641ebf716a33b79f8f5616c5b7dc495ba0649282d0eb08078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 08:32:37 np0005597378 podman[240968]: 2026-01-27 13:32:37.862616188 +0000 UTC m=+0.135779839 container start e7b9b8c35d87a4e641ebf716a33b79f8f5616c5b7dc495ba0649282d0eb08078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_colden, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:32:37 np0005597378 podman[240968]: 2026-01-27 13:32:37.867781737 +0000 UTC m=+0.140945408 container attach e7b9b8c35d87a4e641ebf716a33b79f8f5616c5b7dc495ba0649282d0eb08078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]: {
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:    "0": [
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:        {
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "devices": [
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "/dev/loop3"
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            ],
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_name": "ceph_lv0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_size": "21470642176",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "name": "ceph_lv0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "tags": {
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.cluster_name": "ceph",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.crush_device_class": "",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.encrypted": "0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.objectstore": "bluestore",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.osd_id": "0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.type": "block",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.vdo": "0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.with_tpm": "0"
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            },
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "type": "block",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "vg_name": "ceph_vg0"
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:        }
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:    ],
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:    "1": [
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:        {
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "devices": [
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "/dev/loop4"
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            ],
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_name": "ceph_lv1",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_size": "21470642176",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "name": "ceph_lv1",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "tags": {
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.cluster_name": "ceph",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.crush_device_class": "",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.encrypted": "0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.objectstore": "bluestore",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.osd_id": "1",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.type": "block",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.vdo": "0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.with_tpm": "0"
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            },
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "type": "block",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "vg_name": "ceph_vg1"
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:        }
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:    ],
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:    "2": [
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:        {
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "devices": [
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "/dev/loop5"
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            ],
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_name": "ceph_lv2",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_size": "21470642176",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "name": "ceph_lv2",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "tags": {
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.cluster_name": "ceph",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.crush_device_class": "",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.encrypted": "0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.objectstore": "bluestore",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.osd_id": "2",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.type": "block",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.vdo": "0",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:                "ceph.with_tpm": "0"
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            },
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "type": "block",
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:            "vg_name": "ceph_vg2"
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:        }
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]:    ]
Jan 27 08:32:38 np0005597378 unruffled_colden[240984]: }
Jan 27 08:32:38 np0005597378 systemd[1]: libpod-e7b9b8c35d87a4e641ebf716a33b79f8f5616c5b7dc495ba0649282d0eb08078.scope: Deactivated successfully.
Jan 27 08:32:38 np0005597378 podman[240968]: 2026-01-27 13:32:38.159860453 +0000 UTC m=+0.433024104 container died e7b9b8c35d87a4e641ebf716a33b79f8f5616c5b7dc495ba0649282d0eb08078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_colden, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 08:32:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-006782f0a665ec26c57c49ac3a3a28b452fd088f04282b4c90a1752aacf0d5ae-merged.mount: Deactivated successfully.
Jan 27 08:32:38 np0005597378 podman[240968]: 2026-01-27 13:32:38.246219371 +0000 UTC m=+0.519383022 container remove e7b9b8c35d87a4e641ebf716a33b79f8f5616c5b7dc495ba0649282d0eb08078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_colden, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:32:38 np0005597378 systemd[1]: libpod-conmon-e7b9b8c35d87a4e641ebf716a33b79f8f5616c5b7dc495ba0649282d0eb08078.scope: Deactivated successfully.
Jan 27 08:32:38 np0005597378 podman[241068]: 2026-01-27 13:32:38.648986185 +0000 UTC m=+0.039636290 container create e090e4df777e4020c25e5a3c0e35c71a54c6bfe63a7d68d89520a3b7c7edbf1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khorana, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:32:38 np0005597378 systemd[1]: Started libpod-conmon-e090e4df777e4020c25e5a3c0e35c71a54c6bfe63a7d68d89520a3b7c7edbf1e.scope.
Jan 27 08:32:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:32:38 np0005597378 podman[241068]: 2026-01-27 13:32:38.630960503 +0000 UTC m=+0.021610628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:32:38 np0005597378 podman[241068]: 2026-01-27 13:32:38.73075928 +0000 UTC m=+0.121409405 container init e090e4df777e4020c25e5a3c0e35c71a54c6bfe63a7d68d89520a3b7c7edbf1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khorana, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:32:38 np0005597378 podman[241068]: 2026-01-27 13:32:38.737003848 +0000 UTC m=+0.127653973 container start e090e4df777e4020c25e5a3c0e35c71a54c6bfe63a7d68d89520a3b7c7edbf1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 08:32:38 np0005597378 podman[241068]: 2026-01-27 13:32:38.740876041 +0000 UTC m=+0.131526166 container attach e090e4df777e4020c25e5a3c0e35c71a54c6bfe63a7d68d89520a3b7c7edbf1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khorana, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:32:38 np0005597378 systemd[1]: libpod-e090e4df777e4020c25e5a3c0e35c71a54c6bfe63a7d68d89520a3b7c7edbf1e.scope: Deactivated successfully.
Jan 27 08:32:38 np0005597378 lucid_khorana[241084]: 167 167
Jan 27 08:32:38 np0005597378 conmon[241084]: conmon e090e4df777e4020c25e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e090e4df777e4020c25e5a3c0e35c71a54c6bfe63a7d68d89520a3b7c7edbf1e.scope/container/memory.events
Jan 27 08:32:38 np0005597378 podman[241068]: 2026-01-27 13:32:38.744526998 +0000 UTC m=+0.135177103 container died e090e4df777e4020c25e5a3c0e35c71a54c6bfe63a7d68d89520a3b7c7edbf1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khorana, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:32:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-dee725341060e75fb00a180c63b2356304e7c5c1f6c189f90e899f2f3f3689c4-merged.mount: Deactivated successfully.
Jan 27 08:32:38 np0005597378 podman[241068]: 2026-01-27 13:32:38.794221267 +0000 UTC m=+0.184871372 container remove e090e4df777e4020c25e5a3c0e35c71a54c6bfe63a7d68d89520a3b7c7edbf1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khorana, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 08:32:38 np0005597378 systemd[1]: libpod-conmon-e090e4df777e4020c25e5a3c0e35c71a54c6bfe63a7d68d89520a3b7c7edbf1e.scope: Deactivated successfully.
Jan 27 08:32:38 np0005597378 podman[241106]: 2026-01-27 13:32:38.952862317 +0000 UTC m=+0.043975626 container create 7a90e97b27214e2d1bf7a0dbbfd9e55aaf0a3774c3559672e8990a9febd3813b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_swirles, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 08:32:39 np0005597378 systemd[1]: Started libpod-conmon-7a90e97b27214e2d1bf7a0dbbfd9e55aaf0a3774c3559672e8990a9febd3813b.scope.
Jan 27 08:32:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:32:39 np0005597378 podman[241106]: 2026-01-27 13:32:38.934004112 +0000 UTC m=+0.025117451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:32:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bb8388a7acd523781dfeea198688933acf51670e5c747bcd4f29e6281cf049/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bb8388a7acd523781dfeea198688933acf51670e5c747bcd4f29e6281cf049/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bb8388a7acd523781dfeea198688933acf51670e5c747bcd4f29e6281cf049/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bb8388a7acd523781dfeea198688933acf51670e5c747bcd4f29e6281cf049/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:32:39 np0005597378 podman[241106]: 2026-01-27 13:32:39.041946038 +0000 UTC m=+0.133059367 container init 7a90e97b27214e2d1bf7a0dbbfd9e55aaf0a3774c3559672e8990a9febd3813b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_swirles, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:32:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:39 np0005597378 podman[241106]: 2026-01-27 13:32:39.053362033 +0000 UTC m=+0.144475332 container start 7a90e97b27214e2d1bf7a0dbbfd9e55aaf0a3774c3559672e8990a9febd3813b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_swirles, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 08:32:39 np0005597378 podman[241106]: 2026-01-27 13:32:39.057673758 +0000 UTC m=+0.148787057 container attach 7a90e97b27214e2d1bf7a0dbbfd9e55aaf0a3774c3559672e8990a9febd3813b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_swirles, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:32:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:39 np0005597378 lvm[241200]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:32:39 np0005597378 lvm[241201]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:32:39 np0005597378 lvm[241200]: VG ceph_vg0 finished
Jan 27 08:32:39 np0005597378 lvm[241201]: VG ceph_vg1 finished
Jan 27 08:32:39 np0005597378 lvm[241203]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:32:39 np0005597378 lvm[241203]: VG ceph_vg2 finished
Jan 27 08:32:39 np0005597378 wizardly_swirles[241122]: {}
Jan 27 08:32:40 np0005597378 systemd[1]: libpod-7a90e97b27214e2d1bf7a0dbbfd9e55aaf0a3774c3559672e8990a9febd3813b.scope: Deactivated successfully.
Jan 27 08:32:40 np0005597378 podman[241106]: 2026-01-27 13:32:40.001719508 +0000 UTC m=+1.092832827 container died 7a90e97b27214e2d1bf7a0dbbfd9e55aaf0a3774c3559672e8990a9febd3813b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_swirles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 27 08:32:40 np0005597378 systemd[1]: libpod-7a90e97b27214e2d1bf7a0dbbfd9e55aaf0a3774c3559672e8990a9febd3813b.scope: Consumed 1.545s CPU time.
Jan 27 08:32:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-09bb8388a7acd523781dfeea198688933acf51670e5c747bcd4f29e6281cf049-merged.mount: Deactivated successfully.
Jan 27 08:32:40 np0005597378 podman[241106]: 2026-01-27 13:32:40.060645793 +0000 UTC m=+1.151759102 container remove 7a90e97b27214e2d1bf7a0dbbfd9e55aaf0a3774c3559672e8990a9febd3813b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_swirles, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 08:32:40 np0005597378 systemd[1]: libpod-conmon-7a90e97b27214e2d1bf7a0dbbfd9e55aaf0a3774c3559672e8990a9febd3813b.scope: Deactivated successfully.
Jan 27 08:32:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:32:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:32:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:32:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:32:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:32:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:32:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:32:41.109 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:32:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:32:41.112 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:32:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:32:41.113 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:32:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:32:46.280 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:32:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:32:46.281 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:32:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:32:46.281 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:32:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:32:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:32:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:32:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:32:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:32:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:32:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:50 np0005597378 podman[241245]: 2026-01-27 13:32:50.750832977 +0000 UTC m=+0.093226072 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:32:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:54 np0005597378 podman[241274]: 2026-01-27 13:32:54.720284793 +0000 UTC m=+0.055227077 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 08:32:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:32:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 3375 writes, 15K keys, 3375 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3375 writes, 3375 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1296 writes, 5883 keys, 1296 commit groups, 1.0 writes per commit group, ingest: 8.65 MB, 0.01 MB/s#012Interval WAL: 1296 writes, 1296 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     81.9      0.20              0.04         7    0.028       0      0       0.0       0.0#012  L6      1/0    7.01 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6     89.1     73.2      0.58              0.10         6    0.096     24K   3204       0.0       0.0#012 Sum      1/0    7.01 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6     66.3     75.4      0.77              0.14        13    0.059     24K   3204       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     76.5     77.1      0.46              0.09         8    0.057     17K   2470       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0     89.1     73.2      0.58              0.10         6    0.096     24K   3204       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     86.0      0.19              0.04         6    0.031       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.016, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.8 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 308.00 MB usage: 1.83 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000268 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(104,1.61 MB,0.523703%) FilterBlock(14,74.86 KB,0.0237353%) IndexBlock(14,149.30 KB,0.0473369%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 27 08:32:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:32:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:32:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:32:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3448261812' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:32:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:32:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3448261812' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:33:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:33:16
Jan 27 08:33:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:33:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:33:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['images', 'default.rgw.log', 'default.rgw.control', 'backups', '.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data']
Jan 27 08:33:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:33:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:33:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:21 np0005597378 podman[241294]: 2026-01-27 13:33:21.743965058 +0000 UTC m=+0.082283830 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
Jan 27 08:33:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:25 np0005597378 podman[241320]: 2026-01-27 13:33:25.74542441 +0000 UTC m=+0.081423797 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:33:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:29 np0005597378 nova_compute[238941]: 2026-01-27 13:33:29.813 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:33:29 np0005597378 nova_compute[238941]: 2026-01-27 13:33:29.813 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:33:30 np0005597378 nova_compute[238941]: 2026-01-27 13:33:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:33:30 np0005597378 nova_compute[238941]: 2026-01-27 13:33:30.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:33:30 np0005597378 nova_compute[238941]: 2026-01-27 13:33:30.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:33:30 np0005597378 nova_compute[238941]: 2026-01-27 13:33:30.482 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 08:33:30 np0005597378 nova_compute[238941]: 2026-01-27 13:33:30.483 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:33:30 np0005597378 nova_compute[238941]: 2026-01-27 13:33:30.508 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:33:30 np0005597378 nova_compute[238941]: 2026-01-27 13:33:30.508 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:33:30 np0005597378 nova_compute[238941]: 2026-01-27 13:33:30.509 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:33:30 np0005597378 nova_compute[238941]: 2026-01-27 13:33:30.509 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:33:30 np0005597378 nova_compute[238941]: 2026-01-27 13:33:30.509 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:33:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:33:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/60670499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:33:31 np0005597378 nova_compute[238941]: 2026-01-27 13:33:31.121 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:33:31 np0005597378 nova_compute[238941]: 2026-01-27 13:33:31.310 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:33:31 np0005597378 nova_compute[238941]: 2026-01-27 13:33:31.312 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5165MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:33:31 np0005597378 nova_compute[238941]: 2026-01-27 13:33:31.312 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:33:31 np0005597378 nova_compute[238941]: 2026-01-27 13:33:31.312 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:33:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:31 np0005597378 nova_compute[238941]: 2026-01-27 13:33:31.620 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:33:31 np0005597378 nova_compute[238941]: 2026-01-27 13:33:31.621 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:33:31 np0005597378 nova_compute[238941]: 2026-01-27 13:33:31.639 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:33:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:33:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/626657467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:33:32 np0005597378 nova_compute[238941]: 2026-01-27 13:33:32.206 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:33:32 np0005597378 nova_compute[238941]: 2026-01-27 13:33:32.213 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:33:32 np0005597378 nova_compute[238941]: 2026-01-27 13:33:32.227 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:33:32 np0005597378 nova_compute[238941]: 2026-01-27 13:33:32.229 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:33:32 np0005597378 nova_compute[238941]: 2026-01-27 13:33:32.229 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:33:33 np0005597378 nova_compute[238941]: 2026-01-27 13:33:33.129 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:33:33 np0005597378 nova_compute[238941]: 2026-01-27 13:33:33.130 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:33:33 np0005597378 nova_compute[238941]: 2026-01-27 13:33:33.130 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:33:33 np0005597378 nova_compute[238941]: 2026-01-27 13:33:33.130 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:33:33 np0005597378 nova_compute[238941]: 2026-01-27 13:33:33.130 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:33:33 np0005597378 nova_compute[238941]: 2026-01-27 13:33:33.130 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:33:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:33:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:33:41 np0005597378 podman[241526]: 2026-01-27 13:33:41.315430892 +0000 UTC m=+0.039343178 container create 7c81bfb051c73bc5f87feef07376d03716851fc1bbb855ee7f9dc09966625135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:33:41 np0005597378 systemd[1]: Started libpod-conmon-7c81bfb051c73bc5f87feef07376d03716851fc1bbb855ee7f9dc09966625135.scope.
Jan 27 08:33:41 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:33:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:41 np0005597378 podman[241526]: 2026-01-27 13:33:41.389083722 +0000 UTC m=+0.112996028 container init 7c81bfb051c73bc5f87feef07376d03716851fc1bbb855ee7f9dc09966625135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_nash, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:33:41 np0005597378 podman[241526]: 2026-01-27 13:33:41.29656116 +0000 UTC m=+0.020473466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:33:41 np0005597378 podman[241526]: 2026-01-27 13:33:41.395636805 +0000 UTC m=+0.119549081 container start 7c81bfb051c73bc5f87feef07376d03716851fc1bbb855ee7f9dc09966625135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_nash, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:33:41 np0005597378 podman[241526]: 2026-01-27 13:33:41.398485391 +0000 UTC m=+0.122397677 container attach 7c81bfb051c73bc5f87feef07376d03716851fc1bbb855ee7f9dc09966625135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 08:33:41 np0005597378 quirky_nash[241542]: 167 167
Jan 27 08:33:41 np0005597378 systemd[1]: libpod-7c81bfb051c73bc5f87feef07376d03716851fc1bbb855ee7f9dc09966625135.scope: Deactivated successfully.
Jan 27 08:33:41 np0005597378 podman[241526]: 2026-01-27 13:33:41.400133856 +0000 UTC m=+0.124046142 container died 7c81bfb051c73bc5f87feef07376d03716851fc1bbb855ee7f9dc09966625135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:33:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1e42d9f043dea419ffdd9d0353032012ea2b9f871a62cc659a5e19434e41f475-merged.mount: Deactivated successfully.
Jan 27 08:33:41 np0005597378 podman[241526]: 2026-01-27 13:33:41.470558969 +0000 UTC m=+0.194471255 container remove 7c81bfb051c73bc5f87feef07376d03716851fc1bbb855ee7f9dc09966625135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:33:41 np0005597378 systemd[1]: libpod-conmon-7c81bfb051c73bc5f87feef07376d03716851fc1bbb855ee7f9dc09966625135.scope: Deactivated successfully.
Jan 27 08:33:41 np0005597378 podman[241565]: 2026-01-27 13:33:41.622671754 +0000 UTC m=+0.043908528 container create 49ae70b33d2952547f79dc432ee6c671c4d57f69365c5100a767ed423ef92f4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 08:33:41 np0005597378 systemd[1]: Started libpod-conmon-49ae70b33d2952547f79dc432ee6c671c4d57f69365c5100a767ed423ef92f4a.scope.
Jan 27 08:33:41 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:33:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63357effa208def475b9680600c1a7f91e87b9a7faa620298885e9c85d02ed11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:33:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63357effa208def475b9680600c1a7f91e87b9a7faa620298885e9c85d02ed11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:33:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63357effa208def475b9680600c1a7f91e87b9a7faa620298885e9c85d02ed11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:33:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63357effa208def475b9680600c1a7f91e87b9a7faa620298885e9c85d02ed11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:33:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63357effa208def475b9680600c1a7f91e87b9a7faa620298885e9c85d02ed11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:33:41 np0005597378 podman[241565]: 2026-01-27 13:33:41.600367152 +0000 UTC m=+0.021603946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:33:41 np0005597378 podman[241565]: 2026-01-27 13:33:41.708852287 +0000 UTC m=+0.130089081 container init 49ae70b33d2952547f79dc432ee6c671c4d57f69365c5100a767ed423ef92f4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_grothendieck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:33:41 np0005597378 podman[241565]: 2026-01-27 13:33:41.715833353 +0000 UTC m=+0.137070127 container start 49ae70b33d2952547f79dc432ee6c671c4d57f69365c5100a767ed423ef92f4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_grothendieck, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:33:41 np0005597378 podman[241565]: 2026-01-27 13:33:41.719237613 +0000 UTC m=+0.140474427 container attach 49ae70b33d2952547f79dc432ee6c671c4d57f69365c5100a767ed423ef92f4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 08:33:42 np0005597378 wonderful_grothendieck[241581]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:33:42 np0005597378 wonderful_grothendieck[241581]: --> All data devices are unavailable
Jan 27 08:33:42 np0005597378 systemd[1]: libpod-49ae70b33d2952547f79dc432ee6c671c4d57f69365c5100a767ed423ef92f4a.scope: Deactivated successfully.
Jan 27 08:33:42 np0005597378 podman[241601]: 2026-01-27 13:33:42.180178406 +0000 UTC m=+0.019154861 container died 49ae70b33d2952547f79dc432ee6c671c4d57f69365c5100a767ed423ef92f4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_grothendieck, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:33:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay-63357effa208def475b9680600c1a7f91e87b9a7faa620298885e9c85d02ed11-merged.mount: Deactivated successfully.
Jan 27 08:33:42 np0005597378 podman[241601]: 2026-01-27 13:33:42.225427529 +0000 UTC m=+0.064403994 container remove 49ae70b33d2952547f79dc432ee6c671c4d57f69365c5100a767ed423ef92f4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:33:42 np0005597378 systemd[1]: libpod-conmon-49ae70b33d2952547f79dc432ee6c671c4d57f69365c5100a767ed423ef92f4a.scope: Deactivated successfully.
Jan 27 08:33:42 np0005597378 podman[241678]: 2026-01-27 13:33:42.685502408 +0000 UTC m=+0.062223486 container create a0bd6b49164792f1b97446b2f5e1a2f18dd104ea4431bd93d1bb5801e7ba93b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_feistel, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:33:42 np0005597378 systemd[1]: Started libpod-conmon-a0bd6b49164792f1b97446b2f5e1a2f18dd104ea4431bd93d1bb5801e7ba93b3.scope.
Jan 27 08:33:42 np0005597378 podman[241678]: 2026-01-27 13:33:42.648989556 +0000 UTC m=+0.025710724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:33:42 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:33:42 np0005597378 podman[241678]: 2026-01-27 13:33:42.767665823 +0000 UTC m=+0.144386921 container init a0bd6b49164792f1b97446b2f5e1a2f18dd104ea4431bd93d1bb5801e7ba93b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_feistel, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 27 08:33:42 np0005597378 podman[241678]: 2026-01-27 13:33:42.774981398 +0000 UTC m=+0.151702466 container start a0bd6b49164792f1b97446b2f5e1a2f18dd104ea4431bd93d1bb5801e7ba93b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_feistel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:33:42 np0005597378 practical_feistel[241694]: 167 167
Jan 27 08:33:42 np0005597378 systemd[1]: libpod-a0bd6b49164792f1b97446b2f5e1a2f18dd104ea4431bd93d1bb5801e7ba93b3.scope: Deactivated successfully.
Jan 27 08:33:42 np0005597378 podman[241678]: 2026-01-27 13:33:42.780424953 +0000 UTC m=+0.157146091 container attach a0bd6b49164792f1b97446b2f5e1a2f18dd104ea4431bd93d1bb5801e7ba93b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:33:42 np0005597378 podman[241678]: 2026-01-27 13:33:42.781153503 +0000 UTC m=+0.157874581 container died a0bd6b49164792f1b97446b2f5e1a2f18dd104ea4431bd93d1bb5801e7ba93b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_feistel, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:33:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0a1d5beb2d3d045370c6b289e588228166d8162c9c399defe0375f4c63bfb4d4-merged.mount: Deactivated successfully.
Jan 27 08:33:42 np0005597378 podman[241678]: 2026-01-27 13:33:42.847988411 +0000 UTC m=+0.224709489 container remove a0bd6b49164792f1b97446b2f5e1a2f18dd104ea4431bd93d1bb5801e7ba93b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:33:42 np0005597378 systemd[1]: libpod-conmon-a0bd6b49164792f1b97446b2f5e1a2f18dd104ea4431bd93d1bb5801e7ba93b3.scope: Deactivated successfully.
Jan 27 08:33:42 np0005597378 podman[241720]: 2026-01-27 13:33:42.998426882 +0000 UTC m=+0.042673445 container create 21e7662e8ee6ea2a9cb04c8e6c5e283d6ad0061123eb1ae37f02027d4910082b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:33:43 np0005597378 systemd[1]: Started libpod-conmon-21e7662e8ee6ea2a9cb04c8e6c5e283d6ad0061123eb1ae37f02027d4910082b.scope.
Jan 27 08:33:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:33:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba3efc245d6ea04c4e80016bedf12fe545fee7688b987f2c5e12a95286bea47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:33:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba3efc245d6ea04c4e80016bedf12fe545fee7688b987f2c5e12a95286bea47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:33:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba3efc245d6ea04c4e80016bedf12fe545fee7688b987f2c5e12a95286bea47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:33:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba3efc245d6ea04c4e80016bedf12fe545fee7688b987f2c5e12a95286bea47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:33:43 np0005597378 podman[241720]: 2026-01-27 13:33:42.983109605 +0000 UTC m=+0.027356188 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:33:43 np0005597378 podman[241720]: 2026-01-27 13:33:43.077185407 +0000 UTC m=+0.121432000 container init 21e7662e8ee6ea2a9cb04c8e6c5e283d6ad0061123eb1ae37f02027d4910082b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hawking, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 08:33:43 np0005597378 podman[241720]: 2026-01-27 13:33:43.084826471 +0000 UTC m=+0.129073034 container start 21e7662e8ee6ea2a9cb04c8e6c5e283d6ad0061123eb1ae37f02027d4910082b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hawking, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:33:43 np0005597378 podman[241720]: 2026-01-27 13:33:43.105140551 +0000 UTC m=+0.149387144 container attach 21e7662e8ee6ea2a9cb04c8e6c5e283d6ad0061123eb1ae37f02027d4910082b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hawking, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:33:43 np0005597378 charming_hawking[241737]: {
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:    "0": [
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:        {
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "devices": [
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "/dev/loop3"
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            ],
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_name": "ceph_lv0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_size": "21470642176",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "name": "ceph_lv0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "tags": {
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.cluster_name": "ceph",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.crush_device_class": "",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.encrypted": "0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.objectstore": "bluestore",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.osd_id": "0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.type": "block",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.vdo": "0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.with_tpm": "0"
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            },
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "type": "block",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "vg_name": "ceph_vg0"
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:        }
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:    ],
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:    "1": [
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:        {
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "devices": [
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "/dev/loop4"
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            ],
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_name": "ceph_lv1",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_size": "21470642176",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "name": "ceph_lv1",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "tags": {
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.cluster_name": "ceph",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.crush_device_class": "",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.encrypted": "0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.objectstore": "bluestore",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.osd_id": "1",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.type": "block",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.vdo": "0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.with_tpm": "0"
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            },
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "type": "block",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "vg_name": "ceph_vg1"
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:        }
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:    ],
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:    "2": [
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:        {
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "devices": [
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "/dev/loop5"
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            ],
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_name": "ceph_lv2",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_size": "21470642176",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "name": "ceph_lv2",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "tags": {
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.cluster_name": "ceph",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.crush_device_class": "",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.encrypted": "0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.objectstore": "bluestore",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.osd_id": "2",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.type": "block",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.vdo": "0",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:                "ceph.with_tpm": "0"
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            },
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "type": "block",
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:            "vg_name": "ceph_vg2"
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:        }
Jan 27 08:33:43 np0005597378 charming_hawking[241737]:    ]
Jan 27 08:33:43 np0005597378 charming_hawking[241737]: }
Jan 27 08:33:43 np0005597378 systemd[1]: libpod-21e7662e8ee6ea2a9cb04c8e6c5e283d6ad0061123eb1ae37f02027d4910082b.scope: Deactivated successfully.
Jan 27 08:33:43 np0005597378 podman[241720]: 2026-01-27 13:33:43.36000058 +0000 UTC m=+0.404247163 container died 21e7662e8ee6ea2a9cb04c8e6c5e283d6ad0061123eb1ae37f02027d4910082b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hawking, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 27 08:33:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9ba3efc245d6ea04c4e80016bedf12fe545fee7688b987f2c5e12a95286bea47-merged.mount: Deactivated successfully.
Jan 27 08:33:43 np0005597378 podman[241720]: 2026-01-27 13:33:43.564653564 +0000 UTC m=+0.608900127 container remove 21e7662e8ee6ea2a9cb04c8e6c5e283d6ad0061123eb1ae37f02027d4910082b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:33:43 np0005597378 systemd[1]: libpod-conmon-21e7662e8ee6ea2a9cb04c8e6c5e283d6ad0061123eb1ae37f02027d4910082b.scope: Deactivated successfully.
Jan 27 08:33:43 np0005597378 podman[241820]: 2026-01-27 13:33:43.998864336 +0000 UTC m=+0.038826764 container create 4179e1a3b8164279dc2624b90aca724c8f06ae3d0110678523d9a43c3ec71914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_albattani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:33:44 np0005597378 systemd[1]: Started libpod-conmon-4179e1a3b8164279dc2624b90aca724c8f06ae3d0110678523d9a43c3ec71914.scope.
Jan 27 08:33:44 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:33:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:44 np0005597378 podman[241820]: 2026-01-27 13:33:44.065742614 +0000 UTC m=+0.105705042 container init 4179e1a3b8164279dc2624b90aca724c8f06ae3d0110678523d9a43c3ec71914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_albattani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:33:44 np0005597378 podman[241820]: 2026-01-27 13:33:44.071680523 +0000 UTC m=+0.111642961 container start 4179e1a3b8164279dc2624b90aca724c8f06ae3d0110678523d9a43c3ec71914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_albattani, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:33:44 np0005597378 great_albattani[241836]: 167 167
Jan 27 08:33:44 np0005597378 systemd[1]: libpod-4179e1a3b8164279dc2624b90aca724c8f06ae3d0110678523d9a43c3ec71914.scope: Deactivated successfully.
Jan 27 08:33:44 np0005597378 podman[241820]: 2026-01-27 13:33:44.077142837 +0000 UTC m=+0.117105285 container attach 4179e1a3b8164279dc2624b90aca724c8f06ae3d0110678523d9a43c3ec71914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_albattani, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:33:44 np0005597378 podman[241820]: 2026-01-27 13:33:43.978986176 +0000 UTC m=+0.018948694 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:33:44 np0005597378 podman[241820]: 2026-01-27 13:33:44.077738604 +0000 UTC m=+0.117701042 container died 4179e1a3b8164279dc2624b90aca724c8f06ae3d0110678523d9a43c3ec71914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_albattani, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Jan 27 08:33:44 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cc736a2e5c2adbd9a3f6c8805b122e5ffb634fe52da57cfc0979557d407b8783-merged.mount: Deactivated successfully.
Jan 27 08:33:44 np0005597378 podman[241820]: 2026-01-27 13:33:44.127373184 +0000 UTC m=+0.167335612 container remove 4179e1a3b8164279dc2624b90aca724c8f06ae3d0110678523d9a43c3ec71914 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_albattani, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:33:44 np0005597378 systemd[1]: libpod-conmon-4179e1a3b8164279dc2624b90aca724c8f06ae3d0110678523d9a43c3ec71914.scope: Deactivated successfully.
Jan 27 08:33:44 np0005597378 podman[241860]: 2026-01-27 13:33:44.287394011 +0000 UTC m=+0.044901845 container create 3840286be0fbe228fcd7f77b0958e7f50148f025137e7b9e429d5dcd2579c643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_pare, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:33:44 np0005597378 systemd[1]: Started libpod-conmon-3840286be0fbe228fcd7f77b0958e7f50148f025137e7b9e429d5dcd2579c643.scope.
Jan 27 08:33:44 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:33:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c5b2a9b842118f09487a3df58e4970902fc85fac12b330a8246189fa55a87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:33:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c5b2a9b842118f09487a3df58e4970902fc85fac12b330a8246189fa55a87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:33:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c5b2a9b842118f09487a3df58e4970902fc85fac12b330a8246189fa55a87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:33:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c5b2a9b842118f09487a3df58e4970902fc85fac12b330a8246189fa55a87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:33:44 np0005597378 podman[241860]: 2026-01-27 13:33:44.355177014 +0000 UTC m=+0.112684878 container init 3840286be0fbe228fcd7f77b0958e7f50148f025137e7b9e429d5dcd2579c643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_pare, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:33:44 np0005597378 podman[241860]: 2026-01-27 13:33:44.264228825 +0000 UTC m=+0.021736679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:33:44 np0005597378 podman[241860]: 2026-01-27 13:33:44.363474975 +0000 UTC m=+0.120982809 container start 3840286be0fbe228fcd7f77b0958e7f50148f025137e7b9e429d5dcd2579c643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_pare, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:33:44 np0005597378 podman[241860]: 2026-01-27 13:33:44.368475148 +0000 UTC m=+0.125982982 container attach 3840286be0fbe228fcd7f77b0958e7f50148f025137e7b9e429d5dcd2579c643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:33:45 np0005597378 lvm[241954]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:33:45 np0005597378 lvm[241955]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:33:45 np0005597378 lvm[241954]: VG ceph_vg0 finished
Jan 27 08:33:45 np0005597378 lvm[241955]: VG ceph_vg1 finished
Jan 27 08:33:45 np0005597378 lvm[241957]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:33:45 np0005597378 lvm[241957]: VG ceph_vg2 finished
Jan 27 08:33:45 np0005597378 lvm[241958]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:33:45 np0005597378 lvm[241958]: VG ceph_vg2 finished
Jan 27 08:33:45 np0005597378 frosty_pare[241876]: {}
Jan 27 08:33:45 np0005597378 systemd[1]: libpod-3840286be0fbe228fcd7f77b0958e7f50148f025137e7b9e429d5dcd2579c643.scope: Deactivated successfully.
Jan 27 08:33:45 np0005597378 systemd[1]: libpod-3840286be0fbe228fcd7f77b0958e7f50148f025137e7b9e429d5dcd2579c643.scope: Consumed 1.244s CPU time.
Jan 27 08:33:45 np0005597378 podman[241860]: 2026-01-27 13:33:45.145256771 +0000 UTC m=+0.902764635 container died 3840286be0fbe228fcd7f77b0958e7f50148f025137e7b9e429d5dcd2579c643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_pare, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:33:45 np0005597378 systemd[1]: var-lib-containers-storage-overlay-be5c5b2a9b842118f09487a3df58e4970902fc85fac12b330a8246189fa55a87-merged.mount: Deactivated successfully.
Jan 27 08:33:45 np0005597378 podman[241860]: 2026-01-27 13:33:45.217259727 +0000 UTC m=+0.974767571 container remove 3840286be0fbe228fcd7f77b0958e7f50148f025137e7b9e429d5dcd2579c643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:33:45 np0005597378 systemd[1]: libpod-conmon-3840286be0fbe228fcd7f77b0958e7f50148f025137e7b9e429d5dcd2579c643.scope: Deactivated successfully.
Jan 27 08:33:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:33:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:33:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:33:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:33:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:33:46.281 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:33:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:33:46.282 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:33:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:33:46.282 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:33:46 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:33:46 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:33:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:33:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:33:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:33:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:33:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:33:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:33:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:52 np0005597378 podman[241997]: 2026-01-27 13:33:52.811186666 +0000 UTC m=+0.128313693 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 27 08:33:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:56 np0005597378 podman[242023]: 2026-01-27 13:33:56.725074092 +0000 UTC m=+0.070079775 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 08:33:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:33:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:33:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:33:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/759779802' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:33:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:33:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/759779802' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:34:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:34:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5844 writes, 24K keys, 5844 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5844 writes, 1003 syncs, 5.83 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s#012Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 27 08:34:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:34:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.3 total, 600.0 interval#012Cumulative writes: 7164 writes, 29K keys, 7164 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7164 writes, 1415 syncs, 5.06 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 27 08:34:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:34:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1201.0 total, 600.0 interval#012Cumulative writes: 5739 writes, 24K keys, 5739 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5739 writes, 919 syncs, 6.24 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 27 08:34:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:16 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:34:16
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', '.mgr', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.log']
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:34:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:34:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:23 np0005597378 podman[242042]: 2026-01-27 13:34:23.790570876 +0000 UTC m=+0.116115950 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 27 08:34:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:34:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:27 np0005597378 podman[242068]: 2026-01-27 13:34:27.719466281 +0000 UTC m=+0.059646937 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:34:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:29 np0005597378 nova_compute[238941]: 2026-01-27 13:34:29.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:34:29 np0005597378 nova_compute[238941]: 2026-01-27 13:34:29.394 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:34:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:30 np0005597378 nova_compute[238941]: 2026-01-27 13:34:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:34:31 np0005597378 nova_compute[238941]: 2026-01-27 13:34:31.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:34:31 np0005597378 nova_compute[238941]: 2026-01-27 13:34:31.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:34:31 np0005597378 nova_compute[238941]: 2026-01-27 13:34:31.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:34:31 np0005597378 nova_compute[238941]: 2026-01-27 13:34:31.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:34:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:31 np0005597378 nova_compute[238941]: 2026-01-27 13:34:31.407 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:34:31 np0005597378 nova_compute[238941]: 2026-01-27 13:34:31.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:34:31 np0005597378 nova_compute[238941]: 2026-01-27 13:34:31.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:34:31 np0005597378 nova_compute[238941]: 2026-01-27 13:34:31.409 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:34:31 np0005597378 nova_compute[238941]: 2026-01-27 13:34:31.409 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:34:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:34:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2345074643' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:34:31 np0005597378 nova_compute[238941]: 2026-01-27 13:34:31.963 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:34:32 np0005597378 nova_compute[238941]: 2026-01-27 13:34:32.102 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:34:32 np0005597378 nova_compute[238941]: 2026-01-27 13:34:32.103 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5153MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:34:32 np0005597378 nova_compute[238941]: 2026-01-27 13:34:32.103 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:34:32 np0005597378 nova_compute[238941]: 2026-01-27 13:34:32.104 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:34:32 np0005597378 nova_compute[238941]: 2026-01-27 13:34:32.343 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:34:32 np0005597378 nova_compute[238941]: 2026-01-27 13:34:32.343 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:34:32 np0005597378 nova_compute[238941]: 2026-01-27 13:34:32.356 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:34:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:34:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/22922360' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:34:32 np0005597378 nova_compute[238941]: 2026-01-27 13:34:32.862 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:34:32 np0005597378 nova_compute[238941]: 2026-01-27 13:34:32.867 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:34:32 np0005597378 nova_compute[238941]: 2026-01-27 13:34:32.962 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:34:32 np0005597378 nova_compute[238941]: 2026-01-27 13:34:32.963 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:34:32 np0005597378 nova_compute[238941]: 2026-01-27 13:34:32.964 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:34:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:33 np0005597378 nova_compute[238941]: 2026-01-27 13:34:33.959 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:34:33 np0005597378 nova_compute[238941]: 2026-01-27 13:34:33.960 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:34:33 np0005597378 nova_compute[238941]: 2026-01-27 13:34:33.960 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:34:33 np0005597378 nova_compute[238941]: 2026-01-27 13:34:33.961 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:34:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:34 np0005597378 nova_compute[238941]: 2026-01-27 13:34:34.077 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 08:34:34 np0005597378 nova_compute[238941]: 2026-01-27 13:34:34.078 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:34:34 np0005597378 nova_compute[238941]: 2026-01-27 13:34:34.078 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:34:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.825715) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520885825748, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1450, "num_deletes": 251, "total_data_size": 2313637, "memory_usage": 2359312, "flush_reason": "Manual Compaction"}
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520885850520, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2269762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14881, "largest_seqno": 16330, "table_properties": {"data_size": 2263051, "index_size": 3846, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13781, "raw_average_key_size": 19, "raw_value_size": 2249613, "raw_average_value_size": 3200, "num_data_blocks": 176, "num_entries": 703, "num_filter_entries": 703, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769520734, "oldest_key_time": 1769520734, "file_creation_time": 1769520885, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 24859 microseconds, and 7998 cpu microseconds.
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.850571) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2269762 bytes OK
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.850590) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.852946) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.853011) EVENT_LOG_v1 {"time_micros": 1769520885852998, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.853044) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2307268, prev total WAL file size 2307268, number of live WAL files 2.
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.854754) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2216KB)], [35(7179KB)]
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520885854816, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9621989, "oldest_snapshot_seqno": -1}
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4021 keys, 7822621 bytes, temperature: kUnknown
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520885901593, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7822621, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7793638, "index_size": 17801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 98215, "raw_average_key_size": 24, "raw_value_size": 7718791, "raw_average_value_size": 1919, "num_data_blocks": 754, "num_entries": 4021, "num_filter_entries": 4021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769520885, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.901794) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7822621 bytes
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.903407) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.5 rd, 167.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 7.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(7.7) write-amplify(3.4) OK, records in: 4535, records dropped: 514 output_compression: NoCompression
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.903430) EVENT_LOG_v1 {"time_micros": 1769520885903419, "job": 16, "event": "compaction_finished", "compaction_time_micros": 46829, "compaction_time_cpu_micros": 15521, "output_level": 6, "num_output_files": 1, "total_output_size": 7822621, "num_input_records": 4535, "num_output_records": 4021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520885903942, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769520885905403, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.854649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.905450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.905455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.905456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.905458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:34:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:34:45.905459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:34:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:34:46.283 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:34:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:34:46.283 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:34:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:34:46.283 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:34:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:34:46 np0005597378 podman[242347]: 2026-01-27 13:34:46.959267912 +0000 UTC m=+0.040114199 container create d3d515a73d254208575ff4a668d5f12d9aee94e62395404ce2cbe41f66f0c0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:34:46 np0005597378 systemd[1]: Started libpod-conmon-d3d515a73d254208575ff4a668d5f12d9aee94e62395404ce2cbe41f66f0c0d5.scope.
Jan 27 08:34:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:34:47 np0005597378 podman[242347]: 2026-01-27 13:34:47.034703228 +0000 UTC m=+0.115549555 container init d3d515a73d254208575ff4a668d5f12d9aee94e62395404ce2cbe41f66f0c0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:34:47 np0005597378 podman[242347]: 2026-01-27 13:34:46.939448065 +0000 UTC m=+0.020294392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:34:47 np0005597378 podman[242347]: 2026-01-27 13:34:47.040922923 +0000 UTC m=+0.121769220 container start d3d515a73d254208575ff4a668d5f12d9aee94e62395404ce2cbe41f66f0c0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:34:47 np0005597378 podman[242347]: 2026-01-27 13:34:47.044414076 +0000 UTC m=+0.125260403 container attach d3d515a73d254208575ff4a668d5f12d9aee94e62395404ce2cbe41f66f0c0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:34:47 np0005597378 gracious_antonelli[242363]: 167 167
Jan 27 08:34:47 np0005597378 systemd[1]: libpod-d3d515a73d254208575ff4a668d5f12d9aee94e62395404ce2cbe41f66f0c0d5.scope: Deactivated successfully.
Jan 27 08:34:47 np0005597378 podman[242347]: 2026-01-27 13:34:47.045875626 +0000 UTC m=+0.126721913 container died d3d515a73d254208575ff4a668d5f12d9aee94e62395404ce2cbe41f66f0c0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:34:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-83deefd1918d3b6fb3b50cd42bd8e6d8d01ea691a17aa5d3ad177f0993b7514d-merged.mount: Deactivated successfully.
Jan 27 08:34:47 np0005597378 podman[242347]: 2026-01-27 13:34:47.08512102 +0000 UTC m=+0.165967307 container remove d3d515a73d254208575ff4a668d5f12d9aee94e62395404ce2cbe41f66f0c0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Jan 27 08:34:47 np0005597378 systemd[1]: libpod-conmon-d3d515a73d254208575ff4a668d5f12d9aee94e62395404ce2cbe41f66f0c0d5.scope: Deactivated successfully.
Jan 27 08:34:47 np0005597378 podman[242388]: 2026-01-27 13:34:47.29303032 +0000 UTC m=+0.035270789 container create f27506d2a13a41611dd0c31740eca185814f6fa6b0ff43bbb78815774d75ef4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:34:47 np0005597378 systemd[1]: Started libpod-conmon-f27506d2a13a41611dd0c31740eca185814f6fa6b0ff43bbb78815774d75ef4e.scope.
Jan 27 08:34:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:34:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a64f6940a09731b9e8dd06b83b449500b4d532c7020cf6c2d953401e2cdb07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a64f6940a09731b9e8dd06b83b449500b4d532c7020cf6c2d953401e2cdb07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a64f6940a09731b9e8dd06b83b449500b4d532c7020cf6c2d953401e2cdb07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a64f6940a09731b9e8dd06b83b449500b4d532c7020cf6c2d953401e2cdb07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a64f6940a09731b9e8dd06b83b449500b4d532c7020cf6c2d953401e2cdb07/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:47 np0005597378 podman[242388]: 2026-01-27 13:34:47.369640828 +0000 UTC m=+0.111881317 container init f27506d2a13a41611dd0c31740eca185814f6fa6b0ff43bbb78815774d75ef4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wilbur, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:34:47 np0005597378 podman[242388]: 2026-01-27 13:34:47.276922692 +0000 UTC m=+0.019163181 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:34:47 np0005597378 podman[242388]: 2026-01-27 13:34:47.378037542 +0000 UTC m=+0.120278011 container start f27506d2a13a41611dd0c31740eca185814f6fa6b0ff43bbb78815774d75ef4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:34:47 np0005597378 podman[242388]: 2026-01-27 13:34:47.382356856 +0000 UTC m=+0.124597355 container attach f27506d2a13a41611dd0c31740eca185814f6fa6b0ff43bbb78815774d75ef4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 08:34:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:34:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:34:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:34:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:34:47 np0005597378 zen_wilbur[242405]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:34:47 np0005597378 zen_wilbur[242405]: --> All data devices are unavailable
Jan 27 08:34:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:34:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:34:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:34:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:34:47 np0005597378 systemd[1]: libpod-f27506d2a13a41611dd0c31740eca185814f6fa6b0ff43bbb78815774d75ef4e.scope: Deactivated successfully.
Jan 27 08:34:47 np0005597378 podman[242388]: 2026-01-27 13:34:47.842057525 +0000 UTC m=+0.584298004 container died f27506d2a13a41611dd0c31740eca185814f6fa6b0ff43bbb78815774d75ef4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:34:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-19a64f6940a09731b9e8dd06b83b449500b4d532c7020cf6c2d953401e2cdb07-merged.mount: Deactivated successfully.
Jan 27 08:34:47 np0005597378 podman[242388]: 2026-01-27 13:34:47.888878021 +0000 UTC m=+0.631118490 container remove f27506d2a13a41611dd0c31740eca185814f6fa6b0ff43bbb78815774d75ef4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:34:47 np0005597378 systemd[1]: libpod-conmon-f27506d2a13a41611dd0c31740eca185814f6fa6b0ff43bbb78815774d75ef4e.scope: Deactivated successfully.
Jan 27 08:34:48 np0005597378 podman[242499]: 2026-01-27 13:34:48.427265993 +0000 UTC m=+0.063259544 container create e727749bd9fa00e3bad56a671d7a43f41b27c39116f6ea336409b7db6f074b49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curie, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:34:48 np0005597378 systemd[1]: Started libpod-conmon-e727749bd9fa00e3bad56a671d7a43f41b27c39116f6ea336409b7db6f074b49.scope.
Jan 27 08:34:48 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:34:48 np0005597378 podman[242499]: 2026-01-27 13:34:48.406352276 +0000 UTC m=+0.042345797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:34:48 np0005597378 podman[242499]: 2026-01-27 13:34:48.511528024 +0000 UTC m=+0.147521625 container init e727749bd9fa00e3bad56a671d7a43f41b27c39116f6ea336409b7db6f074b49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curie, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:34:48 np0005597378 podman[242499]: 2026-01-27 13:34:48.520146073 +0000 UTC m=+0.156139614 container start e727749bd9fa00e3bad56a671d7a43f41b27c39116f6ea336409b7db6f074b49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curie, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:34:48 np0005597378 podman[242499]: 2026-01-27 13:34:48.524627373 +0000 UTC m=+0.160620924 container attach e727749bd9fa00e3bad56a671d7a43f41b27c39116f6ea336409b7db6f074b49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curie, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 08:34:48 np0005597378 mystifying_curie[242515]: 167 167
Jan 27 08:34:48 np0005597378 systemd[1]: libpod-e727749bd9fa00e3bad56a671d7a43f41b27c39116f6ea336409b7db6f074b49.scope: Deactivated successfully.
Jan 27 08:34:48 np0005597378 podman[242499]: 2026-01-27 13:34:48.529918083 +0000 UTC m=+0.165911614 container died e727749bd9fa00e3bad56a671d7a43f41b27c39116f6ea336409b7db6f074b49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:34:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a2728995775841f27bb2178c674bae0d49d55e614f7efc51e3bd44c00e560892-merged.mount: Deactivated successfully.
Jan 27 08:34:48 np0005597378 podman[242499]: 2026-01-27 13:34:48.585999175 +0000 UTC m=+0.221992706 container remove e727749bd9fa00e3bad56a671d7a43f41b27c39116f6ea336409b7db6f074b49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 08:34:48 np0005597378 systemd[1]: libpod-conmon-e727749bd9fa00e3bad56a671d7a43f41b27c39116f6ea336409b7db6f074b49.scope: Deactivated successfully.
Jan 27 08:34:48 np0005597378 podman[242537]: 2026-01-27 13:34:48.791409649 +0000 UTC m=+0.051347746 container create 7774c7c216d5df536d02f723fcd378cc503215bfcdd2c3d1a9b011d15908ca14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_babbage, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:34:48 np0005597378 systemd[1]: Started libpod-conmon-7774c7c216d5df536d02f723fcd378cc503215bfcdd2c3d1a9b011d15908ca14.scope.
Jan 27 08:34:48 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:34:48 np0005597378 podman[242537]: 2026-01-27 13:34:48.76850517 +0000 UTC m=+0.028443297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:34:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2e94686c7ffb9b954acd724011e63104ff7dec1a96590601c9a1014f4ff3e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2e94686c7ffb9b954acd724011e63104ff7dec1a96590601c9a1014f4ff3e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2e94686c7ffb9b954acd724011e63104ff7dec1a96590601c9a1014f4ff3e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2e94686c7ffb9b954acd724011e63104ff7dec1a96590601c9a1014f4ff3e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:48 np0005597378 podman[242537]: 2026-01-27 13:34:48.88463769 +0000 UTC m=+0.144575807 container init 7774c7c216d5df536d02f723fcd378cc503215bfcdd2c3d1a9b011d15908ca14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_babbage, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:34:48 np0005597378 podman[242537]: 2026-01-27 13:34:48.893667949 +0000 UTC m=+0.153606046 container start 7774c7c216d5df536d02f723fcd378cc503215bfcdd2c3d1a9b011d15908ca14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_babbage, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:34:48 np0005597378 podman[242537]: 2026-01-27 13:34:48.898479548 +0000 UTC m=+0.158417685 container attach 7774c7c216d5df536d02f723fcd378cc503215bfcdd2c3d1a9b011d15908ca14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_babbage, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 08:34:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]: {
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:    "0": [
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:        {
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "devices": [
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "/dev/loop3"
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            ],
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_name": "ceph_lv0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_size": "21470642176",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "name": "ceph_lv0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "tags": {
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.cluster_name": "ceph",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.crush_device_class": "",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.encrypted": "0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.objectstore": "bluestore",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.osd_id": "0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.type": "block",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.vdo": "0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.with_tpm": "0"
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            },
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "type": "block",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "vg_name": "ceph_vg0"
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:        }
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:    ],
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:    "1": [
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:        {
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "devices": [
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "/dev/loop4"
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            ],
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_name": "ceph_lv1",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_size": "21470642176",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "name": "ceph_lv1",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "tags": {
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.cluster_name": "ceph",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.crush_device_class": "",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.encrypted": "0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.objectstore": "bluestore",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.osd_id": "1",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.type": "block",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.vdo": "0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.with_tpm": "0"
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            },
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "type": "block",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "vg_name": "ceph_vg1"
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:        }
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:    ],
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:    "2": [
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:        {
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "devices": [
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "/dev/loop5"
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            ],
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_name": "ceph_lv2",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_size": "21470642176",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "name": "ceph_lv2",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "tags": {
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.cluster_name": "ceph",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.crush_device_class": "",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.encrypted": "0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.objectstore": "bluestore",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.osd_id": "2",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.type": "block",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.vdo": "0",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:                "ceph.with_tpm": "0"
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            },
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "type": "block",
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:            "vg_name": "ceph_vg2"
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:        }
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]:    ]
Jan 27 08:34:49 np0005597378 youthful_babbage[242554]: }
Jan 27 08:34:49 np0005597378 systemd[1]: libpod-7774c7c216d5df536d02f723fcd378cc503215bfcdd2c3d1a9b011d15908ca14.scope: Deactivated successfully.
Jan 27 08:34:49 np0005597378 podman[242537]: 2026-01-27 13:34:49.286055568 +0000 UTC m=+0.545993705 container died 7774c7c216d5df536d02f723fcd378cc503215bfcdd2c3d1a9b011d15908ca14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_babbage, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 08:34:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-af2e94686c7ffb9b954acd724011e63104ff7dec1a96590601c9a1014f4ff3e9-merged.mount: Deactivated successfully.
Jan 27 08:34:49 np0005597378 podman[242537]: 2026-01-27 13:34:49.342320675 +0000 UTC m=+0.602258782 container remove 7774c7c216d5df536d02f723fcd378cc503215bfcdd2c3d1a9b011d15908ca14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_babbage, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:34:49 np0005597378 systemd[1]: libpod-conmon-7774c7c216d5df536d02f723fcd378cc503215bfcdd2c3d1a9b011d15908ca14.scope: Deactivated successfully.
Jan 27 08:34:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:49 np0005597378 podman[242635]: 2026-01-27 13:34:49.888391911 +0000 UTC m=+0.068980916 container create 1a4ae007b75b8d4025054a02312084703f543fe167b64abd0246f8113d84575d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_proskuriakova, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:34:49 np0005597378 systemd[1]: Started libpod-conmon-1a4ae007b75b8d4025054a02312084703f543fe167b64abd0246f8113d84575d.scope.
Jan 27 08:34:49 np0005597378 podman[242635]: 2026-01-27 13:34:49.851590352 +0000 UTC m=+0.032179407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:34:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:34:49 np0005597378 podman[242635]: 2026-01-27 13:34:49.989106601 +0000 UTC m=+0.169695596 container init 1a4ae007b75b8d4025054a02312084703f543fe167b64abd0246f8113d84575d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_proskuriakova, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:34:49 np0005597378 podman[242635]: 2026-01-27 13:34:49.995553232 +0000 UTC m=+0.176142237 container start 1a4ae007b75b8d4025054a02312084703f543fe167b64abd0246f8113d84575d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_proskuriakova, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:34:50 np0005597378 condescending_proskuriakova[242651]: 167 167
Jan 27 08:34:50 np0005597378 podman[242635]: 2026-01-27 13:34:50.001701866 +0000 UTC m=+0.182290841 container attach 1a4ae007b75b8d4025054a02312084703f543fe167b64abd0246f8113d84575d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_proskuriakova, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:34:50 np0005597378 systemd[1]: libpod-1a4ae007b75b8d4025054a02312084703f543fe167b64abd0246f8113d84575d.scope: Deactivated successfully.
Jan 27 08:34:50 np0005597378 podman[242635]: 2026-01-27 13:34:50.003104073 +0000 UTC m=+0.183693068 container died 1a4ae007b75b8d4025054a02312084703f543fe167b64abd0246f8113d84575d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:34:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b13241be280d955d6db7e833ab3aa74dc62a02b353aff217901ad2708e865604-merged.mount: Deactivated successfully.
Jan 27 08:34:50 np0005597378 podman[242635]: 2026-01-27 13:34:50.050865653 +0000 UTC m=+0.231454628 container remove 1a4ae007b75b8d4025054a02312084703f543fe167b64abd0246f8113d84575d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:34:50 np0005597378 systemd[1]: libpod-conmon-1a4ae007b75b8d4025054a02312084703f543fe167b64abd0246f8113d84575d.scope: Deactivated successfully.
Jan 27 08:34:50 np0005597378 podman[242677]: 2026-01-27 13:34:50.261494836 +0000 UTC m=+0.059464462 container create 2f0e7be13dbe154ee85b8b5e9742869fa4b149b1a7ea9a6cd239c320cedd5251 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:34:50 np0005597378 systemd[1]: Started libpod-conmon-2f0e7be13dbe154ee85b8b5e9742869fa4b149b1a7ea9a6cd239c320cedd5251.scope.
Jan 27 08:34:50 np0005597378 podman[242677]: 2026-01-27 13:34:50.231644392 +0000 UTC m=+0.029614098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:34:50 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:34:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682b70e6365e28149078724f7cf69f05dd24be8d4c97d3e3fabafaaea87a2ec2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682b70e6365e28149078724f7cf69f05dd24be8d4c97d3e3fabafaaea87a2ec2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682b70e6365e28149078724f7cf69f05dd24be8d4c97d3e3fabafaaea87a2ec2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682b70e6365e28149078724f7cf69f05dd24be8d4c97d3e3fabafaaea87a2ec2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:34:50 np0005597378 podman[242677]: 2026-01-27 13:34:50.366299584 +0000 UTC m=+0.164269300 container init 2f0e7be13dbe154ee85b8b5e9742869fa4b149b1a7ea9a6cd239c320cedd5251 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 08:34:50 np0005597378 podman[242677]: 2026-01-27 13:34:50.373783133 +0000 UTC m=+0.171752779 container start 2f0e7be13dbe154ee85b8b5e9742869fa4b149b1a7ea9a6cd239c320cedd5251 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_einstein, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:34:50 np0005597378 podman[242677]: 2026-01-27 13:34:50.379176256 +0000 UTC m=+0.177145962 container attach 2f0e7be13dbe154ee85b8b5e9742869fa4b149b1a7ea9a6cd239c320cedd5251 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:34:51 np0005597378 lvm[242772]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:34:51 np0005597378 lvm[242773]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:34:51 np0005597378 lvm[242773]: VG ceph_vg1 finished
Jan 27 08:34:51 np0005597378 lvm[242772]: VG ceph_vg0 finished
Jan 27 08:34:51 np0005597378 lvm[242774]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:34:51 np0005597378 lvm[242774]: VG ceph_vg2 finished
Jan 27 08:34:51 np0005597378 crazy_einstein[242693]: {}
Jan 27 08:34:51 np0005597378 systemd[1]: libpod-2f0e7be13dbe154ee85b8b5e9742869fa4b149b1a7ea9a6cd239c320cedd5251.scope: Deactivated successfully.
Jan 27 08:34:51 np0005597378 podman[242677]: 2026-01-27 13:34:51.213221104 +0000 UTC m=+1.011190740 container died 2f0e7be13dbe154ee85b8b5e9742869fa4b149b1a7ea9a6cd239c320cedd5251 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_einstein, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:34:51 np0005597378 systemd[1]: libpod-2f0e7be13dbe154ee85b8b5e9742869fa4b149b1a7ea9a6cd239c320cedd5251.scope: Consumed 1.456s CPU time.
Jan 27 08:34:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay-682b70e6365e28149078724f7cf69f05dd24be8d4c97d3e3fabafaaea87a2ec2-merged.mount: Deactivated successfully.
Jan 27 08:34:51 np0005597378 podman[242677]: 2026-01-27 13:34:51.271085013 +0000 UTC m=+1.069054639 container remove 2f0e7be13dbe154ee85b8b5e9742869fa4b149b1a7ea9a6cd239c320cedd5251 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_einstein, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:34:51 np0005597378 systemd[1]: libpod-conmon-2f0e7be13dbe154ee85b8b5e9742869fa4b149b1a7ea9a6cd239c320cedd5251.scope: Deactivated successfully.
Jan 27 08:34:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:34:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:34:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:34:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:34:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:34:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:34:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:54 np0005597378 podman[242816]: 2026-01-27 13:34:54.80311123 +0000 UTC m=+0.117656360 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 27 08:34:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:58 np0005597378 podman[242842]: 2026-01-27 13:34:58.751665689 +0000 UTC m=+0.084531260 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:34:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:34:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:34:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:34:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2762772300' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:34:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:34:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2762772300' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:35:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 27 08:35:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 08:35:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 08:35:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 08:35:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 08:35:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 27 08:35:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:35:17
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr']
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:35:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:35:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:25 np0005597378 podman[242863]: 2026-01-27 13:35:25.760112829 +0000 UTC m=+0.088588958 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251202)
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5473997654250071e-06 of space, bias 4.0, pg target 0.0018568797185100085 quantized to 16 (current 16)
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:35:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:28 np0005597378 nova_compute[238941]: 2026-01-27 13:35:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:35:28 np0005597378 nova_compute[238941]: 2026-01-27 13:35:28.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 27 08:35:28 np0005597378 nova_compute[238941]: 2026-01-27 13:35:28.398 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 27 08:35:28 np0005597378 nova_compute[238941]: 2026-01-27 13:35:28.399 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:35:28 np0005597378 nova_compute[238941]: 2026-01-27 13:35:28.399 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 08:35:28 np0005597378 nova_compute[238941]: 2026-01-27 13:35:28.419 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:35:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:29 np0005597378 podman[242889]: 2026-01-27 13:35:29.723243744 +0000 UTC m=+0.059788870 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:35:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:31 np0005597378 nova_compute[238941]: 2026-01-27 13:35:31.435 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.422 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.422 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.423 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.423 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.423 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.423 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.424 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.460 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.461 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.461 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.461 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.462 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:35:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:35:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3947899616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:35:32 np0005597378 nova_compute[238941]: 2026-01-27 13:35:32.984 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:35:33 np0005597378 nova_compute[238941]: 2026-01-27 13:35:33.150 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:35:33 np0005597378 nova_compute[238941]: 2026-01-27 13:35:33.151 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5150MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:35:33 np0005597378 nova_compute[238941]: 2026-01-27 13:35:33.152 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:35:33 np0005597378 nova_compute[238941]: 2026-01-27 13:35:33.152 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:35:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:33 np0005597378 nova_compute[238941]: 2026-01-27 13:35:33.556 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:35:33 np0005597378 nova_compute[238941]: 2026-01-27 13:35:33.557 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:35:33 np0005597378 nova_compute[238941]: 2026-01-27 13:35:33.676 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 08:35:33 np0005597378 nova_compute[238941]: 2026-01-27 13:35:33.846 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 08:35:33 np0005597378 nova_compute[238941]: 2026-01-27 13:35:33.847 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 08:35:33 np0005597378 nova_compute[238941]: 2026-01-27 13:35:33.875 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 08:35:33 np0005597378 nova_compute[238941]: 2026-01-27 13:35:33.906 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 08:35:33 np0005597378 nova_compute[238941]: 2026-01-27 13:35:33.925 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:35:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:35:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278959579' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:35:34 np0005597378 nova_compute[238941]: 2026-01-27 13:35:34.512 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:35:34 np0005597378 nova_compute[238941]: 2026-01-27 13:35:34.523 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:35:34 np0005597378 nova_compute[238941]: 2026-01-27 13:35:34.583 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:35:34 np0005597378 nova_compute[238941]: 2026-01-27 13:35:34.584 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:35:34 np0005597378 nova_compute[238941]: 2026-01-27 13:35:34.585 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:35:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:36 np0005597378 nova_compute[238941]: 2026-01-27 13:35:36.544 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:35:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:35:46.284 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:35:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:35:46.284 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:35:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:35:46.284 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:35:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:35:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:35:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:35:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:35:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:35:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:35:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:35:52 np0005597378 podman[243096]: 2026-01-27 13:35:52.556396097 +0000 UTC m=+0.051681141 container create 5c1e3ff17ceb83d0d1d798912234a0d41c76f34a7df5f3482571917783aced2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:35:52 np0005597378 systemd[1]: Started libpod-conmon-5c1e3ff17ceb83d0d1d798912234a0d41c76f34a7df5f3482571917783aced2a.scope.
Jan 27 08:35:52 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:35:52 np0005597378 podman[243096]: 2026-01-27 13:35:52.530044103 +0000 UTC m=+0.025329177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:35:52 np0005597378 podman[243096]: 2026-01-27 13:35:52.639770108 +0000 UTC m=+0.135055172 container init 5c1e3ff17ceb83d0d1d798912234a0d41c76f34a7df5f3482571917783aced2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:35:52 np0005597378 podman[243096]: 2026-01-27 13:35:52.647280113 +0000 UTC m=+0.142565157 container start 5c1e3ff17ceb83d0d1d798912234a0d41c76f34a7df5f3482571917783aced2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:35:52 np0005597378 affectionate_ganguly[243112]: 167 167
Jan 27 08:35:52 np0005597378 systemd[1]: libpod-5c1e3ff17ceb83d0d1d798912234a0d41c76f34a7df5f3482571917783aced2a.scope: Deactivated successfully.
Jan 27 08:35:52 np0005597378 podman[243096]: 2026-01-27 13:35:52.653860904 +0000 UTC m=+0.149145968 container attach 5c1e3ff17ceb83d0d1d798912234a0d41c76f34a7df5f3482571917783aced2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:35:52 np0005597378 podman[243096]: 2026-01-27 13:35:52.654840159 +0000 UTC m=+0.150125203 container died 5c1e3ff17ceb83d0d1d798912234a0d41c76f34a7df5f3482571917783aced2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 08:35:52 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5865848be3cbfb913b34757c4a45ebdb4d090439d242837b2dd97b61678fbbe3-merged.mount: Deactivated successfully.
Jan 27 08:35:52 np0005597378 podman[243096]: 2026-01-27 13:35:52.7231522 +0000 UTC m=+0.218437244 container remove 5c1e3ff17ceb83d0d1d798912234a0d41c76f34a7df5f3482571917783aced2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:35:52 np0005597378 systemd[1]: libpod-conmon-5c1e3ff17ceb83d0d1d798912234a0d41c76f34a7df5f3482571917783aced2a.scope: Deactivated successfully.
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:35:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:35:52 np0005597378 podman[243137]: 2026-01-27 13:35:52.899265466 +0000 UTC m=+0.046258070 container create 78618f5724fe4c3349412b1e7a179db36169b928ea8ae5e11d96cd6801c30491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_bose, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:35:52 np0005597378 systemd[1]: Started libpod-conmon-78618f5724fe4c3349412b1e7a179db36169b928ea8ae5e11d96cd6801c30491.scope.
Jan 27 08:35:52 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:35:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338a1276bfc68edfa6533d73e23218dc407b045369d2b53c80ec19f25493ce5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338a1276bfc68edfa6533d73e23218dc407b045369d2b53c80ec19f25493ce5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338a1276bfc68edfa6533d73e23218dc407b045369d2b53c80ec19f25493ce5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338a1276bfc68edfa6533d73e23218dc407b045369d2b53c80ec19f25493ce5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338a1276bfc68edfa6533d73e23218dc407b045369d2b53c80ec19f25493ce5b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:52 np0005597378 podman[243137]: 2026-01-27 13:35:52.879197435 +0000 UTC m=+0.026190059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:35:52 np0005597378 podman[243137]: 2026-01-27 13:35:52.985005529 +0000 UTC m=+0.131998143 container init 78618f5724fe4c3349412b1e7a179db36169b928ea8ae5e11d96cd6801c30491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:35:52 np0005597378 podman[243137]: 2026-01-27 13:35:52.990999034 +0000 UTC m=+0.137991638 container start 78618f5724fe4c3349412b1e7a179db36169b928ea8ae5e11d96cd6801c30491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 08:35:52 np0005597378 podman[243137]: 2026-01-27 13:35:52.995697457 +0000 UTC m=+0.142690081 container attach 78618f5724fe4c3349412b1e7a179db36169b928ea8ae5e11d96cd6801c30491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:35:53 np0005597378 gracious_bose[243153]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:35:53 np0005597378 gracious_bose[243153]: --> All data devices are unavailable
Jan 27 08:35:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:53 np0005597378 systemd[1]: libpod-78618f5724fe4c3349412b1e7a179db36169b928ea8ae5e11d96cd6801c30491.scope: Deactivated successfully.
Jan 27 08:35:53 np0005597378 podman[243137]: 2026-01-27 13:35:53.465780314 +0000 UTC m=+0.612772918 container died 78618f5724fe4c3349412b1e7a179db36169b928ea8ae5e11d96cd6801c30491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_bose, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:35:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay-338a1276bfc68edfa6533d73e23218dc407b045369d2b53c80ec19f25493ce5b-merged.mount: Deactivated successfully.
Jan 27 08:35:53 np0005597378 podman[243137]: 2026-01-27 13:35:53.507471495 +0000 UTC m=+0.654464099 container remove 78618f5724fe4c3349412b1e7a179db36169b928ea8ae5e11d96cd6801c30491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_bose, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:35:53 np0005597378 systemd[1]: libpod-conmon-78618f5724fe4c3349412b1e7a179db36169b928ea8ae5e11d96cd6801c30491.scope: Deactivated successfully.
Jan 27 08:35:53 np0005597378 podman[243248]: 2026-01-27 13:35:53.934872566 +0000 UTC m=+0.046044414 container create 8aa92af9be867126fb792f66f23c6d7cfb42684ce2d19586b9e29d9cd432903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:35:53 np0005597378 systemd[1]: Started libpod-conmon-8aa92af9be867126fb792f66f23c6d7cfb42684ce2d19586b9e29d9cd432903d.scope.
Jan 27 08:35:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:35:54 np0005597378 podman[243248]: 2026-01-27 13:35:53.914898948 +0000 UTC m=+0.026070816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:35:54 np0005597378 podman[243248]: 2026-01-27 13:35:54.022054447 +0000 UTC m=+0.133226315 container init 8aa92af9be867126fb792f66f23c6d7cfb42684ce2d19586b9e29d9cd432903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 08:35:54 np0005597378 podman[243248]: 2026-01-27 13:35:54.033982676 +0000 UTC m=+0.145154524 container start 8aa92af9be867126fb792f66f23c6d7cfb42684ce2d19586b9e29d9cd432903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:35:54 np0005597378 blissful_bhabha[243265]: 167 167
Jan 27 08:35:54 np0005597378 podman[243248]: 2026-01-27 13:35:54.038437501 +0000 UTC m=+0.149609399 container attach 8aa92af9be867126fb792f66f23c6d7cfb42684ce2d19586b9e29d9cd432903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 08:35:54 np0005597378 systemd[1]: libpod-8aa92af9be867126fb792f66f23c6d7cfb42684ce2d19586b9e29d9cd432903d.scope: Deactivated successfully.
Jan 27 08:35:54 np0005597378 podman[243248]: 2026-01-27 13:35:54.039878928 +0000 UTC m=+0.151050816 container died 8aa92af9be867126fb792f66f23c6d7cfb42684ce2d19586b9e29d9cd432903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 08:35:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-865bd3136142429ef97696d34355b81d630c634086dfedcf28f20c82b42d5922-merged.mount: Deactivated successfully.
Jan 27 08:35:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:54 np0005597378 podman[243248]: 2026-01-27 13:35:54.089735141 +0000 UTC m=+0.200907029 container remove 8aa92af9be867126fb792f66f23c6d7cfb42684ce2d19586b9e29d9cd432903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 08:35:54 np0005597378 systemd[1]: libpod-conmon-8aa92af9be867126fb792f66f23c6d7cfb42684ce2d19586b9e29d9cd432903d.scope: Deactivated successfully.
Jan 27 08:35:54 np0005597378 podman[243291]: 2026-01-27 13:35:54.274490511 +0000 UTC m=+0.035707776 container create 565467f50b846e5448626fa2573b5955487c905cb9a25e88bf60013789e02a64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:35:54 np0005597378 systemd[1]: Started libpod-conmon-565467f50b846e5448626fa2573b5955487c905cb9a25e88bf60013789e02a64.scope.
Jan 27 08:35:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:35:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2011d91c37afe3d37e6f73fbe939fc7ed88e33b270fe6e178813380cb09770/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2011d91c37afe3d37e6f73fbe939fc7ed88e33b270fe6e178813380cb09770/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2011d91c37afe3d37e6f73fbe939fc7ed88e33b270fe6e178813380cb09770/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2011d91c37afe3d37e6f73fbe939fc7ed88e33b270fe6e178813380cb09770/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:54 np0005597378 podman[243291]: 2026-01-27 13:35:54.347654198 +0000 UTC m=+0.108871483 container init 565467f50b846e5448626fa2573b5955487c905cb9a25e88bf60013789e02a64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 27 08:35:54 np0005597378 podman[243291]: 2026-01-27 13:35:54.353236373 +0000 UTC m=+0.114453638 container start 565467f50b846e5448626fa2573b5955487c905cb9a25e88bf60013789e02a64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:35:54 np0005597378 podman[243291]: 2026-01-27 13:35:54.259808621 +0000 UTC m=+0.021025906 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:35:54 np0005597378 podman[243291]: 2026-01-27 13:35:54.355992114 +0000 UTC m=+0.117209409 container attach 565467f50b846e5448626fa2573b5955487c905cb9a25e88bf60013789e02a64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]: {
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:    "0": [
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:        {
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "devices": [
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "/dev/loop3"
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            ],
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_name": "ceph_lv0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_size": "21470642176",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "name": "ceph_lv0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "tags": {
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.cluster_name": "ceph",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.crush_device_class": "",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.encrypted": "0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.objectstore": "bluestore",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.osd_id": "0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.type": "block",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.vdo": "0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.with_tpm": "0"
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            },
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "type": "block",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "vg_name": "ceph_vg0"
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:        }
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:    ],
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:    "1": [
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:        {
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "devices": [
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "/dev/loop4"
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            ],
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_name": "ceph_lv1",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_size": "21470642176",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "name": "ceph_lv1",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "tags": {
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.cluster_name": "ceph",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.crush_device_class": "",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.encrypted": "0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.objectstore": "bluestore",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.osd_id": "1",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.type": "block",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.vdo": "0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.with_tpm": "0"
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            },
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "type": "block",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "vg_name": "ceph_vg1"
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:        }
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:    ],
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:    "2": [
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:        {
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "devices": [
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "/dev/loop5"
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            ],
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_name": "ceph_lv2",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_size": "21470642176",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "name": "ceph_lv2",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "tags": {
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.cluster_name": "ceph",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.crush_device_class": "",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.encrypted": "0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.objectstore": "bluestore",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.osd_id": "2",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.type": "block",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.vdo": "0",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:                "ceph.with_tpm": "0"
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            },
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "type": "block",
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:            "vg_name": "ceph_vg2"
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:        }
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]:    ]
Jan 27 08:35:54 np0005597378 hardcore_yonath[243307]: }
Jan 27 08:35:54 np0005597378 systemd[1]: libpod-565467f50b846e5448626fa2573b5955487c905cb9a25e88bf60013789e02a64.scope: Deactivated successfully.
Jan 27 08:35:54 np0005597378 podman[243316]: 2026-01-27 13:35:54.690086937 +0000 UTC m=+0.025976145 container died 565467f50b846e5448626fa2573b5955487c905cb9a25e88bf60013789e02a64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 27 08:35:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9e2011d91c37afe3d37e6f73fbe939fc7ed88e33b270fe6e178813380cb09770-merged.mount: Deactivated successfully.
Jan 27 08:35:54 np0005597378 podman[243316]: 2026-01-27 13:35:54.727089886 +0000 UTC m=+0.062979094 container remove 565467f50b846e5448626fa2573b5955487c905cb9a25e88bf60013789e02a64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 08:35:54 np0005597378 systemd[1]: libpod-conmon-565467f50b846e5448626fa2573b5955487c905cb9a25e88bf60013789e02a64.scope: Deactivated successfully.
Jan 27 08:35:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 27 08:35:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 27 08:35:54 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 27 08:35:55 np0005597378 podman[243392]: 2026-01-27 13:35:55.179168597 +0000 UTC m=+0.039114055 container create 5487ac6285a76864dab21cae0f00eda6105eedd7ef6c0bfc1b6ecde7c9d50247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 08:35:55 np0005597378 systemd[1]: Started libpod-conmon-5487ac6285a76864dab21cae0f00eda6105eedd7ef6c0bfc1b6ecde7c9d50247.scope.
Jan 27 08:35:55 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:35:55 np0005597378 podman[243392]: 2026-01-27 13:35:55.161939721 +0000 UTC m=+0.021885209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:35:55 np0005597378 podman[243392]: 2026-01-27 13:35:55.267781004 +0000 UTC m=+0.127726462 container init 5487ac6285a76864dab21cae0f00eda6105eedd7ef6c0bfc1b6ecde7c9d50247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:35:55 np0005597378 podman[243392]: 2026-01-27 13:35:55.274136979 +0000 UTC m=+0.134082437 container start 5487ac6285a76864dab21cae0f00eda6105eedd7ef6c0bfc1b6ecde7c9d50247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_gates, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:35:55 np0005597378 podman[243392]: 2026-01-27 13:35:55.277577278 +0000 UTC m=+0.137522736 container attach 5487ac6285a76864dab21cae0f00eda6105eedd7ef6c0bfc1b6ecde7c9d50247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_gates, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:35:55 np0005597378 gracious_gates[243408]: 167 167
Jan 27 08:35:55 np0005597378 systemd[1]: libpod-5487ac6285a76864dab21cae0f00eda6105eedd7ef6c0bfc1b6ecde7c9d50247.scope: Deactivated successfully.
Jan 27 08:35:55 np0005597378 podman[243392]: 2026-01-27 13:35:55.27879574 +0000 UTC m=+0.138741208 container died 5487ac6285a76864dab21cae0f00eda6105eedd7ef6c0bfc1b6ecde7c9d50247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 08:35:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c17612bd4d8a27be4eee1173ecf0f1c13c23616df163d90759ea2bbe4bcdfdd2-merged.mount: Deactivated successfully.
Jan 27 08:35:55 np0005597378 podman[243392]: 2026-01-27 13:35:55.317085423 +0000 UTC m=+0.177030881 container remove 5487ac6285a76864dab21cae0f00eda6105eedd7ef6c0bfc1b6ecde7c9d50247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_gates, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 08:35:55 np0005597378 systemd[1]: libpod-conmon-5487ac6285a76864dab21cae0f00eda6105eedd7ef6c0bfc1b6ecde7c9d50247.scope: Deactivated successfully.
Jan 27 08:35:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:55 np0005597378 podman[243432]: 2026-01-27 13:35:55.490488989 +0000 UTC m=+0.040339497 container create 5d996b821f82bae8fe81693182b144a6e91de90778bc81352fcd9524400b67ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:35:55 np0005597378 systemd[1]: Started libpod-conmon-5d996b821f82bae8fe81693182b144a6e91de90778bc81352fcd9524400b67ea.scope.
Jan 27 08:35:55 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:35:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d6f78ffaba66efde3275ab88facc6a9540ab9e5a7f9367a5e3d585697bacd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d6f78ffaba66efde3275ab88facc6a9540ab9e5a7f9367a5e3d585697bacd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d6f78ffaba66efde3275ab88facc6a9540ab9e5a7f9367a5e3d585697bacd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d6f78ffaba66efde3275ab88facc6a9540ab9e5a7f9367a5e3d585697bacd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:35:55 np0005597378 podman[243432]: 2026-01-27 13:35:55.561153021 +0000 UTC m=+0.111003549 container init 5d996b821f82bae8fe81693182b144a6e91de90778bc81352fcd9524400b67ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:35:55 np0005597378 podman[243432]: 2026-01-27 13:35:55.566552651 +0000 UTC m=+0.116403149 container start 5d996b821f82bae8fe81693182b144a6e91de90778bc81352fcd9524400b67ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:35:55 np0005597378 podman[243432]: 2026-01-27 13:35:55.473040026 +0000 UTC m=+0.022890554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:35:55 np0005597378 podman[243432]: 2026-01-27 13:35:55.570270117 +0000 UTC m=+0.120120625 container attach 5d996b821f82bae8fe81693182b144a6e91de90778bc81352fcd9524400b67ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bell, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 08:35:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 27 08:35:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 27 08:35:55 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 27 08:35:56 np0005597378 lvm[243536]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:35:56 np0005597378 lvm[243537]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:35:56 np0005597378 lvm[243536]: VG ceph_vg0 finished
Jan 27 08:35:56 np0005597378 lvm[243537]: VG ceph_vg1 finished
Jan 27 08:35:56 np0005597378 lvm[243542]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:35:56 np0005597378 lvm[243542]: VG ceph_vg2 finished
Jan 27 08:35:56 np0005597378 podman[243527]: 2026-01-27 13:35:56.451890015 +0000 UTC m=+0.123497223 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:35:56 np0005597378 mystifying_bell[243449]: {}
Jan 27 08:35:56 np0005597378 systemd[1]: libpod-5d996b821f82bae8fe81693182b144a6e91de90778bc81352fcd9524400b67ea.scope: Deactivated successfully.
Jan 27 08:35:56 np0005597378 systemd[1]: libpod-5d996b821f82bae8fe81693182b144a6e91de90778bc81352fcd9524400b67ea.scope: Consumed 1.471s CPU time.
Jan 27 08:35:56 np0005597378 podman[243563]: 2026-01-27 13:35:56.602255404 +0000 UTC m=+0.043666514 container died 5d996b821f82bae8fe81693182b144a6e91de90778bc81352fcd9524400b67ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:35:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c2d6f78ffaba66efde3275ab88facc6a9540ab9e5a7f9367a5e3d585697bacd1-merged.mount: Deactivated successfully.
Jan 27 08:35:56 np0005597378 podman[243563]: 2026-01-27 13:35:56.809862166 +0000 UTC m=+0.251273206 container remove 5d996b821f82bae8fe81693182b144a6e91de90778bc81352fcd9524400b67ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bell, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:35:56 np0005597378 systemd[1]: libpod-conmon-5d996b821f82bae8fe81693182b144a6e91de90778bc81352fcd9524400b67ea.scope: Deactivated successfully.
Jan 27 08:35:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:35:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:35:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:35:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:35:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:35:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:35:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:35:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 27 08:35:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 27 08:35:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 27 08:35:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:35:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 3.4 MiB/s wr, 3 op/s
Jan 27 08:35:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:35:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2681486927' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:35:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:35:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2681486927' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:35:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 27 08:35:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 27 08:35:59 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 27 08:36:00 np0005597378 ceph-osd[88005]: bluestore.MempoolThread fragmentation_score=0.000141 took=0.000054s
Jan 27 08:36:00 np0005597378 ceph-osd[86941]: bluestore.MempoolThread fragmentation_score=0.000147 took=0.000049s
Jan 27 08:36:00 np0005597378 ceph-osd[85897]: bluestore.MempoolThread fragmentation_score=0.000125 took=0.000027s
Jan 27 08:36:00 np0005597378 podman[243604]: 2026-01-27 13:36:00.758163504 +0000 UTC m=+0.086675178 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 08:36:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 33 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 5.5 MiB/s wr, 39 op/s
Jan 27 08:36:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 33 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 4.3 MiB/s wr, 48 op/s
Jan 27 08:36:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 27 08:36:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 27 08:36:04 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 27 08:36:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.4 MiB/s wr, 50 op/s
Jan 27 08:36:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.6 MiB/s wr, 45 op/s
Jan 27 08:36:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 27 08:36:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 27 08:36:08 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 27 08:36:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.0 MiB/s wr, 18 op/s
Jan 27 08:36:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.0 MiB/s wr, 30 op/s
Jan 27 08:36:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 27 08:36:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 27 08:36:11 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 27 08:36:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 29 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Jan 27 08:36:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:36:17
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'images', '.rgw.root', '.mgr', 'vms', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:36:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:36:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.8 KiB/s wr, 50 op/s
Jan 27 08:36:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 27 08:36:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 27 08:36:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 27 08:36:19 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 27 08:36:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.4 KiB/s wr, 27 op/s
Jan 27 08:36:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 818 B/s wr, 7 op/s
Jan 27 08:36:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 27 08:36:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 27 08:36:23 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 27 08:36:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:36:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Jan 27 08:36:26 np0005597378 podman[243625]: 2026-01-27 13:36:26.746849755 +0000 UTC m=+0.083200468 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4966827668800264e-07 of space, bias 1.0, pg target 0.0001349004830064008 quantized to 32 (current 32)
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3868860134336641e-06 of space, bias 4.0, pg target 0.001664263216120397 quantized to 16 (current 16)
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:36:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Jan 27 08:36:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 15 op/s
Jan 27 08:36:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:36:31 np0005597378 nova_compute[238941]: 2026-01-27 13:36:31.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:36:31 np0005597378 nova_compute[238941]: 2026-01-27 13:36:31.406 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:36:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 27 08:36:31 np0005597378 podman[243652]: 2026-01-27 13:36:31.721123043 +0000 UTC m=+0.067999755 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 27 08:36:32 np0005597378 nova_compute[238941]: 2026-01-27 13:36:32.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:36:32 np0005597378 nova_compute[238941]: 2026-01-27 13:36:32.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:36:32 np0005597378 nova_compute[238941]: 2026-01-27 13:36:32.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:36:33 np0005597378 nova_compute[238941]: 2026-01-27 13:36:33.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:36:33 np0005597378 nova_compute[238941]: 2026-01-27 13:36:33.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:36:33 np0005597378 nova_compute[238941]: 2026-01-27 13:36:33.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:36:33 np0005597378 nova_compute[238941]: 2026-01-27 13:36:33.399 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 08:36:33 np0005597378 nova_compute[238941]: 2026-01-27 13:36:33.399 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:36:33 np0005597378 nova_compute[238941]: 2026-01-27 13:36:33.423 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:36:33 np0005597378 nova_compute[238941]: 2026-01-27 13:36:33.424 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:36:33 np0005597378 nova_compute[238941]: 2026-01-27 13:36:33.424 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:36:33 np0005597378 nova_compute[238941]: 2026-01-27 13:36:33.424 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:36:33 np0005597378 nova_compute[238941]: 2026-01-27 13:36:33.424 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:36:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 27 08:36:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:36:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3175665814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:36:33 np0005597378 nova_compute[238941]: 2026-01-27 13:36:33.960 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:36:34 np0005597378 nova_compute[238941]: 2026-01-27 13:36:34.122 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:36:34 np0005597378 nova_compute[238941]: 2026-01-27 13:36:34.123 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5152MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:36:34 np0005597378 nova_compute[238941]: 2026-01-27 13:36:34.123 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:36:34 np0005597378 nova_compute[238941]: 2026-01-27 13:36:34.124 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:36:34 np0005597378 nova_compute[238941]: 2026-01-27 13:36:34.286 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:36:34 np0005597378 nova_compute[238941]: 2026-01-27 13:36:34.287 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:36:34 np0005597378 nova_compute[238941]: 2026-01-27 13:36:34.309 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:36:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:36:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:36:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1072497133' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:36:34 np0005597378 nova_compute[238941]: 2026-01-27 13:36:34.903 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:36:34 np0005597378 nova_compute[238941]: 2026-01-27 13:36:34.908 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:36:34 np0005597378 nova_compute[238941]: 2026-01-27 13:36:34.936 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:36:34 np0005597378 nova_compute[238941]: 2026-01-27 13:36:34.938 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:36:34 np0005597378 nova_compute[238941]: 2026-01-27 13:36:34.938 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:36:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 520 B/s wr, 4 op/s
Jan 27 08:36:35 np0005597378 nova_compute[238941]: 2026-01-27 13:36:35.921 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:36:35 np0005597378 nova_compute[238941]: 2026-01-27 13:36:35.921 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:36:35 np0005597378 nova_compute[238941]: 2026-01-27 13:36:35.921 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 08:36:36 np0005597378 nova_compute[238941]: 2026-01-27 13:36:36.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:36:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 511 B/s wr, 4 op/s
Jan 27 08:36:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 511 B/s wr, 4 op/s
Jan 27 08:36:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:36:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:36:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:36:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:36:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:36:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:36:46.285 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:36:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:36:46.285 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:36:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:36:46.285 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:36:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:36:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:36:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:36:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:36:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:36:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:36:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:36:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:36:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:36:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:36:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:36:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:36:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:36:57 np0005597378 podman[243741]: 2026-01-27 13:36:57.108117674 +0000 UTC m=+0.078759654 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 27 08:36:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:36:57 np0005597378 podman[243839]: 2026-01-27 13:36:57.878354944 +0000 UTC m=+0.432324280 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 08:36:57 np0005597378 podman[243839]: 2026-01-27 13:36:57.989756532 +0000 UTC m=+0.543725838 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:36:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:36:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:36:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:36:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:36:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2525831988' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2525831988' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:36:59 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:36:59 np0005597378 podman[244173]: 2026-01-27 13:36:59.653118958 +0000 UTC m=+0.046535597 container create 765b58634ee7d379b7969233394aae3381e6d2749cc43bb0c169faa1b8e8a237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cohen, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 08:36:59 np0005597378 systemd[1]: Started libpod-conmon-765b58634ee7d379b7969233394aae3381e6d2749cc43bb0c169faa1b8e8a237.scope.
Jan 27 08:36:59 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:36:59 np0005597378 podman[244173]: 2026-01-27 13:36:59.635267906 +0000 UTC m=+0.028684575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:36:59 np0005597378 podman[244173]: 2026-01-27 13:36:59.734744855 +0000 UTC m=+0.128161524 container init 765b58634ee7d379b7969233394aae3381e6d2749cc43bb0c169faa1b8e8a237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cohen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 08:36:59 np0005597378 podman[244173]: 2026-01-27 13:36:59.742317951 +0000 UTC m=+0.135734600 container start 765b58634ee7d379b7969233394aae3381e6d2749cc43bb0c169faa1b8e8a237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 08:36:59 np0005597378 determined_cohen[244189]: 167 167
Jan 27 08:36:59 np0005597378 systemd[1]: libpod-765b58634ee7d379b7969233394aae3381e6d2749cc43bb0c169faa1b8e8a237.scope: Deactivated successfully.
Jan 27 08:36:59 np0005597378 podman[244173]: 2026-01-27 13:36:59.746988852 +0000 UTC m=+0.140405501 container attach 765b58634ee7d379b7969233394aae3381e6d2749cc43bb0c169faa1b8e8a237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:36:59 np0005597378 podman[244173]: 2026-01-27 13:36:59.747446584 +0000 UTC m=+0.140863233 container died 765b58634ee7d379b7969233394aae3381e6d2749cc43bb0c169faa1b8e8a237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cohen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:36:59 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4c09f271c181060182e4c5adbaad1d43258df25ec08d700b5ffbf2ce66a3c126-merged.mount: Deactivated successfully.
Jan 27 08:36:59 np0005597378 podman[244173]: 2026-01-27 13:36:59.799456692 +0000 UTC m=+0.192873341 container remove 765b58634ee7d379b7969233394aae3381e6d2749cc43bb0c169faa1b8e8a237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cohen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Jan 27 08:36:59 np0005597378 systemd[1]: libpod-conmon-765b58634ee7d379b7969233394aae3381e6d2749cc43bb0c169faa1b8e8a237.scope: Deactivated successfully.
Jan 27 08:36:59 np0005597378 podman[244214]: 2026-01-27 13:36:59.959391069 +0000 UTC m=+0.039243759 container create cc64c66efbf9a23c101f473d68aaf85ab3df8ac55a5433c13e1e01966dd6edd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:36:59 np0005597378 systemd[1]: Started libpod-conmon-cc64c66efbf9a23c101f473d68aaf85ab3df8ac55a5433c13e1e01966dd6edd7.scope.
Jan 27 08:37:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:37:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8096fa76d123cb039f84dc0e64f075622c37671dffb6ccab83f5c0938e7a0e42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8096fa76d123cb039f84dc0e64f075622c37671dffb6ccab83f5c0938e7a0e42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8096fa76d123cb039f84dc0e64f075622c37671dffb6ccab83f5c0938e7a0e42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8096fa76d123cb039f84dc0e64f075622c37671dffb6ccab83f5c0938e7a0e42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8096fa76d123cb039f84dc0e64f075622c37671dffb6ccab83f5c0938e7a0e42/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:00 np0005597378 podman[244214]: 2026-01-27 13:36:59.943094906 +0000 UTC m=+0.022947616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:37:00 np0005597378 podman[244214]: 2026-01-27 13:37:00.040133473 +0000 UTC m=+0.119986213 container init cc64c66efbf9a23c101f473d68aaf85ab3df8ac55a5433c13e1e01966dd6edd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chebyshev, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 08:37:00 np0005597378 podman[244214]: 2026-01-27 13:37:00.046813855 +0000 UTC m=+0.126666545 container start cc64c66efbf9a23c101f473d68aaf85ab3df8ac55a5433c13e1e01966dd6edd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:37:00 np0005597378 podman[244214]: 2026-01-27 13:37:00.052636817 +0000 UTC m=+0.132489537 container attach cc64c66efbf9a23c101f473d68aaf85ab3df8ac55a5433c13e1e01966dd6edd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:37:00 np0005597378 heuristic_chebyshev[244231]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:37:00 np0005597378 heuristic_chebyshev[244231]: --> All data devices are unavailable
Jan 27 08:37:00 np0005597378 systemd[1]: libpod-cc64c66efbf9a23c101f473d68aaf85ab3df8ac55a5433c13e1e01966dd6edd7.scope: Deactivated successfully.
Jan 27 08:37:00 np0005597378 podman[244251]: 2026-01-27 13:37:00.534934801 +0000 UTC m=+0.022305880 container died cc64c66efbf9a23c101f473d68aaf85ab3df8ac55a5433c13e1e01966dd6edd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chebyshev, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:37:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8096fa76d123cb039f84dc0e64f075622c37671dffb6ccab83f5c0938e7a0e42-merged.mount: Deactivated successfully.
Jan 27 08:37:00 np0005597378 podman[244251]: 2026-01-27 13:37:00.57424001 +0000 UTC m=+0.061611069 container remove cc64c66efbf9a23c101f473d68aaf85ab3df8ac55a5433c13e1e01966dd6edd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:37:00 np0005597378 systemd[1]: libpod-conmon-cc64c66efbf9a23c101f473d68aaf85ab3df8ac55a5433c13e1e01966dd6edd7.scope: Deactivated successfully.
Jan 27 08:37:01 np0005597378 podman[244328]: 2026-01-27 13:37:01.004146156 +0000 UTC m=+0.062546202 container create b7f638cefe257313818dba66922558550491bf62bb898e40de41cfe0d0570606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:37:01 np0005597378 podman[244328]: 2026-01-27 13:37:00.963745768 +0000 UTC m=+0.022145834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:37:01 np0005597378 systemd[1]: Started libpod-conmon-b7f638cefe257313818dba66922558550491bf62bb898e40de41cfe0d0570606.scope.
Jan 27 08:37:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:37:01 np0005597378 podman[244328]: 2026-01-27 13:37:01.126811887 +0000 UTC m=+0.185211933 container init b7f638cefe257313818dba66922558550491bf62bb898e40de41cfe0d0570606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 08:37:01 np0005597378 podman[244328]: 2026-01-27 13:37:01.13389988 +0000 UTC m=+0.192299926 container start b7f638cefe257313818dba66922558550491bf62bb898e40de41cfe0d0570606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:37:01 np0005597378 podman[244328]: 2026-01-27 13:37:01.137049772 +0000 UTC m=+0.195449818 container attach b7f638cefe257313818dba66922558550491bf62bb898e40de41cfe0d0570606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_merkle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:37:01 np0005597378 goofy_merkle[244344]: 167 167
Jan 27 08:37:01 np0005597378 systemd[1]: libpod-b7f638cefe257313818dba66922558550491bf62bb898e40de41cfe0d0570606.scope: Deactivated successfully.
Jan 27 08:37:01 np0005597378 podman[244328]: 2026-01-27 13:37:01.139788343 +0000 UTC m=+0.198188409 container died b7f638cefe257313818dba66922558550491bf62bb898e40de41cfe0d0570606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:37:01 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a5988f6662c44fa4a33ce68aebfc1084ba21eedcb861499b6cae7fbf3a35a374-merged.mount: Deactivated successfully.
Jan 27 08:37:01 np0005597378 podman[244328]: 2026-01-27 13:37:01.179346258 +0000 UTC m=+0.237746304 container remove b7f638cefe257313818dba66922558550491bf62bb898e40de41cfe0d0570606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 08:37:01 np0005597378 systemd[1]: libpod-conmon-b7f638cefe257313818dba66922558550491bf62bb898e40de41cfe0d0570606.scope: Deactivated successfully.
Jan 27 08:37:01 np0005597378 podman[244368]: 2026-01-27 13:37:01.360706731 +0000 UTC m=+0.051038185 container create 6f51765841f7d59f815e91248f5872e14b030ba37b6cee23c8df251b0be7eebc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:37:01 np0005597378 systemd[1]: Started libpod-conmon-6f51765841f7d59f815e91248f5872e14b030ba37b6cee23c8df251b0be7eebc.scope.
Jan 27 08:37:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:37:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e71fbee322706f3e3f9ea3ef45d3a014eae0ce96c6d01edf4b5aa667192639/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e71fbee322706f3e3f9ea3ef45d3a014eae0ce96c6d01edf4b5aa667192639/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e71fbee322706f3e3f9ea3ef45d3a014eae0ce96c6d01edf4b5aa667192639/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e71fbee322706f3e3f9ea3ef45d3a014eae0ce96c6d01edf4b5aa667192639/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:01 np0005597378 podman[244368]: 2026-01-27 13:37:01.337957251 +0000 UTC m=+0.028288535 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:37:01 np0005597378 podman[244368]: 2026-01-27 13:37:01.441244578 +0000 UTC m=+0.131575832 container init 6f51765841f7d59f815e91248f5872e14b030ba37b6cee23c8df251b0be7eebc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:37:01 np0005597378 podman[244368]: 2026-01-27 13:37:01.448106796 +0000 UTC m=+0.138438051 container start 6f51765841f7d59f815e91248f5872e14b030ba37b6cee23c8df251b0be7eebc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 08:37:01 np0005597378 podman[244368]: 2026-01-27 13:37:01.451397982 +0000 UTC m=+0.141729246 container attach 6f51765841f7d59f815e91248f5872e14b030ba37b6cee23c8df251b0be7eebc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 27 08:37:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:01 np0005597378 youthful_wing[244388]: {
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:    "0": [
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:        {
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "devices": [
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "/dev/loop3"
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            ],
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_name": "ceph_lv0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_size": "21470642176",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "name": "ceph_lv0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "tags": {
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.cluster_name": "ceph",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.crush_device_class": "",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.encrypted": "0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.objectstore": "bluestore",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.osd_id": "0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.type": "block",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.vdo": "0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.with_tpm": "0"
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            },
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "type": "block",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "vg_name": "ceph_vg0"
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:        }
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:    ],
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:    "1": [
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:        {
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "devices": [
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "/dev/loop4"
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            ],
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_name": "ceph_lv1",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_size": "21470642176",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "name": "ceph_lv1",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "tags": {
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.cluster_name": "ceph",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.crush_device_class": "",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.encrypted": "0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.objectstore": "bluestore",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.osd_id": "1",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.type": "block",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.vdo": "0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.with_tpm": "0"
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            },
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "type": "block",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "vg_name": "ceph_vg1"
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:        }
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:    ],
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:    "2": [
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:        {
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "devices": [
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "/dev/loop5"
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            ],
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_name": "ceph_lv2",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_size": "21470642176",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "name": "ceph_lv2",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "tags": {
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.cluster_name": "ceph",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.crush_device_class": "",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.encrypted": "0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.objectstore": "bluestore",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.osd_id": "2",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.type": "block",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.vdo": "0",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:                "ceph.with_tpm": "0"
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            },
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "type": "block",
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:            "vg_name": "ceph_vg2"
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:        }
Jan 27 08:37:01 np0005597378 youthful_wing[244388]:    ]
Jan 27 08:37:01 np0005597378 youthful_wing[244388]: }
Jan 27 08:37:01 np0005597378 systemd[1]: libpod-6f51765841f7d59f815e91248f5872e14b030ba37b6cee23c8df251b0be7eebc.scope: Deactivated successfully.
Jan 27 08:37:01 np0005597378 podman[244368]: 2026-01-27 13:37:01.754018698 +0000 UTC m=+0.444349952 container died 6f51765841f7d59f815e91248f5872e14b030ba37b6cee23c8df251b0be7eebc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:37:01 np0005597378 systemd[1]: var-lib-containers-storage-overlay-91e71fbee322706f3e3f9ea3ef45d3a014eae0ce96c6d01edf4b5aa667192639-merged.mount: Deactivated successfully.
Jan 27 08:37:01 np0005597378 podman[244368]: 2026-01-27 13:37:01.804001004 +0000 UTC m=+0.494332258 container remove 6f51765841f7d59f815e91248f5872e14b030ba37b6cee23c8df251b0be7eebc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:37:01 np0005597378 systemd[1]: libpod-conmon-6f51765841f7d59f815e91248f5872e14b030ba37b6cee23c8df251b0be7eebc.scope: Deactivated successfully.
Jan 27 08:37:01 np0005597378 podman[244398]: 2026-01-27 13:37:01.861455404 +0000 UTC m=+0.061489175 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:37:02 np0005597378 podman[244489]: 2026-01-27 13:37:02.241934228 +0000 UTC m=+0.046684091 container create 017b8ea4cc2970e11b066d58ad2bf9979ebf2207abc1ce4e805177cc88a98a0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:37:02 np0005597378 systemd[1]: Started libpod-conmon-017b8ea4cc2970e11b066d58ad2bf9979ebf2207abc1ce4e805177cc88a98a0d.scope.
Jan 27 08:37:02 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:37:02 np0005597378 podman[244489]: 2026-01-27 13:37:02.215895224 +0000 UTC m=+0.020645107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:37:02 np0005597378 podman[244489]: 2026-01-27 13:37:02.328699258 +0000 UTC m=+0.133449141 container init 017b8ea4cc2970e11b066d58ad2bf9979ebf2207abc1ce4e805177cc88a98a0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:37:02 np0005597378 podman[244489]: 2026-01-27 13:37:02.335206257 +0000 UTC m=+0.139956120 container start 017b8ea4cc2970e11b066d58ad2bf9979ebf2207abc1ce4e805177cc88a98a0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:37:02 np0005597378 podman[244489]: 2026-01-27 13:37:02.338268476 +0000 UTC m=+0.143018339 container attach 017b8ea4cc2970e11b066d58ad2bf9979ebf2207abc1ce4e805177cc88a98a0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:37:02 np0005597378 distracted_hoover[244505]: 167 167
Jan 27 08:37:02 np0005597378 systemd[1]: libpod-017b8ea4cc2970e11b066d58ad2bf9979ebf2207abc1ce4e805177cc88a98a0d.scope: Deactivated successfully.
Jan 27 08:37:02 np0005597378 podman[244489]: 2026-01-27 13:37:02.339683163 +0000 UTC m=+0.144433026 container died 017b8ea4cc2970e11b066d58ad2bf9979ebf2207abc1ce4e805177cc88a98a0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 08:37:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d76325455c335ce15bccf2edb0eac1f03429fdc28c3ea547e18ab3aef8606d51-merged.mount: Deactivated successfully.
Jan 27 08:37:02 np0005597378 podman[244489]: 2026-01-27 13:37:02.370499452 +0000 UTC m=+0.175249315 container remove 017b8ea4cc2970e11b066d58ad2bf9979ebf2207abc1ce4e805177cc88a98a0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hoover, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 08:37:02 np0005597378 systemd[1]: libpod-conmon-017b8ea4cc2970e11b066d58ad2bf9979ebf2207abc1ce4e805177cc88a98a0d.scope: Deactivated successfully.
Jan 27 08:37:02 np0005597378 podman[244528]: 2026-01-27 13:37:02.572005817 +0000 UTC m=+0.091039762 container create 1ff43261dd88500442044dc6f075a1b0a917e266347f5a09055ead955979a8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:37:02 np0005597378 podman[244528]: 2026-01-27 13:37:02.502225117 +0000 UTC m=+0.021259082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:37:03 np0005597378 systemd[1]: Started libpod-conmon-1ff43261dd88500442044dc6f075a1b0a917e266347f5a09055ead955979a8c7.scope.
Jan 27 08:37:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:37:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc4bfd6d11a5cd5b0136c0312d54316083dd4ed696f0ab22cfdcd81d5347b698/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc4bfd6d11a5cd5b0136c0312d54316083dd4ed696f0ab22cfdcd81d5347b698/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc4bfd6d11a5cd5b0136c0312d54316083dd4ed696f0ab22cfdcd81d5347b698/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc4bfd6d11a5cd5b0136c0312d54316083dd4ed696f0ab22cfdcd81d5347b698/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:37:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:03 np0005597378 podman[244528]: 2026-01-27 13:37:03.552772834 +0000 UTC m=+1.071806799 container init 1ff43261dd88500442044dc6f075a1b0a917e266347f5a09055ead955979a8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:37:03 np0005597378 podman[244528]: 2026-01-27 13:37:03.559320865 +0000 UTC m=+1.078354850 container start 1ff43261dd88500442044dc6f075a1b0a917e266347f5a09055ead955979a8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tesla, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 27 08:37:03 np0005597378 podman[244528]: 2026-01-27 13:37:03.711837289 +0000 UTC m=+1.230871294 container attach 1ff43261dd88500442044dc6f075a1b0a917e266347f5a09055ead955979a8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:37:04 np0005597378 lvm[244622]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:37:04 np0005597378 lvm[244623]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:37:04 np0005597378 lvm[244623]: VG ceph_vg1 finished
Jan 27 08:37:04 np0005597378 lvm[244622]: VG ceph_vg0 finished
Jan 27 08:37:04 np0005597378 lvm[244625]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:37:04 np0005597378 lvm[244625]: VG ceph_vg2 finished
Jan 27 08:37:04 np0005597378 wizardly_tesla[244544]: {}
Jan 27 08:37:04 np0005597378 systemd[1]: libpod-1ff43261dd88500442044dc6f075a1b0a917e266347f5a09055ead955979a8c7.scope: Deactivated successfully.
Jan 27 08:37:04 np0005597378 systemd[1]: libpod-1ff43261dd88500442044dc6f075a1b0a917e266347f5a09055ead955979a8c7.scope: Consumed 1.237s CPU time.
Jan 27 08:37:04 np0005597378 podman[244528]: 2026-01-27 13:37:04.298228973 +0000 UTC m=+1.817262938 container died 1ff43261dd88500442044dc6f075a1b0a917e266347f5a09055ead955979a8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:37:04 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fc4bfd6d11a5cd5b0136c0312d54316083dd4ed696f0ab22cfdcd81d5347b698-merged.mount: Deactivated successfully.
Jan 27 08:37:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:37:04 np0005597378 podman[244528]: 2026-01-27 13:37:04.627948181 +0000 UTC m=+2.146982126 container remove 1ff43261dd88500442044dc6f075a1b0a917e266347f5a09055ead955979a8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tesla, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:37:04 np0005597378 systemd[1]: libpod-conmon-1ff43261dd88500442044dc6f075a1b0a917e266347f5a09055ead955979a8c7.scope: Deactivated successfully.
Jan 27 08:37:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:37:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:37:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:37:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:37:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:37:11 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:37:11 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:37:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:37:17
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', 'default.rgw.control']
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:37:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:37:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:37:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 5.897199796296706e-07 of space, bias 1.0, pg target 0.00017691599388890118 quantized to 32 (current 32)
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3876777922663906e-06 of space, bias 4.0, pg target 0.0016652133507196686 quantized to 16 (current 16)
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:37:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:27 np0005597378 podman[244670]: 2026-01-27 13:37:27.744237722 +0000 UTC m=+0.086453182 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:37:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:37:31 np0005597378 nova_compute[238941]: 2026-01-27 13:37:31.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:37:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:32 np0005597378 nova_compute[238941]: 2026-01-27 13:37:32.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:37:32 np0005597378 podman[244698]: 2026-01-27 13:37:32.705226525 +0000 UTC m=+0.050551562 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 08:37:33 np0005597378 nova_compute[238941]: 2026-01-27 13:37:33.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:37:33 np0005597378 nova_compute[238941]: 2026-01-27 13:37:33.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:37:33 np0005597378 nova_compute[238941]: 2026-01-27 13:37:33.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:37:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:33 np0005597378 nova_compute[238941]: 2026-01-27 13:37:33.740 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 08:37:33 np0005597378 nova_compute[238941]: 2026-01-27 13:37:33.741 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:37:34 np0005597378 nova_compute[238941]: 2026-01-27 13:37:34.734 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:37:35 np0005597378 nova_compute[238941]: 2026-01-27 13:37:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:37:35 np0005597378 nova_compute[238941]: 2026-01-27 13:37:35.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 08:37:35 np0005597378 nova_compute[238941]: 2026-01-27 13:37:35.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:37:35 np0005597378 nova_compute[238941]: 2026-01-27 13:37:35.442 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:37:35 np0005597378 nova_compute[238941]: 2026-01-27 13:37:35.442 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:37:35 np0005597378 nova_compute[238941]: 2026-01-27 13:37:35.442 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:37:35 np0005597378 nova_compute[238941]: 2026-01-27 13:37:35.442 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:37:35 np0005597378 nova_compute[238941]: 2026-01-27 13:37:35.442 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:37:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:37:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035981837' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:37:35 np0005597378 nova_compute[238941]: 2026-01-27 13:37:35.997 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:37:36 np0005597378 nova_compute[238941]: 2026-01-27 13:37:36.169 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:37:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:37:36 np0005597378 nova_compute[238941]: 2026-01-27 13:37:36.172 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5134MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:37:36 np0005597378 nova_compute[238941]: 2026-01-27 13:37:36.172 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:37:36 np0005597378 nova_compute[238941]: 2026-01-27 13:37:36.172 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:37:36 np0005597378 nova_compute[238941]: 2026-01-27 13:37:36.267 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:37:36 np0005597378 nova_compute[238941]: 2026-01-27 13:37:36.268 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:37:36 np0005597378 nova_compute[238941]: 2026-01-27 13:37:36.288 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:37:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:37:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2022063701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:37:36 np0005597378 nova_compute[238941]: 2026-01-27 13:37:36.839 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:37:36 np0005597378 nova_compute[238941]: 2026-01-27 13:37:36.844 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:37:36 np0005597378 nova_compute[238941]: 2026-01-27 13:37:36.859 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:37:36 np0005597378 nova_compute[238941]: 2026-01-27 13:37:36.861 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:37:36 np0005597378 nova_compute[238941]: 2026-01-27 13:37:36.861 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:37:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:37 np0005597378 nova_compute[238941]: 2026-01-27 13:37:37.861 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:37:37 np0005597378 nova_compute[238941]: 2026-01-27 13:37:37.862 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:37:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:37:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:37:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:37:46.286 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:37:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:37:46.286 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:37:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:37:46.287 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:37:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:37:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:37:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:37:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:37:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:37:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:37:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:37:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:37:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:37:58 np0005597378 podman[244764]: 2026-01-27 13:37:58.748284278 +0000 UTC m=+0.088668162 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 08:37:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:38:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:38:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:38:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:38:04 np0005597378 podman[244790]: 2026-01-27 13:38:04.010098822 +0000 UTC m=+0.052152477 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 08:38:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:38:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:38:05 np0005597378 podman[244952]: 2026-01-27 13:38:05.978014367 +0000 UTC m=+0.047259384 container create aa7460b30c227696c6ab059b11ea5c9c95adfde9b9e411e64dcaa79a84e9b8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 08:38:06 np0005597378 systemd[1]: Started libpod-conmon-aa7460b30c227696c6ab059b11ea5c9c95adfde9b9e411e64dcaa79a84e9b8ab.scope.
Jan 27 08:38:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:38:06 np0005597378 podman[244952]: 2026-01-27 13:38:05.951135072 +0000 UTC m=+0.020380109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:38:06 np0005597378 podman[244952]: 2026-01-27 13:38:06.056008229 +0000 UTC m=+0.125253266 container init aa7460b30c227696c6ab059b11ea5c9c95adfde9b9e411e64dcaa79a84e9b8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 08:38:06 np0005597378 podman[244952]: 2026-01-27 13:38:06.062216216 +0000 UTC m=+0.131461233 container start aa7460b30c227696c6ab059b11ea5c9c95adfde9b9e411e64dcaa79a84e9b8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 08:38:06 np0005597378 podman[244952]: 2026-01-27 13:38:06.064917279 +0000 UTC m=+0.134162306 container attach aa7460b30c227696c6ab059b11ea5c9c95adfde9b9e411e64dcaa79a84e9b8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_feistel, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:38:06 np0005597378 hardcore_feistel[244968]: 167 167
Jan 27 08:38:06 np0005597378 systemd[1]: libpod-aa7460b30c227696c6ab059b11ea5c9c95adfde9b9e411e64dcaa79a84e9b8ab.scope: Deactivated successfully.
Jan 27 08:38:06 np0005597378 podman[244952]: 2026-01-27 13:38:06.068874996 +0000 UTC m=+0.138120013 container died aa7460b30c227696c6ab059b11ea5c9c95adfde9b9e411e64dcaa79a84e9b8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_feistel, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 08:38:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0ac020bf76040a8165030c049e48cf512302a90e099b4369b7b5f3d93b687f84-merged.mount: Deactivated successfully.
Jan 27 08:38:06 np0005597378 podman[244952]: 2026-01-27 13:38:06.117193079 +0000 UTC m=+0.186438096 container remove aa7460b30c227696c6ab059b11ea5c9c95adfde9b9e411e64dcaa79a84e9b8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:38:06 np0005597378 systemd[1]: libpod-conmon-aa7460b30c227696c6ab059b11ea5c9c95adfde9b9e411e64dcaa79a84e9b8ab.scope: Deactivated successfully.
Jan 27 08:38:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:38:06 np0005597378 podman[244993]: 2026-01-27 13:38:06.291411145 +0000 UTC m=+0.044984544 container create f40dc935e900f25c48a225e10562cc592f9e6de0b8f2c208039b590d6b9e71b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:38:06 np0005597378 systemd[1]: Started libpod-conmon-f40dc935e900f25c48a225e10562cc592f9e6de0b8f2c208039b590d6b9e71b8.scope.
Jan 27 08:38:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:38:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5afef4afaadd280c82c1357d3c99faaee1c9a6711b0c60e0ec6e8c97e36d3c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5afef4afaadd280c82c1357d3c99faaee1c9a6711b0c60e0ec6e8c97e36d3c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5afef4afaadd280c82c1357d3c99faaee1c9a6711b0c60e0ec6e8c97e36d3c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5afef4afaadd280c82c1357d3c99faaee1c9a6711b0c60e0ec6e8c97e36d3c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5afef4afaadd280c82c1357d3c99faaee1c9a6711b0c60e0ec6e8c97e36d3c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:06 np0005597378 podman[244993]: 2026-01-27 13:38:06.272189266 +0000 UTC m=+0.025762735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:38:06 np0005597378 podman[244993]: 2026-01-27 13:38:06.374112704 +0000 UTC m=+0.127686093 container init f40dc935e900f25c48a225e10562cc592f9e6de0b8f2c208039b590d6b9e71b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:38:06 np0005597378 podman[244993]: 2026-01-27 13:38:06.383735113 +0000 UTC m=+0.137308512 container start f40dc935e900f25c48a225e10562cc592f9e6de0b8f2c208039b590d6b9e71b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 08:38:06 np0005597378 podman[244993]: 2026-01-27 13:38:06.396306542 +0000 UTC m=+0.149879941 container attach f40dc935e900f25c48a225e10562cc592f9e6de0b8f2c208039b590d6b9e71b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:38:06 np0005597378 quirky_cerf[245010]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:38:06 np0005597378 quirky_cerf[245010]: --> All data devices are unavailable
Jan 27 08:38:06 np0005597378 systemd[1]: libpod-f40dc935e900f25c48a225e10562cc592f9e6de0b8f2c208039b590d6b9e71b8.scope: Deactivated successfully.
Jan 27 08:38:06 np0005597378 podman[244993]: 2026-01-27 13:38:06.831486973 +0000 UTC m=+0.585060362 container died f40dc935e900f25c48a225e10562cc592f9e6de0b8f2c208039b590d6b9e71b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:38:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d5afef4afaadd280c82c1357d3c99faaee1c9a6711b0c60e0ec6e8c97e36d3c7-merged.mount: Deactivated successfully.
Jan 27 08:38:06 np0005597378 podman[244993]: 2026-01-27 13:38:06.899034893 +0000 UTC m=+0.652608282 container remove f40dc935e900f25c48a225e10562cc592f9e6de0b8f2c208039b590d6b9e71b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 08:38:06 np0005597378 systemd[1]: libpod-conmon-f40dc935e900f25c48a225e10562cc592f9e6de0b8f2c208039b590d6b9e71b8.scope: Deactivated successfully.
Jan 27 08:38:07 np0005597378 podman[245105]: 2026-01-27 13:38:07.330514054 +0000 UTC m=+0.035556999 container create 4f39a4f5dabe49ebecae1d1e37224a394ee9b78d97b6963963a6cd764a18243c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:38:07 np0005597378 systemd[1]: Started libpod-conmon-4f39a4f5dabe49ebecae1d1e37224a394ee9b78d97b6963963a6cd764a18243c.scope.
Jan 27 08:38:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:38:07 np0005597378 podman[245105]: 2026-01-27 13:38:07.404270062 +0000 UTC m=+0.109313027 container init 4f39a4f5dabe49ebecae1d1e37224a394ee9b78d97b6963963a6cd764a18243c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 27 08:38:07 np0005597378 podman[245105]: 2026-01-27 13:38:07.411759794 +0000 UTC m=+0.116802739 container start 4f39a4f5dabe49ebecae1d1e37224a394ee9b78d97b6963963a6cd764a18243c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:38:07 np0005597378 podman[245105]: 2026-01-27 13:38:07.316071225 +0000 UTC m=+0.021114190 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:38:07 np0005597378 pensive_mendeleev[245122]: 167 167
Jan 27 08:38:07 np0005597378 systemd[1]: libpod-4f39a4f5dabe49ebecae1d1e37224a394ee9b78d97b6963963a6cd764a18243c.scope: Deactivated successfully.
Jan 27 08:38:07 np0005597378 podman[245105]: 2026-01-27 13:38:07.42013529 +0000 UTC m=+0.125178265 container attach 4f39a4f5dabe49ebecae1d1e37224a394ee9b78d97b6963963a6cd764a18243c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mendeleev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:38:07 np0005597378 podman[245105]: 2026-01-27 13:38:07.421478956 +0000 UTC m=+0.126521901 container died 4f39a4f5dabe49ebecae1d1e37224a394ee9b78d97b6963963a6cd764a18243c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:38:07 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0f37684c4ae4645237da9c15ef58f00b91a8190318386cade4658d7a1a70d10d-merged.mount: Deactivated successfully.
Jan 27 08:38:07 np0005597378 podman[245105]: 2026-01-27 13:38:07.461179227 +0000 UTC m=+0.166222172 container remove 4f39a4f5dabe49ebecae1d1e37224a394ee9b78d97b6963963a6cd764a18243c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:38:07 np0005597378 systemd[1]: libpod-conmon-4f39a4f5dabe49ebecae1d1e37224a394ee9b78d97b6963963a6cd764a18243c.scope: Deactivated successfully.
Jan 27 08:38:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:38:07 np0005597378 podman[245146]: 2026-01-27 13:38:07.63749409 +0000 UTC m=+0.043675050 container create adedd809cb94b782213d038e17474c5c39d774b922661a6fa68644f378e3809c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 08:38:07 np0005597378 systemd[1]: Started libpod-conmon-adedd809cb94b782213d038e17474c5c39d774b922661a6fa68644f378e3809c.scope.
Jan 27 08:38:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:38:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abfc1a59e3d646a2a038bde5442c5590613c783474b7ead93f008395e52093e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abfc1a59e3d646a2a038bde5442c5590613c783474b7ead93f008395e52093e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abfc1a59e3d646a2a038bde5442c5590613c783474b7ead93f008395e52093e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abfc1a59e3d646a2a038bde5442c5590613c783474b7ead93f008395e52093e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:07 np0005597378 podman[245146]: 2026-01-27 13:38:07.615313021 +0000 UTC m=+0.021494001 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:38:07 np0005597378 podman[245146]: 2026-01-27 13:38:07.712917592 +0000 UTC m=+0.119098582 container init adedd809cb94b782213d038e17474c5c39d774b922661a6fa68644f378e3809c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_bhaskara, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:38:07 np0005597378 podman[245146]: 2026-01-27 13:38:07.719830448 +0000 UTC m=+0.126011408 container start adedd809cb94b782213d038e17474c5c39d774b922661a6fa68644f378e3809c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_bhaskara, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:38:07 np0005597378 podman[245146]: 2026-01-27 13:38:07.724932716 +0000 UTC m=+0.131113696 container attach adedd809cb94b782213d038e17474c5c39d774b922661a6fa68644f378e3809c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_bhaskara, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]: {
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:    "0": [
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:        {
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            "devices": [
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "/dev/loop3"
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            ],
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            "lv_name": "ceph_lv0",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            "lv_size": "21470642176",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            "name": "ceph_lv0",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            "tags": {
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.cluster_name": "ceph",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.crush_device_class": "",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.encrypted": "0",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.objectstore": "bluestore",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.osd_id": "0",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.type": "block",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.vdo": "0",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:                "ceph.with_tpm": "0"
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            },
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            "type": "block",
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            "vg_name": "ceph_vg0"
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:        }
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:    ],
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:    "1": [
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:        {
Jan 27 08:38:07 np0005597378 sharp_bhaskara[245162]:            "devices": [
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "/dev/loop4"
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            ],
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "lv_name": "ceph_lv1",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "lv_size": "21470642176",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "name": "ceph_lv1",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "tags": {
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.cluster_name": "ceph",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.crush_device_class": "",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.encrypted": "0",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.objectstore": "bluestore",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.osd_id": "1",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.type": "block",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.vdo": "0",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.with_tpm": "0"
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            },
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "type": "block",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "vg_name": "ceph_vg1"
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:        }
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:    ],
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:    "2": [
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:        {
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "devices": [
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "/dev/loop5"
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            ],
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "lv_name": "ceph_lv2",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "lv_size": "21470642176",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "name": "ceph_lv2",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "tags": {
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.cluster_name": "ceph",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.crush_device_class": "",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.encrypted": "0",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.objectstore": "bluestore",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.osd_id": "2",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.type": "block",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.vdo": "0",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:                "ceph.with_tpm": "0"
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            },
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "type": "block",
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:            "vg_name": "ceph_vg2"
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:        }
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]:    ]
Jan 27 08:38:08 np0005597378 sharp_bhaskara[245162]: }
Jan 27 08:38:08 np0005597378 systemd[1]: libpod-adedd809cb94b782213d038e17474c5c39d774b922661a6fa68644f378e3809c.scope: Deactivated successfully.
Jan 27 08:38:08 np0005597378 podman[245146]: 2026-01-27 13:38:08.040207844 +0000 UTC m=+0.446388824 container died adedd809cb94b782213d038e17474c5c39d774b922661a6fa68644f378e3809c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_bhaskara, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 08:38:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-abfc1a59e3d646a2a038bde5442c5590613c783474b7ead93f008395e52093e7-merged.mount: Deactivated successfully.
Jan 27 08:38:08 np0005597378 podman[245146]: 2026-01-27 13:38:08.848226995 +0000 UTC m=+1.254407955 container remove adedd809cb94b782213d038e17474c5c39d774b922661a6fa68644f378e3809c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_bhaskara, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:38:08 np0005597378 systemd[1]: libpod-conmon-adedd809cb94b782213d038e17474c5c39d774b922661a6fa68644f378e3809c.scope: Deactivated successfully.
Jan 27 08:38:09 np0005597378 podman[245245]: 2026-01-27 13:38:09.296294342 +0000 UTC m=+0.041421997 container create 533939d74e8525034edf3248ff1a91fcd48064af314d62345130bd52da52d134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_panini, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:38:09 np0005597378 systemd[1]: Started libpod-conmon-533939d74e8525034edf3248ff1a91fcd48064af314d62345130bd52da52d134.scope.
Jan 27 08:38:09 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:38:09 np0005597378 podman[245245]: 2026-01-27 13:38:09.280302772 +0000 UTC m=+0.025430457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:38:09 np0005597378 podman[245245]: 2026-01-27 13:38:09.384963823 +0000 UTC m=+0.130091508 container init 533939d74e8525034edf3248ff1a91fcd48064af314d62345130bd52da52d134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_panini, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 08:38:09 np0005597378 podman[245245]: 2026-01-27 13:38:09.39264016 +0000 UTC m=+0.137767815 container start 533939d74e8525034edf3248ff1a91fcd48064af314d62345130bd52da52d134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 08:38:09 np0005597378 nervous_panini[245262]: 167 167
Jan 27 08:38:09 np0005597378 systemd[1]: libpod-533939d74e8525034edf3248ff1a91fcd48064af314d62345130bd52da52d134.scope: Deactivated successfully.
Jan 27 08:38:09 np0005597378 podman[245245]: 2026-01-27 13:38:09.399967247 +0000 UTC m=+0.145094902 container attach 533939d74e8525034edf3248ff1a91fcd48064af314d62345130bd52da52d134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_panini, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:38:09 np0005597378 podman[245245]: 2026-01-27 13:38:09.400775509 +0000 UTC m=+0.145903164 container died 533939d74e8525034edf3248ff1a91fcd48064af314d62345130bd52da52d134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_panini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:38:09 np0005597378 systemd[1]: var-lib-containers-storage-overlay-223790572cb3242cd5f06385f8ab2d8f030f42c9be3e610322573265772c35c2-merged.mount: Deactivated successfully.
Jan 27 08:38:09 np0005597378 podman[245245]: 2026-01-27 13:38:09.446164722 +0000 UTC m=+0.191292377 container remove 533939d74e8525034edf3248ff1a91fcd48064af314d62345130bd52da52d134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:38:09 np0005597378 systemd[1]: libpod-conmon-533939d74e8525034edf3248ff1a91fcd48064af314d62345130bd52da52d134.scope: Deactivated successfully.
Jan 27 08:38:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 27 08:38:09 np0005597378 podman[245287]: 2026-01-27 13:38:09.602843536 +0000 UTC m=+0.036713431 container create 779f4db62dec1b728e21d93cf6700a95d937749eb657d692a58fab848039e8b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_pike, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:38:09 np0005597378 systemd[1]: Started libpod-conmon-779f4db62dec1b728e21d93cf6700a95d937749eb657d692a58fab848039e8b7.scope.
Jan 27 08:38:09 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:38:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d040a18aed9cb69d87ebf04b9d79a2b1488943fc49c253cd21993485c591f92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d040a18aed9cb69d87ebf04b9d79a2b1488943fc49c253cd21993485c591f92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d040a18aed9cb69d87ebf04b9d79a2b1488943fc49c253cd21993485c591f92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d040a18aed9cb69d87ebf04b9d79a2b1488943fc49c253cd21993485c591f92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:38:09 np0005597378 podman[245287]: 2026-01-27 13:38:09.675880034 +0000 UTC m=+0.109749959 container init 779f4db62dec1b728e21d93cf6700a95d937749eb657d692a58fab848039e8b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:38:09 np0005597378 podman[245287]: 2026-01-27 13:38:09.58705373 +0000 UTC m=+0.020923645 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:38:09 np0005597378 podman[245287]: 2026-01-27 13:38:09.684510418 +0000 UTC m=+0.118380323 container start 779f4db62dec1b728e21d93cf6700a95d937749eb657d692a58fab848039e8b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_pike, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 08:38:09 np0005597378 podman[245287]: 2026-01-27 13:38:09.702734099 +0000 UTC m=+0.136604024 container attach 779f4db62dec1b728e21d93cf6700a95d937749eb657d692a58fab848039e8b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 08:38:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 27 08:38:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 27 08:38:09 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 27 08:38:10 np0005597378 lvm[245383]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:38:10 np0005597378 lvm[245382]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:38:10 np0005597378 lvm[245383]: VG ceph_vg1 finished
Jan 27 08:38:10 np0005597378 lvm[245382]: VG ceph_vg0 finished
Jan 27 08:38:10 np0005597378 lvm[245385]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:38:10 np0005597378 lvm[245385]: VG ceph_vg2 finished
Jan 27 08:38:10 np0005597378 admiring_pike[245304]: {}
Jan 27 08:38:10 np0005597378 systemd[1]: libpod-779f4db62dec1b728e21d93cf6700a95d937749eb657d692a58fab848039e8b7.scope: Deactivated successfully.
Jan 27 08:38:10 np0005597378 systemd[1]: libpod-779f4db62dec1b728e21d93cf6700a95d937749eb657d692a58fab848039e8b7.scope: Consumed 1.294s CPU time.
Jan 27 08:38:10 np0005597378 podman[245388]: 2026-01-27 13:38:10.530765158 +0000 UTC m=+0.027388949 container died 779f4db62dec1b728e21d93cf6700a95d937749eb657d692a58fab848039e8b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 08:38:10 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3d040a18aed9cb69d87ebf04b9d79a2b1488943fc49c253cd21993485c591f92-merged.mount: Deactivated successfully.
Jan 27 08:38:10 np0005597378 podman[245388]: 2026-01-27 13:38:10.620502717 +0000 UTC m=+0.117126498 container remove 779f4db62dec1b728e21d93cf6700a95d937749eb657d692a58fab848039e8b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_pike, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:38:10 np0005597378 systemd[1]: libpod-conmon-779f4db62dec1b728e21d93cf6700a95d937749eb657d692a58fab848039e8b7.scope: Deactivated successfully.
Jan 27 08:38:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:38:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:38:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:38:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:38:10 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:38:10 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:38:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:38:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 102 B/s wr, 0 op/s
Jan 27 08:38:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 27 08:38:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 27 08:38:11 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 27 08:38:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.6 MiB/s wr, 32 op/s
Jan 27 08:38:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 27 08:38:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:38:17
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control']
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:38:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:38:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.2 MiB/s wr, 38 op/s
Jan 27 08:38:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:38:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 37 op/s
Jan 27 08:38:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.5 MiB/s wr, 32 op/s
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.625715) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521104625984, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2106, "num_deletes": 252, "total_data_size": 3479059, "memory_usage": 3535336, "flush_reason": "Manual Compaction"}
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521104653869, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3400359, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16331, "largest_seqno": 18436, "table_properties": {"data_size": 3390812, "index_size": 6041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19195, "raw_average_key_size": 20, "raw_value_size": 3371667, "raw_average_value_size": 3519, "num_data_blocks": 272, "num_entries": 958, "num_filter_entries": 958, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769520886, "oldest_key_time": 1769520886, "file_creation_time": 1769521104, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 28304 microseconds, and 7557 cpu microseconds.
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.654024) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3400359 bytes OK
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.654049) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.656077) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.656094) EVENT_LOG_v1 {"time_micros": 1769521104656090, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.656113) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3470221, prev total WAL file size 3470221, number of live WAL files 2.
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.657051) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3320KB)], [38(7639KB)]
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521104657109, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11222980, "oldest_snapshot_seqno": -1}
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4460 keys, 9431336 bytes, temperature: kUnknown
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521104725082, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9431336, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9397908, "index_size": 21158, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 107828, "raw_average_key_size": 24, "raw_value_size": 9313760, "raw_average_value_size": 2088, "num_data_blocks": 897, "num_entries": 4460, "num_filter_entries": 4460, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769521104, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.725305) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9431336 bytes
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.726999) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.9 rd, 138.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4979, records dropped: 519 output_compression: NoCompression
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.727019) EVENT_LOG_v1 {"time_micros": 1769521104727010, "job": 18, "event": "compaction_finished", "compaction_time_micros": 68045, "compaction_time_cpu_micros": 18484, "output_level": 6, "num_output_files": 1, "total_output_size": 9431336, "num_input_records": 4979, "num_output_records": 4460, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521104727896, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521104729580, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.656969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.729673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.729678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.729680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.729681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:38:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:24.729682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:38:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 1.7 MiB/s wr, 9 op/s
Jan 27 08:38:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664183997250927 of space, bias 1.0, pg target 0.1999255199175278 quantized to 32 (current 32)
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.380241281268823e-06 of space, bias 4.0, pg target 0.0016562895375225874 quantized to 16 (current 16)
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:38:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:38:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:38:29 np0005597378 podman[245428]: 2026-01-27 13:38:29.769031275 +0000 UTC m=+0.112088773 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 08:38:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:38:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:38:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:38:32.301 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:38:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:38:32.302 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:38:32 np0005597378 nova_compute[238941]: 2026-01-27 13:38:32.837 238945 DEBUG oslo_concurrency.lockutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Acquiring lock "11765753-29e5-4632-8836-cb890652806c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:38:32 np0005597378 nova_compute[238941]: 2026-01-27 13:38:32.838 238945 DEBUG oslo_concurrency.lockutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "11765753-29e5-4632-8836-cb890652806c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:38:32 np0005597378 nova_compute[238941]: 2026-01-27 13:38:32.859 238945 DEBUG nova.compute.manager [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:38:32 np0005597378 nova_compute[238941]: 2026-01-27 13:38:32.981 238945 DEBUG oslo_concurrency.lockutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:38:32 np0005597378 nova_compute[238941]: 2026-01-27 13:38:32.982 238945 DEBUG oslo_concurrency.lockutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:38:32 np0005597378 nova_compute[238941]: 2026-01-27 13:38:32.991 238945 DEBUG nova.virt.hardware [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:38:32 np0005597378 nova_compute[238941]: 2026-01-27 13:38:32.992 238945 INFO nova.compute.claims [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.109 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:38:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:38:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:38:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2343441227' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.663 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.670 238945 DEBUG nova.compute.provider_tree [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.684 238945 DEBUG nova.scheduler.client.report [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.705 238945 DEBUG oslo_concurrency.lockutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.706 238945 DEBUG nova.compute.manager [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.757 238945 DEBUG nova.compute.manager [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.785 238945 INFO nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.802 238945 DEBUG nova.compute.manager [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.885 238945 DEBUG nova.compute.manager [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.887 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.887 238945 INFO nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Creating image(s)#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.910 238945 DEBUG nova.storage.rbd_utils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] rbd image 11765753-29e5-4632-8836-cb890652806c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.938 238945 DEBUG nova.storage.rbd_utils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] rbd image 11765753-29e5-4632-8836-cb890652806c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.957 238945 DEBUG nova.storage.rbd_utils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] rbd image 11765753-29e5-4632-8836-cb890652806c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.960 238945 DEBUG oslo_concurrency.lockutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:38:33 np0005597378 nova_compute[238941]: 2026-01-27 13:38:33.961 238945 DEBUG oslo_concurrency.lockutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:38:34 np0005597378 nova_compute[238941]: 2026-01-27 13:38:34.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:38:34 np0005597378 podman[245531]: 2026-01-27 13:38:34.71218459 +0000 UTC m=+0.053945885 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 08:38:34 np0005597378 nova_compute[238941]: 2026-01-27 13:38:34.984 238945 DEBUG nova.virt.libvirt.imagebackend [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/deec719f-9679-4d33-adfe-db01148e4a56/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/deec719f-9679-4d33-adfe-db01148e4a56/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.411 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 11765753-29e5-4632-8836-cb890652806c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.411 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.411 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.411 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.412 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.433 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.433 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.433 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.433 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.434 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:38:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:38:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:38:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2487938464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:38:35 np0005597378 nova_compute[238941]: 2026-01-27 13:38:35.998 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:38:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:38:36 np0005597378 nova_compute[238941]: 2026-01-27 13:38:36.995 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:38:36 np0005597378 nova_compute[238941]: 2026-01-27 13:38:36.995 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5125MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:38:36 np0005597378 nova_compute[238941]: 2026-01-27 13:38:36.996 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:38:36 np0005597378 nova_compute[238941]: 2026-01-27 13:38:36.996 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.074 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 11765753-29e5-4632-8836-cb890652806c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.074 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.074 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.112 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:38:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:38:37.304 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.327 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.391 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f.part --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.393 238945 DEBUG nova.virt.images [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] deec719f-9679-4d33-adfe-db01148e4a56 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.395 238945 DEBUG nova.privsep.utils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.395 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f.part /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:38:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.623 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f.part /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f.converted" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.627 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:38:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:38:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531930980' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.686 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.688 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f.converted --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.688 238945 DEBUG oslo_concurrency.lockutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.708 238945 DEBUG nova.storage.rbd_utils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] rbd image 11765753-29e5-4632-8836-cb890652806c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.712 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 11765753-29e5-4632-8836-cb890652806c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.730 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.762 238945 ERROR nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [req-6fef3095-e8c6-4fdb-846c-b4d22fd7eb4e] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID cc8b0052-0829-4cee-8aba-4745f236afe4.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-6fef3095-e8c6-4fdb-846c-b4d22fd7eb4e"}]}#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.778 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.794 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.794 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.808 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.829 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 08:38:37 np0005597378 nova_compute[238941]: 2026-01-27 13:38:37.861 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:38:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 27 08:38:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 27 08:38:38 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 27 08:38:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:38:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3281189841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:38:38 np0005597378 nova_compute[238941]: 2026-01-27 13:38:38.425 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:38:38 np0005597378 nova_compute[238941]: 2026-01-27 13:38:38.430 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 08:38:38 np0005597378 nova_compute[238941]: 2026-01-27 13:38:38.522 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updated inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with generation 7 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 27 08:38:38 np0005597378 nova_compute[238941]: 2026-01-27 13:38:38.523 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 generation from 7 to 8 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 27 08:38:38 np0005597378 nova_compute[238941]: 2026-01-27 13:38:38.523 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 08:38:38 np0005597378 nova_compute[238941]: 2026-01-27 13:38:38.552 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:38:38 np0005597378 nova_compute[238941]: 2026-01-27 13:38:38.553 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:38:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 27 08:38:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 27 08:38:39 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.315 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 11765753-29e5-4632-8836-cb890652806c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.376 238945 DEBUG nova.storage.rbd_utils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] resizing rbd image 11765753-29e5-4632-8836-cb890652806c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.463 238945 DEBUG nova.objects.instance [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lazy-loading 'migration_context' on Instance uuid 11765753-29e5-4632-8836-cb890652806c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.485 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.486 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Ensure instance console log exists: /var/lib/nova/instances/11765753-29e5-4632-8836-cb890652806c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.486 238945 DEBUG oslo_concurrency.lockutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.487 238945 DEBUG oslo_concurrency.lockutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.487 238945 DEBUG oslo_concurrency.lockutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.489 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.493 238945 WARNING nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.498 238945 DEBUG nova.virt.libvirt.host [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.499 238945 DEBUG nova.virt.libvirt.host [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:38:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.504 238945 DEBUG nova.virt.libvirt.host [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.505 238945 DEBUG nova.virt.libvirt.host [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.505 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.506 238945 DEBUG nova.virt.hardware [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.506 238945 DEBUG nova.virt.hardware [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.507 238945 DEBUG nova.virt.hardware [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.507 238945 DEBUG nova.virt.hardware [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.507 238945 DEBUG nova.virt.hardware [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.507 238945 DEBUG nova.virt.hardware [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.508 238945 DEBUG nova.virt.hardware [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.508 238945 DEBUG nova.virt.hardware [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.508 238945 DEBUG nova.virt.hardware [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.508 238945 DEBUG nova.virt.hardware [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.509 238945 DEBUG nova.virt.hardware [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.541 238945 DEBUG nova.privsep.utils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.542 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.557 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.558 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.581 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:38:39 np0005597378 nova_compute[238941]: 2026-01-27 13:38:39.582 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:38:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:38:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375180005' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:38:40 np0005597378 nova_compute[238941]: 2026-01-27 13:38:40.118 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:38:40 np0005597378 nova_compute[238941]: 2026-01-27 13:38:40.138 238945 DEBUG nova.storage.rbd_utils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] rbd image 11765753-29e5-4632-8836-cb890652806c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:38:40 np0005597378 nova_compute[238941]: 2026-01-27 13:38:40.142 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:38:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:38:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1461826581' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:38:40 np0005597378 nova_compute[238941]: 2026-01-27 13:38:40.737 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:38:40 np0005597378 nova_compute[238941]: 2026-01-27 13:38:40.739 238945 DEBUG nova.objects.instance [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11765753-29e5-4632-8836-cb890652806c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:38:40 np0005597378 nova_compute[238941]: 2026-01-27 13:38:40.786 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  <uuid>11765753-29e5-4632-8836-cb890652806c</uuid>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  <name>instance-00000001</name>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <nova:name>tempest-AutoAllocateNetworkTest-server-1642557992</nova:name>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:38:39</nova:creationTime>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:        <nova:user uuid="a759ad33cedf42b8aa7a855daf0912c2">tempest-AutoAllocateNetworkTest-1754405264-project-member</nova:user>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:        <nova:project uuid="07a1587c5fe140db919a48396724b913">tempest-AutoAllocateNetworkTest-1754405264</nova:project>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <entry name="serial">11765753-29e5-4632-8836-cb890652806c</entry>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <entry name="uuid">11765753-29e5-4632-8836-cb890652806c</entry>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/11765753-29e5-4632-8836-cb890652806c_disk">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/11765753-29e5-4632-8836-cb890652806c_disk.config">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/11765753-29e5-4632-8836-cb890652806c/console.log" append="off"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:38:40 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:38:40 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:38:40 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:38:40 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:38:40 np0005597378 nova_compute[238941]: 2026-01-27 13:38:40.990 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:38:40 np0005597378 nova_compute[238941]: 2026-01-27 13:38:40.990 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:38:40 np0005597378 nova_compute[238941]: 2026-01-27 13:38:40.991 238945 INFO nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Using config drive#033[00m
Jan 27 08:38:41 np0005597378 nova_compute[238941]: 2026-01-27 13:38:41.010 238945 DEBUG nova.storage.rbd_utils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] rbd image 11765753-29e5-4632-8836-cb890652806c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:38:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:38:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 70 MiB data, 187 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 15 op/s
Jan 27 08:38:41 np0005597378 nova_compute[238941]: 2026-01-27 13:38:41.592 238945 INFO nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Creating config drive at /var/lib/nova/instances/11765753-29e5-4632-8836-cb890652806c/disk.config#033[00m
Jan 27 08:38:41 np0005597378 nova_compute[238941]: 2026-01-27 13:38:41.597 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11765753-29e5-4632-8836-cb890652806c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpud4p968h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:38:41 np0005597378 nova_compute[238941]: 2026-01-27 13:38:41.727 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11765753-29e5-4632-8836-cb890652806c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpud4p968h" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:38:41 np0005597378 nova_compute[238941]: 2026-01-27 13:38:41.749 238945 DEBUG nova.storage.rbd_utils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] rbd image 11765753-29e5-4632-8836-cb890652806c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:38:41 np0005597378 nova_compute[238941]: 2026-01-27 13:38:41.752 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11765753-29e5-4632-8836-cb890652806c/disk.config 11765753-29e5-4632-8836-cb890652806c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:38:41 np0005597378 nova_compute[238941]: 2026-01-27 13:38:41.886 238945 DEBUG oslo_concurrency.processutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11765753-29e5-4632-8836-cb890652806c/disk.config 11765753-29e5-4632-8836-cb890652806c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:38:41 np0005597378 nova_compute[238941]: 2026-01-27 13:38:41.887 238945 INFO nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Deleting local config drive /var/lib/nova/instances/11765753-29e5-4632-8836-cb890652806c/disk.config because it was imported into RBD.#033[00m
Jan 27 08:38:41 np0005597378 systemd[1]: Starting libvirt secret daemon...
Jan 27 08:38:41 np0005597378 systemd[1]: Started libvirt secret daemon.
Jan 27 08:38:41 np0005597378 systemd-machined[207425]: New machine qemu-1-instance-00000001.
Jan 27 08:38:42 np0005597378 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.527 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521122.5271533, 11765753-29e5-4632-8836-cb890652806c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.529 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11765753-29e5-4632-8836-cb890652806c] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.533 238945 DEBUG nova.compute.manager [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.533 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.536 238945 INFO nova.virt.libvirt.driver [-] [instance: 11765753-29e5-4632-8836-cb890652806c] Instance spawned successfully.#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.537 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.572 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11765753-29e5-4632-8836-cb890652806c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.575 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11765753-29e5-4632-8836-cb890652806c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.613 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.614 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.614 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.615 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.615 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.616 238945 DEBUG nova.virt.libvirt.driver [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.623 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11765753-29e5-4632-8836-cb890652806c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.623 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521122.528733, 11765753-29e5-4632-8836-cb890652806c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.623 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11765753-29e5-4632-8836-cb890652806c] VM Started (Lifecycle Event)#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.686 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11765753-29e5-4632-8836-cb890652806c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.690 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11765753-29e5-4632-8836-cb890652806c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.696 238945 INFO nova.compute.manager [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Took 8.81 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.697 238945 DEBUG nova.compute.manager [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.713 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11765753-29e5-4632-8836-cb890652806c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.765 238945 INFO nova.compute.manager [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Took 9.82 seconds to build instance.#033[00m
Jan 27 08:38:42 np0005597378 nova_compute[238941]: 2026-01-27 13:38:42.792 238945 DEBUG oslo_concurrency.lockutils [None req-6593dc50-0a9e-491a-8283-7f6c15aa4d13 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "11765753-29e5-4632-8836-cb890652806c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:38:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 80 MiB data, 193 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 61 op/s
Jan 27 08:38:45 np0005597378 nova_compute[238941]: 2026-01-27 13:38:45.326 238945 DEBUG oslo_concurrency.lockutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Acquiring lock "11765753-29e5-4632-8836-cb890652806c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:38:45 np0005597378 nova_compute[238941]: 2026-01-27 13:38:45.327 238945 DEBUG oslo_concurrency.lockutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "11765753-29e5-4632-8836-cb890652806c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:38:45 np0005597378 nova_compute[238941]: 2026-01-27 13:38:45.327 238945 DEBUG oslo_concurrency.lockutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Acquiring lock "11765753-29e5-4632-8836-cb890652806c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:38:45 np0005597378 nova_compute[238941]: 2026-01-27 13:38:45.328 238945 DEBUG oslo_concurrency.lockutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "11765753-29e5-4632-8836-cb890652806c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:38:45 np0005597378 nova_compute[238941]: 2026-01-27 13:38:45.328 238945 DEBUG oslo_concurrency.lockutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "11765753-29e5-4632-8836-cb890652806c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:38:45 np0005597378 nova_compute[238941]: 2026-01-27 13:38:45.330 238945 INFO nova.compute.manager [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Terminating instance#033[00m
Jan 27 08:38:45 np0005597378 nova_compute[238941]: 2026-01-27 13:38:45.332 238945 DEBUG oslo_concurrency.lockutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Acquiring lock "refresh_cache-11765753-29e5-4632-8836-cb890652806c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:38:45 np0005597378 nova_compute[238941]: 2026-01-27 13:38:45.333 238945 DEBUG oslo_concurrency.lockutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Acquired lock "refresh_cache-11765753-29e5-4632-8836-cb890652806c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:38:45 np0005597378 nova_compute[238941]: 2026-01-27 13:38:45.333 238945 DEBUG nova.network.neutron [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:38:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.7 MiB/s wr, 133 op/s
Jan 27 08:38:45 np0005597378 nova_compute[238941]: 2026-01-27 13:38:45.759 238945 DEBUG nova.network.neutron [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:38:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:38:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 27 08:38:46 np0005597378 nova_compute[238941]: 2026-01-27 13:38:46.186 238945 DEBUG nova.network.neutron [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:38:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 27 08:38:46 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 27 08:38:46 np0005597378 nova_compute[238941]: 2026-01-27 13:38:46.200 238945 DEBUG oslo_concurrency.lockutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Releasing lock "refresh_cache-11765753-29e5-4632-8836-cb890652806c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:38:46 np0005597378 nova_compute[238941]: 2026-01-27 13:38:46.200 238945 DEBUG nova.compute.manager [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:38:46 np0005597378 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 27 08:38:46 np0005597378 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 4.160s CPU time.
Jan 27 08:38:46 np0005597378 systemd-machined[207425]: Machine qemu-1-instance-00000001 terminated.
Jan 27 08:38:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:38:46.287 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:38:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:38:46.287 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:38:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:38:46.287 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:38:46 np0005597378 nova_compute[238941]: 2026-01-27 13:38:46.420 238945 INFO nova.virt.libvirt.driver [-] [instance: 11765753-29e5-4632-8836-cb890652806c] Instance destroyed successfully.#033[00m
Jan 27 08:38:46 np0005597378 nova_compute[238941]: 2026-01-27 13:38:46.420 238945 DEBUG nova.objects.instance [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lazy-loading 'resources' on Instance uuid 11765753-29e5-4632-8836-cb890652806c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:38:46 np0005597378 nova_compute[238941]: 2026-01-27 13:38:46.731 238945 INFO nova.virt.libvirt.driver [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Deleting instance files /var/lib/nova/instances/11765753-29e5-4632-8836-cb890652806c_del#033[00m
Jan 27 08:38:46 np0005597378 nova_compute[238941]: 2026-01-27 13:38:46.732 238945 INFO nova.virt.libvirt.driver [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Deletion of /var/lib/nova/instances/11765753-29e5-4632-8836-cb890652806c_del complete#033[00m
Jan 27 08:38:46 np0005597378 nova_compute[238941]: 2026-01-27 13:38:46.805 238945 DEBUG nova.virt.libvirt.host [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Jan 27 08:38:46 np0005597378 nova_compute[238941]: 2026-01-27 13:38:46.806 238945 INFO nova.virt.libvirt.host [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] UEFI support detected#033[00m
Jan 27 08:38:46 np0005597378 nova_compute[238941]: 2026-01-27 13:38:46.807 238945 INFO nova.compute.manager [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] [instance: 11765753-29e5-4632-8836-cb890652806c] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:38:46 np0005597378 nova_compute[238941]: 2026-01-27 13:38:46.809 238945 DEBUG oslo.service.loopingcall [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:38:46 np0005597378 nova_compute[238941]: 2026-01-27 13:38:46.809 238945 DEBUG nova.compute.manager [-] [instance: 11765753-29e5-4632-8836-cb890652806c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:38:46 np0005597378 nova_compute[238941]: 2026-01-27 13:38:46.809 238945 DEBUG nova.network.neutron [-] [instance: 11765753-29e5-4632-8836-cb890652806c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:38:47 np0005597378 nova_compute[238941]: 2026-01-27 13:38:47.145 238945 DEBUG nova.network.neutron [-] [instance: 11765753-29e5-4632-8836-cb890652806c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:38:47 np0005597378 nova_compute[238941]: 2026-01-27 13:38:47.174 238945 DEBUG nova.network.neutron [-] [instance: 11765753-29e5-4632-8836-cb890652806c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:38:47 np0005597378 nova_compute[238941]: 2026-01-27 13:38:47.228 238945 INFO nova.compute.manager [-] [instance: 11765753-29e5-4632-8836-cb890652806c] Took 0.42 seconds to deallocate network for instance.#033[00m
Jan 27 08:38:47 np0005597378 nova_compute[238941]: 2026-01-27 13:38:47.297 238945 DEBUG oslo_concurrency.lockutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:38:47 np0005597378 nova_compute[238941]: 2026-01-27 13:38:47.298 238945 DEBUG oslo_concurrency.lockutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:38:47 np0005597378 nova_compute[238941]: 2026-01-27 13:38:47.346 238945 DEBUG oslo_concurrency.processutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:38:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 116 op/s
Jan 27 08:38:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:38:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:38:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:38:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:38:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:38:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:38:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:38:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4248685007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:38:47 np0005597378 nova_compute[238941]: 2026-01-27 13:38:47.920 238945 DEBUG oslo_concurrency.processutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:38:47 np0005597378 nova_compute[238941]: 2026-01-27 13:38:47.926 238945 DEBUG nova.compute.provider_tree [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:38:48 np0005597378 nova_compute[238941]: 2026-01-27 13:38:48.086 238945 DEBUG nova.scheduler.client.report [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:38:48 np0005597378 nova_compute[238941]: 2026-01-27 13:38:48.338 238945 DEBUG oslo_concurrency.lockutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:38:48 np0005597378 nova_compute[238941]: 2026-01-27 13:38:48.378 238945 INFO nova.scheduler.client.report [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Deleted allocations for instance 11765753-29e5-4632-8836-cb890652806c#033[00m
Jan 27 08:38:48 np0005597378 nova_compute[238941]: 2026-01-27 13:38:48.474 238945 DEBUG oslo_concurrency.lockutils [None req-af5f9d9a-13f2-4913-9f65-766e091cd715 a759ad33cedf42b8aa7a855daf0912c2 07a1587c5fe140db919a48396724b913 - - default default] Lock "11765753-29e5-4632-8836-cb890652806c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:38:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 70 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 149 op/s
Jan 27 08:38:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:38:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 148 op/s
Jan 27 08:38:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 616 KiB/s wr, 111 op/s
Jan 27 08:38:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 718 KiB/s rd, 1.4 KiB/s wr, 53 op/s
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.202277) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521136202357, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 526, "num_deletes": 252, "total_data_size": 520449, "memory_usage": 531360, "flush_reason": "Manual Compaction"}
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521136214588, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 382526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18437, "largest_seqno": 18962, "table_properties": {"data_size": 379813, "index_size": 749, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7079, "raw_average_key_size": 19, "raw_value_size": 374154, "raw_average_value_size": 1048, "num_data_blocks": 34, "num_entries": 357, "num_filter_entries": 357, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769521105, "oldest_key_time": 1769521105, "file_creation_time": 1769521136, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 12366 microseconds, and 2121 cpu microseconds.
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.214647) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 382526 bytes OK
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.214670) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.226406) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.226460) EVENT_LOG_v1 {"time_micros": 1769521136226452, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.226484) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 517415, prev total WAL file size 517415, number of live WAL files 2.
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.227054) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(373KB)], [41(9210KB)]
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521136227117, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9813862, "oldest_snapshot_seqno": -1}
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4311 keys, 6588565 bytes, temperature: kUnknown
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521136328261, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6588565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6560322, "index_size": 16358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 105204, "raw_average_key_size": 24, "raw_value_size": 6482871, "raw_average_value_size": 1503, "num_data_blocks": 688, "num_entries": 4311, "num_filter_entries": 4311, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769521136, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.328600) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6588565 bytes
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.357719) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 96.9 rd, 65.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.0 +0.0 blob) out(6.3 +0.0 blob), read-write-amplify(42.9) write-amplify(17.2) OK, records in: 4817, records dropped: 506 output_compression: NoCompression
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.357783) EVENT_LOG_v1 {"time_micros": 1769521136357760, "job": 20, "event": "compaction_finished", "compaction_time_micros": 101274, "compaction_time_cpu_micros": 15837, "output_level": 6, "num_output_files": 1, "total_output_size": 6588565, "num_input_records": 4817, "num_output_records": 4311, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521136358176, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521136360359, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.226961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.360622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.360633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.360636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.360638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:38:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:38:56.360640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:38:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 635 KiB/s rd, 1.2 KiB/s wr, 47 op/s
Jan 27 08:38:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:38:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1865956643' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:38:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:38:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1865956643' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:38:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 598 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Jan 27 08:39:00 np0005597378 podman[245982]: 2026-01-27 13:39:00.742076008 +0000 UTC m=+0.085413604 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 08:39:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:39:01 np0005597378 nova_compute[238941]: 2026-01-27 13:39:01.419 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521126.418109, 11765753-29e5-4632-8836-cb890652806c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:39:01 np0005597378 nova_compute[238941]: 2026-01-27 13:39:01.419 238945 INFO nova.compute.manager [-] [instance: 11765753-29e5-4632-8836-cb890652806c] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:39:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 B/s wr, 2 op/s
Jan 27 08:39:01 np0005597378 nova_compute[238941]: 2026-01-27 13:39:01.614 238945 DEBUG nova.compute.manager [None req-c5ad3109-7b36-42b2-b692-e07aa009aad5 - - - - - -] [instance: 11765753-29e5-4632-8836-cb890652806c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:39:01 np0005597378 nova_compute[238941]: 2026-01-27 13:39:01.651 238945 DEBUG oslo_concurrency.lockutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Acquiring lock "86e1daac-45e6-441d-8a4a-1294891b69a6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:01 np0005597378 nova_compute[238941]: 2026-01-27 13:39:01.652 238945 DEBUG oslo_concurrency.lockutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "86e1daac-45e6-441d-8a4a-1294891b69a6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:01 np0005597378 nova_compute[238941]: 2026-01-27 13:39:01.669 238945 DEBUG nova.compute.manager [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:39:01 np0005597378 nova_compute[238941]: 2026-01-27 13:39:01.744 238945 DEBUG oslo_concurrency.lockutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:01 np0005597378 nova_compute[238941]: 2026-01-27 13:39:01.745 238945 DEBUG oslo_concurrency.lockutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:01 np0005597378 nova_compute[238941]: 2026-01-27 13:39:01.752 238945 DEBUG nova.virt.hardware [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:39:01 np0005597378 nova_compute[238941]: 2026-01-27 13:39:01.753 238945 INFO nova.compute.claims [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:39:01 np0005597378 nova_compute[238941]: 2026-01-27 13:39:01.916 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:39:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1995416247' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.461 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.467 238945 DEBUG nova.compute.provider_tree [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.483 238945 DEBUG nova.scheduler.client.report [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.510 238945 DEBUG oslo_concurrency.lockutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.511 238945 DEBUG nova.compute.manager [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.580 238945 DEBUG nova.compute.manager [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.581 238945 DEBUG nova.network.neutron [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.615 238945 INFO nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.633 238945 DEBUG nova.compute.manager [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.723 238945 DEBUG nova.compute.manager [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.725 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.725 238945 INFO nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Creating image(s)#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.752 238945 DEBUG nova.storage.rbd_utils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 86e1daac-45e6-441d-8a4a-1294891b69a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.782 238945 DEBUG nova.storage.rbd_utils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 86e1daac-45e6-441d-8a4a-1294891b69a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.807 238945 DEBUG nova.storage.rbd_utils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 86e1daac-45e6-441d-8a4a-1294891b69a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.812 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.881 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.882 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "e494a15a-7ac1-47d9-be70-22ec46b36797" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.883 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.885 238945 DEBUG oslo_concurrency.lockutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.886 238945 DEBUG oslo_concurrency.lockutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.886 238945 DEBUG oslo_concurrency.lockutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.913 238945 DEBUG nova.storage.rbd_utils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 86e1daac-45e6-441d-8a4a-1294891b69a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.918 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 86e1daac-45e6-441d-8a4a-1294891b69a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.958 238945 DEBUG nova.compute.manager [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.967 238945 DEBUG nova.network.neutron [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 27 08:39:02 np0005597378 nova_compute[238941]: 2026-01-27 13:39:02.968 238945 DEBUG nova.compute.manager [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.114 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.114 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.127 238945 DEBUG nova.virt.hardware [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.127 238945 INFO nova.compute.claims [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.199 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 86e1daac-45e6-441d-8a4a-1294891b69a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.265 238945 DEBUG nova.storage.rbd_utils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] resizing rbd image 86e1daac-45e6-441d-8a4a-1294891b69a6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.313 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.692 238945 DEBUG nova.objects.instance [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 86e1daac-45e6-441d-8a4a-1294891b69a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.730 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.730 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Ensure instance console log exists: /var/lib/nova/instances/86e1daac-45e6-441d-8a4a-1294891b69a6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.731 238945 DEBUG oslo_concurrency.lockutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.731 238945 DEBUG oslo_concurrency.lockutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.732 238945 DEBUG oslo_concurrency.lockutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.733 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.740 238945 WARNING nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.745 238945 DEBUG nova.virt.libvirt.host [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.746 238945 DEBUG nova.virt.libvirt.host [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.754 238945 DEBUG nova.virt.libvirt.host [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.755 238945 DEBUG nova.virt.libvirt.host [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.755 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.756 238945 DEBUG nova.virt.hardware [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.756 238945 DEBUG nova.virt.hardware [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.756 238945 DEBUG nova.virt.hardware [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.757 238945 DEBUG nova.virt.hardware [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.757 238945 DEBUG nova.virt.hardware [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.757 238945 DEBUG nova.virt.hardware [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.757 238945 DEBUG nova.virt.hardware [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.757 238945 DEBUG nova.virt.hardware [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.758 238945 DEBUG nova.virt.hardware [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.758 238945 DEBUG nova.virt.hardware [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.758 238945 DEBUG nova.virt.hardware [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.771 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:39:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3913955691' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:39:03 np0005597378 nova_compute[238941]: 2026-01-27 13:39:03.995 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.000 238945 DEBUG nova.compute.provider_tree [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.017 238945 DEBUG nova.scheduler.client.report [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.315 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.316 238945 DEBUG nova.compute.manager [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:39:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:39:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004525514' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.426 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.447 238945 DEBUG nova.storage.rbd_utils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 86e1daac-45e6-441d-8a4a-1294891b69a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.450 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.479 238945 DEBUG nova.compute.manager [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.480 238945 DEBUG nova.network.neutron [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.511 238945 INFO nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.528 238945 DEBUG nova.compute.manager [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.625 238945 DEBUG nova.compute.manager [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.626 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.627 238945 INFO nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Creating image(s)#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.650 238945 DEBUG nova.storage.rbd_utils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image e494a15a-7ac1-47d9-be70-22ec46b36797_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.677 238945 DEBUG nova.storage.rbd_utils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image e494a15a-7ac1-47d9-be70-22ec46b36797_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.703 238945 DEBUG nova.storage.rbd_utils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image e494a15a-7ac1-47d9-be70-22ec46b36797_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.708 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.768 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.769 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.770 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.770 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.791 238945 DEBUG nova.storage.rbd_utils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image e494a15a-7ac1-47d9-be70-22ec46b36797_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.795 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e494a15a-7ac1-47d9-be70-22ec46b36797_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.842 238945 WARNING oslo_policy.policy [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.843 238945 WARNING oslo_policy.policy [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Jan 27 08:39:04 np0005597378 nova_compute[238941]: 2026-01-27 13:39:04.847 238945 DEBUG nova.policy [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11072876e4694e33bece015a47248409', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6c8760bce6747b1a4ba3511f8705506', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:39:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:39:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/766328836' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.045 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e494a15a-7ac1-47d9-be70-22ec46b36797_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.070 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.072 238945 DEBUG nova.objects.instance [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86e1daac-45e6-441d-8a4a-1294891b69a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.107 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  <uuid>86e1daac-45e6-441d-8a4a-1294891b69a6</uuid>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  <name>instance-00000002</name>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-644314250</nova:name>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:39:03</nova:creationTime>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:        <nova:user uuid="4eadd6ee0ee0402aa1b2e18319362088">tempest-DeleteServersAdminTestJSON-1841579421-project-member</nova:user>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:        <nova:project uuid="c9f9b365be6e4ebc8f60e60abbc310b4">tempest-DeleteServersAdminTestJSON-1841579421</nova:project>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <entry name="serial">86e1daac-45e6-441d-8a4a-1294891b69a6</entry>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <entry name="uuid">86e1daac-45e6-441d-8a4a-1294891b69a6</entry>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/86e1daac-45e6-441d-8a4a-1294891b69a6_disk">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/86e1daac-45e6-441d-8a4a-1294891b69a6_disk.config">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/86e1daac-45e6-441d-8a4a-1294891b69a6/console.log" append="off"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:39:05 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:39:05 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:39:05 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:39:05 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.116 238945 DEBUG nova.storage.rbd_utils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] resizing rbd image e494a15a-7ac1-47d9-be70-22ec46b36797_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:39:05 np0005597378 podman[246413]: 2026-01-27 13:39:05.195294525 +0000 UTC m=+0.057673495 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.218 238945 DEBUG nova.objects.instance [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lazy-loading 'migration_context' on Instance uuid e494a15a-7ac1-47d9-be70-22ec46b36797 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.222 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.224 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.224 238945 INFO nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Using config drive#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.244 238945 DEBUG nova.storage.rbd_utils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 86e1daac-45e6-441d-8a4a-1294891b69a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.252 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.252 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Ensure instance console log exists: /var/lib/nova/instances/e494a15a-7ac1-47d9-be70-22ec46b36797/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.254 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.254 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.254 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.656 238945 INFO nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Creating config drive at /var/lib/nova/instances/86e1daac-45e6-441d-8a4a-1294891b69a6/disk.config#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.660 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86e1daac-45e6-441d-8a4a-1294891b69a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfz48uylx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:05 np0005597378 nova_compute[238941]: 2026-01-27 13:39:05.782 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86e1daac-45e6-441d-8a4a-1294891b69a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfz48uylx" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 77 MiB data, 187 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.078 238945 DEBUG nova.storage.rbd_utils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 86e1daac-45e6-441d-8a4a-1294891b69a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.082 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86e1daac-45e6-441d-8a4a-1294891b69a6/disk.config 86e1daac-45e6-441d-8a4a-1294891b69a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.195 238945 DEBUG oslo_concurrency.processutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86e1daac-45e6-441d-8a4a-1294891b69a6/disk.config 86e1daac-45e6-441d-8a4a-1294891b69a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.196 238945 INFO nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Deleting local config drive /var/lib/nova/instances/86e1daac-45e6-441d-8a4a-1294891b69a6/disk.config because it was imported into RBD.#033[00m
Jan 27 08:39:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:39:06 np0005597378 systemd-machined[207425]: New machine qemu-2-instance-00000002.
Jan 27 08:39:06 np0005597378 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.721 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521146.7205973, 86e1daac-45e6-441d-8a4a-1294891b69a6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.722 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.725 238945 DEBUG nova.compute.manager [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.725 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.728 238945 INFO nova.virt.libvirt.driver [-] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Instance spawned successfully.#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.729 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.777 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.782 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.782 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.782 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.783 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.783 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.783 238945 DEBUG nova.virt.libvirt.driver [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.787 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.844 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.844 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521146.7219732, 86e1daac-45e6-441d-8a4a-1294891b69a6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.844 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] VM Started (Lifecycle Event)#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.949 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.952 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.966 238945 INFO nova.compute.manager [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Took 4.24 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.966 238945 DEBUG nova.compute.manager [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:39:06 np0005597378 nova_compute[238941]: 2026-01-27 13:39:06.993 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:39:07 np0005597378 nova_compute[238941]: 2026-01-27 13:39:07.067 238945 INFO nova.compute.manager [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Took 5.35 seconds to build instance.#033[00m
Jan 27 08:39:07 np0005597378 nova_compute[238941]: 2026-01-27 13:39:07.091 238945 DEBUG oslo_concurrency.lockutils [None req-e5f25775-4b9e-460e-a868-f9d0084ae2a0 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "86e1daac-45e6-441d-8a4a-1294891b69a6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:07 np0005597378 nova_compute[238941]: 2026-01-27 13:39:07.344 238945 DEBUG nova.network.neutron [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Successfully created port: ee6301a2-f8c5-49f5-a6f6-5885ad339b05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:39:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 102 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 27 08:39:08 np0005597378 nova_compute[238941]: 2026-01-27 13:39:08.756 238945 DEBUG oslo_concurrency.lockutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Acquiring lock "86e1daac-45e6-441d-8a4a-1294891b69a6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:08 np0005597378 nova_compute[238941]: 2026-01-27 13:39:08.757 238945 DEBUG oslo_concurrency.lockutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Lock "86e1daac-45e6-441d-8a4a-1294891b69a6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:08 np0005597378 nova_compute[238941]: 2026-01-27 13:39:08.757 238945 DEBUG oslo_concurrency.lockutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Acquiring lock "86e1daac-45e6-441d-8a4a-1294891b69a6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:08 np0005597378 nova_compute[238941]: 2026-01-27 13:39:08.757 238945 DEBUG oslo_concurrency.lockutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Lock "86e1daac-45e6-441d-8a4a-1294891b69a6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:08 np0005597378 nova_compute[238941]: 2026-01-27 13:39:08.758 238945 DEBUG oslo_concurrency.lockutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Lock "86e1daac-45e6-441d-8a4a-1294891b69a6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:08 np0005597378 nova_compute[238941]: 2026-01-27 13:39:08.759 238945 INFO nova.compute.manager [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Terminating instance#033[00m
Jan 27 08:39:08 np0005597378 nova_compute[238941]: 2026-01-27 13:39:08.759 238945 DEBUG oslo_concurrency.lockutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Acquiring lock "refresh_cache-86e1daac-45e6-441d-8a4a-1294891b69a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:39:08 np0005597378 nova_compute[238941]: 2026-01-27 13:39:08.760 238945 DEBUG oslo_concurrency.lockutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Acquired lock "refresh_cache-86e1daac-45e6-441d-8a4a-1294891b69a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:39:08 np0005597378 nova_compute[238941]: 2026-01-27 13:39:08.760 238945 DEBUG nova.network.neutron [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.055 238945 DEBUG nova.network.neutron [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.123 238945 DEBUG nova.network.neutron [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Successfully updated port: ee6301a2-f8c5-49f5-a6f6-5885ad339b05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.144 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "refresh_cache-e494a15a-7ac1-47d9-be70-22ec46b36797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.144 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquired lock "refresh_cache-e494a15a-7ac1-47d9-be70-22ec46b36797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.144 238945 DEBUG nova.network.neutron [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.476 238945 DEBUG nova.network.neutron [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.557 238945 DEBUG nova.network.neutron [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.587 238945 DEBUG oslo_concurrency.lockutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Releasing lock "refresh_cache-86e1daac-45e6-441d-8a4a-1294891b69a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.588 238945 DEBUG nova.compute.manager [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.593 238945 DEBUG nova.compute.manager [req-ace2a8c0-d822-41f0-b1a9-e5f280413f20 req-5be5a78c-dd27-4320-8145-3ace2493c929 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received event network-changed-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.593 238945 DEBUG nova.compute.manager [req-ace2a8c0-d822-41f0-b1a9-e5f280413f20 req-5be5a78c-dd27-4320-8145-3ace2493c929 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Refreshing instance network info cache due to event network-changed-ee6301a2-f8c5-49f5-a6f6-5885ad339b05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.593 238945 DEBUG oslo_concurrency.lockutils [req-ace2a8c0-d822-41f0-b1a9-e5f280413f20 req-5be5a78c-dd27-4320-8145-3ace2493c929 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e494a15a-7ac1-47d9-be70-22ec46b36797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:39:09 np0005597378 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 27 08:39:09 np0005597378 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 3.314s CPU time.
Jan 27 08:39:09 np0005597378 systemd-machined[207425]: Machine qemu-2-instance-00000002 terminated.
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.814 238945 INFO nova.virt.libvirt.driver [-] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Instance destroyed successfully.#033[00m
Jan 27 08:39:09 np0005597378 nova_compute[238941]: 2026-01-27 13:39:09.815 238945 DEBUG nova.objects.instance [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Lazy-loading 'resources' on Instance uuid 86e1daac-45e6-441d-8a4a-1294891b69a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:39:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 134 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 125 op/s
Jan 27 08:39:10 np0005597378 nova_compute[238941]: 2026-01-27 13:39:10.181 238945 INFO nova.virt.libvirt.driver [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Deleting instance files /var/lib/nova/instances/86e1daac-45e6-441d-8a4a-1294891b69a6_del#033[00m
Jan 27 08:39:10 np0005597378 nova_compute[238941]: 2026-01-27 13:39:10.182 238945 INFO nova.virt.libvirt.driver [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Deletion of /var/lib/nova/instances/86e1daac-45e6-441d-8a4a-1294891b69a6_del complete#033[00m
Jan 27 08:39:10 np0005597378 nova_compute[238941]: 2026-01-27 13:39:10.391 238945 INFO nova.compute.manager [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:39:10 np0005597378 nova_compute[238941]: 2026-01-27 13:39:10.392 238945 DEBUG oslo.service.loopingcall [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:39:10 np0005597378 nova_compute[238941]: 2026-01-27 13:39:10.395 238945 DEBUG nova.compute.manager [-] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:39:10 np0005597378 nova_compute[238941]: 2026-01-27 13:39:10.396 238945 DEBUG nova.network.neutron [-] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:39:10 np0005597378 nova_compute[238941]: 2026-01-27 13:39:10.696 238945 DEBUG nova.network.neutron [-] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:39:10 np0005597378 nova_compute[238941]: 2026-01-27 13:39:10.719 238945 DEBUG nova.network.neutron [-] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:39:10 np0005597378 nova_compute[238941]: 2026-01-27 13:39:10.742 238945 INFO nova.compute.manager [-] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Took 0.35 seconds to deallocate network for instance.#033[00m
Jan 27 08:39:10 np0005597378 nova_compute[238941]: 2026-01-27 13:39:10.800 238945 DEBUG oslo_concurrency.lockutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:10 np0005597378 nova_compute[238941]: 2026-01-27 13:39:10.801 238945 DEBUG oslo_concurrency.lockutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:10 np0005597378 nova_compute[238941]: 2026-01-27 13:39:10.909 238945 DEBUG oslo_concurrency.processutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:10 np0005597378 nova_compute[238941]: 2026-01-27 13:39:10.982 238945 DEBUG nova.network.neutron [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Updating instance_info_cache with network_info: [{"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.021 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Releasing lock "refresh_cache-e494a15a-7ac1-47d9-be70-22ec46b36797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.022 238945 DEBUG nova.compute.manager [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Instance network_info: |[{"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.023 238945 DEBUG oslo_concurrency.lockutils [req-ace2a8c0-d822-41f0-b1a9-e5f280413f20 req-5be5a78c-dd27-4320-8145-3ace2493c929 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e494a15a-7ac1-47d9-be70-22ec46b36797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.023 238945 DEBUG nova.network.neutron [req-ace2a8c0-d822-41f0-b1a9-e5f280413f20 req-5be5a78c-dd27-4320-8145-3ace2493c929 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Refreshing network info cache for port ee6301a2-f8c5-49f5-a6f6-5885ad339b05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.027 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Start _get_guest_xml network_info=[{"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.037 238945 WARNING nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.066 238945 DEBUG nova.virt.libvirt.host [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.067 238945 DEBUG nova.virt.libvirt.host [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.071 238945 DEBUG nova.virt.libvirt.host [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.071 238945 DEBUG nova.virt.libvirt.host [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.072 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.072 238945 DEBUG nova.virt.hardware [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='521742722',id=21,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1964824453',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.073 238945 DEBUG nova.virt.hardware [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.073 238945 DEBUG nova.virt.hardware [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.074 238945 DEBUG nova.virt.hardware [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.074 238945 DEBUG nova.virt.hardware [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.074 238945 DEBUG nova.virt.hardware [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.074 238945 DEBUG nova.virt.hardware [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.075 238945 DEBUG nova.virt.hardware [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.075 238945 DEBUG nova.virt.hardware [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.075 238945 DEBUG nova.virt.hardware [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.077 238945 DEBUG nova.virt.hardware [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.081 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1699528867' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.595 238945 DEBUG oslo_concurrency.processutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.607 238945 DEBUG nova.compute.provider_tree [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.622 238945 DEBUG nova.scheduler.client.report [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.644 238945 DEBUG oslo_concurrency.lockutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.674 238945 INFO nova.scheduler.client.report [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Deleted allocations for instance 86e1daac-45e6-441d-8a4a-1294891b69a6#033[00m
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3785687053' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.731 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.762 238945 DEBUG nova.storage.rbd_utils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image e494a15a-7ac1-47d9-be70-22ec46b36797_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.770 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:11 np0005597378 nova_compute[238941]: 2026-01-27 13:39:11.791 238945 DEBUG oslo_concurrency.lockutils [None req-7f0abad6-6208-4954-85e7-e2302a9f4142 d654a76fa8564e6ea4c08125068680d4 4b08235dd0f84b2f8dcb0dfb608d370b - - default default] Lock "86e1daac-45e6-441d-8a4a-1294891b69a6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:12 np0005597378 podman[246831]: 2026-01-27 13:39:12.021983971 +0000 UTC m=+0.045006594 container create 21db081a73c0d3206ac27db8e319b74f09dceba00c5dcebc660f668f444ce138 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Jan 27 08:39:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 134 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 125 op/s
Jan 27 08:39:12 np0005597378 systemd[1]: Started libpod-conmon-21db081a73c0d3206ac27db8e319b74f09dceba00c5dcebc660f668f444ce138.scope.
Jan 27 08:39:12 np0005597378 podman[246831]: 2026-01-27 13:39:11.998780345 +0000 UTC m=+0.021803028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:39:12 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:39:12 np0005597378 podman[246831]: 2026-01-27 13:39:12.115895323 +0000 UTC m=+0.138917976 container init 21db081a73c0d3206ac27db8e319b74f09dceba00c5dcebc660f668f444ce138 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 08:39:12 np0005597378 podman[246831]: 2026-01-27 13:39:12.12361446 +0000 UTC m=+0.146637093 container start 21db081a73c0d3206ac27db8e319b74f09dceba00c5dcebc660f668f444ce138 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:39:12 np0005597378 ecstatic_ishizaka[246847]: 167 167
Jan 27 08:39:12 np0005597378 systemd[1]: libpod-21db081a73c0d3206ac27db8e319b74f09dceba00c5dcebc660f668f444ce138.scope: Deactivated successfully.
Jan 27 08:39:12 np0005597378 conmon[246847]: conmon 21db081a73c0d3206ac2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21db081a73c0d3206ac27db8e319b74f09dceba00c5dcebc660f668f444ce138.scope/container/memory.events
Jan 27 08:39:12 np0005597378 podman[246831]: 2026-01-27 13:39:12.132354727 +0000 UTC m=+0.155377350 container attach 21db081a73c0d3206ac27db8e319b74f09dceba00c5dcebc660f668f444ce138 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ishizaka, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:39:12 np0005597378 podman[246831]: 2026-01-27 13:39:12.13287872 +0000 UTC m=+0.155901343 container died 21db081a73c0d3206ac27db8e319b74f09dceba00c5dcebc660f668f444ce138 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Jan 27 08:39:12 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fd3e4e2ab1dae849d2a6d15e0539d2067e6f8e136505414009cdeb8e397e59ba-merged.mount: Deactivated successfully.
Jan 27 08:39:12 np0005597378 podman[246831]: 2026-01-27 13:39:12.197268186 +0000 UTC m=+0.220290809 container remove 21db081a73c0d3206ac27db8e319b74f09dceba00c5dcebc660f668f444ce138 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ishizaka, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:39:12 np0005597378 systemd[1]: libpod-conmon-21db081a73c0d3206ac27db8e319b74f09dceba00c5dcebc660f668f444ce138.scope: Deactivated successfully.
Jan 27 08:39:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:39:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:39:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:39:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:39:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1617333177' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:39:12 np0005597378 podman[246870]: 2026-01-27 13:39:12.400646548 +0000 UTC m=+0.054914101 container create 226a88369006fee9ff9827e149d7b82e40fef72d862c39dd411e3fff3358bbf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.400 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.402 238945 DEBUG nova.virt.libvirt.vif [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:39:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1717948008',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1717948008',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(21),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1717948008',id=3,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=21,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMwzPpGs9i367sw8MOocvigF4zqItmlmPkAlaT0CtVf6ehKNBvAu8CNmGhT+K+n26UIw0F1D+s8EmBKRSNqCInwW2US8JTvgufbvsYeddIAe+Q1kookPhPKMV19zyKHZsw==',key_name='tempest-keypair-347425152',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c8760bce6747b1a4ba3511f8705506',ramdisk_id='',reservation_id='r-7qhtsdc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-418976095',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-418976095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:39:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11072876e4694e33bece015a47248409',uuid=e494a15a-7ac1-47d9-be70-22ec46b36797,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.403 238945 DEBUG nova.network.os_vif_util [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converting VIF {"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.403 238945 DEBUG nova.network.os_vif_util [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:7c:f2,bridge_name='br-int',has_traffic_filtering=True,id=ee6301a2-f8c5-49f5-a6f6-5885ad339b05,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee6301a2-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.405 238945 DEBUG nova.objects.instance [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lazy-loading 'pci_devices' on Instance uuid e494a15a-7ac1-47d9-be70-22ec46b36797 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.420 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  <uuid>e494a15a-7ac1-47d9-be70-22ec46b36797</uuid>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  <name>instance-00000003</name>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1717948008</nova:name>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:39:11</nova:creationTime>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-1964824453">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:        <nova:user uuid="11072876e4694e33bece015a47248409">tempest-ServersWithSpecificFlavorTestJSON-418976095-project-member</nova:user>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:        <nova:project uuid="e6c8760bce6747b1a4ba3511f8705506">tempest-ServersWithSpecificFlavorTestJSON-418976095</nova:project>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:        <nova:port uuid="ee6301a2-f8c5-49f5-a6f6-5885ad339b05">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <entry name="serial">e494a15a-7ac1-47d9-be70-22ec46b36797</entry>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <entry name="uuid">e494a15a-7ac1-47d9-be70-22ec46b36797</entry>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e494a15a-7ac1-47d9-be70-22ec46b36797_disk">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e494a15a-7ac1-47d9-be70-22ec46b36797_disk.config">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:90:7c:f2"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <target dev="tapee6301a2-f8"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/e494a15a-7ac1-47d9-be70-22ec46b36797/console.log" append="off"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:39:12 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:39:12 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:39:12 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:39:12 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.421 238945 DEBUG nova.compute.manager [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Preparing to wait for external event network-vif-plugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.422 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.422 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.422 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.423 238945 DEBUG nova.virt.libvirt.vif [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:39:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1717948008',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1717948008',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(21),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1717948008',id=3,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=21,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMwzPpGs9i367sw8MOocvigF4zqItmlmPkAlaT0CtVf6ehKNBvAu8CNmGhT+K+n26UIw0F1D+s8EmBKRSNqCInwW2US8JTvgufbvsYeddIAe+Q1kookPhPKMV19zyKHZsw==',key_name='tempest-keypair-347425152',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c8760bce6747b1a4ba3511f8705506',ramdisk_id='',reservation_id='r-7qhtsdc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-418976095',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-418976095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:39:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11072876e4694e33bece015a47248409',uuid=e494a15a-7ac1-47d9-be70-22ec46b36797,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.423 238945 DEBUG nova.network.os_vif_util [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converting VIF {"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.424 238945 DEBUG nova.network.os_vif_util [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:7c:f2,bridge_name='br-int',has_traffic_filtering=True,id=ee6301a2-f8c5-49f5-a6f6-5885ad339b05,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee6301a2-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.425 238945 DEBUG os_vif [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:7c:f2,bridge_name='br-int',has_traffic_filtering=True,id=ee6301a2-f8c5-49f5-a6f6-5885ad339b05,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee6301a2-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:39:12 np0005597378 systemd[1]: Started libpod-conmon-226a88369006fee9ff9827e149d7b82e40fef72d862c39dd411e3fff3358bbf5.scope.
Jan 27 08:39:12 np0005597378 podman[246870]: 2026-01-27 13:39:12.377918945 +0000 UTC m=+0.032186528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:39:12 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.480 238945 DEBUG nova.network.neutron [req-ace2a8c0-d822-41f0-b1a9-e5f280413f20 req-5be5a78c-dd27-4320-8145-3ace2493c929 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Updated VIF entry in instance network info cache for port ee6301a2-f8c5-49f5-a6f6-5885ad339b05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.480 238945 DEBUG nova.network.neutron [req-ace2a8c0-d822-41f0-b1a9-e5f280413f20 req-5be5a78c-dd27-4320-8145-3ace2493c929 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Updating instance_info_cache with network_info: [{"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:39:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eda364ca0a9cecf5d95111d1bf3af8fb8f9248d4ca57b1cfaa1a96f834ebd60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eda364ca0a9cecf5d95111d1bf3af8fb8f9248d4ca57b1cfaa1a96f834ebd60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eda364ca0a9cecf5d95111d1bf3af8fb8f9248d4ca57b1cfaa1a96f834ebd60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eda364ca0a9cecf5d95111d1bf3af8fb8f9248d4ca57b1cfaa1a96f834ebd60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eda364ca0a9cecf5d95111d1bf3af8fb8f9248d4ca57b1cfaa1a96f834ebd60/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.490 238945 DEBUG ovsdbapp.backend.ovs_idl [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.490 238945 DEBUG ovsdbapp.backend.ovs_idl [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.490 238945 DEBUG ovsdbapp.backend.ovs_idl [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.491 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.493 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.495 238945 DEBUG oslo_concurrency.lockutils [req-ace2a8c0-d822-41f0-b1a9-e5f280413f20 req-5be5a78c-dd27-4320-8145-3ace2493c929 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e494a15a-7ac1-47d9-be70-22ec46b36797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.497 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:12 np0005597378 podman[246870]: 2026-01-27 13:39:12.503979123 +0000 UTC m=+0.158246716 container init 226a88369006fee9ff9827e149d7b82e40fef72d862c39dd411e3fff3358bbf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mclaren, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.508 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.509 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.509 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:39:12 np0005597378 nova_compute[238941]: 2026-01-27 13:39:12.510 238945 INFO oslo.privsep.daemon [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp04jmoo9b/privsep.sock']#033[00m
Jan 27 08:39:12 np0005597378 podman[246870]: 2026-01-27 13:39:12.512248796 +0000 UTC m=+0.166516349 container start 226a88369006fee9ff9827e149d7b82e40fef72d862c39dd411e3fff3358bbf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:39:12 np0005597378 podman[246870]: 2026-01-27 13:39:12.517600831 +0000 UTC m=+0.171868644 container attach 226a88369006fee9ff9827e149d7b82e40fef72d862c39dd411e3fff3358bbf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:39:12 np0005597378 admiring_mclaren[246888]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:39:12 np0005597378 admiring_mclaren[246888]: --> All data devices are unavailable
Jan 27 08:39:13 np0005597378 systemd[1]: libpod-226a88369006fee9ff9827e149d7b82e40fef72d862c39dd411e3fff3358bbf5.scope: Deactivated successfully.
Jan 27 08:39:13 np0005597378 podman[246870]: 2026-01-27 13:39:13.025684186 +0000 UTC m=+0.679951769 container died 226a88369006fee9ff9827e149d7b82e40fef72d862c39dd411e3fff3358bbf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:39:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1eda364ca0a9cecf5d95111d1bf3af8fb8f9248d4ca57b1cfaa1a96f834ebd60-merged.mount: Deactivated successfully.
Jan 27 08:39:13 np0005597378 podman[246870]: 2026-01-27 13:39:13.097519413 +0000 UTC m=+0.751786966 container remove 226a88369006fee9ff9827e149d7b82e40fef72d862c39dd411e3fff3358bbf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 08:39:13 np0005597378 systemd[1]: libpod-conmon-226a88369006fee9ff9827e149d7b82e40fef72d862c39dd411e3fff3358bbf5.scope: Deactivated successfully.
Jan 27 08:39:13 np0005597378 nova_compute[238941]: 2026-01-27 13:39:13.597 238945 INFO oslo.privsep.daemon [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 27 08:39:13 np0005597378 nova_compute[238941]: 2026-01-27 13:39:13.441 246975 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 27 08:39:13 np0005597378 nova_compute[238941]: 2026-01-27 13:39:13.446 246975 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 27 08:39:13 np0005597378 nova_compute[238941]: 2026-01-27 13:39:13.448 246975 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Jan 27 08:39:13 np0005597378 nova_compute[238941]: 2026-01-27 13:39:13.449 246975 INFO oslo.privsep.daemon [-] privsep daemon running as pid 246975#033[00m
Jan 27 08:39:13 np0005597378 podman[246988]: 2026-01-27 13:39:13.585537957 +0000 UTC m=+0.026262168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:39:13 np0005597378 podman[246988]: 2026-01-27 13:39:13.743205727 +0000 UTC m=+0.183929908 container create 5a727724c99cb3492b727267206f3dc4759a3f63da903c499dcb186d0e08500d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 08:39:13 np0005597378 systemd[1]: Started libpod-conmon-5a727724c99cb3492b727267206f3dc4759a3f63da903c499dcb186d0e08500d.scope.
Jan 27 08:39:13 np0005597378 nova_compute[238941]: 2026-01-27 13:39:13.905 238945 DEBUG oslo_concurrency.lockutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Acquiring lock "90745d09-8b6e-47d6-aa1b-bf6df105c5a2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:13 np0005597378 nova_compute[238941]: 2026-01-27 13:39:13.907 238945 DEBUG oslo_concurrency.lockutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "90745d09-8b6e-47d6-aa1b-bf6df105c5a2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:13 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:39:13 np0005597378 nova_compute[238941]: 2026-01-27 13:39:13.924 238945 DEBUG nova.compute.manager [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:39:13 np0005597378 nova_compute[238941]: 2026-01-27 13:39:13.986 238945 DEBUG oslo_concurrency.lockutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:13 np0005597378 nova_compute[238941]: 2026-01-27 13:39:13.987 238945 DEBUG oslo_concurrency.lockutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:13 np0005597378 nova_compute[238941]: 2026-01-27 13:39:13.996 238945 DEBUG nova.virt.hardware [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:39:13 np0005597378 nova_compute[238941]: 2026-01-27 13:39:13.998 238945 INFO nova.compute.claims [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:39:14 np0005597378 podman[246988]: 2026-01-27 13:39:14.029226107 +0000 UTC m=+0.469950318 container init 5a727724c99cb3492b727267206f3dc4759a3f63da903c499dcb186d0e08500d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_yonath, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 27 08:39:14 np0005597378 podman[246988]: 2026-01-27 13:39:14.035546838 +0000 UTC m=+0.476271019 container start 5a727724c99cb3492b727267206f3dc4759a3f63da903c499dcb186d0e08500d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:39:14 np0005597378 vigilant_yonath[247007]: 167 167
Jan 27 08:39:14 np0005597378 systemd[1]: libpod-5a727724c99cb3492b727267206f3dc4759a3f63da903c499dcb186d0e08500d.scope: Deactivated successfully.
Jan 27 08:39:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 104 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 131 op/s
Jan 27 08:39:14 np0005597378 podman[246988]: 2026-01-27 13:39:14.067956781 +0000 UTC m=+0.508680972 container attach 5a727724c99cb3492b727267206f3dc4759a3f63da903c499dcb186d0e08500d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 08:39:14 np0005597378 podman[246988]: 2026-01-27 13:39:14.068475025 +0000 UTC m=+0.509199226 container died 5a727724c99cb3492b727267206f3dc4759a3f63da903c499dcb186d0e08500d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_yonath, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.110 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:14 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3c4134a7e4ee8da2cc62a06837772dd510b9c8e65ca2463255fde7d73bab650a-merged.mount: Deactivated successfully.
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.670 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.670 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee6301a2-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.671 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee6301a2-f8, col_values=(('external_ids', {'iface-id': 'ee6301a2-f8c5-49f5-a6f6-5885ad339b05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:7c:f2', 'vm-uuid': 'e494a15a-7ac1-47d9-be70-22ec46b36797'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.673 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:14 np0005597378 podman[246988]: 2026-01-27 13:39:14.673527035 +0000 UTC m=+1.114251216 container remove 5a727724c99cb3492b727267206f3dc4759a3f63da903c499dcb186d0e08500d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_yonath, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 08:39:14 np0005597378 NetworkManager[48904]: <info>  [1769521154.6747] manager: (tapee6301a2-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.683 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.684 238945 INFO os_vif [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:7c:f2,bridge_name='br-int',has_traffic_filtering=True,id=ee6301a2-f8c5-49f5-a6f6-5885ad339b05,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee6301a2-f8')#033[00m
Jan 27 08:39:14 np0005597378 systemd[1]: libpod-conmon-5a727724c99cb3492b727267206f3dc4759a3f63da903c499dcb186d0e08500d.scope: Deactivated successfully.
Jan 27 08:39:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:39:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1215197460' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.747 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.756 238945 DEBUG nova.compute.provider_tree [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.776 238945 DEBUG nova.scheduler.client.report [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.797 238945 DEBUG oslo_concurrency.lockutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.798 238945 DEBUG nova.compute.manager [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.848 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.848 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.849 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] No VIF found with MAC fa:16:3e:90:7c:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.849 238945 INFO nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Using config drive#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.867 238945 DEBUG nova.storage.rbd_utils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image e494a15a-7ac1-47d9-be70-22ec46b36797_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.875 238945 DEBUG nova.compute.manager [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.875 238945 DEBUG nova.network.neutron [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.893 238945 INFO nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:39:14 np0005597378 nova_compute[238941]: 2026-01-27 13:39:14.909 238945 DEBUG nova.compute.manager [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:39:14 np0005597378 podman[247055]: 2026-01-27 13:39:14.819553491 +0000 UTC m=+0.023590457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:39:14 np0005597378 podman[247055]: 2026-01-27 13:39:14.914519141 +0000 UTC m=+0.118556087 container create 9ae5a663fb226a870f616e35c754e04e1caf583e4751fc1268be4848ca3312ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mcnulty, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:39:15 np0005597378 systemd[1]: Started libpod-conmon-9ae5a663fb226a870f616e35c754e04e1caf583e4751fc1268be4848ca3312ee.scope.
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.116 238945 DEBUG nova.compute.manager [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.118 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.119 238945 INFO nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Creating image(s)#033[00m
Jan 27 08:39:15 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.141 238945 DEBUG nova.storage.rbd_utils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fad1e108585e77acd9bc2b98f56a30aa148470ff87df64a32daa9ed117d6dd5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fad1e108585e77acd9bc2b98f56a30aa148470ff87df64a32daa9ed117d6dd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fad1e108585e77acd9bc2b98f56a30aa148470ff87df64a32daa9ed117d6dd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fad1e108585e77acd9bc2b98f56a30aa148470ff87df64a32daa9ed117d6dd5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.169 238945 DEBUG nova.storage.rbd_utils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.192 238945 DEBUG nova.storage.rbd_utils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.197 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:15 np0005597378 podman[247055]: 2026-01-27 13:39:15.231811923 +0000 UTC m=+0.435848879 container init 9ae5a663fb226a870f616e35c754e04e1caf583e4751fc1268be4848ca3312ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mcnulty, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:39:15 np0005597378 podman[247055]: 2026-01-27 13:39:15.239704816 +0000 UTC m=+0.443741762 container start 9ae5a663fb226a870f616e35c754e04e1caf583e4751fc1268be4848ca3312ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.262 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.263 238945 DEBUG oslo_concurrency.lockutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.264 238945 DEBUG oslo_concurrency.lockutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.265 238945 DEBUG oslo_concurrency.lockutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.290 238945 DEBUG nova.storage.rbd_utils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.295 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:15 np0005597378 podman[247055]: 2026-01-27 13:39:15.316177018 +0000 UTC m=+0.520213984 container attach 9ae5a663fb226a870f616e35c754e04e1caf583e4751fc1268be4848ca3312ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]: {
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:    "0": [
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:        {
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "devices": [
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "/dev/loop3"
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            ],
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_name": "ceph_lv0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_size": "21470642176",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "name": "ceph_lv0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "tags": {
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.cluster_name": "ceph",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.crush_device_class": "",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.encrypted": "0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.objectstore": "bluestore",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.osd_id": "0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.type": "block",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.vdo": "0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.with_tpm": "0"
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            },
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "type": "block",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "vg_name": "ceph_vg0"
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:        }
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:    ],
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:    "1": [
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:        {
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "devices": [
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "/dev/loop4"
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            ],
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_name": "ceph_lv1",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_size": "21470642176",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "name": "ceph_lv1",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "tags": {
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.cluster_name": "ceph",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.crush_device_class": "",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.encrypted": "0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.objectstore": "bluestore",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.osd_id": "1",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.type": "block",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.vdo": "0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.with_tpm": "0"
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            },
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "type": "block",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "vg_name": "ceph_vg1"
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:        }
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:    ],
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:    "2": [
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:        {
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "devices": [
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "/dev/loop5"
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            ],
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_name": "ceph_lv2",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_size": "21470642176",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "name": "ceph_lv2",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "tags": {
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.cluster_name": "ceph",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.crush_device_class": "",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.encrypted": "0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.objectstore": "bluestore",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.osd_id": "2",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.type": "block",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.vdo": "0",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:                "ceph.with_tpm": "0"
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            },
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "type": "block",
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:            "vg_name": "ceph_vg2"
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:        }
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]:    ]
Jan 27 08:39:15 np0005597378 inspiring_mcnulty[247090]: }
Jan 27 08:39:15 np0005597378 systemd[1]: libpod-9ae5a663fb226a870f616e35c754e04e1caf583e4751fc1268be4848ca3312ee.scope: Deactivated successfully.
Jan 27 08:39:15 np0005597378 podman[247055]: 2026-01-27 13:39:15.573571625 +0000 UTC m=+0.777608571 container died 9ae5a663fb226a870f616e35c754e04e1caf583e4751fc1268be4848ca3312ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mcnulty, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.900 238945 DEBUG nova.network.neutron [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 27 08:39:15 np0005597378 nova_compute[238941]: 2026-01-27 13:39:15.901 238945 DEBUG nova.compute.manager [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 08:39:16 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5fad1e108585e77acd9bc2b98f56a30aa148470ff87df64a32daa9ed117d6dd5-merged.mount: Deactivated successfully.
Jan 27 08:39:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 147 op/s
Jan 27 08:39:16 np0005597378 nova_compute[238941]: 2026-01-27 13:39:16.081 238945 INFO nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Creating config drive at /var/lib/nova/instances/e494a15a-7ac1-47d9-be70-22ec46b36797/disk.config
Jan 27 08:39:16 np0005597378 nova_compute[238941]: 2026-01-27 13:39:16.089 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e494a15a-7ac1-47d9-be70-22ec46b36797/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_9dezb5l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:39:16 np0005597378 nova_compute[238941]: 2026-01-27 13:39:16.216 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e494a15a-7ac1-47d9-be70-22ec46b36797/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_9dezb5l" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:16 np0005597378 nova_compute[238941]: 2026-01-27 13:39:16.241 238945 DEBUG nova.storage.rbd_utils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image e494a15a-7ac1-47d9-be70-22ec46b36797_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:39:16 np0005597378 nova_compute[238941]: 2026-01-27 13:39:16.246 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e494a15a-7ac1-47d9-be70-22ec46b36797/disk.config e494a15a-7ac1-47d9-be70-22ec46b36797_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:16 np0005597378 podman[247055]: 2026-01-27 13:39:16.442759894 +0000 UTC m=+1.646796840 container remove 9ae5a663fb226a870f616e35c754e04e1caf583e4751fc1268be4848ca3312ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 08:39:16 np0005597378 systemd[1]: libpod-conmon-9ae5a663fb226a870f616e35c754e04e1caf583e4751fc1268be4848ca3312ee.scope: Deactivated successfully.
Jan 27 08:39:16 np0005597378 nova_compute[238941]: 2026-01-27 13:39:16.584 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:16 np0005597378 nova_compute[238941]: 2026-01-27 13:39:16.646 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:16 np0005597378 nova_compute[238941]: 2026-01-27 13:39:16.651 238945 DEBUG nova.storage.rbd_utils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] resizing rbd image 90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:39:17
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:39:17 np0005597378 podman[247360]: 2026-01-27 13:39:16.922771613 +0000 UTC m=+0.023856604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:39:17 np0005597378 podman[247360]: 2026-01-27 13:39:17.038402821 +0000 UTC m=+0.139487802 container create 0b01195e18683169dfe90fc8dc504a911c484b2f1f8078a1d4e135a7fe4b3f32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 08:39:17 np0005597378 systemd[1]: Started libpod-conmon-0b01195e18683169dfe90fc8dc504a911c484b2f1f8078a1d4e135a7fe4b3f32.scope.
Jan 27 08:39:17 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:39:17 np0005597378 podman[247360]: 2026-01-27 13:39:17.445547145 +0000 UTC m=+0.546632146 container init 0b01195e18683169dfe90fc8dc504a911c484b2f1f8078a1d4e135a7fe4b3f32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_franklin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:39:17 np0005597378 podman[247360]: 2026-01-27 13:39:17.452245075 +0000 UTC m=+0.553330046 container start 0b01195e18683169dfe90fc8dc504a911c484b2f1f8078a1d4e135a7fe4b3f32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_franklin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:39:17 np0005597378 recursing_franklin[247376]: 167 167
Jan 27 08:39:17 np0005597378 systemd[1]: libpod-0b01195e18683169dfe90fc8dc504a911c484b2f1f8078a1d4e135a7fe4b3f32.scope: Deactivated successfully.
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.497 238945 DEBUG oslo_concurrency.processutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e494a15a-7ac1-47d9-be70-22ec46b36797/disk.config e494a15a-7ac1-47d9-be70-22ec46b36797_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.498 238945 INFO nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Deleting local config drive /var/lib/nova/instances/e494a15a-7ac1-47d9-be70-22ec46b36797/disk.config because it was imported into RBD.
Jan 27 08:39:17 np0005597378 podman[247360]: 2026-01-27 13:39:17.538748287 +0000 UTC m=+0.639833258 container attach 0b01195e18683169dfe90fc8dc504a911c484b2f1f8078a1d4e135a7fe4b3f32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:39:17 np0005597378 podman[247360]: 2026-01-27 13:39:17.539804106 +0000 UTC m=+0.640889077 container died 0b01195e18683169dfe90fc8dc504a911c484b2f1f8078a1d4e135a7fe4b3f32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.568 238945 DEBUG nova.objects.instance [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 90745d09-8b6e-47d6-aa1b-bf6df105c5a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.582 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.583 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Ensure instance console log exists: /var/lib/nova/instances/90745d09-8b6e-47d6-aa1b-bf6df105c5a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.584 238945 DEBUG oslo_concurrency.lockutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.584 238945 DEBUG oslo_concurrency.lockutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.584 238945 DEBUG oslo_concurrency.lockutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:39:17 np0005597378 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 27 08:39:17 np0005597378 kernel: tapee6301a2-f8: entered promiscuous mode
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.588 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 08:39:17 np0005597378 NetworkManager[48904]: <info>  [1769521157.5905] manager: (tapee6301a2-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Jan 27 08:39:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:39:17Z|00027|binding|INFO|Claiming lport ee6301a2-f8c5-49f5-a6f6-5885ad339b05 for this chassis.
Jan 27 08:39:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:39:17Z|00028|binding|INFO|ee6301a2-f8c5-49f5-a6f6-5885ad339b05: Claiming fa:16:3e:90:7c:f2 10.100.0.4
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.598 238945 WARNING nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.600 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.606 238945 DEBUG nova.virt.libvirt.host [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.607 238945 DEBUG nova.virt.libvirt.host [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:39:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:17.609 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:7c:f2 10.100.0.4'], port_security=['fa:16:3e:90:7c:f2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e494a15a-7ac1-47d9-be70-22ec46b36797', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d498e730-2c72-4423-80f9-9db85c3d90b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c8760bce6747b1a4ba3511f8705506', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31419324-7d4c-43e9-852f-e0d589f988c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=475bcc00-e7b6-41a0-91e0-5d0bed50aab6, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ee6301a2-f8c5-49f5-a6f6-5885ad339b05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:39:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:17.611 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ee6301a2-f8c5-49f5-a6f6-5885ad339b05 in datapath d498e730-2c72-4423-80f9-9db85c3d90b3 bound to our chassis#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.611 238945 DEBUG nova.virt.libvirt.host [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.612 238945 DEBUG nova.virt.libvirt.host [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:39:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:17.613 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d498e730-2c72-4423-80f9-9db85c3d90b3#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.613 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.614 238945 DEBUG nova.virt.hardware [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.615 238945 DEBUG nova.virt.hardware [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:39:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:17.615 154802 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmph_n46b58/privsep.sock']#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.615 238945 DEBUG nova.virt.hardware [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.615 238945 DEBUG nova.virt.hardware [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.616 238945 DEBUG nova.virt.hardware [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.616 238945 DEBUG nova.virt.hardware [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.616 238945 DEBUG nova.virt.hardware [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.617 238945 DEBUG nova.virt.hardware [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.617 238945 DEBUG nova.virt.hardware [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.617 238945 DEBUG nova.virt.hardware [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.618 238945 DEBUG nova.virt.hardware [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:39:17 np0005597378 systemd-udevd[247424]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.622 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:17 np0005597378 NetworkManager[48904]: <info>  [1769521157.6696] device (tapee6301a2-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:39:17 np0005597378 NetworkManager[48904]: <info>  [1769521157.6708] device (tapee6301a2-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:39:17Z|00029|binding|INFO|Setting lport ee6301a2-f8c5-49f5-a6f6-5885ad339b05 ovn-installed in OVS
Jan 27 08:39:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:39:17Z|00030|binding|INFO|Setting lport ee6301a2-f8c5-49f5-a6f6-5885ad339b05 up in Southbound
Jan 27 08:39:17 np0005597378 systemd-machined[207425]: New machine qemu-3-instance-00000003.
Jan 27 08:39:17 np0005597378 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.893 238945 DEBUG nova.compute.manager [req-c2e76f3e-ba62-443c-bd5f-062bb738fcee req-1b196994-a28d-4a82-815b-7f708d083009 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received event network-vif-plugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.893 238945 DEBUG oslo_concurrency.lockutils [req-c2e76f3e-ba62-443c-bd5f-062bb738fcee req-1b196994-a28d-4a82-815b-7f708d083009 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.893 238945 DEBUG oslo_concurrency.lockutils [req-c2e76f3e-ba62-443c-bd5f-062bb738fcee req-1b196994-a28d-4a82-815b-7f708d083009 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.894 238945 DEBUG oslo_concurrency.lockutils [req-c2e76f3e-ba62-443c-bd5f-062bb738fcee req-1b196994-a28d-4a82-815b-7f708d083009 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:17 np0005597378 nova_compute[238941]: 2026-01-27 13:39:17.894 238945 DEBUG nova.compute.manager [req-c2e76f3e-ba62-443c-bd5f-062bb738fcee req-1b196994-a28d-4a82-815b-7f708d083009 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Processing event network-vif-plugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:39:17 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3679ccfe36a90f81de96ecfe9b38ba931b330ba344f2bda414708aadd9b711b1-merged.mount: Deactivated successfully.
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:39:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:39:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 98 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 135 op/s
Jan 27 08:39:18 np0005597378 podman[247360]: 2026-01-27 13:39:18.106889552 +0000 UTC m=+1.207974523 container remove 0b01195e18683169dfe90fc8dc504a911c484b2f1f8078a1d4e135a7fe4b3f32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_franklin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 08:39:18 np0005597378 systemd[1]: libpod-conmon-0b01195e18683169dfe90fc8dc504a911c484b2f1f8078a1d4e135a7fe4b3f32.scope: Deactivated successfully.
Jan 27 08:39:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:39:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3445815584' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.236 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521158.2340732, e494a15a-7ac1-47d9-be70-22ec46b36797 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.237 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] VM Started (Lifecycle Event)#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.239 238945 DEBUG nova.compute.manager [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.242 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.278 238945 DEBUG nova.storage.rbd_utils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.283 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.306 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.307 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.309 238945 DEBUG oslo_concurrency.processutils [None req-5baf8d6c-cb48-46c6-83d3-9e369f37b354 db87235819c0461e9a89cd4af386623b d520060aaa2c4839bc9d9cf5c27ec078 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.337 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.340 238945 INFO nova.virt.libvirt.driver [-] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Instance spawned successfully.#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.341 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.357 238945 DEBUG oslo_concurrency.processutils [None req-5baf8d6c-cb48-46c6-83d3-9e369f37b354 db87235819c0461e9a89cd4af386623b d520060aaa2c4839bc9d9cf5c27ec078 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.374 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.375 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521158.2341495, e494a15a-7ac1-47d9-be70-22ec46b36797 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.375 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.380 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.381 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.382 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.382 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.383 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.383 238945 DEBUG nova.virt.libvirt.driver [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:18 np0005597378 podman[247513]: 2026-01-27 13:39:18.292203307 +0000 UTC m=+0.051587212 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.390 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.393 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521158.2426314, e494a15a-7ac1-47d9-be70-22ec46b36797 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.394 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:39:18 np0005597378 podman[247513]: 2026-01-27 13:39:18.407759272 +0000 UTC m=+0.167143177 container create ce607590937ec5e8e7eea488c74a737d1ef53d79ffb05d10073e2085049b29db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_merkle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.427 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.431 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.440 238945 INFO nova.compute.manager [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Took 13.81 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.441 238945 DEBUG nova.compute.manager [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.450 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.495 238945 INFO nova.compute.manager [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Took 15.40 seconds to build instance.#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.510 238945 DEBUG oslo_concurrency.lockutils [None req-fde45e5e-3ff1-4a11-a3d0-9c881e0bc58e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:18.530 154802 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 27 08:39:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:18.531 154802 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmph_n46b58/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 27 08:39:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:18.394 247546 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 27 08:39:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:18.399 247546 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 27 08:39:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:18.401 247546 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Jan 27 08:39:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:18.401 247546 INFO oslo.privsep.daemon [-] privsep daemon running as pid 247546#033[00m
Jan 27 08:39:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:18.536 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[de7a76a2-ec5e-4bf7-bb38-8547cefb0b2b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:18 np0005597378 systemd[1]: Started libpod-conmon-ce607590937ec5e8e7eea488c74a737d1ef53d79ffb05d10073e2085049b29db.scope.
Jan 27 08:39:18 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:39:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e765f366386ec210fa394619ba19d41f79f505b31c7aeac160d55a24aa89a861/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e765f366386ec210fa394619ba19d41f79f505b31c7aeac160d55a24aa89a861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e765f366386ec210fa394619ba19d41f79f505b31c7aeac160d55a24aa89a861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e765f366386ec210fa394619ba19d41f79f505b31c7aeac160d55a24aa89a861/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:18 np0005597378 podman[247513]: 2026-01-27 13:39:18.721848858 +0000 UTC m=+0.481232903 container init ce607590937ec5e8e7eea488c74a737d1ef53d79ffb05d10073e2085049b29db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_merkle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:39:18 np0005597378 podman[247513]: 2026-01-27 13:39:18.730613634 +0000 UTC m=+0.489997539 container start ce607590937ec5e8e7eea488c74a737d1ef53d79ffb05d10073e2085049b29db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 08:39:18 np0005597378 podman[247513]: 2026-01-27 13:39:18.761781324 +0000 UTC m=+0.521165259 container attach ce607590937ec5e8e7eea488c74a737d1ef53d79ffb05d10073e2085049b29db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_merkle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:39:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:39:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/79775337' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.884 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.887 238945 DEBUG nova.objects.instance [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 90745d09-8b6e-47d6-aa1b-bf6df105c5a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.900 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  <uuid>90745d09-8b6e-47d6-aa1b-bf6df105c5a2</uuid>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  <name>instance-00000004</name>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1385065177</nova:name>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:39:17</nova:creationTime>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:        <nova:user uuid="4eadd6ee0ee0402aa1b2e18319362088">tempest-DeleteServersAdminTestJSON-1841579421-project-member</nova:user>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:        <nova:project uuid="c9f9b365be6e4ebc8f60e60abbc310b4">tempest-DeleteServersAdminTestJSON-1841579421</nova:project>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <entry name="serial">90745d09-8b6e-47d6-aa1b-bf6df105c5a2</entry>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <entry name="uuid">90745d09-8b6e-47d6-aa1b-bf6df105c5a2</entry>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk.config">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/90745d09-8b6e-47d6-aa1b-bf6df105c5a2/console.log" append="off"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:39:18 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:39:18 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:39:18 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:39:18 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.980 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.981 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:39:18 np0005597378 nova_compute[238941]: 2026-01-27 13:39:18.982 238945 INFO nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Using config drive#033[00m
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.003 238945 DEBUG nova.storage.rbd_utils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.219 238945 INFO nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Creating config drive at /var/lib/nova/instances/90745d09-8b6e-47d6-aa1b-bf6df105c5a2/disk.config#033[00m
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.225 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/90745d09-8b6e-47d6-aa1b-bf6df105c5a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0r71jzl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.357 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/90745d09-8b6e-47d6-aa1b-bf6df105c5a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0r71jzl" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.385 238945 DEBUG nova.storage.rbd_utils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] rbd image 90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.392 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/90745d09-8b6e-47d6-aa1b-bf6df105c5a2/disk.config 90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:19 np0005597378 lvm[247705]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:39:19 np0005597378 lvm[247705]: VG ceph_vg0 finished
Jan 27 08:39:19 np0005597378 lvm[247707]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:39:19 np0005597378 lvm[247707]: VG ceph_vg1 finished
Jan 27 08:39:19 np0005597378 lvm[247708]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:39:19 np0005597378 lvm[247708]: VG ceph_vg0 finished
Jan 27 08:39:19 np0005597378 lvm[247709]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:39:19 np0005597378 lvm[247709]: VG ceph_vg2 finished
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.676 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:19 np0005597378 wizardly_merkle[247569]: {}
Jan 27 08:39:19 np0005597378 systemd[1]: libpod-ce607590937ec5e8e7eea488c74a737d1ef53d79ffb05d10073e2085049b29db.scope: Deactivated successfully.
Jan 27 08:39:19 np0005597378 systemd[1]: libpod-ce607590937ec5e8e7eea488c74a737d1ef53d79ffb05d10073e2085049b29db.scope: Consumed 1.423s CPU time.
Jan 27 08:39:19 np0005597378 conmon[247569]: conmon ce607590937ec5e8e7ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce607590937ec5e8e7eea488c74a737d1ef53d79ffb05d10073e2085049b29db.scope/container/memory.events
Jan 27 08:39:19 np0005597378 podman[247513]: 2026-01-27 13:39:19.757817794 +0000 UTC m=+1.517201699 container died ce607590937ec5e8e7eea488c74a737d1ef53d79ffb05d10073e2085049b29db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_merkle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.775 238945 DEBUG oslo_concurrency.processutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/90745d09-8b6e-47d6-aa1b-bf6df105c5a2/disk.config 90745d09-8b6e-47d6-aa1b-bf6df105c5a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.777 238945 INFO nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Deleting local config drive /var/lib/nova/instances/90745d09-8b6e-47d6-aa1b-bf6df105c5a2/disk.config because it was imported into RBD.#033[00m
Jan 27 08:39:19 np0005597378 systemd-machined[207425]: New machine qemu-4-instance-00000004.
Jan 27 08:39:19 np0005597378 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Jan 27 08:39:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e765f366386ec210fa394619ba19d41f79f505b31c7aeac160d55a24aa89a861-merged.mount: Deactivated successfully.
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.993 238945 DEBUG nova.compute.manager [req-d02a5150-b763-47b9-ba5b-b6b7f9604fff req-f5a500b6-1bf3-4522-8704-40ae0532f749 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received event network-vif-plugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.994 238945 DEBUG oslo_concurrency.lockutils [req-d02a5150-b763-47b9-ba5b-b6b7f9604fff req-f5a500b6-1bf3-4522-8704-40ae0532f749 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.994 238945 DEBUG oslo_concurrency.lockutils [req-d02a5150-b763-47b9-ba5b-b6b7f9604fff req-f5a500b6-1bf3-4522-8704-40ae0532f749 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.995 238945 DEBUG oslo_concurrency.lockutils [req-d02a5150-b763-47b9-ba5b-b6b7f9604fff req-f5a500b6-1bf3-4522-8704-40ae0532f749 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.995 238945 DEBUG nova.compute.manager [req-d02a5150-b763-47b9-ba5b-b6b7f9604fff req-f5a500b6-1bf3-4522-8704-40ae0532f749 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] No waiting events found dispatching network-vif-plugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:39:19 np0005597378 nova_compute[238941]: 2026-01-27 13:39:19.995 238945 WARNING nova.compute.manager [req-d02a5150-b763-47b9-ba5b-b6b7f9604fff req-f5a500b6-1bf3-4522-8704-40ae0532f749 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received unexpected event network-vif-plugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:39:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 134 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.1 MiB/s wr, 178 op/s
Jan 27 08:39:20 np0005597378 podman[247513]: 2026-01-27 13:39:20.26997109 +0000 UTC m=+2.029354995 container remove ce607590937ec5e8e7eea488c74a737d1ef53d79ffb05d10073e2085049b29db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_merkle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 08:39:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:39:20 np0005597378 systemd[1]: libpod-conmon-ce607590937ec5e8e7eea488c74a737d1ef53d79ffb05d10073e2085049b29db.scope: Deactivated successfully.
Jan 27 08:39:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:39:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:39:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:39:20 np0005597378 NetworkManager[48904]: <info>  [1769521160.4804] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.479 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:20 np0005597378 NetworkManager[48904]: <info>  [1769521160.4812] device (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:39:20 np0005597378 NetworkManager[48904]: <warn>  [1769521160.4815] device (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:39:20 np0005597378 NetworkManager[48904]: <info>  [1769521160.4825] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Jan 27 08:39:20 np0005597378 NetworkManager[48904]: <info>  [1769521160.4828] device (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 08:39:20 np0005597378 NetworkManager[48904]: <warn>  [1769521160.4829] device (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 08:39:20 np0005597378 NetworkManager[48904]: <info>  [1769521160.4838] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Jan 27 08:39:20 np0005597378 NetworkManager[48904]: <info>  [1769521160.4845] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 27 08:39:20 np0005597378 NetworkManager[48904]: <info>  [1769521160.4850] device (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 27 08:39:20 np0005597378 NetworkManager[48904]: <info>  [1769521160.4854] device (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.579 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.682 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521160.6824298, 90745d09-8b6e-47d6-aa1b-bf6df105c5a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.683 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.685 238945 DEBUG nova.compute.manager [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.686 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:39:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:20.687 247546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:20.687 247546 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:20.687 247546 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.691 238945 INFO nova.virt.libvirt.driver [-] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Instance spawned successfully.#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.691 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.712 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.718 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.721 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.722 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.722 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.723 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.723 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.724 238945 DEBUG nova.virt.libvirt.driver [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.751 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.752 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521160.6850572, 90745d09-8b6e-47d6-aa1b-bf6df105c5a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.752 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] VM Started (Lifecycle Event)#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.787 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.791 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.810 238945 INFO nova.compute.manager [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Took 5.69 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.811 238945 DEBUG nova.compute.manager [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.812 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.885 238945 INFO nova.compute.manager [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Took 6.92 seconds to build instance.#033[00m
Jan 27 08:39:20 np0005597378 nova_compute[238941]: 2026-01-27 13:39:20.914 238945 DEBUG oslo_concurrency.lockutils [None req-8c97c2d6-22bd-436d-b0ed-10acad0fd192 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "90745d09-8b6e-47d6-aa1b-bf6df105c5a2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:39:21 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:39:21 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:39:21 np0005597378 nova_compute[238941]: 2026-01-27 13:39:21.621 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 134 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1018 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.220 238945 DEBUG nova.compute.manager [req-83be476f-0d33-48df-b50b-db44da134647 req-a8fae4f9-894b-4d27-93ae-c0bff8e65542 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received event network-changed-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.221 238945 DEBUG nova.compute.manager [req-83be476f-0d33-48df-b50b-db44da134647 req-a8fae4f9-894b-4d27-93ae-c0bff8e65542 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Refreshing instance network info cache due to event network-changed-ee6301a2-f8c5-49f5-a6f6-5885ad339b05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.221 238945 DEBUG oslo_concurrency.lockutils [req-83be476f-0d33-48df-b50b-db44da134647 req-a8fae4f9-894b-4d27-93ae-c0bff8e65542 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e494a15a-7ac1-47d9-be70-22ec46b36797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.221 238945 DEBUG oslo_concurrency.lockutils [req-83be476f-0d33-48df-b50b-db44da134647 req-a8fae4f9-894b-4d27-93ae-c0bff8e65542 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e494a15a-7ac1-47d9-be70-22ec46b36797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.222 238945 DEBUG nova.network.neutron [req-83be476f-0d33-48df-b50b-db44da134647 req-a8fae4f9-894b-4d27-93ae-c0bff8e65542 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Refreshing network info cache for port ee6301a2-f8c5-49f5-a6f6-5885ad339b05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:39:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:22.324 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[16815964-9f64-41ca-83ad-17d454e60c14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:22.325 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd498e730-21 in ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:39:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:22.328 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd498e730-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:39:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:22.328 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c1fcfd-963a-4c9a-a150-cc2c4aa566bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:22.335 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ec0a17-131a-4caf-aaf8-752522513aff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:22.362 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[75b4bcf5-e438-4255-98e0-f3bb8228ea7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:22.469 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aca64945-1cbf-430b-a92d-fb20acf60c0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:22.472 154802 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpd16ygj2u/privsep.sock']#033[00m
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.824 238945 DEBUG oslo_concurrency.lockutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Acquiring lock "90745d09-8b6e-47d6-aa1b-bf6df105c5a2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.825 238945 DEBUG oslo_concurrency.lockutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "90745d09-8b6e-47d6-aa1b-bf6df105c5a2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.825 238945 DEBUG oslo_concurrency.lockutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Acquiring lock "90745d09-8b6e-47d6-aa1b-bf6df105c5a2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.825 238945 DEBUG oslo_concurrency.lockutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "90745d09-8b6e-47d6-aa1b-bf6df105c5a2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.826 238945 DEBUG oslo_concurrency.lockutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "90745d09-8b6e-47d6-aa1b-bf6df105c5a2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.826 238945 INFO nova.compute.manager [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Terminating instance#033[00m
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.827 238945 DEBUG oslo_concurrency.lockutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Acquiring lock "refresh_cache-90745d09-8b6e-47d6-aa1b-bf6df105c5a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.827 238945 DEBUG oslo_concurrency.lockutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Acquired lock "refresh_cache-90745d09-8b6e-47d6-aa1b-bf6df105c5a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:39:22 np0005597378 nova_compute[238941]: 2026-01-27 13:39:22.828 238945 DEBUG nova.network.neutron [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:39:23 np0005597378 nova_compute[238941]: 2026-01-27 13:39:23.024 238945 DEBUG nova.network.neutron [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:39:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:23.190 154802 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 27 08:39:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:23.191 154802 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpd16ygj2u/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 27 08:39:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:23.072 247819 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 27 08:39:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:23.076 247819 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 27 08:39:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:23.077 247819 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 27 08:39:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:23.078 247819 INFO oslo.privsep.daemon [-] privsep daemon running as pid 247819#033[00m
Jan 27 08:39:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:23.194 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a6214553-2a56-456a-a734-b5e5df346314]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:23 np0005597378 nova_compute[238941]: 2026-01-27 13:39:23.364 238945 DEBUG nova.network.neutron [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:39:23 np0005597378 nova_compute[238941]: 2026-01-27 13:39:23.377 238945 DEBUG oslo_concurrency.lockutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Releasing lock "refresh_cache-90745d09-8b6e-47d6-aa1b-bf6df105c5a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:39:23 np0005597378 nova_compute[238941]: 2026-01-27 13:39:23.378 238945 DEBUG nova.compute.manager [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:39:23 np0005597378 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 27 08:39:23 np0005597378 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 3.505s CPU time.
Jan 27 08:39:23 np0005597378 systemd-machined[207425]: Machine qemu-4-instance-00000004 terminated.
Jan 27 08:39:23 np0005597378 nova_compute[238941]: 2026-01-27 13:39:23.596 238945 INFO nova.virt.libvirt.driver [-] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Instance destroyed successfully.#033[00m
Jan 27 08:39:23 np0005597378 nova_compute[238941]: 2026-01-27 13:39:23.596 238945 DEBUG nova.objects.instance [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lazy-loading 'resources' on Instance uuid 90745d09-8b6e-47d6-aa1b-bf6df105c5a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:39:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:23.721 247819 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:23.722 247819 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:23.722 247819 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:23 np0005597378 nova_compute[238941]: 2026-01-27 13:39:23.725 238945 DEBUG nova.network.neutron [req-83be476f-0d33-48df-b50b-db44da134647 req-a8fae4f9-894b-4d27-93ae-c0bff8e65542 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Updated VIF entry in instance network info cache for port ee6301a2-f8c5-49f5-a6f6-5885ad339b05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:39:23 np0005597378 nova_compute[238941]: 2026-01-27 13:39:23.726 238945 DEBUG nova.network.neutron [req-83be476f-0d33-48df-b50b-db44da134647 req-a8fae4f9-894b-4d27-93ae-c0bff8e65542 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Updating instance_info_cache with network_info: [{"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:39:23 np0005597378 nova_compute[238941]: 2026-01-27 13:39:23.745 238945 DEBUG oslo_concurrency.lockutils [req-83be476f-0d33-48df-b50b-db44da134647 req-a8fae4f9-894b-4d27-93ae-c0bff8e65542 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e494a15a-7ac1-47d9-be70-22ec46b36797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:39:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 134 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.322 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bd894366-d6de-4d64-9d2d-9c02908130a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.348 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8b77a072-5132-4240-a1cf-dcc1b339625b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:24 np0005597378 NetworkManager[48904]: <info>  [1769521164.3497] manager: (tapd498e730-20): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Jan 27 08:39:24 np0005597378 systemd-udevd[247849]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.382 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[df4e7870-11d4-461f-9bc0-92e77827a1d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.385 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[02b02984-0616-4dad-8a60-18ed1a754fbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:24 np0005597378 NetworkManager[48904]: <info>  [1769521164.4031] device (tapd498e730-20): carrier: link connected
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.408 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e1337fe6-0d45-45e0-a1a1-f11e104c1430]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.426 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eecc161e-2fb1-425d-b888-b431ba7b1df0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd498e730-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:09:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 380402, 'reachable_time': 43411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247869, 'error': None, 'target': 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.442 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a5eb82d0-6552-4742-8b5e-fffb4a2a9765]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:95e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 380402, 'tstamp': 380402}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247870, 'error': None, 'target': 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.458 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f83e39a-d859-493c-8c76-178ed50c0aa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd498e730-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:09:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 380402, 'reachable_time': 43411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247871, 'error': None, 'target': 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.489 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b161f74b-afe1-4c11-99da-a811fbcb8a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.554 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5dcff4-dd7a-4c32-86aa-0276911ac965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.555 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd498e730-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.556 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.556 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd498e730-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:39:24 np0005597378 nova_compute[238941]: 2026-01-27 13:39:24.559 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:24 np0005597378 NetworkManager[48904]: <info>  [1769521164.5598] manager: (tapd498e730-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Jan 27 08:39:24 np0005597378 kernel: tapd498e730-20: entered promiscuous mode
Jan 27 08:39:24 np0005597378 nova_compute[238941]: 2026-01-27 13:39:24.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.563 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd498e730-20, col_values=(('external_ids', {'iface-id': '8c35d240-f8e0-427c-9fae-48cfa2369c72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:39:24 np0005597378 nova_compute[238941]: 2026-01-27 13:39:24.564 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:24 np0005597378 ovn_controller[144812]: 2026-01-27T13:39:24Z|00031|binding|INFO|Releasing lport 8c35d240-f8e0-427c-9fae-48cfa2369c72 from this chassis (sb_readonly=0)
Jan 27 08:39:24 np0005597378 nova_compute[238941]: 2026-01-27 13:39:24.577 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.579 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d498e730-2c72-4423-80f9-9db85c3d90b3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d498e730-2c72-4423-80f9-9db85c3d90b3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.579 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5c775c92-e5d0-4fe3-8309-8bf0a56485ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.580 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-d498e730-2c72-4423-80f9-9db85c3d90b3
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/d498e730-2c72-4423-80f9-9db85c3d90b3.pid.haproxy
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID d498e730-2c72-4423-80f9-9db85c3d90b3
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:39:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:24.581 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'env', 'PROCESS_TAG=haproxy-d498e730-2c72-4423-80f9-9db85c3d90b3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d498e730-2c72-4423-80f9-9db85c3d90b3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:39:24 np0005597378 nova_compute[238941]: 2026-01-27 13:39:24.678 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:24 np0005597378 nova_compute[238941]: 2026-01-27 13:39:24.813 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521149.8122153, 86e1daac-45e6-441d-8a4a-1294891b69a6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:39:24 np0005597378 nova_compute[238941]: 2026-01-27 13:39:24.814 238945 INFO nova.compute.manager [-] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:39:24 np0005597378 nova_compute[238941]: 2026-01-27 13:39:24.833 238945 DEBUG nova.compute.manager [None req-49c6f001-ec71-45d3-88c6-1d7991591421 - - - - - -] [instance: 86e1daac-45e6-441d-8a4a-1294891b69a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:39:25 np0005597378 podman[247906]: 2026-01-27 13:39:24.924987438 +0000 UTC m=+0.023712610 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:39:25 np0005597378 podman[247906]: 2026-01-27 13:39:25.346573722 +0000 UTC m=+0.445298864 container create 9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 27 08:39:25 np0005597378 systemd[1]: Started libpod-conmon-9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879.scope.
Jan 27 08:39:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:39:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147413b571772a677643494592b143becb1d569a2d63e13c83951fecd7e472d7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:39:25 np0005597378 podman[247906]: 2026-01-27 13:39:25.817954349 +0000 UTC m=+0.916679511 container init 9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:39:25 np0005597378 podman[247906]: 2026-01-27 13:39:25.82840682 +0000 UTC m=+0.927131962 container start 9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:39:25 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [NOTICE]   (247925) : New worker (247927) forked
Jan 27 08:39:25 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [NOTICE]   (247925) : Loading success.
Jan 27 08:39:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 118 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Jan 27 08:39:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:39:26 np0005597378 nova_compute[238941]: 2026-01-27 13:39:26.623 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:26 np0005597378 nova_compute[238941]: 2026-01-27 13:39:26.776 238945 INFO nova.virt.libvirt.driver [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Deleting instance files /var/lib/nova/instances/90745d09-8b6e-47d6-aa1b-bf6df105c5a2_del#033[00m
Jan 27 08:39:26 np0005597378 nova_compute[238941]: 2026-01-27 13:39:26.777 238945 INFO nova.virt.libvirt.driver [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Deletion of /var/lib/nova/instances/90745d09-8b6e-47d6-aa1b-bf6df105c5a2_del complete#033[00m
Jan 27 08:39:27 np0005597378 nova_compute[238941]: 2026-01-27 13:39:27.019 238945 INFO nova.compute.manager [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Took 3.64 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:39:27 np0005597378 nova_compute[238941]: 2026-01-27 13:39:27.020 238945 DEBUG oslo.service.loopingcall [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:39:27 np0005597378 nova_compute[238941]: 2026-01-27 13:39:27.020 238945 DEBUG nova.compute.manager [-] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:39:27 np0005597378 nova_compute[238941]: 2026-01-27 13:39:27.020 238945 DEBUG nova.network.neutron [-] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006443909573478919 of space, bias 1.0, pg target 0.19331728720436758 quantized to 32 (current 32)
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666464043446038 of space, bias 1.0, pg target 0.1999392130338114 quantized to 32 (current 32)
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.037633922825554e-06 of space, bias 4.0, pg target 0.0012451607073906646 quantized to 16 (current 16)
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:39:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:39:27 np0005597378 nova_compute[238941]: 2026-01-27 13:39:27.387 238945 DEBUG nova.network.neutron [-] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:39:27 np0005597378 nova_compute[238941]: 2026-01-27 13:39:27.400 238945 DEBUG nova.network.neutron [-] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:39:27 np0005597378 nova_compute[238941]: 2026-01-27 13:39:27.418 238945 INFO nova.compute.manager [-] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Took 0.40 seconds to deallocate network for instance.#033[00m
Jan 27 08:39:27 np0005597378 nova_compute[238941]: 2026-01-27 13:39:27.470 238945 DEBUG oslo_concurrency.lockutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:27 np0005597378 nova_compute[238941]: 2026-01-27 13:39:27.471 238945 DEBUG oslo_concurrency.lockutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:27 np0005597378 nova_compute[238941]: 2026-01-27 13:39:27.546 238945 DEBUG oslo_concurrency.processutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 108 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Jan 27 08:39:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:39:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2421085497' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:39:28 np0005597378 nova_compute[238941]: 2026-01-27 13:39:28.157 238945 DEBUG oslo_concurrency.processutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:28 np0005597378 nova_compute[238941]: 2026-01-27 13:39:28.163 238945 DEBUG nova.compute.provider_tree [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:39:28 np0005597378 nova_compute[238941]: 2026-01-27 13:39:28.178 238945 DEBUG nova.scheduler.client.report [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:39:28 np0005597378 nova_compute[238941]: 2026-01-27 13:39:28.198 238945 DEBUG oslo_concurrency.lockutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:28 np0005597378 nova_compute[238941]: 2026-01-27 13:39:28.223 238945 INFO nova.scheduler.client.report [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Deleted allocations for instance 90745d09-8b6e-47d6-aa1b-bf6df105c5a2#033[00m
Jan 27 08:39:28 np0005597378 nova_compute[238941]: 2026-01-27 13:39:28.295 238945 DEBUG oslo_concurrency.lockutils [None req-127eeffa-ebd7-4a36-8a04-4015eff469f3 4eadd6ee0ee0402aa1b2e18319362088 c9f9b365be6e4ebc8f60e60abbc310b4 - - default default] Lock "90745d09-8b6e-47d6-aa1b-bf6df105c5a2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:29 np0005597378 nova_compute[238941]: 2026-01-27 13:39:29.679 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 188 op/s
Jan 27 08:39:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:39:31 np0005597378 nova_compute[238941]: 2026-01-27 13:39:31.625 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:31 np0005597378 podman[247959]: 2026-01-27 13:39:31.746172846 +0000 UTC m=+0.083751658 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:39:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 14 KiB/s wr, 130 op/s
Jan 27 08:39:32 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 27 08:39:32 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 27 08:39:33 np0005597378 nova_compute[238941]: 2026-01-27 13:39:33.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:33.184 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:39:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:33.185 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:39:33 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 27 08:39:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 95 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 915 KiB/s wr, 144 op/s
Jan 27 08:39:34 np0005597378 nova_compute[238941]: 2026-01-27 13:39:34.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:39:34 np0005597378 nova_compute[238941]: 2026-01-27 13:39:34.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:35.187 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:39:35 np0005597378 nova_compute[238941]: 2026-01-27 13:39:35.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:39:35 np0005597378 nova_compute[238941]: 2026-01-27 13:39:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:39:35 np0005597378 nova_compute[238941]: 2026-01-27 13:39:35.412 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:35 np0005597378 nova_compute[238941]: 2026-01-27 13:39:35.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:35 np0005597378 nova_compute[238941]: 2026-01-27 13:39:35.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:35 np0005597378 nova_compute[238941]: 2026-01-27 13:39:35.413 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:39:35 np0005597378 nova_compute[238941]: 2026-01-27 13:39:35.413 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:35 np0005597378 podman[248008]: 2026-01-27 13:39:35.702967963 +0000 UTC m=+0.046102143 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 08:39:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 96 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 1.1 MiB/s wr, 66 op/s
Jan 27 08:39:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:39:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619242746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:39:36 np0005597378 nova_compute[238941]: 2026-01-27 13:39:36.179 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.765s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:39:36 np0005597378 nova_compute[238941]: 2026-01-27 13:39:36.290 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:39:36 np0005597378 nova_compute[238941]: 2026-01-27 13:39:36.291 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:39:36 np0005597378 nova_compute[238941]: 2026-01-27 13:39:36.426 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:39:36 np0005597378 nova_compute[238941]: 2026-01-27 13:39:36.428 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4622MB free_disk=59.967317685484886GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:39:36 np0005597378 nova_compute[238941]: 2026-01-27 13:39:36.428 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:36 np0005597378 nova_compute[238941]: 2026-01-27 13:39:36.428 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:36 np0005597378 nova_compute[238941]: 2026-01-27 13:39:36.488 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e494a15a-7ac1-47d9-be70-22ec46b36797 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:39:36 np0005597378 nova_compute[238941]: 2026-01-27 13:39:36.488 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:39:36 np0005597378 nova_compute[238941]: 2026-01-27 13:39:36.489 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:39:36 np0005597378 nova_compute[238941]: 2026-01-27 13:39:36.536 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:36 np0005597378 nova_compute[238941]: 2026-01-27 13:39:36.626 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:39:36Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:7c:f2 10.100.0.4
Jan 27 08:39:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:39:36Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:7c:f2 10.100.0.4
Jan 27 08:39:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:39:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3369204416' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.094 238945 DEBUG oslo_concurrency.lockutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Acquiring lock "02505f33-d581-487d-9fac-6798017dbe63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.094 238945 DEBUG oslo_concurrency.lockutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "02505f33-d581-487d-9fac-6798017dbe63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.111 238945 DEBUG nova.compute.manager [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.113 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.119 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.136 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.173 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.174 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.187 238945 DEBUG oslo_concurrency.lockutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.187 238945 DEBUG oslo_concurrency.lockutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.193 238945 DEBUG nova.virt.hardware [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.193 238945 INFO nova.compute.claims [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.292 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:39:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571288232' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.823 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.832 238945 DEBUG nova.compute.provider_tree [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.850 238945 DEBUG nova.scheduler.client.report [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.888 238945 DEBUG oslo_concurrency.lockutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.889 238945 DEBUG nova.compute.manager [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.948 238945 DEBUG nova.compute.manager [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:39:37 np0005597378 nova_compute[238941]: 2026-01-27 13:39:37.948 238945 DEBUG nova.network.neutron [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.006 238945 INFO nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.033 238945 DEBUG nova.compute.manager [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:39:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 105 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 1.5 MiB/s wr, 46 op/s
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.157 238945 DEBUG nova.compute.manager [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.158 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.159 238945 INFO nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Creating image(s)#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.178 238945 DEBUG nova.storage.rbd_utils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] rbd image 02505f33-d581-487d-9fac-6798017dbe63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.200 238945 DEBUG nova.storage.rbd_utils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] rbd image 02505f33-d581-487d-9fac-6798017dbe63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.222 238945 DEBUG nova.storage.rbd_utils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] rbd image 02505f33-d581-487d-9fac-6798017dbe63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.225 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.246 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.247 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.247 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.270 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.291 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.292 238945 DEBUG oslo_concurrency.lockutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.293 238945 DEBUG oslo_concurrency.lockutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.293 238945 DEBUG oslo_concurrency.lockutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.312 238945 DEBUG nova.storage.rbd_utils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] rbd image 02505f33-d581-487d-9fac-6798017dbe63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.316 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 02505f33-d581-487d-9fac-6798017dbe63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.405 238945 DEBUG nova.network.neutron [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.406 238945 DEBUG nova.compute.manager [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.436 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-e494a15a-7ac1-47d9-be70-22ec46b36797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.437 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-e494a15a-7ac1-47d9-be70-22ec46b36797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.437 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.437 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e494a15a-7ac1-47d9-be70-22ec46b36797 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.595 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521163.5938692, 90745d09-8b6e-47d6-aa1b-bf6df105c5a2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.595 238945 INFO nova.compute.manager [-] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.630 238945 DEBUG nova.compute.manager [None req-6060e525-9087-46fa-be0d-586ddd21daa1 - - - - - -] [instance: 90745d09-8b6e-47d6-aa1b-bf6df105c5a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.793 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 02505f33-d581-487d-9fac-6798017dbe63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.875 238945 DEBUG nova.storage.rbd_utils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] resizing rbd image 02505f33-d581-487d-9fac-6798017dbe63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.981 238945 DEBUG nova.objects.instance [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lazy-loading 'migration_context' on Instance uuid 02505f33-d581-487d-9fac-6798017dbe63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.994 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.994 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Ensure instance console log exists: /var/lib/nova/instances/02505f33-d581-487d-9fac-6798017dbe63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.995 238945 DEBUG oslo_concurrency.lockutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.995 238945 DEBUG oslo_concurrency.lockutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.996 238945 DEBUG oslo_concurrency.lockutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:38 np0005597378 nova_compute[238941]: 2026-01-27 13:39:38.997 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.000 238945 WARNING nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.005 238945 DEBUG nova.virt.libvirt.host [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.005 238945 DEBUG nova.virt.libvirt.host [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.008 238945 DEBUG nova.virt.libvirt.host [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.008 238945 DEBUG nova.virt.libvirt.host [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.009 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.009 238945 DEBUG nova.virt.hardware [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.009 238945 DEBUG nova.virt.hardware [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.009 238945 DEBUG nova.virt.hardware [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.009 238945 DEBUG nova.virt.hardware [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.010 238945 DEBUG nova.virt.hardware [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.010 238945 DEBUG nova.virt.hardware [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.010 238945 DEBUG nova.virt.hardware [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.010 238945 DEBUG nova.virt.hardware [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.010 238945 DEBUG nova.virt.hardware [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.010 238945 DEBUG nova.virt.hardware [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.010 238945 DEBUG nova.virt.hardware [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.013 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:39:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2694841168' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.614 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.636 238945 DEBUG nova.storage.rbd_utils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] rbd image 02505f33-d581-487d-9fac-6798017dbe63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.641 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.683 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.980 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Updating instance_info_cache with network_info: [{"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.997 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-e494a15a-7ac1-47d9-be70-22ec46b36797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.998 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.998 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.999 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.999 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:39:39 np0005597378 nova_compute[238941]: 2026-01-27 13:39:39.999 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:39:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 139 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 236 KiB/s rd, 2.9 MiB/s wr, 79 op/s
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.127 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:39:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:39:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810096414' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.218 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.220 238945 DEBUG nova.objects.instance [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lazy-loading 'pci_devices' on Instance uuid 02505f33-d581-487d-9fac-6798017dbe63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.273 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  <uuid>02505f33-d581-487d-9fac-6798017dbe63</uuid>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  <name>instance-00000005</name>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerDiagnosticsNegativeTest-server-2000476238</nova:name>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:39:39</nova:creationTime>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:        <nova:user uuid="92fecc76e0904bbc81ccb5afc6b32822">tempest-ServerDiagnosticsNegativeTest-1341303125-project-member</nova:user>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:        <nova:project uuid="925ef6881b634b1bb34b33539c3d5938">tempest-ServerDiagnosticsNegativeTest-1341303125</nova:project>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <entry name="serial">02505f33-d581-487d-9fac-6798017dbe63</entry>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <entry name="uuid">02505f33-d581-487d-9fac-6798017dbe63</entry>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/02505f33-d581-487d-9fac-6798017dbe63_disk">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/02505f33-d581-487d-9fac-6798017dbe63_disk.config">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/02505f33-d581-487d-9fac-6798017dbe63/console.log" append="off"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:39:40 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:39:40 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:39:40 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:39:40 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.335 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.336 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.337 238945 INFO nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Using config drive
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.365 238945 DEBUG nova.storage.rbd_utils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] rbd image 02505f33-d581-487d-9fac-6798017dbe63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.645 238945 INFO nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Creating config drive at /var/lib/nova/instances/02505f33-d581-487d-9fac-6798017dbe63/disk.config
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.650 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/02505f33-d581-487d-9fac-6798017dbe63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5_bh_sxa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.775 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/02505f33-d581-487d-9fac-6798017dbe63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5_bh_sxa" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.803 238945 DEBUG nova.storage.rbd_utils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] rbd image 02505f33-d581-487d-9fac-6798017dbe63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:39:40 np0005597378 nova_compute[238941]: 2026-01-27 13:39:40.808 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/02505f33-d581-487d-9fac-6798017dbe63/disk.config 02505f33-d581-487d-9fac-6798017dbe63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.435927) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521181435978, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 673, "num_deletes": 258, "total_data_size": 710309, "memory_usage": 723064, "flush_reason": "Manual Compaction"}
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521181472175, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 703305, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18963, "largest_seqno": 19635, "table_properties": {"data_size": 699882, "index_size": 1267, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7876, "raw_average_key_size": 18, "raw_value_size": 692793, "raw_average_value_size": 1607, "num_data_blocks": 57, "num_entries": 431, "num_filter_entries": 431, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769521136, "oldest_key_time": 1769521136, "file_creation_time": 1769521181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 36298 microseconds, and 2555 cpu microseconds.
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.472229) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 703305 bytes OK
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.472250) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.494671) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.494725) EVENT_LOG_v1 {"time_micros": 1769521181494716, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.494750) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 706708, prev total WAL file size 706708, number of live WAL files 2.
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.495497) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353035' seq:0, type:0; will stop at (end)
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(686KB)], [44(6434KB)]
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521181495561, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7291870, "oldest_snapshot_seqno": -1}
Jan 27 08:39:41 np0005597378 nova_compute[238941]: 2026-01-27 13:39:41.628 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4215 keys, 7159226 bytes, temperature: kUnknown
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521181709030, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7159226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7130547, "index_size": 17049, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 104408, "raw_average_key_size": 24, "raw_value_size": 7053724, "raw_average_value_size": 1673, "num_data_blocks": 714, "num_entries": 4215, "num_filter_entries": 4215, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769521181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:39:41 np0005597378 nova_compute[238941]: 2026-01-27 13:39:41.738 238945 DEBUG oslo_concurrency.processutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/02505f33-d581-487d-9fac-6798017dbe63/disk.config 02505f33-d581-487d-9fac-6798017dbe63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.930s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:41 np0005597378 nova_compute[238941]: 2026-01-27 13:39:41.738 238945 INFO nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Deleting local config drive /var/lib/nova/instances/02505f33-d581-487d-9fac-6798017dbe63/disk.config because it was imported into RBD.
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.709302) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7159226 bytes
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.767419) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 34.1 rd, 33.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 6.3 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(20.5) write-amplify(10.2) OK, records in: 4742, records dropped: 527 output_compression: NoCompression
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.767458) EVENT_LOG_v1 {"time_micros": 1769521181767443, "job": 22, "event": "compaction_finished", "compaction_time_micros": 213551, "compaction_time_cpu_micros": 29993, "output_level": 6, "num_output_files": 1, "total_output_size": 7159226, "num_input_records": 4742, "num_output_records": 4215, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521181767755, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521181769009, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.495373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.769089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.769095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.769097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.769099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:39:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:39:41.769101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:39:41 np0005597378 systemd-machined[207425]: New machine qemu-5-instance-00000005.
Jan 27 08:39:41 np0005597378 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Jan 27 08:39:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 139 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 235 KiB/s rd, 2.9 MiB/s wr, 78 op/s
Jan 27 08:39:43 np0005597378 nova_compute[238941]: 2026-01-27 13:39:43.974 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521183.9745803, 02505f33-d581-487d-9fac-6798017dbe63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:39:43 np0005597378 nova_compute[238941]: 2026-01-27 13:39:43.976 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 02505f33-d581-487d-9fac-6798017dbe63] VM Resumed (Lifecycle Event)
Jan 27 08:39:43 np0005597378 nova_compute[238941]: 2026-01-27 13:39:43.978 238945 DEBUG nova.compute.manager [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 08:39:43 np0005597378 nova_compute[238941]: 2026-01-27 13:39:43.979 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 08:39:43 np0005597378 nova_compute[238941]: 2026-01-27 13:39:43.982 238945 INFO nova.virt.libvirt.driver [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Instance spawned successfully.
Jan 27 08:39:43 np0005597378 nova_compute[238941]: 2026-01-27 13:39:43.982 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.000 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.008 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.011 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.011 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.011 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.012 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.012 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.013 238945 DEBUG nova.virt.libvirt.driver [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.045 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 02505f33-d581-487d-9fac-6798017dbe63] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.046 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521183.9753482, 02505f33-d581-487d-9fac-6798017dbe63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.046 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 02505f33-d581-487d-9fac-6798017dbe63] VM Started (Lifecycle Event)
Jan 27 08:39:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 295 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.093 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.097 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.109 238945 INFO nova.compute.manager [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Took 5.95 seconds to spawn the instance on the hypervisor.
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.109 238945 DEBUG nova.compute.manager [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.119 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 02505f33-d581-487d-9fac-6798017dbe63] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.166 238945 INFO nova.compute.manager [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Took 7.00 seconds to build instance.
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.184 238945 DEBUG oslo_concurrency.lockutils [None req-00cbbcea-901b-4db0-b5c5-04cf632aa213 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "02505f33-d581-487d-9fac-6798017dbe63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:39:44 np0005597378 nova_compute[238941]: 2026-01-27 13:39:44.685 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.032 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "730980bf-3349-4faf-8757-7bcc05dac289" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.033 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "730980bf-3349-4faf-8757-7bcc05dac289" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:39:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 848 KiB/s rd, 3.0 MiB/s wr, 109 op/s
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.081 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.207 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.207 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.213 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.213 238945 INFO nova.compute.claims [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:39:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:39:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:46.287 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:46.288 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:46.288 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.314 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Acquiring lock "02505f33-d581-487d-9fac-6798017dbe63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.314 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "02505f33-d581-487d-9fac-6798017dbe63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.314 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Acquiring lock "02505f33-d581-487d-9fac-6798017dbe63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.314 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "02505f33-d581-487d-9fac-6798017dbe63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.314 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "02505f33-d581-487d-9fac-6798017dbe63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.315 238945 INFO nova.compute.manager [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Terminating instance#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.316 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Acquiring lock "refresh_cache-02505f33-d581-487d-9fac-6798017dbe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.316 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Acquired lock "refresh_cache-02505f33-d581-487d-9fac-6798017dbe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.316 238945 DEBUG nova.network.neutron [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.438 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.617 238945 DEBUG nova.network.neutron [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.631 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:39:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589379521' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.976 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.981 238945 DEBUG nova.compute.provider_tree [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:39:46 np0005597378 nova_compute[238941]: 2026-01-27 13:39:46.983 238945 DEBUG nova.network.neutron [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.044 238945 DEBUG nova.scheduler.client.report [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.072 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Releasing lock "refresh_cache-02505f33-d581-487d-9fac-6798017dbe63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.073 238945 DEBUG nova.compute.manager [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.167 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.168 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:39:47 np0005597378 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 27 08:39:47 np0005597378 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 4.878s CPU time.
Jan 27 08:39:47 np0005597378 systemd-machined[207425]: Machine qemu-5-instance-00000005 terminated.
Jan 27 08:39:47 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.286 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.287 238945 DEBUG nova.network.neutron [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.293 238945 INFO nova.virt.libvirt.driver [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Instance destroyed successfully.#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.294 238945 DEBUG nova.objects.instance [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lazy-loading 'resources' on Instance uuid 02505f33-d581-487d-9fac-6798017dbe63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.337 238945 INFO nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.450 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.513 238945 DEBUG nova.network.neutron [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.514 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.702 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.703 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.703 238945 INFO nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Creating image(s)#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.723 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.743 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.760 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.763 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:39:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:39:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:39:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:39:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:39:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.821 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.822 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.822 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.823 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.842 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:47 np0005597378 nova_compute[238941]: 2026-01-27 13:39:47.846 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 730980bf-3349-4faf-8757-7bcc05dac289_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 152 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 129 op/s
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.187 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 730980bf-3349-4faf-8757-7bcc05dac289_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.261 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] resizing rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.369 238945 DEBUG nova.objects.instance [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lazy-loading 'migration_context' on Instance uuid 730980bf-3349-4faf-8757-7bcc05dac289 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.386 238945 INFO nova.virt.libvirt.driver [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Deleting instance files /var/lib/nova/instances/02505f33-d581-487d-9fac-6798017dbe63_del#033[00m
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.387 238945 INFO nova.virt.libvirt.driver [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Deletion of /var/lib/nova/instances/02505f33-d581-487d-9fac-6798017dbe63_del complete#033[00m
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.481 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.481 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Ensure instance console log exists: /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.482 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.483 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.483 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.484 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.489 238945 WARNING nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.492 238945 DEBUG nova.virt.libvirt.host [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.493 238945 DEBUG nova.virt.libvirt.host [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.496 238945 DEBUG nova.virt.libvirt.host [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.497 238945 DEBUG nova.virt.libvirt.host [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.497 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.497 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.497 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.498 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.498 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.498 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.498 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.499 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.499 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.499 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.499 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.500 238945 DEBUG nova.virt.hardware [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.502 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.602 238945 INFO nova.compute.manager [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Took 1.53 seconds to destroy the instance on the hypervisor.
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.603 238945 DEBUG oslo.service.loopingcall [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.603 238945 DEBUG nova.compute.manager [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 08:39:48 np0005597378 nova_compute[238941]: 2026-01-27 13:39:48.604 238945 DEBUG nova.network.neutron [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 08:39:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:39:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055437991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.067 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.086 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.090 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.229 238945 DEBUG nova.network.neutron [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.251 238945 DEBUG nova.network.neutron [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.265 238945 INFO nova.compute.manager [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Took 0.66 seconds to deallocate network for instance.
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.316 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.317 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.402 238945 DEBUG oslo_concurrency.processutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:39:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3903734539' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.659 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.661 238945 DEBUG nova.objects.instance [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 730980bf-3349-4faf-8757-7bcc05dac289 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.678 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  <uuid>730980bf-3349-4faf-8757-7bcc05dac289</uuid>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  <name>instance-00000006</name>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <nova:name>tempest-LiveMigrationNegativeTest-server-622355007</nova:name>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:39:48</nova:creationTime>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:        <nova:user uuid="224925de56f64feca98f9fffb9810e07">tempest-LiveMigrationNegativeTest-1588682355-project-member</nova:user>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:        <nova:project uuid="09c59af3df414ec29b63dc65458aa7c2">tempest-LiveMigrationNegativeTest-1588682355</nova:project>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <entry name="serial">730980bf-3349-4faf-8757-7bcc05dac289</entry>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <entry name="uuid">730980bf-3349-4faf-8757-7bcc05dac289</entry>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/730980bf-3349-4faf-8757-7bcc05dac289_disk">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/730980bf-3349-4faf-8757-7bcc05dac289_disk.config">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/console.log" append="off"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:39:49 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:39:49 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:39:49 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:39:49 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.773 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.774 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.775 238945 INFO nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Using config drive
Jan 27 08:39:49 np0005597378 nova_compute[238941]: 2026-01-27 13:39:49.799 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:39:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:39:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3154421787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.001 238945 DEBUG oslo_concurrency.processutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.006 238945 DEBUG nova.compute.provider_tree [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.023 238945 DEBUG nova.scheduler.client.report [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.049 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:39:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 152 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 189 op/s
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.082 238945 INFO nova.scheduler.client.report [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Deleted allocations for instance 02505f33-d581-487d-9fac-6798017dbe63
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.103 238945 INFO nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Creating config drive at /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/disk.config
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.108 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpexvjm0ih execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.149 238945 DEBUG oslo_concurrency.lockutils [None req-5d8f87ab-9d55-4865-b919-446d8b9481e6 92fecc76e0904bbc81ccb5afc6b32822 925ef6881b634b1bb34b33539c3d5938 - - default default] Lock "02505f33-d581-487d-9fac-6798017dbe63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.233 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpexvjm0ih" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.255 238945 DEBUG nova.storage.rbd_utils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 730980bf-3349-4faf-8757-7bcc05dac289_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.259 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/disk.config 730980bf-3349-4faf-8757-7bcc05dac289_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.385 238945 DEBUG oslo_concurrency.processutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/disk.config 730980bf-3349-4faf-8757-7bcc05dac289_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.385 238945 INFO nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Deleting local config drive /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289/disk.config because it was imported into RBD.
Jan 27 08:39:50 np0005597378 systemd-machined[207425]: New machine qemu-6-instance-00000006.
Jan 27 08:39:50 np0005597378 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.756 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521190.7564063, 730980bf-3349-4faf-8757-7bcc05dac289 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.757 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] VM Resumed (Lifecycle Event)
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.760 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.760 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.762 238945 INFO nova.virt.libvirt.driver [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Instance spawned successfully.
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.763 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.801 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.803 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.827 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.828 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.828 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.828 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.829 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.829 238945 DEBUG nova.virt.libvirt.driver [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.850 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.851 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521190.7589326, 730980bf-3349-4faf-8757-7bcc05dac289 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.851 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] VM Started (Lifecycle Event)
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.915 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.919 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.979 238945 INFO nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Took 3.28 seconds to spawn the instance on the hypervisor.
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.980 238945 DEBUG nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:39:50 np0005597378 nova_compute[238941]: 2026-01-27 13:39:50.997 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:39:51 np0005597378 nova_compute[238941]: 2026-01-27 13:39:51.088 238945 INFO nova.compute.manager [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Took 4.92 seconds to build instance.
Jan 27 08:39:51 np0005597378 nova_compute[238941]: 2026-01-27 13:39:51.211 238945 DEBUG oslo_concurrency.lockutils [None req-8767dfcc-de89-415d-9e73-11aefff64d4e 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "730980bf-3349-4faf-8757-7bcc05dac289" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:39:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:39:51 np0005597378 nova_compute[238941]: 2026-01-27 13:39:51.630 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 152 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 140 op/s
Jan 27 08:39:53 np0005597378 nova_compute[238941]: 2026-01-27 13:39:53.570 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "e494a15a-7ac1-47d9-be70-22ec46b36797" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:39:53 np0005597378 nova_compute[238941]: 2026-01-27 13:39:53.570 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:39:53 np0005597378 nova_compute[238941]: 2026-01-27 13:39:53.571 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:39:53 np0005597378 nova_compute[238941]: 2026-01-27 13:39:53.571 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:39:53 np0005597378 nova_compute[238941]: 2026-01-27 13:39:53.571 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:39:53 np0005597378 nova_compute[238941]: 2026-01-27 13:39:53.572 238945 INFO nova.compute.manager [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Terminating instance
Jan 27 08:39:53 np0005597378 nova_compute[238941]: 2026-01-27 13:39:53.573 238945 DEBUG nova.compute.manager [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 08:39:53 np0005597378 kernel: tapee6301a2-f8 (unregistering): left promiscuous mode
Jan 27 08:39:53 np0005597378 NetworkManager[48904]: <info>  [1769521193.8786] device (tapee6301a2-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:39:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:39:53Z|00032|binding|INFO|Releasing lport ee6301a2-f8c5-49f5-a6f6-5885ad339b05 from this chassis (sb_readonly=0)
Jan 27 08:39:53 np0005597378 nova_compute[238941]: 2026-01-27 13:39:53.890 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:39:53Z|00033|binding|INFO|Setting lport ee6301a2-f8c5-49f5-a6f6-5885ad339b05 down in Southbound
Jan 27 08:39:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:39:53Z|00034|binding|INFO|Removing iface tapee6301a2-f8 ovn-installed in OVS
Jan 27 08:39:53 np0005597378 nova_compute[238941]: 2026-01-27 13:39:53.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:53.899 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:7c:f2 10.100.0.4'], port_security=['fa:16:3e:90:7c:f2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e494a15a-7ac1-47d9-be70-22ec46b36797', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d498e730-2c72-4423-80f9-9db85c3d90b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c8760bce6747b1a4ba3511f8705506', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31419324-7d4c-43e9-852f-e0d589f988c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=475bcc00-e7b6-41a0-91e0-5d0bed50aab6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ee6301a2-f8c5-49f5-a6f6-5885ad339b05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:39:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:53.901 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ee6301a2-f8c5-49f5-a6f6-5885ad339b05 in datapath d498e730-2c72-4423-80f9-9db85c3d90b3 unbound from our chassis
Jan 27 08:39:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:53.902 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d498e730-2c72-4423-80f9-9db85c3d90b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 08:39:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:53.908 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58053be3-22f9-4c04-8413-7be89fdd1ad1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:39:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:53.913 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 namespace which is not needed anymore
Jan 27 08:39:53 np0005597378 nova_compute[238941]: 2026-01-27 13:39:53.918 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:53 np0005597378 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 27 08:39:53 np0005597378 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 15.480s CPU time.
Jan 27 08:39:53 np0005597378 systemd-machined[207425]: Machine qemu-3-instance-00000003 terminated.
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.004 238945 INFO nova.virt.libvirt.driver [-] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Instance destroyed successfully.
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.005 238945 DEBUG nova.objects.instance [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lazy-loading 'resources' on Instance uuid e494a15a-7ac1-47d9-be70-22ec46b36797 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.017 238945 DEBUG nova.virt.libvirt.vif [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:39:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1717948008',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1717948008',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(21),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1717948008',id=3,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=21,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMwzPpGs9i367sw8MOocvigF4zqItmlmPkAlaT0CtVf6ehKNBvAu8CNmGhT+K+n26UIw0F1D+s8EmBKRSNqCInwW2US8JTvgufbvsYeddIAe+Q1kookPhPKMV19zyKHZsw==',key_name='tempest-keypair-347425152',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:39:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6c8760bce6747b1a4ba3511f8705506',ramdisk_id='',reservation_id='r-7qhtsdc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-418976095',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-418976095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:39:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11072876e4694e33bece015a47248409',uuid=e494a15a-7ac1-47d9-be70-22ec46b36797,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.018 238945 DEBUG nova.network.os_vif_util [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converting VIF {"id": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "address": "fa:16:3e:90:7c:f2", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee6301a2-f8", "ovs_interfaceid": "ee6301a2-f8c5-49f5-a6f6-5885ad339b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.019 238945 DEBUG nova.network.os_vif_util [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:7c:f2,bridge_name='br-int',has_traffic_filtering=True,id=ee6301a2-f8c5-49f5-a6f6-5885ad339b05,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee6301a2-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.019 238945 DEBUG os_vif [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:7c:f2,bridge_name='br-int',has_traffic_filtering=True,id=ee6301a2-f8c5-49f5-a6f6-5885ad339b05,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee6301a2-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.021 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.021 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee6301a2-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.023 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.025 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.028 238945 INFO os_vif [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:7c:f2,bridge_name='br-int',has_traffic_filtering=True,id=ee6301a2-f8c5-49f5-a6f6-5885ad339b05,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee6301a2-f8')
Jan 27 08:39:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 204 op/s
Jan 27 08:39:54 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [NOTICE]   (247925) : haproxy version is 2.8.14-c23fe91
Jan 27 08:39:54 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [NOTICE]   (247925) : path to executable is /usr/sbin/haproxy
Jan 27 08:39:54 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [WARNING]  (247925) : Exiting Master process...
Jan 27 08:39:54 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [WARNING]  (247925) : Exiting Master process...
Jan 27 08:39:54 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [ALERT]    (247925) : Current worker (247927) exited with code 143 (Terminated)
Jan 27 08:39:54 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[247921]: [WARNING]  (247925) : All workers exited. Exiting... (0)
Jan 27 08:39:54 np0005597378 systemd[1]: libpod-9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879.scope: Deactivated successfully.
Jan 27 08:39:54 np0005597378 podman[248858]: 2026-01-27 13:39:54.097623882 +0000 UTC m=+0.086554002 container died 9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:39:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879-userdata-shm.mount: Deactivated successfully.
Jan 27 08:39:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-147413b571772a677643494592b143becb1d569a2d63e13c83951fecd7e472d7-merged.mount: Deactivated successfully.
Jan 27 08:39:54 np0005597378 podman[248858]: 2026-01-27 13:39:54.203228137 +0000 UTC m=+0.192158267 container cleanup 9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:39:54 np0005597378 systemd[1]: libpod-conmon-9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879.scope: Deactivated successfully.
Jan 27 08:39:54 np0005597378 podman[248909]: 2026-01-27 13:39:54.274070103 +0000 UTC m=+0.050810625 container remove 9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 08:39:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.279 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5813b6b9-dc67-4c24-96a3-8718bada4d7c]: (4, ('Tue Jan 27 01:39:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 (9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879)\n9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879\nTue Jan 27 01:39:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 (9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879)\n9997c8ecbed4e491663db389e9e17840c19d6dac5f71ed053b3ab83a23908879\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:39:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.281 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[02ed58ce-0d16-4878-bd26-794068590836]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:39:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.281 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd498e730-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:39:54 np0005597378 kernel: tapd498e730-20: left promiscuous mode
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.283 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.299 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.302 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0961c120-1c3a-4534-a5d8-520cd36b1f30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:39:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1a12bfa6-c3b7-4592-9e92-e1abb5ed8862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:39:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.321 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[93c5f133-7b0c-4eeb-915b-72d53ac2d793]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:39:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.340 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[513320e5-8b2a-419a-b8a0-900604b73c51]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 380394, 'reachable_time': 20488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248925, 'error': None, 'target': 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.353 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:39:54 np0005597378 systemd[1]: run-netns-ovnmeta\x2dd498e730\x2d2c72\x2d4423\x2d80f9\x2d9db85c3d90b3.mount: Deactivated successfully.
Jan 27 08:39:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:39:54.353 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[34ca26fd-0aa4-4400-a196-1152346b9ebb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.412 238945 INFO nova.virt.libvirt.driver [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Deleting instance files /var/lib/nova/instances/e494a15a-7ac1-47d9-be70-22ec46b36797_del#033[00m
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.415 238945 INFO nova.virt.libvirt.driver [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Deletion of /var/lib/nova/instances/e494a15a-7ac1-47d9-be70-22ec46b36797_del complete#033[00m
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.468 238945 INFO nova.compute.manager [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.469 238945 DEBUG oslo.service.loopingcall [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.469 238945 DEBUG nova.compute.manager [-] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:39:54 np0005597378 nova_compute[238941]: 2026-01-27 13:39:54.469 238945 DEBUG nova.network.neutron [-] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.544 238945 DEBUG nova.compute.manager [req-78f46d74-f933-4715-8973-8e8136e3458a req-f66ba043-3faa-4fcc-9464-c53968a49445 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received event network-vif-unplugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.544 238945 DEBUG oslo_concurrency.lockutils [req-78f46d74-f933-4715-8973-8e8136e3458a req-f66ba043-3faa-4fcc-9464-c53968a49445 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.544 238945 DEBUG oslo_concurrency.lockutils [req-78f46d74-f933-4715-8973-8e8136e3458a req-f66ba043-3faa-4fcc-9464-c53968a49445 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.545 238945 DEBUG oslo_concurrency.lockutils [req-78f46d74-f933-4715-8973-8e8136e3458a req-f66ba043-3faa-4fcc-9464-c53968a49445 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.545 238945 DEBUG nova.compute.manager [req-78f46d74-f933-4715-8973-8e8136e3458a req-f66ba043-3faa-4fcc-9464-c53968a49445 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] No waiting events found dispatching network-vif-unplugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.545 238945 DEBUG nova.compute.manager [req-78f46d74-f933-4715-8973-8e8136e3458a req-f66ba043-3faa-4fcc-9464-c53968a49445 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received event network-vif-unplugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.614 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "259f6a14-9cd8-416b-bef0-c3e0bf708340" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.615 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "259f6a14-9cd8-416b-bef0-c3e0bf708340" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.636 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.818 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.818 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.825 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.826 238945 INFO nova.compute.claims [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.893 238945 DEBUG nova.network.neutron [-] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.921 238945 INFO nova.compute.manager [-] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Took 1.45 seconds to deallocate network for instance.#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.982 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:55 np0005597378 nova_compute[238941]: 2026-01-27 13:39:55.984 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 135 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 222 op/s
Jan 27 08:39:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:39:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:39:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/241389866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.529 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.535 238945 DEBUG nova.compute.provider_tree [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.556 238945 DEBUG nova.scheduler.client.report [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.589 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.590 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.593 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.633 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.658 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.658 238945 DEBUG nova.network.neutron [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.663 238945 DEBUG oslo_concurrency.processutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.686 238945 INFO nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.736 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.885 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.887 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.888 238945 INFO nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Creating image(s)#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.914 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.952 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.973 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:56 np0005597378 nova_compute[238941]: 2026-01-27 13:39:56.976 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.001 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.001 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.032 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.033 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.033 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.034 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.050 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.053 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.070 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.159 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:39:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:39:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/234262688' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.226 238945 DEBUG oslo_concurrency.processutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.231 238945 DEBUG nova.compute.provider_tree [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.265 238945 DEBUG nova.scheduler.client.report [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.279 238945 DEBUG nova.network.neutron [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.279 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.297 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.299 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.306 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.307 238945 INFO nova.compute.claims [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Claim successful on node compute-0.ctlplane.example.com
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.323 238945 INFO nova.scheduler.client.report [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Deleted allocations for instance e494a15a-7ac1-47d9-be70-22ec46b36797
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.421 238945 DEBUG oslo_concurrency.lockutils [None req-bcb2b2a5-f0e5-415a-8649-aaf9d5d4388e 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.504 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.746 238945 DEBUG nova.compute.manager [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received event network-vif-plugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.747 238945 DEBUG oslo_concurrency.lockutils [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.748 238945 DEBUG oslo_concurrency.lockutils [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.748 238945 DEBUG oslo_concurrency.lockutils [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e494a15a-7ac1-47d9-be70-22ec46b36797-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.748 238945 DEBUG nova.compute.manager [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] No waiting events found dispatching network-vif-plugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.748 238945 WARNING nova.compute.manager [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received unexpected event network-vif-plugged-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 for instance with vm_state deleted and task_state None.
Jan 27 08:39:57 np0005597378 nova_compute[238941]: 2026-01-27 13:39:57.749 238945 DEBUG nova.compute.manager [req-f13f9f0e-1ee0-48de-b513-1305202e6059 req-25f330c6-dd57-40af-a45e-b0ba992172d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Received event network-vif-deleted-ee6301a2-f8c5-49f5-a6f6-5885ad339b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:39:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 111 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Jan 27 08:39:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:39:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4155842529' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.283 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.779s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.287 238945 DEBUG nova.compute.provider_tree [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.311 238945 DEBUG nova.scheduler.client.report [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.334 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.335 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.377 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.377 238945 DEBUG nova.network.neutron [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.402 238945 INFO nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.454 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.543 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.657 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.658 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.659 238945 INFO nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Creating image(s)
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.679 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.700 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.730 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.736 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.765 238945 DEBUG nova.policy [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11072876e4694e33bece015a47248409', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6c8760bce6747b1a4ba3511f8705506', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.784 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] resizing rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.825 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.827 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.827 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.828 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.848 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.852 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.931 238945 DEBUG nova.objects.instance [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lazy-loading 'migration_context' on Instance uuid 259f6a14-9cd8-416b-bef0-c3e0bf708340 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.947 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.948 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Ensure instance console log exists: /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.949 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.949 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.949 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.951 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.955 238945 WARNING nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.959 238945 DEBUG nova.virt.libvirt.host [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.960 238945 DEBUG nova.virt.libvirt.host [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.964 238945 DEBUG nova.virt.libvirt.host [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.965 238945 DEBUG nova.virt.libvirt.host [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.965 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.965 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.966 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.966 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.966 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.967 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.967 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.967 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.968 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.968 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.968 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.969 238945 DEBUG nova.virt.hardware [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 08:39:58 np0005597378 nova_compute[238941]: 2026-01-27 13:39:58.971 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:39:59 np0005597378 nova_compute[238941]: 2026-01-27 13:39:59.024 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:39:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:39:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1320834523' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:39:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:39:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1320834523' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:39:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:39:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278681254' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:39:59 np0005597378 nova_compute[238941]: 2026-01-27 13:39:59.582 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:39:59 np0005597378 nova_compute[238941]: 2026-01-27 13:39:59.603 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:39:59 np0005597378 nova_compute[238941]: 2026-01-27 13:39:59.611 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 136 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.5 MiB/s wr, 206 op/s
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.140 238945 DEBUG nova.network.neutron [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Successfully created port: 558630aa-19e2-4422-b561-e8f9bf906893 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:40:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:40:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2795193917' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.263 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.264 238945 DEBUG nova.objects.instance [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 259f6a14-9cd8-416b-bef0-c3e0bf708340 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.294 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  <uuid>259f6a14-9cd8-416b-bef0-c3e0bf708340</uuid>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  <name>instance-00000007</name>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <nova:name>tempest-LiveMigrationNegativeTest-server-30873084</nova:name>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:39:58</nova:creationTime>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:        <nova:user uuid="224925de56f64feca98f9fffb9810e07">tempest-LiveMigrationNegativeTest-1588682355-project-member</nova:user>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:        <nova:project uuid="09c59af3df414ec29b63dc65458aa7c2">tempest-LiveMigrationNegativeTest-1588682355</nova:project>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <entry name="serial">259f6a14-9cd8-416b-bef0-c3e0bf708340</entry>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <entry name="uuid">259f6a14-9cd8-416b-bef0-c3e0bf708340</entry>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/259f6a14-9cd8-416b-bef0-c3e0bf708340_disk">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/259f6a14-9cd8-416b-bef0-c3e0bf708340_disk.config">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/console.log" append="off"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:40:00 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:40:00 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:40:00 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:40:00 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.339 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.403 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] resizing rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.495 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.495 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.496 238945 INFO nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Using config drive#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.515 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.700 238945 DEBUG nova.objects.instance [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lazy-loading 'migration_context' on Instance uuid aad8e98b-f3fc-4b25-bde9-310210ec6f13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.776 238945 INFO nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Creating config drive at /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/disk.config#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.780 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdkzn_p22 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.824 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.845 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.849 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.850 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.850 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.876 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.877 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:00 np0005597378 nova_compute[238941]: 2026-01-27 13:40:00.905 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdkzn_p22" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.042 238945 DEBUG nova.storage.rbd_utils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] rbd image 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.046 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/disk.config 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.063 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.064 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.082 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.087 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.144 238945 DEBUG nova.network.neutron [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Successfully updated port: 558630aa-19e2-4422-b561-e8f9bf906893 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.160 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.161 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquired lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.161 238945 DEBUG nova.network.neutron [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:40:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.682 238945 DEBUG nova.compute.manager [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-changed-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.683 238945 DEBUG nova.compute.manager [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Refreshing instance network info cache due to event network-changed-558630aa-19e2-4422-b561-e8f9bf906893. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.683 238945 DEBUG oslo_concurrency.lockutils [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:40:01 np0005597378 nova_compute[238941]: 2026-01-27 13:40:01.741 238945 DEBUG nova.network.neutron [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:40:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 136 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 141 op/s
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.080 238945 DEBUG oslo_concurrency.processutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/disk.config 259f6a14-9cd8-416b-bef0-c3e0bf708340_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.081 238945 INFO nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Deleting local config drive /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340/disk.config because it was imported into RBD.#033[00m
Jan 27 08:40:02 np0005597378 systemd-machined[207425]: New machine qemu-7-instance-00000007.
Jan 27 08:40:02 np0005597378 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Jan 27 08:40:02 np0005597378 podman[249530]: 2026-01-27 13:40:02.263405636 +0000 UTC m=+0.107362895 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.292 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521187.291582, 02505f33-d581-487d-9fac-6798017dbe63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.294 238945 INFO nova.compute.manager [-] [instance: 02505f33-d581-487d-9fac-6798017dbe63] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.314 238945 DEBUG nova.compute.manager [None req-bb8daf74-23c4-45ce-ab6f-ed68bdba9ac4 - - - - - -] [instance: 02505f33-d581-487d-9fac-6798017dbe63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.682 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.791 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521202.7839396, 259f6a14-9cd8-416b-bef0-c3e0bf708340 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.791 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.793 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.793 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.799 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.800 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Ensure instance console log exists: /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.800 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.801 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.801 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.804 238945 INFO nova.virt.libvirt.driver [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Instance spawned successfully.#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.804 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.821 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.826 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.832 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.833 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.834 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.834 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.835 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.835 238945 DEBUG nova.virt.libvirt.driver [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.859 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.860 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521202.7846906, 259f6a14-9cd8-416b-bef0-c3e0bf708340 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.860 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] VM Started (Lifecycle Event)#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.882 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.886 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.889 238945 INFO nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Took 6.00 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.890 238945 DEBUG nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.932 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.968 238945 INFO nova.compute.manager [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Took 7.27 seconds to build instance.#033[00m
Jan 27 08:40:02 np0005597378 nova_compute[238941]: 2026-01-27 13:40:02.989 238945 DEBUG oslo_concurrency.lockutils [None req-5ec18bb6-34d9-4ff6-9436-dd69ff1999ab 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "259f6a14-9cd8-416b-bef0-c3e0bf708340" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.261 238945 DEBUG nova.network.neutron [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updating instance_info_cache with network_info: [{"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.287 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Releasing lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.288 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Instance network_info: |[{"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.289 238945 DEBUG oslo_concurrency.lockutils [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.289 238945 DEBUG nova.network.neutron [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Refreshing network info cache for port 558630aa-19e2-4422-b561-e8f9bf906893 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.293 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Start _get_guest_xml network_info=[{"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_format': None, 'encryption_options': None, 'size': 1, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.299 238945 WARNING nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.429 238945 DEBUG nova.virt.libvirt.host [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.430 238945 DEBUG nova.virt.libvirt.host [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.431 238945 DEBUG nova.objects.instance [None req-f3218032-411b-443f-85b0-e6981de45af5 352b7c73d6234fa0846f33c05eb4899e ed912b410a2d40e1b43d71ffdd3159a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 259f6a14-9cd8-416b-bef0-c3e0bf708340 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.435 238945 DEBUG nova.virt.libvirt.host [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.436 238945 DEBUG nova.virt.libvirt.host [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.436 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.437 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1473110656',id=20,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-1894129721',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.437 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.438 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.438 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.439 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.439 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.439 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.440 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.440 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.440 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.441 238945 DEBUG nova.virt.hardware [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.444 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.978 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521203.978518, 259f6a14-9cd8-416b-bef0-c3e0bf708340 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:40:03 np0005597378 nova_compute[238941]: 2026-01-27 13:40:03.980 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:40:04 np0005597378 nova_compute[238941]: 2026-01-27 13:40:04.002 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:04 np0005597378 nova_compute[238941]: 2026-01-27 13:40:04.006 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:40:04 np0005597378 nova_compute[238941]: 2026-01-27 13:40:04.027 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:04 np0005597378 nova_compute[238941]: 2026-01-27 13:40:04.038 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 27 08:40:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 186 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 187 op/s
Jan 27 08:40:04 np0005597378 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 27 08:40:04 np0005597378 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 1.098s CPU time.
Jan 27 08:40:04 np0005597378 systemd-machined[207425]: Machine qemu-7-instance-00000007 terminated.
Jan 27 08:40:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:40:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186958202' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:40:04 np0005597378 nova_compute[238941]: 2026-01-27 13:40:04.710 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:04 np0005597378 nova_compute[238941]: 2026-01-27 13:40:04.711 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:04 np0005597378 nova_compute[238941]: 2026-01-27 13:40:04.729 238945 DEBUG nova.compute.manager [None req-f3218032-411b-443f-85b0-e6981de45af5 352b7c73d6234fa0846f33c05eb4899e ed912b410a2d40e1b43d71ffdd3159a3 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:05 np0005597378 nova_compute[238941]: 2026-01-27 13:40:05.113 238945 DEBUG nova.network.neutron [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updated VIF entry in instance network info cache for port 558630aa-19e2-4422-b561-e8f9bf906893. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:40:05 np0005597378 nova_compute[238941]: 2026-01-27 13:40:05.113 238945 DEBUG nova.network.neutron [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updating instance_info_cache with network_info: [{"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:40:05 np0005597378 nova_compute[238941]: 2026-01-27 13:40:05.132 238945 DEBUG oslo_concurrency.lockutils [req-759270bb-64cc-4f46-9635-60c0dc3cf9b5 req-dc56e2e3-dc58-46c9-befb-3083ea0d21ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:40:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:40:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/180181542' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:40:05 np0005597378 nova_compute[238941]: 2026-01-27 13:40:05.341 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:05 np0005597378 nova_compute[238941]: 2026-01-27 13:40:05.362 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:05 np0005597378 nova_compute[238941]: 2026-01-27 13:40:05.366 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:40:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3926220709' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:40:05 np0005597378 nova_compute[238941]: 2026-01-27 13:40:05.975 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:05 np0005597378 nova_compute[238941]: 2026-01-27 13:40:05.978 238945 DEBUG nova.virt.libvirt.vif [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:39:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1542454913',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1542454913',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1542454913',id=8,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMwzPpGs9i367sw8MOocvigF4zqItmlmPkAlaT0CtVf6ehKNBvAu8CNmGhT+K+n26UIw0F1D+s8EmBKRSNqCInwW2US8JTvgufbvsYeddIAe+Q1kookPhPKMV19zyKHZsw==',key_name='tempest-keypair-347425152',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c8760bce6747b1a4ba3511f8705506',ramdisk_id='',reservation_id='r-5920xcy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-418976095',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-418976095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:39:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11072876e4694e33bece015a47248409',uuid=aad8e98b-f3fc-4b25-bde9-310210ec6f13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:40:05 np0005597378 nova_compute[238941]: 2026-01-27 13:40:05.979 238945 DEBUG nova.network.os_vif_util [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converting VIF {"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:40:05 np0005597378 nova_compute[238941]: 2026-01-27 13:40:05.980 238945 DEBUG nova.network.os_vif_util [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:40:05 np0005597378 nova_compute[238941]: 2026-01-27 13:40:05.981 238945 DEBUG nova.objects.instance [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lazy-loading 'pci_devices' on Instance uuid aad8e98b-f3fc-4b25-bde9-310210ec6f13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:40:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 203 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 482 KiB/s rd, 5.4 MiB/s wr, 145 op/s
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.135 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  <uuid>aad8e98b-f3fc-4b25-bde9-310210ec6f13</uuid>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  <name>instance-00000008</name>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1542454913</nova:name>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:40:03</nova:creationTime>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-1894129721">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <nova:ephemeral>1</nova:ephemeral>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <nova:user uuid="11072876e4694e33bece015a47248409">tempest-ServersWithSpecificFlavorTestJSON-418976095-project-member</nova:user>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <nova:project uuid="e6c8760bce6747b1a4ba3511f8705506">tempest-ServersWithSpecificFlavorTestJSON-418976095</nova:project>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <nova:port uuid="558630aa-19e2-4422-b561-e8f9bf906893">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <entry name="serial">aad8e98b-f3fc-4b25-bde9-310210ec6f13</entry>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <entry name="uuid">aad8e98b-f3fc-4b25-bde9-310210ec6f13</entry>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.eph0">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <target dev="vdb" bus="virtio"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.config">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:cf:1b:ef"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <target dev="tap558630aa-19"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/console.log" append="off"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:40:06 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:40:06 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:40:06 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:40:06 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.137 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Preparing to wait for external event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.137 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.138 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.138 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.139 238945 DEBUG nova.virt.libvirt.vif [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:39:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1542454913',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1542454913',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1542454913',id=8,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMwzPpGs9i367sw8MOocvigF4zqItmlmPkAlaT0CtVf6ehKNBvAu8CNmGhT+K+n26UIw0F1D+s8EmBKRSNqCInwW2US8JTvgufbvsYeddIAe+Q1kookPhPKMV19zyKHZsw==',key_name='tempest-keypair-347425152',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c8760bce6747b1a4ba3511f8705506',ramdisk_id='',reservation_id='r-5920xcy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-418976095',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-418976095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:39:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11072876e4694e33bece015a47248409',uuid=aad8e98b-f3fc-4b25-bde9-310210ec6f13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.139 238945 DEBUG nova.network.os_vif_util [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converting VIF {"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.140 238945 DEBUG nova.network.os_vif_util [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.141 238945 DEBUG os_vif [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.142 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.142 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.143 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.150 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap558630aa-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.150 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap558630aa-19, col_values=(('external_ids', {'iface-id': '558630aa-19e2-4422-b561-e8f9bf906893', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:1b:ef', 'vm-uuid': 'aad8e98b-f3fc-4b25-bde9-310210ec6f13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.152 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:06 np0005597378 NetworkManager[48904]: <info>  [1769521206.1536] manager: (tap558630aa-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.163 238945 INFO os_vif [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19')#033[00m
Jan 27 08:40:06 np0005597378 podman[249752]: 2026-01-27 13:40:06.252521112 +0000 UTC m=+0.050503137 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 08:40:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.323 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.324 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.324 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.324 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] No VIF found with MAC fa:16:3e:cf:1b:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.325 238945 INFO nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Using config drive#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.346 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.975 238945 INFO nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Creating config drive at /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/disk.config#033[00m
Jan 27 08:40:06 np0005597378 nova_compute[238941]: 2026-01-27 13:40:06.982 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqpiic1iz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.108 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqpiic1iz" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.136 238945 DEBUG nova.storage.rbd_utils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] rbd image aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.140 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/disk.config aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.386 238945 DEBUG oslo_concurrency.processutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/disk.config aad8e98b-f3fc-4b25-bde9-310210ec6f13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.389 238945 INFO nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Deleting local config drive /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13/disk.config because it was imported into RBD.#033[00m
Jan 27 08:40:07 np0005597378 kernel: tap558630aa-19: entered promiscuous mode
Jan 27 08:40:07 np0005597378 NetworkManager[48904]: <info>  [1769521207.4423] manager: (tap558630aa-19): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 27 08:40:07 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:07Z|00035|binding|INFO|Claiming lport 558630aa-19e2-4422-b561-e8f9bf906893 for this chassis.
Jan 27 08:40:07 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:07Z|00036|binding|INFO|558630aa-19e2-4422-b561-e8f9bf906893: Claiming fa:16:3e:cf:1b:ef 10.100.0.10
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.445 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.450 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:1b:ef 10.100.0.10'], port_security=['fa:16:3e:cf:1b:ef 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'aad8e98b-f3fc-4b25-bde9-310210ec6f13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d498e730-2c72-4423-80f9-9db85c3d90b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c8760bce6747b1a4ba3511f8705506', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31419324-7d4c-43e9-852f-e0d589f988c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=475bcc00-e7b6-41a0-91e0-5d0bed50aab6, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=558630aa-19e2-4422-b561-e8f9bf906893) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.452 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 558630aa-19e2-4422-b561-e8f9bf906893 in datapath d498e730-2c72-4423-80f9-9db85c3d90b3 bound to our chassis#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.453 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d498e730-2c72-4423-80f9-9db85c3d90b3#033[00m
Jan 27 08:40:07 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:07Z|00037|binding|INFO|Setting lport 558630aa-19e2-4422-b561-e8f9bf906893 ovn-installed in OVS
Jan 27 08:40:07 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:07Z|00038|binding|INFO|Setting lport 558630aa-19e2-4422-b561-e8f9bf906893 up in Southbound
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.466 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[173891b5-0e0f-42d9-a24d-f9cb8fe4d89e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.466 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd498e730-21 in ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.468 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd498e730-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.468 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[086ff083-a852-4d6e-be64-7dd4764a7a52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.469 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f47a4c-94c9-4a73-81b3-458b3995f3df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 systemd-udevd[249843]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:40:07 np0005597378 systemd-machined[207425]: New machine qemu-8-instance-00000008.
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.482 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[686985f3-17f0-4fc3-9f06-61a87bd425a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 NetworkManager[48904]: <info>  [1769521207.4923] device (tap558630aa-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:40:07 np0005597378 NetworkManager[48904]: <info>  [1769521207.4931] device (tap558630aa-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:40:07 np0005597378 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.510 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8196b8-da18-49cc-9095-c15915fd1941]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.539 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c8082daf-5afa-474e-ba56-5041c8cef552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 systemd-udevd[249848]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.546 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eaad0af7-cdc7-45c1-b5bc-d120d804559d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 NetworkManager[48904]: <info>  [1769521207.5470] manager: (tapd498e730-20): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.577 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[56ae0e85-abfc-42e6-bb76-b8d501284295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.580 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8474b1-e906-4b3d-8cae-42ed30a2b439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 NetworkManager[48904]: <info>  [1769521207.6013] device (tapd498e730-20): carrier: link connected
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.606 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6dafcc95-84bd-4595-808c-fe61dfbe3a38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.627 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9456220-d92d-46f6-8d76-038e650b0b5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd498e730-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:09:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384722, 'reachable_time': 26230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249877, 'error': None, 'target': 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.643 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[053f73cc-f836-4445-a5be-3a36b9dd7838]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:95e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384722, 'tstamp': 384722}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249878, 'error': None, 'target': 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.651 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "259f6a14-9cd8-416b-bef0-c3e0bf708340" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.651 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "259f6a14-9cd8-416b-bef0-c3e0bf708340" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.651 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "259f6a14-9cd8-416b-bef0-c3e0bf708340-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.652 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "259f6a14-9cd8-416b-bef0-c3e0bf708340-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.652 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "259f6a14-9cd8-416b-bef0-c3e0bf708340-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.653 238945 INFO nova.compute.manager [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Terminating instance#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.654 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "refresh_cache-259f6a14-9cd8-416b-bef0-c3e0bf708340" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.654 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquired lock "refresh_cache-259f6a14-9cd8-416b-bef0-c3e0bf708340" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.654 238945 DEBUG nova.network.neutron [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.675 238945 DEBUG nova.compute.manager [req-290263ca-2f55-4b2c-8134-e08e18884f20 req-c13dd999-8936-433e-880c-c3357da5ce94 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.675 238945 DEBUG oslo_concurrency.lockutils [req-290263ca-2f55-4b2c-8134-e08e18884f20 req-c13dd999-8936-433e-880c-c3357da5ce94 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.676 238945 DEBUG oslo_concurrency.lockutils [req-290263ca-2f55-4b2c-8134-e08e18884f20 req-c13dd999-8936-433e-880c-c3357da5ce94 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.676 238945 DEBUG oslo_concurrency.lockutils [req-290263ca-2f55-4b2c-8134-e08e18884f20 req-c13dd999-8936-433e-880c-c3357da5ce94 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.676 238945 DEBUG nova.compute.manager [req-290263ca-2f55-4b2c-8134-e08e18884f20 req-c13dd999-8936-433e-880c-c3357da5ce94 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Processing event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.677 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4100086a-42f2-4e75-8733-68a577c433ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd498e730-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:09:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384722, 'reachable_time': 26230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249879, 'error': None, 'target': 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.725 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6824e5-99b9-436b-9f6f-c940f87a617d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.791 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30a93601-cb67-473b-8be1-9a6a85cec963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.793 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd498e730-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.793 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.793 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd498e730-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.795 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:07 np0005597378 kernel: tapd498e730-20: entered promiscuous mode
Jan 27 08:40:07 np0005597378 NetworkManager[48904]: <info>  [1769521207.7972] manager: (tapd498e730-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.798 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.798 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd498e730-20, col_values=(('external_ids', {'iface-id': '8c35d240-f8e0-427c-9fae-48cfa2369c72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:07 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:07Z|00039|binding|INFO|Releasing lport 8c35d240-f8e0-427c-9fae-48cfa2369c72 from this chassis (sb_readonly=0)
Jan 27 08:40:07 np0005597378 nova_compute[238941]: 2026-01-27 13:40:07.817 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.818 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d498e730-2c72-4423-80f9-9db85c3d90b3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d498e730-2c72-4423-80f9-9db85c3d90b3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.819 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c0552006-7344-442c-8d06-bf25ab858c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.820 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-d498e730-2c72-4423-80f9-9db85c3d90b3
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/d498e730-2c72-4423-80f9-9db85c3d90b3.pid.haproxy
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID d498e730-2c72-4423-80f9-9db85c3d90b3
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:40:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:07.820 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'env', 'PROCESS_TAG=haproxy-d498e730-2c72-4423-80f9-9db85c3d90b3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d498e730-2c72-4423-80f9-9db85c3d90b3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:40:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 209 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 5.7 MiB/s wr, 121 op/s
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.376 238945 DEBUG nova.network.neutron [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:40:08 np0005597378 podman[249947]: 2026-01-27 13:40:08.287316463 +0000 UTC m=+0.027006281 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.521 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521208.5206192, aad8e98b-f3fc-4b25-bde9-310210ec6f13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.521 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] VM Started (Lifecycle Event)#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.523 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.527 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.533 238945 INFO nova.virt.libvirt.driver [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Instance spawned successfully.#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.535 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.550 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.556 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.582 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.583 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521208.5207865, aad8e98b-f3fc-4b25-bde9-310210ec6f13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.583 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.589 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.590 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.590 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.591 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.591 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.592 238945 DEBUG nova.virt.libvirt.driver [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.601 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.605 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521208.526795, aad8e98b-f3fc-4b25-bde9-310210ec6f13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.606 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.631 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.640 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.669 238945 INFO nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Took 10.01 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.670 238945 DEBUG nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.671 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.690 238945 DEBUG nova.network.neutron [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.709 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Releasing lock "refresh_cache-259f6a14-9cd8-416b-bef0-c3e0bf708340" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.710 238945 DEBUG nova.compute.manager [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.717 238945 INFO nova.virt.libvirt.driver [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Instance destroyed successfully.#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.718 238945 DEBUG nova.objects.instance [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lazy-loading 'resources' on Instance uuid 259f6a14-9cd8-416b-bef0-c3e0bf708340 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:40:08 np0005597378 podman[249947]: 2026-01-27 13:40:08.734605371 +0000 UTC m=+0.474295189 container create f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.809 238945 INFO nova.compute.manager [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Took 11.66 seconds to build instance.#033[00m
Jan 27 08:40:08 np0005597378 nova_compute[238941]: 2026-01-27 13:40:08.893 238945 DEBUG oslo_concurrency.lockutils [None req-54ff4a4e-68c9-4ca7-84e0-a103b4e8eca5 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:08 np0005597378 systemd[1]: Started libpod-conmon-f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a.scope.
Jan 27 08:40:08 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:40:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8ef4fa3e8c90a013155b674ab0282224988c43f39bf1987a30926144ac64ff9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:09 np0005597378 nova_compute[238941]: 2026-01-27 13:40:09.002 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521194.0013242, e494a15a-7ac1-47d9-be70-22ec46b36797 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:40:09 np0005597378 nova_compute[238941]: 2026-01-27 13:40:09.002 238945 INFO nova.compute.manager [-] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:40:09 np0005597378 nova_compute[238941]: 2026-01-27 13:40:09.020 238945 DEBUG nova.compute.manager [None req-b2bbc6bd-5715-4917-a7e4-255912fcb366 - - - - - -] [instance: e494a15a-7ac1-47d9-be70-22ec46b36797] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:09 np0005597378 podman[249947]: 2026-01-27 13:40:09.050143414 +0000 UTC m=+0.789833262 container init f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:40:09 np0005597378 podman[249947]: 2026-01-27 13:40:09.0566328 +0000 UTC m=+0.796322618 container start f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 08:40:09 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [NOTICE]   (250008) : New worker (250010) forked
Jan 27 08:40:09 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [NOTICE]   (250008) : Loading success.
Jan 27 08:40:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 27 08:40:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 204 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 5.7 MiB/s wr, 178 op/s
Jan 27 08:40:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 27 08:40:10 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 27 08:40:10 np0005597378 nova_compute[238941]: 2026-01-27 13:40:10.386 238945 DEBUG nova.compute.manager [req-b5b18ee5-a7b2-4cd4-8e83-7627df5ba2f5 req-f717db35-1613-4828-9c9d-db0da81b42a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:40:10 np0005597378 nova_compute[238941]: 2026-01-27 13:40:10.387 238945 DEBUG oslo_concurrency.lockutils [req-b5b18ee5-a7b2-4cd4-8e83-7627df5ba2f5 req-f717db35-1613-4828-9c9d-db0da81b42a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:10 np0005597378 nova_compute[238941]: 2026-01-27 13:40:10.387 238945 DEBUG oslo_concurrency.lockutils [req-b5b18ee5-a7b2-4cd4-8e83-7627df5ba2f5 req-f717db35-1613-4828-9c9d-db0da81b42a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:10 np0005597378 nova_compute[238941]: 2026-01-27 13:40:10.387 238945 DEBUG oslo_concurrency.lockutils [req-b5b18ee5-a7b2-4cd4-8e83-7627df5ba2f5 req-f717db35-1613-4828-9c9d-db0da81b42a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:10 np0005597378 nova_compute[238941]: 2026-01-27 13:40:10.387 238945 DEBUG nova.compute.manager [req-b5b18ee5-a7b2-4cd4-8e83-7627df5ba2f5 req-f717db35-1613-4828-9c9d-db0da81b42a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] No waiting events found dispatching network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:40:10 np0005597378 nova_compute[238941]: 2026-01-27 13:40:10.387 238945 WARNING nova.compute.manager [req-b5b18ee5-a7b2-4cd4-8e83-7627df5ba2f5 req-f717db35-1613-4828-9c9d-db0da81b42a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received unexpected event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:40:11 np0005597378 nova_compute[238941]: 2026-01-27 13:40:11.153 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:40:11 np0005597378 nova_compute[238941]: 2026-01-27 13:40:11.509 238945 INFO nova.virt.libvirt.driver [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Deleting instance files /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340_del#033[00m
Jan 27 08:40:11 np0005597378 nova_compute[238941]: 2026-01-27 13:40:11.510 238945 INFO nova.virt.libvirt.driver [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Deletion of /var/lib/nova/instances/259f6a14-9cd8-416b-bef0-c3e0bf708340_del complete#033[00m
Jan 27 08:40:11 np0005597378 nova_compute[238941]: 2026-01-27 13:40:11.575 238945 INFO nova.compute.manager [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Took 2.86 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:40:11 np0005597378 nova_compute[238941]: 2026-01-27 13:40:11.576 238945 DEBUG oslo.service.loopingcall [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:40:11 np0005597378 nova_compute[238941]: 2026-01-27 13:40:11.576 238945 DEBUG nova.compute.manager [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:40:11 np0005597378 nova_compute[238941]: 2026-01-27 13:40:11.576 238945 DEBUG nova.network.neutron [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:40:11 np0005597378 nova_compute[238941]: 2026-01-27 13:40:11.637 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:11 np0005597378 nova_compute[238941]: 2026-01-27 13:40:11.902 238945 DEBUG nova.network.neutron [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:40:11 np0005597378 nova_compute[238941]: 2026-01-27 13:40:11.914 238945 DEBUG nova.network.neutron [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:40:11 np0005597378 nova_compute[238941]: 2026-01-27 13:40:11.928 238945 INFO nova.compute.manager [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Took 0.35 seconds to deallocate network for instance.#033[00m
Jan 27 08:40:11 np0005597378 nova_compute[238941]: 2026-01-27 13:40:11.968 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:11 np0005597378 nova_compute[238941]: 2026-01-27 13:40:11.969 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 204 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 182 op/s
Jan 27 08:40:12 np0005597378 nova_compute[238941]: 2026-01-27 13:40:12.081 238945 DEBUG oslo_concurrency.processutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:12 np0005597378 nova_compute[238941]: 2026-01-27 13:40:12.513 238945 DEBUG nova.compute.manager [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-changed-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:40:12 np0005597378 nova_compute[238941]: 2026-01-27 13:40:12.513 238945 DEBUG nova.compute.manager [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Refreshing instance network info cache due to event network-changed-558630aa-19e2-4422-b561-e8f9bf906893. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:40:12 np0005597378 nova_compute[238941]: 2026-01-27 13:40:12.514 238945 DEBUG oslo_concurrency.lockutils [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:40:12 np0005597378 nova_compute[238941]: 2026-01-27 13:40:12.514 238945 DEBUG oslo_concurrency.lockutils [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:40:12 np0005597378 nova_compute[238941]: 2026-01-27 13:40:12.514 238945 DEBUG nova.network.neutron [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Refreshing network info cache for port 558630aa-19e2-4422-b561-e8f9bf906893 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:40:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:40:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738458867' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:40:12 np0005597378 nova_compute[238941]: 2026-01-27 13:40:12.767 238945 DEBUG oslo_concurrency.processutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:12 np0005597378 nova_compute[238941]: 2026-01-27 13:40:12.773 238945 DEBUG nova.compute.provider_tree [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:40:12 np0005597378 nova_compute[238941]: 2026-01-27 13:40:12.825 238945 DEBUG nova.scheduler.client.report [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:40:12 np0005597378 nova_compute[238941]: 2026-01-27 13:40:12.857 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:12 np0005597378 nova_compute[238941]: 2026-01-27 13:40:12.890 238945 INFO nova.scheduler.client.report [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Deleted allocations for instance 259f6a14-9cd8-416b-bef0-c3e0bf708340#033[00m
Jan 27 08:40:12 np0005597378 nova_compute[238941]: 2026-01-27 13:40:12.956 238945 DEBUG oslo_concurrency.lockutils [None req-6e56f00b-1caf-463c-8319-bc13fcc419be 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "259f6a14-9cd8-416b-bef0-c3e0bf708340" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 27 08:40:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 27 08:40:13 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 27 08:40:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 543 KiB/s wr, 235 op/s
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.275 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "730980bf-3349-4faf-8757-7bcc05dac289" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.275 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "730980bf-3349-4faf-8757-7bcc05dac289" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.275 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "730980bf-3349-4faf-8757-7bcc05dac289-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.276 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "730980bf-3349-4faf-8757-7bcc05dac289-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.276 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "730980bf-3349-4faf-8757-7bcc05dac289-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.277 238945 INFO nova.compute.manager [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Terminating instance#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.278 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "refresh_cache-730980bf-3349-4faf-8757-7bcc05dac289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.278 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquired lock "refresh_cache-730980bf-3349-4faf-8757-7bcc05dac289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.278 238945 DEBUG nova.network.neutron [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.282 238945 DEBUG nova.network.neutron [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updated VIF entry in instance network info cache for port 558630aa-19e2-4422-b561-e8f9bf906893. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.283 238945 DEBUG nova.network.neutron [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updating instance_info_cache with network_info: [{"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.301 238945 DEBUG oslo_concurrency.lockutils [req-2a40cb07-f640-47ad-aaee-f4741ce5e549 req-6fadb497-4329-498c-b6f6-b193b16a46fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.423 238945 DEBUG nova.network.neutron [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.668 238945 DEBUG nova.network.neutron [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.681 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Releasing lock "refresh_cache-730980bf-3349-4faf-8757-7bcc05dac289" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:40:14 np0005597378 nova_compute[238941]: 2026-01-27 13:40:14.682 238945 DEBUG nova.compute.manager [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:40:14 np0005597378 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 27 08:40:14 np0005597378 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 13.198s CPU time.
Jan 27 08:40:14 np0005597378 systemd-machined[207425]: Machine qemu-6-instance-00000006 terminated.
Jan 27 08:40:15 np0005597378 nova_compute[238941]: 2026-01-27 13:40:15.108 238945 INFO nova.virt.libvirt.driver [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Instance destroyed successfully.#033[00m
Jan 27 08:40:15 np0005597378 nova_compute[238941]: 2026-01-27 13:40:15.109 238945 DEBUG nova.objects.instance [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lazy-loading 'resources' on Instance uuid 730980bf-3349-4faf-8757-7bcc05dac289 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:40:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 169 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 75 KiB/s wr, 217 op/s
Jan 27 08:40:16 np0005597378 nova_compute[238941]: 2026-01-27 13:40:16.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:40:16 np0005597378 nova_compute[238941]: 2026-01-27 13:40:16.640 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:40:17
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', '.mgr', 'volumes', 'default.rgw.meta', 'backups']
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:40:17 np0005597378 nova_compute[238941]: 2026-01-27 13:40:17.393 238945 INFO nova.virt.libvirt.driver [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Deleting instance files /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289_del#033[00m
Jan 27 08:40:17 np0005597378 nova_compute[238941]: 2026-01-27 13:40:17.393 238945 INFO nova.virt.libvirt.driver [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Deletion of /var/lib/nova/instances/730980bf-3349-4faf-8757-7bcc05dac289_del complete#033[00m
Jan 27 08:40:17 np0005597378 nova_compute[238941]: 2026-01-27 13:40:17.497 238945 INFO nova.compute.manager [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Took 2.81 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:40:17 np0005597378 nova_compute[238941]: 2026-01-27 13:40:17.498 238945 DEBUG oslo.service.loopingcall [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:40:17 np0005597378 nova_compute[238941]: 2026-01-27 13:40:17.498 238945 DEBUG nova.compute.manager [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:40:17 np0005597378 nova_compute[238941]: 2026-01-27 13:40:17.498 238945 DEBUG nova.network.neutron [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:40:17 np0005597378 nova_compute[238941]: 2026-01-27 13:40:17.807 238945 DEBUG nova.network.neutron [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:40:17 np0005597378 nova_compute[238941]: 2026-01-27 13:40:17.824 238945 DEBUG nova.network.neutron [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:40:17 np0005597378 nova_compute[238941]: 2026-01-27 13:40:17.841 238945 INFO nova.compute.manager [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Took 0.34 seconds to deallocate network for instance.#033[00m
Jan 27 08:40:17 np0005597378 nova_compute[238941]: 2026-01-27 13:40:17.887 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:17 np0005597378 nova_compute[238941]: 2026-01-27 13:40:17.888 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:40:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:40:17 np0005597378 nova_compute[238941]: 2026-01-27 13:40:17.946 238945 DEBUG oslo_concurrency.processutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 131 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 118 op/s
Jan 27 08:40:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:40:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863995503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:40:18 np0005597378 nova_compute[238941]: 2026-01-27 13:40:18.532 238945 DEBUG oslo_concurrency.processutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:18 np0005597378 nova_compute[238941]: 2026-01-27 13:40:18.537 238945 DEBUG nova.compute.provider_tree [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:40:18 np0005597378 nova_compute[238941]: 2026-01-27 13:40:18.560 238945 DEBUG nova.scheduler.client.report [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:40:18 np0005597378 nova_compute[238941]: 2026-01-27 13:40:18.672 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:40:18 np0005597378 nova_compute[238941]: 2026-01-27 13:40:18.736 238945 INFO nova.scheduler.client.report [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Deleted allocations for instance 730980bf-3349-4faf-8757-7bcc05dac289
Jan 27 08:40:18 np0005597378 nova_compute[238941]: 2026-01-27 13:40:18.865 238945 DEBUG oslo_concurrency.lockutils [None req-e6030b3b-a77e-41d4-b47b-91edc493307f 224925de56f64feca98f9fffb9810e07 09c59af3df414ec29b63dc65458aa7c2 - - default default] Lock "730980bf-3349-4faf-8757-7bcc05dac289" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:40:19 np0005597378 nova_compute[238941]: 2026-01-27 13:40:19.730 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521204.7272933, 259f6a14-9cd8-416b-bef0-c3e0bf708340 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:40:19 np0005597378 nova_compute[238941]: 2026-01-27 13:40:19.731 238945 INFO nova.compute.manager [-] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] VM Stopped (Lifecycle Event)
Jan 27 08:40:19 np0005597378 nova_compute[238941]: 2026-01-27 13:40:19.757 238945 DEBUG nova.compute.manager [None req-a91be8b8-98ed-47f1-a645-02ee429acf35 - - - - - -] [instance: 259f6a14-9cd8-416b-bef0-c3e0bf708340] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:40:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 20 KiB/s wr, 142 op/s
Jan 27 08:40:21 np0005597378 nova_compute[238941]: 2026-01-27 13:40:21.160 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:40:21 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:40:21 np0005597378 nova_compute[238941]: 2026-01-27 13:40:21.642 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:40:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.8 KiB/s wr, 125 op/s
Jan 27 08:40:22 np0005597378 podman[250229]: 2026-01-27 13:40:22.013082238 +0000 UTC m=+0.023529377 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:40:22 np0005597378 podman[250229]: 2026-01-27 13:40:22.133282559 +0000 UTC m=+0.143729678 container create 7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:40:22 np0005597378 nova_compute[238941]: 2026-01-27 13:40:22.143 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:40:22 np0005597378 systemd[1]: Started libpod-conmon-7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7.scope.
Jan 27 08:40:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:40:22 np0005597378 podman[250229]: 2026-01-27 13:40:22.344683427 +0000 UTC m=+0.355130576 container init 7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:40:22 np0005597378 podman[250229]: 2026-01-27 13:40:22.353575947 +0000 UTC m=+0.364023066 container start 7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:40:22 np0005597378 exciting_carson[250245]: 167 167
Jan 27 08:40:22 np0005597378 systemd[1]: libpod-7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7.scope: Deactivated successfully.
Jan 27 08:40:22 np0005597378 podman[250229]: 2026-01-27 13:40:22.390750693 +0000 UTC m=+0.401197832 container attach 7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:40:22 np0005597378 podman[250229]: 2026-01-27 13:40:22.392243493 +0000 UTC m=+0.402690632 container died 7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 08:40:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a9454ad602aa51dfd192cf8454c9c397055b56bb6166c2b3282f143cae2fb900-merged.mount: Deactivated successfully.
Jan 27 08:40:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:40:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:40:22 np0005597378 podman[250229]: 2026-01-27 13:40:22.61176768 +0000 UTC m=+0.622214799 container remove 7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:40:22 np0005597378 systemd[1]: libpod-conmon-7d700fc662cf7c7ac14264fd275253f7ff498646ab1f64973c6a0274742a25b7.scope: Deactivated successfully.
Jan 27 08:40:22 np0005597378 podman[250269]: 2026-01-27 13:40:22.847424043 +0000 UTC m=+0.095719089 container create 7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ganguly, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:40:22 np0005597378 podman[250269]: 2026-01-27 13:40:22.780444102 +0000 UTC m=+0.028739168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:40:22 np0005597378 systemd[1]: Started libpod-conmon-7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc.scope.
Jan 27 08:40:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:40:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d58069e8b76f2ee229707a3283ec4eafcca1a92b9beaee803eb8d27bd0762de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d58069e8b76f2ee229707a3283ec4eafcca1a92b9beaee803eb8d27bd0762de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d58069e8b76f2ee229707a3283ec4eafcca1a92b9beaee803eb8d27bd0762de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d58069e8b76f2ee229707a3283ec4eafcca1a92b9beaee803eb8d27bd0762de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d58069e8b76f2ee229707a3283ec4eafcca1a92b9beaee803eb8d27bd0762de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:23 np0005597378 podman[250269]: 2026-01-27 13:40:23.071944775 +0000 UTC m=+0.320239841 container init 7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:40:23 np0005597378 podman[250269]: 2026-01-27 13:40:23.080296701 +0000 UTC m=+0.328591747 container start 7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ganguly, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 08:40:23 np0005597378 podman[250269]: 2026-01-27 13:40:23.117931259 +0000 UTC m=+0.366226335 container attach 7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 08:40:23 np0005597378 goofy_ganguly[250286]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:40:23 np0005597378 goofy_ganguly[250286]: --> All data devices are unavailable
Jan 27 08:40:23 np0005597378 systemd[1]: libpod-7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc.scope: Deactivated successfully.
Jan 27 08:40:23 np0005597378 podman[250306]: 2026-01-27 13:40:23.778103554 +0000 UTC m=+0.025984654 container died 7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:40:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4d58069e8b76f2ee229707a3283ec4eafcca1a92b9beaee803eb8d27bd0762de-merged.mount: Deactivated successfully.
Jan 27 08:40:24 np0005597378 podman[250306]: 2026-01-27 13:40:24.022221626 +0000 UTC m=+0.270102686 container remove 7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ganguly, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:40:24 np0005597378 systemd[1]: libpod-conmon-7cf93374170557a76de0d679884cd959c3e0cf192510d2eca1a5c4545c3714dc.scope: Deactivated successfully.
Jan 27 08:40:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 102 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 228 KiB/s rd, 1.3 MiB/s wr, 73 op/s
Jan 27 08:40:24 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:24Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:1b:ef 10.100.0.10
Jan 27 08:40:24 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:24Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:1b:ef 10.100.0.10
Jan 27 08:40:24 np0005597378 podman[250383]: 2026-01-27 13:40:24.496464772 +0000 UTC m=+0.052082919 container create b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:40:24 np0005597378 systemd[1]: Started libpod-conmon-b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb.scope.
Jan 27 08:40:24 np0005597378 podman[250383]: 2026-01-27 13:40:24.467998783 +0000 UTC m=+0.023616950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:40:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:40:24 np0005597378 podman[250383]: 2026-01-27 13:40:24.607854704 +0000 UTC m=+0.163472881 container init b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:40:24 np0005597378 podman[250383]: 2026-01-27 13:40:24.616027495 +0000 UTC m=+0.171645642 container start b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 08:40:24 np0005597378 vibrant_stonebraker[250400]: 167 167
Jan 27 08:40:24 np0005597378 systemd[1]: libpod-b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb.scope: Deactivated successfully.
Jan 27 08:40:24 np0005597378 podman[250383]: 2026-01-27 13:40:24.669815141 +0000 UTC m=+0.225433298 container attach b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 08:40:24 np0005597378 podman[250383]: 2026-01-27 13:40:24.67053487 +0000 UTC m=+0.226153037 container died b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_stonebraker, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:40:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay-329c75723c89e5c8763adb45abff66935544a1e93bddd4b624c0e0e992470bf9-merged.mount: Deactivated successfully.
Jan 27 08:40:24 np0005597378 podman[250383]: 2026-01-27 13:40:24.81218258 +0000 UTC m=+0.367800727 container remove b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_stonebraker, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:40:24 np0005597378 systemd[1]: libpod-conmon-b9384e744b8e650f896d4562f6f486726dab4d1321b7ddc21289f3d3023c60cb.scope: Deactivated successfully.
Jan 27 08:40:25 np0005597378 podman[250425]: 2026-01-27 13:40:25.007952215 +0000 UTC m=+0.070398675 container create 466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 08:40:25 np0005597378 podman[250425]: 2026-01-27 13:40:24.962964198 +0000 UTC m=+0.025410678 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:40:25 np0005597378 systemd[1]: Started libpod-conmon-466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd.scope.
Jan 27 08:40:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:40:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60260ab483f5e4574b95eb3f4e1c3513b972b763edb6b72bc12b176ab96cdc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60260ab483f5e4574b95eb3f4e1c3513b972b763edb6b72bc12b176ab96cdc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60260ab483f5e4574b95eb3f4e1c3513b972b763edb6b72bc12b176ab96cdc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60260ab483f5e4574b95eb3f4e1c3513b972b763edb6b72bc12b176ab96cdc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:25 np0005597378 podman[250425]: 2026-01-27 13:40:25.135197036 +0000 UTC m=+0.197643516 container init 466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_herschel, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:40:25 np0005597378 podman[250425]: 2026-01-27 13:40:25.142105373 +0000 UTC m=+0.204551833 container start 466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_herschel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:40:25 np0005597378 podman[250425]: 2026-01-27 13:40:25.15491547 +0000 UTC m=+0.217361930 container attach 466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_herschel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]: {
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:    "0": [
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:        {
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "devices": [
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "/dev/loop3"
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            ],
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_name": "ceph_lv0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_size": "21470642176",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "name": "ceph_lv0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "tags": {
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.cluster_name": "ceph",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.crush_device_class": "",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.encrypted": "0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.objectstore": "bluestore",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.osd_id": "0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.type": "block",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.vdo": "0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.with_tpm": "0"
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            },
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "type": "block",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "vg_name": "ceph_vg0"
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:        }
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:    ],
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:    "1": [
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:        {
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "devices": [
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "/dev/loop4"
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            ],
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_name": "ceph_lv1",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_size": "21470642176",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "name": "ceph_lv1",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "tags": {
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.cluster_name": "ceph",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.crush_device_class": "",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.encrypted": "0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.objectstore": "bluestore",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.osd_id": "1",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.type": "block",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.vdo": "0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.with_tpm": "0"
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            },
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "type": "block",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "vg_name": "ceph_vg1"
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:        }
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:    ],
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:    "2": [
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:        {
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "devices": [
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "/dev/loop5"
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            ],
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_name": "ceph_lv2",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_size": "21470642176",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "name": "ceph_lv2",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "tags": {
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.cluster_name": "ceph",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.crush_device_class": "",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.encrypted": "0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.objectstore": "bluestore",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.osd_id": "2",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.type": "block",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.vdo": "0",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:                "ceph.with_tpm": "0"
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            },
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "type": "block",
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:            "vg_name": "ceph_vg2"
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:        }
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]:    ]
Jan 27 08:40:25 np0005597378 priceless_herschel[250441]: }
Jan 27 08:40:25 np0005597378 systemd[1]: libpod-466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd.scope: Deactivated successfully.
Jan 27 08:40:25 np0005597378 podman[250425]: 2026-01-27 13:40:25.457446972 +0000 UTC m=+0.519893432 container died 466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:40:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c60260ab483f5e4574b95eb3f4e1c3513b972b763edb6b72bc12b176ab96cdc0-merged.mount: Deactivated successfully.
Jan 27 08:40:25 np0005597378 podman[250425]: 2026-01-27 13:40:25.6356202 +0000 UTC m=+0.698066660 container remove 466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_herschel, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 27 08:40:25 np0005597378 systemd[1]: libpod-conmon-466efeef7c11b6ca429c0a729c8f9fadb15eda88ca716771895a48b274aa53dd.scope: Deactivated successfully.
Jan 27 08:40:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 120 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 452 KiB/s rd, 2.6 MiB/s wr, 132 op/s
Jan 27 08:40:26 np0005597378 podman[250524]: 2026-01-27 13:40:26.145586363 +0000 UTC m=+0.063370276 container create fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:40:26 np0005597378 nova_compute[238941]: 2026-01-27 13:40:26.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:40:26 np0005597378 podman[250524]: 2026-01-27 13:40:26.106201038 +0000 UTC m=+0.023984971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:40:26 np0005597378 systemd[1]: Started libpod-conmon-fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2.scope.
Jan 27 08:40:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:40:26 np0005597378 podman[250524]: 2026-01-27 13:40:26.295662091 +0000 UTC m=+0.213446024 container init fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:40:26 np0005597378 podman[250524]: 2026-01-27 13:40:26.303069432 +0000 UTC m=+0.220853345 container start fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:40:26 np0005597378 inspiring_neumann[250540]: 167 167
Jan 27 08:40:26 np0005597378 systemd[1]: libpod-fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2.scope: Deactivated successfully.
Jan 27 08:40:26 np0005597378 podman[250524]: 2026-01-27 13:40:26.317127452 +0000 UTC m=+0.234911385 container attach fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:40:26 np0005597378 podman[250524]: 2026-01-27 13:40:26.317590615 +0000 UTC m=+0.235374538 container died fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:40:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:40:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay-698dc6a1b63c34f8e51dee13cca89c9abeea057b91d87104b14a7d2557d147ce-merged.mount: Deactivated successfully.
Jan 27 08:40:26 np0005597378 nova_compute[238941]: 2026-01-27 13:40:26.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:40:26 np0005597378 nova_compute[238941]: 2026-01-27 13:40:26.643 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:40:26 np0005597378 podman[250524]: 2026-01-27 13:40:26.650451876 +0000 UTC m=+0.568235789 container remove fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:40:26 np0005597378 systemd[1]: libpod-conmon-fc48c88511d513e27be49f46eedffe0fb2a3db1c6dbb2e0bc4e413f8f136ddc2.scope: Deactivated successfully.
Jan 27 08:40:26 np0005597378 podman[250563]: 2026-01-27 13:40:26.822074728 +0000 UTC m=+0.023058665 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:40:26 np0005597378 podman[250563]: 2026-01-27 13:40:26.963844672 +0000 UTC m=+0.164828599 container create b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:40:27 np0005597378 systemd[1]: Started libpod-conmon-b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f.scope.
Jan 27 08:40:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:40:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1feb130c6575f1201663e8d04cdcd295fbb126a2ae48c8ed46975741c90ef4d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1feb130c6575f1201663e8d04cdcd295fbb126a2ae48c8ed46975741c90ef4d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1feb130c6575f1201663e8d04cdcd295fbb126a2ae48c8ed46975741c90ef4d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1feb130c6575f1201663e8d04cdcd295fbb126a2ae48c8ed46975741c90ef4d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:27 np0005597378 podman[250563]: 2026-01-27 13:40:27.190382989 +0000 UTC m=+0.391366946 container init b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_morse, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:40:27 np0005597378 podman[250563]: 2026-01-27 13:40:27.198154949 +0000 UTC m=+0.399138876 container start b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 08:40:27 np0005597378 podman[250563]: 2026-01-27 13:40:27.291040411 +0000 UTC m=+0.492024368 container attach b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000757731287214077 of space, bias 1.0, pg target 0.22731938616422312 quantized to 32 (current 32)
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666696730664196 of space, bias 1.0, pg target 0.20000090191992587 quantized to 32 (current 32)
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2672963595416712e-06 of space, bias 4.0, pg target 0.0015207556314500055 quantized to 16 (current 16)
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:40:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:40:27 np0005597378 lvm[250658]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:40:27 np0005597378 lvm[250659]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:40:27 np0005597378 lvm[250658]: VG ceph_vg0 finished
Jan 27 08:40:27 np0005597378 lvm[250659]: VG ceph_vg1 finished
Jan 27 08:40:27 np0005597378 lvm[250661]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:40:27 np0005597378 lvm[250661]: VG ceph_vg2 finished
Jan 27 08:40:28 np0005597378 lvm[250662]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:40:28 np0005597378 lvm[250663]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:40:28 np0005597378 lvm[250662]: VG ceph_vg2 finished
Jan 27 08:40:28 np0005597378 lvm[250663]: VG ceph_vg0 finished
Jan 27 08:40:28 np0005597378 lvm[250666]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:40:28 np0005597378 lvm[250666]: VG ceph_vg2 finished
Jan 27 08:40:28 np0005597378 upbeat_morse[250579]: {}
Jan 27 08:40:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 121 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 451 KiB/s rd, 2.6 MiB/s wr, 128 op/s
Jan 27 08:40:28 np0005597378 systemd[1]: libpod-b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f.scope: Deactivated successfully.
Jan 27 08:40:28 np0005597378 systemd[1]: libpod-b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f.scope: Consumed 1.351s CPU time.
Jan 27 08:40:28 np0005597378 podman[250563]: 2026-01-27 13:40:28.109469615 +0000 UTC m=+1.310453542 container died b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:40:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1feb130c6575f1201663e8d04cdcd295fbb126a2ae48c8ed46975741c90ef4d5-merged.mount: Deactivated successfully.
Jan 27 08:40:28 np0005597378 podman[250563]: 2026-01-27 13:40:28.896981514 +0000 UTC m=+2.097965441 container remove b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_morse, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:40:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:40:28 np0005597378 systemd[1]: libpod-conmon-b1d0b43a98ac976c8c21f4271371e369feb425b7b2e137ab1044443e778d4c6f.scope: Deactivated successfully.
Jan 27 08:40:28 np0005597378 nova_compute[238941]: 2026-01-27 13:40:28.981 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:28 np0005597378 nova_compute[238941]: 2026-01-27 13:40:28.984 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:28 np0005597378 nova_compute[238941]: 2026-01-27 13:40:28.984 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:28 np0005597378 nova_compute[238941]: 2026-01-27 13:40:28.985 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:28 np0005597378 nova_compute[238941]: 2026-01-27 13:40:28.985 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:28 np0005597378 nova_compute[238941]: 2026-01-27 13:40:28.986 238945 INFO nova.compute.manager [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Terminating instance#033[00m
Jan 27 08:40:28 np0005597378 nova_compute[238941]: 2026-01-27 13:40:28.988 238945 DEBUG nova.compute.manager [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:40:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:40:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:40:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:40:29 np0005597378 kernel: tap558630aa-19 (unregistering): left promiscuous mode
Jan 27 08:40:29 np0005597378 NetworkManager[48904]: <info>  [1769521229.6383] device (tap558630aa-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:40:29 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:29Z|00040|binding|INFO|Releasing lport 558630aa-19e2-4422-b561-e8f9bf906893 from this chassis (sb_readonly=0)
Jan 27 08:40:29 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:29Z|00041|binding|INFO|Setting lport 558630aa-19e2-4422-b561-e8f9bf906893 down in Southbound
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:29 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:29Z|00042|binding|INFO|Removing iface tap558630aa-19 ovn-installed in OVS
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.640 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:29.647 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:1b:ef 10.100.0.10'], port_security=['fa:16:3e:cf:1b:ef 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'aad8e98b-f3fc-4b25-bde9-310210ec6f13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d498e730-2c72-4423-80f9-9db85c3d90b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c8760bce6747b1a4ba3511f8705506', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31419324-7d4c-43e9-852f-e0d589f988c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=475bcc00-e7b6-41a0-91e0-5d0bed50aab6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=558630aa-19e2-4422-b561-e8f9bf906893) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:40:29 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:40:29 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:40:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:29.649 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 558630aa-19e2-4422-b561-e8f9bf906893 in datapath d498e730-2c72-4423-80f9-9db85c3d90b3 unbound from our chassis#033[00m
Jan 27 08:40:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:29.650 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d498e730-2c72-4423-80f9-9db85c3d90b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:40:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:29.652 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05536207-cd62-4a9a-9c9f-b8b05ac731f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:29.652 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 namespace which is not needed anymore#033[00m
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.658 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:29 np0005597378 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 27 08:40:29 np0005597378 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 14.343s CPU time.
Jan 27 08:40:29 np0005597378 systemd-machined[207425]: Machine qemu-8-instance-00000008 terminated.
Jan 27 08:40:29 np0005597378 NetworkManager[48904]: <info>  [1769521229.8087] manager: (tap558630aa-19): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.834 238945 INFO nova.virt.libvirt.driver [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Instance destroyed successfully.#033[00m
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.835 238945 DEBUG nova.objects.instance [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lazy-loading 'resources' on Instance uuid aad8e98b-f3fc-4b25-bde9-310210ec6f13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:40:29 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [NOTICE]   (250008) : haproxy version is 2.8.14-c23fe91
Jan 27 08:40:29 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [NOTICE]   (250008) : path to executable is /usr/sbin/haproxy
Jan 27 08:40:29 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [WARNING]  (250008) : Exiting Master process...
Jan 27 08:40:29 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [WARNING]  (250008) : Exiting Master process...
Jan 27 08:40:29 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [ALERT]    (250008) : Current worker (250010) exited with code 143 (Terminated)
Jan 27 08:40:29 np0005597378 neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3[250004]: [WARNING]  (250008) : All workers exited. Exiting... (0)
Jan 27 08:40:29 np0005597378 systemd[1]: libpod-f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a.scope: Deactivated successfully.
Jan 27 08:40:29 np0005597378 podman[250725]: 2026-01-27 13:40:29.892346244 +0000 UTC m=+0.151641453 container died f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.975 238945 DEBUG nova.virt.libvirt.vif [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:39:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1542454913',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1542454913',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1542454913',id=8,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMwzPpGs9i367sw8MOocvigF4zqItmlmPkAlaT0CtVf6ehKNBvAu8CNmGhT+K+n26UIw0F1D+s8EmBKRSNqCInwW2US8JTvgufbvsYeddIAe+Q1kookPhPKMV19zyKHZsw==',key_name='tempest-keypair-347425152',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:40:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6c8760bce6747b1a4ba3511f8705506',ramdisk_id='',reservation_id='r-5920xcy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-418976095',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-418976095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:40:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11072876e4694e33bece015a47248409',uuid=aad8e98b-f3fc-4b25-bde9-310210ec6f13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.975 238945 DEBUG nova.network.os_vif_util [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converting VIF {"id": "558630aa-19e2-4422-b561-e8f9bf906893", "address": "fa:16:3e:cf:1b:ef", "network": {"id": "d498e730-2c72-4423-80f9-9db85c3d90b3", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1515870511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c8760bce6747b1a4ba3511f8705506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558630aa-19", "ovs_interfaceid": "558630aa-19e2-4422-b561-e8f9bf906893", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.976 238945 DEBUG nova.network.os_vif_util [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.976 238945 DEBUG os_vif [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.977 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.978 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap558630aa-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.979 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.980 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:29 np0005597378 nova_compute[238941]: 2026-01-27 13:40:29.982 238945 INFO os_vif [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:1b:ef,bridge_name='br-int',has_traffic_filtering=True,id=558630aa-19e2-4422-b561-e8f9bf906893,network=Network(d498e730-2c72-4423-80f9-9db85c3d90b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558630aa-19')#033[00m
Jan 27 08:40:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 123 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 442 KiB/s rd, 2.6 MiB/s wr, 86 op/s
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.107 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521215.1054792, 730980bf-3349-4faf-8757-7bcc05dac289 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.107 238945 INFO nova.compute.manager [-] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.132 238945 DEBUG nova.compute.manager [None req-0dd77eb5-a10a-40da-80d6-359178a81105 - - - - - -] [instance: 730980bf-3349-4faf-8757-7bcc05dac289] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.184 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.184 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a-userdata-shm.mount: Deactivated successfully.
Jan 27 08:40:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e8ef4fa3e8c90a013155b674ab0282224988c43f39bf1987a30926144ac64ff9-merged.mount: Deactivated successfully.
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.198 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.257 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.258 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.266 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.267 238945 INFO nova.compute.claims [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.408 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:30 np0005597378 podman[250725]: 2026-01-27 13:40:30.560288629 +0000 UTC m=+0.819583848 container cleanup f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.561 238945 DEBUG nova.compute.manager [req-ef76c368-5724-4c1f-a50d-a4820b46b47e req-3ba30c90-4a3b-479d-a7c1-83cb29ed5d79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-vif-unplugged-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.561 238945 DEBUG oslo_concurrency.lockutils [req-ef76c368-5724-4c1f-a50d-a4820b46b47e req-3ba30c90-4a3b-479d-a7c1-83cb29ed5d79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.562 238945 DEBUG oslo_concurrency.lockutils [req-ef76c368-5724-4c1f-a50d-a4820b46b47e req-3ba30c90-4a3b-479d-a7c1-83cb29ed5d79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.562 238945 DEBUG oslo_concurrency.lockutils [req-ef76c368-5724-4c1f-a50d-a4820b46b47e req-3ba30c90-4a3b-479d-a7c1-83cb29ed5d79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.562 238945 DEBUG nova.compute.manager [req-ef76c368-5724-4c1f-a50d-a4820b46b47e req-3ba30c90-4a3b-479d-a7c1-83cb29ed5d79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] No waiting events found dispatching network-vif-unplugged-558630aa-19e2-4422-b561-e8f9bf906893 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:40:30 np0005597378 nova_compute[238941]: 2026-01-27 13:40:30.562 238945 DEBUG nova.compute.manager [req-ef76c368-5724-4c1f-a50d-a4820b46b47e req-3ba30c90-4a3b-479d-a7c1-83cb29ed5d79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-vif-unplugged-558630aa-19e2-4422-b561-e8f9bf906893 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:40:30 np0005597378 systemd[1]: libpod-conmon-f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a.scope: Deactivated successfully.
Jan 27 08:40:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:40:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4147673534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.037 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.043 238945 DEBUG nova.compute.provider_tree [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.064 238945 DEBUG nova.scheduler.client.report [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:40:31 np0005597378 podman[250802]: 2026-01-27 13:40:31.08130504 +0000 UTC m=+0.496482219 container remove f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 08:40:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.086 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[32702b01-c9c3-49ab-9b5b-676422343004]: (4, ('Tue Jan 27 01:40:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 (f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a)\nf6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a\nTue Jan 27 01:40:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 (f6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a)\nf6f81aa4b11f23b81258d574758ae40b93d963969b893480519c00a8fd98195a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.088 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f56945a8-b7a0-43cc-87fe-4668b513195a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.089 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd498e730-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.091 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:31 np0005597378 kernel: tapd498e730-20: left promiscuous mode
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.097 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.098 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.109 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[78aa3d63-2302-451f-acf8-20164100af7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.132 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[81c1fd4c-05ea-4bf2-be33-79250b76af21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.133 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c249f95-2d7e-4078-9ec1-b25a6b82add8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.146 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.146 238945 DEBUG nova.network.neutron [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:40:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.152 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6c21df1d-1643-4351-b220-e254c343bdc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384715, 'reachable_time': 28182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250818, 'error': None, 'target': 'ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.155 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d498e730-2c72-4423-80f9-9db85c3d90b3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:40:31 np0005597378 systemd[1]: run-netns-ovnmeta\x2dd498e730\x2d2c72\x2d4423\x2d80f9\x2d9db85c3d90b3.mount: Deactivated successfully.
Jan 27 08:40:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:31.156 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b5998d24-11f6-4ffc-b688-6ed0904af667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.166 238945 INFO nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.183 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.277 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.279 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.279 238945 INFO nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Creating image(s)#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.299 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.322 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.345 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.348 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.368 238945 DEBUG nova.policy [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8045d106ed8b424aaa83fc2438f630c5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '76574efd3c594ec5ad8e8d556f365038', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:40:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.416 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.417 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.418 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.418 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.447 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.452 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:31 np0005597378 nova_compute[238941]: 2026-01-27 13:40:31.645 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 123 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 418 KiB/s rd, 2.4 MiB/s wr, 81 op/s
Jan 27 08:40:32 np0005597378 nova_compute[238941]: 2026-01-27 13:40:32.309 238945 DEBUG nova.network.neutron [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Successfully created port: 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:40:32 np0005597378 nova_compute[238941]: 2026-01-27 13:40:32.551 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:32 np0005597378 nova_compute[238941]: 2026-01-27 13:40:32.603 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] resizing rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:40:32 np0005597378 podman[250972]: 2026-01-27 13:40:32.766059494 +0000 UTC m=+0.108812144 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 08:40:32 np0005597378 nova_compute[238941]: 2026-01-27 13:40:32.961 238945 DEBUG nova.objects.instance [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lazy-loading 'migration_context' on Instance uuid bb83a99e-76c6-4a1a-8b12-39a44d77f760 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:40:33 np0005597378 nova_compute[238941]: 2026-01-27 13:40:33.020 238945 DEBUG nova.compute.manager [req-73cb899f-900c-42ee-960f-12d53dc9d111 req-b0dbdce8-7718-42b8-82a7-c55ce4bba160 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:40:33 np0005597378 nova_compute[238941]: 2026-01-27 13:40:33.020 238945 DEBUG oslo_concurrency.lockutils [req-73cb899f-900c-42ee-960f-12d53dc9d111 req-b0dbdce8-7718-42b8-82a7-c55ce4bba160 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:33 np0005597378 nova_compute[238941]: 2026-01-27 13:40:33.021 238945 DEBUG oslo_concurrency.lockutils [req-73cb899f-900c-42ee-960f-12d53dc9d111 req-b0dbdce8-7718-42b8-82a7-c55ce4bba160 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:33 np0005597378 nova_compute[238941]: 2026-01-27 13:40:33.021 238945 DEBUG oslo_concurrency.lockutils [req-73cb899f-900c-42ee-960f-12d53dc9d111 req-b0dbdce8-7718-42b8-82a7-c55ce4bba160 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:33 np0005597378 nova_compute[238941]: 2026-01-27 13:40:33.021 238945 DEBUG nova.compute.manager [req-73cb899f-900c-42ee-960f-12d53dc9d111 req-b0dbdce8-7718-42b8-82a7-c55ce4bba160 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] No waiting events found dispatching network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:40:33 np0005597378 nova_compute[238941]: 2026-01-27 13:40:33.021 238945 WARNING nova.compute.manager [req-73cb899f-900c-42ee-960f-12d53dc9d111 req-b0dbdce8-7718-42b8-82a7-c55ce4bba160 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received unexpected event network-vif-plugged-558630aa-19e2-4422-b561-e8f9bf906893 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:40:33 np0005597378 nova_compute[238941]: 2026-01-27 13:40:33.035 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:40:33 np0005597378 nova_compute[238941]: 2026-01-27 13:40:33.035 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Ensure instance console log exists: /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:40:33 np0005597378 nova_compute[238941]: 2026-01-27 13:40:33.036 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:33 np0005597378 nova_compute[238941]: 2026-01-27 13:40:33.036 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:33 np0005597378 nova_compute[238941]: 2026-01-27 13:40:33.036 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:33.766 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:40:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:33.767 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:40:33 np0005597378 nova_compute[238941]: 2026-01-27 13:40:33.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:33 np0005597378 nova_compute[238941]: 2026-01-27 13:40:33.959 238945 DEBUG nova.network.neutron [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Successfully updated port: 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:40:34 np0005597378 nova_compute[238941]: 2026-01-27 13:40:34.040 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:40:34 np0005597378 nova_compute[238941]: 2026-01-27 13:40:34.040 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:40:34 np0005597378 nova_compute[238941]: 2026-01-27 13:40:34.041 238945 DEBUG nova.network.neutron [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:40:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 102 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 394 KiB/s rd, 2.6 MiB/s wr, 109 op/s
Jan 27 08:40:34 np0005597378 nova_compute[238941]: 2026-01-27 13:40:34.354 238945 DEBUG nova.network.neutron [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:40:34 np0005597378 nova_compute[238941]: 2026-01-27 13:40:34.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:40:34 np0005597378 nova_compute[238941]: 2026-01-27 13:40:34.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:40:34 np0005597378 nova_compute[238941]: 2026-01-27 13:40:34.572 238945 INFO nova.virt.libvirt.driver [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Deleting instance files /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13_del#033[00m
Jan 27 08:40:34 np0005597378 nova_compute[238941]: 2026-01-27 13:40:34.573 238945 INFO nova.virt.libvirt.driver [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Deletion of /var/lib/nova/instances/aad8e98b-f3fc-4b25-bde9-310210ec6f13_del complete#033[00m
Jan 27 08:40:34 np0005597378 nova_compute[238941]: 2026-01-27 13:40:34.793 238945 INFO nova.compute.manager [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Took 5.80 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:40:34 np0005597378 nova_compute[238941]: 2026-01-27 13:40:34.793 238945 DEBUG oslo.service.loopingcall [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:40:34 np0005597378 nova_compute[238941]: 2026-01-27 13:40:34.794 238945 DEBUG nova.compute.manager [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:40:34 np0005597378 nova_compute[238941]: 2026-01-27 13:40:34.794 238945 DEBUG nova.network.neutron [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:40:34 np0005597378 nova_compute[238941]: 2026-01-27 13:40:34.980 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.120 238945 DEBUG nova.compute.manager [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.121 238945 DEBUG nova.compute.manager [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing instance network info cache due to event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.121 238945 DEBUG oslo_concurrency.lockutils [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.259 238945 DEBUG nova.network.neutron [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.465 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.466 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Instance network_info: |[{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.466 238945 DEBUG oslo_concurrency.lockutils [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.466 238945 DEBUG nova.network.neutron [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.469 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Start _get_guest_xml network_info=[{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.473 238945 WARNING nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.478 238945 DEBUG nova.virt.libvirt.host [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.478 238945 DEBUG nova.virt.libvirt.host [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.484 238945 DEBUG nova.virt.libvirt.host [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.484 238945 DEBUG nova.virt.libvirt.host [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.485 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.485 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.485 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.486 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.486 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.486 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.486 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.487 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.487 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.487 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.487 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.487 238945 DEBUG nova.virt.hardware [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.490 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:35 np0005597378 nova_compute[238941]: 2026-01-27 13:40:35.967 238945 DEBUG nova.network.neutron [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:40:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 245 KiB/s rd, 2.8 MiB/s wr, 112 op/s
Jan 27 08:40:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:40:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3935361014' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.145 238945 INFO nova.compute.manager [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Took 1.35 seconds to deallocate network for instance.#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.152 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.171 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.176 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.269 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.269 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.413 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.643 238945 DEBUG oslo_concurrency.processutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.658 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:36 np0005597378 podman[251076]: 2026-01-27 13:40:36.703192544 +0000 UTC m=+0.044822463 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:40:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:40:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3542741747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.731 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.733 238945 DEBUG nova.virt.libvirt.vif [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1535303300',display_name='tempest-FloatingIPsAssociationTestJSON-server-1535303300',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1535303300',id=9,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76574efd3c594ec5ad8e8d556f365038',ramdisk_id='',reservation_id='r-sq4zkikx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1663098013',owner_use
r_name='tempest-FloatingIPsAssociationTestJSON-1663098013-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:40:31Z,user_data=None,user_id='8045d106ed8b424aaa83fc2438f630c5',uuid=bb83a99e-76c6-4a1a-8b12-39a44d77f760,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.733 238945 DEBUG nova.network.os_vif_util [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converting VIF {"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.734 238945 DEBUG nova.network.os_vif_util [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.735 238945 DEBUG nova.objects.instance [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lazy-loading 'pci_devices' on Instance uuid bb83a99e-76c6-4a1a-8b12-39a44d77f760 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.786 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  <uuid>bb83a99e-76c6-4a1a-8b12-39a44d77f760</uuid>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  <name>instance-00000009</name>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1535303300</nova:name>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:40:35</nova:creationTime>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:        <nova:user uuid="8045d106ed8b424aaa83fc2438f630c5">tempest-FloatingIPsAssociationTestJSON-1663098013-project-member</nova:user>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:        <nova:project uuid="76574efd3c594ec5ad8e8d556f365038">tempest-FloatingIPsAssociationTestJSON-1663098013</nova:project>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:        <nova:port uuid="402a0a5c-d6b4-4d22-843f-4e65f18d7327">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <entry name="serial">bb83a99e-76c6-4a1a-8b12-39a44d77f760</entry>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <entry name="uuid">bb83a99e-76c6-4a1a-8b12-39a44d77f760</entry>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk.config">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:e6:57:29"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <target dev="tap402a0a5c-d6"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/console.log" append="off"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:40:36 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:40:36 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:40:36 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:40:36 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.788 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Preparing to wait for external event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.788 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.788 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.789 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.789 238945 DEBUG nova.virt.libvirt.vif [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1535303300',display_name='tempest-FloatingIPsAssociationTestJSON-server-1535303300',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1535303300',id=9,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76574efd3c594ec5ad8e8d556f365038',ramdisk_id='',reservation_id='r-sq4zkikx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1663098013',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1663098013-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:40:31Z,user_data=None,user_id='8045d106ed8b424aaa83fc2438f630c5',uuid=bb83a99e-76c6-4a1a-8b12-39a44d77f760,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.790 238945 DEBUG nova.network.os_vif_util [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converting VIF {"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.790 238945 DEBUG nova.network.os_vif_util [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.791 238945 DEBUG os_vif [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.792 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.792 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.792 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.795 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.795 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap402a0a5c-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.796 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap402a0a5c-d6, col_values=(('external_ids', {'iface-id': '402a0a5c-d6b4-4d22-843f-4e65f18d7327', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:57:29', 'vm-uuid': 'bb83a99e-76c6-4a1a-8b12-39a44d77f760'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:36 np0005597378 NetworkManager[48904]: <info>  [1769521236.7986] manager: (tap402a0a5c-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.803 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:36 np0005597378 nova_compute[238941]: 2026-01-27 13:40:36.803 238945 INFO os_vif [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6')#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.010 238945 DEBUG nova.network.neutron [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updated VIF entry in instance network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.010 238945 DEBUG nova.network.neutron [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.040 238945 DEBUG oslo_concurrency.lockutils [req-c5d7864a-27c2-471a-89e9-787c445e7b85 req-dd2af98f-afcc-4531-a328-918003753eda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.166 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.167 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.167 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] No VIF found with MAC fa:16:3e:e6:57:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.167 238945 INFO nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Using config drive#033[00m
Jan 27 08:40:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:40:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580304604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.231 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.237 238945 DEBUG oslo_concurrency.processutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.241 238945 DEBUG nova.compute.provider_tree [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.247 238945 DEBUG nova.compute.manager [req-8fd70049-8669-41f7-8791-2b021a5d9ed6 req-7dbd3f76-ca62-438d-9db7-a28d560573e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Received event network-vif-deleted-558630aa-19e2-4422-b561-e8f9bf906893 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.266 238945 DEBUG nova.scheduler.client.report [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.332 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.415 238945 INFO nova.scheduler.client.report [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Deleted allocations for instance aad8e98b-f3fc-4b25-bde9-310210ec6f13#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.428 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.585 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.586 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.586 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.586 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid aad8e98b-f3fc-4b25-bde9-310210ec6f13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.656 238945 INFO nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Creating config drive at /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.661 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdw6pt_x8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.790 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdw6pt_x8" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.813 238945 DEBUG nova.storage.rbd_utils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.817 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.838 238945 DEBUG oslo_concurrency.lockutils [None req-7df016e2-8e7e-4be6-8120-43a7e409fd87 11072876e4694e33bece015a47248409 e6c8760bce6747b1a4ba3511f8705506 - - default default] Lock "aad8e98b-f3fc-4b25-bde9-310210ec6f13" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.839 238945 DEBUG nova.compute.utils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010#033[00m
Jan 27 08:40:37 np0005597378 nova_compute[238941]: 2026-01-27 13:40:37.950 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:40:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 27 08:40:38 np0005597378 nova_compute[238941]: 2026-01-27 13:40:38.452 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:40:38 np0005597378 nova_compute[238941]: 2026-01-27 13:40:38.682 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-aad8e98b-f3fc-4b25-bde9-310210ec6f13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:40:38 np0005597378 nova_compute[238941]: 2026-01-27 13:40:38.682 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 08:40:38 np0005597378 nova_compute[238941]: 2026-01-27 13:40:38.683 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:40:38 np0005597378 nova_compute[238941]: 2026-01-27 13:40:38.683 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:40:38 np0005597378 nova_compute[238941]: 2026-01-27 13:40:38.683 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:40:38 np0005597378 nova_compute[238941]: 2026-01-27 13:40:38.684 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.117 238945 DEBUG oslo_concurrency.processutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config bb83a99e-76c6-4a1a-8b12-39a44d77f760_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.118 238945 INFO nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Deleting local config drive /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760/disk.config because it was imported into RBD.#033[00m
Jan 27 08:40:39 np0005597378 kernel: tap402a0a5c-d6: entered promiscuous mode
Jan 27 08:40:39 np0005597378 NetworkManager[48904]: <info>  [1769521239.1598] manager: (tap402a0a5c-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Jan 27 08:40:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:39Z|00043|binding|INFO|Claiming lport 402a0a5c-d6b4-4d22-843f-4e65f18d7327 for this chassis.
Jan 27 08:40:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:39Z|00044|binding|INFO|402a0a5c-d6b4-4d22-843f-4e65f18d7327: Claiming fa:16:3e:e6:57:29 10.100.0.14
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.160 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:39Z|00045|binding|INFO|Setting lport 402a0a5c-d6b4-4d22-843f-4e65f18d7327 ovn-installed in OVS
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.179 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.181 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:39 np0005597378 systemd-udevd[251193]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:40:39 np0005597378 systemd-machined[207425]: New machine qemu-9-instance-00000009.
Jan 27 08:40:39 np0005597378 NetworkManager[48904]: <info>  [1769521239.1991] device (tap402a0a5c-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:40:39 np0005597378 NetworkManager[48904]: <info>  [1769521239.2006] device (tap402a0a5c-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:40:39 np0005597378 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.591 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.593 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.593 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.594 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.594 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:39Z|00046|binding|INFO|Setting lport 402a0a5c-d6b4-4d22-843f-4e65f18d7327 up in Southbound
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.661 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:57:29 10.100.0.14'], port_security=['fa:16:3e:e6:57:29 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bb83a99e-76c6-4a1a-8b12-39a44d77f760', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76574efd3c594ec5ad8e8d556f365038', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9dae007-cf18-48ab-a310-74aab34287dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162a92f9-92b8-44f9-aed8-aaa877d5df8a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=402a0a5c-d6b4-4d22-843f-4e65f18d7327) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.663 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 in datapath e52da3e3-8f9f-4f76-b6d4-298e7af46abf bound to our chassis#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.665 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e52da3e3-8f9f-4f76-b6d4-298e7af46abf#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.676 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6dabbc69-7905-4ac3-ba06-e80698f55ecb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.676 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape52da3e3-81 in ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.678 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape52da3e3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.679 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[edb55ffd-7e32-4184-a816-b30133c16607]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.680 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6a29dd-fd61-49e0-8776-9da857a71843]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.698 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b3038971-b439-4f5f-a430-a8a545f1ba99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.728 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5b3d84-2439-4d90-8c65-f09e6c51a5f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.759 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bb1c4715-1e20-4a65-a5b3-51dc0bddf914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 NetworkManager[48904]: <info>  [1769521239.7663] manager: (tape52da3e3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.767 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60187632-da26-4beb-a295-a39e01e293c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.807 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ab048ddd-6719-49cb-b9e2-262ec065fda4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.810 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[98fe8920-e02f-4a64-ad82-797b820a7c2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.820 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521239.8201237, bb83a99e-76c6-4a1a-8b12-39a44d77f760 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.821 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] VM Started (Lifecycle Event)#033[00m
Jan 27 08:40:39 np0005597378 NetworkManager[48904]: <info>  [1769521239.8320] device (tape52da3e3-80): carrier: link connected
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.837 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[44e74f78-7b16-48d4-bef5-b9f0230b2369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.857 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c6bf6a25-0e8d-47ad-a211-f8d9a0968672]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape52da3e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:c7:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387945, 'reachable_time': 23585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251288, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.875 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d50e940e-0c54-42da-8788-2d0b2258c0e3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:c7b9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 387945, 'tstamp': 387945}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251289, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.890 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[77ee5d95-237b-48bb-808a-d737725fdb05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape52da3e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:c7:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387945, 'reachable_time': 23585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251290, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[91814a98-0e61-4769-8a63-e310f79a326b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.981 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c03cef6e-e2dc-4806-8194-4318911d1985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.982 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape52da3e3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.982 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.982 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape52da3e3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.985 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:39 np0005597378 kernel: tape52da3e3-80: entered promiscuous mode
Jan 27 08:40:39 np0005597378 NetworkManager[48904]: <info>  [1769521239.9864] manager: (tape52da3e3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:39.989 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape52da3e3-80, col_values=(('external_ids', {'iface-id': 'e49b2201-5631-4e9a-aefd-04e11db46733'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:39 np0005597378 nova_compute[238941]: 2026-01-27 13:40:39.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:39Z|00047|binding|INFO|Releasing lport e49b2201-5631-4e9a-aefd-04e11db46733 from this chassis (sb_readonly=0)
Jan 27 08:40:40 np0005597378 nova_compute[238941]: 2026-01-27 13:40:40.006 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:40.008 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e52da3e3-8f9f-4f76-b6d4-298e7af46abf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e52da3e3-8f9f-4f76-b6d4-298e7af46abf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:40.009 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1c77f5-85ff-4c04-9506-1fd4919f8dd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:40.009 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-e52da3e3-8f9f-4f76-b6d4-298e7af46abf
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/e52da3e3-8f9f-4f76-b6d4-298e7af46abf.pid.haproxy
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID e52da3e3-8f9f-4f76-b6d4-298e7af46abf
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:40:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:40.010 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'env', 'PROCESS_TAG=haproxy-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e52da3e3-8f9f-4f76-b6d4-298e7af46abf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:40:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 27 08:40:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:40:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2910277325' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:40:40 np0005597378 nova_compute[238941]: 2026-01-27 13:40:40.276 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:40 np0005597378 podman[251323]: 2026-01-27 13:40:40.350169347 +0000 UTC m=+0.023069306 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:40:40 np0005597378 nova_compute[238941]: 2026-01-27 13:40:40.589 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:40 np0005597378 nova_compute[238941]: 2026-01-27 13:40:40.592 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521239.821223, bb83a99e-76c6-4a1a-8b12-39a44d77f760 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:40:40 np0005597378 nova_compute[238941]: 2026-01-27 13:40:40.593 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:40:40 np0005597378 nova_compute[238941]: 2026-01-27 13:40:40.767 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:40 np0005597378 nova_compute[238941]: 2026-01-27 13:40:40.772 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:40:40 np0005597378 podman[251323]: 2026-01-27 13:40:40.776715432 +0000 UTC m=+0.449615371 container create 7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:40:40 np0005597378 nova_compute[238941]: 2026-01-27 13:40:40.861 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:40:40 np0005597378 nova_compute[238941]: 2026-01-27 13:40:40.867 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:40:40 np0005597378 nova_compute[238941]: 2026-01-27 13:40:40.868 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:40:41 np0005597378 systemd[1]: Started libpod-conmon-7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306.scope.
Jan 27 08:40:41 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:40:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73eff589aa6237cc3bbb3f41238c040449583acb2328fd2f0237bb549399c0fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:40:41 np0005597378 nova_compute[238941]: 2026-01-27 13:40:41.132 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:40:41 np0005597378 nova_compute[238941]: 2026-01-27 13:40:41.134 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4590MB free_disk=59.967469753697515GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:40:41 np0005597378 nova_compute[238941]: 2026-01-27 13:40:41.134 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:41 np0005597378 nova_compute[238941]: 2026-01-27 13:40:41.134 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:41 np0005597378 podman[251323]: 2026-01-27 13:40:41.169891206 +0000 UTC m=+0.842791155 container init 7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:40:41 np0005597378 podman[251323]: 2026-01-27 13:40:41.176910586 +0000 UTC m=+0.849810525 container start 7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 08:40:41 np0005597378 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [NOTICE]   (251342) : New worker (251344) forked
Jan 27 08:40:41 np0005597378 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [NOTICE]   (251342) : Loading success.
Jan 27 08:40:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:40:41 np0005597378 nova_compute[238941]: 2026-01-27 13:40:41.649 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:41 np0005597378 nova_compute[238941]: 2026-01-27 13:40:41.797 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:41 np0005597378 nova_compute[238941]: 2026-01-27 13:40:41.857 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance bb83a99e-76c6-4a1a-8b12-39a44d77f760 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:40:41 np0005597378 nova_compute[238941]: 2026-01-27 13:40:41.857 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:40:41 np0005597378 nova_compute[238941]: 2026-01-27 13:40:41.857 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:40:42 np0005597378 nova_compute[238941]: 2026-01-27 13:40:42.014 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Jan 27 08:40:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:40:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3292512770' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:40:42 np0005597378 nova_compute[238941]: 2026-01-27 13:40:42.768 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.754s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:42 np0005597378 nova_compute[238941]: 2026-01-27 13:40:42.775 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:40:42 np0005597378 nova_compute[238941]: 2026-01-27 13:40:42.795 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:40:42 np0005597378 nova_compute[238941]: 2026-01-27 13:40:42.815 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:40:42 np0005597378 nova_compute[238941]: 2026-01-27 13:40:42.816 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:42 np0005597378 nova_compute[238941]: 2026-01-27 13:40:42.816 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:40:42 np0005597378 nova_compute[238941]: 2026-01-27 13:40:42.817 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 27 08:40:42 np0005597378 nova_compute[238941]: 2026-01-27 13:40:42.833 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.082 238945 DEBUG nova.compute.manager [req-6b61dd06-647e-4930-a06f-4c92983bf663 req-8ce968bd-0c9f-4c8d-a6a3-eed957f8fc06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.082 238945 DEBUG oslo_concurrency.lockutils [req-6b61dd06-647e-4930-a06f-4c92983bf663 req-8ce968bd-0c9f-4c8d-a6a3-eed957f8fc06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.082 238945 DEBUG oslo_concurrency.lockutils [req-6b61dd06-647e-4930-a06f-4c92983bf663 req-8ce968bd-0c9f-4c8d-a6a3-eed957f8fc06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.083 238945 DEBUG oslo_concurrency.lockutils [req-6b61dd06-647e-4930-a06f-4c92983bf663 req-8ce968bd-0c9f-4c8d-a6a3-eed957f8fc06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.083 238945 DEBUG nova.compute.manager [req-6b61dd06-647e-4930-a06f-4c92983bf663 req-8ce968bd-0c9f-4c8d-a6a3-eed957f8fc06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Processing event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.083 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.087 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.088 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521243.0876036, bb83a99e-76c6-4a1a-8b12-39a44d77f760 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.088 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.092 238945 INFO nova.virt.libvirt.driver [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Instance spawned successfully.#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.092 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.112 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.119 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.123 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.124 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.124 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.125 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.125 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.126 238945 DEBUG nova.virt.libvirt.driver [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.149 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.178 238945 INFO nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Took 11.90 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.178 238945 DEBUG nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.247 238945 INFO nova.compute.manager [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Took 13.01 seconds to build instance.#033[00m
Jan 27 08:40:43 np0005597378 nova_compute[238941]: 2026-01-27 13:40:43.262 238945 DEBUG oslo_concurrency.lockutils [None req-ce2cdbda-4208-4ea5-8751-2c2e7277bd94 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:43.768 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:40:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 27 08:40:44 np0005597378 nova_compute[238941]: 2026-01-27 13:40:44.532 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:40:44 np0005597378 nova_compute[238941]: 2026-01-27 13:40:44.532 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:40:44 np0005597378 nova_compute[238941]: 2026-01-27 13:40:44.581 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:40:44 np0005597378 nova_compute[238941]: 2026-01-27 13:40:44.581 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:40:44 np0005597378 nova_compute[238941]: 2026-01-27 13:40:44.833 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521229.832773, aad8e98b-f3fc-4b25-bde9-310210ec6f13 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:40:44 np0005597378 nova_compute[238941]: 2026-01-27 13:40:44.834 238945 INFO nova.compute.manager [-] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] VM Stopped (Lifecycle Event)
Jan 27 08:40:44 np0005597378 nova_compute[238941]: 2026-01-27 13:40:44.905 238945 DEBUG nova.compute.manager [None req-72f482e4-792b-4f44-b4dd-24b1d019a909 - - - - - -] [instance: aad8e98b-f3fc-4b25-bde9-310210ec6f13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:40:45 np0005597378 nova_compute[238941]: 2026-01-27 13:40:45.287 238945 DEBUG nova.compute.manager [req-63663c8f-2485-45b5-bd1e-3a60d4144c91 req-3e5d19c2-1150-42dc-bfd4-867ac56e2766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:40:45 np0005597378 nova_compute[238941]: 2026-01-27 13:40:45.288 238945 DEBUG oslo_concurrency.lockutils [req-63663c8f-2485-45b5-bd1e-3a60d4144c91 req-3e5d19c2-1150-42dc-bfd4-867ac56e2766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:40:45 np0005597378 nova_compute[238941]: 2026-01-27 13:40:45.288 238945 DEBUG oslo_concurrency.lockutils [req-63663c8f-2485-45b5-bd1e-3a60d4144c91 req-3e5d19c2-1150-42dc-bfd4-867ac56e2766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:40:45 np0005597378 nova_compute[238941]: 2026-01-27 13:40:45.289 238945 DEBUG oslo_concurrency.lockutils [req-63663c8f-2485-45b5-bd1e-3a60d4144c91 req-3e5d19c2-1150-42dc-bfd4-867ac56e2766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:40:45 np0005597378 nova_compute[238941]: 2026-01-27 13:40:45.289 238945 DEBUG nova.compute.manager [req-63663c8f-2485-45b5-bd1e-3a60d4144c91 req-3e5d19c2-1150-42dc-bfd4-867ac56e2766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] No waiting events found dispatching network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 08:40:45 np0005597378 nova_compute[238941]: 2026-01-27 13:40:45.289 238945 WARNING nova.compute.manager [req-63663c8f-2485-45b5-bd1e-3a60d4144c91 req-3e5d19c2-1150-42dc-bfd4-867ac56e2766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received unexpected event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 for instance with vm_state active and task_state None.
Jan 27 08:40:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 82 op/s
Jan 27 08:40:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:46.288 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:40:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:46.289 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:40:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:40:46.289 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:40:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:40:46 np0005597378 nova_compute[238941]: 2026-01-27 13:40:46.650 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:40:46 np0005597378 nova_compute[238941]: 2026-01-27 13:40:46.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:40:47 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:47Z|00048|binding|INFO|Releasing lport e49b2201-5631-4e9a-aefd-04e11db46733 from this chassis (sb_readonly=0)
Jan 27 08:40:47 np0005597378 nova_compute[238941]: 2026-01-27 13:40:47.585 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:40:47 np0005597378 ovn_controller[144812]: 2026-01-27T13:40:47Z|00049|binding|INFO|Releasing lport e49b2201-5631-4e9a-aefd-04e11db46733 from this chassis (sb_readonly=0)
Jan 27 08:40:47 np0005597378 nova_compute[238941]: 2026-01-27 13:40:47.728 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:40:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:40:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:40:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:40:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:40:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:40:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:40:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 68 op/s
Jan 27 08:40:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 81 op/s
Jan 27 08:40:50 np0005597378 nova_compute[238941]: 2026-01-27 13:40:50.742 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:40:50 np0005597378 nova_compute[238941]: 2026-01-27 13:40:50.742 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:40:50 np0005597378 nova_compute[238941]: 2026-01-27 13:40:50.898 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 08:40:51 np0005597378 nova_compute[238941]: 2026-01-27 13:40:51.088 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:40:51 np0005597378 nova_compute[238941]: 2026-01-27 13:40:51.089 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:40:51 np0005597378 nova_compute[238941]: 2026-01-27 13:40:51.097 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 08:40:51 np0005597378 nova_compute[238941]: 2026-01-27 13:40:51.097 238945 INFO nova.compute.claims [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Claim successful on node compute-0.ctlplane.example.com
Jan 27 08:40:51 np0005597378 nova_compute[238941]: 2026-01-27 13:40:51.281 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:40:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:40:51 np0005597378 nova_compute[238941]: 2026-01-27 13:40:51.652 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:40:51 np0005597378 nova_compute[238941]: 2026-01-27 13:40:51.802 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:40:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:40:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2348685814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:40:51 np0005597378 nova_compute[238941]: 2026-01-27 13:40:51.878 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:40:51 np0005597378 nova_compute[238941]: 2026-01-27 13:40:51.885 238945 DEBUG nova.compute.provider_tree [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:40:51 np0005597378 nova_compute[238941]: 2026-01-27 13:40:51.946 238945 DEBUG nova.scheduler.client.report [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:40:51 np0005597378 nova_compute[238941]: 2026-01-27 13:40:51.972 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:40:51 np0005597378 nova_compute[238941]: 2026-01-27 13:40:51.973 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.022 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.023 238945 DEBUG nova.network.neutron [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.044 238945 INFO nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.075 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 08:40:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 68 op/s
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.154 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.155 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.156 238945 INFO nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Creating image(s)
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.176 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.198 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.224 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.228 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.288 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.289 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.290 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.290 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.314 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.320 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c74daffe-5fa9-4786-abf4-05f8af1b2808_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:40:52 np0005597378 nova_compute[238941]: 2026-01-27 13:40:52.468 238945 DEBUG nova.policy [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8045d106ed8b424aaa83fc2438f630c5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '76574efd3c594ec5ad8e8d556f365038', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 08:40:53 np0005597378 nova_compute[238941]: 2026-01-27 13:40:53.622 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c74daffe-5fa9-4786-abf4-05f8af1b2808_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:40:53 np0005597378 nova_compute[238941]: 2026-01-27 13:40:53.684 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] resizing rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:40:53 np0005597378 nova_compute[238941]: 2026-01-27 13:40:53.778 238945 DEBUG nova.network.neutron [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Successfully created port: 3e092867-6724-49e3-a148-1355677054d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 08:40:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 107 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 593 KiB/s wr, 83 op/s
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.193 238945 DEBUG nova.objects.instance [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lazy-loading 'migration_context' on Instance uuid c74daffe-5fa9-4786-abf4-05f8af1b2808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.210 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.211 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Ensure instance console log exists: /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.211 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.212 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.212 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.688 238945 DEBUG nova.network.neutron [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Successfully updated port: 3e092867-6724-49e3-a148-1355677054d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.735 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.735 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquired lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.735 238945 DEBUG nova.network.neutron [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.841 238945 DEBUG nova.compute.manager [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-changed-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.842 238945 DEBUG nova.compute.manager [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Refreshing instance network info cache due to event network-changed-3e092867-6724-49e3-a148-1355677054d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.842 238945 DEBUG oslo_concurrency.lockutils [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:40:54 np0005597378 nova_compute[238941]: 2026-01-27 13:40:54.982 238945 DEBUG nova.network.neutron [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 08:40:55 np0005597378 nova_compute[238941]: 2026-01-27 13:40:55.182 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:40:55 np0005597378 nova_compute[238941]: 2026-01-27 13:40:55.183 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:40:55 np0005597378 nova_compute[238941]: 2026-01-27 13:40:55.209 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 08:40:55 np0005597378 nova_compute[238941]: 2026-01-27 13:40:55.332 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:40:55 np0005597378 nova_compute[238941]: 2026-01-27 13:40:55.333 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:40:55 np0005597378 nova_compute[238941]: 2026-01-27 13:40:55.340 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 08:40:55 np0005597378 nova_compute[238941]: 2026-01-27 13:40:55.340 238945 INFO nova.compute.claims [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Claim successful on node compute-0.ctlplane.example.com
Jan 27 08:40:55 np0005597378 nova_compute[238941]: 2026-01-27 13:40:55.535 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:40:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3412207322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.075 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.080 238945 DEBUG nova.compute.provider_tree [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:40:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 131 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 83 op/s
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.098 238945 DEBUG nova.scheduler.client.report [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.133 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.134 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.185 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.185 238945 DEBUG nova.network.neutron [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.214 238945 INFO nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.260 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.362 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.363 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.363 238945 INFO nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Creating image(s)#033[00m
Jan 27 08:40:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.387 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.409 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.430 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.434 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.452 238945 DEBUG nova.network.neutron [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updating instance_info_cache with network_info: [{"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.482 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Releasing lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.482 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Instance network_info: |[{"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.483 238945 DEBUG oslo_concurrency.lockutils [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.483 238945 DEBUG nova.network.neutron [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Refreshing network info cache for port 3e092867-6724-49e3-a148-1355677054d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.486 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Start _get_guest_xml network_info=[{"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.490 238945 WARNING nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.493 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.494 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.494 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:40:56 np0005597378 nova_compute[238941]: 2026-01-27 13:40:56.494 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:40:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 131 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 649 KiB/s rd, 1.5 MiB/s wr, 42 op/s
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.745 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.749 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a3a647d0-79b5-49c3-891d-3e28d357e92c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.773 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.785 238945 DEBUG nova.virt.libvirt.host [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.786 238945 DEBUG nova.virt.libvirt.host [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.793 238945 DEBUG nova.virt.libvirt.host [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.794 238945 DEBUG nova.virt.libvirt.host [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.794 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.795 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.795 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.796 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.796 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.796 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.797 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.797 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.797 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.797 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.797 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.797 238945 DEBUG nova.virt.hardware [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.801 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:40:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:40:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4294826738' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:40:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:40:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4294826738' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.927 238945 DEBUG nova.network.neutron [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 27 08:40:59 np0005597378 nova_compute[238941]: 2026-01-27 13:40:59.928 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:41:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 155 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 476 KiB/s rd, 3.7 MiB/s wr, 65 op/s
Jan 27 08:41:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:41:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455450542' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:41:00 np0005597378 nova_compute[238941]: 2026-01-27 13:41:00.458 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:00 np0005597378 nova_compute[238941]: 2026-01-27 13:41:00.487 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:00 np0005597378 nova_compute[238941]: 2026-01-27 13:41:00.507 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.215 238945 DEBUG nova.network.neutron [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updated VIF entry in instance network info cache for port 3e092867-6724-49e3-a148-1355677054d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.216 238945 DEBUG nova.network.neutron [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updating instance_info_cache with network_info: [{"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.234 238945 DEBUG oslo_concurrency.lockutils [req-1bfc871c-e031-4d97-a44c-538215b7afcc req-619c33b8-c315-4656-95a7-30faaaa3dd9f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:41:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:41:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3952832789' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.258 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.260 238945 DEBUG nova.virt.libvirt.vif [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:40:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1976063352',display_name='tempest-FloatingIPsAssociationTestJSON-server-1976063352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1976063352',id=10,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76574efd3c594ec5ad8e8d556f365038',ramdisk_id='',reservation_id='r-xceuzb0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1663098013',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1663098013-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:40:52Z,user_data=None,user_id='8045d106ed8b424aaa83fc2438f630c5',uuid=c74daffe-5fa9-4786-abf4-05f8af1b2808,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.260 238945 DEBUG nova.network.os_vif_util [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converting VIF {"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.261 238945 DEBUG nova.network.os_vif_util [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.262 238945 DEBUG nova.objects.instance [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lazy-loading 'pci_devices' on Instance uuid c74daffe-5fa9-4786-abf4-05f8af1b2808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.278 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  <uuid>c74daffe-5fa9-4786-abf4-05f8af1b2808</uuid>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  <name>instance-0000000a</name>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1976063352</nova:name>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:40:56</nova:creationTime>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:        <nova:user uuid="8045d106ed8b424aaa83fc2438f630c5">tempest-FloatingIPsAssociationTestJSON-1663098013-project-member</nova:user>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:        <nova:project uuid="76574efd3c594ec5ad8e8d556f365038">tempest-FloatingIPsAssociationTestJSON-1663098013</nova:project>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:        <nova:port uuid="3e092867-6724-49e3-a148-1355677054d9">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <entry name="serial">c74daffe-5fa9-4786-abf4-05f8af1b2808</entry>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <entry name="uuid">c74daffe-5fa9-4786-abf4-05f8af1b2808</entry>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/c74daffe-5fa9-4786-abf4-05f8af1b2808_disk">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/c74daffe-5fa9-4786-abf4-05f8af1b2808_disk.config">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:69:69:e7"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <target dev="tap3e092867-67"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/console.log" append="off"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:41:01 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:41:01 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:41:01 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:41:01 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.279 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Preparing to wait for external event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.279 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.280 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.280 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.281 238945 DEBUG nova.virt.libvirt.vif [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:40:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1976063352',display_name='tempest-FloatingIPsAssociationTestJSON-server-1976063352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1976063352',id=10,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76574efd3c594ec5ad8e8d556f365038',ramdisk_id='',reservation_id='r-xceuzb0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1663098013',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1663098013-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:40:52Z,user_data=None,user_id='8045d106ed8b424aaa83fc2438f630c5',uuid=c74daffe-5fa9-4786-abf4-05f8af1b2808,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.281 238945 DEBUG nova.network.os_vif_util [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converting VIF {"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.282 238945 DEBUG nova.network.os_vif_util [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.282 238945 DEBUG os_vif [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.282 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.283 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.283 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.288 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.288 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e092867-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.289 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e092867-67, col_values=(('external_ids', {'iface-id': '3e092867-6724-49e3-a148-1355677054d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:69:e7', 'vm-uuid': 'c74daffe-5fa9-4786-abf4-05f8af1b2808'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.290 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:01 np0005597378 NetworkManager[48904]: <info>  [1769521261.2918] manager: (tap3e092867-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.292 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.299 238945 INFO os_vif [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67')#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.311 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a3a647d0-79b5-49c3-891d-3e28d357e92c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.382 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] resizing rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:41:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.490 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.491 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.494 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] No VIF found with MAC fa:16:3e:69:69:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.494 238945 INFO nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Using config drive#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.526 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.655 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.727 238945 DEBUG nova.objects.instance [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lazy-loading 'migration_context' on Instance uuid a3a647d0-79b5-49c3-891d-3e28d357e92c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.741 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.742 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Ensure instance console log exists: /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.743 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.743 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.744 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.745 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.751 238945 WARNING nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.755 238945 DEBUG nova.virt.libvirt.host [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.755 238945 DEBUG nova.virt.libvirt.host [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.758 238945 DEBUG nova.virt.libvirt.host [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.758 238945 DEBUG nova.virt.libvirt.host [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.759 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.759 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.759 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.760 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.760 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.760 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.760 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.760 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.761 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.761 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.761 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.761 238945 DEBUG nova.virt.hardware [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:41:01 np0005597378 nova_compute[238941]: 2026-01-27 13:41:01.764 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:02 np0005597378 nova_compute[238941]: 2026-01-27 13:41:02.018 238945 INFO nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Creating config drive at /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/disk.config#033[00m
Jan 27 08:41:02 np0005597378 nova_compute[238941]: 2026-01-27 13:41:02.024 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphtyd3mgi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 155 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.7 MiB/s wr, 51 op/s
Jan 27 08:41:02 np0005597378 nova_compute[238941]: 2026-01-27 13:41:02.151 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphtyd3mgi" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:02 np0005597378 nova_compute[238941]: 2026-01-27 13:41:02.181 238945 DEBUG nova.storage.rbd_utils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] rbd image c74daffe-5fa9-4786-abf4-05f8af1b2808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:02Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:57:29 10.100.0.14
Jan 27 08:41:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:02Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:57:29 10.100.0.14
Jan 27 08:41:02 np0005597378 nova_compute[238941]: 2026-01-27 13:41:02.188 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/disk.config c74daffe-5fa9-4786-abf4-05f8af1b2808_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:41:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1663797088' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:41:02 np0005597378 nova_compute[238941]: 2026-01-27 13:41:02.385 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:02 np0005597378 nova_compute[238941]: 2026-01-27 13:41:02.411 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:02 np0005597378 nova_compute[238941]: 2026-01-27 13:41:02.415 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:02 np0005597378 nova_compute[238941]: 2026-01-27 13:41:02.855 238945 DEBUG oslo_concurrency.processutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/disk.config c74daffe-5fa9-4786-abf4-05f8af1b2808_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:02 np0005597378 nova_compute[238941]: 2026-01-27 13:41:02.856 238945 INFO nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Deleting local config drive /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808/disk.config because it was imported into RBD.#033[00m
Jan 27 08:41:02 np0005597378 NetworkManager[48904]: <info>  [1769521262.9227] manager: (tap3e092867-67): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Jan 27 08:41:02 np0005597378 kernel: tap3e092867-67: entered promiscuous mode
Jan 27 08:41:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:02Z|00050|binding|INFO|Claiming lport 3e092867-6724-49e3-a148-1355677054d9 for this chassis.
Jan 27 08:41:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:02Z|00051|binding|INFO|3e092867-6724-49e3-a148-1355677054d9: Claiming fa:16:3e:69:69:e7 10.100.0.11
Jan 27 08:41:02 np0005597378 nova_compute[238941]: 2026-01-27 13:41:02.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:02.936 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:69:e7 10.100.0.11'], port_security=['fa:16:3e:69:69:e7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c74daffe-5fa9-4786-abf4-05f8af1b2808', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76574efd3c594ec5ad8e8d556f365038', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9dae007-cf18-48ab-a310-74aab34287dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162a92f9-92b8-44f9-aed8-aaa877d5df8a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3e092867-6724-49e3-a148-1355677054d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:41:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:02.937 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3e092867-6724-49e3-a148-1355677054d9 in datapath e52da3e3-8f9f-4f76-b6d4-298e7af46abf bound to our chassis#033[00m
Jan 27 08:41:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:02.939 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e52da3e3-8f9f-4f76-b6d4-298e7af46abf#033[00m
Jan 27 08:41:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:02Z|00052|binding|INFO|Setting lport 3e092867-6724-49e3-a148-1355677054d9 ovn-installed in OVS
Jan 27 08:41:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:02Z|00053|binding|INFO|Setting lport 3e092867-6724-49e3-a148-1355677054d9 up in Southbound
Jan 27 08:41:02 np0005597378 nova_compute[238941]: 2026-01-27 13:41:02.955 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:02 np0005597378 nova_compute[238941]: 2026-01-27 13:41:02.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:02.959 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90051ddc-f2a3-4403-907a-cf7b92158862]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:02 np0005597378 systemd-machined[207425]: New machine qemu-10-instance-0000000a.
Jan 27 08:41:02 np0005597378 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Jan 27 08:41:02 np0005597378 systemd-udevd[251963]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:41:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:41:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2081864505' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:41:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:02.999 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[70ae95f1-3a2e-4430-b0be-e7b2e14d0350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.004 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e9262bf7-9565-4559-bbde-891a27bdb734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:03 np0005597378 NetworkManager[48904]: <info>  [1769521263.0073] device (tap3e092867-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:41:03 np0005597378 NetworkManager[48904]: <info>  [1769521263.0078] device (tap3e092867-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:41:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.038 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[48d4dcea-0e6d-4124-b07d-ccbeb652e00d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:03 np0005597378 nova_compute[238941]: 2026-01-27 13:41:03.046 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:03 np0005597378 nova_compute[238941]: 2026-01-27 13:41:03.048 238945 DEBUG nova.objects.instance [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lazy-loading 'pci_devices' on Instance uuid a3a647d0-79b5-49c3-891d-3e28d357e92c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:41:03 np0005597378 podman[251944]: 2026-01-27 13:41:03.053614073 +0000 UTC m=+0.097146469 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 27 08:41:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.058 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[751b79a2-faf5-4e9c-938b-5573381b7e1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape52da3e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:c7:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387945, 'reachable_time': 23585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251988, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:03 np0005597378 nova_compute[238941]: 2026-01-27 13:41:03.069 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  <uuid>a3a647d0-79b5-49c3-891d-3e28d357e92c</uuid>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  <name>instance-0000000b</name>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerExternalEventsTest-server-1836330712</nova:name>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:41:01</nova:creationTime>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:        <nova:user uuid="e0379e1f4d80496a8f167543928d2e7c">tempest-ServerExternalEventsTest-379332276-project-member</nova:user>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:        <nova:project uuid="d7c827466fd0453f9b9282ed7baee99f">tempest-ServerExternalEventsTest-379332276</nova:project>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <entry name="serial">a3a647d0-79b5-49c3-891d-3e28d357e92c</entry>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <entry name="uuid">a3a647d0-79b5-49c3-891d-3e28d357e92c</entry>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/a3a647d0-79b5-49c3-891d-3e28d357e92c_disk">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/a3a647d0-79b5-49c3-891d-3e28d357e92c_disk.config">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/console.log" append="off"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:41:03 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:41:03 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:41:03 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:41:03 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:41:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.084 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[85c72e8b-53ee-43d8-8f13-0cb07d4ddc41]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape52da3e3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 387957, 'tstamp': 387957}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251990, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape52da3e3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 387960, 'tstamp': 387960}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251990, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.086 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape52da3e3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:41:03 np0005597378 nova_compute[238941]: 2026-01-27 13:41:03.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.090 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape52da3e3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:41:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.090 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:41:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.091 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape52da3e3-80, col_values=(('external_ids', {'iface-id': 'e49b2201-5631-4e9a-aefd-04e11db46733'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:41:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:03.091 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:41:03 np0005597378 nova_compute[238941]: 2026-01-27 13:41:03.712 238945 DEBUG nova.compute.manager [req-bdac0ef7-3c65-4c56-b394-3872e8f8825b req-52ff372d-8692-403e-aa7f-a7987344ff93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:41:03 np0005597378 nova_compute[238941]: 2026-01-27 13:41:03.712 238945 DEBUG oslo_concurrency.lockutils [req-bdac0ef7-3c65-4c56-b394-3872e8f8825b req-52ff372d-8692-403e-aa7f-a7987344ff93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:03 np0005597378 nova_compute[238941]: 2026-01-27 13:41:03.712 238945 DEBUG oslo_concurrency.lockutils [req-bdac0ef7-3c65-4c56-b394-3872e8f8825b req-52ff372d-8692-403e-aa7f-a7987344ff93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:03 np0005597378 nova_compute[238941]: 2026-01-27 13:41:03.713 238945 DEBUG oslo_concurrency.lockutils [req-bdac0ef7-3c65-4c56-b394-3872e8f8825b req-52ff372d-8692-403e-aa7f-a7987344ff93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:03 np0005597378 nova_compute[238941]: 2026-01-27 13:41:03.713 238945 DEBUG nova.compute.manager [req-bdac0ef7-3c65-4c56-b394-3872e8f8825b req-52ff372d-8692-403e-aa7f-a7987344ff93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Processing event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:41:03 np0005597378 nova_compute[238941]: 2026-01-27 13:41:03.730 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:41:03 np0005597378 nova_compute[238941]: 2026-01-27 13:41:03.730 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:41:03 np0005597378 nova_compute[238941]: 2026-01-27 13:41:03.731 238945 INFO nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Using config drive#033[00m
Jan 27 08:41:03 np0005597378 nova_compute[238941]: 2026-01-27 13:41:03.755 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 200 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 243 KiB/s rd, 5.1 MiB/s wr, 96 op/s
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.110 238945 INFO nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Creating config drive at /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/disk.config#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.115 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuknwfok execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.241 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuknwfok" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.283 238945 DEBUG nova.storage.rbd_utils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] rbd image a3a647d0-79b5-49c3-891d-3e28d357e92c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.297 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/disk.config a3a647d0-79b5-49c3-891d-3e28d357e92c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.327 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521264.2949, c74daffe-5fa9-4786-abf4-05f8af1b2808 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.329 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] VM Started (Lifecycle Event)#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.341 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.355 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.363 238945 INFO nova.virt.libvirt.driver [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Instance spawned successfully.#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.365 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.467 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.469 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.497 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.497 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.498 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.498 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.499 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.499 238945 DEBUG nova.virt.libvirt.driver [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.602 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.603 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521264.2958002, c74daffe-5fa9-4786-abf4-05f8af1b2808 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.603 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.638 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.642 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521264.3461232, c74daffe-5fa9-4786-abf4-05f8af1b2808 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.642 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.656 238945 INFO nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Took 12.50 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.656 238945 DEBUG nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.666 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.670 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.704 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.730 238945 INFO nova.compute.manager [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Took 13.68 seconds to build instance.#033[00m
Jan 27 08:41:04 np0005597378 nova_compute[238941]: 2026-01-27 13:41:04.745 238945 DEBUG oslo_concurrency.lockutils [None req-f45d6ec9-a9ec-401a-addd-70e2a14f0e79 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:05 np0005597378 nova_compute[238941]: 2026-01-27 13:41:05.810 238945 DEBUG oslo_concurrency.processutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/disk.config a3a647d0-79b5-49c3-891d-3e28d357e92c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:05 np0005597378 nova_compute[238941]: 2026-01-27 13:41:05.811 238945 INFO nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Deleting local config drive /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c/disk.config because it was imported into RBD.#033[00m
Jan 27 08:41:05 np0005597378 nova_compute[238941]: 2026-01-27 13:41:05.847 238945 DEBUG nova.compute.manager [req-ae62e694-10ba-488d-bdd3-35a6bf48badc req-bafadc0f-b5d1-4e11-9ec9-171e9cb8aaba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:41:05 np0005597378 nova_compute[238941]: 2026-01-27 13:41:05.848 238945 DEBUG oslo_concurrency.lockutils [req-ae62e694-10ba-488d-bdd3-35a6bf48badc req-bafadc0f-b5d1-4e11-9ec9-171e9cb8aaba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:05 np0005597378 nova_compute[238941]: 2026-01-27 13:41:05.848 238945 DEBUG oslo_concurrency.lockutils [req-ae62e694-10ba-488d-bdd3-35a6bf48badc req-bafadc0f-b5d1-4e11-9ec9-171e9cb8aaba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:05 np0005597378 nova_compute[238941]: 2026-01-27 13:41:05.848 238945 DEBUG oslo_concurrency.lockutils [req-ae62e694-10ba-488d-bdd3-35a6bf48badc req-bafadc0f-b5d1-4e11-9ec9-171e9cb8aaba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:05 np0005597378 nova_compute[238941]: 2026-01-27 13:41:05.848 238945 DEBUG nova.compute.manager [req-ae62e694-10ba-488d-bdd3-35a6bf48badc req-bafadc0f-b5d1-4e11-9ec9-171e9cb8aaba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] No waiting events found dispatching network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:41:05 np0005597378 nova_compute[238941]: 2026-01-27 13:41:05.848 238945 WARNING nova.compute.manager [req-ae62e694-10ba-488d-bdd3-35a6bf48badc req-bafadc0f-b5d1-4e11-9ec9-171e9cb8aaba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received unexpected event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:41:05 np0005597378 systemd-machined[207425]: New machine qemu-11-instance-0000000b.
Jan 27 08:41:05 np0005597378 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Jan 27 08:41:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 213 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 5.1 MiB/s wr, 109 op/s
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.292 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.392 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521266.3925257, a3a647d0-79b5-49c3-891d-3e28d357e92c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.393 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.395 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.396 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.398 238945 INFO nova.virt.libvirt.driver [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance spawned successfully.#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.398 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.417 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.421 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.421 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.422 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.422 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.422 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.423 238945 DEBUG nova.virt.libvirt.driver [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.430 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.459 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.461 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521266.395157, a3a647d0-79b5-49c3-891d-3e28d357e92c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.461 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] VM Started (Lifecycle Event)#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.488 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.495 238945 INFO nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Took 10.13 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.496 238945 DEBUG nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.502 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.529 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.589 238945 INFO nova.compute.manager [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Took 11.28 seconds to build instance.#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.610 238945 DEBUG oslo_concurrency.lockutils [None req-40113885-ee07-4691-ac8a-b8a2901c3500 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.657 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:06 np0005597378 NetworkManager[48904]: <info>  [1769521266.8238] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 27 08:41:06 np0005597378 NetworkManager[48904]: <info>  [1769521266.8246] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 27 08:41:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:06Z|00054|binding|INFO|Releasing lport e49b2201-5631-4e9a-aefd-04e11db46733 from this chassis (sb_readonly=0)
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.864 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:06Z|00055|binding|INFO|Releasing lport e49b2201-5631-4e9a-aefd-04e11db46733 from this chassis (sb_readonly=0)
Jan 27 08:41:06 np0005597378 nova_compute[238941]: 2026-01-27 13:41:06.870 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:07 np0005597378 podman[252151]: 2026-01-27 13:41:07.727737985 +0000 UTC m=+0.067386593 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Jan 27 08:41:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 213 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 4.2 MiB/s wr, 103 op/s
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.460 238945 DEBUG nova.compute.manager [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.460 238945 DEBUG nova.compute.manager [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing instance network info cache due to event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.460 238945 DEBUG oslo_concurrency.lockutils [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.460 238945 DEBUG oslo_concurrency.lockutils [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.461 238945 DEBUG nova.network.neutron [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.637 238945 DEBUG nova.compute.manager [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.637 238945 DEBUG nova.compute.manager [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.638 238945 DEBUG oslo_concurrency.lockutils [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] Acquiring lock "refresh_cache-a3a647d0-79b5-49c3-891d-3e28d357e92c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.638 238945 DEBUG oslo_concurrency.lockutils [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] Acquired lock "refresh_cache-a3a647d0-79b5-49c3-891d-3e28d357e92c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.639 238945 DEBUG nova.network.neutron [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.883 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.884 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.884 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "a3a647d0-79b5-49c3-891d-3e28d357e92c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.885 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.885 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.886 238945 INFO nova.compute.manager [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Terminating instance#033[00m
Jan 27 08:41:08 np0005597378 nova_compute[238941]: 2026-01-27 13:41:08.887 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "refresh_cache-a3a647d0-79b5-49c3-891d-3e28d357e92c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:41:09 np0005597378 nova_compute[238941]: 2026-01-27 13:41:09.082 238945 DEBUG nova.network.neutron [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:41:09 np0005597378 nova_compute[238941]: 2026-01-27 13:41:09.456 238945 DEBUG nova.network.neutron [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:41:09 np0005597378 nova_compute[238941]: 2026-01-27 13:41:09.473 238945 DEBUG oslo_concurrency.lockutils [None req-a9c4f695-878d-4467-b145-8385bcc38f47 0adac50fec1f4148aa5a97cd75a8f316 3427def2a4f94630982c6ba926d7733f - - default default] Releasing lock "refresh_cache-a3a647d0-79b5-49c3-891d-3e28d357e92c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:41:09 np0005597378 nova_compute[238941]: 2026-01-27 13:41:09.473 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquired lock "refresh_cache-a3a647d0-79b5-49c3-891d-3e28d357e92c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:41:09 np0005597378 nova_compute[238941]: 2026-01-27 13:41:09.473 238945 DEBUG nova.network.neutron [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:41:09 np0005597378 nova_compute[238941]: 2026-01-27 13:41:09.606 238945 DEBUG nova.network.neutron [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:41:09 np0005597378 nova_compute[238941]: 2026-01-27 13:41:09.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:09 np0005597378 nova_compute[238941]: 2026-01-27 13:41:09.956 238945 DEBUG nova.network.neutron [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:41:09 np0005597378 nova_compute[238941]: 2026-01-27 13:41:09.968 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Releasing lock "refresh_cache-a3a647d0-79b5-49c3-891d-3e28d357e92c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:41:09 np0005597378 nova_compute[238941]: 2026-01-27 13:41:09.969 238945 DEBUG nova.compute.manager [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:41:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 214 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.2 MiB/s wr, 245 op/s
Jan 27 08:41:10 np0005597378 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 27 08:41:10 np0005597378 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 3.965s CPU time.
Jan 27 08:41:10 np0005597378 systemd-machined[207425]: Machine qemu-11-instance-0000000b terminated.
Jan 27 08:41:10 np0005597378 nova_compute[238941]: 2026-01-27 13:41:10.388 238945 INFO nova.virt.libvirt.driver [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance destroyed successfully.#033[00m
Jan 27 08:41:10 np0005597378 nova_compute[238941]: 2026-01-27 13:41:10.389 238945 DEBUG nova.objects.instance [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lazy-loading 'resources' on Instance uuid a3a647d0-79b5-49c3-891d-3e28d357e92c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:41:10 np0005597378 nova_compute[238941]: 2026-01-27 13:41:10.394 238945 DEBUG nova.network.neutron [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updated VIF entry in instance network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:41:10 np0005597378 nova_compute[238941]: 2026-01-27 13:41:10.394 238945 DEBUG nova.network.neutron [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:41:10 np0005597378 nova_compute[238941]: 2026-01-27 13:41:10.424 238945 DEBUG oslo_concurrency.lockutils [req-dc24a37d-e8e8-44a0-9940-d6ac6702393a req-5a6041a0-f7a8-4e5b-8ccf-3a66be914bf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:41:11 np0005597378 nova_compute[238941]: 2026-01-27 13:41:11.295 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:41:11 np0005597378 nova_compute[238941]: 2026-01-27 13:41:11.660 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 214 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 214 op/s
Jan 27 08:41:12 np0005597378 nova_compute[238941]: 2026-01-27 13:41:12.219 238945 DEBUG nova.compute.manager [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:41:12 np0005597378 nova_compute[238941]: 2026-01-27 13:41:12.220 238945 DEBUG nova.compute.manager [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing instance network info cache due to event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:41:12 np0005597378 nova_compute[238941]: 2026-01-27 13:41:12.220 238945 DEBUG oslo_concurrency.lockutils [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:41:12 np0005597378 nova_compute[238941]: 2026-01-27 13:41:12.220 238945 DEBUG oslo_concurrency.lockutils [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:41:12 np0005597378 nova_compute[238941]: 2026-01-27 13:41:12.220 238945 DEBUG nova.network.neutron [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:41:12 np0005597378 nova_compute[238941]: 2026-01-27 13:41:12.945 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 180 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.0 MiB/s wr, 224 op/s
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.282 238945 DEBUG nova.network.neutron [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updated VIF entry in instance network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.282 238945 DEBUG nova.network.neutron [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.310 238945 DEBUG oslo_concurrency.lockutils [req-a2417bb4-33c9-42dc-adb4-5f16f8791759 req-44d58617-d693-46b8-80dc-7a0afc6f6b59 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.802 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.825 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid bb83a99e-76c6-4a1a-8b12-39a44d77f760 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.826 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid c74daffe-5fa9-4786-abf4-05f8af1b2808 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.826 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid a3a647d0-79b5-49c3-891d-3e28d357e92c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.826 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.827 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.827 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.828 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.828 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.852 238945 INFO nova.virt.libvirt.driver [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Deleting instance files /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c_del#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.853 238945 INFO nova.virt.libvirt.driver [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Deletion of /var/lib/nova/instances/a3a647d0-79b5-49c3-891d-3e28d357e92c_del complete#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.905 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.910 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.951 238945 INFO nova.compute.manager [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Took 4.98 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.951 238945 DEBUG oslo.service.loopingcall [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.952 238945 DEBUG nova.compute.manager [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:41:14 np0005597378 nova_compute[238941]: 2026-01-27 13:41:14.952 238945 DEBUG nova.network.neutron [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:41:15 np0005597378 nova_compute[238941]: 2026-01-27 13:41:15.223 238945 DEBUG nova.network.neutron [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:41:15 np0005597378 nova_compute[238941]: 2026-01-27 13:41:15.240 238945 DEBUG nova.network.neutron [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:41:15 np0005597378 nova_compute[238941]: 2026-01-27 13:41:15.258 238945 INFO nova.compute.manager [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Took 0.31 seconds to deallocate network for instance.#033[00m
Jan 27 08:41:15 np0005597378 nova_compute[238941]: 2026-01-27 13:41:15.317 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:15 np0005597378 nova_compute[238941]: 2026-01-27 13:41:15.318 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:15 np0005597378 nova_compute[238941]: 2026-01-27 13:41:15.410 238945 DEBUG oslo_concurrency.processutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:41:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3768244066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:41:15 np0005597378 nova_compute[238941]: 2026-01-27 13:41:15.985 238945 DEBUG nova.compute.manager [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-changed-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:41:15 np0005597378 nova_compute[238941]: 2026-01-27 13:41:15.985 238945 DEBUG nova.compute.manager [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Refreshing instance network info cache due to event network-changed-3e092867-6724-49e3-a148-1355677054d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:41:15 np0005597378 nova_compute[238941]: 2026-01-27 13:41:15.986 238945 DEBUG oslo_concurrency.lockutils [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:41:15 np0005597378 nova_compute[238941]: 2026-01-27 13:41:15.986 238945 DEBUG oslo_concurrency.lockutils [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:41:15 np0005597378 nova_compute[238941]: 2026-01-27 13:41:15.986 238945 DEBUG nova.network.neutron [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Refreshing network info cache for port 3e092867-6724-49e3-a148-1355677054d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:41:16 np0005597378 nova_compute[238941]: 2026-01-27 13:41:15.999 238945 DEBUG oslo_concurrency.processutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:16 np0005597378 nova_compute[238941]: 2026-01-27 13:41:16.008 238945 DEBUG nova.compute.provider_tree [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:41:16 np0005597378 nova_compute[238941]: 2026-01-27 13:41:16.032 238945 DEBUG nova.scheduler.client.report [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:41:16 np0005597378 nova_compute[238941]: 2026-01-27 13:41:16.053 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:16 np0005597378 nova_compute[238941]: 2026-01-27 13:41:16.091 238945 INFO nova.scheduler.client.report [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Deleted allocations for instance a3a647d0-79b5-49c3-891d-3e28d357e92c#033[00m
Jan 27 08:41:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 632 KiB/s wr, 192 op/s
Jan 27 08:41:16 np0005597378 nova_compute[238941]: 2026-01-27 13:41:16.154 238945 DEBUG oslo_concurrency.lockutils [None req-db85c8cf-ff48-4088-95da-81467cd00958 e0379e1f4d80496a8f167543928d2e7c d7c827466fd0453f9b9282ed7baee99f - - default default] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:16 np0005597378 nova_compute[238941]: 2026-01-27 13:41:16.155 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:16 np0005597378 nova_compute[238941]: 2026-01-27 13:41:16.155 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Jan 27 08:41:16 np0005597378 nova_compute[238941]: 2026-01-27 13:41:16.155 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "a3a647d0-79b5-49c3-891d-3e28d357e92c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:16 np0005597378 nova_compute[238941]: 2026-01-27 13:41:16.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:41:16 np0005597378 nova_compute[238941]: 2026-01-27 13:41:16.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:41:17
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'volumes']
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:41:17 np0005597378 nova_compute[238941]: 2026-01-27 13:41:17.933 238945 DEBUG nova.network.neutron [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updated VIF entry in instance network info cache for port 3e092867-6724-49e3-a148-1355677054d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:41:17 np0005597378 nova_compute[238941]: 2026-01-27 13:41:17.933 238945 DEBUG nova.network.neutron [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updating instance_info_cache with network_info: [{"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:41:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:41:17 np0005597378 nova_compute[238941]: 2026-01-27 13:41:17.950 238945 DEBUG oslo_concurrency.lockutils [req-660db86c-0e27-42be-8e25-3ea9ef940b73 req-ad27e90a-7e0d-45ea-9b83-557d31ab1a56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:41:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 165 op/s
Jan 27 08:41:18 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:18Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:69:e7 10.100.0.11
Jan 27 08:41:18 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:18Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:69:e7 10.100.0.11
Jan 27 08:41:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 188 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 213 op/s
Jan 27 08:41:21 np0005597378 nova_compute[238941]: 2026-01-27 13:41:21.302 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:21 np0005597378 nova_compute[238941]: 2026-01-27 13:41:21.404 238945 DEBUG nova.compute.manager [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-changed-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:41:21 np0005597378 nova_compute[238941]: 2026-01-27 13:41:21.404 238945 DEBUG nova.compute.manager [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Refreshing instance network info cache due to event network-changed-3e092867-6724-49e3-a148-1355677054d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:41:21 np0005597378 nova_compute[238941]: 2026-01-27 13:41:21.405 238945 DEBUG oslo_concurrency.lockutils [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:41:21 np0005597378 nova_compute[238941]: 2026-01-27 13:41:21.405 238945 DEBUG oslo_concurrency.lockutils [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:41:21 np0005597378 nova_compute[238941]: 2026-01-27 13:41:21.405 238945 DEBUG nova.network.neutron [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Refreshing network info cache for port 3e092867-6724-49e3-a148-1355677054d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:41:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:41:21 np0005597378 nova_compute[238941]: 2026-01-27 13:41:21.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 188 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 264 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 27 08:41:22 np0005597378 nova_compute[238941]: 2026-01-27 13:41:22.689 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:22 np0005597378 nova_compute[238941]: 2026-01-27 13:41:22.689 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:22 np0005597378 nova_compute[238941]: 2026-01-27 13:41:22.690 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:22 np0005597378 nova_compute[238941]: 2026-01-27 13:41:22.690 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:22 np0005597378 nova_compute[238941]: 2026-01-27 13:41:22.690 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:22 np0005597378 nova_compute[238941]: 2026-01-27 13:41:22.691 238945 INFO nova.compute.manager [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Terminating instance#033[00m
Jan 27 08:41:22 np0005597378 nova_compute[238941]: 2026-01-27 13:41:22.692 238945 DEBUG nova.compute.manager [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:41:22 np0005597378 nova_compute[238941]: 2026-01-27 13:41:22.780 238945 DEBUG nova.network.neutron [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updated VIF entry in instance network info cache for port 3e092867-6724-49e3-a148-1355677054d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:41:22 np0005597378 nova_compute[238941]: 2026-01-27 13:41:22.781 238945 DEBUG nova.network.neutron [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updating instance_info_cache with network_info: [{"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:41:22 np0005597378 nova_compute[238941]: 2026-01-27 13:41:22.795 238945 DEBUG oslo_concurrency.lockutils [req-4bb2f391-6c48-4b1e-9ab6-3308bce4abb2 req-68fa2fcc-5a5d-442c-82b7-c5b2e4d5f55f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c74daffe-5fa9-4786-abf4-05f8af1b2808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:41:22 np0005597378 kernel: tap3e092867-67 (unregistering): left promiscuous mode
Jan 27 08:41:22 np0005597378 NetworkManager[48904]: <info>  [1769521282.9595] device (tap3e092867-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:41:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:22Z|00056|binding|INFO|Releasing lport 3e092867-6724-49e3-a148-1355677054d9 from this chassis (sb_readonly=0)
Jan 27 08:41:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:22Z|00057|binding|INFO|Setting lport 3e092867-6724-49e3-a148-1355677054d9 down in Southbound
Jan 27 08:41:22 np0005597378 nova_compute[238941]: 2026-01-27 13:41:22.970 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:22Z|00058|binding|INFO|Removing iface tap3e092867-67 ovn-installed in OVS
Jan 27 08:41:22 np0005597378 nova_compute[238941]: 2026-01-27 13:41:22.973 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:22 np0005597378 nova_compute[238941]: 2026-01-27 13:41:22.986 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:22.989 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:69:e7 10.100.0.11'], port_security=['fa:16:3e:69:69:e7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c74daffe-5fa9-4786-abf4-05f8af1b2808', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76574efd3c594ec5ad8e8d556f365038', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9dae007-cf18-48ab-a310-74aab34287dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162a92f9-92b8-44f9-aed8-aaa877d5df8a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3e092867-6724-49e3-a148-1355677054d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:41:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:22.990 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3e092867-6724-49e3-a148-1355677054d9 in datapath e52da3e3-8f9f-4f76-b6d4-298e7af46abf unbound from our chassis#033[00m
Jan 27 08:41:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:22.992 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e52da3e3-8f9f-4f76-b6d4-298e7af46abf#033[00m
Jan 27 08:41:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.008 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2b28e8-ebaa-4ce6-8f14-ef2ca58084cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:23 np0005597378 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Jan 27 08:41:23 np0005597378 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 14.051s CPU time.
Jan 27 08:41:23 np0005597378 systemd-machined[207425]: Machine qemu-10-instance-0000000a terminated.
Jan 27 08:41:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.037 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a84813-cfb4-4fb0-95d1-c8d159f75728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.040 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[27bf738f-fcdf-46be-a7a3-bd11b3c2db32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.067 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4efb34b6-47eb-4ffe-9a97-b77d786d33b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.084 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[80bf3b06-8877-4e53-bd7a-2e1778a31784]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape52da3e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:c7:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387945, 'reachable_time': 23585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252225, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.102 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d4678fcf-dd8e-4d64-bc91-97efc5871b6a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape52da3e3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 387957, 'tstamp': 387957}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252226, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape52da3e3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 387960, 'tstamp': 387960}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252226, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.104 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape52da3e3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.111 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.112 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape52da3e3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:41:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.113 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:41:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.113 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape52da3e3-80, col_values=(('external_ids', {'iface-id': 'e49b2201-5631-4e9a-aefd-04e11db46733'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:41:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:23.113 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.122 238945 INFO nova.virt.libvirt.driver [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Instance destroyed successfully.#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.123 238945 DEBUG nova.objects.instance [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lazy-loading 'resources' on Instance uuid c74daffe-5fa9-4786-abf4-05f8af1b2808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.144 238945 DEBUG nova.virt.libvirt.vif [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:40:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1976063352',display_name='tempest-FloatingIPsAssociationTestJSON-server-1976063352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1976063352',id=10,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:41:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='76574efd3c594ec5ad8e8d556f365038',ramdisk_id='',reservation_id='r-xceuzb0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1663098013',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1663098013-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:41:04Z,user_data=None,user_id='8045d106ed8b424aaa83fc2438f630c5',uuid=c74daffe-5fa9-4786-abf4-05f8af1b2808,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.144 238945 DEBUG nova.network.os_vif_util [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converting VIF {"id": "3e092867-6724-49e3-a148-1355677054d9", "address": "fa:16:3e:69:69:e7", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e092867-67", "ovs_interfaceid": "3e092867-6724-49e3-a148-1355677054d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.145 238945 DEBUG nova.network.os_vif_util [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.145 238945 DEBUG os_vif [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.147 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.148 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e092867-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.150 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.152 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.154 238945 INFO os_vif [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:69:e7,bridge_name='br-int',has_traffic_filtering=True,id=3e092867-6724-49e3-a148-1355677054d9,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e092867-67')#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.742 238945 INFO nova.virt.libvirt.driver [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Deleting instance files /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808_del#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.743 238945 INFO nova.virt.libvirt.driver [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Deletion of /var/lib/nova/instances/c74daffe-5fa9-4786-abf4-05f8af1b2808_del complete#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.747 238945 DEBUG nova.compute.manager [req-9e130680-0818-4dc8-a390-6cae72aa8ef8 req-c99be6d1-d04a-48c2-8380-4aadde14c0ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-vif-unplugged-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.748 238945 DEBUG oslo_concurrency.lockutils [req-9e130680-0818-4dc8-a390-6cae72aa8ef8 req-c99be6d1-d04a-48c2-8380-4aadde14c0ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.748 238945 DEBUG oslo_concurrency.lockutils [req-9e130680-0818-4dc8-a390-6cae72aa8ef8 req-c99be6d1-d04a-48c2-8380-4aadde14c0ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.748 238945 DEBUG oslo_concurrency.lockutils [req-9e130680-0818-4dc8-a390-6cae72aa8ef8 req-c99be6d1-d04a-48c2-8380-4aadde14c0ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.748 238945 DEBUG nova.compute.manager [req-9e130680-0818-4dc8-a390-6cae72aa8ef8 req-c99be6d1-d04a-48c2-8380-4aadde14c0ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] No waiting events found dispatching network-vif-unplugged-3e092867-6724-49e3-a148-1355677054d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.748 238945 DEBUG nova.compute.manager [req-9e130680-0818-4dc8-a390-6cae72aa8ef8 req-c99be6d1-d04a-48c2-8380-4aadde14c0ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-vif-unplugged-3e092867-6724-49e3-a148-1355677054d9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.789 238945 INFO nova.compute.manager [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Took 1.10 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.790 238945 DEBUG oslo.service.loopingcall [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.790 238945 DEBUG nova.compute.manager [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:41:23 np0005597378 nova_compute[238941]: 2026-01-27 13:41:23.790 238945 DEBUG nova.network.neutron [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:41:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 172 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 27 08:41:24 np0005597378 nova_compute[238941]: 2026-01-27 13:41:24.991 238945 DEBUG nova.compute.manager [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:41:24 np0005597378 nova_compute[238941]: 2026-01-27 13:41:24.991 238945 DEBUG nova.compute.manager [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing instance network info cache due to event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:41:24 np0005597378 nova_compute[238941]: 2026-01-27 13:41:24.992 238945 DEBUG oslo_concurrency.lockutils [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:41:24 np0005597378 nova_compute[238941]: 2026-01-27 13:41:24.992 238945 DEBUG oslo_concurrency.lockutils [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:41:24 np0005597378 nova_compute[238941]: 2026-01-27 13:41:24.992 238945 DEBUG nova.network.neutron [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.011 238945 DEBUG nova.network.neutron [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.034 238945 INFO nova.compute.manager [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Took 1.24 seconds to deallocate network for instance.#033[00m
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.090 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.091 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.166 238945 DEBUG oslo_concurrency.processutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.388 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521270.386228, a3a647d0-79b5-49c3-891d-3e28d357e92c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.389 238945 INFO nova.compute.manager [-] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.416 238945 DEBUG nova.compute.manager [None req-a6e9c6e2-85d7-480a-b461-b474e1af7e41 - - - - - -] [instance: a3a647d0-79b5-49c3-891d-3e28d357e92c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:41:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2646627632' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.787 238945 DEBUG oslo_concurrency.processutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.792 238945 DEBUG nova.compute.provider_tree [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.809 238945 DEBUG nova.scheduler.client.report [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.831 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.877 238945 INFO nova.scheduler.client.report [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Deleted allocations for instance c74daffe-5fa9-4786-abf4-05f8af1b2808#033[00m
Jan 27 08:41:25 np0005597378 nova_compute[238941]: 2026-01-27 13:41:25.943 238945 DEBUG oslo_concurrency.lockutils [None req-36b2df52-efce-42ae-85b3-a5db6e22ffa7 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 149 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Jan 27 08:41:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:41:26 np0005597378 nova_compute[238941]: 2026-01-27 13:41:26.665 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010416373429047204 of space, bias 1.0, pg target 0.31249120287141613 quantized to 32 (current 32)
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666717068512645 of space, bias 1.0, pg target 0.20000151205537933 quantized to 32 (current 32)
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.099392671778215e-06 of space, bias 4.0, pg target 0.001319271206133858 quantized to 16 (current 16)
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:41:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:41:27 np0005597378 nova_compute[238941]: 2026-01-27 13:41:27.485 238945 DEBUG nova.compute.manager [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:41:27 np0005597378 nova_compute[238941]: 2026-01-27 13:41:27.486 238945 DEBUG oslo_concurrency.lockutils [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:27 np0005597378 nova_compute[238941]: 2026-01-27 13:41:27.486 238945 DEBUG oslo_concurrency.lockutils [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:27 np0005597378 nova_compute[238941]: 2026-01-27 13:41:27.486 238945 DEBUG oslo_concurrency.lockutils [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c74daffe-5fa9-4786-abf4-05f8af1b2808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:27 np0005597378 nova_compute[238941]: 2026-01-27 13:41:27.486 238945 DEBUG nova.compute.manager [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] No waiting events found dispatching network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:41:27 np0005597378 nova_compute[238941]: 2026-01-27 13:41:27.487 238945 WARNING nova.compute.manager [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received unexpected event network-vif-plugged-3e092867-6724-49e3-a148-1355677054d9 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:41:27 np0005597378 nova_compute[238941]: 2026-01-27 13:41:27.487 238945 DEBUG nova.compute.manager [req-15d7fbb0-b44a-4443-8a3e-5ccea6352174 req-69326226-3ebd-41f3-b447-f286e247ffd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Received event network-vif-deleted-3e092867-6724-49e3-a148-1355677054d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:41:28 np0005597378 nova_compute[238941]: 2026-01-27 13:41:28.099 238945 DEBUG nova.network.neutron [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updated VIF entry in instance network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:41:28 np0005597378 nova_compute[238941]: 2026-01-27 13:41:28.100 238945 DEBUG nova.network.neutron [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:41:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 149 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 27 08:41:28 np0005597378 nova_compute[238941]: 2026-01-27 13:41:28.116 238945 DEBUG oslo_concurrency.lockutils [req-fdb67455-d81e-41f8-85a5-f420d7a31b52 req-739ffa57-c62a-4ae7-afb4-cb06574d491e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:41:28 np0005597378 nova_compute[238941]: 2026-01-27 13:41:28.150 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:28 np0005597378 nova_compute[238941]: 2026-01-27 13:41:28.595 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "b302d131-0feb-4256-a088-4ee6521b1ed1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:28 np0005597378 nova_compute[238941]: 2026-01-27 13:41:28.595 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "b302d131-0feb-4256-a088-4ee6521b1ed1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:28 np0005597378 nova_compute[238941]: 2026-01-27 13:41:28.617 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:41:28 np0005597378 nova_compute[238941]: 2026-01-27 13:41:28.685 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:28 np0005597378 nova_compute[238941]: 2026-01-27 13:41:28.686 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:28 np0005597378 nova_compute[238941]: 2026-01-27 13:41:28.694 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:41:28 np0005597378 nova_compute[238941]: 2026-01-27 13:41:28.695 238945 INFO nova.compute.claims [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:41:28 np0005597378 nova_compute[238941]: 2026-01-27 13:41:28.835 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:41:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2569680424' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.401 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.411 238945 DEBUG nova.compute.provider_tree [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.431 238945 DEBUG nova.scheduler.client.report [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.453 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.454 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.513 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.530 238945 INFO nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.550 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.654 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.655 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.656 238945 INFO nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating image(s)#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.678 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.704 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.724 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.728 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.793 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.794 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.795 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.795 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.816 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:41:29 np0005597378 nova_compute[238941]: 2026-01-27 13:41:29.819 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b302d131-0feb-4256-a088-4ee6521b1ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:41:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:41:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.424 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b302d131-0feb-4256-a088-4ee6521b1ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.484 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] resizing rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:41:30 np0005597378 podman[252546]: 2026-01-27 13:41:30.540390904 +0000 UTC m=+0.095623717 container create cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cray, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:41:30 np0005597378 podman[252546]: 2026-01-27 13:41:30.468247673 +0000 UTC m=+0.023480516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:41:30 np0005597378 systemd[1]: Started libpod-conmon-cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041.scope.
Jan 27 08:41:30 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:41:30 np0005597378 podman[252546]: 2026-01-27 13:41:30.706544928 +0000 UTC m=+0.261777771 container init cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:41:30 np0005597378 podman[252546]: 2026-01-27 13:41:30.714320818 +0000 UTC m=+0.269553631 container start cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cray, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:41:30 np0005597378 xenodochial_cray[252609]: 167 167
Jan 27 08:41:30 np0005597378 systemd[1]: libpod-cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041.scope: Deactivated successfully.
Jan 27 08:41:30 np0005597378 podman[252546]: 2026-01-27 13:41:30.763578791 +0000 UTC m=+0.318811604 container attach cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cray, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 27 08:41:30 np0005597378 podman[252546]: 2026-01-27 13:41:30.764047003 +0000 UTC m=+0.319279826 container died cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.796 238945 DEBUG nova.objects.instance [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'migration_context' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.811 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.812 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Ensure instance console log exists: /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.812 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.813 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.813 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.815 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.820 238945 WARNING nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.825 238945 DEBUG nova.virt.libvirt.host [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.826 238945 DEBUG nova.virt.libvirt.host [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.829 238945 DEBUG nova.virt.libvirt.host [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.829 238945 DEBUG nova.virt.libvirt.host [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.829 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.830 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.830 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.830 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.831 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.831 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.831 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.831 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.832 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.832 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.832 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.832 238945 DEBUG nova.virt.hardware [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 08:41:30 np0005597378 nova_compute[238941]: 2026-01-27 13:41:30.835 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:41:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3e833223e6508f7d3e8750cb9eda8144db703e237030a7f55dbd11a9f35fb2fb-merged.mount: Deactivated successfully.
Jan 27 08:41:31 np0005597378 podman[252546]: 2026-01-27 13:41:31.029162343 +0000 UTC m=+0.584395156 container remove cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:41:31 np0005597378 systemd[1]: libpod-conmon-cc7227bb95eeef469b3a533da2af77f02f40c4c36219781fcf4ef283a193e041.scope: Deactivated successfully.
Jan 27 08:41:31 np0005597378 podman[252673]: 2026-01-27 13:41:31.255850744 +0000 UTC m=+0.101082065 container create fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:41:31 np0005597378 podman[252673]: 2026-01-27 13:41:31.176720704 +0000 UTC m=+0.021952055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:41:31 np0005597378 systemd[1]: Started libpod-conmon-fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da.scope.
Jan 27 08:41:31 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:41:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f218fed2935a3aae6983445cb6f785cd3a7eeb2a7479cca2325523482844a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f218fed2935a3aae6983445cb6f785cd3a7eeb2a7479cca2325523482844a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f218fed2935a3aae6983445cb6f785cd3a7eeb2a7479cca2325523482844a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f218fed2935a3aae6983445cb6f785cd3a7eeb2a7479cca2325523482844a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f218fed2935a3aae6983445cb6f785cd3a7eeb2a7479cca2325523482844a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:41:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042373135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:41:31 np0005597378 nova_compute[238941]: 2026-01-27 13:41:31.447 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:41:31 np0005597378 podman[252673]: 2026-01-27 13:41:31.475147905 +0000 UTC m=+0.320379226 container init fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:41:31 np0005597378 podman[252673]: 2026-01-27 13:41:31.484438027 +0000 UTC m=+0.329669348 container start fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:41:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:41:31 np0005597378 podman[252673]: 2026-01-27 13:41:31.631697539 +0000 UTC m=+0.476928870 container attach fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 08:41:31 np0005597378 nova_compute[238941]: 2026-01-27 13:41:31.643 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:41:31 np0005597378 nova_compute[238941]: 2026-01-27 13:41:31.647 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:41:31 np0005597378 nova_compute[238941]: 2026-01-27 13:41:31.667 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:41:31 np0005597378 sad_newton[252690]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:41:31 np0005597378 sad_newton[252690]: --> All data devices are unavailable
Jan 27 08:41:31 np0005597378 systemd[1]: libpod-fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da.scope: Deactivated successfully.
Jan 27 08:41:31 np0005597378 podman[252673]: 2026-01-27 13:41:31.990068841 +0000 UTC m=+0.835300162 container died fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_newton, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 08:41:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 88 KiB/s wr, 47 op/s
Jan 27 08:41:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-84f218fed2935a3aae6983445cb6f785cd3a7eeb2a7479cca2325523482844a1-merged.mount: Deactivated successfully.
Jan 27 08:41:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:41:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3381685837' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:41:32 np0005597378 nova_compute[238941]: 2026-01-27 13:41:32.219 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:32 np0005597378 nova_compute[238941]: 2026-01-27 13:41:32.222 238945 DEBUG nova.objects.instance [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'pci_devices' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:41:32 np0005597378 nova_compute[238941]: 2026-01-27 13:41:32.241 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  <uuid>b302d131-0feb-4256-a088-4ee6521b1ed1</uuid>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  <name>instance-0000000c</name>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersAdmin275Test-server-1048773693</nova:name>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:41:30</nova:creationTime>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:        <nova:user uuid="367b5fa4b1ea4ac8bc5003a145b7aadb">tempest-ServersAdmin275Test-938318828-project-member</nova:user>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:        <nova:project uuid="f0a8272120624f10ab79ece3c464f817">tempest-ServersAdmin275Test-938318828</nova:project>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <entry name="serial">b302d131-0feb-4256-a088-4ee6521b1ed1</entry>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <entry name="uuid">b302d131-0feb-4256-a088-4ee6521b1ed1</entry>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/console.log" append="off"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:41:32 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:41:32 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:41:32 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:41:32 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:41:32 np0005597378 nova_compute[238941]: 2026-01-27 13:41:32.389 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:41:32 np0005597378 nova_compute[238941]: 2026-01-27 13:41:32.390 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:41:32 np0005597378 nova_compute[238941]: 2026-01-27 13:41:32.391 238945 INFO nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Using config drive#033[00m
Jan 27 08:41:32 np0005597378 nova_compute[238941]: 2026-01-27 13:41:32.412 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:32 np0005597378 nova_compute[238941]: 2026-01-27 13:41:32.616 238945 INFO nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating config drive at /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config#033[00m
Jan 27 08:41:32 np0005597378 nova_compute[238941]: 2026-01-27 13:41:32.621 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ftliy4e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:32 np0005597378 podman[252673]: 2026-01-27 13:41:32.635879617 +0000 UTC m=+1.481110938 container remove fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:41:32 np0005597378 systemd[1]: libpod-conmon-fb00540074b57d69722e0e6fc2de1cf0fa717c041633728647c253092733c6da.scope: Deactivated successfully.
Jan 27 08:41:32 np0005597378 nova_compute[238941]: 2026-01-27 13:41:32.750 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ftliy4e" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:32 np0005597378 nova_compute[238941]: 2026-01-27 13:41:32.777 238945 DEBUG nova.storage.rbd_utils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:32 np0005597378 nova_compute[238941]: 2026-01-27 13:41:32.784 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:33 np0005597378 nova_compute[238941]: 2026-01-27 13:41:33.152 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:33 np0005597378 podman[252886]: 2026-01-27 13:41:33.080773579 +0000 UTC m=+0.021965425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:41:33 np0005597378 podman[252886]: 2026-01-27 13:41:33.184760751 +0000 UTC m=+0.125952577 container create c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:41:33 np0005597378 nova_compute[238941]: 2026-01-27 13:41:33.199 238945 DEBUG oslo_concurrency.processutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:33 np0005597378 nova_compute[238941]: 2026-01-27 13:41:33.200 238945 INFO nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deleting local config drive /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config because it was imported into RBD.#033[00m
Jan 27 08:41:33 np0005597378 systemd[1]: Started libpod-conmon-c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600.scope.
Jan 27 08:41:33 np0005597378 nova_compute[238941]: 2026-01-27 13:41:33.283 238945 DEBUG nova.compute.manager [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:41:33 np0005597378 nova_compute[238941]: 2026-01-27 13:41:33.284 238945 DEBUG nova.compute.manager [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing instance network info cache due to event network-changed-402a0a5c-d6b4-4d22-843f-4e65f18d7327. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:41:33 np0005597378 nova_compute[238941]: 2026-01-27 13:41:33.284 238945 DEBUG oslo_concurrency.lockutils [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:41:33 np0005597378 nova_compute[238941]: 2026-01-27 13:41:33.284 238945 DEBUG oslo_concurrency.lockutils [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:41:33 np0005597378 nova_compute[238941]: 2026-01-27 13:41:33.284 238945 DEBUG nova.network.neutron [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Refreshing network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:41:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:41:33 np0005597378 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Jan 27 08:41:33 np0005597378 systemd-machined[207425]: New machine qemu-12-instance-0000000c.
Jan 27 08:41:33 np0005597378 podman[252886]: 2026-01-27 13:41:33.315263641 +0000 UTC m=+0.256455487 container init c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:41:33 np0005597378 podman[252886]: 2026-01-27 13:41:33.325865078 +0000 UTC m=+0.267056904 container start c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:41:33 np0005597378 pensive_carson[252917]: 167 167
Jan 27 08:41:33 np0005597378 systemd[1]: libpod-c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600.scope: Deactivated successfully.
Jan 27 08:41:33 np0005597378 podman[252886]: 2026-01-27 13:41:33.370439843 +0000 UTC m=+0.311631689 container attach c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:41:33 np0005597378 podman[252886]: 2026-01-27 13:41:33.372510609 +0000 UTC m=+0.313702435 container died c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:41:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-479e728d12d106d96bea9a0e58be794ae7db83d5b28ef05cae8613274a91a792-merged.mount: Deactivated successfully.
Jan 27 08:41:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 140 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 508 KiB/s wr, 63 op/s
Jan 27 08:41:34 np0005597378 podman[252886]: 2026-01-27 13:41:34.237800331 +0000 UTC m=+1.178992157 container remove c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 08:41:34 np0005597378 systemd[1]: libpod-conmon-c9b68e13fe9b7206f99050fd439b229860e9f09244b19786c1367e843d07b600.scope: Deactivated successfully.
Jan 27 08:41:34 np0005597378 podman[252900]: 2026-01-27 13:41:34.278973664 +0000 UTC m=+1.057797219 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.475 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521294.4755602, b302d131-0feb-4256-a088-4ee6521b1ed1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.476 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.478 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.479 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:41:34 np0005597378 podman[253005]: 2026-01-27 13:41:34.386971095 +0000 UTC m=+0.027195937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.482 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance spawned successfully.#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.483 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:41:34 np0005597378 podman[253005]: 2026-01-27 13:41:34.524051842 +0000 UTC m=+0.164276664 container create 04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.540 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.543 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.569 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.569 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.570 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.570 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.571 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.571 238945 DEBUG nova.virt.libvirt.driver [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.577 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.577 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521294.4764085, b302d131-0feb-4256-a088-4ee6521b1ed1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.577 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Started (Lifecycle Event)#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.607 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.610 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.639 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.649 238945 INFO nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Took 4.99 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.650 238945 DEBUG nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:34 np0005597378 systemd[1]: Started libpod-conmon-04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f.scope.
Jan 27 08:41:34 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:41:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b091c92fe19e98a0d39ebbc30585af4a2683698f3652f2354234247795d07a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b091c92fe19e98a0d39ebbc30585af4a2683698f3652f2354234247795d07a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b091c92fe19e98a0d39ebbc30585af4a2683698f3652f2354234247795d07a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b091c92fe19e98a0d39ebbc30585af4a2683698f3652f2354234247795d07a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.714 238945 INFO nova.compute.manager [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Took 6.05 seconds to build instance.#033[00m
Jan 27 08:41:34 np0005597378 nova_compute[238941]: 2026-01-27 13:41:34.733 238945 DEBUG oslo_concurrency.lockutils [None req-1c73dee0-17d4-40ef-b070-118e504b526f 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "b302d131-0feb-4256-a088-4ee6521b1ed1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:34 np0005597378 podman[253005]: 2026-01-27 13:41:34.783556851 +0000 UTC m=+0.423781683 container init 04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:41:34 np0005597378 podman[253005]: 2026-01-27 13:41:34.791199267 +0000 UTC m=+0.431424099 container start 04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:41:34 np0005597378 podman[253005]: 2026-01-27 13:41:34.802575175 +0000 UTC m=+0.442799997 container attach 04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]: {
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:    "0": [
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:        {
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "devices": [
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "/dev/loop3"
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            ],
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_name": "ceph_lv0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_size": "21470642176",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "name": "ceph_lv0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "tags": {
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.cluster_name": "ceph",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.crush_device_class": "",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.encrypted": "0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.objectstore": "bluestore",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.osd_id": "0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.type": "block",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.vdo": "0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.with_tpm": "0"
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            },
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "type": "block",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "vg_name": "ceph_vg0"
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:        }
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:    ],
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:    "1": [
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:        {
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "devices": [
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "/dev/loop4"
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            ],
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_name": "ceph_lv1",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_size": "21470642176",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "name": "ceph_lv1",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "tags": {
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.cluster_name": "ceph",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.crush_device_class": "",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.encrypted": "0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.objectstore": "bluestore",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.osd_id": "1",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.type": "block",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.vdo": "0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.with_tpm": "0"
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            },
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "type": "block",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "vg_name": "ceph_vg1"
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:        }
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:    ],
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:    "2": [
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:        {
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "devices": [
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "/dev/loop5"
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            ],
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_name": "ceph_lv2",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_size": "21470642176",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "name": "ceph_lv2",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "tags": {
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.cluster_name": "ceph",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.crush_device_class": "",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.encrypted": "0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.objectstore": "bluestore",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.osd_id": "2",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.type": "block",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.vdo": "0",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:                "ceph.with_tpm": "0"
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            },
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "type": "block",
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:            "vg_name": "ceph_vg2"
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:        }
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]:    ]
Jan 27 08:41:35 np0005597378 tender_leavitt[253027]: }
Jan 27 08:41:35 np0005597378 systemd[1]: libpod-04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f.scope: Deactivated successfully.
Jan 27 08:41:35 np0005597378 conmon[253027]: conmon 04b612da88cd57cd08bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f.scope/container/memory.events
Jan 27 08:41:35 np0005597378 podman[253005]: 2026-01-27 13:41:35.10448013 +0000 UTC m=+0.744704952 container died 04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_leavitt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:41:35 np0005597378 nova_compute[238941]: 2026-01-27 13:41:35.213 238945 DEBUG nova.network.neutron [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updated VIF entry in instance network info cache for port 402a0a5c-d6b4-4d22-843f-4e65f18d7327. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:41:35 np0005597378 nova_compute[238941]: 2026-01-27 13:41:35.215 238945 DEBUG nova.network.neutron [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:41:35 np0005597378 nova_compute[238941]: 2026-01-27 13:41:35.244 238945 DEBUG oslo_concurrency.lockutils [req-0b918797-961a-4795-b91c-c00a52ea759f req-fdf762ab-e7bb-4785-b6f2-5bdac7c5940e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:41:35 np0005597378 systemd[1]: var-lib-containers-storage-overlay-02b091c92fe19e98a0d39ebbc30585af4a2683698f3652f2354234247795d07a-merged.mount: Deactivated successfully.
Jan 27 08:41:35 np0005597378 podman[253005]: 2026-01-27 13:41:35.452988826 +0000 UTC m=+1.093213648 container remove 04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 08:41:35 np0005597378 systemd[1]: libpod-conmon-04b612da88cd57cd08bb55b39acf29ea2ae37a74700d63a4f26bbd951f91af3f.scope: Deactivated successfully.
Jan 27 08:41:36 np0005597378 podman[253114]: 2026-01-27 13:41:35.911544817 +0000 UTC m=+0.020576447 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:41:36 np0005597378 podman[253114]: 2026-01-27 13:41:36.0299664 +0000 UTC m=+0.138998000 container create a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_merkle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:41:36 np0005597378 systemd[1]: Started libpod-conmon-a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0.scope.
Jan 27 08:41:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 27 08:41:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:41:36 np0005597378 podman[253114]: 2026-01-27 13:41:36.196175015 +0000 UTC m=+0.305206655 container init a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:41:36 np0005597378 podman[253114]: 2026-01-27 13:41:36.203812261 +0000 UTC m=+0.312843861 container start a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_merkle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 08:41:36 np0005597378 amazing_merkle[253130]: 167 167
Jan 27 08:41:36 np0005597378 systemd[1]: libpod-a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0.scope: Deactivated successfully.
Jan 27 08:41:36 np0005597378 podman[253114]: 2026-01-27 13:41:36.236459754 +0000 UTC m=+0.345491374 container attach a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_merkle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:41:36 np0005597378 podman[253114]: 2026-01-27 13:41:36.237608005 +0000 UTC m=+0.346639615 container died a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_merkle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 08:41:36 np0005597378 nova_compute[238941]: 2026-01-27 13:41:36.407 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:41:36 np0005597378 nova_compute[238941]: 2026-01-27 13:41:36.409 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:41:36 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e8e26b8714ade031ce1412e19eb2b903cf264eef2da38c4ce714998adcc1d3fd-merged.mount: Deactivated successfully.
Jan 27 08:41:36 np0005597378 podman[253114]: 2026-01-27 13:41:36.618679351 +0000 UTC m=+0.727710951 container remove a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:41:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:41:36 np0005597378 systemd[1]: libpod-conmon-a81f65f20dcb03cc861b210cffc5b587063e1c691d96d3f17a335c6c9621b1a0.scope: Deactivated successfully.
Jan 27 08:41:36 np0005597378 nova_compute[238941]: 2026-01-27 13:41:36.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:36 np0005597378 podman[253155]: 2026-01-27 13:41:36.766652344 +0000 UTC m=+0.023820656 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:41:36 np0005597378 podman[253155]: 2026-01-27 13:41:36.930821363 +0000 UTC m=+0.187989655 container create d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:41:37 np0005597378 systemd[1]: Started libpod-conmon-d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826.scope.
Jan 27 08:41:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:41:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e5697eaeb1bbb92079412224932f5b757ded62e6eb2804087021db9dafa98e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e5697eaeb1bbb92079412224932f5b757ded62e6eb2804087021db9dafa98e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e5697eaeb1bbb92079412224932f5b757ded62e6eb2804087021db9dafa98e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e5697eaeb1bbb92079412224932f5b757ded62e6eb2804087021db9dafa98e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:41:37 np0005597378 podman[253155]: 2026-01-27 13:41:37.292965598 +0000 UTC m=+0.550133940 container init d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ride, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:41:37 np0005597378 podman[253155]: 2026-01-27 13:41:37.300317917 +0000 UTC m=+0.557486209 container start d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:41:37 np0005597378 podman[253155]: 2026-01-27 13:41:37.358830019 +0000 UTC m=+0.615998391 container attach d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ride, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:41:37 np0005597378 nova_compute[238941]: 2026-01-27 13:41:37.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:41:37 np0005597378 nova_compute[238941]: 2026-01-27 13:41:37.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:41:37 np0005597378 nova_compute[238941]: 2026-01-27 13:41:37.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:41:37 np0005597378 nova_compute[238941]: 2026-01-27 13:41:37.547 238945 INFO nova.compute.manager [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Rebuilding instance#033[00m
Jan 27 08:41:37 np0005597378 nova_compute[238941]: 2026-01-27 13:41:37.815 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:41:38 np0005597378 lvm[253257]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:41:38 np0005597378 lvm[253256]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:41:38 np0005597378 lvm[253256]: VG ceph_vg0 finished
Jan 27 08:41:38 np0005597378 lvm[253257]: VG ceph_vg1 finished
Jan 27 08:41:38 np0005597378 lvm[253260]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:41:38 np0005597378 lvm[253260]: VG ceph_vg2 finished
Jan 27 08:41:38 np0005597378 podman[253247]: 2026-01-27 13:41:38.085247875 +0000 UTC m=+0.063390665 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 08:41:38 np0005597378 lvm[253275]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:41:38 np0005597378 lvm[253275]: VG ceph_vg2 finished
Jan 27 08:41:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 27 08:41:38 np0005597378 nova_compute[238941]: 2026-01-27 13:41:38.121 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521283.1201887, c74daffe-5fa9-4786-abf4-05f8af1b2808 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:41:38 np0005597378 nova_compute[238941]: 2026-01-27 13:41:38.121 238945 INFO nova.compute.manager [-] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:41:38 np0005597378 funny_ride[253172]: {}
Jan 27 08:41:38 np0005597378 nova_compute[238941]: 2026-01-27 13:41:38.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:38 np0005597378 systemd[1]: libpod-d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826.scope: Deactivated successfully.
Jan 27 08:41:38 np0005597378 systemd[1]: libpod-d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826.scope: Consumed 1.353s CPU time.
Jan 27 08:41:38 np0005597378 podman[253155]: 2026-01-27 13:41:38.186865683 +0000 UTC m=+1.444033995 container died d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:41:38 np0005597378 nova_compute[238941]: 2026-01-27 13:41:38.388 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:41:38 np0005597378 nova_compute[238941]: 2026-01-27 13:41:38.388 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:41:38 np0005597378 nova_compute[238941]: 2026-01-27 13:41:38.389 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 08:41:38 np0005597378 nova_compute[238941]: 2026-01-27 13:41:38.389 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb83a99e-76c6-4a1a-8b12-39a44d77f760 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:41:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d5e5697eaeb1bbb92079412224932f5b757ded62e6eb2804087021db9dafa98e-merged.mount: Deactivated successfully.
Jan 27 08:41:38 np0005597378 podman[253155]: 2026-01-27 13:41:38.710270589 +0000 UTC m=+1.967438881 container remove d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ride, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:41:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:41:38 np0005597378 systemd[1]: libpod-conmon-d97e0a798fc562c6989b834221f5ab8764ac5dd51ddb2d8ac4d7a19652069826.scope: Deactivated successfully.
Jan 27 08:41:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:41:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:41:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:41:39 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:41:39 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:41:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 27 08:41:40 np0005597378 nova_compute[238941]: 2026-01-27 13:41:40.374 238945 DEBUG nova.compute.manager [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:40 np0005597378 nova_compute[238941]: 2026-01-27 13:41:40.436 238945 DEBUG nova.compute.manager [None req-aeef3b50-9888-4916-84cc-05beedaeaa28 - - - - - -] [instance: c74daffe-5fa9-4786-abf4-05f8af1b2808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:40 np0005597378 nova_compute[238941]: 2026-01-27 13:41:40.483 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'pci_requests' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:41:40 np0005597378 nova_compute[238941]: 2026-01-27 13:41:40.501 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'pci_devices' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:41:40 np0005597378 nova_compute[238941]: 2026-01-27 13:41:40.525 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'resources' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:41:40 np0005597378 nova_compute[238941]: 2026-01-27 13:41:40.545 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'migration_context' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:41:40 np0005597378 nova_compute[238941]: 2026-01-27 13:41:40.562 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 08:41:40 np0005597378 nova_compute[238941]: 2026-01-27 13:41:40.567 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 08:41:41 np0005597378 nova_compute[238941]: 2026-01-27 13:41:41.672 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:41:41 np0005597378 nova_compute[238941]: 2026-01-27 13:41:41.949 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:41.949 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:41:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:41.950 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:41:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 08:41:42 np0005597378 nova_compute[238941]: 2026-01-27 13:41:42.877 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:42 np0005597378 nova_compute[238941]: 2026-01-27 13:41:42.878 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:42 np0005597378 nova_compute[238941]: 2026-01-27 13:41:42.878 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:42 np0005597378 nova_compute[238941]: 2026-01-27 13:41:42.878 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:42 np0005597378 nova_compute[238941]: 2026-01-27 13:41:42.878 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:42 np0005597378 nova_compute[238941]: 2026-01-27 13:41:42.879 238945 INFO nova.compute.manager [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Terminating instance#033[00m
Jan 27 08:41:42 np0005597378 nova_compute[238941]: 2026-01-27 13:41:42.880 238945 DEBUG nova.compute.manager [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:41:43 np0005597378 kernel: tap402a0a5c-d6 (unregistering): left promiscuous mode
Jan 27 08:41:43 np0005597378 NetworkManager[48904]: <info>  [1769521303.0463] device (tap402a0a5c-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.064 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:43Z|00059|binding|INFO|Releasing lport 402a0a5c-d6b4-4d22-843f-4e65f18d7327 from this chassis (sb_readonly=0)
Jan 27 08:41:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:43Z|00060|binding|INFO|Setting lport 402a0a5c-d6b4-4d22-843f-4e65f18d7327 down in Southbound
Jan 27 08:41:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:41:43Z|00061|binding|INFO|Removing iface tap402a0a5c-d6 ovn-installed in OVS
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.066 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:43.075 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:57:29 10.100.0.14'], port_security=['fa:16:3e:e6:57:29 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bb83a99e-76c6-4a1a-8b12-39a44d77f760', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76574efd3c594ec5ad8e8d556f365038', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9dae007-cf18-48ab-a310-74aab34287dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162a92f9-92b8-44f9-aed8-aaa877d5df8a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=402a0a5c-d6b4-4d22-843f-4e65f18d7327) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:41:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:43.077 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 402a0a5c-d6b4-4d22-843f-4e65f18d7327 in datapath e52da3e3-8f9f-4f76-b6d4-298e7af46abf unbound from our chassis#033[00m
Jan 27 08:41:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:43.078 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e52da3e3-8f9f-4f76-b6d4-298e7af46abf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:41:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:43.079 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4106954c-8a5a-4b9c-9629-9091d5996fd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:43.080 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf namespace which is not needed anymore#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:43 np0005597378 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 27 08:41:43 np0005597378 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 15.680s CPU time.
Jan 27 08:41:43 np0005597378 systemd-machined[207425]: Machine qemu-9-instance-00000009 terminated.
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.327 238945 INFO nova.virt.libvirt.driver [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Instance destroyed successfully.#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.327 238945 DEBUG nova.objects.instance [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lazy-loading 'resources' on Instance uuid bb83a99e-76c6-4a1a-8b12-39a44d77f760 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:41:43 np0005597378 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [NOTICE]   (251342) : haproxy version is 2.8.14-c23fe91
Jan 27 08:41:43 np0005597378 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [NOTICE]   (251342) : path to executable is /usr/sbin/haproxy
Jan 27 08:41:43 np0005597378 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [WARNING]  (251342) : Exiting Master process...
Jan 27 08:41:43 np0005597378 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [ALERT]    (251342) : Current worker (251344) exited with code 143 (Terminated)
Jan 27 08:41:43 np0005597378 neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf[251338]: [WARNING]  (251342) : All workers exited. Exiting... (0)
Jan 27 08:41:43 np0005597378 systemd[1]: libpod-7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306.scope: Deactivated successfully.
Jan 27 08:41:43 np0005597378 podman[253335]: 2026-01-27 13:41:43.341171713 +0000 UTC m=+0.157939703 container died 7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.379 238945 DEBUG nova.virt.libvirt.vif [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1535303300',display_name='tempest-FloatingIPsAssociationTestJSON-server-1535303300',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1535303300',id=9,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:40:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='76574efd3c594ec5ad8e8d556f365038',ramdisk_id='',reservation_id='r-sq4zkikx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1663098013',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1663098013-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:40:43Z,user_data=None,user_id='8045d106ed8b424aaa83fc2438f630c5',uuid=bb83a99e-76c6-4a1a-8b12-39a44d77f760,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.380 238945 DEBUG nova.network.os_vif_util [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converting VIF {"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.380 238945 DEBUG nova.network.os_vif_util [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.381 238945 DEBUG os_vif [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.383 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap402a0a5c-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.388 238945 INFO os_vif [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:57:29,bridge_name='br-int',has_traffic_filtering=True,id=402a0a5c-d6b4-4d22-843f-4e65f18d7327,network=Network(e52da3e3-8f9f-4f76-b6d4-298e7af46abf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402a0a5c-d6')#033[00m
Jan 27 08:41:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306-userdata-shm.mount: Deactivated successfully.
Jan 27 08:41:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-73eff589aa6237cc3bbb3f41238c040449583acb2328fd2f0237bb549399c0fc-merged.mount: Deactivated successfully.
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.761 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [{"id": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "address": "fa:16:3e:e6:57:29", "network": {"id": "e52da3e3-8f9f-4f76-b6d4-298e7af46abf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-233108357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76574efd3c594ec5ad8e8d556f365038", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402a0a5c-d6", "ovs_interfaceid": "402a0a5c-d6b4-4d22-843f-4e65f18d7327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.788 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-bb83a99e-76c6-4a1a-8b12-39a44d77f760" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.789 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.789 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.790 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.790 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.790 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.790 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.790 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.824 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.825 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.825 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.825 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:41:43 np0005597378 nova_compute[238941]: 2026-01-27 13:41:43.825 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:43 np0005597378 podman[253335]: 2026-01-27 13:41:43.874397344 +0000 UTC m=+0.691165334 container cleanup 7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:41:43 np0005597378 systemd[1]: libpod-conmon-7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306.scope: Deactivated successfully.
Jan 27 08:41:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 27 08:41:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:41:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4168460936' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.389 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.487 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.487 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.491 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.491 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:41:44 np0005597378 podman[253395]: 2026-01-27 13:41:44.534553028 +0000 UTC m=+0.635582801 container remove 7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:41:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.541 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95f61e27-300c-4787-bc6b-a71fc7eaecbf]: (4, ('Tue Jan 27 01:41:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf (7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306)\n7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306\nTue Jan 27 01:41:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf (7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306)\n7ebb15e04017ddbd25df3a5c7b3fbbfa20284d599c015c9e2154a81506c98306\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.544 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8030db-2c77-4e95-ae0e-464a56847cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.546 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape52da3e3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:41:44 np0005597378 kernel: tape52da3e3-80: left promiscuous mode
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.549 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.565 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.569 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1480fe2-7974-4309-8391-9ac43bfd5475]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.582 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bf729572-9f68-4e12-8e35-d4b70189ea14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.584 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ceedbf4d-1269-4574-b183-0966cba5a51d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.602 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[42f0a0ac-46e4-4667-a663-1810cdb232c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387937, 'reachable_time': 19163, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253433, 'error': None, 'target': 'ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.605 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e52da3e3-8f9f-4f76-b6d4-298e7af46abf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:41:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:44.605 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[3d429397-8dbe-4492-9ff2-de2395951d34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:41:44 np0005597378 systemd[1]: run-netns-ovnmeta\x2de52da3e3\x2d8f9f\x2d4f76\x2db6d4\x2d298e7af46abf.mount: Deactivated successfully.
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.663 238945 DEBUG nova.compute.manager [req-866ca45f-45d8-4006-8b33-022775645143 req-89b1e90a-46d3-4390-b4ef-a7ebea12b2d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-vif-unplugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.663 238945 DEBUG oslo_concurrency.lockutils [req-866ca45f-45d8-4006-8b33-022775645143 req-89b1e90a-46d3-4390-b4ef-a7ebea12b2d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.664 238945 DEBUG oslo_concurrency.lockutils [req-866ca45f-45d8-4006-8b33-022775645143 req-89b1e90a-46d3-4390-b4ef-a7ebea12b2d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.664 238945 DEBUG oslo_concurrency.lockutils [req-866ca45f-45d8-4006-8b33-022775645143 req-89b1e90a-46d3-4390-b4ef-a7ebea12b2d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.664 238945 DEBUG nova.compute.manager [req-866ca45f-45d8-4006-8b33-022775645143 req-89b1e90a-46d3-4390-b4ef-a7ebea12b2d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] No waiting events found dispatching network-vif-unplugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.664 238945 DEBUG nova.compute.manager [req-866ca45f-45d8-4006-8b33-022775645143 req-89b1e90a-46d3-4390-b4ef-a7ebea12b2d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-vif-unplugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.768 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.771 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4486MB free_disk=59.92183001060039GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.771 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:41:44 np0005597378 nova_compute[238941]: 2026-01-27 13:41:44.771 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:41:45 np0005597378 nova_compute[238941]: 2026-01-27 13:41:45.330 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance bb83a99e-76c6-4a1a-8b12-39a44d77f760 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 08:41:45 np0005597378 nova_compute[238941]: 2026-01-27 13:41:45.331 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b302d131-0feb-4256-a088-4ee6521b1ed1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 08:41:45 np0005597378 nova_compute[238941]: 2026-01-27 13:41:45.331 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 08:41:45 np0005597378 nova_compute[238941]: 2026-01-27 13:41:45.331 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 08:41:45 np0005597378 nova_compute[238941]: 2026-01-27 13:41:45.408 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:41:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:41:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3955738054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:41:46 np0005597378 nova_compute[238941]: 2026-01-27 13:41:46.017 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:41:46 np0005597378 nova_compute[238941]: 2026-01-27 13:41:46.022 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:41:46 np0005597378 nova_compute[238941]: 2026-01-27 13:41:46.077 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:41:46 np0005597378 nova_compute[238941]: 2026-01-27 13:41:46.099 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 08:41:46 np0005597378 nova_compute[238941]: 2026-01-27 13:41:46.100 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:41:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 132 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Jan 27 08:41:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:46.289 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:41:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:46.289 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:41:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:46.289 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:41:46 np0005597378 nova_compute[238941]: 2026-01-27 13:41:46.672 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:41:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:41:46 np0005597378 nova_compute[238941]: 2026-01-27 13:41:46.905 238945 DEBUG nova.compute.manager [req-c89ff1bf-85b1-400a-b199-2e45fd5c62f0 req-4e52ddd4-63dd-4d46-92db-19cf49aa9717 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:41:46 np0005597378 nova_compute[238941]: 2026-01-27 13:41:46.905 238945 DEBUG oslo_concurrency.lockutils [req-c89ff1bf-85b1-400a-b199-2e45fd5c62f0 req-4e52ddd4-63dd-4d46-92db-19cf49aa9717 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:41:46 np0005597378 nova_compute[238941]: 2026-01-27 13:41:46.905 238945 DEBUG oslo_concurrency.lockutils [req-c89ff1bf-85b1-400a-b199-2e45fd5c62f0 req-4e52ddd4-63dd-4d46-92db-19cf49aa9717 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:41:46 np0005597378 nova_compute[238941]: 2026-01-27 13:41:46.906 238945 DEBUG oslo_concurrency.lockutils [req-c89ff1bf-85b1-400a-b199-2e45fd5c62f0 req-4e52ddd4-63dd-4d46-92db-19cf49aa9717 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:41:46 np0005597378 nova_compute[238941]: 2026-01-27 13:41:46.906 238945 DEBUG nova.compute.manager [req-c89ff1bf-85b1-400a-b199-2e45fd5c62f0 req-4e52ddd4-63dd-4d46-92db-19cf49aa9717 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] No waiting events found dispatching network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 08:41:46 np0005597378 nova_compute[238941]: 2026-01-27 13:41:46.906 238945 WARNING nova.compute.manager [req-c89ff1bf-85b1-400a-b199-2e45fd5c62f0 req-4e52ddd4-63dd-4d46-92db-19cf49aa9717 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received unexpected event network-vif-plugged-402a0a5c-d6b4-4d22-843f-4e65f18d7327 for instance with vm_state active and task_state deleting.
Jan 27 08:41:47 np0005597378 nova_compute[238941]: 2026-01-27 13:41:47.286 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:41:47 np0005597378 nova_compute[238941]: 2026-01-27 13:41:47.425 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:41:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:41:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:41:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:41:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:41:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:41:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:41:48 np0005597378 nova_compute[238941]: 2026-01-27 13:41:48.094 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:41:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 132 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 71 op/s
Jan 27 08:41:48 np0005597378 nova_compute[238941]: 2026-01-27 13:41:48.167 238945 INFO nova.virt.libvirt.driver [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Deleting instance files /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760_del
Jan 27 08:41:48 np0005597378 nova_compute[238941]: 2026-01-27 13:41:48.168 238945 INFO nova.virt.libvirt.driver [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Deletion of /var/lib/nova/instances/bb83a99e-76c6-4a1a-8b12-39a44d77f760_del complete
Jan 27 08:41:48 np0005597378 nova_compute[238941]: 2026-01-27 13:41:48.226 238945 INFO nova.compute.manager [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Took 5.35 seconds to destroy the instance on the hypervisor.
Jan 27 08:41:48 np0005597378 nova_compute[238941]: 2026-01-27 13:41:48.227 238945 DEBUG oslo.service.loopingcall [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 08:41:48 np0005597378 nova_compute[238941]: 2026-01-27 13:41:48.227 238945 DEBUG nova.compute.manager [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 08:41:48 np0005597378 nova_compute[238941]: 2026-01-27 13:41:48.227 238945 DEBUG nova.network.neutron [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 08:41:48 np0005597378 nova_compute[238941]: 2026-01-27 13:41:48.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:41:49 np0005597378 nova_compute[238941]: 2026-01-27 13:41:49.280 238945 DEBUG nova.network.neutron [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:41:49 np0005597378 nova_compute[238941]: 2026-01-27 13:41:49.298 238945 INFO nova.compute.manager [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Took 1.07 seconds to deallocate network for instance.
Jan 27 08:41:49 np0005597378 nova_compute[238941]: 2026-01-27 13:41:49.347 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:41:49 np0005597378 nova_compute[238941]: 2026-01-27 13:41:49.348 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:41:49 np0005597378 nova_compute[238941]: 2026-01-27 13:41:49.399 238945 DEBUG oslo_concurrency.processutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:41:49 np0005597378 nova_compute[238941]: 2026-01-27 13:41:49.432 238945 DEBUG nova.compute.manager [req-a3be43fd-2b1c-41a9-a2ac-519b3f658173 req-6d68a269-ae2a-4a58-85a5-05c43682aa1f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Received event network-vif-deleted-402a0a5c-d6b4-4d22-843f-4e65f18d7327 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:41:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:41:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1057721660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:41:50 np0005597378 nova_compute[238941]: 2026-01-27 13:41:50.054 238945 DEBUG oslo_concurrency.processutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:41:50 np0005597378 nova_compute[238941]: 2026-01-27 13:41:50.060 238945 DEBUG nova.compute.provider_tree [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:41:50 np0005597378 nova_compute[238941]: 2026-01-27 13:41:50.075 238945 DEBUG nova.scheduler.client.report [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:41:50 np0005597378 nova_compute[238941]: 2026-01-27 13:41:50.097 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:41:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 109 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 27 08:41:50 np0005597378 nova_compute[238941]: 2026-01-27 13:41:50.121 238945 INFO nova.scheduler.client.report [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Deleted allocations for instance bb83a99e-76c6-4a1a-8b12-39a44d77f760
Jan 27 08:41:50 np0005597378 nova_compute[238941]: 2026-01-27 13:41:50.210 238945 DEBUG oslo_concurrency.lockutils [None req-b9818c39-6c70-479c-8cc3-0948e7a86be5 8045d106ed8b424aaa83fc2438f630c5 76574efd3c594ec5ad8e8d556f365038 - - default default] Lock "bb83a99e-76c6-4a1a-8b12-39a44d77f760" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:41:50 np0005597378 nova_compute[238941]: 2026-01-27 13:41:50.616 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 08:41:51 np0005597378 nova_compute[238941]: 2026-01-27 13:41:51.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:41:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:41:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:41:51.952 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:41:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 109 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 2.0 MiB/s wr, 65 op/s
Jan 27 08:41:53 np0005597378 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 27 08:41:53 np0005597378 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 13.483s CPU time.
Jan 27 08:41:53 np0005597378 systemd-machined[207425]: Machine qemu-12-instance-0000000c terminated.
Jan 27 08:41:53 np0005597378 nova_compute[238941]: 2026-01-27 13:41:53.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:41:53 np0005597378 nova_compute[238941]: 2026-01-27 13:41:53.630 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance shutdown successfully after 13 seconds.
Jan 27 08:41:53 np0005597378 nova_compute[238941]: 2026-01-27 13:41:53.635 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance destroyed successfully.
Jan 27 08:41:53 np0005597378 nova_compute[238941]: 2026-01-27 13:41:53.640 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance destroyed successfully.
Jan 27 08:41:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 117 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.648316) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521315648418, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1412, "num_deletes": 251, "total_data_size": 2085070, "memory_usage": 2116720, "flush_reason": "Manual Compaction"}
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521315766035, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2052679, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19636, "largest_seqno": 21047, "table_properties": {"data_size": 2046204, "index_size": 3614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14333, "raw_average_key_size": 20, "raw_value_size": 2032853, "raw_average_value_size": 2851, "num_data_blocks": 164, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769521182, "oldest_key_time": 1769521182, "file_creation_time": 1769521315, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 117775 microseconds, and 6015 cpu microseconds.
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.766124) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2052679 bytes OK
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.766166) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.780301) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.780381) EVENT_LOG_v1 {"time_micros": 1769521315780370, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.780408) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2078763, prev total WAL file size 2078763, number of live WAL files 2.
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.781373) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2004KB)], [47(6991KB)]
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521315781405, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9211905, "oldest_snapshot_seqno": -1}
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4410 keys, 7476382 bytes, temperature: kUnknown
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521315954945, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7476382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7446179, "index_size": 18061, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 109206, "raw_average_key_size": 24, "raw_value_size": 7365779, "raw_average_value_size": 1670, "num_data_blocks": 754, "num_entries": 4410, "num_filter_entries": 4410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769521315, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.955503) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7476382 bytes
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.992105) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 53.0 rd, 43.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.8 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.1) write-amplify(3.6) OK, records in: 4928, records dropped: 518 output_compression: NoCompression
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.992137) EVENT_LOG_v1 {"time_micros": 1769521315992123, "job": 24, "event": "compaction_finished", "compaction_time_micros": 173884, "compaction_time_cpu_micros": 17912, "output_level": 6, "num_output_files": 1, "total_output_size": 7476382, "num_input_records": 4928, "num_output_records": 4410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521315993124, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521315994691, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.781263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.994813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.994818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.994820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.994821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:41:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:41:55.994823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:41:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 82 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 2.2 MiB/s wr, 96 op/s
Jan 27 08:41:56 np0005597378 nova_compute[238941]: 2026-01-27 13:41:56.360 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deleting instance files /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1_del#033[00m
Jan 27 08:41:56 np0005597378 nova_compute[238941]: 2026-01-27 13:41:56.360 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deletion of /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1_del complete#033[00m
Jan 27 08:41:56 np0005597378 nova_compute[238941]: 2026-01-27 13:41:56.675 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:56 np0005597378 nova_compute[238941]: 2026-01-27 13:41:56.729 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:41:56 np0005597378 nova_compute[238941]: 2026-01-27 13:41:56.729 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating image(s)#033[00m
Jan 27 08:41:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:41:56 np0005597378 nova_compute[238941]: 2026-01-27 13:41:56.749 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:56 np0005597378 nova_compute[238941]: 2026-01-27 13:41:56.772 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:56 np0005597378 nova_compute[238941]: 2026-01-27 13:41:56.791 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:41:56 np0005597378 nova_compute[238941]: 2026-01-27 13:41:56.795 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:41:56 np0005597378 nova_compute[238941]: 2026-01-27 13:41:56.796 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:41:57 np0005597378 nova_compute[238941]: 2026-01-27 13:41:57.114 238945 DEBUG nova.virt.libvirt.imagebackend [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/0ee8954b-88fb-4f95-ac2f-0ee07bab09cc/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/0ee8954b-88fb-4f95-ac2f-0ee07bab09cc/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 27 08:41:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 82 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Jan 27 08:41:58 np0005597378 nova_compute[238941]: 2026-01-27 13:41:58.326 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521303.3243978, bb83a99e-76c6-4a1a-8b12-39a44d77f760 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:41:58 np0005597378 nova_compute[238941]: 2026-01-27 13:41:58.326 238945 INFO nova.compute.manager [-] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:41:58 np0005597378 nova_compute[238941]: 2026-01-27 13:41:58.349 238945 DEBUG nova.compute.manager [None req-f1f0b605-7951-43ec-9c04-15f826a005ed - - - - - -] [instance: bb83a99e-76c6-4a1a-8b12-39a44d77f760] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:41:58 np0005597378 nova_compute[238941]: 2026-01-27 13:41:58.391 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:41:59 np0005597378 nova_compute[238941]: 2026-01-27 13:41:59.426 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:59 np0005597378 nova_compute[238941]: 2026-01-27 13:41:59.489 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.part --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:41:59 np0005597378 nova_compute[238941]: 2026-01-27 13:41:59.490 238945 DEBUG nova.virt.images [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] 0ee8954b-88fb-4f95-ac2f-0ee07bab09cc was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Jan 27 08:41:59 np0005597378 nova_compute[238941]: 2026-01-27 13:41:59.496 238945 DEBUG nova.privsep.utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 27 08:41:59 np0005597378 nova_compute[238941]: 2026-01-27 13:41:59.496 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.part /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:41:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:41:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3316024553' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:41:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:41:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3316024553' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:42:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 119 op/s
Jan 27 08:42:00 np0005597378 nova_compute[238941]: 2026-01-27 13:42:00.435 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "a724662c-197d-43f2-aa20-c656ae3e4f2f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:00 np0005597378 nova_compute[238941]: 2026-01-27 13:42:00.436 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "a724662c-197d-43f2-aa20-c656ae3e4f2f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:00 np0005597378 nova_compute[238941]: 2026-01-27 13:42:00.459 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:42:00 np0005597378 nova_compute[238941]: 2026-01-27 13:42:00.527 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:00 np0005597378 nova_compute[238941]: 2026-01-27 13:42:00.528 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:00 np0005597378 nova_compute[238941]: 2026-01-27 13:42:00.538 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:42:00 np0005597378 nova_compute[238941]: 2026-01-27 13:42:00.538 238945 INFO nova.compute.claims [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:42:00 np0005597378 nova_compute[238941]: 2026-01-27 13:42:00.651 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:42:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1871617958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.401 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.750s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.407 238945 DEBUG nova.compute.provider_tree [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.424 238945 DEBUG nova.scheduler.client.report [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.454 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.455 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.502 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.503 238945 DEBUG nova.network.neutron [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.525 238945 INFO nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.658 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.807 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.808 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.809 238945 INFO nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Creating image(s)#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.828 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.974 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:01 np0005597378 nova_compute[238941]: 2026-01-27 13:42:01.998 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.002 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.021 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.part /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.converted" returned: 0 in 2.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.024 238945 DEBUG nova.network.neutron [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.025 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.031 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.062 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.063 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.064 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.064 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.086 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.090 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a724662c-197d-43f2-aa20-c656ae3e4f2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 106 KiB/s wr, 59 op/s
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.126 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea.converted --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.127 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.151 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:02 np0005597378 nova_compute[238941]: 2026-01-27 13:42:02.157 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea b302d131-0feb-4256-a088-4ee6521b1ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:03 np0005597378 nova_compute[238941]: 2026-01-27 13:42:03.394 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:04 np0005597378 nova_compute[238941]: 2026-01-27 13:42:04.097 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a724662c-197d-43f2-aa20-c656ae3e4f2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 85 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 87 op/s
Jan 27 08:42:04 np0005597378 nova_compute[238941]: 2026-01-27 13:42:04.155 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] resizing rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:42:04 np0005597378 nova_compute[238941]: 2026-01-27 13:42:04.260 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea b302d131-0feb-4256-a088-4ee6521b1ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:04 np0005597378 nova_compute[238941]: 2026-01-27 13:42:04.317 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] resizing rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:42:04 np0005597378 podman[253833]: 2026-01-27 13:42:04.74060023 +0000 UTC m=+0.081733312 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.002 238945 DEBUG nova.objects.instance [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lazy-loading 'migration_context' on Instance uuid a724662c-197d-43f2-aa20-c656ae3e4f2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.097 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.097 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Ensure instance console log exists: /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.098 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.098 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.098 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.100 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.105 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.106 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Ensure instance console log exists: /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.106 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.107 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.107 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.108 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.112 238945 WARNING nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.117 238945 WARNING nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.118 238945 DEBUG nova.virt.libvirt.host [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.119 238945 DEBUG nova.virt.libvirt.host [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.122 238945 DEBUG nova.virt.libvirt.host [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.123 238945 DEBUG nova.virt.libvirt.host [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.124 238945 DEBUG nova.virt.libvirt.host [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.124 238945 DEBUG nova.virt.libvirt.host [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.125 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.125 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.126 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.126 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.126 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.127 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.127 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.127 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.127 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.128 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.128 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.128 238945 DEBUG nova.virt.hardware [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.131 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.156 238945 DEBUG nova.virt.libvirt.host [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.157 238945 DEBUG nova.virt.libvirt.host [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.158 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.158 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.159 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.159 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.160 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.160 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.160 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.160 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.161 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.161 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.161 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.161 238945 DEBUG nova.virt.hardware [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.162 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.182 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:42:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/159056982' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.733 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.754 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.757 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:42:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3328049517' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.892 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.913 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:05 np0005597378 nova_compute[238941]: 2026-01-27 13:42:05.917 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 115 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.7 MiB/s wr, 69 op/s
Jan 27 08:42:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:42:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2190681863' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:42:06 np0005597378 nova_compute[238941]: 2026-01-27 13:42:06.318 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:06 np0005597378 nova_compute[238941]: 2026-01-27 13:42:06.320 238945 DEBUG nova.objects.instance [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid a724662c-197d-43f2-aa20-c656ae3e4f2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:42:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2026454782' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:42:06 np0005597378 nova_compute[238941]: 2026-01-27 13:42:06.521 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:06 np0005597378 nova_compute[238941]: 2026-01-27 13:42:06.523 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  <uuid>b302d131-0feb-4256-a088-4ee6521b1ed1</uuid>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  <name>instance-0000000c</name>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersAdmin275Test-server-1048773693</nova:name>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:42:05</nova:creationTime>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:        <nova:user uuid="367b5fa4b1ea4ac8bc5003a145b7aadb">tempest-ServersAdmin275Test-938318828-project-member</nova:user>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:        <nova:project uuid="f0a8272120624f10ab79ece3c464f817">tempest-ServersAdmin275Test-938318828</nova:project>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <entry name="serial">b302d131-0feb-4256-a088-4ee6521b1ed1</entry>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <entry name="uuid">b302d131-0feb-4256-a088-4ee6521b1ed1</entry>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/console.log" append="off"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:42:06 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:42:06 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:42:06 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:42:06 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:42:06 np0005597378 nova_compute[238941]: 2026-01-27 13:42:06.679 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.210 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  <uuid>a724662c-197d-43f2-aa20-c656ae3e4f2f</uuid>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  <name>instance-0000000d</name>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerDiagnosticsTest-server-1020721999</nova:name>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:42:05</nova:creationTime>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:        <nova:user uuid="b41bcf293a5049e6802dd2ef596d9e7e">tempest-ServerDiagnosticsTest-619372961-project-member</nova:user>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:        <nova:project uuid="7dd8034cac054c4389b9572f39a39c3a">tempest-ServerDiagnosticsTest-619372961</nova:project>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <entry name="serial">a724662c-197d-43f2-aa20-c656ae3e4f2f</entry>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <entry name="uuid">a724662c-197d-43f2-aa20-c656ae3e4f2f</entry>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/a724662c-197d-43f2-aa20-c656ae3e4f2f_disk">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/a724662c-197d-43f2-aa20-c656ae3e4f2f_disk.config">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/console.log" append="off"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:42:07 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:42:07 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:42:07 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:42:07 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.256 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.256 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.257 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Using config drive#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.277 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.361 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'ec2_ids' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.477 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.477 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.478 238945 INFO nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Using config drive#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.494 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.548 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'keypairs' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.924 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating config drive at /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.929 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp83eczzwq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.950 238945 INFO nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Creating config drive at /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/disk.config#033[00m
Jan 27 08:42:07 np0005597378 nova_compute[238941]: 2026-01-27 13:42:07.956 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkpau9h6x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:08 np0005597378 nova_compute[238941]: 2026-01-27 13:42:08.054 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp83eczzwq" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:08 np0005597378 nova_compute[238941]: 2026-01-27 13:42:08.078 238945 DEBUG nova.storage.rbd_utils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:08 np0005597378 nova_compute[238941]: 2026-01-27 13:42:08.083 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:08 np0005597378 nova_compute[238941]: 2026-01-27 13:42:08.099 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkpau9h6x" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 115 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 57 op/s
Jan 27 08:42:08 np0005597378 nova_compute[238941]: 2026-01-27 13:42:08.123 238945 DEBUG nova.storage.rbd_utils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] rbd image a724662c-197d-43f2-aa20-c656ae3e4f2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:08 np0005597378 nova_compute[238941]: 2026-01-27 13:42:08.127 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/disk.config a724662c-197d-43f2-aa20-c656ae3e4f2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:08 np0005597378 nova_compute[238941]: 2026-01-27 13:42:08.166 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521313.1648479, b302d131-0feb-4256-a088-4ee6521b1ed1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:08 np0005597378 nova_compute[238941]: 2026-01-27 13:42:08.167 238945 INFO nova.compute.manager [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:42:08 np0005597378 nova_compute[238941]: 2026-01-27 13:42:08.190 238945 DEBUG nova.compute.manager [None req-8fc9ce15-521a-4adc-89fd-2875300c2c11 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:08 np0005597378 nova_compute[238941]: 2026-01-27 13:42:08.194 238945 DEBUG nova.compute.manager [None req-8fc9ce15-521a-4adc-89fd-2875300c2c11 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:42:08 np0005597378 nova_compute[238941]: 2026-01-27 13:42:08.214 238945 INFO nova.compute.manager [None req-8fc9ce15-521a-4adc-89fd-2875300c2c11 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 08:42:08 np0005597378 nova_compute[238941]: 2026-01-27 13:42:08.397 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:08 np0005597378 podman[254136]: 2026-01-27 13:42:08.71322916 +0000 UTC m=+0.049527371 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 08:42:09 np0005597378 nova_compute[238941]: 2026-01-27 13:42:09.321 238945 DEBUG oslo_concurrency.processutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:09 np0005597378 nova_compute[238941]: 2026-01-27 13:42:09.322 238945 INFO nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deleting local config drive /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config because it was imported into RBD.#033[00m
Jan 27 08:42:09 np0005597378 nova_compute[238941]: 2026-01-27 13:42:09.343 238945 DEBUG oslo_concurrency.processutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/disk.config a724662c-197d-43f2-aa20-c656ae3e4f2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:09 np0005597378 nova_compute[238941]: 2026-01-27 13:42:09.344 238945 INFO nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Deleting local config drive /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f/disk.config because it was imported into RBD.#033[00m
Jan 27 08:42:09 np0005597378 systemd-machined[207425]: New machine qemu-13-instance-0000000c.
Jan 27 08:42:09 np0005597378 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Jan 27 08:42:09 np0005597378 systemd-machined[207425]: New machine qemu-14-instance-0000000d.
Jan 27 08:42:09 np0005597378 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Jan 27 08:42:09 np0005597378 nova_compute[238941]: 2026-01-27 13:42:09.951 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521329.9511104, a724662c-197d-43f2-aa20-c656ae3e4f2f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:09 np0005597378 nova_compute[238941]: 2026-01-27 13:42:09.952 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:42:09 np0005597378 nova_compute[238941]: 2026-01-27 13:42:09.954 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:42:09 np0005597378 nova_compute[238941]: 2026-01-27 13:42:09.954 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:42:09 np0005597378 nova_compute[238941]: 2026-01-27 13:42:09.958 238945 INFO nova.virt.libvirt.driver [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Instance spawned successfully.#033[00m
Jan 27 08:42:09 np0005597378 nova_compute[238941]: 2026-01-27 13:42:09.959 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:42:09 np0005597378 nova_compute[238941]: 2026-01-27 13:42:09.983 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:09 np0005597378 nova_compute[238941]: 2026-01-27 13:42:09.987 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.014 238945 DEBUG nova.compute.manager [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.015 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.018 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance spawned successfully.#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.019 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.088 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.089 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.090 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.090 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.091 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.091 238945 DEBUG nova.virt.libvirt.driver [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.096 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.097 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521329.9524431, a724662c-197d-43f2-aa20-c656ae3e4f2f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.097 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] VM Started (Lifecycle Event)#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.102 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.103 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.103 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.104 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.105 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.105 238945 DEBUG nova.virt.libvirt.driver [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 134 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 89 op/s
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.143 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.148 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.181 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.182 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521330.0132506, b302d131-0feb-4256-a088-4ee6521b1ed1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.182 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.193 238945 DEBUG nova.compute.manager [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.207 238945 INFO nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Took 8.40 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.208 238945 DEBUG nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.211 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.220 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.275 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.275 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521330.0137374, b302d131-0feb-4256-a088-4ee6521b1ed1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.276 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Started (Lifecycle Event)#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.309 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.311 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.312 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.312 238945 DEBUG nova.objects.instance [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.318 238945 INFO nova.compute.manager [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Took 9.81 seconds to build instance.#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.322 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.353 238945 DEBUG oslo_concurrency.lockutils [None req-b1ab7166-f1d0-44b3-bf2f-f1eedffb3e7e b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "a724662c-197d-43f2-aa20-c656ae3e4f2f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:10 np0005597378 nova_compute[238941]: 2026-01-27 13:42:10.384 238945 DEBUG oslo_concurrency.lockutils [None req-06132f63-da75-4c1c-9707-e9800bc2d1ae 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:11 np0005597378 nova_compute[238941]: 2026-01-27 13:42:11.575 238945 DEBUG nova.compute.manager [None req-b9a8e68b-0379-4bef-bde0-487ae4ecc2c2 0b32cb80e4634338909c7a6a1a7866fd 8094b7ce90e14225afd880a07b64c2d6 - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:11 np0005597378 nova_compute[238941]: 2026-01-27 13:42:11.579 238945 INFO nova.compute.manager [None req-b9a8e68b-0379-4bef-bde0-487ae4ecc2c2 0b32cb80e4634338909c7a6a1a7866fd 8094b7ce90e14225afd880a07b64c2d6 - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Retrieving diagnostics#033[00m
Jan 27 08:42:11 np0005597378 nova_compute[238941]: 2026-01-27 13:42:11.679 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:42:11 np0005597378 nova_compute[238941]: 2026-01-27 13:42:11.763 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "a724662c-197d-43f2-aa20-c656ae3e4f2f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:11 np0005597378 nova_compute[238941]: 2026-01-27 13:42:11.764 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "a724662c-197d-43f2-aa20-c656ae3e4f2f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:11 np0005597378 nova_compute[238941]: 2026-01-27 13:42:11.764 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "a724662c-197d-43f2-aa20-c656ae3e4f2f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:11 np0005597378 nova_compute[238941]: 2026-01-27 13:42:11.764 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "a724662c-197d-43f2-aa20-c656ae3e4f2f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:11 np0005597378 nova_compute[238941]: 2026-01-27 13:42:11.764 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "a724662c-197d-43f2-aa20-c656ae3e4f2f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:11 np0005597378 nova_compute[238941]: 2026-01-27 13:42:11.766 238945 INFO nova.compute.manager [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Terminating instance#033[00m
Jan 27 08:42:11 np0005597378 nova_compute[238941]: 2026-01-27 13:42:11.767 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "refresh_cache-a724662c-197d-43f2-aa20-c656ae3e4f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:42:11 np0005597378 nova_compute[238941]: 2026-01-27 13:42:11.767 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquired lock "refresh_cache-a724662c-197d-43f2-aa20-c656ae3e4f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:42:11 np0005597378 nova_compute[238941]: 2026-01-27 13:42:11.767 238945 DEBUG nova.network.neutron [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.037 238945 DEBUG nova.network.neutron [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:42:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 134 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 420 KiB/s rd, 3.5 MiB/s wr, 64 op/s
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.317 238945 DEBUG nova.network.neutron [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.343 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Releasing lock "refresh_cache-a724662c-197d-43f2-aa20-c656ae3e4f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.343 238945 DEBUG nova.compute.manager [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.373 238945 INFO nova.compute.manager [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Rebuilding instance#033[00m
Jan 27 08:42:12 np0005597378 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 27 08:42:12 np0005597378 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 2.909s CPU time.
Jan 27 08:42:12 np0005597378 systemd-machined[207425]: Machine qemu-14-instance-0000000d terminated.
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.563 238945 INFO nova.virt.libvirt.driver [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Instance destroyed successfully.#033[00m
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.564 238945 DEBUG nova.objects.instance [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lazy-loading 'resources' on Instance uuid a724662c-197d-43f2-aa20-c656ae3e4f2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.613 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'trusted_certs' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.669 238945 DEBUG nova.compute.manager [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.727 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'pci_requests' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.748 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'pci_devices' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.761 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'resources' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.788 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'migration_context' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.832 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 08:42:12 np0005597378 nova_compute[238941]: 2026-01-27 13:42:12.837 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 08:42:13 np0005597378 nova_compute[238941]: 2026-01-27 13:42:13.014 238945 INFO nova.virt.libvirt.driver [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Deleting instance files /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f_del#033[00m
Jan 27 08:42:13 np0005597378 nova_compute[238941]: 2026-01-27 13:42:13.015 238945 INFO nova.virt.libvirt.driver [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Deletion of /var/lib/nova/instances/a724662c-197d-43f2-aa20-c656ae3e4f2f_del complete#033[00m
Jan 27 08:42:13 np0005597378 nova_compute[238941]: 2026-01-27 13:42:13.121 238945 INFO nova.compute.manager [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:42:13 np0005597378 nova_compute[238941]: 2026-01-27 13:42:13.121 238945 DEBUG oslo.service.loopingcall [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:42:13 np0005597378 nova_compute[238941]: 2026-01-27 13:42:13.122 238945 DEBUG nova.compute.manager [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:42:13 np0005597378 nova_compute[238941]: 2026-01-27 13:42:13.122 238945 DEBUG nova.network.neutron [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:42:13 np0005597378 nova_compute[238941]: 2026-01-27 13:42:13.280 238945 DEBUG nova.network.neutron [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:42:13 np0005597378 nova_compute[238941]: 2026-01-27 13:42:13.306 238945 DEBUG nova.network.neutron [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:42:13 np0005597378 nova_compute[238941]: 2026-01-27 13:42:13.338 238945 INFO nova.compute.manager [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Took 0.22 seconds to deallocate network for instance.#033[00m
Jan 27 08:42:13 np0005597378 nova_compute[238941]: 2026-01-27 13:42:13.401 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:13 np0005597378 nova_compute[238941]: 2026-01-27 13:42:13.402 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:13 np0005597378 nova_compute[238941]: 2026-01-27 13:42:13.403 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:13 np0005597378 nova_compute[238941]: 2026-01-27 13:42:13.463 238945 DEBUG oslo_concurrency.processutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:42:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24839351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:42:14 np0005597378 nova_compute[238941]: 2026-01-27 13:42:14.037 238945 DEBUG oslo_concurrency.processutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:14 np0005597378 nova_compute[238941]: 2026-01-27 13:42:14.043 238945 DEBUG nova.compute.provider_tree [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:42:14 np0005597378 nova_compute[238941]: 2026-01-27 13:42:14.060 238945 DEBUG nova.scheduler.client.report [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:42:14 np0005597378 nova_compute[238941]: 2026-01-27 13:42:14.083 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 107 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Jan 27 08:42:14 np0005597378 nova_compute[238941]: 2026-01-27 13:42:14.133 238945 INFO nova.scheduler.client.report [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Deleted allocations for instance a724662c-197d-43f2-aa20-c656ae3e4f2f#033[00m
Jan 27 08:42:14 np0005597378 nova_compute[238941]: 2026-01-27 13:42:14.257 238945 DEBUG oslo_concurrency.lockutils [None req-9cd29a78-c783-48e5-90b5-124907035830 b41bcf293a5049e6802dd2ef596d9e7e 7dd8034cac054c4389b9572f39a39c3a - - default default] Lock "a724662c-197d-43f2-aa20-c656ae3e4f2f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:15 np0005597378 nova_compute[238941]: 2026-01-27 13:42:15.310 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:15 np0005597378 nova_compute[238941]: 2026-01-27 13:42:15.310 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:15 np0005597378 nova_compute[238941]: 2026-01-27 13:42:15.326 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:42:15 np0005597378 nova_compute[238941]: 2026-01-27 13:42:15.412 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:15 np0005597378 nova_compute[238941]: 2026-01-27 13:42:15.413 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:15 np0005597378 nova_compute[238941]: 2026-01-27 13:42:15.419 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:42:15 np0005597378 nova_compute[238941]: 2026-01-27 13:42:15.419 238945 INFO nova.compute.claims [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:42:15 np0005597378 nova_compute[238941]: 2026-01-27 13:42:15.589 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 199 op/s
Jan 27 08:42:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:42:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925749534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.169 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.174 238945 DEBUG nova.compute.provider_tree [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.199 238945 DEBUG nova.scheduler.client.report [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.275 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.276 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.326 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.327 238945 DEBUG nova.network.neutron [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.353 238945 INFO nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.375 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.472 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.473 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.474 238945 INFO nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Creating image(s)#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.493 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.516 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.534 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.537 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.559 238945 DEBUG nova.policy [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '01b0e341f4e9495f8d9fe42a148123f6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de3e13e4602c4c9c8503b1baaa962908', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.596 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.597 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.597 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.597 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.618 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.623 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b54f7f28-d070-4afd-94b1-24775374d89d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:16 np0005597378 nova_compute[238941]: 2026-01-27 13:42:16.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:42:17
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'images', '.mgr', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:42:17 np0005597378 nova_compute[238941]: 2026-01-27 13:42:17.321 238945 DEBUG nova.network.neutron [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Successfully created port: 640931bb-6240-4b85-a02c-96b1f07c8170 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:42:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:42:17 np0005597378 nova_compute[238941]: 2026-01-27 13:42:17.995 238945 DEBUG nova.network.neutron [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Successfully updated port: 640931bb-6240-4b85-a02c-96b1f07c8170 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:42:18 np0005597378 nova_compute[238941]: 2026-01-27 13:42:18.012 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "refresh_cache-b54f7f28-d070-4afd-94b1-24775374d89d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:42:18 np0005597378 nova_compute[238941]: 2026-01-27 13:42:18.013 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquired lock "refresh_cache-b54f7f28-d070-4afd-94b1-24775374d89d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:42:18 np0005597378 nova_compute[238941]: 2026-01-27 13:42:18.013 238945 DEBUG nova.network.neutron [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:42:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 914 KiB/s wr, 195 op/s
Jan 27 08:42:18 np0005597378 nova_compute[238941]: 2026-01-27 13:42:18.141 238945 DEBUG nova.compute.manager [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-changed-640931bb-6240-4b85-a02c-96b1f07c8170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:42:18 np0005597378 nova_compute[238941]: 2026-01-27 13:42:18.142 238945 DEBUG nova.compute.manager [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Refreshing instance network info cache due to event network-changed-640931bb-6240-4b85-a02c-96b1f07c8170. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:42:18 np0005597378 nova_compute[238941]: 2026-01-27 13:42:18.142 238945 DEBUG oslo_concurrency.lockutils [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b54f7f28-d070-4afd-94b1-24775374d89d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:42:18 np0005597378 nova_compute[238941]: 2026-01-27 13:42:18.300 238945 DEBUG nova.network.neutron [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:42:18 np0005597378 nova_compute[238941]: 2026-01-27 13:42:18.406 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:18 np0005597378 nova_compute[238941]: 2026-01-27 13:42:18.661 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b54f7f28-d070-4afd-94b1-24775374d89d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:18 np0005597378 nova_compute[238941]: 2026-01-27 13:42:18.721 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] resizing rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.174 238945 DEBUG nova.network.neutron [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Updating instance_info_cache with network_info: [{"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.192 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Releasing lock "refresh_cache-b54f7f28-d070-4afd-94b1-24775374d89d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.194 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Instance network_info: |[{"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.195 238945 DEBUG oslo_concurrency.lockutils [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b54f7f28-d070-4afd-94b1-24775374d89d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.195 238945 DEBUG nova.network.neutron [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Refreshing network info cache for port 640931bb-6240-4b85-a02c-96b1f07c8170 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.658 238945 DEBUG nova.objects.instance [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lazy-loading 'migration_context' on Instance uuid b54f7f28-d070-4afd-94b1-24775374d89d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.675 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.676 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Ensure instance console log exists: /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.676 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.677 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.677 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.679 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Start _get_guest_xml network_info=[{"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.684 238945 WARNING nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.689 238945 DEBUG nova.virt.libvirt.host [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.690 238945 DEBUG nova.virt.libvirt.host [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.694 238945 DEBUG nova.virt.libvirt.host [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.694 238945 DEBUG nova.virt.libvirt.host [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.695 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.695 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.696 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.696 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.696 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.696 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.697 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.697 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.697 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.697 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.698 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.698 238945 DEBUG nova.virt.hardware [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:42:19 np0005597378 nova_compute[238941]: 2026-01-27 13:42:19.700 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 134 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 221 op/s
Jan 27 08:42:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:42:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/113627042' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:42:20 np0005597378 nova_compute[238941]: 2026-01-27 13:42:20.397 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.697s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:20 np0005597378 nova_compute[238941]: 2026-01-27 13:42:20.416 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:20 np0005597378 nova_compute[238941]: 2026-01-27 13:42:20.420 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:20 np0005597378 nova_compute[238941]: 2026-01-27 13:42:20.512 238945 DEBUG nova.network.neutron [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Updated VIF entry in instance network info cache for port 640931bb-6240-4b85-a02c-96b1f07c8170. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:42:20 np0005597378 nova_compute[238941]: 2026-01-27 13:42:20.513 238945 DEBUG nova.network.neutron [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Updating instance_info_cache with network_info: [{"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:42:20 np0005597378 nova_compute[238941]: 2026-01-27 13:42:20.675 238945 DEBUG oslo_concurrency.lockutils [req-6cdacae0-b37f-4b97-8c2f-b28f4bcf6f65 req-d5d198bf-2220-464b-930b-00d748b4dfd0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b54f7f28-d070-4afd-94b1-24775374d89d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:42:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:42:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3850938441' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.005 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.006 238945 DEBUG nova.virt.libvirt.vif [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:42:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-387471087',display_name='tempest-ImagesOneServerTestJSON-server-387471087',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-387471087',id=14,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de3e13e4602c4c9c8503b1baaa962908',ramdisk_id='',reservation_id='r-vljnyb10',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1846368836',owner_user_name='tempest-ImagesOneServe
rTestJSON-1846368836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:42:16Z,user_data=None,user_id='01b0e341f4e9495f8d9fe42a148123f6',uuid=b54f7f28-d070-4afd-94b1-24775374d89d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.007 238945 DEBUG nova.network.os_vif_util [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Converting VIF {"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.008 238945 DEBUG nova.network.os_vif_util [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.009 238945 DEBUG nova.objects.instance [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lazy-loading 'pci_devices' on Instance uuid b54f7f28-d070-4afd-94b1-24775374d89d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.047 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  <uuid>b54f7f28-d070-4afd-94b1-24775374d89d</uuid>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  <name>instance-0000000e</name>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <nova:name>tempest-ImagesOneServerTestJSON-server-387471087</nova:name>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:42:19</nova:creationTime>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:        <nova:user uuid="01b0e341f4e9495f8d9fe42a148123f6">tempest-ImagesOneServerTestJSON-1846368836-project-member</nova:user>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:        <nova:project uuid="de3e13e4602c4c9c8503b1baaa962908">tempest-ImagesOneServerTestJSON-1846368836</nova:project>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:        <nova:port uuid="640931bb-6240-4b85-a02c-96b1f07c8170">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <entry name="serial">b54f7f28-d070-4afd-94b1-24775374d89d</entry>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <entry name="uuid">b54f7f28-d070-4afd-94b1-24775374d89d</entry>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b54f7f28-d070-4afd-94b1-24775374d89d_disk">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b54f7f28-d070-4afd-94b1-24775374d89d_disk.config">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:69:c6:70"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <target dev="tap640931bb-62"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/console.log" append="off"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:42:21 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:42:21 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:42:21 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:42:21 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.049 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Preparing to wait for external event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.050 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.050 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.050 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.051 238945 DEBUG nova.virt.libvirt.vif [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:42:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-387471087',display_name='tempest-ImagesOneServerTestJSON-server-387471087',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-387471087',id=14,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de3e13e4602c4c9c8503b1baaa962908',ramdisk_id='',reservation_id='r-vljnyb10',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1846368836',owner_user_name='tempest-ImagesOneServerTestJSON-1846368836-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:42:16Z,user_data=None,user_id='01b0e341f4e9495f8d9fe42a148123f6',uuid=b54f7f28-d070-4afd-94b1-24775374d89d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.051 238945 DEBUG nova.network.os_vif_util [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Converting VIF {"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.052 238945 DEBUG nova.network.os_vif_util [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.052 238945 DEBUG os_vif [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.053 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.054 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.054 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.058 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.058 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap640931bb-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.059 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap640931bb-62, col_values=(('external_ids', {'iface-id': '640931bb-6240-4b85-a02c-96b1f07c8170', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:c6:70', 'vm-uuid': 'b54f7f28-d070-4afd-94b1-24775374d89d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:21 np0005597378 NetworkManager[48904]: <info>  [1769521341.0616] manager: (tap640931bb-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.063 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.069 238945 INFO os_vif [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62')#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.181 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.182 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.182 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] No VIF found with MAC fa:16:3e:69:c6:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.183 238945 INFO nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Using config drive#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.201 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.683 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.899 238945 INFO nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Creating config drive at /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/disk.config#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.904 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwr10xoiq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.932 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "325fa6d5-6a4b-4551-af87-acb87aab870b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:21 np0005597378 nova_compute[238941]: 2026-01-27 13:42:21.933 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "325fa6d5-6a4b-4551-af87-acb87aab870b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.032 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwr10xoiq" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.063 238945 DEBUG nova.storage.rbd_utils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] rbd image b54f7f28-d070-4afd-94b1-24775374d89d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.068 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/disk.config b54f7f28-d070-4afd-94b1-24775374d89d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.097 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:42:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 134 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 189 op/s
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.331 238945 DEBUG oslo_concurrency.processutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/disk.config b54f7f28-d070-4afd-94b1-24775374d89d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.331 238945 INFO nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Deleting local config drive /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d/disk.config because it was imported into RBD.#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.350 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.350 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.362 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.362 238945 INFO nova.compute.claims [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:42:22 np0005597378 kernel: tap640931bb-62: entered promiscuous mode
Jan 27 08:42:22 np0005597378 NetworkManager[48904]: <info>  [1769521342.3798] manager: (tap640931bb-62): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.380 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:22Z|00062|binding|INFO|Claiming lport 640931bb-6240-4b85-a02c-96b1f07c8170 for this chassis.
Jan 27 08:42:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:22Z|00063|binding|INFO|640931bb-6240-4b85-a02c-96b1f07c8170: Claiming fa:16:3e:69:c6:70 10.100.0.9
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.396 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:c6:70 10.100.0.9'], port_security=['fa:16:3e:69:c6:70 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b54f7f28-d070-4afd-94b1-24775374d89d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de3e13e4602c4c9c8503b1baaa962908', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e59b44be-45bb-4a6f-8bba-7e255b92edf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fecd366-5b93-42cf-acb9-74d482fd8eca, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=640931bb-6240-4b85-a02c-96b1f07c8170) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.397 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 640931bb-6240-4b85-a02c-96b1f07c8170 in datapath c9a30bdb-f010-4449-afb6-cf95c4e85fbd bound to our chassis#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.398 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c9a30bdb-f010-4449-afb6-cf95c4e85fbd#033[00m
Jan 27 08:42:22 np0005597378 systemd-udevd[254636]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.412 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3e550d-92d1-4c12-944e-c383387bcb5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.416 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc9a30bdb-f1 in ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.417 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc9a30bdb-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.417 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[670e958f-5dc8-415f-84fc-53c8528d37a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.421 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6f2bc9-1785-4c02-938d-6b575557a454]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 systemd-machined[207425]: New machine qemu-15-instance-0000000e.
Jan 27 08:42:22 np0005597378 NetworkManager[48904]: <info>  [1769521342.4347] device (tap640931bb-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:42:22 np0005597378 NetworkManager[48904]: <info>  [1769521342.4353] device (tap640931bb-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.435 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[ece4052d-9ce5-45db-a396-3944d78a922b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.462 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6aff1323-dd7c-4156-a2ad-0221ef3ae84b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:22Z|00064|binding|INFO|Setting lport 640931bb-6240-4b85-a02c-96b1f07c8170 ovn-installed in OVS
Jan 27 08:42:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:22Z|00065|binding|INFO|Setting lport 640931bb-6240-4b85-a02c-96b1f07c8170 up in Southbound
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.472 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.502 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b6218045-bd0b-443c-9c19-8730763e2570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 NetworkManager[48904]: <info>  [1769521342.5083] manager: (tapc9a30bdb-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Jan 27 08:42:22 np0005597378 systemd-udevd[254640]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.509 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[472d5ebc-bac4-40cb-8699-35ec19e8b10b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.522 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.543 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b719d30c-22b7-41a6-96d2-dbec4e26635d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.546 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[83d9f131-1b44-4413-aba4-0c8fdaa4eb7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 NetworkManager[48904]: <info>  [1769521342.5727] device (tapc9a30bdb-f0): carrier: link connected
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.578 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf6a1c6-b18b-4358-9041-e73e6d50435f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.596 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[434be345-43a7-4c72-a4ca-efa08418f55d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9a30bdb-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:5e:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398219, 'reachable_time': 22899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254670, 'error': None, 'target': 'ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.615 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bd69b3-1d9d-4e9f-880d-c37e1297ddac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee2:5e6d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 398219, 'tstamp': 398219}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254671, 'error': None, 'target': 'ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.638 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b6fb3db4-f3b7-481e-a871-e0cacce9905d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9a30bdb-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:5e:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398219, 'reachable_time': 22899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254672, 'error': None, 'target': 'ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.686 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a45c55-95bd-432b-b5e6-d84ea1ecff31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.725 238945 DEBUG nova.compute.manager [req-6e63745f-598c-4412-9731-aa4cdbbaffcd req-aeda4c40-7851-4d10-a3af-41597ce978c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.725 238945 DEBUG oslo_concurrency.lockutils [req-6e63745f-598c-4412-9731-aa4cdbbaffcd req-aeda4c40-7851-4d10-a3af-41597ce978c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.725 238945 DEBUG oslo_concurrency.lockutils [req-6e63745f-598c-4412-9731-aa4cdbbaffcd req-aeda4c40-7851-4d10-a3af-41597ce978c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.726 238945 DEBUG oslo_concurrency.lockutils [req-6e63745f-598c-4412-9731-aa4cdbbaffcd req-aeda4c40-7851-4d10-a3af-41597ce978c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.726 238945 DEBUG nova.compute.manager [req-6e63745f-598c-4412-9731-aa4cdbbaffcd req-aeda4c40-7851-4d10-a3af-41597ce978c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Processing event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.766 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b544fad6-2242-4d00-8eb6-fc0b9cf1d3b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.768 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9a30bdb-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.768 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.769 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9a30bdb-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.770 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:22 np0005597378 kernel: tapc9a30bdb-f0: entered promiscuous mode
Jan 27 08:42:22 np0005597378 NetworkManager[48904]: <info>  [1769521342.7725] manager: (tapc9a30bdb-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.774 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc9a30bdb-f0, col_values=(('external_ids', {'iface-id': '0c554392-9662-4423-af6f-6fff4869f586'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.774 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.776 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c9a30bdb-f010-4449-afb6-cf95c4e85fbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c9a30bdb-f010-4449-afb6-cf95c4e85fbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:42:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:22Z|00066|binding|INFO|Releasing lport 0c554392-9662-4423-af6f-6fff4869f586 from this chassis (sb_readonly=0)
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.777 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[586f96b2-8471-4cb1-ac5d-fc7827130736]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.777 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-c9a30bdb-f010-4449-afb6-cf95c4e85fbd
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/c9a30bdb-f010-4449-afb6-cf95c4e85fbd.pid.haproxy
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID c9a30bdb-f010-4449-afb6-cf95c4e85fbd
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:42:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:22.778 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'env', 'PROCESS_TAG=haproxy-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c9a30bdb-f010-4449-afb6-cf95c4e85fbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.893 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.906 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521342.9032521, b54f7f28-d070-4afd-94b1-24775374d89d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.907 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] VM Started (Lifecycle Event)
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.910 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.913 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.917 238945 INFO nova.virt.libvirt.driver [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Instance spawned successfully.
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.918 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.930 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.936 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.939 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.940 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.940 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.940 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.941 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.941 238945 DEBUG nova.virt.libvirt.driver [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.976 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.977 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521342.90945, b54f7f28-d070-4afd-94b1-24775374d89d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:42:22 np0005597378 nova_compute[238941]: 2026-01-27 13:42:22.977 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] VM Paused (Lifecycle Event)
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.010 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.016 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521342.9169824, b54f7f28-d070-4afd-94b1-24775374d89d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.016 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] VM Resumed (Lifecycle Event)
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.020 238945 INFO nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Took 6.55 seconds to spawn the instance on the hypervisor.
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.021 238945 DEBUG nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.037 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.040 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.067 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.101 238945 INFO nova.compute.manager [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Took 7.72 seconds to build instance.
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.130 238945 DEBUG oslo_concurrency.lockutils [None req-dbb9226d-1d7f-4738-b286-c3f968493759 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:42:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:42:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3483171772' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.161 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.169 238945 DEBUG nova.compute.provider_tree [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.195 238945 DEBUG nova.scheduler.client.report [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:42:23 np0005597378 podman[254762]: 2026-01-27 13:42:23.223007949 +0000 UTC m=+0.087972631 container create 9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.224 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.226 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 08:42:23 np0005597378 systemd[1]: Started libpod-conmon-9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777.scope.
Jan 27 08:42:23 np0005597378 podman[254762]: 2026-01-27 13:42:23.169256434 +0000 UTC m=+0.034221126 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:42:23 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:42:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57033dec845061747035ac4a99f15e0bcc1a0301768f1cd2ebbba68ed50504b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.288 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.302 238945 INFO nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 08:42:23 np0005597378 podman[254762]: 2026-01-27 13:42:23.306684422 +0000 UTC m=+0.171649124 container init 9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:42:23 np0005597378 podman[254762]: 2026-01-27 13:42:23.311963724 +0000 UTC m=+0.176928406 container start 9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.317 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 08:42:23 np0005597378 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [NOTICE]   (254782) : New worker (254784) forked
Jan 27 08:42:23 np0005597378 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [NOTICE]   (254782) : Loading success.
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.395 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.396 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.396 238945 INFO nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Creating image(s)
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.414 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.437 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.462 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.466 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.538 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.539 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.540 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.540 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.561 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.566 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 325fa6d5-6a4b-4551-af87-acb87aab870b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.911 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 325fa6d5-6a4b-4551-af87-acb87aab870b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:42:23 np0005597378 nova_compute[238941]: 2026-01-27 13:42:23.984 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] resizing rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.121 238945 DEBUG nova.objects.instance [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lazy-loading 'migration_context' on Instance uuid 325fa6d5-6a4b-4551-af87-acb87aab870b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:42:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 153 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.0 MiB/s wr, 258 op/s
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.142 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.142 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Ensure instance console log exists: /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.143 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.143 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.143 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.145 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.149 238945 WARNING nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.154 238945 DEBUG nova.virt.libvirt.host [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.155 238945 DEBUG nova.virt.libvirt.host [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.158 238945 DEBUG nova.virt.libvirt.host [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.158 238945 DEBUG nova.virt.libvirt.host [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.159 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.159 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.160 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.160 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.160 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.161 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.161 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.161 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.161 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.161 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.162 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.162 238945 DEBUG nova.virt.hardware [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.166 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:42:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:42:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330902503' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.797 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.820 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:24 np0005597378 nova_compute[238941]: 2026-01-27 13:42:24.826 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.314 238945 DEBUG nova.compute.manager [req-90896de0-7ffa-40c9-a03c-9e2ec6a75a44 req-3aadd474-f3bc-4aff-a95c-cce5642df577 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.316 238945 DEBUG oslo_concurrency.lockutils [req-90896de0-7ffa-40c9-a03c-9e2ec6a75a44 req-3aadd474-f3bc-4aff-a95c-cce5642df577 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.317 238945 DEBUG oslo_concurrency.lockutils [req-90896de0-7ffa-40c9-a03c-9e2ec6a75a44 req-3aadd474-f3bc-4aff-a95c-cce5642df577 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.318 238945 DEBUG oslo_concurrency.lockutils [req-90896de0-7ffa-40c9-a03c-9e2ec6a75a44 req-3aadd474-f3bc-4aff-a95c-cce5642df577 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.318 238945 DEBUG nova.compute.manager [req-90896de0-7ffa-40c9-a03c-9e2ec6a75a44 req-3aadd474-f3bc-4aff-a95c-cce5642df577 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] No waiting events found dispatching network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.319 238945 WARNING nova.compute.manager [req-90896de0-7ffa-40c9-a03c-9e2ec6a75a44 req-3aadd474-f3bc-4aff-a95c-cce5642df577 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received unexpected event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:42:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:42:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2497082153' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.464 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.465 238945 DEBUG nova.objects.instance [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lazy-loading 'pci_devices' on Instance uuid 325fa6d5-6a4b-4551-af87-acb87aab870b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.498 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  <uuid>325fa6d5-6a4b-4551-af87-acb87aab870b</uuid>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  <name>instance-0000000f</name>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-471551799</nova:name>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:42:24</nova:creationTime>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:        <nova:user uuid="5d9966a09e6941468045cfd8d4e0fffb">tempest-ServerDiagnosticsV248Test-1748794496-project-member</nova:user>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:        <nova:project uuid="d9a18c1ebae446cd91ddc4d0a42ebbfc">tempest-ServerDiagnosticsV248Test-1748794496</nova:project>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <entry name="serial">325fa6d5-6a4b-4551-af87-acb87aab870b</entry>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <entry name="uuid">325fa6d5-6a4b-4551-af87-acb87aab870b</entry>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/325fa6d5-6a4b-4551-af87-acb87aab870b_disk">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/325fa6d5-6a4b-4551-af87-acb87aab870b_disk.config">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/console.log" append="off"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:42:25 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:42:25 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:42:25 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:42:25 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.660 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.662 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.663 238945 INFO nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Using config drive#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.718 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:25 np0005597378 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 27 08:42:25 np0005597378 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 13.263s CPU time.
Jan 27 08:42:25 np0005597378 systemd-machined[207425]: Machine qemu-13-instance-0000000c terminated.
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.952 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance shutdown successfully after 13 seconds.#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.960 238945 INFO nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Creating config drive at /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/disk.config#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.967 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjnjatg82 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:25 np0005597378 nova_compute[238941]: 2026-01-27 13:42:25.993 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance destroyed successfully.#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.008 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance destroyed successfully.#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.032 238945 DEBUG nova.compute.manager [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.062 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.098 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjnjatg82" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 191 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.8 MiB/s wr, 204 op/s
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.132 238945 DEBUG nova.storage.rbd_utils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] rbd image 325fa6d5-6a4b-4551-af87-acb87aab870b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.140 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/disk.config 325fa6d5-6a4b-4551-af87-acb87aab870b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.166 238945 INFO nova.compute.manager [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] instance snapshotting#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.306 238945 DEBUG oslo_concurrency.processutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/disk.config 325fa6d5-6a4b-4551-af87-acb87aab870b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.308 238945 INFO nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Deleting local config drive /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b/disk.config because it was imported into RBD.#033[00m
Jan 27 08:42:26 np0005597378 systemd-machined[207425]: New machine qemu-16-instance-0000000f.
Jan 27 08:42:26 np0005597378 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.424 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deleting instance files /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1_del#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.425 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deletion of /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1_del complete#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.483 238945 INFO nova.virt.libvirt.driver [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Beginning live snapshot process#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.682 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.683 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating image(s)#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.708 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.732 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.757 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.762 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.793 238945 DEBUG nova.virt.libvirt.imagebackend [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.824 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.825 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.826 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.827 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.847 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:26 np0005597378 nova_compute[238941]: 2026-01-27 13:42:26.855 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b302d131-0feb-4256-a088-4ee6521b1ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.045 238945 DEBUG nova.storage.rbd_utils [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] creating snapshot(e9295da3f60b4655a3931013a3ed55e5) on rbd image(b54f7f28-d070-4afd-94b1-24775374d89d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.116 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b302d131-0feb-4256-a088-4ee6521b1ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.197 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] resizing rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.304 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.305 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Ensure instance console log exists: /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.305 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.305 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.306 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.307 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.312 238945 WARNING nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.321 238945 DEBUG nova.virt.libvirt.host [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.322 238945 DEBUG nova.virt.libvirt.host [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.330 238945 DEBUG nova.virt.libvirt.host [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.331 238945 DEBUG nova.virt.libvirt.host [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.332 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.332 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.333 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.333 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.333 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.333 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.334 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.334 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.334 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.334 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.334 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.335 238945 DEBUG nova.virt.hardware [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.335 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'vcpu_model' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.356 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012837955400084753 of space, bias 1.0, pg target 0.3851386620025426 quantized to 32 (current 32)
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666779013562499 of space, bias 1.0, pg target 0.20000337040687496 quantized to 32 (current 32)
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.257476971579023e-07 of space, bias 4.0, pg target 0.0009908972365894827 quantized to 16 (current 16)
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:42:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.396 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521347.3750873, 325fa6d5-6a4b-4551-af87-acb87aab870b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.398 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.407 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.408 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.420 238945 INFO nova.virt.libvirt.driver [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Instance spawned successfully.#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.422 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.477 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.487 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.489 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.490 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.492 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.492 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.492 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.493 238945 DEBUG nova.virt.libvirt.driver [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.562 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521332.56112, a724662c-197d-43f2-aa20-c656ae3e4f2f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.562 238945 INFO nova.compute.manager [-] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.568 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.569 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521347.3772151, 325fa6d5-6a4b-4551-af87-acb87aab870b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.569 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] VM Started (Lifecycle Event)#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.605 238945 DEBUG nova.compute.manager [None req-3c19b770-f736-4018-acc1-16f57ba97ea4 - - - - - -] [instance: a724662c-197d-43f2-aa20-c656ae3e4f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.615 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.617 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.627 238945 INFO nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Took 4.23 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.627 238945 DEBUG nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.646 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.684 238945 INFO nova.compute.manager [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Took 5.52 seconds to build instance.#033[00m
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.708 238945 DEBUG oslo_concurrency.lockutils [None req-a02d6991-b8c7-4160-a2cb-6a922542c07b 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "325fa6d5-6a4b-4551-af87-acb87aab870b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:42:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069381629' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:42:27 np0005597378 nova_compute[238941]: 2026-01-27 13:42:27.995 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:28 np0005597378 nova_compute[238941]: 2026-01-27 13:42:28.029 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:28 np0005597378 nova_compute[238941]: 2026-01-27 13:42:28.036 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 27 08:42:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 27 08:42:28 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 27 08:42:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 191 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.8 MiB/s wr, 185 op/s
Jan 27 08:42:28 np0005597378 nova_compute[238941]: 2026-01-27 13:42:28.130 238945 DEBUG nova.storage.rbd_utils [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] cloning vms/b54f7f28-d070-4afd-94b1-24775374d89d_disk@e9295da3f60b4655a3931013a3ed55e5 to images/95762ad0-9658-4d01-9b80-b51a7dc9cf1c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:42:28 np0005597378 nova_compute[238941]: 2026-01-27 13:42:28.234 238945 DEBUG nova.storage.rbd_utils [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] flattening images/95762ad0-9658-4d01-9b80-b51a7dc9cf1c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:42:28 np0005597378 nova_compute[238941]: 2026-01-27 13:42:28.482 238945 DEBUG nova.storage.rbd_utils [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] removing snapshot(e9295da3f60b4655a3931013a3ed55e5) on rbd image(b54f7f28-d070-4afd-94b1-24775374d89d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:42:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:42:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346230149' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:42:28 np0005597378 nova_compute[238941]: 2026-01-27 13:42:28.720 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.684s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:28 np0005597378 nova_compute[238941]: 2026-01-27 13:42:28.722 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  <uuid>b302d131-0feb-4256-a088-4ee6521b1ed1</uuid>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  <name>instance-0000000c</name>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersAdmin275Test-server-1048773693</nova:name>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:42:27</nova:creationTime>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:        <nova:user uuid="367b5fa4b1ea4ac8bc5003a145b7aadb">tempest-ServersAdmin275Test-938318828-project-member</nova:user>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:        <nova:project uuid="f0a8272120624f10ab79ece3c464f817">tempest-ServersAdmin275Test-938318828</nova:project>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <entry name="serial">b302d131-0feb-4256-a088-4ee6521b1ed1</entry>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <entry name="uuid">b302d131-0feb-4256-a088-4ee6521b1ed1</entry>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/console.log" append="off"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:42:28 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:42:28 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:42:28 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:42:28 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:42:28 np0005597378 nova_compute[238941]: 2026-01-27 13:42:28.788 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:42:28 np0005597378 nova_compute[238941]: 2026-01-27 13:42:28.789 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:42:28 np0005597378 nova_compute[238941]: 2026-01-27 13:42:28.789 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Using config drive#033[00m
Jan 27 08:42:28 np0005597378 nova_compute[238941]: 2026-01-27 13:42:28.811 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:28 np0005597378 nova_compute[238941]: 2026-01-27 13:42:28.839 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'ec2_ids' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:28 np0005597378 nova_compute[238941]: 2026-01-27 13:42:28.881 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lazy-loading 'keypairs' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 27 08:42:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 27 08:42:29 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 27 08:42:29 np0005597378 nova_compute[238941]: 2026-01-27 13:42:29.176 238945 DEBUG nova.storage.rbd_utils [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] creating snapshot(snap) on rbd image(95762ad0-9658-4d01-9b80-b51a7dc9cf1c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:42:29 np0005597378 nova_compute[238941]: 2026-01-27 13:42:29.364 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Creating config drive at /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config#033[00m
Jan 27 08:42:29 np0005597378 nova_compute[238941]: 2026-01-27 13:42:29.368 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_gbongo8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:29 np0005597378 nova_compute[238941]: 2026-01-27 13:42:29.470 238945 DEBUG nova.compute.manager [None req-1e672f5a-8762-4a67-b52a-b3dadc0f59e3 e344e7c8471649a492fc0a6d5e9f28ba 77d1369c464c4194933648dc7fb39860 - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:29 np0005597378 nova_compute[238941]: 2026-01-27 13:42:29.476 238945 INFO nova.compute.manager [None req-1e672f5a-8762-4a67-b52a-b3dadc0f59e3 e344e7c8471649a492fc0a6d5e9f28ba 77d1369c464c4194933648dc7fb39860 - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Retrieving diagnostics#033[00m
Jan 27 08:42:29 np0005597378 nova_compute[238941]: 2026-01-27 13:42:29.497 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_gbongo8" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:29 np0005597378 nova_compute[238941]: 2026-01-27 13:42:29.520 238945 DEBUG nova.storage.rbd_utils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] rbd image b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:29 np0005597378 nova_compute[238941]: 2026-01-27 13:42:29.523 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:29 np0005597378 nova_compute[238941]: 2026-01-27 13:42:29.653 238945 DEBUG oslo_concurrency.processutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config b302d131-0feb-4256-a088-4ee6521b1ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:29 np0005597378 nova_compute[238941]: 2026-01-27 13:42:29.654 238945 INFO nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deleting local config drive /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1/disk.config because it was imported into RBD.#033[00m
Jan 27 08:42:29 np0005597378 systemd-machined[207425]: New machine qemu-17-instance-0000000c.
Jan 27 08:42:29 np0005597378 systemd[1]: Started Virtual Machine qemu-17-instance-0000000c.
Jan 27 08:42:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 27 08:42:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 27 08:42:30 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 27 08:42:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 227 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 13 MiB/s wr, 544 op/s
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.381 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for b302d131-0feb-4256-a088-4ee6521b1ed1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.382 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521350.3808842, b302d131-0feb-4256-a088-4ee6521b1ed1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.382 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.384 238945 DEBUG nova.compute.manager [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.385 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.388 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance spawned successfully.#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.388 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.425 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.432 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.436 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.436 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.437 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.437 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.438 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.439 238945 DEBUG nova.virt.libvirt.driver [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.491 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.492 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521350.3817918, b302d131-0feb-4256-a088-4ee6521b1ed1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.492 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Started (Lifecycle Event)#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.534 238945 DEBUG nova.compute.manager [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.536 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.554 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.585 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.611 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.611 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.612 238945 DEBUG nova.objects.instance [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 08:42:30 np0005597378 nova_compute[238941]: 2026-01-27 13:42:30.688 238945 DEBUG oslo_concurrency.lockutils [None req-d045efe1-6d3b-4f26-8ed3-101f26b767aa 7bb9a17daba54d058540fa9b0958f35f 448440bacc3c4b729f625b8f0725188d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:31 np0005597378 nova_compute[238941]: 2026-01-27 13:42:31.066 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:31 np0005597378 nova_compute[238941]: 2026-01-27 13:42:31.635 238945 INFO nova.virt.libvirt.driver [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Snapshot image upload complete#033[00m
Jan 27 08:42:31 np0005597378 nova_compute[238941]: 2026-01-27 13:42:31.636 238945 INFO nova.compute.manager [None req-00f0763e-dcf6-4572-aa2a-0d4665e33cd1 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Took 5.47 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 27 08:42:31 np0005597378 nova_compute[238941]: 2026-01-27 13:42:31.687 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:42:31 np0005597378 nova_compute[238941]: 2026-01-27 13:42:31.872 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "b302d131-0feb-4256-a088-4ee6521b1ed1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:31 np0005597378 nova_compute[238941]: 2026-01-27 13:42:31.873 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "b302d131-0feb-4256-a088-4ee6521b1ed1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:31 np0005597378 nova_compute[238941]: 2026-01-27 13:42:31.874 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "b302d131-0feb-4256-a088-4ee6521b1ed1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:31 np0005597378 nova_compute[238941]: 2026-01-27 13:42:31.874 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "b302d131-0feb-4256-a088-4ee6521b1ed1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:31 np0005597378 nova_compute[238941]: 2026-01-27 13:42:31.874 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "b302d131-0feb-4256-a088-4ee6521b1ed1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:31 np0005597378 nova_compute[238941]: 2026-01-27 13:42:31.876 238945 INFO nova.compute.manager [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Terminating instance#033[00m
Jan 27 08:42:31 np0005597378 nova_compute[238941]: 2026-01-27 13:42:31.877 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "refresh_cache-b302d131-0feb-4256-a088-4ee6521b1ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:42:31 np0005597378 nova_compute[238941]: 2026-01-27 13:42:31.877 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquired lock "refresh_cache-b302d131-0feb-4256-a088-4ee6521b1ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:42:31 np0005597378 nova_compute[238941]: 2026-01-27 13:42:31.877 238945 DEBUG nova.network.neutron [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:42:32 np0005597378 nova_compute[238941]: 2026-01-27 13:42:32.043 238945 DEBUG nova.network.neutron [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:42:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 227 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 9.5 MiB/s rd, 8.9 MiB/s wr, 425 op/s
Jan 27 08:42:32 np0005597378 nova_compute[238941]: 2026-01-27 13:42:32.325 238945 DEBUG nova.network.neutron [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:42:32 np0005597378 nova_compute[238941]: 2026-01-27 13:42:32.338 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Releasing lock "refresh_cache-b302d131-0feb-4256-a088-4ee6521b1ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:42:32 np0005597378 nova_compute[238941]: 2026-01-27 13:42:32.338 238945 DEBUG nova.compute.manager [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:42:32 np0005597378 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 27 08:42:32 np0005597378 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000c.scope: Consumed 2.717s CPU time.
Jan 27 08:42:32 np0005597378 systemd-machined[207425]: Machine qemu-17-instance-0000000c terminated.
Jan 27 08:42:32 np0005597378 nova_compute[238941]: 2026-01-27 13:42:32.555 238945 INFO nova.virt.libvirt.driver [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance destroyed successfully.#033[00m
Jan 27 08:42:32 np0005597378 nova_compute[238941]: 2026-01-27 13:42:32.555 238945 DEBUG nova.objects.instance [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lazy-loading 'resources' on Instance uuid b302d131-0feb-4256-a088-4ee6521b1ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 27 08:42:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 27 08:42:33 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 27 08:42:33 np0005597378 nova_compute[238941]: 2026-01-27 13:42:33.752 238945 INFO nova.virt.libvirt.driver [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deleting instance files /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1_del#033[00m
Jan 27 08:42:33 np0005597378 nova_compute[238941]: 2026-01-27 13:42:33.753 238945 INFO nova.virt.libvirt.driver [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deletion of /var/lib/nova/instances/b302d131-0feb-4256-a088-4ee6521b1ed1_del complete#033[00m
Jan 27 08:42:33 np0005597378 nova_compute[238941]: 2026-01-27 13:42:33.807 238945 INFO nova.compute.manager [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Took 1.47 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:42:33 np0005597378 nova_compute[238941]: 2026-01-27 13:42:33.808 238945 DEBUG oslo.service.loopingcall [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:42:33 np0005597378 nova_compute[238941]: 2026-01-27 13:42:33.808 238945 DEBUG nova.compute.manager [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:42:33 np0005597378 nova_compute[238941]: 2026-01-27 13:42:33.809 238945 DEBUG nova.network.neutron [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:42:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 188 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 8.9 MiB/s wr, 637 op/s
Jan 27 08:42:34 np0005597378 nova_compute[238941]: 2026-01-27 13:42:34.337 238945 DEBUG nova.network.neutron [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:42:34 np0005597378 nova_compute[238941]: 2026-01-27 13:42:34.348 238945 DEBUG nova.network.neutron [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:42:34 np0005597378 nova_compute[238941]: 2026-01-27 13:42:34.362 238945 INFO nova.compute.manager [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Took 0.55 seconds to deallocate network for instance.#033[00m
Jan 27 08:42:34 np0005597378 nova_compute[238941]: 2026-01-27 13:42:34.402 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:34 np0005597378 nova_compute[238941]: 2026-01-27 13:42:34.403 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:34 np0005597378 nova_compute[238941]: 2026-01-27 13:42:34.426 238945 DEBUG nova.compute.manager [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:34 np0005597378 nova_compute[238941]: 2026-01-27 13:42:34.468 238945 INFO nova.compute.manager [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] instance snapshotting#033[00m
Jan 27 08:42:34 np0005597378 nova_compute[238941]: 2026-01-27 13:42:34.480 238945 DEBUG oslo_concurrency.processutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:34 np0005597378 nova_compute[238941]: 2026-01-27 13:42:34.675 238945 INFO nova.virt.libvirt.driver [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Beginning live snapshot process#033[00m
Jan 27 08:42:34 np0005597378 nova_compute[238941]: 2026-01-27 13:42:34.820 238945 DEBUG nova.virt.libvirt.imagebackend [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 08:42:35 np0005597378 nova_compute[238941]: 2026-01-27 13:42:35.020 238945 DEBUG nova.storage.rbd_utils [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] creating snapshot(a073511ce8274fa2b6ed6cda077a516a) on rbd image(b54f7f28-d070-4afd-94b1-24775374d89d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:42:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:42:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/119405160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:42:35 np0005597378 nova_compute[238941]: 2026-01-27 13:42:35.058 238945 DEBUG oslo_concurrency.processutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:35 np0005597378 nova_compute[238941]: 2026-01-27 13:42:35.065 238945 DEBUG nova.compute.provider_tree [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:42:35 np0005597378 nova_compute[238941]: 2026-01-27 13:42:35.214 238945 DEBUG nova.scheduler.client.report [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:42:35 np0005597378 nova_compute[238941]: 2026-01-27 13:42:35.234 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:35 np0005597378 nova_compute[238941]: 2026-01-27 13:42:35.261 238945 INFO nova.scheduler.client.report [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Deleted allocations for instance b302d131-0feb-4256-a088-4ee6521b1ed1#033[00m
Jan 27 08:42:35 np0005597378 nova_compute[238941]: 2026-01-27 13:42:35.314 238945 DEBUG oslo_concurrency.lockutils [None req-fa8a0cf3-dcc5-4978-a756-e7f527887bdb 367b5fa4b1ea4ac8bc5003a145b7aadb f0a8272120624f10ab79ece3c464f817 - - default default] Lock "b302d131-0feb-4256-a088-4ee6521b1ed1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 27 08:42:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 27 08:42:35 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 27 08:42:35 np0005597378 nova_compute[238941]: 2026-01-27 13:42:35.478 238945 DEBUG nova.storage.rbd_utils [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] cloning vms/b54f7f28-d070-4afd-94b1-24775374d89d_disk@a073511ce8274fa2b6ed6cda077a516a to images/bac82c13-fb2a-4fb6-8e0a-0bba773d6598 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:42:35 np0005597378 nova_compute[238941]: 2026-01-27 13:42:35.579 238945 DEBUG nova.storage.rbd_utils [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] flattening images/bac82c13-fb2a-4fb6-8e0a-0bba773d6598 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:42:35 np0005597378 podman[255794]: 2026-01-27 13:42:35.766107907 +0000 UTC m=+0.105584016 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:42:35 np0005597378 nova_compute[238941]: 2026-01-27 13:42:35.867 238945 DEBUG nova.storage.rbd_utils [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] removing snapshot(a073511ce8274fa2b6ed6cda077a516a) on rbd image(b54f7f28-d070-4afd-94b1-24775374d89d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.094 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.095 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.112 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:42:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 174 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.5 KiB/s wr, 247 op/s
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.201 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.202 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.207 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.208 238945 INFO nova.compute.claims [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.324 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:42:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 27 08:42:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 27 08:42:36 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.482 238945 DEBUG nova.storage.rbd_utils [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] creating snapshot(snap) on rbd image(bac82c13-fb2a-4fb6-8e0a-0bba773d6598) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.689 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:42:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 27 08:42:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 27 08:42:36 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 27 08:42:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:42:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/30228094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.938 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.945 238945 DEBUG nova.compute.provider_tree [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.965 238945 DEBUG nova.scheduler.client.report [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.991 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:36 np0005597378 nova_compute[238941]: 2026-01-27 13:42:36.991 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.068 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.069 238945 DEBUG nova.network.neutron [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.093 238945 INFO nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.121 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.213 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.214 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.215 238945 INFO nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Creating image(s)#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.236 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.259 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.278 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.281 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.349 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.349 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.350 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.350 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.369 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.373 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f aa157503-9eb6-44e1-9bdd-2c902a907faf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.396 238945 DEBUG nova.policy [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.400 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.400 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.438 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.438 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.601 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f aa157503-9eb6-44e1-9bdd-2c902a907faf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.663 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] resizing rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.796 238945 DEBUG nova.objects.instance [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'migration_context' on Instance uuid aa157503-9eb6-44e1-9bdd-2c902a907faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.812 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.813 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Ensure instance console log exists: /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.813 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.814 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:37 np0005597378 nova_compute[238941]: 2026-01-27 13:42:37.814 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:37 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:37Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:c6:70 10.100.0.9
Jan 27 08:42:37 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:37Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:c6:70 10.100.0.9
Jan 27 08:42:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 174 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 5.7 KiB/s wr, 314 op/s
Jan 27 08:42:38 np0005597378 nova_compute[238941]: 2026-01-27 13:42:38.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:42:38 np0005597378 nova_compute[238941]: 2026-01-27 13:42:38.388 238945 INFO nova.virt.libvirt.driver [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Snapshot image upload complete#033[00m
Jan 27 08:42:38 np0005597378 nova_compute[238941]: 2026-01-27 13:42:38.389 238945 INFO nova.compute.manager [None req-ab5e2884-751e-43d6-9875-9966d8fa432b 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Took 3.92 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 27 08:42:38 np0005597378 nova_compute[238941]: 2026-01-27 13:42:38.408 238945 DEBUG nova.network.neutron [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Successfully created port: 57b7d200-69e2-4204-8382-ca897741aa3d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:42:38 np0005597378 nova_compute[238941]: 2026-01-27 13:42:38.451 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:38 np0005597378 nova_compute[238941]: 2026-01-27 13:42:38.452 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:38 np0005597378 nova_compute[238941]: 2026-01-27 13:42:38.453 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:38 np0005597378 nova_compute[238941]: 2026-01-27 13:42:38.453 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:42:38 np0005597378 nova_compute[238941]: 2026-01-27 13:42:38.453 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:42:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19222589' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.009 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.093 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.093 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:42:39 np0005597378 podman[256067]: 2026-01-27 13:42:39.095556612 +0000 UTC m=+0.051469453 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.098 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.099 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.307 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.308 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4172MB free_disk=59.94637236464769GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.309 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.309 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.420 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b54f7f28-d070-4afd-94b1-24775374d89d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.420 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 325fa6d5-6a4b-4551-af87-acb87aab870b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.420 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance aa157503-9eb6-44e1-9bdd-2c902a907faf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.421 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.421 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.508 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.542 238945 DEBUG nova.network.neutron [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Successfully updated port: 57b7d200-69e2-4204-8382-ca897741aa3d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.566 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.568 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.568 238945 DEBUG nova.network.neutron [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.706 238945 DEBUG nova.compute.manager [None req-ec0125fc-d8cd-4fc6-b99d-703b89a0a4cd e344e7c8471649a492fc0a6d5e9f28ba 77d1369c464c4194933648dc7fb39860 - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.710 238945 INFO nova.compute.manager [None req-ec0125fc-d8cd-4fc6-b99d-703b89a0a4cd e344e7c8471649a492fc0a6d5e9f28ba 77d1369c464c4194933648dc7fb39860 - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Retrieving diagnostics#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.804 238945 DEBUG nova.network.neutron [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.884 238945 DEBUG nova.compute.manager [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-changed-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.885 238945 DEBUG nova.compute.manager [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Refreshing instance network info cache due to event network-changed-57b7d200-69e2-4204-8382-ca897741aa3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:42:39 np0005597378 nova_compute[238941]: 2026-01-27 13:42:39.885 238945 DEBUG oslo_concurrency.lockutils [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:42:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:42:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:42:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:42:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:42:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:42:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:42:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:42:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:42:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:42:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:42:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:42:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.076 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "325fa6d5-6a4b-4551-af87-acb87aab870b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.076 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "325fa6d5-6a4b-4551-af87-acb87aab870b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.077 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "325fa6d5-6a4b-4551-af87-acb87aab870b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.077 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "325fa6d5-6a4b-4551-af87-acb87aab870b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.077 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "325fa6d5-6a4b-4551-af87-acb87aab870b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.079 238945 INFO nova.compute.manager [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Terminating instance#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.080 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "refresh_cache-325fa6d5-6a4b-4551-af87-acb87aab870b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.080 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquired lock "refresh_cache-325fa6d5-6a4b-4551-af87-acb87aab870b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.080 238945 DEBUG nova.network.neutron [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:42:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 260 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 12 MiB/s wr, 418 op/s
Jan 27 08:42:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:42:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2655747521' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.234 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.243 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.260 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.267 238945 DEBUG nova.network.neutron [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.302 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.303 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:40 np0005597378 podman[256249]: 2026-01-27 13:42:40.453518499 +0000 UTC m=+0.044806543 container create 3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:42:40 np0005597378 systemd[1]: Started libpod-conmon-3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b.scope.
Jan 27 08:42:40 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:42:40 np0005597378 podman[256249]: 2026-01-27 13:42:40.434878475 +0000 UTC m=+0.026166539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:42:40 np0005597378 podman[256249]: 2026-01-27 13:42:40.546219006 +0000 UTC m=+0.137507070 container init 3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:42:40 np0005597378 podman[256249]: 2026-01-27 13:42:40.557202263 +0000 UTC m=+0.148490307 container start 3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.560 238945 DEBUG nova.network.neutron [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:42:40 np0005597378 podman[256249]: 2026-01-27 13:42:40.562233609 +0000 UTC m=+0.153521673 container attach 3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 08:42:40 np0005597378 jovial_ardinghelli[256264]: 167 167
Jan 27 08:42:40 np0005597378 systemd[1]: libpod-3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b.scope: Deactivated successfully.
Jan 27 08:42:40 np0005597378 podman[256249]: 2026-01-27 13:42:40.563209385 +0000 UTC m=+0.154497429 container died 3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:42:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cee8c7442c04b971d2e9d2e6e2601fecc06419e11602e1ad3368edc396cbb050-merged.mount: Deactivated successfully.
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.606 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Releasing lock "refresh_cache-325fa6d5-6a4b-4551-af87-acb87aab870b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.608 238945 DEBUG nova.compute.manager [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:42:40 np0005597378 podman[256249]: 2026-01-27 13:42:40.615617333 +0000 UTC m=+0.206905377 container remove 3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 08:42:40 np0005597378 systemd[1]: libpod-conmon-3b6b521c42fc5bf11da9c776bb0a946e011827da940386d6bdb19256222a374b.scope: Deactivated successfully.
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.631 238945 DEBUG nova.network.neutron [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:42:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:42:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:42:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:42:40 np0005597378 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 27 08:42:40 np0005597378 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 12.823s CPU time.
Jan 27 08:42:40 np0005597378 systemd-machined[207425]: Machine qemu-16-instance-0000000f terminated.
Jan 27 08:42:40 np0005597378 podman[256288]: 2026-01-27 13:42:40.815858028 +0000 UTC m=+0.050287001 container create bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.835 238945 INFO nova.virt.libvirt.driver [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Instance destroyed successfully.#033[00m
Jan 27 08:42:40 np0005597378 nova_compute[238941]: 2026-01-27 13:42:40.836 238945 DEBUG nova.objects.instance [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lazy-loading 'resources' on Instance uuid 325fa6d5-6a4b-4551-af87-acb87aab870b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:40 np0005597378 systemd[1]: Started libpod-conmon-bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1.scope.
Jan 27 08:42:40 np0005597378 podman[256288]: 2026-01-27 13:42:40.796419643 +0000 UTC m=+0.030848636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:42:40 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:42:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61599af33c8bcdbc1de3f4609fff5383568bb0a792d2d89b226718f6716334a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61599af33c8bcdbc1de3f4609fff5383568bb0a792d2d89b226718f6716334a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61599af33c8bcdbc1de3f4609fff5383568bb0a792d2d89b226718f6716334a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61599af33c8bcdbc1de3f4609fff5383568bb0a792d2d89b226718f6716334a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61599af33c8bcdbc1de3f4609fff5383568bb0a792d2d89b226718f6716334a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:40 np0005597378 podman[256288]: 2026-01-27 13:42:40.914952909 +0000 UTC m=+0.149381902 container init bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cannon, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:42:40 np0005597378 podman[256288]: 2026-01-27 13:42:40.923263974 +0000 UTC m=+0.157692947 container start bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 08:42:40 np0005597378 podman[256288]: 2026-01-27 13:42:40.926793788 +0000 UTC m=+0.161222761 container attach bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cannon, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:42:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 27 08:42:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 27 08:42:41 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.072 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.150 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.151 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Instance network_info: |[{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.151 238945 DEBUG oslo_concurrency.lockutils [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.152 238945 DEBUG nova.network.neutron [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Refreshing network info cache for port 57b7d200-69e2-4204-8382-ca897741aa3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.156 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Start _get_guest_xml network_info=[{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.162 238945 WARNING nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.168 238945 DEBUG nova.virt.libvirt.host [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.170 238945 DEBUG nova.virt.libvirt.host [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.180 238945 DEBUG nova.virt.libvirt.host [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.181 238945 DEBUG nova.virt.libvirt.host [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.182 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.182 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.183 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.183 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.183 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.184 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.184 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.184 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.185 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.185 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.185 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.186 238945 DEBUG nova.virt.hardware [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.188 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.296 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.298 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.298 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.298 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:42:41 np0005597378 charming_cannon[256307]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:42:41 np0005597378 charming_cannon[256307]: --> All data devices are unavailable
Jan 27 08:42:41 np0005597378 systemd[1]: libpod-bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1.scope: Deactivated successfully.
Jan 27 08:42:41 np0005597378 podman[256288]: 2026-01-27 13:42:41.496081905 +0000 UTC m=+0.730510908 container died bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.533 238945 INFO nova.virt.libvirt.driver [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Deleting instance files /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b_del#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.535 238945 INFO nova.virt.libvirt.driver [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Deletion of /var/lib/nova/instances/325fa6d5-6a4b-4551-af87-acb87aab870b_del complete#033[00m
Jan 27 08:42:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-61599af33c8bcdbc1de3f4609fff5383568bb0a792d2d89b226718f6716334a1-merged.mount: Deactivated successfully.
Jan 27 08:42:41 np0005597378 podman[256288]: 2026-01-27 13:42:41.569727797 +0000 UTC m=+0.804156770 container remove bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:42:41 np0005597378 systemd[1]: libpod-conmon-bb15782d031f0decfcc94f016c408a4356e3e36e8839ab4e538ec7ba841cf6f1.scope: Deactivated successfully.
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.604 238945 INFO nova.compute.manager [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.605 238945 DEBUG oslo.service.loopingcall [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.605 238945 DEBUG nova.compute.manager [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.605 238945 DEBUG nova.network.neutron [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.691 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:42:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 27 08:42:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 27 08:42:41 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.771 238945 DEBUG nova.network.neutron [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.788 238945 DEBUG nova.network.neutron [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.804 238945 INFO nova.compute.manager [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Took 0.20 seconds to deallocate network for instance.#033[00m
Jan 27 08:42:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:42:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2099354900' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.830 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.861 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.865 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.891 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.891 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.912 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.912 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.912 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.913 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.913 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.914 238945 INFO nova.compute.manager [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Terminating instance#033[00m
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.915 238945 DEBUG nova.compute.manager [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:42:41 np0005597378 kernel: tap640931bb-62 (unregistering): left promiscuous mode
Jan 27 08:42:41 np0005597378 NetworkManager[48904]: <info>  [1769521361.9800] device (tap640931bb-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:42:41 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:41Z|00067|binding|INFO|Releasing lport 640931bb-6240-4b85-a02c-96b1f07c8170 from this chassis (sb_readonly=0)
Jan 27 08:42:41 np0005597378 nova_compute[238941]: 2026-01-27 13:42:41.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:41 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:41Z|00068|binding|INFO|Setting lport 640931bb-6240-4b85-a02c-96b1f07c8170 down in Southbound
Jan 27 08:42:41 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:41Z|00069|binding|INFO|Removing iface tap640931bb-62 ovn-installed in OVS
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.000 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:c6:70 10.100.0.9'], port_security=['fa:16:3e:69:c6:70 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b54f7f28-d070-4afd-94b1-24775374d89d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de3e13e4602c4c9c8503b1baaa962908', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e59b44be-45bb-4a6f-8bba-7e255b92edf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fecd366-5b93-42cf-acb9-74d482fd8eca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=640931bb-6240-4b85-a02c-96b1f07c8170) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.002 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 640931bb-6240-4b85-a02c-96b1f07c8170 in datapath c9a30bdb-f010-4449-afb6-cf95c4e85fbd unbound from our chassis#033[00m
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.004 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c9a30bdb-f010-4449-afb6-cf95c4e85fbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.006 238945 DEBUG oslo_concurrency.processutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.006 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1662f0-3139-4b83-8f50-15d95aead3c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.007 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd namespace which is not needed anymore#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.034 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:42 np0005597378 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 27 08:42:42 np0005597378 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 14.774s CPU time.
Jan 27 08:42:42 np0005597378 systemd-machined[207425]: Machine qemu-15-instance-0000000e terminated.
Jan 27 08:42:42 np0005597378 podman[256494]: 2026-01-27 13:42:42.123846253 +0000 UTC m=+0.059081559 container create 478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:42:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 260 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 13 MiB/s wr, 403 op/s
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.137 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.152 238945 INFO nova.virt.libvirt.driver [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Instance destroyed successfully.#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.153 238945 DEBUG nova.objects.instance [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lazy-loading 'resources' on Instance uuid b54f7f28-d070-4afd-94b1-24775374d89d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.171 238945 DEBUG nova.virt.libvirt.vif [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:42:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-387471087',display_name='tempest-ImagesOneServerTestJSON-server-387471087',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-387471087',id=14,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:42:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de3e13e4602c4c9c8503b1baaa962908',ramdisk_id='',reservation_id='r-vljnyb10',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-1846368836',owner_user_name='tempest-ImagesOneServerTestJSON-1846368836-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:42:38Z,user_data=None,user_id='01b0e341f4e9495f8d9fe42a148123f6',uuid=b54f7f28-d070-4afd-94b1-24775374d89d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.172 238945 DEBUG nova.network.os_vif_util [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Converting VIF {"id": "640931bb-6240-4b85-a02c-96b1f07c8170", "address": "fa:16:3e:69:c6:70", "network": {"id": "c9a30bdb-f010-4449-afb6-cf95c4e85fbd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-389153712-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de3e13e4602c4c9c8503b1baaa962908", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap640931bb-62", "ovs_interfaceid": "640931bb-6240-4b85-a02c-96b1f07c8170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.173 238945 DEBUG nova.network.os_vif_util [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.174 238945 DEBUG os_vif [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.176 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.176 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap640931bb-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.178 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.179 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.182 238945 INFO os_vif [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c6:70,bridge_name='br-int',has_traffic_filtering=True,id=640931bb-6240-4b85-a02c-96b1f07c8170,network=Network(c9a30bdb-f010-4449-afb6-cf95c4e85fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap640931bb-62')#033[00m
Jan 27 08:42:42 np0005597378 systemd[1]: Started libpod-conmon-478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787.scope.
Jan 27 08:42:42 np0005597378 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [NOTICE]   (254782) : haproxy version is 2.8.14-c23fe91
Jan 27 08:42:42 np0005597378 podman[256494]: 2026-01-27 13:42:42.096962246 +0000 UTC m=+0.032197552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:42:42 np0005597378 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [NOTICE]   (254782) : path to executable is /usr/sbin/haproxy
Jan 27 08:42:42 np0005597378 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [WARNING]  (254782) : Exiting Master process...
Jan 27 08:42:42 np0005597378 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [WARNING]  (254782) : Exiting Master process...
Jan 27 08:42:42 np0005597378 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [ALERT]    (254782) : Current worker (254784) exited with code 143 (Terminated)
Jan 27 08:42:42 np0005597378 neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd[254778]: [WARNING]  (254782) : All workers exited. Exiting... (0)
Jan 27 08:42:42 np0005597378 systemd[1]: libpod-9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777.scope: Deactivated successfully.
Jan 27 08:42:42 np0005597378 conmon[254778]: conmon 9f46a5cf27e2d83da514 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777.scope/container/memory.events
Jan 27 08:42:42 np0005597378 podman[256511]: 2026-01-27 13:42:42.20398652 +0000 UTC m=+0.087249200 container died 9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:42:42 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:42:42 np0005597378 podman[256494]: 2026-01-27 13:42:42.241022512 +0000 UTC m=+0.176257848 container init 478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:42:42 np0005597378 podman[256494]: 2026-01-27 13:42:42.249062579 +0000 UTC m=+0.184297885 container start 478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_varahamihira, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 08:42:42 np0005597378 peaceful_varahamihira[256552]: 167 167
Jan 27 08:42:42 np0005597378 systemd[1]: libpod-478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787.scope: Deactivated successfully.
Jan 27 08:42:42 np0005597378 podman[256494]: 2026-01-27 13:42:42.264009013 +0000 UTC m=+0.199244369 container attach 478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:42:42 np0005597378 podman[256494]: 2026-01-27 13:42:42.266264875 +0000 UTC m=+0.201500181 container died 478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 08:42:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777-userdata-shm.mount: Deactivated successfully.
Jan 27 08:42:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay-57033dec845061747035ac4a99f15e0bcc1a0301768f1cd2ebbba68ed50504b1-merged.mount: Deactivated successfully.
Jan 27 08:42:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f8b4b0b1f679f0c0f60e4640e54b67ccf5cebf18786727a60e47553d5d51b4a5-merged.mount: Deactivated successfully.
Jan 27 08:42:42 np0005597378 podman[256511]: 2026-01-27 13:42:42.335174528 +0000 UTC m=+0.218437358 container cleanup 9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 27 08:42:42 np0005597378 systemd[1]: libpod-conmon-9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777.scope: Deactivated successfully.
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.370 238945 DEBUG nova.compute.manager [req-15335a52-f262-4ba3-97f9-33c9fc622543 req-ac86cc7a-f7b8-4618-a704-c42d69fe4dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-vif-unplugged-640931bb-6240-4b85-a02c-96b1f07c8170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.372 238945 DEBUG oslo_concurrency.lockutils [req-15335a52-f262-4ba3-97f9-33c9fc622543 req-ac86cc7a-f7b8-4618-a704-c42d69fe4dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.372 238945 DEBUG oslo_concurrency.lockutils [req-15335a52-f262-4ba3-97f9-33c9fc622543 req-ac86cc7a-f7b8-4618-a704-c42d69fe4dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:42 np0005597378 podman[256494]: 2026-01-27 13:42:42.372038865 +0000 UTC m=+0.307274171 container remove 478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_varahamihira, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.372 238945 DEBUG oslo_concurrency.lockutils [req-15335a52-f262-4ba3-97f9-33c9fc622543 req-ac86cc7a-f7b8-4618-a704-c42d69fe4dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.373 238945 DEBUG nova.compute.manager [req-15335a52-f262-4ba3-97f9-33c9fc622543 req-ac86cc7a-f7b8-4618-a704-c42d69fe4dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] No waiting events found dispatching network-vif-unplugged-640931bb-6240-4b85-a02c-96b1f07c8170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.373 238945 DEBUG nova.compute.manager [req-15335a52-f262-4ba3-97f9-33c9fc622543 req-ac86cc7a-f7b8-4618-a704-c42d69fe4dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-vif-unplugged-640931bb-6240-4b85-a02c-96b1f07c8170 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:42:42 np0005597378 systemd[1]: libpod-conmon-478e7f589a191128a46ec8e616cfc11ebfcb880957c7630eb9d44803ccc47787.scope: Deactivated successfully.
Jan 27 08:42:42 np0005597378 podman[256602]: 2026-01-27 13:42:42.439178571 +0000 UTC m=+0.075067581 container remove 9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.457 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b7697a52-233b-42cd-ad67-74ee2dd22898]: (4, ('Tue Jan 27 01:42:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd (9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777)\n9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777\nTue Jan 27 01:42:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd (9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777)\n9f46a5cf27e2d83da514d5ed51e062f9e98632fb30013e396697c1c4347d7777\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.460 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c740541e-3a33-475e-aad1-cbd7f5ce9ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.461 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9a30bdb-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.463 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:42 np0005597378 kernel: tapc9a30bdb-f0: left promiscuous mode
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.478 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.482 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[16f5c533-d5f7-48d2-8e43-a2a30e7b8dd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.497 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.498 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[14786dd2-f955-4f32-af8d-a74fc9250349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.499 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d99575-44df-4b13-af81-b81f077b9fbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.500 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:42:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/460917957' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.520 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[efeb10e6-59a9-4097-90c5-adc3b53c1729]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398211, 'reachable_time': 29805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256626, 'error': None, 'target': 'ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.523 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c9a30bdb-f010-4449-afb6-cf95c4e85fbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.523 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[90d2b1f8-88b5-47c3-b4cf-82008eeed4f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:42.524 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.532 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.533 238945 DEBUG nova.virt.libvirt.vif [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:42:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.534 238945 DEBUG nova.network.os_vif_util [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.534 238945 DEBUG nova.network.os_vif_util [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.536 238945 DEBUG nova.objects.instance [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_devices' on Instance uuid aa157503-9eb6-44e1-9bdd-2c902a907faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:42:42 np0005597378 systemd[1]: run-netns-ovnmeta\x2dc9a30bdb\x2df010\x2d4449\x2dafb6\x2dcf95c4e85fbd.mount: Deactivated successfully.
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.556 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  <uuid>aa157503-9eb6-44e1-9bdd-2c902a907faf</uuid>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  <name>instance-00000010</name>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1323128172</nova:name>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:42:41</nova:creationTime>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:        <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:        <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:        <nova:port uuid="57b7d200-69e2-4204-8382-ca897741aa3d">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <entry name="serial">aa157503-9eb6-44e1-9bdd-2c902a907faf</entry>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <entry name="uuid">aa157503-9eb6-44e1-9bdd-2c902a907faf</entry>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/aa157503-9eb6-44e1-9bdd-2c902a907faf_disk">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:10:94:20"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <target dev="tap57b7d200-69"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/console.log" append="off"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:42:42 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:42:42 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:42:42 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:42:42 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.558 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Preparing to wait for external event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.558 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.558 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.558 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.559 238945 DEBUG nova.virt.libvirt.vif [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:42:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.560 238945 DEBUG nova.network.os_vif_util [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.561 238945 DEBUG nova.network.os_vif_util [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.562 238945 DEBUG os_vif [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.563 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.563 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.565 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.565 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57b7d200-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.565 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap57b7d200-69, col_values=(('external_ids', {'iface-id': '57b7d200-69e2-4204-8382-ca897741aa3d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:94:20', 'vm-uuid': 'aa157503-9eb6-44e1-9bdd-2c902a907faf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:42 np0005597378 NetworkManager[48904]: <info>  [1769521362.5681] manager: (tap57b7d200-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.574 238945 INFO os_vif [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69')#033[00m
Jan 27 08:42:42 np0005597378 podman[256627]: 2026-01-27 13:42:42.577231505 +0000 UTC m=+0.052435690 container create dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.618 238945 DEBUG nova.network.neutron [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updated VIF entry in instance network info cache for port 57b7d200-69e2-4204-8382-ca897741aa3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.619 238945 DEBUG nova.network.neutron [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:42:42 np0005597378 systemd[1]: Started libpod-conmon-dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894.scope.
Jan 27 08:42:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:42:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236835606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.637 238945 DEBUG oslo_concurrency.lockutils [req-323f8039-239a-4a29-bac0-3f2c3c9defe0 req-de112dc8-ddda-464a-8137-b5fb022cacf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.644 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.644 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.644 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:10:94:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.645 238945 INFO nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Using config drive#033[00m
Jan 27 08:42:42 np0005597378 podman[256627]: 2026-01-27 13:42:42.550629786 +0000 UTC m=+0.025833991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:42:42 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:42:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea0dcb80da797e709f3e46fb90bf1a24763642522e73c9575d3b8d70228ae24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea0dcb80da797e709f3e46fb90bf1a24763642522e73c9575d3b8d70228ae24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea0dcb80da797e709f3e46fb90bf1a24763642522e73c9575d3b8d70228ae24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea0dcb80da797e709f3e46fb90bf1a24763642522e73c9575d3b8d70228ae24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:42 np0005597378 podman[256627]: 2026-01-27 13:42:42.662230734 +0000 UTC m=+0.137434929 container init dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:42:42 np0005597378 podman[256627]: 2026-01-27 13:42:42.671671119 +0000 UTC m=+0.146875294 container start dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 27 08:42:42 np0005597378 podman[256627]: 2026-01-27 13:42:42.675971585 +0000 UTC m=+0.151175780 container attach dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.676 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.686 238945 DEBUG oslo_concurrency.processutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.693 238945 DEBUG nova.compute.provider_tree [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.721 238945 DEBUG nova.scheduler.client.report [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.744 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.761 238945 INFO nova.virt.libvirt.driver [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Deleting instance files /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d_del
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.762 238945 INFO nova.virt.libvirt.driver [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Deletion of /var/lib/nova/instances/b54f7f28-d070-4afd-94b1-24775374d89d_del complete
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.781 238945 INFO nova.scheduler.client.report [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Deleted allocations for instance 325fa6d5-6a4b-4551-af87-acb87aab870b
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.869 238945 INFO nova.compute.manager [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Took 0.95 seconds to destroy the instance on the hypervisor.
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.870 238945 DEBUG oslo.service.loopingcall [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.871 238945 DEBUG nova.compute.manager [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.871 238945 DEBUG nova.network.neutron [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 08:42:42 np0005597378 nova_compute[238941]: 2026-01-27 13:42:42.877 238945 DEBUG oslo_concurrency.lockutils [None req-5d771e0d-164a-4bb5-891b-cd925ad149f2 5d9966a09e6941468045cfd8d4e0fffb d9a18c1ebae446cd91ddc4d0a42ebbfc - - default default] Lock "325fa6d5-6a4b-4551-af87-acb87aab870b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]: {
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:    "0": [
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:        {
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "devices": [
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "/dev/loop3"
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            ],
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_name": "ceph_lv0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_size": "21470642176",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "name": "ceph_lv0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "tags": {
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.cluster_name": "ceph",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.crush_device_class": "",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.encrypted": "0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.objectstore": "bluestore",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.osd_id": "0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.type": "block",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.vdo": "0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.with_tpm": "0"
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            },
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "type": "block",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "vg_name": "ceph_vg0"
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:        }
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:    ],
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:    "1": [
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:        {
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "devices": [
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "/dev/loop4"
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            ],
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_name": "ceph_lv1",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_size": "21470642176",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "name": "ceph_lv1",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "tags": {
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.cluster_name": "ceph",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.crush_device_class": "",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.encrypted": "0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.objectstore": "bluestore",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.osd_id": "1",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.type": "block",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.vdo": "0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.with_tpm": "0"
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            },
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "type": "block",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "vg_name": "ceph_vg1"
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:        }
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:    ],
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:    "2": [
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:        {
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "devices": [
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "/dev/loop5"
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            ],
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_name": "ceph_lv2",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_size": "21470642176",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "name": "ceph_lv2",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "tags": {
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.cluster_name": "ceph",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.crush_device_class": "",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.encrypted": "0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.objectstore": "bluestore",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.osd_id": "2",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.type": "block",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.vdo": "0",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:                "ceph.with_tpm": "0"
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            },
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "type": "block",
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:            "vg_name": "ceph_vg2"
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:        }
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]:    ]
Jan 27 08:42:42 np0005597378 suspicious_ganguly[256646]: }
Jan 27 08:42:42 np0005597378 systemd[1]: libpod-dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894.scope: Deactivated successfully.
Jan 27 08:42:42 np0005597378 podman[256627]: 2026-01-27 13:42:42.990613865 +0000 UTC m=+0.465818040 container died dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 08:42:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-dea0dcb80da797e709f3e46fb90bf1a24763642522e73c9575d3b8d70228ae24-merged.mount: Deactivated successfully.
Jan 27 08:42:43 np0005597378 podman[256627]: 2026-01-27 13:42:43.128589427 +0000 UTC m=+0.603793602 container remove dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:42:43 np0005597378 systemd[1]: libpod-conmon-dcb23bc008ec7823da3e803e3cff37e763c523cfc0ea1b9483b200e6ab9c2894.scope: Deactivated successfully.
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.204 238945 INFO nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Creating config drive at /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/disk.config
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.211 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ozq4dxh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.344 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ozq4dxh" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.375 238945 DEBUG nova.storage.rbd_utils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.381 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/disk.config aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.410 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.436 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.631 238945 DEBUG nova.network.neutron [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:42:43 np0005597378 podman[256791]: 2026-01-27 13:42:43.633892823 +0000 UTC m=+0.060022895 container create 72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.649 238945 INFO nova.compute.manager [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Took 0.78 seconds to deallocate network for instance.
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.653 238945 DEBUG oslo_concurrency.processutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/disk.config aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.654 238945 INFO nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Deleting local config drive /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/disk.config because it was imported into RBD.
Jan 27 08:42:43 np0005597378 systemd[1]: Started libpod-conmon-72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079.scope.
Jan 27 08:42:43 np0005597378 podman[256791]: 2026-01-27 13:42:43.599635666 +0000 UTC m=+0.025765768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.698 238945 DEBUG nova.compute.manager [req-21a66b4b-a9cb-4953-a43e-bd695093e14f req-2011c844-b8e1-41ec-9964-56eb33148213 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-vif-deleted-640931bb-6240-4b85-a02c-96b1f07c8170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.700 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.700 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:42:43 np0005597378 kernel: tap57b7d200-69: entered promiscuous mode
Jan 27 08:42:43 np0005597378 NetworkManager[48904]: <info>  [1769521363.7180] manager: (tap57b7d200-69): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Jan 27 08:42:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:43Z|00070|binding|INFO|Claiming lport 57b7d200-69e2-4204-8382-ca897741aa3d for this chassis.
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:42:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:43Z|00071|binding|INFO|57b7d200-69e2-4204-8382-ca897741aa3d: Claiming fa:16:3e:10:94:20 10.100.0.6
Jan 27 08:42:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.732 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:94:20 10.100.0.6'], port_security=['fa:16:3e:10:94:20 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'aa157503-9eb6-44e1-9bdd-2c902a907faf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '444bd80b-15fe-4cfe-971b-457370ed22f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=57b7d200-69e2-4204-8382-ca897741aa3d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.733 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 57b7d200-69e2-4204-8382-ca897741aa3d in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.734 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 08:42:43 np0005597378 podman[256791]: 2026-01-27 13:42:43.745619614 +0000 UTC m=+0.171749696 container init 72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.750 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[806495f6-bc2e-46ce-b975-74261666e1de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.752 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee180809-31 in ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 08:42:43 np0005597378 podman[256791]: 2026-01-27 13:42:43.755863231 +0000 UTC m=+0.181993303 container start 72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shannon, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.755 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee180809-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.756 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a95edbc4-c64c-48ed-9ec6-8aa8225a6ea4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.757 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[301ca98c-ae5c-48e1-bbd5-7e798f92444b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:42:43 np0005597378 systemd-machined[207425]: New machine qemu-18-instance-00000010.
Jan 27 08:42:43 np0005597378 upbeat_shannon[256811]: 167 167
Jan 27 08:42:43 np0005597378 podman[256791]: 2026-01-27 13:42:43.762744828 +0000 UTC m=+0.188874920 container attach 72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:42:43 np0005597378 podman[256791]: 2026-01-27 13:42:43.76396289 +0000 UTC m=+0.190092962 container died 72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shannon, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.771 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[df0c961a-1cb2-43d8-89d4-a94ca987cf81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:42:43 np0005597378 systemd[1]: Started Virtual Machine qemu-18-instance-00000010.
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.776 238945 DEBUG oslo_concurrency.processutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:42:43 np0005597378 systemd[1]: libpod-72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079.scope: Deactivated successfully.
Jan 27 08:42:43 np0005597378 systemd-udevd[256834]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:42:43 np0005597378 NetworkManager[48904]: <info>  [1769521363.8021] device (tap57b7d200-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:42:43 np0005597378 NetworkManager[48904]: <info>  [1769521363.8027] device (tap57b7d200-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.805 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7c66fb26-b5c7-44e3-8f93-8fc9b17c1ff6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:42:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:43Z|00072|binding|INFO|Setting lport 57b7d200-69e2-4204-8382-ca897741aa3d ovn-installed in OVS
Jan 27 08:42:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:43Z|00073|binding|INFO|Setting lport 57b7d200-69e2-4204-8382-ca897741aa3d up in Southbound
Jan 27 08:42:43 np0005597378 nova_compute[238941]: 2026-01-27 13:42:43.811 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.836 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cf43dffd-0cbb-41b9-b52c-8f522c5cfdb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.844 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2eaaaffa-4a9e-4d7b-9199-60c54d9e4528]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:43 np0005597378 NetworkManager[48904]: <info>  [1769521363.8460] manager: (tapee180809-30): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.890 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[234510ed-fada-49fb-b644-e736f02a757d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-962ac010cf41a089cc6cc61c25d5105d9cc5fadc28f9256cc111d3c577780ebd-merged.mount: Deactivated successfully.
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.896 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[17486c36-767f-49d1-ae7a-90f0ed4792b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:43 np0005597378 NetworkManager[48904]: <info>  [1769521363.9257] device (tapee180809-30): carrier: link connected
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.931 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e8391220-4233-4893-866f-2c6a1304cc3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:43 np0005597378 podman[256791]: 2026-01-27 13:42:43.949159049 +0000 UTC m=+0.375289121 container remove 72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shannon, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.955 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66b3bf28-f025-4a97-b27f-00beb31daa60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400354, 'reachable_time': 39003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256891, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.975 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3fe9a2-6e3d-4bf9-8980-6f1abc2e578f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:c077'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400354, 'tstamp': 400354}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256893, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:43.997 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fdad5bd1-edf8-49c7-8dc1-6970165595f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400354, 'reachable_time': 39003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256895, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:44 np0005597378 systemd[1]: libpod-conmon-72ca1a34a3941152e67e0338629216dd3b12922f3ce40b9ea231c52249c66079.scope: Deactivated successfully.
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.037 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66f7d32e-9ea4-4ca0-b42c-4728cebe5f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.113 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5943c4aa-f8b3-4542-a120-896e3e8bd502]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.115 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.115 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.117 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.118 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:44 np0005597378 NetworkManager[48904]: <info>  [1769521364.1195] manager: (tapee180809-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Jan 27 08:42:44 np0005597378 kernel: tapee180809-30: entered promiscuous mode
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.126 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:44 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:44Z|00074|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.129 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.131 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.132 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e448d486-0133-4295-b2a7-8d49a7844012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.133 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:42:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:44.134 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'env', 'PROCESS_TAG=haproxy-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:42:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 147 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 12 MiB/s wr, 398 op/s
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:44 np0005597378 podman[256922]: 2026-01-27 13:42:44.208930444 +0000 UTC m=+0.076388076 container create 746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 08:42:44 np0005597378 podman[256922]: 2026-01-27 13:42:44.159516948 +0000 UTC m=+0.026974610 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:42:44 np0005597378 systemd[1]: Started libpod-conmon-746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44.scope.
Jan 27 08:42:44 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:42:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69980677a9111751056362fca54ace534965b4932577cd03670bb75bb410ba8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69980677a9111751056362fca54ace534965b4932577cd03670bb75bb410ba8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69980677a9111751056362fca54ace534965b4932577cd03670bb75bb410ba8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69980677a9111751056362fca54ace534965b4932577cd03670bb75bb410ba8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:44 np0005597378 podman[256922]: 2026-01-27 13:42:44.310357888 +0000 UTC m=+0.177815550 container init 746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:42:44 np0005597378 podman[256922]: 2026-01-27 13:42:44.321432557 +0000 UTC m=+0.188890189 container start 746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.378 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521364.3777702, aa157503-9eb6-44e1-9bdd-2c902a907faf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.379 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] VM Started (Lifecycle Event)#033[00m
Jan 27 08:42:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:42:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019319363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.429 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.433 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521364.3779743, aa157503-9eb6-44e1-9bdd-2c902a907faf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.433 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.455 238945 DEBUG oslo_concurrency.processutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.461 238945 DEBUG nova.compute.provider_tree [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:42:44 np0005597378 podman[256922]: 2026-01-27 13:42:44.493739377 +0000 UTC m=+0.361197009 container attach 746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_noether, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 08:42:44 np0005597378 podman[257008]: 2026-01-27 13:42:44.591696687 +0000 UTC m=+0.030065755 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.713 238945 DEBUG nova.compute.manager [req-b613bf18-dce9-4b78-85d6-37f96e6974ba req-2feae6cc-fa08-406c-ab57-19b75cb35ac1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.714 238945 DEBUG oslo_concurrency.lockutils [req-b613bf18-dce9-4b78-85d6-37f96e6974ba req-2feae6cc-fa08-406c-ab57-19b75cb35ac1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.714 238945 DEBUG oslo_concurrency.lockutils [req-b613bf18-dce9-4b78-85d6-37f96e6974ba req-2feae6cc-fa08-406c-ab57-19b75cb35ac1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.714 238945 DEBUG oslo_concurrency.lockutils [req-b613bf18-dce9-4b78-85d6-37f96e6974ba req-2feae6cc-fa08-406c-ab57-19b75cb35ac1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.714 238945 DEBUG nova.compute.manager [req-b613bf18-dce9-4b78-85d6-37f96e6974ba req-2feae6cc-fa08-406c-ab57-19b75cb35ac1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] No waiting events found dispatching network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.715 238945 WARNING nova.compute.manager [req-b613bf18-dce9-4b78-85d6-37f96e6974ba req-2feae6cc-fa08-406c-ab57-19b75cb35ac1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Received unexpected event network-vif-plugged-640931bb-6240-4b85-a02c-96b1f07c8170 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.723 238945 DEBUG nova.scheduler.client.report [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.727 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.730 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.746 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.751 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.771 238945 INFO nova.scheduler.client.report [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Deleted allocations for instance b54f7f28-d070-4afd-94b1-24775374d89d#033[00m
Jan 27 08:42:44 np0005597378 nova_compute[238941]: 2026-01-27 13:42:44.830 238945 DEBUG oslo_concurrency.lockutils [None req-a593fe18-f52e-46cd-bfdf-ed725b5ccd76 01b0e341f4e9495f8d9fe42a148123f6 de3e13e4602c4c9c8503b1baaa962908 - - default default] Lock "b54f7f28-d070-4afd-94b1-24775374d89d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:44 np0005597378 podman[257008]: 2026-01-27 13:42:44.968627531 +0000 UTC m=+0.406996599 container create d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 08:42:45 np0005597378 systemd[1]: Started libpod-conmon-d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0.scope.
Jan 27 08:42:45 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:42:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96914522e435965eb97e058d8e426ded3cff39d5505b4a5a3ed36b244f04c669/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:42:45 np0005597378 lvm[257086]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:42:45 np0005597378 lvm[257086]: VG ceph_vg0 finished
Jan 27 08:42:45 np0005597378 lvm[257088]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:42:45 np0005597378 lvm[257088]: VG ceph_vg1 finished
Jan 27 08:42:45 np0005597378 podman[257008]: 2026-01-27 13:42:45.133683765 +0000 UTC m=+0.572052863 container init d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 08:42:45 np0005597378 lvm[257089]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:42:45 np0005597378 lvm[257089]: VG ceph_vg2 finished
Jan 27 08:42:45 np0005597378 podman[257008]: 2026-01-27 13:42:45.143381267 +0000 UTC m=+0.581750335 container start d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:42:45 np0005597378 lvm[257090]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:42:45 np0005597378 lvm[257090]: VG ceph_vg0 finished
Jan 27 08:42:45 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [NOTICE]   (257092) : New worker (257094) forked
Jan 27 08:42:45 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [NOTICE]   (257092) : Loading success.
Jan 27 08:42:45 np0005597378 adoring_noether[256963]: {}
Jan 27 08:42:45 np0005597378 systemd[1]: libpod-746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44.scope: Deactivated successfully.
Jan 27 08:42:45 np0005597378 systemd[1]: libpod-746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44.scope: Consumed 1.466s CPU time.
Jan 27 08:42:45 np0005597378 podman[256922]: 2026-01-27 13:42:45.276047344 +0000 UTC m=+1.143504996 container died 746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_noether, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:42:45 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e69980677a9111751056362fca54ace534965b4932577cd03670bb75bb410ba8-merged.mount: Deactivated successfully.
Jan 27 08:42:45 np0005597378 podman[256922]: 2026-01-27 13:42:45.354660801 +0000 UTC m=+1.222118433 container remove 746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_noether, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:42:45 np0005597378 systemd[1]: libpod-conmon-746ac0045316c99d94419aa04a2c65553d3d7e2f6fbac70d26b7f44d0c4e5d44.scope: Deactivated successfully.
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:42:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:42:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:42:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:42:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:42:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:45.526 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.811 238945 DEBUG nova.compute.manager [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.812 238945 DEBUG oslo_concurrency.lockutils [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.812 238945 DEBUG oslo_concurrency.lockutils [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.812 238945 DEBUG oslo_concurrency.lockutils [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.812 238945 DEBUG nova.compute.manager [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Processing event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.812 238945 DEBUG nova.compute.manager [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.812 238945 DEBUG oslo_concurrency.lockutils [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.813 238945 DEBUG oslo_concurrency.lockutils [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.813 238945 DEBUG oslo_concurrency.lockutils [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.813 238945 DEBUG nova.compute.manager [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.813 238945 WARNING nova.compute.manager [req-3a8f3abe-7daa-4f98-b53a-fd11eb28bb84 req-d7535dbd-b58f-4124-a8c9-3676cf07a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received unexpected event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.814 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.819 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521365.8190086, aa157503-9eb6-44e1-9bdd-2c902a907faf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.819 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.821 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.825 238945 INFO nova.virt.libvirt.driver [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Instance spawned successfully.#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.826 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.852 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.858 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.862 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.862 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.863 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.864 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.864 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.864 238945 DEBUG nova.virt.libvirt.driver [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.894 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.932 238945 INFO nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Took 8.72 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:42:45 np0005597378 nova_compute[238941]: 2026-01-27 13:42:45.932 238945 DEBUG nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:46 np0005597378 nova_compute[238941]: 2026-01-27 13:42:46.011 238945 INFO nova.compute.manager [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Took 9.83 seconds to build instance.#033[00m
Jan 27 08:42:46 np0005597378 nova_compute[238941]: 2026-01-27 13:42:46.034 238945 DEBUG oslo_concurrency.lockutils [None req-43773a0c-1635-434a-9d50-acad288ae0fc 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 88 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 12 MiB/s wr, 461 op/s
Jan 27 08:42:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:46.290 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:42:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:46.291 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:42:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:42:46.292 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:42:46 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:42:46 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:42:46 np0005597378 nova_compute[238941]: 2026-01-27 13:42:46.693 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:42:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 27 08:42:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 27 08:42:46 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 27 08:42:47 np0005597378 nova_compute[238941]: 2026-01-27 13:42:47.553 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521352.552433, b302d131-0feb-4256-a088-4ee6521b1ed1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:47 np0005597378 nova_compute[238941]: 2026-01-27 13:42:47.554 238945 INFO nova.compute.manager [-] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:42:47 np0005597378 nova_compute[238941]: 2026-01-27 13:42:47.567 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:47 np0005597378 nova_compute[238941]: 2026-01-27 13:42:47.581 238945 DEBUG nova.compute.manager [None req-498c50b6-5f39-4126-9ad4-1bd4f04d7149 - - - - - -] [instance: b302d131-0feb-4256-a088-4ee6521b1ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:42:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:42:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:42:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:42:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:42:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:42:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 88 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 2.8 MiB/s wr, 197 op/s
Jan 27 08:42:48 np0005597378 NetworkManager[48904]: <info>  [1769521368.4849] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 27 08:42:48 np0005597378 NetworkManager[48904]: <info>  [1769521368.4859] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Jan 27 08:42:48 np0005597378 nova_compute[238941]: 2026-01-27 13:42:48.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:48 np0005597378 nova_compute[238941]: 2026-01-27 13:42:48.554 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:48Z|00075|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 08:42:48 np0005597378 nova_compute[238941]: 2026-01-27 13:42:48.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:48 np0005597378 nova_compute[238941]: 2026-01-27 13:42:48.590 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:49 np0005597378 nova_compute[238941]: 2026-01-27 13:42:49.208 238945 DEBUG nova.compute.manager [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-changed-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:42:49 np0005597378 nova_compute[238941]: 2026-01-27 13:42:49.208 238945 DEBUG nova.compute.manager [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Refreshing instance network info cache due to event network-changed-57b7d200-69e2-4204-8382-ca897741aa3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:42:49 np0005597378 nova_compute[238941]: 2026-01-27 13:42:49.209 238945 DEBUG oslo_concurrency.lockutils [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:42:49 np0005597378 nova_compute[238941]: 2026-01-27 13:42:49.209 238945 DEBUG oslo_concurrency.lockutils [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:42:49 np0005597378 nova_compute[238941]: 2026-01-27 13:42:49.209 238945 DEBUG nova.network.neutron [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Refreshing network info cache for port 57b7d200-69e2-4204-8382-ca897741aa3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:42:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:49Z|00076|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 08:42:49 np0005597378 nova_compute[238941]: 2026-01-27 13:42:49.429 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:50 np0005597378 nova_compute[238941]: 2026-01-27 13:42:50.027 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.4 MiB/s wr, 259 op/s
Jan 27 08:42:50 np0005597378 nova_compute[238941]: 2026-01-27 13:42:50.932 238945 DEBUG nova.network.neutron [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updated VIF entry in instance network info cache for port 57b7d200-69e2-4204-8382-ca897741aa3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:42:50 np0005597378 nova_compute[238941]: 2026-01-27 13:42:50.933 238945 DEBUG nova.network.neutron [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:42:50 np0005597378 nova_compute[238941]: 2026-01-27 13:42:50.954 238945 DEBUG oslo_concurrency.lockutils [req-e37723be-0f40-457b-85bb-84184bd41980 req-5f1dede1-5505-4069-977a-b0bc2146b8b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:42:51 np0005597378 nova_compute[238941]: 2026-01-27 13:42:51.695 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:42:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 217 op/s
Jan 27 08:42:52 np0005597378 nova_compute[238941]: 2026-01-27 13:42:52.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 342 KiB/s wr, 152 op/s
Jan 27 08:42:54 np0005597378 nova_compute[238941]: 2026-01-27 13:42:54.667 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:55 np0005597378 nova_compute[238941]: 2026-01-27 13:42:55.833 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521360.831585, 325fa6d5-6a4b-4551-af87-acb87aab870b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:55 np0005597378 nova_compute[238941]: 2026-01-27 13:42:55.833 238945 INFO nova.compute.manager [-] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:42:55 np0005597378 nova_compute[238941]: 2026-01-27 13:42:55.852 238945 DEBUG nova.compute.manager [None req-2100155b-a2a2-4c4c-a26b-ef6815954208 - - - - - -] [instance: 325fa6d5-6a4b-4551-af87-acb87aab870b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 77 op/s
Jan 27 08:42:56 np0005597378 nova_compute[238941]: 2026-01-27 13:42:56.696 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:42:57 np0005597378 nova_compute[238941]: 2026-01-27 13:42:57.150 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521362.1498084, b54f7f28-d070-4afd-94b1-24775374d89d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:42:57 np0005597378 nova_compute[238941]: 2026-01-27 13:42:57.151 238945 INFO nova.compute.manager [-] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:42:57 np0005597378 nova_compute[238941]: 2026-01-27 13:42:57.177 238945 DEBUG nova.compute.manager [None req-68c9bf71-e6bc-4f08-9182-a1e53cd19e4c - - - - - -] [instance: b54f7f28-d070-4afd-94b1-24775374d89d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:42:57 np0005597378 nova_compute[238941]: 2026-01-27 13:42:57.520 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:57 np0005597378 nova_compute[238941]: 2026-01-27 13:42:57.571 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:42:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 4818 writes, 21K keys, 4818 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 4818 writes, 4818 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1443 writes, 6498 keys, 1443 commit groups, 1.0 writes per commit group, ingest: 9.08 MB, 0.02 MB/s
Interval WAL: 1443 writes, 1443 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     58.9      0.42              0.06        12    0.035       0      0       0.0       0.0
  L6      1/0    7.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2     81.6     66.8      1.18              0.20        11    0.107     48K   5788       0.0       0.0
 Sum      1/0    7.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     60.3     64.8      1.60              0.26        23    0.069     48K   5788       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4     54.6     54.8      0.82              0.12        10    0.082     24K   2584       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     81.6     66.8      1.18              0.20        11    0.107     48K   5788       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     60.2      0.41              0.06        11    0.037       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.1 total, 600.0 interval
Flush(GB): cumulative 0.024, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 1.6 seconds
Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.8 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 9.13 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000247 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(555,8.73 MB,2.87288%) FilterBlock(24,141.80 KB,0.0455505%) IndexBlock(24,266.08 KB,0.0854743%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 27 08:42:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 68 op/s
Jan 27 08:42:58 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:58Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:94:20 10.100.0.6
Jan 27 08:42:58 np0005597378 ovn_controller[144812]: 2026-01-27T13:42:58Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:94:20 10.100.0.6
Jan 27 08:42:58 np0005597378 nova_compute[238941]: 2026-01-27 13:42:58.657 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:42:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:42:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2980606914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:42:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:42:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2980606914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:43:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 113 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Jan 27 08:43:01 np0005597378 nova_compute[238941]: 2026-01-27 13:43:01.698 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:43:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 113 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 268 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Jan 27 08:43:02 np0005597378 nova_compute[238941]: 2026-01-27 13:43:02.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:03 np0005597378 nova_compute[238941]: 2026-01-27 13:43:03.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:03 np0005597378 nova_compute[238941]: 2026-01-27 13:43:03.531 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:03 np0005597378 nova_compute[238941]: 2026-01-27 13:43:03.723 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:03 np0005597378 nova_compute[238941]: 2026-01-27 13:43:03.723 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:03 np0005597378 nova_compute[238941]: 2026-01-27 13:43:03.746 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:43:03 np0005597378 nova_compute[238941]: 2026-01-27 13:43:03.839 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:03 np0005597378 nova_compute[238941]: 2026-01-27 13:43:03.840 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:03 np0005597378 nova_compute[238941]: 2026-01-27 13:43:03.849 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:43:03 np0005597378 nova_compute[238941]: 2026-01-27 13:43:03.849 238945 INFO nova.compute.claims [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:43:03 np0005597378 nova_compute[238941]: 2026-01-27 13:43:03.980 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 121 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 08:43:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:43:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/321469870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.592 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.599 238945 DEBUG nova.compute.provider_tree [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.615 238945 DEBUG nova.scheduler.client.report [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.637 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.638 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.678 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.680 238945 DEBUG nova.network.neutron [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.699 238945 INFO nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.716 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.785 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.787 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.787 238945 INFO nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating image(s)#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.811 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.835 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.862 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.865 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.940 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.941 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.942 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.943 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.970 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:04 np0005597378 nova_compute[238941]: 2026-01-27 13:43:04.976 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bee7c432-6457-4160-917c-a807eca3df0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:05 np0005597378 nova_compute[238941]: 2026-01-27 13:43:05.276 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bee7c432-6457-4160-917c-a807eca3df0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:05 np0005597378 nova_compute[238941]: 2026-01-27 13:43:05.343 238945 DEBUG nova.policy [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97755bdfdc1140aa970fa69a04baeb3c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:43:05 np0005597378 nova_compute[238941]: 2026-01-27 13:43:05.352 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] resizing rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:43:05 np0005597378 nova_compute[238941]: 2026-01-27 13:43:05.440 238945 DEBUG nova.objects.instance [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'migration_context' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:05 np0005597378 nova_compute[238941]: 2026-01-27 13:43:05.458 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:43:05 np0005597378 nova_compute[238941]: 2026-01-27 13:43:05.459 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Ensure instance console log exists: /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:43:05 np0005597378 nova_compute[238941]: 2026-01-27 13:43:05.459 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:05 np0005597378 nova_compute[238941]: 2026-01-27 13:43:05.459 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:05 np0005597378 nova_compute[238941]: 2026-01-27 13:43:05.460 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 08:43:06 np0005597378 nova_compute[238941]: 2026-01-27 13:43:06.699 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:06 np0005597378 podman[257334]: 2026-01-27 13:43:06.747578765 +0000 UTC m=+0.090627313 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 08:43:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:43:06 np0005597378 nova_compute[238941]: 2026-01-27 13:43:06.817 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:06 np0005597378 nova_compute[238941]: 2026-01-27 13:43:06.817 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:06 np0005597378 nova_compute[238941]: 2026-01-27 13:43:06.864 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:43:06 np0005597378 nova_compute[238941]: 2026-01-27 13:43:06.959 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:06 np0005597378 nova_compute[238941]: 2026-01-27 13:43:06.960 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:06 np0005597378 nova_compute[238941]: 2026-01-27 13:43:06.971 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:43:06 np0005597378 nova_compute[238941]: 2026-01-27 13:43:06.972 238945 INFO nova.compute.claims [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:43:07 np0005597378 nova_compute[238941]: 2026-01-27 13:43:07.171 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:07 np0005597378 nova_compute[238941]: 2026-01-27 13:43:07.203 238945 DEBUG nova.network.neutron [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Successfully created port: 851829c6-49a6-4580-90d9-df985a736216 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:43:07 np0005597378 nova_compute[238941]: 2026-01-27 13:43:07.574 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:43:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1819925514' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:43:07 np0005597378 nova_compute[238941]: 2026-01-27 13:43:07.776 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:07 np0005597378 nova_compute[238941]: 2026-01-27 13:43:07.783 238945 DEBUG nova.compute.provider_tree [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.100 238945 DEBUG nova.scheduler.client.report [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.127 238945 DEBUG oslo_concurrency.lockutils [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-aa157503-9eb6-44e1-9bdd-2c902a907faf-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.128 238945 DEBUG oslo_concurrency.lockutils [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-aa157503-9eb6-44e1-9bdd-2c902a907faf-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.128 238945 DEBUG nova.objects.instance [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid aa157503-9eb6-44e1-9bdd-2c902a907faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.164 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.164 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.186 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.186 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.213 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.223 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.223 238945 DEBUG nova.network.neutron [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.250 238945 INFO nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.298 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.298 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.304 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.305 238945 INFO nova.compute.claims [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.308 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.426 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.427 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.428 238945 INFO nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Creating image(s)#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.452 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.478 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.500 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.505 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.531 238945 DEBUG nova.policy [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97755bdfdc1140aa970fa69a04baeb3c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.574 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.575 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.575 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.576 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.599 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.604 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4c52012f-9a4f-4599-adb0-2c658a054f91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.644 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.789 238945 DEBUG nova.objects.instance [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_requests' on Instance uuid aa157503-9eb6-44e1-9bdd-2c902a907faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.813 238945 DEBUG nova.network.neutron [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.861 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4c52012f-9a4f-4599-adb0-2c658a054f91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:08 np0005597378 nova_compute[238941]: 2026-01-27 13:43:08.930 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] resizing rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.021 238945 DEBUG nova.objects.instance [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'migration_context' on Instance uuid 4c52012f-9a4f-4599-adb0-2c658a054f91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.037 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.038 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Ensure instance console log exists: /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.038 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.038 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.039 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.169 238945 DEBUG nova.policy [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:43:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:43:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492659669' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.285 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.290 238945 DEBUG nova.compute.provider_tree [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.315 238945 DEBUG nova.scheduler.client.report [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.335 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.347 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.347 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.426 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.427 238945 DEBUG nova.network.neutron [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.462 238945 DEBUG nova.network.neutron [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Successfully updated port: 851829c6-49a6-4580-90d9-df985a736216 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.503 238945 INFO nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.520 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.520 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquired lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.521 238945 DEBUG nova.network.neutron [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.525 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.583 238945 DEBUG nova.network.neutron [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Successfully created port: 3cd42161-aa97-4ecb-9e41-e7a887f02d7c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.595 238945 DEBUG nova.policy [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bb804373b8be4577a6623d2131cdcd59', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8773022351141649f1c7a9db9002d2f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:43:09 np0005597378 podman[257570]: 2026-01-27 13:43:09.715744797 +0000 UTC m=+0.050631570 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.733 238945 DEBUG nova.network.neutron [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.740 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.741 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.742 238945 INFO nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Creating image(s)#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.762 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.790 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.815 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.819 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.881 238945 DEBUG nova.compute.manager [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-changed-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.882 238945 DEBUG nova.compute.manager [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Refreshing instance network info cache due to event network-changed-851829c6-49a6-4580-90d9-df985a736216. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.882 238945 DEBUG oslo_concurrency.lockutils [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.889 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.890 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.891 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.891 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.912 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:09 np0005597378 nova_compute[238941]: 2026-01-27 13:43:09.918 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 186 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 4.8 MiB/s wr, 93 op/s
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.326 238945 DEBUG nova.network.neutron [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Successfully created port: f090189d-af3a-4961-83a9-d4a369167af0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.358 238945 DEBUG nova.network.neutron [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Successfully created port: 67dffbe5-7a66-478a-b9a7-8042fe48ca17 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.502 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.561 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] resizing rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.588 238945 DEBUG nova.network.neutron [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Updating instance_info_cache with network_info: [{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.648 238945 DEBUG nova.objects.instance [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'migration_context' on Instance uuid a2bf4dff-c501-4c5d-8573-bba7ceabc549 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.680 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.681 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Ensure instance console log exists: /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.681 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.682 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.682 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.687 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Releasing lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.687 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance network_info: |[{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.687 238945 DEBUG oslo_concurrency.lockutils [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.689 238945 DEBUG nova.network.neutron [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Refreshing network info cache for port 851829c6-49a6-4580-90d9-df985a736216 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.692 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start _get_guest_xml network_info=[{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.698 238945 WARNING nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.703 238945 DEBUG nova.virt.libvirt.host [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.704 238945 DEBUG nova.virt.libvirt.host [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.708 238945 DEBUG nova.virt.libvirt.host [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.709 238945 DEBUG nova.virt.libvirt.host [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.709 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.709 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.710 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.710 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.710 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.710 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.711 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.711 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.711 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.711 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.712 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.712 238945 DEBUG nova.virt.hardware [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:43:10 np0005597378 nova_compute[238941]: 2026-01-27 13:43:10.715 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774938106' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.309 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.328 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.331 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.496 238945 DEBUG nova.network.neutron [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Successfully updated port: 3cd42161-aa97-4ecb-9e41-e7a887f02d7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.520 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.520 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquired lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.521 238945 DEBUG nova.network.neutron [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.606 238945 DEBUG nova.network.neutron [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Successfully updated port: 67dffbe5-7a66-478a-b9a7-8042fe48ca17 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.636 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "refresh_cache-a2bf4dff-c501-4c5d-8573-bba7ceabc549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.636 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquired lock "refresh_cache-a2bf4dff-c501-4c5d-8573-bba7ceabc549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.637 238945 DEBUG nova.network.neutron [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.702 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:43:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3142265827' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.959 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.961 238945 DEBUG nova.virt.libvirt.vif [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:04Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.962 238945 DEBUG nova.network.os_vif_util [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.963 238945 DEBUG nova.network.os_vif_util [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.965 238945 DEBUG nova.objects.instance [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_devices' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:11 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.998 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:43:11 np0005597378 nova_compute[238941]:  <uuid>bee7c432-6457-4160-917c-a807eca3df0e</uuid>
Jan 27 08:43:11 np0005597378 nova_compute[238941]:  <name>instance-00000011</name>
Jan 27 08:43:11 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:43:11 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:43:11 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:43:11 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:43:11 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:43:11 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersAdminTestJSON-server-752871201</nova:name>
Jan 27 08:43:11 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:43:10</nova:creationTime>
Jan 27 08:43:11 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:43:11 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:43:11 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:        <nova:user uuid="97755bdfdc1140aa970fa69a04baeb3c">tempest-ServersAdminTestJSON-2123092478-project-member</nova:user>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:        <nova:project uuid="c02e06ff150d4463ba12a3be444a4ae3">tempest-ServersAdminTestJSON-2123092478</nova:project>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:        <nova:port uuid="851829c6-49a6-4580-90d9-df985a736216">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <entry name="serial">bee7c432-6457-4160-917c-a807eca3df0e</entry>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <entry name="uuid">bee7c432-6457-4160-917c-a807eca3df0e</entry>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/bee7c432-6457-4160-917c-a807eca3df0e_disk">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/bee7c432-6457-4160-917c-a807eca3df0e_disk.config">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:5b:0a:48"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <target dev="tap851829c6-49"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/console.log" append="off"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:43:12 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:43:12 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:43:12 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:43:12 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:11.999 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Preparing to wait for external event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.000 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.000 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.001 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.001 238945 DEBUG nova.virt.libvirt.vif [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:04Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.002 238945 DEBUG nova.network.os_vif_util [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.003 238945 DEBUG nova.network.os_vif_util [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.003 238945 DEBUG os_vif [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.004 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.005 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.009 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.010 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap851829c6-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.010 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap851829c6-49, col_values=(('external_ids', {'iface-id': '851829c6-49a6-4580-90d9-df985a736216', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:0a:48', 'vm-uuid': 'bee7c432-6457-4160-917c-a807eca3df0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:12 np0005597378 NetworkManager[48904]: <info>  [1769521392.0139] manager: (tap851829c6-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.020 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.023 238945 INFO os_vif [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49')#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.099 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.099 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.099 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No VIF found with MAC fa:16:3e:5b:0a:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.100 238945 INFO nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Using config drive#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.121 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 186 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 2.7 MiB/s wr, 45 op/s
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.196 238945 DEBUG nova.network.neutron [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.238 238945 DEBUG nova.network.neutron [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.570 238945 INFO nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating config drive at /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.575 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2wa4o25k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.704 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2wa4o25k" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.726 238945 DEBUG nova.storage.rbd_utils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.730 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config bee7c432-6457-4160-917c-a807eca3df0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.855 238945 DEBUG oslo_concurrency.processutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config bee7c432-6457-4160-917c-a807eca3df0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.856 238945 INFO nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deleting local config drive /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config because it was imported into RBD.#033[00m
Jan 27 08:43:12 np0005597378 kernel: tap851829c6-49: entered promiscuous mode
Jan 27 08:43:12 np0005597378 NetworkManager[48904]: <info>  [1769521392.9046] manager: (tap851829c6-49): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Jan 27 08:43:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:12Z|00077|binding|INFO|Claiming lport 851829c6-49a6-4580-90d9-df985a736216 for this chassis.
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.905 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:12Z|00078|binding|INFO|851829c6-49a6-4580-90d9-df985a736216: Claiming fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 08:43:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:12Z|00079|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 ovn-installed in OVS
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.924 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:12 np0005597378 nova_compute[238941]: 2026-01-27 13:43:12.927 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:12Z|00080|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 up in Southbound
Jan 27 08:43:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.932 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:43:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.933 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef bound to our chassis#033[00m
Jan 27 08:43:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.935 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef#033[00m
Jan 27 08:43:12 np0005597378 systemd-udevd[257890]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:43:12 np0005597378 systemd-machined[207425]: New machine qemu-19-instance-00000011.
Jan 27 08:43:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.946 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7379b9f7-5ba3-4fcc-9d58-3d4605202b12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.947 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4bc78608-11 in ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:43:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.949 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4bc78608-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:43:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.949 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72783359-5bbd-4173-b4a3-17b988a2ab33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:12 np0005597378 NetworkManager[48904]: <info>  [1769521392.9511] device (tap851829c6-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:43:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.950 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a956e95e-d8e8-4225-a034-d71c68e508ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:12 np0005597378 NetworkManager[48904]: <info>  [1769521392.9522] device (tap851829c6-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:43:12 np0005597378 systemd[1]: Started Virtual Machine qemu-19-instance-00000011.
Jan 27 08:43:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.967 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf7acbc-9d30-4df8-aade-76698e6566af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:12.982 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[989035c6-2c44-415f-bb36-2c8176d2ae46]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.010 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c6546b3d-7cf5-4002-8571-147b7c24d4d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.016 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[468f6421-4561-4c50-ad41-552b9a9c317e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:13 np0005597378 NetworkManager[48904]: <info>  [1769521393.0177] manager: (tap4bc78608-10): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.046 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8531e4-44df-42e0-900a-b916dd0b5912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.049 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[925e48f1-2caa-4dc5-97bc-e8e58db59eeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:13 np0005597378 NetworkManager[48904]: <info>  [1769521393.0755] device (tap4bc78608-10): carrier: link connected
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.077 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[87a0ba48-66ee-4e7c-bb19-4338bd218d37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.095 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[79047359-17dc-4850-9d9e-315969058d89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 18015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257923, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.111 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[84461f78-a48d-4d8c-8788-4573a738a413]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:3f82'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403269, 'tstamp': 403269}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257924, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.126 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9b541fa5-5fd6-4f1e-9c54-2282cb14c894]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 18015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257925, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.157 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d9af63b0-4fac-4d1e-a51b-cb8875f549a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.227 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[439ef4ae-49bc-4be3-9a48-55c92f02e9a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.229 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.229 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.230 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:13 np0005597378 NetworkManager[48904]: <info>  [1769521393.2340] manager: (tap4bc78608-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:13 np0005597378 kernel: tap4bc78608-10: entered promiscuous mode
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.240 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:13 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:13Z|00081|binding|INFO|Releasing lport f2abaf39-2261-4bb7-9bb5-6208083120f8 from this chassis (sb_readonly=0)
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.243 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.255 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.257 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4bc78608-1746-40d0-a3d3-be467e4c23ef.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4bc78608-1746-40d0-a3d3-be467e4c23ef.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.258 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c63d11-71ee-4ce1-99da-57aae2fa211f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.259 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/4bc78608-1746-40d0-a3d3-be467e4c23ef.pid.haproxy
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 4bc78608-1746-40d0-a3d3-be467e4c23ef
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:43:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:13.260 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'env', 'PROCESS_TAG=haproxy-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4bc78608-1746-40d0-a3d3-be467e4c23ef.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.361 238945 DEBUG nova.network.neutron [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Updated VIF entry in instance network info cache for port 851829c6-49a6-4580-90d9-df985a736216. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.362 238945 DEBUG nova.network.neutron [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Updating instance_info_cache with network_info: [{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.370 238945 DEBUG nova.network.neutron [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Successfully updated port: f090189d-af3a-4961-83a9-d4a369167af0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.390 238945 DEBUG oslo_concurrency.lockutils [req-6349e82e-2b6d-46fe-9369-09598152e1e4 req-913c856d-42c0-4217-a042-95e88d18bfcc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.392 238945 DEBUG oslo_concurrency.lockutils [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.392 238945 DEBUG oslo_concurrency.lockutils [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.392 238945 DEBUG nova.network.neutron [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.432 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521393.4316468, bee7c432-6457-4160-917c-a807eca3df0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.433 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Started (Lifecycle Event)#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.469 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.485 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521393.4320395, bee7c432-6457-4160-917c-a807eca3df0e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.486 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.510 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.513 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.545 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.634 238945 WARNING nova.network.neutron [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.645 238945 DEBUG nova.network.neutron [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Updating instance_info_cache with network_info: [{"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.668 238945 DEBUG nova.network.neutron [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Updating instance_info_cache with network_info: [{"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:13 np0005597378 podman[257999]: 2026-01-27 13:43:13.627150832 +0000 UTC m=+0.024718720 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:43:13 np0005597378 podman[257999]: 2026-01-27 13:43:13.734225719 +0000 UTC m=+0.131793597 container create 33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.752 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Releasing lock "refresh_cache-a2bf4dff-c501-4c5d-8573-bba7ceabc549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.753 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Instance network_info: |[{"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.755 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Start _get_guest_xml network_info=[{"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.758 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Releasing lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.759 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Instance network_info: |[{"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.761 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Start _get_guest_xml network_info=[{"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.765 238945 WARNING nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.770 238945 WARNING nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.772 238945 DEBUG nova.virt.libvirt.host [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.773 238945 DEBUG nova.virt.libvirt.host [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.776 238945 DEBUG nova.virt.libvirt.host [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.776 238945 DEBUG nova.virt.libvirt.host [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.776 238945 DEBUG nova.virt.libvirt.host [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.777 238945 DEBUG nova.virt.libvirt.host [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.778 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.778 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.778 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.779 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.779 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.779 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.779 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.779 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.780 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.780 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.780 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.780 238945 DEBUG nova.virt.hardware [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:43:13 np0005597378 systemd[1]: Started libpod-conmon-33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25.scope.
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.784 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:13 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.813 238945 DEBUG nova.virt.libvirt.host [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.815 238945 DEBUG nova.virt.libvirt.host [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.815 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.816 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.817 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.817 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:43:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7912d8889bcc710109044d974d117cd8921ca98d61262588e916347c04d92b3c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.818 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.818 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.818 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.819 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.819 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.819 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.820 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.820 238945 DEBUG nova.virt.hardware [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:43:13 np0005597378 nova_compute[238941]: 2026-01-27 13:43:13.823 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:13 np0005597378 podman[257999]: 2026-01-27 13:43:13.830592824 +0000 UTC m=+0.228160722 container init 33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:43:13 np0005597378 podman[257999]: 2026-01-27 13:43:13.836467374 +0000 UTC m=+0.234035252 container start 33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:43:13 np0005597378 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [NOTICE]   (258020) : New worker (258022) forked
Jan 27 08:43:13 np0005597378 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [NOTICE]   (258020) : Loading success.
Jan 27 08:43:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 245 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 4.6 MiB/s wr, 87 op/s
Jan 27 08:43:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/797124501' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:14 np0005597378 nova_compute[238941]: 2026-01-27 13:43:14.383 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2879799846' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:14 np0005597378 nova_compute[238941]: 2026-01-27 13:43:14.402 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:14 np0005597378 nova_compute[238941]: 2026-01-27 13:43:14.406 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:14 np0005597378 nova_compute[238941]: 2026-01-27 13:43:14.426 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:14 np0005597378 nova_compute[238941]: 2026-01-27 13:43:14.445 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:14 np0005597378 nova_compute[238941]: 2026-01-27 13:43:14.449 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3864966625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:14 np0005597378 nova_compute[238941]: 2026-01-27 13:43:14.977 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:14 np0005597378 nova_compute[238941]: 2026-01-27 13:43:14.979 238945 DEBUG nova.virt.libvirt.vif [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1866996444',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1866996444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1866996444',id=19,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-8gi0fsu6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:09Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=a2bf4dff-c501-4c5d-8573-bba7ceabc549,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:43:14 np0005597378 nova_compute[238941]: 2026-01-27 13:43:14.979 238945 DEBUG nova.network.os_vif_util [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:14 np0005597378 nova_compute[238941]: 2026-01-27 13:43:14.980 238945 DEBUG nova.network.os_vif_util [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:14 np0005597378 nova_compute[238941]: 2026-01-27 13:43:14.982 238945 DEBUG nova.objects.instance [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'pci_devices' on Instance uuid a2bf4dff-c501-4c5d-8573-bba7ceabc549 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:14.999 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <uuid>a2bf4dff-c501-4c5d-8573-bba7ceabc549</uuid>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <name>instance-00000013</name>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1866996444</nova:name>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:43:13</nova:creationTime>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:user uuid="bb804373b8be4577a6623d2131cdcd59">tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member</nova:user>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:project uuid="c8773022351141649f1c7a9db9002d2f">tempest-ImagesOneServerNegativeTestJSON-1108889514</nova:project>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:port uuid="67dffbe5-7a66-478a-b9a7-8042fe48ca17">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <entry name="serial">a2bf4dff-c501-4c5d-8573-bba7ceabc549</entry>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <entry name="uuid">a2bf4dff-c501-4c5d-8573-bba7ceabc549</entry>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk.config">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:a8:f2:78"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <target dev="tap67dffbe5-7a"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/console.log" append="off"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:43:15 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:43:15 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.000 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Preparing to wait for external event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.001 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.001 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.001 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.002 238945 DEBUG nova.virt.libvirt.vif [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1866996444',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1866996444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1866996444',id=19,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-8gi0fsu6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:09Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=a2bf4dff-c501-4c5d-8573-bba7ceabc549,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.002 238945 DEBUG nova.network.os_vif_util [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.003 238945 DEBUG nova.network.os_vif_util [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.003 238945 DEBUG os_vif [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.004 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.005 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.008 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67dffbe5-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.009 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap67dffbe5-7a, col_values=(('external_ids', {'iface-id': '67dffbe5-7a66-478a-b9a7-8042fe48ca17', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:f2:78', 'vm-uuid': 'a2bf4dff-c501-4c5d-8573-bba7ceabc549'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3523708818' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.010 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:15 np0005597378 NetworkManager[48904]: <info>  [1769521395.0120] manager: (tap67dffbe5-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.018 238945 INFO os_vif [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a')#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.032 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.033 238945 DEBUG nova.virt.libvirt.vif [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1989956857',display_name='tempest-ServersAdminTestJSON-server-1989956857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1989956857',id=18,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-uvcvbdkl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:08Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=4c52012f-9a4f-4599-adb0-2c658a054f91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.033 238945 DEBUG nova.network.os_vif_util [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.034 238945 DEBUG nova.network.os_vif_util [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.034 238945 DEBUG nova.objects.instance [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c52012f-9a4f-4599-adb0-2c658a054f91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.048 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <uuid>4c52012f-9a4f-4599-adb0-2c658a054f91</uuid>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <name>instance-00000012</name>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersAdminTestJSON-server-1989956857</nova:name>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:43:13</nova:creationTime>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:user uuid="97755bdfdc1140aa970fa69a04baeb3c">tempest-ServersAdminTestJSON-2123092478-project-member</nova:user>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:project uuid="c02e06ff150d4463ba12a3be444a4ae3">tempest-ServersAdminTestJSON-2123092478</nova:project>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <nova:port uuid="3cd42161-aa97-4ecb-9e41-e7a887f02d7c">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <entry name="serial">4c52012f-9a4f-4599-adb0-2c658a054f91</entry>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <entry name="uuid">4c52012f-9a4f-4599-adb0-2c658a054f91</entry>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/4c52012f-9a4f-4599-adb0-2c658a054f91_disk">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/4c52012f-9a4f-4599-adb0-2c658a054f91_disk.config">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:3f:48:4c"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <target dev="tap3cd42161-aa"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/console.log" append="off"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:43:15 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:43:15 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:43:15 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:43:15 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.049 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Preparing to wait for external event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.050 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.050 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.050 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.051 238945 DEBUG nova.virt.libvirt.vif [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1989956857',display_name='tempest-ServersAdminTestJSON-server-1989956857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1989956857',id=18,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-uvcvbdkl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:08Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=4c52012f-9a4f-4599-adb0-2c658a054f91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.051 238945 DEBUG nova.network.os_vif_util [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.052 238945 DEBUG nova.network.os_vif_util [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.052 238945 DEBUG os_vif [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.054 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.054 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.054 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.059 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.059 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cd42161-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.059 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3cd42161-aa, col_values=(('external_ids', {'iface-id': '3cd42161-aa97-4ecb-9e41-e7a887f02d7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:48:4c', 'vm-uuid': '4c52012f-9a4f-4599-adb0-2c658a054f91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.061 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:15 np0005597378 NetworkManager[48904]: <info>  [1769521395.0624] manager: (tap3cd42161-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.062 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.069 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.070 238945 INFO os_vif [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa')#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.092 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.093 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.093 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No VIF found with MAC fa:16:3e:a8:f2:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.093 238945 INFO nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Using config drive#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.118 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.148 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.148 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.148 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No VIF found with MAC fa:16:3e:3f:48:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.149 238945 INFO nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Using config drive#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.170 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.389 238945 INFO nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Creating config drive at /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/disk.config#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.394 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph3s6cwjq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.502 238945 INFO nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Creating config drive at /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/disk.config#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.507 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb6iongty execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.535 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph3s6cwjq" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.564 238945 DEBUG nova.storage.rbd_utils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.569 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/disk.config a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.638 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb6iongty" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.669 238945 DEBUG nova.storage.rbd_utils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 4c52012f-9a4f-4599-adb0-2c658a054f91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:15 np0005597378 nova_compute[238941]: 2026-01-27 13:43:15.674 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/disk.config 4c52012f-9a4f-4599-adb0-2c658a054f91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 260 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 5.3 MiB/s wr, 92 op/s
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.394 238945 DEBUG oslo_concurrency.processutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/disk.config a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.825s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.395 238945 INFO nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Deleting local config drive /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549/disk.config because it was imported into RBD.#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.416 238945 DEBUG oslo_concurrency.processutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/disk.config 4c52012f-9a4f-4599-adb0-2c658a054f91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.742s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.417 238945 INFO nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Deleting local config drive /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91/disk.config because it was imported into RBD.#033[00m
Jan 27 08:43:16 np0005597378 NetworkManager[48904]: <info>  [1769521396.4643] manager: (tap67dffbe5-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Jan 27 08:43:16 np0005597378 kernel: tap67dffbe5-7a: entered promiscuous mode
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:16 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:16Z|00082|binding|INFO|Claiming lport 67dffbe5-7a66-478a-b9a7-8042fe48ca17 for this chassis.
Jan 27 08:43:16 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:16Z|00083|binding|INFO|67dffbe5-7a66-478a-b9a7-8042fe48ca17: Claiming fa:16:3e:a8:f2:78 10.100.0.4
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.484 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:f2:78 10.100.0.4'], port_security=['fa:16:3e:a8:f2:78 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a2bf4dff-c501-4c5d-8573-bba7ceabc549', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58208cdc-4099-47ab-9729-2e87f01c74f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8773022351141649f1c7a9db9002d2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b756e75d-bbaf-406b-aafe-8ee3c670480f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77e2e80c-2188-4e01-a2f5-11190a5d263b, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=67dffbe5-7a66-478a-b9a7-8042fe48ca17) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.485 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 67dffbe5-7a66-478a-b9a7-8042fe48ca17 in datapath 58208cdc-4099-47ab-9729-2e87f01c74f8 bound to our chassis#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.487 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58208cdc-4099-47ab-9729-2e87f01c74f8#033[00m
Jan 27 08:43:16 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:16Z|00084|binding|INFO|Setting lport 67dffbe5-7a66-478a-b9a7-8042fe48ca17 ovn-installed in OVS
Jan 27 08:43:16 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:16Z|00085|binding|INFO|Setting lport 67dffbe5-7a66-478a-b9a7-8042fe48ca17 up in Southbound
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.499 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.501 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5751007e-71b2-4427-a65f-21f9f395324d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.502 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58208cdc-41 in ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.505 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58208cdc-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.505 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cefa409f-bd1c-44ce-a981-c18bf12799c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.506 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3827fc-0ed5-4edf-8048-9f32527a61e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 kernel: tap3cd42161-aa: entered promiscuous mode
Jan 27 08:43:16 np0005597378 NetworkManager[48904]: <info>  [1769521396.5089] manager: (tap3cd42161-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Jan 27 08:43:16 np0005597378 systemd-udevd[258297]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:43:16 np0005597378 systemd-udevd[258299]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.511 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.516 238945 DEBUG nova.compute.manager [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received event network-changed-3cd42161-aa97-4ecb-9e41-e7a887f02d7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.516 238945 DEBUG nova.compute.manager [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Refreshing instance network info cache due to event network-changed-3cd42161-aa97-4ecb-9e41-e7a887f02d7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.517 238945 DEBUG oslo_concurrency.lockutils [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.517 238945 DEBUG oslo_concurrency.lockutils [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.517 238945 DEBUG nova.network.neutron [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Refreshing network info cache for port 3cd42161-aa97-4ecb-9e41-e7a887f02d7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.520 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:16 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:16Z|00086|binding|INFO|Claiming lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c for this chassis.
Jan 27 08:43:16 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:16Z|00087|binding|INFO|3cd42161-aa97-4ecb-9e41-e7a887f02d7c: Claiming fa:16:3e:3f:48:4c 10.100.0.14
Jan 27 08:43:16 np0005597378 systemd-machined[207425]: New machine qemu-20-instance-00000013.
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.524 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:16 np0005597378 NetworkManager[48904]: <info>  [1769521396.5253] device (tap67dffbe5-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.523 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[e72652e2-4d18-466e-b78d-9583ac222782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 NetworkManager[48904]: <info>  [1769521396.5260] device (tap67dffbe5-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:43:16 np0005597378 NetworkManager[48904]: <info>  [1769521396.5314] device (tap3cd42161-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:43:16 np0005597378 NetworkManager[48904]: <info>  [1769521396.5322] device (tap3cd42161-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:43:16 np0005597378 systemd[1]: Started Virtual Machine qemu-20-instance-00000013.
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.533 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:48:4c 10.100.0.14'], port_security=['fa:16:3e:3f:48:4c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4c52012f-9a4f-4599-adb0-2c658a054f91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3cd42161-aa97-4ecb-9e41-e7a887f02d7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:43:16 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:16Z|00088|binding|INFO|Setting lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c ovn-installed in OVS
Jan 27 08:43:16 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:16Z|00089|binding|INFO|Setting lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c up in Southbound
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.547 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.554 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1a02dbe9-963b-40eb-8080-5674872de61a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 systemd-machined[207425]: New machine qemu-21-instance-00000012.
Jan 27 08:43:16 np0005597378 systemd[1]: Started Virtual Machine qemu-21-instance-00000012.
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.586 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[abd8443b-457b-4647-b1fd-48622c2cd9ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 NetworkManager[48904]: <info>  [1769521396.5933] manager: (tap58208cdc-40): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.592 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4fe271-f882-4834-8cd3-0421f165503e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.607 238945 DEBUG nova.compute.manager [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received event network-changed-67dffbe5-7a66-478a-b9a7-8042fe48ca17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.607 238945 DEBUG nova.compute.manager [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Refreshing instance network info cache due to event network-changed-67dffbe5-7a66-478a-b9a7-8042fe48ca17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.608 238945 DEBUG oslo_concurrency.lockutils [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-a2bf4dff-c501-4c5d-8573-bba7ceabc549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.608 238945 DEBUG oslo_concurrency.lockutils [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-a2bf4dff-c501-4c5d-8573-bba7ceabc549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.609 238945 DEBUG nova.network.neutron [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Refreshing network info cache for port 67dffbe5-7a66-478a-b9a7-8042fe48ca17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.626 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[77862a52-f1aa-4dd5-af62-6efedf2d6af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.630 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2563cef8-0303-491a-8350-4b22b979d11d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 NetworkManager[48904]: <info>  [1769521396.6564] device (tap58208cdc-40): carrier: link connected
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.666 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0e194c91-a2ae-4478-a8ee-0bf76644daff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.684 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6f3a78-2fbf-47a8-915b-8dfa33ce17fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58208cdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e7:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403628, 'reachable_time': 26276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258342, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.701 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90bcba5b-0caf-4047-bbd6-41271e7303bf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:e7f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403628, 'tstamp': 403628}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258343, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.703 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.719 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb7e04a-f64a-4de8-987f-cb3d0b4962b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58208cdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e7:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403628, 'reachable_time': 26276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258344, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.752 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf52724-8045-4240-8e12-bb5b3921f76f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.813 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2a40bd-8396-4c15-95e1-7e97b80bd58b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.815 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58208cdc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.815 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.816 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58208cdc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.817 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:16 np0005597378 NetworkManager[48904]: <info>  [1769521396.8184] manager: (tap58208cdc-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Jan 27 08:43:16 np0005597378 kernel: tap58208cdc-40: entered promiscuous mode
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.826 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58208cdc-40, col_values=(('external_ids', {'iface-id': '42783ab6-7560-4ef7-b70e-aaa544a1d882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.828 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:16 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:16Z|00090|binding|INFO|Releasing lport 42783ab6-7560-4ef7-b70e-aaa544a1d882 from this chassis (sb_readonly=0)
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.833 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.834 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f34a1ea2-a6df-4566-a81c-aba638838596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.835 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:43:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:16.837 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'env', 'PROCESS_TAG=haproxy-58208cdc-4099-47ab-9729-2e87f01c74f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58208cdc-4099-47ab-9729-2e87f01c74f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:43:16 np0005597378 nova_compute[238941]: 2026-01-27 13:43:16.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.015 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521397.0150032, a2bf4dff-c501-4c5d-8573-bba7ceabc549 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.016 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] VM Started (Lifecycle Event)#033[00m
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:43:17
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'images']
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.058 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.063 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521397.0152948, a2bf4dff-c501-4c5d-8573-bba7ceabc549 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.064 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.089 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.091 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.130 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.130 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521397.1292245, 4c52012f-9a4f-4599-adb0-2c658a054f91 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.130 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] VM Started (Lifecycle Event)#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.158 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.163 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521397.1292908, 4c52012f-9a4f-4599-adb0-2c658a054f91 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.164 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.187 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.191 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.224 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:43:17 np0005597378 podman[258458]: 2026-01-27 13:43:17.232252303 +0000 UTC m=+0.046835628 container create 15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 08:43:17 np0005597378 systemd[1]: Started libpod-conmon-15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19.scope.
Jan 27 08:43:17 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:43:17 np0005597378 podman[258458]: 2026-01-27 13:43:17.206441965 +0000 UTC m=+0.021025320 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:43:17 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d225bf4f286cd910e43e60eca46f0d9f8bc364d819cf9cdd7301aa57d09e31b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:17 np0005597378 podman[258458]: 2026-01-27 13:43:17.320676445 +0000 UTC m=+0.135259800 container init 15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:43:17 np0005597378 podman[258458]: 2026-01-27 13:43:17.327279353 +0000 UTC m=+0.141862688 container start 15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 27 08:43:17 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [NOTICE]   (258477) : New worker (258479) forked
Jan 27 08:43:17 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [NOTICE]   (258477) : Loading success.
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.391 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3cd42161-aa97-4ecb-9e41-e7a887f02d7c in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.394 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.403 238945 DEBUG nova.network.neutron [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.409 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[50757997-9c36-4005-a64b-28b8ef9ba567]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.448 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a8c5cb-f37b-4c74-bad7-47087b3b01fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.452 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4f3089-8681-4e0e-befd-96a3d11fba53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.473 238945 DEBUG oslo_concurrency.lockutils [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.476 238945 DEBUG nova.virt.libvirt.vif [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:42:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:42:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.477 238945 DEBUG nova.network.os_vif_util [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.478 238945 DEBUG nova.network.os_vif_util [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.478 238945 DEBUG os_vif [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.479 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.479 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.480 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.483 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf090189d-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.483 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf090189d-af, col_values=(('external_ids', {'iface-id': 'f090189d-af3a-4961-83a9-d4a369167af0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:44:57', 'vm-uuid': 'aa157503-9eb6-44e1-9bdd-2c902a907faf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:17 np0005597378 NetworkManager[48904]: <info>  [1769521397.4859] manager: (tapf090189d-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.488 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.494 238945 INFO os_vif [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af')#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.495 238945 DEBUG nova.virt.libvirt.vif [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:42:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:42:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.494 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[23241130-02f8-449d-8e85-27cfb619f68a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.496 238945 DEBUG nova.network.os_vif_util [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.497 238945 DEBUG nova.network.os_vif_util [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.501 238945 DEBUG nova.virt.libvirt.guest [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] attach device xml: <interface type="ethernet">
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:2d:44:57"/>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  <target dev="tapf090189d-af"/>
Jan 27 08:43:17 np0005597378 nova_compute[238941]: </interface>
Jan 27 08:43:17 np0005597378 nova_compute[238941]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 27 08:43:17 np0005597378 NetworkManager[48904]: <info>  [1769521397.5159] manager: (tapf090189d-af): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Jan 27 08:43:17 np0005597378 kernel: tapf090189d-af: entered promiscuous mode
Jan 27 08:43:17 np0005597378 systemd-udevd[258325]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.518 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:17Z|00091|binding|INFO|Claiming lport f090189d-af3a-4961-83a9-d4a369167af0 for this chassis.
Jan 27 08:43:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:17Z|00092|binding|INFO|f090189d-af3a-4961-83a9-d4a369167af0: Claiming fa:16:3e:2d:44:57 10.100.0.14
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.523 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6709b7-95e0-4ce0-95be-b2ee4841dd76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 18015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258494, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:17 np0005597378 NetworkManager[48904]: <info>  [1769521397.5341] device (tapf090189d-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:43:17 np0005597378 NetworkManager[48904]: <info>  [1769521397.5346] device (tapf090189d-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:43:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:17Z|00093|binding|INFO|Setting lport f090189d-af3a-4961-83a9-d4a369167af0 ovn-installed in OVS
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.541 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.541 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ce606b-a427-4deb-b675-8445ec3eecce]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258499, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258499, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.543 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.548 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.549 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.549 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.550 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.550 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:17Z|00094|binding|INFO|Setting lport f090189d-af3a-4961-83a9-d4a369167af0 up in Southbound
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.569 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:44:57 10.100.0.14'], port_security=['fa:16:3e:2d:44:57 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aa157503-9eb6-44e1-9bdd-2c902a907faf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=f090189d-af3a-4961-83a9-d4a369167af0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.570 154802 INFO neutron.agent.ovn.metadata.agent [-] Port f090189d-af3a-4961-83a9-d4a369167af0 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.572 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.588 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[368a86e1-8640-47f0-acd9-b7a1cc4d9602]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.622 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0de543-d270-420b-ae69-5ea6dd685c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.625 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ac040c-3661-47ba-9421-528df6e5f8aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.632 238945 DEBUG nova.virt.libvirt.driver [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.632 238945 DEBUG nova.virt.libvirt.driver [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.632 238945 DEBUG nova.virt.libvirt.driver [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:10:94:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.632 238945 DEBUG nova.virt.libvirt.driver [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:2d:44:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.659 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[83c406f0-a7c2-4a25-958b-3af546d757ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.677 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[134a5638-8445-429e-a86e-180c35aa6710]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400354, 'reachable_time': 39003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258506, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.678 238945 DEBUG nova.virt.libvirt.guest [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1323128172</nova:name>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:43:17</nova:creationTime>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:43:17 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:    <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:    <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:    <nova:port uuid="57b7d200-69e2-4204-8382-ca897741aa3d">
Jan 27 08:43:17 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:    <nova:port uuid="f090189d-af3a-4961-83a9-d4a369167af0">
Jan 27 08:43:17 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:43:17 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:43:17 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:43:17 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.693 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f399e05-50fc-408d-9383-2b598f823ee9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400368, 'tstamp': 400368}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258507, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400372, 'tstamp': 400372}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258507, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.695 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.696 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.697 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.699 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.699 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.700 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:17.700 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:17 np0005597378 nova_compute[238941]: 2026-01-27 13:43:17.706 238945 DEBUG oslo_concurrency.lockutils [None req-602bcda8-6fc8-4199-855c-244e455111a6 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-aa157503-9eb6-44e1-9bdd-2c902a907faf-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:43:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:43:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 260 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 5.3 MiB/s wr, 91 op/s
Jan 27 08:43:19 np0005597378 nova_compute[238941]: 2026-01-27 13:43:19.440 238945 DEBUG nova.network.neutron [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Updated VIF entry in instance network info cache for port 3cd42161-aa97-4ecb-9e41-e7a887f02d7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:43:19 np0005597378 nova_compute[238941]: 2026-01-27 13:43:19.440 238945 DEBUG nova.network.neutron [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Updating instance_info_cache with network_info: [{"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:19 np0005597378 nova_compute[238941]: 2026-01-27 13:43:19.572 238945 DEBUG oslo_concurrency.lockutils [req-88fd6bab-ea3b-4fb1-b1f9-e40d895b507a req-a88686ae-2ea4-4659-aa9c-9cf4af795d14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:20Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:44:57 10.100.0.14
Jan 27 08:43:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:20Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:44:57 10.100.0.14
Jan 27 08:43:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 5.4 MiB/s wr, 111 op/s
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.464 238945 DEBUG nova.network.neutron [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Updated VIF entry in instance network info cache for port 67dffbe5-7a66-478a-b9a7-8042fe48ca17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.465 238945 DEBUG nova.network.neutron [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Updating instance_info_cache with network_info: [{"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.496 238945 DEBUG oslo_concurrency.lockutils [req-79406a06-c6af-409f-8f5c-375355f5692a req-f9115cca-b1a6-4f7a-a2c1-0731bd3675c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-a2bf4dff-c501-4c5d-8573-bba7ceabc549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.707 238945 DEBUG nova.compute.manager [req-f7be393b-6a07-4f63-9dbc-2fd444c1de79 req-bea331ce-b7be-47be-b09b-d064b1ad6670 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.708 238945 DEBUG oslo_concurrency.lockutils [req-f7be393b-6a07-4f63-9dbc-2fd444c1de79 req-bea331ce-b7be-47be-b09b-d064b1ad6670 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.708 238945 DEBUG oslo_concurrency.lockutils [req-f7be393b-6a07-4f63-9dbc-2fd444c1de79 req-bea331ce-b7be-47be-b09b-d064b1ad6670 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.708 238945 DEBUG oslo_concurrency.lockutils [req-f7be393b-6a07-4f63-9dbc-2fd444c1de79 req-bea331ce-b7be-47be-b09b-d064b1ad6670 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.708 238945 DEBUG nova.compute.manager [req-f7be393b-6a07-4f63-9dbc-2fd444c1de79 req-bea331ce-b7be-47be-b09b-d064b1ad6670 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.709 238945 WARNING nova.compute.manager [req-f7be393b-6a07-4f63-9dbc-2fd444c1de79 req-bea331ce-b7be-47be-b09b-d064b1ad6670 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received unexpected event network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.773 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-changed-f090189d-af3a-4961-83a9-d4a369167af0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.774 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Refreshing instance network info cache due to event network-changed-f090189d-af3a-4961-83a9-d4a369167af0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.774 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.774 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:20 np0005597378 nova_compute[238941]: 2026-01-27 13:43:20.774 238945 DEBUG nova.network.neutron [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Refreshing network info cache for port f090189d-af3a-4961-83a9-d4a369167af0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:43:21 np0005597378 nova_compute[238941]: 2026-01-27 13:43:21.709 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:43:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.7 MiB/s wr, 81 op/s
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.486 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.525 238945 DEBUG nova.network.neutron [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updated VIF entry in instance network info cache for port f090189d-af3a-4961-83a9-d4a369167af0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.526 238945 DEBUG nova.network.neutron [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.556 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.556 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.557 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.557 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.557 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.557 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Processing event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.557 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.557 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 WARNING nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.558 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.559 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.559 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Processing event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.559 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.559 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.559 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.559 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.560 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] No waiting events found dispatching network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.560 238945 WARNING nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received unexpected event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.560 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.560 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.561 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.561 238945 DEBUG oslo_concurrency.lockutils [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.561 238945 DEBUG nova.compute.manager [req-f5e55dba-7393-4454-ab18-458e698e96f0 req-af6143a3-a392-4b66-9e36-dc728afb87cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Processing event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.562 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance event wait completed in 9 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.563 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.563 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.568 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521402.5677962, a2bf4dff-c501-4c5d-8573-bba7ceabc549 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.568 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.571 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.571 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.572 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.582 238945 INFO nova.virt.libvirt.driver [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Instance spawned successfully.#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.583 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.589 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.591 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance spawned successfully.#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.591 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.595 238945 INFO nova.virt.libvirt.driver [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Instance spawned successfully.#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.595 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.598 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.643 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.644 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.644 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.645 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.645 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.646 238945 DEBUG nova.virt.libvirt.driver [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.651 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.652 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.652 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.653 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.653 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.654 238945 DEBUG nova.virt.libvirt.driver [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.667 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.668 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521402.5690322, bee7c432-6457-4160-917c-a807eca3df0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.668 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.675 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.675 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.676 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.676 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.676 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.677 238945 DEBUG nova.virt.libvirt.driver [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.823 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.833 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.894 238945 DEBUG oslo_concurrency.lockutils [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-aa157503-9eb6-44e1-9bdd-2c902a907faf-f090189d-af3a-4961-83a9-d4a369167af0" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.895 238945 DEBUG oslo_concurrency.lockutils [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-aa157503-9eb6-44e1-9bdd-2c902a907faf-f090189d-af3a-4961-83a9-d4a369167af0" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.904 238945 INFO nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Took 14.48 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.905 238945 DEBUG nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.917 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.917 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521402.569347, 4c52012f-9a4f-4599-adb0-2c658a054f91 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.918 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.933 238945 DEBUG nova.objects.instance [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid aa157503-9eb6-44e1-9bdd-2c902a907faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.946 238945 INFO nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Took 18.16 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.946 238945 DEBUG nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.955 238945 INFO nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Took 13.21 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:43:22 np0005597378 nova_compute[238941]: 2026-01-27 13:43:22.956 238945 DEBUG nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.034 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.038 238945 INFO nova.compute.manager [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Took 16.10 seconds to build instance.#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.042 238945 DEBUG nova.virt.libvirt.vif [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:42:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:42:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.043 238945 DEBUG nova.network.os_vif_util [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.043 238945 DEBUG nova.network.os_vif_util [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.044 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.047 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2d:44:57"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf090189d-af"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.050 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2d:44:57"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf090189d-af"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.052 238945 DEBUG nova.virt.libvirt.driver [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Attempting to detach device tapf090189d-af from instance aa157503-9eb6-44e1-9bdd-2c902a907faf from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.052 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] detach device xml: <interface type="ethernet">
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:2d:44:57"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <target dev="tapf090189d-af"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]: </interface>
Jan 27 08:43:23 np0005597378 nova_compute[238941]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.058 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2d:44:57"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf090189d-af"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.060 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2d:44:57"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf090189d-af"/></interface>not found in domain: <domain type='kvm' id='18'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <name>instance-00000010</name>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <uuid>aa157503-9eb6-44e1-9bdd-2c902a907faf</uuid>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1323128172</nova:name>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:43:17</nova:creationTime>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:port uuid="57b7d200-69e2-4204-8382-ca897741aa3d">
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:port uuid="f090189d-af3a-4961-83a9-d4a369167af0">
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:43:23 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <resource>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </resource>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <entry name='serial'>aa157503-9eb6-44e1-9bdd-2c902a907faf</entry>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <entry name='uuid'>aa157503-9eb6-44e1-9bdd-2c902a907faf</entry>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/aa157503-9eb6-44e1-9bdd-2c902a907faf_disk' index='2'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config' index='1'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:10:94:20'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target dev='tap57b7d200-69'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:2d:44:57'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target dev='tapf090189d-af'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='net1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <source path='/dev/pts/0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/console.log' append='off'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      </target>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/0'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <source path='/dev/pts/0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/console.log' append='off'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </console>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c345,c800</label>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c345,c800</imagelabel>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 08:43:23 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:43:23 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.063 238945 INFO nova.virt.libvirt.driver [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully detached device tapf090189d-af from instance aa157503-9eb6-44e1-9bdd-2c902a907faf from the persistent domain config.#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.063 238945 DEBUG nova.virt.libvirt.driver [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] (1/8): Attempting to detach device tapf090189d-af with device alias net1 from instance aa157503-9eb6-44e1-9bdd-2c902a907faf from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.063 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] detach device xml: <interface type="ethernet">
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:2d:44:57"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <target dev="tapf090189d-af"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]: </interface>
Jan 27 08:43:23 np0005597378 nova_compute[238941]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.153 238945 DEBUG oslo_concurrency.lockutils [None req-0d11d715-0500-4be3-a34b-f15fe94c349b 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.159 238945 INFO nova.compute.manager [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Took 14.89 seconds to build instance.#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.163 238945 INFO nova.compute.manager [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Took 19.36 seconds to build instance.#033[00m
Jan 27 08:43:23 np0005597378 kernel: tapf090189d-af (unregistering): left promiscuous mode
Jan 27 08:43:23 np0005597378 NetworkManager[48904]: <info>  [1769521403.2057] device (tapf090189d-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.211 238945 DEBUG oslo_concurrency.lockutils [None req-1c69eaad-d38d-48cc-ba8a-3a44043c36f4 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.212 238945 DEBUG oslo_concurrency.lockutils [None req-05408df5-bb13-4fe3-83a4-f82f81fd3e79 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:23 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:23Z|00095|binding|INFO|Releasing lport f090189d-af3a-4961-83a9-d4a369167af0 from this chassis (sb_readonly=0)
Jan 27 08:43:23 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:23Z|00096|binding|INFO|Setting lport f090189d-af3a-4961-83a9-d4a369167af0 down in Southbound
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.218 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:23 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:23Z|00097|binding|INFO|Removing iface tapf090189d-af ovn-installed in OVS
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.226 238945 DEBUG nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Received event <DeviceRemovedEvent: 1769521403.225291, aa157503-9eb6-44e1-9bdd-2c902a907faf => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.227 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:44:57 10.100.0.14'], port_security=['fa:16:3e:2d:44:57 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aa157503-9eb6-44e1-9bdd-2c902a907faf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=f090189d-af3a-4961-83a9-d4a369167af0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.228 238945 DEBUG nova.virt.libvirt.driver [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Start waiting for the detach event from libvirt for device tapf090189d-af with device alias net1 for instance aa157503-9eb6-44e1-9bdd-2c902a907faf _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.228 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2d:44:57"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf090189d-af"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.229 154802 INFO neutron.agent.ovn.metadata.agent [-] Port f090189d-af3a-4961-83a9-d4a369167af0 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.231 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.238 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2d:44:57"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf090189d-af"/></interface>not found in domain: <domain type='kvm' id='18'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <name>instance-00000010</name>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <uuid>aa157503-9eb6-44e1-9bdd-2c902a907faf</uuid>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1323128172</nova:name>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:43:17</nova:creationTime>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:port uuid="57b7d200-69e2-4204-8382-ca897741aa3d">
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:port uuid="f090189d-af3a-4961-83a9-d4a369167af0">
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:43:23 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <resource>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </resource>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <entry name='serial'>aa157503-9eb6-44e1-9bdd-2c902a907faf</entry>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <entry name='uuid'>aa157503-9eb6-44e1-9bdd-2c902a907faf</entry>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/aa157503-9eb6-44e1-9bdd-2c902a907faf_disk' index='2'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/aa157503-9eb6-44e1-9bdd-2c902a907faf_disk.config' index='1'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:10:94:20'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target dev='tap57b7d200-69'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <source path='/dev/pts/0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/console.log' append='off'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      </target>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/0'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <source path='/dev/pts/0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf/console.log' append='off'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </console>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c345,c800</label>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c345,c800</imagelabel>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 08:43:23 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:43:23 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.239 238945 INFO nova.virt.libvirt.driver [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully detached device tapf090189d-af from instance aa157503-9eb6-44e1-9bdd-2c902a907faf from the live domain config.#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.240 238945 DEBUG nova.virt.libvirt.vif [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:42:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:42:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.240 238945 DEBUG nova.network.os_vif_util [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "f090189d-af3a-4961-83a9-d4a369167af0", "address": "fa:16:3e:2d:44:57", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf090189d-af", "ovs_interfaceid": "f090189d-af3a-4961-83a9-d4a369167af0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.241 238945 DEBUG nova.network.os_vif_util [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.241 238945 DEBUG os_vif [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.244 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf090189d-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.245 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.246 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.249 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.251 238945 INFO os_vif [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:44:57,bridge_name='br-int',has_traffic_filtering=True,id=f090189d-af3a-4961-83a9-d4a369167af0,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf090189d-af')#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.252 238945 DEBUG nova.virt.libvirt.guest [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1323128172</nova:name>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:43:23</nova:creationTime>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    <nova:port uuid="57b7d200-69e2-4204-8382-ca897741aa3d">
Jan 27 08:43:23 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:43:23 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:43:23 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:43:23 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.253 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[09ac394c-b92b-47ad-8412-c8269c077494]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.293 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a75a04df-ed59-4903-b865-1ce2c92670b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.297 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0e799f3e-92ba-42bd-9134-a9cfb0e5488a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.346 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe0ede2-65e1-448c-bc99-5ab7991fa96f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.379 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1322fd-4878-4895-9173-cc5a7cce131b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400354, 'reachable_time': 39003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258519, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.404 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[282ffb5c-0f40-401e-b523-1cde4eb13717]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400368, 'tstamp': 400368}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258520, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400372, 'tstamp': 400372}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258520, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.407 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.409 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.411 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.412 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.413 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:23.413 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.493 238945 DEBUG nova.compute.manager [req-7805edb6-b01e-42c5-8dd5-84881619cbf1 req-68d39ec6-7f37-4273-b508-f3d2b26926ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.494 238945 DEBUG oslo_concurrency.lockutils [req-7805edb6-b01e-42c5-8dd5-84881619cbf1 req-68d39ec6-7f37-4273-b508-f3d2b26926ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.494 238945 DEBUG oslo_concurrency.lockutils [req-7805edb6-b01e-42c5-8dd5-84881619cbf1 req-68d39ec6-7f37-4273-b508-f3d2b26926ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.494 238945 DEBUG oslo_concurrency.lockutils [req-7805edb6-b01e-42c5-8dd5-84881619cbf1 req-68d39ec6-7f37-4273-b508-f3d2b26926ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.495 238945 DEBUG nova.compute.manager [req-7805edb6-b01e-42c5-8dd5-84881619cbf1 req-68d39ec6-7f37-4273-b508-f3d2b26926ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.495 238945 WARNING nova.compute.manager [req-7805edb6-b01e-42c5-8dd5-84881619cbf1 req-68d39ec6-7f37-4273-b508-f3d2b26926ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received unexpected event network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.579 238945 DEBUG nova.compute.manager [req-53a42d9a-4587-459e-a1c4-46d1fb9fd7d7 req-5550543f-6018-4fbd-99be-b0d6e6b50b76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.580 238945 DEBUG oslo_concurrency.lockutils [req-53a42d9a-4587-459e-a1c4-46d1fb9fd7d7 req-5550543f-6018-4fbd-99be-b0d6e6b50b76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.580 238945 DEBUG oslo_concurrency.lockutils [req-53a42d9a-4587-459e-a1c4-46d1fb9fd7d7 req-5550543f-6018-4fbd-99be-b0d6e6b50b76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.581 238945 DEBUG oslo_concurrency.lockutils [req-53a42d9a-4587-459e-a1c4-46d1fb9fd7d7 req-5550543f-6018-4fbd-99be-b0d6e6b50b76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.581 238945 DEBUG nova.compute.manager [req-53a42d9a-4587-459e-a1c4-46d1fb9fd7d7 req-5550543f-6018-4fbd-99be-b0d6e6b50b76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] No waiting events found dispatching network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.581 238945 WARNING nova.compute.manager [req-53a42d9a-4587-459e-a1c4-46d1fb9fd7d7 req-5550543f-6018-4fbd-99be-b0d6e6b50b76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received unexpected event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c for instance with vm_state active and task_state None.#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.919 238945 DEBUG oslo_concurrency.lockutils [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.919 238945 DEBUG oslo_concurrency.lockutils [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:23 np0005597378 nova_compute[238941]: 2026-01-27 13:43:23.919 238945 DEBUG nova.network.neutron [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:43:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 123 op/s
Jan 27 08:43:25 np0005597378 nova_compute[238941]: 2026-01-27 13:43:25.728 238945 INFO nova.network.neutron [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Port f090189d-af3a-4961-83a9-d4a369167af0 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 27 08:43:25 np0005597378 nova_compute[238941]: 2026-01-27 13:43:25.729 238945 DEBUG nova.network.neutron [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [{"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:25 np0005597378 nova_compute[238941]: 2026-01-27 13:43:25.763 238945 DEBUG oslo_concurrency.lockutils [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-aa157503-9eb6-44e1-9bdd-2c902a907faf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:25 np0005597378 nova_compute[238941]: 2026-01-27 13:43:25.793 238945 DEBUG oslo_concurrency.lockutils [None req-5778050c-5683-46aa-b056-dacac3daa257 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-aa157503-9eb6-44e1-9bdd-2c902a907faf-f090189d-af3a-4961-83a9-d4a369167af0" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.013 238945 DEBUG nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-unplugged-f090189d-af3a-4961-83a9-d4a369167af0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.014 238945 DEBUG oslo_concurrency.lockutils [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.014 238945 DEBUG oslo_concurrency.lockutils [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.014 238945 DEBUG oslo_concurrency.lockutils [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.015 238945 DEBUG nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-unplugged-f090189d-af3a-4961-83a9-d4a369167af0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.015 238945 WARNING nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received unexpected event network-vif-unplugged-f090189d-af3a-4961-83a9-d4a369167af0 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.015 238945 DEBUG nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.015 238945 DEBUG oslo_concurrency.lockutils [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.016 238945 DEBUG oslo_concurrency.lockutils [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.016 238945 DEBUG oslo_concurrency.lockutils [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.016 238945 DEBUG nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.017 238945 WARNING nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received unexpected event network-vif-plugged-f090189d-af3a-4961-83a9-d4a369167af0 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.017 238945 DEBUG nova.compute.manager [req-68f1c1de-d77b-47c3-8c7d-d4fc17007756 req-75629e20-e98a-438a-8368-99e3158e8a3b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-deleted-f090189d-af3a-4961-83a9-d4a369167af0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 787 KiB/s wr, 143 op/s
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.947 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.948 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.948 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.949 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.949 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.950 238945 INFO nova.compute.manager [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Terminating instance#033[00m
Jan 27 08:43:26 np0005597378 nova_compute[238941]: 2026-01-27 13:43:26.952 238945 DEBUG nova.compute.manager [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:43:27 np0005597378 kernel: tap57b7d200-69 (unregistering): left promiscuous mode
Jan 27 08:43:27 np0005597378 NetworkManager[48904]: <info>  [1769521407.0460] device (tap57b7d200-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:43:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:27Z|00098|binding|INFO|Releasing lport 57b7d200-69e2-4204-8382-ca897741aa3d from this chassis (sb_readonly=0)
Jan 27 08:43:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:27Z|00099|binding|INFO|Setting lport 57b7d200-69e2-4204-8382-ca897741aa3d down in Southbound
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.061 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:27Z|00100|binding|INFO|Removing iface tap57b7d200-69 ovn-installed in OVS
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:27.076 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:94:20 10.100.0.6'], port_security=['fa:16:3e:10:94:20 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'aa157503-9eb6-44e1-9bdd-2c902a907faf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '444bd80b-15fe-4cfe-971b-457370ed22f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=57b7d200-69e2-4204-8382-ca897741aa3d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:43:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:27.077 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 57b7d200-69e2-4204-8382-ca897741aa3d in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis#033[00m
Jan 27 08:43:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:27.080 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:43:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:27.081 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[67329c2b-edd0-4fbf-8316-bbba2950d56e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:27.082 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 namespace which is not needed anymore#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.091 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:27 np0005597378 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Deactivated successfully.
Jan 27 08:43:27 np0005597378 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Consumed 15.089s CPU time.
Jan 27 08:43:27 np0005597378 systemd-machined[207425]: Machine qemu-18-instance-00000010 terminated.
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.184 238945 INFO nova.virt.libvirt.driver [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Instance destroyed successfully.#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.185 238945 DEBUG nova.objects.instance [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'resources' on Instance uuid aa157503-9eb6-44e1-9bdd-2c902a907faf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.199 238945 DEBUG nova.virt.libvirt.vif [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:42:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1323128172',display_name='tempest-AttachInterfacesTestJSON-server-1323128172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1323128172',id=16,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSjFYO469v4PX+cKFEKK4kSl16LobIEboONHMYaGiFq6qrJvZMHnL/K8qFmN3+sIrWK4CeKCk0a/RT8KsOcSNg7EqAMADFh1fU4+5gdg64PUA5ENyxoqzBrdGGrkVM5kw==',key_name='tempest-keypair-1778141518',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:42:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-l0ll9j6a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:42:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=aa157503-9eb6-44e1-9bdd-2c902a907faf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.200 238945 DEBUG nova.network.os_vif_util [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "57b7d200-69e2-4204-8382-ca897741aa3d", "address": "fa:16:3e:10:94:20", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b7d200-69", "ovs_interfaceid": "57b7d200-69e2-4204-8382-ca897741aa3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.201 238945 DEBUG nova.network.os_vif_util [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.202 238945 DEBUG os_vif [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.204 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.204 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57b7d200-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.206 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.208 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.208 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.210 238945 INFO os_vif [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:94:20,bridge_name='br-int',has_traffic_filtering=True,id=57b7d200-69e2-4204-8382-ca897741aa3d,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b7d200-69')#033[00m
Jan 27 08:43:27 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [NOTICE]   (257092) : haproxy version is 2.8.14-c23fe91
Jan 27 08:43:27 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [NOTICE]   (257092) : path to executable is /usr/sbin/haproxy
Jan 27 08:43:27 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [WARNING]  (257092) : Exiting Master process...
Jan 27 08:43:27 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [ALERT]    (257092) : Current worker (257094) exited with code 143 (Terminated)
Jan 27 08:43:27 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[257082]: [WARNING]  (257092) : All workers exited. Exiting... (0)
Jan 27 08:43:27 np0005597378 systemd[1]: libpod-d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0.scope: Deactivated successfully.
Jan 27 08:43:27 np0005597378 podman[258544]: 2026-01-27 13:43:27.243097868 +0000 UTC m=+0.070137068 container died d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018077647304864043 of space, bias 1.0, pg target 0.5423294191459213 quantized to 32 (current 32)
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006673708475605619 of space, bias 1.0, pg target 0.20021125426816858 quantized to 32 (current 32)
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0077326280837678e-06 of space, bias 4.0, pg target 0.0012092791537005214 quantized to 16 (current 16)
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:43:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:43:27 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0-userdata-shm.mount: Deactivated successfully.
Jan 27 08:43:27 np0005597378 systemd[1]: var-lib-containers-storage-overlay-96914522e435965eb97e058d8e426ded3cff39d5505b4a5a3ed36b244f04c669-merged.mount: Deactivated successfully.
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.582 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "677a728d-1d2a-4e11-909d-c2c91838cfbe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.583 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.603 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.686 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.686 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.698 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.699 238945 INFO nova.compute.claims [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:43:27 np0005597378 podman[258544]: 2026-01-27 13:43:27.745929727 +0000 UTC m=+0.572968927 container cleanup d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:43:27 np0005597378 systemd[1]: libpod-conmon-d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0.scope: Deactivated successfully.
Jan 27 08:43:27 np0005597378 nova_compute[238941]: 2026-01-27 13:43:27.941 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:28 np0005597378 podman[258603]: 2026-01-27 13:43:28.138915646 +0000 UTC m=+0.368360855 container remove d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:43:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.145 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[29e1c672-8f23-43f5-b3f7-6bfc3b7ac4c6]: (4, ('Tue Jan 27 01:43:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 (d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0)\nd59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0\nTue Jan 27 01:43:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 (d59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0)\nd59532df501226db215ba23c3680efd35b1f48369715fb95ce10a9489fcd4ae0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.146 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[943f1aaa-27b0-4de6-9a6f-66abac548082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.147 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:28 np0005597378 kernel: tapee180809-30: left promiscuous mode
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.153 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 260 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 27 KiB/s wr, 122 op/s
Jan 27 08:43:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.154 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5401d0cc-af3d-487b-972e-41330f0a1c3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.167 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[69a8dde0-43dd-4a41-8fb7-b33feb46a8f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.168 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f668eb61-2e43-48ba-b559-310d0c7546e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.181 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.190 238945 DEBUG nova.compute.manager [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-unplugged-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.191 238945 DEBUG oslo_concurrency.lockutils [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.192 238945 DEBUG oslo_concurrency.lockutils [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.192 238945 DEBUG oslo_concurrency.lockutils [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.193 238945 DEBUG nova.compute.manager [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-unplugged-57b7d200-69e2-4204-8382-ca897741aa3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.193 238945 DEBUG nova.compute.manager [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-unplugged-57b7d200-69e2-4204-8382-ca897741aa3d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.194 238945 DEBUG nova.compute.manager [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.194 238945 DEBUG oslo_concurrency.lockutils [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.194 238945 DEBUG oslo_concurrency.lockutils [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.195 238945 DEBUG oslo_concurrency.lockutils [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.195 238945 DEBUG nova.compute.manager [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] No waiting events found dispatching network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.195 238945 WARNING nova.compute.manager [req-15ba7c18-0548-4fcd-83d6-e73e4d1fe405 req-57133dcc-08c0-4ffd-8a37-14fc2d00544b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received unexpected event network-vif-plugged-57b7d200-69e2-4204-8382-ca897741aa3d for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:43:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.197 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[da811906-06e4-4ec0-88a6-ae8c52e2be1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400345, 'reachable_time': 42439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258638, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:28 np0005597378 systemd[1]: run-netns-ovnmeta\x2dee180809\x2d3e36\x2d46bd\x2dba3a\x2d3bacc6f9ce96.mount: Deactivated successfully.
Jan 27 08:43:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.203 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:43:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:28.203 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d4befff5-c928-4b43-b180-e233ffd40aac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:43:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174087942' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.524 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.529 238945 DEBUG nova.compute.provider_tree [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.573 238945 DEBUG nova.scheduler.client.report [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.725 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.726 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.867 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.869 238945 DEBUG nova.network.neutron [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.903 238945 INFO nova.virt.libvirt.driver [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Deleting instance files /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf_del#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.904 238945 INFO nova.virt.libvirt.driver [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Deletion of /var/lib/nova/instances/aa157503-9eb6-44e1-9bdd-2c902a907faf_del complete#033[00m
Jan 27 08:43:28 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.974 238945 INFO nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:28.999 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.004 238945 INFO nova.compute.manager [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Took 2.05 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.005 238945 DEBUG oslo.service.loopingcall [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.006 238945 DEBUG nova.compute.manager [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.006 238945 DEBUG nova.network.neutron [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.081 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.082 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.083 238945 INFO nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Creating image(s)#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.108 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.130 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.158 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.162 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.190 238945 DEBUG nova.policy [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97755bdfdc1140aa970fa69a04baeb3c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.222 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.223 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.223 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.224 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.243 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.246 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.491 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.546 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] resizing rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.631 238945 DEBUG nova.objects.instance [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'migration_context' on Instance uuid 677a728d-1d2a-4e11-909d-c2c91838cfbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.649 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.649 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Ensure instance console log exists: /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.650 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.650 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:29 np0005597378 nova_compute[238941]: 2026-01-27 13:43:29.650 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:30 np0005597378 nova_compute[238941]: 2026-01-27 13:43:30.113 238945 DEBUG nova.network.neutron [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Successfully created port: 5f5812b1-ad53-4ee5-8409-ce2c112fa95a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:43:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 196 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 759 KiB/s wr, 238 op/s
Jan 27 08:43:30 np0005597378 nova_compute[238941]: 2026-01-27 13:43:30.293 238945 DEBUG nova.network.neutron [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:30 np0005597378 nova_compute[238941]: 2026-01-27 13:43:30.328 238945 INFO nova.compute.manager [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Took 1.32 seconds to deallocate network for instance.#033[00m
Jan 27 08:43:30 np0005597378 nova_compute[238941]: 2026-01-27 13:43:30.380 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:30 np0005597378 nova_compute[238941]: 2026-01-27 13:43:30.383 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:30 np0005597378 nova_compute[238941]: 2026-01-27 13:43:30.396 238945 DEBUG nova.compute.manager [req-a26ba799-f0df-4e56-a206-1beace31aee6 req-c6616e8f-d824-4c16-adc3-c603a4ed1b92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Received event network-vif-deleted-57b7d200-69e2-4204-8382-ca897741aa3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:30 np0005597378 nova_compute[238941]: 2026-01-27 13:43:30.514 238945 DEBUG oslo_concurrency.processutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:43:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4036449516' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.101 238945 DEBUG oslo_concurrency.processutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.108 238945 DEBUG nova.compute.provider_tree [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.125 238945 DEBUG nova.scheduler.client.report [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.163 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.187 238945 INFO nova.scheduler.client.report [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Deleted allocations for instance aa157503-9eb6-44e1-9bdd-2c902a907faf#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.261 238945 DEBUG oslo_concurrency.lockutils [None req-9f975766-eecf-4ae3-8651-06956f6c6779 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "aa157503-9eb6-44e1-9bdd-2c902a907faf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.313s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.411 238945 DEBUG nova.compute.manager [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.466 238945 INFO nova.compute.manager [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] instance snapshotting#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.496 238945 DEBUG nova.network.neutron [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Successfully updated port: 5f5812b1-ad53-4ee5-8409-ce2c112fa95a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.515 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "refresh_cache-677a728d-1d2a-4e11-909d-c2c91838cfbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.516 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquired lock "refresh_cache-677a728d-1d2a-4e11-909d-c2c91838cfbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.516 238945 DEBUG nova.network.neutron [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.598 238945 DEBUG nova.compute.manager [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Received event network-changed-5f5812b1-ad53-4ee5-8409-ce2c112fa95a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.601 238945 DEBUG nova.compute.manager [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Refreshing instance network info cache due to event network-changed-5f5812b1-ad53-4ee5-8409-ce2c112fa95a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.601 238945 DEBUG oslo_concurrency.lockutils [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-677a728d-1d2a-4e11-909d-c2c91838cfbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.662 238945 DEBUG nova.network.neutron [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:43:31 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:31Z|00101|binding|INFO|Releasing lport f2abaf39-2261-4bb7-9bb5-6208083120f8 from this chassis (sb_readonly=0)
Jan 27 08:43:31 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:31Z|00102|binding|INFO|Releasing lport 42783ab6-7560-4ef7-b70e-aaa544a1d882 from this chassis (sb_readonly=0)
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.748 238945 INFO nova.virt.libvirt.driver [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Beginning live snapshot process#033[00m
Jan 27 08:43:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.905 238945 DEBUG nova.virt.libvirt.imagebackend [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 08:43:31 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:31Z|00103|binding|INFO|Releasing lport f2abaf39-2261-4bb7-9bb5-6208083120f8 from this chassis (sb_readonly=0)
Jan 27 08:43:31 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:31Z|00104|binding|INFO|Releasing lport 42783ab6-7560-4ef7-b70e-aaa544a1d882 from this chassis (sb_readonly=0)
Jan 27 08:43:31 np0005597378 nova_compute[238941]: 2026-01-27 13:43:31.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 196 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 733 KiB/s wr, 218 op/s
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.197 238945 DEBUG nova.storage.rbd_utils [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] creating snapshot(0df130ea3f75462e87d09cd98177a5f1) on rbd image(a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 27 08:43:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 27 08:43:32 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.648 238945 DEBUG nova.storage.rbd_utils [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] cloning vms/a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk@0df130ea3f75462e87d09cd98177a5f1 to images/48432db4-e7ca-4ebf-805f-6f33cb2760bc clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.695 238945 DEBUG nova.network.neutron [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Updating instance_info_cache with network_info: [{"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.716 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Releasing lock "refresh_cache-677a728d-1d2a-4e11-909d-c2c91838cfbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.717 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Instance network_info: |[{"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.717 238945 DEBUG oslo_concurrency.lockutils [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-677a728d-1d2a-4e11-909d-c2c91838cfbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.718 238945 DEBUG nova.network.neutron [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Refreshing network info cache for port 5f5812b1-ad53-4ee5-8409-ce2c112fa95a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.722 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Start _get_guest_xml network_info=[{"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.729 238945 WARNING nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.734 238945 DEBUG nova.virt.libvirt.host [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.735 238945 DEBUG nova.virt.libvirt.host [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.743 238945 DEBUG nova.virt.libvirt.host [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.744 238945 DEBUG nova.virt.libvirt.host [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.745 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.746 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.749 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.749 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.750 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.750 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.750 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.751 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.751 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.752 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.753 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.753 238945 DEBUG nova.virt.hardware [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.760 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:32 np0005597378 nova_compute[238941]: 2026-01-27 13:43:32.802 238945 DEBUG nova.storage.rbd_utils [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] flattening images/48432db4-e7ca-4ebf-805f-6f33cb2760bc flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:43:33 np0005597378 nova_compute[238941]: 2026-01-27 13:43:33.055 238945 DEBUG nova.storage.rbd_utils [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] removing snapshot(0df130ea3f75462e87d09cd98177a5f1) on rbd image(a2bf4dff-c501-4c5d-8573-bba7ceabc549_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:43:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/267476038' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:33 np0005597378 nova_compute[238941]: 2026-01-27 13:43:33.368 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:33 np0005597378 nova_compute[238941]: 2026-01-27 13:43:33.392 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:33 np0005597378 nova_compute[238941]: 2026-01-27 13:43:33.399 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 27 08:43:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 27 08:43:33 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 27 08:43:33 np0005597378 nova_compute[238941]: 2026-01-27 13:43:33.660 238945 DEBUG nova.storage.rbd_utils [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] creating snapshot(snap) on rbd image(48432db4-e7ca-4ebf-805f-6f33cb2760bc) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:43:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876881591' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.054 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.055 238945 DEBUG nova.virt.libvirt.vif [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2040609420',display_name='tempest-ServersAdminTestJSON-server-2040609420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2040609420',id=20,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-s9uhvpm0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:29Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=677a728d-1d2a-4e11-909d-c2c91838cfbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.056 238945 DEBUG nova.network.os_vif_util [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.057 238945 DEBUG nova.network.os_vif_util [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.058 238945 DEBUG nova.objects.instance [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 677a728d-1d2a-4e11-909d-c2c91838cfbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.074 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  <uuid>677a728d-1d2a-4e11-909d-c2c91838cfbe</uuid>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  <name>instance-00000014</name>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersAdminTestJSON-server-2040609420</nova:name>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:43:32</nova:creationTime>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:        <nova:user uuid="97755bdfdc1140aa970fa69a04baeb3c">tempest-ServersAdminTestJSON-2123092478-project-member</nova:user>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:        <nova:project uuid="c02e06ff150d4463ba12a3be444a4ae3">tempest-ServersAdminTestJSON-2123092478</nova:project>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:        <nova:port uuid="5f5812b1-ad53-4ee5-8409-ce2c112fa95a">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <entry name="serial">677a728d-1d2a-4e11-909d-c2c91838cfbe</entry>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <entry name="uuid">677a728d-1d2a-4e11-909d-c2c91838cfbe</entry>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/677a728d-1d2a-4e11-909d-c2c91838cfbe_disk">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/677a728d-1d2a-4e11-909d-c2c91838cfbe_disk.config">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:3d:fa:85"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <target dev="tap5f5812b1-ad"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/console.log" append="off"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:43:34 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:43:34 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:43:34 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:43:34 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.074 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Preparing to wait for external event network-vif-plugged-5f5812b1-ad53-4ee5-8409-ce2c112fa95a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.075 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.075 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.075 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.076 238945 DEBUG nova.virt.libvirt.vif [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2040609420',display_name='tempest-ServersAdminTestJSON-server-2040609420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2040609420',id=20,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-s9uhvpm0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:29Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=677a728d-1d2a-4e11-909d-c2c91838cfbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.076 238945 DEBUG nova.network.os_vif_util [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.077 238945 DEBUG nova.network.os_vif_util [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.077 238945 DEBUG os_vif [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.077 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.078 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.078 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.082 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.082 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f5812b1-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.083 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5f5812b1-ad, col_values=(('external_ids', {'iface-id': '5f5812b1-ad53-4ee5-8409-ce2c112fa95a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:fa:85', 'vm-uuid': '677a728d-1d2a-4e11-909d-c2c91838cfbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.084 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:34 np0005597378 NetworkManager[48904]: <info>  [1769521414.0856] manager: (tap5f5812b1-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.087 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.091 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.091 238945 INFO os_vif [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad')#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.148 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.148 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.148 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No VIF found with MAC fa:16:3e:3d:fa:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.149 238945 INFO nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Using config drive#033[00m
Jan 27 08:43:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 262 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 4.8 MiB/s wr, 272 op/s
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.177 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 27 08:43:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 27 08:43:34 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.676 238945 INFO nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Creating config drive at /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/disk.config#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.682 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph1_p6ys5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.815 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph1_p6ys5" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.840 238945 DEBUG nova.storage.rbd_utils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.850 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/disk.config 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 48432db4-e7ca-4ebf-805f-6f33cb2760bc could not be found.
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 48432db4-e7ca-4ebf-805f-6f33cb2760bc
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver 
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver 
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 48432db4-e7ca-4ebf-805f-6f33cb2760bc could not be found.
Jan 27 08:43:34 np0005597378 nova_compute[238941]: 2026-01-27 13:43:34.958 238945 ERROR nova.virt.libvirt.driver #033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.019 238945 DEBUG nova.network.neutron [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Updated VIF entry in instance network info cache for port 5f5812b1-ad53-4ee5-8409-ce2c112fa95a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.020 238945 DEBUG nova.network.neutron [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Updating instance_info_cache with network_info: [{"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.023 238945 DEBUG oslo_concurrency.processutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/disk.config 677a728d-1d2a-4e11-909d-c2c91838cfbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.023 238945 INFO nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Deleting local config drive /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe/disk.config because it was imported into RBD.#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.042 238945 DEBUG oslo_concurrency.lockutils [req-5f4d7eb0-39f1-48dc-8edf-313b877902e2 req-f02065ca-f091-4479-83aa-8d68ad248930 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-677a728d-1d2a-4e11-909d-c2c91838cfbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.043 238945 DEBUG nova.storage.rbd_utils [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] removing snapshot(snap) on rbd image(48432db4-e7ca-4ebf-805f-6f33cb2760bc) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:43:35 np0005597378 NetworkManager[48904]: <info>  [1769521415.0769] manager: (tap5f5812b1-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Jan 27 08:43:35 np0005597378 kernel: tap5f5812b1-ad: entered promiscuous mode
Jan 27 08:43:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:35Z|00105|binding|INFO|Claiming lport 5f5812b1-ad53-4ee5-8409-ce2c112fa95a for this chassis.
Jan 27 08:43:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:35Z|00106|binding|INFO|5f5812b1-ad53-4ee5-8409-ce2c112fa95a: Claiming fa:16:3e:3d:fa:85 10.100.0.4
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.085 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.094 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:fa:85 10.100.0.4'], port_security=['fa:16:3e:3d:fa:85 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '677a728d-1d2a-4e11-909d-c2c91838cfbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5f5812b1-ad53-4ee5-8409-ce2c112fa95a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.098 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5f5812b1-ad53-4ee5-8409-ce2c112fa95a in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef bound to our chassis#033[00m
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.100 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef#033[00m
Jan 27 08:43:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:35Z|00107|binding|INFO|Setting lport 5f5812b1-ad53-4ee5-8409-ce2c112fa95a ovn-installed in OVS
Jan 27 08:43:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:35Z|00108|binding|INFO|Setting lport 5f5812b1-ad53-4ee5-8409-ce2c112fa95a up in Southbound
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:35 np0005597378 systemd-udevd[259126]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.116 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:35 np0005597378 systemd-machined[207425]: New machine qemu-22-instance-00000014.
Jan 27 08:43:35 np0005597378 NetworkManager[48904]: <info>  [1769521415.1279] device (tap5f5812b1-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:43:35 np0005597378 NetworkManager[48904]: <info>  [1769521415.1293] device (tap5f5812b1-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:43:35 np0005597378 systemd[1]: Started Virtual Machine qemu-22-instance-00000014.
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.129 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b70e7c1-759d-4814-a75c-5f0b55f84722]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.171 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dc19c9a2-acf7-4f4e-a311-f8f560be5ef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.174 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[da9f3919-b9d7-4ef7-a6b9-54f61785d2ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.210 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[39b69481-82fd-496e-9e67-de0e07a57aad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.231 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[43829125-5f57-456b-aea3-8899854a526e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 18015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259139, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.252 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[49b0c375-5ec4-485a-bfe9-fff06a9f0d7f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259140, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259140, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.254 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.256 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.258 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.259 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.259 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:35.260 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.342 238945 DEBUG nova.compute.manager [req-2e3773ee-7075-4ff3-9142-7779e7fe592b req-e70696ff-fb3d-4926-b655-84a624ca1c67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Received event network-vif-plugged-5f5812b1-ad53-4ee5-8409-ce2c112fa95a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.342 238945 DEBUG oslo_concurrency.lockutils [req-2e3773ee-7075-4ff3-9142-7779e7fe592b req-e70696ff-fb3d-4926-b655-84a624ca1c67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.343 238945 DEBUG oslo_concurrency.lockutils [req-2e3773ee-7075-4ff3-9142-7779e7fe592b req-e70696ff-fb3d-4926-b655-84a624ca1c67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.343 238945 DEBUG oslo_concurrency.lockutils [req-2e3773ee-7075-4ff3-9142-7779e7fe592b req-e70696ff-fb3d-4926-b655-84a624ca1c67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.343 238945 DEBUG nova.compute.manager [req-2e3773ee-7075-4ff3-9142-7779e7fe592b req-e70696ff-fb3d-4926-b655-84a624ca1c67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Processing event network-vif-plugged-5f5812b1-ad53-4ee5-8409-ce2c112fa95a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:43:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 27 08:43:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 27 08:43:35 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.694 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521415.6831267, 677a728d-1d2a-4e11-909d-c2c91838cfbe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.694 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] VM Started (Lifecycle Event)#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.696 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.704 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.707 238945 INFO nova.virt.libvirt.driver [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Instance spawned successfully.#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.708 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.724 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.731 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.735 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.735 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.735 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.736 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.736 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.736 238945 DEBUG nova.virt.libvirt.driver [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.762 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.762 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521415.6832376, 677a728d-1d2a-4e11-909d-c2c91838cfbe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.762 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.787 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.790 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521415.7038314, 677a728d-1d2a-4e11-909d-c2c91838cfbe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.791 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.796 238945 INFO nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Took 6.72 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.797 238945 DEBUG nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.806 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.810 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.846 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.861 238945 INFO nova.compute.manager [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Took 8.21 seconds to build instance.#033[00m
Jan 27 08:43:35 np0005597378 nova_compute[238941]: 2026-01-27 13:43:35.876 238945 DEBUG oslo_concurrency.lockutils [None req-db9abefd-f9ac-4efc-95e6-2d7c62840d2f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 273 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 8.5 MiB/s wr, 246 op/s
Jan 27 08:43:36 np0005597378 nova_compute[238941]: 2026-01-27 13:43:36.357 238945 WARNING nova.compute.manager [None req-dcf5ba65-d3de-4325-9ffd-e4b1f022270f bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Image not found during snapshot: nova.exception.ImageNotFound: Image 48432db4-e7ca-4ebf-805f-6f33cb2760bc could not be found.#033[00m
Jan 27 08:43:36 np0005597378 nova_compute[238941]: 2026-01-27 13:43:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:43:36 np0005597378 nova_compute[238941]: 2026-01-27 13:43:36.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:43:37 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:37Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:48:4c 10.100.0.14
Jan 27 08:43:37 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:37Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:48:4c 10.100.0.14
Jan 27 08:43:37 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:37Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:f2:78 10.100.0.4
Jan 27 08:43:37 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:37Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:f2:78 10.100.0.4
Jan 27 08:43:37 np0005597378 podman[259202]: 2026-01-27 13:43:37.799204169 +0000 UTC m=+0.129879384 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.047 238945 DEBUG nova.compute.manager [req-5ad9a642-e610-4350-ba71-e9dbe387916a req-3cf3dac5-f54b-4885-8747-84c0aa4d9eee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Received event network-vif-plugged-5f5812b1-ad53-4ee5-8409-ce2c112fa95a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.048 238945 DEBUG oslo_concurrency.lockutils [req-5ad9a642-e610-4350-ba71-e9dbe387916a req-3cf3dac5-f54b-4885-8747-84c0aa4d9eee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.048 238945 DEBUG oslo_concurrency.lockutils [req-5ad9a642-e610-4350-ba71-e9dbe387916a req-3cf3dac5-f54b-4885-8747-84c0aa4d9eee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.048 238945 DEBUG oslo_concurrency.lockutils [req-5ad9a642-e610-4350-ba71-e9dbe387916a req-3cf3dac5-f54b-4885-8747-84c0aa4d9eee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.049 238945 DEBUG nova.compute.manager [req-5ad9a642-e610-4350-ba71-e9dbe387916a req-3cf3dac5-f54b-4885-8747-84c0aa4d9eee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] No waiting events found dispatching network-vif-plugged-5f5812b1-ad53-4ee5-8409-ce2c112fa95a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.049 238945 WARNING nova.compute.manager [req-5ad9a642-e610-4350-ba71-e9dbe387916a req-3cf3dac5-f54b-4885-8747-84c0aa4d9eee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Received unexpected event network-vif-plugged-5f5812b1-ad53-4ee5-8409-ce2c112fa95a for instance with vm_state active and task_state None.#033[00m
Jan 27 08:43:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 273 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 6.1 MiB/s wr, 177 op/s
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:43:38 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:38Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 08:43:38 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:38Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.530 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.531 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.531 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.532 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.532 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.533 238945 INFO nova.compute.manager [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Terminating instance#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.534 238945 DEBUG nova.compute.manager [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:43:38 np0005597378 kernel: tap67dffbe5-7a (unregistering): left promiscuous mode
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.591 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:38 np0005597378 NetworkManager[48904]: <info>  [1769521418.5921] device (tap67dffbe5-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:43:38 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:38Z|00109|binding|INFO|Releasing lport 67dffbe5-7a66-478a-b9a7-8042fe48ca17 from this chassis (sb_readonly=0)
Jan 27 08:43:38 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:38Z|00110|binding|INFO|Setting lport 67dffbe5-7a66-478a-b9a7-8042fe48ca17 down in Southbound
Jan 27 08:43:38 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:38Z|00111|binding|INFO|Removing iface tap67dffbe5-7a ovn-installed in OVS
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.625 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.633 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:f2:78 10.100.0.4'], port_security=['fa:16:3e:a8:f2:78 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a2bf4dff-c501-4c5d-8573-bba7ceabc549', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58208cdc-4099-47ab-9729-2e87f01c74f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8773022351141649f1c7a9db9002d2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b756e75d-bbaf-406b-aafe-8ee3c670480f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77e2e80c-2188-4e01-a2f5-11190a5d263b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=67dffbe5-7a66-478a-b9a7-8042fe48ca17) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.634 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 67dffbe5-7a66-478a-b9a7-8042fe48ca17 in datapath 58208cdc-4099-47ab-9729-2e87f01c74f8 unbound from our chassis#033[00m
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.636 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58208cdc-4099-47ab-9729-2e87f01c74f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.638 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2743572e-a55b-49b1-a459-5c586d58757b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.639 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 namespace which is not needed anymore#033[00m
Jan 27 08:43:38 np0005597378 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000013.scope: Deactivated successfully.
Jan 27 08:43:38 np0005597378 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000013.scope: Consumed 14.949s CPU time.
Jan 27 08:43:38 np0005597378 systemd-machined[207425]: Machine qemu-20-instance-00000013 terminated.
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.771 238945 INFO nova.virt.libvirt.driver [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Instance destroyed successfully.#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.772 238945 DEBUG nova.objects.instance [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'resources' on Instance uuid a2bf4dff-c501-4c5d-8573-bba7ceabc549 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:38 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [NOTICE]   (258477) : haproxy version is 2.8.14-c23fe91
Jan 27 08:43:38 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [NOTICE]   (258477) : path to executable is /usr/sbin/haproxy
Jan 27 08:43:38 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [WARNING]  (258477) : Exiting Master process...
Jan 27 08:43:38 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [ALERT]    (258477) : Current worker (258479) exited with code 143 (Terminated)
Jan 27 08:43:38 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[258473]: [WARNING]  (258477) : All workers exited. Exiting... (0)
Jan 27 08:43:38 np0005597378 systemd[1]: libpod-15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19.scope: Deactivated successfully.
Jan 27 08:43:38 np0005597378 podman[259251]: 2026-01-27 13:43:38.794747224 +0000 UTC m=+0.051758071 container died 15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.799 238945 DEBUG nova.virt.libvirt.vif [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1866996444',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1866996444',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1866996444',id=19,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-8gi0fsu6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:36Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=a2bf4dff-c501-4c5d-8573-bba7ceabc549,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.799 238945 DEBUG nova.network.os_vif_util [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "address": "fa:16:3e:a8:f2:78", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67dffbe5-7a", "ovs_interfaceid": "67dffbe5-7a66-478a-b9a7-8042fe48ca17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.800 238945 DEBUG nova.network.os_vif_util [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.800 238945 DEBUG os_vif [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.802 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.803 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67dffbe5-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.811 238945 INFO os_vif [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:f2:78,bridge_name='br-int',has_traffic_filtering=True,id=67dffbe5-7a66-478a-b9a7-8042fe48ca17,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67dffbe5-7a')#033[00m
Jan 27 08:43:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19-userdata-shm.mount: Deactivated successfully.
Jan 27 08:43:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4d225bf4f286cd910e43e60eca46f0d9f8bc364d819cf9cdd7301aa57d09e31b-merged.mount: Deactivated successfully.
Jan 27 08:43:38 np0005597378 podman[259251]: 2026-01-27 13:43:38.844764917 +0000 UTC m=+0.101775734 container cleanup 15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:43:38 np0005597378 systemd[1]: libpod-conmon-15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19.scope: Deactivated successfully.
Jan 27 08:43:38 np0005597378 podman[259303]: 2026-01-27 13:43:38.923601029 +0000 UTC m=+0.054222188 container remove 15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.932 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4dea3f-cbaf-41c1-8f9a-39c60dfb58a5]: (4, ('Tue Jan 27 01:43:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 (15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19)\n15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19\nTue Jan 27 01:43:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 (15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19)\n15330ff4623cbb7d9f00ad8c7eececbcf8555ff1917211ca1dfec8b8307d8f19\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.937 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a4eb315e-74f2-4242-959f-e82239f0bc8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.938 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58208cdc-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:38 np0005597378 kernel: tap58208cdc-40: left promiscuous mode
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.949 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.951 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[296370bf-c9f3-4284-9fc2-4ef491ad49a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:38 np0005597378 nova_compute[238941]: 2026-01-27 13:43:38.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.969 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb0de34-2a43-48e5-a2e0-2acdc9d6d5fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.970 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a4dc6989-708d-4c7e-9abd-ec094de1004d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.988 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b12ffded-34e2-4ea0-9a62-abbcec79e205]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403620, 'reachable_time': 24466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259320, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:38 np0005597378 systemd[1]: run-netns-ovnmeta\x2d58208cdc\x2d4099\x2d47ab\x2d9729\x2d2e87f01c74f8.mount: Deactivated successfully.
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.993 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:43:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:38.993 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[38c1c684-7e7a-4e2f-acd0-2030ae566626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.038 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.038 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.082 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.134 238945 INFO nova.virt.libvirt.driver [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Deleting instance files /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549_del#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.135 238945 INFO nova.virt.libvirt.driver [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Deletion of /var/lib/nova/instances/a2bf4dff-c501-4c5d-8573-bba7ceabc549_del complete#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.211 238945 INFO nova.compute.manager [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.212 238945 DEBUG oslo.service.loopingcall [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.212 238945 DEBUG nova.compute.manager [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.213 238945 DEBUG nova.network.neutron [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.219 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.219 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.227 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.227 238945 INFO nova.compute.claims [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.334 238945 DEBUG nova.scheduler.client.report [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.353 238945 DEBUG nova.scheduler.client.report [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.354 238945 DEBUG nova.compute.provider_tree [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.374 238945 DEBUG nova.scheduler.client.report [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.404 238945 DEBUG nova.scheduler.client.report [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.411 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.411 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.513 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.644 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.644 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.645 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.645 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:39 np0005597378 nova_compute[238941]: 2026-01-27 13:43:39.987 238945 DEBUG nova.network.neutron [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.015 238945 INFO nova.compute.manager [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Took 0.80 seconds to deallocate network for instance.#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.063 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:43:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252214150' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.099 238945 DEBUG nova.compute.manager [req-1f3b227c-8c2e-46e7-8ff4-bb4f041f175b req-6fec0b5c-12eb-47b1-8232-4cf5e14d0f10 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received event network-vif-deleted-67dffbe5-7a66-478a-b9a7-8042fe48ca17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.100 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.106 238945 DEBUG nova.compute.provider_tree [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.124 238945 DEBUG nova.scheduler.client.report [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.148 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.149 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.152 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 297 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 15 MiB/s wr, 676 op/s
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.179 238945 DEBUG nova.compute.manager [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received event network-vif-unplugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.180 238945 DEBUG oslo_concurrency.lockutils [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.180 238945 DEBUG oslo_concurrency.lockutils [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.180 238945 DEBUG oslo_concurrency.lockutils [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.181 238945 DEBUG nova.compute.manager [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] No waiting events found dispatching network-vif-unplugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.181 238945 WARNING nova.compute.manager [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received unexpected event network-vif-unplugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.181 238945 DEBUG nova.compute.manager [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.181 238945 DEBUG oslo_concurrency.lockutils [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.182 238945 DEBUG oslo_concurrency.lockutils [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.182 238945 DEBUG oslo_concurrency.lockutils [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.182 238945 DEBUG nova.compute.manager [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] No waiting events found dispatching network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.182 238945 WARNING nova.compute.manager [req-258ffa43-ad40-4793-b5a7-a9f057982d5f req-44e9820a-5c93-40b4-89a4-94aa06b87c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Received unexpected event network-vif-plugged-67dffbe5-7a66-478a-b9a7-8042fe48ca17 for instance with vm_state deleted and task_state None.
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.203 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.203 238945 DEBUG nova.network.neutron [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.225 238945 INFO nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.250 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.310 238945 DEBUG oslo_concurrency.processutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.353 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.355 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.355 238945 INFO nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Creating image(s)
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.378 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.401 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.424 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.428 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.460 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.461 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.491 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.492 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.492 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.492 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.515 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.522 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.551 238945 DEBUG nova.policy [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.556 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.640 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:43:40 np0005597378 podman[259455]: 2026-01-27 13:43:40.732352977 +0000 UTC m=+0.064793053 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 08:43:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:43:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/825117378' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.963 238945 DEBUG oslo_concurrency.processutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.973 238945 DEBUG nova.compute.provider_tree [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:43:40 np0005597378 nova_compute[238941]: 2026-01-27 13:43:40.991 238945 DEBUG nova.scheduler.client.report [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.014 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.017 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.377s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.024 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.025 238945 INFO nova.compute.claims [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Claim successful on node compute-0.ctlplane.example.com
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.054 238945 INFO nova.scheduler.client.report [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Deleted allocations for instance a2bf4dff-c501-4c5d-8573-bba7ceabc549
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.154 238945 DEBUG oslo_concurrency.lockutils [None req-a8afb1f5-a66d-4abd-a5dc-55e8e8c4f29b bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "a2bf4dff-c501-4c5d-8573-bba7ceabc549" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.277 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.303 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Updating instance_info_cache with network_info: [{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.322 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-bee7c432-6457-4160-917c-a807eca3df0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.323 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.324 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.324 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.345 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.509 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.987s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.578 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] resizing rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:43:41 np0005597378 nova_compute[238941]: 2026-01-27 13:43:41.716 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:43:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:43:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 27 08:43:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:43:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3812939079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.021 238945 DEBUG nova.objects.instance [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'migration_context' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.024 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.029 238945 DEBUG nova.compute.provider_tree [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.040 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.041 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Ensure instance console log exists: /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.041 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.042 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.042 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.043 238945 DEBUG nova.scheduler.client.report [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.074 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.075 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.077 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.077 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.077 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.078 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:43:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 27 08:43:42 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 27 08:43:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 297 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 11 MiB/s wr, 489 op/s
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.176 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.176 238945 DEBUG nova.network.neutron [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.181 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521407.180113, aa157503-9eb6-44e1-9bdd-2c902a907faf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.181 238945 INFO nova.compute.manager [-] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] VM Stopped (Lifecycle Event)
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.219 238945 INFO nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.223 238945 DEBUG nova.compute.manager [None req-647c63fd-4e08-4776-979d-36e3f6e0270f - - - - - -] [instance: aa157503-9eb6-44e1-9bdd-2c902a907faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.243 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.265 238945 DEBUG nova.network.neutron [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully created port: c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.330 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.331 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.332 238945 INFO nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Creating image(s)#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.351 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.369 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.393 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.399 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.462 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.463 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.463 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.464 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.489 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.494 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b816093f-751c-4d16-bb91-82ae954a9732_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.519 238945 DEBUG nova.policy [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97755bdfdc1140aa970fa69a04baeb3c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:43:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:43:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213374648' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.739 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.824 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.825 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.829 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.829 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.833 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:43:42 np0005597378 nova_compute[238941]: 2026-01-27 13:43:42.833 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.069 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.071 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3895MB free_disk=59.84982183761895GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.071 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.071 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:43.161 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:43:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:43.161 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.165 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance bee7c432-6457-4160-917c-a807eca3df0e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.165 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 4c52012f-9a4f-4599-adb0-2c658a054f91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.165 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 677a728d-1d2a-4e11-909d-c2c91838cfbe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.166 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.166 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b816093f-751c-4d16-bb91-82ae954a9732 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.166 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.166 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.358 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.444 238945 DEBUG nova.network.neutron [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Successfully created port: 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.811 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b816093f-751c-4d16-bb91-82ae954a9732_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.900 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] resizing rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:43:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:43:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3390178242' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.971 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:43 np0005597378 nova_compute[238941]: 2026-01-27 13:43:43.977 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:43:44 np0005597378 nova_compute[238941]: 2026-01-27 13:43:44.050 238945 DEBUG nova.objects.instance [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'migration_context' on Instance uuid b816093f-751c-4d16-bb91-82ae954a9732 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 284 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 11 MiB/s wr, 498 op/s
Jan 27 08:43:44 np0005597378 nova_compute[238941]: 2026-01-27 13:43:44.297 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:43:44 np0005597378 nova_compute[238941]: 2026-01-27 13:43:44.301 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:43:44 np0005597378 nova_compute[238941]: 2026-01-27 13:43:44.301 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Ensure instance console log exists: /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:43:44 np0005597378 nova_compute[238941]: 2026-01-27 13:43:44.302 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:44 np0005597378 nova_compute[238941]: 2026-01-27 13:43:44.302 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:44 np0005597378 nova_compute[238941]: 2026-01-27 13:43:44.303 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:44 np0005597378 nova_compute[238941]: 2026-01-27 13:43:44.506 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:43:44 np0005597378 nova_compute[238941]: 2026-01-27 13:43:44.506 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:44 np0005597378 nova_compute[238941]: 2026-01-27 13:43:44.564 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:43:44 np0005597378 nova_compute[238941]: 2026-01-27 13:43:44.564 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:43:44 np0005597378 nova_compute[238941]: 2026-01-27 13:43:44.565 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.079 238945 DEBUG nova.network.neutron [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully updated port: c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.113 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.114 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.114 238945 DEBUG nova.network.neutron [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:43:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:45.163 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.194 238945 DEBUG nova.compute.manager [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-changed-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.194 238945 DEBUG nova.compute.manager [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing instance network info cache due to event network-changed-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.195 238945 DEBUG oslo_concurrency.lockutils [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.328 238945 DEBUG nova.network.neutron [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.361 238945 DEBUG nova.network.neutron [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Successfully updated port: 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.415 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.415 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquired lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.416 238945 DEBUG nova.network.neutron [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.540 238945 DEBUG nova.compute.manager [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-changed-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.540 238945 DEBUG nova.compute.manager [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Refreshing instance network info cache due to event network-changed-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.541 238945 DEBUG oslo_concurrency.lockutils [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:45 np0005597378 nova_compute[238941]: 2026-01-27 13:43:45.612 238945 DEBUG nova.network.neutron [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:43:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 320 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 11 MiB/s wr, 428 op/s
Jan 27 08:43:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:46.292 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:46.292 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:46.293 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:46 np0005597378 nova_compute[238941]: 2026-01-27 13:43:46.718 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:43:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:43:47 np0005597378 podman[259995]: 2026-01-27 13:43:47.281767205 +0000 UTC m=+0.062073110 container create 396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 08:43:47 np0005597378 podman[259995]: 2026-01-27 13:43:47.247746164 +0000 UTC m=+0.028052089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:43:47 np0005597378 systemd[1]: Started libpod-conmon-396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558.scope.
Jan 27 08:43:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:43:47 np0005597378 podman[259995]: 2026-01-27 13:43:47.394616897 +0000 UTC m=+0.174922832 container init 396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 08:43:47 np0005597378 podman[259995]: 2026-01-27 13:43:47.403734114 +0000 UTC m=+0.184040029 container start 396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_leakey, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:43:47 np0005597378 podman[259995]: 2026-01-27 13:43:47.408730978 +0000 UTC m=+0.189036923 container attach 396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 27 08:43:47 np0005597378 systemd[1]: libpod-396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558.scope: Deactivated successfully.
Jan 27 08:43:47 np0005597378 zen_leakey[260011]: 167 167
Jan 27 08:43:47 np0005597378 conmon[260011]: conmon 396c779860f0bc3b69c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558.scope/container/memory.events
Jan 27 08:43:47 np0005597378 podman[259995]: 2026-01-27 13:43:47.412129941 +0000 UTC m=+0.192435856 container died 396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 08:43:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d6524d24c8d40ca40ed5847bf02ebe70595980324460865c8d5b009dfc1365ac-merged.mount: Deactivated successfully.
Jan 27 08:43:47 np0005597378 podman[259995]: 2026-01-27 13:43:47.473439478 +0000 UTC m=+0.253745393 container remove 396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_leakey, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 08:43:47 np0005597378 systemd[1]: libpod-conmon-396c779860f0bc3b69c1e4dd8d19ef18430e25ba8aa92064294d979e7f0aa558.scope: Deactivated successfully.
Jan 27 08:43:47 np0005597378 podman[260037]: 2026-01-27 13:43:47.654889606 +0000 UTC m=+0.043975560 container create 194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_torvalds, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:43:47 np0005597378 systemd[1]: Started libpod-conmon-194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254.scope.
Jan 27 08:43:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:43:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aef826667d622069f30f77eab6c830b05911a832bd0a98fc51493cbb8f087d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:47 np0005597378 podman[260037]: 2026-01-27 13:43:47.63545597 +0000 UTC m=+0.024541934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:43:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aef826667d622069f30f77eab6c830b05911a832bd0a98fc51493cbb8f087d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aef826667d622069f30f77eab6c830b05911a832bd0a98fc51493cbb8f087d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aef826667d622069f30f77eab6c830b05911a832bd0a98fc51493cbb8f087d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aef826667d622069f30f77eab6c830b05911a832bd0a98fc51493cbb8f087d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:47 np0005597378 podman[260037]: 2026-01-27 13:43:47.754743166 +0000 UTC m=+0.143829150 container init 194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_torvalds, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:43:47 np0005597378 podman[260037]: 2026-01-27 13:43:47.761729645 +0000 UTC m=+0.150815599 container start 194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_torvalds, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 08:43:47 np0005597378 podman[260037]: 2026-01-27 13:43:47.771038847 +0000 UTC m=+0.160124841 container attach 194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 08:43:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:43:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:43:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:43:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:43:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:43:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:43:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:43:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:43:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:43:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:43:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.118 238945 DEBUG nova.network.neutron [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 320 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 11 MiB/s wr, 428 op/s
Jan 27 08:43:48 np0005597378 zen_torvalds[260053]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:43:48 np0005597378 zen_torvalds[260053]: --> All data devices are unavailable
Jan 27 08:43:48 np0005597378 systemd[1]: libpod-194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254.scope: Deactivated successfully.
Jan 27 08:43:48 np0005597378 podman[260037]: 2026-01-27 13:43:48.327525247 +0000 UTC m=+0.716611211 container died 194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:43:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0aef826667d622069f30f77eab6c830b05911a832bd0a98fc51493cbb8f087d0-merged.mount: Deactivated successfully.
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.386 238945 DEBUG nova.network.neutron [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Updating instance_info_cache with network_info: [{"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.478 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.479 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Instance network_info: |[{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.479 238945 DEBUG oslo_concurrency.lockutils [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.479 238945 DEBUG nova.network.neutron [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing network info cache for port c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.485 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Start _get_guest_xml network_info=[{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.492 238945 WARNING nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.501 238945 DEBUG nova.virt.libvirt.host [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.502 238945 DEBUG nova.virt.libvirt.host [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.510 238945 DEBUG nova.virt.libvirt.host [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.511 238945 DEBUG nova.virt.libvirt.host [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.511 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.511 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.512 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.512 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.513 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.513 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.513 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.513 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.514 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.514 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.514 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.514 238945 DEBUG nova.virt.hardware [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.517 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:48 np0005597378 podman[260037]: 2026-01-27 13:43:48.518539643 +0000 UTC m=+0.907625597 container remove 194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:43:48 np0005597378 systemd[1]: libpod-conmon-194c08e1f6c956afb50f86777dacaf31e3793df329f4ca195dfe34d8db8e7254.scope: Deactivated successfully.
Jan 27 08:43:48 np0005597378 nova_compute[238941]: 2026-01-27 13:43:48.808 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:49 np0005597378 podman[260168]: 2026-01-27 13:43:49.099000281 +0000 UTC m=+0.097720103 container create 1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.118 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Releasing lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.119 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Instance network_info: |[{"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:43:49 np0005597378 podman[260168]: 2026-01-27 13:43:49.026616324 +0000 UTC m=+0.025336176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.128 238945 DEBUG oslo_concurrency.lockutils [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.129 238945 DEBUG nova.network.neutron [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Refreshing network info cache for port 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.136 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Start _get_guest_xml network_info=[{"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.147 238945 WARNING nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.154 238945 DEBUG nova.virt.libvirt.host [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.155 238945 DEBUG nova.virt.libvirt.host [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:43:49 np0005597378 systemd[1]: Started libpod-conmon-1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe.scope.
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.159 238945 DEBUG nova.virt.libvirt.host [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.160 238945 DEBUG nova.virt.libvirt.host [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.161 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.161 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.162 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.162 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.163 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.163 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.163 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.164 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.164 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.165 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.165 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.165 238945 DEBUG nova.virt.hardware [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.169 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:43:49 np0005597378 podman[260168]: 2026-01-27 13:43:49.209776148 +0000 UTC m=+0.208496000 container init 1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 27 08:43:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/243116878' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:49 np0005597378 podman[260168]: 2026-01-27 13:43:49.222615675 +0000 UTC m=+0.221335507 container start 1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:43:49 np0005597378 naughty_curie[260184]: 167 167
Jan 27 08:43:49 np0005597378 podman[260168]: 2026-01-27 13:43:49.231572637 +0000 UTC m=+0.230292559 container attach 1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:43:49 np0005597378 systemd[1]: libpod-1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe.scope: Deactivated successfully.
Jan 27 08:43:49 np0005597378 podman[260168]: 2026-01-27 13:43:49.233650334 +0000 UTC m=+0.232370196 container died 1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.256 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.738s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.292 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.307 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-dcee4bb632c6454bad5567b7573e969c0b993c937b9ac3f730aceb031807aaa1-merged.mount: Deactivated successfully.
Jan 27 08:43:49 np0005597378 podman[260168]: 2026-01-27 13:43:49.336944306 +0000 UTC m=+0.335664138 container remove 1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:43:49 np0005597378 systemd[1]: libpod-conmon-1faee428330ed96e471b2393dd2867a2f95201e0333a2248e651678ec33fc7fe.scope: Deactivated successfully.
Jan 27 08:43:49 np0005597378 podman[260270]: 2026-01-27 13:43:49.57077323 +0000 UTC m=+0.066308224 container create 058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:43:49 np0005597378 podman[260270]: 2026-01-27 13:43:49.530874072 +0000 UTC m=+0.026409086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:43:49 np0005597378 systemd[1]: Started libpod-conmon-058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825.scope.
Jan 27 08:43:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:43:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8459fce52714034bec2d42c817889041d8134c2302b8a5d7cf10152c0c6b3857/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8459fce52714034bec2d42c817889041d8134c2302b8a5d7cf10152c0c6b3857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8459fce52714034bec2d42c817889041d8134c2302b8a5d7cf10152c0c6b3857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8459fce52714034bec2d42c817889041d8134c2302b8a5d7cf10152c0c6b3857/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1976950400' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.833 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.854 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.858 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:49 np0005597378 podman[260270]: 2026-01-27 13:43:49.870281951 +0000 UTC m=+0.365816945 container init 058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 08:43:49 np0005597378 podman[260270]: 2026-01-27 13:43:49.877262339 +0000 UTC m=+0.372797333 container start 058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:43:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/193198840' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.973 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.977 238945 DEBUG nova.virt.libvirt.vif [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.978 238945 DEBUG nova.network.os_vif_util [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.980 238945 DEBUG nova.network.os_vif_util [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:49 np0005597378 nova_compute[238941]: 2026-01-27 13:43:49.982 238945 DEBUG nova.objects.instance [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:50 np0005597378 podman[260270]: 2026-01-27 13:43:50.004880761 +0000 UTC m=+0.500415795 container attach 058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.125 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <uuid>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</uuid>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <name>instance-00000015</name>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:43:48</nova:creationTime>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <entry name="serial">9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <entry name="uuid">9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:d8:2d:f1"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <target dev="tapc2b2aaa7-69"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log" append="off"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:43:50 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:43:50 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.126 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Preparing to wait for external event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.132 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.132 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.132 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.133 238945 DEBUG nova.virt.libvirt.vif [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.134 238945 DEBUG nova.network.os_vif_util [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.135 238945 DEBUG nova.network.os_vif_util [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.135 238945 DEBUG os_vif [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.136 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.137 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.137 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.141 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2b2aaa7-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.142 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc2b2aaa7-69, col_values=(('external_ids', {'iface-id': 'c2b2aaa7-69a4-4868-bbe6-d21fd9974c34', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:2d:f1', 'vm-uuid': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:50 np0005597378 NetworkManager[48904]: <info>  [1769521430.1467] manager: (tapc2b2aaa7-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.158 238945 INFO os_vif [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69')#033[00m
Jan 27 08:43:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 340 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 4.7 MiB/s wr, 106 op/s
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]: {
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:    "0": [
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:        {
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "devices": [
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "/dev/loop3"
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            ],
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_name": "ceph_lv0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_size": "21470642176",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "name": "ceph_lv0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "tags": {
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.cluster_name": "ceph",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.crush_device_class": "",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.encrypted": "0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.objectstore": "bluestore",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.osd_id": "0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.type": "block",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.vdo": "0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.with_tpm": "0"
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            },
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "type": "block",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "vg_name": "ceph_vg0"
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:        }
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:    ],
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:    "1": [
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:        {
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "devices": [
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "/dev/loop4"
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            ],
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_name": "ceph_lv1",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_size": "21470642176",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "name": "ceph_lv1",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "tags": {
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.cluster_name": "ceph",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.crush_device_class": "",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.encrypted": "0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.objectstore": "bluestore",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.osd_id": "1",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.type": "block",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.vdo": "0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.with_tpm": "0"
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            },
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "type": "block",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "vg_name": "ceph_vg1"
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:        }
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:    ],
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:    "2": [
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:        {
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "devices": [
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "/dev/loop5"
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            ],
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_name": "ceph_lv2",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_size": "21470642176",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "name": "ceph_lv2",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "tags": {
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.cluster_name": "ceph",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.crush_device_class": "",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.encrypted": "0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.objectstore": "bluestore",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.osd_id": "2",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.type": "block",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.vdo": "0",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:                "ceph.with_tpm": "0"
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            },
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "type": "block",
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:            "vg_name": "ceph_vg2"
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:        }
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]:    ]
Jan 27 08:43:50 np0005597378 sharp_banzai[260287]: }
Jan 27 08:43:50 np0005597378 systemd[1]: libpod-058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825.scope: Deactivated successfully.
Jan 27 08:43:50 np0005597378 podman[260270]: 2026-01-27 13:43:50.258951883 +0000 UTC m=+0.754486887 container died 058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.295 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.295 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8459fce52714034bec2d42c817889041d8134c2302b8a5d7cf10152c0c6b3857-merged.mount: Deactivated successfully.
Jan 27 08:43:50 np0005597378 podman[260270]: 2026-01-27 13:43:50.357429546 +0000 UTC m=+0.852964540 container remove 058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:43:50 np0005597378 systemd[1]: libpod-conmon-058012ec8d1b20e1f8ff486ffa317ceead057938a3bfb0a226b58840bbb0a825.scope: Deactivated successfully.
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.380 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.381 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.382 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:d8:2d:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.382 238945 INFO nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Using config drive#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.412 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565667008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.528 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.551 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.553 238945 DEBUG nova.virt.libvirt.vif [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-147275193',display_name='tempest-ServersAdminTestJSON-server-147275193',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-147275193',id=22,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-1hcjid1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:42Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=b816093f-751c-4d16-bb91-82ae954a9732,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.553 238945 DEBUG nova.network.os_vif_util [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.554 238945 DEBUG nova.network.os_vif_util [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.556 238945 DEBUG nova.objects.instance [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_devices' on Instance uuid b816093f-751c-4d16-bb91-82ae954a9732 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:50 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:50Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:fa:85 10.100.0.4
Jan 27 08:43:50 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:50Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:fa:85 10.100.0.4
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.715 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <uuid>b816093f-751c-4d16-bb91-82ae954a9732</uuid>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <name>instance-00000016</name>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersAdminTestJSON-server-147275193</nova:name>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:43:49</nova:creationTime>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:user uuid="97755bdfdc1140aa970fa69a04baeb3c">tempest-ServersAdminTestJSON-2123092478-project-member</nova:user>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:project uuid="c02e06ff150d4463ba12a3be444a4ae3">tempest-ServersAdminTestJSON-2123092478</nova:project>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <nova:port uuid="6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <entry name="serial">b816093f-751c-4d16-bb91-82ae954a9732</entry>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <entry name="uuid">b816093f-751c-4d16-bb91-82ae954a9732</entry>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b816093f-751c-4d16-bb91-82ae954a9732_disk">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b816093f-751c-4d16-bb91-82ae954a9732_disk.config">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:67:7e:c5"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <target dev="tap6f5f40a3-5f"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/console.log" append="off"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:43:50 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:43:50 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:43:50 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:43:50 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.715 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Preparing to wait for external event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.716 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.716 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.716 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.718 238945 DEBUG nova.virt.libvirt.vif [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-147275193',display_name='tempest-ServersAdminTestJSON-server-147275193',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-147275193',id=22,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-1hcjid1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTest
JSON-2123092478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:42Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=b816093f-751c-4d16-bb91-82ae954a9732,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.719 238945 DEBUG nova.network.os_vif_util [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.721 238945 DEBUG nova.network.os_vif_util [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.721 238945 DEBUG os_vif [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.722 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.723 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.723 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.728 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.728 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f5f40a3-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.729 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f5f40a3-5f, col_values=(('external_ids', {'iface-id': '6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:7e:c5', 'vm-uuid': 'b816093f-751c-4d16-bb91-82ae954a9732'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:50 np0005597378 NetworkManager[48904]: <info>  [1769521430.7314] manager: (tap6f5f40a3-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.738 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.744 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.745 238945 INFO os_vif [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f')#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.773 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.773 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.785 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.786 238945 INFO nova.compute.claims [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.879 238945 DEBUG nova.network.neutron [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updated VIF entry in instance network info cache for port c2b2aaa7-69a4-4868-bbe6-d21fd9974c34. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.880 238945 DEBUG nova.network.neutron [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.883 238945 DEBUG nova.network.neutron [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Updated VIF entry in instance network info cache for port 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:43:50 np0005597378 nova_compute[238941]: 2026-01-27 13:43:50.883 238945 DEBUG nova.network.neutron [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Updating instance_info_cache with network_info: [{"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:50 np0005597378 podman[260435]: 2026-01-27 13:43:50.917984596 +0000 UTC m=+0.059035368 container create ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:43:50 np0005597378 systemd[1]: Started libpod-conmon-ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3.scope.
Jan 27 08:43:50 np0005597378 podman[260435]: 2026-01-27 13:43:50.886596917 +0000 UTC m=+0.027647709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:43:50 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:43:51 np0005597378 podman[260435]: 2026-01-27 13:43:51.01867675 +0000 UTC m=+0.159727542 container init ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:43:51 np0005597378 podman[260435]: 2026-01-27 13:43:51.025022581 +0000 UTC m=+0.166073353 container start ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:43:51 np0005597378 elastic_meitner[260449]: 167 167
Jan 27 08:43:51 np0005597378 systemd[1]: libpod-ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3.scope: Deactivated successfully.
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.031 238945 DEBUG oslo_concurrency.lockutils [req-4624ac5b-9e49-44f7-8adb-4972098d99ba req-82ca440c-ef8d-42bd-a51d-879b3e3d683f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:51 np0005597378 podman[260435]: 2026-01-27 13:43:51.033140381 +0000 UTC m=+0.174191153 container attach ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meitner, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.033 238945 DEBUG oslo_concurrency.lockutils [req-36661141-05f0-4680-8be5-2e6bf7ccdc41 req-ab4fe645-2423-4479-b627-f44c9bfe2a65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:51 np0005597378 podman[260435]: 2026-01-27 13:43:51.033763747 +0000 UTC m=+0.174814529 container died ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meitner, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.050 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.051 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.051 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No VIF found with MAC fa:16:3e:67:7e:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.051 238945 INFO nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Using config drive#033[00m
Jan 27 08:43:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay-158244ce1aaf71a206b73adfdd782c3419f7197e0cfef1d3a4ef980d8dc2ce03-merged.mount: Deactivated successfully.
Jan 27 08:43:51 np0005597378 podman[260435]: 2026-01-27 13:43:51.09970094 +0000 UTC m=+0.240751732 container remove ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.110 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:51 np0005597378 systemd[1]: libpod-conmon-ff2f30eff82b48c85123810993dd3841858c4786b18ebca43af88bea070fadf3.scope: Deactivated successfully.
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.243 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:51 np0005597378 podman[260491]: 2026-01-27 13:43:51.344697596 +0000 UTC m=+0.068488673 container create ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:43:51 np0005597378 systemd[1]: Started libpod-conmon-ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc.scope.
Jan 27 08:43:51 np0005597378 podman[260491]: 2026-01-27 13:43:51.312883257 +0000 UTC m=+0.036674364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:43:51 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:43:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e514618c5a842d8072950f05e84e6c0782386f3f9184c15d094c82d07bcbbaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e514618c5a842d8072950f05e84e6c0782386f3f9184c15d094c82d07bcbbaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e514618c5a842d8072950f05e84e6c0782386f3f9184c15d094c82d07bcbbaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e514618c5a842d8072950f05e84e6c0782386f3f9184c15d094c82d07bcbbaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.447 238945 INFO nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Creating config drive at /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/disk.config#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.454 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu71_r0n8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:51 np0005597378 podman[260491]: 2026-01-27 13:43:51.477038936 +0000 UTC m=+0.200830053 container init ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:43:51 np0005597378 podman[260491]: 2026-01-27 13:43:51.486698907 +0000 UTC m=+0.210489984 container start ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:43:51 np0005597378 podman[260491]: 2026-01-27 13:43:51.506122573 +0000 UTC m=+0.229913650 container attach ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.561 238945 INFO nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Creating config drive at /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/disk.config#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.567 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsyy79mxo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.592 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu71_r0n8" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.624 238945 DEBUG nova.storage.rbd_utils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.630 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/disk.config 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.702 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsyy79mxo" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.703 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.704 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.741 238945 DEBUG nova.storage.rbd_utils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image b816093f-751c-4d16-bb91-82ae954a9732_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.748 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/disk.config b816093f-751c-4d16-bb91-82ae954a9732_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.794 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.825 238945 DEBUG oslo_concurrency.processutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/disk.config 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.826 238945 INFO nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Deleting local config drive /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/disk.config because it was imported into RBD.#033[00m
Jan 27 08:43:51 np0005597378 NetworkManager[48904]: <info>  [1769521431.8931] manager: (tapc2b2aaa7-69): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Jan 27 08:43:51 np0005597378 kernel: tapc2b2aaa7-69: entered promiscuous mode
Jan 27 08:43:51 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:51Z|00112|binding|INFO|Claiming lport c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 for this chassis.
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.901 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:51 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:51Z|00113|binding|INFO|c2b2aaa7-69a4-4868-bbe6-d21fd9974c34: Claiming fa:16:3e:d8:2d:f1 10.100.0.7
Jan 27 08:43:51 np0005597378 nova_compute[238941]: 2026-01-27 13:43:51.910 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:43:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:43:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729298094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:43:51 np0005597378 systemd-machined[207425]: New machine qemu-23-instance-00000015.
Jan 27 08:43:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.942 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:2d:f1 10.100.0.7'], port_security=['fa:16:3e:d8:2d:f1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45353761-c75a-4426-88a9-3022541c9e26', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:43:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.944 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis#033[00m
Jan 27 08:43:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.946 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96#033[00m
Jan 27 08:43:51 np0005597378 systemd[1]: Started Virtual Machine qemu-23-instance-00000015.
Jan 27 08:43:51 np0005597378 systemd-udevd[260645]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:43:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.961 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8453d401-ee6b-4a3c-b361-faff6fe139bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.961 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee180809-31 in ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:43:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.964 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee180809-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:43:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.964 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[37822c7e-76d5-4d23-9851-29da2ed712d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8fdbac1f-11b1-4224-a056-43b190ae3dff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:51 np0005597378 NetworkManager[48904]: <info>  [1769521431.9791] device (tapc2b2aaa7-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:43:51 np0005597378 NetworkManager[48904]: <info>  [1769521431.9803] device (tapc2b2aaa7-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:43:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:51.980 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[229ab4f0-9710-4020-a2a5-bae4204fea76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.002 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.004 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.761s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:52 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:52Z|00114|binding|INFO|Setting lport c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 ovn-installed in OVS
Jan 27 08:43:52 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:52Z|00115|binding|INFO|Setting lport c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 up in Southbound
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.010 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[39449803-c020-4726-ad54-94b10d3c68af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.025 238945 DEBUG nova.compute.provider_tree [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.052 238945 DEBUG nova.scheduler.client.report [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.053 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cb06f32e-bdad-4ed3-9cdc-a3bccfba5840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:52 np0005597378 systemd-udevd[260650]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:43:52 np0005597378 NetworkManager[48904]: <info>  [1769521432.0631] manager: (tapee180809-30): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.061 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c43c4cf6-5667-4b54-bd9b-b5bdbae0d5c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.082 238945 DEBUG oslo_concurrency.processutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/disk.config b816093f-751c-4d16-bb91-82ae954a9732_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.083 238945 INFO nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Deleting local config drive /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732/disk.config because it was imported into RBD.#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.101 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.102 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.104 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.115 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.115 238945 INFO nova.compute.claims [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:43:52 np0005597378 virtqemud[238711]: End of file while reading data: Input/output error
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.127 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c803c552-8079-42ec-a9b8-762f6408c7fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.132 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f52d635c-fe74-42f6-9dba-7ba954f5b6e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 340 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 4.7 MiB/s wr, 106 op/s
Jan 27 08:43:52 np0005597378 NetworkManager[48904]: <info>  [1769521432.1638] device (tapee180809-30): carrier: link connected
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.171 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f404ca-8772-41db-8a19-03bd1fcfd75f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:52 np0005597378 NetworkManager[48904]: <info>  [1769521432.1821] manager: (tap6f5f40a3-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Jan 27 08:43:52 np0005597378 kernel: tap6f5f40a3-5f: entered promiscuous mode
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:52 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:52Z|00116|binding|INFO|Claiming lport 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac for this chassis.
Jan 27 08:43:52 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:52Z|00117|binding|INFO|6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac: Claiming fa:16:3e:67:7e:c5 10.100.0.11
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.203 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.203 238945 DEBUG nova.network.neutron [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:43:52 np0005597378 systemd-udevd[260695]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.199 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[82fb314a-f723-4f27-906c-c8a4b37dd67a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260723, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:52 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:52Z|00118|binding|INFO|Setting lport 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac ovn-installed in OVS
Jan 27 08:43:52 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:52Z|00119|binding|INFO|Setting lport 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac up in Southbound
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.225 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:52 np0005597378 NetworkManager[48904]: <info>  [1769521432.2338] device (tap6f5f40a3-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:43:52 np0005597378 NetworkManager[48904]: <info>  [1769521432.2349] device (tap6f5f40a3-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.231 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:7e:c5 10.100.0.11'], port_security=['fa:16:3e:67:7e:c5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b816093f-751c-4d16-bb91-82ae954a9732', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.238 238945 INFO nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.249 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4c682c5a-b9b3-42df-9afb-e41fd7cacc0d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:c077'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407178, 'tstamp': 407178}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260729, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:52 np0005597378 systemd-machined[207425]: New machine qemu-24-instance-00000016.
Jan 27 08:43:52 np0005597378 systemd[1]: Started Virtual Machine qemu-24-instance-00000016.
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.270 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d16f265d-022b-4db5-b0be-4287980cb55e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260736, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.308 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.314 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01625150-7fbb-458c-ace8-862db71caabd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.396 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.413 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[034577cd-f410-4875-80de-93be5499dc61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.415 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.416 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.416 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:52 np0005597378 kernel: tapee180809-30: entered promiscuous mode
Jan 27 08:43:52 np0005597378 NetworkManager[48904]: <info>  [1769521432.4195] manager: (tapee180809-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.424 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:52 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:52Z|00120|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.428 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.430 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05b6db83-e52b-45e6-9fa8-b258b1281001]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.431 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:43:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:52.431 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'env', 'PROCESS_TAG=haproxy-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.433 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.446 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.448 238945 DEBUG nova.compute.manager [req-db63f10f-92e4-4ab2-ab1a-c183de764483 req-4cd2c795-c2a1-4c7a-b4e1-d30a9949967d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.449 238945 DEBUG oslo_concurrency.lockutils [req-db63f10f-92e4-4ab2-ab1a-c183de764483 req-4cd2c795-c2a1-4c7a-b4e1-d30a9949967d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.449 238945 DEBUG oslo_concurrency.lockutils [req-db63f10f-92e4-4ab2-ab1a-c183de764483 req-4cd2c795-c2a1-4c7a-b4e1-d30a9949967d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.450 238945 DEBUG oslo_concurrency.lockutils [req-db63f10f-92e4-4ab2-ab1a-c183de764483 req-4cd2c795-c2a1-4c7a-b4e1-d30a9949967d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.450 238945 DEBUG nova.compute.manager [req-db63f10f-92e4-4ab2-ab1a-c183de764483 req-4cd2c795-c2a1-4c7a-b4e1-d30a9949967d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Processing event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.469 238945 DEBUG nova.policy [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bb804373b8be4577a6623d2131cdcd59', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8773022351141649f1c7a9db9002d2f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:43:52 np0005597378 lvm[260765]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:43:52 np0005597378 lvm[260765]: VG ceph_vg0 finished
Jan 27 08:43:52 np0005597378 lvm[260767]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:43:52 np0005597378 lvm[260767]: VG ceph_vg1 finished
Jan 27 08:43:52 np0005597378 lvm[260769]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:43:52 np0005597378 lvm[260769]: VG ceph_vg2 finished
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.558 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.560 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.561 238945 INFO nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Creating image(s)#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.599 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:52 np0005597378 tender_archimedes[260525]: {}
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.642 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:52 np0005597378 systemd[1]: libpod-ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc.scope: Deactivated successfully.
Jan 27 08:43:52 np0005597378 systemd[1]: libpod-ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc.scope: Consumed 1.559s CPU time.
Jan 27 08:43:52 np0005597378 podman[260491]: 2026-01-27 13:43:52.670719089 +0000 UTC m=+1.394510166 container died ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.695 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.701 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.785 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.786 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.787 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.788 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.815 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:52 np0005597378 nova_compute[238941]: 2026-01-27 13:43:52.821 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:52 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9e514618c5a842d8072950f05e84e6c0782386f3f9184c15d094c82d07bcbbaa-merged.mount: Deactivated successfully.
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.019 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521433.0185182, 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.021 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] VM Started (Lifecycle Event)#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.024 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.029 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.035 238945 INFO nova.virt.libvirt.driver [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Instance spawned successfully.#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.036 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.049 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.059 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.088 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.088 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.089 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.089 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.090 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.090 238945 DEBUG nova.virt.libvirt.driver [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.094 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.094 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521433.019162, 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.094 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:43:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:43:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890636956' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.136 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.141 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521433.0286577, 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.141 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.154 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.758s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.160 238945 DEBUG nova.compute.provider_tree [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.179 238945 INFO nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Took 12.83 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.180 238945 DEBUG nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.189 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.191 238945 DEBUG nova.scheduler.client.report [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.198 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.235 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.255 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.255 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.270 238945 INFO nova.compute.manager [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Took 14.08 seconds to build instance.#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.293 238945 DEBUG nova.network.neutron [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Successfully created port: 178da3ce-54dc-4965-aa17-4ac98d2ec152 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.312 238945 DEBUG oslo_concurrency.lockutils [None req-73da79a0-4d4a-4e5e-a1ed-d2f9531f3ead 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.318 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.318 238945 DEBUG nova.network.neutron [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.324 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521433.324393, b816093f-751c-4d16-bb91-82ae954a9732 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.325 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] VM Started (Lifecycle Event)#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.360 238945 INFO nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.365 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.370 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521433.324984, b816093f-751c-4d16-bb91-82ae954a9732 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.370 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.388 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.392 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.397 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.420 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.514 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.515 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.516 238945 INFO nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Creating image(s)#033[00m
Jan 27 08:43:53 np0005597378 podman[260491]: 2026-01-27 13:43:53.592574611 +0000 UTC m=+2.316365688 container remove ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 08:43:53 np0005597378 systemd[1]: libpod-conmon-ebbfa54aee0568845aced6af14242d7efea448ccede8ad150ac1edb2cacd52bc.scope: Deactivated successfully.
Jan 27 08:43:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.661 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:53 np0005597378 podman[261018]: 2026-01-27 13:43:53.696966283 +0000 UTC m=+0.023854786 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.800 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.825 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:43:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.834 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.869 238945 DEBUG nova.policy [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5bbd48f1c4304d319aa847aa717dd4d6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '07f2b9fda9204458be8cb076e9d2b9f3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.872 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521418.7695007, a2bf4dff-c501-4c5d-8573-bba7ceabc549 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.873 238945 INFO nova.compute.manager [-] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.896 238945 DEBUG nova.compute.manager [None req-2633eb22-3e06-4d6f-b8c7-abca2dcfb37f - - - - - -] [instance: a2bf4dff-c501-4c5d-8573-bba7ceabc549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.914 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.916 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.916 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.917 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.944 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:53 np0005597378 nova_compute[238941]: 2026-01-27 13:43:53.950 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:43:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 371 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 5.7 MiB/s wr, 157 op/s
Jan 27 08:43:54 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:43:54 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:43:54 np0005597378 podman[261018]: 2026-01-27 13:43:54.307986789 +0000 UTC m=+0.634875272 container create 75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.449 238945 DEBUG nova.network.neutron [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Successfully updated port: 178da3ce-54dc-4965-aa17-4ac98d2ec152 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.470 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "refresh_cache-b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.471 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquired lock "refresh_cache-b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.472 238945 DEBUG nova.network.neutron [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.489 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:54 np0005597378 systemd[1]: Started libpod-conmon-75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12.scope.
Jan 27 08:43:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:43:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0157471d0fc7b6f22b1e4c33cb32963de33bfa1a16e2aa53868880ed4e9eb7de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.593 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] resizing rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:43:54 np0005597378 podman[261018]: 2026-01-27 13:43:54.671568302 +0000 UTC m=+0.998456805 container init 75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 08:43:54 np0005597378 podman[261018]: 2026-01-27 13:43:54.679907567 +0000 UTC m=+1.006796070 container start 75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:43:54 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [NOTICE]   (261194) : New worker (261196) forked
Jan 27 08:43:54 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [NOTICE]   (261194) : Loading success.
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.793 238945 DEBUG nova.network.neutron [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.799 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.799 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.799 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.799 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.800 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.800 238945 WARNING nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.800 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.800 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.801 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.801 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.801 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Processing event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.802 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.802 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.802 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.802 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.803 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] No waiting events found dispatching network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.803 238945 WARNING nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received unexpected event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.803 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received event network-changed-178da3ce-54dc-4965-aa17-4ac98d2ec152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.803 238945 DEBUG nova.compute.manager [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Refreshing instance network info cache due to event network-changed-178da3ce-54dc-4965-aa17-4ac98d2ec152. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.804 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.804 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.809 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521434.809279, b816093f-751c-4d16-bb91-82ae954a9732 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.810 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.829 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.836 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.838 238945 INFO nova.virt.libvirt.driver [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Instance spawned successfully.#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.838 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.842 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:43:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.878 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis#033[00m
Jan 27 08:43:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.880 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.882 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.883 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.883 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.883 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.884 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.884 238945 DEBUG nova.virt.libvirt.driver [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.887 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:43:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.894 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[204a0c9f-ec6d-423f-a9c1-cbbd88221cf2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.930 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[95e5c970-6b25-4ae1-812d-9705a043dd1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.934 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d5db0627-5314-4409-aede-53198553fad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.959 238945 DEBUG nova.network.neutron [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Successfully created port: 3c601043-e73a-4b81-b274-c8d791f8bc3d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.966 238945 INFO nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Took 12.64 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:43:54 np0005597378 nova_compute[238941]: 2026-01-27 13:43:54.967 238945 DEBUG nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:43:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.966 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1093d8a8-1151-430e-87f8-00a4ded6e3e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:54.984 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d077dec5-005f-4d76-9588-c6ee28f0b72e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261210, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:55.002 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce768146-8290-47e8-bf3a-c20a103a0eee]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261211, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261211, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:55.005 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.007 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.009 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:55.011 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:55.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:55.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:55.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.042 238945 INFO nova.compute.manager [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Took 14.43 seconds to build instance.#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.130 238945 DEBUG oslo_concurrency.lockutils [None req-22846087-050b-400a-9138-8099da508bd9 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.551 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.663 238945 DEBUG nova.objects.instance [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'migration_context' on Instance uuid b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.671 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] resizing rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.711 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.712 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Ensure instance console log exists: /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.712 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.712 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.713 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:55 np0005597378 NetworkManager[48904]: <info>  [1769521435.8239] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Jan 27 08:43:55 np0005597378 NetworkManager[48904]: <info>  [1769521435.8249] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Jan 27 08:43:55 np0005597378 nova_compute[238941]: 2026-01-27 13:43:55.991 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:55 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:55Z|00121|binding|INFO|Releasing lport f2abaf39-2261-4bb7-9bb5-6208083120f8 from this chassis (sb_readonly=0)
Jan 27 08:43:55 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:55Z|00122|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 386 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.2 MiB/s wr, 151 op/s
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.292 238945 DEBUG nova.objects.instance [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lazy-loading 'migration_context' on Instance uuid 3a83bcb6-4245-4637-81be-f4c0c75bc965 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.307 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.308 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Ensure instance console log exists: /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.309 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.309 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.309 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.510 238945 DEBUG nova.network.neutron [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Updating instance_info_cache with network_info: [{"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.526 238945 DEBUG nova.compute.manager [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-changed-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.526 238945 DEBUG nova.compute.manager [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing instance network info cache due to event network-changed-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.526 238945 DEBUG oslo_concurrency.lockutils [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.527 238945 DEBUG oslo_concurrency.lockutils [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.527 238945 DEBUG nova.network.neutron [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing network info cache for port c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.706 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Releasing lock "refresh_cache-b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.706 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Instance network_info: |[{"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.710 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.710 238945 DEBUG nova.network.neutron [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Refreshing network info cache for port 178da3ce-54dc-4965-aa17-4ac98d2ec152 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.715 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Start _get_guest_xml network_info=[{"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.720 238945 WARNING nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.723 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.727 238945 DEBUG nova.virt.libvirt.host [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.728 238945 DEBUG nova.virt.libvirt.host [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.732 238945 DEBUG nova.virt.libvirt.host [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.732 238945 DEBUG nova.virt.libvirt.host [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.733 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.734 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.734 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.735 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.735 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.735 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.736 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.736 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.736 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.737 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.737 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.737 238945 DEBUG nova.virt.hardware [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.741 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.777 238945 DEBUG oslo_concurrency.lockutils [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] Acquiring lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.777 238945 DEBUG oslo_concurrency.lockutils [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] Acquired lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.778 238945 DEBUG nova.network.neutron [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.859 238945 DEBUG nova.network.neutron [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Successfully updated port: 3c601043-e73a-4b81-b274-c8d791f8bc3d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.890 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.891 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquired lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:56 np0005597378 nova_compute[238941]: 2026-01-27 13:43:56.891 238945 DEBUG nova.network.neutron [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:43:57 np0005597378 nova_compute[238941]: 2026-01-27 13:43:57.090 238945 DEBUG nova.network.neutron [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:43:57 np0005597378 nova_compute[238941]: 2026-01-27 13:43:57.309 238945 DEBUG nova.compute.manager [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-changed-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:57 np0005597378 nova_compute[238941]: 2026-01-27 13:43:57.309 238945 DEBUG nova.compute.manager [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Refreshing instance network info cache due to event network-changed-3c601043-e73a-4b81-b274-c8d791f8bc3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:43:57 np0005597378 nova_compute[238941]: 2026-01-27 13:43:57.310 238945 DEBUG oslo_concurrency.lockutils [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:43:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1310927706' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:57 np0005597378 nova_compute[238941]: 2026-01-27 13:43:57.422 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:57 np0005597378 nova_compute[238941]: 2026-01-27 13:43:57.445 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:57 np0005597378 nova_compute[238941]: 2026-01-27 13:43:57.450 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:57 np0005597378 nova_compute[238941]: 2026-01-27 13:43:57.987 238945 DEBUG nova.network.neutron [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updating instance_info_cache with network_info: [{"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.026 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Releasing lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.026 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Instance network_info: |[{"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.027 238945 DEBUG oslo_concurrency.lockutils [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.027 238945 DEBUG nova.network.neutron [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Refreshing network info cache for port 3c601043-e73a-4b81-b274-c8d791f8bc3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.030 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Start _get_guest_xml network_info=[{"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.041 238945 WARNING nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.049 238945 DEBUG nova.virt.libvirt.host [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.049 238945 DEBUG nova.virt.libvirt.host [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.054 238945 DEBUG nova.virt.libvirt.host [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.054 238945 DEBUG nova.virt.libvirt.host [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.055 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.055 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.056 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.056 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.056 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.056 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.057 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.057 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.057 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.058 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.058 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.058 238945 DEBUG nova.virt.hardware [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.062 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/648106793' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.127 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.129 238945 DEBUG nova.virt.libvirt.vif [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-328526640',display_name='tempest-ImagesOneServerNegativeTestJSON-server-328526640',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-328526640',id=23,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-ii5k0cgw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_u
ser_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:52Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.129 238945 DEBUG nova.network.os_vif_util [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.130 238945 DEBUG nova.network.os_vif_util [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.131 238945 DEBUG nova.objects.instance [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'pci_devices' on Instance uuid b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.157 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  <uuid>b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33</uuid>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  <name>instance-00000017</name>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-328526640</nova:name>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:43:56</nova:creationTime>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:        <nova:user uuid="bb804373b8be4577a6623d2131cdcd59">tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member</nova:user>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:        <nova:project uuid="c8773022351141649f1c7a9db9002d2f">tempest-ImagesOneServerNegativeTestJSON-1108889514</nova:project>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:        <nova:port uuid="178da3ce-54dc-4965-aa17-4ac98d2ec152">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <entry name="serial">b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33</entry>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <entry name="uuid">b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33</entry>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk.config">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:64:5f:a0"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <target dev="tap178da3ce-54"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/console.log" append="off"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:43:58 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:43:58 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:43:58 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:43:58 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.158 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Preparing to wait for external event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.158 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.159 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.159 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.160 238945 DEBUG nova.virt.libvirt.vif [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-328526640',display_name='tempest-ImagesOneServerNegativeTestJSON-server-328526640',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-328526640',id=23,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-ii5k0cgw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-110888951
4',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:52Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.160 238945 DEBUG nova.network.os_vif_util [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.161 238945 DEBUG nova.network.os_vif_util [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.161 238945 DEBUG os_vif [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.162 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.162 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.165 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.165 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap178da3ce-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.165 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap178da3ce-54, col_values=(('external_ids', {'iface-id': '178da3ce-54dc-4965-aa17-4ac98d2ec152', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:5f:a0', 'vm-uuid': 'b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 386 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.3 MiB/s wr, 146 op/s
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.167 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:58 np0005597378 NetworkManager[48904]: <info>  [1769521438.1677] manager: (tap178da3ce-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.170 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.177 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.179 238945 INFO os_vif [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54')#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.346 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.347 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.347 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No VIF found with MAC fa:16:3e:64:5f:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.347 238945 INFO nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Using config drive#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.368 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.433 238945 DEBUG nova.network.neutron [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updated VIF entry in instance network info cache for port c2b2aaa7-69a4-4868-bbe6-d21fd9974c34. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.434 238945 DEBUG nova.network.neutron [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.469 238945 DEBUG oslo_concurrency.lockutils [req-d2fdc699-05f1-4aa4-b41e-800c2df0be50 req-5240d61c-7469-4eca-9ac7-cb2fd965a4c6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3444780049' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.691 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.740 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:58 np0005597378 nova_compute[238941]: 2026-01-27 13:43:58.748 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.092 238945 DEBUG nova.network.neutron [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Updating instance_info_cache with network_info: [{"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.097 238945 INFO nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Creating config drive at /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/disk.config#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.102 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpedd38jk4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.131 238945 DEBUG nova.network.neutron [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Updated VIF entry in instance network info cache for port 178da3ce-54dc-4965-aa17-4ac98d2ec152. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.132 238945 DEBUG nova.network.neutron [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Updating instance_info_cache with network_info: [{"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.153 238945 DEBUG oslo_concurrency.lockutils [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] Releasing lock "refresh_cache-b816093f-751c-4d16-bb91-82ae954a9732" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.154 238945 DEBUG nova.compute.manager [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.154 238945 DEBUG nova.compute.manager [None req-488328f2-42ee-4cb9-9804-ee8085712093 1aef55e2bf8143008de9a251079854e7 2fbfbc72d44e440eb05d4b9391466413 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] network_info to inject: |[{"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.155 238945 DEBUG nova.network.neutron [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updated VIF entry in instance network info cache for port 3c601043-e73a-4b81-b274-c8d791f8bc3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.156 238945 DEBUG nova.network.neutron [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updating instance_info_cache with network_info: [{"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.158 238945 DEBUG oslo_concurrency.lockutils [req-6377a386-17e3-464f-b0cc-06865b2a300f req-2c3dcb2c-3d49-42f3-8a14-7b5e0b66dcb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.238 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpedd38jk4" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.267 238945 DEBUG nova.storage.rbd_utils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.274 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/disk.config b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.301 238945 DEBUG oslo_concurrency.lockutils [req-5128ecba-ec62-43c8-8acf-f51400ae5b52 req-cb786650-72e0-4d5a-b5f3-2245b57286e4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:43:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:43:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2892291461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.441 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.443 238945 DEBUG nova.virt.libvirt.vif [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-139273636',id=24,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07f2b9fda9204458be8cb076e9d2b9f3',ramdisk_id='',reservation_id='r-7ix8tutd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-754008104',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-754008104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:53Z,user_data=None,user_id='5bbd48f1c4304d319aa847aa717dd4d6',uuid=3a83bcb6-4245-4637-81be-f4c0c75bc965,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.443 238945 DEBUG nova.network.os_vif_util [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Converting VIF {"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.444 238945 DEBUG nova.network.os_vif_util [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.445 238945 DEBUG nova.objects.instance [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a83bcb6-4245-4637-81be-f4c0c75bc965 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.465 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  <uuid>3a83bcb6-4245-4637-81be-f4c0c75bc965</uuid>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  <name>instance-00000018</name>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369</nova:name>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:43:58</nova:creationTime>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:        <nova:user uuid="5bbd48f1c4304d319aa847aa717dd4d6">tempest-FloatingIPsAssociationNegativeTestJSON-754008104-project-member</nova:user>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:        <nova:project uuid="07f2b9fda9204458be8cb076e9d2b9f3">tempest-FloatingIPsAssociationNegativeTestJSON-754008104</nova:project>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:        <nova:port uuid="3c601043-e73a-4b81-b274-c8d791f8bc3d">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <entry name="serial">3a83bcb6-4245-4637-81be-f4c0c75bc965</entry>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <entry name="uuid">3a83bcb6-4245-4637-81be-f4c0c75bc965</entry>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3a83bcb6-4245-4637-81be-f4c0c75bc965_disk">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3a83bcb6-4245-4637-81be-f4c0c75bc965_disk.config">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:84:71:9b"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <target dev="tap3c601043-e7"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/console.log" append="off"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:43:59 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:43:59 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:43:59 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:43:59 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.465 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Preparing to wait for external event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.465 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.466 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.466 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.467 238945 DEBUG nova.virt.libvirt.vif [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-139273636',id=24,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07f2b9fda9204458be8cb076e9d2b9f3',ramdisk_id='',reservation_id='r-7ix8tutd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-754008104',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-754008104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:43:53Z,user_data=None,user_id='5bbd48f1c4304d319aa847aa717dd4d6',uuid=3a83bcb6-4245-4637-81be-f4c0c75bc965,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.467 238945 DEBUG nova.network.os_vif_util [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Converting VIF {"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.468 238945 DEBUG nova.network.os_vif_util [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.468 238945 DEBUG os_vif [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.469 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.469 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.472 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.472 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c601043-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.472 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c601043-e7, col_values=(('external_ids', {'iface-id': '3c601043-e73a-4b81-b274-c8d791f8bc3d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:71:9b', 'vm-uuid': '3a83bcb6-4245-4637-81be-f4c0c75bc965'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:59 np0005597378 NetworkManager[48904]: <info>  [1769521439.4755] manager: (tap3c601043-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.476 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.488 238945 INFO os_vif [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7')#033[00m
Jan 27 08:43:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:43:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1215491735' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:43:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:43:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1215491735' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.541 238945 DEBUG oslo_concurrency.processutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/disk.config b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.541 238945 INFO nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Deleting local config drive /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33/disk.config because it was imported into RBD.#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.572 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.573 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.574 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] No VIF found with MAC fa:16:3e:84:71:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.575 238945 INFO nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Using config drive#033[00m
Jan 27 08:43:59 np0005597378 NetworkManager[48904]: <info>  [1769521439.5976] manager: (tap178da3ce-54): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Jan 27 08:43:59 np0005597378 kernel: tap178da3ce-54: entered promiscuous mode
Jan 27 08:43:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:59Z|00123|binding|INFO|Claiming lport 178da3ce-54dc-4965-aa17-4ac98d2ec152 for this chassis.
Jan 27 08:43:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:59Z|00124|binding|INFO|178da3ce-54dc-4965-aa17-4ac98d2ec152: Claiming fa:16:3e:64:5f:a0 10.100.0.11
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.608 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.624 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:5f:a0 10.100.0.11'], port_security=['fa:16:3e:64:5f:a0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58208cdc-4099-47ab-9729-2e87f01c74f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8773022351141649f1c7a9db9002d2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b756e75d-bbaf-406b-aafe-8ee3c670480f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77e2e80c-2188-4e01-a2f5-11190a5d263b, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=178da3ce-54dc-4965-aa17-4ac98d2ec152) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.625 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 178da3ce-54dc-4965-aa17-4ac98d2ec152 in datapath 58208cdc-4099-47ab-9729-2e87f01c74f8 bound to our chassis#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.627 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58208cdc-4099-47ab-9729-2e87f01c74f8#033[00m
Jan 27 08:43:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:59Z|00125|binding|INFO|Setting lport 178da3ce-54dc-4965-aa17-4ac98d2ec152 ovn-installed in OVS
Jan 27 08:43:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:43:59Z|00126|binding|INFO|Setting lport 178da3ce-54dc-4965-aa17-4ac98d2ec152 up in Southbound
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.642 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dd231365-2515-4804-b48f-8fcf9e3288f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.643 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58208cdc-41 in ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.645 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.646 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58208cdc-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.646 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[43b456d9-d72d-4071-ade1-7d6804778ad8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 systemd-udevd[261525]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.649 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08a58de0-88f1-4e67-89a7-825af23d7e64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 systemd-machined[207425]: New machine qemu-25-instance-00000017.
Jan 27 08:43:59 np0005597378 systemd[1]: Started Virtual Machine qemu-25-instance-00000017.
Jan 27 08:43:59 np0005597378 NetworkManager[48904]: <info>  [1769521439.6671] device (tap178da3ce-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:43:59 np0005597378 NetworkManager[48904]: <info>  [1769521439.6677] device (tap178da3ce-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.670 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[ff192270-2b55-42ad-8b60-1926ce9b63ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.688 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[222844c2-c356-40b8-93d9-a5931afad42e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.745 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9eed2f-bf45-4f23-aa50-b7f115e9d865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 NetworkManager[48904]: <info>  [1769521439.7517] manager: (tap58208cdc-40): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.751 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3e74c8-002e-4b77-a689-e6efc0f82a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.796 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[17516657-f998-4930-a791-4c00c5985d5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.802 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5b87a6b3-39ca-4fda-85e9-a2078535b6ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 NetworkManager[48904]: <info>  [1769521439.8300] device (tap58208cdc-40): carrier: link connected
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.838 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0781b237-16d8-47a4-a844-b04247cb002c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.862 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5955fc-611e-47df-9964-eb2908b66713]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58208cdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e7:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407945, 'reachable_time': 28400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261560, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.887 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbaa5d1-aed5-4802-aa2e-0277fa099c54]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:e7f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407945, 'tstamp': 407945}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261561, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.905 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08b84fc2-2568-4823-b1bb-070f28eb36ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58208cdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e7:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407945, 'reachable_time': 28400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261562, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:43:59.937 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1526a28e-9f5b-4a89-97ac-f14112382379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.961 238945 DEBUG nova.compute.manager [req-07e11d2b-ad89-41c2-8083-61d5ad63fa90 req-b190fb1d-444a-489c-985d-b5b6911ac0e7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.961 238945 DEBUG oslo_concurrency.lockutils [req-07e11d2b-ad89-41c2-8083-61d5ad63fa90 req-b190fb1d-444a-489c-985d-b5b6911ac0e7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.961 238945 DEBUG oslo_concurrency.lockutils [req-07e11d2b-ad89-41c2-8083-61d5ad63fa90 req-b190fb1d-444a-489c-985d-b5b6911ac0e7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.962 238945 DEBUG oslo_concurrency.lockutils [req-07e11d2b-ad89-41c2-8083-61d5ad63fa90 req-b190fb1d-444a-489c-985d-b5b6911ac0e7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:43:59 np0005597378 nova_compute[238941]: 2026-01-27 13:43:59.962 238945 DEBUG nova.compute.manager [req-07e11d2b-ad89-41c2-8083-61d5ad63fa90 req-b190fb1d-444a-489c-985d-b5b6911ac0e7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Processing event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.006 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08e66189-7c96-4fce-8570-e174fc8e15a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.008 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58208cdc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.008 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.008 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58208cdc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:00 np0005597378 kernel: tap58208cdc-40: entered promiscuous mode
Jan 27 08:44:00 np0005597378 NetworkManager[48904]: <info>  [1769521440.0108] manager: (tap58208cdc-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.010 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.016 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58208cdc-40, col_values=(('external_ids', {'iface-id': '42783ab6-7560-4ef7-b70e-aaa544a1d882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:00 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:00Z|00127|binding|INFO|Releasing lport 42783ab6-7560-4ef7-b70e-aaa544a1d882 from this chassis (sb_readonly=0)
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.039 238945 INFO nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Creating config drive at /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/disk.config#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.045 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwfa2tntm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.045 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.048 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[336d3ad8-a6af-440e-9d37-a1ac480eeec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.050 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.051 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'env', 'PROCESS_TAG=haproxy-58208cdc-4099-47ab-9729-2e87f01c74f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58208cdc-4099-47ab-9729-2e87f01c74f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.077 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.3 MiB/s wr, 277 op/s
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.185 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwfa2tntm" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.210 238945 DEBUG nova.storage.rbd_utils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] rbd image 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.221 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/disk.config 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.242 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521440.1850915, b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.243 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] VM Started (Lifecycle Event)#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.246 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.250 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.259 238945 INFO nova.virt.libvirt.driver [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Instance spawned successfully.#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.260 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.270 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.274 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.296 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.296 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.297 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.297 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.297 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.298 238945 DEBUG nova.virt.libvirt.driver [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.304 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.304 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521440.1854868, b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.304 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.331 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.341 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521440.2488546, b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.342 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.358 238945 INFO nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Took 7.80 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.359 238945 DEBUG nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.369 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.374 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.416 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.444 238945 INFO nova.compute.manager [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Took 9.71 seconds to build instance.#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.510 238945 DEBUG oslo_concurrency.lockutils [None req-096f840f-c7d7-4fb1-bbeb-44bbb54645fa bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:00 np0005597378 podman[261679]: 2026-01-27 13:44:00.449484646 +0000 UTC m=+0.027254877 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:44:00 np0005597378 podman[261679]: 2026-01-27 13:44:00.718120412 +0000 UTC m=+0.295890623 container create 5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.718 238945 DEBUG oslo_concurrency.processutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/disk.config 3a83bcb6-4245-4637-81be-f4c0c75bc965_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.718 238945 INFO nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Deleting local config drive /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965/disk.config because it was imported into RBD.#033[00m
Jan 27 08:44:00 np0005597378 kernel: tap3c601043-e7: entered promiscuous mode
Jan 27 08:44:00 np0005597378 NetworkManager[48904]: <info>  [1769521440.7977] manager: (tap3c601043-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Jan 27 08:44:00 np0005597378 systemd-udevd[261549]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:44:00 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:00Z|00128|binding|INFO|Claiming lport 3c601043-e73a-4b81-b274-c8d791f8bc3d for this chassis.
Jan 27 08:44:00 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:00Z|00129|binding|INFO|3c601043-e73a-4b81-b274-c8d791f8bc3d: Claiming fa:16:3e:84:71:9b 10.100.0.14
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.805 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:00 np0005597378 NetworkManager[48904]: <info>  [1769521440.8216] device (tap3c601043-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:44:00 np0005597378 NetworkManager[48904]: <info>  [1769521440.8227] device (tap3c601043-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:44:00 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:00Z|00130|binding|INFO|Setting lport 3c601043-e73a-4b81-b274-c8d791f8bc3d ovn-installed in OVS
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.834 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:00 np0005597378 nova_compute[238941]: 2026-01-27 13:44:00.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:00 np0005597378 systemd-machined[207425]: New machine qemu-26-instance-00000018.
Jan 27 08:44:00 np0005597378 systemd[1]: Started Virtual Machine qemu-26-instance-00000018.
Jan 27 08:44:00 np0005597378 systemd[1]: Started libpod-conmon-5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509.scope.
Jan 27 08:44:00 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:00Z|00131|binding|INFO|Setting lport 3c601043-e73a-4b81-b274-c8d791f8bc3d up in Southbound
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.883 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:71:9b 10.100.0.14'], port_security=['fa:16:3e:84:71:9b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3a83bcb6-4245-4637-81be-f4c0c75bc965', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07f2b9fda9204458be8cb076e9d2b9f3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '58ec66a6-49e6-409b-9a00-2ec14259d882', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b47c7cc-bcaa-4c9d-99f8-0bd4e23fa5fa, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3c601043-e73a-4b81-b274-c8d791f8bc3d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:44:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6566e0cad5da8144f268f9bf68209f99efd43e6d287847402d8fec50b6dba0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:00 np0005597378 podman[261679]: 2026-01-27 13:44:00.917138284 +0000 UTC m=+0.494908515 container init 5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:44:00 np0005597378 podman[261679]: 2026-01-27 13:44:00.924252176 +0000 UTC m=+0.502022387 container start 5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:44:00 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [NOTICE]   (261717) : New worker (261720) forked
Jan 27 08:44:00 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [NOTICE]   (261717) : Loading success.
Jan 27 08:44:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:00.999 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3c601043-e73a-4b81-b274-c8d791f8bc3d in datapath d69bf49f-3f47-4f46-973f-413e56d2f52a unbound from our chassis#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.001 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d69bf49f-3f47-4f46-973f-413e56d2f52a#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.014 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[910b50fd-20b8-4a58-943b-66066da274d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.015 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd69bf49f-31 in ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.017 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd69bf49f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.017 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2b4bc924-1802-4950-9893-32044b203b21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.018 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f186a4c7-84ce-4424-8055-f9bcbf079956]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.038 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[f8db182b-da47-4ad4-b689-5ad954d356e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.059 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b52e368b-2f61-44d9-878e-399212a892fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.101 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[eedafb92-5773-449a-92b9-5bc271c8e7bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.102 238945 DEBUG nova.compute.manager [req-fb5ed4b5-7abd-41d7-8537-409e5a7aa43c req-46aeb7f5-cbb5-4cbf-be7f-0ced96ea073a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.102 238945 DEBUG oslo_concurrency.lockutils [req-fb5ed4b5-7abd-41d7-8537-409e5a7aa43c req-46aeb7f5-cbb5-4cbf-be7f-0ced96ea073a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.103 238945 DEBUG oslo_concurrency.lockutils [req-fb5ed4b5-7abd-41d7-8537-409e5a7aa43c req-46aeb7f5-cbb5-4cbf-be7f-0ced96ea073a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.103 238945 DEBUG oslo_concurrency.lockutils [req-fb5ed4b5-7abd-41d7-8537-409e5a7aa43c req-46aeb7f5-cbb5-4cbf-be7f-0ced96ea073a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.103 238945 DEBUG nova.compute.manager [req-fb5ed4b5-7abd-41d7-8537-409e5a7aa43c req-46aeb7f5-cbb5-4cbf-be7f-0ced96ea073a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Processing event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:44:01 np0005597378 NetworkManager[48904]: <info>  [1769521441.1116] manager: (tapd69bf49f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.110 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[518a8e25-0c38-4aec-b91f-c14908a5f455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.153 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9979d49d-a6d5-4f5c-92f4-07c45b1d9aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.156 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f4788645-1ec7-44c5-93bd-0b5f37dfb3f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 NetworkManager[48904]: <info>  [1769521441.1820] device (tapd69bf49f-30): carrier: link connected
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.190 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e537a4-34f8-441f-a773-fac84e567cf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.213 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd75267-7532-472d-80f2-008ab74aefd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd69bf49f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:a3:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408080, 'reachable_time': 37814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261739, 'error': None, 'target': 'ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.240 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f12cecc5-4d97-49eb-b139-701b836553e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:a304'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 408080, 'tstamp': 408080}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261740, 'error': None, 'target': 'ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d5022c-c992-4b1e-b2a6-f4c5befc51c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd69bf49f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:a3:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408080, 'reachable_time': 37814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261741, 'error': None, 'target': 'ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.301 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4afc0d9b-fe79-426a-adb7-44aae0e2b8d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.385 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c539939-1bfb-4c99-b418-89932e9ff0c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.387 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd69bf49f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.387 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.388 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd69bf49f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:01 np0005597378 NetworkManager[48904]: <info>  [1769521441.3908] manager: (tapd69bf49f-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Jan 27 08:44:01 np0005597378 kernel: tapd69bf49f-30: entered promiscuous mode
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.391 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.393 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.394 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd69bf49f-30, col_values=(('external_ids', {'iface-id': 'a089d2f4-e1c5-43e5-8875-9297c14d1a3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.396 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.397 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d69bf49f-3f47-4f46-973f-413e56d2f52a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d69bf49f-3f47-4f46-973f-413e56d2f52a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.398 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08d4ad0d-02dd-40bf-87e9-9ab67eb97df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:01 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:01Z|00132|binding|INFO|Releasing lport a089d2f4-e1c5-43e5-8875-9297c14d1a3f from this chassis (sb_readonly=0)
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.399 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-d69bf49f-3f47-4f46-973f-413e56d2f52a
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/d69bf49f-3f47-4f46-973f-413e56d2f52a.pid.haproxy
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID d69bf49f-3f47-4f46-973f-413e56d2f52a
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:44:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:01.400 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'env', 'PROCESS_TAG=haproxy-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d69bf49f-3f47-4f46-973f-413e56d2f52a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.419 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.706 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521441.7055128, 3a83bcb6-4245-4637-81be-f4c0c75bc965 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.706 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] VM Started (Lifecycle Event)#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.708 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.713 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.719 238945 INFO nova.virt.libvirt.driver [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Instance spawned successfully.#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.720 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.726 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.731 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.739 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.748 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.749 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.750 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.750 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.751 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.752 238945 DEBUG nova.virt.libvirt.driver [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.789 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.789 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521441.7058213, 3a83bcb6-4245-4637-81be-f4c0c75bc965 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.789 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.829 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.836 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521441.7126477, 3a83bcb6-4245-4637-81be-f4c0c75bc965 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.836 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.845 238945 INFO nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Took 8.33 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.845 238945 DEBUG nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.862 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.872 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.892 238945 INFO nova.compute.manager [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Rebuilding instance#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.911 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.932 238945 INFO nova.compute.manager [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Took 9.96 seconds to build instance.#033[00m
Jan 27 08:44:01 np0005597378 nova_compute[238941]: 2026-01-27 13:44:01.954 238945 DEBUG oslo_concurrency.lockutils [None req-feb08a74-43c6-49a0-b809-9c525f2fbe9b 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:01 np0005597378 podman[261812]: 2026-01-27 13:44:01.968613862 +0000 UTC m=+0.085771381 container create b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 08:44:02 np0005597378 podman[261812]: 2026-01-27 13:44:01.920509881 +0000 UTC m=+0.037667420 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:44:02 np0005597378 systemd[1]: Started libpod-conmon-b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc.scope.
Jan 27 08:44:02 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.073 238945 DEBUG nova.compute.manager [req-5f2b20d7-dacf-4202-a010-3b6d98b77360 req-78bb3e48-bc83-429c-a912-757e302661d1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.075 238945 DEBUG oslo_concurrency.lockutils [req-5f2b20d7-dacf-4202-a010-3b6d98b77360 req-78bb3e48-bc83-429c-a912-757e302661d1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.075 238945 DEBUG oslo_concurrency.lockutils [req-5f2b20d7-dacf-4202-a010-3b6d98b77360 req-78bb3e48-bc83-429c-a912-757e302661d1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.075 238945 DEBUG oslo_concurrency.lockutils [req-5f2b20d7-dacf-4202-a010-3b6d98b77360 req-78bb3e48-bc83-429c-a912-757e302661d1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.076 238945 DEBUG nova.compute.manager [req-5f2b20d7-dacf-4202-a010-3b6d98b77360 req-78bb3e48-bc83-429c-a912-757e302661d1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] No waiting events found dispatching network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8721205b28cc5fae1ea1768ac2616775241516c97db180811030d1178b496c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.076 238945 WARNING nova.compute.manager [req-5f2b20d7-dacf-4202-a010-3b6d98b77360 req-78bb3e48-bc83-429c-a912-757e302661d1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received unexpected event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:44:02 np0005597378 podman[261812]: 2026-01-27 13:44:02.095511714 +0000 UTC m=+0.212669263 container init b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 08:44:02 np0005597378 podman[261812]: 2026-01-27 13:44:02.105714179 +0000 UTC m=+0.222871698 container start b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:44:02 np0005597378 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [NOTICE]   (261831) : New worker (261833) forked
Jan 27 08:44:02 np0005597378 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [NOTICE]   (261831) : Loading success.
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.145 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.163 238945 DEBUG nova.compute.manager [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.3 MiB/s wr, 255 op/s
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.214 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_requests' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.226 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_devices' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.243 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'resources' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.260 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'migration_context' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.276 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.287 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 08:44:02 np0005597378 nova_compute[238941]: 2026-01-27 13:44:02.972 238945 DEBUG nova.compute.manager [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:03 np0005597378 nova_compute[238941]: 2026-01-27 13:44:03.012 238945 INFO nova.compute.manager [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] instance snapshotting#033[00m
Jan 27 08:44:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:44:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3060 syncs, 3.67 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5383 writes, 20K keys, 5383 commit groups, 1.0 writes per commit group, ingest: 20.71 MB, 0.03 MB/s#012Interval WAL: 5383 writes, 2057 syncs, 2.62 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 08:44:03 np0005597378 nova_compute[238941]: 2026-01-27 13:44:03.265 238945 INFO nova.virt.libvirt.driver [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Beginning live snapshot process#033[00m
Jan 27 08:44:03 np0005597378 nova_compute[238941]: 2026-01-27 13:44:03.425 238945 DEBUG nova.virt.libvirt.imagebackend [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 08:44:03 np0005597378 nova_compute[238941]: 2026-01-27 13:44:03.633 238945 DEBUG nova.storage.rbd_utils [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] creating snapshot(52a0ec719d4741e9b16e698de4e55d55) on rbd image(b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:44:03 np0005597378 nova_compute[238941]: 2026-01-27 13:44:03.686 238945 DEBUG nova.compute.manager [req-5050da69-e408-47c1-888c-cf26336cd73d req-e869c1bd-97db-4b11-b2e0-3fe0e489881d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:03 np0005597378 nova_compute[238941]: 2026-01-27 13:44:03.687 238945 DEBUG oslo_concurrency.lockutils [req-5050da69-e408-47c1-888c-cf26336cd73d req-e869c1bd-97db-4b11-b2e0-3fe0e489881d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:03 np0005597378 nova_compute[238941]: 2026-01-27 13:44:03.688 238945 DEBUG oslo_concurrency.lockutils [req-5050da69-e408-47c1-888c-cf26336cd73d req-e869c1bd-97db-4b11-b2e0-3fe0e489881d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:03 np0005597378 nova_compute[238941]: 2026-01-27 13:44:03.688 238945 DEBUG oslo_concurrency.lockutils [req-5050da69-e408-47c1-888c-cf26336cd73d req-e869c1bd-97db-4b11-b2e0-3fe0e489881d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:03 np0005597378 nova_compute[238941]: 2026-01-27 13:44:03.689 238945 DEBUG nova.compute.manager [req-5050da69-e408-47c1-888c-cf26336cd73d req-e869c1bd-97db-4b11-b2e0-3fe0e489881d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] No waiting events found dispatching network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:03 np0005597378 nova_compute[238941]: 2026-01-27 13:44:03.689 238945 WARNING nova.compute.manager [req-5050da69-e408-47c1-888c-cf26336cd73d req-e869c1bd-97db-4b11-b2e0-3fe0e489881d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received unexpected event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d for instance with vm_state active and task_state None.#033[00m
Jan 27 08:44:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 5.4 MiB/s wr, 378 op/s
Jan 27 08:44:04 np0005597378 nova_compute[238941]: 2026-01-27 13:44:04.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 27 08:44:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 27 08:44:05 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.325 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance shutdown successfully after 3 seconds.#033[00m
Jan 27 08:44:05 np0005597378 kernel: tap851829c6-49 (unregistering): left promiscuous mode
Jan 27 08:44:05 np0005597378 NetworkManager[48904]: <info>  [1769521445.4458] device (tap851829c6-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:05Z|00133|binding|INFO|Releasing lport 851829c6-49a6-4580-90d9-df985a736216 from this chassis (sb_readonly=0)
Jan 27 08:44:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:05Z|00134|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 down in Southbound
Jan 27 08:44:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:05Z|00135|binding|INFO|Removing iface tap851829c6-49 ovn-installed in OVS
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.480 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.486 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.487 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.489 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef#033[00m
Jan 27 08:44:05 np0005597378 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Deactivated successfully.
Jan 27 08:44:05 np0005597378 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Consumed 16.078s CPU time.
Jan 27 08:44:05 np0005597378 systemd-machined[207425]: Machine qemu-19-instance-00000011 terminated.
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.510 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8f63de-10a0-4084-a895-7c38aedc2f9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 kernel: tap851829c6-49: entered promiscuous mode
Jan 27 08:44:05 np0005597378 NetworkManager[48904]: <info>  [1769521445.5426] manager: (tap851829c6-49): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Jan 27 08:44:05 np0005597378 kernel: tap851829c6-49 (unregistering): left promiscuous mode
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.547 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:05Z|00136|binding|INFO|Claiming lport 851829c6-49a6-4580-90d9-df985a736216 for this chassis.
Jan 27 08:44:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:05Z|00137|binding|INFO|851829c6-49a6-4580-90d9-df985a736216: Claiming fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.561 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.561 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c65a8072-028f-4149-ab7d-192e0aef1fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.564 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[20508759-5779-464a-aa4c-f0e0c97a6551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.571 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance destroyed successfully.#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:05Z|00138|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 ovn-installed in OVS
Jan 27 08:44:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:05Z|00139|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 up in Southbound
Jan 27 08:44:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:05Z|00140|binding|INFO|Releasing lport 851829c6-49a6-4580-90d9-df985a736216 from this chassis (sb_readonly=1)
Jan 27 08:44:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:05Z|00141|if_status|INFO|Not setting lport 851829c6-49a6-4580-90d9-df985a736216 down as sb is readonly
Jan 27 08:44:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:05Z|00142|binding|INFO|Removing iface tap851829c6-49 ovn-installed in OVS
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.579 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.580 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance destroyed successfully.#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.580 238945 DEBUG nova.virt.libvirt.vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,tas
k_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:01Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.581 238945 DEBUG nova.network.os_vif_util [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.582 238945 DEBUG nova.network.os_vif_util [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.582 238945 DEBUG os_vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.584 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.584 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap851829c6-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.586 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.588 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.597 238945 INFO os_vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49')#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.605 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e6022e67-9391-4733-a206-ca527ed0be98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.622 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[81608ab8-3aa7-44bb-8872-2bcc2597f755]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261915, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.642 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c3403cda-ecf6-4d52-8b19-519fbf1a16b9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261916, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261916, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.644 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.647 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.648 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.648 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.649 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.650 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef bound to our chassis#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.652 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.673 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b855241b-3172-4b74-8bff-f1fcf954482b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:05Z|00143|binding|INFO|Releasing lport 851829c6-49a6-4580-90d9-df985a736216 from this chassis (sb_readonly=0)
Jan 27 08:44:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:05Z|00144|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 down in Southbound
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.707 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c784470f-fe73-4151-b64d-618b3548154a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.713 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5b791224-2058-4b41-9e32-e07b3fbe7a6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.741 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.760 238945 DEBUG nova.compute.manager [req-6517813f-0bd2-4acb-b689-9e1c34508677 req-11c7c278-ab3c-42b5-b48c-c7d90f2b25a7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.760 238945 DEBUG oslo_concurrency.lockutils [req-6517813f-0bd2-4acb-b689-9e1c34508677 req-11c7c278-ab3c-42b5-b48c-c7d90f2b25a7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.761 238945 DEBUG oslo_concurrency.lockutils [req-6517813f-0bd2-4acb-b689-9e1c34508677 req-11c7c278-ab3c-42b5-b48c-c7d90f2b25a7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.761 238945 DEBUG oslo_concurrency.lockutils [req-6517813f-0bd2-4acb-b689-9e1c34508677 req-11c7c278-ab3c-42b5-b48c-c7d90f2b25a7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.761 238945 DEBUG nova.compute.manager [req-6517813f-0bd2-4acb-b689-9e1c34508677 req-11c7c278-ab3c-42b5-b48c-c7d90f2b25a7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.762 238945 WARNING nova.compute.manager [req-6517813f-0bd2-4acb-b689-9e1c34508677 req-11c7c278-ab3c-42b5-b48c-c7d90f2b25a7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state error and task_state rebuilding.#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.769 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.765 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bbcfe1ed-7887-4250-a0d6-ca5ec9f3ad0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.807 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b37d31-defa-4bb6-8981-e422e74cd013]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261933, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.825 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[68e5e029-61ea-4874-8d03-1b000cc04d49]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261934, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261934, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.827 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.828 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.829 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.831 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.831 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.831 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.832 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.833 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.835 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.849 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[346287b2-f73b-44ef-a034-c9242063c4ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.880 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[41e1e9b0-c374-4bf4-a80a-a1ab810d05ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.883 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e6aed487-c740-4ee5-aa10-bdc43673fdd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.912 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9a259184-dd4f-46a8-9a40-3612ab1209fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.929 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd0f278-6719-4f58-a75f-e4aa39210d36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261940, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.935 238945 DEBUG nova.storage.rbd_utils [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] cloning vms/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk@52a0ec719d4741e9b16e698de4e55d55 to images/6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.954 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db3826c2-0a99-4229-a180-e1ca2bebbf1a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261941, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261941, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.956 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.959 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.959 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.960 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:05.960 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.973 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.975 238945 DEBUG nova.compute.manager [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-changed-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.975 238945 DEBUG nova.compute.manager [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Refreshing instance network info cache due to event network-changed-3c601043-e73a-4b81-b274-c8d791f8bc3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.976 238945 DEBUG oslo_concurrency.lockutils [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.976 238945 DEBUG oslo_concurrency.lockutils [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:44:05 np0005597378 nova_compute[238941]: 2026-01-27 13:44:05.976 238945 DEBUG nova.network.neutron [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Refreshing network info cache for port 3c601043-e73a-4b81-b274-c8d791f8bc3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:44:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 3.6 MiB/s wr, 340 op/s
Jan 27 08:44:06 np0005597378 nova_compute[238941]: 2026-01-27 13:44:06.271 238945 DEBUG nova.storage.rbd_utils [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] flattening images/6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:44:06 np0005597378 nova_compute[238941]: 2026-01-27 13:44:06.728 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:44:07 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 27 08:44:07 np0005597378 nova_compute[238941]: 2026-01-27 13:44:07.383 238945 DEBUG nova.storage.rbd_utils [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] removing snapshot(52a0ec719d4741e9b16e698de4e55d55) on rbd image(b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:44:07 np0005597378 nova_compute[238941]: 2026-01-27 13:44:07.673 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deleting instance files /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e_del#033[00m
Jan 27 08:44:07 np0005597378 nova_compute[238941]: 2026-01-27 13:44:07.673 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deletion of /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e_del complete#033[00m
Jan 27 08:44:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 27 08:44:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 27 08:44:07 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 27 08:44:07 np0005597378 nova_compute[238941]: 2026-01-27 13:44:07.837 238945 DEBUG nova.network.neutron [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updated VIF entry in instance network info cache for port 3c601043-e73a-4b81-b274-c8d791f8bc3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:44:07 np0005597378 nova_compute[238941]: 2026-01-27 13:44:07.838 238945 DEBUG nova.network.neutron [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updating instance_info_cache with network_info: [{"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:07 np0005597378 nova_compute[238941]: 2026-01-27 13:44:07.857 238945 DEBUG nova.storage.rbd_utils [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] creating snapshot(snap) on rbd image(6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:44:07 np0005597378 nova_compute[238941]: 2026-01-27 13:44:07.979 238945 DEBUG oslo_concurrency.lockutils [req-8af51d5e-4810-4d3a-b389-2c253153ae9f req-4c22dc36-480e-428f-8a67-670e65a7226b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.162 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.162 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.162 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.162 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.163 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.164 238945 WARNING nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state error and task_state rebuild_block_device_mapping.#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.164 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.165 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.165 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.165 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.165 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.166 238945 WARNING nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state error and task_state rebuild_block_device_mapping.#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.166 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.166 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.166 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.167 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.167 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.168 238945 WARNING nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state error and task_state rebuild_block_device_mapping.#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.168 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.168 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.168 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.169 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.170 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.170 238945 WARNING nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state error and task_state rebuild_block_device_mapping.#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.170 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 27 08:44:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 465 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 62 KiB/s wr, 228 op/s
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.171 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.172 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.173 238945 DEBUG oslo_concurrency.lockutils [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.173 238945 DEBUG nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.173 238945 WARNING nova.compute.manager [req-db62cb47-8926-4871-8937-b83fb9b0b762 req-e2953fc3-7f76-4840-99cc-f333df9a15f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state error and task_state rebuild_block_device_mapping.#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.176 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.177 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating image(s)#033[00m
Jan 27 08:44:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:44:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.3 total, 600.0 interval#012Cumulative writes: 12K writes, 50K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 3467 syncs, 3.62 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5393 writes, 21K keys, 5393 commit groups, 1.0 writes per commit group, ingest: 21.63 MB, 0.04 MB/s#012Interval WAL: 5393 writes, 2052 syncs, 2.63 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.204 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.239 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.266 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.270 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.369 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.370 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.371 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.372 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:08 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:08Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:2d:f1 10.100.0.7
Jan 27 08:44:08 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:08Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:2d:f1 10.100.0.7
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.398 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.402 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea bee7c432-6457-4160-917c-a807eca3df0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.726 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea bee7c432-6457-4160-917c-a807eca3df0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:08 np0005597378 podman[262127]: 2026-01-27 13:44:08.780484539 +0000 UTC m=+0.099930423 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 27 08:44:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.818 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] resizing rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:44:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 27 08:44:08 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 27 08:44:08 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.990 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.992 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Ensure instance console log exists: /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.992 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.993 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.993 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:08 np0005597378 nova_compute[238941]: 2026-01-27 13:44:08.996 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start _get_guest_xml network_info=[{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.003 238945 WARNING nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.010 238945 DEBUG nova.virt.libvirt.host [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.011 238945 DEBUG nova.virt.libvirt.host [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.014 238945 DEBUG nova.virt.libvirt.host [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.014 238945 DEBUG nova.virt.libvirt.host [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.015 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.016 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.016 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.017 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.017 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.017 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.017 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.018 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.018 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.018 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.019 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.019 238945 DEBUG nova.virt.hardware [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.019 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d could not be found.
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver 
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver 
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d could not be found.
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.024 238945 ERROR nova.virt.libvirt.driver #033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.076 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.129 238945 DEBUG nova.storage.rbd_utils [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] removing snapshot(snap) on rbd image(6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:44:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:44:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668600113' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.685 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.707 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:09 np0005597378 nova_compute[238941]: 2026-01-27 13:44:09.712 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 27 08:44:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 27 08:44:09 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 27 08:44:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 480 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 12 MiB/s wr, 386 op/s
Jan 27 08:44:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:44:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530598884' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.302 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.305 238945 DEBUG nova.virt.libvirt.vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:07Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.305 238945 DEBUG nova.network.os_vif_util [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.306 238945 DEBUG nova.network.os_vif_util [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.309 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  <uuid>bee7c432-6457-4160-917c-a807eca3df0e</uuid>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  <name>instance-00000011</name>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersAdminTestJSON-server-752871201</nova:name>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:44:09</nova:creationTime>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:        <nova:user uuid="97755bdfdc1140aa970fa69a04baeb3c">tempest-ServersAdminTestJSON-2123092478-project-member</nova:user>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:        <nova:project uuid="c02e06ff150d4463ba12a3be444a4ae3">tempest-ServersAdminTestJSON-2123092478</nova:project>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:        <nova:port uuid="851829c6-49a6-4580-90d9-df985a736216">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <entry name="serial">bee7c432-6457-4160-917c-a807eca3df0e</entry>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <entry name="uuid">bee7c432-6457-4160-917c-a807eca3df0e</entry>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/bee7c432-6457-4160-917c-a807eca3df0e_disk">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/bee7c432-6457-4160-917c-a807eca3df0e_disk.config">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:5b:0a:48"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <target dev="tap851829c6-49"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/console.log" append="off"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:44:10 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:44:10 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:44:10 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:44:10 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.318 238945 DEBUG nova.virt.libvirt.vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:07Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.319 238945 DEBUG nova.network.os_vif_util [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.320 238945 DEBUG nova.network.os_vif_util [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.321 238945 DEBUG os_vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.322 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.323 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.326 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.327 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap851829c6-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.327 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap851829c6-49, col_values=(('external_ids', {'iface-id': '851829c6-49a6-4580-90d9-df985a736216', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:0a:48', 'vm-uuid': 'bee7c432-6457-4160-917c-a807eca3df0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:10 np0005597378 NetworkManager[48904]: <info>  [1769521450.3304] manager: (tap851829c6-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.333 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.334 238945 INFO os_vif [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49')#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.697 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.698 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.698 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No VIF found with MAC fa:16:3e:5b:0a:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.699 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Using config drive#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.723 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.788 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:10 np0005597378 nova_compute[238941]: 2026-01-27 13:44:10.865 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'keypairs' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:11 np0005597378 nova_compute[238941]: 2026-01-27 13:44:11.733 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:11 np0005597378 podman[262344]: 2026-01-27 13:44:11.74852046 +0000 UTC m=+0.090471108 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 08:44:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:44:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 27 08:44:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 27 08:44:11 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 27 08:44:11 np0005597378 nova_compute[238941]: 2026-01-27 13:44:11.970 238945 WARNING nova.compute.manager [None req-ac25f558-3cd4-4a46-9beb-07017110ed68 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Image not found during snapshot: nova.exception.ImageNotFound: Image 6fe2c64e-eca8-4f0c-9ee7-18ff1da22d0d could not be found.#033[00m
Jan 27 08:44:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 480 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 13 MiB/s wr, 429 op/s
Jan 27 08:44:12 np0005597378 nova_compute[238941]: 2026-01-27 13:44:12.335 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating config drive at /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config#033[00m
Jan 27 08:44:12 np0005597378 nova_compute[238941]: 2026-01-27 13:44:12.340 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpup4un36k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:12 np0005597378 nova_compute[238941]: 2026-01-27 13:44:12.475 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpup4un36k" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:12 np0005597378 nova_compute[238941]: 2026-01-27 13:44:12.505 238945 DEBUG nova.storage.rbd_utils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:12 np0005597378 nova_compute[238941]: 2026-01-27 13:44:12.511 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config bee7c432-6457-4160-917c-a807eca3df0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:12Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:7e:c5 10.100.0.11
Jan 27 08:44:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:12Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:7e:c5 10.100.0.11
Jan 27 08:44:12 np0005597378 nova_compute[238941]: 2026-01-27 13:44:12.715 238945 DEBUG oslo_concurrency.processutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config bee7c432-6457-4160-917c-a807eca3df0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:12 np0005597378 nova_compute[238941]: 2026-01-27 13:44:12.716 238945 INFO nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deleting local config drive /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config because it was imported into RBD.#033[00m
Jan 27 08:44:12 np0005597378 kernel: tap851829c6-49: entered promiscuous mode
Jan 27 08:44:12 np0005597378 NetworkManager[48904]: <info>  [1769521452.7630] manager: (tap851829c6-49): new Tun device (/org/freedesktop/NetworkManager/Devices/84)
Jan 27 08:44:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:12Z|00145|binding|INFO|Claiming lport 851829c6-49a6-4580-90d9-df985a736216 for this chassis.
Jan 27 08:44:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:12Z|00146|binding|INFO|851829c6-49a6-4580-90d9-df985a736216: Claiming fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 08:44:12 np0005597378 nova_compute[238941]: 2026-01-27 13:44:12.771 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:12Z|00147|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 ovn-installed in OVS
Jan 27 08:44:12 np0005597378 systemd-udevd[262413]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:44:12 np0005597378 nova_compute[238941]: 2026-01-27 13:44:12.803 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:12 np0005597378 nova_compute[238941]: 2026-01-27 13:44:12.806 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:12 np0005597378 NetworkManager[48904]: <info>  [1769521452.8181] device (tap851829c6-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:44:12 np0005597378 NetworkManager[48904]: <info>  [1769521452.8189] device (tap851829c6-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:44:12 np0005597378 systemd-machined[207425]: New machine qemu-27-instance-00000011.
Jan 27 08:44:12 np0005597378 systemd[1]: Started Virtual Machine qemu-27-instance-00000011.
Jan 27 08:44:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:12Z|00148|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 up in Southbound
Jan 27 08:44:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.879 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '7', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.882 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef bound to our chassis#033[00m
Jan 27 08:44:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.886 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef#033[00m
Jan 27 08:44:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.910 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e4a4a6c-7efb-40c8-a0fd-dcc814154b62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.949 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3e97cc33-d512-4f8e-ad2b-b3838fcb33db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.953 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[85915c7e-1258-4d33-a567-b60952407045]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:12.984 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1a3786-67ee-437b-8d54-fe5da4076ee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.007 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d30a1243-da52-4407-9ca4-25fef147ccc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262430, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.032 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2a43d9c9-b6b8-4770-b720-388261a329d2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262431, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262431, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.033 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.035 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.038 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.038 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.038 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.040 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.365 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for bee7c432-6457-4160-917c-a807eca3df0e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.365 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521453.3645527, bee7c432-6457-4160-917c-a807eca3df0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.365 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.368 238945 DEBUG nova.compute.manager [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.369 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.372 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance spawned successfully.#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.373 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.395 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.399 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.399 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.400 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.400 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.401 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.401 238945 DEBUG nova.virt.libvirt.driver [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.406 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.444 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.445 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521453.3658369, bee7c432-6457-4160-917c-a807eca3df0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.445 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Started (Lifecycle Event)#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.460 238945 DEBUG nova.compute.manager [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.472 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.477 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.528 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.536 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.536 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.536 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.537 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.537 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.539 238945 INFO nova.compute.manager [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Terminating instance#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.540 238945 DEBUG nova.compute.manager [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.553 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.553 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.554 238945 DEBUG nova.objects.instance [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 08:44:13 np0005597378 kernel: tap178da3ce-54 (unregistering): left promiscuous mode
Jan 27 08:44:13 np0005597378 NetworkManager[48904]: <info>  [1769521453.5956] device (tap178da3ce-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:44:13 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:13Z|00149|binding|INFO|Releasing lport 178da3ce-54dc-4965-aa17-4ac98d2ec152 from this chassis (sb_readonly=0)
Jan 27 08:44:13 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:13Z|00150|binding|INFO|Setting lport 178da3ce-54dc-4965-aa17-4ac98d2ec152 down in Southbound
Jan 27 08:44:13 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:13Z|00151|binding|INFO|Removing iface tap178da3ce-54 ovn-installed in OVS
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.611 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.617 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:5f:a0 10.100.0.11'], port_security=['fa:16:3e:64:5f:a0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58208cdc-4099-47ab-9729-2e87f01c74f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8773022351141649f1c7a9db9002d2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b756e75d-bbaf-406b-aafe-8ee3c670480f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77e2e80c-2188-4e01-a2f5-11190a5d263b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=178da3ce-54dc-4965-aa17-4ac98d2ec152) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.618 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 178da3ce-54dc-4965-aa17-4ac98d2ec152 in datapath 58208cdc-4099-47ab-9729-2e87f01c74f8 unbound from our chassis#033[00m
Jan 27 08:44:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.620 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58208cdc-4099-47ab-9729-2e87f01c74f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:44:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.624 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0e7857-ab0d-4043-b483-2cb70970a48a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:13.624 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 namespace which is not needed anymore#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.627 238945 DEBUG oslo_concurrency.lockutils [None req-0bc1e535-3fd7-48fb-8744-cc6036ff552c 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.633 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:13 np0005597378 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Deactivated successfully.
Jan 27 08:44:13 np0005597378 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Consumed 13.549s CPU time.
Jan 27 08:44:13 np0005597378 systemd-machined[207425]: Machine qemu-25-instance-00000017 terminated.
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.799 238945 INFO nova.virt.libvirt.driver [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Instance destroyed successfully.#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.800 238945 DEBUG nova.objects.instance [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'resources' on Instance uuid b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.818 238945 DEBUG nova.virt.libvirt.vif [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-328526640',display_name='tempest-ImagesOneServerNegativeTestJSON-server-328526640',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-328526640',id=23,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-ii5k0cgw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:44:11Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.819 238945 DEBUG nova.network.os_vif_util [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "address": "fa:16:3e:64:5f:a0", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178da3ce-54", "ovs_interfaceid": "178da3ce-54dc-4965-aa17-4ac98d2ec152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.820 238945 DEBUG nova.network.os_vif_util [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.821 238945 DEBUG os_vif [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.823 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap178da3ce-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.827 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:13 np0005597378 nova_compute[238941]: 2026-01-27 13:44:13.829 238945 INFO os_vif [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:5f:a0,bridge_name='br-int',has_traffic_filtering=True,id=178da3ce-54dc-4965-aa17-4ac98d2ec152,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178da3ce-54')#033[00m
Jan 27 08:44:13 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [NOTICE]   (261717) : haproxy version is 2.8.14-c23fe91
Jan 27 08:44:13 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [NOTICE]   (261717) : path to executable is /usr/sbin/haproxy
Jan 27 08:44:13 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [WARNING]  (261717) : Exiting Master process...
Jan 27 08:44:13 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [ALERT]    (261717) : Current worker (261720) exited with code 143 (Terminated)
Jan 27 08:44:13 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[261708]: [WARNING]  (261717) : All workers exited. Exiting... (0)
Jan 27 08:44:13 np0005597378 systemd[1]: libpod-5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509.scope: Deactivated successfully.
Jan 27 08:44:13 np0005597378 podman[262493]: 2026-01-27 13:44:13.906455022 +0000 UTC m=+0.158193139 container died 5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 08:44:14 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509-userdata-shm.mount: Deactivated successfully.
Jan 27 08:44:14 np0005597378 systemd[1]: var-lib-containers-storage-overlay-6f6566e0cad5da8144f268f9bf68209f99efd43e6d287847402d8fec50b6dba0-merged.mount: Deactivated successfully.
Jan 27 08:44:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 495 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 15 MiB/s wr, 476 op/s
Jan 27 08:44:14 np0005597378 podman[262493]: 2026-01-27 13:44:14.271944637 +0000 UTC m=+0.523682744 container cleanup 5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:44:14 np0005597378 systemd[1]: libpod-conmon-5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509.scope: Deactivated successfully.
Jan 27 08:44:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:44:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1801.0 total, 600.0 interval#012Cumulative writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2446 syncs, 4.12 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4350 writes, 18K keys, 4350 commit groups, 1.0 writes per commit group, ingest: 22.59 MB, 0.04 MB/s#012Interval WAL: 4350 writes, 1527 syncs, 2.85 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 08:44:14 np0005597378 podman[262549]: 2026-01-27 13:44:14.485939934 +0000 UTC m=+0.190109792 container remove 5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 08:44:14 np0005597378 nova_compute[238941]: 2026-01-27 13:44:14.498 238945 DEBUG nova.compute.manager [req-d18ae37b-ffe1-4430-a9f5-e8cebb6d6c10 req-0fcb7cdf-a1c4-4a3e-8378-a1f7b46a6cc4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:14 np0005597378 nova_compute[238941]: 2026-01-27 13:44:14.498 238945 DEBUG oslo_concurrency.lockutils [req-d18ae37b-ffe1-4430-a9f5-e8cebb6d6c10 req-0fcb7cdf-a1c4-4a3e-8378-a1f7b46a6cc4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:14 np0005597378 nova_compute[238941]: 2026-01-27 13:44:14.499 238945 DEBUG oslo_concurrency.lockutils [req-d18ae37b-ffe1-4430-a9f5-e8cebb6d6c10 req-0fcb7cdf-a1c4-4a3e-8378-a1f7b46a6cc4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:14 np0005597378 nova_compute[238941]: 2026-01-27 13:44:14.499 238945 DEBUG oslo_concurrency.lockutils [req-d18ae37b-ffe1-4430-a9f5-e8cebb6d6c10 req-0fcb7cdf-a1c4-4a3e-8378-a1f7b46a6cc4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:14 np0005597378 nova_compute[238941]: 2026-01-27 13:44:14.499 238945 DEBUG nova.compute.manager [req-d18ae37b-ffe1-4430-a9f5-e8cebb6d6c10 req-0fcb7cdf-a1c4-4a3e-8378-a1f7b46a6cc4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:14 np0005597378 nova_compute[238941]: 2026-01-27 13:44:14.499 238945 WARNING nova.compute.manager [req-d18ae37b-ffe1-4430-a9f5-e8cebb6d6c10 req-0fcb7cdf-a1c4-4a3e-8378-a1f7b46a6cc4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:44:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.500 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e7eb214d-db31-45dc-a1cc-344593cf1c4e]: (4, ('Tue Jan 27 01:44:13 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 (5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509)\n5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509\nTue Jan 27 01:44:14 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 (5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509)\n5626348d4d5a43c0c9bd94c6845a7fae8d0c687fd8d402de95f7e4ca15b69509\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.505 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[24d99f01-de56-4121-b157-a7aab6b8ecfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.507 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58208cdc-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:14 np0005597378 kernel: tap58208cdc-40: left promiscuous mode
Jan 27 08:44:14 np0005597378 nova_compute[238941]: 2026-01-27 13:44:14.510 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:14 np0005597378 nova_compute[238941]: 2026-01-27 13:44:14.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.534 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc29048d-f631-41e2-a1d8-f3d0019bf8d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.559 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[28eb539e-68ac-4dc9-89ef-0889d2e2bb12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.561 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7732d1d7-7784-46c5-8e0c-15b58f28eb32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.578 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a47b77a7-103a-40e3-9c3d-21995c55bf82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407933, 'reachable_time': 40457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262563, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:14 np0005597378 systemd[1]: run-netns-ovnmeta\x2d58208cdc\x2d4099\x2d47ab\x2d9729\x2d2e87f01c74f8.mount: Deactivated successfully.
Jan 27 08:44:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.583 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:44:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:14.584 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[1f340b40-712f-4537-8801-306464b1a11d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:14 np0005597378 nova_compute[238941]: 2026-01-27 13:44:14.994 238945 INFO nova.virt.libvirt.driver [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Deleting instance files /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_del#033[00m
Jan 27 08:44:14 np0005597378 nova_compute[238941]: 2026-01-27 13:44:14.995 238945 INFO nova.virt.libvirt.driver [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Deletion of /var/lib/nova/instances/b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33_del complete#033[00m
Jan 27 08:44:15 np0005597378 nova_compute[238941]: 2026-01-27 13:44:15.090 238945 INFO nova.compute.manager [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Took 1.55 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:44:15 np0005597378 nova_compute[238941]: 2026-01-27 13:44:15.091 238945 DEBUG oslo.service.loopingcall [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:44:15 np0005597378 nova_compute[238941]: 2026-01-27 13:44:15.091 238945 DEBUG nova.compute.manager [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:44:15 np0005597378 nova_compute[238941]: 2026-01-27 13:44:15.091 238945 DEBUG nova.network.neutron [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:44:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 486 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 7.8 MiB/s wr, 307 op/s
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.324 238945 DEBUG nova.network.neutron [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.342 238945 INFO nova.compute.manager [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Took 1.25 seconds to deallocate network for instance.#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.398 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.399 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.514 238945 INFO nova.compute.manager [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Rebuilding instance#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.599 238945 DEBUG oslo_concurrency.processutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.691 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-changed-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.692 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Refreshing instance network info cache due to event network-changed-3c601043-e73a-4b81-b274-c8d791f8bc3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.692 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.692 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.692 238945 DEBUG nova.network.neutron [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Refreshing network info cache for port 3c601043-e73a-4b81-b274-c8d791f8bc3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.728 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.734 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.746 238945 DEBUG nova.compute.manager [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:44:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.799 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_requests' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.812 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'pci_devices' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 27 08:44:16 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.834 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'resources' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.850 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'migration_context' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.862 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 08:44:16 np0005597378 nova_compute[238941]: 2026-01-27 13:44:16.867 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:44:17
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'images', '.mgr', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:44:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:44:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478438580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:44:17 np0005597378 nova_compute[238941]: 2026-01-27 13:44:17.184 238945 DEBUG oslo_concurrency.processutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:17 np0005597378 nova_compute[238941]: 2026-01-27 13:44:17.190 238945 DEBUG nova.compute.provider_tree [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:44:17 np0005597378 nova_compute[238941]: 2026-01-27 13:44:17.231 238945 DEBUG nova.scheduler.client.report [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:44:17 np0005597378 nova_compute[238941]: 2026-01-27 13:44:17.304 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:17 np0005597378 nova_compute[238941]: 2026-01-27 13:44:17.373 238945 INFO nova.scheduler.client.report [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Deleted allocations for instance b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33#033[00m
Jan 27 08:44:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:17Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:71:9b 10.100.0.14
Jan 27 08:44:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:17Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:71:9b 10.100.0.14
Jan 27 08:44:17 np0005597378 nova_compute[238941]: 2026-01-27 13:44:17.525 238945 DEBUG oslo_concurrency.lockutils [None req-c603e3cc-0636-401d-aab1-a3d56a6b75f4 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:44:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:44:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 486 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1015 KiB/s rd, 4.6 MiB/s wr, 183 op/s
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.580 238945 DEBUG nova.network.neutron [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updated VIF entry in instance network info cache for port 3c601043-e73a-4b81-b274-c8d791f8bc3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.581 238945 DEBUG nova.network.neutron [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updating instance_info_cache with network_info: [{"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.599 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a83bcb6-4245-4637-81be-f4c0c75bc965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.600 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.600 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.600 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.601 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.601 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.601 238945 WARNING nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state rebuilding.#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.601 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received event network-vif-unplugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.602 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.602 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.602 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.602 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] No waiting events found dispatching network-vif-unplugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.603 238945 WARNING nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received unexpected event network-vif-unplugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.603 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.603 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.604 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.604 238945 DEBUG oslo_concurrency.lockutils [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.604 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] No waiting events found dispatching network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.604 238945 WARNING nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received unexpected event network-vif-plugged-178da3ce-54dc-4965-aa17-4ac98d2ec152 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.605 238945 DEBUG nova.compute.manager [req-e45c764d-b3e6-4d57-b086-9bdb95fb8b1d req-b5a84661-c993-4b37-bc63-294cf9cb9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Received event network-vif-deleted-178da3ce-54dc-4965-aa17-4ac98d2ec152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:18 np0005597378 nova_compute[238941]: 2026-01-27 13:44:18.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 484 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 7.5 MiB/s wr, 371 op/s
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.664 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.665 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.687 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.748 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.749 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.754 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.754 238945 INFO nova.compute.claims [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.929 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.929 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.930 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.930 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.931 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.932 238945 INFO nova.compute.manager [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Terminating instance#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.933 238945 DEBUG nova.compute.manager [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:44:20 np0005597378 nova_compute[238941]: 2026-01-27 13:44:20.935 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:21 np0005597378 kernel: tap3c601043-e7 (unregistering): left promiscuous mode
Jan 27 08:44:21 np0005597378 NetworkManager[48904]: <info>  [1769521461.0366] device (tap3c601043-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:44:21 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:21Z|00152|binding|INFO|Releasing lport 3c601043-e73a-4b81-b274-c8d791f8bc3d from this chassis (sb_readonly=0)
Jan 27 08:44:21 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:21Z|00153|binding|INFO|Setting lport 3c601043-e73a-4b81-b274-c8d791f8bc3d down in Southbound
Jan 27 08:44:21 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:21Z|00154|binding|INFO|Removing iface tap3c601043-e7 ovn-installed in OVS
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.048 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:71:9b 10.100.0.14'], port_security=['fa:16:3e:84:71:9b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3a83bcb6-4245-4637-81be-f4c0c75bc965', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07f2b9fda9204458be8cb076e9d2b9f3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '58ec66a6-49e6-409b-9a00-2ec14259d882', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b47c7cc-bcaa-4c9d-99f8-0bd4e23fa5fa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3c601043-e73a-4b81-b274-c8d791f8bc3d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.049 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3c601043-e73a-4b81-b274-c8d791f8bc3d in datapath d69bf49f-3f47-4f46-973f-413e56d2f52a unbound from our chassis#033[00m
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.051 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d69bf49f-3f47-4f46-973f-413e56d2f52a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.055 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[73a93552-f435-4fc4-9e75-23b1679f9a3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.056 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a namespace which is not needed anymore#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.075 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:21 np0005597378 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000018.scope: Deactivated successfully.
Jan 27 08:44:21 np0005597378 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000018.scope: Consumed 16.429s CPU time.
Jan 27 08:44:21 np0005597378 systemd-machined[207425]: Machine qemu-26-instance-00000018 terminated.
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.188 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.202 238945 INFO nova.virt.libvirt.driver [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Instance destroyed successfully.#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.203 238945 DEBUG nova.objects.instance [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lazy-loading 'resources' on Instance uuid 3a83bcb6-4245-4637-81be-f4c0c75bc965 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.218 238945 DEBUG nova.virt.libvirt.vif [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1392736369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-139273636',id=24,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='07f2b9fda9204458be8cb076e9d2b9f3',ramdisk_id='',reservation_id='r-7ix8tutd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-754008104',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-754008104-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:44:01Z,user_data=None,user_id='5bbd48f1c4304d319aa847aa717dd4d6',uuid=3a83bcb6-4245-4637-81be-f4c0c75bc965,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.219 238945 DEBUG nova.network.os_vif_util [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Converting VIF {"id": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "address": "fa:16:3e:84:71:9b", "network": {"id": "d69bf49f-3f47-4f46-973f-413e56d2f52a", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-903732634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07f2b9fda9204458be8cb076e9d2b9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c601043-e7", "ovs_interfaceid": "3c601043-e73a-4b81-b274-c8d791f8bc3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.220 238945 DEBUG nova.network.os_vif_util [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.221 238945 DEBUG os_vif [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.222 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.223 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c601043-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.225 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.229 238945 INFO os_vif [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:71:9b,bridge_name='br-int',has_traffic_filtering=True,id=3c601043-e73a-4b81-b274-c8d791f8bc3d,network=Network(d69bf49f-3f47-4f46-973f-413e56d2f52a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c601043-e7')#033[00m
Jan 27 08:44:21 np0005597378 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [NOTICE]   (261831) : haproxy version is 2.8.14-c23fe91
Jan 27 08:44:21 np0005597378 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [NOTICE]   (261831) : path to executable is /usr/sbin/haproxy
Jan 27 08:44:21 np0005597378 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [WARNING]  (261831) : Exiting Master process...
Jan 27 08:44:21 np0005597378 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [ALERT]    (261831) : Current worker (261833) exited with code 143 (Terminated)
Jan 27 08:44:21 np0005597378 neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a[261827]: [WARNING]  (261831) : All workers exited. Exiting... (0)
Jan 27 08:44:21 np0005597378 systemd[1]: libpod-b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc.scope: Deactivated successfully.
Jan 27 08:44:21 np0005597378 podman[262632]: 2026-01-27 13:44:21.319833991 +0000 UTC m=+0.171561270 container died b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:44:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc-userdata-shm.mount: Deactivated successfully.
Jan 27 08:44:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-bc8721205b28cc5fae1ea1768ac2616775241516c97db180811030d1178b496c-merged.mount: Deactivated successfully.
Jan 27 08:44:21 np0005597378 podman[262632]: 2026-01-27 13:44:21.5196871 +0000 UTC m=+0.371414389 container cleanup b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 08:44:21 np0005597378 systemd[1]: libpod-conmon-b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc.scope: Deactivated successfully.
Jan 27 08:44:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:44:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4119034640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.582 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.591 238945 DEBUG nova.compute.provider_tree [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.610 238945 DEBUG nova.scheduler.client.report [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.633 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.634 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.692 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.693 238945 DEBUG nova.network.neutron [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.701 238945 DEBUG nova.compute.manager [req-d37a172b-c1f0-4166-90f2-c3ba983816af req-4eaa4237-4cc2-4ecd-8de0-a0cc3e86474a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-vif-unplugged-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.701 238945 DEBUG oslo_concurrency.lockutils [req-d37a172b-c1f0-4166-90f2-c3ba983816af req-4eaa4237-4cc2-4ecd-8de0-a0cc3e86474a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.702 238945 DEBUG oslo_concurrency.lockutils [req-d37a172b-c1f0-4166-90f2-c3ba983816af req-4eaa4237-4cc2-4ecd-8de0-a0cc3e86474a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.702 238945 DEBUG oslo_concurrency.lockutils [req-d37a172b-c1f0-4166-90f2-c3ba983816af req-4eaa4237-4cc2-4ecd-8de0-a0cc3e86474a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.702 238945 DEBUG nova.compute.manager [req-d37a172b-c1f0-4166-90f2-c3ba983816af req-4eaa4237-4cc2-4ecd-8de0-a0cc3e86474a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] No waiting events found dispatching network-vif-unplugged-3c601043-e73a-4b81-b274-c8d791f8bc3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.703 238945 DEBUG nova.compute.manager [req-d37a172b-c1f0-4166-90f2-c3ba983816af req-4eaa4237-4cc2-4ecd-8de0-a0cc3e86474a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-vif-unplugged-3c601043-e73a-4b81-b274-c8d791f8bc3d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.719 238945 INFO nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.735 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.738 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:21 np0005597378 podman[262690]: 2026-01-27 13:44:21.758618338 +0000 UTC m=+0.214401960 container remove b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.765 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1b288cd6-7091-4027-9616-6e8d712bd129]: (4, ('Tue Jan 27 01:44:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a (b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc)\nb1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc\nTue Jan 27 01:44:21 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a (b1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc)\nb1d53cf14d8805adf3bdc7bb5ce6fe4c4511ac4ef692ae430cca3d8c7cc607fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.767 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be9b93da-c112-49a8-842d-c0fe35e3f537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.768 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd69bf49f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:21 np0005597378 kernel: tapd69bf49f-30: left promiscuous mode
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.771 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.790 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[63af7b74-543d-4aa5-ad86-57c1470b9c74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.791 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.812 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad5ce25-36a7-4fde-82ea-5290fb8d6082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.813 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc032a16-df2e-43b6-8be4-81bfeae81a8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.821 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.823 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.827 238945 INFO nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Creating image(s)#033[00m
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.833 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[596b6d37-2930-4d7c-b022-d873061740d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408071, 'reachable_time': 26944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262706, 'error': None, 'target': 'ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:21 np0005597378 systemd[1]: run-netns-ovnmeta\x2dd69bf49f\x2d3f47\x2d4f46\x2d973f\x2d413e56d2f52a.mount: Deactivated successfully.
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.837 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d69bf49f-3f47-4f46-973f-413e56d2f52a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:44:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:21.837 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2f79c4-18ab-4e5b-b458-52e8ce227048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.851 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.876 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.897 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.900 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.925 238945 DEBUG nova.policy [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bb804373b8be4577a6623d2131cdcd59', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8773022351141649f1c7a9db9002d2f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.963 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.963 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.964 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.964 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.981 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:21 np0005597378 nova_compute[238941]: 2026-01-27 13:44:21.984 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 484 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.3 MiB/s wr, 310 op/s
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.468 238945 DEBUG nova.network.neutron [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Successfully created port: 73b5c991-2d4d-4152-a3e1-02379e28f9c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.582 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.660 238945 INFO nova.virt.libvirt.driver [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Deleting instance files /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965_del#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.660 238945 INFO nova.virt.libvirt.driver [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Deletion of /var/lib/nova/instances/3a83bcb6-4245-4637-81be-f4c0c75bc965_del complete#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.669 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] resizing rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.725 238945 INFO nova.compute.manager [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Took 1.79 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.726 238945 DEBUG oslo.service.loopingcall [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.727 238945 DEBUG nova.compute.manager [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.727 238945 DEBUG nova.network.neutron [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.806 238945 DEBUG nova.objects.instance [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'migration_context' on Instance uuid 37f821bc-2bb2-4a60-a76a-4b3123788e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.819 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.819 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Ensure instance console log exists: /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.820 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.820 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:22 np0005597378 nova_compute[238941]: 2026-01-27 13:44:22.821 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.653 238945 DEBUG nova.network.neutron [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.669 238945 DEBUG nova.network.neutron [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Successfully updated port: 73b5c991-2d4d-4152-a3e1-02379e28f9c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.673 238945 INFO nova.compute.manager [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Took 0.95 seconds to deallocate network for instance.#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.703 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "refresh_cache-37f821bc-2bb2-4a60-a76a-4b3123788e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.704 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquired lock "refresh_cache-37f821bc-2bb2-4a60-a76a-4b3123788e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.704 238945 DEBUG nova.network.neutron [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.731 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.732 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.743 238945 DEBUG nova.compute.manager [req-922c94de-3b60-4e25-9426-5ded92319c44 req-b0a8ace8-6f81-459e-a1b6-9274e74c397f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-vif-deleted-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.793 238945 DEBUG nova.compute.manager [req-1a886e43-9f72-4ede-a10b-e6ed1eedbeed req-03fb4114-f15f-47d5-a093-c5c91c3f38ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.793 238945 DEBUG oslo_concurrency.lockutils [req-1a886e43-9f72-4ede-a10b-e6ed1eedbeed req-03fb4114-f15f-47d5-a093-c5c91c3f38ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.794 238945 DEBUG oslo_concurrency.lockutils [req-1a886e43-9f72-4ede-a10b-e6ed1eedbeed req-03fb4114-f15f-47d5-a093-c5c91c3f38ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.794 238945 DEBUG oslo_concurrency.lockutils [req-1a886e43-9f72-4ede-a10b-e6ed1eedbeed req-03fb4114-f15f-47d5-a093-c5c91c3f38ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.794 238945 DEBUG nova.compute.manager [req-1a886e43-9f72-4ede-a10b-e6ed1eedbeed req-03fb4114-f15f-47d5-a093-c5c91c3f38ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] No waiting events found dispatching network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.795 238945 WARNING nova.compute.manager [req-1a886e43-9f72-4ede-a10b-e6ed1eedbeed req-03fb4114-f15f-47d5-a093-c5c91c3f38ab 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Received unexpected event network-vif-plugged-3c601043-e73a-4b81-b274-c8d791f8bc3d for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.849 238945 DEBUG nova.network.neutron [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:44:23 np0005597378 nova_compute[238941]: 2026-01-27 13:44:23.874 238945 DEBUG oslo_concurrency.processutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 458 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.6 MiB/s wr, 252 op/s
Jan 27 08:44:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:44:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/612328429' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:44:24 np0005597378 nova_compute[238941]: 2026-01-27 13:44:24.463 238945 DEBUG oslo_concurrency.processutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:24 np0005597378 nova_compute[238941]: 2026-01-27 13:44:24.469 238945 DEBUG nova.compute.provider_tree [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:44:24 np0005597378 nova_compute[238941]: 2026-01-27 13:44:24.590 238945 DEBUG nova.scheduler.client.report [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:44:24 np0005597378 nova_compute[238941]: 2026-01-27 13:44:24.684 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 451 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 231 op/s
Jan 27 08:44:26 np0005597378 nova_compute[238941]: 2026-01-27 13:44:26.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:26Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 08:44:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:26Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 08:44:26 np0005597378 nova_compute[238941]: 2026-01-27 13:44:26.738 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:44:26 np0005597378 nova_compute[238941]: 2026-01-27 13:44:26.852 238945 INFO nova.scheduler.client.report [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Deleted allocations for instance 3a83bcb6-4245-4637-81be-f4c0c75bc965#033[00m
Jan 27 08:44:26 np0005597378 nova_compute[238941]: 2026-01-27 13:44:26.926 238945 DEBUG nova.network.neutron [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Updating instance_info_cache with network_info: [{"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:26 np0005597378 nova_compute[238941]: 2026-01-27 13:44:26.954 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.193 238945 DEBUG oslo_concurrency.lockutils [None req-ef667068-0412-449f-bd9c-3a6aedb08ea4 5bbd48f1c4304d319aa847aa717dd4d6 07f2b9fda9204458be8cb076e9d2b9f3 - - default default] Lock "3a83bcb6-4245-4637-81be-f4c0c75bc965" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.200 238945 DEBUG nova.compute.manager [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received event network-changed-73b5c991-2d4d-4152-a3e1-02379e28f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.200 238945 DEBUG nova.compute.manager [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Refreshing instance network info cache due to event network-changed-73b5c991-2d4d-4152-a3e1-02379e28f9c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.201 238945 DEBUG oslo_concurrency.lockutils [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-37f821bc-2bb2-4a60-a76a-4b3123788e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.303 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Releasing lock "refresh_cache-37f821bc-2bb2-4a60-a76a-4b3123788e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.303 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Instance network_info: |[{"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.304 238945 DEBUG oslo_concurrency.lockutils [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-37f821bc-2bb2-4a60-a76a-4b3123788e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.304 238945 DEBUG nova.network.neutron [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Refreshing network info cache for port 73b5c991-2d4d-4152-a3e1-02379e28f9c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.307 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Start _get_guest_xml network_info=[{"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.311 238945 WARNING nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.316 238945 DEBUG nova.virt.libvirt.host [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.317 238945 DEBUG nova.virt.libvirt.host [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.319 238945 DEBUG nova.virt.libvirt.host [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.320 238945 DEBUG nova.virt.libvirt.host [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.320 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.321 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.321 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.321 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.321 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.322 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.322 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.322 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.323 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.323 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.323 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.323 238945 DEBUG nova.virt.hardware [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.326 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00373180943276878 of space, bias 1.0, pg target 1.119542829830634 quantized to 32 (current 32)
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675918470053435 of space, bias 1.0, pg target 0.1996099622545977 quantized to 32 (current 32)
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.592322933104865e-07 of space, bias 4.0, pg target 0.001147241822799342 quantized to 16 (current 16)
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:44:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Jan 27 08:44:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:44:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197121155' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.942 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.967 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:27 np0005597378 nova_compute[238941]: 2026-01-27 13:44:27.972 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 463 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 233 op/s
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.293 238945 DEBUG nova.network.neutron [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Updated VIF entry in instance network info cache for port 73b5c991-2d4d-4152-a3e1-02379e28f9c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.294 238945 DEBUG nova.network.neutron [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Updating instance_info_cache with network_info: [{"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.449 238945 DEBUG oslo_concurrency.lockutils [req-333d536a-f942-4880-b461-eb2a3faa9626 req-4d86cc24-b842-40cb-8276-c90136dc470a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-37f821bc-2bb2-4a60-a76a-4b3123788e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:44:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:44:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3573061447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.620 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.621 238945 DEBUG nova.virt.libvirt.vif [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:44:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-2003372765',display_name='tempest-ImagesOneServerNegativeTestJSON-server-2003372765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-2003372765',id=25,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-e4gwhee4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:21Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=37f821bc-2bb2-4a60-a76a-4b3123788e6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.622 238945 DEBUG nova.network.os_vif_util [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.623 238945 DEBUG nova.network.os_vif_util [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.624 238945 DEBUG nova.objects.instance [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'pci_devices' on Instance uuid 37f821bc-2bb2-4a60-a76a-4b3123788e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.719 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  <uuid>37f821bc-2bb2-4a60-a76a-4b3123788e6c</uuid>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  <name>instance-00000019</name>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-2003372765</nova:name>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:44:27</nova:creationTime>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:        <nova:user uuid="bb804373b8be4577a6623d2131cdcd59">tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member</nova:user>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:        <nova:project uuid="c8773022351141649f1c7a9db9002d2f">tempest-ImagesOneServerNegativeTestJSON-1108889514</nova:project>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:        <nova:port uuid="73b5c991-2d4d-4152-a3e1-02379e28f9c5">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <entry name="serial">37f821bc-2bb2-4a60-a76a-4b3123788e6c</entry>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <entry name="uuid">37f821bc-2bb2-4a60-a76a-4b3123788e6c</entry>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk.config">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:09:91:ab"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <target dev="tap73b5c991-2d"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/console.log" append="off"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:44:28 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:44:28 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:44:28 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:44:28 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.720 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Preparing to wait for external event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.721 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.721 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.721 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.722 238945 DEBUG nova.virt.libvirt.vif [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:44:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-2003372765',display_name='tempest-ImagesOneServerNegativeTestJSON-server-2003372765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-2003372765',id=25,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-e4gwhee4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-110888
9514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:21Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=37f821bc-2bb2-4a60-a76a-4b3123788e6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.722 238945 DEBUG nova.network.os_vif_util [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.723 238945 DEBUG nova.network.os_vif_util [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.723 238945 DEBUG os_vif [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.724 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.724 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.725 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.729 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73b5c991-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.729 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73b5c991-2d, col_values=(('external_ids', {'iface-id': '73b5c991-2d4d-4152-a3e1-02379e28f9c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:91:ab', 'vm-uuid': '37f821bc-2bb2-4a60-a76a-4b3123788e6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:28 np0005597378 NetworkManager[48904]: <info>  [1769521468.7319] manager: (tap73b5c991-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.733 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.738 238945 INFO os_vif [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d')#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.795 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521453.7950451, b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:28 np0005597378 nova_compute[238941]: 2026-01-27 13:44:28.796 238945 INFO nova.compute.manager [-] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:44:29 np0005597378 nova_compute[238941]: 2026-01-27 13:44:29.008 238945 DEBUG nova.compute.manager [None req-e54eea71-e7c9-45e3-bd06-bc14d0728b10 - - - - - -] [instance: b4bedee0-7b5a-45f9-8e7f-7647a1b3eb33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:29 np0005597378 nova_compute[238941]: 2026-01-27 13:44:29.050 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:44:29 np0005597378 nova_compute[238941]: 2026-01-27 13:44:29.050 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:44:29 np0005597378 nova_compute[238941]: 2026-01-27 13:44:29.050 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] No VIF found with MAC fa:16:3e:09:91:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:44:29 np0005597378 nova_compute[238941]: 2026-01-27 13:44:29.051 238945 INFO nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Using config drive#033[00m
Jan 27 08:44:29 np0005597378 nova_compute[238941]: 2026-01-27 13:44:29.069 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:29 np0005597378 nova_compute[238941]: 2026-01-27 13:44:29.301 238945 DEBUG oslo_concurrency.lockutils [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:29 np0005597378 nova_compute[238941]: 2026-01-27 13:44:29.302 238945 DEBUG oslo_concurrency.lockutils [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:29 np0005597378 nova_compute[238941]: 2026-01-27 13:44:29.303 238945 DEBUG nova.objects.instance [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:29 np0005597378 nova_compute[238941]: 2026-01-27 13:44:29.425 238945 DEBUG nova.objects.instance [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_requests' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:29 np0005597378 nova_compute[238941]: 2026-01-27 13:44:29.532 238945 DEBUG nova.network.neutron [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 08:44:29 np0005597378 nova_compute[238941]: 2026-01-27 13:44:29.949 238945 INFO nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Creating config drive at /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/disk.config
Jan 27 08:44:29 np0005597378 nova_compute[238941]: 2026-01-27 13:44:29.954 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp00tw1y72 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.084 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp00tw1y72" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.113 238945 DEBUG nova.storage.rbd_utils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] rbd image 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.117 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/disk.config 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:44:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 482 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 253 op/s
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.261 238945 DEBUG oslo_concurrency.processutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/disk.config 37f821bc-2bb2-4a60-a76a-4b3123788e6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.261 238945 INFO nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Deleting local config drive /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c/disk.config because it was imported into RBD.
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.290 238945 DEBUG nova.policy [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 08:44:30 np0005597378 NetworkManager[48904]: <info>  [1769521470.3217] manager: (tap73b5c991-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Jan 27 08:44:30 np0005597378 kernel: tap73b5c991-2d: entered promiscuous mode
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:44:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:30Z|00155|binding|INFO|Claiming lport 73b5c991-2d4d-4152-a3e1-02379e28f9c5 for this chassis.
Jan 27 08:44:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:30Z|00156|binding|INFO|73b5c991-2d4d-4152-a3e1-02379e28f9c5: Claiming fa:16:3e:09:91:ab 10.100.0.4
Jan 27 08:44:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:30Z|00157|binding|INFO|Setting lport 73b5c991-2d4d-4152-a3e1-02379e28f9c5 ovn-installed in OVS
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.345 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.350 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:44:30 np0005597378 systemd-machined[207425]: New machine qemu-28-instance-00000019.
Jan 27 08:44:30 np0005597378 systemd-udevd[263034]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:44:30 np0005597378 systemd[1]: Started Virtual Machine qemu-28-instance-00000019.
Jan 27 08:44:30 np0005597378 NetworkManager[48904]: <info>  [1769521470.3752] device (tap73b5c991-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:44:30 np0005597378 NetworkManager[48904]: <info>  [1769521470.3767] device (tap73b5c991-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:44:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:30Z|00158|binding|INFO|Setting lport 73b5c991-2d4d-4152-a3e1-02379e28f9c5 up in Southbound
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.466 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:91:ab 10.100.0.4'], port_security=['fa:16:3e:09:91:ab 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '37f821bc-2bb2-4a60-a76a-4b3123788e6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58208cdc-4099-47ab-9729-2e87f01c74f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8773022351141649f1c7a9db9002d2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b756e75d-bbaf-406b-aafe-8ee3c670480f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77e2e80c-2188-4e01-a2f5-11190a5d263b, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=73b5c991-2d4d-4152-a3e1-02379e28f9c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.468 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 73b5c991-2d4d-4152-a3e1-02379e28f9c5 in datapath 58208cdc-4099-47ab-9729-2e87f01c74f8 bound to our chassis
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.470 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.482 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7055eda6-b20d-4692-a002-9261c966ea6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.483 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58208cdc-41 in ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.485 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58208cdc-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.485 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fcabf5ad-f481-49ca-8b65-a9f2c1daf6f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.486 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dc918052-96b3-40ab-b96d-06a131bf7a3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.497 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d953d0e7-5a0e-47bb-ab17-b5c86b04f4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.521 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b2cc3af6-d6b4-40ea-a683-ab1d86c3484f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.551 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e3986afa-5fde-4935-a5f7-c0d878a87bb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 NetworkManager[48904]: <info>  [1769521470.5608] manager: (tap58208cdc-40): new Veth device (/org/freedesktop/NetworkManager/Devices/87)
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.558 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e5aeb50c-ea33-48ac-a659-00081e1e2ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.601 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4a7513-759a-4440-9a44-082fcdcc366c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.604 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6f17ba87-5ee9-4d97-9e51-7e85ffee7ca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 NetworkManager[48904]: <info>  [1769521470.6321] device (tap58208cdc-40): carrier: link connected
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.640 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7b582089-3f3b-425e-a9cb-99d5dbfab7b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.662 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30a4b820-17ad-44cd-9af0-54b67e22f30f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58208cdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e7:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411025, 'reachable_time': 30074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263067, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.689 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d17b98bc-f155-4ce6-b1df-474ed7c44c15]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:e7f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411025, 'tstamp': 411025}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263068, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.717 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[edca8156-a0f5-4ac1-97ea-5666af40f55c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58208cdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e7:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411025, 'reachable_time': 30074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263069, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.758 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b1a2d77-a8f6-4c25-bce0-cec5e27d383f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.834 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd35f662-12a6-40f9-b7d3-1462ff7c0055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.836 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58208cdc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.836 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.837 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58208cdc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.838 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:44:30 np0005597378 NetworkManager[48904]: <info>  [1769521470.8396] manager: (tap58208cdc-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Jan 27 08:44:30 np0005597378 kernel: tap58208cdc-40: entered promiscuous mode
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.842 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58208cdc-40, col_values=(('external_ids', {'iface-id': '42783ab6-7560-4ef7-b70e-aaa544a1d882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.843 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:44:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:30Z|00159|binding|INFO|Releasing lport 42783ab6-7560-4ef7-b70e-aaa544a1d882 from this chassis (sb_readonly=0)
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.861 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.864 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.865 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b27d846-5b6d-4cd9-9fb1-c00fa2f50c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.866 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/58208cdc-4099-47ab-9729-2e87f01c74f8.pid.haproxy
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 58208cdc-4099-47ab-9729-2e87f01c74f8
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 08:44:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:30.867 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'env', 'PROCESS_TAG=haproxy-58208cdc-4099-47ab-9729-2e87f01c74f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58208cdc-4099-47ab-9729-2e87f01c74f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.947 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521470.9473047, 37f821bc-2bb2-4a60-a76a-4b3123788e6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.948 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] VM Started (Lifecycle Event)
Jan 27 08:44:30 np0005597378 kernel: tap851829c6-49 (unregistering): left promiscuous mode
Jan 27 08:44:30 np0005597378 NetworkManager[48904]: <info>  [1769521470.9583] device (tap851829c6-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.994 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:44:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:30Z|00160|binding|INFO|Releasing lport 851829c6-49a6-4580-90d9-df985a736216 from this chassis (sb_readonly=0)
Jan 27 08:44:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:30Z|00161|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 down in Southbound
Jan 27 08:44:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:30Z|00162|binding|INFO|Removing iface tap851829c6-49 ovn-installed in OVS
Jan 27 08:44:30 np0005597378 nova_compute[238941]: 2026-01-27 13:44:30.994 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.006 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521470.947651, 37f821bc-2bb2-4a60-a76a-4b3123788e6c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.006 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] VM Paused (Lifecycle Event)
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.022 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.022 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '8', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.038 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.041 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:44:31 np0005597378 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000011.scope: Deactivated successfully.
Jan 27 08:44:31 np0005597378 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000011.scope: Consumed 13.632s CPU time.
Jan 27 08:44:31 np0005597378 systemd-machined[207425]: Machine qemu-27-instance-00000011 terminated.
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.065 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.225 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance shutdown successfully after 14 seconds.#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.231 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance destroyed successfully.#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.235 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance destroyed successfully.#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.235 238945 DEBUG nova.virt.libvirt.vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-mem
ber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:15Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.236 238945 DEBUG nova.network.os_vif_util [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.237 238945 DEBUG nova.network.os_vif_util [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.237 238945 DEBUG os_vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.239 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap851829c6-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.240 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.243 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.246 238945 INFO os_vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49')#033[00m
Jan 27 08:44:31 np0005597378 podman[263149]: 2026-01-27 13:44:31.26118341 +0000 UTC m=+0.031596850 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.408 238945 DEBUG nova.network.neutron [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully created port: 694e1e12-dc4a-4a42-ba67-46b29efc58c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:44:31 np0005597378 podman[263149]: 2026-01-27 13:44:31.459161148 +0000 UTC m=+0.229574568 container create fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 27 08:44:31 np0005597378 systemd[1]: Started libpod-conmon-fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68.scope.
Jan 27 08:44:31 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.555 238945 DEBUG nova.compute.manager [req-2a7448a9-d2d7-4e85-b886-e7ae334b21db req-73fe5d18-4b52-40e8-a650-4cfebb980afe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.555 238945 DEBUG oslo_concurrency.lockutils [req-2a7448a9-d2d7-4e85-b886-e7ae334b21db req-73fe5d18-4b52-40e8-a650-4cfebb980afe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.556 238945 DEBUG oslo_concurrency.lockutils [req-2a7448a9-d2d7-4e85-b886-e7ae334b21db req-73fe5d18-4b52-40e8-a650-4cfebb980afe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.556 238945 DEBUG oslo_concurrency.lockutils [req-2a7448a9-d2d7-4e85-b886-e7ae334b21db req-73fe5d18-4b52-40e8-a650-4cfebb980afe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.556 238945 DEBUG nova.compute.manager [req-2a7448a9-d2d7-4e85-b886-e7ae334b21db req-73fe5d18-4b52-40e8-a650-4cfebb980afe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Processing event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.558 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:44:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6599e504eb6add9d5994eb322b189932628e9b99f8395839479c0b61de20cd8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.564 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.564 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521471.564105, 37f821bc-2bb2-4a60-a76a-4b3123788e6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.564 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.568 238945 INFO nova.virt.libvirt.driver [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Instance spawned successfully.#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.569 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.590 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.593 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.593 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.593 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.594 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.594 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.594 238945 DEBUG nova.virt.libvirt.driver [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.599 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:44:31 np0005597378 podman[263149]: 2026-01-27 13:44:31.617654846 +0000 UTC m=+0.388068276 container init fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:44:31 np0005597378 podman[263149]: 2026-01-27 13:44:31.624052427 +0000 UTC m=+0.394465847 container start fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.634 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.637 238945 DEBUG nova.compute.manager [req-d50a779f-110c-4bcf-a7da-5e49b0bf3fe1 req-664e1216-e294-4bc0-9afa-915d53c1e1c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.637 238945 DEBUG oslo_concurrency.lockutils [req-d50a779f-110c-4bcf-a7da-5e49b0bf3fe1 req-664e1216-e294-4bc0-9afa-915d53c1e1c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.637 238945 DEBUG oslo_concurrency.lockutils [req-d50a779f-110c-4bcf-a7da-5e49b0bf3fe1 req-664e1216-e294-4bc0-9afa-915d53c1e1c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.637 238945 DEBUG oslo_concurrency.lockutils [req-d50a779f-110c-4bcf-a7da-5e49b0bf3fe1 req-664e1216-e294-4bc0-9afa-915d53c1e1c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.638 238945 DEBUG nova.compute.manager [req-d50a779f-110c-4bcf-a7da-5e49b0bf3fe1 req-664e1216-e294-4bc0-9afa-915d53c1e1c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.638 238945 WARNING nova.compute.manager [req-d50a779f-110c-4bcf-a7da-5e49b0bf3fe1 req-664e1216-e294-4bc0-9afa-915d53c1e1c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state rebuilding.#033[00m
Jan 27 08:44:31 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [NOTICE]   (263195) : New worker (263197) forked
Jan 27 08:44:31 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [NOTICE]   (263195) : Loading success.
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.666 238945 INFO nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Took 9.84 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.666 238945 DEBUG nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.709 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis#033[00m
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.711 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef#033[00m
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.726 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a6a6d9-436a-4980-8e02-34fd5974c086]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.728 238945 INFO nova.compute.manager [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Took 10.99 seconds to build instance.#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.739 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.747 238945 DEBUG oslo_concurrency.lockutils [None req-2dda2f30-7b26-48a0-b548-ee9b2c51d9ba bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.760 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f385ad03-ed3c-4034-b5d5-dac594878776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.763 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9f72bd-100f-44b6-a14a-8a22e5cfcc70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.791 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c25d642c-50eb-44c4-9d4a-182a5da84076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.810 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce05092-668e-4e2e-b74e-3d2f411986bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263211, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.827 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[09d7d4ef-96b7-4209-a580-b35273ec0d76]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263212, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263212, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.829 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:31 np0005597378 nova_compute[238941]: 2026-01-27 13:44:31.832 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.834 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.835 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.835 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:31.835 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 482 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.534 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deleting instance files /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e_del#033[00m
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.534 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deletion of /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e_del complete#033[00m
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.678 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.679 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating image(s)#033[00m
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.699 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:32 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:32Z|00163|binding|INFO|Releasing lport f2abaf39-2261-4bb7-9bb5-6208083120f8 from this chassis (sb_readonly=0)
Jan 27 08:44:32 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:32Z|00164|binding|INFO|Releasing lport 42783ab6-7560-4ef7-b70e-aaa544a1d882 from this chassis (sb_readonly=0)
Jan 27 08:44:32 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:32Z|00165|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.743 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.771 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.774 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.845 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.846 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.847 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.847 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.869 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:32 np0005597378 nova_compute[238941]: 2026-01-27 13:44:32.872 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bee7c432-6457-4160-917c-a807eca3df0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.086 238945 DEBUG nova.network.neutron [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully updated port: 694e1e12-dc4a-4a42-ba67-46b29efc58c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.106 238945 DEBUG oslo_concurrency.lockutils [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.107 238945 DEBUG oslo_concurrency.lockutils [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.107 238945 DEBUG nova.network.neutron [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.190 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.191 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.191 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.191 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.191 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.193 238945 INFO nova.compute.manager [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Terminating instance#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.194 238945 DEBUG nova.compute.manager [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.300 238945 WARNING nova.network.neutron [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it#033[00m
Jan 27 08:44:33 np0005597378 kernel: tap73b5c991-2d (unregistering): left promiscuous mode
Jan 27 08:44:33 np0005597378 NetworkManager[48904]: <info>  [1769521473.3516] device (tap73b5c991-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:44:33 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:33Z|00166|binding|INFO|Releasing lport 73b5c991-2d4d-4152-a3e1-02379e28f9c5 from this chassis (sb_readonly=0)
Jan 27 08:44:33 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:33Z|00167|binding|INFO|Setting lport 73b5c991-2d4d-4152-a3e1-02379e28f9c5 down in Southbound
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:33 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:33Z|00168|binding|INFO|Removing iface tap73b5c991-2d ovn-installed in OVS
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.365 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:33.373 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:91:ab 10.100.0.4'], port_security=['fa:16:3e:09:91:ab 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '37f821bc-2bb2-4a60-a76a-4b3123788e6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58208cdc-4099-47ab-9729-2e87f01c74f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8773022351141649f1c7a9db9002d2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b756e75d-bbaf-406b-aafe-8ee3c670480f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77e2e80c-2188-4e01-a2f5-11190a5d263b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=73b5c991-2d4d-4152-a3e1-02379e28f9c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:33.374 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 73b5c991-2d4d-4152-a3e1-02379e28f9c5 in datapath 58208cdc-4099-47ab-9729-2e87f01c74f8 unbound from our chassis#033[00m
Jan 27 08:44:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:33.376 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58208cdc-4099-47ab-9729-2e87f01c74f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:44:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:33.377 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0968d18c-9deb-40e3-96fd-c509e80df10d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:33.377 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 namespace which is not needed anymore#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.384 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:33 np0005597378 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Deactivated successfully.
Jan 27 08:44:33 np0005597378 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Consumed 2.240s CPU time.
Jan 27 08:44:33 np0005597378 systemd-machined[207425]: Machine qemu-28-instance-00000019 terminated.
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.429 238945 INFO nova.virt.libvirt.driver [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Instance destroyed successfully.#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.430 238945 DEBUG nova.objects.instance [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lazy-loading 'resources' on Instance uuid 37f821bc-2bb2-4a60-a76a-4b3123788e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.450 238945 DEBUG nova.virt.libvirt.vif [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:44:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-2003372765',display_name='tempest-ImagesOneServerNegativeTestJSON-server-2003372765',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-2003372765',id=25,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8773022351141649f1c7a9db9002d2f',ramdisk_id='',reservation_id='r-e4gwhee4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1108889514',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1108889514-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:44:31Z,user_data=None,user_id='bb804373b8be4577a6623d2131cdcd59',uuid=37f821bc-2bb2-4a60-a76a-4b3123788e6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.451 238945 DEBUG nova.network.os_vif_util [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converting VIF {"id": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "address": "fa:16:3e:09:91:ab", "network": {"id": "58208cdc-4099-47ab-9729-2e87f01c74f8", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1062455350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8773022351141649f1c7a9db9002d2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b5c991-2d", "ovs_interfaceid": "73b5c991-2d4d-4152-a3e1-02379e28f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.451 238945 DEBUG nova.network.os_vif_util [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.452 238945 DEBUG os_vif [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.453 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73b5c991-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.454 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:44:33 np0005597378 nova_compute[238941]: 2026-01-27 13:44:33.459 238945 INFO os_vif [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:91:ab,bridge_name='br-int',has_traffic_filtering=True,id=73b5c991-2d4d-4152-a3e1-02379e28f9c5,network=Network(58208cdc-4099-47ab-9729-2e87f01c74f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b5c991-2d')#033[00m
Jan 27 08:44:33 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [NOTICE]   (263195) : haproxy version is 2.8.14-c23fe91
Jan 27 08:44:33 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [NOTICE]   (263195) : path to executable is /usr/sbin/haproxy
Jan 27 08:44:33 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [WARNING]  (263195) : Exiting Master process...
Jan 27 08:44:33 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [ALERT]    (263195) : Current worker (263197) exited with code 143 (Terminated)
Jan 27 08:44:33 np0005597378 neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8[263191]: [WARNING]  (263195) : All workers exited. Exiting... (0)
Jan 27 08:44:33 np0005597378 systemd[1]: libpod-fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68.scope: Deactivated successfully.
Jan 27 08:44:33 np0005597378 podman[263348]: 2026-01-27 13:44:33.703736225 +0000 UTC m=+0.225646193 container died fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 08:44:34 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68-userdata-shm.mount: Deactivated successfully.
Jan 27 08:44:34 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f6599e504eb6add9d5994eb322b189932628e9b99f8395839479c0b61de20cd8-merged.mount: Deactivated successfully.
Jan 27 08:44:34 np0005597378 podman[263348]: 2026-01-27 13:44:34.132108863 +0000 UTC m=+0.654018831 container cleanup fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.162 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bee7c432-6457-4160-917c-a807eca3df0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:34 np0005597378 systemd[1]: libpod-conmon-fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68.scope: Deactivated successfully.
Jan 27 08:44:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 427 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 193 op/s
Jan 27 08:44:34 np0005597378 podman[263387]: 2026-01-27 13:44:34.219148601 +0000 UTC m=+0.064482674 container remove fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 08:44:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.225 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf68908-04e3-458e-9128-310ce530b380]: (4, ('Tue Jan 27 01:44:33 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 (fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68)\nfb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68\nTue Jan 27 01:44:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 (fb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68)\nfb7dcf9298a6a08045cb0822a8640b9b8a8a2e75c5d030473345e442cd284d68\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.226 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] resizing rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:44:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.227 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aab664f9-a8d2-423f-9827-ef88d2fc9577]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.228 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58208cdc-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:34 np0005597378 kernel: tap58208cdc-40: left promiscuous mode
Jan 27 08:44:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.253 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e39a0d42-8486-42b2-b648-fc05769a27eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.263 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.269 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f053e1-53ca-4f68-9034-5250beda07b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.271 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1f61492b-fd92-42e7-9b34-71fdcbc2a080]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.287 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a518fd16-72f9-4bf2-b7f1-521ce47413b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411017, 'reachable_time': 29828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263454, 'error': None, 'target': 'ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:34 np0005597378 systemd[1]: run-netns-ovnmeta\x2d58208cdc\x2d4099\x2d47ab\x2d9729\x2d2e87f01c74f8.mount: Deactivated successfully.
Jan 27 08:44:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.290 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58208cdc-4099-47ab-9729-2e87f01c74f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:44:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:34.291 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef06c3f-70bc-48fd-a6c5-3565f104eff9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.350 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.351 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Ensure instance console log exists: /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.351 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.351 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.352 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.354 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start _get_guest_xml network_info=[{"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.358 238945 WARNING nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.362 238945 DEBUG nova.virt.libvirt.host [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.363 238945 DEBUG nova.virt.libvirt.host [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.366 238945 DEBUG nova.virt.libvirt.host [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.366 238945 DEBUG nova.virt.libvirt.host [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.367 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.367 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.367 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.367 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.368 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.368 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.368 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.368 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.368 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.369 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.369 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.369 238945 DEBUG nova.virt.hardware [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.369 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.390 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.913 238945 INFO nova.virt.libvirt.driver [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Deleting instance files /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c_del#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.914 238945 INFO nova.virt.libvirt.driver [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Deletion of /var/lib/nova/instances/37f821bc-2bb2-4a60-a76a-4b3123788e6c_del complete#033[00m
Jan 27 08:44:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:44:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1768615933' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.971 238945 INFO nova.compute.manager [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Took 1.78 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.972 238945 DEBUG oslo.service.loopingcall [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.972 238945 DEBUG nova.compute.manager [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.972 238945 DEBUG nova.network.neutron [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:44:34 np0005597378 nova_compute[238941]: 2026-01-27 13:44:34.990 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.009 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.014 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.397 238945 DEBUG nova.compute.manager [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.398 238945 DEBUG oslo_concurrency.lockutils [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.398 238945 DEBUG oslo_concurrency.lockutils [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.398 238945 DEBUG oslo_concurrency.lockutils [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.399 238945 DEBUG nova.compute.manager [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] No waiting events found dispatching network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.399 238945 WARNING nova.compute.manager [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received unexpected event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.399 238945 DEBUG nova.compute.manager [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-changed-694e1e12-dc4a-4a42-ba67-46b29efc58c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.399 238945 DEBUG nova.compute.manager [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing instance network info cache due to event network-changed-694e1e12-dc4a-4a42-ba67-46b29efc58c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.399 238945 DEBUG oslo_concurrency.lockutils [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:44:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:44:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/331714717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.614 238945 DEBUG nova.compute.manager [req-307691e6-ed88-45fb-81e7-e75750ac0d30 req-70cfa1fe-44e5-4563-b6a7-6410bc049003 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.614 238945 DEBUG oslo_concurrency.lockutils [req-307691e6-ed88-45fb-81e7-e75750ac0d30 req-70cfa1fe-44e5-4563-b6a7-6410bc049003 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.615 238945 DEBUG oslo_concurrency.lockutils [req-307691e6-ed88-45fb-81e7-e75750ac0d30 req-70cfa1fe-44e5-4563-b6a7-6410bc049003 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.615 238945 DEBUG oslo_concurrency.lockutils [req-307691e6-ed88-45fb-81e7-e75750ac0d30 req-70cfa1fe-44e5-4563-b6a7-6410bc049003 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.615 238945 DEBUG nova.compute.manager [req-307691e6-ed88-45fb-81e7-e75750ac0d30 req-70cfa1fe-44e5-4563-b6a7-6410bc049003 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.615 238945 WARNING nova.compute.manager [req-307691e6-ed88-45fb-81e7-e75750ac0d30 req-70cfa1fe-44e5-4563-b6a7-6410bc049003 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.632 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.633 238945 DEBUG nova.virt.libvirt.vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTes
tJSON-2123092478-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:32Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.634 238945 DEBUG nova.network.os_vif_util [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.635 238945 DEBUG nova.network.os_vif_util [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.637 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  <uuid>bee7c432-6457-4160-917c-a807eca3df0e</uuid>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  <name>instance-00000011</name>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersAdminTestJSON-server-752871201</nova:name>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:44:34</nova:creationTime>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:        <nova:user uuid="97755bdfdc1140aa970fa69a04baeb3c">tempest-ServersAdminTestJSON-2123092478-project-member</nova:user>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:        <nova:project uuid="c02e06ff150d4463ba12a3be444a4ae3">tempest-ServersAdminTestJSON-2123092478</nova:project>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:        <nova:port uuid="851829c6-49a6-4580-90d9-df985a736216">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <entry name="serial">bee7c432-6457-4160-917c-a807eca3df0e</entry>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <entry name="uuid">bee7c432-6457-4160-917c-a807eca3df0e</entry>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/bee7c432-6457-4160-917c-a807eca3df0e_disk">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/bee7c432-6457-4160-917c-a807eca3df0e_disk.config">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:5b:0a:48"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <target dev="tap851829c6-49"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/console.log" append="off"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:44:35 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:44:35 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:44:35 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:44:35 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.638 238945 DEBUG nova.virt.libvirt.vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTes
tJSON-2123092478-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:44:32Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.638 238945 DEBUG nova.network.os_vif_util [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.639 238945 DEBUG nova.network.os_vif_util [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.639 238945 DEBUG os_vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.640 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.640 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.640 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.646 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.646 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap851829c6-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.647 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap851829c6-49, col_values=(('external_ids', {'iface-id': '851829c6-49a6-4580-90d9-df985a736216', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:0a:48', 'vm-uuid': 'bee7c432-6457-4160-917c-a807eca3df0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.648 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:35 np0005597378 NetworkManager[48904]: <info>  [1769521475.6498] manager: (tap851829c6-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.650 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.657 238945 INFO os_vif [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49')#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.749 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.749 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.750 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] No VIF found with MAC fa:16:3e:5b:0a:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.750 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Using config drive#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.770 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.807 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.849 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'keypairs' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:35 np0005597378 nova_compute[238941]: 2026-01-27 13:44:35.979 238945 DEBUG nova.network.neutron [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.040 238945 INFO nova.compute.manager [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Took 1.07 seconds to deallocate network for instance.#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.145 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.146 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 402 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 211 op/s
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.198 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521461.1974733, 3a83bcb6-4245-4637-81be-f4c0c75bc965 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.199 238945 INFO nova.compute.manager [-] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.229 238945 DEBUG nova.compute.manager [None req-cd28b877-1f79-461f-a34e-14ef6df61fe8 - - - - - -] [instance: 3a83bcb6-4245-4637-81be-f4c0c75bc965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.299 238945 DEBUG oslo_concurrency.processutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.437 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Creating config drive at /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.444 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps6mtbi0t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.578 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps6mtbi0t" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.607 238945 DEBUG nova.storage.rbd_utils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] rbd image bee7c432-6457-4160-917c-a807eca3df0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.613 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config bee7c432-6457-4160-917c-a807eca3df0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.743 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.831 238945 DEBUG oslo_concurrency.processutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config bee7c432-6457-4160-917c-a807eca3df0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.831 238945 INFO nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deleting local config drive /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e/disk.config because it was imported into RBD.#033[00m
Jan 27 08:44:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:44:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066382659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:44:36 np0005597378 kernel: tap851829c6-49: entered promiscuous mode
Jan 27 08:44:36 np0005597378 NetworkManager[48904]: <info>  [1769521476.8804] manager: (tap851829c6-49): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.881 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:36Z|00169|binding|INFO|Claiming lport 851829c6-49a6-4580-90d9-df985a736216 for this chassis.
Jan 27 08:44:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:36Z|00170|binding|INFO|851829c6-49a6-4580-90d9-df985a736216: Claiming fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.905 238945 DEBUG oslo_concurrency.processutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:36 np0005597378 systemd-udevd[263632]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:44:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:36Z|00171|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 ovn-installed in OVS
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.909 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.912 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:36 np0005597378 systemd-machined[207425]: New machine qemu-29-instance-00000011.
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.915 238945 DEBUG nova.compute.provider_tree [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:44:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:36.920 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '9', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:36Z|00172|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 up in Southbound
Jan 27 08:44:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:36.921 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef bound to our chassis#033[00m
Jan 27 08:44:36 np0005597378 NetworkManager[48904]: <info>  [1769521476.9237] device (tap851829c6-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:44:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:36.923 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef#033[00m
Jan 27 08:44:36 np0005597378 systemd[1]: Started Virtual Machine qemu-29-instance-00000011.
Jan 27 08:44:36 np0005597378 NetworkManager[48904]: <info>  [1769521476.9248] device (tap851829c6-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:44:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:36.938 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9c85305e-3b6d-4be9-920f-9d31545aca95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:36 np0005597378 nova_compute[238941]: 2026-01-27 13:44:36.954 238945 DEBUG nova.scheduler.client.report [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:44:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:36.965 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dc017e5e-8a28-41f0-8c56-a5b7b447f7a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:36.971 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[30fab008-ba24-4a9a-ba0f-15ff220fcd8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.003 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2be438aa-66be-4159-8108-b546d5d646bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.021 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[88612944-aab8-46ea-b51f-789d6e4fefd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263646, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.039 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[75164ba3-00bf-4679-b3d0-832b19fbb5d2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263647, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263647, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.041 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:37 np0005597378 nova_compute[238941]: 2026-01-27 13:44:37.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:37 np0005597378 nova_compute[238941]: 2026-01-27 13:44:37.043 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.044 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.044 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.044 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:37.044 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:37 np0005597378 nova_compute[238941]: 2026-01-27 13:44:37.046 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:37 np0005597378 nova_compute[238941]: 2026-01-27 13:44:37.319 238945 INFO nova.scheduler.client.report [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Deleted allocations for instance 37f821bc-2bb2-4a60-a76a-4b3123788e6c#033[00m
Jan 27 08:44:37 np0005597378 nova_compute[238941]: 2026-01-27 13:44:37.393 238945 DEBUG oslo_concurrency.lockutils [None req-bab7d344-098f-4d9c-88ed-82b4126c04f3 bb804373b8be4577a6623d2131cdcd59 c8773022351141649f1c7a9db9002d2f - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:37 np0005597378 nova_compute[238941]: 2026-01-27 13:44:37.993 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for bee7c432-6457-4160-917c-a807eca3df0e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 08:44:37 np0005597378 nova_compute[238941]: 2026-01-27 13:44:37.995 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521477.9930878, bee7c432-6457-4160-917c-a807eca3df0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:37 np0005597378 nova_compute[238941]: 2026-01-27 13:44:37.995 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:44:37 np0005597378 nova_compute[238941]: 2026-01-27 13:44:37.999 238945 DEBUG nova.compute.manager [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:44:37 np0005597378 nova_compute[238941]: 2026-01-27 13:44:37.999 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.003 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance spawned successfully.#033[00m
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.004 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:44:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 409 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 207 op/s
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.297 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.300 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.566 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.566 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.567 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.567 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.568 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.568 238945 DEBUG nova.virt.libvirt.driver [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.883 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.884 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521477.9946797, bee7c432-6457-4160-917c-a807eca3df0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:38 np0005597378 nova_compute[238941]: 2026-01-27 13:44:38.884 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Started (Lifecycle Event)#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.008 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.012 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.072 238945 DEBUG nova.compute.manager [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.107 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.174 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.175 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.175 238945 DEBUG nova.objects.instance [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.253 238945 DEBUG oslo_concurrency.lockutils [None req-e7d1fa60-7705-47ed-ab20-ed1d98e6e2ca 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.462 238945 DEBUG nova.compute.manager [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received event network-vif-unplugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.462 238945 DEBUG oslo_concurrency.lockutils [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.462 238945 DEBUG oslo_concurrency.lockutils [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.462 238945 DEBUG oslo_concurrency.lockutils [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.463 238945 DEBUG nova.compute.manager [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] No waiting events found dispatching network-vif-unplugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.463 238945 WARNING nova.compute.manager [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received unexpected event network-vif-unplugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.463 238945 DEBUG nova.compute.manager [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.463 238945 DEBUG oslo_concurrency.lockutils [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.464 238945 DEBUG oslo_concurrency.lockutils [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.464 238945 DEBUG oslo_concurrency.lockutils [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "37f821bc-2bb2-4a60-a76a-4b3123788e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.464 238945 DEBUG nova.compute.manager [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] No waiting events found dispatching network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.464 238945 WARNING nova.compute.manager [req-2dd12f7c-2a8a-42d6-b0f3-b5827e24429d req-4f5d3215-a145-4a12-b11c-4e0456041ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received unexpected event network-vif-plugged-73b5c991-2d4d-4152-a3e1-02379e28f9c5 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.519 238945 DEBUG nova.network.neutron [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.540 238945 DEBUG oslo_concurrency.lockutils [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.544 238945 DEBUG nova.compute.manager [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Received event network-vif-deleted-73b5c991-2d4d-4152-a3e1-02379e28f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.544 238945 DEBUG nova.compute.manager [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.545 238945 DEBUG oslo_concurrency.lockutils [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.545 238945 DEBUG oslo_concurrency.lockutils [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.546 238945 DEBUG oslo_concurrency.lockutils [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.546 238945 DEBUG nova.compute.manager [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.547 238945 WARNING nova.compute.manager [req-5e99de3c-e2f6-4536-af7d-23875eec1098 req-4bfbe4d8-421f-49f4-b136-ee4f21fdcc66 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.547 238945 DEBUG oslo_concurrency.lockutils [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.548 238945 DEBUG nova.network.neutron [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing network info cache for port 694e1e12-dc4a-4a42-ba67-46b29efc58c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.553 238945 DEBUG nova.virt.libvirt.vif [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.554 238945 DEBUG nova.network.os_vif_util [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.555 238945 DEBUG nova.network.os_vif_util [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.556 238945 DEBUG os_vif [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.557 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.557 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.558 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.562 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap694e1e12-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.563 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap694e1e12-dc, col_values=(('external_ids', {'iface-id': '694e1e12-dc4a-4a42-ba67-46b29efc58c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:9d:8a', 'vm-uuid': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.564 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:39 np0005597378 NetworkManager[48904]: <info>  [1769521479.5660] manager: (tap694e1e12-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.567 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.576 238945 INFO os_vif [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc')#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.577 238945 DEBUG nova.virt.libvirt.vif [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.577 238945 DEBUG nova.network.os_vif_util [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.578 238945 DEBUG nova.network.os_vif_util [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.580 238945 DEBUG nova.virt.libvirt.guest [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] attach device xml: <interface type="ethernet">
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:18:9d:8a"/>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  <target dev="tap694e1e12-dc"/>
Jan 27 08:44:39 np0005597378 nova_compute[238941]: </interface>
Jan 27 08:44:39 np0005597378 nova_compute[238941]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 27 08:44:39 np0005597378 kernel: tap694e1e12-dc: entered promiscuous mode
Jan 27 08:44:39 np0005597378 NetworkManager[48904]: <info>  [1769521479.5909] manager: (tap694e1e12-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Jan 27 08:44:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:39Z|00173|binding|INFO|Claiming lport 694e1e12-dc4a-4a42-ba67-46b29efc58c1 for this chassis.
Jan 27 08:44:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:39Z|00174|binding|INFO|694e1e12-dc4a-4a42-ba67-46b29efc58c1: Claiming fa:16:3e:18:9d:8a 10.100.0.11
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:39 np0005597378 NetworkManager[48904]: <info>  [1769521479.6051] device (tap694e1e12-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:44:39 np0005597378 NetworkManager[48904]: <info>  [1769521479.6060] device (tap694e1e12-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:44:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:39Z|00175|binding|INFO|Setting lport 694e1e12-dc4a-4a42-ba67-46b29efc58c1 ovn-installed in OVS
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.621 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:39Z|00176|binding|INFO|Setting lport 694e1e12-dc4a-4a42-ba67-46b29efc58c1 up in Southbound
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.627 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:9d:8a 10.100.0.11'], port_security=['fa:16:3e:18:9d:8a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=694e1e12-dc4a-4a42-ba67-46b29efc58c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.628 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 694e1e12-dc4a-4a42-ba67-46b29efc58c1 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.630 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.651 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bac14673-a43c-4d37-b2b1-a7e0051a2a9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.708 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c18a78fa-f605-4ae9-8fba-61092b9080dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.717 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5feeaeeb-314e-4552-8244-7799e5942c5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.718 238945 DEBUG nova.virt.libvirt.driver [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.719 238945 DEBUG nova.virt.libvirt.driver [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.719 238945 DEBUG nova.virt.libvirt.driver [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:d8:2d:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.720 238945 DEBUG nova.virt.libvirt.driver [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:18:9d:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:44:39 np0005597378 podman[263697]: 2026-01-27 13:44:39.72928345 +0000 UTC m=+0.096059161 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.772 238945 DEBUG nova.virt.libvirt.guest [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:44:39</nova:creationTime>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:44:39 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:    <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:    <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:    <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 08:44:39 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:    <nova:port uuid="694e1e12-dc4a-4a42-ba67-46b29efc58c1">
Jan 27 08:44:39 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:44:39 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:44:39 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:44:39 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.778 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[39f1fc97-b140-4847-8ac0-6d5a611df7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.798 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6ae7de-6c4f-44f4-bb20-9465743632ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263726, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.814 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[22cc5fc8-473a-4514-9c2a-e8f781272dd6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407197, 'tstamp': 407197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263727, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407202, 'tstamp': 407202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263727, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.816 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.817 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.818 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.819 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.819 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:39.819 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:39 np0005597378 nova_compute[238941]: 2026-01-27 13:44:39.846 238945 DEBUG oslo_concurrency.lockutils [None req-ce819190-cd32-45ca-a545-c614d671ee97 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 405 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.8 MiB/s wr, 226 op/s
Jan 27 08:44:40 np0005597378 nova_compute[238941]: 2026-01-27 13:44:40.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:44:40 np0005597378 nova_compute[238941]: 2026-01-27 13:44:40.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:44:40 np0005597378 nova_compute[238941]: 2026-01-27 13:44:40.856 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:44:40 np0005597378 nova_compute[238941]: 2026-01-27 13:44:40.856 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:44:40 np0005597378 nova_compute[238941]: 2026-01-27 13:44:40.857 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 08:44:40 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:40Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:18:9d:8a 10.100.0.11
Jan 27 08:44:40 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:40Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:18:9d:8a 10.100.0.11
Jan 27 08:44:40 np0005597378 nova_compute[238941]: 2026-01-27 13:44:40.993 238945 DEBUG oslo_concurrency.lockutils [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:40 np0005597378 nova_compute[238941]: 2026-01-27 13:44:40.994 238945 DEBUG oslo_concurrency.lockutils [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:40 np0005597378 nova_compute[238941]: 2026-01-27 13:44:40.995 238945 DEBUG nova.objects.instance [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.689 238945 DEBUG nova.objects.instance [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_requests' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.720 238945 DEBUG nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.762 238945 DEBUG nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.763 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.763 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.764 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.764 238945 DEBUG nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.764 238945 WARNING nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.764 238945 DEBUG nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.765 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.765 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.765 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.766 238945 DEBUG nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.766 238945 WARNING nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.766 238945 DEBUG nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.767 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.767 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.767 238945 DEBUG oslo_concurrency.lockutils [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.767 238945 DEBUG nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:41 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.768 238945 WARNING nova.compute.manager [req-5db8c1de-8903-4fd8-ab58-ae2e86c2e1f7 req-0dbf49b1-8a6e-4735-a378-f1869d8282a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:44:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:41.999 238945 DEBUG nova.policy [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:44:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 405 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 193 op/s
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.250 238945 DEBUG nova.network.neutron [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updated VIF entry in instance network info cache for port 694e1e12-dc4a-4a42-ba67-46b29efc58c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.251 238945 DEBUG nova.network.neutron [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.268 238945 DEBUG oslo_concurrency.lockutils [req-4a969ee9-3925-468a-8e4c-6cc80bd0ef07 req-120b1746-6f27-4b53-ad2d-a8fc8619d105 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.470 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Updating instance_info_cache with network_info: [{"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.495 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-4c52012f-9a4f-4599-adb0-2c658a054f91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.496 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.496 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.496 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.497 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.516 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.517 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.517 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.518 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.518 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:42 np0005597378 nova_compute[238941]: 2026-01-27 13:44:42.699 238945 DEBUG nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully created port: 033bda90-ba32-42f7-aab3-c017e5594e94 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:44:42 np0005597378 podman[263739]: 2026-01-27 13:44:42.712478638 +0000 UTC m=+0.053543239 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 08:44:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:44:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857719808' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.114 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.213 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.214 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.218 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.219 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.222 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.223 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.226 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.226 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.230 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.230 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.453 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.454 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3494MB free_disk=59.7851179651916GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.454 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.454 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.584 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.585 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:44:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.586 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.588 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.703 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance bee7c432-6457-4160-917c-a807eca3df0e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.703 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 4c52012f-9a4f-4599-adb0-2c658a054f91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.703 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 677a728d-1d2a-4e11-909d-c2c91838cfbe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.703 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.703 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b816093f-751c-4d16-bb91-82ae954a9732 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.704 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.704 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.729 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.729 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.730 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.730 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.730 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.731 238945 INFO nova.compute.manager [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Terminating instance#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.732 238945 DEBUG nova.compute.manager [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.811 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:43 np0005597378 kernel: tap6f5f40a3-5f (unregistering): left promiscuous mode
Jan 27 08:44:43 np0005597378 NetworkManager[48904]: <info>  [1769521483.9176] device (tap6f5f40a3-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.928 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:43Z|00177|binding|INFO|Releasing lport 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac from this chassis (sb_readonly=0)
Jan 27 08:44:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:43Z|00178|binding|INFO|Setting lport 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac down in Southbound
Jan 27 08:44:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:43Z|00179|binding|INFO|Removing iface tap6f5f40a3-5f ovn-installed in OVS
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.932 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.938 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:7e:c5 10.100.0.11'], port_security=['fa:16:3e:67:7e:c5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b816093f-751c-4d16-bb91-82ae954a9732', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.939 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis#033[00m
Jan 27 08:44:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.941 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef#033[00m
Jan 27 08:44:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.958 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1901704-f8ac-4e3d-8350-3c7821150c79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:43 np0005597378 nova_compute[238941]: 2026-01-27 13:44:43.959 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.986 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e28dde-74e3-4295-921c-c508e0998422]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:43.989 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[87e905a1-548f-4c05-8e8a-895d43275d49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:43 np0005597378 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Deactivated successfully.
Jan 27 08:44:43 np0005597378 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Consumed 17.900s CPU time.
Jan 27 08:44:44 np0005597378 systemd-machined[207425]: Machine qemu-24-instance-00000016 terminated.
Jan 27 08:44:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.020 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b54598c8-51b9-4c8e-a8b2-f32a9c1589e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.040 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d39136f6-0add-4d43-b503-0dd76b40f36f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263802, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.056 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04638c64-097d-4b3c-bef9-c4a6f4a68a99]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263803, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263803, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.058 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.059 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.064 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.065 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.065 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.065 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:44.066 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.153 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.159 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.170 238945 INFO nova.virt.libvirt.driver [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Instance destroyed successfully.#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.171 238945 DEBUG nova.objects.instance [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'resources' on Instance uuid b816093f-751c-4d16-bb91-82ae954a9732 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 405 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 236 op/s
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.196 238945 DEBUG nova.virt.libvirt.vif [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-147275193',display_name='tempest-ServersAdminTestJSON-server-147275193',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-147275193',id=22,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-1hcjid1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:55Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=b816093f-751c-4d16-bb91-82ae954a9732,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.196 238945 DEBUG nova.network.os_vif_util [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "address": "fa:16:3e:67:7e:c5", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f5f40a3-5f", "ovs_interfaceid": "6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.197 238945 DEBUG nova.network.os_vif_util [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.197 238945 DEBUG os_vif [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.199 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f5f40a3-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.202 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.204 238945 INFO os_vif [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:7e:c5,bridge_name='br-int',has_traffic_filtering=True,id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f5f40a3-5f')#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.322 238945 DEBUG nova.compute.manager [req-5379a944-0ab7-4e1a-96cd-31d3e3a0fdfe req-dadf7857-0615-4101-8f1a-3e9218fa3ac4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-vif-unplugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.322 238945 DEBUG oslo_concurrency.lockutils [req-5379a944-0ab7-4e1a-96cd-31d3e3a0fdfe req-dadf7857-0615-4101-8f1a-3e9218fa3ac4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.322 238945 DEBUG oslo_concurrency.lockutils [req-5379a944-0ab7-4e1a-96cd-31d3e3a0fdfe req-dadf7857-0615-4101-8f1a-3e9218fa3ac4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.322 238945 DEBUG oslo_concurrency.lockutils [req-5379a944-0ab7-4e1a-96cd-31d3e3a0fdfe req-dadf7857-0615-4101-8f1a-3e9218fa3ac4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.322 238945 DEBUG nova.compute.manager [req-5379a944-0ab7-4e1a-96cd-31d3e3a0fdfe req-dadf7857-0615-4101-8f1a-3e9218fa3ac4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] No waiting events found dispatching network-vif-unplugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.323 238945 DEBUG nova.compute.manager [req-5379a944-0ab7-4e1a-96cd-31d3e3a0fdfe req-dadf7857-0615-4101-8f1a-3e9218fa3ac4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-vif-unplugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.369 238945 DEBUG nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully updated port: 033bda90-ba32-42f7-aab3-c017e5594e94 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.408 238945 DEBUG oslo_concurrency.lockutils [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.408 238945 DEBUG oslo_concurrency.lockutils [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.408 238945 DEBUG nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:44:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:44:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4133429943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.447 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.455 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.460 238945 DEBUG nova.compute.manager [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-changed-033bda90-ba32-42f7-aab3-c017e5594e94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.460 238945 DEBUG nova.compute.manager [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing instance network info cache due to event network-changed-033bda90-ba32-42f7-aab3-c017e5594e94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.460 238945 DEBUG oslo_concurrency.lockutils [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.482 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.504 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.504 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.608 238945 WARNING nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it#033[00m
Jan 27 08:44:44 np0005597378 nova_compute[238941]: 2026-01-27 13:44:44.609 238945 WARNING nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it#033[00m
Jan 27 08:44:45 np0005597378 nova_compute[238941]: 2026-01-27 13:44:45.498 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:44:45 np0005597378 nova_compute[238941]: 2026-01-27 13:44:45.499 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:44:45 np0005597378 nova_compute[238941]: 2026-01-27 13:44:45.779 238945 INFO nova.virt.libvirt.driver [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Deleting instance files /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732_del#033[00m
Jan 27 08:44:45 np0005597378 nova_compute[238941]: 2026-01-27 13:44:45.780 238945 INFO nova.virt.libvirt.driver [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Deletion of /var/lib/nova/instances/b816093f-751c-4d16-bb91-82ae954a9732_del complete#033[00m
Jan 27 08:44:45 np0005597378 nova_compute[238941]: 2026-01-27 13:44:45.880 238945 INFO nova.compute.manager [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Took 2.15 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:44:45 np0005597378 nova_compute[238941]: 2026-01-27 13:44:45.881 238945 DEBUG oslo.service.loopingcall [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:44:45 np0005597378 nova_compute[238941]: 2026-01-27 13:44:45.881 238945 DEBUG nova.compute.manager [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:44:45 np0005597378 nova_compute[238941]: 2026-01-27 13:44:45.881 238945 DEBUG nova.network.neutron [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:44:46 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:46Z|00180|binding|INFO|Releasing lport f2abaf39-2261-4bb7-9bb5-6208083120f8 from this chassis (sb_readonly=0)
Jan 27 08:44:46 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:46Z|00181|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 08:44:46 np0005597378 nova_compute[238941]: 2026-01-27 13:44:46.180 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 405 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Jan 27 08:44:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:46.293 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:46.293 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:46.294 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:46 np0005597378 nova_compute[238941]: 2026-01-27 13:44:46.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:44:46 np0005597378 nova_compute[238941]: 2026-01-27 13:44:46.407 238945 DEBUG nova.compute.manager [req-082afc14-f77c-42b3-a35d-e0e24f380d35 req-2a2daeef-f91b-4862-bd1a-ee32ea9be997 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:46 np0005597378 nova_compute[238941]: 2026-01-27 13:44:46.407 238945 DEBUG oslo_concurrency.lockutils [req-082afc14-f77c-42b3-a35d-e0e24f380d35 req-2a2daeef-f91b-4862-bd1a-ee32ea9be997 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b816093f-751c-4d16-bb91-82ae954a9732-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:46 np0005597378 nova_compute[238941]: 2026-01-27 13:44:46.408 238945 DEBUG oslo_concurrency.lockutils [req-082afc14-f77c-42b3-a35d-e0e24f380d35 req-2a2daeef-f91b-4862-bd1a-ee32ea9be997 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:46 np0005597378 nova_compute[238941]: 2026-01-27 13:44:46.408 238945 DEBUG oslo_concurrency.lockutils [req-082afc14-f77c-42b3-a35d-e0e24f380d35 req-2a2daeef-f91b-4862-bd1a-ee32ea9be997 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:46 np0005597378 nova_compute[238941]: 2026-01-27 13:44:46.408 238945 DEBUG nova.compute.manager [req-082afc14-f77c-42b3-a35d-e0e24f380d35 req-2a2daeef-f91b-4862-bd1a-ee32ea9be997 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] No waiting events found dispatching network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:46 np0005597378 nova_compute[238941]: 2026-01-27 13:44:46.408 238945 WARNING nova.compute.manager [req-082afc14-f77c-42b3-a35d-e0e24f380d35 req-2a2daeef-f91b-4862-bd1a-ee32ea9be997 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received unexpected event network-vif-plugged-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:44:46 np0005597378 nova_compute[238941]: 2026-01-27 13:44:46.747 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:44:47 np0005597378 nova_compute[238941]: 2026-01-27 13:44:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:44:47 np0005597378 nova_compute[238941]: 2026-01-27 13:44:47.603 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:44:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:44:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:44:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:44:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:44:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:44:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 392 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 132 op/s
Jan 27 08:44:48 np0005597378 nova_compute[238941]: 2026-01-27 13:44:48.428 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521473.4270515, 37f821bc-2bb2-4a60-a76a-4b3123788e6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:48 np0005597378 nova_compute[238941]: 2026-01-27 13:44:48.429 238945 INFO nova.compute.manager [-] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:44:48 np0005597378 nova_compute[238941]: 2026-01-27 13:44:48.528 238945 DEBUG nova.compute.manager [None req-fad801ab-dbae-4dc8-bf89-90c5249f2c01 - - - - - -] [instance: 37f821bc-2bb2-4a60-a76a-4b3123788e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:48 np0005597378 nova_compute[238941]: 2026-01-27 13:44:48.935 238945 DEBUG nova.network.neutron [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:48 np0005597378 nova_compute[238941]: 2026-01-27 13:44:48.970 238945 DEBUG nova.compute.manager [req-2910c2d7-4737-4fd2-b354-3dfb9e3e99e3 req-6138fae4-7f14-4e81-b2d8-360d5cff50a3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Received event network-vif-deleted-6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:48 np0005597378 nova_compute[238941]: 2026-01-27 13:44:48.970 238945 INFO nova.compute.manager [req-2910c2d7-4737-4fd2-b354-3dfb9e3e99e3 req-6138fae4-7f14-4e81-b2d8-360d5cff50a3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Neutron deleted interface 6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 08:44:48 np0005597378 nova_compute[238941]: 2026-01-27 13:44:48.970 238945 DEBUG nova.network.neutron [req-2910c2d7-4737-4fd2-b354-3dfb9e3e99e3 req-6138fae4-7f14-4e81-b2d8-360d5cff50a3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:48 np0005597378 nova_compute[238941]: 2026-01-27 13:44:48.999 238945 INFO nova.compute.manager [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Took 3.12 seconds to deallocate network for instance.#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.045 238945 DEBUG nova.compute.manager [req-2910c2d7-4737-4fd2-b354-3dfb9e3e99e3 req-6138fae4-7f14-4e81-b2d8-360d5cff50a3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Detach interface failed, port_id=6f5f40a3-5f2b-43c9-b993-00b2b6fc59ac, reason: Instance b816093f-751c-4d16-bb91-82ae954a9732 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.201 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.397 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.397 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.475 238945 DEBUG nova.network.neutron [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.509 238945 DEBUG oslo_concurrency.processutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.702 238945 DEBUG oslo_concurrency.lockutils [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.703 238945 DEBUG oslo_concurrency.lockutils [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.703 238945 DEBUG nova.network.neutron [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing network info cache for port 033bda90-ba32-42f7-aab3-c017e5594e94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.708 238945 DEBUG nova.virt.libvirt.vif [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.708 238945 DEBUG nova.network.os_vif_util [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.709 238945 DEBUG nova.network.os_vif_util [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.709 238945 DEBUG os_vif [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.710 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.710 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.712 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.712 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap033bda90-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.713 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap033bda90-ba, col_values=(('external_ids', {'iface-id': '033bda90-ba32-42f7-aab3-c017e5594e94', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:53:9e', 'vm-uuid': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.714 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:49 np0005597378 NetworkManager[48904]: <info>  [1769521489.7154] manager: (tap033bda90-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.716 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.721 238945 INFO os_vif [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba')#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.722 238945 DEBUG nova.virt.libvirt.vif [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.722 238945 DEBUG nova.network.os_vif_util [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.723 238945 DEBUG nova.network.os_vif_util [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.729 238945 DEBUG nova.virt.libvirt.guest [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] attach device xml: <interface type="ethernet">
Jan 27 08:44:49 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:0b:53:9e"/>
Jan 27 08:44:49 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 08:44:49 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:44:49 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 08:44:49 np0005597378 nova_compute[238941]:  <target dev="tap033bda90-ba"/>
Jan 27 08:44:49 np0005597378 nova_compute[238941]: </interface>
Jan 27 08:44:49 np0005597378 nova_compute[238941]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 27 08:44:49 np0005597378 kernel: tap033bda90-ba: entered promiscuous mode
Jan 27 08:44:49 np0005597378 NetworkManager[48904]: <info>  [1769521489.7460] manager: (tap033bda90-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.746 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:49Z|00182|binding|INFO|Claiming lport 033bda90-ba32-42f7-aab3-c017e5594e94 for this chassis.
Jan 27 08:44:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:49Z|00183|binding|INFO|033bda90-ba32-42f7-aab3-c017e5594e94: Claiming fa:16:3e:0b:53:9e 10.100.0.10
Jan 27 08:44:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:49Z|00184|binding|INFO|Setting lport 033bda90-ba32-42f7-aab3-c017e5594e94 ovn-installed in OVS
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.774 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:49Z|00185|binding|INFO|Setting lport 033bda90-ba32-42f7-aab3-c017e5594e94 up in Southbound
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.778 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.781 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:53:9e 10.100.0.10'], port_security=['fa:16:3e:0b:53:9e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=033bda90-ba32-42f7-aab3-c017e5594e94) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.782 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 033bda90-ba32-42f7-aab3-c017e5594e94 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis#033[00m
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.783 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96#033[00m
Jan 27 08:44:49 np0005597378 systemd-udevd[263861]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.803 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be8f806f-cc52-4b30-846d-2a826c073b10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:49 np0005597378 NetworkManager[48904]: <info>  [1769521489.8106] device (tap033bda90-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:44:49 np0005597378 NetworkManager[48904]: <info>  [1769521489.8116] device (tap033bda90-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.838 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ab384ec4-c7c2-42aa-a959-f90a8ed14d0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.844 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3e2a72-8135-4be7-92e0-36af90918ff9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.879 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[287ec784-5539-4603-8143-1d0eb360525f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.903 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5f6689cd-4df7-4328-b287-a9c2b12c176f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263868, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.921 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[86c2ef16-ca5a-4b9a-8cef-4e572a067574]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407197, 'tstamp': 407197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263869, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407202, 'tstamp': 407202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263869, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.923 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.925 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.926 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.926 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.927 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:49.927 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.993 238945 DEBUG nova.virt.libvirt.driver [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.994 238945 DEBUG nova.virt.libvirt.driver [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.995 238945 DEBUG nova.virt.libvirt.driver [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:d8:2d:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.995 238945 DEBUG nova.virt.libvirt.driver [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:18:9d:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:44:49 np0005597378 nova_compute[238941]: 2026-01-27 13:44:49.996 238945 DEBUG nova.virt.libvirt.driver [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:0b:53:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:44:50 np0005597378 nova_compute[238941]: 2026-01-27 13:44:50.065 238945 DEBUG nova.virt.libvirt.guest [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:44:50 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:44:50</nova:creationTime>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:44:50 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:    <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:    <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:    <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 08:44:50 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:    <nova:port uuid="694e1e12-dc4a-4a42-ba67-46b29efc58c1">
Jan 27 08:44:50 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:    <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 08:44:50 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:44:50 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:44:50 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:44:50 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 27 08:44:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:44:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1292327475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:44:50 np0005597378 nova_compute[238941]: 2026-01-27 13:44:50.090 238945 DEBUG oslo_concurrency.processutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:50 np0005597378 nova_compute[238941]: 2026-01-27 13:44:50.095 238945 DEBUG nova.compute.provider_tree [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:44:50 np0005597378 nova_compute[238941]: 2026-01-27 13:44:50.160 238945 DEBUG oslo_concurrency.lockutils [None req-abc9bb4f-215f-4b27-9062-d0341cc0bbd8 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:50 np0005597378 nova_compute[238941]: 2026-01-27 13:44:50.163 238945 DEBUG nova.scheduler.client.report [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:44:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 329 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 126 op/s
Jan 27 08:44:50 np0005597378 nova_compute[238941]: 2026-01-27 13:44:50.233 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:50 np0005597378 nova_compute[238941]: 2026-01-27 13:44:50.267 238945 INFO nova.scheduler.client.report [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Deleted allocations for instance b816093f-751c-4d16-bb91-82ae954a9732#033[00m
Jan 27 08:44:50 np0005597378 nova_compute[238941]: 2026-01-27 13:44:50.430 238945 DEBUG oslo_concurrency.lockutils [None req-ad39a04f-2e6d-4022-ad3b-d7ef686fedd6 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "b816093f-751c-4d16-bb91-82ae954a9732" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:51 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:51Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 08:44:51 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:51Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:0a:48 10.100.0.13
Jan 27 08:44:51 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:51Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:53:9e 10.100.0.10
Jan 27 08:44:51 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:51Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:53:9e 10.100.0.10
Jan 27 08:44:51 np0005597378 nova_compute[238941]: 2026-01-27 13:44:51.748 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.030 238945 DEBUG nova.compute.manager [req-bd8b6225-5d88-423e-8dfa-a85a48fb4933 req-0e673582-7594-4404-a434-4b06123e16df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.031 238945 DEBUG oslo_concurrency.lockutils [req-bd8b6225-5d88-423e-8dfa-a85a48fb4933 req-0e673582-7594-4404-a434-4b06123e16df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.031 238945 DEBUG oslo_concurrency.lockutils [req-bd8b6225-5d88-423e-8dfa-a85a48fb4933 req-0e673582-7594-4404-a434-4b06123e16df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.031 238945 DEBUG oslo_concurrency.lockutils [req-bd8b6225-5d88-423e-8dfa-a85a48fb4933 req-0e673582-7594-4404-a434-4b06123e16df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.031 238945 DEBUG nova.compute.manager [req-bd8b6225-5d88-423e-8dfa-a85a48fb4933 req-0e673582-7594-4404-a434-4b06123e16df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.032 238945 WARNING nova.compute.manager [req-bd8b6225-5d88-423e-8dfa-a85a48fb4933 req-0e673582-7594-4404-a434-4b06123e16df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:44:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 329 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 297 KiB/s wr, 79 op/s
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.743 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "677a728d-1d2a-4e11-909d-c2c91838cfbe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.743 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.744 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.744 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.744 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.745 238945 INFO nova.compute.manager [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Terminating instance#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.746 238945 DEBUG nova.compute.manager [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:44:52 np0005597378 kernel: tap5f5812b1-ad (unregistering): left promiscuous mode
Jan 27 08:44:52 np0005597378 NetworkManager[48904]: <info>  [1769521492.8048] device (tap5f5812b1-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.813 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:52 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:52Z|00186|binding|INFO|Releasing lport 5f5812b1-ad53-4ee5-8409-ce2c112fa95a from this chassis (sb_readonly=0)
Jan 27 08:44:52 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:52Z|00187|binding|INFO|Setting lport 5f5812b1-ad53-4ee5-8409-ce2c112fa95a down in Southbound
Jan 27 08:44:52 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:52Z|00188|binding|INFO|Removing iface tap5f5812b1-ad ovn-installed in OVS
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.817 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.821 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:fa:85 10.100.0.4'], port_security=['fa:16:3e:3d:fa:85 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '677a728d-1d2a-4e11-909d-c2c91838cfbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5f5812b1-ad53-4ee5-8409-ce2c112fa95a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.822 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5f5812b1-ad53-4ee5-8409-ce2c112fa95a in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis#033[00m
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.824 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.834 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.841 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a75468-0b5a-4088-91f4-5a1e6a04e51b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:52 np0005597378 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Deactivated successfully.
Jan 27 08:44:52 np0005597378 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Consumed 17.133s CPU time.
Jan 27 08:44:52 np0005597378 systemd-machined[207425]: Machine qemu-22-instance-00000014 terminated.
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.872 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f323ee7c-b33a-4a1b-a170-4a66d128ba9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.875 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[688f4b16-bf86-4149-a39e-852eb238e24c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.904 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f36f78-23c6-454e-8077-44f11834c1ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8868e64b-bd59-4e8a-b45d-69588f3e5065]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263883, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.939 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2ea194-2eda-4100-9969-e4a286f600a1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263884, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263884, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.940 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.941 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.948 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.948 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.948 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:52.949 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.982 238945 INFO nova.virt.libvirt.driver [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Instance destroyed successfully.#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.983 238945 DEBUG nova.objects.instance [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'resources' on Instance uuid 677a728d-1d2a-4e11-909d-c2c91838cfbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.996 238945 DEBUG nova.virt.libvirt.vif [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2040609420',display_name='tempest-ServersAdminTestJSON-server-2040609420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2040609420',id=20,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-s9uhvpm0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:35Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=677a728d-1d2a-4e11-909d-c2c91838cfbe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.997 238945 DEBUG nova.network.os_vif_util [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "address": "fa:16:3e:3d:fa:85", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f5812b1-ad", "ovs_interfaceid": "5f5812b1-ad53-4ee5-8409-ce2c112fa95a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.998 238945 DEBUG nova.network.os_vif_util [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:52 np0005597378 nova_compute[238941]: 2026-01-27 13:44:52.999 238945 DEBUG os_vif [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.001 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.001 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f5812b1-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.003 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.007 238945 INFO os_vif [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:fa:85,bridge_name='br-int',has_traffic_filtering=True,id=5f5812b1-ad53-4ee5-8409-ce2c112fa95a,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f5812b1-ad')#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.416 238945 DEBUG nova.network.neutron [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updated VIF entry in instance network info cache for port 033bda90-ba32-42f7-aab3-c017e5594e94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.417 238945 DEBUG nova.network.neutron [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.438 238945 DEBUG oslo_concurrency.lockutils [req-a879c7d5-2578-419f-8b41-58e60627bda7 req-83e13190-0c44-4939-98a8-2e93cb6ca528 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.711 238945 INFO nova.virt.libvirt.driver [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Deleting instance files /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe_del#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.712 238945 INFO nova.virt.libvirt.driver [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Deletion of /var/lib/nova/instances/677a728d-1d2a-4e11-909d-c2c91838cfbe_del complete#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.764 238945 INFO nova.compute.manager [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.764 238945 DEBUG oslo.service.loopingcall [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.764 238945 DEBUG nova.compute.manager [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:44:53 np0005597378 nova_compute[238941]: 2026-01-27 13:44:53.765 238945 DEBUG nova.network.neutron [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.048 238945 DEBUG oslo_concurrency.lockutils [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-18883f3b-6c4c-443b-81ec-0b1610e22203" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.048 238945 DEBUG oslo_concurrency.lockutils [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-18883f3b-6c4c-443b-81ec-0b1610e22203" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.049 238945 DEBUG nova.objects.instance [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 315 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.7 MiB/s wr, 119 op/s
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.193 238945 DEBUG nova.compute.manager [req-96ea07f6-9146-41d8-9850-520b17db37a8 req-2ec54187-f84d-4cee-81dc-385770786068 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.194 238945 DEBUG oslo_concurrency.lockutils [req-96ea07f6-9146-41d8-9850-520b17db37a8 req-2ec54187-f84d-4cee-81dc-385770786068 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.194 238945 DEBUG oslo_concurrency.lockutils [req-96ea07f6-9146-41d8-9850-520b17db37a8 req-2ec54187-f84d-4cee-81dc-385770786068 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.194 238945 DEBUG oslo_concurrency.lockutils [req-96ea07f6-9146-41d8-9850-520b17db37a8 req-2ec54187-f84d-4cee-81dc-385770786068 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.194 238945 DEBUG nova.compute.manager [req-96ea07f6-9146-41d8-9850-520b17db37a8 req-2ec54187-f84d-4cee-81dc-385770786068 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.194 238945 WARNING nova.compute.manager [req-96ea07f6-9146-41d8-9850-520b17db37a8 req-2ec54187-f84d-4cee-81dc-385770786068 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:44:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:44:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:44:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.581 238945 DEBUG nova.network.neutron [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.599 238945 DEBUG nova.objects.instance [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_requests' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.607 238945 INFO nova.compute.manager [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Took 0.84 seconds to deallocate network for instance.#033[00m
Jan 27 08:44:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.615 238945 DEBUG nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.667 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.667 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.770 238945 DEBUG oslo_concurrency.processutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:44:54 np0005597378 nova_compute[238941]: 2026-01-27 13:44:54.804 238945 DEBUG nova.compute.manager [req-d177338c-0b9c-4b28-a21c-e46c030d4b70 req-4b13edec-5162-4752-81a6-a987432a4ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Received event network-vif-deleted-5f5812b1-ad53-4ee5-8409-ce2c112fa95a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:44:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366633500' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:44:55 np0005597378 nova_compute[238941]: 2026-01-27 13:44:55.339 238945 DEBUG oslo_concurrency.processutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:44:55 np0005597378 nova_compute[238941]: 2026-01-27 13:44:55.344 238945 DEBUG nova.compute.provider_tree [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:44:55 np0005597378 nova_compute[238941]: 2026-01-27 13:44:55.358 238945 DEBUG nova.policy [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:44:55 np0005597378 nova_compute[238941]: 2026-01-27 13:44:55.373 238945 DEBUG nova.scheduler.client.report [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:44:55 np0005597378 nova_compute[238941]: 2026-01-27 13:44:55.436 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:55 np0005597378 nova_compute[238941]: 2026-01-27 13:44:55.519 238945 INFO nova.scheduler.client.report [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Deleted allocations for instance 677a728d-1d2a-4e11-909d-c2c91838cfbe#033[00m
Jan 27 08:44:55 np0005597378 nova_compute[238941]: 2026-01-27 13:44:55.588 238945 DEBUG oslo_concurrency.lockutils [None req-132ccfb3-376e-4d0d-a43a-640febe624d0 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "677a728d-1d2a-4e11-909d-c2c91838cfbe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:55 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:44:55 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:44:55 np0005597378 podman[264149]: 2026-01-27 13:44:55.757675403 +0000 UTC m=+0.063309142 container create 1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pike, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:44:55 np0005597378 podman[264149]: 2026-01-27 13:44:55.722724454 +0000 UTC m=+0.028358213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:44:55 np0005597378 systemd[1]: Started libpod-conmon-1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71.scope.
Jan 27 08:44:55 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:44:55 np0005597378 podman[264149]: 2026-01-27 13:44:55.907540278 +0000 UTC m=+0.213174037 container init 1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:44:55 np0005597378 podman[264149]: 2026-01-27 13:44:55.919265383 +0000 UTC m=+0.224899122 container start 1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pike, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:44:55 np0005597378 vigorous_pike[264165]: 167 167
Jan 27 08:44:55 np0005597378 systemd[1]: libpod-1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71.scope: Deactivated successfully.
Jan 27 08:44:55 np0005597378 podman[264149]: 2026-01-27 13:44:55.946016782 +0000 UTC m=+0.251650541 container attach 1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:44:55 np0005597378 podman[264149]: 2026-01-27 13:44:55.946598148 +0000 UTC m=+0.252231887 container died 1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pike, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 08:44:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3a7c6b1fcea0ca03b0b6c30eae566ba8939f233f9d4edd6eb20ebe979fe0cae3-merged.mount: Deactivated successfully.
Jan 27 08:44:56 np0005597378 podman[264149]: 2026-01-27 13:44:56.139951472 +0000 UTC m=+0.445585211 container remove 1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pike, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 08:44:56 np0005597378 systemd[1]: libpod-conmon-1af43e573e36f87eccc7a1301776dc98243689037c167fd782628f2cbf447e71.scope: Deactivated successfully.
Jan 27 08:44:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 311 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Jan 27 08:44:56 np0005597378 nova_compute[238941]: 2026-01-27 13:44:56.244 238945 DEBUG nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Successfully updated port: 18883f3b-6c4c-443b-81ec-0b1610e22203 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:44:56 np0005597378 nova_compute[238941]: 2026-01-27 13:44:56.258 238945 DEBUG oslo_concurrency.lockutils [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:44:56 np0005597378 nova_compute[238941]: 2026-01-27 13:44:56.258 238945 DEBUG oslo_concurrency.lockutils [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:44:56 np0005597378 nova_compute[238941]: 2026-01-27 13:44:56.258 238945 DEBUG nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:44:56 np0005597378 nova_compute[238941]: 2026-01-27 13:44:56.404 238945 WARNING nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it#033[00m
Jan 27 08:44:56 np0005597378 nova_compute[238941]: 2026-01-27 13:44:56.405 238945 WARNING nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it#033[00m
Jan 27 08:44:56 np0005597378 nova_compute[238941]: 2026-01-27 13:44:56.405 238945 WARNING nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] ee180809-3e36-46bd-ba3a-3bacc6f9ce96 already exists in list: networks containing: ['ee180809-3e36-46bd-ba3a-3bacc6f9ce96']. ignoring it#033[00m
Jan 27 08:44:56 np0005597378 podman[264188]: 2026-01-27 13:44:56.315269821 +0000 UTC m=+0.024060537 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:44:56 np0005597378 podman[264188]: 2026-01-27 13:44:56.438771099 +0000 UTC m=+0.147561795 container create 42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:44:56 np0005597378 systemd[1]: Started libpod-conmon-42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53.scope.
Jan 27 08:44:56 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:44:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a012ea38763a1f7db741355c918ba12fbed28d7650484fb37efd63e7e61f39a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a012ea38763a1f7db741355c918ba12fbed28d7650484fb37efd63e7e61f39a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a012ea38763a1f7db741355c918ba12fbed28d7650484fb37efd63e7e61f39a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a012ea38763a1f7db741355c918ba12fbed28d7650484fb37efd63e7e61f39a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:56 np0005597378 podman[264188]: 2026-01-27 13:44:56.544938791 +0000 UTC m=+0.253729507 container init 42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lehmann, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:44:56 np0005597378 podman[264188]: 2026-01-27 13:44:56.552285288 +0000 UTC m=+0.261075984 container start 42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:44:56 np0005597378 podman[264188]: 2026-01-27 13:44:56.565529204 +0000 UTC m=+0.274319930 container attach 42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lehmann, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:44:56 np0005597378 nova_compute[238941]: 2026-01-27 13:44:56.750 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]: [
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:    {
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:        "available": false,
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:        "being_replaced": false,
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:        "ceph_device_lvm": false,
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:        "lsm_data": {},
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:        "lvs": [],
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:        "path": "/dev/sr0",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:        "rejected_reasons": [
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "Insufficient space (<5GB)",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "Has a FileSystem"
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:        ],
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:        "sys_api": {
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "actuators": null,
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "device_nodes": [
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:                "sr0"
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            ],
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "devname": "sr0",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "human_readable_size": "482.00 KB",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "id_bus": "ata",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "model": "QEMU DVD-ROM",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "nr_requests": "2",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "parent": "/dev/sr0",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "partitions": {},
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "path": "/dev/sr0",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "removable": "1",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "rev": "2.5+",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "ro": "0",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "rotational": "1",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "sas_address": "",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "sas_device_handle": "",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "scheduler_mode": "mq-deadline",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "sectors": 0,
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "sectorsize": "2048",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "size": 493568.0,
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "support_discard": "2048",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "type": "disk",
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:            "vendor": "QEMU"
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:        }
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]:    }
Jan 27 08:44:57 np0005597378 stupefied_lehmann[264205]: ]
Jan 27 08:44:57 np0005597378 systemd[1]: libpod-42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53.scope: Deactivated successfully.
Jan 27 08:44:57 np0005597378 podman[264188]: 2026-01-27 13:44:57.15500698 +0000 UTC m=+0.863797706 container died 42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lehmann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 08:44:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9a012ea38763a1f7db741355c918ba12fbed28d7650484fb37efd63e7e61f39a-merged.mount: Deactivated successfully.
Jan 27 08:44:57 np0005597378 podman[264188]: 2026-01-27 13:44:57.423694628 +0000 UTC m=+1.132485334 container remove 42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lehmann, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 08:44:57 np0005597378 systemd[1]: libpod-conmon-42d03fbe9cd6def6dfb2e17d73a344dba802dc3ad7de4518e5749620392e3d53.scope: Deactivated successfully.
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:44:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:44:57 np0005597378 nova_compute[238941]: 2026-01-27 13:44:57.784 238945 DEBUG nova.compute.manager [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-changed-18883f3b-6c4c-443b-81ec-0b1610e22203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:44:57 np0005597378 nova_compute[238941]: 2026-01-27 13:44:57.785 238945 DEBUG nova.compute.manager [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing instance network info cache due to event network-changed-18883f3b-6c4c-443b-81ec-0b1610e22203. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:44:57 np0005597378 nova_compute[238941]: 2026-01-27 13:44:57.785 238945 DEBUG oslo_concurrency.lockutils [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:44:57 np0005597378 podman[265114]: 2026-01-27 13:44:57.923661858 +0000 UTC m=+0.045591506 container create d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 08:44:57 np0005597378 systemd[1]: Started libpod-conmon-d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b.scope.
Jan 27 08:44:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:44:57 np0005597378 podman[265114]: 2026-01-27 13:44:57.898695577 +0000 UTC m=+0.020625245 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:44:58 np0005597378 nova_compute[238941]: 2026-01-27 13:44:58.002 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:58 np0005597378 podman[265114]: 2026-01-27 13:44:58.014729694 +0000 UTC m=+0.136659342 container init d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:44:58 np0005597378 podman[265114]: 2026-01-27 13:44:58.023452669 +0000 UTC m=+0.145382307 container start d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 08:44:58 np0005597378 podman[265114]: 2026-01-27 13:44:58.027865857 +0000 UTC m=+0.149795505 container attach d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:44:58 np0005597378 focused_chaum[265130]: 167 167
Jan 27 08:44:58 np0005597378 systemd[1]: libpod-d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b.scope: Deactivated successfully.
Jan 27 08:44:58 np0005597378 podman[265114]: 2026-01-27 13:44:58.029762909 +0000 UTC m=+0.151692577 container died d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:44:58 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c2c789479a6a35b519c06de3608e3412ed3c31309ceada13aad6d10f929191ca-merged.mount: Deactivated successfully.
Jan 27 08:44:58 np0005597378 podman[265114]: 2026-01-27 13:44:58.119500409 +0000 UTC m=+0.241430057 container remove d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:44:58 np0005597378 systemd[1]: libpod-conmon-d0539636989a5651edb15896b42c914158d6967cc34d33a7824901223e58905b.scope: Deactivated successfully.
Jan 27 08:44:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 279 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Jan 27 08:44:58 np0005597378 podman[265153]: 2026-01-27 13:44:58.319025608 +0000 UTC m=+0.040870978 container create 493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:44:58 np0005597378 systemd[1]: Started libpod-conmon-493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4.scope.
Jan 27 08:44:58 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:44:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e16cedc3225480a04845c687c69e6d6af312a886a86c912afdda0a7e4ea16b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e16cedc3225480a04845c687c69e6d6af312a886a86c912afdda0a7e4ea16b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e16cedc3225480a04845c687c69e6d6af312a886a86c912afdda0a7e4ea16b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e16cedc3225480a04845c687c69e6d6af312a886a86c912afdda0a7e4ea16b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e16cedc3225480a04845c687c69e6d6af312a886a86c912afdda0a7e4ea16b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:58 np0005597378 podman[265153]: 2026-01-27 13:44:58.302092934 +0000 UTC m=+0.023938334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:44:58 np0005597378 podman[265153]: 2026-01-27 13:44:58.411224345 +0000 UTC m=+0.133069745 container init 493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:44:58 np0005597378 podman[265153]: 2026-01-27 13:44:58.419384065 +0000 UTC m=+0.141229445 container start 493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:44:58 np0005597378 podman[265153]: 2026-01-27 13:44:58.424621686 +0000 UTC m=+0.146467076 container attach 493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:44:58 np0005597378 vigorous_mcclintock[265169]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:44:58 np0005597378 vigorous_mcclintock[265169]: --> All data devices are unavailable
Jan 27 08:44:58 np0005597378 systemd[1]: libpod-493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4.scope: Deactivated successfully.
Jan 27 08:44:58 np0005597378 podman[265153]: 2026-01-27 13:44:58.912432889 +0000 UTC m=+0.634278269 container died 493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:44:58 np0005597378 systemd[1]: var-lib-containers-storage-overlay-83e16cedc3225480a04845c687c69e6d6af312a886a86c912afdda0a7e4ea16b-merged.mount: Deactivated successfully.
Jan 27 08:44:59 np0005597378 podman[265153]: 2026-01-27 13:44:59.009713313 +0000 UTC m=+0.731558703 container remove 493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 08:44:59 np0005597378 systemd[1]: libpod-conmon-493fe0c5693794be8599663e84874e7c0db538f2c9b3268198c3f1a4af4947f4.scope: Deactivated successfully.
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.138 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.139 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.140 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.140 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.140 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.142 238945 INFO nova.compute.manager [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Terminating instance#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.143 238945 DEBUG nova.compute.manager [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.174 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521484.1732442, b816093f-751c-4d16-bb91-82ae954a9732 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.174 238945 INFO nova.compute.manager [-] [instance: b816093f-751c-4d16-bb91-82ae954a9732] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:44:59 np0005597378 kernel: tap3cd42161-aa (unregistering): left promiscuous mode
Jan 27 08:44:59 np0005597378 NetworkManager[48904]: <info>  [1769521499.2204] device (tap3cd42161-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:59Z|00189|binding|INFO|Releasing lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c from this chassis (sb_readonly=0)
Jan 27 08:44:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:59Z|00190|binding|INFO|Setting lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c down in Southbound
Jan 27 08:44:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:59Z|00191|binding|INFO|Removing iface tap3cd42161-aa ovn-installed in OVS
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.229 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:59 np0005597378 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000012.scope: Deactivated successfully.
Jan 27 08:44:59 np0005597378 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000012.scope: Consumed 18.119s CPU time.
Jan 27 08:44:59 np0005597378 systemd-machined[207425]: Machine qemu-21-instance-00000012 terminated.
Jan 27 08:44:59 np0005597378 kernel: tap3cd42161-aa: entered promiscuous mode
Jan 27 08:44:59 np0005597378 NetworkManager[48904]: <info>  [1769521499.3662] manager: (tap3cd42161-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Jan 27 08:44:59 np0005597378 kernel: tap3cd42161-aa (unregistering): left promiscuous mode
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.372 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:59Z|00192|if_status|INFO|Not updating pb chassis for 3cd42161-aa97-4ecb-9e41-e7a887f02d7c now as sb is readonly
Jan 27 08:44:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:59Z|00193|binding|INFO|Releasing lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c from this chassis (sb_readonly=1)
Jan 27 08:44:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:59Z|00194|if_status|INFO|Dropped 6 log messages in last 54 seconds (most recently, 54 seconds ago) due to excessive rate
Jan 27 08:44:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:44:59Z|00195|if_status|INFO|Not setting lport 3cd42161-aa97-4ecb-9e41-e7a887f02d7c down as sb is readonly
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.400 238945 INFO nova.virt.libvirt.driver [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Instance destroyed successfully.#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.400 238945 DEBUG nova.objects.instance [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'resources' on Instance uuid 4c52012f-9a4f-4599-adb0-2c658a054f91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.414 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.439 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:48:4c 10.100.0.14'], port_security=['fa:16:3e:3f:48:4c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4c52012f-9a4f-4599-adb0-2c658a054f91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3cd42161-aa97-4ecb-9e41-e7a887f02d7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.441 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3cd42161-aa97-4ecb-9e41-e7a887f02d7c in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis#033[00m
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.442 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bc78608-1746-40d0-a3d3-be467e4c23ef#033[00m
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.461 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[85ed2f5d-fa5a-42ac-b436-1ad5a80131dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.463 238945 DEBUG nova.compute.manager [None req-ac0be53a-fb81-4b5d-a7c3-7b9f2e0ef053 - - - - - -] [instance: b816093f-751c-4d16-bb91-82ae954a9732] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.465 238945 DEBUG nova.virt.libvirt.vif [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1989956857',display_name='tempest-ServersAdminTestJSON-server-1989956857',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1989956857',id=18,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-uvcvbdkl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:22Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=4c52012f-9a4f-4599-adb0-2c658a054f91,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.465 238945 DEBUG nova.network.os_vif_util [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "address": "fa:16:3e:3f:48:4c", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cd42161-aa", "ovs_interfaceid": "3cd42161-aa97-4ecb-9e41-e7a887f02d7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.466 238945 DEBUG nova.network.os_vif_util [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.466 238945 DEBUG os_vif [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.468 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.468 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cd42161-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.470 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.471 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.474 238945 INFO os_vif [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:48:4c,bridge_name='br-int',has_traffic_filtering=True,id=3cd42161-aa97-4ecb-9e41-e7a887f02d7c,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cd42161-aa')#033[00m
Jan 27 08:44:59 np0005597378 podman[265277]: 2026-01-27 13:44:59.487830796 +0000 UTC m=+0.051787762 container create 181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.495 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee53d64-ead1-4fcc-b6e6-e7bbc48f8e66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.502 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f04710-9f28-4bb8-8389-ce371cbed64a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.532 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0861cfd9-d1a9-4b02-b745-56b8f7efdb16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:59 np0005597378 systemd[1]: Started libpod-conmon-181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8.scope.
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.549 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f2788e-6076-4eb5-b448-d3b5f6dfe23c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bc78608-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:3f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 27, 'rx_bytes': 952, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 27, 'rx_bytes': 952, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403269, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265313, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:59 np0005597378 podman[265277]: 2026-01-27 13:44:59.46005858 +0000 UTC m=+0.024015576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.569 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[56c0400a-9b27-4267-b482-1124c454a1f6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403280, 'tstamp': 403280}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265316, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bc78608-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403284, 'tstamp': 403284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265316, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.570 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.571 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.573 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc78608-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.573 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.574 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bc78608-10, col_values=(('external_ids', {'iface-id': 'f2abaf39-2261-4bb7-9bb5-6208083120f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:44:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:44:59.574 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:44:59 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:44:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:44:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2709941341' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:44:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:44:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2709941341' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:44:59 np0005597378 podman[265277]: 2026-01-27 13:44:59.602743303 +0000 UTC m=+0.166700289 container init 181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:44:59 np0005597378 podman[265277]: 2026-01-27 13:44:59.609908445 +0000 UTC m=+0.173865411 container start 181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:44:59 np0005597378 podman[265277]: 2026-01-27 13:44:59.614866249 +0000 UTC m=+0.178823315 container attach 181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:44:59 np0005597378 youthful_lovelace[265314]: 167 167
Jan 27 08:44:59 np0005597378 systemd[1]: libpod-181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8.scope: Deactivated successfully.
Jan 27 08:44:59 np0005597378 podman[265277]: 2026-01-27 13:44:59.616293867 +0000 UTC m=+0.180250833 container died 181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 08:44:59 np0005597378 systemd[1]: var-lib-containers-storage-overlay-145e65dec0e2f774e0bc33cfed92708a52948f20cb6e8b7acde81f6e0e95ce09-merged.mount: Deactivated successfully.
Jan 27 08:44:59 np0005597378 podman[265277]: 2026-01-27 13:44:59.707104207 +0000 UTC m=+0.271061173 container remove 181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:44:59 np0005597378 systemd[1]: libpod-conmon-181e7ad4a9c26deb6fbf178588b883299e80c54560dc4aa2e53db7600aba5fc8.scope: Deactivated successfully.
Jan 27 08:44:59 np0005597378 podman[265342]: 2026-01-27 13:44:59.908811945 +0000 UTC m=+0.048184516 container create 7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:44:59 np0005597378 systemd[1]: Started libpod-conmon-7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf.scope.
Jan 27 08:44:59 np0005597378 podman[265342]: 2026-01-27 13:44:59.884382829 +0000 UTC m=+0.023755420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.987 238945 INFO nova.virt.libvirt.driver [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Deleting instance files /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91_del#033[00m
Jan 27 08:44:59 np0005597378 nova_compute[238941]: 2026-01-27 13:44:59.989 238945 INFO nova.virt.libvirt.driver [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Deletion of /var/lib/nova/instances/4c52012f-9a4f-4599-adb0-2c658a054f91_del complete#033[00m
Jan 27 08:44:59 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:44:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad4a94c5401e143eddb4be8c46fa5c1779d299e6c5e90802cf3ea537a4dbec2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad4a94c5401e143eddb4be8c46fa5c1779d299e6c5e90802cf3ea537a4dbec2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad4a94c5401e143eddb4be8c46fa5c1779d299e6c5e90802cf3ea537a4dbec2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:44:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad4a94c5401e143eddb4be8c46fa5c1779d299e6c5e90802cf3ea537a4dbec2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:00 np0005597378 podman[265342]: 2026-01-27 13:45:00.01211102 +0000 UTC m=+0.151483611 container init 7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 08:45:00 np0005597378 podman[265342]: 2026-01-27 13:45:00.022261043 +0000 UTC m=+0.161633614 container start 7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noyce, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:45:00 np0005597378 podman[265342]: 2026-01-27 13:45:00.030445053 +0000 UTC m=+0.169817624 container attach 7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 08:45:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 249 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 261 KiB/s rd, 2.2 MiB/s wr, 96 op/s
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]: {
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:    "0": [
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:        {
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "devices": [
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "/dev/loop3"
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            ],
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_name": "ceph_lv0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_size": "21470642176",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "name": "ceph_lv0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "tags": {
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.cluster_name": "ceph",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.crush_device_class": "",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.encrypted": "0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.objectstore": "bluestore",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.osd_id": "0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.type": "block",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.vdo": "0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.with_tpm": "0"
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            },
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "type": "block",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "vg_name": "ceph_vg0"
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:        }
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:    ],
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:    "1": [
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:        {
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "devices": [
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "/dev/loop4"
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            ],
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_name": "ceph_lv1",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_size": "21470642176",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "name": "ceph_lv1",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "tags": {
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.cluster_name": "ceph",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.crush_device_class": "",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.encrypted": "0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.objectstore": "bluestore",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.osd_id": "1",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.type": "block",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.vdo": "0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.with_tpm": "0"
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            },
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "type": "block",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "vg_name": "ceph_vg1"
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:        }
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:    ],
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:    "2": [
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:        {
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "devices": [
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "/dev/loop5"
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            ],
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_name": "ceph_lv2",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_size": "21470642176",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "name": "ceph_lv2",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "tags": {
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.cluster_name": "ceph",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.crush_device_class": "",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.encrypted": "0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.objectstore": "bluestore",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.osd_id": "2",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.type": "block",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.vdo": "0",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:                "ceph.with_tpm": "0"
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            },
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "type": "block",
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:            "vg_name": "ceph_vg2"
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:        }
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]:    ]
Jan 27 08:45:00 np0005597378 gracious_noyce[265359]: }
Jan 27 08:45:00 np0005597378 systemd[1]: libpod-7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf.scope: Deactivated successfully.
Jan 27 08:45:00 np0005597378 podman[265342]: 2026-01-27 13:45:00.392418206 +0000 UTC m=+0.531790777 container died 7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noyce, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:45:00 np0005597378 nova_compute[238941]: 2026-01-27 13:45:00.397 238945 INFO nova.compute.manager [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Took 1.25 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:45:00 np0005597378 nova_compute[238941]: 2026-01-27 13:45:00.398 238945 DEBUG oslo.service.loopingcall [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:45:00 np0005597378 nova_compute[238941]: 2026-01-27 13:45:00.398 238945 DEBUG nova.compute.manager [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:45:00 np0005597378 nova_compute[238941]: 2026-01-27 13:45:00.398 238945 DEBUG nova.network.neutron [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:45:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3ad4a94c5401e143eddb4be8c46fa5c1779d299e6c5e90802cf3ea537a4dbec2-merged.mount: Deactivated successfully.
Jan 27 08:45:00 np0005597378 podman[265342]: 2026-01-27 13:45:00.439386768 +0000 UTC m=+0.578759329 container remove 7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noyce, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:45:00 np0005597378 systemd[1]: libpod-conmon-7c7bdf61abf30333ec020d573544504fe979c7a47670d74a7a384f58adaf7fcf.scope: Deactivated successfully.
Jan 27 08:45:00 np0005597378 podman[265441]: 2026-01-27 13:45:00.931532048 +0000 UTC m=+0.047338103 container create 2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:45:00 np0005597378 systemd[1]: Started libpod-conmon-2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001.scope.
Jan 27 08:45:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:45:01 np0005597378 podman[265441]: 2026-01-27 13:45:00.906785363 +0000 UTC m=+0.022591458 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:45:01 np0005597378 podman[265441]: 2026-01-27 13:45:01.029738627 +0000 UTC m=+0.145544712 container init 2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatelet, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:45:01 np0005597378 podman[265441]: 2026-01-27 13:45:01.036715834 +0000 UTC m=+0.152521899 container start 2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatelet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 08:45:01 np0005597378 exciting_chatelet[265457]: 167 167
Jan 27 08:45:01 np0005597378 systemd[1]: libpod-2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001.scope: Deactivated successfully.
Jan 27 08:45:01 np0005597378 conmon[265457]: conmon 2486af68f44bc934cc33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001.scope/container/memory.events
Jan 27 08:45:01 np0005597378 podman[265441]: 2026-01-27 13:45:01.047956676 +0000 UTC m=+0.163762731 container attach 2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:45:01 np0005597378 podman[265441]: 2026-01-27 13:45:01.048993384 +0000 UTC m=+0.164799439 container died 2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 08:45:01 np0005597378 systemd[1]: var-lib-containers-storage-overlay-351705c6dfedb1aea9045a757ef47f1361adf7309a7bea681144fb998be3fe98-merged.mount: Deactivated successfully.
Jan 27 08:45:01 np0005597378 podman[265441]: 2026-01-27 13:45:01.132529498 +0000 UTC m=+0.248335553 container remove 2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:45:01 np0005597378 systemd[1]: libpod-conmon-2486af68f44bc934cc33cf7072b52340624404a7812ec44f6ea5d67d8bfb9001.scope: Deactivated successfully.
Jan 27 08:45:01 np0005597378 podman[265481]: 2026-01-27 13:45:01.34401314 +0000 UTC m=+0.052833441 container create 2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:45:01 np0005597378 systemd[1]: Started libpod-conmon-2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6.scope.
Jan 27 08:45:01 np0005597378 podman[265481]: 2026-01-27 13:45:01.317472737 +0000 UTC m=+0.026293068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:45:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:45:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce10223073aa2a18121e0d6f01a7096da245281784bad51dfce1357164b4f8aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce10223073aa2a18121e0d6f01a7096da245281784bad51dfce1357164b4f8aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce10223073aa2a18121e0d6f01a7096da245281784bad51dfce1357164b4f8aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce10223073aa2a18121e0d6f01a7096da245281784bad51dfce1357164b4f8aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:01 np0005597378 podman[265481]: 2026-01-27 13:45:01.449993987 +0000 UTC m=+0.158814308 container init 2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kapitsa, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:45:01 np0005597378 podman[265481]: 2026-01-27 13:45:01.458572208 +0000 UTC m=+0.167392509 container start 2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kapitsa, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:45:01 np0005597378 nova_compute[238941]: 2026-01-27 13:45:01.462 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:01 np0005597378 nova_compute[238941]: 2026-01-27 13:45:01.463 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:01 np0005597378 podman[265481]: 2026-01-27 13:45:01.473865998 +0000 UTC m=+0.182686329 container attach 2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 08:45:01 np0005597378 nova_compute[238941]: 2026-01-27 13:45:01.522 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:45:01 np0005597378 nova_compute[238941]: 2026-01-27 13:45:01.710 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:01 np0005597378 nova_compute[238941]: 2026-01-27 13:45:01.711 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:01 np0005597378 nova_compute[238941]: 2026-01-27 13:45:01.720 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:45:01 np0005597378 nova_compute[238941]: 2026-01-27 13:45:01.721 238945 INFO nova.compute.claims [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:45:01 np0005597378 nova_compute[238941]: 2026-01-27 13:45:01.752 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:45:01 np0005597378 nova_compute[238941]: 2026-01-27 13:45:01.933 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.091 238945 DEBUG nova.network.neutron [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.139 238945 INFO nova.compute.manager [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Took 1.74 seconds to deallocate network for instance.#033[00m
Jan 27 08:45:02 np0005597378 lvm[265593]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:45:02 np0005597378 lvm[265593]: VG ceph_vg0 finished
Jan 27 08:45:02 np0005597378 lvm[265595]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:45:02 np0005597378 lvm[265595]: VG ceph_vg1 finished
Jan 27 08:45:02 np0005597378 lvm[265596]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:45:02 np0005597378 lvm[265596]: VG ceph_vg2 finished
Jan 27 08:45:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 249 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 1.9 MiB/s wr, 83 op/s
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.249 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.269 238945 DEBUG nova.compute.manager [req-4c8c08bb-f44e-40ce-87cb-d10b53b8e809 req-6003481f-e432-404e-ae51-773dd10afd32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received event network-vif-unplugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.270 238945 DEBUG oslo_concurrency.lockutils [req-4c8c08bb-f44e-40ce-87cb-d10b53b8e809 req-6003481f-e432-404e-ae51-773dd10afd32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.270 238945 DEBUG oslo_concurrency.lockutils [req-4c8c08bb-f44e-40ce-87cb-d10b53b8e809 req-6003481f-e432-404e-ae51-773dd10afd32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.270 238945 DEBUG oslo_concurrency.lockutils [req-4c8c08bb-f44e-40ce-87cb-d10b53b8e809 req-6003481f-e432-404e-ae51-773dd10afd32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.271 238945 DEBUG nova.compute.manager [req-4c8c08bb-f44e-40ce-87cb-d10b53b8e809 req-6003481f-e432-404e-ae51-773dd10afd32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] No waiting events found dispatching network-vif-unplugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.271 238945 WARNING nova.compute.manager [req-4c8c08bb-f44e-40ce-87cb-d10b53b8e809 req-6003481f-e432-404e-ae51-773dd10afd32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received unexpected event network-vif-unplugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:45:02 np0005597378 thirsty_kapitsa[265498]: {}
Jan 27 08:45:02 np0005597378 systemd[1]: libpod-2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6.scope: Deactivated successfully.
Jan 27 08:45:02 np0005597378 systemd[1]: libpod-2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6.scope: Consumed 1.378s CPU time.
Jan 27 08:45:02 np0005597378 podman[265481]: 2026-01-27 13:45:02.329086084 +0000 UTC m=+1.037906385 container died 2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.383 238945 DEBUG nova.compute.manager [req-a7e3688b-24bd-49d9-9385-a19e769613a6 req-20922f5d-60c4-404d-a902-19947437a791 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received event network-vif-deleted-3cd42161-aa97-4ecb-9e41-e7a887f02d7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ce10223073aa2a18121e0d6f01a7096da245281784bad51dfce1357164b4f8aa-merged.mount: Deactivated successfully.
Jan 27 08:45:02 np0005597378 podman[265481]: 2026-01-27 13:45:02.416824241 +0000 UTC m=+1.125644552 container remove 2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 08:45:02 np0005597378 systemd[1]: libpod-conmon-2f3779eb8d59fabbbab449a197394517a2558b629989d8a0cbd158bdf1009aa6.scope: Deactivated successfully.
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.516046) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521502516103, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2195, "num_deletes": 259, "total_data_size": 3336796, "memory_usage": 3388288, "flush_reason": "Manual Compaction"}
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521502548271, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3252771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21048, "largest_seqno": 23242, "table_properties": {"data_size": 3242909, "index_size": 6164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21249, "raw_average_key_size": 20, "raw_value_size": 3222802, "raw_average_value_size": 3144, "num_data_blocks": 274, "num_entries": 1025, "num_filter_entries": 1025, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769521316, "oldest_key_time": 1769521316, "file_creation_time": 1769521502, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 32300 microseconds, and 7392 cpu microseconds.
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2945242524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.548320) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3252771 bytes OK
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.548374) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.556098) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.556140) EVENT_LOG_v1 {"time_micros": 1769521502556133, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.556162) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3327448, prev total WAL file size 3369900, number of live WAL files 2.
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.557066) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3176KB)], [50(7301KB)]
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521502557096, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10729153, "oldest_snapshot_seqno": -1}
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.574 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.581 238945 DEBUG nova.compute.provider_tree [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4907 keys, 8969722 bytes, temperature: kUnknown
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521502644643, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8969722, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8934821, "index_size": 21543, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 120450, "raw_average_key_size": 24, "raw_value_size": 8844370, "raw_average_value_size": 1802, "num_data_blocks": 900, "num_entries": 4907, "num_filter_entries": 4907, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769521502, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.646 238945 DEBUG nova.scheduler.client.report [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.644892) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8969722 bytes
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.650491) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.4 rd, 102.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.1 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5435, records dropped: 528 output_compression: NoCompression
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.650519) EVENT_LOG_v1 {"time_micros": 1769521502650506, "job": 26, "event": "compaction_finished", "compaction_time_micros": 87628, "compaction_time_cpu_micros": 19686, "output_level": 6, "num_output_files": 1, "total_output_size": 8969722, "num_input_records": 5435, "num_output_records": 4907, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521502651239, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521502652724, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.556971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.652762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.652767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.652769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.652771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:45:02 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:45:02.652773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.817 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.818 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.821 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.876 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.876 238945 DEBUG nova.network.neutron [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.908 238945 INFO nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.921 238945 DEBUG oslo_concurrency.processutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:02 np0005597378 nova_compute[238941]: 2026-01-27 13:45:02.956 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.063 238945 DEBUG nova.policy [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7dedc0f04f3d455682ea65fc37a49f06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b041f051267f4a3c8518d3042922678a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.070 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.072 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.072 238945 INFO nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Creating image(s)#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.102 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.134 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.160 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.165 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.235 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.236 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.236 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.237 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.264 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.268 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:45:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2059433215' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.551 238945 DEBUG oslo_concurrency.processutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:03 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:45:03 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.560 238945 DEBUG nova.compute.provider_tree [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.586 238945 DEBUG nova.scheduler.client.report [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.643 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.708 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.738 238945 INFO nova.scheduler.client.report [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Deleted allocations for instance 4c52012f-9a4f-4599-adb0-2c658a054f91#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.774 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] resizing rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.801 238945 DEBUG nova.network.neutron [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Successfully created port: e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.842 238945 DEBUG oslo_concurrency.lockutils [None req-b0cd62ba-3609-448b-bda3-42d7d51b0d61 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.879 238945 DEBUG nova.objects.instance [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'migration_context' on Instance uuid 50d0e7b1-50a9-47e5-92b9-26570f8dba53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.896 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.897 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Ensure instance console log exists: /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.897 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.897 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:03 np0005597378 nova_compute[238941]: 2026-01-27 13:45:03.898 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 200 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 1.9 MiB/s wr, 135 op/s
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.455 238945 DEBUG nova.network.neutron [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.470 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.481 238945 DEBUG oslo_concurrency.lockutils [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.482 238945 DEBUG oslo_concurrency.lockutils [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.482 238945 DEBUG nova.network.neutron [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Refreshing network info cache for port 18883f3b-6c4c-443b-81ec-0b1610e22203 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.486 238945 DEBUG nova.virt.libvirt.vif [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.486 238945 DEBUG nova.network.os_vif_util [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.487 238945 DEBUG nova.network.os_vif_util [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.487 238945 DEBUG os_vif [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.488 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.489 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.489 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.492 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18883f3b-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.493 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18883f3b-6c, col_values=(('external_ids', {'iface-id': '18883f3b-6c4c-443b-81ec-0b1610e22203', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:1e:e3', 'vm-uuid': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:04 np0005597378 NetworkManager[48904]: <info>  [1769521504.4963] manager: (tap18883f3b-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.498 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.500 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.502 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.504 238945 INFO os_vif [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c')#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.505 238945 DEBUG nova.virt.libvirt.vif [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.505 238945 DEBUG nova.network.os_vif_util [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.505 238945 DEBUG nova.network.os_vif_util [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.509 238945 DEBUG nova.virt.libvirt.guest [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] attach device xml: <interface type="ethernet">
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:99:1e:e3"/>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  <target dev="tap18883f3b-6c"/>
Jan 27 08:45:04 np0005597378 nova_compute[238941]: </interface>
Jan 27 08:45:04 np0005597378 nova_compute[238941]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 27 08:45:04 np0005597378 kernel: tap18883f3b-6c: entered promiscuous mode
Jan 27 08:45:04 np0005597378 NetworkManager[48904]: <info>  [1769521504.5227] manager: (tap18883f3b-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Jan 27 08:45:04 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:04Z|00196|binding|INFO|Claiming lport 18883f3b-6c4c-443b-81ec-0b1610e22203 for this chassis.
Jan 27 08:45:04 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:04Z|00197|binding|INFO|18883f3b-6c4c-443b-81ec-0b1610e22203: Claiming fa:16:3e:99:1e:e3 10.100.0.13
Jan 27 08:45:04 np0005597378 systemd-udevd[265599]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.523 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:04 np0005597378 NetworkManager[48904]: <info>  [1769521504.5374] device (tap18883f3b-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.536 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:1e:e3 10.100.0.13'], port_security=['fa:16:3e:99:1e:e3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1719796987', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1719796987', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=18883f3b-6c4c-443b-81ec-0b1610e22203) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.537 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 18883f3b-6c4c-443b-81ec-0b1610e22203 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis#033[00m
Jan 27 08:45:04 np0005597378 NetworkManager[48904]: <info>  [1769521504.5387] device (tap18883f3b-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.539 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96#033[00m
Jan 27 08:45:04 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:04Z|00198|binding|INFO|Setting lport 18883f3b-6c4c-443b-81ec-0b1610e22203 ovn-installed in OVS
Jan 27 08:45:04 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:04Z|00199|binding|INFO|Setting lport 18883f3b-6c4c-443b-81ec-0b1610e22203 up in Southbound
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.547 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.556 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b7b92b1d-53e4-40fd-b3d1-757ab9758ca1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.585 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e4542a-25b1-43c4-b8f3-3dea4101c423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.588 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c61faf-61e4-4d68-b561-62b5aedb341d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.617 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[17b764b8-e870-4c6c-99bf-434d7b84c69f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.620 238945 DEBUG nova.virt.libvirt.driver [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.620 238945 DEBUG nova.virt.libvirt.driver [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.620 238945 DEBUG nova.virt.libvirt.driver [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:d8:2d:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.620 238945 DEBUG nova.virt.libvirt.driver [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:18:9d:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.621 238945 DEBUG nova.virt.libvirt.driver [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:0b:53:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.621 238945 DEBUG nova.virt.libvirt.driver [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:99:1e:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.635 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd64da7-6b00-4965-87dd-bb0655d610f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 784, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 784, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265838, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.645 238945 DEBUG nova.virt.libvirt.guest [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:45:04</nova:creationTime>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 08:45:04 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    <nova:port uuid="694e1e12-dc4a-4a42-ba67-46b29efc58c1">
Jan 27 08:45:04 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 08:45:04 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 08:45:04 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:04 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:45:04 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:45:04 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.652 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f03cdae-d103-4a28-92c0-1bf2eda4ac57]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407197, 'tstamp': 407197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265839, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407202, 'tstamp': 407202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265839, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.654 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.657 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.657 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.657 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.658 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:04.658 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.668 238945 DEBUG oslo_concurrency.lockutils [None req-1f034754-2fd1-48ee-9a15-e0c213d3249c 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-18883f3b-6c4c-443b-81ec-0b1610e22203" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.842 238945 DEBUG nova.compute.manager [req-c7f3282f-3c09-44a0-b940-5c83046de206 req-5d974e3d-548c-4329-8e23-0aa406e59804 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.843 238945 DEBUG oslo_concurrency.lockutils [req-c7f3282f-3c09-44a0-b940-5c83046de206 req-5d974e3d-548c-4329-8e23-0aa406e59804 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.843 238945 DEBUG oslo_concurrency.lockutils [req-c7f3282f-3c09-44a0-b940-5c83046de206 req-5d974e3d-548c-4329-8e23-0aa406e59804 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.843 238945 DEBUG oslo_concurrency.lockutils [req-c7f3282f-3c09-44a0-b940-5c83046de206 req-5d974e3d-548c-4329-8e23-0aa406e59804 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4c52012f-9a4f-4599-adb0-2c658a054f91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.843 238945 DEBUG nova.compute.manager [req-c7f3282f-3c09-44a0-b940-5c83046de206 req-5d974e3d-548c-4329-8e23-0aa406e59804 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] No waiting events found dispatching network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.844 238945 WARNING nova.compute.manager [req-c7f3282f-3c09-44a0-b940-5c83046de206 req-5d974e3d-548c-4329-8e23-0aa406e59804 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Received unexpected event network-vif-plugged-3cd42161-aa97-4ecb-9e41-e7a887f02d7c for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.985 238945 DEBUG nova.compute.manager [req-5aabf715-b4d6-479a-83e1-a61e65af1521 req-11531b32-bf8c-44bc-b36e-fbff3816b836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-18883f3b-6c4c-443b-81ec-0b1610e22203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.987 238945 DEBUG oslo_concurrency.lockutils [req-5aabf715-b4d6-479a-83e1-a61e65af1521 req-11531b32-bf8c-44bc-b36e-fbff3816b836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.987 238945 DEBUG oslo_concurrency.lockutils [req-5aabf715-b4d6-479a-83e1-a61e65af1521 req-11531b32-bf8c-44bc-b36e-fbff3816b836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.988 238945 DEBUG oslo_concurrency.lockutils [req-5aabf715-b4d6-479a-83e1-a61e65af1521 req-11531b32-bf8c-44bc-b36e-fbff3816b836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.988 238945 DEBUG nova.compute.manager [req-5aabf715-b4d6-479a-83e1-a61e65af1521 req-11531b32-bf8c-44bc-b36e-fbff3816b836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-18883f3b-6c4c-443b-81ec-0b1610e22203 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:04 np0005597378 nova_compute[238941]: 2026-01-27 13:45:04.988 238945 WARNING nova.compute.manager [req-5aabf715-b4d6-479a-83e1-a61e65af1521 req-11531b32-bf8c-44bc-b36e-fbff3816b836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-18883f3b-6c4c-443b-81ec-0b1610e22203 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.177 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.178 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.178 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.179 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.180 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.181 238945 INFO nova.compute.manager [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Terminating instance#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.182 238945 DEBUG nova.compute.manager [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:45:05 np0005597378 kernel: tap851829c6-49 (unregistering): left promiscuous mode
Jan 27 08:45:05 np0005597378 NetworkManager[48904]: <info>  [1769521505.2497] device (tap851829c6-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:45:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:05Z|00200|binding|INFO|Releasing lport 851829c6-49a6-4580-90d9-df985a736216 from this chassis (sb_readonly=0)
Jan 27 08:45:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:05Z|00201|binding|INFO|Setting lport 851829c6-49a6-4580-90d9-df985a736216 down in Southbound
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:05Z|00202|binding|INFO|Removing iface tap851829c6-49 ovn-installed in OVS
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.261 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.266 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:0a:48 10.100.0.13'], port_security=['fa:16:3e:5b:0a:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bee7c432-6457-4160-917c-a807eca3df0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c02e06ff150d4463ba12a3be444a4ae3', 'neutron:revision_number': '10', 'neutron:security_group_ids': '260d4ac8-a21c-4cbd-a3fe-a915b0767d7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f922aebf-2630-451c-b96b-b86c432d849a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=851829c6-49a6-4580-90d9-df985a736216) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.268 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 851829c6-49a6-4580-90d9-df985a736216 in datapath 4bc78608-1746-40d0-a3d3-be467e4c23ef unbound from our chassis#033[00m
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.269 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4bc78608-1746-40d0-a3d3-be467e4c23ef, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.270 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01531775-d8fa-41ca-ba7f-820836f56147]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.271 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef namespace which is not needed anymore#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.286 238945 DEBUG nova.network.neutron [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Successfully updated port: e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:45:05 np0005597378 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000011.scope: Deactivated successfully.
Jan 27 08:45:05 np0005597378 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000011.scope: Consumed 14.094s CPU time.
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.302 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "refresh_cache-50d0e7b1-50a9-47e5-92b9-26570f8dba53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.302 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquired lock "refresh_cache-50d0e7b1-50a9-47e5-92b9-26570f8dba53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.302 238945 DEBUG nova.network.neutron [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:45:05 np0005597378 systemd-machined[207425]: Machine qemu-29-instance-00000011 terminated.
Jan 27 08:45:05 np0005597378 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [NOTICE]   (258020) : haproxy version is 2.8.14-c23fe91
Jan 27 08:45:05 np0005597378 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [NOTICE]   (258020) : path to executable is /usr/sbin/haproxy
Jan 27 08:45:05 np0005597378 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [WARNING]  (258020) : Exiting Master process...
Jan 27 08:45:05 np0005597378 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [ALERT]    (258020) : Current worker (258022) exited with code 143 (Terminated)
Jan 27 08:45:05 np0005597378 neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef[258014]: [WARNING]  (258020) : All workers exited. Exiting... (0)
Jan 27 08:45:05 np0005597378 systemd[1]: libpod-33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25.scope: Deactivated successfully.
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.430 238945 INFO nova.virt.libvirt.driver [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Instance destroyed successfully.#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.430 238945 DEBUG nova.objects.instance [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lazy-loading 'resources' on Instance uuid bee7c432-6457-4160-917c-a807eca3df0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:05 np0005597378 podman[265861]: 2026-01-27 13:45:05.435658788 +0000 UTC m=+0.059558130 container died 33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.445 238945 DEBUG nova.virt.libvirt.vif [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:43:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-752871201',display_name='tempest-ServersAdminTestJSON-server-752871201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-752871201',id=17,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:44:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c02e06ff150d4463ba12a3be444a4ae3',ramdisk_id='',reservation_id='r-fdiela0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vi
rtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2123092478',owner_user_name='tempest-ServersAdminTestJSON-2123092478-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:44:42Z,user_data=None,user_id='97755bdfdc1140aa970fa69a04baeb3c',uuid=bee7c432-6457-4160-917c-a807eca3df0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.446 238945 DEBUG nova.network.os_vif_util [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converting VIF {"id": "851829c6-49a6-4580-90d9-df985a736216", "address": "fa:16:3e:5b:0a:48", "network": {"id": "4bc78608-1746-40d0-a3d3-be467e4c23ef", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-416596738-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c02e06ff150d4463ba12a3be444a4ae3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap851829c6-49", "ovs_interfaceid": "851829c6-49a6-4580-90d9-df985a736216", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.446 238945 DEBUG nova.network.os_vif_util [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.447 238945 DEBUG os_vif [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.449 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.449 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap851829c6-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.451 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.455 238945 INFO os_vif [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:0a:48,bridge_name='br-int',has_traffic_filtering=True,id=851829c6-49a6-4580-90d9-df985a736216,network=Network(4bc78608-1746-40d0-a3d3-be467e4c23ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap851829c6-49')#033[00m
Jan 27 08:45:05 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25-userdata-shm.mount: Deactivated successfully.
Jan 27 08:45:05 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7912d8889bcc710109044d974d117cd8921ca98d61262588e916347c04d92b3c-merged.mount: Deactivated successfully.
Jan 27 08:45:05 np0005597378 podman[265861]: 2026-01-27 13:45:05.501831965 +0000 UTC m=+0.125731297 container cleanup 33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 08:45:05 np0005597378 systemd[1]: libpod-conmon-33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25.scope: Deactivated successfully.
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.564 238945 DEBUG nova.network.neutron [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:45:05 np0005597378 podman[265916]: 2026-01-27 13:45:05.605868671 +0000 UTC m=+0.078211872 container remove 33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.615 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e41178a-9c84-43cd-beb6-d85a9aec4347]: (4, ('Tue Jan 27 01:45:05 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef (33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25)\n33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25\nTue Jan 27 01:45:05 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef (33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25)\n33e1bfd0be5e8ff800f770b0e4c5201e48c86dc3eb4f16a1d42e359e4699cd25\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.618 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[77b31202-d01d-4274-bd03-8c6d8fb5cfbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.620 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc78608-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.622 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:05 np0005597378 kernel: tap4bc78608-10: left promiscuous mode
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.640 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.643 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52bf4dff-3eb6-4d2b-9a6b-a3f7cce824cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.652 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7dc9daa-1cd2-4234-86db-a379515e8a2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.654 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[39c4b8e3-257f-430d-a938-d1bb90d331ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.669 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0ca5a2-5a86-4fd1-8ddf-ed75f5de0e53]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403262, 'reachable_time': 19117, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265931, 'error': None, 'target': 'ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.672 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4bc78608-1746-40d0-a3d3-be467e4c23ef deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:45:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:05.672 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab2dd92-b354-43bd-9d3b-bce2665413f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:05 np0005597378 systemd[1]: run-netns-ovnmeta\x2d4bc78608\x2d1746\x2d40d0\x2da3d3\x2dbe467e4c23ef.mount: Deactivated successfully.
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.816 238945 DEBUG nova.network.neutron [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updated VIF entry in instance network info cache for port 18883f3b-6c4c-443b-81ec-0b1610e22203. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.816 238945 DEBUG nova.network.neutron [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.840 238945 DEBUG oslo_concurrency.lockutils [req-4bc5053b-c8f0-4ce2-a9a1-5d674224be70 req-7043d0e5-f3b1-410f-8e88-d46c77ca4711 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.965 238945 INFO nova.virt.libvirt.driver [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deleting instance files /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e_del#033[00m
Jan 27 08:45:05 np0005597378 nova_compute[238941]: 2026-01-27 13:45:05.966 238945 INFO nova.virt.libvirt.driver [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deletion of /var/lib/nova/instances/bee7c432-6457-4160-917c-a807eca3df0e_del complete#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.034 238945 INFO nova.compute.manager [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.034 238945 DEBUG oslo.service.loopingcall [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.035 238945 DEBUG nova.compute.manager [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.035 238945 DEBUG nova.network.neutron [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:45:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 222 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 1.3 MiB/s wr, 146 op/s
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.633 238945 DEBUG oslo_concurrency.lockutils [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-694e1e12-dc4a-4a42-ba67-46b29efc58c1" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.634 238945 DEBUG oslo_concurrency.lockutils [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-694e1e12-dc4a-4a42-ba67-46b29efc58c1" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.656 238945 DEBUG nova.objects.instance [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'flavor' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.681 238945 DEBUG nova.virt.libvirt.vif [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.682 238945 DEBUG nova.network.os_vif_util [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.682 238945 DEBUG nova.network.os_vif_util [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.687 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.690 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 08:45:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:06Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:1e:e3 10.100.0.13
Jan 27 08:45:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:06Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:1e:e3 10.100.0.13
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.759 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.765 238945 DEBUG nova.virt.libvirt.driver [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Attempting to detach device tap694e1e12-dc from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.766 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] detach device xml: <interface type="ethernet">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:18:9d:8a"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <target dev="tap694e1e12-dc"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]: </interface>
Jan 27 08:45:06 np0005597378 nova_compute[238941]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.777 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.782 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface>not found in domain: <domain type='kvm' id='23'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <name>instance-00000015</name>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <uuid>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</uuid>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:45:04</nova:creationTime>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:port uuid="694e1e12-dc4a-4a42-ba67-46b29efc58c1">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:45:06 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <resource>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </resource>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <entry name='serial'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <entry name='uuid'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk' index='2'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config' index='1'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:d8:2d:f1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target dev='tapc2b2aaa7-69'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:18:9d:8a'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target dev='tap694e1e12-dc'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='net1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:0b:53:9e'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target dev='tap033bda90-ba'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='net2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:99:1e:e3'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target dev='tap18883f3b-6c'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='net3'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <source path='/dev/pts/2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      </target>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/2'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <source path='/dev/pts/2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </console>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c35,c518</label>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c35,c518</imagelabel>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 08:45:06 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:45:06 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.782 238945 INFO nova.virt.libvirt.driver [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully detached device tap694e1e12-dc from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the persistent domain config.#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.782 238945 DEBUG nova.virt.libvirt.driver [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] (1/8): Attempting to detach device tap694e1e12-dc with device alias net1 from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.782 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] detach device xml: <interface type="ethernet">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:18:9d:8a"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <target dev="tap694e1e12-dc"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]: </interface>
Jan 27 08:45:06 np0005597378 nova_compute[238941]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 27 08:45:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.856 238945 DEBUG nova.network.neutron [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Updating instance_info_cache with network_info: [{"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.878 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Releasing lock "refresh_cache-50d0e7b1-50a9-47e5-92b9-26570f8dba53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.878 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Instance network_info: |[{"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.880 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Start _get_guest_xml network_info=[{"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:45:06 np0005597378 kernel: tap694e1e12-dc (unregistering): left promiscuous mode
Jan 27 08:45:06 np0005597378 NetworkManager[48904]: <info>  [1769521506.8841] device (tap694e1e12-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.886 238945 WARNING nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:45:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:06Z|00203|binding|INFO|Releasing lport 694e1e12-dc4a-4a42-ba67-46b29efc58c1 from this chassis (sb_readonly=0)
Jan 27 08:45:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:06Z|00204|binding|INFO|Setting lport 694e1e12-dc4a-4a42-ba67-46b29efc58c1 down in Southbound
Jan 27 08:45:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:06Z|00205|binding|INFO|Removing iface tap694e1e12-dc ovn-installed in OVS
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.898 238945 DEBUG nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Received event <DeviceRemovedEvent: 1769521506.897779, 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.899 238945 DEBUG nova.virt.libvirt.driver [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Start waiting for the detach event from libvirt for device tap694e1e12-dc with device alias net1 for instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.899 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.903 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface>not found in domain: <domain type='kvm' id='23'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <name>instance-00000015</name>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <uuid>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</uuid>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:45:04</nova:creationTime>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:port uuid="694e1e12-dc4a-4a42-ba67-46b29efc58c1">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:45:06 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <resource>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </resource>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <entry name='serial'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <entry name='uuid'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk' index='2'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config' index='1'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:d8:2d:f1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target dev='tapc2b2aaa7-69'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:0b:53:9e'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target dev='tap033bda90-ba'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='net2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:99:1e:e3'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target dev='tap18883f3b-6c'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='net3'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <source path='/dev/pts/2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      </target>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/2'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <source path='/dev/pts/2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </console>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c35,c518</label>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c35,c518</imagelabel>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 08:45:06 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:45:06 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.903 238945 INFO nova.virt.libvirt.driver [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully detached device tap694e1e12-dc from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the live domain config.#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.904 238945 DEBUG nova.virt.libvirt.vif [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.904 238945 DEBUG nova.network.os_vif_util [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.905 238945 DEBUG nova.network.os_vif_util [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.905 238945 DEBUG os_vif [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.906 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.906 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap694e1e12-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.907 238945 DEBUG nova.virt.libvirt.host [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.907 238945 DEBUG nova.virt.libvirt.host [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.908 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.912 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:45:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:06Z|00206|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 08:45:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.910 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:9d:8a 10.100.0.11'], port_security=['fa:16:3e:18:9d:8a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=694e1e12-dc4a-4a42-ba67-46b29efc58c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:45:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.912 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 694e1e12-dc4a-4a42-ba67-46b29efc58c1 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis#033[00m
Jan 27 08:45:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.914 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.914 238945 DEBUG nova.network.neutron [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.916 238945 DEBUG nova.virt.libvirt.host [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.917 238945 DEBUG nova.virt.libvirt.host [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.917 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.917 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.918 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.918 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.918 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.918 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.918 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.919 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.919 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.919 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.919 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.919 238945 DEBUG nova.virt.hardware [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.922 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:06 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:45:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[32914e2c-6cb2-443e-8be1-8cbc523017b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.947 238945 INFO os_vif [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc')#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.948 238945 DEBUG nova.virt.libvirt.guest [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:45:06</nova:creationTime>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 08:45:06 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:06 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:45:06 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:45:06 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 27 08:45:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.963 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[082b58a1-12bc-4bd0-86c1-d7b790c6beec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.966 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f846951a-4c1a-4fb3-9cbd-5ad46b6b91bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.973 238945 DEBUG nova.compute.manager [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.973 238945 DEBUG oslo_concurrency.lockutils [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.974 238945 DEBUG oslo_concurrency.lockutils [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.974 238945 DEBUG oslo_concurrency.lockutils [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.974 238945 DEBUG nova.compute.manager [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.974 238945 DEBUG nova.compute.manager [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-unplugged-851829c6-49a6-4580-90d9-df985a736216 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.974 238945 DEBUG nova.compute.manager [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.974 238945 DEBUG oslo_concurrency.lockutils [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bee7c432-6457-4160-917c-a807eca3df0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.975 238945 DEBUG oslo_concurrency.lockutils [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.975 238945 DEBUG oslo_concurrency.lockutils [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.975 238945 DEBUG nova.compute.manager [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] No waiting events found dispatching network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.975 238945 WARNING nova.compute.manager [req-581f24e5-4161-41b3-9539-d9cf6a5f821f req-7c888dfd-25ed-4733-bc4b-6916ba024d15 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received unexpected event network-vif-plugged-851829c6-49a6-4580-90d9-df985a736216 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:45:06 np0005597378 nova_compute[238941]: 2026-01-27 13:45:06.976 238945 INFO nova.compute.manager [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Took 0.94 seconds to deallocate network for instance.#033[00m
Jan 27 08:45:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:06.998 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee7691b-041a-4a67-b55c-30a2ec39b3f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.014 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f99b57-f4ca-4870-ab82-20cefae3ce8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265944, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.031 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c0005cb2-eab6-40ba-be17-87eb3d907aaf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407197, 'tstamp': 407197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265945, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407202, 'tstamp': 407202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265945, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.033 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.034 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.035 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.036 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.036 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:45:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.036 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:07.037 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.054 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.055 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.232 238945 DEBUG nova.compute.manager [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-18883f3b-6c4c-443b-81ec-0b1610e22203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.233 238945 DEBUG oslo_concurrency.lockutils [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.233 238945 DEBUG oslo_concurrency.lockutils [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.233 238945 DEBUG oslo_concurrency.lockutils [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.233 238945 DEBUG nova.compute.manager [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-18883f3b-6c4c-443b-81ec-0b1610e22203 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.233 238945 WARNING nova.compute.manager [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-18883f3b-6c4c-443b-81ec-0b1610e22203 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.234 238945 DEBUG nova.compute.manager [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-changed-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.234 238945 DEBUG nova.compute.manager [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Refreshing instance network info cache due to event network-changed-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.234 238945 DEBUG oslo_concurrency.lockutils [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-50d0e7b1-50a9-47e5-92b9-26570f8dba53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.234 238945 DEBUG oslo_concurrency.lockutils [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-50d0e7b1-50a9-47e5-92b9-26570f8dba53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.235 238945 DEBUG nova.network.neutron [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Refreshing network info cache for port e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.320 238945 DEBUG oslo_concurrency.processutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:45:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/638083636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.519 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.540 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.544 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.806 238945 DEBUG oslo_concurrency.lockutils [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.806 238945 DEBUG oslo_concurrency.lockutils [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.807 238945 DEBUG nova.network.neutron [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:45:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:45:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219206401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.890 238945 DEBUG oslo_concurrency.processutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.896 238945 DEBUG nova.compute.provider_tree [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.916 238945 DEBUG nova.scheduler.client.report [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.940 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.981 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521492.9799948, 677a728d-1d2a-4e11-909d-c2c91838cfbe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.982 238945 INFO nova.compute.manager [-] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:45:07 np0005597378 nova_compute[238941]: 2026-01-27 13:45:07.987 238945 INFO nova.scheduler.client.report [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Deleted allocations for instance bee7c432-6457-4160-917c-a807eca3df0e#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.079 238945 DEBUG nova.compute.manager [None req-225ef574-8ba7-4bef-a773-2303036e9e09 - - - - - -] [instance: 677a728d-1d2a-4e11-909d-c2c91838cfbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:45:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/443631378' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.122 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.123 238945 DEBUG nova.virt.libvirt.vif [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:44:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-502081405',display_name='tempest-ImagesTestJSON-server-502081405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-502081405',id=26,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-4avwdocn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:02Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=50d0e7b1-50a9-47e5-92b9-26570f8dba53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.124 238945 DEBUG nova.network.os_vif_util [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.125 238945 DEBUG nova.network.os_vif_util [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.126 238945 DEBUG nova.objects.instance [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'pci_devices' on Instance uuid 50d0e7b1-50a9-47e5-92b9-26570f8dba53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.141 238945 DEBUG oslo_concurrency.lockutils [None req-4719f00e-c68a-45d6-9a5b-f6b2f0556b9f 97755bdfdc1140aa970fa69a04baeb3c c02e06ff150d4463ba12a3be444a4ae3 - - default default] Lock "bee7c432-6457-4160-917c-a807eca3df0e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.144 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  <uuid>50d0e7b1-50a9-47e5-92b9-26570f8dba53</uuid>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  <name>instance-0000001a</name>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <nova:name>tempest-ImagesTestJSON-server-502081405</nova:name>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:45:06</nova:creationTime>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:        <nova:user uuid="7dedc0f04f3d455682ea65fc37a49f06">tempest-ImagesTestJSON-1064968599-project-member</nova:user>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:        <nova:project uuid="b041f051267f4a3c8518d3042922678a">tempest-ImagesTestJSON-1064968599</nova:project>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:        <nova:port uuid="e4f2b52b-f218-4d18-9b87-fe3b94bf58b3">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <entry name="serial">50d0e7b1-50a9-47e5-92b9-26570f8dba53</entry>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <entry name="uuid">50d0e7b1-50a9-47e5-92b9-26570f8dba53</entry>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk.config">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:1e:cf:10"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <target dev="tape4f2b52b-f2"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/console.log" append="off"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:45:08 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:45:08 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:45:08 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:45:08 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.144 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Preparing to wait for external event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.144 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.145 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.145 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.145 238945 DEBUG nova.virt.libvirt.vif [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:44:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-502081405',display_name='tempest-ImagesTestJSON-server-502081405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-502081405',id=26,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-4avwdocn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:02Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=50d0e7b1-50a9-47e5-92b9-26570f8dba53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.145 238945 DEBUG nova.network.os_vif_util [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.146 238945 DEBUG nova.network.os_vif_util [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.146 238945 DEBUG os_vif [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.147 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.147 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.147 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.150 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4f2b52b-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.150 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4f2b52b-f2, col_values=(('external_ids', {'iface-id': 'e4f2b52b-f218-4d18-9b87-fe3b94bf58b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:cf:10', 'vm-uuid': '50d0e7b1-50a9-47e5-92b9-26570f8dba53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:08 np0005597378 NetworkManager[48904]: <info>  [1769521508.1524] manager: (tape4f2b52b-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.157 238945 INFO os_vif [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2')#033[00m
Jan 27 08:45:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 208 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 1.3 MiB/s wr, 148 op/s
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.226 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.226 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.227 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No VIF found with MAC fa:16:3e:1e:cf:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.227 238945 INFO nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Using config drive#033[00m
Jan 27 08:45:08 np0005597378 nova_compute[238941]: 2026-01-27 13:45:08.250 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.061 238945 DEBUG nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Received event network-vif-deleted-851829c6-49a6-4580-90d9-df985a736216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.061 238945 DEBUG nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-unplugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.062 238945 DEBUG oslo_concurrency.lockutils [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.062 238945 DEBUG oslo_concurrency.lockutils [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.062 238945 DEBUG oslo_concurrency.lockutils [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.062 238945 DEBUG nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-unplugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.063 238945 WARNING nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-unplugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.063 238945 DEBUG nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.063 238945 DEBUG oslo_concurrency.lockutils [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.063 238945 DEBUG oslo_concurrency.lockutils [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.063 238945 DEBUG oslo_concurrency.lockutils [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.064 238945 DEBUG nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.064 238945 WARNING nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-694e1e12-dc4a-4a42-ba67-46b29efc58c1 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.064 238945 DEBUG nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-deleted-694e1e12-dc4a-4a42-ba67-46b29efc58c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.064 238945 INFO nova.compute.manager [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Neutron deleted interface 694e1e12-dc4a-4a42-ba67-46b29efc58c1; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.065 238945 DEBUG nova.network.neutron [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.111 238945 DEBUG nova.objects.instance [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'system_metadata' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.151 238945 DEBUG nova.objects.instance [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'flavor' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.196 238945 DEBUG nova.virt.libvirt.vif [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.197 238945 DEBUG nova.network.os_vif_util [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.197 238945 DEBUG nova.network.os_vif_util [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.201 238945 DEBUG nova.virt.libvirt.guest [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.206 238945 DEBUG nova.virt.libvirt.guest [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface>not found in domain: <domain type='kvm' id='23'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <name>instance-00000015</name>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <uuid>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</uuid>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:45:06</nova:creationTime>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:45:09 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <resource>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </resource>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <entry name='serial'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <entry name='uuid'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk' index='2'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config' index='1'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:d8:2d:f1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target dev='tapc2b2aaa7-69'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:0b:53:9e'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target dev='tap033bda90-ba'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='net2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:99:1e:e3'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target dev='tap18883f3b-6c'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='net3'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <source path='/dev/pts/2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      </target>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/2'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <source path='/dev/pts/2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </console>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c35,c518</label>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c35,c518</imagelabel>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 08:45:09 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:45:09 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.206 238945 DEBUG nova.virt.libvirt.guest [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.210 238945 DEBUG nova.virt.libvirt.guest [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:18:9d:8a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap694e1e12-dc"/></interface>not found in domain: <domain type='kvm' id='23'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <name>instance-00000015</name>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <uuid>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</uuid>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:45:06</nova:creationTime>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:45:09 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <resource>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </resource>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <entry name='serial'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <entry name='uuid'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk' index='2'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config' index='1'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:d8:2d:f1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target dev='tapc2b2aaa7-69'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:0b:53:9e'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target dev='tap033bda90-ba'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='net2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:99:1e:e3'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target dev='tap18883f3b-6c'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='net3'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <source path='/dev/pts/2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      </target>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/2'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <source path='/dev/pts/2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </console>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c35,c518</label>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c35,c518</imagelabel>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 08:45:09 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:45:09 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.211 238945 WARNING nova.virt.libvirt.driver [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Detaching interface fa:16:3e:18:9d:8a failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap694e1e12-dc' not found.
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.211 238945 DEBUG nova.virt.libvirt.vif [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.211 238945 DEBUG nova.network.os_vif_util [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "address": "fa:16:3e:18:9d:8a", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap694e1e12-dc", "ovs_interfaceid": "694e1e12-dc4a-4a42-ba67-46b29efc58c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.212 238945 DEBUG nova.network.os_vif_util [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.212 238945 DEBUG os_vif [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.214 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.214 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap694e1e12-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.214 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.216 238945 INFO os_vif [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:9d:8a,bridge_name='br-int',has_traffic_filtering=True,id=694e1e12-dc4a-4a42-ba67-46b29efc58c1,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap694e1e12-dc')#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.217 238945 DEBUG nova.virt.libvirt.guest [req-ee808d46-0ed8-45a2-8cc6-e978feaa261a req-68704f35-e4a9-40d9-be4c-3459ce04870c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:45:09</nova:creationTime>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:port uuid="033bda90-ba32-42f7-aab3-c017e5594e94">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    <nova:port uuid="18883f3b-6c4c-443b-81ec-0b1610e22203">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:45:09 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:45:09 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.319 238945 INFO nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Creating config drive at /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/disk.config#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.324 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsld9l_w0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.359 238945 DEBUG nova.network.neutron [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Updated VIF entry in instance network info cache for port e4f2b52b-f218-4d18-9b87-fe3b94bf58b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.360 238945 DEBUG nova.network.neutron [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Updating instance_info_cache with network_info: [{"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.379 238945 DEBUG oslo_concurrency.lockutils [req-dc284ac6-28aa-4e4c-bdc2-509ab07945b2 req-6c02db5c-d656-40b6-9fd8-0850a84ee7c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-50d0e7b1-50a9-47e5-92b9-26570f8dba53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.456 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsld9l_w0" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.477 238945 DEBUG nova.storage.rbd_utils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.481 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/disk.config 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.563 154802 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port b5649368-8bdf-42be-9ac2-55d422c020b6 with type ""#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.564 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:1e:e3 10.100.0.13'], port_security=['fa:16:3e:99:1e:e3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1719796987', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1719796987', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=18883f3b-6c4c-443b-81ec-0b1610e22203) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.566 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 18883f3b-6c4c-443b-81ec-0b1610e22203 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.568 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96#033[00m
Jan 27 08:45:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:09Z|00207|binding|INFO|Removing iface tap18883f3b-6c ovn-installed in OVS
Jan 27 08:45:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:09Z|00208|binding|INFO|Removing lport 18883f3b-6c4c-443b-81ec-0b1610e22203 ovn-installed in OVS
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.586 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9ff8274f-aafd-48ce-9f60-2b4feff3377e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.588 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.612 238945 INFO nova.network.neutron [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Port 694e1e12-dc4a-4a42-ba67-46b29efc58c1 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.616 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce4eccd-b4ed-4658-b3cd-8e27cfbf37fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.619 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[284b93fc-4f4d-4798-bcfa-9b9328ca5d68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.644 238945 DEBUG oslo_concurrency.processutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/disk.config 50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.644 238945 INFO nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Deleting local config drive /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53/disk.config because it was imported into RBD.#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.647 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b86da0dc-7cff-4373-8997-4aef7d91c5cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.666 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bcba39cd-8626-47f8-b55c-415bb0d26570]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 14, 'rx_bytes': 784, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 14, 'rx_bytes': 784, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407178, 'reachable_time': 31089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266095, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.683 238945 DEBUG nova.compute.manager [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-deleted-18883f3b-6c4c-443b-81ec-0b1610e22203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.684 238945 INFO nova.compute.manager [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Neutron deleted interface 18883f3b-6c4c-443b-81ec-0b1610e22203; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.684 238945 DEBUG nova.network.neutron [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.684 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[13a004a1-e996-47f3-b015-fc0ff12f2519]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407197, 'tstamp': 407197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266100, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee180809-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407202, 'tstamp': 407202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266100, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.686 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.687 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:09 np0005597378 kernel: tape4f2b52b-f2: entered promiscuous mode
Jan 27 08:45:09 np0005597378 NetworkManager[48904]: <info>  [1769521509.6917] manager: (tape4f2b52b-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.695 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:09Z|00209|binding|INFO|Claiming lport e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 for this chassis.
Jan 27 08:45:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:09Z|00210|binding|INFO|e4f2b52b-f218-4d18-9b87-fe3b94bf58b3: Claiming fa:16:3e:1e:cf:10 10.100.0.8
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.696 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.697 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.697 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.697 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.703 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:cf:10 10.100.0.8'], port_security=['fa:16:3e:1e:cf:10 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '50d0e7b1-50a9-47e5-92b9-26570f8dba53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.704 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 bound to our chassis#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.705 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e25f7657-3ed6-425c-8132-1b5c417564a5#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.712 238945 DEBUG nova.objects.instance [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'system_metadata' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:09Z|00211|binding|INFO|Setting lport e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 ovn-installed in OVS
Jan 27 08:45:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:09Z|00212|binding|INFO|Setting lport e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 up in Southbound
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.718 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.718 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2cfe008d-cef6-4b20-b165-542f9231ed59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.719 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape25f7657-31 in ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.722 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape25f7657-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.722 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac3044e-f7e5-4868-bd1e-50e05d3730f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.723 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c80c2379-c39b-46c8-bf20-e9a05132e7d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 systemd-udevd[266112]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:45:09 np0005597378 systemd-machined[207425]: New machine qemu-30-instance-0000001a.
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.741 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[de418bdc-0c5d-435b-bd02-89b2f3859386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.743 238945 DEBUG nova.objects.instance [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'flavor' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:09 np0005597378 NetworkManager[48904]: <info>  [1769521509.7454] device (tape4f2b52b-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:45:09 np0005597378 NetworkManager[48904]: <info>  [1769521509.7462] device (tape4f2b52b-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:45:09 np0005597378 systemd[1]: Started Virtual Machine qemu-30-instance-0000001a.
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.766 238945 DEBUG nova.virt.libvirt.vif [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.766 238945 DEBUG nova.network.os_vif_util [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.767 238945 DEBUG nova.network.os_vif_util [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.771 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[078fe790-a52b-4f15-b9db-474b04afaa68]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.774 238945 DEBUG nova.virt.libvirt.guest [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:99:1e:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap18883f3b-6c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.778 238945 DEBUG nova.virt.libvirt.guest [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:99:1e:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap18883f3b-6c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.783 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.783 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.784 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.784 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.784 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.785 238945 INFO nova.compute.manager [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Terminating instance#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.787 238945 DEBUG nova.compute.manager [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.788 238945 DEBUG nova.virt.libvirt.driver [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Attempting to detach device tap18883f3b-6c from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.789 238945 DEBUG nova.virt.libvirt.guest [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] detach device xml: <interface type="ethernet">
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:99:1e:e3"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]:  <target dev="tap18883f3b-6c"/>
Jan 27 08:45:09 np0005597378 nova_compute[238941]: </interface>
Jan 27 08:45:09 np0005597378 nova_compute[238941]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.800 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[99728a62-7722-4d71-9a67-7454cbfe3e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.800 238945 DEBUG nova.virt.libvirt.guest [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:99:1e:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap18883f3b-6c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 08:45:09 np0005597378 systemd-udevd[266115]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:45:09 np0005597378 NetworkManager[48904]: <info>  [1769521509.8071] manager: (tape25f7657-30): new Veth device (/org/freedesktop/NetworkManager/Devices/100)
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.805 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41d5417e-375b-411e-bae4-28baf13ca343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.837 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[753d8a61-3d1b-4ac2-ad1f-d5a3f799b5d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.841 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7066b0-228b-4da1-9cf8-ba91dd4322b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 NetworkManager[48904]: <info>  [1769521509.8696] device (tape25f7657-30): carrier: link connected
Jan 27 08:45:09 np0005597378 kernel: tapc2b2aaa7-69 (unregistering): left promiscuous mode
Jan 27 08:45:09 np0005597378 NetworkManager[48904]: <info>  [1769521509.8747] device (tapc2b2aaa7-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.879 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad38f53-2040-45e1-aecd-1ae7ee6ec1c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 podman[266117]: 2026-01-27 13:45:09.88193602 +0000 UTC m=+0.105960417 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:09Z|00213|binding|INFO|Releasing lport c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 from this chassis (sb_readonly=0)
Jan 27 08:45:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:09Z|00214|binding|INFO|Setting lport c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 down in Southbound
Jan 27 08:45:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:09Z|00215|binding|INFO|Removing iface tapc2b2aaa7-69 ovn-installed in OVS
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.888 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.893 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:2d:f1 10.100.0.7'], port_security=['fa:16:3e:d8:2d:f1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45353761-c75a-4426-88a9-3022541c9e26', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.900 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8aff78-0c84-42c7-b4be-b72e63581e58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414949, 'reachable_time': 41832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266170, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 kernel: tap033bda90-ba (unregistering): left promiscuous mode
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.905 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:09 np0005597378 NetworkManager[48904]: <info>  [1769521509.9081] device (tap033bda90-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:45:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:09Z|00216|binding|INFO|Releasing lport 033bda90-ba32-42f7-aab3-c017e5594e94 from this chassis (sb_readonly=0)
Jan 27 08:45:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:09Z|00217|binding|INFO|Setting lport 033bda90-ba32-42f7-aab3-c017e5594e94 down in Southbound
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.918 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:09Z|00218|binding|INFO|Removing iface tap033bda90-ba ovn-installed in OVS
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.920 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.920 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fdba0fa6-a1e3-46d0-95f5-dec70f9bf97b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:da8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414949, 'tstamp': 414949}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266171, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.924 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:53:9e 10.100.0.10'], port_security=['fa:16:3e:0b:53:9e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9bf01cd7-4810-40fb-b3e6-3434dfc52d5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1633a5e8-2c53-47b9-a98a-b111a003890f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=033bda90-ba32-42f7-aab3-c017e5594e94) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:45:09 np0005597378 kernel: tap18883f3b-6c (unregistering): left promiscuous mode
Jan 27 08:45:09 np0005597378 NetworkManager[48904]: <info>  [1769521509.9372] device (tap18883f3b-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.946 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3ead82-6796-495c-b0e7-6ec46c84bbf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414949, 'reachable_time': 41832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266176, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:09 np0005597378 nova_compute[238941]: 2026-01-27 13:45:09.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:09.979 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3eb6c2-b72b-4616-af10-d3c105266086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:10 np0005597378 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Deactivated successfully.
Jan 27 08:45:10 np0005597378 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Consumed 18.723s CPU time.
Jan 27 08:45:10 np0005597378 systemd-machined[207425]: Machine qemu-23-instance-00000015 terminated.
Jan 27 08:45:10 np0005597378 NetworkManager[48904]: <info>  [1769521510.0349] manager: (tap033bda90-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.043 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[002c8d1a-b86f-433d-b62f-dba0a6b5e864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.045 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.045 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.046 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape25f7657-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.047 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:10 np0005597378 NetworkManager[48904]: <info>  [1769521510.0481] manager: (tape25f7657-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Jan 27 08:45:10 np0005597378 NetworkManager[48904]: <info>  [1769521510.0497] manager: (tap18883f3b-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Jan 27 08:45:10 np0005597378 kernel: tape25f7657-30: entered promiscuous mode
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.061 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape25f7657-30, col_values=(('external_ids', {'iface-id': 'be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.063 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:10 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:10Z|00219|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.063 238945 DEBUG nova.virt.libvirt.guest [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:99:1e:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap18883f3b-6c"/></interface>not found in domain: <domain type='kvm'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <name>instance-00000015</name>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <uuid>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</uuid>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1204555771</nova:name>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:43:48</nova:creationTime>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        <nova:port uuid="c2b2aaa7-69a4-4868-bbe6-d21fd9974c34">
Jan 27 08:45:10 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <entry name='serial'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <entry name='uuid'>9bf01cd7-4810-40fb-b3e6-3434dfc52d5c</entry>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <cpu mode='host-model' check='partial'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_disk.config'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </controller>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:d8:2d:f1'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target dev='tapc2b2aaa7-69'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:0b:53:9e'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target dev='tap033bda90-ba'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      </target>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <console type='pty'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c/console.log' append='off'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </console>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </input>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:45:10 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:45:10 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.064 238945 INFO nova.virt.libvirt.driver [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Successfully detached device tap18883f3b-6c from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the persistent domain config.
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.064 238945 DEBUG nova.virt.libvirt.driver [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] (1/8): Attempting to detach device tap18883f3b-6c with device alias net3 from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.064 238945 DEBUG nova.virt.libvirt.guest [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] detach device xml: <interface type="ethernet">
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:99:1e:e3"/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]:  <target dev="tap18883f3b-6c"/>
Jan 27 08:45:10 np0005597378 nova_compute[238941]: </interface>
Jan 27 08:45:10 np0005597378 nova_compute[238941]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.066 238945 DEBUG nova.virt.libvirt.driver [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Libvirt returned error while detaching device tap18883f3b-6c from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c. Libvirt error code: 55, error message: Requested operation is not valid: domain is not running. _detach_sync /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2667
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.066 238945 WARNING nova.virt.libvirt.driver [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Unexpected libvirt error while detaching device tap18883f3b-6c from instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c: Requested operation is not valid: domain is not running: libvirt.libvirtError: Requested operation is not valid: domain is not running
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.066 238945 DEBUG nova.virt.libvirt.vif [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.067 238945 DEBUG nova.network.os_vif_util [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.067 238945 DEBUG nova.network.os_vif_util [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.068 238945 DEBUG os_vif [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.070 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.070 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18883f3b-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.071 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.073 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.073 238945 INFO nova.virt.libvirt.driver [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Instance destroyed successfully.
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.073 238945 DEBUG nova.objects.instance [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'resources' on Instance uuid 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.095 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.097 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.098 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[07289b59-760a-4a67-99da-b376ced62c19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.099 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.099 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'env', 'PROCESS_TAG=haproxy-e25f7657-3ed6-425c-8132-1b5c417564a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e25f7657-3ed6-425c-8132-1b5c417564a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.099 238945 INFO os_vif [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1e:e3,bridge_name='br-int',has_traffic_filtering=True,id=18883f3b-6c4c-443b-81ec-0b1610e22203,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap18883f3b-6c')
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server [req-d713f43c-b323-4eac-afc7-6f9745d78d4d req-65ddaf8b-dfaa-4230-9c28-88472a2cfc57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Exception during message handling: libvirt.libvirtError: Requested operation is not valid: domain is not running
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     raise self.value
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11064, in external_instance_event
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self._process_instance_vif_deleted_event(context,
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10871, in _process_instance_vif_deleted_event
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self.driver.detach_interface(context, instance, vif)
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2943, in detach_interface
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self._detach_with_retry(
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2473, in _detach_with_retry
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self._detach_from_live_with_retry(
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2529, in _detach_from_live_with_retry
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self._detach_from_live_and_wait_for_event(
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2591, in _detach_from_live_and_wait_for_event
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self._detach_sync(
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2663, in _detach_sync
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     guest.detach_device(dev, persistent=persistent, live=live)
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py", line 466, in detach_device
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     self._domain.detachDeviceFlags(device_xml, flags=flags)
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 193, in doit
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     result = proxy_call(self._autowrap, f, *args, **kwargs)
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 151, in proxy_call
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     rv = execute(f, *args, **kwargs)
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 132, in execute
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     six.reraise(c, e, tb)
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     raise value
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 86, in tworker
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     rv = meth(*args, **kwargs)
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.9/site-packages/libvirt.py", line 1600, in detachDeviceFlags
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server     raise libvirtError('virDomainDetachDeviceFlags() failed')
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server libvirt.libvirtError: Requested operation is not valid: domain is not running
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.102 238945 ERROR oslo_messaging.rpc.server
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.161 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521510.1610332, 50d0e7b1-50a9-47e5-92b9-26570f8dba53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.162 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] VM Started (Lifecycle Event)
Jan 27 08:45:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 167 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 1.8 MiB/s wr, 171 op/s
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.268 238945 DEBUG nova.virt.libvirt.vif [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.269 238945 DEBUG nova.network.os_vif_util [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.269 238945 DEBUG nova.network.os_vif_util [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.270 238945 DEBUG os_vif [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.271 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.272 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2b2aaa7-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.274 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.278 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.280 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521510.1612558, 50d0e7b1-50a9-47e5-92b9-26570f8dba53 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.280 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.282 238945 INFO os_vif [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:2d:f1,bridge_name='br-int',has_traffic_filtering=True,id=c2b2aaa7-69a4-4868-bbe6-d21fd9974c34,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2b2aaa7-69')#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.283 238945 DEBUG nova.virt.libvirt.vif [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:43:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1204555771',display_name='tempest-AttachInterfacesTestJSON-server-1204555771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1204555771',id=21,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE996IMdu9SoCmbAd0Ys1W24SMNzbfesKf9UThL+kDjghGdIjpi/kaOtXvBisO6k+sHx2IHkrZCBTIoOpv2JmyRHIthBXthFGWuBsK3Sy89XJlyP2qYmUQ/lFvMPiEhEkw==',key_name='tempest-keypair-362748091',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:43:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-mor9bswq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=9bf01cd7-4810-40fb-b3e6-3434dfc52d5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.283 238945 DEBUG nova.network.os_vif_util [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.284 238945 DEBUG nova.network.os_vif_util [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.284 238945 DEBUG os_vif [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.286 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.286 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap033bda90-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.289 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.292 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.294 238945 INFO os_vif [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:53:9e,bridge_name='br-int',has_traffic_filtering=True,id=033bda90-ba32-42f7-aab3-c017e5594e94,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap033bda90-ba')#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.311 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.315 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.339 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:45:10 np0005597378 podman[266311]: 2026-01-27 13:45:10.492194163 +0000 UTC m=+0.072748284 container create 05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:45:10 np0005597378 podman[266311]: 2026-01-27 13:45:10.441603004 +0000 UTC m=+0.022157145 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:45:10 np0005597378 systemd[1]: Started libpod-conmon-05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039.scope.
Jan 27 08:45:10 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:45:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef48247c39fef69a09fcc7dc78dcd0a411cfad88e422c8f66b50afbf5acacfe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:10 np0005597378 podman[266311]: 2026-01-27 13:45:10.596596338 +0000 UTC m=+0.177150479 container init 05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:45:10 np0005597378 podman[266311]: 2026-01-27 13:45:10.603626667 +0000 UTC m=+0.184180788 container start 05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:45:10 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [NOTICE]   (266331) : New worker (266333) forked
Jan 27 08:45:10 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [NOTICE]   (266331) : Loading success.
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.672 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis#033[00m
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.673 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.674 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d44f8e6-e62a-438d-8c08-901ca9f91f0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.675 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 namespace which is not needed anymore#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.750 238945 INFO nova.virt.libvirt.driver [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Deleting instance files /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_del#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.751 238945 INFO nova.virt.libvirt.driver [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Deletion of /var/lib/nova/instances/9bf01cd7-4810-40fb-b3e6-3434dfc52d5c_del complete#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.804 238945 INFO nova.compute.manager [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.805 238945 DEBUG oslo.service.loopingcall [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.805 238945 DEBUG nova.compute.manager [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.805 238945 DEBUG nova.network.neutron [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:45:10 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [NOTICE]   (261194) : haproxy version is 2.8.14-c23fe91
Jan 27 08:45:10 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [NOTICE]   (261194) : path to executable is /usr/sbin/haproxy
Jan 27 08:45:10 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [WARNING]  (261194) : Exiting Master process...
Jan 27 08:45:10 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [ALERT]    (261194) : Current worker (261196) exited with code 143 (Terminated)
Jan 27 08:45:10 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[261136]: [WARNING]  (261194) : All workers exited. Exiting... (0)
Jan 27 08:45:10 np0005597378 systemd[1]: libpod-75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12.scope: Deactivated successfully.
Jan 27 08:45:10 np0005597378 podman[266359]: 2026-01-27 13:45:10.82002267 +0000 UTC m=+0.056420717 container died 75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 08:45:10 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12-userdata-shm.mount: Deactivated successfully.
Jan 27 08:45:10 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0157471d0fc7b6f22b1e4c33cb32963de33bfa1a16e2aa53868880ed4e9eb7de-merged.mount: Deactivated successfully.
Jan 27 08:45:10 np0005597378 podman[266359]: 2026-01-27 13:45:10.884531583 +0000 UTC m=+0.120929630 container cleanup 75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:45:10 np0005597378 systemd[1]: libpod-conmon-75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12.scope: Deactivated successfully.
Jan 27 08:45:10 np0005597378 podman[266391]: 2026-01-27 13:45:10.952496068 +0000 UTC m=+0.046992462 container remove 75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.957 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f815c6-a4da-42ee-9cc9-625e2bcebd3b]: (4, ('Tue Jan 27 01:45:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 (75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12)\n75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12\nTue Jan 27 01:45:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 (75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12)\n75d0dd357e295f9be8f358eaa61f6a0cbceb8dd4d0550b0270adee0db74c6d12\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.959 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e22cc97-2512-452b-afa6-657b7293cda6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.960 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.962 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:10 np0005597378 kernel: tapee180809-30: left promiscuous mode
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.965 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.967 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[126e3c6c-6520-4e4c-a746-720b5ba39ee5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:10 np0005597378 nova_compute[238941]: 2026-01-27 13:45:10.982 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.993 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3435d1b4-277e-49b7-9c2a-bc8146374ff2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:10.995 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9ccee955-f323-4f63-87ff-f5a6aedae894]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:11.012 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f74ff65-87da-41cf-a91f-3487fc5b8fc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407167, 'reachable_time': 36768, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266406, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:11.018 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:45:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:11.018 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[70d58655-dba2-4b7d-9772-3caf398e63a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:11.019 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 033bda90-ba32-42f7-aab3-c017e5594e94 in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 unbound from our chassis#033[00m
Jan 27 08:45:11 np0005597378 systemd[1]: run-netns-ovnmeta\x2dee180809\x2d3e36\x2d46bd\x2dba3a\x2d3bacc6f9ce96.mount: Deactivated successfully.
Jan 27 08:45:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:11.020 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:45:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:11.021 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4da7ba-e272-4322-983a-4cd937a8a055]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.179 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.180 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.180 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.181 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.181 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Processing event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.181 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.181 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.181 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.182 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.182 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] No waiting events found dispatching network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.182 238945 WARNING nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received unexpected event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.182 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-unplugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.183 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.183 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.183 238945 DEBUG oslo_concurrency.lockutils [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.183 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-unplugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.183 238945 DEBUG nova.compute.manager [req-39c3a167-7203-4707-b4b1-508efdde8819 req-6faa3e80-64b8-4f68-bfe3-44463c2a5ebc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-unplugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.184 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.188 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521511.1878457, 50d0e7b1-50a9-47e5-92b9-26570f8dba53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.188 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.189 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.195 238945 INFO nova.virt.libvirt.driver [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Instance spawned successfully.#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.195 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.215 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.220 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.224 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.224 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.224 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.225 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.225 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.225 238945 DEBUG nova.virt.libvirt.driver [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.256 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.284 238945 INFO nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Took 8.21 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.285 238945 DEBUG nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.350 238945 INFO nova.compute.manager [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Took 9.67 seconds to build instance.#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.368 238945 DEBUG oslo_concurrency.lockutils [None req-a44c050d-d8bc-4270-83aa-27c1b03e45bf 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.760 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.799 238945 DEBUG nova.compute.manager [req-9e4cf021-eee8-4159-9a34-e7961a6ca8bb req-66acea54-dc73-4310-a5e6-8d077e39f342 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-deleted-033bda90-ba32-42f7-aab3-c017e5594e94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.800 238945 INFO nova.compute.manager [req-9e4cf021-eee8-4159-9a34-e7961a6ca8bb req-66acea54-dc73-4310-a5e6-8d077e39f342 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Neutron deleted interface 033bda90-ba32-42f7-aab3-c017e5594e94; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.800 238945 DEBUG nova.network.neutron [req-9e4cf021-eee8-4159-9a34-e7961a6ca8bb req-66acea54-dc73-4310-a5e6-8d077e39f342 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:45:11 np0005597378 nova_compute[238941]: 2026-01-27 13:45:11.832 238945 DEBUG nova.compute.manager [req-9e4cf021-eee8-4159-9a34-e7961a6ca8bb req-66acea54-dc73-4310-a5e6-8d077e39f342 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Detach interface failed, port_id=033bda90-ba32-42f7-aab3-c017e5594e94, reason: Instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 08:45:12 np0005597378 nova_compute[238941]: 2026-01-27 13:45:12.050 238945 DEBUG nova.network.neutron [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [{"id": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "address": "fa:16:3e:d8:2d:f1", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2b2aaa7-69", "ovs_interfaceid": "c2b2aaa7-69a4-4868-bbe6-d21fd9974c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "033bda90-ba32-42f7-aab3-c017e5594e94", "address": "fa:16:3e:0b:53:9e", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap033bda90-ba", "ovs_interfaceid": "033bda90-ba32-42f7-aab3-c017e5594e94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18883f3b-6c4c-443b-81ec-0b1610e22203", "address": "fa:16:3e:99:1e:e3", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18883f3b-6c", "ovs_interfaceid": "18883f3b-6c4c-443b-81ec-0b1610e22203", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:12 np0005597378 nova_compute[238941]: 2026-01-27 13:45:12.073 238945 DEBUG oslo_concurrency.lockutils [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:45:12 np0005597378 nova_compute[238941]: 2026-01-27 13:45:12.098 238945 DEBUG oslo_concurrency.lockutils [None req-4c79482e-9e71-439f-9d16-262df4c5d82f 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "interface-9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-694e1e12-dc4a-4a42-ba67-46b29efc58c1" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 167 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 1.8 MiB/s wr, 160 op/s
Jan 27 08:45:12 np0005597378 nova_compute[238941]: 2026-01-27 13:45:12.497 238945 DEBUG nova.network.neutron [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:12 np0005597378 nova_compute[238941]: 2026-01-27 13:45:12.528 238945 INFO nova.compute.manager [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Took 1.72 seconds to deallocate network for instance.#033[00m
Jan 27 08:45:12 np0005597378 nova_compute[238941]: 2026-01-27 13:45:12.578 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:12 np0005597378 nova_compute[238941]: 2026-01-27 13:45:12.579 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:12 np0005597378 nova_compute[238941]: 2026-01-27 13:45:12.675 238945 DEBUG oslo_concurrency.processutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:12Z|00220|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 08:45:12 np0005597378 nova_compute[238941]: 2026-01-27 13:45:12.855 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:13 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:13Z|00221|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:45:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547440214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.230 238945 DEBUG oslo_concurrency.processutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.236 238945 DEBUG nova.compute.provider_tree [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.422 238945 DEBUG nova.compute.manager [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.422 238945 DEBUG oslo_concurrency.lockutils [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.422 238945 DEBUG oslo_concurrency.lockutils [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.422 238945 DEBUG oslo_concurrency.lockutils [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.423 238945 DEBUG nova.compute.manager [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.423 238945 WARNING nova.compute.manager [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.423 238945 DEBUG nova.compute.manager [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.423 238945 DEBUG oslo_concurrency.lockutils [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.424 238945 DEBUG oslo_concurrency.lockutils [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.424 238945 DEBUG oslo_concurrency.lockutils [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.424 238945 DEBUG nova.compute.manager [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] No waiting events found dispatching network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.424 238945 WARNING nova.compute.manager [req-fd1fea13-e024-4eb7-82e8-e9d085571beb req-9739a914-ede9-4994-aba7-51e16dc57117 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received unexpected event network-vif-plugged-033bda90-ba32-42f7-aab3-c017e5594e94 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.439 238945 DEBUG nova.scheduler.client.report [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.536 238945 INFO nova.compute.manager [None req-097b042e-9da5-44cc-9c97-27fb66c1c06f 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Pausing#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.537 238945 DEBUG nova.objects.instance [None req-097b042e-9da5-44cc-9c97-27fb66c1c06f 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'flavor' on Instance uuid 50d0e7b1-50a9-47e5-92b9-26570f8dba53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.692 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:13 np0005597378 podman[266430]: 2026-01-27 13:45:13.728210803 +0000 UTC m=+0.055351238 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.775 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521513.7748952, 50d0e7b1-50a9-47e5-92b9-26570f8dba53 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.775 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.777 238945 DEBUG nova.compute.manager [None req-097b042e-9da5-44cc-9c97-27fb66c1c06f 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.821 238945 INFO nova.scheduler.client.report [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Deleted allocations for instance 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.955 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:13 np0005597378 nova_compute[238941]: 2026-01-27 13:45:13.961 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:45:14 np0005597378 nova_compute[238941]: 2026-01-27 13:45:14.078 238945 DEBUG nova.compute.manager [req-a9bc2d0e-7463-45bd-b43f-1807d302a3c1 req-e8dbff05-f94e-41c8-9ae7-4354fcb2d714 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Received event network-vif-deleted-c2b2aaa7-69a4-4868-bbe6-d21fd9974c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:14 np0005597378 nova_compute[238941]: 2026-01-27 13:45:14.136 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Jan 27 08:45:14 np0005597378 nova_compute[238941]: 2026-01-27 13:45:14.182 238945 DEBUG oslo_concurrency.lockutils [None req-cee3933e-bece-4df6-a93c-2fade3c969df 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "9bf01cd7-4810-40fb-b3e6-3434dfc52d5c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 122 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 213 op/s
Jan 27 08:45:14 np0005597378 nova_compute[238941]: 2026-01-27 13:45:14.397 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521499.3967519, 4c52012f-9a4f-4599-adb0-2c658a054f91 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:45:14 np0005597378 nova_compute[238941]: 2026-01-27 13:45:14.398 238945 INFO nova.compute.manager [-] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:45:14 np0005597378 nova_compute[238941]: 2026-01-27 13:45:14.422 238945 DEBUG nova.compute.manager [None req-d0e31cba-0362-48fc-8dcd-62eda077926c - - - - - -] [instance: 4c52012f-9a4f-4599-adb0-2c658a054f91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:15 np0005597378 nova_compute[238941]: 2026-01-27 13:45:15.288 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 88 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 206 op/s
Jan 27 08:45:16 np0005597378 nova_compute[238941]: 2026-01-27 13:45:16.216 238945 DEBUG nova.compute.manager [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:16 np0005597378 nova_compute[238941]: 2026-01-27 13:45:16.267 238945 INFO nova.compute.manager [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] instance snapshotting#033[00m
Jan 27 08:45:16 np0005597378 nova_compute[238941]: 2026-01-27 13:45:16.267 238945 WARNING nova.compute.manager [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] trying to snapshot a non-running instance: (state: 3 expected: 1)#033[00m
Jan 27 08:45:16 np0005597378 nova_compute[238941]: 2026-01-27 13:45:16.500 238945 INFO nova.virt.libvirt.driver [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Beginning live snapshot process#033[00m
Jan 27 08:45:16 np0005597378 nova_compute[238941]: 2026-01-27 13:45:16.637 238945 DEBUG nova.virt.libvirt.imagebackend [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 08:45:16 np0005597378 nova_compute[238941]: 2026-01-27 13:45:16.762 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:45:17
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.log']
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:45:17 np0005597378 nova_compute[238941]: 2026-01-27 13:45:17.087 238945 DEBUG nova.storage.rbd_utils [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(b9c9887ce00943628459af18a7490ed2) on rbd image(50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:45:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 27 08:45:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 27 08:45:17 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 27 08:45:17 np0005597378 nova_compute[238941]: 2026-01-27 13:45:17.313 238945 DEBUG nova.storage.rbd_utils [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] cloning vms/50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk@b9c9887ce00943628459af18a7490ed2 to images/693870c4-816e-41e5-ab7d-15dae8a60f23 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:45:17 np0005597378 nova_compute[238941]: 2026-01-27 13:45:17.428 238945 DEBUG nova.storage.rbd_utils [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] flattening images/693870c4-816e-41e5-ab7d-15dae8a60f23 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:45:17 np0005597378 nova_compute[238941]: 2026-01-27 13:45:17.716 238945 DEBUG nova.storage.rbd_utils [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] removing snapshot(b9c9887ce00943628459af18a7490ed2) on rbd image(50d0e7b1-50a9-47e5-92b9-26570f8dba53_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:45:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:45:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 109 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 167 op/s
Jan 27 08:45:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 27 08:45:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 27 08:45:18 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 27 08:45:18 np0005597378 nova_compute[238941]: 2026-01-27 13:45:18.325 238945 DEBUG nova.storage.rbd_utils [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(snap) on rbd image(693870c4-816e-41e5-ab7d-15dae8a60f23) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:45:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 27 08:45:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 27 08:45:19 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 27 08:45:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 134 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.5 MiB/s wr, 198 op/s
Jan 27 08:45:20 np0005597378 nova_compute[238941]: 2026-01-27 13:45:20.291 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:20 np0005597378 nova_compute[238941]: 2026-01-27 13:45:20.427 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521505.4257743, bee7c432-6457-4160-917c-a807eca3df0e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:45:20 np0005597378 nova_compute[238941]: 2026-01-27 13:45:20.427 238945 INFO nova.compute.manager [-] [instance: bee7c432-6457-4160-917c-a807eca3df0e] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:45:20 np0005597378 nova_compute[238941]: 2026-01-27 13:45:20.450 238945 DEBUG nova.compute.manager [None req-59cb37f1-ef58-42ce-af75-95d182aad8af - - - - - -] [instance: bee7c432-6457-4160-917c-a807eca3df0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:21 np0005597378 nova_compute[238941]: 2026-01-27 13:45:21.292 238945 INFO nova.virt.libvirt.driver [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Snapshot image upload complete#033[00m
Jan 27 08:45:21 np0005597378 nova_compute[238941]: 2026-01-27 13:45:21.292 238945 INFO nova.compute.manager [None req-0938bac1-a8f7-43b9-bf71-b58c220dd37b 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Took 5.02 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 27 08:45:21 np0005597378 nova_compute[238941]: 2026-01-27 13:45:21.764 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:45:22 np0005597378 nova_compute[238941]: 2026-01-27 13:45:22.110 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:22 np0005597378 nova_compute[238941]: 2026-01-27 13:45:22.111 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 134 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 108 op/s
Jan 27 08:45:22 np0005597378 nova_compute[238941]: 2026-01-27 13:45:22.244 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:45:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.1 MiB/s wr, 97 op/s
Jan 27 08:45:24 np0005597378 nova_compute[238941]: 2026-01-27 13:45:24.314 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:24 np0005597378 nova_compute[238941]: 2026-01-27 13:45:24.314 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:24 np0005597378 nova_compute[238941]: 2026-01-27 13:45:24.322 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:45:24 np0005597378 nova_compute[238941]: 2026-01-27 13:45:24.323 238945 INFO nova.compute.claims [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:45:25 np0005597378 nova_compute[238941]: 2026-01-27 13:45:25.064 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521510.0626156, 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:45:25 np0005597378 nova_compute[238941]: 2026-01-27 13:45:25.065 238945 INFO nova.compute.manager [-] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:45:25 np0005597378 nova_compute[238941]: 2026-01-27 13:45:25.112 238945 DEBUG nova.compute.manager [None req-23185f4a-74b7-44f9-bf06-f6b1ddeb0b7a - - - - - -] [instance: 9bf01cd7-4810-40fb-b3e6-3434dfc52d5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:25 np0005597378 nova_compute[238941]: 2026-01-27 13:45:25.295 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 745 KiB/s wr, 78 op/s
Jan 27 08:45:26 np0005597378 nova_compute[238941]: 2026-01-27 13:45:26.676 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:26 np0005597378 nova_compute[238941]: 2026-01-27 13:45:26.766 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:45:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 27 08:45:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 27 08:45:26 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 27 08:45:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:45:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715203260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:45:27 np0005597378 nova_compute[238941]: 2026-01-27 13:45:27.237 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:27 np0005597378 nova_compute[238941]: 2026-01-27 13:45:27.244 238945 DEBUG nova.compute.provider_tree [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:45:27 np0005597378 nova_compute[238941]: 2026-01-27 13:45:27.379 238945 DEBUG nova.scheduler.client.report [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00035168241692247617 of space, bias 1.0, pg target 0.10550472507674286 quantized to 32 (current 32)
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010135273779091397 of space, bias 1.0, pg target 0.3040582133727419 quantized to 32 (current 32)
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.972561933057044e-07 of space, bias 4.0, pg target 0.0010767074319668454 quantized to 16 (current 16)
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:45:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:45:27 np0005597378 nova_compute[238941]: 2026-01-27 13:45:27.548 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:27 np0005597378 nova_compute[238941]: 2026-01-27 13:45:27.549 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:45:27 np0005597378 nova_compute[238941]: 2026-01-27 13:45:27.758 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:45:27 np0005597378 nova_compute[238941]: 2026-01-27 13:45:27.758 238945 DEBUG nova.network.neutron [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:45:27 np0005597378 nova_compute[238941]: 2026-01-27 13:45:27.858 238945 INFO nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:45:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 27 08:45:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 27 08:45:27 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 27 08:45:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 134 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 255 B/s wr, 17 op/s
Jan 27 08:45:28 np0005597378 nova_compute[238941]: 2026-01-27 13:45:28.233 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.305 238945 DEBUG nova.policy [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923eb6b430064b86b77c4d8681ab271f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.505 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.506 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.507 238945 INFO nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Creating image(s)#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.526 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.548 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.568 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.571 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.631 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.632 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.633 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.633 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.654 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:29 np0005597378 nova_compute[238941]: 2026-01-27 13:45:29.657 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8ebfacea-4592-4e16-8e7b-327affd2445b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.125 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8ebfacea-4592-4e16-8e7b-327affd2445b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.179 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] resizing rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:45:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 109 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.6 KiB/s wr, 47 op/s
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.264 238945 DEBUG nova.objects.instance [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'migration_context' on Instance uuid 8ebfacea-4592-4e16-8e7b-327affd2445b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.297 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.493 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.493 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Ensure instance console log exists: /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.494 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.494 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.495 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.950 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.951 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.951 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.951 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.952 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.953 238945 INFO nova.compute.manager [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Terminating instance#033[00m
Jan 27 08:45:30 np0005597378 nova_compute[238941]: 2026-01-27 13:45:30.954 238945 DEBUG nova.compute.manager [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:45:31 np0005597378 kernel: tape4f2b52b-f2 (unregistering): left promiscuous mode
Jan 27 08:45:31 np0005597378 NetworkManager[48904]: <info>  [1769521531.0044] device (tape4f2b52b-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:45:31 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:31Z|00222|binding|INFO|Releasing lport e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 from this chassis (sb_readonly=0)
Jan 27 08:45:31 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:31Z|00223|binding|INFO|Setting lport e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 down in Southbound
Jan 27 08:45:31 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:31Z|00224|binding|INFO|Removing iface tape4f2b52b-f2 ovn-installed in OVS
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.016 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.035 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:31 np0005597378 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Jan 27 08:45:31 np0005597378 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Consumed 3.051s CPU time.
Jan 27 08:45:31 np0005597378 systemd-machined[207425]: Machine qemu-30-instance-0000001a terminated.
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.092 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:cf:10 10.100.0.8'], port_security=['fa:16:3e:1e:cf:10 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '50d0e7b1-50a9-47e5-92b9-26570f8dba53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.093 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 unbound from our chassis#033[00m
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.095 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e25f7657-3ed6-425c-8132-1b5c417564a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.096 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a94fc5f6-0a9e-40db-bfac-a95d06958f84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.096 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace which is not needed anymore#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.195 238945 INFO nova.virt.libvirt.driver [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Instance destroyed successfully.#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.196 238945 DEBUG nova.objects.instance [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'resources' on Instance uuid 50d0e7b1-50a9-47e5-92b9-26570f8dba53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:31 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [NOTICE]   (266331) : haproxy version is 2.8.14-c23fe91
Jan 27 08:45:31 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [NOTICE]   (266331) : path to executable is /usr/sbin/haproxy
Jan 27 08:45:31 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [WARNING]  (266331) : Exiting Master process...
Jan 27 08:45:31 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [ALERT]    (266331) : Current worker (266333) exited with code 143 (Terminated)
Jan 27 08:45:31 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[266327]: [WARNING]  (266331) : All workers exited. Exiting... (0)
Jan 27 08:45:31 np0005597378 systemd[1]: libpod-05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039.scope: Deactivated successfully.
Jan 27 08:45:31 np0005597378 podman[266804]: 2026-01-27 13:45:31.230800248 +0000 UTC m=+0.050915028 container died 05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.230 238945 DEBUG nova.virt.libvirt.vif [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:44:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-502081405',display_name='tempest-ImagesTestJSON-server-502081405',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-502081405',id=26,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:45:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-4avwdocn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:45:21Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=50d0e7b1-50a9-47e5-92b9-26570f8dba53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.231 238945 DEBUG nova.network.os_vif_util [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "address": "fa:16:3e:1e:cf:10", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4f2b52b-f2", "ovs_interfaceid": "e4f2b52b-f218-4d18-9b87-fe3b94bf58b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.232 238945 DEBUG nova.network.os_vif_util [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.232 238945 DEBUG os_vif [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.235 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.235 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4f2b52b-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.237 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.240 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.242 238945 INFO os_vif [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cf:10,bridge_name='br-int',has_traffic_filtering=True,id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4f2b52b-f2')#033[00m
Jan 27 08:45:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-bef48247c39fef69a09fcc7dc78dcd0a411cfad88e422c8f66b50afbf5acacfe-merged.mount: Deactivated successfully.
Jan 27 08:45:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039-userdata-shm.mount: Deactivated successfully.
Jan 27 08:45:31 np0005597378 podman[266804]: 2026-01-27 13:45:31.412455748 +0000 UTC m=+0.232570528 container cleanup 05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:45:31 np0005597378 systemd[1]: libpod-conmon-05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039.scope: Deactivated successfully.
Jan 27 08:45:31 np0005597378 podman[266862]: 2026-01-27 13:45:31.49777866 +0000 UTC m=+0.062764407 container remove 05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.506 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[758816de-beb2-4e42-aba2-8cd2d8af6569]: (4, ('Tue Jan 27 01:45:31 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039)\n05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039\nTue Jan 27 01:45:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039)\n05b4eea342f8bd3559c1cd992c578a779189d4abb3a687a4ad5e56b7e01a4039\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.508 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7f867fb9-97e3-4c42-8fcc-7fd59a71fdd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.510 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.512 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:31 np0005597378 kernel: tape25f7657-30: left promiscuous mode
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.516 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[425a3a5e-85b0-4694-8be1-a1bf54b0752a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.533 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[051024bb-3134-4012-8712-dd1a260762f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.534 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7919b0c9-f2b2-44aa-8e73-63b5e8f33fd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.553 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2e2e8f-f4f6-4579-b800-db9c2e394e2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414941, 'reachable_time': 42305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266878, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:31 np0005597378 systemd[1]: run-netns-ovnmeta\x2de25f7657\x2d3ed6\x2d425c\x2d8132\x2d1b5c417564a5.mount: Deactivated successfully.
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.556 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:45:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:31.557 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[42c4954f-6f3e-4d93-982d-9a499787c01d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.664 238945 INFO nova.virt.libvirt.driver [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Deleting instance files /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53_del#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.665 238945 INFO nova.virt.libvirt.driver [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Deletion of /var/lib/nova/instances/50d0e7b1-50a9-47e5-92b9-26570f8dba53_del complete#033[00m
Jan 27 08:45:31 np0005597378 nova_compute[238941]: 2026-01-27 13:45:31.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:45:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 109 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.5 KiB/s wr, 44 op/s
Jan 27 08:45:32 np0005597378 nova_compute[238941]: 2026-01-27 13:45:32.424 238945 DEBUG nova.network.neutron [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Successfully created port: 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:45:32 np0005597378 nova_compute[238941]: 2026-01-27 13:45:32.603 238945 INFO nova.compute.manager [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Took 1.65 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:45:32 np0005597378 nova_compute[238941]: 2026-01-27 13:45:32.604 238945 DEBUG oslo.service.loopingcall [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:45:32 np0005597378 nova_compute[238941]: 2026-01-27 13:45:32.606 238945 DEBUG nova.compute.manager [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:45:32 np0005597378 nova_compute[238941]: 2026-01-27 13:45:32.607 238945 DEBUG nova.network.neutron [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:45:32 np0005597378 nova_compute[238941]: 2026-01-27 13:45:32.872 238945 DEBUG nova.compute.manager [req-cf9cec28-fa2c-4e1b-a19a-afe0bad1eccd req-076700bd-3671-4c81-8c38-102d5c572ce1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-vif-unplugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:32 np0005597378 nova_compute[238941]: 2026-01-27 13:45:32.873 238945 DEBUG oslo_concurrency.lockutils [req-cf9cec28-fa2c-4e1b-a19a-afe0bad1eccd req-076700bd-3671-4c81-8c38-102d5c572ce1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:32 np0005597378 nova_compute[238941]: 2026-01-27 13:45:32.873 238945 DEBUG oslo_concurrency.lockutils [req-cf9cec28-fa2c-4e1b-a19a-afe0bad1eccd req-076700bd-3671-4c81-8c38-102d5c572ce1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:32 np0005597378 nova_compute[238941]: 2026-01-27 13:45:32.874 238945 DEBUG oslo_concurrency.lockutils [req-cf9cec28-fa2c-4e1b-a19a-afe0bad1eccd req-076700bd-3671-4c81-8c38-102d5c572ce1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:32 np0005597378 nova_compute[238941]: 2026-01-27 13:45:32.874 238945 DEBUG nova.compute.manager [req-cf9cec28-fa2c-4e1b-a19a-afe0bad1eccd req-076700bd-3671-4c81-8c38-102d5c572ce1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] No waiting events found dispatching network-vif-unplugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:32 np0005597378 nova_compute[238941]: 2026-01-27 13:45:32.875 238945 DEBUG nova.compute.manager [req-cf9cec28-fa2c-4e1b-a19a-afe0bad1eccd req-076700bd-3671-4c81-8c38-102d5c572ce1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-vif-unplugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:45:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 95 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.3 MiB/s wr, 68 op/s
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.136 238945 DEBUG nova.network.neutron [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.246 238945 DEBUG nova.compute.manager [req-fcf314bf-5041-44cb-926c-0e513fadb636 req-ce3dd56f-f76c-47c8-ad58-6de3f69ade11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-vif-deleted-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.247 238945 INFO nova.compute.manager [req-fcf314bf-5041-44cb-926c-0e513fadb636 req-ce3dd56f-f76c-47c8-ad58-6de3f69ade11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Neutron deleted interface e4f2b52b-f218-4d18-9b87-fe3b94bf58b3; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.247 238945 DEBUG nova.network.neutron [req-fcf314bf-5041-44cb-926c-0e513fadb636 req-ce3dd56f-f76c-47c8-ad58-6de3f69ade11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.670 238945 INFO nova.compute.manager [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Took 3.06 seconds to deallocate network for instance.#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.680 238945 DEBUG nova.compute.manager [req-fcf314bf-5041-44cb-926c-0e513fadb636 req-ce3dd56f-f76c-47c8-ad58-6de3f69ade11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Detach interface failed, port_id=e4f2b52b-f218-4d18-9b87-fe3b94bf58b3, reason: Instance 50d0e7b1-50a9-47e5-92b9-26570f8dba53 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.754 238945 DEBUG nova.compute.manager [req-f35c6f44-1e49-4f4b-a5ff-63974f191743 req-4f2e9318-67ca-4dc4-b80f-1a6b1b744ad9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.755 238945 DEBUG oslo_concurrency.lockutils [req-f35c6f44-1e49-4f4b-a5ff-63974f191743 req-4f2e9318-67ca-4dc4-b80f-1a6b1b744ad9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.755 238945 DEBUG oslo_concurrency.lockutils [req-f35c6f44-1e49-4f4b-a5ff-63974f191743 req-4f2e9318-67ca-4dc4-b80f-1a6b1b744ad9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.755 238945 DEBUG oslo_concurrency.lockutils [req-f35c6f44-1e49-4f4b-a5ff-63974f191743 req-4f2e9318-67ca-4dc4-b80f-1a6b1b744ad9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.755 238945 DEBUG nova.compute.manager [req-f35c6f44-1e49-4f4b-a5ff-63974f191743 req-4f2e9318-67ca-4dc4-b80f-1a6b1b744ad9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] No waiting events found dispatching network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.756 238945 WARNING nova.compute.manager [req-f35c6f44-1e49-4f4b-a5ff-63974f191743 req-4f2e9318-67ca-4dc4-b80f-1a6b1b744ad9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Received unexpected event network-vif-plugged-e4f2b52b-f218-4d18-9b87-fe3b94bf58b3 for instance with vm_state paused and task_state deleting.#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.759 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "91de80b2-eec2-40c0-b39a-062c18d4e96b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:35 np0005597378 nova_compute[238941]: 2026-01-27 13:45:35.760 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "91de80b2-eec2-40c0-b39a-062c18d4e96b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 2.3 MiB/s wr, 96 op/s
Jan 27 08:45:36 np0005597378 nova_compute[238941]: 2026-01-27 13:45:36.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:36 np0005597378 nova_compute[238941]: 2026-01-27 13:45:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:45:36 np0005597378 nova_compute[238941]: 2026-01-27 13:45:36.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:45:36 np0005597378 nova_compute[238941]: 2026-01-27 13:45:36.593 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:36 np0005597378 nova_compute[238941]: 2026-01-27 13:45:36.593 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:36 np0005597378 nova_compute[238941]: 2026-01-27 13:45:36.622 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:45:36 np0005597378 nova_compute[238941]: 2026-01-27 13:45:36.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:45:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 27 08:45:36 np0005597378 nova_compute[238941]: 2026-01-27 13:45:36.869 238945 DEBUG oslo_concurrency.processutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 27 08:45:36 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 27 08:45:37 np0005597378 nova_compute[238941]: 2026-01-27 13:45:37.136 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:45:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2679091876' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:45:37 np0005597378 nova_compute[238941]: 2026-01-27 13:45:37.412 238945 DEBUG oslo_concurrency.processutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:37 np0005597378 nova_compute[238941]: 2026-01-27 13:45:37.418 238945 DEBUG nova.compute.provider_tree [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:45:37 np0005597378 nova_compute[238941]: 2026-01-27 13:45:37.571 238945 DEBUG nova.scheduler.client.report [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:45:37 np0005597378 nova_compute[238941]: 2026-01-27 13:45:37.747 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:37 np0005597378 nova_compute[238941]: 2026-01-27 13:45:37.749 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:37 np0005597378 nova_compute[238941]: 2026-01-27 13:45:37.760 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:45:37 np0005597378 nova_compute[238941]: 2026-01-27 13:45:37.761 238945 INFO nova.compute.claims [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:45:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 27 08:45:38 np0005597378 nova_compute[238941]: 2026-01-27 13:45:38.207 238945 INFO nova.scheduler.client.report [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Deleted allocations for instance 50d0e7b1-50a9-47e5-92b9-26570f8dba53#033[00m
Jan 27 08:45:38 np0005597378 nova_compute[238941]: 2026-01-27 13:45:38.272 238945 DEBUG nova.network.neutron [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Successfully updated port: 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:45:38 np0005597378 nova_compute[238941]: 2026-01-27 13:45:38.555 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "6f8945ff-dbc9-4429-ad46-089877d591b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:38 np0005597378 nova_compute[238941]: 2026-01-27 13:45:38.555 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:38 np0005597378 nova_compute[238941]: 2026-01-27 13:45:38.560 238945 DEBUG nova.compute.manager [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-changed-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:38 np0005597378 nova_compute[238941]: 2026-01-27 13:45:38.560 238945 DEBUG nova.compute.manager [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing instance network info cache due to event network-changed-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:45:38 np0005597378 nova_compute[238941]: 2026-01-27 13:45:38.561 238945 DEBUG oslo_concurrency.lockutils [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:45:38 np0005597378 nova_compute[238941]: 2026-01-27 13:45:38.561 238945 DEBUG oslo_concurrency.lockutils [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:45:38 np0005597378 nova_compute[238941]: 2026-01-27 13:45:38.561 238945 DEBUG nova.network.neutron [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Refreshing network info cache for port 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:45:38 np0005597378 nova_compute[238941]: 2026-01-27 13:45:38.607 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:45:38 np0005597378 nova_compute[238941]: 2026-01-27 13:45:38.751 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:38 np0005597378 nova_compute[238941]: 2026-01-27 13:45:38.798 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:45:38 np0005597378 nova_compute[238941]: 2026-01-27 13:45:38.805 238945 DEBUG nova.network.neutron [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:45:39 np0005597378 nova_compute[238941]: 2026-01-27 13:45:39.073 238945 DEBUG oslo_concurrency.lockutils [None req-ece59e30-7648-4045-9f4a-68b7216eb7e1 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50d0e7b1-50a9-47e5-92b9-26570f8dba53" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:45:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330884477' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:45:39 np0005597378 nova_compute[238941]: 2026-01-27 13:45:39.299 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:39 np0005597378 nova_compute[238941]: 2026-01-27 13:45:39.306 238945 DEBUG nova.compute.provider_tree [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:45:39 np0005597378 nova_compute[238941]: 2026-01-27 13:45:39.662 238945 DEBUG nova.scheduler.client.report [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:45:39 np0005597378 nova_compute[238941]: 2026-01-27 13:45:39.690 238945 DEBUG nova.network.neutron [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:39 np0005597378 nova_compute[238941]: 2026-01-27 13:45:39.841 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:39 np0005597378 nova_compute[238941]: 2026-01-27 13:45:39.842 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:45:39 np0005597378 nova_compute[238941]: 2026-01-27 13:45:39.918 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:39 np0005597378 nova_compute[238941]: 2026-01-27 13:45:39.918 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:39 np0005597378 nova_compute[238941]: 2026-01-27 13:45:39.926 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:45:39 np0005597378 nova_compute[238941]: 2026-01-27 13:45:39.927 238945 INFO nova.compute.claims [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:45:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 08:45:40 np0005597378 nova_compute[238941]: 2026-01-27 13:45:40.630 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:45:40 np0005597378 podman[266923]: 2026-01-27 13:45:40.770199925 +0000 UTC m=+0.110726715 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:45:40 np0005597378 nova_compute[238941]: 2026-01-27 13:45:40.932 238945 DEBUG oslo_concurrency.lockutils [req-fa8ca497-c6f5-40d9-a34e-6729d59bccec req-43fad813-e480-4739-96cb-8d48d413907a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:45:40 np0005597378 nova_compute[238941]: 2026-01-27 13:45:40.933 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquired lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:45:40 np0005597378 nova_compute[238941]: 2026-01-27 13:45:40.933 238945 DEBUG nova.network.neutron [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:45:41 np0005597378 nova_compute[238941]: 2026-01-27 13:45:41.022 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:45:41 np0005597378 nova_compute[238941]: 2026-01-27 13:45:41.022 238945 DEBUG nova.network.neutron [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:45:41 np0005597378 nova_compute[238941]: 2026-01-27 13:45:41.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:41 np0005597378 nova_compute[238941]: 2026-01-27 13:45:41.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:45:41 np0005597378 nova_compute[238941]: 2026-01-27 13:45:41.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:45:41 np0005597378 nova_compute[238941]: 2026-01-27 13:45:41.445 238945 INFO nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:45:41 np0005597378 nova_compute[238941]: 2026-01-27 13:45:41.600 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 08:45:41 np0005597378 nova_compute[238941]: 2026-01-27 13:45:41.601 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:45:41 np0005597378 nova_compute[238941]: 2026-01-27 13:45:41.601 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:45:41 np0005597378 nova_compute[238941]: 2026-01-27 13:45:41.601 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:45:41 np0005597378 nova_compute[238941]: 2026-01-27 13:45:41.685 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:45:41 np0005597378 nova_compute[238941]: 2026-01-27 13:45:41.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:45:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.412 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.414 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.414 238945 INFO nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Creating image(s)#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.444 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.481 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.511 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.516 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.553 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.594 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.595 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.596 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.596 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.621 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.625 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.652 238945 DEBUG nova.network.neutron [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.658 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.856 238945 DEBUG nova.network.neutron [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.857 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.897 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:42 np0005597378 nova_compute[238941]: 2026-01-27 13:45:42.970 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] resizing rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.057 238945 DEBUG nova.objects.instance [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lazy-loading 'migration_context' on Instance uuid 91de80b2-eec2-40c0-b39a-062c18d4e96b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.098 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.099 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Ensure instance console log exists: /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.100 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.101 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.102 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.105 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.114 238945 WARNING nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.119 238945 DEBUG nova.virt.libvirt.host [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.119 238945 DEBUG nova.virt.libvirt.host [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.127 238945 DEBUG nova.virt.libvirt.host [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.127 238945 DEBUG nova.virt.libvirt.host [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.128 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.128 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.128 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.129 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.129 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.129 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.129 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.129 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.130 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.130 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.130 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.130 238945 DEBUG nova.virt.hardware [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.134 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:45:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4247335838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.256 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.263 238945 DEBUG nova.compute.provider_tree [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.297 238945 DEBUG nova.scheduler.client.report [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.415 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.417 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.419 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.419 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.420 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.420 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:45:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3985574621' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.728 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.751 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.755 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.780 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.781 238945 DEBUG nova.network.neutron [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:45:43 np0005597378 nova_compute[238941]: 2026-01-27 13:45:43.935 238945 INFO nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:45:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:45:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065917666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.026 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.036 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:44 np0005597378 podman[267220]: 2026-01-27 13:45:44.151376224 +0000 UTC m=+0.076835055 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 08:45:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 114 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.3 MiB/s wr, 39 op/s
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.218 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.219 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4339MB free_disk=59.96732744947076GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.219 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.219 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:45:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347977249' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.313 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.314 238945 DEBUG nova.objects.instance [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lazy-loading 'pci_devices' on Instance uuid 91de80b2-eec2-40c0-b39a-062c18d4e96b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.381 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.382 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.383 238945 INFO nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Creating image(s)#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.402 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.422 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.439 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.442 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.472 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  <uuid>91de80b2-eec2-40c0-b39a-062c18d4e96b</uuid>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  <name>instance-0000001c</name>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-721945710</nova:name>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:45:43</nova:creationTime>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:        <nova:user uuid="d968021c10bb479c89c1fd2c3bc6af54">tempest-ServersAdminNegativeTestJSON-694339820-project-member</nova:user>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:        <nova:project uuid="bd448544348a4b7ba8bc785fc241445e">tempest-ServersAdminNegativeTestJSON-694339820</nova:project>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <entry name="serial">91de80b2-eec2-40c0-b39a-062c18d4e96b</entry>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <entry name="uuid">91de80b2-eec2-40c0-b39a-062c18d4e96b</entry>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/91de80b2-eec2-40c0-b39a-062c18d4e96b_disk">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/91de80b2-eec2-40c0-b39a-062c18d4e96b_disk.config">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/console.log" append="off"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:45:44 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:45:44 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:45:44 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:45:44 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.494 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8ebfacea-4592-4e16-8e7b-327affd2445b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.495 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 91de80b2-eec2-40c0-b39a-062c18d4e96b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.495 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 6f8945ff-dbc9-4429-ad46-089877d591b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.495 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.496 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.503 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.504 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.504 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.505 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.523 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.526 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6f8945ff-dbc9-4429-ad46-089877d591b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.597 238945 DEBUG nova.network.neutron [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Updating instance_info_cache with network_info: [{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.602 238945 DEBUG nova.policy [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7dedc0f04f3d455682ea65fc37a49f06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b041f051267f4a3c8518d3042922678a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.619 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.723 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Releasing lock "refresh_cache-8ebfacea-4592-4e16-8e7b-327affd2445b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.724 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Instance network_info: |[{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.730 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Start _get_guest_xml network_info=[{"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.742 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.742 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.743 238945 INFO nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Using config drive#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.761 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.767 238945 WARNING nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.773 238945 DEBUG nova.virt.libvirt.host [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.774 238945 DEBUG nova.virt.libvirt.host [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.779 238945 DEBUG nova.virt.libvirt.host [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.779 238945 DEBUG nova.virt.libvirt.host [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.780 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.780 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.781 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.781 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.781 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.782 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.782 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.782 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.783 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.783 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.783 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.783 238945 DEBUG nova.virt.hardware [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.786 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:44 np0005597378 nova_compute[238941]: 2026-01-27 13:45:44.958 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6f8945ff-dbc9-4429-ad46-089877d591b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.010 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] resizing rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.083 238945 DEBUG nova.objects.instance [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'migration_context' on Instance uuid 6f8945ff-dbc9-4429-ad46-089877d591b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.137 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.138 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Ensure instance console log exists: /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.138 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.139 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.139 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:45:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1205917887' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.215 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.221 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:45:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:45:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3609279579' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.341 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.347 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.365 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.369 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.393 238945 INFO nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Creating config drive at /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/disk.config#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.398 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb3xcjf_s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.525 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb3xcjf_s" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.547 238945 DEBUG nova.storage.rbd_utils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.550 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/disk.config 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.642 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.643 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.645 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.646 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.681 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.701 238945 DEBUG oslo_concurrency.processutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/disk.config 91de80b2-eec2-40c0-b39a-062c18d4e96b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.702 238945 INFO nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Deleting local config drive /var/lib/nova/instances/91de80b2-eec2-40c0-b39a-062c18d4e96b/disk.config because it was imported into RBD.#033[00m
Jan 27 08:45:45 np0005597378 systemd-machined[207425]: New machine qemu-31-instance-0000001c.
Jan 27 08:45:45 np0005597378 systemd[1]: Started Virtual Machine qemu-31-instance-0000001c.
Jan 27 08:45:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:45:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984374056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.933 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.935 238945 DEBUG nova.virt.libvirt.vif [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:45:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1434120558',display_name='tempest-tempest.common.compute-instance-1434120558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1434120558',id=27,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-zsra50c5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=8ebfacea-4592-4e16-8e7b-327affd2445b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.936 238945 DEBUG nova.network.os_vif_util [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.937 238945 DEBUG nova.network.os_vif_util [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:eb:c4,bridge_name='br-int',has_traffic_filtering=True,id=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0794f0d4-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:45 np0005597378 nova_compute[238941]: 2026-01-27 13:45:45.938 238945 DEBUG nova.objects.instance [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lazy-loading 'pci_devices' on Instance uuid 8ebfacea-4592-4e16-8e7b-327affd2445b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.047 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  <uuid>8ebfacea-4592-4e16-8e7b-327affd2445b</uuid>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  <name>instance-0000001b</name>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <nova:name>tempest-tempest.common.compute-instance-1434120558</nova:name>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:45:44</nova:creationTime>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:        <nova:user uuid="923eb6b430064b86b77c4d8681ab271f">tempest-AttachInterfacesTestJSON-455944080-project-member</nova:user>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:        <nova:project uuid="2eadedddcfdb49d9ae9a3a4d9a059dac">tempest-AttachInterfacesTestJSON-455944080</nova:project>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:        <nova:port uuid="0794f0d4-bbd0-4b04-b778-f21c9e4ba99c">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <entry name="serial">8ebfacea-4592-4e16-8e7b-327affd2445b</entry>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <entry name="uuid">8ebfacea-4592-4e16-8e7b-327affd2445b</entry>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/8ebfacea-4592-4e16-8e7b-327affd2445b_disk">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:0b:eb:c4"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <target dev="tap0794f0d4-bb"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/console.log" append="off"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:45:46 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:45:46 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:45:46 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:45:46 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.048 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Preparing to wait for external event network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.048 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.049 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.049 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.050 238945 DEBUG nova.virt.libvirt.vif [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:45:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1434120558',display_name='tempest-tempest.common.compute-instance-1434120558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1434120558',id=27,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzeV7QGuIB+e9AU+FOZa3Vy69C3m1zIwGnJFGsjbe6bMBe0WYfeevZ6ogX0PlEPJKj26hqZEeUd7SWLPAj5upSk8p4diQTcyl6/FB1Z5qvfsbRsEjdfrUcGOhbSiLo4JA==',key_name='tempest-keypair-434541493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2eadedddcfdb49d9ae9a3a4d9a059dac',ramdisk_id='',reservation_id='r-zsra50c5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-455944080',owner_user_name='tempest-AttachInterfacesTestJSON-455944080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='923eb6b430064b86b77c4d8681ab271f',uuid=8ebfacea-4592-4e16-8e7b-327affd2445b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.050 238945 DEBUG nova.network.os_vif_util [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converting VIF {"id": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "address": "fa:16:3e:0b:eb:c4", "network": {"id": "ee180809-3e36-46bd-ba3a-3bacc6f9ce96", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1830221398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2eadedddcfdb49d9ae9a3a4d9a059dac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0794f0d4-bb", "ovs_interfaceid": "0794f0d4-bbd0-4b04-b778-f21c9e4ba99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.051 238945 DEBUG nova.network.os_vif_util [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:eb:c4,bridge_name='br-int',has_traffic_filtering=True,id=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0794f0d4-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.051 238945 DEBUG os_vif [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:eb:c4,bridge_name='br-int',has_traffic_filtering=True,id=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0794f0d4-bb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.052 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.052 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.053 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.056 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.057 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0794f0d4-bb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.057 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0794f0d4-bb, col_values=(('external_ids', {'iface-id': '0794f0d4-bbd0-4b04-b778-f21c9e4ba99c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:eb:c4', 'vm-uuid': '8ebfacea-4592-4e16-8e7b-327affd2445b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:46 np0005597378 NetworkManager[48904]: <info>  [1769521546.0606] manager: (tap0794f0d4-bb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.059 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.063 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.068 238945 INFO os_vif [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:eb:c4,bridge_name='br-int',has_traffic_filtering=True,id=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c,network=Network(ee180809-3e36-46bd-ba3a-3bacc6f9ce96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0794f0d4-bb')#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.179 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521546.1786633, 91de80b2-eec2-40c0-b39a-062c18d4e96b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.179 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.181 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.181 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.184 238945 INFO nova.virt.libvirt.driver [-] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Instance spawned successfully.#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.184 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.192 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521531.191866, 50d0e7b1-50a9-47e5-92b9-26570f8dba53 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.193 238945 INFO nova.compute.manager [-] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:45:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 136 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.2 MiB/s wr, 35 op/s
Jan 27 08:45:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:46.293 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:46.294 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:46.294 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.388 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.388 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.389 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] No VIF found with MAC fa:16:3e:0b:eb:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.389 238945 INFO nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Using config drive#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.413 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.419 238945 DEBUG nova.compute.manager [None req-8088c7a0-cc82-49c8-8659-f000773db867 - - - - - -] [instance: 50d0e7b1-50a9-47e5-92b9-26570f8dba53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.422 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.422 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.423 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.423 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.424 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.424 238945 DEBUG nova.virt.libvirt.driver [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.427 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.430 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.773 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.874 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.874 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521546.1810043, 91de80b2-eec2-40c0-b39a-062c18d4e96b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.874 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] VM Started (Lifecycle Event)#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.985 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:46 np0005597378 nova_compute[238941]: 2026-01-27 13:45:46.989 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.023 238945 INFO nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Took 4.61 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.024 238945 DEBUG nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.116 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.225 238945 INFO nova.compute.manager [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 91de80b2-eec2-40c0-b39a-062c18d4e96b] Took 10.16 seconds to build instance.#033[00m
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.352 238945 DEBUG oslo_concurrency.lockutils [None req-c4750ac8-010d-4c1e-a7df-c09628dc1006 d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "91de80b2-eec2-40c0-b39a-062c18d4e96b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.643 238945 INFO nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Creating config drive at /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/disk.config#033[00m
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.648 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_r8gz17 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.781 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_r8gz17" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:45:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.805 238945 DEBUG nova.storage.rbd_utils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] rbd image 8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.808 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/disk.config 8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:45:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:45:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:45:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:45:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:47.898 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:45:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:47.899 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.899 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.926 238945 DEBUG oslo_concurrency.processutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/disk.config 8ebfacea-4592-4e16-8e7b-327affd2445b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.928 238945 INFO nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Deleting local config drive /var/lib/nova/instances/8ebfacea-4592-4e16-8e7b-327affd2445b/disk.config because it was imported into RBD.#033[00m
Jan 27 08:45:47 np0005597378 kernel: tap0794f0d4-bb: entered promiscuous mode
Jan 27 08:45:47 np0005597378 systemd-udevd[267606]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:45:47 np0005597378 NetworkManager[48904]: <info>  [1769521547.9848] manager: (tap0794f0d4-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Jan 27 08:45:47 np0005597378 nova_compute[238941]: 2026-01-27 13:45:47.985 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:47 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:47Z|00225|binding|INFO|Claiming lport 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c for this chassis.
Jan 27 08:45:47 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:47Z|00226|binding|INFO|0794f0d4-bbd0-4b04-b778-f21c9e4ba99c: Claiming fa:16:3e:0b:eb:c4 10.100.0.9
Jan 27 08:45:47 np0005597378 NetworkManager[48904]: <info>  [1769521547.9961] device (tap0794f0d4-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:45:47 np0005597378 NetworkManager[48904]: <info>  [1769521547.9971] device (tap0794f0d4-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:45:48 np0005597378 systemd-machined[207425]: New machine qemu-32-instance-0000001b.
Jan 27 08:45:48 np0005597378 systemd[1]: Started Virtual Machine qemu-32-instance-0000001b.
Jan 27 08:45:48 np0005597378 nova_compute[238941]: 2026-01-27 13:45:48.062 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:48Z|00227|binding|INFO|Setting lport 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c ovn-installed in OVS
Jan 27 08:45:48 np0005597378 nova_compute[238941]: 2026-01-27 13:45:48.070 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:48 np0005597378 nova_compute[238941]: 2026-01-27 13:45:48.142 238945 DEBUG nova.network.neutron [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Successfully created port: c1674f3d-f01d-4e6e-a4ee-503dbf007c2a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:45:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 153 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 2.5 MiB/s wr, 59 op/s
Jan 27 08:45:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:48Z|00228|binding|INFO|Setting lport 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c up in Southbound
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.331 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:eb:c4 10.100.0.9'], port_security=['fa:16:3e:0b:eb:c4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8ebfacea-4592-4e16-8e7b-327affd2445b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2eadedddcfdb49d9ae9a3a4d9a059dac', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c739b7db-85c5-4e87-9257-4bf4700eb47c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7092e1af-89aa-41d4-90db-d7509fd1426c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0794f0d4-bbd0-4b04-b778-f21c9e4ba99c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.332 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0794f0d4-bbd0-4b04-b778-f21c9e4ba99c in datapath ee180809-3e36-46bd-ba3a-3bacc6f9ce96 bound to our chassis#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.333 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee180809-3e36-46bd-ba3a-3bacc6f9ce96#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.345 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[622d3c77-14c9-4df9-8ce2-daf26203c3b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.346 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee180809-31 in ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.347 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee180809-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.347 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[68c0dba4-92fc-431d-a57e-28fe912d7905]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 nova_compute[238941]: 2026-01-27 13:45:48.348 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521548.347491, 8ebfacea-4592-4e16-8e7b-327affd2445b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:45:48 np0005597378 nova_compute[238941]: 2026-01-27 13:45:48.348 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] VM Started (Lifecycle Event)#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.350 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[39774674-1e92-45e6-8603-bbfa3e7a0d26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.366 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[66c72c4c-0496-42d1-b7c6-f0d3f549aca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.390 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9c272a95-f373-4b87-9672-2eddeed32345]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.439 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d2785e6e-cd88-48b7-a2cb-aa894aa7b303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 NetworkManager[48904]: <info>  [1769521548.4523] manager: (tapee180809-30): new Veth device (/org/freedesktop/NetworkManager/Devices/106)
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.452 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[55a011d4-889d-4284-b117-532b268c611a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 systemd-udevd[267678]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.486 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e34d8b35-0823-403b-b0b6-d0d8ec1d99be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.489 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff937a8-6ae7-4de1-9e68-dc1f64e87395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 NetworkManager[48904]: <info>  [1769521548.5107] device (tapee180809-30): carrier: link connected
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.517 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[022ee9f2-b3a8-4cac-9d9a-3d4e6a4303c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.533 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c15ef9cc-8227-43e3-8ac4-73dd1911d142]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418813, 'reachable_time': 28139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267756, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.552 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04065be7-fbc0-45a4-b84f-7703114e8074]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:c077'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418813, 'tstamp': 418813}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267757, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.573 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[799dd0b7-fcab-4913-930d-05146387f80e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee180809-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:c0:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418813, 'reachable_time': 28139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267758, 'error': None, 'target': 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.603 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af1cee50-34f4-4302-b790-dabc8ded0937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 nova_compute[238941]: 2026-01-27 13:45:48.663 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:48 np0005597378 nova_compute[238941]: 2026-01-27 13:45:48.668 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521548.3476298, 8ebfacea-4592-4e16-8e7b-327affd2445b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:45:48 np0005597378 nova_compute[238941]: 2026-01-27 13:45:48.669 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.688 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b88465d8-1cab-4572-b9d1-c33245601b6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.690 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee180809-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.690 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.691 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee180809-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:48 np0005597378 NetworkManager[48904]: <info>  [1769521548.6932] manager: (tapee180809-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Jan 27 08:45:48 np0005597378 kernel: tapee180809-30: entered promiscuous mode
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.695 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee180809-30, col_values=(('external_ids', {'iface-id': 'eda259aa-d2da-4d84-b8e3-a762146ea3e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:48Z|00229|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 08:45:48 np0005597378 nova_compute[238941]: 2026-01-27 13:45:48.702 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:48 np0005597378 nova_compute[238941]: 2026-01-27 13:45:48.716 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.718 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.719 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[62170640-0f6b-4360-be53-7bbe45f05875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.720 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.pid.haproxy
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID ee180809-3e36-46bd-ba3a-3bacc6f9ce96
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:45:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:48.722 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'env', 'PROCESS_TAG=haproxy-ee180809-3e36-46bd-ba3a-3bacc6f9ce96', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee180809-3e36-46bd-ba3a-3bacc6f9ce96.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:45:48 np0005597378 nova_compute[238941]: 2026-01-27 13:45:48.739 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:48 np0005597378 nova_compute[238941]: 2026-01-27 13:45:48.742 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:45:48 np0005597378 nova_compute[238941]: 2026-01-27 13:45:48.804 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:45:49 np0005597378 podman[267790]: 2026-01-27 13:45:49.101259164 +0000 UTC m=+0.067356761 container create ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:45:49 np0005597378 systemd[1]: Started libpod-conmon-ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676.scope.
Jan 27 08:45:49 np0005597378 podman[267790]: 2026-01-27 13:45:49.057112168 +0000 UTC m=+0.023209775 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:45:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:45:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/816beedac88cb0ab241caac1637350dd432216644533e948e01c29a82d8dfa92/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:45:49 np0005597378 podman[267790]: 2026-01-27 13:45:49.203811589 +0000 UTC m=+0.169909216 container init ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 08:45:49 np0005597378 podman[267790]: 2026-01-27 13:45:49.210008425 +0000 UTC m=+0.176106022 container start ba02c8530f82f869149d34ec5bbfa45ae75c560c2f830cf4242887af480f7676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:45:49 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[267803]: [NOTICE]   (267807) : New worker (267809) forked
Jan 27 08:45:49 np0005597378 neutron-haproxy-ovnmeta-ee180809-3e36-46bd-ba3a-3bacc6f9ce96[267803]: [NOTICE]   (267807) : Loading success.
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.681 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.681 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.842 238945 DEBUG nova.compute.manager [req-fa676e43-eb54-4c7b-8cc9-27c01c176ed3 req-124a0509-c7ef-40bd-a520-145ff03e3676 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.842 238945 DEBUG oslo_concurrency.lockutils [req-fa676e43-eb54-4c7b-8cc9-27c01c176ed3 req-124a0509-c7ef-40bd-a520-145ff03e3676 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.843 238945 DEBUG oslo_concurrency.lockutils [req-fa676e43-eb54-4c7b-8cc9-27c01c176ed3 req-124a0509-c7ef-40bd-a520-145ff03e3676 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.843 238945 DEBUG oslo_concurrency.lockutils [req-fa676e43-eb54-4c7b-8cc9-27c01c176ed3 req-124a0509-c7ef-40bd-a520-145ff03e3676 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.843 238945 DEBUG nova.compute.manager [req-fa676e43-eb54-4c7b-8cc9-27c01c176ed3 req-124a0509-c7ef-40bd-a520-145ff03e3676 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Processing event network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.844 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.847 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521549.8473837, 8ebfacea-4592-4e16-8e7b-327affd2445b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.848 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.850 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.853 238945 INFO nova.virt.libvirt.driver [-] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Instance spawned successfully.#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.853 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.857 238945 DEBUG nova.network.neutron [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Successfully updated port: c1674f3d-f01d-4e6e-a4ee-503dbf007c2a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.878 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.882 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.890 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "refresh_cache-6f8945ff-dbc9-4429-ad46-089877d591b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.891 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquired lock "refresh_cache-6f8945ff-dbc9-4429-ad46-089877d591b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.891 238945 DEBUG nova.network.neutron [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.908 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.909 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.910 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.910 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.911 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.911 238945 DEBUG nova.virt.libvirt.driver [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.917 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.986 238945 DEBUG nova.compute.manager [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Received event network-changed-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.986 238945 DEBUG nova.compute.manager [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Refreshing instance network info cache due to event network-changed-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.987 238945 DEBUG oslo_concurrency.lockutils [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6f8945ff-dbc9-4429-ad46-089877d591b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.989 238945 INFO nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Took 20.48 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:45:49 np0005597378 nova_compute[238941]: 2026-01-27 13:45:49.989 238945 DEBUG nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:45:50 np0005597378 nova_compute[238941]: 2026-01-27 13:45:50.071 238945 DEBUG nova.network.neutron [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:45:50 np0005597378 nova_compute[238941]: 2026-01-27 13:45:50.077 238945 INFO nova.compute.manager [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Took 26.47 seconds to build instance.#033[00m
Jan 27 08:45:50 np0005597378 nova_compute[238941]: 2026-01-27 13:45:50.096 238945 DEBUG oslo_concurrency.lockutils [None req-10d85d82-7c3e-4e6a-8a73-42fd4b8a9be7 923eb6b430064b86b77c4d8681ab271f 2eadedddcfdb49d9ae9a3a4d9a059dac - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 27.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 181 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Jan 27 08:45:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:50.901 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.508 238945 DEBUG nova.network.neutron [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Updating instance_info_cache with network_info: [{"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.539 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Releasing lock "refresh_cache-6f8945ff-dbc9-4429-ad46-089877d591b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.540 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Instance network_info: |[{"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.540 238945 DEBUG oslo_concurrency.lockutils [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6f8945ff-dbc9-4429-ad46-089877d591b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.541 238945 DEBUG nova.network.neutron [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Refreshing network info cache for port c1674f3d-f01d-4e6e-a4ee-503dbf007c2a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.543 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Start _get_guest_xml network_info=[{"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.549 238945 WARNING nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.557 238945 DEBUG nova.virt.libvirt.host [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.558 238945 DEBUG nova.virt.libvirt.host [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.562 238945 DEBUG nova.virt.libvirt.host [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.562 238945 DEBUG nova.virt.libvirt.host [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.562 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.563 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.563 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.564 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.564 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.564 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.565 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.565 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.565 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.566 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.566 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.566 238945 DEBUG nova.virt.hardware [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.569 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.777 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.988 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "73738c70-ec35-426c-a81b-766bc5431f78" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:51 np0005597378 nova_compute[238941]: 2026-01-27 13:45:51.988 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "73738c70-ec35-426c-a81b-766bc5431f78" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.004 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.091 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.092 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.098 238945 DEBUG nova.virt.hardware [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.098 238945 INFO nova.compute.claims [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:45:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:45:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2537042850' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.162 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.185 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.189 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 181 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.321 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.360 238945 DEBUG nova.compute.manager [req-ac8a4fab-2d76-488f-9d85-29b2fe8cff30 req-deb101e1-88a0-4580-a6bc-6a8f8f7cbf4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received event network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.361 238945 DEBUG oslo_concurrency.lockutils [req-ac8a4fab-2d76-488f-9d85-29b2fe8cff30 req-deb101e1-88a0-4580-a6bc-6a8f8f7cbf4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.361 238945 DEBUG oslo_concurrency.lockutils [req-ac8a4fab-2d76-488f-9d85-29b2fe8cff30 req-deb101e1-88a0-4580-a6bc-6a8f8f7cbf4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.362 238945 DEBUG oslo_concurrency.lockutils [req-ac8a4fab-2d76-488f-9d85-29b2fe8cff30 req-deb101e1-88a0-4580-a6bc-6a8f8f7cbf4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8ebfacea-4592-4e16-8e7b-327affd2445b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.362 238945 DEBUG nova.compute.manager [req-ac8a4fab-2d76-488f-9d85-29b2fe8cff30 req-deb101e1-88a0-4580-a6bc-6a8f8f7cbf4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] No waiting events found dispatching network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.362 238945 WARNING nova.compute.manager [req-ac8a4fab-2d76-488f-9d85-29b2fe8cff30 req-deb101e1-88a0-4580-a6bc-6a8f8f7cbf4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8ebfacea-4592-4e16-8e7b-327affd2445b] Received unexpected event network-vif-plugged-0794f0d4-bbd0-4b04-b778-f21c9e4ba99c for instance with vm_state active and task_state None.#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.725 238945 DEBUG nova.network.neutron [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Updated VIF entry in instance network info cache for port c1674f3d-f01d-4e6e-a4ee-503dbf007c2a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.726 238945 DEBUG nova.network.neutron [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Updating instance_info_cache with network_info: [{"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.747 238945 DEBUG oslo_concurrency.lockutils [req-cf2a22ae-44e4-4507-810f-d1217c4edcd1 req-8d1d0e5e-205c-4583-bfeb-45dd1ef9d892 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6f8945ff-dbc9-4429-ad46-089877d591b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:45:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:45:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359739116' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.776 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.778 238945 DEBUG nova.virt.libvirt.vif [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:45:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1899114191',display_name='tempest-ImagesTestJSON-server-1899114191',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1899114191',id=29,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ziedveyo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:44Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=6f8945ff-dbc9-4429-ad46-089877d591b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.778 238945 DEBUG nova.network.os_vif_util [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.779 238945 DEBUG nova.network.os_vif_util [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:fe:c8,bridge_name='br-int',has_traffic_filtering=True,id=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1674f3d-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.781 238945 DEBUG nova.objects.instance [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f8945ff-dbc9-4429-ad46-089877d591b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.793 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  <uuid>6f8945ff-dbc9-4429-ad46-089877d591b2</uuid>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  <name>instance-0000001d</name>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <nova:name>tempest-ImagesTestJSON-server-1899114191</nova:name>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:45:51</nova:creationTime>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:        <nova:user uuid="7dedc0f04f3d455682ea65fc37a49f06">tempest-ImagesTestJSON-1064968599-project-member</nova:user>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:        <nova:project uuid="b041f051267f4a3c8518d3042922678a">tempest-ImagesTestJSON-1064968599</nova:project>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:        <nova:port uuid="c1674f3d-f01d-4e6e-a4ee-503dbf007c2a">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <entry name="serial">6f8945ff-dbc9-4429-ad46-089877d591b2</entry>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <entry name="uuid">6f8945ff-dbc9-4429-ad46-089877d591b2</entry>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6f8945ff-dbc9-4429-ad46-089877d591b2_disk">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6f8945ff-dbc9-4429-ad46-089877d591b2_disk.config">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:09:fe:c8"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <target dev="tapc1674f3d-f0"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/console.log" append="off"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:45:52 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:45:52 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:45:52 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:45:52 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.800 238945 DEBUG nova.compute.manager [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Preparing to wait for external event network-vif-plugged-c1674f3d-f01d-4e6e-a4ee-503dbf007c2a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.800 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.801 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.801 238945 DEBUG oslo_concurrency.lockutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "6f8945ff-dbc9-4429-ad46-089877d591b2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.802 238945 DEBUG nova.virt.libvirt.vif [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:45:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1899114191',display_name='tempest-ImagesTestJSON-server-1899114191',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1899114191',id=29,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ziedveyo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:45:44Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=6f8945ff-dbc9-4429-ad46-089877d591b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.802 238945 DEBUG nova.network.os_vif_util [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "address": "fa:16:3e:09:fe:c8", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1674f3d-f0", "ovs_interfaceid": "c1674f3d-f01d-4e6e-a4ee-503dbf007c2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.803 238945 DEBUG nova.network.os_vif_util [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:fe:c8,bridge_name='br-int',has_traffic_filtering=True,id=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1674f3d-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.803 238945 DEBUG os_vif [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:fe:c8,bridge_name='br-int',has_traffic_filtering=True,id=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1674f3d-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.805 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.805 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.808 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1674f3d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.808 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1674f3d-f0, col_values=(('external_ids', {'iface-id': 'c1674f3d-f01d-4e6e-a4ee-503dbf007c2a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:fe:c8', 'vm-uuid': '6f8945ff-dbc9-4429-ad46-089877d591b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:45:52 np0005597378 NetworkManager[48904]: <info>  [1769521552.8109] manager: (tapc1674f3d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.824 238945 INFO os_vif [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:fe:c8,bridge_name='br-int',has_traffic_filtering=True,id=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1674f3d-f0')#033[00m
Jan 27 08:45:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:45:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3973093857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.882 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.888 238945 DEBUG nova.compute.provider_tree [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.895 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.895 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.896 238945 DEBUG nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No VIF found with MAC fa:16:3e:09:fe:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.896 238945 INFO nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Using config drive#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.919 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.926 238945 DEBUG nova.scheduler.client.report [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.950 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:52 np0005597378 nova_compute[238941]: 2026-01-27 13:45:52.950 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.007 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.007 238945 DEBUG nova.network.neutron [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.026 238945 INFO nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.043 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.127 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.128 238945 DEBUG nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.129 238945 INFO nova.virt.libvirt.driver [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Creating image(s)#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.146 238945 DEBUG nova.storage.rbd_utils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 73738c70-ec35-426c-a81b-766bc5431f78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:53 np0005597378 NetworkManager[48904]: <info>  [1769521553.1711] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Jan 27 08:45:53 np0005597378 NetworkManager[48904]: <info>  [1769521553.1720] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.210 238945 DEBUG nova.storage.rbd_utils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 73738c70-ec35-426c-a81b-766bc5431f78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.245 238945 DEBUG nova.storage.rbd_utils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 73738c70-ec35-426c-a81b-766bc5431f78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.248 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:53Z|00230|binding|INFO|Releasing lport eda259aa-d2da-4d84-b8e3-a762146ea3e9 from this chassis (sb_readonly=0)
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.289 238945 INFO nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Creating config drive at /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/disk.config#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.296 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmponr4q2sk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.328 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.329 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.330 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.330 238945 DEBUG oslo_concurrency.lockutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.350 238945 DEBUG nova.storage.rbd_utils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] rbd image 73738c70-ec35-426c-a81b-766bc5431f78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.354 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 73738c70-ec35-426c-a81b-766bc5431f78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.427 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmponr4q2sk" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.448 238945 DEBUG nova.storage.rbd_utils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 6f8945ff-dbc9-4429-ad46-089877d591b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.452 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/disk.config 6f8945ff-dbc9-4429-ad46-089877d591b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.502 238945 DEBUG nova.network.neutron [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.504 238945 DEBUG nova.compute.manager [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] [instance: 73738c70-ec35-426c-a81b-766bc5431f78] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.822 238945 DEBUG oslo_concurrency.processutils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 73738c70-ec35-426c-a81b-766bc5431f78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.895 238945 DEBUG oslo_concurrency.processutils [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/disk.config 6f8945ff-dbc9-4429-ad46-089877d591b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.896 238945 INFO nova.virt.libvirt.driver [None req-1e1085dd-7e2e-429d-87c4-6e978897dab9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 6f8945ff-dbc9-4429-ad46-089877d591b2] Deleting local config drive /var/lib/nova/instances/6f8945ff-dbc9-4429-ad46-089877d591b2/disk.config because it was imported into RBD.#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.905 238945 DEBUG nova.storage.rbd_utils [None req-28718646-a6ec-4be4-bff1-8f0a87b7613e d968021c10bb479c89c1fd2c3bc6af54 bd448544348a4b7ba8bc785fc241445e - - default default] resizing rbd image 73738c70-ec35-426c-a81b-766bc5431f78_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:45:53 np0005597378 NetworkManager[48904]: <info>  [1769521553.9491] manager: (tapc1674f3d-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/111)
Jan 27 08:45:53 np0005597378 kernel: tapc1674f3d-f0: entered promiscuous mode
Jan 27 08:45:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:53Z|00231|binding|INFO|Claiming lport c1674f3d-f01d-4e6e-a4ee-503dbf007c2a for this chassis.
Jan 27 08:45:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:53Z|00232|binding|INFO|c1674f3d-f01d-4e6e-a4ee-503dbf007c2a: Claiming fa:16:3e:09:fe:c8 10.100.0.5
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.963 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:fe:c8 10.100.0.5'], port_security=['fa:16:3e:09:fe:c8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6f8945ff-dbc9-4429-ad46-089877d591b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c1674f3d-f01d-4e6e-a4ee-503dbf007c2a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:45:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.964 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c1674f3d-f01d-4e6e-a4ee-503dbf007c2a in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 bound to our chassis#033[00m
Jan 27 08:45:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.965 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e25f7657-3ed6-425c-8132-1b5c417564a5#033[00m
Jan 27 08:45:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:53Z|00233|binding|INFO|Setting lport c1674f3d-f01d-4e6e-a4ee-503dbf007c2a ovn-installed in OVS
Jan 27 08:45:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:45:53Z|00234|binding|INFO|Setting lport c1674f3d-f01d-4e6e-a4ee-503dbf007c2a up in Southbound
Jan 27 08:45:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.983 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ad49b47f-0ec7-45b1-9264-9290fe317913]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.984 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape25f7657-31 in ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:45:53 np0005597378 nova_compute[238941]: 2026-01-27 13:45:53.984 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:45:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.986 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape25f7657-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:45:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.986 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa24fc90-9496-4761-bc91-7d5f7cfdde57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.987 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3baba3ca-c0bb-444c-b47f-de1e8dc24270]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:53 np0005597378 systemd-udevd[268127]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:45:53 np0005597378 systemd-machined[207425]: New machine qemu-33-instance-0000001d.
Jan 27 08:45:53 np0005597378 systemd[1]: Started Virtual Machine qemu-33-instance-0000001d.
Jan 27 08:45:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:53.999 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[1143e4af-6ef4-4bb1-94c4-c6eb5e24bc23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:45:54 np0005597378 NetworkManager[48904]: <info>  [1769521554.0084] device (tapc1674f3d-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:45:54 np0005597378 NetworkManager[48904]: <info>  [1769521554.0089] device (tapc1674f3d-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:45:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:45:54.024 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f366f301-b001-4435-8ac8-2921c3c2e5d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:47:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:47:18 np0005597378 rsyslogd[1006]: imjournal: 4952 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 27 08:47:18 np0005597378 nova_compute[238941]: 2026-01-27 13:47:18.185 238945 DEBUG nova.compute.manager [req-3fff8ae0-ecc1-4f8e-9661-cb00395c518b req-2c793938-bdc6-4199-9aac-03b771869df7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f] Received event network-vif-deleted-1ba5e57d-38e5-4379-a674-73c47c86a471 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:47:18 np0005597378 nova_compute[238941]: 2026-01-27 13:47:18.206 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 505 KiB/s wr, 126 op/s
Jan 27 08:47:18 np0005597378 nova_compute[238941]: 2026-01-27 13:47:18.304 238945 DEBUG oslo_concurrency.lockutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:47:18 np0005597378 nova_compute[238941]: 2026-01-27 13:47:18.305 238945 DEBUG oslo_concurrency.lockutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:47:18 np0005597378 nova_compute[238941]: 2026-01-27 13:47:18.367 238945 DEBUG oslo_concurrency.lockutils [req-d7d0b2f3-af36-4d15-87f7-e1a56b2d10da req-9bfa0340-bb01-4f0c-984e-8ab65ec22f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:47:18 np0005597378 nova_compute[238941]: 2026-01-27 13:47:18.369 238945 DEBUG oslo_concurrency.processutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:47:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:47:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3724931507' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:47:18 np0005597378 nova_compute[238941]: 2026-01-27 13:47:18.965 238945 DEBUG oslo_concurrency.processutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:47:18 np0005597378 nova_compute[238941]: 2026-01-27 13:47:18.973 238945 DEBUG nova.compute.provider_tree [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:47:19 np0005597378 nova_compute[238941]: 2026-01-27 13:47:19.014 238945 DEBUG nova.scheduler.client.report [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:47:19 np0005597378 nova_compute[238941]: 2026-01-27 13:47:19.160 238945 DEBUG oslo_concurrency.lockutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:47:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Jan 27 08:47:19 np0005597378 nova_compute[238941]: 2026-01-27 13:47:19.305 238945 INFO nova.scheduler.client.report [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Deleted allocations for instance 902ccd66-8386-4fe6-8c2b-4eb72bfdc97f
Jan 27 08:47:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Jan 27 08:47:19 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Jan 27 08:47:19 np0005597378 nova_compute[238941]: 2026-01-27 13:47:19.924 238945 DEBUG oslo_concurrency.lockutils [None req-abf0561e-9e62-49fb-ab3f-eb74f25b091d 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "902ccd66-8386-4fe6-8c2b-4eb72bfdc97f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:47:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:47:20Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:72:d8 10.100.0.8
Jan 27 08:47:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:47:20Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:72:d8 10.100.0.8
Jan 27 08:47:20 np0005597378 nova_compute[238941]: 2026-01-27 13:47:20.126 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:47:20 np0005597378 nova_compute[238941]: 2026-01-27 13:47:20.126 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:47:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 106 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 27 08:47:20 np0005597378 nova_compute[238941]: 2026-01-27 13:47:20.249 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 08:47:20 np0005597378 nova_compute[238941]: 2026-01-27 13:47:20.552 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:47:20 np0005597378 nova_compute[238941]: 2026-01-27 13:47:20.553 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:47:20 np0005597378 nova_compute[238941]: 2026-01-27 13:47:20.558 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 08:47:20 np0005597378 nova_compute[238941]: 2026-01-27 13:47:20.558 238945 INFO nova.compute.claims [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Claim successful on node compute-0.ctlplane.example.com
Jan 27 08:47:21 np0005597378 nova_compute[238941]: 2026-01-27 13:47:21.256 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:47:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:47:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263028752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:47:21 np0005597378 nova_compute[238941]: 2026-01-27 13:47:21.788 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:47:21 np0005597378 nova_compute[238941]: 2026-01-27 13:47:21.794 238945 DEBUG nova.compute.provider_tree [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:47:21 np0005597378 nova_compute[238941]: 2026-01-27 13:47:21.838 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:47:21 np0005597378 nova_compute[238941]: 2026-01-27 13:47:21.861 238945 DEBUG nova.scheduler.client.report [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:47:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:47:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Jan 27 08:47:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Jan 27 08:47:21 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Jan 27 08:47:22 np0005597378 nova_compute[238941]: 2026-01-27 13:47:22.142 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:47:22 np0005597378 nova_compute[238941]: 2026-01-27 13:47:22.143 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 08:47:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 106 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 281 KiB/s rd, 2.3 MiB/s wr, 111 op/s
Jan 27 08:47:22 np0005597378 nova_compute[238941]: 2026-01-27 13:47:22.422 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 08:47:22 np0005597378 nova_compute[238941]: 2026-01-27 13:47:22.423 238945 DEBUG nova.network.neutron [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 08:47:22 np0005597378 nova_compute[238941]: 2026-01-27 13:47:22.555 238945 INFO nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 08:47:22 np0005597378 nova_compute[238941]: 2026-01-27 13:47:22.936 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.208 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.343 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.344 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.345 238945 INFO nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Creating image(s)
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.362 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.383 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.404 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.407 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.473 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.475 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.475 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.476 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.499 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.503 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 50c6d534-e937-4148-851e-4ec51e067875_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.784 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 50c6d534-e937-4148-851e-4ec51e067875_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.838 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] resizing rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.911 238945 DEBUG nova.objects.instance [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'migration_context' on Instance uuid 50c6d534-e937-4148-851e-4ec51e067875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.955 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.955 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Ensure instance console log exists: /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.956 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.956 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:47:23 np0005597378 nova_compute[238941]: 2026-01-27 13:47:23.957 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:47:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 121 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 538 KiB/s rd, 3.2 MiB/s wr, 161 op/s
Jan 27 08:47:25 np0005597378 nova_compute[238941]: 2026-01-27 13:47:25.351 238945 DEBUG nova.policy [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7dedc0f04f3d455682ea65fc37a49f06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b041f051267f4a3c8518d3042922678a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 08:47:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 128 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 529 KiB/s rd, 3.3 MiB/s wr, 161 op/s
Jan 27 08:47:26 np0005597378 nova_compute[238941]: 2026-01-27 13:47:26.839 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:47:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:47:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Jan 27 08:47:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Jan 27 08:47:26 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007710540994051852 of space, bias 1.0, pg target 0.23131622982155556 quantized to 32 (current 32)
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006680384568422266 of space, bias 1.0, pg target 0.20041153705266798 quantized to 32 (current 32)
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0250120367273857e-06 of space, bias 4.0, pg target 0.0012300144440728629 quantized to 16 (current 16)
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:47:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:47:28 np0005597378 nova_compute[238941]: 2026-01-27 13:47:28.197 238945 DEBUG nova.network.neutron [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Successfully created port: 0fb1bfa1-f000-4f51-8226-3de232ddb948 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 08:47:28 np0005597378 nova_compute[238941]: 2026-01-27 13:47:28.211 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:47:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 151 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.5 MiB/s wr, 93 op/s
Jan 27 08:47:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 3.4 MiB/s wr, 102 op/s
Jan 27 08:47:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Jan 27 08:47:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Jan 27 08:47:30 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Jan 27 08:47:30 np0005597378 nova_compute[238941]: 2026-01-27 13:47:30.520 238945 DEBUG nova.network.neutron [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Successfully updated port: 0fb1bfa1-f000-4f51-8226-3de232ddb948 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 08:47:30 np0005597378 nova_compute[238941]: 2026-01-27 13:47:30.595 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "refresh_cache-50c6d534-e937-4148-851e-4ec51e067875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:47:30 np0005597378 nova_compute[238941]: 2026-01-27 13:47:30.595 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquired lock "refresh_cache-50c6d534-e937-4148-851e-4ec51e067875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:47:30 np0005597378 nova_compute[238941]: 2026-01-27 13:47:30.595 238945 DEBUG nova.network.neutron [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 08:47:30 np0005597378 nova_compute[238941]: 2026-01-27 13:47:30.745 238945 DEBUG nova.compute.manager [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-changed-0fb1bfa1-f000-4f51-8226-3de232ddb948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:47:30 np0005597378 nova_compute[238941]: 2026-01-27 13:47:30.745 238945 DEBUG nova.compute.manager [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Refreshing instance network info cache due to event network-changed-0fb1bfa1-f000-4f51-8226-3de232ddb948. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 08:47:30 np0005597378 nova_compute[238941]: 2026-01-27 13:47:30.745 238945 DEBUG oslo_concurrency.lockutils [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-50c6d534-e937-4148-851e-4ec51e067875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:47:30 np0005597378 nova_compute[238941]: 2026-01-27 13:47:30.831 238945 DEBUG nova.network.neutron [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 08:47:31 np0005597378 nova_compute[238941]: 2026-01-27 13:47:31.303 238945 DEBUG nova.objects.instance [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lazy-loading 'flavor' on Instance uuid 11612a22-0c73-4cee-b792-3ed36c1d2c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:47:31 np0005597378 nova_compute[238941]: 2026-01-27 13:47:31.655 238945 DEBUG oslo_concurrency.lockutils [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:47:31 np0005597378 nova_compute[238941]: 2026-01-27 13:47:31.656 238945 DEBUG oslo_concurrency.lockutils [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquired lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:47:31 np0005597378 nova_compute[238941]: 2026-01-27 13:47:31.807 238945 DEBUG nova.network.neutron [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Updating instance_info_cache with network_info: [{"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:47:31 np0005597378 nova_compute[238941]: 2026-01-27 13:47:31.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.054 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Releasing lock "refresh_cache-50c6d534-e937-4148-851e-4ec51e067875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.054 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Instance network_info: |[{"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.055 238945 DEBUG oslo_concurrency.lockutils [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-50c6d534-e937-4148-851e-4ec51e067875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.055 238945 DEBUG nova.network.neutron [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Refreshing network info cache for port 0fb1bfa1-f000-4f51-8226-3de232ddb948 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.057 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Start _get_guest_xml network_info=[{"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.062 238945 WARNING nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.067 238945 DEBUG nova.virt.libvirt.host [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.067 238945 DEBUG nova.virt.libvirt.host [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.070 238945 DEBUG nova.virt.libvirt.host [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.070 238945 DEBUG nova.virt.libvirt.host [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.071 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.071 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.071 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.071 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.072 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.072 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.072 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.072 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.072 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.073 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.073 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.073 238945 DEBUG nova.virt.hardware [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.076 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.7 MiB/s wr, 50 op/s
Jan 27 08:47:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:47:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3206589363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.671 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.691 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:32 np0005597378 nova_compute[238941]: 2026-01-27 13:47:32.695 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:47:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531863756' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.241 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.242 238945 DEBUG nova.virt.libvirt.vif [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:47:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1880502757',display_name='tempest-ImagesTestJSON-server-1880502757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1880502757',id=35,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-vko7lh4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:47:23Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=50c6d534-e937-4148-851e-4ec51e067875,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.243 238945 DEBUG nova.network.os_vif_util [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.244 238945 DEBUG nova.network.os_vif_util [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.245 238945 DEBUG nova.objects.instance [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'pci_devices' on Instance uuid 50c6d534-e937-4148-851e-4ec51e067875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.391 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  <uuid>50c6d534-e937-4148-851e-4ec51e067875</uuid>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  <name>instance-00000023</name>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <nova:name>tempest-ImagesTestJSON-server-1880502757</nova:name>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:47:32</nova:creationTime>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:        <nova:user uuid="7dedc0f04f3d455682ea65fc37a49f06">tempest-ImagesTestJSON-1064968599-project-member</nova:user>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:        <nova:project uuid="b041f051267f4a3c8518d3042922678a">tempest-ImagesTestJSON-1064968599</nova:project>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:        <nova:port uuid="0fb1bfa1-f000-4f51-8226-3de232ddb948">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <entry name="serial">50c6d534-e937-4148-851e-4ec51e067875</entry>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <entry name="uuid">50c6d534-e937-4148-851e-4ec51e067875</entry>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/50c6d534-e937-4148-851e-4ec51e067875_disk">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/50c6d534-e937-4148-851e-4ec51e067875_disk.config">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:1e:7a:2f"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <target dev="tap0fb1bfa1-f0"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/console.log" append="off"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:47:33 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:47:33 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:47:33 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:47:33 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.393 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Preparing to wait for external event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.393 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.393 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.394 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.394 238945 DEBUG nova.virt.libvirt.vif [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:47:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1880502757',display_name='tempest-ImagesTestJSON-server-1880502757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1880502757',id=35,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-vko7lh4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:47:23Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=50c6d534-e937-4148-851e-4ec51e067875,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.395 238945 DEBUG nova.network.os_vif_util [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.395 238945 DEBUG nova.network.os_vif_util [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.396 238945 DEBUG os_vif [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.396 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.396 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.397 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.401 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.402 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0fb1bfa1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.403 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0fb1bfa1-f0, col_values=(('external_ids', {'iface-id': '0fb1bfa1-f000-4f51-8226-3de232ddb948', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:7a:2f', 'vm-uuid': '50c6d534-e937-4148-851e-4ec51e067875'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.404 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:33 np0005597378 NetworkManager[48904]: <info>  [1769521653.4055] manager: (tap0fb1bfa1-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.406 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.411 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.411 238945 INFO os_vif [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0')#033[00m
Jan 27 08:47:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Jan 27 08:47:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Jan 27 08:47:33 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.607 238945 DEBUG nova.network.neutron [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.672 238945 DEBUG nova.network.neutron [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Updated VIF entry in instance network info cache for port 0fb1bfa1-f000-4f51-8226-3de232ddb948. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.672 238945 DEBUG nova.network.neutron [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Updating instance_info_cache with network_info: [{"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.791 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.792 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.792 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No VIF found with MAC fa:16:3e:1e:7a:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.793 238945 INFO nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Using config drive#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.818 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.899 238945 DEBUG nova.compute.manager [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-changed-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.900 238945 DEBUG nova.compute.manager [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Refreshing instance network info cache due to event network-changed-ce56c185-84bf-4d18-8d83-a9ab2ece51eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.900 238945 DEBUG oslo_concurrency.lockutils [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:47:33 np0005597378 nova_compute[238941]: 2026-01-27 13:47:33.919 238945 DEBUG oslo_concurrency.lockutils [req-941e2c61-9e5e-4755-804d-eabd343e77d6 req-905e0752-1d26-4146-9930-39753b1f02e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-50c6d534-e937-4148-851e-4ec51e067875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:47:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 2.8 MiB/s wr, 29 op/s
Jan 27 08:47:34 np0005597378 nova_compute[238941]: 2026-01-27 13:47:34.419 238945 INFO nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Creating config drive at /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/disk.config#033[00m
Jan 27 08:47:34 np0005597378 nova_compute[238941]: 2026-01-27 13:47:34.424 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkmlcmfsv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:34 np0005597378 nova_compute[238941]: 2026-01-27 13:47:34.555 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkmlcmfsv" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:34 np0005597378 nova_compute[238941]: 2026-01-27 13:47:34.584 238945 DEBUG nova.storage.rbd_utils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 50c6d534-e937-4148-851e-4ec51e067875_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:34 np0005597378 nova_compute[238941]: 2026-01-27 13:47:34.587 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/disk.config 50c6d534-e937-4148-851e-4ec51e067875_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:34 np0005597378 nova_compute[238941]: 2026-01-27 13:47:34.734 238945 DEBUG oslo_concurrency.processutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/disk.config 50c6d534-e937-4148-851e-4ec51e067875_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:34 np0005597378 nova_compute[238941]: 2026-01-27 13:47:34.735 238945 INFO nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Deleting local config drive /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875/disk.config because it was imported into RBD.#033[00m
Jan 27 08:47:34 np0005597378 kernel: tap0fb1bfa1-f0: entered promiscuous mode
Jan 27 08:47:34 np0005597378 NetworkManager[48904]: <info>  [1769521654.7826] manager: (tap0fb1bfa1-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Jan 27 08:47:34 np0005597378 ovn_controller[144812]: 2026-01-27T13:47:34Z|00279|binding|INFO|Claiming lport 0fb1bfa1-f000-4f51-8226-3de232ddb948 for this chassis.
Jan 27 08:47:34 np0005597378 ovn_controller[144812]: 2026-01-27T13:47:34Z|00280|binding|INFO|0fb1bfa1-f000-4f51-8226-3de232ddb948: Claiming fa:16:3e:1e:7a:2f 10.100.0.5
Jan 27 08:47:34 np0005597378 nova_compute[238941]: 2026-01-27 13:47:34.784 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:34 np0005597378 ovn_controller[144812]: 2026-01-27T13:47:34Z|00281|binding|INFO|Setting lport 0fb1bfa1-f000-4f51-8226-3de232ddb948 ovn-installed in OVS
Jan 27 08:47:34 np0005597378 nova_compute[238941]: 2026-01-27 13:47:34.803 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:34 np0005597378 nova_compute[238941]: 2026-01-27 13:47:34.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:34 np0005597378 systemd-udevd[273078]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:47:34 np0005597378 systemd-machined[207425]: New machine qemu-39-instance-00000023.
Jan 27 08:47:34 np0005597378 ovn_controller[144812]: 2026-01-27T13:47:34Z|00282|binding|INFO|Setting lport 0fb1bfa1-f000-4f51-8226-3de232ddb948 up in Southbound
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.823 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:7a:2f 10.100.0.5'], port_security=['fa:16:3e:1e:7a:2f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '50c6d534-e937-4148-851e-4ec51e067875', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0fb1bfa1-f000-4f51-8226-3de232ddb948) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.825 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0fb1bfa1-f000-4f51-8226-3de232ddb948 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 bound to our chassis#033[00m
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.826 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e25f7657-3ed6-425c-8132-1b5c417564a5#033[00m
Jan 27 08:47:34 np0005597378 NetworkManager[48904]: <info>  [1769521654.8276] device (tap0fb1bfa1-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:47:34 np0005597378 NetworkManager[48904]: <info>  [1769521654.8282] device (tap0fb1bfa1-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:47:34 np0005597378 systemd[1]: Started Virtual Machine qemu-39-instance-00000023.
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.839 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3346321d-d384-479a-ad41-07e282495bc2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.840 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape25f7657-31 in ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.842 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape25f7657-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.842 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[75a9dcbc-f663-432d-b41c-3f61a7992782]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.843 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6da14a93-bf36-429c-9a9e-adc7de746e73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.856 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea5e9b7-87e7-4863-8be0-fb24d16161e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.870 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa58c4b-ea2d-4bd8-bac3-313d08e676c1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.899 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8f361ba5-2515-4b05-97c6-7901670c29f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:34 np0005597378 NetworkManager[48904]: <info>  [1769521654.9059] manager: (tape25f7657-30): new Veth device (/org/freedesktop/NetworkManager/Devices/131)
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.904 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f21b6ac5-4584-4b98-9849-4b0c5db9cb33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:34 np0005597378 systemd-udevd[273080]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.941 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1473ee76-af3b-4b88-a5fc-424bdb26558a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.946 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d6eb2ad7-c530-4afe-a911-990f98350ed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:34 np0005597378 NetworkManager[48904]: <info>  [1769521654.9750] device (tape25f7657-30): carrier: link connected
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.982 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9489532d-90f7-4da8-8eca-13f5afcb09b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:34.999 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[97282a13-88ae-4a52-8044-40f86be994ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429459, 'reachable_time': 24349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273111, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.012 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[caebde84-e40c-4f61-8878-06fb5f0cf80f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:da8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429459, 'tstamp': 429459}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273112, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.029 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cdc6ad-9e8c-42c6-889b-45f5ffcb5053]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429459, 'reachable_time': 24349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273113, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.053 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0718f5d8-0b1b-47b9-a114-2b32e48ef283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.111 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f27042d-c908-4a8c-94d4-4aaf9067c1c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.113 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.113 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.114 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape25f7657-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:47:35 np0005597378 nova_compute[238941]: 2026-01-27 13:47:35.115 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:35 np0005597378 kernel: tape25f7657-30: entered promiscuous mode
Jan 27 08:47:35 np0005597378 NetworkManager[48904]: <info>  [1769521655.1165] manager: (tape25f7657-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.123 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape25f7657-30, col_values=(('external_ids', {'iface-id': 'be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:47:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:47:35Z|00283|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 08:47:35 np0005597378 nova_compute[238941]: 2026-01-27 13:47:35.124 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.127 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.128 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95e43a6e-079a-4fe1-9c4c-61b287658fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.128 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:47:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:35.129 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'env', 'PROCESS_TAG=haproxy-e25f7657-3ed6-425c-8132-1b5c417564a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e25f7657-3ed6-425c-8132-1b5c417564a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:47:35 np0005597378 nova_compute[238941]: 2026-01-27 13:47:35.140 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:35 np0005597378 nova_compute[238941]: 2026-01-27 13:47:35.381 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521655.3809295, 50c6d534-e937-4148-851e-4ec51e067875 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:47:35 np0005597378 nova_compute[238941]: 2026-01-27 13:47:35.382 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] VM Started (Lifecycle Event)#033[00m
Jan 27 08:47:35 np0005597378 nova_compute[238941]: 2026-01-27 13:47:35.455 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:47:35 np0005597378 nova_compute[238941]: 2026-01-27 13:47:35.459 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521655.38197, 50c6d534-e937-4148-851e-4ec51e067875 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:47:35 np0005597378 nova_compute[238941]: 2026-01-27 13:47:35.460 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:47:35 np0005597378 podman[273186]: 2026-01-27 13:47:35.489566396 +0000 UTC m=+0.052552843 container create 73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:47:35 np0005597378 systemd[1]: Started libpod-conmon-73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e.scope.
Jan 27 08:47:35 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:47:35 np0005597378 podman[273186]: 2026-01-27 13:47:35.459995991 +0000 UTC m=+0.022982458 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:47:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5fa3d9d9cc59a4afa86ab685a529794eaaa60a1db3ae2a87567af1f93fc149c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:47:35 np0005597378 podman[273186]: 2026-01-27 13:47:35.572838983 +0000 UTC m=+0.135825450 container init 73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:47:35 np0005597378 podman[273186]: 2026-01-27 13:47:35.579370938 +0000 UTC m=+0.142357395 container start 73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:47:35 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [NOTICE]   (273206) : New worker (273208) forked
Jan 27 08:47:35 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [NOTICE]   (273206) : Loading success.
Jan 27 08:47:35 np0005597378 nova_compute[238941]: 2026-01-27 13:47:35.684 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:47:35 np0005597378 nova_compute[238941]: 2026-01-27 13:47:35.688 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:47:36 np0005597378 nova_compute[238941]: 2026-01-27 13:47:36.014 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:47:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.1 MiB/s wr, 42 op/s
Jan 27 08:47:36 np0005597378 nova_compute[238941]: 2026-01-27 13:47:36.843 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:47:37 np0005597378 nova_compute[238941]: 2026-01-27 13:47:37.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:47:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 27 KiB/s wr, 32 op/s
Jan 27 08:47:38 np0005597378 nova_compute[238941]: 2026-01-27 13:47:38.405 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:38 np0005597378 nova_compute[238941]: 2026-01-27 13:47:38.913 238945 DEBUG nova.compute.manager [req-9d7fd67e-4af7-4e83-9264-7122f6d33c5c req-28549ed3-6e4f-4eb0-a438-27643f54ee10 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:47:38 np0005597378 nova_compute[238941]: 2026-01-27 13:47:38.914 238945 DEBUG oslo_concurrency.lockutils [req-9d7fd67e-4af7-4e83-9264-7122f6d33c5c req-28549ed3-6e4f-4eb0-a438-27643f54ee10 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:38 np0005597378 nova_compute[238941]: 2026-01-27 13:47:38.914 238945 DEBUG oslo_concurrency.lockutils [req-9d7fd67e-4af7-4e83-9264-7122f6d33c5c req-28549ed3-6e4f-4eb0-a438-27643f54ee10 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:38 np0005597378 nova_compute[238941]: 2026-01-27 13:47:38.914 238945 DEBUG oslo_concurrency.lockutils [req-9d7fd67e-4af7-4e83-9264-7122f6d33c5c req-28549ed3-6e4f-4eb0-a438-27643f54ee10 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:38 np0005597378 nova_compute[238941]: 2026-01-27 13:47:38.915 238945 DEBUG nova.compute.manager [req-9d7fd67e-4af7-4e83-9264-7122f6d33c5c req-28549ed3-6e4f-4eb0-a438-27643f54ee10 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Processing event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:47:38 np0005597378 nova_compute[238941]: 2026-01-27 13:47:38.915 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:47:38 np0005597378 nova_compute[238941]: 2026-01-27 13:47:38.918 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521658.9188058, 50c6d534-e937-4148-851e-4ec51e067875 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:47:38 np0005597378 nova_compute[238941]: 2026-01-27 13:47:38.919 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:47:38 np0005597378 nova_compute[238941]: 2026-01-27 13:47:38.921 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:47:38 np0005597378 nova_compute[238941]: 2026-01-27 13:47:38.924 238945 INFO nova.virt.libvirt.driver [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Instance spawned successfully.#033[00m
Jan 27 08:47:38 np0005597378 nova_compute[238941]: 2026-01-27 13:47:38.924 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.080 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.085 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.089 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.090 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.090 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.091 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.091 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.091 238945 DEBUG nova.virt.libvirt.driver [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.201 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.304 238945 INFO nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Took 15.96 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.304 238945 DEBUG nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.573 238945 INFO nova.compute.manager [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Took 19.06 seconds to build instance.#033[00m
Jan 27 08:47:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Jan 27 08:47:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Jan 27 08:47:39 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.642 238945 DEBUG nova.network.neutron [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updating instance_info_cache with network_info: [{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.784 238945 DEBUG oslo_concurrency.lockutils [None req-7c33d06c-c007-4173-b9f3-f7e9e2082650 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.869 238945 DEBUG oslo_concurrency.lockutils [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Releasing lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.869 238945 DEBUG nova.compute.manager [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.869 238945 DEBUG nova.compute.manager [None req-a12c4b09-40fe-4210-b69e-b7aaa31381c8 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] network_info to inject: |[{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.871 238945 DEBUG oslo_concurrency.lockutils [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:47:39 np0005597378 nova_compute[238941]: 2026-01-27 13:47:39.871 238945 DEBUG nova.network.neutron [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Refreshing network info cache for port ce56c185-84bf-4d18-8d83-a9ab2ece51eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:47:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 30 KiB/s wr, 46 op/s
Jan 27 08:47:41 np0005597378 nova_compute[238941]: 2026-01-27 13:47:41.444 238945 DEBUG nova.compute.manager [req-af8a0657-2247-40cb-802e-42b1ced2cdd5 req-ada7b12e-3c3f-47f1-a4a5-a1f6f95ee31d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:47:41 np0005597378 nova_compute[238941]: 2026-01-27 13:47:41.445 238945 DEBUG oslo_concurrency.lockutils [req-af8a0657-2247-40cb-802e-42b1ced2cdd5 req-ada7b12e-3c3f-47f1-a4a5-a1f6f95ee31d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:41 np0005597378 nova_compute[238941]: 2026-01-27 13:47:41.445 238945 DEBUG oslo_concurrency.lockutils [req-af8a0657-2247-40cb-802e-42b1ced2cdd5 req-ada7b12e-3c3f-47f1-a4a5-a1f6f95ee31d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:41 np0005597378 nova_compute[238941]: 2026-01-27 13:47:41.446 238945 DEBUG oslo_concurrency.lockutils [req-af8a0657-2247-40cb-802e-42b1ced2cdd5 req-ada7b12e-3c3f-47f1-a4a5-a1f6f95ee31d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:41 np0005597378 nova_compute[238941]: 2026-01-27 13:47:41.446 238945 DEBUG nova.compute.manager [req-af8a0657-2247-40cb-802e-42b1ced2cdd5 req-ada7b12e-3c3f-47f1-a4a5-a1f6f95ee31d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] No waiting events found dispatching network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:47:41 np0005597378 nova_compute[238941]: 2026-01-27 13:47:41.446 238945 WARNING nova.compute.manager [req-af8a0657-2247-40cb-802e-42b1ced2cdd5 req-ada7b12e-3c3f-47f1-a4a5-a1f6f95ee31d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received unexpected event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 for instance with vm_state active and task_state image_snapshot_pending.#033[00m
Jan 27 08:47:41 np0005597378 nova_compute[238941]: 2026-01-27 13:47:41.733 238945 DEBUG nova.objects.instance [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lazy-loading 'flavor' on Instance uuid 11612a22-0c73-4cee-b792-3ed36c1d2c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:47:41 np0005597378 nova_compute[238941]: 2026-01-27 13:47:41.821 238945 DEBUG nova.compute.manager [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:47:41 np0005597378 nova_compute[238941]: 2026-01-27 13:47:41.844 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:47:42 np0005597378 nova_compute[238941]: 2026-01-27 13:47:42.068 238945 DEBUG nova.network.neutron [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updated VIF entry in instance network info cache for port ce56c185-84bf-4d18-8d83-a9ab2ece51eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:47:42 np0005597378 nova_compute[238941]: 2026-01-27 13:47:42.068 238945 DEBUG nova.network.neutron [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updating instance_info_cache with network_info: [{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:47:42 np0005597378 nova_compute[238941]: 2026-01-27 13:47:42.139 238945 DEBUG oslo_concurrency.lockutils [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:47:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 26 KiB/s wr, 33 op/s
Jan 27 08:47:42 np0005597378 nova_compute[238941]: 2026-01-27 13:47:42.324 238945 DEBUG oslo_concurrency.lockutils [req-3ba3c957-da59-447e-88e2-24e22b04c11d req-f4f71452-57bb-4fd8-b636-b7b4cf76f80a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:47:42 np0005597378 nova_compute[238941]: 2026-01-27 13:47:42.325 238945 DEBUG oslo_concurrency.lockutils [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquired lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:47:42 np0005597378 nova_compute[238941]: 2026-01-27 13:47:42.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:47:42 np0005597378 nova_compute[238941]: 2026-01-27 13:47:42.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:47:42 np0005597378 nova_compute[238941]: 2026-01-27 13:47:42.387 238945 INFO nova.compute.manager [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] instance snapshotting#033[00m
Jan 27 08:47:42 np0005597378 nova_compute[238941]: 2026-01-27 13:47:42.610 238945 INFO nova.virt.libvirt.driver [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Beginning live snapshot process#033[00m
Jan 27 08:47:42 np0005597378 nova_compute[238941]: 2026-01-27 13:47:42.914 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 08:47:42 np0005597378 nova_compute[238941]: 2026-01-27 13:47:42.915 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:47:42 np0005597378 nova_compute[238941]: 2026-01-27 13:47:42.915 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:47:43 np0005597378 nova_compute[238941]: 2026-01-27 13:47:43.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:47:43 np0005597378 nova_compute[238941]: 2026-01-27 13:47:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:47:43 np0005597378 nova_compute[238941]: 2026-01-27 13:47:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:47:43 np0005597378 nova_compute[238941]: 2026-01-27 13:47:43.409 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:43 np0005597378 nova_compute[238941]: 2026-01-27 13:47:43.680 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:43 np0005597378 nova_compute[238941]: 2026-01-27 13:47:43.681 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:43 np0005597378 nova_compute[238941]: 2026-01-27 13:47:43.681 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:43 np0005597378 nova_compute[238941]: 2026-01-27 13:47:43.682 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:47:43 np0005597378 nova_compute[238941]: 2026-01-27 13:47:43.682 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:43 np0005597378 podman[273217]: 2026-01-27 13:47:43.750153401 +0000 UTC m=+0.089112834 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:47:43 np0005597378 nova_compute[238941]: 2026-01-27 13:47:43.834 238945 DEBUG nova.virt.libvirt.imagebackend [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 08:47:43 np0005597378 nova_compute[238941]: 2026-01-27 13:47:43.993 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:43 np0005597378 nova_compute[238941]: 2026-01-27 13:47:43.993 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.006 238945 DEBUG nova.storage.rbd_utils [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(2015b50e82f245d2983c2253ccfe41fa) on rbd image(50c6d534-e937-4148-851e-4ec51e067875_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.071 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:47:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:47:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3216399248' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.227 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 24 KiB/s wr, 118 op/s
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.347 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.348 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.353 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.354 238945 INFO nova.compute.claims [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.637 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.638 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.642 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.642 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:47:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Jan 27 08:47:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Jan 27 08:47:44 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.805 238945 DEBUG nova.storage.rbd_utils [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] cloning vms/50c6d534-e937-4148-851e-4ec51e067875_disk@2015b50e82f245d2983c2253ccfe41fa to images/16533a81-6ed2-4221-aed9-29618a3a09b6 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.858 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.860 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3925MB free_disk=59.921543680131435GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.860 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:44 np0005597378 nova_compute[238941]: 2026-01-27 13:47:44.901 238945 DEBUG nova.storage.rbd_utils [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] flattening images/16533a81-6ed2-4221-aed9-29618a3a09b6 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:47:45 np0005597378 nova_compute[238941]: 2026-01-27 13:47:45.011 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:45 np0005597378 nova_compute[238941]: 2026-01-27 13:47:45.154 238945 DEBUG nova.storage.rbd_utils [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] removing snapshot(2015b50e82f245d2983c2253ccfe41fa) on rbd image(50c6d534-e937-4148-851e-4ec51e067875_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:47:45 np0005597378 nova_compute[238941]: 2026-01-27 13:47:45.486 238945 DEBUG nova.network.neutron [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:47:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:47:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/293421647' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:47:45 np0005597378 nova_compute[238941]: 2026-01-27 13:47:45.570 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:45 np0005597378 nova_compute[238941]: 2026-01-27 13:47:45.575 238945 DEBUG nova.compute.provider_tree [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:47:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Jan 27 08:47:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Jan 27 08:47:45 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Jan 27 08:47:45 np0005597378 nova_compute[238941]: 2026-01-27 13:47:45.808 238945 DEBUG nova.storage.rbd_utils [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(snap) on rbd image(16533a81-6ed2-4221-aed9-29618a3a09b6) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:47:45 np0005597378 nova_compute[238941]: 2026-01-27 13:47:45.841 238945 DEBUG nova.scheduler.client.report [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.004 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "49158813-53e9-4c5a-9141-7646d98a93e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.004 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "49158813-53e9-4c5a-9141-7646d98a93e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.078 238945 DEBUG nova.compute.manager [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-changed-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.079 238945 DEBUG nova.compute.manager [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Refreshing instance network info cache due to event network-changed-ce56c185-84bf-4d18-8d83-a9ab2ece51eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.079 238945 DEBUG oslo_concurrency.lockutils [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.230 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.231 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.234 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.251 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:47:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 167 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.0 KiB/s wr, 156 op/s
Jan 27 08:47:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:46.295 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:46.297 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:46.298 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:46 np0005597378 podman[273428]: 2026-01-27 13:47:46.706969501 +0000 UTC m=+0.048174655 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.740 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.740 238945 DEBUG nova.network.neutron [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:47:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.788 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.800 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 11612a22-0c73-4cee-b792-3ed36c1d2c8f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.800 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 50c6d534-e937-4148-851e-4ec51e067875 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.800 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e03449f9-27f7-4c89-8d13-5f4a688e2b1b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:47:46 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.927 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 49158813-53e9-4c5a-9141-7646d98a93e1 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.927 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.927 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:47:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:47:46 np0005597378 nova_compute[238941]: 2026-01-27 13:47:46.958 238945 INFO nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.085 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.279 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.502 238945 DEBUG nova.network.neutron [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.502 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:47:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:47:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1611059065' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.632 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.638 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:47:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:47:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:47:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:47:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.816 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:47:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:47:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.818 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.818 238945 INFO nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Creating image(s)#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.836 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.856 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.877 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.882 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.948 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.953 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.953 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.954 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.954 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.975 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:47 np0005597378 nova_compute[238941]: 2026-01-27 13:47:47.978 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.220 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 200 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.8 MiB/s wr, 210 op/s
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.280 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.280 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.281 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.285 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] resizing rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.319 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.320 238945 INFO nova.compute.claims [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.361 238945 DEBUG nova.objects.instance [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lazy-loading 'migration_context' on Instance uuid e03449f9-27f7-4c89-8d13-5f4a688e2b1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.411 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.453 238945 INFO nova.virt.libvirt.driver [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Snapshot image upload complete#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.454 238945 INFO nova.compute.manager [None req-fbaf83ad-8d82-4aa7-9fe8-236d3eff71b0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Took 6.07 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.568 238945 DEBUG nova.network.neutron [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updating instance_info_cache with network_info: [{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.570 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.571 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Ensure instance console log exists: /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.571 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.571 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.572 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.573 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.579 238945 WARNING nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.583 238945 DEBUG nova.virt.libvirt.host [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.584 238945 DEBUG nova.virt.libvirt.host [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.587 238945 DEBUG nova.virt.libvirt.host [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.587 238945 DEBUG nova.virt.libvirt.host [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.588 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.588 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.588 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.588 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.589 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.589 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.589 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.589 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.590 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.590 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.590 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.591 238945 DEBUG nova.virt.hardware [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.594 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.927 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.961 238945 DEBUG oslo_concurrency.lockutils [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Releasing lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.962 238945 DEBUG nova.compute.manager [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.962 238945 DEBUG nova.compute.manager [None req-9933cca9-ad1d-4102-88db-97d63eb5f179 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] network_info to inject: |[{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.965 238945 DEBUG oslo_concurrency.lockutils [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:47:48 np0005597378 nova_compute[238941]: 2026-01-27 13:47:48.965 238945 DEBUG nova.network.neutron [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Refreshing network info cache for port ce56c185-84bf-4d18-8d83-a9ab2ece51eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:47:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:47:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1764423229' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:47:49 np0005597378 nova_compute[238941]: 2026-01-27 13:47:49.149 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:49 np0005597378 nova_compute[238941]: 2026-01-27 13:47:49.166 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:49 np0005597378 nova_compute[238941]: 2026-01-27 13:47:49.169 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:49 np0005597378 nova_compute[238941]: 2026-01-27 13:47:49.275 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:47:49 np0005597378 nova_compute[238941]: 2026-01-27 13:47:49.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:47:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:47:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213089318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:47:49 np0005597378 nova_compute[238941]: 2026-01-27 13:47:49.495 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:49 np0005597378 nova_compute[238941]: 2026-01-27 13:47:49.500 238945 DEBUG nova.compute.provider_tree [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:47:49 np0005597378 nova_compute[238941]: 2026-01-27 13:47:49.704 238945 DEBUG nova.scheduler.client.report [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:47:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:47:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471160365' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:47:49 np0005597378 nova_compute[238941]: 2026-01-27 13:47:49.722 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:49 np0005597378 nova_compute[238941]: 2026-01-27 13:47:49.724 238945 DEBUG nova.objects.instance [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lazy-loading 'pci_devices' on Instance uuid e03449f9-27f7-4c89-8d13-5f4a688e2b1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:47:49 np0005597378 nova_compute[238941]: 2026-01-27 13:47:49.963 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  <uuid>e03449f9-27f7-4c89-8d13-5f4a688e2b1b</uuid>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  <name>instance-00000024</name>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1993432249</nova:name>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:47:48</nova:creationTime>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:        <nova:user uuid="e9ae0a0790eb4ab98f7efc9783b8ae7a">tempest-ListImageFiltersTestJSON-208194190-project-member</nova:user>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:        <nova:project uuid="940524337ca54001a9841d70fba0b293">tempest-ListImageFiltersTestJSON-208194190</nova:project>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <entry name="serial">e03449f9-27f7-4c89-8d13-5f4a688e2b1b</entry>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <entry name="uuid">e03449f9-27f7-4c89-8d13-5f4a688e2b1b</entry>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk.config">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/console.log" append="off"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:47:49 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:47:49 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:47:49 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:47:49 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.057 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.058 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:47:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 242 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.2 MiB/s wr, 134 op/s
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.666 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.666 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.667 238945 INFO nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Using config drive#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.684 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.702 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.702 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.703 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.703 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.703 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.704 238945 INFO nova.compute.manager [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Terminating instance#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.705 238945 DEBUG nova.compute.manager [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:47:50 np0005597378 kernel: tapce56c185-84 (unregistering): left promiscuous mode
Jan 27 08:47:50 np0005597378 NetworkManager[48904]: <info>  [1769521670.8933] device (tapce56c185-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:47:50 np0005597378 ovn_controller[144812]: 2026-01-27T13:47:50Z|00284|binding|INFO|Releasing lport ce56c185-84bf-4d18-8d83-a9ab2ece51eb from this chassis (sb_readonly=0)
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.905 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:50 np0005597378 ovn_controller[144812]: 2026-01-27T13:47:50Z|00285|binding|INFO|Setting lport ce56c185-84bf-4d18-8d83-a9ab2ece51eb down in Southbound
Jan 27 08:47:50 np0005597378 ovn_controller[144812]: 2026-01-27T13:47:50Z|00286|binding|INFO|Removing iface tapce56c185-84 ovn-installed in OVS
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.910 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.932 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.956 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:47:50 np0005597378 nova_compute[238941]: 2026-01-27 13:47:50.956 238945 DEBUG nova.network.neutron [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:47:50 np0005597378 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Deactivated successfully.
Jan 27 08:47:50 np0005597378 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Consumed 13.975s CPU time.
Jan 27 08:47:50 np0005597378 systemd-machined[207425]: Machine qemu-38-instance-00000022 terminated.
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.028 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:72:d8 10.100.0.8'], port_security=['fa:16:3e:3a:72:d8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '11612a22-0c73-4cee-b792-3ed36c1d2c8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8bf2cef3ea4068b3157ed963f94791', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f101d0b4-0d6d-4d31-aea8-dd08b367f4c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04f72e36-75d8-4cda-823e-a2fb13b6196f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ce56c185-84bf-4d18-8d83-a9ab2ece51eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.030 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ce56c185-84bf-4d18-8d83-a9ab2ece51eb in datapath 8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14 unbound from our chassis#033[00m
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.031 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.032 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[33ae85fb-1b24-4f54-a7f3-032389e37554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.033 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14 namespace which is not needed anymore#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.144 238945 INFO nova.virt.libvirt.driver [-] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Instance destroyed successfully.#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.144 238945 DEBUG nova.objects.instance [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lazy-loading 'resources' on Instance uuid 11612a22-0c73-4cee-b792-3ed36c1d2c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.182 238945 INFO nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:47:51 np0005597378 neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14[271737]: [NOTICE]   (271759) : haproxy version is 2.8.14-c23fe91
Jan 27 08:47:51 np0005597378 neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14[271737]: [NOTICE]   (271759) : path to executable is /usr/sbin/haproxy
Jan 27 08:47:51 np0005597378 neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14[271737]: [WARNING]  (271759) : Exiting Master process...
Jan 27 08:47:51 np0005597378 neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14[271737]: [ALERT]    (271759) : Current worker (271761) exited with code 143 (Terminated)
Jan 27 08:47:51 np0005597378 neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14[271737]: [WARNING]  (271759) : All workers exited. Exiting... (0)
Jan 27 08:47:51 np0005597378 systemd[1]: libpod-f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9.scope: Deactivated successfully.
Jan 27 08:47:51 np0005597378 podman[273763]: 2026-01-27 13:47:51.198357574 +0000 UTC m=+0.061536064 container died f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:47:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9-userdata-shm.mount: Deactivated successfully.
Jan 27 08:47:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay-48eb8abd3c25dfeb7c07813ee58489dc9bcbea6798d9a5b2295a1d1f2507868d-merged.mount: Deactivated successfully.
Jan 27 08:47:51 np0005597378 podman[273763]: 2026-01-27 13:47:51.232804088 +0000 UTC m=+0.095982568 container cleanup f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:47:51 np0005597378 systemd[1]: libpod-conmon-f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9.scope: Deactivated successfully.
Jan 27 08:47:51 np0005597378 ovn_controller[144812]: 2026-01-27T13:47:51Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:7a:2f 10.100.0.5
Jan 27 08:47:51 np0005597378 ovn_controller[144812]: 2026-01-27T13:47:51Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:7a:2f 10.100.0.5
Jan 27 08:47:51 np0005597378 podman[273801]: 2026-01-27 13:47:51.290133308 +0000 UTC m=+0.037456206 container remove f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.295 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6b030f26-548f-4f47-8c8d-1d98bf3c063f]: (4, ('Tue Jan 27 01:47:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14 (f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9)\nf92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9\nTue Jan 27 01:47:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14 (f92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9)\nf92121d2e2d677907be21db571056b2e7a31f963f6eb4ba8e35d022812b701a9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.296 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d65dc278-d222-486e-9b6b-275df591bf66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.297 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b2b0d1b-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.299 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:51 np0005597378 kernel: tap8b2b0d1b-b0: left promiscuous mode
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.316 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7454ce1c-d9a6-4c96-86c1-eee2f3ecf4f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.331 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[87d42f91-f38d-43eb-9a53-44298a368700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.332 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c5856b-6115-4cc8-8baa-6342592a2a50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.347 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[50cbf3b3-237d-4ad5-a2e9-4dac4ad496c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426381, 'reachable_time': 27435, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273819, 'error': None, 'target': 'ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.349 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:47:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:51.349 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a1274a88-1bb6-41f5-84c5-835512e328f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:47:51 np0005597378 systemd[1]: run-netns-ovnmeta\x2d8b2b0d1b\x2db369\x2d48f0\x2d81bd\x2db8e8d80b2e14.mount: Deactivated successfully.
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.486 238945 DEBUG nova.virt.libvirt.vif [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:46:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1772037059',display_name='tempest-AttachInterfacesUnderV243Test-server-1772037059',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1772037059',id=34,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH9+xf1vr/gbPx2mo4+pMlhtbdsukvX/x+V4Ypp/vSpl+k0sjd1zL2AsMTEGaDlCjz4fLKwR9QQYkbplp39yS8aSG4pFwkWe5jXO3C1L9o7qMHkVsL46zH4IIuJe/a+47g==',key_name='tempest-keypair-2011139980',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:47:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b8bf2cef3ea4068b3157ed963f94791',ramdisk_id='',reservation_id='r-ios0qgx7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-615121077',owner_user_name='tempest-AttachInterfacesUnderV243Test-615121077-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:47:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1ad1955b00c94408bef4253556e4fea8',uuid=11612a22-0c73-4cee-b792-3ed36c1d2c8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.486 238945 DEBUG nova.network.os_vif_util [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Converting VIF {"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.489 238945 DEBUG nova.network.os_vif_util [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:72:d8,bridge_name='br-int',has_traffic_filtering=True,id=ce56c185-84bf-4d18-8d83-a9ab2ece51eb,network=Network(8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce56c185-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.490 238945 DEBUG os_vif [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:72:d8,bridge_name='br-int',has_traffic_filtering=True,id=ce56c185-84bf-4d18-8d83-a9ab2ece51eb,network=Network(8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce56c185-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.493 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.494 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce56c185-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.498 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.501 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.504 238945 INFO os_vif [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:72:d8,bridge_name='br-int',has_traffic_filtering=True,id=ce56c185-84bf-4d18-8d83-a9ab2ece51eb,network=Network(8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce56c185-84')#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.676 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:47:51 np0005597378 nova_compute[238941]: 2026-01-27 13:47:51.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:47:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Jan 27 08:47:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Jan 27 08:47:51 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.035 238945 INFO nova.virt.libvirt.driver [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Deleting instance files /var/lib/nova/instances/11612a22-0c73-4cee-b792-3ed36c1d2c8f_del#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.035 238945 INFO nova.virt.libvirt.driver [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Deletion of /var/lib/nova/instances/11612a22-0c73-4cee-b792-3ed36c1d2c8f_del complete#033[00m
Jan 27 08:47:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 242 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.7 MiB/s wr, 108 op/s
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.423 238945 INFO nova.compute.manager [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Took 1.72 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.424 238945 DEBUG oslo.service.loopingcall [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.424 238945 DEBUG nova.compute.manager [-] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.424 238945 DEBUG nova.network.neutron [-] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.594 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.595 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.595 238945 INFO nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Creating image(s)#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.615 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.633 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.649 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.652 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.711 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.712 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.712 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.712 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.729 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.731 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 49158813-53e9-4c5a-9141-7646d98a93e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.759 238945 INFO nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Creating config drive at /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/disk.config#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.763 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp72v_7ut7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.859 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.860 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.914 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp72v_7ut7" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.934 238945 DEBUG nova.storage.rbd_utils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.938 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/disk.config e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:52 np0005597378 nova_compute[238941]: 2026-01-27 13:47:52.989 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 49158813-53e9-4c5a-9141-7646d98a93e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.042 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] resizing rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.122 238945 DEBUG oslo_concurrency.processutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/disk.config e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.123 238945 INFO nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Deleting local config drive /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b/disk.config because it was imported into RBD.#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.128 238945 DEBUG nova.objects.instance [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lazy-loading 'migration_context' on Instance uuid 49158813-53e9-4c5a-9141-7646d98a93e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:47:53 np0005597378 systemd-machined[207425]: New machine qemu-40-instance-00000024.
Jan 27 08:47:53 np0005597378 systemd[1]: Started Virtual Machine qemu-40-instance-00000024.
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.267 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.268 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Ensure instance console log exists: /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.268 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.268 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.269 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.320 238945 DEBUG nova.network.neutron [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.320 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.321 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.327 238945 WARNING nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.333 238945 DEBUG nova.virt.libvirt.host [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.333 238945 DEBUG nova.virt.libvirt.host [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.336 238945 DEBUG nova.virt.libvirt.host [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.337 238945 DEBUG nova.virt.libvirt.host [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.338 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.338 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.338 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.339 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.339 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.339 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.339 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.339 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.340 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.340 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.340 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.340 238945 DEBUG nova.virt.hardware [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.344 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.373 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.582 238945 DEBUG nova.network.neutron [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updated VIF entry in instance network info cache for port ce56c185-84bf-4d18-8d83-a9ab2ece51eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.582 238945 DEBUG nova.network.neutron [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updating instance_info_cache with network_info: [{"id": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "address": "fa:16:3e:3a:72:d8", "network": {"id": "8b2b0d1b-b369-48f0-81bd-b8e8d80b2e14", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1769118458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8bf2cef3ea4068b3157ed963f94791", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce56c185-84", "ovs_interfaceid": "ce56c185-84bf-4d18-8d83-a9ab2ece51eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.699 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521673.699176, e03449f9-27f7-4c89-8d13-5f4a688e2b1b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.700 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.703 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.703 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.707 238945 INFO nova.virt.libvirt.driver [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance spawned successfully.#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.707 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.762 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.762 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.767 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.767 238945 INFO nova.compute.claims [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.778 238945 DEBUG oslo_concurrency.lockutils [req-b8538528-70b5-4e3d-b908-87a8e7c94a9d req-8b415198-26a4-4ab1-83b9-d0148f8f57c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-11612a22-0c73-4cee-b792-3ed36c1d2c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:47:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:47:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470764648' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.891 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.914 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.918 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.950 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.956 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.959 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.960 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.960 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.962 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.962 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:53 np0005597378 nova_compute[238941]: 2026-01-27 13:47:53.962 238945 DEBUG nova.virt.libvirt.driver [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 250 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.9 MiB/s wr, 226 op/s
Jan 27 08:47:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:47:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/139767986' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:47:54 np0005597378 nova_compute[238941]: 2026-01-27 13:47:54.475 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:54 np0005597378 nova_compute[238941]: 2026-01-27 13:47:54.477 238945 DEBUG nova.objects.instance [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49158813-53e9-4c5a-9141-7646d98a93e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:47:55 np0005597378 nova_compute[238941]: 2026-01-27 13:47:55.409 238945 DEBUG nova.compute.manager [req-805816a8-64c6-4d3f-a9b8-3e473c0a41b0 req-dd61dc6c-8efe-4307-a841-e00964c6a0f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-vif-unplugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:47:55 np0005597378 nova_compute[238941]: 2026-01-27 13:47:55.409 238945 DEBUG oslo_concurrency.lockutils [req-805816a8-64c6-4d3f-a9b8-3e473c0a41b0 req-dd61dc6c-8efe-4307-a841-e00964c6a0f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:55 np0005597378 nova_compute[238941]: 2026-01-27 13:47:55.409 238945 DEBUG oslo_concurrency.lockutils [req-805816a8-64c6-4d3f-a9b8-3e473c0a41b0 req-dd61dc6c-8efe-4307-a841-e00964c6a0f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:55 np0005597378 nova_compute[238941]: 2026-01-27 13:47:55.410 238945 DEBUG oslo_concurrency.lockutils [req-805816a8-64c6-4d3f-a9b8-3e473c0a41b0 req-dd61dc6c-8efe-4307-a841-e00964c6a0f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:55 np0005597378 nova_compute[238941]: 2026-01-27 13:47:55.410 238945 DEBUG nova.compute.manager [req-805816a8-64c6-4d3f-a9b8-3e473c0a41b0 req-dd61dc6c-8efe-4307-a841-e00964c6a0f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] No waiting events found dispatching network-vif-unplugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:47:55 np0005597378 nova_compute[238941]: 2026-01-27 13:47:55.410 238945 DEBUG nova.compute.manager [req-805816a8-64c6-4d3f-a9b8-3e473c0a41b0 req-dd61dc6c-8efe-4307-a841-e00964c6a0f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-vif-unplugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:47:55 np0005597378 nova_compute[238941]: 2026-01-27 13:47:55.515 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:47:55 np0005597378 nova_compute[238941]: 2026-01-27 13:47:55.516 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521673.7011483, e03449f9-27f7-4c89-8d13-5f4a688e2b1b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:47:55 np0005597378 nova_compute[238941]: 2026-01-27 13:47:55.516 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] VM Started (Lifecycle Event)#033[00m
Jan 27 08:47:55 np0005597378 nova_compute[238941]: 2026-01-27 13:47:55.727 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  <uuid>49158813-53e9-4c5a-9141-7646d98a93e1</uuid>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  <name>instance-00000025</name>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <nova:name>tempest-ListImageFiltersTestJSON-server-573060294</nova:name>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:47:53</nova:creationTime>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:        <nova:user uuid="e9ae0a0790eb4ab98f7efc9783b8ae7a">tempest-ListImageFiltersTestJSON-208194190-project-member</nova:user>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:        <nova:project uuid="940524337ca54001a9841d70fba0b293">tempest-ListImageFiltersTestJSON-208194190</nova:project>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <entry name="serial">49158813-53e9-4c5a-9141-7646d98a93e1</entry>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <entry name="uuid">49158813-53e9-4c5a-9141-7646d98a93e1</entry>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/49158813-53e9-4c5a-9141-7646d98a93e1_disk">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/49158813-53e9-4c5a-9141-7646d98a93e1_disk.config">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/console.log" append="off"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:47:55 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:47:55 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:47:55 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:47:55 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:47:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:55.851 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:47:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:55.852 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:47:55 np0005597378 nova_compute[238941]: 2026-01-27 13:47:55.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:56 np0005597378 nova_compute[238941]: 2026-01-27 13:47:56.201 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:47:56 np0005597378 nova_compute[238941]: 2026-01-27 13:47:56.206 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:47:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 259 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 9.5 MiB/s wr, 302 op/s
Jan 27 08:47:56 np0005597378 nova_compute[238941]: 2026-01-27 13:47:56.496 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:56.663 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:47:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:56.666 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:47:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:56.667 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:47:56 np0005597378 nova_compute[238941]: 2026-01-27 13:47:56.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:56 np0005597378 nova_compute[238941]: 2026-01-27 13:47:56.671 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:47:56 np0005597378 nova_compute[238941]: 2026-01-27 13:47:56.674 238945 INFO nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Took 8.86 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:47:56 np0005597378 nova_compute[238941]: 2026-01-27 13:47:56.675 238945 DEBUG nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:47:56 np0005597378 nova_compute[238941]: 2026-01-27 13:47:56.677 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:47:56 np0005597378 nova_compute[238941]: 2026-01-27 13:47:56.678 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:47:56 np0005597378 nova_compute[238941]: 2026-01-27 13:47:56.678 238945 INFO nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Using config drive#033[00m
Jan 27 08:47:56 np0005597378 nova_compute[238941]: 2026-01-27 13:47:56.704 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:56 np0005597378 nova_compute[238941]: 2026-01-27 13:47:56.849 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:47:56 np0005597378 nova_compute[238941]: 2026-01-27 13:47:56.872 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.002 238945 INFO nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Creating config drive at /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/disk.config#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.007 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdypsizs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.077 238945 DEBUG nova.network.neutron [-] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.149 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdypsizs" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.184 238945 DEBUG nova.storage.rbd_utils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] rbd image 49158813-53e9-4c5a-9141-7646d98a93e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.187 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/disk.config 49158813-53e9-4c5a-9141-7646d98a93e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.227 238945 INFO nova.compute.manager [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Took 12.91 seconds to build instance.#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.293 238945 INFO nova.compute.manager [-] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Took 4.87 seconds to deallocate network for instance.#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.317 238945 DEBUG oslo_concurrency.processutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/disk.config 49158813-53e9-4c5a-9141-7646d98a93e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.318 238945 INFO nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Deleting local config drive /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1/disk.config because it was imported into RBD.#033[00m
Jan 27 08:47:57 np0005597378 systemd-machined[207425]: New machine qemu-41-instance-00000025.
Jan 27 08:47:57 np0005597378 systemd[1]: Started Virtual Machine qemu-41-instance-00000025.
Jan 27 08:47:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:47:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3522346047' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.467 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.473 238945 DEBUG nova.compute.provider_tree [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.616 238945 DEBUG oslo_concurrency.lockutils [None req-3da7ce0b-73f5-46d2-b97f-9d21c3ae1726 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.683 238945 DEBUG nova.compute.manager [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.684 238945 DEBUG oslo_concurrency.lockutils [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.684 238945 DEBUG oslo_concurrency.lockutils [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.685 238945 DEBUG oslo_concurrency.lockutils [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.685 238945 DEBUG nova.compute.manager [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] No waiting events found dispatching network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.686 238945 WARNING nova.compute.manager [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received unexpected event network-vif-plugged-ce56c185-84bf-4d18-8d83-a9ab2ece51eb for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.686 238945 DEBUG nova.compute.manager [req-b3e3ef07-3a31-41f3-9089-780c3e424649 req-c4abbdb2-b39d-4166-98d8-21cad09bbe79 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Received event network-vif-deleted-ce56c185-84bf-4d18-8d83-a9ab2ece51eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.690 238945 DEBUG nova.scheduler.client.report [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.722 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.727 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.728 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.795 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 4.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.796 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.798 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.806 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:47:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:47:57.853 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.917 238945 DEBUG oslo_concurrency.processutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.956 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521677.9553666, 49158813-53e9-4c5a-9141-7646d98a93e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.956 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.959 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.959 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.969 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.970 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.971 238945 DEBUG nova.network.neutron [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.981 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.987 238945 INFO nova.virt.libvirt.driver [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Instance spawned successfully.#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.988 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:47:57 np0005597378 nova_compute[238941]: 2026-01-27 13:47:57.991 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.036 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.037 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.037 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.038 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.038 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.039 238945 DEBUG nova.virt.libvirt.driver [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.075 238945 INFO nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.084 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.085 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521677.9557958, 49158813-53e9-4c5a-9141-7646d98a93e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.085 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] VM Started (Lifecycle Event)#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.195 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.198 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.247 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.251 238945 INFO nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Took 5.66 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.252 238945 DEBUG nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:47:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 260 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.3 MiB/s wr, 302 op/s
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.260 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.354 238945 INFO nova.compute.manager [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Took 11.63 seconds to build instance.
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.422 238945 DEBUG oslo_concurrency.lockutils [None req-7638ebe1-6705-4815-924e-4c906f058431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "49158813-53e9-4c5a-9141-7646d98a93e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.457 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.459 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.460 238945 INFO nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Creating image(s)
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.486 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.506 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.533 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.539 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.572 238945 DEBUG nova.policy [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '618e06758ec244289bb6f2258e3df2da', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a34b23d56029482fbb58a6be97575a37', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 08:47:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:47:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509840117' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.627 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.628 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.629 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.629 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.652 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.658 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 551ba990-3708-4f5d-851a-6cd84303bab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.686 238945 DEBUG oslo_concurrency.processutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.694 238945 DEBUG nova.compute.provider_tree [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.717 238945 DEBUG nova.scheduler.client.report [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.749 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.752 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.760 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.760 238945 INFO nova.compute.claims [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Claim successful on node compute-0.ctlplane.example.com
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.784 238945 INFO nova.scheduler.client.report [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Deleted allocations for instance 11612a22-0c73-4cee-b792-3ed36c1d2c8f
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.874 238945 DEBUG oslo_concurrency.lockutils [None req-8a18a565-e495-4b7c-815d-90bdb8776655 1ad1955b00c94408bef4253556e4fea8 4b8bf2cef3ea4068b3157ed963f94791 - - default default] Lock "11612a22-0c73-4cee-b792-3ed36c1d2c8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:47:58 np0005597378 nova_compute[238941]: 2026-01-27 13:47:58.950 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.089 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 551ba990-3708-4f5d-851a-6cd84303bab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.169 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] resizing rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.256 238945 DEBUG nova.objects.instance [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'migration_context' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.273 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.273 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Ensure instance console log exists: /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.274 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.274 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.275 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:47:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:47:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148224519' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:47:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:47:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148224519' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:47:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:47:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3285000027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.541 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.545 238945 DEBUG nova.compute.provider_tree [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.559 238945 DEBUG nova.scheduler.client.report [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.577 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.577 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.628 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.629 238945 DEBUG nova.network.neutron [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.651 238945 INFO nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.675 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.784 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.785 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.786 238945 INFO nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Creating image(s)
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.804 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 7749aa9a-e8ee-413b-8435-6aa205247766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.823 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 7749aa9a-e8ee-413b-8435-6aa205247766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.845 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 7749aa9a-e8ee-413b-8435-6aa205247766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.848 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "0132b823f52b17d52941dc816b6e68b517891e72" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.849 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "0132b823f52b17d52941dc816b6e68b517891e72" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.853 238945 DEBUG nova.network.neutron [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Successfully created port: 9005c867-83d2-40fe-a9c6-8abeb0537249 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 08:47:59 np0005597378 nova_compute[238941]: 2026-01-27 13:47:59.859 238945 DEBUG nova.policy [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7dedc0f04f3d455682ea65fc37a49f06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b041f051267f4a3c8518d3042922678a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 08:48:00 np0005597378 nova_compute[238941]: 2026-01-27 13:48:00.095 238945 DEBUG nova.virt.libvirt.imagebackend [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/16533a81-6ed2-4221-aed9-29618a3a09b6/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/16533a81-6ed2-4221-aed9-29618a3a09b6/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 27 08:48:00 np0005597378 nova_compute[238941]: 2026-01-27 13:48:00.140 238945 DEBUG nova.virt.libvirt.imagebackend [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Selected location: {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/16533a81-6ed2-4221-aed9-29618a3a09b6/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 27 08:48:00 np0005597378 nova_compute[238941]: 2026-01-27 13:48:00.141 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] cloning images/16533a81-6ed2-4221-aed9-29618a3a09b6@snap to None/7749aa9a-e8ee-413b-8435-6aa205247766_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 08:48:00 np0005597378 nova_compute[238941]: 2026-01-27 13:48:00.218 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "0132b823f52b17d52941dc816b6e68b517891e72" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:48:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 275 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.1 MiB/s wr, 314 op/s
Jan 27 08:48:00 np0005597378 nova_compute[238941]: 2026-01-27 13:48:00.349 238945 DEBUG nova.objects.instance [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'migration_context' on Instance uuid 7749aa9a-e8ee-413b-8435-6aa205247766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:48:00 np0005597378 nova_compute[238941]: 2026-01-27 13:48:00.366 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:48:00 np0005597378 nova_compute[238941]: 2026-01-27 13:48:00.367 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Ensure instance console log exists: /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:48:00 np0005597378 nova_compute[238941]: 2026-01-27 13:48:00.367 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:48:00 np0005597378 nova_compute[238941]: 2026-01-27 13:48:00.367 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:48:00 np0005597378 nova_compute[238941]: 2026-01-27 13:48:00.367 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:48:01 np0005597378 nova_compute[238941]: 2026-01-27 13:48:01.220 238945 DEBUG nova.compute.manager [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:48:01 np0005597378 nova_compute[238941]: 2026-01-27 13:48:01.286 238945 INFO nova.compute.manager [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] instance snapshotting
Jan 27 08:48:01 np0005597378 nova_compute[238941]: 2026-01-27 13:48:01.489 238945 INFO nova.virt.libvirt.driver [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Beginning live snapshot process
Jan 27 08:48:01 np0005597378 nova_compute[238941]: 2026-01-27 13:48:01.500 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:48:01 np0005597378 nova_compute[238941]: 2026-01-27 13:48:01.730 238945 DEBUG nova.virt.libvirt.imagebackend [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 08:48:01 np0005597378 nova_compute[238941]: 2026-01-27 13:48:01.772 238945 DEBUG nova.network.neutron [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Successfully created port: 8c56b5e8-9d79-4f73-94bd-a628a32ce290 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 08:48:01 np0005597378 nova_compute[238941]: 2026-01-27 13:48:01.852 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:48:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:48:01 np0005597378 nova_compute[238941]: 2026-01-27 13:48:01.970 238945 DEBUG nova.storage.rbd_utils [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] creating snapshot(fd6255fb702149d9ba3a0c5b9826f426) on rbd image(e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:48:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 275 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.9 MiB/s wr, 306 op/s
Jan 27 08:48:02 np0005597378 nova_compute[238941]: 2026-01-27 13:48:02.315 238945 DEBUG nova.network.neutron [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Successfully updated port: 9005c867-83d2-40fe-a9c6-8abeb0537249 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:48:02 np0005597378 nova_compute[238941]: 2026-01-27 13:48:02.357 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "refresh_cache-551ba990-3708-4f5d-851a-6cd84303bab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:48:02 np0005597378 nova_compute[238941]: 2026-01-27 13:48:02.357 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquired lock "refresh_cache-551ba990-3708-4f5d-851a-6cd84303bab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:48:02 np0005597378 nova_compute[238941]: 2026-01-27 13:48:02.358 238945 DEBUG nova.network.neutron [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:48:02 np0005597378 nova_compute[238941]: 2026-01-27 13:48:02.408 238945 DEBUG nova.compute.manager [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-changed-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:02 np0005597378 nova_compute[238941]: 2026-01-27 13:48:02.409 238945 DEBUG nova.compute.manager [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Refreshing instance network info cache due to event network-changed-9005c867-83d2-40fe-a9c6-8abeb0537249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:48:02 np0005597378 nova_compute[238941]: 2026-01-27 13:48:02.409 238945 DEBUG oslo_concurrency.lockutils [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-551ba990-3708-4f5d-851a-6cd84303bab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:48:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Jan 27 08:48:02 np0005597378 nova_compute[238941]: 2026-01-27 13:48:02.488 238945 DEBUG nova.network.neutron [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:48:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Jan 27 08:48:02 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Jan 27 08:48:02 np0005597378 nova_compute[238941]: 2026-01-27 13:48:02.531 238945 DEBUG nova.storage.rbd_utils [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] cloning vms/e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk@fd6255fb702149d9ba3a0c5b9826f426 to images/e2dc1e11-6130-483a-a5d1-e0cb97fec76c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:48:02 np0005597378 nova_compute[238941]: 2026-01-27 13:48:02.608 238945 DEBUG nova.storage.rbd_utils [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] flattening images/e2dc1e11-6130-483a-a5d1-e0cb97fec76c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:48:02 np0005597378 nova_compute[238941]: 2026-01-27 13:48:02.873 238945 DEBUG nova.storage.rbd_utils [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] removing snapshot(fd6255fb702149d9ba3a0c5b9826f426) on rbd image(e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.170 238945 DEBUG nova.network.neutron [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Updating instance_info_cache with network_info: [{"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.231 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Releasing lock "refresh_cache-551ba990-3708-4f5d-851a-6cd84303bab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.232 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance network_info: |[{"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.232 238945 DEBUG oslo_concurrency.lockutils [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-551ba990-3708-4f5d-851a-6cd84303bab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.233 238945 DEBUG nova.network.neutron [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Refreshing network info cache for port 9005c867-83d2-40fe-a9c6-8abeb0537249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.236 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Start _get_guest_xml network_info=[{"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.240 238945 WARNING nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.244 238945 DEBUG nova.virt.libvirt.host [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.245 238945 DEBUG nova.virt.libvirt.host [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.250 238945 DEBUG nova.virt.libvirt.host [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.251 238945 DEBUG nova.virt.libvirt.host [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.252 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.252 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.253 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.253 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.253 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.253 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.254 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.254 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.254 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.255 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.255 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.255 238945 DEBUG nova.virt.hardware [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.258 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Jan 27 08:48:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Jan 27 08:48:03 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.529 238945 DEBUG nova.network.neutron [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Successfully updated port: 8c56b5e8-9d79-4f73-94bd-a628a32ce290 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.535 238945 DEBUG nova.storage.rbd_utils [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] creating snapshot(snap) on rbd image(e2dc1e11-6130-483a-a5d1-e0cb97fec76c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.568 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "refresh_cache-7749aa9a-e8ee-413b-8435-6aa205247766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.569 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquired lock "refresh_cache-7749aa9a-e8ee-413b-8435-6aa205247766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.569 238945 DEBUG nova.network.neutron [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.627 238945 DEBUG nova.compute.manager [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-changed-8c56b5e8-9d79-4f73-94bd-a628a32ce290 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.628 238945 DEBUG nova.compute.manager [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Refreshing instance network info cache due to event network-changed-8c56b5e8-9d79-4f73-94bd-a628a32ce290. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.628 238945 DEBUG oslo_concurrency.lockutils [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-7749aa9a-e8ee-413b-8435-6aa205247766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.772 238945 DEBUG nova.network.neutron [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:48:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:48:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270099284' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.891 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.915 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:03 np0005597378 nova_compute[238941]: 2026-01-27 13:48:03.920 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 340 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.6 MiB/s wr, 317 op/s
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1058302556' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.520730) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521684520786, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1822, "num_deletes": 507, "total_data_size": 2271710, "memory_usage": 2313264, "flush_reason": "Manual Compaction"}
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.520 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.522 238945 DEBUG nova.virt.libvirt.vif [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:47:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-76528564',display_name='tempest-ServerDiskConfigTestJSON-server-76528564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-76528564',id=38,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-33bfu074',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:47:58Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=551ba990-3708-4f5d-851a-6cd84303bab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.524 238945 DEBUG nova.network.os_vif_util [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.525 238945 DEBUG nova.network.os_vif_util [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.527 238945 DEBUG nova.objects.instance [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_devices' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521684530309, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1808805, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24152, "largest_seqno": 25973, "table_properties": {"data_size": 1801521, "index_size": 3718, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20114, "raw_average_key_size": 20, "raw_value_size": 1784310, "raw_average_value_size": 1795, "num_data_blocks": 165, "num_entries": 994, "num_filter_entries": 994, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769521567, "oldest_key_time": 1769521567, "file_creation_time": 1769521684, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 9634 microseconds, and 4822 cpu microseconds.
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.530369) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1808805 bytes OK
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.530389) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.532214) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.532252) EVENT_LOG_v1 {"time_micros": 1769521684532244, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.532275) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2262721, prev total WAL file size 2262721, number of live WAL files 2.
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.533046) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1766KB)], [56(9761KB)]
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521684533087, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 11804696, "oldest_snapshot_seqno": -1}
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.584 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  <uuid>551ba990-3708-4f5d-851a-6cd84303bab9</uuid>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  <name>instance-00000026</name>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-76528564</nova:name>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:48:03</nova:creationTime>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:        <nova:user uuid="618e06758ec244289bb6f2258e3df2da">tempest-ServerDiskConfigTestJSON-580357788-project-member</nova:user>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:        <nova:project uuid="a34b23d56029482fbb58a6be97575a37">tempest-ServerDiskConfigTestJSON-580357788</nova:project>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:        <nova:port uuid="9005c867-83d2-40fe-a9c6-8abeb0537249">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <entry name="serial">551ba990-3708-4f5d-851a-6cd84303bab9</entry>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <entry name="uuid">551ba990-3708-4f5d-851a-6cd84303bab9</entry>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/551ba990-3708-4f5d-851a-6cd84303bab9_disk">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/551ba990-3708-4f5d-851a-6cd84303bab9_disk.config">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:6f:65:71"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <target dev="tap9005c867-83"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/console.log" append="off"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:48:04 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:48:04 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:48:04 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:48:04 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4900 keys, 7183543 bytes, temperature: kUnknown
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521684589752, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7183543, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7150762, "index_size": 19423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 122929, "raw_average_key_size": 25, "raw_value_size": 7062471, "raw_average_value_size": 1441, "num_data_blocks": 796, "num_entries": 4900, "num_filter_entries": 4900, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769521684, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.591 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Preparing to wait for external event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.591 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.590029) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7183543 bytes
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.592477) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.0 rd, 126.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.5 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(10.5) write-amplify(4.0) OK, records in: 5897, records dropped: 997 output_compression: NoCompression
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.592559) EVENT_LOG_v1 {"time_micros": 1769521684592551, "job": 30, "event": "compaction_finished", "compaction_time_micros": 56762, "compaction_time_cpu_micros": 17325, "output_level": 6, "num_output_files": 1, "total_output_size": 7183543, "num_input_records": 5897, "num_output_records": 4900, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.592 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521684593040, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.593 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.594 238945 DEBUG nova.virt.libvirt.vif [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:47:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-76528564',display_name='tempest-ServerDiskConfigTestJSON-server-76528564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-76528564',id=38,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-33bfu074',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:47:58Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=551ba990-3708-4f5d-851a-6cd84303bab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521684594750, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.532930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.594809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.594813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.594815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.594816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:48:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:48:04.594818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.594 238945 DEBUG nova.network.os_vif_util [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.596 238945 DEBUG nova.network.os_vif_util [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.596 238945 DEBUG os_vif [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.597 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.597 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.598 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.609 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.609 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9005c867-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.610 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9005c867-83, col_values=(('external_ids', {'iface-id': '9005c867-83d2-40fe-a9c6-8abeb0537249', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:65:71', 'vm-uuid': '551ba990-3708-4f5d-851a-6cd84303bab9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.611 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:04 np0005597378 NetworkManager[48904]: <info>  [1769521684.6127] manager: (tap9005c867-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.616 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.618 238945 INFO os_vif [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83')#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.749 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.750 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.750 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No VIF found with MAC fa:16:3e:6f:65:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.751 238945 INFO nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Using config drive#033[00m
Jan 27 08:48:04 np0005597378 nova_compute[238941]: 2026-01-27 13:48:04.772 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.618 238945 INFO nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Creating config drive at /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.625 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9i0sg6q9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.656 238945 DEBUG nova.network.neutron [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Updated VIF entry in instance network info cache for port 9005c867-83d2-40fe-a9c6-8abeb0537249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.657 238945 DEBUG nova.network.neutron [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Updating instance_info_cache with network_info: [{"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.705 238945 DEBUG oslo_concurrency.lockutils [req-e4fdddca-7d05-4b5d-bfae-3a156a66e23d req-bc2bb916-6017-4492-9b19-3771594e81a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-551ba990-3708-4f5d-851a-6cd84303bab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.757 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9i0sg6q9" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.781 238945 DEBUG nova.storage.rbd_utils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.785 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.830 238945 DEBUG nova.network.neutron [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Updating instance_info_cache with network_info: [{"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.925 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Releasing lock "refresh_cache-7749aa9a-e8ee-413b-8435-6aa205247766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.926 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Instance network_info: |[{"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.926 238945 DEBUG oslo_concurrency.lockutils [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-7749aa9a-e8ee-413b-8435-6aa205247766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.926 238945 DEBUG nova.network.neutron [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Refreshing network info cache for port 8c56b5e8-9d79-4f73-94bd-a628a32ce290 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.931 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Start _get_guest_xml network_info=[{"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T13:47:40Z,direct_url=<?>,disk_format='raw',id=16533a81-6ed2-4221-aed9-29618a3a09b6,min_disk=1,min_ram=0,name='tempest-test-snap-68782787',owner='b041f051267f4a3c8518d3042922678a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T13:47:48Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '16533a81-6ed2-4221-aed9-29618a3a09b6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.935 238945 WARNING nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.939 238945 DEBUG nova.virt.libvirt.host [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.940 238945 DEBUG nova.virt.libvirt.host [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.945 238945 DEBUG nova.virt.libvirt.host [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.946 238945 DEBUG nova.virt.libvirt.host [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.947 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.947 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T13:47:40Z,direct_url=<?>,disk_format='raw',id=16533a81-6ed2-4221-aed9-29618a3a09b6,min_disk=1,min_ram=0,name='tempest-test-snap-68782787',owner='b041f051267f4a3c8518d3042922678a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T13:47:48Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.947 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.948 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.948 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.948 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.948 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.949 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.949 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.949 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.950 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.950 238945 DEBUG nova.virt.hardware [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:48:05 np0005597378 nova_compute[238941]: 2026-01-27 13:48:05.954 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.001 238945 DEBUG oslo_concurrency.processutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.003 238945 INFO nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deleting local config drive /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config because it was imported into RBD.#033[00m
Jan 27 08:48:06 np0005597378 kernel: tap9005c867-83: entered promiscuous mode
Jan 27 08:48:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:06Z|00287|binding|INFO|Claiming lport 9005c867-83d2-40fe-a9c6-8abeb0537249 for this chassis.
Jan 27 08:48:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:06Z|00288|binding|INFO|9005c867-83d2-40fe-a9c6-8abeb0537249: Claiming fa:16:3e:6f:65:71 10.100.0.11
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.078 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:06 np0005597378 NetworkManager[48904]: <info>  [1769521686.0805] manager: (tap9005c867-83): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Jan 27 08:48:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:06Z|00289|binding|INFO|Setting lport 9005c867-83d2-40fe-a9c6-8abeb0537249 ovn-installed in OVS
Jan 27 08:48:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:06Z|00290|binding|INFO|Setting lport 9005c867-83d2-40fe-a9c6-8abeb0537249 up in Southbound
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.101 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:65:71 10.100.0.11'], port_security=['fa:16:3e:6f:65:71 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '551ba990-3708-4f5d-851a-6cd84303bab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9005c867-83d2-40fe-a9c6-8abeb0537249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.106 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9005c867-83d2-40fe-a9c6-8abeb0537249 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 bound to our chassis#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.108 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4856e57c-dca4-4180-b9d9-3b2eced0f054#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.110 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:06 np0005597378 systemd-udevd[274986]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:48:06 np0005597378 NetworkManager[48904]: <info>  [1769521686.1338] device (tap9005c867-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:48:06 np0005597378 NetworkManager[48904]: <info>  [1769521686.1349] device (tap9005c867-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:48:06 np0005597378 systemd-machined[207425]: New machine qemu-42-instance-00000026.
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.140 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b7b5fbd-a5e4-49c2-8bb6-40a8b1ebc34c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.141 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4856e57c-d1 in ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.142 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521671.1412442, 11612a22-0c73-4cee-b792-3ed36c1d2c8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.143 238945 INFO nova.compute.manager [-] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.146 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4856e57c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.146 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4bd0ea-9723-4fbf-801e-6b23a61f592e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.147 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a48bca-9c53-4eaa-bffd-e9a254b2794d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 systemd[1]: Started Virtual Machine qemu-42-instance-00000026.
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.162 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[31d1e3b4-66d1-4bdd-bdc5-3f7614f5a666]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.183 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e20f8180-4afc-4d92-be67-7b10d56df15f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.190 238945 DEBUG nova.compute.manager [None req-f84b870f-c736-46a1-8b50-a820076fbd7a - - - - - -] [instance: 11612a22-0c73-4cee-b792-3ed36c1d2c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.220 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[97b7dfba-95b6-4a07-b8b1-4a18c50991d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 NetworkManager[48904]: <info>  [1769521686.2294] manager: (tap4856e57c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/135)
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.228 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc11eca-a4a0-4fea-af15-b89c4a6ebfe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 352 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.7 MiB/s wr, 265 op/s
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.275 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ff740c71-b882-4e6e-ad2f-6c207a77b7a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.285 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f20ce46a-054a-4db4-ae7f-78c3bf231854]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 NetworkManager[48904]: <info>  [1769521686.3200] device (tap4856e57c-d0): carrier: link connected
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.331 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[92ab6bb3-86c1-4b36-ac3c-49207cd8800c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.380 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0b2473-0cca-446e-bbd4-1299fcdfac4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432594, 'reachable_time': 40848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275021, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:06Z|00291|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.416 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb7e9f1-abaf-47fb-8193-fe2882d6a5da]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:164f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432594, 'tstamp': 432594}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275022, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.426 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.445 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7705d7-0098-4a0b-9c9e-c99645daf335]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432594, 'reachable_time': 40848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275023, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.492 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[91b571ce-99e8-4a6b-b4e0-f829ae7a49ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:06Z|00292|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.550 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:48:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814978644' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.573 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af541376-fd20-4d21-911d-682ab2c16291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.579 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.580 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.581 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4856e57c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:06 np0005597378 kernel: tap4856e57c-d0: entered promiscuous mode
Jan 27 08:48:06 np0005597378 NetworkManager[48904]: <info>  [1769521686.5848] manager: (tap4856e57c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.583 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.591 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4856e57c-d0, col_values=(('external_ids', {'iface-id': '0436b10b-d79a-417d-bd92-96aac09ed050'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.593 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:06Z|00293|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.596 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.598 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[032dc605-a147-4eba-bf81-09128ea35210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.598 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:48:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:06.600 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'env', 'PROCESS_TAG=haproxy-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4856e57c-dca4-4180-b9d9-3b2eced0f054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.614 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.655 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 7749aa9a-e8ee-413b-8435-6aa205247766_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.659 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.688 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521686.6824389, 551ba990-3708-4f5d-851a-6cd84303bab9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.688 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Started (Lifecycle Event)#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.692 238945 DEBUG nova.compute.manager [req-872c5e39-1628-493f-81f0-2317d1435de8 req-091c7d53-da29-4e50-a556-9e547ac4a985 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.692 238945 DEBUG oslo_concurrency.lockutils [req-872c5e39-1628-493f-81f0-2317d1435de8 req-091c7d53-da29-4e50-a556-9e547ac4a985 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.692 238945 DEBUG oslo_concurrency.lockutils [req-872c5e39-1628-493f-81f0-2317d1435de8 req-091c7d53-da29-4e50-a556-9e547ac4a985 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.693 238945 DEBUG oslo_concurrency.lockutils [req-872c5e39-1628-493f-81f0-2317d1435de8 req-091c7d53-da29-4e50-a556-9e547ac4a985 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.693 238945 DEBUG nova.compute.manager [req-872c5e39-1628-493f-81f0-2317d1435de8 req-091c7d53-da29-4e50-a556-9e547ac4a985 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Processing event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.694 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.706 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.712 238945 INFO nova.virt.libvirt.driver [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance spawned successfully.#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.713 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.718 238945 INFO nova.virt.libvirt.driver [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Snapshot image upload complete#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.718 238945 INFO nova.compute.manager [None req-94b074ae-5f42-4539-bf1c-3254838e9efd e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Took 5.43 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.732 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.740 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.794 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.794 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521686.6826103, 551ba990-3708-4f5d-851a-6cd84303bab9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.794 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.802 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.803 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.803 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.804 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.804 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.804 238945 DEBUG nova.virt.libvirt.driver [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.867 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.873 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521686.7017918, 551ba990-3708-4f5d-851a-6cd84303bab9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:06 np0005597378 nova_compute[238941]: 2026-01-27 13:48:06.873 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:48:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:48:07 np0005597378 podman[275136]: 2026-01-27 13:48:07.003633093 +0000 UTC m=+0.050538859 container create c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 27 08:48:07 np0005597378 systemd[1]: Started libpod-conmon-c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a.scope.
Jan 27 08:48:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:48:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/179b1d12ac53d05abab2fb3ee62c575ae4b2253e7c6301bb28e5e0a85fddbd93/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:07 np0005597378 podman[275136]: 2026-01-27 13:48:06.978682212 +0000 UTC m=+0.025587988 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:48:07 np0005597378 podman[275136]: 2026-01-27 13:48:07.084468034 +0000 UTC m=+0.131373820 container init c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:48:07 np0005597378 podman[275136]: 2026-01-27 13:48:07.090190338 +0000 UTC m=+0.137096114 container start c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 08:48:07 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [NOTICE]   (275156) : New worker (275158) forked
Jan 27 08:48:07 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [NOTICE]   (275156) : Loading success.
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.119 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.126 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:48:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:48:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1900233745' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.215 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.216 238945 DEBUG nova.virt.libvirt.vif [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-958894613',display_name='tempest-ImagesTestJSON-server-958894613',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-958894613',id=39,image_ref='16533a81-6ed2-4221-aed9-29618a3a09b6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ysn0kii7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='50c6d534-e937-4148-851e-4ec51e067875',image_min_disk='1',image_min_ram='0',image_owner_id='b041f051267f4a3c8518d3042922678a',image_owner_project_name='tempest-ImagesTestJSON-1064968599',image_owner_user_name='tempest-ImagesTestJSON-1064968599-project-member',image_user_id='7dedc0f04f3d455682ea65fc37a49f06',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:47:59Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=7749aa9a-e8ee-413b-8435-6aa205247766,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.216 238945 DEBUG nova.network.os_vif_util [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.217 238945 DEBUG nova.network.os_vif_util [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.218 238945 DEBUG nova.objects.instance [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'pci_devices' on Instance uuid 7749aa9a-e8ee-413b-8435-6aa205247766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.223 238945 INFO nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Took 8.76 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.223 238945 DEBUG nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.230 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.267 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  <uuid>7749aa9a-e8ee-413b-8435-6aa205247766</uuid>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  <name>instance-00000027</name>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <nova:name>tempest-ImagesTestJSON-server-958894613</nova:name>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:48:05</nova:creationTime>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:        <nova:user uuid="7dedc0f04f3d455682ea65fc37a49f06">tempest-ImagesTestJSON-1064968599-project-member</nova:user>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:        <nova:project uuid="b041f051267f4a3c8518d3042922678a">tempest-ImagesTestJSON-1064968599</nova:project>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="16533a81-6ed2-4221-aed9-29618a3a09b6"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:        <nova:port uuid="8c56b5e8-9d79-4f73-94bd-a628a32ce290">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <entry name="serial">7749aa9a-e8ee-413b-8435-6aa205247766</entry>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <entry name="uuid">7749aa9a-e8ee-413b-8435-6aa205247766</entry>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/7749aa9a-e8ee-413b-8435-6aa205247766_disk">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/7749aa9a-e8ee-413b-8435-6aa205247766_disk.config">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:57:04:84"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <target dev="tap8c56b5e8-9d"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/console.log" append="off"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <input type="keyboard" bus="usb"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:48:07 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:48:07 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:48:07 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:48:07 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.267 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Preparing to wait for external event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.268 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.268 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.268 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.269 238945 DEBUG nova.virt.libvirt.vif [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-958894613',display_name='tempest-ImagesTestJSON-server-958894613',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-958894613',id=39,image_ref='16533a81-6ed2-4221-aed9-29618a3a09b6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ysn0kii7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='50c6d534-e937-4148-851e-4ec51e067875',image_min_disk='1',image_min_ram='0',image_owner_id='b041f051267f4a3c8518d3042922678a',image_owner_project_name='tempest-ImagesTestJSON-1064968599',image_owner_user_name='tempest-ImagesTestJSON-1064968599-project-member',image_user_id='7dedc0f04f3d455682ea65fc37a49f06',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:47:59Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=7749aa9a-e8ee-413b-8435-6aa205247766,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.269 238945 DEBUG nova.network.os_vif_util [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.270 238945 DEBUG nova.network.os_vif_util [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.270 238945 DEBUG os_vif [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.271 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.271 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.271 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.274 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.274 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c56b5e8-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.275 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8c56b5e8-9d, col_values=(('external_ids', {'iface-id': '8c56b5e8-9d79-4f73-94bd-a628a32ce290', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:04:84', 'vm-uuid': '7749aa9a-e8ee-413b-8435-6aa205247766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:07 np0005597378 NetworkManager[48904]: <info>  [1769521687.2771] manager: (tap8c56b5e8-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.278 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.285 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.286 238945 INFO os_vif [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d')#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.780 238945 DEBUG nova.network.neutron [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Updated VIF entry in instance network info cache for port 8c56b5e8-9d79-4f73-94bd-a628a32ce290. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.780 238945 DEBUG nova.network.neutron [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Updating instance_info_cache with network_info: [{"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.912 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.913 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.913 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No VIF found with MAC fa:16:3e:57:04:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.913 238945 INFO nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Using config drive#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.935 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 7749aa9a-e8ee-413b-8435-6aa205247766_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:07 np0005597378 nova_compute[238941]: 2026-01-27 13:48:07.982 238945 INFO nova.compute.manager [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Took 14.24 seconds to build instance.#033[00m
Jan 27 08:48:08 np0005597378 nova_compute[238941]: 2026-01-27 13:48:08.056 238945 DEBUG oslo_concurrency.lockutils [req-de074f90-456a-4b14-8202-36e76818a67f req-b5ba4d57-f58f-466a-9a1c-b480d8137b85 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-7749aa9a-e8ee-413b-8435-6aa205247766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:48:08 np0005597378 nova_compute[238941]: 2026-01-27 13:48:08.079 238945 DEBUG oslo_concurrency.lockutils [None req-c1feb62a-e660-4d28-a7cb-6783ba6dff8f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 365 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 7.4 MiB/s wr, 383 op/s
Jan 27 08:48:08 np0005597378 nova_compute[238941]: 2026-01-27 13:48:08.814 238945 INFO nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Creating config drive at /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/disk.config#033[00m
Jan 27 08:48:08 np0005597378 nova_compute[238941]: 2026-01-27 13:48:08.818 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8pk2t3y2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:08 np0005597378 nova_compute[238941]: 2026-01-27 13:48:08.955 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8pk2t3y2" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:08 np0005597378 nova_compute[238941]: 2026-01-27 13:48:08.976 238945 DEBUG nova.storage.rbd_utils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image 7749aa9a-e8ee-413b-8435-6aa205247766_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:08 np0005597378 nova_compute[238941]: 2026-01-27 13:48:08.979 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/disk.config 7749aa9a-e8ee-413b-8435-6aa205247766_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.008 238945 DEBUG nova.compute.manager [req-a59c0c5b-ca00-402e-b030-fafe39e870ea req-524a015f-ebb3-4ffc-a7d3-434c48fd6a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.008 238945 DEBUG oslo_concurrency.lockutils [req-a59c0c5b-ca00-402e-b030-fafe39e870ea req-524a015f-ebb3-4ffc-a7d3-434c48fd6a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.008 238945 DEBUG oslo_concurrency.lockutils [req-a59c0c5b-ca00-402e-b030-fafe39e870ea req-524a015f-ebb3-4ffc-a7d3-434c48fd6a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.008 238945 DEBUG oslo_concurrency.lockutils [req-a59c0c5b-ca00-402e-b030-fafe39e870ea req-524a015f-ebb3-4ffc-a7d3-434c48fd6a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.009 238945 DEBUG nova.compute.manager [req-a59c0c5b-ca00-402e-b030-fafe39e870ea req-524a015f-ebb3-4ffc-a7d3-434c48fd6a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] No waiting events found dispatching network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.009 238945 WARNING nova.compute.manager [req-a59c0c5b-ca00-402e-b030-fafe39e870ea req-524a015f-ebb3-4ffc-a7d3-434c48fd6a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received unexpected event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.117 238945 DEBUG oslo_concurrency.processutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/disk.config 7749aa9a-e8ee-413b-8435-6aa205247766_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.119 238945 INFO nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Deleting local config drive /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766/disk.config because it was imported into RBD.#033[00m
Jan 27 08:48:09 np0005597378 kernel: tap8c56b5e8-9d: entered promiscuous mode
Jan 27 08:48:09 np0005597378 NetworkManager[48904]: <info>  [1769521689.1641] manager: (tap8c56b5e8-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Jan 27 08:48:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:09Z|00294|binding|INFO|Claiming lport 8c56b5e8-9d79-4f73-94bd-a628a32ce290 for this chassis.
Jan 27 08:48:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:09Z|00295|binding|INFO|8c56b5e8-9d79-4f73-94bd-a628a32ce290: Claiming fa:16:3e:57:04:84 10.100.0.4
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.165 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:09Z|00296|binding|INFO|Setting lport 8c56b5e8-9d79-4f73-94bd-a628a32ce290 ovn-installed in OVS
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.187 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.192 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:09 np0005597378 systemd-machined[207425]: New machine qemu-43-instance-00000027.
Jan 27 08:48:09 np0005597378 systemd-udevd[275242]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:48:09 np0005597378 systemd[1]: Started Virtual Machine qemu-43-instance-00000027.
Jan 27 08:48:09 np0005597378 NetworkManager[48904]: <info>  [1769521689.2163] device (tap8c56b5e8-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:48:09 np0005597378 NetworkManager[48904]: <info>  [1769521689.2171] device (tap8c56b5e8-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.218 238945 DEBUG nova.compute.manager [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:09Z|00297|binding|INFO|Setting lport 8c56b5e8-9d79-4f73-94bd-a628a32ce290 up in Southbound
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.301 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:04:84 10.100.0.4'], port_security=['fa:16:3e:57:04:84 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7749aa9a-e8ee-413b-8435-6aa205247766', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8c56b5e8-9d79-4f73-94bd-a628a32ce290) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.302 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8c56b5e8-9d79-4f73-94bd-a628a32ce290 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 bound to our chassis#033[00m
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.304 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e25f7657-3ed6-425c-8132-1b5c417564a5#033[00m
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.325 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[563d8c44-49fd-4f8f-8d91-60bada4991e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.359 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6fef756f-e06c-4ba1-8622-840f24aef14d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.363 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3e9776c2-e5df-4d8d-aaa3-21bf127552aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:09 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.399 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0e5472e1-5511-4d58-b6d9-fd9a8eef8860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.420 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3af92369-83d7-48a8-8a38-78180a47ae74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429459, 'reachable_time': 24349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275260, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.443 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52904866-cef6-418e-aee1-7c7d6625bbc5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape25f7657-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429470, 'tstamp': 429470}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275275, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape25f7657-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429473, 'tstamp': 429473}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275275, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.445 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.447 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.450 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape25f7657-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.450 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.451 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape25f7657-30, col_values=(('external_ids', {'iface-id': 'be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:09.451 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.592 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521689.5920517, 7749aa9a-e8ee-413b-8435-6aa205247766 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.592 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] VM Started (Lifecycle Event)#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.744 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.747 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521689.5943809, 7749aa9a-e8ee-413b-8435-6aa205247766 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.747 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.759 238945 INFO nova.compute.manager [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] instance snapshotting#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.879 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.881 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:48:09 np0005597378 nova_compute[238941]: 2026-01-27 13:48:09.981 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:48:10 np0005597378 nova_compute[238941]: 2026-01-27 13:48:10.142 238945 INFO nova.virt.libvirt.driver [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Beginning live snapshot process#033[00m
Jan 27 08:48:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 391 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.9 MiB/s wr, 472 op/s
Jan 27 08:48:10 np0005597378 nova_compute[238941]: 2026-01-27 13:48:10.608 238945 DEBUG nova.virt.libvirt.imagebackend [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 08:48:10 np0005597378 nova_compute[238941]: 2026-01-27 13:48:10.917 238945 DEBUG nova.storage.rbd_utils [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] creating snapshot(390094d66ba94e71acaaec760a3640ba) on rbd image(49158813-53e9-4c5a-9141-7646d98a93e1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.304 238945 DEBUG nova.compute.manager [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.304 238945 DEBUG oslo_concurrency.lockutils [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.305 238945 DEBUG oslo_concurrency.lockutils [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.305 238945 DEBUG oslo_concurrency.lockutils [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.305 238945 DEBUG nova.compute.manager [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Processing event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.306 238945 DEBUG nova.compute.manager [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.306 238945 DEBUG oslo_concurrency.lockutils [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.306 238945 DEBUG oslo_concurrency.lockutils [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.306 238945 DEBUG oslo_concurrency.lockutils [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.306 238945 DEBUG nova.compute.manager [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] No waiting events found dispatching network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.306 238945 WARNING nova.compute.manager [req-ba886368-25ce-4df7-aa6f-657f92fd5ee6 req-0a1442af-6dc3-4419-a28b-21d8197e03f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received unexpected event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.307 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.324 238945 DEBUG nova.virt.libvirt.driver [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.324 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521691.3237157, 7749aa9a-e8ee-413b-8435-6aa205247766 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.324 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.328 238945 INFO nova.virt.libvirt.driver [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Instance spawned successfully.#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.328 238945 INFO nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Took 11.54 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.328 238945 DEBUG nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.380 238945 INFO nova.compute.manager [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Rebuilding instance#033[00m
Jan 27 08:48:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Jan 27 08:48:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Jan 27 08:48:11 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.581 238945 DEBUG nova.storage.rbd_utils [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] cloning vms/49158813-53e9-4c5a-9141-7646d98a93e1_disk@390094d66ba94e71acaaec760a3640ba to images/834f138b-dbb2-445e-a207-20ce6d07600d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.661 238945 DEBUG nova.storage.rbd_utils [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] flattening images/834f138b-dbb2-445e-a207-20ce6d07600d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.693 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.741 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.744 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.838 238945 DEBUG nova.compute.manager [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.841 238945 INFO nova.compute.manager [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Took 13.89 seconds to build instance.#033[00m
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:48:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Jan 27 08:48:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Jan 27 08:48:11 np0005597378 nova_compute[238941]: 2026-01-27 13:48:11.974 238945 DEBUG oslo_concurrency.lockutils [None req-d496ea6b-7d6e-4ff6-a750-46059d245ade 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:11 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Jan 27 08:48:12 np0005597378 nova_compute[238941]: 2026-01-27 13:48:12.018 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_requests' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:12 np0005597378 nova_compute[238941]: 2026-01-27 13:48:12.027 238945 DEBUG nova.storage.rbd_utils [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] removing snapshot(390094d66ba94e71acaaec760a3640ba) on rbd image(49158813-53e9-4c5a-9141-7646d98a93e1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:48:12 np0005597378 nova_compute[238941]: 2026-01-27 13:48:12.068 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_devices' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:12 np0005597378 nova_compute[238941]: 2026-01-27 13:48:12.164 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'resources' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:12 np0005597378 nova_compute[238941]: 2026-01-27 13:48:12.221 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'migration_context' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 391 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 5.4 MiB/s wr, 273 op/s
Jan 27 08:48:12 np0005597378 nova_compute[238941]: 2026-01-27 13:48:12.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:12 np0005597378 nova_compute[238941]: 2026-01-27 13:48:12.281 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 08:48:12 np0005597378 nova_compute[238941]: 2026-01-27 13:48:12.284 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 08:48:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Jan 27 08:48:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Jan 27 08:48:12 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Jan 27 08:48:13 np0005597378 nova_compute[238941]: 2026-01-27 13:48:13.004 238945 DEBUG nova.storage.rbd_utils [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] creating snapshot(snap) on rbd image(834f138b-dbb2-445e-a207-20ce6d07600d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:48:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Jan 27 08:48:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Jan 27 08:48:13 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Jan 27 08:48:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 462 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 10 MiB/s wr, 531 op/s
Jan 27 08:48:14 np0005597378 podman[275441]: 2026-01-27 13:48:14.765273405 +0000 UTC m=+0.099079783 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.187 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.188 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.188 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.188 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.188 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.189 238945 INFO nova.compute.manager [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Terminating instance#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.190 238945 DEBUG nova.compute.manager [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:48:15 np0005597378 kernel: tap8c56b5e8-9d (unregistering): left promiscuous mode
Jan 27 08:48:15 np0005597378 NetworkManager[48904]: <info>  [1769521695.2216] device (tap8c56b5e8-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:48:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:15Z|00298|binding|INFO|Releasing lport 8c56b5e8-9d79-4f73-94bd-a628a32ce290 from this chassis (sb_readonly=0)
Jan 27 08:48:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:15Z|00299|binding|INFO|Setting lport 8c56b5e8-9d79-4f73-94bd-a628a32ce290 down in Southbound
Jan 27 08:48:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:15Z|00300|binding|INFO|Removing iface tap8c56b5e8-9d ovn-installed in OVS
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.230 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.232 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:15 np0005597378 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Deactivated successfully.
Jan 27 08:48:15 np0005597378 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Consumed 4.445s CPU time.
Jan 27 08:48:15 np0005597378 systemd-machined[207425]: Machine qemu-43-instance-00000027 terminated.
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.364 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:04:84 10.100.0.4'], port_security=['fa:16:3e:57:04:84 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7749aa9a-e8ee-413b-8435-6aa205247766', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8c56b5e8-9d79-4f73-94bd-a628a32ce290) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.365 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8c56b5e8-9d79-4f73-94bd-a628a32ce290 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 unbound from our chassis#033[00m
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.366 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e25f7657-3ed6-425c-8132-1b5c417564a5#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.384 238945 INFO nova.virt.libvirt.driver [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Snapshot image upload complete#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.385 238945 INFO nova.compute.manager [None req-6e3267d2-b6f0-4220-bf32-76d7cce62b36 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Took 5.62 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cd17dc9a-2c20-4d96-be4d-6a0576e9b84b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.410 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.414 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.423 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[217616b7-c1b4-4ebb-bdbc-e0b855600ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.426 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[57eba85a-66e6-45b7-9a48-02611e58e8d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.427 238945 INFO nova.virt.libvirt.driver [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Instance destroyed successfully.#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.428 238945 DEBUG nova.objects.instance [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'resources' on Instance uuid 7749aa9a-e8ee-413b-8435-6aa205247766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.456 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1e58e362-6e3f-46b3-a66a-6a40fdc2584b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.479 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[585a8d32-0ff2-4a27-9ba0-39de2c96a5de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429459, 'reachable_time': 24349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275488, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.496 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[724c6534-f737-4dd0-bc3b-061ed5368d11]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape25f7657-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429470, 'tstamp': 429470}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275489, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape25f7657-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429473, 'tstamp': 429473}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275489, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.498 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.499 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.503 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape25f7657-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.503 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.503 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape25f7657-30, col_values=(('external_ids', {'iface-id': 'be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:15.504 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.539 238945 DEBUG nova.virt.libvirt.vif [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-958894613',display_name='tempest-ImagesTestJSON-server-958894613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-958894613',id=39,image_ref='16533a81-6ed2-4221-aed9-29618a3a09b6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:48:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ysn0kii7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='50c6d534-e937-4148-851e-4ec51e067875',image_min_disk='1',image_min_ram='0',image_owner_id='b041f051267f4a3c8518d3042922678a',image_owner_project_name='tempest-ImagesTestJSON-1064968599',image_owner_user_name='tempest-ImagesTestJSON-1064968599-project-member',image_user_id='7dedc0f04f3d455682ea65fc37a49f06',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:48:11Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=7749aa9a-e8ee-413b-8435-6aa205247766,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.540 238945 DEBUG nova.network.os_vif_util [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "address": "fa:16:3e:57:04:84", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c56b5e8-9d", "ovs_interfaceid": "8c56b5e8-9d79-4f73-94bd-a628a32ce290", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.541 238945 DEBUG nova.network.os_vif_util [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.541 238945 DEBUG os_vif [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.543 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c56b5e8-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.548 238945 INFO os_vif [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:04:84,bridge_name='br-int',has_traffic_filtering=True,id=8c56b5e8-9d79-4f73-94bd-a628a32ce290,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c56b5e8-9d')#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.795 238945 INFO nova.virt.libvirt.driver [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Deleting instance files /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766_del#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.796 238945 INFO nova.virt.libvirt.driver [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Deletion of /var/lib/nova/instances/7749aa9a-e8ee-413b-8435-6aa205247766_del complete#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.840 238945 DEBUG nova.compute.manager [req-82489aff-8054-4fe4-a8c1-696c8ffd60bc req-316597e9-b5f2-4d7f-bc33-d3231425ca33 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-vif-unplugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.840 238945 DEBUG oslo_concurrency.lockutils [req-82489aff-8054-4fe4-a8c1-696c8ffd60bc req-316597e9-b5f2-4d7f-bc33-d3231425ca33 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.841 238945 DEBUG oslo_concurrency.lockutils [req-82489aff-8054-4fe4-a8c1-696c8ffd60bc req-316597e9-b5f2-4d7f-bc33-d3231425ca33 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.841 238945 DEBUG oslo_concurrency.lockutils [req-82489aff-8054-4fe4-a8c1-696c8ffd60bc req-316597e9-b5f2-4d7f-bc33-d3231425ca33 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.841 238945 DEBUG nova.compute.manager [req-82489aff-8054-4fe4-a8c1-696c8ffd60bc req-316597e9-b5f2-4d7f-bc33-d3231425ca33 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] No waiting events found dispatching network-vif-unplugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:48:15 np0005597378 nova_compute[238941]: 2026-01-27 13:48:15.841 238945 DEBUG nova.compute.manager [req-82489aff-8054-4fe4-a8c1-696c8ffd60bc req-316597e9-b5f2-4d7f-bc33-d3231425ca33 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-vif-unplugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:48:16 np0005597378 nova_compute[238941]: 2026-01-27 13:48:16.051 238945 INFO nova.compute.manager [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:48:16 np0005597378 nova_compute[238941]: 2026-01-27 13:48:16.051 238945 DEBUG oslo.service.loopingcall [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:48:16 np0005597378 nova_compute[238941]: 2026-01-27 13:48:16.052 238945 DEBUG nova.compute.manager [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:48:16 np0005597378 nova_compute[238941]: 2026-01-27 13:48:16.052 238945 DEBUG nova.network.neutron [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:48:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 497 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 13 MiB/s wr, 492 op/s
Jan 27 08:48:16 np0005597378 nova_compute[238941]: 2026-01-27 13:48:16.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:48:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:48:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:48:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:48:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:48:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:48:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:48:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:48:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:48:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:48:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:48:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:48:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:48:17
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'vms', 'volumes']
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:48:17 np0005597378 podman[275615]: 2026-01-27 13:48:17.082312617 +0000 UTC m=+0.059501478 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 27 08:48:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:48:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:48:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:48:17 np0005597378 podman[275672]: 2026-01-27 13:48:17.351745376 +0000 UTC m=+0.043407377 container create ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 08:48:17 np0005597378 systemd[1]: Started libpod-conmon-ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323.scope.
Jan 27 08:48:17 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:48:17 np0005597378 podman[275672]: 2026-01-27 13:48:17.33108072 +0000 UTC m=+0.022742732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:48:17 np0005597378 podman[275672]: 2026-01-27 13:48:17.447633221 +0000 UTC m=+0.139295252 container init ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ganguly, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:48:17 np0005597378 podman[275672]: 2026-01-27 13:48:17.458733189 +0000 UTC m=+0.150395190 container start ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ganguly, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:48:17 np0005597378 podman[275672]: 2026-01-27 13:48:17.461847923 +0000 UTC m=+0.153509954 container attach ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:48:17 np0005597378 cranky_ganguly[275688]: 167 167
Jan 27 08:48:17 np0005597378 systemd[1]: libpod-ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323.scope: Deactivated successfully.
Jan 27 08:48:17 np0005597378 conmon[275688]: conmon ee5510fd6b539ed3a866 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323.scope/container/memory.events
Jan 27 08:48:17 np0005597378 podman[275672]: 2026-01-27 13:48:17.468147663 +0000 UTC m=+0.159809664 container died ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 08:48:17 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4b15e165c68b57c4ba362b095c0dbdcac2e6e690e4ac350814fd85a13241bb15-merged.mount: Deactivated successfully.
Jan 27 08:48:17 np0005597378 podman[275672]: 2026-01-27 13:48:17.525886773 +0000 UTC m=+0.217548774 container remove ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ganguly, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 08:48:17 np0005597378 systemd[1]: libpod-conmon-ee5510fd6b539ed3a866fd8839b4dcab4489633dd4c6ad00523b3f2f6ef9b323.scope: Deactivated successfully.
Jan 27 08:48:17 np0005597378 podman[275713]: 2026-01-27 13:48:17.723452181 +0000 UTC m=+0.041581418 container create ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lovelace, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:48:17 np0005597378 systemd[1]: Started libpod-conmon-ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2.scope.
Jan 27 08:48:17 np0005597378 podman[275713]: 2026-01-27 13:48:17.707370419 +0000 UTC m=+0.025499656 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:48:17 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:48:17 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eab1f615bedbc01709a635af6f93428c087faec0e1d0076ccef7a80a9948f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:17 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eab1f615bedbc01709a635af6f93428c087faec0e1d0076ccef7a80a9948f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:17 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eab1f615bedbc01709a635af6f93428c087faec0e1d0076ccef7a80a9948f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:17 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eab1f615bedbc01709a635af6f93428c087faec0e1d0076ccef7a80a9948f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:17 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eab1f615bedbc01709a635af6f93428c087faec0e1d0076ccef7a80a9948f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:48:17 np0005597378 podman[275713]: 2026-01-27 13:48:17.828430781 +0000 UTC m=+0.146560038 container init ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:48:17 np0005597378 podman[275713]: 2026-01-27 13:48:17.834450273 +0000 UTC m=+0.152579510 container start ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Jan 27 08:48:17 np0005597378 podman[275713]: 2026-01-27 13:48:17.83768905 +0000 UTC m=+0.155818317 container attach ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lovelace, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:48:17 np0005597378 nova_compute[238941]: 2026-01-27 13:48:17.978 238945 DEBUG nova.compute.manager [req-e4fbf304-e7e8-4742-932c-766efbef81e8 req-48670ab7-e044-4094-888d-bef596ea568a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:48:17 np0005597378 nova_compute[238941]: 2026-01-27 13:48:17.979 238945 DEBUG oslo_concurrency.lockutils [req-e4fbf304-e7e8-4742-932c-766efbef81e8 req-48670ab7-e044-4094-888d-bef596ea568a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:48:17 np0005597378 nova_compute[238941]: 2026-01-27 13:48:17.979 238945 DEBUG oslo_concurrency.lockutils [req-e4fbf304-e7e8-4742-932c-766efbef81e8 req-48670ab7-e044-4094-888d-bef596ea568a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:48:17 np0005597378 nova_compute[238941]: 2026-01-27 13:48:17.980 238945 DEBUG oslo_concurrency.lockutils [req-e4fbf304-e7e8-4742-932c-766efbef81e8 req-48670ab7-e044-4094-888d-bef596ea568a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:48:17 np0005597378 nova_compute[238941]: 2026-01-27 13:48:17.980 238945 DEBUG nova.compute.manager [req-e4fbf304-e7e8-4742-932c-766efbef81e8 req-48670ab7-e044-4094-888d-bef596ea568a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] No waiting events found dispatching network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 08:48:17 np0005597378 nova_compute[238941]: 2026-01-27 13:48:17.980 238945 WARNING nova.compute.manager [req-e4fbf304-e7e8-4742-932c-766efbef81e8 req-48670ab7-e044-4094-888d-bef596ea568a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received unexpected event network-vif-plugged-8c56b5e8-9d79-4f73-94bd-a628a32ce290 for instance with vm_state active and task_state deleting.
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:48:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:48:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 497 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 10 MiB/s wr, 388 op/s
Jan 27 08:48:18 np0005597378 vibrant_lovelace[275729]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:48:18 np0005597378 vibrant_lovelace[275729]: --> All data devices are unavailable
Jan 27 08:48:18 np0005597378 systemd[1]: libpod-ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2.scope: Deactivated successfully.
Jan 27 08:48:18 np0005597378 podman[275713]: 2026-01-27 13:48:18.366068343 +0000 UTC m=+0.684197580 container died ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:48:18 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e7eab1f615bedbc01709a635af6f93428c087faec0e1d0076ccef7a80a9948f8-merged.mount: Deactivated successfully.
Jan 27 08:48:18 np0005597378 podman[275713]: 2026-01-27 13:48:18.439156636 +0000 UTC m=+0.757285873 container remove ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 08:48:18 np0005597378 systemd[1]: libpod-conmon-ce07c049dd4529370fd1092a438a547618e973b04ed313f6207e5c3d18cdc5c2.scope: Deactivated successfully.
Jan 27 08:48:18 np0005597378 podman[275824]: 2026-01-27 13:48:18.894432527 +0000 UTC m=+0.046736857 container create 36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:48:18 np0005597378 nova_compute[238941]: 2026-01-27 13:48:18.906 238945 DEBUG nova.network.neutron [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:48:18 np0005597378 systemd[1]: Started libpod-conmon-36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c.scope.
Jan 27 08:48:18 np0005597378 nova_compute[238941]: 2026-01-27 13:48:18.948 238945 INFO nova.compute.manager [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Took 2.90 seconds to deallocate network for instance.
Jan 27 08:48:18 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:48:18 np0005597378 podman[275824]: 2026-01-27 13:48:18.870895515 +0000 UTC m=+0.023199875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:48:18 np0005597378 podman[275824]: 2026-01-27 13:48:18.981798234 +0000 UTC m=+0.134102564 container init 36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 08:48:18 np0005597378 podman[275824]: 2026-01-27 13:48:18.989161441 +0000 UTC m=+0.141465771 container start 36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:48:18 np0005597378 sharp_shtern[275840]: 167 167
Jan 27 08:48:18 np0005597378 systemd[1]: libpod-36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c.scope: Deactivated successfully.
Jan 27 08:48:18 np0005597378 podman[275824]: 2026-01-27 13:48:18.994117855 +0000 UTC m=+0.146422185 container attach 36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 08:48:18 np0005597378 podman[275824]: 2026-01-27 13:48:18.994842874 +0000 UTC m=+0.147147214 container died 36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:48:19 np0005597378 nova_compute[238941]: 2026-01-27 13:48:18.999 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:48:19 np0005597378 nova_compute[238941]: 2026-01-27 13:48:19.000 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:48:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8b78ad251ff065a4f1540c356033c7e3a237f88adc5e35047c04209b1f47849a-merged.mount: Deactivated successfully.
Jan 27 08:48:19 np0005597378 podman[275824]: 2026-01-27 13:48:19.042999148 +0000 UTC m=+0.195303488 container remove 36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:48:19 np0005597378 nova_compute[238941]: 2026-01-27 13:48:19.055 238945 DEBUG nova.compute.manager [req-ac5b8ef8-b94f-4110-aeaa-048ed58f448a req-399db366-eb8f-4fda-a313-c72b4f0473e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Received event network-vif-deleted-8c56b5e8-9d79-4f73-94bd-a628a32ce290 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:48:19 np0005597378 nova_compute[238941]: 2026-01-27 13:48:19.059 238945 DEBUG nova.compute.manager [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:48:19 np0005597378 systemd[1]: libpod-conmon-36dffee0cf7eb9292669b71e3a96cd1523814785e1beb5c54123fb7211cb1e7c.scope: Deactivated successfully.
Jan 27 08:48:19 np0005597378 nova_compute[238941]: 2026-01-27 13:48:19.104 238945 INFO nova.compute.manager [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] instance snapshotting
Jan 27 08:48:19 np0005597378 nova_compute[238941]: 2026-01-27 13:48:19.141 238945 DEBUG oslo_concurrency.processutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:48:19 np0005597378 podman[275864]: 2026-01-27 13:48:19.236135987 +0000 UTC m=+0.049787059 container create 170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_williamson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:48:19 np0005597378 systemd[1]: Started libpod-conmon-170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86.scope.
Jan 27 08:48:19 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:48:19 np0005597378 podman[275864]: 2026-01-27 13:48:19.214589317 +0000 UTC m=+0.028240409 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:48:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff108a9b4c9e231fde92db4dbca0ef9d30bd517e03ec5766b1acf737b5e7ef51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff108a9b4c9e231fde92db4dbca0ef9d30bd517e03ec5766b1acf737b5e7ef51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff108a9b4c9e231fde92db4dbca0ef9d30bd517e03ec5766b1acf737b5e7ef51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff108a9b4c9e231fde92db4dbca0ef9d30bd517e03ec5766b1acf737b5e7ef51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:19 np0005597378 podman[275864]: 2026-01-27 13:48:19.325349402 +0000 UTC m=+0.139000494 container init 170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:48:19 np0005597378 podman[275864]: 2026-01-27 13:48:19.332706601 +0000 UTC m=+0.146357673 container start 170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_williamson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:48:19 np0005597378 nova_compute[238941]: 2026-01-27 13:48:19.335 238945 INFO nova.virt.libvirt.driver [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Beginning live snapshot process
Jan 27 08:48:19 np0005597378 podman[275864]: 2026-01-27 13:48:19.336994756 +0000 UTC m=+0.150645848 container attach 170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:48:19 np0005597378 nova_compute[238941]: 2026-01-27 13:48:19.465 238945 DEBUG nova.virt.libvirt.imagebackend [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 27 08:48:19 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]: {
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:    "0": [
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:        {
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "devices": [
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "/dev/loop3"
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            ],
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_name": "ceph_lv0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_size": "21470642176",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "name": "ceph_lv0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "tags": {
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.cluster_name": "ceph",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.crush_device_class": "",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.encrypted": "0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.objectstore": "bluestore",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.osd_id": "0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.type": "block",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.vdo": "0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.with_tpm": "0"
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            },
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "type": "block",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "vg_name": "ceph_vg0"
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:        }
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:    ],
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:    "1": [
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:        {
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "devices": [
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "/dev/loop4"
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            ],
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_name": "ceph_lv1",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_size": "21470642176",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "name": "ceph_lv1",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "tags": {
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.cluster_name": "ceph",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.crush_device_class": "",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.encrypted": "0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.objectstore": "bluestore",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.osd_id": "1",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.type": "block",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.vdo": "0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.with_tpm": "0"
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            },
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "type": "block",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "vg_name": "ceph_vg1"
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:        }
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:    ],
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:    "2": [
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:        {
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "devices": [
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "/dev/loop5"
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            ],
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_name": "ceph_lv2",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_size": "21470642176",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "name": "ceph_lv2",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "tags": {
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.cluster_name": "ceph",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.crush_device_class": "",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.encrypted": "0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.objectstore": "bluestore",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.osd_id": "2",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.type": "block",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.vdo": "0",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:                "ceph.with_tpm": "0"
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            },
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "type": "block",
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:            "vg_name": "ceph_vg2"
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:        }
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]:    ]
Jan 27 08:48:19 np0005597378 gracious_williamson[275891]: }
Jan 27 08:48:19 np0005597378 systemd[1]: libpod-170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86.scope: Deactivated successfully.
Jan 27 08:48:19 np0005597378 podman[275864]: 2026-01-27 13:48:19.632288789 +0000 UTC m=+0.445939861 container died 170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:48:19 np0005597378 nova_compute[238941]: 2026-01-27 13:48:19.640 238945 DEBUG nova.storage.rbd_utils [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] creating snapshot(eb43c71eab694bdea9537b892c1d46f0) on rbd image(e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 08:48:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ff108a9b4c9e231fde92db4dbca0ef9d30bd517e03ec5766b1acf737b5e7ef51-merged.mount: Deactivated successfully.
Jan 27 08:48:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:48:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228244690' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:48:19 np0005597378 podman[275864]: 2026-01-27 13:48:19.696134063 +0000 UTC m=+0.509785135 container remove 170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_williamson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:48:19 np0005597378 systemd[1]: libpod-conmon-170a9fdc3d69d24ecfbdd9aff18fe40e75dcc05e2e886573ed025afd72ebac86.scope: Deactivated successfully.
Jan 27 08:48:19 np0005597378 nova_compute[238941]: 2026-01-27 13:48:19.719 238945 DEBUG oslo_concurrency.processutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:48:19 np0005597378 nova_compute[238941]: 2026-01-27 13:48:19.725 238945 DEBUG nova.compute.provider_tree [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:48:19 np0005597378 nova_compute[238941]: 2026-01-27 13:48:19.841 238945 DEBUG nova.scheduler.client.report [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:48:19 np0005597378 nova_compute[238941]: 2026-01-27 13:48:19.961 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:48:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:20Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6f:65:71 10.100.0.11
Jan 27 08:48:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:20Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6f:65:71 10.100.0.11
Jan 27 08:48:20 np0005597378 nova_compute[238941]: 2026-01-27 13:48:20.045 238945 INFO nova.scheduler.client.report [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Deleted allocations for instance 7749aa9a-e8ee-413b-8435-6aa205247766
Jan 27 08:48:20 np0005597378 podman[276035]: 2026-01-27 13:48:20.129389852 +0000 UTC m=+0.051239408 container create 11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:48:20 np0005597378 systemd[1]: Started libpod-conmon-11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906.scope.
Jan 27 08:48:20 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:48:20 np0005597378 podman[276035]: 2026-01-27 13:48:20.103049904 +0000 UTC m=+0.024899480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:48:20 np0005597378 podman[276035]: 2026-01-27 13:48:20.20526618 +0000 UTC m=+0.127115756 container init 11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 27 08:48:20 np0005597378 podman[276035]: 2026-01-27 13:48:20.210755408 +0000 UTC m=+0.132604964 container start 11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:48:20 np0005597378 elegant_feynman[276051]: 167 167
Jan 27 08:48:20 np0005597378 systemd[1]: libpod-11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906.scope: Deactivated successfully.
Jan 27 08:48:20 np0005597378 podman[276035]: 2026-01-27 13:48:20.215725681 +0000 UTC m=+0.137575237 container attach 11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:48:20 np0005597378 podman[276035]: 2026-01-27 13:48:20.216102722 +0000 UTC m=+0.137952678 container died 11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:48:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c51216688c24785c23484131a08130d880c7896aaae984c724c3d47cca4b3497-merged.mount: Deactivated successfully.
Jan 27 08:48:20 np0005597378 podman[276035]: 2026-01-27 13:48:20.268627322 +0000 UTC m=+0.190476878 container remove 11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:48:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 513 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 9.1 MiB/s wr, 385 op/s
Jan 27 08:48:20 np0005597378 systemd[1]: libpod-conmon-11abe69713bbd4dd82baa86b9cd1cf6b85d6df5029838658dcd333cdbbfcd906.scope: Deactivated successfully.
Jan 27 08:48:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Jan 27 08:48:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Jan 27 08:48:20 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Jan 27 08:48:20 np0005597378 nova_compute[238941]: 2026-01-27 13:48:20.409 238945 DEBUG nova.storage.rbd_utils [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] cloning vms/e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk@eb43c71eab694bdea9537b892c1d46f0 to images/a45d76ca-17b3-48a9-9f05-3f4b87519afb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 27 08:48:20 np0005597378 podman[276075]: 2026-01-27 13:48:20.476164047 +0000 UTC m=+0.044973869 container create 099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:48:20 np0005597378 nova_compute[238941]: 2026-01-27 13:48:20.511 238945 DEBUG nova.storage.rbd_utils [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] flattening images/a45d76ca-17b3-48a9-9f05-3f4b87519afb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 27 08:48:20 np0005597378 systemd[1]: Started libpod-conmon-099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0.scope.
Jan 27 08:48:20 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:48:20 np0005597378 nova_compute[238941]: 2026-01-27 13:48:20.549 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:48:20 np0005597378 podman[276075]: 2026-01-27 13:48:20.455262806 +0000 UTC m=+0.024072648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:48:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955769f01b468b2dcd1c2067c9cdc54dbba36b991cd406e48d5a6d681866414c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955769f01b468b2dcd1c2067c9cdc54dbba36b991cd406e48d5a6d681866414c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955769f01b468b2dcd1c2067c9cdc54dbba36b991cd406e48d5a6d681866414c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955769f01b468b2dcd1c2067c9cdc54dbba36b991cd406e48d5a6d681866414c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:20 np0005597378 podman[276075]: 2026-01-27 13:48:20.571860268 +0000 UTC m=+0.140670110 container init 099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:48:20 np0005597378 podman[276075]: 2026-01-27 13:48:20.582247497 +0000 UTC m=+0.151057319 container start 099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:48:20 np0005597378 podman[276075]: 2026-01-27 13:48:20.594005433 +0000 UTC m=+0.162815275 container attach 099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:48:20 np0005597378 nova_compute[238941]: 2026-01-27 13:48:20.861 238945 DEBUG oslo_concurrency.lockutils [None req-a340c911-5366-4372-a9dd-ffd1ba8ca6d9 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "7749aa9a-e8ee-413b-8435-6aa205247766" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:48:20 np0005597378 nova_compute[238941]: 2026-01-27 13:48:20.914 238945 DEBUG nova.storage.rbd_utils [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] removing snapshot(eb43c71eab694bdea9537b892c1d46f0) on rbd image(e03449f9-27f7-4c89-8d13-5f4a688e2b1b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 08:48:21 np0005597378 lvm[276239]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:48:21 np0005597378 lvm[276241]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:48:21 np0005597378 lvm[276241]: VG ceph_vg1 finished
Jan 27 08:48:21 np0005597378 lvm[276239]: VG ceph_vg0 finished
Jan 27 08:48:21 np0005597378 lvm[276243]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:48:21 np0005597378 lvm[276243]: VG ceph_vg2 finished
Jan 27 08:48:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Jan 27 08:48:21 np0005597378 lvm[276245]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:48:21 np0005597378 lvm[276245]: VG ceph_vg2 finished
Jan 27 08:48:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Jan 27 08:48:21 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Jan 27 08:48:21 np0005597378 nova_compute[238941]: 2026-01-27 13:48:21.397 238945 DEBUG nova.storage.rbd_utils [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] creating snapshot(snap) on rbd image(a45d76ca-17b3-48a9-9f05-3f4b87519afb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 08:48:21 np0005597378 clever_meninsky[276133]: {}
Jan 27 08:48:21 np0005597378 lvm[276247]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:48:21 np0005597378 lvm[276247]: VG ceph_vg2 finished
Jan 27 08:48:21 np0005597378 systemd[1]: libpod-099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0.scope: Deactivated successfully.
Jan 27 08:48:21 np0005597378 systemd[1]: libpod-099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0.scope: Consumed 1.294s CPU time.
Jan 27 08:48:21 np0005597378 lvm[276264]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:48:21 np0005597378 lvm[276264]: VG ceph_vg2 finished
Jan 27 08:48:21 np0005597378 podman[276267]: 2026-01-27 13:48:21.475611526 +0000 UTC m=+0.026479823 container died 099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 08:48:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-955769f01b468b2dcd1c2067c9cdc54dbba36b991cd406e48d5a6d681866414c-merged.mount: Deactivated successfully.
Jan 27 08:48:21 np0005597378 podman[276267]: 2026-01-27 13:48:21.525501516 +0000 UTC m=+0.076369783 container remove 099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 08:48:21 np0005597378 systemd[1]: libpod-conmon-099420e17c416776dbef150f2499d550fbe0c792e445417f46dc50c7e7e8d6f0.scope: Deactivated successfully.
Jan 27 08:48:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:48:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:48:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:48:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:48:21 np0005597378 nova_compute[238941]: 2026-01-27 13:48:21.857 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:48:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Jan 27 08:48:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Jan 27 08:48:21 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Jan 27 08:48:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 513 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 1.7 MiB/s wr, 127 op/s
Jan 27 08:48:22 np0005597378 nova_compute[238941]: 2026-01-27 13:48:22.346 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 27 08:48:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:48:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:48:22 np0005597378 nova_compute[238941]: 2026-01-27 13:48:22.588 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:22 np0005597378 nova_compute[238941]: 2026-01-27 13:48:22.589 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:22 np0005597378 nova_compute[238941]: 2026-01-27 13:48:22.589 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:22 np0005597378 nova_compute[238941]: 2026-01-27 13:48:22.589 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:22 np0005597378 nova_compute[238941]: 2026-01-27 13:48:22.590 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:22 np0005597378 nova_compute[238941]: 2026-01-27 13:48:22.591 238945 INFO nova.compute.manager [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Terminating instance#033[00m
Jan 27 08:48:22 np0005597378 nova_compute[238941]: 2026-01-27 13:48:22.591 238945 DEBUG nova.compute.manager [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:48:22 np0005597378 kernel: tap0fb1bfa1-f0 (unregistering): left promiscuous mode
Jan 27 08:48:22 np0005597378 NetworkManager[48904]: <info>  [1769521702.6306] device (tap0fb1bfa1-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:48:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:22Z|00301|binding|INFO|Releasing lport 0fb1bfa1-f000-4f51-8226-3de232ddb948 from this chassis (sb_readonly=0)
Jan 27 08:48:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:22Z|00302|binding|INFO|Setting lport 0fb1bfa1-f000-4f51-8226-3de232ddb948 down in Southbound
Jan 27 08:48:22 np0005597378 nova_compute[238941]: 2026-01-27 13:48:22.637 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:22Z|00303|binding|INFO|Removing iface tap0fb1bfa1-f0 ovn-installed in OVS
Jan 27 08:48:22 np0005597378 nova_compute[238941]: 2026-01-27 13:48:22.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:22 np0005597378 nova_compute[238941]: 2026-01-27 13:48:22.655 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:22 np0005597378 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Deactivated successfully.
Jan 27 08:48:22 np0005597378 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Consumed 13.847s CPU time.
Jan 27 08:48:22 np0005597378 systemd-machined[207425]: Machine qemu-39-instance-00000023 terminated.
Jan 27 08:48:22 np0005597378 nova_compute[238941]: 2026-01-27 13:48:22.829 238945 INFO nova.virt.libvirt.driver [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Instance destroyed successfully.#033[00m
Jan 27 08:48:22 np0005597378 nova_compute[238941]: 2026-01-27 13:48:22.829 238945 DEBUG nova.objects.instance [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'resources' on Instance uuid 50c6d534-e937-4148-851e-4ec51e067875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.148 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:7a:2f 10.100.0.5'], port_security=['fa:16:3e:1e:7a:2f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '50c6d534-e937-4148-851e-4ec51e067875', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0fb1bfa1-f000-4f51-8226-3de232ddb948) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.149 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0fb1bfa1-f000-4f51-8226-3de232ddb948 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 unbound from our chassis#033[00m
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.150 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e25f7657-3ed6-425c-8132-1b5c417564a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.151 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3527cdb5-13d6-41aa-8aee-3cede84903ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.152 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace which is not needed anymore#033[00m
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.249 238945 DEBUG nova.virt.libvirt.vif [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:47:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1880502757',display_name='tempest-ImagesTestJSON-server-1880502757',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1880502757',id=35,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:47:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-vko7lh4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:47:48Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=50c6d534-e937-4148-851e-4ec51e067875,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.250 238945 DEBUG nova.network.os_vif_util [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "address": "fa:16:3e:1e:7a:2f", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fb1bfa1-f0", "ovs_interfaceid": "0fb1bfa1-f000-4f51-8226-3de232ddb948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.251 238945 DEBUG nova.network.os_vif_util [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.251 238945 DEBUG os_vif [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.254 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.254 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0fb1bfa1-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.256 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.258 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.260 238945 INFO os_vif [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:7a:2f,bridge_name='br-int',has_traffic_filtering=True,id=0fb1bfa1-f000-4f51-8226-3de232ddb948,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fb1bfa1-f0')#033[00m
Jan 27 08:48:23 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [NOTICE]   (273206) : haproxy version is 2.8.14-c23fe91
Jan 27 08:48:23 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [NOTICE]   (273206) : path to executable is /usr/sbin/haproxy
Jan 27 08:48:23 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [WARNING]  (273206) : Exiting Master process...
Jan 27 08:48:23 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [ALERT]    (273206) : Current worker (273208) exited with code 143 (Terminated)
Jan 27 08:48:23 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[273202]: [WARNING]  (273206) : All workers exited. Exiting... (0)
Jan 27 08:48:23 np0005597378 systemd[1]: libpod-73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e.scope: Deactivated successfully.
Jan 27 08:48:23 np0005597378 podman[276336]: 2026-01-27 13:48:23.296029319 +0000 UTC m=+0.053359165 container died 73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:48:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e-userdata-shm.mount: Deactivated successfully.
Jan 27 08:48:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b5fa3d9d9cc59a4afa86ab685a529794eaaa60a1db3ae2a87567af1f93fc149c-merged.mount: Deactivated successfully.
Jan 27 08:48:23 np0005597378 podman[276336]: 2026-01-27 13:48:23.358947519 +0000 UTC m=+0.116277345 container cleanup 73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:48:23 np0005597378 systemd[1]: libpod-conmon-73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e.scope: Deactivated successfully.
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.395 238945 INFO nova.virt.libvirt.driver [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Snapshot image upload complete#033[00m
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.396 238945 INFO nova.compute.manager [None req-3f075be6-d75a-4767-85f0-112992ee87c2 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Took 4.29 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 27 08:48:23 np0005597378 podman[276383]: 2026-01-27 13:48:23.440913101 +0000 UTC m=+0.059762477 container remove 73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.447 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[25a409a6-090e-4cb7-a78c-49494f742f20]: (4, ('Tue Jan 27 01:48:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e)\n73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e\nTue Jan 27 01:48:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e)\n73b984aca2f1cbc324b5fd80c63505d29bcc137384319377bbd309d64a97028e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.449 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[53354279-77f9-4618-bbb4-f9d5ca9a901d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.450 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.452 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:23 np0005597378 kernel: tape25f7657-30: left promiscuous mode
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.471 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[17150b7c-8a5f-4a89-9248-081baa17ab41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.488 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be58be70-7bde-42cd-9cb5-5923d6f94eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.489 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc0d4c2-f637-43c7-b748-402311a5b616]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.506 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6e38f854-9659-4db7-bab9-403c70ba2e68]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429451, 'reachable_time': 38692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276399, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.509 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:48:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:23.509 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb0664e-dae8-4ca2-a025-4393527e8071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:23 np0005597378 systemd[1]: run-netns-ovnmeta\x2de25f7657\x2d3ed6\x2d425c\x2d8132\x2d1b5c417564a5.mount: Deactivated successfully.
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.604 238945 INFO nova.virt.libvirt.driver [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Deleting instance files /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875_del#033[00m
Jan 27 08:48:23 np0005597378 nova_compute[238941]: 2026-01-27 13:48:23.605 238945 INFO nova.virt.libvirt.driver [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Deletion of /var/lib/nova/instances/50c6d534-e937-4148-851e-4ec51e067875_del complete#033[00m
Jan 27 08:48:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 522 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.5 MiB/s wr, 377 op/s
Jan 27 08:48:24 np0005597378 kernel: tap9005c867-83 (unregistering): left promiscuous mode
Jan 27 08:48:24 np0005597378 NetworkManager[48904]: <info>  [1769521704.5917] device (tap9005c867-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:48:24 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:24Z|00304|binding|INFO|Releasing lport 9005c867-83d2-40fe-a9c6-8abeb0537249 from this chassis (sb_readonly=0)
Jan 27 08:48:24 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:24Z|00305|binding|INFO|Setting lport 9005c867-83d2-40fe-a9c6-8abeb0537249 down in Southbound
Jan 27 08:48:24 np0005597378 nova_compute[238941]: 2026-01-27 13:48:24.599 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:24 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:24Z|00306|binding|INFO|Removing iface tap9005c867-83 ovn-installed in OVS
Jan 27 08:48:24 np0005597378 nova_compute[238941]: 2026-01-27 13:48:24.600 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:24 np0005597378 nova_compute[238941]: 2026-01-27 13:48:24.622 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:24 np0005597378 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Deactivated successfully.
Jan 27 08:48:24 np0005597378 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Consumed 13.111s CPU time.
Jan 27 08:48:24 np0005597378 systemd-machined[207425]: Machine qemu-42-instance-00000026 terminated.
Jan 27 08:48:24 np0005597378 nova_compute[238941]: 2026-01-27 13:48:24.818 238945 INFO nova.compute.manager [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Took 2.23 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:48:24 np0005597378 nova_compute[238941]: 2026-01-27 13:48:24.818 238945 DEBUG oslo.service.loopingcall [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:48:24 np0005597378 nova_compute[238941]: 2026-01-27 13:48:24.819 238945 DEBUG nova.compute.manager [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:48:24 np0005597378 nova_compute[238941]: 2026-01-27 13:48:24.819 238945 DEBUG nova.network.neutron [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:48:24 np0005597378 kernel: tap9005c867-83: entered promiscuous mode
Jan 27 08:48:24 np0005597378 kernel: tap9005c867-83 (unregistering): left promiscuous mode
Jan 27 08:48:24 np0005597378 nova_compute[238941]: 2026-01-27 13:48:24.843 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:24.899 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:65:71 10.100.0.11'], port_security=['fa:16:3e:6f:65:71 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '551ba990-3708-4f5d-851a-6cd84303bab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9005c867-83d2-40fe-a9c6-8abeb0537249) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:48:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:24.900 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9005c867-83d2-40fe-a9c6-8abeb0537249 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis#033[00m
Jan 27 08:48:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:24.901 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:48:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:24.902 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[777a5519-15f6-42bf-97f2-27c13681a1f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:24.903 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace which is not needed anymore#033[00m
Jan 27 08:48:25 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [NOTICE]   (275156) : haproxy version is 2.8.14-c23fe91
Jan 27 08:48:25 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [NOTICE]   (275156) : path to executable is /usr/sbin/haproxy
Jan 27 08:48:25 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [WARNING]  (275156) : Exiting Master process...
Jan 27 08:48:25 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [ALERT]    (275156) : Current worker (275158) exited with code 143 (Terminated)
Jan 27 08:48:25 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[275152]: [WARNING]  (275156) : All workers exited. Exiting... (0)
Jan 27 08:48:25 np0005597378 systemd[1]: libpod-c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a.scope: Deactivated successfully.
Jan 27 08:48:25 np0005597378 podman[276433]: 2026-01-27 13:48:25.022731185 +0000 UTC m=+0.039376230 container died c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:48:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a-userdata-shm.mount: Deactivated successfully.
Jan 27 08:48:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-179b1d12ac53d05abab2fb3ee62c575ae4b2253e7c6301bb28e5e0a85fddbd93-merged.mount: Deactivated successfully.
Jan 27 08:48:25 np0005597378 podman[276433]: 2026-01-27 13:48:25.057293523 +0000 UTC m=+0.073938558 container cleanup c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:48:25 np0005597378 systemd[1]: libpod-conmon-c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a.scope: Deactivated successfully.
Jan 27 08:48:25 np0005597378 podman[276461]: 2026-01-27 13:48:25.110216774 +0000 UTC m=+0.033633364 container remove c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 08:48:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ba617f-920e-40f3-9b9e-e09d774e3724]: (4, ('Tue Jan 27 01:48:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a)\nc55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a\nTue Jan 27 01:48:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (c55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a)\nc55e0262ab81b4c10c82017ab344fdb2e8055936cd512536b12c312b359dea2a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.117 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ee45494e-e212-4e4b-91ea-2b145ec34275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.118 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:25 np0005597378 kernel: tap4856e57c-d0: left promiscuous mode
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.144 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c13f74a6-e7b2-4f49-ad3a-5b9728cb96b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.156 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[634cec2b-746d-4dc6-8da1-ee6992963758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.158 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6431bd1f-3d48-4cd7-8896-61529c607449]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.173 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5f148f-107e-4811-bd92-eaf2d9216464]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432583, 'reachable_time': 39832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276478, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.175 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:48:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:25.175 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb86c3d-4306-4261-8f34-45d4d3f69330]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:25 np0005597378 systemd[1]: run-netns-ovnmeta\x2d4856e57c\x2ddca4\x2d4180\x2db9d9\x2d3b2eced0f054.mount: Deactivated successfully.
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.361 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance shutdown successfully after 13 seconds.#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.366 238945 INFO nova.virt.libvirt.driver [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance destroyed successfully.#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.370 238945 INFO nova.virt.libvirt.driver [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance destroyed successfully.#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.370 238945 DEBUG nova.virt.libvirt.vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:47:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-76528564',display_name='tempest-ServerDiskConfigTestJSON-server-76528564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-76528564',id=38,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:48:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-33bfu074',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'
},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:10Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=551ba990-3708-4f5d-851a-6cd84303bab9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.371 238945 DEBUG nova.network.os_vif_util [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.371 238945 DEBUG nova.network.os_vif_util [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.372 238945 DEBUG os_vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.373 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.373 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9005c867-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.374 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.376 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.378 238945 INFO os_vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83')#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.775 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deleting instance files /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9_del#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.776 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deletion of /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9_del complete#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.847 238945 DEBUG nova.compute.manager [req-64a81f6c-4de8-418c-97e0-dc53e4b4b098 req-75cdf0a8-2413-4cfd-902a-30d157986468 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-vif-unplugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.847 238945 DEBUG oslo_concurrency.lockutils [req-64a81f6c-4de8-418c-97e0-dc53e4b4b098 req-75cdf0a8-2413-4cfd-902a-30d157986468 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.847 238945 DEBUG oslo_concurrency.lockutils [req-64a81f6c-4de8-418c-97e0-dc53e4b4b098 req-75cdf0a8-2413-4cfd-902a-30d157986468 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.848 238945 DEBUG oslo_concurrency.lockutils [req-64a81f6c-4de8-418c-97e0-dc53e4b4b098 req-75cdf0a8-2413-4cfd-902a-30d157986468 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.848 238945 DEBUG nova.compute.manager [req-64a81f6c-4de8-418c-97e0-dc53e4b4b098 req-75cdf0a8-2413-4cfd-902a-30d157986468 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] No waiting events found dispatching network-vif-unplugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:48:25 np0005597378 nova_compute[238941]: 2026-01-27 13:48:25.848 238945 DEBUG nova.compute.manager [req-64a81f6c-4de8-418c-97e0-dc53e4b4b098 req-75cdf0a8-2413-4cfd-902a-30d157986468 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-vif-unplugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.065 238945 DEBUG nova.compute.manager [req-b0272f32-0d60-4160-9a18-407401ad324b req-7b5dc225-5f7d-4807-9a5d-e122099a4908 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-unplugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.066 238945 DEBUG oslo_concurrency.lockutils [req-b0272f32-0d60-4160-9a18-407401ad324b req-7b5dc225-5f7d-4807-9a5d-e122099a4908 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.066 238945 DEBUG oslo_concurrency.lockutils [req-b0272f32-0d60-4160-9a18-407401ad324b req-7b5dc225-5f7d-4807-9a5d-e122099a4908 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.066 238945 DEBUG oslo_concurrency.lockutils [req-b0272f32-0d60-4160-9a18-407401ad324b req-7b5dc225-5f7d-4807-9a5d-e122099a4908 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.066 238945 DEBUG nova.compute.manager [req-b0272f32-0d60-4160-9a18-407401ad324b req-7b5dc225-5f7d-4807-9a5d-e122099a4908 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] No waiting events found dispatching network-vif-unplugged-9005c867-83d2-40fe-a9c6-8abeb0537249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.067 238945 WARNING nova.compute.manager [req-b0272f32-0d60-4160-9a18-407401ad324b req-7b5dc225-5f7d-4807-9a5d-e122099a4908 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received unexpected event network-vif-unplugged-9005c867-83d2-40fe-a9c6-8abeb0537249 for instance with vm_state active and task_state rebuilding.#033[00m
Jan 27 08:48:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 519 MiB data, 703 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 10 MiB/s wr, 332 op/s
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.319 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.319 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Creating image(s)#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.339 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.361 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.381 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.384 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.450 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.451 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.452 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.452 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.472 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.475 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 551ba990-3708-4f5d-851a-6cd84303bab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.748 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 551ba990-3708-4f5d-851a-6cd84303bab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.808 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] resizing rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.874 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.879 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.880 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Ensure instance console log exists: /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.880 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.881 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.881 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.883 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Start _get_guest_xml network_info=[{"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.886 238945 WARNING nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.891 238945 DEBUG nova.virt.libvirt.host [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.892 238945 DEBUG nova.virt.libvirt.host [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.895 238945 DEBUG nova.virt.libvirt.host [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.896 238945 DEBUG nova.virt.libvirt.host [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.896 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.896 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.897 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.897 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.897 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.897 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.898 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.898 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.898 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.898 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.898 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.899 238945 DEBUG nova.virt.hardware [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.899 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:26 np0005597378 nova_compute[238941]: 2026-01-27 13:48:26.971 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:48:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Jan 27 08:48:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Jan 27 08:48:26 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026215282805832118 of space, bias 1.0, pg target 0.7864584841749636 quantized to 32 (current 32)
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002524002211443372 of space, bias 1.0, pg target 0.7572006634330116 quantized to 32 (current 32)
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2038298523223453e-06 of space, bias 4.0, pg target 0.0014445958227868143 quantized to 16 (current 16)
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:48:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:48:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:48:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065864399' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:48:27 np0005597378 nova_compute[238941]: 2026-01-27 13:48:27.520 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:27 np0005597378 nova_compute[238941]: 2026-01-27 13:48:27.545 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:27 np0005597378 nova_compute[238941]: 2026-01-27 13:48:27.549 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.051 238945 DEBUG nova.compute.manager [req-9504c3e0-2bd3-4203-8a2f-d3a42c7a04a2 req-8a7036f2-1310-4426-9403-f97429eff860 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.052 238945 DEBUG oslo_concurrency.lockutils [req-9504c3e0-2bd3-4203-8a2f-d3a42c7a04a2 req-8a7036f2-1310-4426-9403-f97429eff860 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "50c6d534-e937-4148-851e-4ec51e067875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.052 238945 DEBUG oslo_concurrency.lockutils [req-9504c3e0-2bd3-4203-8a2f-d3a42c7a04a2 req-8a7036f2-1310-4426-9403-f97429eff860 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.052 238945 DEBUG oslo_concurrency.lockutils [req-9504c3e0-2bd3-4203-8a2f-d3a42c7a04a2 req-8a7036f2-1310-4426-9403-f97429eff860 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.052 238945 DEBUG nova.compute.manager [req-9504c3e0-2bd3-4203-8a2f-d3a42c7a04a2 req-8a7036f2-1310-4426-9403-f97429eff860 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] No waiting events found dispatching network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.053 238945 WARNING nova.compute.manager [req-9504c3e0-2bd3-4203-8a2f-d3a42c7a04a2 req-8a7036f2-1310-4426-9403-f97429eff860 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received unexpected event network-vif-plugged-0fb1bfa1-f000-4f51-8226-3de232ddb948 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:48:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:48:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769911001' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.142 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.143 238945 DEBUG nova.virt.libvirt.vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:47:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-76528564',display_name='tempest-ServerDiskConfigTestJSON-server-76528564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-76528564',id=38,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:48:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-33bfu074',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:26Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=551ba990-3708-4f5d-851a-6cd84303bab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.144 238945 DEBUG nova.network.os_vif_util [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.144 238945 DEBUG nova.network.os_vif_util [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.146 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  <uuid>551ba990-3708-4f5d-851a-6cd84303bab9</uuid>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  <name>instance-00000026</name>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-76528564</nova:name>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:48:26</nova:creationTime>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:        <nova:user uuid="618e06758ec244289bb6f2258e3df2da">tempest-ServerDiskConfigTestJSON-580357788-project-member</nova:user>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:        <nova:project uuid="a34b23d56029482fbb58a6be97575a37">tempest-ServerDiskConfigTestJSON-580357788</nova:project>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:        <nova:port uuid="9005c867-83d2-40fe-a9c6-8abeb0537249">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <entry name="serial">551ba990-3708-4f5d-851a-6cd84303bab9</entry>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <entry name="uuid">551ba990-3708-4f5d-851a-6cd84303bab9</entry>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/551ba990-3708-4f5d-851a-6cd84303bab9_disk">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/551ba990-3708-4f5d-851a-6cd84303bab9_disk.config">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:6f:65:71"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <target dev="tap9005c867-83"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/console.log" append="off"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:48:28 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:48:28 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:48:28 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:48:28 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.147 238945 DEBUG nova.compute.manager [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Preparing to wait for external event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.147 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.147 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.147 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.148 238945 DEBUG nova.virt.libvirt.vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:47:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-76528564',display_name='tempest-ServerDiskConfigTestJSON-server-76528564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-76528564',id=38,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:48:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-33bfu074',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:26Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=551ba990-3708-4f5d-851a-6cd84303bab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.148 238945 DEBUG nova.network.os_vif_util [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.148 238945 DEBUG nova.network.os_vif_util [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.149 238945 DEBUG os_vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.149 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.150 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.152 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.152 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9005c867-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.152 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9005c867-83, col_values=(('external_ids', {'iface-id': '9005c867-83d2-40fe-a9c6-8abeb0537249', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:65:71', 'vm-uuid': '551ba990-3708-4f5d-851a-6cd84303bab9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:28 np0005597378 NetworkManager[48904]: <info>  [1769521708.1550] manager: (tap9005c867-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.158 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.159 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.160 238945 INFO os_vif [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83')#033[00m
Jan 27 08:48:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 476 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 9.3 MiB/s wr, 322 op/s
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.424 238945 DEBUG nova.compute.manager [req-aadd1138-e515-4af9-affa-a8ddc0434b71 req-abe54f36-fafb-4dfb-ab14-0636c59a2b04 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.425 238945 DEBUG oslo_concurrency.lockutils [req-aadd1138-e515-4af9-affa-a8ddc0434b71 req-abe54f36-fafb-4dfb-ab14-0636c59a2b04 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.425 238945 DEBUG oslo_concurrency.lockutils [req-aadd1138-e515-4af9-affa-a8ddc0434b71 req-abe54f36-fafb-4dfb-ab14-0636c59a2b04 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.425 238945 DEBUG oslo_concurrency.lockutils [req-aadd1138-e515-4af9-affa-a8ddc0434b71 req-abe54f36-fafb-4dfb-ab14-0636c59a2b04 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.425 238945 DEBUG nova.compute.manager [req-aadd1138-e515-4af9-affa-a8ddc0434b71 req-abe54f36-fafb-4dfb-ab14-0636c59a2b04 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Processing event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.638 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.638 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.639 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No VIF found with MAC fa:16:3e:6f:65:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.640 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Using config drive#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.668 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:28 np0005597378 nova_compute[238941]: 2026-01-27 13:48:28.978 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:29 np0005597378 nova_compute[238941]: 2026-01-27 13:48:29.236 238945 DEBUG nova.network.neutron [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:29 np0005597378 nova_compute[238941]: 2026-01-27 13:48:29.384 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'keypairs' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:30 np0005597378 nova_compute[238941]: 2026-01-27 13:48:30.258 238945 INFO nova.compute.manager [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Took 5.44 seconds to deallocate network for instance.#033[00m
Jan 27 08:48:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 451 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 10 MiB/s wr, 335 op/s
Jan 27 08:48:30 np0005597378 nova_compute[238941]: 2026-01-27 13:48:30.425 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521695.4239712, 7749aa9a-e8ee-413b-8435-6aa205247766 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:30 np0005597378 nova_compute[238941]: 2026-01-27 13:48:30.425 238945 INFO nova.compute.manager [-] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:48:30 np0005597378 nova_compute[238941]: 2026-01-27 13:48:30.873 238945 DEBUG nova.compute.manager [None req-712db0cc-3415-4913-ac4c-7b81add01b14 - - - - - -] [instance: 7749aa9a-e8ee-413b-8435-6aa205247766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:31 np0005597378 nova_compute[238941]: 2026-01-27 13:48:31.255 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:31 np0005597378 nova_compute[238941]: 2026-01-27 13:48:31.256 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:31 np0005597378 nova_compute[238941]: 2026-01-27 13:48:31.861 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:48:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 451 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 8.4 MiB/s wr, 278 op/s
Jan 27 08:48:32 np0005597378 nova_compute[238941]: 2026-01-27 13:48:32.459 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:32 np0005597378 nova_compute[238941]: 2026-01-27 13:48:32.459 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:32 np0005597378 nova_compute[238941]: 2026-01-27 13:48:32.482 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:48:32 np0005597378 nova_compute[238941]: 2026-01-27 13:48:32.490 238945 DEBUG oslo_concurrency.processutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:32 np0005597378 nova_compute[238941]: 2026-01-27 13:48:32.697 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:32 np0005597378 nova_compute[238941]: 2026-01-27 13:48:32.835 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Creating config drive at /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config#033[00m
Jan 27 08:48:32 np0005597378 nova_compute[238941]: 2026-01-27 13:48:32.840 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp_f1jm7v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:32 np0005597378 nova_compute[238941]: 2026-01-27 13:48:32.978 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp_f1jm7v" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.000 238945 DEBUG nova.storage.rbd_utils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.003 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:48:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431354743' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.074 238945 DEBUG oslo_concurrency.processutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.081 238945 DEBUG nova.compute.provider_tree [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.102 238945 DEBUG nova.compute.manager [req-446683d9-ed66-437c-8d36-1032b508a75e req-36cac579-ad2d-4395-bab0-a80739b0f3de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Received event network-vif-deleted-0fb1bfa1-f000-4f51-8226-3de232ddb948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.138 238945 DEBUG oslo_concurrency.processutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config 551ba990-3708-4f5d-851a-6cd84303bab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.139 238945 INFO nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deleting local config drive /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9/disk.config because it was imported into RBD.#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.162 238945 DEBUG nova.scheduler.client.report [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:48:33 np0005597378 kernel: tap9005c867-83: entered promiscuous mode
Jan 27 08:48:33 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:33Z|00307|binding|INFO|Claiming lport 9005c867-83d2-40fe-a9c6-8abeb0537249 for this chassis.
Jan 27 08:48:33 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:33Z|00308|binding|INFO|9005c867-83d2-40fe-a9c6-8abeb0537249: Claiming fa:16:3e:6f:65:71 10.100.0.11
Jan 27 08:48:33 np0005597378 NetworkManager[48904]: <info>  [1769521713.1885] manager: (tap9005c867-83): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:33 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:33Z|00309|binding|INFO|Setting lport 9005c867-83d2-40fe-a9c6-8abeb0537249 ovn-installed in OVS
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.205 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:33 np0005597378 systemd-udevd[276821]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:48:33 np0005597378 systemd-machined[207425]: New machine qemu-44-instance-00000026.
Jan 27 08:48:33 np0005597378 NetworkManager[48904]: <info>  [1769521713.2260] device (tap9005c867-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:48:33 np0005597378 NetworkManager[48904]: <info>  [1769521713.2266] device (tap9005c867-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:48:33 np0005597378 systemd[1]: Started Virtual Machine qemu-44-instance-00000026.
Jan 27 08:48:33 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:33Z|00310|binding|INFO|Setting lport 9005c867-83d2-40fe-a9c6-8abeb0537249 up in Southbound
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.323 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:65:71 10.100.0.11'], port_security=['fa:16:3e:6f:65:71 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '551ba990-3708-4f5d-851a-6cd84303bab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9005c867-83d2-40fe-a9c6-8abeb0537249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.325 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9005c867-83d2-40fe-a9c6-8abeb0537249 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 bound to our chassis#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.326 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4856e57c-dca4-4180-b9d9-3b2eced0f054#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.337 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9be5f284-ff1f-4e64-a246-1304fdfd93e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.338 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4856e57c-d1 in ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.340 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4856e57c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.340 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8618d79c-8fe1-498f-a577-bea7635637d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.341 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[89ef371e-59bd-49bd-a01f-fefada63d340]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.354 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[325fad50-e518-419a-b704-c74fb86a4378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.356 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.359 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.368 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.369 238945 INFO nova.compute.claims [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.370 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f09396d2-d7f1-47f8-b067-a04b542ddcbc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.399 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1859369a-536c-4aba-ab8e-49158b7aa4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 NetworkManager[48904]: <info>  [1769521713.4056] manager: (tap4856e57c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/141)
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.404 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72e8a15f-76c7-43fc-b3e3-10e017228c35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.436 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a98ed044-8801-4213-9872-1d3c2ebf79f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.440 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[26ff2f7a-9dfd-442b-b8ba-3e35d6ea3e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 NetworkManager[48904]: <info>  [1769521713.4644] device (tap4856e57c-d0): carrier: link connected
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.471 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d6cf106c-43db-497d-a4c6-418631441e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.489 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e37bc86-f67b-4c66-9f68-592f6346768b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435308, 'reachable_time': 41581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276855, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.505 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8df796-9b9d-4dad-bb26-01f8723c64db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:164f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 435308, 'tstamp': 435308}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276856, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.521 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b73bfd11-81c9-4021-bbd3-d6027f9b0ec7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435308, 'reachable_time': 41581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276857, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.547 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9b1efe-5c33-4f0f-a4b4-890bae3b68fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.603 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2c32d7fc-8002-4161-9a81-9a0bb42eb4eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.604 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.604 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.605 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4856e57c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:33 np0005597378 NetworkManager[48904]: <info>  [1769521713.6074] manager: (tap4856e57c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Jan 27 08:48:33 np0005597378 kernel: tap4856e57c-d0: entered promiscuous mode
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.608 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.614 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4856e57c-d0, col_values=(('external_ids', {'iface-id': '0436b10b-d79a-417d-bd92-96aac09ed050'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:33 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:33Z|00311|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.615 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.619 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.620 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[15502d03-4da2-45e8-8a18-24d840eee82d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.620 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:48:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:33.621 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'env', 'PROCESS_TAG=haproxy-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4856e57c-dca4-4180-b9d9-3b2eced0f054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.679 238945 INFO nova.scheduler.client.report [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Deleted allocations for instance 50c6d534-e937-4148-851e-4ec51e067875#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.970 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 551ba990-3708-4f5d-851a-6cd84303bab9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.971 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521713.9699225, 551ba990-3708-4f5d-851a-6cd84303bab9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.971 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Started (Lifecycle Event)#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.974 238945 DEBUG nova.compute.manager [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.978 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.983 238945 INFO nova.virt.libvirt.driver [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance spawned successfully.#033[00m
Jan 27 08:48:33 np0005597378 nova_compute[238941]: 2026-01-27 13:48:33.984 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:48:33 np0005597378 podman[276931]: 2026-01-27 13:48:33.986093187 +0000 UTC m=+0.058438150 container create b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 08:48:34 np0005597378 systemd[1]: Started libpod-conmon-b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe.scope.
Jan 27 08:48:34 np0005597378 podman[276931]: 2026-01-27 13:48:33.956001059 +0000 UTC m=+0.028346052 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:48:34 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:48:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8203f8f5df80ddb58e033631c7b0b0361349dfc94002e2a118884845a52558/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:34 np0005597378 podman[276931]: 2026-01-27 13:48:34.07517602 +0000 UTC m=+0.147521013 container init b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 08:48:34 np0005597378 podman[276931]: 2026-01-27 13:48:34.080554834 +0000 UTC m=+0.152899797 container start b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 27 08:48:34 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [NOTICE]   (276951) : New worker (276953) forked
Jan 27 08:48:34 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [NOTICE]   (276951) : Loading success.
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.212 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 451 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 116 op/s
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.433 238945 DEBUG nova.compute.manager [req-340947e4-afb8-4efc-a56e-4f144262a72a req-633dcf5d-b4c1-427a-aaf7-e80ec25f2ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.433 238945 DEBUG oslo_concurrency.lockutils [req-340947e4-afb8-4efc-a56e-4f144262a72a req-633dcf5d-b4c1-427a-aaf7-e80ec25f2ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.434 238945 DEBUG oslo_concurrency.lockutils [req-340947e4-afb8-4efc-a56e-4f144262a72a req-633dcf5d-b4c1-427a-aaf7-e80ec25f2ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.434 238945 DEBUG oslo_concurrency.lockutils [req-340947e4-afb8-4efc-a56e-4f144262a72a req-633dcf5d-b4c1-427a-aaf7-e80ec25f2ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.434 238945 DEBUG nova.compute.manager [req-340947e4-afb8-4efc-a56e-4f144262a72a req-633dcf5d-b4c1-427a-aaf7-e80ec25f2ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] No waiting events found dispatching network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.434 238945 WARNING nova.compute.manager [req-340947e4-afb8-4efc-a56e-4f144262a72a req-633dcf5d-b4c1-427a-aaf7-e80ec25f2ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received unexpected event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.467 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.474 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.480 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.481 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.482 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.482 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.483 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.483 238945 DEBUG nova.virt.libvirt.driver [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:48:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3461228143' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.805 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.810 238945 DEBUG nova.compute.provider_tree [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.829 238945 DEBUG oslo_concurrency.lockutils [None req-f7e5fa89-15fb-415a-957f-040533f46363 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "50c6d534-e937-4148-851e-4ec51e067875" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.890 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.891 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521713.970136, 551ba990-3708-4f5d-851a-6cd84303bab9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:34 np0005597378 nova_compute[238941]: 2026-01-27 13:48:34.891 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:48:35 np0005597378 nova_compute[238941]: 2026-01-27 13:48:35.015 238945 DEBUG nova.scheduler.client.report [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:48:35 np0005597378 nova_compute[238941]: 2026-01-27 13:48:35.089 238945 DEBUG nova.compute.manager [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:35 np0005597378 nova_compute[238941]: 2026-01-27 13:48:35.451 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:35 np0005597378 nova_compute[238941]: 2026-01-27 13:48:35.455 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521713.9771802, 551ba990-3708-4f5d-851a-6cd84303bab9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:35 np0005597378 nova_compute[238941]: 2026-01-27 13:48:35.456 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:48:35 np0005597378 nova_compute[238941]: 2026-01-27 13:48:35.862 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:35 np0005597378 nova_compute[238941]: 2026-01-27 13:48:35.866 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:48:35 np0005597378 nova_compute[238941]: 2026-01-27 13:48:35.942 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:35 np0005597378 nova_compute[238941]: 2026-01-27 13:48:35.943 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:48:36 np0005597378 nova_compute[238941]: 2026-01-27 13:48:36.003 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:36 np0005597378 nova_compute[238941]: 2026-01-27 13:48:36.004 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:36 np0005597378 nova_compute[238941]: 2026-01-27 13:48:36.004 238945 DEBUG nova.objects.instance [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 08:48:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 451 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 625 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 27 08:48:36 np0005597378 nova_compute[238941]: 2026-01-27 13:48:36.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:36 np0005597378 nova_compute[238941]: 2026-01-27 13:48:36.953 238945 DEBUG nova.compute.manager [req-a6cfe585-4d05-4dcc-b4be-6b13e849e747 req-246e8cf7-4603-4f8b-b48b-9c6b9811751b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:36 np0005597378 nova_compute[238941]: 2026-01-27 13:48:36.954 238945 DEBUG oslo_concurrency.lockutils [req-a6cfe585-4d05-4dcc-b4be-6b13e849e747 req-246e8cf7-4603-4f8b-b48b-9c6b9811751b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:36 np0005597378 nova_compute[238941]: 2026-01-27 13:48:36.955 238945 DEBUG oslo_concurrency.lockutils [req-a6cfe585-4d05-4dcc-b4be-6b13e849e747 req-246e8cf7-4603-4f8b-b48b-9c6b9811751b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:36 np0005597378 nova_compute[238941]: 2026-01-27 13:48:36.955 238945 DEBUG oslo_concurrency.lockutils [req-a6cfe585-4d05-4dcc-b4be-6b13e849e747 req-246e8cf7-4603-4f8b-b48b-9c6b9811751b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:36 np0005597378 nova_compute[238941]: 2026-01-27 13:48:36.955 238945 DEBUG nova.compute.manager [req-a6cfe585-4d05-4dcc-b4be-6b13e849e747 req-246e8cf7-4603-4f8b-b48b-9c6b9811751b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] No waiting events found dispatching network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:48:36 np0005597378 nova_compute[238941]: 2026-01-27 13:48:36.956 238945 WARNING nova.compute.manager [req-a6cfe585-4d05-4dcc-b4be-6b13e849e747 req-246e8cf7-4603-4f8b-b48b-9c6b9811751b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received unexpected event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:48:36 np0005597378 nova_compute[238941]: 2026-01-27 13:48:36.972 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:48:36 np0005597378 nova_compute[238941]: 2026-01-27 13:48:36.973 238945 DEBUG nova.network.neutron [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:48:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:48:37 np0005597378 nova_compute[238941]: 2026-01-27 13:48:37.299 238945 DEBUG oslo_concurrency.lockutils [None req-5209641b-86db-43cb-bcb0-43753f855304 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 1.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:37 np0005597378 nova_compute[238941]: 2026-01-27 13:48:37.385 238945 INFO nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:48:37 np0005597378 nova_compute[238941]: 2026-01-27 13:48:37.420 238945 DEBUG nova.policy [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7dedc0f04f3d455682ea65fc37a49f06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b041f051267f4a3c8518d3042922678a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:48:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Jan 27 08:48:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Jan 27 08:48:37 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Jan 27 08:48:37 np0005597378 nova_compute[238941]: 2026-01-27 13:48:37.796 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:48:37 np0005597378 nova_compute[238941]: 2026-01-27 13:48:37.827 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521702.826262, 50c6d534-e937-4148-851e-4ec51e067875 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:37 np0005597378 nova_compute[238941]: 2026-01-27 13:48:37.828 238945 INFO nova.compute.manager [-] [instance: 50c6d534-e937-4148-851e-4ec51e067875] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:48:38 np0005597378 nova_compute[238941]: 2026-01-27 13:48:38.159 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 419 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 1.9 MiB/s wr, 96 op/s
Jan 27 08:48:38 np0005597378 nova_compute[238941]: 2026-01-27 13:48:38.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:48:38 np0005597378 nova_compute[238941]: 2026-01-27 13:48:38.905 238945 DEBUG nova.compute.manager [None req-e6551cd3-99e8-43a1-87f2-72dbea552294 - - - - - -] [instance: 50c6d534-e937-4148-851e-4ec51e067875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.209 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.210 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.211 238945 INFO nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Creating image(s)#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.232 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.256 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.277 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.280 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.345 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.346 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.347 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.347 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.366 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.369 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b3294f5e-4a09-45dd-af30-58436db2ff72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Jan 27 08:48:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Jan 27 08:48:39 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.639 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b3294f5e-4a09-45dd-af30-58436db2ff72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.709 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] resizing rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:48:39 np0005597378 nova_compute[238941]: 2026-01-27 13:48:39.813 238945 DEBUG nova.objects.instance [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'migration_context' on Instance uuid b3294f5e-4a09-45dd-af30-58436db2ff72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.104 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.105 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Ensure instance console log exists: /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.105 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.105 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.106 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 359 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.4 MiB/s wr, 187 op/s
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.702 238945 DEBUG nova.network.neutron [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Successfully created port: 4be0b4d3-9e06-4332-af56-9c381e484852 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.793 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.794 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.794 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.795 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.795 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.796 238945 INFO nova.compute.manager [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Terminating instance#033[00m
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.797 238945 DEBUG nova.compute.manager [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:48:40 np0005597378 kernel: tap9005c867-83 (unregistering): left promiscuous mode
Jan 27 08:48:40 np0005597378 NetworkManager[48904]: <info>  [1769521720.8287] device (tap9005c867-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.837 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:40 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:40Z|00312|binding|INFO|Releasing lport 9005c867-83d2-40fe-a9c6-8abeb0537249 from this chassis (sb_readonly=0)
Jan 27 08:48:40 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:40Z|00313|binding|INFO|Setting lport 9005c867-83d2-40fe-a9c6-8abeb0537249 down in Southbound
Jan 27 08:48:40 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:40Z|00314|binding|INFO|Removing iface tap9005c867-83 ovn-installed in OVS
Jan 27 08:48:40 np0005597378 nova_compute[238941]: 2026-01-27 13:48:40.862 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:40.864 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:65:71 10.100.0.11'], port_security=['fa:16:3e:6f:65:71 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '551ba990-3708-4f5d-851a-6cd84303bab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9005c867-83d2-40fe-a9c6-8abeb0537249) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:48:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:40.866 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9005c867-83d2-40fe-a9c6-8abeb0537249 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis#033[00m
Jan 27 08:48:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:40.867 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:48:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:40.868 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ec74f5-231a-4b47-a3e8-004b972262b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:40.869 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace which is not needed anymore#033[00m
Jan 27 08:48:40 np0005597378 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000026.scope: Deactivated successfully.
Jan 27 08:48:40 np0005597378 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000026.scope: Consumed 7.671s CPU time.
Jan 27 08:48:40 np0005597378 systemd-machined[207425]: Machine qemu-44-instance-00000026 terminated.
Jan 27 08:48:40 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [NOTICE]   (276951) : haproxy version is 2.8.14-c23fe91
Jan 27 08:48:40 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [NOTICE]   (276951) : path to executable is /usr/sbin/haproxy
Jan 27 08:48:40 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [WARNING]  (276951) : Exiting Master process...
Jan 27 08:48:40 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [ALERT]    (276951) : Current worker (276953) exited with code 143 (Terminated)
Jan 27 08:48:40 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[276947]: [WARNING]  (276951) : All workers exited. Exiting... (0)
Jan 27 08:48:40 np0005597378 systemd[1]: libpod-b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe.scope: Deactivated successfully.
Jan 27 08:48:41 np0005597378 podman[277175]: 2026-01-27 13:48:41.005628235 +0000 UTC m=+0.043410817 container died b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:48:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe-userdata-shm.mount: Deactivated successfully.
Jan 27 08:48:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ea8203f8f5df80ddb58e033631c7b0b0361349dfc94002e2a118884845a52558-merged.mount: Deactivated successfully.
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.042 238945 INFO nova.virt.libvirt.driver [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Instance destroyed successfully.#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.044 238945 DEBUG nova.objects.instance [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'resources' on Instance uuid 551ba990-3708-4f5d-851a-6cd84303bab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:41 np0005597378 podman[277175]: 2026-01-27 13:48:41.04568255 +0000 UTC m=+0.083465132 container cleanup b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:48:41 np0005597378 systemd[1]: libpod-conmon-b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe.scope: Deactivated successfully.
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.095 238945 DEBUG nova.virt.libvirt.vif [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:47:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-76528564',display_name='tempest-ServerDiskConfigTestJSON-server-76528564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-76528564',id=38,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:48:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-33bfu074',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:48:36Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=551ba990-3708-4f5d-851a-6cd84303bab9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.096 238945 DEBUG nova.network.os_vif_util [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "9005c867-83d2-40fe-a9c6-8abeb0537249", "address": "fa:16:3e:6f:65:71", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9005c867-83", "ovs_interfaceid": "9005c867-83d2-40fe-a9c6-8abeb0537249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.097 238945 DEBUG nova.network.os_vif_util [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.098 238945 DEBUG os_vif [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.100 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9005c867-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.103 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.105 238945 INFO os_vif [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:65:71,bridge_name='br-int',has_traffic_filtering=True,id=9005c867-83d2-40fe-a9c6-8abeb0537249,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9005c867-83')#033[00m
Jan 27 08:48:41 np0005597378 podman[277211]: 2026-01-27 13:48:41.111530518 +0000 UTC m=+0.042671606 container remove b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:48:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.120 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7fbc06-13f0-4880-9574-fcff0f9233a9]: (4, ('Tue Jan 27 01:48:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe)\nb9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe\nTue Jan 27 01:48:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (b9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe)\nb9365bef4653e4ed8fb823278630a1fff8d04f7e335f8c5ea7554a0ffaed2bfe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.122 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[990fd104-3ddb-4fc8-974b-007e66528a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.123 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:41 np0005597378 kernel: tap4856e57c-d0: left promiscuous mode
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.144 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.148 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef22e67-79e9-44d8-93cd-58000981d817]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.159 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[54062675-262a-4fcf-aaf1-44d54a8c4e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.161 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b964603b-b055-4090-9a77-427399784383]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.174 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eb924fa3-979f-40a3-9bde-debfbdc44ea1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435301, 'reachable_time': 24851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277244, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:41 np0005597378 systemd[1]: run-netns-ovnmeta\x2d4856e57c\x2ddca4\x2d4180\x2db9d9\x2d3b2eced0f054.mount: Deactivated successfully.
Jan 27 08:48:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.178 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:48:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:41.178 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ee6413-0c7b-4e68-86eb-be575f33b260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.359 238945 INFO nova.virt.libvirt.driver [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deleting instance files /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9_del#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.360 238945 INFO nova.virt.libvirt.driver [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deletion of /var/lib/nova/instances/551ba990-3708-4f5d-851a-6cd84303bab9_del complete#033[00m
Jan 27 08:48:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Jan 27 08:48:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Jan 27 08:48:41 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.603 238945 INFO nova.compute.manager [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.604 238945 DEBUG oslo.service.loopingcall [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.604 238945 DEBUG nova.compute.manager [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.604 238945 DEBUG nova.network.neutron [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:48:41 np0005597378 nova_compute[238941]: 2026-01-27 13:48:41.864 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:48:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Jan 27 08:48:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Jan 27 08:48:41 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Jan 27 08:48:42 np0005597378 nova_compute[238941]: 2026-01-27 13:48:42.244 238945 DEBUG nova.network.neutron [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Successfully updated port: 4be0b4d3-9e06-4332-af56-9c381e484852 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:48:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 359 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.3 MiB/s wr, 266 op/s
Jan 27 08:48:42 np0005597378 nova_compute[238941]: 2026-01-27 13:48:42.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:48:42 np0005597378 nova_compute[238941]: 2026-01-27 13:48:42.564 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "refresh_cache-b3294f5e-4a09-45dd-af30-58436db2ff72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:48:42 np0005597378 nova_compute[238941]: 2026-01-27 13:48:42.564 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquired lock "refresh_cache-b3294f5e-4a09-45dd-af30-58436db2ff72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:48:42 np0005597378 nova_compute[238941]: 2026-01-27 13:48:42.564 238945 DEBUG nova.network.neutron [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:48:42 np0005597378 nova_compute[238941]: 2026-01-27 13:48:42.606 238945 DEBUG nova.compute.manager [req-fab18347-4aa1-40d5-b435-5884a9229220 req-6614af6e-3077-49f6-9f28-d0c5a0f4b7fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-unplugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:42 np0005597378 nova_compute[238941]: 2026-01-27 13:48:42.607 238945 DEBUG oslo_concurrency.lockutils [req-fab18347-4aa1-40d5-b435-5884a9229220 req-6614af6e-3077-49f6-9f28-d0c5a0f4b7fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:42 np0005597378 nova_compute[238941]: 2026-01-27 13:48:42.607 238945 DEBUG oslo_concurrency.lockutils [req-fab18347-4aa1-40d5-b435-5884a9229220 req-6614af6e-3077-49f6-9f28-d0c5a0f4b7fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:42 np0005597378 nova_compute[238941]: 2026-01-27 13:48:42.607 238945 DEBUG oslo_concurrency.lockutils [req-fab18347-4aa1-40d5-b435-5884a9229220 req-6614af6e-3077-49f6-9f28-d0c5a0f4b7fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:42 np0005597378 nova_compute[238941]: 2026-01-27 13:48:42.608 238945 DEBUG nova.compute.manager [req-fab18347-4aa1-40d5-b435-5884a9229220 req-6614af6e-3077-49f6-9f28-d0c5a0f4b7fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] No waiting events found dispatching network-vif-unplugged-9005c867-83d2-40fe-a9c6-8abeb0537249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:48:42 np0005597378 nova_compute[238941]: 2026-01-27 13:48:42.608 238945 DEBUG nova.compute.manager [req-fab18347-4aa1-40d5-b435-5884a9229220 req-6614af6e-3077-49f6-9f28-d0c5a0f4b7fa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-unplugged-9005c867-83d2-40fe-a9c6-8abeb0537249 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:48:42 np0005597378 nova_compute[238941]: 2026-01-27 13:48:42.864 238945 DEBUG nova.network.neutron [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:48:42 np0005597378 nova_compute[238941]: 2026-01-27 13:48:42.965 238945 DEBUG nova.network.neutron [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:43 np0005597378 nova_compute[238941]: 2026-01-27 13:48:43.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:48:43 np0005597378 nova_compute[238941]: 2026-01-27 13:48:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:48:43 np0005597378 nova_compute[238941]: 2026-01-27 13:48:43.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:48:43 np0005597378 nova_compute[238941]: 2026-01-27 13:48:43.484 238945 INFO nova.compute.manager [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Took 1.88 seconds to deallocate network for instance.#033[00m
Jan 27 08:48:43 np0005597378 nova_compute[238941]: 2026-01-27 13:48:43.686 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 27 08:48:43 np0005597378 nova_compute[238941]: 2026-01-27 13:48:43.687 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 27 08:48:43 np0005597378 nova_compute[238941]: 2026-01-27 13:48:43.839 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:43 np0005597378 nova_compute[238941]: 2026-01-27 13:48:43.840 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 276 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 318 op/s
Jan 27 08:48:44 np0005597378 nova_compute[238941]: 2026-01-27 13:48:44.456 238945 DEBUG nova.scheduler.client.report [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 08:48:44 np0005597378 nova_compute[238941]: 2026-01-27 13:48:44.880 238945 DEBUG nova.scheduler.client.report [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 08:48:44 np0005597378 nova_compute[238941]: 2026-01-27 13:48:44.880 238945 DEBUG nova.compute.provider_tree [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.420 238945 DEBUG nova.compute.manager [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-changed-4be0b4d3-9e06-4332-af56-9c381e484852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.420 238945 DEBUG nova.compute.manager [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Refreshing instance network info cache due to event network-changed-4be0b4d3-9e06-4332-af56-9c381e484852. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.421 238945 DEBUG oslo_concurrency.lockutils [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b3294f5e-4a09-45dd-af30-58436db2ff72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.424 238945 DEBUG nova.scheduler.client.report [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.563 238945 DEBUG nova.scheduler.client.report [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.671 238945 DEBUG oslo_concurrency.processutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.705 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-e03449f9-27f7-4c89-8d13-5f4a688e2b1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.705 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-e03449f9-27f7-4c89-8d13-5f4a688e2b1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.706 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.706 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e03449f9-27f7-4c89-8d13-5f4a688e2b1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:45 np0005597378 podman[277246]: 2026-01-27 13:48:45.753096733 +0000 UTC m=+0.089213156 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.918 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "49158813-53e9-4c5a-9141-7646d98a93e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.918 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "49158813-53e9-4c5a-9141-7646d98a93e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.918 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "49158813-53e9-4c5a-9141-7646d98a93e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.919 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "49158813-53e9-4c5a-9141-7646d98a93e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.919 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "49158813-53e9-4c5a-9141-7646d98a93e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.920 238945 INFO nova.compute.manager [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Terminating instance#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.921 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "refresh_cache-49158813-53e9-4c5a-9141-7646d98a93e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.921 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquired lock "refresh_cache-49158813-53e9-4c5a-9141-7646d98a93e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.921 238945 DEBUG nova.network.neutron [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:48:45 np0005597378 nova_compute[238941]: 2026-01-27 13:48:45.952 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.106 238945 DEBUG nova.network.neutron [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:48:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:48:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694368789' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.203 238945 DEBUG nova.compute.manager [req-293ea83f-7083-4d88-b425-7a3cadd7f6e4 req-ca64ce1b-3ac4-4043-9c16-6df3390cb2f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.204 238945 DEBUG oslo_concurrency.lockutils [req-293ea83f-7083-4d88-b425-7a3cadd7f6e4 req-ca64ce1b-3ac4-4043-9c16-6df3390cb2f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.204 238945 DEBUG oslo_concurrency.lockutils [req-293ea83f-7083-4d88-b425-7a3cadd7f6e4 req-ca64ce1b-3ac4-4043-9c16-6df3390cb2f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.204 238945 DEBUG oslo_concurrency.lockutils [req-293ea83f-7083-4d88-b425-7a3cadd7f6e4 req-ca64ce1b-3ac4-4043-9c16-6df3390cb2f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.205 238945 DEBUG nova.compute.manager [req-293ea83f-7083-4d88-b425-7a3cadd7f6e4 req-ca64ce1b-3ac4-4043-9c16-6df3390cb2f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] No waiting events found dispatching network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.205 238945 WARNING nova.compute.manager [req-293ea83f-7083-4d88-b425-7a3cadd7f6e4 req-ca64ce1b-3ac4-4043-9c16-6df3390cb2f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received unexpected event network-vif-plugged-9005c867-83d2-40fe-a9c6-8abeb0537249 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.205 238945 DEBUG oslo_concurrency.processutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.212 238945 DEBUG nova.compute.provider_tree [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:48:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 246 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 613 KiB/s rd, 3.2 MiB/s wr, 199 op/s
Jan 27 08:48:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:46.296 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:46.297 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:46.297 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.313 238945 DEBUG nova.scheduler.client.report [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.376 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.472 238945 INFO nova.scheduler.client.report [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Deleted allocations for instance 551ba990-3708-4f5d-851a-6cd84303bab9#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.495 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.571 238945 DEBUG nova.network.neutron [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.588 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-e03449f9-27f7-4c89-8d13-5f4a688e2b1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.589 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.589 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.589 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.589 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.590 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.614 238945 DEBUG nova.network.neutron [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Updating instance_info_cache with network_info: [{"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.618 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.618 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.618 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.618 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.619 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.646 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Releasing lock "refresh_cache-49158813-53e9-4c5a-9141-7646d98a93e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.647 238945 DEBUG nova.compute.manager [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.678 238945 DEBUG oslo_concurrency.lockutils [None req-9aa68108-8777-48a6-bec7-5cd1d7443b4d 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "551ba990-3708-4f5d-851a-6cd84303bab9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:46 np0005597378 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Deactivated successfully.
Jan 27 08:48:46 np0005597378 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Consumed 14.135s CPU time.
Jan 27 08:48:46 np0005597378 systemd-machined[207425]: Machine qemu-41-instance-00000025 terminated.
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.734 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.734 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.786 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Releasing lock "refresh_cache-b3294f5e-4a09-45dd-af30-58436db2ff72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.787 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Instance network_info: |[{"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.787 238945 DEBUG oslo_concurrency.lockutils [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b3294f5e-4a09-45dd-af30-58436db2ff72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.787 238945 DEBUG nova.network.neutron [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Refreshing network info cache for port 4be0b4d3-9e06-4332-af56-9c381e484852 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.790 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Start _get_guest_xml network_info=[{"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.794 238945 WARNING nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.799 238945 DEBUG nova.virt.libvirt.host [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.800 238945 DEBUG nova.virt.libvirt.host [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.803 238945 DEBUG nova.virt.libvirt.host [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.804 238945 DEBUG nova.virt.libvirt.host [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.804 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.804 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.805 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.805 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.805 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.806 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.806 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.806 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.807 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.807 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.807 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.807 238945 DEBUG nova.virt.hardware [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.811 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.845 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.866 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.870 238945 INFO nova.virt.libvirt.driver [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Instance destroyed successfully.#033[00m
Jan 27 08:48:46 np0005597378 nova_compute[238941]: 2026-01-27 13:48:46.870 238945 DEBUG nova.objects.instance [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lazy-loading 'resources' on Instance uuid 49158813-53e9-4c5a-9141-7646d98a93e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:48:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Jan 27 08:48:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Jan 27 08:48:47 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Jan 27 08:48:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:48:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3659655999' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.201 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.208 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.209 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.215 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.215 238945 INFO nova.compute.claims [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:48:47 np0005597378 podman[277356]: 2026-01-27 13:48:47.318124434 +0000 UTC m=+0.072391055 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:48:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:48:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4118447251' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.357 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.504 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.508 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.619 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.620 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.623 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.623 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.657 238945 DEBUG nova.compute.manager [req-9aaa2af2-12ae-4d76-b455-564f9aed828e req-cd23e6f8-0269-4650-acef-e7e4dc7f5715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Received event network-vif-deleted-9005c867-83d2-40fe-a9c6-8abeb0537249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.738 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:48:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:48:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:48:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:48:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:48:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.820 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.821 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3883MB free_disk=59.876304518431425GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:48:47 np0005597378 nova_compute[238941]: 2026-01-27 13:48:47.821 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:48:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2445708363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.152 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.153 238945 DEBUG nova.virt.libvirt.vif [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:48:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1563749021',display_name='tempest-ImagesTestJSON-server-1563749021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1563749021',id=40,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ua530r9o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:38Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=b3294f5e-4a09-45dd-af30-58436db2ff72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.153 238945 DEBUG nova.network.os_vif_util [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.154 238945 DEBUG nova.network.os_vif_util [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.155 238945 DEBUG nova.objects.instance [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'pci_devices' on Instance uuid b3294f5e-4a09-45dd-af30-58436db2ff72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.177 238945 INFO nova.virt.libvirt.driver [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Deleting instance files /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1_del#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.178 238945 INFO nova.virt.libvirt.driver [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Deletion of /var/lib/nova/instances/49158813-53e9-4c5a-9141-7646d98a93e1_del complete#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.183 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  <uuid>b3294f5e-4a09-45dd-af30-58436db2ff72</uuid>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  <name>instance-00000028</name>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <nova:name>tempest-ImagesTestJSON-server-1563749021</nova:name>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:48:46</nova:creationTime>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:        <nova:user uuid="7dedc0f04f3d455682ea65fc37a49f06">tempest-ImagesTestJSON-1064968599-project-member</nova:user>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:        <nova:project uuid="b041f051267f4a3c8518d3042922678a">tempest-ImagesTestJSON-1064968599</nova:project>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:        <nova:port uuid="4be0b4d3-9e06-4332-af56-9c381e484852">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <entry name="serial">b3294f5e-4a09-45dd-af30-58436db2ff72</entry>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <entry name="uuid">b3294f5e-4a09-45dd-af30-58436db2ff72</entry>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b3294f5e-4a09-45dd-af30-58436db2ff72_disk">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b3294f5e-4a09-45dd-af30-58436db2ff72_disk.config">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:26:3b:31"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <target dev="tap4be0b4d3-9e"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/console.log" append="off"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:48:48 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:48:48 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:48:48 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:48:48 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.183 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Preparing to wait for external event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.183 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.183 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.183 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.184 238945 DEBUG nova.virt.libvirt.vif [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:48:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1563749021',display_name='tempest-ImagesTestJSON-server-1563749021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1563749021',id=40,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ua530r9o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:38Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=b3294f5e-4a09-45dd-af30-58436db2ff72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.184 238945 DEBUG nova.network.os_vif_util [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.185 238945 DEBUG nova.network.os_vif_util [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.185 238945 DEBUG os_vif [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.186 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.187 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.188 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.188 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4be0b4d3-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.189 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4be0b4d3-9e, col_values=(('external_ids', {'iface-id': '4be0b4d3-9e06-4332-af56-9c381e484852', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:3b:31', 'vm-uuid': 'b3294f5e-4a09-45dd-af30-58436db2ff72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:48 np0005597378 NetworkManager[48904]: <info>  [1769521728.1914] manager: (tap4be0b4d3-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.194 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.197 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.197 238945 INFO os_vif [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e')#033[00m
Jan 27 08:48:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 246 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 1.6 MiB/s wr, 139 op/s
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.291 238945 INFO nova.compute.manager [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Took 1.64 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.291 238945 DEBUG oslo.service.loopingcall [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.292 238945 DEBUG nova.compute.manager [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.292 238945 DEBUG nova.network.neutron [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.329 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.329 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.329 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No VIF found with MAC fa:16:3e:26:3b:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.330 238945 INFO nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Using config drive#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.347 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:48:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1969198426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.429 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.692s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.434 238945 DEBUG nova.compute.provider_tree [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.448 238945 DEBUG nova.network.neutron [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.524 238945 DEBUG nova.network.neutron [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.526 238945 DEBUG nova.scheduler.client.report [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.597 238945 INFO nova.compute.manager [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Took 0.31 seconds to deallocate network for instance.#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.823 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.824 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.827 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.982 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.990 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:48:48 np0005597378 nova_compute[238941]: 2026-01-27 13:48:48.991 238945 DEBUG nova.network.neutron [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.032 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e03449f9-27f7-4c89-8d13-5f4a688e2b1b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.033 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 49158813-53e9-4c5a-9141-7646d98a93e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.033 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b3294f5e-4a09-45dd-af30-58436db2ff72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.034 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9505af7f-b4b1-45a4-9350-98fd525ce36e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.074 238945 INFO nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Creating config drive at /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/disk.config#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.080 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejv_97cv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.141 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e053f779-294f-4782-bb33-a14e40753795 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.143 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.143 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.147 238945 INFO nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.181 238945 DEBUG nova.policy [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '618e06758ec244289bb6f2258e3df2da', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a34b23d56029482fbb58a6be97575a37', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.215 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejv_97cv" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.243 238945 DEBUG nova.storage.rbd_utils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] rbd image b3294f5e-4a09-45dd-af30-58436db2ff72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.247 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/disk.config b3294f5e-4a09-45dd-af30-58436db2ff72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.320 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.330 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.377 238945 DEBUG oslo_concurrency.processutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/disk.config b3294f5e-4a09-45dd-af30-58436db2ff72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.378 238945 INFO nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Deleting local config drive /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72/disk.config because it was imported into RBD.#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.390 238945 DEBUG nova.network.neutron [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Updated VIF entry in instance network info cache for port 4be0b4d3-9e06-4332-af56-9c381e484852. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.391 238945 DEBUG nova.network.neutron [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Updating instance_info_cache with network_info: [{"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.407 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.408 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:49 np0005597378 kernel: tap4be0b4d3-9e: entered promiscuous mode
Jan 27 08:48:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:49Z|00315|binding|INFO|Claiming lport 4be0b4d3-9e06-4332-af56-9c381e484852 for this chassis.
Jan 27 08:48:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:49Z|00316|binding|INFO|4be0b4d3-9e06-4332-af56-9c381e484852: Claiming fa:16:3e:26:3b:31 10.100.0.11
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.434 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:49 np0005597378 systemd-udevd[277300]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.437 238945 DEBUG oslo_concurrency.lockutils [req-093d5e5e-4b1e-47f4-b582-897af64af8d8 req-6b89e152-04fa-4f04-a74a-eeb5561a5882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b3294f5e-4a09-45dd-af30-58436db2ff72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:48:49 np0005597378 NetworkManager[48904]: <info>  [1769521729.4392] manager: (tap4be0b4d3-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.441 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:3b:31 10.100.0.11'], port_security=['fa:16:3e:26:3b:31 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b3294f5e-4a09-45dd-af30-58436db2ff72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4be0b4d3-9e06-4332-af56-9c381e484852) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.442 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4be0b4d3-9e06-4332-af56-9c381e484852 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 bound to our chassis#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.443 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e25f7657-3ed6-425c-8132-1b5c417564a5#033[00m
Jan 27 08:48:49 np0005597378 NetworkManager[48904]: <info>  [1769521729.4518] device (tap4be0b4d3-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:48:49 np0005597378 NetworkManager[48904]: <info>  [1769521729.4550] device (tap4be0b4d3-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.456 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9c1ada-b707-4c0a-8735-d90c16d77bd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.457 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape25f7657-31 in ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.460 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape25f7657-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.460 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6756292-0964-4a48-ac16-2e31f514e918]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.461 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a823097e-ce44-4e6d-b837-057403e7f268]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:49Z|00317|binding|INFO|Setting lport 4be0b4d3-9e06-4332-af56-9c381e484852 ovn-installed in OVS
Jan 27 08:48:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:49Z|00318|binding|INFO|Setting lport 4be0b4d3-9e06-4332-af56-9c381e484852 up in Southbound
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.476 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[161dad6e-7b20-4891-9cb1-05154dafa932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 systemd-machined[207425]: New machine qemu-45-instance-00000028.
Jan 27 08:48:49 np0005597378 systemd[1]: Started Virtual Machine qemu-45-instance-00000028.
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.493 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3674e3de-3967-4fb9-9d77-199e28f06401]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.513 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.521 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[39c36881-373d-48ca-a041-6412f2e959dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.526 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[960f658d-0bef-41cd-b34f-1c7fd6134e03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 NetworkManager[48904]: <info>  [1769521729.5279] manager: (tape25f7657-30): new Veth device (/org/freedesktop/NetworkManager/Devices/145)
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.558 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[23a7bd20-3083-4d0b-ad56-9f417fb67e33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.563 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[772fe0a8-d3a7-4a5f-a3c0-39d3410f68d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 NetworkManager[48904]: <info>  [1769521729.5854] device (tape25f7657-30): carrier: link connected
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.594 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b96626-7a1c-47b8-b075-ece82396b434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.612 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be4b05d7-677a-4722-8fa8-59e6816ffd51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436920, 'reachable_time': 16553, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277569, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.636 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd472e2-cd31-42ef-8212-b3a4014c12c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:da8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436920, 'tstamp': 436920}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277570, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.656 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[244f2054-4a85-4ca8-a1f9-0435d18cec7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape25f7657-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:da:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436920, 'reachable_time': 16553, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277571, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.694 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c01e9950-05d3-44e5-a6da-9e45077f9028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.765 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d359f6e-9d4b-45b9-98e7-cf0ed3b166e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.767 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.767 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.767 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape25f7657-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:49 np0005597378 kernel: tape25f7657-30: entered promiscuous mode
Jan 27 08:48:49 np0005597378 NetworkManager[48904]: <info>  [1769521729.7699] manager: (tape25f7657-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.773 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape25f7657-30, col_values=(('external_ids', {'iface-id': 'be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.774 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:48:49Z|00319|binding|INFO|Releasing lport be41bcc2-ba1f-4251-9c4e-2fdf5a46b18f from this chassis (sb_readonly=0)
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.775 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.776 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b126f8-ecd4-4bea-a624-db2cc4779591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.777 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/e25f7657-3ed6-425c-8132-1b5c417564a5.pid.haproxy
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID e25f7657-3ed6-425c-8132-1b5c417564a5
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:48:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:49.778 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'env', 'PROCESS_TAG=haproxy-e25f7657-3ed6-425c-8132-1b5c417564a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e25f7657-3ed6-425c-8132-1b5c417564a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.795 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.867 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521729.8666506, b3294f5e-4a09-45dd-af30-58436db2ff72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.867 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] VM Started (Lifecycle Event)#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.906 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.907 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.907 238945 INFO nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Creating image(s)#033[00m
Jan 27 08:48:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:48:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3245816401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.924 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.944 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.964 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.967 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:49 np0005597378 nova_compute[238941]: 2026-01-27 13:48:49.994 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.002 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.006 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521729.8674378, b3294f5e-4a09-45dd-af30-58436db2ff72 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.007 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.011 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.018 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.033 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.033 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.034 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.034 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.055 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.060 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.106 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.112 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.115 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:48:50 np0005597378 podman[277723]: 2026-01-27 13:48:50.119465038 +0000 UTC m=+0.049062199 container create c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:48:50 np0005597378 systemd[1]: Started libpod-conmon-c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73.scope.
Jan 27 08:48:50 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:48:50 np0005597378 podman[277723]: 2026-01-27 13:48:50.093610243 +0000 UTC m=+0.023207424 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:48:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050e611915286b45aef2d22991dc7fb146c651ef9e37ac6957536377f190bcb1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:48:50 np0005597378 podman[277723]: 2026-01-27 13:48:50.223244955 +0000 UTC m=+0.152842136 container init c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:48:50 np0005597378 podman[277723]: 2026-01-27 13:48:50.229694648 +0000 UTC m=+0.159291809 container start c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.232 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:48:50 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [NOTICE]   (277761) : New worker (277763) forked
Jan 27 08:48:50 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [NOTICE]   (277761) : Loading success.
Jan 27 08:48:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 206 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 1.3 MiB/s wr, 124 op/s
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.317 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.317 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.318 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.320 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.376 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] resizing rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.451 238945 DEBUG nova.objects.instance [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'migration_context' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.486 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.486 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Ensure instance console log exists: /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.487 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.487 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.487 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:50 np0005597378 nova_compute[238941]: 2026-01-27 13:48:50.571 238945 DEBUG oslo_concurrency.processutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:48:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2538857301' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.134 238945 DEBUG oslo_concurrency.processutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.140 238945 DEBUG nova.compute.provider_tree [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.176 238945 DEBUG nova.scheduler.client.report [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.270 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.273 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.280 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.281 238945 INFO nova.compute.claims [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.311 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.311 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.433 238945 INFO nova.scheduler.client.report [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Deleted allocations for instance 49158813-53e9-4c5a-9141-7646d98a93e1#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.679 238945 DEBUG oslo_concurrency.lockutils [None req-79cce645-af0e-4f26-a00d-a0fc5eba9cba e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "49158813-53e9-4c5a-9141-7646d98a93e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.754 238945 DEBUG nova.compute.manager [req-1c80affe-fc52-4fee-8a44-e1453185392b req-6890a8a0-f9ca-4c81-a03f-0a7c15894426 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.754 238945 DEBUG oslo_concurrency.lockutils [req-1c80affe-fc52-4fee-8a44-e1453185392b req-6890a8a0-f9ca-4c81-a03f-0a7c15894426 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.754 238945 DEBUG oslo_concurrency.lockutils [req-1c80affe-fc52-4fee-8a44-e1453185392b req-6890a8a0-f9ca-4c81-a03f-0a7c15894426 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.754 238945 DEBUG oslo_concurrency.lockutils [req-1c80affe-fc52-4fee-8a44-e1453185392b req-6890a8a0-f9ca-4c81-a03f-0a7c15894426 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.755 238945 DEBUG nova.compute.manager [req-1c80affe-fc52-4fee-8a44-e1453185392b req-6890a8a0-f9ca-4c81-a03f-0a7c15894426 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Processing event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.755 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.760 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521731.760585, b3294f5e-4a09-45dd-af30-58436db2ff72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.761 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.762 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.764 238945 INFO nova.virt.libvirt.driver [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Instance spawned successfully.#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.765 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.781 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.820 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.827 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.831 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.831 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.832 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.832 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.833 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.833 238945 DEBUG nova.virt.libvirt.driver [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.866 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.868 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.898 238945 INFO nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Took 12.69 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.898 238945 DEBUG nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.958 238945 INFO nova.compute.manager [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Took 19.28 seconds to build instance.#033[00m
Jan 27 08:48:51 np0005597378 nova_compute[238941]: 2026-01-27 13:48:51.974 238945 DEBUG oslo_concurrency.lockutils [None req-2cb32b16-9a41-49c4-9210-ea9c649218aa 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.224 238945 DEBUG nova.network.neutron [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Successfully created port: ec45493d-696f-479c-a443-7428a58bd860 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:48:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 206 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.1 MiB/s wr, 103 op/s
Jan 27 08:48:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:48:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1572695630' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.328 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.334 238945 DEBUG nova.compute.provider_tree [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.374 238945 DEBUG nova.scheduler.client.report [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.412 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.412 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.471 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.472 238945 DEBUG nova.network.neutron [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.501 238945 INFO nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.575 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.630 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.633 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.634 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.634 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.634 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.636 238945 INFO nova.compute.manager [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Terminating instance#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.637 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "refresh_cache-e03449f9-27f7-4c89-8d13-5f4a688e2b1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.637 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquired lock "refresh_cache-e03449f9-27f7-4c89-8d13-5f4a688e2b1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.638 238945 DEBUG nova.network.neutron [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.642 238945 DEBUG nova.policy [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11a9e491e7f24607aa5d3d710b6607ab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.855 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.857 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.857 238945 INFO nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Creating image(s)#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.876 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.901 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.921 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.924 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.990 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.991 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.991 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:52 np0005597378 nova_compute[238941]: 2026-01-27 13:48:52.992 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.011 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.015 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e053f779-294f-4782-bb33-a14e40753795_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.191 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.294 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e053f779-294f-4782-bb33-a14e40753795_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.346 238945 DEBUG nova.network.neutron [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.354 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] resizing rbd image e053f779-294f-4782-bb33-a14e40753795_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.460 238945 DEBUG nova.objects.instance [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'migration_context' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.656 238945 DEBUG nova.network.neutron [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.714 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.714 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Ensure instance console log exists: /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.715 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.715 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.715 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.757 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Releasing lock "refresh_cache-e03449f9-27f7-4c89-8d13-5f4a688e2b1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.757 238945 DEBUG nova.compute.manager [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:48:53 np0005597378 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Deactivated successfully.
Jan 27 08:48:53 np0005597378 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Consumed 14.470s CPU time.
Jan 27 08:48:53 np0005597378 systemd-machined[207425]: Machine qemu-40-instance-00000024 terminated.
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.976 238945 INFO nova.virt.libvirt.driver [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance destroyed successfully.#033[00m
Jan 27 08:48:53 np0005597378 nova_compute[238941]: 2026-01-27 13:48:53.977 238945 DEBUG nova.objects.instance [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lazy-loading 'resources' on Instance uuid e03449f9-27f7-4c89-8d13-5f4a688e2b1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:48:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 211 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 107 op/s
Jan 27 08:48:54 np0005597378 nova_compute[238941]: 2026-01-27 13:48:54.374 238945 INFO nova.virt.libvirt.driver [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Deleting instance files /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b_del#033[00m
Jan 27 08:48:54 np0005597378 nova_compute[238941]: 2026-01-27 13:48:54.375 238945 INFO nova.virt.libvirt.driver [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Deletion of /var/lib/nova/instances/e03449f9-27f7-4c89-8d13-5f4a688e2b1b_del complete#033[00m
Jan 27 08:48:54 np0005597378 nova_compute[238941]: 2026-01-27 13:48:54.429 238945 INFO nova.compute.manager [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:48:54 np0005597378 nova_compute[238941]: 2026-01-27 13:48:54.430 238945 DEBUG oslo.service.loopingcall [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:48:54 np0005597378 nova_compute[238941]: 2026-01-27 13:48:54.430 238945 DEBUG nova.compute.manager [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:48:54 np0005597378 nova_compute[238941]: 2026-01-27 13:48:54.430 238945 DEBUG nova.network.neutron [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:48:54 np0005597378 nova_compute[238941]: 2026-01-27 13:48:54.715 238945 DEBUG nova.network.neutron [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:48:54 np0005597378 nova_compute[238941]: 2026-01-27 13:48:54.814 238945 DEBUG nova.network.neutron [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.007 238945 DEBUG nova.compute.manager [req-c735aa84-c128-48f9-b7d2-10d0ce12a4be req-08c97bf6-3cba-4a8b-90a6-dfa4a25a5e4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.007 238945 DEBUG oslo_concurrency.lockutils [req-c735aa84-c128-48f9-b7d2-10d0ce12a4be req-08c97bf6-3cba-4a8b-90a6-dfa4a25a5e4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.008 238945 DEBUG oslo_concurrency.lockutils [req-c735aa84-c128-48f9-b7d2-10d0ce12a4be req-08c97bf6-3cba-4a8b-90a6-dfa4a25a5e4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.008 238945 DEBUG oslo_concurrency.lockutils [req-c735aa84-c128-48f9-b7d2-10d0ce12a4be req-08c97bf6-3cba-4a8b-90a6-dfa4a25a5e4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.008 238945 DEBUG nova.compute.manager [req-c735aa84-c128-48f9-b7d2-10d0ce12a4be req-08c97bf6-3cba-4a8b-90a6-dfa4a25a5e4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] No waiting events found dispatching network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.008 238945 WARNING nova.compute.manager [req-c735aa84-c128-48f9-b7d2-10d0ce12a4be req-08c97bf6-3cba-4a8b-90a6-dfa4a25a5e4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received unexpected event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.021 238945 INFO nova.compute.manager [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Took 0.59 seconds to deallocate network for instance.#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.054 238945 DEBUG nova.network.neutron [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Successfully updated port: ec45493d-696f-479c-a443-7428a58bd860 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.298 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.299 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.387 238945 DEBUG oslo_concurrency.processutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.600 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "refresh_cache-9505af7f-b4b1-45a4-9350-98fd525ce36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.601 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquired lock "refresh_cache-9505af7f-b4b1-45a4-9350-98fd525ce36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.601 238945 DEBUG nova.network.neutron [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.642 238945 DEBUG nova.network.neutron [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Successfully created port: ceb7b09e-b635-4570-bcf2-a08115d41365 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:48:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:48:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2962105225' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.944 238945 DEBUG oslo_concurrency.processutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:55 np0005597378 nova_compute[238941]: 2026-01-27 13:48:55.949 238945 DEBUG nova.compute.provider_tree [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:48:56 np0005597378 nova_compute[238941]: 2026-01-27 13:48:56.040 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521721.039473, 551ba990-3708-4f5d-851a-6cd84303bab9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:48:56 np0005597378 nova_compute[238941]: 2026-01-27 13:48:56.041 238945 INFO nova.compute.manager [-] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:48:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 211 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Jan 27 08:48:56 np0005597378 nova_compute[238941]: 2026-01-27 13:48:56.871 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:56 np0005597378 nova_compute[238941]: 2026-01-27 13:48:56.928 238945 DEBUG nova.scheduler.client.report [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:48:56 np0005597378 nova_compute[238941]: 2026-01-27 13:48:56.934 238945 DEBUG nova.compute.manager [None req-b8ecfe56-3677-4fee-9bcb-f06cf35cdf25 - - - - - -] [instance: 551ba990-3708-4f5d-851a-6cd84303bab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:48:57 np0005597378 nova_compute[238941]: 2026-01-27 13:48:57.410 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:57 np0005597378 nova_compute[238941]: 2026-01-27 13:48:57.564 238945 DEBUG nova.compute.manager [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:48:57 np0005597378 nova_compute[238941]: 2026-01-27 13:48:57.610 238945 INFO nova.scheduler.client.report [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Deleted allocations for instance e03449f9-27f7-4c89-8d13-5f4a688e2b1b#033[00m
Jan 27 08:48:57 np0005597378 nova_compute[238941]: 2026-01-27 13:48:57.640 238945 INFO nova.compute.manager [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] instance snapshotting#033[00m
Jan 27 08:48:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:57.647 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:48:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:48:57.648 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:48:57 np0005597378 nova_compute[238941]: 2026-01-27 13:48:57.650 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:57 np0005597378 nova_compute[238941]: 2026-01-27 13:48:57.733 238945 DEBUG nova.network.neutron [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:48:57 np0005597378 nova_compute[238941]: 2026-01-27 13:48:57.775 238945 DEBUG oslo_concurrency.lockutils [None req-118e461d-b7e9-4cb7-b8ce-a1bedf468431 e9ae0a0790eb4ab98f7efc9783b8ae7a 940524337ca54001a9841d70fba0b293 - - default default] Lock "e03449f9-27f7-4c89-8d13-5f4a688e2b1b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:48:57 np0005597378 nova_compute[238941]: 2026-01-27 13:48:57.972 238945 INFO nova.virt.libvirt.driver [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Beginning live snapshot process#033[00m
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.196 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.271 238945 DEBUG nova.network.neutron [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Successfully updated port: ceb7b09e-b635-4570-bcf2-a08115d41365 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.279 238945 DEBUG nova.virt.libvirt.imagebackend [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 08:48:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 196 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 196 op/s
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.288 238945 DEBUG nova.compute.manager [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-changed-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.288 238945 DEBUG nova.compute.manager [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Refreshing instance network info cache due to event network-changed-ec45493d-696f-479c-a443-7428a58bd860. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.288 238945 DEBUG oslo_concurrency.lockutils [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9505af7f-b4b1-45a4-9350-98fd525ce36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.297 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.297 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.298 238945 DEBUG nova.network.neutron [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.427 238945 DEBUG nova.network.neutron [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.521 238945 DEBUG nova.storage.rbd_utils [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(a6fcb8ccf46345c09cbaca5b6f3f6ed9) on rbd image(b3294f5e-4a09-45dd-af30-58436db2ff72_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.579 238945 DEBUG nova.compute.manager [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.579 238945 DEBUG nova.compute.manager [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing instance network info cache due to event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:48:58 np0005597378 nova_compute[238941]: 2026-01-27 13:48:58.580 238945 DEBUG oslo_concurrency.lockutils [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:48:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Jan 27 08:48:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Jan 27 08:48:58 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.078 238945 DEBUG nova.network.neutron [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Updating instance_info_cache with network_info: [{"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.114 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Releasing lock "refresh_cache-9505af7f-b4b1-45a4-9350-98fd525ce36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.115 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance network_info: |[{"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.115 238945 DEBUG oslo_concurrency.lockutils [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9505af7f-b4b1-45a4-9350-98fd525ce36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.115 238945 DEBUG nova.network.neutron [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Refreshing network info cache for port ec45493d-696f-479c-a443-7428a58bd860 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.118 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Start _get_guest_xml network_info=[{"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.123 238945 WARNING nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.130 238945 DEBUG nova.virt.libvirt.host [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.131 238945 DEBUG nova.virt.libvirt.host [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.141 238945 DEBUG nova.virt.libvirt.host [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.142 238945 DEBUG nova.virt.libvirt.host [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.142 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.143 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.143 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.143 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.143 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.144 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.144 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.144 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.144 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.145 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.145 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.145 238945 DEBUG nova.virt.hardware [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.148 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.177 238945 DEBUG nova.storage.rbd_utils [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] cloning vms/b3294f5e-4a09-45dd-af30-58436db2ff72_disk@a6fcb8ccf46345c09cbaca5b6f3f6ed9 to images/32f2fafb-9436-453f-aba2-6eeedf1bd61e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.308 238945 DEBUG nova.storage.rbd_utils [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] flattening images/32f2fafb-9436-453f-aba2-6eeedf1bd61e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.504 238945 DEBUG nova.storage.rbd_utils [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] removing snapshot(a6fcb8ccf46345c09cbaca5b6f3f6ed9) on rbd image(b3294f5e-4a09-45dd-af30-58436db2ff72_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:48:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:48:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309389733' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:48:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:48:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309389733' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.546 238945 DEBUG nova.network.neutron [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.573 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.574 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance network_info: |[{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.574 238945 DEBUG oslo_concurrency.lockutils [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.575 238945 DEBUG nova.network.neutron [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.578 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Start _get_guest_xml network_info=[{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.583 238945 WARNING nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.587 238945 DEBUG nova.virt.libvirt.host [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.588 238945 DEBUG nova.virt.libvirt.host [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.596 238945 DEBUG nova.virt.libvirt.host [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.597 238945 DEBUG nova.virt.libvirt.host [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.597 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.597 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.598 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.598 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.598 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.599 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.599 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.599 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.599 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.600 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.600 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.600 238945 DEBUG nova.virt.hardware [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.603 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:48:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1273159888' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.740 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.763 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:48:59 np0005597378 nova_compute[238941]: 2026-01-27 13:48:59.770 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:48:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Jan 27 08:48:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Jan 27 08:48:59 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.012 238945 DEBUG nova.storage.rbd_utils [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] creating snapshot(snap) on rbd image(32f2fafb-9436-453f-aba2-6eeedf1bd61e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:49:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:49:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/875873887' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.168 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.190 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.197 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 194 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.2 MiB/s wr, 315 op/s
Jan 27 08:49:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:49:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085182704' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.333 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.336 238945 DEBUG nova.virt.libvirt.vif [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1908864562',display_name='tempest-ServerDiskConfigTestJSON-server-1908864562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1908864562',id=41,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-pesxuesj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDi
skConfigTestJSON-580357788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:49Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=9505af7f-b4b1-45a4-9350-98fd525ce36e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.337 238945 DEBUG nova.network.os_vif_util [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.340 238945 DEBUG nova.network.os_vif_util [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.343 238945 DEBUG nova.objects.instance [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.444 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <uuid>9505af7f-b4b1-45a4-9350-98fd525ce36e</uuid>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <name>instance-00000029</name>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1908864562</nova:name>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:48:59</nova:creationTime>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:user uuid="618e06758ec244289bb6f2258e3df2da">tempest-ServerDiskConfigTestJSON-580357788-project-member</nova:user>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:project uuid="a34b23d56029482fbb58a6be97575a37">tempest-ServerDiskConfigTestJSON-580357788</nova:project>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:port uuid="ec45493d-696f-479c-a443-7428a58bd860">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <entry name="serial">9505af7f-b4b1-45a4-9350-98fd525ce36e</entry>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <entry name="uuid">9505af7f-b4b1-45a4-9350-98fd525ce36e</entry>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9505af7f-b4b1-45a4-9350-98fd525ce36e_disk">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:04:d0:64"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <target dev="tapec45493d-69"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/console.log" append="off"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:49:00 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:49:00 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.453 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Preparing to wait for external event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.454 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.454 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.455 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.458 238945 DEBUG nova.virt.libvirt.vif [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1908864562',display_name='tempest-ServerDiskConfigTestJSON-server-1908864562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1908864562',id=41,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-pesxuesj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempes
t-ServerDiskConfigTestJSON-580357788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:49Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=9505af7f-b4b1-45a4-9350-98fd525ce36e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.459 238945 DEBUG nova.network.os_vif_util [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.460 238945 DEBUG nova.network.os_vif_util [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.461 238945 DEBUG os_vif [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.466 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.467 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.474 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec45493d-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.474 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec45493d-69, col_values=(('external_ids', {'iface-id': 'ec45493d-696f-479c-a443-7428a58bd860', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:d0:64', 'vm-uuid': '9505af7f-b4b1-45a4-9350-98fd525ce36e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:00 np0005597378 NetworkManager[48904]: <info>  [1769521740.4787] manager: (tapec45493d-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.480 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.484 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.486 238945 INFO os_vif [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69')#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.572 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.573 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.573 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No VIF found with MAC fa:16:3e:04:d0:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.573 238945 INFO nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Using config drive#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.594 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.690 238945 DEBUG nova.network.neutron [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Updated VIF entry in instance network info cache for port ec45493d-696f-479c-a443-7428a58bd860. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.691 238945 DEBUG nova.network.neutron [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Updating instance_info_cache with network_info: [{"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.710 238945 DEBUG oslo_concurrency.lockutils [req-b24efdc4-c26e-4010-9d9f-1f47057be4c2 req-1a908b5a-f5b2-4df1-bdbe-3d06c39c5913 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9505af7f-b4b1-45a4-9350-98fd525ce36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:49:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:49:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/514066953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.744 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.746 238945 DEBUG nova.virt.libvirt.vif [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:48:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1638292425',display_name='tempest-ServerActionsTestOtherB-server-1638292425',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1638292425',id=42,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDP03C0DYkDkDM16rv5xyWrKTfQIVUT5qLMxRMlYzm8hHmeSnMZhV7Wff2liK7vQEs3cYnPwrKMCJRSQi2claQqUZb9ipt64IX/AxK1O0DzECaHBkBTMxxg75MbSwKsocA==',key_name='tempest-keypair-848214420',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-dk0ibvk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=e053f779-294f-4782-bb33-a14e40753795,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.747 238945 DEBUG nova.network.os_vif_util [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.748 238945 DEBUG nova.network.os_vif_util [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.749 238945 DEBUG nova.objects.instance [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'pci_devices' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.777 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <uuid>e053f779-294f-4782-bb33-a14e40753795</uuid>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <name>instance-0000002a</name>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerActionsTestOtherB-server-1638292425</nova:name>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:48:59</nova:creationTime>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:user uuid="11a9e491e7f24607aa5d3d710b6607ab">tempest-ServerActionsTestOtherB-1311443694-project-member</nova:user>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:project uuid="89715d52c38241dbb1fdcc016ede5d3c">tempest-ServerActionsTestOtherB-1311443694</nova:project>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <nova:port uuid="ceb7b09e-b635-4570-bcf2-a08115d41365">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <entry name="serial">e053f779-294f-4782-bb33-a14e40753795</entry>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <entry name="uuid">e053f779-294f-4782-bb33-a14e40753795</entry>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e053f779-294f-4782-bb33-a14e40753795_disk">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e053f779-294f-4782-bb33-a14e40753795_disk.config">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:ad:be:d8"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <target dev="tapceb7b09e-b6"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/console.log" append="off"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:49:00 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:49:00 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:49:00 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:49:00 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.782 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Preparing to wait for external event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.783 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.784 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.784 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.785 238945 DEBUG nova.virt.libvirt.vif [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:48:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1638292425',display_name='tempest-ServerActionsTestOtherB-server-1638292425',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1638292425',id=42,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDP03C0DYkDkDM16rv5xyWrKTfQIVUT5qLMxRMlYzm8hHmeSnMZhV7Wff2liK7vQEs3cYnPwrKMCJRSQi2claQqUZb9ipt64IX/AxK1O0DzECaHBkBTMxxg75MbSwKsocA==',key_name='tempest-keypair-848214420',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-dk0ibvk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:48:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=e053f779-294f-4782-bb33-a14e40753795,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.786 238945 DEBUG nova.network.os_vif_util [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.786 238945 DEBUG nova.network.os_vif_util [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.787 238945 DEBUG os_vif [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.787 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.788 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.789 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.792 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.792 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapceb7b09e-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.793 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapceb7b09e-b6, col_values=(('external_ids', {'iface-id': 'ceb7b09e-b635-4570-bcf2-a08115d41365', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:be:d8', 'vm-uuid': 'e053f779-294f-4782-bb33-a14e40753795'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:00 np0005597378 NetworkManager[48904]: <info>  [1769521740.7966] manager: (tapceb7b09e-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.795 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.801 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.803 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.805 238945 INFO os_vif [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6')#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.858 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.858 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.859 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No VIF found with MAC fa:16:3e:ad:be:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.859 238945 INFO nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Using config drive#033[00m
Jan 27 08:49:00 np0005597378 nova_compute[238941]: 2026-01-27 13:49:00.883 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Jan 27 08:49:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Jan 27 08:49:00 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 32f2fafb-9436-453f-aba2-6eeedf1bd61e could not be found.
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 32f2fafb-9436-453f-aba2-6eeedf1bd61e
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver 
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver 
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 32f2fafb-9436-453f-aba2-6eeedf1bd61e could not be found.
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.141 238945 ERROR nova.virt.libvirt.driver #033[00m
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.176 238945 DEBUG nova.storage.rbd_utils [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] removing snapshot(snap) on rbd image(32f2fafb-9436-453f-aba2-6eeedf1bd61e) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.598 238945 INFO nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Creating config drive at /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config#033[00m
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.604 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4kb_mf69 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.746 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4kb_mf69" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.768 238945 DEBUG nova.storage.rbd_utils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.771 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config e053f779-294f-4782-bb33-a14e40753795_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.866 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521726.8639367, 49158813-53e9-4c5a-9141-7646d98a93e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.867 238945 INFO nova.compute.manager [-] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.872 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:01 np0005597378 nova_compute[238941]: 2026-01-27 13:49:01.897 238945 DEBUG nova.compute.manager [None req-a2c49c2f-637d-4004-a87e-5934e94968a0 - - - - - -] [instance: 49158813-53e9-4c5a-9141-7646d98a93e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Jan 27 08:49:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Jan 27 08:49:02 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.009 238945 DEBUG oslo_concurrency.processutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config e053f779-294f-4782-bb33-a14e40753795_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.009 238945 INFO nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deleting local config drive /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config because it was imported into RBD.#033[00m
Jan 27 08:49:02 np0005597378 NetworkManager[48904]: <info>  [1769521742.0644] manager: (tapceb7b09e-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Jan 27 08:49:02 np0005597378 kernel: tapceb7b09e-b6: entered promiscuous mode
Jan 27 08:49:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:02Z|00320|binding|INFO|Claiming lport ceb7b09e-b635-4570-bcf2-a08115d41365 for this chassis.
Jan 27 08:49:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:02Z|00321|binding|INFO|ceb7b09e-b635-4570-bcf2-a08115d41365: Claiming fa:16:3e:ad:be:d8 10.100.0.7
Jan 27 08:49:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.078 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:02 np0005597378 systemd-udevd[278491]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:49:02 np0005597378 systemd-machined[207425]: New machine qemu-46-instance-0000002a.
Jan 27 08:49:02 np0005597378 NetworkManager[48904]: <info>  [1769521742.1142] device (tapceb7b09e-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:49:02 np0005597378 NetworkManager[48904]: <info>  [1769521742.1150] device (tapceb7b09e-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:49:02 np0005597378 systemd[1]: Started Virtual Machine qemu-46-instance-0000002a.
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.126 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:be:d8 10.100.0.7'], port_security=['fa:16:3e:ad:be:d8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e053f779-294f-4782-bb33-a14e40753795', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a0c34526-a874-4960-805d-36c3b59e9c05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ceb7b09e-b635-4570-bcf2-a08115d41365) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.128 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ceb7b09e-b635-4570-bcf2-a08115d41365 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a bound to our chassis#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.129 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.145 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[830bb5e1-901b-48d2-b74b-5522cd9383bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.145 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap25155fe5-31 in ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.147 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap25155fe5-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.147 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5757e3-c5a8-4819-b524-5a5ccc5dc7a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.148 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[073eafe5-aa2a-489d-bbd5-eeeed9e8e2e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.166 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[20cb7da0-c29e-4822-b14f-0b620e6a08f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:02Z|00322|binding|INFO|Setting lport ceb7b09e-b635-4570-bcf2-a08115d41365 ovn-installed in OVS
Jan 27 08:49:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:02Z|00323|binding|INFO|Setting lport ceb7b09e-b635-4570-bcf2-a08115d41365 up in Southbound
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.170 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.181 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f3563ffb-7809-4e8f-bf11-cd77dac978a2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.211 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[340cd95c-728c-46e0-946c-8d8214196542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 NetworkManager[48904]: <info>  [1769521742.2183] manager: (tap25155fe5-30): new Veth device (/org/freedesktop/NetworkManager/Devices/150)
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.219 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5de6f789-1bda-4af2-af64-d9426fbc83ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.253 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3cae4787-aa95-4a5e-bae4-60d8e2124951]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.257 238945 INFO nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Creating config drive at /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.257 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0c81dc-fed5-4cac-abd8-c9178a57675d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.262 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp12veq3cl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:02 np0005597378 NetworkManager[48904]: <info>  [1769521742.2837] device (tap25155fe5-30): carrier: link connected
Jan 27 08:49:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 194 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.7 MiB/s wr, 118 op/s
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.292 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6774216e-507c-428e-91f5-af825c7d0e57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.312 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7237d7a1-556f-4302-861f-d01f4a0f6e17]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278533, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.336 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[da0d2125-c58d-49e1-8752-bc7ece8b8584]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:48e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438190, 'tstamp': 438190}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278544, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.356 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61dbd744-229a-4ec3-b265-83b82dfc7313]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278556, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.388 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[00882583-c071-4565-afb2-02065a8c5a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.412 238945 WARNING nova.compute.manager [None req-b3a5d01c-8fac-43d6-8ac1-61ee8775b6f0 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Image not found during snapshot: nova.exception.ImageNotFound: Image 32f2fafb-9436-453f-aba2-6eeedf1bd61e could not be found.#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.418 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp12veq3cl" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.440 238945 DEBUG nova.storage.rbd_utils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.445 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9952a270-5813-40be-858b-1940d18732e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.446 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.446 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.447 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.447 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:02 np0005597378 NetworkManager[48904]: <info>  [1769521742.4494] manager: (tap25155fe5-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Jan 27 08:49:02 np0005597378 kernel: tap25155fe5-30: entered promiscuous mode
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.454 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:02Z|00324|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.478 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/25155fe5-3d99-4510-9613-2ca9c8acc75a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/25155fe5-3d99-4510-9613-2ca9c8acc75a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.479 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[62f9535b-1dba-4fb7-8600-be60a596faee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.479 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-25155fe5-3d99-4510-9613-2ca9c8acc75a
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/25155fe5-3d99-4510-9613-2ca9c8acc75a.pid.haproxy
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 25155fe5-3d99-4510-9613-2ca9c8acc75a
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.480 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'env', 'PROCESS_TAG=haproxy-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/25155fe5-3d99-4510-9613-2ca9c8acc75a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.518 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521742.5178764, e053f779-294f-4782-bb33-a14e40753795 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.519 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Started (Lifecycle Event)#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.542 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.545 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521742.519225, e053f779-294f-4782-bb33-a14e40753795 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.546 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.551 238945 DEBUG nova.compute.manager [req-09c936b8-65b4-4194-a11d-39cf0889f2f0 req-f535b9b7-9194-401c-ad3c-58f93f63803a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.552 238945 DEBUG oslo_concurrency.lockutils [req-09c936b8-65b4-4194-a11d-39cf0889f2f0 req-f535b9b7-9194-401c-ad3c-58f93f63803a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.552 238945 DEBUG oslo_concurrency.lockutils [req-09c936b8-65b4-4194-a11d-39cf0889f2f0 req-f535b9b7-9194-401c-ad3c-58f93f63803a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.553 238945 DEBUG oslo_concurrency.lockutils [req-09c936b8-65b4-4194-a11d-39cf0889f2f0 req-f535b9b7-9194-401c-ad3c-58f93f63803a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.553 238945 DEBUG nova.compute.manager [req-09c936b8-65b4-4194-a11d-39cf0889f2f0 req-f535b9b7-9194-401c-ad3c-58f93f63803a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Processing event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.554 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.557 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.559 238945 INFO nova.virt.libvirt.driver [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance spawned successfully.#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.560 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.588 238945 DEBUG nova.network.neutron [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updated VIF entry in instance network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.588 238945 DEBUG nova.network.neutron [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.594 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.599 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.600 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.600 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.601 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.601 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.602 238945 DEBUG nova.virt.libvirt.driver [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.605 238945 DEBUG oslo_concurrency.processutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.606 238945 INFO nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deleting local config drive /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config because it was imported into RBD.#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.611 238945 DEBUG oslo_concurrency.lockutils [req-494e0d90-e7db-4422-bca0-dd153ad7462b req-ca5d5520-8ca6-448f-91a3-cca5f804a464 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.617 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521742.5564055, e053f779-294f-4782-bb33-a14e40753795 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.618 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.648 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:02 np0005597378 kernel: tapec45493d-69: entered promiscuous mode
Jan 27 08:49:02 np0005597378 NetworkManager[48904]: <info>  [1769521742.6512] manager: (tapec45493d-69): new Tun device (/org/freedesktop/NetworkManager/Devices/152)
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.651 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:02 np0005597378 systemd-udevd[278524]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:49:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:02Z|00325|binding|INFO|Claiming lport ec45493d-696f-479c-a443-7428a58bd860 for this chassis.
Jan 27 08:49:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:02Z|00326|binding|INFO|ec45493d-696f-479c-a443-7428a58bd860: Claiming fa:16:3e:04:d0:64 10.100.0.7
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:02 np0005597378 NetworkManager[48904]: <info>  [1769521742.6658] device (tapec45493d-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:49:02 np0005597378 NetworkManager[48904]: <info>  [1769521742.6667] device (tapec45493d-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:49:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:02Z|00327|binding|INFO|Setting lport ec45493d-696f-479c-a443-7428a58bd860 ovn-installed in OVS
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.677 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:49:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:02.682 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:d0:64 10.100.0.7'], port_security=['fa:16:3e:04:d0:64 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9505af7f-b4b1-45a4-9350-98fd525ce36e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ec45493d-696f-479c-a443-7428a58bd860) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:49:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:02Z|00328|binding|INFO|Setting lport ec45493d-696f-479c-a443-7428a58bd860 up in Southbound
Jan 27 08:49:02 np0005597378 systemd-machined[207425]: New machine qemu-47-instance-00000029.
Jan 27 08:49:02 np0005597378 systemd[1]: Started Virtual Machine qemu-47-instance-00000029.
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.724 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.738 238945 INFO nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 9.88 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.739 238945 DEBUG nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.812 238945 INFO nova.compute.manager [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 12.81 seconds to build instance.#033[00m
Jan 27 08:49:02 np0005597378 podman[278674]: 2026-01-27 13:49:02.864577842 +0000 UTC m=+0.054845395 container create 0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.889 238945 DEBUG oslo_concurrency.lockutils [None req-dbe34015-a038-4cfc-b71c-812d34e15f86 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:02 np0005597378 systemd[1]: Started libpod-conmon-0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed.scope.
Jan 27 08:49:02 np0005597378 podman[278674]: 2026-01-27 13:49:02.835742777 +0000 UTC m=+0.026010340 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:49:02 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:49:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209ad30fbdfa4bb4da330a1cc3224f8070b7b1cb9b8d8a757768c254258b37b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:02 np0005597378 podman[278674]: 2026-01-27 13:49:02.953591572 +0000 UTC m=+0.143859115 container init 0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 08:49:02 np0005597378 podman[278674]: 2026-01-27 13:49:02.960168149 +0000 UTC m=+0.150435692 container start 0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.986 238945 DEBUG nova.compute.manager [req-07158d6f-04fa-4085-a96d-e9b94435c68e req-09de1777-9ef8-4a02-a145-d5a3d89f97d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.986 238945 DEBUG oslo_concurrency.lockutils [req-07158d6f-04fa-4085-a96d-e9b94435c68e req-09de1777-9ef8-4a02-a145-d5a3d89f97d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.987 238945 DEBUG oslo_concurrency.lockutils [req-07158d6f-04fa-4085-a96d-e9b94435c68e req-09de1777-9ef8-4a02-a145-d5a3d89f97d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.987 238945 DEBUG oslo_concurrency.lockutils [req-07158d6f-04fa-4085-a96d-e9b94435c68e req-09de1777-9ef8-4a02-a145-d5a3d89f97d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:02 np0005597378 nova_compute[238941]: 2026-01-27 13:49:02.987 238945 DEBUG nova.compute.manager [req-07158d6f-04fa-4085-a96d-e9b94435c68e req-09de1777-9ef8-4a02-a145-d5a3d89f97d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Processing event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:49:02 np0005597378 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [NOTICE]   (278693) : New worker (278695) forked
Jan 27 08:49:02 np0005597378 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [NOTICE]   (278693) : Loading success.
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.021 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ec45493d-696f-479c-a443-7428a58bd860 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.023 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4856e57c-dca4-4180-b9d9-3b2eced0f054#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.037 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd33ad0-b2a5-405f-b692-7441ed1e54ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.038 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4856e57c-d1 in ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.041 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4856e57c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.041 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bbaac5ba-8720-404a-bc68-1da22f88878c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.043 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1612f882-ca14-4527-8664-27a48ccd7773]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.053 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.057 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.058 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.058 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.058 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.061 238945 INFO nova.compute.manager [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Terminating instance#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.062 238945 DEBUG nova.compute.manager [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.064 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[54e837c4-7e68-4b6d-8e24-65485fa9a1ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.090 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[619a179f-866c-4cd1-b7aa-cc8d4f3f7cdc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 kernel: tap4be0b4d3-9e (unregistering): left promiscuous mode
Jan 27 08:49:03 np0005597378 NetworkManager[48904]: <info>  [1769521743.1085] device (tap4be0b4d3-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:49:03 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:03Z|00329|binding|INFO|Releasing lport 4be0b4d3-9e06-4332-af56-9c381e484852 from this chassis (sb_readonly=0)
Jan 27 08:49:03 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:03Z|00330|binding|INFO|Setting lport 4be0b4d3-9e06-4332-af56-9c381e484852 down in Southbound
Jan 27 08:49:03 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:03Z|00331|binding|INFO|Removing iface tap4be0b4d3-9e ovn-installed in OVS
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.135 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c12e3014-0f93-435e-9fc1-f9010ca58c84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.142 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:3b:31 10.100.0.11'], port_security=['fa:16:3e:26:3b:31 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b3294f5e-4a09-45dd-af30-58436db2ff72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e25f7657-3ed6-425c-8132-1b5c417564a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b041f051267f4a3c8518d3042922678a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815f4830-9f7d-4d0c-ba61-52d753b90517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75e679bc-c1ea-42d9-90aa-87fb65c3da56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4be0b4d3-9e06-4332-af56-9c381e484852) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.144 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:03 np0005597378 NetworkManager[48904]: <info>  [1769521743.1453] manager: (tap4856e57c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/153)
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.146 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7b352fd3-e90d-48a7-9dae-61370594d1cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000028.scope: Deactivated successfully.
Jan 27 08:49:03 np0005597378 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000028.scope: Consumed 11.372s CPU time.
Jan 27 08:49:03 np0005597378 systemd-machined[207425]: Machine qemu-45-instance-00000028 terminated.
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.180 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b60f4aff-63ca-4068-b4f2-032de4fac0b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.183 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fefc4f20-52e4-4a19-af1b-72b785fbb309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 NetworkManager[48904]: <info>  [1769521743.2062] device (tap4856e57c-d0): carrier: link connected
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.211 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[22f42920-7a75-4404-9fb4-78ee9eb73312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.230 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7a98ba1c-4b46-4ccd-b9e3-28b88f342414]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438283, 'reachable_time': 38224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278762, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.250 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[71b07abb-4e72-4006-8c11-545fc09e01ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:164f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438283, 'tstamp': 438283}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278763, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.267 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e26e79ec-42c0-4118-ad3f-54f0924c646f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438283, 'reachable_time': 38224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278765, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.267 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521743.2675443, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.269 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Started (Lifecycle Event)#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.272 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.282 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:49:03 np0005597378 NetworkManager[48904]: <info>  [1769521743.2848] manager: (tap4be0b4d3-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.294 238945 INFO nova.virt.libvirt.driver [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance spawned successfully.#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.295 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.307 238945 INFO nova.virt.libvirt.driver [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Instance destroyed successfully.#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.308 238945 DEBUG nova.objects.instance [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lazy-loading 'resources' on Instance uuid b3294f5e-4a09-45dd-af30-58436db2ff72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.307 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[804623d0-7794-41cc-afe1-4896de6da62f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.375 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[73c0fc6b-81b5-4078-9e6a-1c6a6e0f3733]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.377 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.377 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.377 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4856e57c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.380 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:03 np0005597378 NetworkManager[48904]: <info>  [1769521743.3809] manager: (tap4856e57c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Jan 27 08:49:03 np0005597378 kernel: tap4856e57c-d0: entered promiscuous mode
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.386 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4856e57c-d0, col_values=(('external_ids', {'iface-id': '0436b10b-d79a-417d-bd92-96aac09ed050'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.387 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:03 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:03Z|00332|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.415 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.420 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.421 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[76268fde-bf8f-473c-9274-0003cdd4130f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.422 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:49:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:03.423 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'env', 'PROCESS_TAG=haproxy-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4856e57c-dca4-4180-b9d9-3b2eced0f054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.503 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.508 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.557 238945 DEBUG nova.virt.libvirt.vif [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:48:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1563749021',display_name='tempest-ImagesTestJSON-server-1563749021',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1563749021',id=40,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:48:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b041f051267f4a3c8518d3042922678a',ramdisk_id='',reservation_id='r-ua530r9o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ImagesTestJSON-1064968599',owner_user_name='tempest-ImagesTestJSON-1064968599-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:49:02Z,user_data=None,user_id='7dedc0f04f3d455682ea65fc37a49f06',uuid=b3294f5e-4a09-45dd-af30-58436db2ff72,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.558 238945 DEBUG nova.network.os_vif_util [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converting VIF {"id": "4be0b4d3-9e06-4332-af56-9c381e484852", "address": "fa:16:3e:26:3b:31", "network": {"id": "e25f7657-3ed6-425c-8132-1b5c417564a5", "bridge": "br-int", "label": "tempest-ImagesTestJSON-707544014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b041f051267f4a3c8518d3042922678a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be0b4d3-9e", "ovs_interfaceid": "4be0b4d3-9e06-4332-af56-9c381e484852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.559 238945 DEBUG nova.network.os_vif_util [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.560 238945 DEBUG os_vif [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.562 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4be0b4d3-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.566 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.567 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521743.2676303, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.568 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.573 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.573 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.574 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.574 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.575 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.576 238945 DEBUG nova.virt.libvirt.driver [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.584 238945 INFO os_vif [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:3b:31,bridge_name='br-int',has_traffic_filtering=True,id=4be0b4d3-9e06-4332-af56-9c381e484852,network=Network(e25f7657-3ed6-425c-8132-1b5c417564a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be0b4d3-9e')#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.615 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.618 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521743.2816844, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.618 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.660 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.665 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.677 238945 INFO nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Took 13.77 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.679 238945 DEBUG nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.700 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.760 238945 INFO nova.compute.manager [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Took 16.60 seconds to build instance.#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.784 238945 DEBUG oslo_concurrency.lockutils [None req-ffdafa99-1942-4d00-8d0f-4569329c3e99 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.859 238945 INFO nova.virt.libvirt.driver [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Deleting instance files /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72_del#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.860 238945 INFO nova.virt.libvirt.driver [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Deletion of /var/lib/nova/instances/b3294f5e-4a09-45dd-af30-58436db2ff72_del complete#033[00m
Jan 27 08:49:03 np0005597378 podman[278829]: 2026-01-27 13:49:03.881564774 +0000 UTC m=+0.078035457 container create 455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:49:03 np0005597378 systemd[1]: Started libpod-conmon-455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d.scope.
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.932 238945 INFO nova.compute.manager [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.934 238945 DEBUG oslo.service.loopingcall [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.934 238945 DEBUG nova.compute.manager [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:49:03 np0005597378 nova_compute[238941]: 2026-01-27 13:49:03.934 238945 DEBUG nova.network.neutron [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:49:03 np0005597378 podman[278829]: 2026-01-27 13:49:03.84976964 +0000 UTC m=+0.046240353 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:49:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:49:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90f446e3280eba140ab7529888c6ad543b5d8b35369819bbb4f3fbdba67fd17/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:03 np0005597378 podman[278829]: 2026-01-27 13:49:03.974408717 +0000 UTC m=+0.170879430 container init 455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 08:49:03 np0005597378 podman[278829]: 2026-01-27 13:49:03.981726585 +0000 UTC m=+0.178197268 container start 455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:49:04 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [NOTICE]   (278848) : New worker (278850) forked
Jan 27 08:49:04 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [NOTICE]   (278848) : Loading success.
Jan 27 08:49:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.064 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4be0b4d3-9e06-4332-af56-9c381e484852 in datapath e25f7657-3ed6-425c-8132-1b5c417564a5 unbound from our chassis#033[00m
Jan 27 08:49:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.066 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e25f7657-3ed6-425c-8132-1b5c417564a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:49:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.067 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72880b03-0e4c-4b83-9596-c75f2c2e37c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.068 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 namespace which is not needed anymore#033[00m
Jan 27 08:49:04 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [NOTICE]   (277761) : haproxy version is 2.8.14-c23fe91
Jan 27 08:49:04 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [NOTICE]   (277761) : path to executable is /usr/sbin/haproxy
Jan 27 08:49:04 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [WARNING]  (277761) : Exiting Master process...
Jan 27 08:49:04 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [WARNING]  (277761) : Exiting Master process...
Jan 27 08:49:04 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [ALERT]    (277761) : Current worker (277763) exited with code 143 (Terminated)
Jan 27 08:49:04 np0005597378 neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5[277757]: [WARNING]  (277761) : All workers exited. Exiting... (0)
Jan 27 08:49:04 np0005597378 systemd[1]: libpod-c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73.scope: Deactivated successfully.
Jan 27 08:49:04 np0005597378 podman[278873]: 2026-01-27 13:49:04.229624572 +0000 UTC m=+0.051360940 container died c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:49:04 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73-userdata-shm.mount: Deactivated successfully.
Jan 27 08:49:04 np0005597378 systemd[1]: var-lib-containers-storage-overlay-050e611915286b45aef2d22991dc7fb146c651ef9e37ac6957536377f190bcb1-merged.mount: Deactivated successfully.
Jan 27 08:49:04 np0005597378 podman[278873]: 2026-01-27 13:49:04.275210046 +0000 UTC m=+0.096946414 container cleanup c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:49:04 np0005597378 systemd[1]: libpod-conmon-c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73.scope: Deactivated successfully.
Jan 27 08:49:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 195 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.0 MiB/s wr, 255 op/s
Jan 27 08:49:04 np0005597378 podman[278902]: 2026-01-27 13:49:04.337957041 +0000 UTC m=+0.041519966 container remove c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 08:49:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.345 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2479c68a-dbde-4ff6-8744-53183b3f3361]: (4, ('Tue Jan 27 01:49:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73)\nc1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73\nTue Jan 27 01:49:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 (c1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73)\nc1df109fd21a0d25748fdb105b623f8304482ea4251698aa9e14fde02a6e2a73\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.347 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b07ce316-12b9-46a1-990b-022d95544352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.348 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape25f7657-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:04 np0005597378 kernel: tape25f7657-30: left promiscuous mode
Jan 27 08:49:04 np0005597378 nova_compute[238941]: 2026-01-27 13:49:04.355 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.363 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c915f0-8474-41bb-a391-f5756e5dd4af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:04 np0005597378 nova_compute[238941]: 2026-01-27 13:49:04.375 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.381 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eda9008d-7081-427d-87d7-2bbe9ff94c84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.382 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdd9ac4-1d22-4ddb-beb9-48e82e2f0a45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.400 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2d673f-59b4-4ea7-851c-afbb7116da7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436914, 'reachable_time': 25961, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278917, 'error': None, 'target': 'ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.402 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e25f7657-3ed6-425c-8132-1b5c417564a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:49:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:04.403 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[1e48b341-9575-4866-8a4c-3557ad381be8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:04 np0005597378 systemd[1]: run-netns-ovnmeta\x2de25f7657\x2d3ed6\x2d425c\x2d8132\x2d1b5c417564a5.mount: Deactivated successfully.
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.284 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.285 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.285 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.285 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.285 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] No waiting events found dispatching network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.285 238945 WARNING nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received unexpected event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.286 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-vif-unplugged-4be0b4d3-9e06-4332-af56-9c381e484852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.286 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.286 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.286 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.286 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] No waiting events found dispatching network-vif-unplugged-4be0b4d3-9e06-4332-af56-9c381e484852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.286 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-vif-unplugged-4be0b4d3-9e06-4332-af56-9c381e484852 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.287 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.287 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.287 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.287 238945 DEBUG oslo_concurrency.lockutils [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.287 238945 DEBUG nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] No waiting events found dispatching network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.287 238945 WARNING nova.compute.manager [req-291afc24-b241-4eb7-a841-01a88f88a993 req-149ba41f-ed36-4b13-9eac-c2bea5328124 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received unexpected event network-vif-plugged-4be0b4d3-9e06-4332-af56-9c381e484852 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.339 238945 DEBUG nova.network.neutron [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.357 238945 INFO nova.compute.manager [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Took 1.42 seconds to deallocate network for instance.#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.403 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.404 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.416 238945 DEBUG nova.compute.manager [req-cab2de35-4858-4deb-9f88-b6b8a0080c9d req-25fe8cbe-ff20-4d3f-b0b2-6f85ef67776e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.416 238945 DEBUG oslo_concurrency.lockutils [req-cab2de35-4858-4deb-9f88-b6b8a0080c9d req-25fe8cbe-ff20-4d3f-b0b2-6f85ef67776e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.417 238945 DEBUG oslo_concurrency.lockutils [req-cab2de35-4858-4deb-9f88-b6b8a0080c9d req-25fe8cbe-ff20-4d3f-b0b2-6f85ef67776e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.417 238945 DEBUG oslo_concurrency.lockutils [req-cab2de35-4858-4deb-9f88-b6b8a0080c9d req-25fe8cbe-ff20-4d3f-b0b2-6f85ef67776e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.417 238945 DEBUG nova.compute.manager [req-cab2de35-4858-4deb-9f88-b6b8a0080c9d req-25fe8cbe-ff20-4d3f-b0b2-6f85ef67776e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] No waiting events found dispatching network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.418 238945 WARNING nova.compute.manager [req-cab2de35-4858-4deb-9f88-b6b8a0080c9d req-25fe8cbe-ff20-4d3f-b0b2-6f85ef67776e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received unexpected event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:49:05 np0005597378 nova_compute[238941]: 2026-01-27 13:49:05.514 238945 DEBUG oslo_concurrency.processutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:49:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/655442166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:49:06 np0005597378 nova_compute[238941]: 2026-01-27 13:49:06.066 238945 DEBUG oslo_concurrency.processutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:06 np0005597378 nova_compute[238941]: 2026-01-27 13:49:06.072 238945 DEBUG nova.compute.provider_tree [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:49:06 np0005597378 nova_compute[238941]: 2026-01-27 13:49:06.089 238945 DEBUG nova.scheduler.client.report [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:49:06 np0005597378 nova_compute[238941]: 2026-01-27 13:49:06.116 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:06 np0005597378 nova_compute[238941]: 2026-01-27 13:49:06.164 238945 INFO nova.scheduler.client.report [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Deleted allocations for instance b3294f5e-4a09-45dd-af30-58436db2ff72#033[00m
Jan 27 08:49:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:06Z|00333|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 08:49:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:06Z|00334|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 08:49:06 np0005597378 nova_compute[238941]: 2026-01-27 13:49:06.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:06 np0005597378 NetworkManager[48904]: <info>  [1769521746.2014] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Jan 27 08:49:06 np0005597378 NetworkManager[48904]: <info>  [1769521746.2021] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Jan 27 08:49:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:06Z|00335|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 08:49:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:06Z|00336|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 08:49:06 np0005597378 nova_compute[238941]: 2026-01-27 13:49:06.243 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:06 np0005597378 nova_compute[238941]: 2026-01-27 13:49:06.253 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:06 np0005597378 nova_compute[238941]: 2026-01-27 13:49:06.257 238945 DEBUG oslo_concurrency.lockutils [None req-69b2d62f-bcc5-45d4-b111-9c82890643fb 7dedc0f04f3d455682ea65fc37a49f06 b041f051267f4a3c8518d3042922678a - - default default] Lock "b3294f5e-4a09-45dd-af30-58436db2ff72" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 163 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.4 MiB/s wr, 340 op/s
Jan 27 08:49:06 np0005597378 nova_compute[238941]: 2026-01-27 13:49:06.874 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:49:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Jan 27 08:49:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Jan 27 08:49:07 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Jan 27 08:49:07 np0005597378 nova_compute[238941]: 2026-01-27 13:49:07.442 238945 DEBUG nova.compute.manager [req-7b7fd791-1d78-49a5-bf3c-3937ce8ae0c9 req-f8ab77a9-c428-4141-875b-c7d9a2599e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Received event network-vif-deleted-4be0b4d3-9e06-4332-af56-9c381e484852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:07 np0005597378 nova_compute[238941]: 2026-01-27 13:49:07.625 238945 DEBUG nova.compute.manager [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:07 np0005597378 nova_compute[238941]: 2026-01-27 13:49:07.626 238945 DEBUG nova.compute.manager [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing instance network info cache due to event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:49:07 np0005597378 nova_compute[238941]: 2026-01-27 13:49:07.626 238945 DEBUG oslo_concurrency.lockutils [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:49:07 np0005597378 nova_compute[238941]: 2026-01-27 13:49:07.627 238945 DEBUG oslo_concurrency.lockutils [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:49:07 np0005597378 nova_compute[238941]: 2026-01-27 13:49:07.627 238945 DEBUG nova.network.neutron [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:49:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 137 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.1 MiB/s wr, 413 op/s
Jan 27 08:49:08 np0005597378 nova_compute[238941]: 2026-01-27 13:49:08.566 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:08 np0005597378 nova_compute[238941]: 2026-01-27 13:49:08.976 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521733.9748993, e03449f9-27f7-4c89-8d13-5f4a688e2b1b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:08 np0005597378 nova_compute[238941]: 2026-01-27 13:49:08.976 238945 INFO nova.compute.manager [-] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:49:08 np0005597378 nova_compute[238941]: 2026-01-27 13:49:08.997 238945 DEBUG nova.compute.manager [None req-6e5460e0-e75e-4f96-8c77-fa2606304f0c - - - - - -] [instance: e03449f9-27f7-4c89-8d13-5f4a688e2b1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:09 np0005597378 nova_compute[238941]: 2026-01-27 13:49:09.924 238945 DEBUG nova.network.neutron [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updated VIF entry in instance network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:49:09 np0005597378 nova_compute[238941]: 2026-01-27 13:49:09.926 238945 DEBUG nova.network.neutron [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:49:09 np0005597378 nova_compute[238941]: 2026-01-27 13:49:09.959 238945 DEBUG oslo_concurrency.lockutils [req-66d04454-da01-4352-85b8-ac919354db2f req-945e77e9-f385-4bc6-86c8-da43e8c128d2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:49:10 np0005597378 nova_compute[238941]: 2026-01-27 13:49:10.210 238945 INFO nova.compute.manager [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Rebuilding instance#033[00m
Jan 27 08:49:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 134 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 1.8 MiB/s wr, 401 op/s
Jan 27 08:49:10 np0005597378 nova_compute[238941]: 2026-01-27 13:49:10.474 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:10 np0005597378 nova_compute[238941]: 2026-01-27 13:49:10.533 238945 DEBUG nova.compute.manager [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:10 np0005597378 nova_compute[238941]: 2026-01-27 13:49:10.630 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:10 np0005597378 nova_compute[238941]: 2026-01-27 13:49:10.641 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:10 np0005597378 nova_compute[238941]: 2026-01-27 13:49:10.658 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'resources' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:10 np0005597378 nova_compute[238941]: 2026-01-27 13:49:10.672 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'migration_context' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:10 np0005597378 nova_compute[238941]: 2026-01-27 13:49:10.686 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 08:49:10 np0005597378 nova_compute[238941]: 2026-01-27 13:49:10.692 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 08:49:10 np0005597378 nova_compute[238941]: 2026-01-27 13:49:10.791 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:11 np0005597378 nova_compute[238941]: 2026-01-27 13:49:11.876 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:49:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 134 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.5 MiB/s wr, 332 op/s
Jan 27 08:49:13 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:13Z|00337|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 08:49:13 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:13Z|00338|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 08:49:13 np0005597378 nova_compute[238941]: 2026-01-27 13:49:13.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:13 np0005597378 nova_compute[238941]: 2026-01-27 13:49:13.570 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:14 np0005597378 nova_compute[238941]: 2026-01-27 13:49:14.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 134 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 21 KiB/s wr, 242 op/s
Jan 27 08:49:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 27 08:49:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:15Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:d0:64 10.100.0.7
Jan 27 08:49:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:15Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:d0:64 10.100.0.7
Jan 27 08:49:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:15Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:be:d8 10.100.0.7
Jan 27 08:49:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:15Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:be:d8 10.100.0.7
Jan 27 08:49:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 140 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 981 KiB/s wr, 148 op/s
Jan 27 08:49:16 np0005597378 podman[278942]: 2026-01-27 13:49:16.737549426 +0000 UTC m=+0.078502749 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 27 08:49:16 np0005597378 nova_compute[238941]: 2026-01-27 13:49:16.878 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:49:17
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['vms', '.mgr', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'images', 'default.rgw.meta']
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:49:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:49:17 np0005597378 podman[278968]: 2026-01-27 13:49:17.746321408 +0000 UTC m=+0.078827218 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:49:17 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:49:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:49:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:49:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 177 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 187 op/s
Jan 27 08:49:18 np0005597378 nova_compute[238941]: 2026-01-27 13:49:18.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:18 np0005597378 nova_compute[238941]: 2026-01-27 13:49:18.703 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521743.298691, b3294f5e-4a09-45dd-af30-58436db2ff72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:18 np0005597378 nova_compute[238941]: 2026-01-27 13:49:18.704 238945 INFO nova.compute.manager [-] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:49:18 np0005597378 nova_compute[238941]: 2026-01-27 13:49:18.726 238945 DEBUG nova.compute.manager [None req-1b78620e-1c51-4056-b423-608566b4bd4e - - - - - -] [instance: b3294f5e-4a09-45dd-af30-58436db2ff72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:20 np0005597378 nova_compute[238941]: 2026-01-27 13:49:20.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 200 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.3 MiB/s wr, 153 op/s
Jan 27 08:49:20 np0005597378 nova_compute[238941]: 2026-01-27 13:49:20.739 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 27 08:49:21 np0005597378 nova_compute[238941]: 2026-01-27 13:49:21.880 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:49:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 200 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 648 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:49:22 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:49:22 np0005597378 podman[279128]: 2026-01-27 13:49:22.866869588 +0000 UTC m=+0.039713519 container create b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haslett, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:49:22 np0005597378 systemd[1]: Started libpod-conmon-b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b.scope.
Jan 27 08:49:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:49:22 np0005597378 podman[279128]: 2026-01-27 13:49:22.848202336 +0000 UTC m=+0.021046297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:49:22 np0005597378 podman[279128]: 2026-01-27 13:49:22.948228793 +0000 UTC m=+0.121072744 container init b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:49:22 np0005597378 podman[279128]: 2026-01-27 13:49:22.959923006 +0000 UTC m=+0.132766937 container start b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haslett, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:49:22 np0005597378 podman[279128]: 2026-01-27 13:49:22.963703398 +0000 UTC m=+0.136547329 container attach b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haslett, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:49:22 np0005597378 practical_haslett[279144]: 167 167
Jan 27 08:49:22 np0005597378 systemd[1]: libpod-b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b.scope: Deactivated successfully.
Jan 27 08:49:22 np0005597378 conmon[279144]: conmon b195a9f60a37d189bbab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b.scope/container/memory.events
Jan 27 08:49:22 np0005597378 podman[279128]: 2026-01-27 13:49:22.968734773 +0000 UTC m=+0.141578704 container died b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haslett, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 08:49:22 np0005597378 kernel: tapec45493d-69 (unregistering): left promiscuous mode
Jan 27 08:49:22 np0005597378 NetworkManager[48904]: <info>  [1769521762.9761] device (tapec45493d-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:49:22 np0005597378 nova_compute[238941]: 2026-01-27 13:49:22.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:22Z|00339|binding|INFO|Releasing lport ec45493d-696f-479c-a443-7428a58bd860 from this chassis (sb_readonly=0)
Jan 27 08:49:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:22Z|00340|binding|INFO|Setting lport ec45493d-696f-479c-a443-7428a58bd860 down in Southbound
Jan 27 08:49:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:22Z|00341|binding|INFO|Removing iface tapec45493d-69 ovn-installed in OVS
Jan 27 08:49:22 np0005597378 nova_compute[238941]: 2026-01-27 13:49:22.991 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f3acc07671636bf9c87d46ce6ea4b9ea60419351d2b064a69ea811107cdf12a2-merged.mount: Deactivated successfully.
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.006 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:d0:64 10.100.0.7'], port_security=['fa:16:3e:04:d0:64 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9505af7f-b4b1-45a4-9350-98fd525ce36e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ec45493d-696f-479c-a443-7428a58bd860) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.008 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ec45493d-696f-479c-a443-7428a58bd860 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis#033[00m
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.010 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.011 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.012 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e24102-a3ab-4ad0-9935-fecaa75c5505]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.013 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace which is not needed anymore#033[00m
Jan 27 08:49:23 np0005597378 podman[279128]: 2026-01-27 13:49:23.019649621 +0000 UTC m=+0.192493552 container remove b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haslett, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 08:49:23 np0005597378 systemd[1]: libpod-conmon-b195a9f60a37d189bbab14e97e581284ff6ad22350588c78bd68035d78c0181b.scope: Deactivated successfully.
Jan 27 08:49:23 np0005597378 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000029.scope: Deactivated successfully.
Jan 27 08:49:23 np0005597378 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000029.scope: Consumed 13.222s CPU time.
Jan 27 08:49:23 np0005597378 systemd-machined[207425]: Machine qemu-47-instance-00000029 terminated.
Jan 27 08:49:23 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [NOTICE]   (278848) : haproxy version is 2.8.14-c23fe91
Jan 27 08:49:23 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [NOTICE]   (278848) : path to executable is /usr/sbin/haproxy
Jan 27 08:49:23 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [WARNING]  (278848) : Exiting Master process...
Jan 27 08:49:23 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [ALERT]    (278848) : Current worker (278850) exited with code 143 (Terminated)
Jan 27 08:49:23 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[278844]: [WARNING]  (278848) : All workers exited. Exiting... (0)
Jan 27 08:49:23 np0005597378 systemd[1]: libpod-455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d.scope: Deactivated successfully.
Jan 27 08:49:23 np0005597378 podman[279185]: 2026-01-27 13:49:23.165243241 +0000 UTC m=+0.051046433 container died 455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 08:49:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d-userdata-shm.mount: Deactivated successfully.
Jan 27 08:49:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c90f446e3280eba140ab7529888c6ad543b5d8b35369819bbb4f3fbdba67fd17-merged.mount: Deactivated successfully.
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:23 np0005597378 podman[279185]: 2026-01-27 13:49:23.217535985 +0000 UTC m=+0.103339187 container cleanup 455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 08:49:23 np0005597378 systemd[1]: libpod-conmon-455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d.scope: Deactivated successfully.
Jan 27 08:49:23 np0005597378 podman[279205]: 2026-01-27 13:49:23.246569755 +0000 UTC m=+0.072149189 container create 7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 08:49:23 np0005597378 systemd[1]: Started libpod-conmon-7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d.scope.
Jan 27 08:49:23 np0005597378 podman[279239]: 2026-01-27 13:49:23.307631854 +0000 UTC m=+0.053545659 container remove 455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 27 08:49:23 np0005597378 podman[279205]: 2026-01-27 13:49:23.218079919 +0000 UTC m=+0.043659373 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.314 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc050879-3181-4a60-b52d-c944652789c7]: (4, ('Tue Jan 27 01:49:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d)\n455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d\nTue Jan 27 01:49:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d)\n455e32c8ab4503a70fdf3f473ce489f70255687a978dbeb72a182a42715a7e5d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.316 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc17dba-9c59-490c-a82d-3e46a792dee4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.317 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:23 np0005597378 kernel: tap4856e57c-d0: left promiscuous mode
Jan 27 08:49:23 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.343 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.346 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b47ac6-4ea4-4c2c-b2c5-2eec87eb7249]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d8d3eb634922bebbb0f4f19d1be3e0fde554e60681a86ac4b192e60d024b5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d8d3eb634922bebbb0f4f19d1be3e0fde554e60681a86ac4b192e60d024b5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d8d3eb634922bebbb0f4f19d1be3e0fde554e60681a86ac4b192e60d024b5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d8d3eb634922bebbb0f4f19d1be3e0fde554e60681a86ac4b192e60d024b5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d8d3eb634922bebbb0f4f19d1be3e0fde554e60681a86ac4b192e60d024b5f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:23 np0005597378 podman[279205]: 2026-01-27 13:49:23.367840941 +0000 UTC m=+0.193420395 container init 7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.374 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[09934c0b-e8b3-417b-8551-463ca1656fa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:23 np0005597378 podman[279205]: 2026-01-27 13:49:23.377541042 +0000 UTC m=+0.203120486 container start 7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.376 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa286d1-808a-43bf-8da2-674b221b0f62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:23 np0005597378 podman[279205]: 2026-01-27 13:49:23.387377976 +0000 UTC m=+0.212957430 container attach 7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.404 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8b89db07-5adf-4ff7-a637-e5459420d0dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438275, 'reachable_time': 36742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279267, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.407 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:49:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:23.407 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[5815fdf1-a3f3-48f1-85a9-7ac39fe3fc58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.461 238945 DEBUG nova.compute.manager [req-3e02648e-3aa2-495b-a4c5-35ce243019e0 req-ccc753c2-4c9a-4132-97f4-44009f0f152e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-unplugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.462 238945 DEBUG oslo_concurrency.lockutils [req-3e02648e-3aa2-495b-a4c5-35ce243019e0 req-ccc753c2-4c9a-4132-97f4-44009f0f152e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.462 238945 DEBUG oslo_concurrency.lockutils [req-3e02648e-3aa2-495b-a4c5-35ce243019e0 req-ccc753c2-4c9a-4132-97f4-44009f0f152e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.462 238945 DEBUG oslo_concurrency.lockutils [req-3e02648e-3aa2-495b-a4c5-35ce243019e0 req-ccc753c2-4c9a-4132-97f4-44009f0f152e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.462 238945 DEBUG nova.compute.manager [req-3e02648e-3aa2-495b-a4c5-35ce243019e0 req-ccc753c2-4c9a-4132-97f4-44009f0f152e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] No waiting events found dispatching network-vif-unplugged-ec45493d-696f-479c-a443-7428a58bd860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.463 238945 WARNING nova.compute.manager [req-3e02648e-3aa2-495b-a4c5-35ce243019e0 req-ccc753c2-4c9a-4132-97f4-44009f0f152e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received unexpected event network-vif-unplugged-ec45493d-696f-479c-a443-7428a58bd860 for instance with vm_state active and task_state rebuilding.#033[00m
Jan 27 08:49:23 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.574 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.755 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance shutdown successfully after 13 seconds.#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.760 238945 INFO nova.virt.libvirt.driver [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance destroyed successfully.#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.766 238945 INFO nova.virt.libvirt.driver [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance destroyed successfully.#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.767 238945 DEBUG nova.virt.libvirt.vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1908864562',display_name='tempest-ServerDiskConfigTestJSON-server-1908864562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1908864562',id=41,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-pesxuesj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-
member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:09Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=9505af7f-b4b1-45a4-9350-98fd525ce36e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.767 238945 DEBUG nova.network.os_vif_util [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.768 238945 DEBUG nova.network.os_vif_util [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.769 238945 DEBUG os_vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.770 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.771 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec45493d-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:49:23 np0005597378 nova_compute[238941]: 2026-01-27 13:49:23.777 238945 INFO os_vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69')#033[00m
Jan 27 08:49:23 np0005597378 great_satoshi[279257]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:49:23 np0005597378 great_satoshi[279257]: --> All data devices are unavailable
Jan 27 08:49:23 np0005597378 systemd[1]: run-netns-ovnmeta\x2d4856e57c\x2ddca4\x2d4180\x2db9d9\x2d3b2eced0f054.mount: Deactivated successfully.
Jan 27 08:49:23 np0005597378 systemd[1]: libpod-7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d.scope: Deactivated successfully.
Jan 27 08:49:23 np0005597378 podman[279301]: 2026-01-27 13:49:23.964017242 +0000 UTC m=+0.034406904 container died 7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:49:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b5d8d3eb634922bebbb0f4f19d1be3e0fde554e60681a86ac4b192e60d024b5f-merged.mount: Deactivated successfully.
Jan 27 08:49:24 np0005597378 podman[279301]: 2026-01-27 13:49:24.014931029 +0000 UTC m=+0.085320661 container remove 7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:49:24 np0005597378 systemd[1]: libpod-conmon-7401156fc67ae58e1a99d48a31d05648361b59ca3541fb469e8fb84d5d31d15d.scope: Deactivated successfully.
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.062 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deleting instance files /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e_del#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.063 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deletion of /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e_del complete#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.243 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.245 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Creating image(s)#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.266 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.295 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 200 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 649 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.324 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.329 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.420 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.422 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.423 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.423 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.445 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.454 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:24 np0005597378 podman[279434]: 2026-01-27 13:49:24.469052876 +0000 UTC m=+0.047021354 container create 7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ramanujan, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:49:24 np0005597378 systemd[1]: Started libpod-conmon-7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767.scope.
Jan 27 08:49:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:49:24 np0005597378 podman[279434]: 2026-01-27 13:49:24.448793361 +0000 UTC m=+0.026761859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:49:24 np0005597378 podman[279434]: 2026-01-27 13:49:24.55297291 +0000 UTC m=+0.130941418 container init 7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:49:24 np0005597378 podman[279434]: 2026-01-27 13:49:24.56157573 +0000 UTC m=+0.139544208 container start 7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:49:24 np0005597378 competent_ramanujan[279471]: 167 167
Jan 27 08:49:24 np0005597378 systemd[1]: libpod-7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767.scope: Deactivated successfully.
Jan 27 08:49:24 np0005597378 podman[279434]: 2026-01-27 13:49:24.567290054 +0000 UTC m=+0.145258552 container attach 7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ramanujan, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 08:49:24 np0005597378 podman[279434]: 2026-01-27 13:49:24.567587702 +0000 UTC m=+0.145556190 container died 7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ramanujan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 08:49:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4f46e6b299c50471fb6e62c82c07116c425777c28ee1d6d806d9997b8b530f31-merged.mount: Deactivated successfully.
Jan 27 08:49:24 np0005597378 podman[279434]: 2026-01-27 13:49:24.671595544 +0000 UTC m=+0.249564022 container remove 7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ramanujan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 08:49:24 np0005597378 systemd[1]: libpod-conmon-7eaeee39fff5b97a2d1b74a7488bcf1b66f0f2136f678dc73e99d349ed54a767.scope: Deactivated successfully.
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.735 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.765 238945 DEBUG nova.compute.manager [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.815 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] resizing rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.856 238945 INFO nova.compute.manager [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] instance snapshotting#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.859 238945 DEBUG nova.objects.instance [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'flavor' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:24 np0005597378 podman[279549]: 2026-01-27 13:49:24.873722063 +0000 UTC m=+0.050914179 container create ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:49:24 np0005597378 systemd[1]: Started libpod-conmon-ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20.scope.
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.923 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.924 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Ensure instance console log exists: /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.925 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.925 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.925 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.927 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Start _get_guest_xml network_info=[{"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.939 238945 WARNING nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 27 08:49:24 np0005597378 podman[279549]: 2026-01-27 13:49:24.848558718 +0000 UTC m=+0.025750864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.947 238945 DEBUG nova.virt.libvirt.host [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.947 238945 DEBUG nova.virt.libvirt.host [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:49:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.951 238945 DEBUG nova.virt.libvirt.host [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.951 238945 DEBUG nova.virt.libvirt.host [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.952 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.952 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.953 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.953 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.953 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.953 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.953 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.954 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.954 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.954 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.954 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.954 238945 DEBUG nova.virt.hardware [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.955 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540ab96d455419d751a870f31de7091671106740fdf41cd5de5ae36a62ae4246/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540ab96d455419d751a870f31de7091671106740fdf41cd5de5ae36a62ae4246/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540ab96d455419d751a870f31de7091671106740fdf41cd5de5ae36a62ae4246/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540ab96d455419d751a870f31de7091671106740fdf41cd5de5ae36a62ae4246/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:24 np0005597378 podman[279549]: 2026-01-27 13:49:24.97411598 +0000 UTC m=+0.151308126 container init ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 08:49:24 np0005597378 podman[279549]: 2026-01-27 13:49:24.98158994 +0000 UTC m=+0.158782056 container start ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 08:49:24 np0005597378 podman[279549]: 2026-01-27 13:49:24.986986286 +0000 UTC m=+0.164178422 container attach ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:49:24 np0005597378 nova_compute[238941]: 2026-01-27 13:49:24.998 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:25 np0005597378 nova_compute[238941]: 2026-01-27 13:49:25.008 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:25 np0005597378 eager_lamport[279601]: {
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:    "0": [
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:        {
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "devices": [
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "/dev/loop3"
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            ],
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_name": "ceph_lv0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_size": "21470642176",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "name": "ceph_lv0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "tags": {
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.cluster_name": "ceph",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.crush_device_class": "",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.encrypted": "0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.objectstore": "bluestore",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.osd_id": "0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.type": "block",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.vdo": "0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.with_tpm": "0"
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            },
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "type": "block",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "vg_name": "ceph_vg0"
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:        }
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:    ],
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:    "1": [
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:        {
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "devices": [
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "/dev/loop4"
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            ],
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_name": "ceph_lv1",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_size": "21470642176",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "name": "ceph_lv1",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "tags": {
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.cluster_name": "ceph",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.crush_device_class": "",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.encrypted": "0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.objectstore": "bluestore",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.osd_id": "1",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.type": "block",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.vdo": "0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.with_tpm": "0"
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            },
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "type": "block",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "vg_name": "ceph_vg1"
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:        }
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:    ],
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:    "2": [
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:        {
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "devices": [
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "/dev/loop5"
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            ],
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_name": "ceph_lv2",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_size": "21470642176",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "name": "ceph_lv2",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "tags": {
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.cluster_name": "ceph",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.crush_device_class": "",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.encrypted": "0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.objectstore": "bluestore",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.osd_id": "2",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.type": "block",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.vdo": "0",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:                "ceph.with_tpm": "0"
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            },
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "type": "block",
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:            "vg_name": "ceph_vg2"
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:        }
Jan 27 08:49:25 np0005597378 eager_lamport[279601]:    ]
Jan 27 08:49:25 np0005597378 eager_lamport[279601]: }
Jan 27 08:49:25 np0005597378 nova_compute[238941]: 2026-01-27 13:49:25.303 238945 INFO nova.virt.libvirt.driver [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Beginning live snapshot process#033[00m
Jan 27 08:49:25 np0005597378 systemd[1]: libpod-ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20.scope: Deactivated successfully.
Jan 27 08:49:25 np0005597378 podman[279549]: 2026-01-27 13:49:25.323051151 +0000 UTC m=+0.500243267 container died ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 08:49:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-540ab96d455419d751a870f31de7091671106740fdf41cd5de5ae36a62ae4246-merged.mount: Deactivated successfully.
Jan 27 08:49:25 np0005597378 podman[279549]: 2026-01-27 13:49:25.369075087 +0000 UTC m=+0.546267203 container remove ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:49:25 np0005597378 systemd[1]: libpod-conmon-ddf0e1ae1257d7ac901d163b748e60d3b7ee73844187b44fe2699cc7960bba20.scope: Deactivated successfully.
Jan 27 08:49:25 np0005597378 nova_compute[238941]: 2026-01-27 13:49:25.484 238945 DEBUG nova.virt.libvirt.imagebackend [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 08:49:25 np0005597378 nova_compute[238941]: 2026-01-27 13:49:25.559 238945 DEBUG nova.compute.manager [req-ecaa473a-ee65-4ab9-8079-648673d00637 req-2e075a9f-40df-41bd-9e62-f8b06a72d515 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:25 np0005597378 nova_compute[238941]: 2026-01-27 13:49:25.559 238945 DEBUG oslo_concurrency.lockutils [req-ecaa473a-ee65-4ab9-8079-648673d00637 req-2e075a9f-40df-41bd-9e62-f8b06a72d515 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:25 np0005597378 nova_compute[238941]: 2026-01-27 13:49:25.560 238945 DEBUG oslo_concurrency.lockutils [req-ecaa473a-ee65-4ab9-8079-648673d00637 req-2e075a9f-40df-41bd-9e62-f8b06a72d515 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:25 np0005597378 nova_compute[238941]: 2026-01-27 13:49:25.560 238945 DEBUG oslo_concurrency.lockutils [req-ecaa473a-ee65-4ab9-8079-648673d00637 req-2e075a9f-40df-41bd-9e62-f8b06a72d515 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:25 np0005597378 nova_compute[238941]: 2026-01-27 13:49:25.560 238945 DEBUG nova.compute.manager [req-ecaa473a-ee65-4ab9-8079-648673d00637 req-2e075a9f-40df-41bd-9e62-f8b06a72d515 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] No waiting events found dispatching network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:25 np0005597378 nova_compute[238941]: 2026-01-27 13:49:25.560 238945 WARNING nova.compute.manager [req-ecaa473a-ee65-4ab9-8079-648673d00637 req-2e075a9f-40df-41bd-9e62-f8b06a72d515 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received unexpected event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 27 08:49:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:49:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3919722406' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:49:25 np0005597378 nova_compute[238941]: 2026-01-27 13:49:25.598 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:25 np0005597378 nova_compute[238941]: 2026-01-27 13:49:25.615 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:25 np0005597378 nova_compute[238941]: 2026-01-27 13:49:25.619 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:25 np0005597378 nova_compute[238941]: 2026-01-27 13:49:25.762 238945 DEBUG nova.storage.rbd_utils [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(8aa197ef85d34cde99d35b5eed0ed585) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:49:25 np0005597378 podman[279789]: 2026-01-27 13:49:25.834302321 +0000 UTC m=+0.042732769 container create 8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:49:25 np0005597378 systemd[1]: Started libpod-conmon-8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5.scope.
Jan 27 08:49:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:49:25 np0005597378 podman[279789]: 2026-01-27 13:49:25.813262706 +0000 UTC m=+0.021693174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:49:25 np0005597378 podman[279789]: 2026-01-27 13:49:25.916316333 +0000 UTC m=+0.124746791 container init 8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 08:49:25 np0005597378 podman[279789]: 2026-01-27 13:49:25.925844669 +0000 UTC m=+0.134275137 container start 8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_antonelli, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 08:49:25 np0005597378 podman[279789]: 2026-01-27 13:49:25.930022831 +0000 UTC m=+0.138453299 container attach 8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_antonelli, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:49:25 np0005597378 upbeat_antonelli[279807]: 167 167
Jan 27 08:49:25 np0005597378 systemd[1]: libpod-8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5.scope: Deactivated successfully.
Jan 27 08:49:25 np0005597378 podman[279789]: 2026-01-27 13:49:25.933805513 +0000 UTC m=+0.142235961 container died 8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_antonelli, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:49:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-364deba2da8b15fa5bd75e788fde7d5c3867f5af22dc73751adb5dfffeca7830-merged.mount: Deactivated successfully.
Jan 27 08:49:25 np0005597378 podman[279789]: 2026-01-27 13:49:25.979790048 +0000 UTC m=+0.188220496 container remove 8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:49:26 np0005597378 systemd[1]: libpod-conmon-8ce7a3af59002f8007dba8b761463f9479cbf995a6e8b4636d1f0a07c770c1c5.scope: Deactivated successfully.
Jan 27 08:49:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:49:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790782063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:49:26 np0005597378 podman[279831]: 2026-01-27 13:49:26.170564892 +0000 UTC m=+0.042710718 container create 64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.186 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.188 238945 DEBUG nova.virt.libvirt.vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1908864562',display_name='tempest-ServerDiskConfigTestJSON-server-1908864562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1908864562',id=41,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-pesxuesj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:24Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=9505af7f-b4b1-45a4-9350-98fd525ce36e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.188 238945 DEBUG nova.network.os_vif_util [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.189 238945 DEBUG nova.network.os_vif_util [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.191 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  <uuid>9505af7f-b4b1-45a4-9350-98fd525ce36e</uuid>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  <name>instance-00000029</name>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1908864562</nova:name>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:49:24</nova:creationTime>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:        <nova:user uuid="618e06758ec244289bb6f2258e3df2da">tempest-ServerDiskConfigTestJSON-580357788-project-member</nova:user>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:        <nova:project uuid="a34b23d56029482fbb58a6be97575a37">tempest-ServerDiskConfigTestJSON-580357788</nova:project>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:        <nova:port uuid="ec45493d-696f-479c-a443-7428a58bd860">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <entry name="serial">9505af7f-b4b1-45a4-9350-98fd525ce36e</entry>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <entry name="uuid">9505af7f-b4b1-45a4-9350-98fd525ce36e</entry>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9505af7f-b4b1-45a4-9350-98fd525ce36e_disk">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:04:d0:64"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <target dev="tapec45493d-69"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/console.log" append="off"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:49:26 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:49:26 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:49:26 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:49:26 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.191 238945 DEBUG nova.compute.manager [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Preparing to wait for external event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.192 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.192 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.192 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.193 238945 DEBUG nova.virt.libvirt.vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1908864562',display_name='tempest-ServerDiskConfigTestJSON-server-1908864562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1908864562',id=41,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-pesxuesj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:24Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=9505af7f-b4b1-45a4-9350-98fd525ce36e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.193 238945 DEBUG nova.network.os_vif_util [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.194 238945 DEBUG nova.network.os_vif_util [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.195 238945 DEBUG os_vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.195 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.196 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.196 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.199 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec45493d-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.199 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec45493d-69, col_values=(('external_ids', {'iface-id': 'ec45493d-696f-479c-a443-7428a58bd860', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:d0:64', 'vm-uuid': '9505af7f-b4b1-45a4-9350-98fd525ce36e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:26 np0005597378 NetworkManager[48904]: <info>  [1769521766.2017] manager: (tapec45493d-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.203 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:49:26 np0005597378 systemd[1]: Started libpod-conmon-64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892.scope.
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.208 238945 INFO os_vif [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69')#033[00m
Jan 27 08:49:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:49:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881770d3074a6a19a6826b1f79cb1595956292aad353fa888adf151055486ab2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881770d3074a6a19a6826b1f79cb1595956292aad353fa888adf151055486ab2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881770d3074a6a19a6826b1f79cb1595956292aad353fa888adf151055486ab2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:26 np0005597378 podman[279831]: 2026-01-27 13:49:26.154632784 +0000 UTC m=+0.026778630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:49:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881770d3074a6a19a6826b1f79cb1595956292aad353fa888adf151055486ab2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:26 np0005597378 podman[279831]: 2026-01-27 13:49:26.262782678 +0000 UTC m=+0.134928534 container init 64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.266 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.267 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.268 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No VIF found with MAC fa:16:3e:04:d0:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.268 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Using config drive#033[00m
Jan 27 08:49:26 np0005597378 podman[279831]: 2026-01-27 13:49:26.271444491 +0000 UTC m=+0.143590317 container start 64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 08:49:26 np0005597378 podman[279831]: 2026-01-27 13:49:26.275049578 +0000 UTC m=+0.147195434 container attach 64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.288 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 198 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 658 KiB/s rd, 4.8 MiB/s wr, 148 op/s
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.303 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.329 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'keypairs' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Jan 27 08:49:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Jan 27 08:49:26 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.583 238945 DEBUG nova.storage.rbd_utils [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] cloning vms/e053f779-294f-4782-bb33-a14e40753795_disk@8aa197ef85d34cde99d35b5eed0ed585 to images/201a68f8-0ef3-4ae6-9dbe-39217fc2c6ce clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.691 238945 DEBUG nova.storage.rbd_utils [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] flattening images/201a68f8-0ef3-4ae6-9dbe-39217fc2c6ce flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:49:26 np0005597378 nova_compute[238941]: 2026-01-27 13:49:26.881 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:26 np0005597378 lvm[280002]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:49:26 np0005597378 lvm[280003]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:49:26 np0005597378 lvm[280002]: VG ceph_vg0 finished
Jan 27 08:49:26 np0005597378 lvm[280003]: VG ceph_vg1 finished
Jan 27 08:49:26 np0005597378 lvm[280005]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:49:26 np0005597378 lvm[280005]: VG ceph_vg2 finished
Jan 27 08:49:26 np0005597378 lvm[280006]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:49:26 np0005597378 lvm[280006]: VG ceph_vg0 finished
Jan 27 08:49:27 np0005597378 jovial_johnson[279851]: {}
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.065 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Creating config drive at /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config#033[00m
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.070 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkuz2i0x2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.099 238945 DEBUG nova.storage.rbd_utils [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] removing snapshot(8aa197ef85d34cde99d35b5eed0ed585) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:49:27 np0005597378 systemd[1]: libpod-64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892.scope: Deactivated successfully.
Jan 27 08:49:27 np0005597378 systemd[1]: libpod-64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892.scope: Consumed 1.289s CPU time.
Jan 27 08:49:27 np0005597378 podman[279831]: 2026-01-27 13:49:27.104175285 +0000 UTC m=+0.976321131 container died 64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 08:49:27 np0005597378 systemd[1]: var-lib-containers-storage-overlay-881770d3074a6a19a6826b1f79cb1595956292aad353fa888adf151055486ab2-merged.mount: Deactivated successfully.
Jan 27 08:49:27 np0005597378 podman[279831]: 2026-01-27 13:49:27.157581599 +0000 UTC m=+1.029727425 container remove 64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:49:27 np0005597378 systemd[1]: libpod-conmon-64bb8383539fce3febf1a2270734a1b6b44de09a4ab5a955b12575cf41498892.scope: Deactivated successfully.
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.210 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkuz2i0x2" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:49:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:49:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:49:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.238 238945 DEBUG nova.storage.rbd_utils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.242 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.448 238945 DEBUG oslo_concurrency.processutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config 9505af7f-b4b1-45a4-9350-98fd525ce36e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.449 238945 INFO nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deleting local config drive /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e/disk.config because it was imported into RBD.#033[00m
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014302356250740832 of space, bias 1.0, pg target 0.42907068752222494 quantized to 32 (current 32)
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683214479120881 of space, bias 1.0, pg target 0.20049643437362644 quantized to 32 (current 32)
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1942974561793254e-06 of space, bias 4.0, pg target 0.0014331569474151905 quantized to 16 (current 16)
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:49:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:49:27 np0005597378 kernel: tapec45493d-69: entered promiscuous mode
Jan 27 08:49:27 np0005597378 NetworkManager[48904]: <info>  [1769521767.5019] manager: (tapec45493d-69): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Jan 27 08:49:27 np0005597378 systemd-udevd[280000]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:49:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:27Z|00342|binding|INFO|Claiming lport ec45493d-696f-479c-a443-7428a58bd860 for this chassis.
Jan 27 08:49:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:27Z|00343|binding|INFO|ec45493d-696f-479c-a443-7428a58bd860: Claiming fa:16:3e:04:d0:64 10.100.0.7
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.510 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:d0:64 10.100.0.7'], port_security=['fa:16:3e:04:d0:64 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9505af7f-b4b1-45a4-9350-98fd525ce36e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ec45493d-696f-479c-a443-7428a58bd860) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.511 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ec45493d-696f-479c-a443-7428a58bd860 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 bound to our chassis#033[00m
Jan 27 08:49:27 np0005597378 NetworkManager[48904]: <info>  [1769521767.5132] device (tapec45493d-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:49:27 np0005597378 NetworkManager[48904]: <info>  [1769521767.5143] device (tapec45493d-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.514 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4856e57c-dca4-4180-b9d9-3b2eced0f054#033[00m
Jan 27 08:49:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:27Z|00344|binding|INFO|Setting lport ec45493d-696f-479c-a443-7428a58bd860 ovn-installed in OVS
Jan 27 08:49:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:27Z|00345|binding|INFO|Setting lport ec45493d-696f-479c-a443-7428a58bd860 up in Southbound
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.525 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.526 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd9d79c-39cb-4186-a834-60000bbd5615]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.527 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4856e57c-d1 in ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.530 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4856e57c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.530 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3465e2dd-c23a-4222-b6a6-b7283a614db8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.531 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f65f12-dc32-45af-bf91-2f99e07f166f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.542 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[de45cfd3-5375-4004-9f74-cc96d7ea0cf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 systemd-machined[207425]: New machine qemu-48-instance-00000029.
Jan 27 08:49:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Jan 27 08:49:27 np0005597378 systemd[1]: Started Virtual Machine qemu-48-instance-00000029.
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.558 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08c2cdd0-43e4-4152-8e03-1a921b9eee16]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Jan 27 08:49:27 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Jan 27 08:49:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:49:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.591 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ca69e7e7-a17e-4fff-bbd6-8ee350c2e8dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.595 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[10aa8059-00e3-4ec1-bd84-3fcaf4f5a30d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 NetworkManager[48904]: <info>  [1769521767.5974] manager: (tap4856e57c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/160)
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.625 238945 DEBUG nova.storage.rbd_utils [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(snap) on rbd image(201a68f8-0ef3-4ae6-9dbe-39217fc2c6ce) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.626 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9fed162d-44f2-4dc9-86a3-c0645314ea16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.629 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2998ff38-9406-431c-abe7-94bbc2f3e7cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 NetworkManager[48904]: <info>  [1769521767.6534] device (tap4856e57c-d0): carrier: link connected
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.658 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d7080bce-05e6-474b-b7be-0017a5a611f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.677 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7c894810-7ef8-482b-aa93-338ee3e07ae2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440727, 'reachable_time': 44367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280164, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.695 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[54233433-b8ab-42fa-bc24-848a35ce3c12]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:164f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440727, 'tstamp': 440727}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280165, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.713 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[136146c7-f130-4701-9ced-279430e057d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440727, 'reachable_time': 44367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280166, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.748 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[acc20eda-90ca-4c18-b8cd-7d8673b80efb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.815 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[370e571d-641c-4d3d-a985-bbb2d69de533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.817 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.817 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.817 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4856e57c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.819 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:27 np0005597378 NetworkManager[48904]: <info>  [1769521767.8201] manager: (tap4856e57c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Jan 27 08:49:27 np0005597378 kernel: tap4856e57c-d0: entered promiscuous mode
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.821 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.823 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4856e57c-d0, col_values=(('external_ids', {'iface-id': '0436b10b-d79a-417d-bd92-96aac09ed050'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:27Z|00346|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.825 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.826 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cb881114-f6a7-4ea8-bb78-1d86c2a71d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.827 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:49:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:27.828 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'env', 'PROCESS_TAG=haproxy-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4856e57c-dca4-4180-b9d9-3b2eced0f054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:49:27 np0005597378 nova_compute[238941]: 2026-01-27 13:49:27.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:28 np0005597378 podman[280198]: 2026-01-27 13:49:28.230828303 +0000 UTC m=+0.080419431 container create 3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:49:28 np0005597378 podman[280198]: 2026-01-27 13:49:28.171262573 +0000 UTC m=+0.020853691 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:49:28 np0005597378 systemd[1]: Started libpod-conmon-3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393.scope.
Jan 27 08:49:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 195 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 121 op/s
Jan 27 08:49:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:49:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07c073e096dad3b797b9b1b7d88ce6afb214abf8ac21589227b5e41aa3c9c17/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:28 np0005597378 podman[280198]: 2026-01-27 13:49:28.327058987 +0000 UTC m=+0.176650135 container init 3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:49:28 np0005597378 podman[280198]: 2026-01-27 13:49:28.333152021 +0000 UTC m=+0.182743139 container start 3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 27 08:49:28 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [NOTICE]   (280218) : New worker (280220) forked
Jan 27 08:49:28 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [NOTICE]   (280218) : Loading success.
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.405 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.406 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.425 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.498 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.499 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.506 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.507 238945 INFO nova.compute.claims [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Claim successful on node compute-0.ctlplane.example.com
Jan 27 08:49:28 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:28Z|00347|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 08:49:28 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:28Z|00348|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 08:49:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Jan 27 08:49:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Jan 27 08:49:28 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.660 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.694 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 9505af7f-b4b1-45a4-9350-98fd525ce36e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.694 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521768.6815255, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.694 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Started (Lifecycle Event)
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.722 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.733 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521768.6821322, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.733 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Paused (Lifecycle Event)
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.755 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.759 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:49:28 np0005597378 nova_compute[238941]: 2026-01-27 13:49:28.781 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 08:49:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:49:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4081868010' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.210 238945 DEBUG nova.compute.manager [req-dda1548b-e41f-4351-ab59-26e237b7d869 req-5bd39a6d-5e5b-4eac-9806-052939152e86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.210 238945 DEBUG oslo_concurrency.lockutils [req-dda1548b-e41f-4351-ab59-26e237b7d869 req-5bd39a6d-5e5b-4eac-9806-052939152e86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.210 238945 DEBUG oslo_concurrency.lockutils [req-dda1548b-e41f-4351-ab59-26e237b7d869 req-5bd39a6d-5e5b-4eac-9806-052939152e86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.211 238945 DEBUG oslo_concurrency.lockutils [req-dda1548b-e41f-4351-ab59-26e237b7d869 req-5bd39a6d-5e5b-4eac-9806-052939152e86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.211 238945 DEBUG nova.compute.manager [req-dda1548b-e41f-4351-ab59-26e237b7d869 req-5bd39a6d-5e5b-4eac-9806-052939152e86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Processing event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.212 238945 DEBUG nova.compute.manager [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.214 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521769.2141647, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.214 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Resumed (Lifecycle Event)
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.216 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.222 238945 INFO nova.virt.libvirt.driver [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance spawned successfully.
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.222 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.225 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.230 238945 DEBUG nova.compute.provider_tree [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.239 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.244 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.247 238945 DEBUG nova.scheduler.client.report [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.252 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.253 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.253 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.253 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.254 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.254 238945 DEBUG nova.virt.libvirt.driver [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.284 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.287 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.288 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.334 238945 DEBUG nova.compute.manager [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.347 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.347 238945 DEBUG nova.network.neutron [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.385 238945 INFO nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.393 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.393 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.394 238945 DEBUG nova.objects.instance [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.415 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.466 238945 DEBUG oslo_concurrency.lockutils [None req-98c4b7c3-7906-4d3b-819f-ca904034b30f 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.516 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.518 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.518 238945 INFO nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Creating image(s)
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.538 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.569 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.590 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.594 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.621 238945 DEBUG nova.policy [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '87f8bb66fb254be5933d0d3a386e26b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd32c565a864e42ac9bf945538130cd1a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.661 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.662 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.663 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.663 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.683 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.687 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 95331449-9db7-44fa-8add-58a0505da212_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:49:29 np0005597378 nova_compute[238941]: 2026-01-27 13:49:29.984 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 95331449-9db7-44fa-8add-58a0505da212_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:49:30 np0005597378 nova_compute[238941]: 2026-01-27 13:49:30.045 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] resizing rbd image 95331449-9db7-44fa-8add-58a0505da212_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:49:30 np0005597378 nova_compute[238941]: 2026-01-27 13:49:30.142 238945 DEBUG nova.objects.instance [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lazy-loading 'migration_context' on Instance uuid 95331449-9db7-44fa-8add-58a0505da212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:49:30 np0005597378 nova_compute[238941]: 2026-01-27 13:49:30.156 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:49:30 np0005597378 nova_compute[238941]: 2026-01-27 13:49:30.157 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Ensure instance console log exists: /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:49:30 np0005597378 nova_compute[238941]: 2026-01-27 13:49:30.157 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:49:30 np0005597378 nova_compute[238941]: 2026-01-27 13:49:30.158 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:49:30 np0005597378 nova_compute[238941]: 2026-01-27 13:49:30.158 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:49:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 252 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 12 MiB/s wr, 282 op/s
Jan 27 08:49:30 np0005597378 nova_compute[238941]: 2026-01-27 13:49:30.430 238945 INFO nova.virt.libvirt.driver [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Snapshot image upload complete
Jan 27 08:49:30 np0005597378 nova_compute[238941]: 2026-01-27 13:49:30.431 238945 INFO nova.compute.manager [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 5.51 seconds to snapshot the instance on the hypervisor.
Jan 27 08:49:30 np0005597378 nova_compute[238941]: 2026-01-27 13:49:30.694 238945 DEBUG nova.network.neutron [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Successfully created port: 0d1f569f-2627-40d8-9a8c-f67def34c7ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 08:49:30 np0005597378 nova_compute[238941]: 2026-01-27 13:49:30.894 238945 DEBUG nova.compute.manager [None req-7b714198-1992-4109-b544-b4c6d8a3c03e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 27 08:49:31 np0005597378 nova_compute[238941]: 2026-01-27 13:49:31.203 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:49:31 np0005597378 nova_compute[238941]: 2026-01-27 13:49:31.853 238945 DEBUG nova.compute.manager [req-01b33b8d-cdb6-4d6e-8208-c1963d65312a req-388b2242-3bcb-4883-9c32-30126619cc54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:49:31 np0005597378 nova_compute[238941]: 2026-01-27 13:49:31.854 238945 DEBUG oslo_concurrency.lockutils [req-01b33b8d-cdb6-4d6e-8208-c1963d65312a req-388b2242-3bcb-4883-9c32-30126619cc54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:49:31 np0005597378 nova_compute[238941]: 2026-01-27 13:49:31.854 238945 DEBUG oslo_concurrency.lockutils [req-01b33b8d-cdb6-4d6e-8208-c1963d65312a req-388b2242-3bcb-4883-9c32-30126619cc54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:49:31 np0005597378 nova_compute[238941]: 2026-01-27 13:49:31.854 238945 DEBUG oslo_concurrency.lockutils [req-01b33b8d-cdb6-4d6e-8208-c1963d65312a req-388b2242-3bcb-4883-9c32-30126619cc54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:49:31 np0005597378 nova_compute[238941]: 2026-01-27 13:49:31.855 238945 DEBUG nova.compute.manager [req-01b33b8d-cdb6-4d6e-8208-c1963d65312a req-388b2242-3bcb-4883-9c32-30126619cc54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] No waiting events found dispatching network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:31 np0005597378 nova_compute[238941]: 2026-01-27 13:49:31.855 238945 WARNING nova.compute.manager [req-01b33b8d-cdb6-4d6e-8208-c1963d65312a req-388b2242-3bcb-4883-9c32-30126619cc54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received unexpected event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:49:31 np0005597378 nova_compute[238941]: 2026-01-27 13:49:31.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:49:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 252 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 10 MiB/s wr, 246 op/s
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.369 238945 DEBUG nova.network.neutron [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Successfully updated port: 0d1f569f-2627-40d8-9a8c-f67def34c7ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.426 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "refresh_cache-95331449-9db7-44fa-8add-58a0505da212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.426 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquired lock "refresh_cache-95331449-9db7-44fa-8add-58a0505da212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.427 238945 DEBUG nova.network.neutron [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.529 238945 DEBUG nova.compute.manager [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-changed-0d1f569f-2627-40d8-9a8c-f67def34c7ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.530 238945 DEBUG nova.compute.manager [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Refreshing instance network info cache due to event network-changed-0d1f569f-2627-40d8-9a8c-f67def34c7ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.530 238945 DEBUG oslo_concurrency.lockutils [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-95331449-9db7-44fa-8add-58a0505da212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.606 238945 DEBUG nova.network.neutron [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.824 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.825 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.826 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.826 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.827 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.828 238945 INFO nova.compute.manager [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Terminating instance#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.829 238945 DEBUG nova.compute.manager [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.831 238945 DEBUG nova.compute.manager [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:32 np0005597378 kernel: tapec45493d-69 (unregistering): left promiscuous mode
Jan 27 08:49:32 np0005597378 NetworkManager[48904]: <info>  [1769521772.8762] device (tapec45493d-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:49:32 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:32Z|00349|binding|INFO|Releasing lport ec45493d-696f-479c-a443-7428a58bd860 from this chassis (sb_readonly=0)
Jan 27 08:49:32 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:32Z|00350|binding|INFO|Setting lport ec45493d-696f-479c-a443-7428a58bd860 down in Southbound
Jan 27 08:49:32 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:32Z|00351|binding|INFO|Removing iface tapec45493d-69 ovn-installed in OVS
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.888 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.905 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:32 np0005597378 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000029.scope: Deactivated successfully.
Jan 27 08:49:32 np0005597378 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000029.scope: Consumed 4.702s CPU time.
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.937 238945 INFO nova.compute.manager [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] instance snapshotting#033[00m
Jan 27 08:49:32 np0005597378 nova_compute[238941]: 2026-01-27 13:49:32.938 238945 DEBUG nova.objects.instance [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'flavor' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:32.939 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:d0:64 10.100.0.7'], port_security=['fa:16:3e:04:d0:64 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9505af7f-b4b1-45a4-9350-98fd525ce36e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ec45493d-696f-479c-a443-7428a58bd860) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:49:32 np0005597378 systemd-machined[207425]: Machine qemu-48-instance-00000029 terminated.
Jan 27 08:49:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:32.941 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ec45493d-696f-479c-a443-7428a58bd860 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis#033[00m
Jan 27 08:49:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:32.942 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:49:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:32.943 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[788022f7-0f8c-4a56-901a-4fdd717c2e93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:32.944 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace which is not needed anymore#033[00m
Jan 27 08:49:33 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [NOTICE]   (280218) : haproxy version is 2.8.14-c23fe91
Jan 27 08:49:33 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [NOTICE]   (280218) : path to executable is /usr/sbin/haproxy
Jan 27 08:49:33 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [WARNING]  (280218) : Exiting Master process...
Jan 27 08:49:33 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [WARNING]  (280218) : Exiting Master process...
Jan 27 08:49:33 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [ALERT]    (280218) : Current worker (280220) exited with code 143 (Terminated)
Jan 27 08:49:33 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[280214]: [WARNING]  (280218) : All workers exited. Exiting... (0)
Jan 27 08:49:33 np0005597378 systemd[1]: libpod-3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393.scope: Deactivated successfully.
Jan 27 08:49:33 np0005597378 podman[280482]: 2026-01-27 13:49:33.06198422 +0000 UTC m=+0.042694497 container died 3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.062 238945 INFO nova.virt.libvirt.driver [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Instance destroyed successfully.#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.062 238945 DEBUG nova.objects.instance [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'resources' on Instance uuid 9505af7f-b4b1-45a4-9350-98fd525ce36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.080 238945 DEBUG nova.virt.libvirt.vif [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1908864562',display_name='tempest-ServerDiskConfigTestJSON-server-1908864562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1908864562',id=41,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-pesxuesj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:49:29Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=9505af7f-b4b1-45a4-9350-98fd525ce36e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.081 238945 DEBUG nova.network.os_vif_util [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "ec45493d-696f-479c-a443-7428a58bd860", "address": "fa:16:3e:04:d0:64", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45493d-69", "ovs_interfaceid": "ec45493d-696f-479c-a443-7428a58bd860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.081 238945 DEBUG nova.network.os_vif_util [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.082 238945 DEBUG os_vif [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.083 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.083 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec45493d-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.085 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.090 238945 INFO os_vif [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:d0:64,bridge_name='br-int',has_traffic_filtering=True,id=ec45493d-696f-479c-a443-7428a58bd860,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45493d-69')#033[00m
Jan 27 08:49:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393-userdata-shm.mount: Deactivated successfully.
Jan 27 08:49:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a07c073e096dad3b797b9b1b7d88ce6afb214abf8ac21589227b5e41aa3c9c17-merged.mount: Deactivated successfully.
Jan 27 08:49:33 np0005597378 podman[280482]: 2026-01-27 13:49:33.118013545 +0000 UTC m=+0.098723822 container cleanup 3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:49:33 np0005597378 systemd[1]: libpod-conmon-3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393.scope: Deactivated successfully.
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.149 238945 INFO nova.virt.libvirt.driver [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Beginning live snapshot process#033[00m
Jan 27 08:49:33 np0005597378 podman[280539]: 2026-01-27 13:49:33.186259528 +0000 UTC m=+0.046956273 container remove 3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 08:49:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.192 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05b1b828-1bd1-44d3-abb9-76dd166195ea]: (4, ('Tue Jan 27 01:49:33 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393)\n3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393\nTue Jan 27 01:49:33 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393)\n3e42db7b8a977cc9b5dfcf073929fa46a711ff6d131ce70e22b53dc60d6ef393\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.194 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[89234ac1-4279-4ff9-8a9a-0220c3f5aefe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.197 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:33 np0005597378 kernel: tap4856e57c-d0: left promiscuous mode
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.221 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30b2f126-359a-4ce1-92f4-1cc36d5655ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.233 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[03223564-bbf7-497b-9fb3-cd9d8334d29e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.234 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fce39285-86b9-49d6-896b-f17609670bcf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.252 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2f2bac-1ed7-44f7-ab93-71d5723add1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440720, 'reachable_time': 22587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280555, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:33 np0005597378 systemd[1]: run-netns-ovnmeta\x2d4856e57c\x2ddca4\x2d4180\x2db9d9\x2d3b2eced0f054.mount: Deactivated successfully.
Jan 27 08:49:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.256 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:49:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:33.256 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[ad650226-1b5b-455b-b2ad-764704bdadf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.289 238945 DEBUG nova.virt.libvirt.imagebackend [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.397 238945 INFO nova.virt.libvirt.driver [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deleting instance files /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e_del#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.398 238945 INFO nova.virt.libvirt.driver [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deletion of /var/lib/nova/instances/9505af7f-b4b1-45a4-9350-98fd525ce36e_del complete#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.460 238945 INFO nova.compute.manager [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.460 238945 DEBUG oslo.service.loopingcall [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.461 238945 DEBUG nova.compute.manager [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.461 238945 DEBUG nova.network.neutron [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.519 238945 DEBUG nova.storage.rbd_utils [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(97801be9ef934bb082b08cd87ef46afa) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:49:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Jan 27 08:49:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Jan 27 08:49:33 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.703 238945 DEBUG nova.storage.rbd_utils [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] cloning vms/e053f779-294f-4782-bb33-a14e40753795_disk@97801be9ef934bb082b08cd87ef46afa to images/278511b1-cf16-4acd-b4a2-f18a6df1c9bb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.790 238945 DEBUG nova.storage.rbd_utils [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] flattening images/278511b1-cf16-4acd-b4a2-f18a6df1c9bb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.875 238945 DEBUG nova.network.neutron [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Updating instance_info_cache with network_info: [{"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.900 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Releasing lock "refresh_cache-95331449-9db7-44fa-8add-58a0505da212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.900 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Instance network_info: |[{"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.900 238945 DEBUG oslo_concurrency.lockutils [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-95331449-9db7-44fa-8add-58a0505da212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.901 238945 DEBUG nova.network.neutron [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Refreshing network info cache for port 0d1f569f-2627-40d8-9a8c-f67def34c7ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.903 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Start _get_guest_xml network_info=[{"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.907 238945 WARNING nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.913 238945 DEBUG nova.virt.libvirt.host [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.914 238945 DEBUG nova.virt.libvirt.host [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.919 238945 DEBUG nova.virt.libvirt.host [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.919 238945 DEBUG nova.virt.libvirt.host [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.920 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.920 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.920 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.920 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.921 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.921 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.921 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.921 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.922 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.922 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.922 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.922 238945 DEBUG nova.virt.hardware [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.924 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.962 238945 DEBUG nova.compute.manager [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-unplugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.962 238945 DEBUG oslo_concurrency.lockutils [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.963 238945 DEBUG oslo_concurrency.lockutils [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.963 238945 DEBUG oslo_concurrency.lockutils [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.963 238945 DEBUG nova.compute.manager [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] No waiting events found dispatching network-vif-unplugged-ec45493d-696f-479c-a443-7428a58bd860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.963 238945 DEBUG nova.compute.manager [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-unplugged-ec45493d-696f-479c-a443-7428a58bd860 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.964 238945 DEBUG nova.compute.manager [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.964 238945 DEBUG oslo_concurrency.lockutils [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.964 238945 DEBUG oslo_concurrency.lockutils [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.964 238945 DEBUG oslo_concurrency.lockutils [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.965 238945 DEBUG nova.compute.manager [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] No waiting events found dispatching network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:33 np0005597378 nova_compute[238941]: 2026-01-27 13:49:33.965 238945 WARNING nova.compute.manager [req-637ac6ef-350c-4865-a4d4-b7f4d001a209 req-ce34fe44-4b49-44b5-9261-bab640a08c31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received unexpected event network-vif-plugged-ec45493d-696f-479c-a443-7428a58bd860 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:49:34 np0005597378 nova_compute[238941]: 2026-01-27 13:49:34.135 238945 DEBUG nova.storage.rbd_utils [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] removing snapshot(97801be9ef934bb082b08cd87ef46afa) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:49:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 275 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 11 MiB/s wr, 369 op/s
Jan 27 08:49:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:49:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654935694' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:49:34 np0005597378 nova_compute[238941]: 2026-01-27 13:49:34.481 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:34 np0005597378 nova_compute[238941]: 2026-01-27 13:49:34.499 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:34 np0005597378 nova_compute[238941]: 2026-01-27 13:49:34.502 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Jan 27 08:49:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Jan 27 08:49:34 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Jan 27 08:49:34 np0005597378 nova_compute[238941]: 2026-01-27 13:49:34.708 238945 DEBUG nova.storage.rbd_utils [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(snap) on rbd image(278511b1-cf16-4acd-b4a2-f18a6df1c9bb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:49:34 np0005597378 nova_compute[238941]: 2026-01-27 13:49:34.761 238945 DEBUG nova.network.neutron [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:49:34 np0005597378 nova_compute[238941]: 2026-01-27 13:49:34.780 238945 INFO nova.compute.manager [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Took 1.32 seconds to deallocate network for instance.#033[00m
Jan 27 08:49:34 np0005597378 nova_compute[238941]: 2026-01-27 13:49:34.833 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:34 np0005597378 nova_compute[238941]: 2026-01-27 13:49:34.834 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.020 238945 DEBUG oslo_concurrency.processutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.111 238945 DEBUG nova.compute.manager [req-a15ce296-e18c-4442-b2cf-962d2fd90596 req-24ecb93b-5a07-4e1a-b754-158aee8a7f0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Received event network-vif-deleted-ec45493d-696f-479c-a443-7428a58bd860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:49:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237745690' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.130 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.131 238945 DEBUG nova.virt.libvirt.vif [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:49:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1112327884',display_name='tempest-ImagesNegativeTestJSON-server-1112327884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1112327884',id=43,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d32c565a864e42ac9bf945538130cd1a',ramdisk_id='',reservation_id='r-bvg2egf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-218830169',owner_user_name='tempest-ImagesNegativeTe
stJSON-218830169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:29Z,user_data=None,user_id='87f8bb66fb254be5933d0d3a386e26b3',uuid=95331449-9db7-44fa-8add-58a0505da212,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.132 238945 DEBUG nova.network.os_vif_util [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Converting VIF {"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.132 238945 DEBUG nova.network.os_vif_util [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.134 238945 DEBUG nova.objects.instance [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lazy-loading 'pci_devices' on Instance uuid 95331449-9db7-44fa-8add-58a0505da212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.151 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  <uuid>95331449-9db7-44fa-8add-58a0505da212</uuid>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  <name>instance-0000002b</name>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <nova:name>tempest-ImagesNegativeTestJSON-server-1112327884</nova:name>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:49:33</nova:creationTime>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:        <nova:user uuid="87f8bb66fb254be5933d0d3a386e26b3">tempest-ImagesNegativeTestJSON-218830169-project-member</nova:user>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:        <nova:project uuid="d32c565a864e42ac9bf945538130cd1a">tempest-ImagesNegativeTestJSON-218830169</nova:project>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:        <nova:port uuid="0d1f569f-2627-40d8-9a8c-f67def34c7ab">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <entry name="serial">95331449-9db7-44fa-8add-58a0505da212</entry>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <entry name="uuid">95331449-9db7-44fa-8add-58a0505da212</entry>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/95331449-9db7-44fa-8add-58a0505da212_disk">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/95331449-9db7-44fa-8add-58a0505da212_disk.config">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:90:ac:dd"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <target dev="tap0d1f569f-26"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/console.log" append="off"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:49:35 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:49:35 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:49:35 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:49:35 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.153 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Preparing to wait for external event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.153 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.153 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.154 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.154 238945 DEBUG nova.virt.libvirt.vif [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:49:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1112327884',display_name='tempest-ImagesNegativeTestJSON-server-1112327884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1112327884',id=43,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d32c565a864e42ac9bf945538130cd1a',ramdisk_id='',reservation_id='r-bvg2egf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-218830169',owner_user_name='tempest-Images
NegativeTestJSON-218830169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:29Z,user_data=None,user_id='87f8bb66fb254be5933d0d3a386e26b3',uuid=95331449-9db7-44fa-8add-58a0505da212,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.155 238945 DEBUG nova.network.os_vif_util [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Converting VIF {"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.156 238945 DEBUG nova.network.os_vif_util [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.156 238945 DEBUG os_vif [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.157 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.158 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.160 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.160 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d1f569f-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.160 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d1f569f-26, col_values=(('external_ids', {'iface-id': '0d1f569f-2627-40d8-9a8c-f67def34c7ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:ac:dd', 'vm-uuid': '95331449-9db7-44fa-8add-58a0505da212'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:35 np0005597378 NetworkManager[48904]: <info>  [1769521775.1633] manager: (tap0d1f569f-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.168 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.168 238945 INFO os_vif [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26')#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.230 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.230 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.230 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] No VIF found with MAC fa:16:3e:90:ac:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.231 238945 INFO nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Using config drive#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.252 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:49:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828880178' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.570 238945 DEBUG oslo_concurrency.processutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.576 238945 DEBUG nova.compute.provider_tree [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.596 238945 DEBUG nova.scheduler.client.report [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.632 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.683 238945 INFO nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Creating config drive at /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/disk.config#033[00m
Jan 27 08:49:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.688 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm08uhml9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:35 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.720 238945 INFO nova.scheduler.client.report [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Deleted allocations for instance 9505af7f-b4b1-45a4-9350-98fd525ce36e#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.724 238945 DEBUG nova.network.neutron [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Updated VIF entry in instance network info cache for port 0d1f569f-2627-40d8-9a8c-f67def34c7ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.724 238945 DEBUG nova.network.neutron [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Updating instance_info_cache with network_info: [{"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.753 238945 DEBUG oslo_concurrency.lockutils [req-f1e1d33a-78dc-4e27-8ebe-6fa79f159b17 req-2a9a8938-a7e8-41f6-aedc-13ce5c7d4212 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-95331449-9db7-44fa-8add-58a0505da212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.785 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.785 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.820 238945 DEBUG oslo_concurrency.lockutils [None req-38d50907-050c-42f2-a3f4-bfb717f26a38 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "9505af7f-b4b1-45a4-9350-98fd525ce36e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.824 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.826 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm08uhml9" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.845 238945 DEBUG nova.storage.rbd_utils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] rbd image 95331449-9db7-44fa-8add-58a0505da212_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.849 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/disk.config 95331449-9db7-44fa-8add-58a0505da212_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.970 238945 DEBUG oslo_concurrency.processutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/disk.config 95331449-9db7-44fa-8add-58a0505da212_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:35 np0005597378 nova_compute[238941]: 2026-01-27 13:49:35.971 238945 INFO nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Deleting local config drive /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212/disk.config because it was imported into RBD.#033[00m
Jan 27 08:49:36 np0005597378 kernel: tap0d1f569f-26: entered promiscuous mode
Jan 27 08:49:36 np0005597378 NetworkManager[48904]: <info>  [1769521776.0161] manager: (tap0d1f569f-26): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Jan 27 08:49:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:36Z|00352|binding|INFO|Claiming lport 0d1f569f-2627-40d8-9a8c-f67def34c7ab for this chassis.
Jan 27 08:49:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:36Z|00353|binding|INFO|0d1f569f-2627-40d8-9a8c-f67def34c7ab: Claiming fa:16:3e:90:ac:dd 10.100.0.5
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:36Z|00354|binding|INFO|Setting lport 0d1f569f-2627-40d8-9a8c-f67def34c7ab ovn-installed in OVS
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.043 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:36 np0005597378 systemd-udevd[280854]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:49:36 np0005597378 systemd-machined[207425]: New machine qemu-49-instance-0000002b.
Jan 27 08:49:36 np0005597378 NetworkManager[48904]: <info>  [1769521776.0555] device (tap0d1f569f-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:49:36 np0005597378 NetworkManager[48904]: <info>  [1769521776.0562] device (tap0d1f569f-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:49:36 np0005597378 systemd[1]: Started Virtual Machine qemu-49-instance-0000002b.
Jan 27 08:49:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:36Z|00355|binding|INFO|Setting lport 0d1f569f-2627-40d8-9a8c-f67def34c7ab up in Southbound
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.090 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:ac:dd 10.100.0.5'], port_security=['fa:16:3e:90:ac:dd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '95331449-9db7-44fa-8add-58a0505da212', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dde970d8-838c-4623-9005-11bbdca7fe66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd32c565a864e42ac9bf945538130cd1a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e0fecfea-abcd-49cc-a280-30dc78a3d332', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dca6bbaf-8a25-467a-a4ea-08eb872f9094, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0d1f569f-2627-40d8-9a8c-f67def34c7ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.091 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1f569f-2627-40d8-9a8c-f67def34c7ab in datapath dde970d8-838c-4623-9005-11bbdca7fe66 bound to our chassis#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.093 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dde970d8-838c-4623-9005-11bbdca7fe66#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.106 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[152dac1d-bf4d-43cf-9677-e853897a0aef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.107 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdde970d8-81 in ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.109 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdde970d8-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.109 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[74db72f2-cd73-4340-990e-69769b175b8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.110 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5be1b4a5-22f7-4988-b125-0efd230dded7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.114 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.115 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.122 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0c7e4f-1292-412b-b0a5-9f7c3c275334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.124 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.124 238945 INFO nova.compute.claims [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.137 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9f7cbcc8-5199-4114-ac5e-775fd6b8ebd3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.167 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[11bd4ee0-a2a5-45ce-8334-e7c44b7c1d0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.171 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66321da8-f444-418d-b2db-a63b0d816878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 systemd-udevd[280856]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:49:36 np0005597378 NetworkManager[48904]: <info>  [1769521776.1729] manager: (tapdde970d8-80): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.217 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2ecfe42f-1041-4f82-8274-c8fe4b64cd2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.221 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[24e18db0-0ddd-4a58-8828-27bf6133eb83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 NetworkManager[48904]: <info>  [1769521776.2450] device (tapdde970d8-80): carrier: link connected
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.250 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c66103fc-463c-40fb-833a-1a0dad7c597d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.268 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b8be00-ae59-4d81-865d-f63d77fb423d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdde970d8-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:0e:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441586, 'reachable_time': 33933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280887, 'error': None, 'target': 'ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.283 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[20b031c9-a65e-4ac5-b603-e0cfad9256ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:eb0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 441586, 'tstamp': 441586}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280888, 'error': None, 'target': 'ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.300 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[91145e09-5086-44d3-9ac4-31633c628dcf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdde970d8-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:0e:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441586, 'reachable_time': 33933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280889, 'error': None, 'target': 'ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 286 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.7 MiB/s wr, 247 op/s
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.327 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[98a4012c-bee1-4f41-80cb-cf5fc4b2cb4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.350 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.376 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66642d96-dffd-4a48-bcf3-86f5318868d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.378 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdde970d8-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.378 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.378 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdde970d8-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:36 np0005597378 NetworkManager[48904]: <info>  [1769521776.3810] manager: (tapdde970d8-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Jan 27 08:49:36 np0005597378 kernel: tapdde970d8-80: entered promiscuous mode
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.384 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdde970d8-80, col_values=(('external_ids', {'iface-id': 'a3a74a42-b4d1-4cb8-8fb3-22189c3060c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:36Z|00356|binding|INFO|Releasing lport a3a74a42-b4d1-4cb8-8fb3-22189c3060c3 from this chassis (sb_readonly=0)
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.406 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dde970d8-838c-4623-9005-11bbdca7fe66.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dde970d8-838c-4623-9005-11bbdca7fe66.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.406 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e60f0760-f7e1-46d4-adf2-f1c94f259ac6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.407 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-dde970d8-838c-4623-9005-11bbdca7fe66
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/dde970d8-838c-4623-9005-11bbdca7fe66.pid.haproxy
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID dde970d8-838c-4623-9005-11bbdca7fe66
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:49:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:36.408 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66', 'env', 'PROCESS_TAG=haproxy-dde970d8-838c-4623-9005-11bbdca7fe66', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dde970d8-838c-4623-9005-11bbdca7fe66.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.408 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.634 238945 DEBUG nova.compute.manager [req-966d8ffa-e576-4585-974f-9ed74c1b143c req-bf039a97-2f66-4f90-bd01-d62abe4b6031 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.635 238945 DEBUG oslo_concurrency.lockutils [req-966d8ffa-e576-4585-974f-9ed74c1b143c req-bf039a97-2f66-4f90-bd01-d62abe4b6031 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.635 238945 DEBUG oslo_concurrency.lockutils [req-966d8ffa-e576-4585-974f-9ed74c1b143c req-bf039a97-2f66-4f90-bd01-d62abe4b6031 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.635 238945 DEBUG oslo_concurrency.lockutils [req-966d8ffa-e576-4585-974f-9ed74c1b143c req-bf039a97-2f66-4f90-bd01-d62abe4b6031 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.636 238945 DEBUG nova.compute.manager [req-966d8ffa-e576-4585-974f-9ed74c1b143c req-bf039a97-2f66-4f90-bd01-d62abe4b6031 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Processing event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:49:36 np0005597378 podman[280941]: 2026-01-27 13:49:36.76131893 +0000 UTC m=+0.023108771 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:49:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754743920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.920 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.926 238945 DEBUG nova.compute.provider_tree [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:49:36 np0005597378 podman[280941]: 2026-01-27 13:49:36.947953792 +0000 UTC m=+0.209743613 container create 6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:49:36 np0005597378 nova_compute[238941]: 2026-01-27 13:49:36.953 238945 DEBUG nova.scheduler.client.report [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:49:36 np0005597378 systemd[1]: Started libpod-conmon-6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3.scope.
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.007 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.009 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:49:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:49:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbafc8f85070bcddb752a288848fbd3f1e10ca3d87d307fcf89b2964580c91b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:37 np0005597378 podman[280941]: 2026-01-27 13:49:37.069999411 +0000 UTC m=+0.331789252 container init 6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 08:49:37 np0005597378 podman[280941]: 2026-01-27 13:49:37.076486354 +0000 UTC m=+0.338276185 container start 6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 08:49:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:49:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Jan 27 08:49:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Jan 27 08:49:37 np0005597378 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [NOTICE]   (280998) : New worker (281001) forked
Jan 27 08:49:37 np0005597378 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [NOTICE]   (280998) : Loading success.
Jan 27 08:49:37 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.140 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.140 238945 DEBUG nova.network.neutron [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.193 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521777.1935422, 95331449-9db7-44fa-8add-58a0505da212 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.194 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] VM Started (Lifecycle Event)#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.196 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.199 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.202 238945 INFO nova.virt.libvirt.driver [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] Instance spawned successfully.#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.203 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.208 238945 INFO nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.225 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.229 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.250 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.251 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.251 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.252 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.252 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.253 238945 DEBUG nova.virt.libvirt.driver [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.260 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.261 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521777.1946607, 95331449-9db7-44fa-8add-58a0505da212 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.261 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.269 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.289 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.325 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.329 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521777.1988711, 95331449-9db7-44fa-8add-58a0505da212 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.329 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.378 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.381 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.419 238945 INFO nova.virt.libvirt.driver [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Snapshot image upload complete#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.420 238945 INFO nova.compute.manager [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 4.45 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.474 238945 INFO nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Took 7.96 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.474 238945 DEBUG nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.486 238945 DEBUG nova.policy [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '618e06758ec244289bb6f2258e3df2da', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a34b23d56029482fbb58a6be97575a37', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.506 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.653 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.654 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.655 238945 INFO nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Creating image(s)#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.675 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.695 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.717 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.722 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.762 238945 INFO nova.compute.manager [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Took 9.29 seconds to build instance.#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.780 238945 DEBUG oslo_concurrency.lockutils [None req-3ffadfb1-ef91-4f49-a20e-0810e55758e5 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.796 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.797 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.797 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.798 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.821 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.824 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:37 np0005597378 nova_compute[238941]: 2026-01-27 13:49:37.998 238945 DEBUG nova.compute.manager [None req-178a6ce7-6ca1-4422-982c-ccd0638fd20e 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.086 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.154 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] resizing rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.237 238945 DEBUG nova.objects.instance [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'migration_context' on Instance uuid 18066d7e-b7a1-4ab2-97af-84ef678cfef9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.251 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.252 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Ensure instance console log exists: /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.252 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.253 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.253 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 299 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.6 MiB/s wr, 294 op/s
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.553 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.553 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.554 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.554 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.554 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.556 238945 INFO nova.compute.manager [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Terminating instance#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.557 238945 DEBUG nova.compute.manager [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:49:38 np0005597378 kernel: tap0d1f569f-26 (unregistering): left promiscuous mode
Jan 27 08:49:38 np0005597378 NetworkManager[48904]: <info>  [1769521778.5916] device (tap0d1f569f-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:49:38 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:38Z|00357|binding|INFO|Releasing lport 0d1f569f-2627-40d8-9a8c-f67def34c7ab from this chassis (sb_readonly=0)
Jan 27 08:49:38 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:38Z|00358|binding|INFO|Setting lport 0d1f569f-2627-40d8-9a8c-f67def34c7ab down in Southbound
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:38 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:38Z|00359|binding|INFO|Removing iface tap0d1f569f-26 ovn-installed in OVS
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.603 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.619 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:38 np0005597378 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Jan 27 08:49:38 np0005597378 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002b.scope: Consumed 2.535s CPU time.
Jan 27 08:49:38 np0005597378 systemd-machined[207425]: Machine qemu-49-instance-0000002b terminated.
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.655 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:ac:dd 10.100.0.5'], port_security=['fa:16:3e:90:ac:dd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '95331449-9db7-44fa-8add-58a0505da212', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dde970d8-838c-4623-9005-11bbdca7fe66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd32c565a864e42ac9bf945538130cd1a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e0fecfea-abcd-49cc-a280-30dc78a3d332', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dca6bbaf-8a25-467a-a4ea-08eb872f9094, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0d1f569f-2627-40d8-9a8c-f67def34c7ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.656 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1f569f-2627-40d8-9a8c-f67def34c7ab in datapath dde970d8-838c-4623-9005-11bbdca7fe66 unbound from our chassis#033[00m
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.657 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dde970d8-838c-4623-9005-11bbdca7fe66, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.658 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8a9eca-a898-4e3a-b741-6bd019ba6e21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.659 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66 namespace which is not needed anymore#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.756 238945 DEBUG nova.compute.manager [req-8b6e2c3b-fe08-49cf-9f2b-910761801833 req-b92f8d07-1fde-49d0-b347-da077afbca90 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.757 238945 DEBUG oslo_concurrency.lockutils [req-8b6e2c3b-fe08-49cf-9f2b-910761801833 req-b92f8d07-1fde-49d0-b347-da077afbca90 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.758 238945 DEBUG oslo_concurrency.lockutils [req-8b6e2c3b-fe08-49cf-9f2b-910761801833 req-b92f8d07-1fde-49d0-b347-da077afbca90 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.758 238945 DEBUG oslo_concurrency.lockutils [req-8b6e2c3b-fe08-49cf-9f2b-910761801833 req-b92f8d07-1fde-49d0-b347-da077afbca90 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.758 238945 DEBUG nova.compute.manager [req-8b6e2c3b-fe08-49cf-9f2b-910761801833 req-b92f8d07-1fde-49d0-b347-da077afbca90 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] No waiting events found dispatching network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.759 238945 WARNING nova.compute.manager [req-8b6e2c3b-fe08-49cf-9f2b-910761801833 req-b92f8d07-1fde-49d0-b347-da077afbca90 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received unexpected event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:49:38 np0005597378 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [NOTICE]   (280998) : haproxy version is 2.8.14-c23fe91
Jan 27 08:49:38 np0005597378 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [NOTICE]   (280998) : path to executable is /usr/sbin/haproxy
Jan 27 08:49:38 np0005597378 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [WARNING]  (280998) : Exiting Master process...
Jan 27 08:49:38 np0005597378 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [ALERT]    (280998) : Current worker (281001) exited with code 143 (Terminated)
Jan 27 08:49:38 np0005597378 neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66[280960]: [WARNING]  (280998) : All workers exited. Exiting... (0)
Jan 27 08:49:38 np0005597378 systemd[1]: libpod-6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3.scope: Deactivated successfully.
Jan 27 08:49:38 np0005597378 podman[281202]: 2026-01-27 13:49:38.790810265 +0000 UTC m=+0.048156285 container died 6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.800 238945 INFO nova.virt.libvirt.driver [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] Instance destroyed successfully.#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.801 238945 DEBUG nova.objects.instance [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lazy-loading 'resources' on Instance uuid 95331449-9db7-44fa-8add-58a0505da212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3-userdata-shm.mount: Deactivated successfully.
Jan 27 08:49:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4bbafc8f85070bcddb752a288848fbd3f1e10ca3d87d307fcf89b2964580c91b-merged.mount: Deactivated successfully.
Jan 27 08:49:38 np0005597378 podman[281202]: 2026-01-27 13:49:38.832442792 +0000 UTC m=+0.089788802 container cleanup 6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:49:38 np0005597378 systemd[1]: libpod-conmon-6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3.scope: Deactivated successfully.
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.882 238945 DEBUG nova.virt.libvirt.vif [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:49:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1112327884',display_name='tempest-ImagesNegativeTestJSON-server-1112327884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1112327884',id=43,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d32c565a864e42ac9bf945538130cd1a',ramdisk_id='',reservation_id='r-bvg2egf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-218830169',owner_user_name='tempest-ImagesNegativeTestJSON-218830169-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:49:37Z,user_data=None,user_id='87f8bb66fb254be5933d0d3a386e26b3',uuid=95331449-9db7-44fa-8add-58a0505da212,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.882 238945 DEBUG nova.network.os_vif_util [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Converting VIF {"id": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "address": "fa:16:3e:90:ac:dd", "network": {"id": "dde970d8-838c-4623-9005-11bbdca7fe66", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1157446890-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d32c565a864e42ac9bf945538130cd1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1f569f-26", "ovs_interfaceid": "0d1f569f-2627-40d8-9a8c-f67def34c7ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.883 238945 DEBUG nova.network.os_vif_util [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.883 238945 DEBUG os_vif [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.885 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d1f569f-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.887 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.888 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.890 238945 INFO os_vif [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:ac:dd,bridge_name='br-int',has_traffic_filtering=True,id=0d1f569f-2627-40d8-9a8c-f67def34c7ab,network=Network(dde970d8-838c-4623-9005-11bbdca7fe66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1f569f-26')#033[00m
Jan 27 08:49:38 np0005597378 podman[281242]: 2026-01-27 13:49:38.893732509 +0000 UTC m=+0.038912967 container remove 6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.899 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d5334f02-4539-4d20-9840-1967b49656f4]: (4, ('Tue Jan 27 01:49:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66 (6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3)\n6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3\nTue Jan 27 01:49:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66 (6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3)\n6bc6286fc2278e3343646e90e5698523f0e1c553b8959d389c8ad8b1e8dd3de3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.901 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[098145ba-7e1d-451a-9904-182ab866921a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.902 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdde970d8-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.903 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:38 np0005597378 kernel: tapdde970d8-80: left promiscuous mode
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.923 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.927 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[139801c3-cfc7-40e0-918c-44da31f556a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:38 np0005597378 nova_compute[238941]: 2026-01-27 13:49:38.941 238945 DEBUG nova.network.neutron [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Successfully created port: 28184873-9427-478d-93ec-80092904c5d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.941 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[29e02fac-50bc-4a7d-ac25-481acbd0a6eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.942 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1589638a-df64-4a24-b4c3-8ab4157f602e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.958 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5df02d-2b38-4f3f-b78d-f2914fb09a34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441578, 'reachable_time': 34494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281275, 'error': None, 'target': 'ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.961 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dde970d8-838c-4623-9005-11bbdca7fe66 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:49:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:38.961 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[235646e0-b056-4b0e-9dec-1619999794f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:38 np0005597378 systemd[1]: run-netns-ovnmeta\x2ddde970d8\x2d838c\x2d4623\x2d9005\x2d11bbdca7fe66.mount: Deactivated successfully.
Jan 27 08:49:39 np0005597378 nova_compute[238941]: 2026-01-27 13:49:39.163 238945 INFO nova.virt.libvirt.driver [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Deleting instance files /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212_del#033[00m
Jan 27 08:49:39 np0005597378 nova_compute[238941]: 2026-01-27 13:49:39.164 238945 INFO nova.virt.libvirt.driver [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Deletion of /var/lib/nova/instances/95331449-9db7-44fa-8add-58a0505da212_del complete#033[00m
Jan 27 08:49:39 np0005597378 nova_compute[238941]: 2026-01-27 13:49:39.282 238945 INFO nova.compute.manager [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:49:39 np0005597378 nova_compute[238941]: 2026-01-27 13:49:39.283 238945 DEBUG oslo.service.loopingcall [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:49:39 np0005597378 nova_compute[238941]: 2026-01-27 13:49:39.283 238945 DEBUG nova.compute.manager [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:49:39 np0005597378 nova_compute[238941]: 2026-01-27 13:49:39.283 238945 DEBUG nova.network.neutron [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:49:39 np0005597378 nova_compute[238941]: 2026-01-27 13:49:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:49:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 349 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 12 MiB/s wr, 448 op/s
Jan 27 08:49:40 np0005597378 nova_compute[238941]: 2026-01-27 13:49:40.874 238945 DEBUG nova.compute.manager [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-vif-unplugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:40 np0005597378 nova_compute[238941]: 2026-01-27 13:49:40.875 238945 DEBUG oslo_concurrency.lockutils [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:40 np0005597378 nova_compute[238941]: 2026-01-27 13:49:40.875 238945 DEBUG oslo_concurrency.lockutils [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:40 np0005597378 nova_compute[238941]: 2026-01-27 13:49:40.875 238945 DEBUG oslo_concurrency.lockutils [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:40 np0005597378 nova_compute[238941]: 2026-01-27 13:49:40.875 238945 DEBUG nova.compute.manager [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] No waiting events found dispatching network-vif-unplugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:40 np0005597378 nova_compute[238941]: 2026-01-27 13:49:40.875 238945 DEBUG nova.compute.manager [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-vif-unplugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:49:40 np0005597378 nova_compute[238941]: 2026-01-27 13:49:40.876 238945 DEBUG nova.compute.manager [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:40 np0005597378 nova_compute[238941]: 2026-01-27 13:49:40.876 238945 DEBUG oslo_concurrency.lockutils [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "95331449-9db7-44fa-8add-58a0505da212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:40 np0005597378 nova_compute[238941]: 2026-01-27 13:49:40.876 238945 DEBUG oslo_concurrency.lockutils [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:40 np0005597378 nova_compute[238941]: 2026-01-27 13:49:40.876 238945 DEBUG oslo_concurrency.lockutils [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:40 np0005597378 nova_compute[238941]: 2026-01-27 13:49:40.876 238945 DEBUG nova.compute.manager [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] No waiting events found dispatching network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:40 np0005597378 nova_compute[238941]: 2026-01-27 13:49:40.877 238945 WARNING nova.compute.manager [req-128da226-79ac-4e7e-bfa5-c3b1b001b1f1 req-03b68848-d584-443b-bef4-bb4afbcd0aa3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received unexpected event network-vif-plugged-0d1f569f-2627-40d8-9a8c-f67def34c7ab for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:49:41 np0005597378 nova_compute[238941]: 2026-01-27 13:49:41.888 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:41 np0005597378 nova_compute[238941]: 2026-01-27 13:49:41.921 238945 DEBUG nova.network.neutron [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.001 238945 INFO nova.compute.manager [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] Took 2.72 seconds to deallocate network for instance.#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:49:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Jan 27 08:49:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.188 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.189 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:42 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.262 238945 DEBUG oslo_concurrency.processutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 349 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.6 MiB/s wr, 334 op/s
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.387 238945 DEBUG nova.compute.manager [req-d572827f-2f0a-48ae-8668-310314ee08ea req-5664909b-0d31-44da-b504-63160165db02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 95331449-9db7-44fa-8add-58a0505da212] Received event network-vif-deleted-0d1f569f-2627-40d8-9a8c-f67def34c7ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.389 238945 DEBUG nova.compute.manager [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.422 238945 DEBUG nova.network.neutron [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Successfully updated port: 28184873-9427-478d-93ec-80092904c5d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.663 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "refresh_cache-18066d7e-b7a1-4ab2-97af-84ef678cfef9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.664 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquired lock "refresh_cache-18066d7e-b7a1-4ab2-97af-84ef678cfef9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.664 238945 DEBUG nova.network.neutron [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.671 238945 INFO nova.compute.manager [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] instance snapshotting#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.672 238945 DEBUG nova.objects.instance [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'flavor' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:49:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3254014913' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.822 238945 DEBUG oslo_concurrency.processutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.828 238945 DEBUG nova.compute.provider_tree [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.906 238945 DEBUG nova.scheduler.client.report [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.935 238945 DEBUG nova.network.neutron [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.957 238945 INFO nova.virt.libvirt.driver [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Beginning live snapshot process#033[00m
Jan 27 08:49:42 np0005597378 nova_compute[238941]: 2026-01-27 13:49:42.961 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:43 np0005597378 nova_compute[238941]: 2026-01-27 13:49:43.014 238945 INFO nova.scheduler.client.report [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Deleted allocations for instance 95331449-9db7-44fa-8add-58a0505da212#033[00m
Jan 27 08:49:43 np0005597378 nova_compute[238941]: 2026-01-27 13:49:43.218 238945 DEBUG oslo_concurrency.lockutils [None req-bbfc2555-05e5-4ad0-8324-76d24aebe9d9 87f8bb66fb254be5933d0d3a386e26b3 d32c565a864e42ac9bf945538130cd1a - - default default] Lock "95331449-9db7-44fa-8add-58a0505da212" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:43 np0005597378 nova_compute[238941]: 2026-01-27 13:49:43.227 238945 DEBUG nova.virt.libvirt.imagebackend [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 08:49:43 np0005597378 nova_compute[238941]: 2026-01-27 13:49:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:49:43 np0005597378 nova_compute[238941]: 2026-01-27 13:49:43.427 238945 DEBUG nova.storage.rbd_utils [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(5b25e5f56a644231a2b54689706fa256) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:49:43 np0005597378 nova_compute[238941]: 2026-01-27 13:49:43.459 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:43 np0005597378 nova_compute[238941]: 2026-01-27 13:49:43.461 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:43 np0005597378 nova_compute[238941]: 2026-01-27 13:49:43.461 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:43 np0005597378 nova_compute[238941]: 2026-01-27 13:49:43.462 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:49:43 np0005597378 nova_compute[238941]: 2026-01-27 13:49:43.462 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:43 np0005597378 nova_compute[238941]: 2026-01-27 13:49:43.847 238945 DEBUG nova.network.neutron [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Updating instance_info_cache with network_info: [{"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:49:43 np0005597378 nova_compute[238941]: 2026-01-27 13:49:43.889 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:49:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2466665687' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.063 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.076 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Releasing lock "refresh_cache-18066d7e-b7a1-4ab2-97af-84ef678cfef9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.076 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Instance network_info: |[{"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.079 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Start _get_guest_xml network_info=[{"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.087 238945 WARNING nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.092 238945 DEBUG nova.virt.libvirt.host [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.093 238945 DEBUG nova.virt.libvirt.host [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.103 238945 DEBUG nova.virt.libvirt.host [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.104 238945 DEBUG nova.virt.libvirt.host [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.104 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.105 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.105 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.106 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.106 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.106 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.106 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.107 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.107 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.107 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.108 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.108 238945 DEBUG nova.virt.hardware [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.112 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Jan 27 08:49:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Jan 27 08:49:44 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Jan 27 08:49:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 325 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 8.3 MiB/s wr, 364 op/s
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.360 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.360 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.446 238945 DEBUG nova.storage.rbd_utils [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] cloning vms/e053f779-294f-4782-bb33-a14e40753795_disk@5b25e5f56a644231a2b54689706fa256 to images/d0a3bc89-763d-4068-92fd-9d77a44e1110 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.571 238945 DEBUG nova.compute.manager [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-changed-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.572 238945 DEBUG nova.compute.manager [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Refreshing instance network info cache due to event network-changed-28184873-9427-478d-93ec-80092904c5d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.572 238945 DEBUG oslo_concurrency.lockutils [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-18066d7e-b7a1-4ab2-97af-84ef678cfef9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.572 238945 DEBUG oslo_concurrency.lockutils [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-18066d7e-b7a1-4ab2-97af-84ef678cfef9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.573 238945 DEBUG nova.network.neutron [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Refreshing network info cache for port 28184873-9427-478d-93ec-80092904c5d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.601 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.602 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3858MB free_disk=59.91117651667446GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.602 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.602 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.702 238945 DEBUG nova.storage.rbd_utils [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] flattening images/d0a3bc89-763d-4068-92fd-9d77a44e1110 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:49:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:49:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424281871' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.796 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.684s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.846 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.851 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.915 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e053f779-294f-4782-bb33-a14e40753795 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.916 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 18066d7e-b7a1-4ab2-97af-84ef678cfef9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.916 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.917 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:49:44 np0005597378 nova_compute[238941]: 2026-01-27 13:49:44.976 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.244 238945 DEBUG nova.storage.rbd_utils [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] removing snapshot(5b25e5f56a644231a2b54689706fa256) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:49:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:49:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807286098' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.534 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.535 238945 DEBUG nova.virt.libvirt.vif [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:49:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1423841950',display_name='tempest-ServerDiskConfigTestJSON-server-1423841950',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1423841950',id=44,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-269q2vdm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDi
skConfigTestJSON-580357788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:37Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=18066d7e-b7a1-4ab2-97af-84ef678cfef9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.536 238945 DEBUG nova.network.os_vif_util [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.537 238945 DEBUG nova.network.os_vif_util [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.538 238945 DEBUG nova.objects.instance [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'pci_devices' on Instance uuid 18066d7e-b7a1-4ab2-97af-84ef678cfef9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.557 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  <uuid>18066d7e-b7a1-4ab2-97af-84ef678cfef9</uuid>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  <name>instance-0000002c</name>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1423841950</nova:name>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:49:44</nova:creationTime>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:        <nova:user uuid="618e06758ec244289bb6f2258e3df2da">tempest-ServerDiskConfigTestJSON-580357788-project-member</nova:user>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:        <nova:project uuid="a34b23d56029482fbb58a6be97575a37">tempest-ServerDiskConfigTestJSON-580357788</nova:project>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:        <nova:port uuid="28184873-9427-478d-93ec-80092904c5d1">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <entry name="serial">18066d7e-b7a1-4ab2-97af-84ef678cfef9</entry>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <entry name="uuid">18066d7e-b7a1-4ab2-97af-84ef678cfef9</entry>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk.config">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:30:ed:6e"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <target dev="tap28184873-94"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/console.log" append="off"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:49:45 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:49:45 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:49:45 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:49:45 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.566 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Preparing to wait for external event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.567 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.568 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.568 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.569 238945 DEBUG nova.virt.libvirt.vif [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:49:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1423841950',display_name='tempest-ServerDiskConfigTestJSON-server-1423841950',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1423841950',id=44,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-269q2vdm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:37Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=18066d7e-b7a1-4ab2-97af-84ef678cfef9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.570 238945 DEBUG nova.network.os_vif_util [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.571 238945 DEBUG nova.network.os_vif_util [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.571 238945 DEBUG os_vif [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.573 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.574 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.578 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.578 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28184873-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.579 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28184873-94, col_values=(('external_ids', {'iface-id': '28184873-9427-478d-93ec-80092904c5d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:ed:6e', 'vm-uuid': '18066d7e-b7a1-4ab2-97af-84ef678cfef9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.581 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:45 np0005597378 NetworkManager[48904]: <info>  [1769521785.5818] manager: (tap28184873-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.583 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.588 238945 INFO os_vif [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94')#033[00m
Jan 27 08:49:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:49:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4110706741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.660 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.661 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.661 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] No VIF found with MAC fa:16:3e:30:ed:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.662 238945 INFO nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Using config drive#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.692 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.703 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.726s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.710 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.730 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.765 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:49:45 np0005597378 nova_compute[238941]: 2026-01-27 13:49:45.766 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:45 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.158 238945 INFO nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Creating config drive at /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/disk.config#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.162 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1maxst1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Jan 27 08:49:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Jan 27 08:49:46 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.259 238945 DEBUG nova.storage.rbd_utils [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(snap) on rbd image(d0a3bc89-763d-4068-92fd-9d77a44e1110) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.297 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.298 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.298 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.301 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1maxst1" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 333 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.1 MiB/s wr, 77 op/s
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.336 238945 DEBUG nova.storage.rbd_utils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] rbd image 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.345 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/disk.config 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.541 238945 DEBUG oslo_concurrency.processutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/disk.config 18066d7e-b7a1-4ab2-97af-84ef678cfef9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.542 238945 INFO nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Deleting local config drive /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9/disk.config because it was imported into RBD.#033[00m
Jan 27 08:49:46 np0005597378 kernel: tap28184873-94: entered promiscuous mode
Jan 27 08:49:46 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:46Z|00360|binding|INFO|Claiming lport 28184873-9427-478d-93ec-80092904c5d1 for this chassis.
Jan 27 08:49:46 np0005597378 NetworkManager[48904]: <info>  [1769521786.6109] manager: (tap28184873-94): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Jan 27 08:49:46 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:46Z|00361|binding|INFO|28184873-9427-478d-93ec-80092904c5d1: Claiming fa:16:3e:30:ed:6e 10.100.0.8
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:46 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:46Z|00362|binding|INFO|Setting lport 28184873-9427-478d-93ec-80092904c5d1 ovn-installed in OVS
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.633 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:46 np0005597378 systemd-udevd[281619]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:49:46 np0005597378 NetworkManager[48904]: <info>  [1769521786.6617] device (tap28184873-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:49:46 np0005597378 NetworkManager[48904]: <info>  [1769521786.6626] device (tap28184873-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:49:46 np0005597378 systemd-machined[207425]: New machine qemu-50-instance-0000002c.
Jan 27 08:49:46 np0005597378 systemd[1]: Started Virtual Machine qemu-50-instance-0000002c.
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.760 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.761 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.761 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.766 238945 DEBUG nova.network.neutron [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Updated VIF entry in instance network info cache for port 28184873-9427-478d-93ec-80092904c5d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.767 238945 DEBUG nova.network.neutron [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Updating instance_info_cache with network_info: [{"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:49:46 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:46Z|00363|binding|INFO|Setting lport 28184873-9427-478d-93ec-80092904c5d1 up in Southbound
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.865 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ed:6e 10.100.0.8'], port_security=['fa:16:3e:30:ed:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '18066d7e-b7a1-4ab2-97af-84ef678cfef9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=28184873-9427-478d-93ec-80092904c5d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.867 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 28184873-9427-478d-93ec-80092904c5d1 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 bound to our chassis#033[00m
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.868 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4856e57c-dca4-4180-b9d9-3b2eced0f054#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.892 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1c44d6d8-692d-4c8e-8832-1cdee8c3a625]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.894 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4856e57c-d1 in ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.897 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4856e57c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.897 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[54ab0563-fb67-4dc4-bfd6-b948e84e4803]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.899 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e622dee6-64d6-4443-8854-4ae1c6704e55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.904 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.904 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.905 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.905 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:49:46 np0005597378 nova_compute[238941]: 2026-01-27 13:49:46.905 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.916 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[5747b25d-3fbe-461f-8f6e-e0f51a27e976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.936 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9c346008-2ca0-4abe-a238-61c35de03cef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.979 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1063b269-896b-44df-b64e-097480b82506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:46 np0005597378 NetworkManager[48904]: <info>  [1769521786.9891] manager: (tap4856e57c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/168)
Jan 27 08:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:46.988 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c915f77b-fc9d-49d5-995d-deaf4701ba95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:46 np0005597378 systemd-udevd[281623]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.001 238945 DEBUG oslo_concurrency.lockutils [req-4ec504d8-1b10-4c49-ad22-70211e65d289 req-b8fa8a6d-4951-45e5-afff-436ae5db8687 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-18066d7e-b7a1-4ab2-97af-84ef678cfef9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.031 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[59b787c2-4f5b-4ffd-b85f-d049d3736400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.035 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e839a7-a2af-466f-86c3-415dfd2f41a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:47 np0005597378 podman[281632]: 2026-01-27 13:49:47.058972986 +0000 UTC m=+0.110309674 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 27 08:49:47 np0005597378 NetworkManager[48904]: <info>  [1769521787.0673] device (tap4856e57c-d0): carrier: link connected
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.074 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd455ef-8180-4948-9573-aa5935fc4d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.098 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7e1ba2ff-a42d-4afb-8908-1724b675360c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442669, 'reachable_time': 24876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281679, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ab2149-4d94-478e-aae8-2b85b8dd9eac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:164f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442669, 'tstamp': 442669}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281681, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.136 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4fad7a-fda2-4e77-aac2-f112b31714df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4856e57c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:16:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442669, 'reachable_time': 24876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281682, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.176 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[354780af-6d92-481c-9d6e-44628ccfe007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Jan 27 08:49:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[239e06cf-919c-497b-9f75-8269bf84f69b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.266 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.266 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.267 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4856e57c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.269 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:47 np0005597378 NetworkManager[48904]: <info>  [1769521787.2701] manager: (tap4856e57c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Jan 27 08:49:47 np0005597378 kernel: tap4856e57c-d0: entered promiscuous mode
Jan 27 08:49:47 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.273 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4856e57c-d0, col_values=(('external_ids', {'iface-id': '0436b10b-d79a-417d-bd92-96aac09ed050'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:47 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:47Z|00364|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.279 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.279 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.280 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[23c0c136-3249-4907-86eb-92f503831e1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.281 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/4856e57c-dca4-4180-b9d9-3b2eced0f054.pid.haproxy
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 4856e57c-dca4-4180-b9d9-3b2eced0f054
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:49:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:47.282 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'env', 'PROCESS_TAG=haproxy-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4856e57c-dca4-4180-b9d9-3b2eced0f054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:47 np0005597378 podman[281714]: 2026-01-27 13:49:47.667059636 +0000 UTC m=+0.065050547 container create 9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:49:47 np0005597378 systemd[1]: Started libpod-conmon-9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4.scope.
Jan 27 08:49:47 np0005597378 podman[281714]: 2026-01-27 13:49:47.626948229 +0000 UTC m=+0.024939160 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:49:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:49:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdabd1cec64e293ba315615fb1c480c80783f7c2713661e4b67b8741b28f5f15/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:47 np0005597378 podman[281714]: 2026-01-27 13:49:47.774316337 +0000 UTC m=+0.172307278 container init 9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 08:49:47 np0005597378 podman[281714]: 2026-01-27 13:49:47.78037086 +0000 UTC m=+0.178361771 container start 9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 08:49:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:49:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:49:47 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [NOTICE]   (281776) : New worker (281779) forked
Jan 27 08:49:47 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [NOTICE]   (281776) : Loading success.
Jan 27 08:49:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:49:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:49:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:49:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.825 238945 DEBUG nova.compute.manager [req-c8bc4c3e-2a8f-420f-afb0-09972962bad5 req-04652f76-cb12-426e-bbd2-991c6a4fc9bc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.826 238945 DEBUG oslo_concurrency.lockutils [req-c8bc4c3e-2a8f-420f-afb0-09972962bad5 req-04652f76-cb12-426e-bbd2-991c6a4fc9bc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.827 238945 DEBUG oslo_concurrency.lockutils [req-c8bc4c3e-2a8f-420f-afb0-09972962bad5 req-04652f76-cb12-426e-bbd2-991c6a4fc9bc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.827 238945 DEBUG oslo_concurrency.lockutils [req-c8bc4c3e-2a8f-420f-afb0-09972962bad5 req-04652f76-cb12-426e-bbd2-991c6a4fc9bc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.827 238945 DEBUG nova.compute.manager [req-c8bc4c3e-2a8f-420f-afb0-09972962bad5 req-04652f76-cb12-426e-bbd2-991c6a4fc9bc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Processing event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.828 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521787.8273184, 18066d7e-b7a1-4ab2-97af-84ef678cfef9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.828 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] VM Started (Lifecycle Event)#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.830 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.833 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.838 238945 INFO nova.virt.libvirt.driver [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Instance spawned successfully.#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.839 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.849 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.852 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.860 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.861 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.861 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.862 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.862 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.863 238945 DEBUG nova.virt.libvirt.driver [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.870 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.871 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521787.8277075, 18066d7e-b7a1-4ab2-97af-84ef678cfef9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.871 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.896 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.902 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521787.8325677, 18066d7e-b7a1-4ab2-97af-84ef678cfef9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.903 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.931 238945 INFO nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Took 10.28 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.931 238945 DEBUG nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.932 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.942 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:49:47 np0005597378 nova_compute[238941]: 2026-01-27 13:49:47.982 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:49:48 np0005597378 nova_compute[238941]: 2026-01-27 13:49:48.015 238945 INFO nova.compute.manager [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Took 11.92 seconds to build instance.#033[00m
Jan 27 08:49:48 np0005597378 nova_compute[238941]: 2026-01-27 13:49:48.041 238945 DEBUG oslo_concurrency.lockutils [None req-90a4b7dc-6421-4e08-baff-72defb37555c 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:48 np0005597378 nova_compute[238941]: 2026-01-27 13:49:48.059 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521773.058304, 9505af7f-b4b1-45a4-9350-98fd525ce36e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:48 np0005597378 nova_compute[238941]: 2026-01-27 13:49:48.059 238945 INFO nova.compute.manager [-] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:49:48 np0005597378 nova_compute[238941]: 2026-01-27 13:49:48.080 238945 DEBUG nova.compute.manager [None req-860d89d4-c523-4a93-beda-061bc8801954 - - - - - -] [instance: 9505af7f-b4b1-45a4-9350-98fd525ce36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 350 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.7 MiB/s wr, 162 op/s
Jan 27 08:49:48 np0005597378 nova_compute[238941]: 2026-01-27 13:49:48.611 238945 INFO nova.virt.libvirt.driver [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Snapshot image upload complete#033[00m
Jan 27 08:49:48 np0005597378 nova_compute[238941]: 2026-01-27 13:49:48.612 238945 INFO nova.compute.manager [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 5.88 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 27 08:49:48 np0005597378 podman[281788]: 2026-01-27 13:49:48.726872809 +0000 UTC m=+0.054927526 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 08:49:48 np0005597378 nova_compute[238941]: 2026-01-27 13:49:48.983 238945 DEBUG nova.compute.manager [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Jan 27 08:49:48 np0005597378 nova_compute[238941]: 2026-01-27 13:49:48.984 238945 DEBUG nova.compute.manager [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458#033[00m
Jan 27 08:49:48 np0005597378 nova_compute[238941]: 2026-01-27 13:49:48.984 238945 DEBUG nova.compute.manager [None req-510b5b19-02be-490f-bba5-cef64286a7ed 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deleting image 201a68f8-0ef3-4ae6-9dbe-39217fc2c6ce _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463#033[00m
Jan 27 08:49:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:49Z|00365|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 08:49:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:49Z|00366|binding|INFO|Releasing lport 0436b10b-d79a-417d-bd92-96aac09ed050 from this chassis (sb_readonly=0)
Jan 27 08:49:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Jan 27 08:49:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Jan 27 08:49:49 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.322 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "f4421f99-7c11-4331-a349-c0d9713d4dfc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.323 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.347 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.416 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.417 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.426 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.426 238945 INFO nova.compute.claims [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.567 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.940 238945 DEBUG nova.compute.manager [req-f2377341-fb64-40ef-af26-c18568310f7c req-eae96729-d22a-44e5-b7bd-4e3a8189bbdd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.941 238945 DEBUG oslo_concurrency.lockutils [req-f2377341-fb64-40ef-af26-c18568310f7c req-eae96729-d22a-44e5-b7bd-4e3a8189bbdd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.942 238945 DEBUG oslo_concurrency.lockutils [req-f2377341-fb64-40ef-af26-c18568310f7c req-eae96729-d22a-44e5-b7bd-4e3a8189bbdd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.942 238945 DEBUG oslo_concurrency.lockutils [req-f2377341-fb64-40ef-af26-c18568310f7c req-eae96729-d22a-44e5-b7bd-4e3a8189bbdd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.942 238945 DEBUG nova.compute.manager [req-f2377341-fb64-40ef-af26-c18568310f7c req-eae96729-d22a-44e5-b7bd-4e3a8189bbdd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:49 np0005597378 nova_compute[238941]: 2026-01-27 13:49:49.943 238945 WARNING nova.compute.manager [req-f2377341-fb64-40ef-af26-c18568310f7c req-eae96729-d22a-44e5-b7bd-4e3a8189bbdd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received unexpected event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:49:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:49:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779392142' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.223 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.232 238945 DEBUG nova.compute.provider_tree [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.254 238945 DEBUG nova.scheduler.client.report [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.287 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.288 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:49:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 386 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 7.8 MiB/s wr, 256 op/s
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.338 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.339 238945 DEBUG nova.network.neutron [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.362 238945 INFO nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.380 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.494 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.496 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.497 238945 INFO nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Creating image(s)#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.528 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.557 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.584 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.590 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.661 238945 DEBUG nova.policy [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c17e8011c1b44fa3beaccb9dacec4913', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '33ea67a44b80493cb75d174ebab96310', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.675 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.676 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.676 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.677 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.713 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:50 np0005597378 nova_compute[238941]: 2026-01-27 13:49:50.718 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f f4421f99-7c11-4331-a349-c0d9713d4dfc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:51 np0005597378 nova_compute[238941]: 2026-01-27 13:49:51.035 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f f4421f99-7c11-4331-a349-c0d9713d4dfc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:51 np0005597378 nova_compute[238941]: 2026-01-27 13:49:51.087 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] resizing rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:49:51 np0005597378 nova_compute[238941]: 2026-01-27 13:49:51.178 238945 DEBUG nova.objects.instance [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lazy-loading 'migration_context' on Instance uuid f4421f99-7c11-4331-a349-c0d9713d4dfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:51 np0005597378 nova_compute[238941]: 2026-01-27 13:49:51.194 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:49:51 np0005597378 nova_compute[238941]: 2026-01-27 13:49:51.194 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Ensure instance console log exists: /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:49:51 np0005597378 nova_compute[238941]: 2026-01-27 13:49:51.195 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:51 np0005597378 nova_compute[238941]: 2026-01-27 13:49:51.195 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:51 np0005597378 nova_compute[238941]: 2026-01-27 13:49:51.195 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:51 np0005597378 nova_compute[238941]: 2026-01-27 13:49:51.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:51 np0005597378 nova_compute[238941]: 2026-01-27 13:49:51.912 238945 DEBUG nova.network.neutron [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Successfully created port: 6de0ab34-ff4c-4eee-a7d5-56df50d305ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:49:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:49:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Jan 27 08:49:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Jan 27 08:49:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Jan 27 08:49:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 386 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 7.2 MiB/s wr, 247 op/s
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.066 238945 DEBUG nova.network.neutron [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Successfully updated port: 6de0ab34-ff4c-4eee-a7d5-56df50d305ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.089 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.090 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquired lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.090 238945 DEBUG nova.network.neutron [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.092 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.092 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.092 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.093 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.093 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.095 238945 INFO nova.compute.manager [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Terminating instance#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.096 238945 DEBUG nova.compute.manager [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:49:53 np0005597378 kernel: tap28184873-94 (unregistering): left promiscuous mode
Jan 27 08:49:53 np0005597378 NetworkManager[48904]: <info>  [1769521793.1338] device (tap28184873-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:49:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:53Z|00367|binding|INFO|Releasing lport 28184873-9427-478d-93ec-80092904c5d1 from this chassis (sb_readonly=0)
Jan 27 08:49:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:53Z|00368|binding|INFO|Setting lport 28184873-9427-478d-93ec-80092904c5d1 down in Southbound
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.143 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:53Z|00369|binding|INFO|Removing iface tap28184873-94 ovn-installed in OVS
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.149 238945 DEBUG nova.compute.manager [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Received event network-changed-6de0ab34-ff4c-4eee-a7d5-56df50d305ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.149 238945 DEBUG nova.compute.manager [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Refreshing instance network info cache due to event network-changed-6de0ab34-ff4c-4eee-a7d5-56df50d305ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.150 238945 DEBUG oslo_concurrency.lockutils [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.149 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ed:6e 10.100.0.8'], port_security=['fa:16:3e:30:ed:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '18066d7e-b7a1-4ab2-97af-84ef678cfef9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=28184873-9427-478d-93ec-80092904c5d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.150 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 28184873-9427-478d-93ec-80092904c5d1 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.152 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.153 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ae84c3-2b72-4a31-8025-025d791069ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.153 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 namespace which is not needed anymore#033[00m
Jan 27 08:49:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Jan 27 08:49:53 np0005597378 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Jan 27 08:49:53 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Jan 27 08:49:53 np0005597378 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002c.scope: Consumed 6.414s CPU time.
Jan 27 08:49:53 np0005597378 systemd-machined[207425]: Machine qemu-50-instance-0000002c terminated.
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.236 238945 DEBUG nova.network.neutron [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:49:53 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [NOTICE]   (281776) : haproxy version is 2.8.14-c23fe91
Jan 27 08:49:53 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [NOTICE]   (281776) : path to executable is /usr/sbin/haproxy
Jan 27 08:49:53 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [WARNING]  (281776) : Exiting Master process...
Jan 27 08:49:53 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [ALERT]    (281776) : Current worker (281779) exited with code 143 (Terminated)
Jan 27 08:49:53 np0005597378 neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054[281769]: [WARNING]  (281776) : All workers exited. Exiting... (0)
Jan 27 08:49:53 np0005597378 systemd[1]: libpod-9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4.scope: Deactivated successfully.
Jan 27 08:49:53 np0005597378 podman[282018]: 2026-01-27 13:49:53.304517977 +0000 UTC m=+0.059099718 container died 9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:49:53 np0005597378 kernel: tap28184873-94: entered promiscuous mode
Jan 27 08:49:53 np0005597378 kernel: tap28184873-94 (unregistering): left promiscuous mode
Jan 27 08:49:53 np0005597378 NetworkManager[48904]: <info>  [1769521793.3233] manager: (tap28184873-94): new Tun device (/org/freedesktop/NetworkManager/Devices/170)
Jan 27 08:49:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:53Z|00370|binding|INFO|Claiming lport 28184873-9427-478d-93ec-80092904c5d1 for this chassis.
Jan 27 08:49:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:53Z|00371|binding|INFO|28184873-9427-478d-93ec-80092904c5d1: Claiming fa:16:3e:30:ed:6e 10.100.0.8
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.331 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.333 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ed:6e 10.100.0.8'], port_security=['fa:16:3e:30:ed:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '18066d7e-b7a1-4ab2-97af-84ef678cfef9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=28184873-9427-478d-93ec-80092904c5d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:49:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4-userdata-shm.mount: Deactivated successfully.
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.349 238945 INFO nova.virt.libvirt.driver [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Instance destroyed successfully.#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.350 238945 DEBUG nova.objects.instance [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lazy-loading 'resources' on Instance uuid 18066d7e-b7a1-4ab2-97af-84ef678cfef9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cdabd1cec64e293ba315615fb1c480c80783f7c2713661e4b67b8741b28f5f15-merged.mount: Deactivated successfully.
Jan 27 08:49:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:53Z|00372|binding|INFO|Setting lport 28184873-9427-478d-93ec-80092904c5d1 ovn-installed in OVS
Jan 27 08:49:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:53Z|00373|binding|INFO|Setting lport 28184873-9427-478d-93ec-80092904c5d1 up in Southbound
Jan 27 08:49:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:53Z|00374|binding|INFO|Releasing lport 28184873-9427-478d-93ec-80092904c5d1 from this chassis (sb_readonly=1)
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:53Z|00375|if_status|INFO|Dropped 3 log messages in last 294 seconds (most recently, 294 seconds ago) due to excessive rate
Jan 27 08:49:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:53Z|00376|if_status|INFO|Not setting lport 28184873-9427-478d-93ec-80092904c5d1 down as sb is readonly
Jan 27 08:49:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:53Z|00377|binding|INFO|Removing iface tap28184873-94 ovn-installed in OVS
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:53 np0005597378 podman[282018]: 2026-01-27 13:49:53.36456861 +0000 UTC m=+0.119150351 container cleanup 9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:49:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:53Z|00378|binding|INFO|Releasing lport 28184873-9427-478d-93ec-80092904c5d1 from this chassis (sb_readonly=0)
Jan 27 08:49:53 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:53Z|00379|binding|INFO|Setting lport 28184873-9427-478d-93ec-80092904c5d1 down in Southbound
Jan 27 08:49:53 np0005597378 systemd[1]: libpod-conmon-9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4.scope: Deactivated successfully.
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.373 238945 DEBUG nova.compute.manager [req-2fb9b9fd-98bc-457b-a536-4ef136174a20 req-ee6f0339-c72c-432a-95ca-e1696e9667ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-unplugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.373 238945 DEBUG oslo_concurrency.lockutils [req-2fb9b9fd-98bc-457b-a536-4ef136174a20 req-ee6f0339-c72c-432a-95ca-e1696e9667ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.374 238945 DEBUG oslo_concurrency.lockutils [req-2fb9b9fd-98bc-457b-a536-4ef136174a20 req-ee6f0339-c72c-432a-95ca-e1696e9667ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.374 238945 DEBUG oslo_concurrency.lockutils [req-2fb9b9fd-98bc-457b-a536-4ef136174a20 req-ee6f0339-c72c-432a-95ca-e1696e9667ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.374 238945 DEBUG nova.compute.manager [req-2fb9b9fd-98bc-457b-a536-4ef136174a20 req-ee6f0339-c72c-432a-95ca-e1696e9667ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-unplugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.374 238945 DEBUG nova.compute.manager [req-2fb9b9fd-98bc-457b-a536-4ef136174a20 req-ee6f0339-c72c-432a-95ca-e1696e9667ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-unplugged-28184873-9427-478d-93ec-80092904c5d1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.375 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.384 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ed:6e 10.100.0.8'], port_security=['fa:16:3e:30:ed:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '18066d7e-b7a1-4ab2-97af-84ef678cfef9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a34b23d56029482fbb58a6be97575a37', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'dcd0970e-d5ac-4df0-9ffa-6248a70e2af4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c73ed4-327f-4cf5-92c7-6fc07074e6ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=28184873-9427-478d-93ec-80092904c5d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.390 238945 DEBUG nova.virt.libvirt.vif [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:49:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1423841950',display_name='tempest-ServerDiskConfigTestJSON-server-1423841950',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1423841950',id=44,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a34b23d56029482fbb58a6be97575a37',ramdisk_id='',reservation_id='r-269q2vdm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virti
o',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-580357788',owner_user_name='tempest-ServerDiskConfigTestJSON-580357788-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:49:51Z,user_data=None,user_id='618e06758ec244289bb6f2258e3df2da',uuid=18066d7e-b7a1-4ab2-97af-84ef678cfef9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.390 238945 DEBUG nova.network.os_vif_util [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converting VIF {"id": "28184873-9427-478d-93ec-80092904c5d1", "address": "fa:16:3e:30:ed:6e", "network": {"id": "4856e57c-dca4-4180-b9d9-3b2eced0f054", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1795514139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a34b23d56029482fbb58a6be97575a37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28184873-94", "ovs_interfaceid": "28184873-9427-478d-93ec-80092904c5d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.391 238945 DEBUG nova.network.os_vif_util [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.391 238945 DEBUG os_vif [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.393 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.393 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28184873-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.396 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.398 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.402 238945 INFO os_vif [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ed:6e,bridge_name='br-int',has_traffic_filtering=True,id=28184873-9427-478d-93ec-80092904c5d1,network=Network(4856e57c-dca4-4180-b9d9-3b2eced0f054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28184873-94')#033[00m
Jan 27 08:49:53 np0005597378 podman[282053]: 2026-01-27 13:49:53.457888226 +0000 UTC m=+0.062472149 container remove 9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.465 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e177ea71-fbf5-4f51-884c-da73010b081b]: (4, ('Tue Jan 27 01:49:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4)\n9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4\nTue Jan 27 01:49:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 (9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4)\n9619b73c8844b83bdfa5e4422852a32d5bb3ecf553a277ad2644b0e2ffbd98d4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.467 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4917672d-9df8-46e1-8676-fbb3ba99ddd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.469 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4856e57c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.470 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:53 np0005597378 kernel: tap4856e57c-d0: left promiscuous mode
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.493 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60672767-271d-454b-ab36-6dd56de4c8f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.512 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b617d96-56bc-49c2-b44c-b0a58016d88d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.515 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d92a0783-71e8-4499-96fd-9d6fc319b360]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.532 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d621354a-01aa-4183-9fd1-aadb61bdc4a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442659, 'reachable_time': 26672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282084, 'error': None, 'target': 'ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.536 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4856e57c-dca4-4180-b9d9-3b2eced0f054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:49:53 np0005597378 systemd[1]: run-netns-ovnmeta\x2d4856e57c\x2ddca4\x2d4180\x2db9d9\x2d3b2eced0f054.mount: Deactivated successfully.
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.536 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4451a7-c108-4a17-940e-827063d5b788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.537 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 28184873-9427-478d-93ec-80092904c5d1 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.538 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.539 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[38fc1de1-8b87-418c-bdf9-cd540b455f8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.540 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 28184873-9427-478d-93ec-80092904c5d1 in datapath 4856e57c-dca4-4180-b9d9-3b2eced0f054 unbound from our chassis#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.541 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4856e57c-dca4-4180-b9d9-3b2eced0f054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:49:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:53.541 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0f53dbe0-23fe-4227-8cbe-0ce743f4ca86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.752 238945 INFO nova.virt.libvirt.driver [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Deleting instance files /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9_del#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.753 238945 INFO nova.virt.libvirt.driver [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Deletion of /var/lib/nova/instances/18066d7e-b7a1-4ab2-97af-84ef678cfef9_del complete#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.797 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521778.797181, 95331449-9db7-44fa-8add-58a0505da212 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.798 238945 INFO nova.compute.manager [-] [instance: 95331449-9db7-44fa-8add-58a0505da212] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.824 238945 INFO nova.compute.manager [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.825 238945 DEBUG oslo.service.loopingcall [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.826 238945 DEBUG nova.compute.manager [None req-e72e030f-4c6f-4faf-842c-cedf29f76be7 - - - - - -] [instance: 95331449-9db7-44fa-8add-58a0505da212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.826 238945 DEBUG nova.compute.manager [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:49:53 np0005597378 nova_compute[238941]: 2026-01-27 13:49:53.827 238945 DEBUG nova.network.neutron [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.164 238945 DEBUG nova.network.neutron [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Updating instance_info_cache with network_info: [{"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:49:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.182 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Releasing lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.182 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Instance network_info: |[{"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.183 238945 DEBUG oslo_concurrency.lockutils [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.183 238945 DEBUG nova.network.neutron [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Refreshing network info cache for port 6de0ab34-ff4c-4eee-a7d5-56df50d305ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:49:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.186 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Start _get_guest_xml network_info=[{"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:49:54 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.192 238945 WARNING nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.203 238945 DEBUG nova.virt.libvirt.host [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.203 238945 DEBUG nova.virt.libvirt.host [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.210 238945 DEBUG nova.virt.libvirt.host [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.210 238945 DEBUG nova.virt.libvirt.host [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.211 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.211 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.211 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.211 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.211 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.211 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.212 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.212 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.212 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.212 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.212 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.212 238945 DEBUG nova.virt.hardware [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.215 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 340 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.9 MiB/s wr, 333 op/s
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.451 238945 DEBUG nova.network.neutron [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.487 238945 INFO nova.compute.manager [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Took 0.66 seconds to deallocate network for instance.#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.550 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.550 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.684 238945 DEBUG oslo_concurrency.processutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:49:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/950916860' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.817 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.848 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:54 np0005597378 nova_compute[238941]: 2026-01-27 13:49:54.852 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.264 238945 DEBUG nova.compute.manager [req-dcfe9bd4-8049-4d77-9a8d-570025d0caeb req-cd2e4ec0-fc79-44d1-af31-f87c0f971271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-deleted-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:49:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3620926252' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.290 238945 DEBUG oslo_concurrency.processutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.298 238945 DEBUG nova.compute.provider_tree [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.345 238945 DEBUG nova.scheduler.client.report [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.426 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:49:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524549938' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.448 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.450 238945 DEBUG nova.virt.libvirt.vif [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:49:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-365811881',display_name='tempest-ServersTestJSON-server-365811881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-365811881',id=45,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKiCDKaWjVQivEDqU3EoeLMmBvKQjqFKawS16b9UtLcg366OiAVxi5zfMPJLWF8VdZYXGmdopeJZDeH+kDxj9AThmsqXf4XhiP8H8FKjti0H4tVMOh5j6gSyyILFFBO17Q==',key_name='tempest-keypair-1767690537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33ea67a44b80493cb75d174ebab96310',ramdisk_id='',reservation_id='r-ormgr0hq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-782629825',owner_user_name='tempest-ServersTestJSON-782629825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c17e8011c1b44fa3beaccb9dacec4913',uuid=f4421f99-7c11-4331-a349-c0d9713d4dfc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.450 238945 DEBUG nova.network.os_vif_util [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Converting VIF {"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.451 238945 DEBUG nova.network.os_vif_util [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.452 238945 DEBUG nova.objects.instance [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lazy-loading 'pci_devices' on Instance uuid f4421f99-7c11-4331-a349-c0d9713d4dfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.463 238945 INFO nova.scheduler.client.report [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Deleted allocations for instance 18066d7e-b7a1-4ab2-97af-84ef678cfef9#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.486 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  <uuid>f4421f99-7c11-4331-a349-c0d9713d4dfc</uuid>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  <name>instance-0000002d</name>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersTestJSON-server-365811881</nova:name>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:49:54</nova:creationTime>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:        <nova:user uuid="c17e8011c1b44fa3beaccb9dacec4913">tempest-ServersTestJSON-782629825-project-member</nova:user>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:        <nova:project uuid="33ea67a44b80493cb75d174ebab96310">tempest-ServersTestJSON-782629825</nova:project>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:        <nova:port uuid="6de0ab34-ff4c-4eee-a7d5-56df50d305ac">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <entry name="serial">f4421f99-7c11-4331-a349-c0d9713d4dfc</entry>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <entry name="uuid">f4421f99-7c11-4331-a349-c0d9713d4dfc</entry>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/f4421f99-7c11-4331-a349-c0d9713d4dfc_disk">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/f4421f99-7c11-4331-a349-c0d9713d4dfc_disk.config">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:27:9b:a0"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <target dev="tap6de0ab34-ff"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/console.log" append="off"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:49:55 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:49:55 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:49:55 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:49:55 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.486 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Preparing to wait for external event network-vif-plugged-6de0ab34-ff4c-4eee-a7d5-56df50d305ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.486 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.487 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.487 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.487 238945 DEBUG nova.virt.libvirt.vif [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:49:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-365811881',display_name='tempest-ServersTestJSON-server-365811881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-365811881',id=45,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKiCDKaWjVQivEDqU3EoeLMmBvKQjqFKawS16b9UtLcg366OiAVxi5zfMPJLWF8VdZYXGmdopeJZDeH+kDxj9AThmsqXf4XhiP8H8FKjti0H4tVMOh5j6gSyyILFFBO17Q==',key_name='tempest-keypair-1767690537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33ea67a44b80493cb75d174ebab96310',ramdisk_id='',reservation_id='r-ormgr0hq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-782629825',owner_user_name='tempest-ServersTestJSON-782629825-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:49:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c17e8011c1b44fa3beaccb9dacec4913',uuid=f4421f99-7c11-4331-a349-c0d9713d4dfc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.488 238945 DEBUG nova.network.os_vif_util [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Converting VIF {"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.488 238945 DEBUG nova.network.os_vif_util [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.488 238945 DEBUG os_vif [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.491 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.491 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.491 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.493 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.494 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6de0ab34-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.494 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6de0ab34-ff, col_values=(('external_ids', {'iface-id': '6de0ab34-ff4c-4eee-a7d5-56df50d305ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:9b:a0', 'vm-uuid': 'f4421f99-7c11-4331-a349-c0d9713d4dfc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:55 np0005597378 NetworkManager[48904]: <info>  [1769521795.4967] manager: (tap6de0ab34-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.498 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.504 238945 INFO os_vif [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff')#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.531 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.532 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.532 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.532 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.532 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.533 238945 WARNING nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received unexpected event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.533 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.533 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.533 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.533 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.533 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.534 238945 WARNING nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received unexpected event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.534 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.534 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.534 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.535 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.535 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.535 238945 WARNING nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received unexpected event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.535 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-unplugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.535 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.535 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.536 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.536 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-unplugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.536 238945 WARNING nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received unexpected event network-vif-unplugged-28184873-9427-478d-93ec-80092904c5d1 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.536 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.536 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.537 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.537 238945 DEBUG oslo_concurrency.lockutils [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.537 238945 DEBUG nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] No waiting events found dispatching network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.537 238945 WARNING nova.compute.manager [req-bb566cf5-86b6-40bf-ad49-7d5666e12a9c req-f4da748f-8007-4ada-83b3-281a0689621b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Received unexpected event network-vif-plugged-28184873-9427-478d-93ec-80092904c5d1 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.544 238945 DEBUG oslo_concurrency.lockutils [None req-ad3df0e1-6a9a-4afa-81bc-a62e9a537a41 618e06758ec244289bb6f2258e3df2da a34b23d56029482fbb58a6be97575a37 - - default default] Lock "18066d7e-b7a1-4ab2-97af-84ef678cfef9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.578 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.579 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.579 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] No VIF found with MAC fa:16:3e:27:9b:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.579 238945 INFO nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Using config drive#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.603 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.609 238945 DEBUG nova.network.neutron [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Updated VIF entry in instance network info cache for port 6de0ab34-ff4c-4eee-a7d5-56df50d305ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.609 238945 DEBUG nova.network.neutron [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Updating instance_info_cache with network_info: [{"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:49:55 np0005597378 nova_compute[238941]: 2026-01-27 13:49:55.629 238945 DEBUG oslo_concurrency.lockutils [req-1eb88a4c-193d-461b-9e3f-52b1f9ae2412 req-8eab4cad-e6ce-40ed-8ed9-10f308f1a14c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:49:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 311 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 212 op/s
Jan 27 08:49:56 np0005597378 nova_compute[238941]: 2026-01-27 13:49:56.894 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.022 238945 INFO nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Creating config drive at /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/disk.config#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.028 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2qr47by4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:49:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Jan 27 08:49:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Jan 27 08:49:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.166 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2qr47by4" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.196 238945 DEBUG nova.storage.rbd_utils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] rbd image f4421f99-7c11-4331-a349-c0d9713d4dfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.199 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/disk.config f4421f99-7c11-4331-a349-c0d9713d4dfc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.344 238945 DEBUG oslo_concurrency.processutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/disk.config f4421f99-7c11-4331-a349-c0d9713d4dfc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.346 238945 INFO nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Deleting local config drive /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc/disk.config because it was imported into RBD.#033[00m
Jan 27 08:49:57 np0005597378 kernel: tap6de0ab34-ff: entered promiscuous mode
Jan 27 08:49:57 np0005597378 NetworkManager[48904]: <info>  [1769521797.4074] manager: (tap6de0ab34-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:57 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:57Z|00380|binding|INFO|Claiming lport 6de0ab34-ff4c-4eee-a7d5-56df50d305ac for this chassis.
Jan 27 08:49:57 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:57Z|00381|binding|INFO|6de0ab34-ff4c-4eee-a7d5-56df50d305ac: Claiming fa:16:3e:27:9b:a0 10.100.0.10
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.415 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:9b:a0 10.100.0.10'], port_security=['fa:16:3e:27:9b:a0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f4421f99-7c11-4331-a349-c0d9713d4dfc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-018af714-c949-4d8d-b260-666bb53f2891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33ea67a44b80493cb75d174ebab96310', 'neutron:revision_number': '2', 'neutron:security_group_ids': '81a640fc-8865-48fd-ac8a-a12381a2c86d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09612832-bac4-4af1-b317-dd3540d37656, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6de0ab34-ff4c-4eee-a7d5-56df50d305ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.416 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6de0ab34-ff4c-4eee-a7d5-56df50d305ac in datapath 018af714-c949-4d8d-b260-666bb53f2891 bound to our chassis#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.417 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 018af714-c949-4d8d-b260-666bb53f2891#033[00m
Jan 27 08:49:57 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:57Z|00382|binding|INFO|Setting lport 6de0ab34-ff4c-4eee-a7d5-56df50d305ac ovn-installed in OVS
Jan 27 08:49:57 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:57Z|00383|binding|INFO|Setting lport 6de0ab34-ff4c-4eee-a7d5-56df50d305ac up in Southbound
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.430 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c53ead77-b9fa-4a5e-aec9-37a675b1f54a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.430 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.431 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap018af714-c1 in ovnmeta-018af714-c949-4d8d-b260-666bb53f2891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.433 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap018af714-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.433 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e07c6c3d-1cc7-48cb-a66d-e7f6ee435e0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.435 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee75ac3-8642-4342-b0d1-470e9ccab5c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 systemd-udevd[282244]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.448 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[25b14ff6-040c-44f7-a05d-6e182bc74fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 systemd-machined[207425]: New machine qemu-51-instance-0000002d.
Jan 27 08:49:57 np0005597378 NetworkManager[48904]: <info>  [1769521797.4569] device (tap6de0ab34-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:49:57 np0005597378 NetworkManager[48904]: <info>  [1769521797.4575] device (tap6de0ab34-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.462 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9201bb80-1b4b-4939-990d-d55e07743bbe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 systemd[1]: Started Virtual Machine qemu-51-instance-0000002d.
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.490 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5c870c2e-2bab-4ce5-b9c3-9678ba6c1201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 NetworkManager[48904]: <info>  [1769521797.4958] manager: (tap018af714-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/173)
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.495 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfb17cc-2fb8-4487-8a83-8e860e487241]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.523 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[17c44dad-ce27-4a9a-a80f-6c6f9d60dafa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.527 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c6be657e-2c6c-4518-a690-260cf243b0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 NetworkManager[48904]: <info>  [1769521797.5520] device (tap018af714-c0): carrier: link connected
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.557 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab65188-ec86-4ca3-8d4b-96387d3fc3a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.574 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0121a4c8-01d4-4b38-83f8-6bdf7b301e56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap018af714-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:97:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443717, 'reachable_time': 16106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282277, 'error': None, 'target': 'ovnmeta-018af714-c949-4d8d-b260-666bb53f2891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.588 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e26cef90-76ba-488f-96f6-6b1c348743f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:971c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443717, 'tstamp': 443717}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282278, 'error': None, 'target': 'ovnmeta-018af714-c949-4d8d-b260-666bb53f2891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.604 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a89ad526-1343-41d1-bd98-ea983ee8f197]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap018af714-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:97:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443717, 'reachable_time': 16106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282279, 'error': None, 'target': 'ovnmeta-018af714-c949-4d8d-b260-666bb53f2891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.635 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[921aa99d-f1d5-4993-993f-488415413eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.659 238945 DEBUG nova.compute.manager [req-84c0777a-34d6-4669-bcb1-0eaae90d0484 req-d1cf1452-da35-41d2-95e8-50d81689b5a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Received event network-vif-plugged-6de0ab34-ff4c-4eee-a7d5-56df50d305ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.659 238945 DEBUG oslo_concurrency.lockutils [req-84c0777a-34d6-4669-bcb1-0eaae90d0484 req-d1cf1452-da35-41d2-95e8-50d81689b5a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.659 238945 DEBUG oslo_concurrency.lockutils [req-84c0777a-34d6-4669-bcb1-0eaae90d0484 req-d1cf1452-da35-41d2-95e8-50d81689b5a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.660 238945 DEBUG oslo_concurrency.lockutils [req-84c0777a-34d6-4669-bcb1-0eaae90d0484 req-d1cf1452-da35-41d2-95e8-50d81689b5a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.660 238945 DEBUG nova.compute.manager [req-84c0777a-34d6-4669-bcb1-0eaae90d0484 req-d1cf1452-da35-41d2-95e8-50d81689b5a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Processing event network-vif-plugged-6de0ab34-ff4c-4eee-a7d5-56df50d305ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.700 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[24aaf16f-cef0-4293-8512-f9c8a499196b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.701 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap018af714-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.701 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.702 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap018af714-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:57 np0005597378 kernel: tap018af714-c0: entered promiscuous mode
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.703 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:57 np0005597378 NetworkManager[48904]: <info>  [1769521797.7047] manager: (tap018af714-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.707 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.708 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap018af714-c0, col_values=(('external_ids', {'iface-id': '38bff87a-fb84-4b77-905a-6370e7595706'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.709 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:57 np0005597378 ovn_controller[144812]: 2026-01-27T13:49:57Z|00384|binding|INFO|Releasing lport 38bff87a-fb84-4b77-905a-6370e7595706 from this chassis (sb_readonly=0)
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.727 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.731 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/018af714-c949-4d8d-b260-666bb53f2891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/018af714-c949-4d8d-b260-666bb53f2891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.733 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f6874c-92a7-4598-a84d-164f530492b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.734 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-018af714-c949-4d8d-b260-666bb53f2891
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/018af714-c949-4d8d-b260-666bb53f2891.pid.haproxy
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 018af714-c949-4d8d-b260-666bb53f2891
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:49:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:49:57.734 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-018af714-c949-4d8d-b260-666bb53f2891', 'env', 'PROCESS_TAG=haproxy-018af714-c949-4d8d-b260-666bb53f2891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/018af714-c949-4d8d-b260-666bb53f2891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.951 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.952 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521797.9519877, f4421f99-7c11-4331-a349-c0d9713d4dfc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.952 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] VM Started (Lifecycle Event)#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.957 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.960 238945 INFO nova.virt.libvirt.driver [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Instance spawned successfully.#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.961 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.973 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.979 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.983 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.983 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.984 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.984 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.985 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:57 np0005597378 nova_compute[238941]: 2026-01-27 13:49:57.985 238945 DEBUG nova.virt.libvirt.driver [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:49:58 np0005597378 nova_compute[238941]: 2026-01-27 13:49:58.007 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:49:58 np0005597378 nova_compute[238941]: 2026-01-27 13:49:58.008 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521797.9520824, f4421f99-7c11-4331-a349-c0d9713d4dfc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:58 np0005597378 nova_compute[238941]: 2026-01-27 13:49:58.009 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:49:58 np0005597378 nova_compute[238941]: 2026-01-27 13:49:58.037 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:58 np0005597378 nova_compute[238941]: 2026-01-27 13:49:58.042 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521797.9562883, f4421f99-7c11-4331-a349-c0d9713d4dfc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:49:58 np0005597378 nova_compute[238941]: 2026-01-27 13:49:58.043 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:49:58 np0005597378 nova_compute[238941]: 2026-01-27 13:49:58.069 238945 INFO nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Took 7.57 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:49:58 np0005597378 nova_compute[238941]: 2026-01-27 13:49:58.070 238945 DEBUG nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:58 np0005597378 nova_compute[238941]: 2026-01-27 13:49:58.072 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:49:58 np0005597378 nova_compute[238941]: 2026-01-27 13:49:58.078 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:49:58 np0005597378 nova_compute[238941]: 2026-01-27 13:49:58.118 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:49:58 np0005597378 podman[282350]: 2026-01-27 13:49:58.142839016 +0000 UTC m=+0.066364123 container create 23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:49:58 np0005597378 nova_compute[238941]: 2026-01-27 13:49:58.147 238945 INFO nova.compute.manager [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Took 8.75 seconds to build instance.#033[00m
Jan 27 08:49:58 np0005597378 nova_compute[238941]: 2026-01-27 13:49:58.161 238945 DEBUG oslo_concurrency.lockutils [None req-ae8c4f1d-019e-46c2-9817-54c2c640cb5c c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:49:58 np0005597378 systemd[1]: Started libpod-conmon-23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181.scope.
Jan 27 08:49:58 np0005597378 podman[282350]: 2026-01-27 13:49:58.098752002 +0000 UTC m=+0.022277129 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:49:58 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:49:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2039b354fcbd140b331e952c67daa725beaff54ea0acc54cb7b969f56969eeb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:49:58 np0005597378 podman[282350]: 2026-01-27 13:49:58.233481961 +0000 UTC m=+0.157007078 container init 23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 08:49:58 np0005597378 podman[282350]: 2026-01-27 13:49:58.239459831 +0000 UTC m=+0.162984938 container start 23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:49:58 np0005597378 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [NOTICE]   (282369) : New worker (282371) forked
Jan 27 08:49:58 np0005597378 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [NOTICE]   (282369) : Loading success.
Jan 27 08:49:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 253 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 295 op/s
Jan 27 08:49:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:49:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/701398102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:49:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:49:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/701398102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:50:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 230 op/s
Jan 27 08:50:00 np0005597378 nova_compute[238941]: 2026-01-27 13:50:00.497 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:01 np0005597378 nova_compute[238941]: 2026-01-27 13:50:01.861 238945 DEBUG nova.compute.manager [req-f2381af6-f03d-432c-8b76-aedfc7c76efc req-cab234eb-7493-45b6-bb9a-13df3538a539 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Received event network-vif-plugged-6de0ab34-ff4c-4eee-a7d5-56df50d305ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:01 np0005597378 nova_compute[238941]: 2026-01-27 13:50:01.861 238945 DEBUG oslo_concurrency.lockutils [req-f2381af6-f03d-432c-8b76-aedfc7c76efc req-cab234eb-7493-45b6-bb9a-13df3538a539 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:01 np0005597378 nova_compute[238941]: 2026-01-27 13:50:01.862 238945 DEBUG oslo_concurrency.lockutils [req-f2381af6-f03d-432c-8b76-aedfc7c76efc req-cab234eb-7493-45b6-bb9a-13df3538a539 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:01 np0005597378 nova_compute[238941]: 2026-01-27 13:50:01.862 238945 DEBUG oslo_concurrency.lockutils [req-f2381af6-f03d-432c-8b76-aedfc7c76efc req-cab234eb-7493-45b6-bb9a-13df3538a539 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:01 np0005597378 nova_compute[238941]: 2026-01-27 13:50:01.862 238945 DEBUG nova.compute.manager [req-f2381af6-f03d-432c-8b76-aedfc7c76efc req-cab234eb-7493-45b6-bb9a-13df3538a539 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] No waiting events found dispatching network-vif-plugged-6de0ab34-ff4c-4eee-a7d5-56df50d305ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:50:01 np0005597378 nova_compute[238941]: 2026-01-27 13:50:01.862 238945 WARNING nova.compute.manager [req-f2381af6-f03d-432c-8b76-aedfc7c76efc req-cab234eb-7493-45b6-bb9a-13df3538a539 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Received unexpected event network-vif-plugged-6de0ab34-ff4c-4eee-a7d5-56df50d305ac for instance with vm_state active and task_state None.#033[00m
Jan 27 08:50:01 np0005597378 nova_compute[238941]: 2026-01-27 13:50:01.895 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:50:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Jan 27 08:50:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Jan 27 08:50:02 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Jan 27 08:50:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 878 KiB/s wr, 148 op/s
Jan 27 08:50:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 185 op/s
Jan 27 08:50:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:04.571 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:50:04 np0005597378 nova_compute[238941]: 2026-01-27 13:50:04.571 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:04.572 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:50:05 np0005597378 nova_compute[238941]: 2026-01-27 13:50:05.184 238945 DEBUG nova.compute.manager [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Received event network-changed-6de0ab34-ff4c-4eee-a7d5-56df50d305ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:05 np0005597378 nova_compute[238941]: 2026-01-27 13:50:05.185 238945 DEBUG nova.compute.manager [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Refreshing instance network info cache due to event network-changed-6de0ab34-ff4c-4eee-a7d5-56df50d305ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:50:05 np0005597378 nova_compute[238941]: 2026-01-27 13:50:05.185 238945 DEBUG oslo_concurrency.lockutils [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:05 np0005597378 nova_compute[238941]: 2026-01-27 13:50:05.185 238945 DEBUG oslo_concurrency.lockutils [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:05 np0005597378 nova_compute[238941]: 2026-01-27 13:50:05.185 238945 DEBUG nova.network.neutron [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Refreshing network info cache for port 6de0ab34-ff4c-4eee-a7d5-56df50d305ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:50:05 np0005597378 nova_compute[238941]: 2026-01-27 13:50:05.498 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:05Z|00385|binding|INFO|Releasing lport 38bff87a-fb84-4b77-905a-6370e7595706 from this chassis (sb_readonly=0)
Jan 27 08:50:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:05Z|00386|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 08:50:05 np0005597378 nova_compute[238941]: 2026-01-27 13:50:05.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 22 KiB/s wr, 162 op/s
Jan 27 08:50:06 np0005597378 nova_compute[238941]: 2026-01-27 13:50:06.897 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:07 np0005597378 nova_compute[238941]: 2026-01-27 13:50:07.015 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:07 np0005597378 nova_compute[238941]: 2026-01-27 13:50:07.015 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:50:07 np0005597378 nova_compute[238941]: 2026-01-27 13:50:07.374 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:50:07 np0005597378 nova_compute[238941]: 2026-01-27 13:50:07.475 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:07 np0005597378 nova_compute[238941]: 2026-01-27 13:50:07.476 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:07 np0005597378 nova_compute[238941]: 2026-01-27 13:50:07.484 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:50:07 np0005597378 nova_compute[238941]: 2026-01-27 13:50:07.485 238945 INFO nova.compute.claims [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:50:07 np0005597378 nova_compute[238941]: 2026-01-27 13:50:07.628 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:07 np0005597378 nova_compute[238941]: 2026-01-27 13:50:07.740 238945 DEBUG nova.network.neutron [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Updated VIF entry in instance network info cache for port 6de0ab34-ff4c-4eee-a7d5-56df50d305ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:50:07 np0005597378 nova_compute[238941]: 2026-01-27 13:50:07.741 238945 DEBUG nova.network.neutron [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Updating instance_info_cache with network_info: [{"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:07 np0005597378 nova_compute[238941]: 2026-01-27 13:50:07.763 238945 DEBUG oslo_concurrency.lockutils [req-18b95108-eda4-4b84-bd74-1f896596ead2 req-f41c8395-0f10-46e2-b9a4-2d0cacb99831 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-f4421f99-7c11-4331-a349-c0d9713d4dfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:50:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:50:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637210400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.223 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.228 238945 DEBUG nova.compute.provider_tree [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.244 238945 DEBUG nova.scheduler.client.report [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.267 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.268 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:50:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 167 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 KiB/s wr, 98 op/s
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.319 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.320 238945 DEBUG nova.network.neutron [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.337 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521793.335626, 18066d7e-b7a1-4ab2-97af-84ef678cfef9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.338 238945 INFO nova.compute.manager [-] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.343 238945 INFO nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.367 238945 DEBUG nova.compute.manager [None req-67179f83-2ca9-4b06-a4e0-5328cb67f427 - - - - - -] [instance: 18066d7e-b7a1-4ab2-97af-84ef678cfef9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.372 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.470 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.471 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.471 238945 INFO nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Creating image(s)#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.492 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.518 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.545 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.550 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.620 238945 DEBUG nova.policy [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11a9e491e7f24607aa5d3d710b6607ab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.639 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.640 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.641 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.641 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.663 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.667 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:08 np0005597378 nova_compute[238941]: 2026-01-27 13:50:08.962 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:09 np0005597378 nova_compute[238941]: 2026-01-27 13:50:09.021 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] resizing rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:50:09 np0005597378 nova_compute[238941]: 2026-01-27 13:50:09.113 238945 DEBUG nova.objects.instance [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'migration_context' on Instance uuid 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:09 np0005597378 nova_compute[238941]: 2026-01-27 13:50:09.132 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:50:09 np0005597378 nova_compute[238941]: 2026-01-27 13:50:09.133 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Ensure instance console log exists: /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:50:09 np0005597378 nova_compute[238941]: 2026-01-27 13:50:09.133 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:09 np0005597378 nova_compute[238941]: 2026-01-27 13:50:09.134 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:09 np0005597378 nova_compute[238941]: 2026-01-27 13:50:09.134 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:09 np0005597378 nova_compute[238941]: 2026-01-27 13:50:09.853 238945 DEBUG nova.network.neutron [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Successfully created port: 15ed6f57-c44c-4ee6-a349-3a8efc982101 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:50:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:09Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:9b:a0 10.100.0.10
Jan 27 08:50:09 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:09Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:9b:a0 10.100.0.10
Jan 27 08:50:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 196 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.3 MiB/s wr, 99 op/s
Jan 27 08:50:10 np0005597378 nova_compute[238941]: 2026-01-27 13:50:10.500 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:11 np0005597378 nova_compute[238941]: 2026-01-27 13:50:11.368 238945 DEBUG nova.network.neutron [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Successfully updated port: 15ed6f57-c44c-4ee6-a349-3a8efc982101 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:50:11 np0005597378 nova_compute[238941]: 2026-01-27 13:50:11.386 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:11 np0005597378 nova_compute[238941]: 2026-01-27 13:50:11.387 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquired lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:11 np0005597378 nova_compute[238941]: 2026-01-27 13:50:11.387 238945 DEBUG nova.network.neutron [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:50:11 np0005597378 nova_compute[238941]: 2026-01-27 13:50:11.497 238945 DEBUG nova.compute.manager [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received event network-changed-15ed6f57-c44c-4ee6-a349-3a8efc982101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:11 np0005597378 nova_compute[238941]: 2026-01-27 13:50:11.498 238945 DEBUG nova.compute.manager [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Refreshing instance network info cache due to event network-changed-15ed6f57-c44c-4ee6-a349-3a8efc982101. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:50:11 np0005597378 nova_compute[238941]: 2026-01-27 13:50:11.498 238945 DEBUG oslo_concurrency.lockutils [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:11 np0005597378 nova_compute[238941]: 2026-01-27 13:50:11.581 238945 DEBUG nova.network.neutron [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:50:11 np0005597378 nova_compute[238941]: 2026-01-27 13:50:11.899 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:50:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 196 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 98 op/s
Jan 27 08:50:12 np0005597378 nova_compute[238941]: 2026-01-27 13:50:12.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.217 238945 DEBUG nova.network.neutron [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Updating instance_info_cache with network_info: [{"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.235 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Releasing lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.236 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Instance network_info: |[{"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.236 238945 DEBUG oslo_concurrency.lockutils [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.236 238945 DEBUG nova.network.neutron [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Refreshing network info cache for port 15ed6f57-c44c-4ee6-a349-3a8efc982101 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.239 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Start _get_guest_xml network_info=[{"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.244 238945 WARNING nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.248 238945 DEBUG nova.virt.libvirt.host [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.249 238945 DEBUG nova.virt.libvirt.host [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.256 238945 DEBUG nova.virt.libvirt.host [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.257 238945 DEBUG nova.virt.libvirt.host [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.258 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.258 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.258 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.258 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.259 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.259 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.259 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.259 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.260 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.260 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.260 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.260 238945 DEBUG nova.virt.hardware [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.263 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:50:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4230733460' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.848 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.868 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:13 np0005597378 nova_compute[238941]: 2026-01-27 13:50:13.872 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 242 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.8 MiB/s wr, 120 op/s
Jan 27 08:50:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:50:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401449526' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.441 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.443 238945 DEBUG nova.virt.libvirt.vif [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2022258544',display_name='tempest-ServerActionsTestOtherB-server-2022258544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2022258544',id=46,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-v8la0m7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:08Z,user_data=None,user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=73a36ce7-38f6-4b8c-a3b7-bc84ad632778,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.444 238945 DEBUG nova.network.os_vif_util [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.445 238945 DEBUG nova.network.os_vif_util [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.446 238945 DEBUG nova.objects.instance [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.463 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  <uuid>73a36ce7-38f6-4b8c-a3b7-bc84ad632778</uuid>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  <name>instance-0000002e</name>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerActionsTestOtherB-server-2022258544</nova:name>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:50:13</nova:creationTime>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:        <nova:user uuid="11a9e491e7f24607aa5d3d710b6607ab">tempest-ServerActionsTestOtherB-1311443694-project-member</nova:user>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:        <nova:project uuid="89715d52c38241dbb1fdcc016ede5d3c">tempest-ServerActionsTestOtherB-1311443694</nova:project>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:        <nova:port uuid="15ed6f57-c44c-4ee6-a349-3a8efc982101">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <entry name="serial">73a36ce7-38f6-4b8c-a3b7-bc84ad632778</entry>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <entry name="uuid">73a36ce7-38f6-4b8c-a3b7-bc84ad632778</entry>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk.config">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:99:33:f8"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <target dev="tap15ed6f57-c4"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/console.log" append="off"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:50:14 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:50:14 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:50:14 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:50:14 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.465 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Preparing to wait for external event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.465 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.466 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.466 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.467 238945 DEBUG nova.virt.libvirt.vif [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2022258544',display_name='tempest-ServerActionsTestOtherB-server-2022258544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2022258544',id=46,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-v8la0m7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:08Z,user_data=None,user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=73a36ce7-38f6-4b8c-a3b7-bc84ad632778,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.467 238945 DEBUG nova.network.os_vif_util [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.468 238945 DEBUG nova.network.os_vif_util [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.468 238945 DEBUG os_vif [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.470 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.470 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.474 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15ed6f57-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.474 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15ed6f57-c4, col_values=(('external_ids', {'iface-id': '15ed6f57-c44c-4ee6-a349-3a8efc982101', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:33:f8', 'vm-uuid': '73a36ce7-38f6-4b8c-a3b7-bc84ad632778'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.476 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:14 np0005597378 NetworkManager[48904]: <info>  [1769521814.4769] manager: (tap15ed6f57-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.478 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.484 238945 INFO os_vif [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4')
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.548 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.548 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.548 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No VIF found with MAC fa:16:3e:99:33:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.549 238945 INFO nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Using config drive
Jan 27 08:50:14 np0005597378 nova_compute[238941]: 2026-01-27 13:50:14.571 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:50:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:14.574 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.200 238945 DEBUG nova.network.neutron [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Updated VIF entry in instance network info cache for port 15ed6f57-c44c-4ee6-a349-3a8efc982101. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.201 238945 DEBUG nova.network.neutron [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Updating instance_info_cache with network_info: [{"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.215 238945 DEBUG oslo_concurrency.lockutils [req-5e29b817-6d18-409d-96e0-d1c86d667bf7 req-0ff51ccd-845f-4ae1-bc17-4bcd0569e9e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.237 238945 INFO nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Creating config drive at /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/disk.config
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.242 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwwuq7ic0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.386 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwwuq7ic0" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.413 238945 DEBUG nova.storage.rbd_utils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.417 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/disk.config 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.566 238945 DEBUG oslo_concurrency.processutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/disk.config 73a36ce7-38f6-4b8c-a3b7-bc84ad632778_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.567 238945 INFO nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Deleting local config drive /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778/disk.config because it was imported into RBD.
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.599 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:15 np0005597378 kernel: tap15ed6f57-c4: entered promiscuous mode
Jan 27 08:50:15 np0005597378 NetworkManager[48904]: <info>  [1769521815.6348] manager: (tap15ed6f57-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:15Z|00387|binding|INFO|Claiming lport 15ed6f57-c44c-4ee6-a349-3a8efc982101 for this chassis.
Jan 27 08:50:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:15Z|00388|binding|INFO|15ed6f57-c44c-4ee6-a349-3a8efc982101: Claiming fa:16:3e:99:33:f8 10.100.0.14
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.643 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:33:f8 10.100.0.14'], port_security=['fa:16:3e:99:33:f8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '73a36ce7-38f6-4b8c-a3b7-bc84ad632778', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f3982b-0a1e-4454-92cd-6be83c00fc3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=15ed6f57-c44c-4ee6-a349-3a8efc982101) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.644 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 15ed6f57-c44c-4ee6-a349-3a8efc982101 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a bound to our chassis
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.645 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a
Jan 27 08:50:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:15Z|00389|binding|INFO|Setting lport 15ed6f57-c44c-4ee6-a349-3a8efc982101 ovn-installed in OVS
Jan 27 08:50:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:15Z|00390|binding|INFO|Setting lport 15ed6f57-c44c-4ee6-a349-3a8efc982101 up in Southbound
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.659 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.668 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1a967d-35c0-4507-b3f7-0956f8b82e77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:50:15 np0005597378 systemd-machined[207425]: New machine qemu-52-instance-0000002e.
Jan 27 08:50:15 np0005597378 systemd-udevd[282707]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:50:15 np0005597378 systemd[1]: Started Virtual Machine qemu-52-instance-0000002e.
Jan 27 08:50:15 np0005597378 NetworkManager[48904]: <info>  [1769521815.6882] device (tap15ed6f57-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:50:15 np0005597378 NetworkManager[48904]: <info>  [1769521815.6888] device (tap15ed6f57-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.704 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0d0c24-37a1-4766-8295-be4cdde8058f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.707 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b7584819-6964-4d03-84f0-d6eff79f4e83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.749 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5df419a4-4c8c-4778-99d7-64963ab6b53d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.770 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[334e0a07-a8ad-4ad5-992f-b9a8710306db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282719, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.793 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6344f565-49ee-4462-ab67-fbbc281a1fde]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438203, 'tstamp': 438203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282721, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438206, 'tstamp': 438206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282721, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.794 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.796 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:15 np0005597378 nova_compute[238941]: 2026-01-27 13:50:15.797 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.798 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.798 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.799 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:50:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:15.799 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 08:50:16 np0005597378 nova_compute[238941]: 2026-01-27 13:50:16.099 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521816.0993032, 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:50:16 np0005597378 nova_compute[238941]: 2026-01-27 13:50:16.100 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] VM Started (Lifecycle Event)
Jan 27 08:50:16 np0005597378 nova_compute[238941]: 2026-01-27 13:50:16.121 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:50:16 np0005597378 nova_compute[238941]: 2026-01-27 13:50:16.125 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521816.099648, 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:50:16 np0005597378 nova_compute[238941]: 2026-01-27 13:50:16.125 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] VM Paused (Lifecycle Event)
Jan 27 08:50:16 np0005597378 nova_compute[238941]: 2026-01-27 13:50:16.146 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:50:16 np0005597378 nova_compute[238941]: 2026-01-27 13:50:16.151 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:50:16 np0005597378 nova_compute[238941]: 2026-01-27 13:50:16.170 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:50:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 246 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Jan 27 08:50:16 np0005597378 nova_compute[238941]: 2026-01-27 13:50:16.903 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:50:17
Jan 27 08:50:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:50:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:50:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', '.mgr', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.data']
Jan 27 08:50:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:50:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:50:17 np0005597378 nova_compute[238941]: 2026-01-27 13:50:17.502 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:17 np0005597378 podman[282764]: 2026-01-27 13:50:17.784163531 +0000 UTC m=+0.118450061 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:50:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:50:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:50:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:50:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:50:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:50:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:50:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 246 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.637 238945 DEBUG nova.compute.manager [req-f3309107-4de7-4ac1-a1d3-c3e15bbea4d3 req-5317515e-e15d-48ec-9237-2aeb265bfd87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.638 238945 DEBUG oslo_concurrency.lockutils [req-f3309107-4de7-4ac1-a1d3-c3e15bbea4d3 req-5317515e-e15d-48ec-9237-2aeb265bfd87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.638 238945 DEBUG oslo_concurrency.lockutils [req-f3309107-4de7-4ac1-a1d3-c3e15bbea4d3 req-5317515e-e15d-48ec-9237-2aeb265bfd87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.638 238945 DEBUG oslo_concurrency.lockutils [req-f3309107-4de7-4ac1-a1d3-c3e15bbea4d3 req-5317515e-e15d-48ec-9237-2aeb265bfd87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.638 238945 DEBUG nova.compute.manager [req-f3309107-4de7-4ac1-a1d3-c3e15bbea4d3 req-5317515e-e15d-48ec-9237-2aeb265bfd87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Processing event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.639 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.642 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521818.6426768, 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.643 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.645 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.648 238945 INFO nova.virt.libvirt.driver [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Instance spawned successfully.#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.649 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.675 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.680 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.682 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.683 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.683 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.684 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.684 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.685 238945 DEBUG nova.virt.libvirt.driver [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:18 np0005597378 nova_compute[238941]: 2026-01-27 13:50:18.704 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:50:19 np0005597378 nova_compute[238941]: 2026-01-27 13:50:19.080 238945 INFO nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Took 10.61 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:50:19 np0005597378 nova_compute[238941]: 2026-01-27 13:50:19.081 238945 DEBUG nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:19 np0005597378 nova_compute[238941]: 2026-01-27 13:50:19.347 238945 INFO nova.compute.manager [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Took 11.91 seconds to build instance.#033[00m
Jan 27 08:50:19 np0005597378 nova_compute[238941]: 2026-01-27 13:50:19.365 238945 DEBUG oslo_concurrency.lockutils [None req-dac7c677-3914-4049-b6df-cc1612adf8a8 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:19 np0005597378 nova_compute[238941]: 2026-01-27 13:50:19.476 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:19 np0005597378 podman[282791]: 2026-01-27 13:50:19.721169753 +0000 UTC m=+0.060709302 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 27 08:50:20 np0005597378 nova_compute[238941]: 2026-01-27 13:50:20.160 238945 INFO nova.compute.manager [None req-1f13cff1-90db-48bb-89ca-57f9248012dc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Get console output#033[00m
Jan 27 08:50:20 np0005597378 nova_compute[238941]: 2026-01-27 13:50:20.168 238945 INFO oslo.privsep.daemon [None req-1f13cff1-90db-48bb-89ca-57f9248012dc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp7c4a6u_1/privsep.sock']#033[00m
Jan 27 08:50:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 247 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 128 op/s
Jan 27 08:50:21 np0005597378 nova_compute[238941]: 2026-01-27 13:50:21.154 238945 INFO oslo.privsep.daemon [None req-1f13cff1-90db-48bb-89ca-57f9248012dc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 27 08:50:21 np0005597378 nova_compute[238941]: 2026-01-27 13:50:20.881 282814 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 27 08:50:21 np0005597378 nova_compute[238941]: 2026-01-27 13:50:20.885 282814 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 27 08:50:21 np0005597378 nova_compute[238941]: 2026-01-27 13:50:20.887 282814 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 27 08:50:21 np0005597378 nova_compute[238941]: 2026-01-27 13:50:20.888 282814 INFO oslo.privsep.daemon [-] privsep daemon running as pid 282814#033[00m
Jan 27 08:50:21 np0005597378 nova_compute[238941]: 2026-01-27 13:50:21.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:21 np0005597378 nova_compute[238941]: 2026-01-27 13:50:21.628 238945 DEBUG nova.compute.manager [req-2365c9fb-230f-4b81-aa15-08bf7520fafc req-2dd99635-65ba-4c19-bf31-56dc54372599 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:21 np0005597378 nova_compute[238941]: 2026-01-27 13:50:21.630 238945 DEBUG oslo_concurrency.lockutils [req-2365c9fb-230f-4b81-aa15-08bf7520fafc req-2dd99635-65ba-4c19-bf31-56dc54372599 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:21 np0005597378 nova_compute[238941]: 2026-01-27 13:50:21.630 238945 DEBUG oslo_concurrency.lockutils [req-2365c9fb-230f-4b81-aa15-08bf7520fafc req-2dd99635-65ba-4c19-bf31-56dc54372599 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:21 np0005597378 nova_compute[238941]: 2026-01-27 13:50:21.630 238945 DEBUG oslo_concurrency.lockutils [req-2365c9fb-230f-4b81-aa15-08bf7520fafc req-2dd99635-65ba-4c19-bf31-56dc54372599 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:21 np0005597378 nova_compute[238941]: 2026-01-27 13:50:21.630 238945 DEBUG nova.compute.manager [req-2365c9fb-230f-4b81-aa15-08bf7520fafc req-2dd99635-65ba-4c19-bf31-56dc54372599 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] No waiting events found dispatching network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:50:21 np0005597378 nova_compute[238941]: 2026-01-27 13:50:21.631 238945 WARNING nova.compute.manager [req-2365c9fb-230f-4b81-aa15-08bf7520fafc req-2dd99635-65ba-4c19-bf31-56dc54372599 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received unexpected event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:50:21 np0005597378 nova_compute[238941]: 2026-01-27 13:50:21.905 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:50:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 247 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 985 KiB/s rd, 2.0 MiB/s wr, 84 op/s
Jan 27 08:50:22 np0005597378 nova_compute[238941]: 2026-01-27 13:50:22.729 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:22 np0005597378 nova_compute[238941]: 2026-01-27 13:50:22.730 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:22 np0005597378 nova_compute[238941]: 2026-01-27 13:50:22.920 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:50:22 np0005597378 nova_compute[238941]: 2026-01-27 13:50:22.990 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:22 np0005597378 nova_compute[238941]: 2026-01-27 13:50:22.991 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:22 np0005597378 nova_compute[238941]: 2026-01-27 13:50:22.999 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:50:23 np0005597378 nova_compute[238941]: 2026-01-27 13:50:23.000 238945 INFO nova.compute.claims [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:50:23 np0005597378 nova_compute[238941]: 2026-01-27 13:50:23.331 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:50:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/414395320' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:50:23 np0005597378 nova_compute[238941]: 2026-01-27 13:50:23.963 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:23 np0005597378 nova_compute[238941]: 2026-01-27 13:50:23.970 238945 DEBUG nova.compute.provider_tree [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.096 238945 DEBUG nova.scheduler.client.report [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.129 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.130 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.192 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.193 238945 DEBUG nova.network.neutron [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.218 238945 INFO nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.239 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.292 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.293 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 247 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 122 op/s
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.327 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.340 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.341 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.341 238945 INFO nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Creating image(s)#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.364 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.386 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.411 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.415 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.452 238945 DEBUG nova.policy [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dc97508eec004685b1c36a85261430bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7fc23a96b5e44bf687aafd92e4199313', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.479 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "f4421f99-7c11-4331-a349-c0d9713d4dfc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.479 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.480 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.480 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.481 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.482 238945 INFO nova.compute.manager [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Terminating instance#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.483 238945 DEBUG nova.compute.manager [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.484 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.486 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.486 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.493 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.493 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.494 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.494 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.516 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.519 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:24 np0005597378 kernel: tap6de0ab34-ff (unregistering): left promiscuous mode
Jan 27 08:50:24 np0005597378 NetworkManager[48904]: <info>  [1769521824.5339] device (tap6de0ab34-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:50:24 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:24Z|00391|binding|INFO|Releasing lport 6de0ab34-ff4c-4eee-a7d5-56df50d305ac from this chassis (sb_readonly=0)
Jan 27 08:50:24 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:24Z|00392|binding|INFO|Setting lport 6de0ab34-ff4c-4eee-a7d5-56df50d305ac down in Southbound
Jan 27 08:50:24 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:24Z|00393|binding|INFO|Removing iface tap6de0ab34-ff ovn-installed in OVS
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.556 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:9b:a0 10.100.0.10'], port_security=['fa:16:3e:27:9b:a0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f4421f99-7c11-4331-a349-c0d9713d4dfc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-018af714-c949-4d8d-b260-666bb53f2891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33ea67a44b80493cb75d174ebab96310', 'neutron:revision_number': '4', 'neutron:security_group_ids': '81a640fc-8865-48fd-ac8a-a12381a2c86d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09612832-bac4-4af1-b317-dd3540d37656, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6de0ab34-ff4c-4eee-a7d5-56df50d305ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.557 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6de0ab34-ff4c-4eee-a7d5-56df50d305ac in datapath 018af714-c949-4d8d-b260-666bb53f2891 unbound from our chassis#033[00m
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.559 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 018af714-c949-4d8d-b260-666bb53f2891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.560 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fb142b48-9259-40ee-b663-d2fbd45978af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.561 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-018af714-c949-4d8d-b260-666bb53f2891 namespace which is not needed anymore#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.582 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.583 238945 INFO nova.compute.claims [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.585 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:24 np0005597378 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Jan 27 08:50:24 np0005597378 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002d.scope: Consumed 13.890s CPU time.
Jan 27 08:50:24 np0005597378 systemd-machined[207425]: Machine qemu-51-instance-0000002d terminated.
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.736 238945 INFO nova.virt.libvirt.driver [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Instance destroyed successfully.#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.737 238945 DEBUG nova.objects.instance [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lazy-loading 'resources' on Instance uuid f4421f99-7c11-4331-a349-c0d9713d4dfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:24 np0005597378 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [NOTICE]   (282369) : haproxy version is 2.8.14-c23fe91
Jan 27 08:50:24 np0005597378 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [NOTICE]   (282369) : path to executable is /usr/sbin/haproxy
Jan 27 08:50:24 np0005597378 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [WARNING]  (282369) : Exiting Master process...
Jan 27 08:50:24 np0005597378 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [WARNING]  (282369) : Exiting Master process...
Jan 27 08:50:24 np0005597378 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [ALERT]    (282369) : Current worker (282371) exited with code 143 (Terminated)
Jan 27 08:50:24 np0005597378 neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891[282365]: [WARNING]  (282369) : All workers exited. Exiting... (0)
Jan 27 08:50:24 np0005597378 systemd[1]: libpod-23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181.scope: Deactivated successfully.
Jan 27 08:50:24 np0005597378 podman[282955]: 2026-01-27 13:50:24.756899883 +0000 UTC m=+0.088334143 container died 23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.777 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181-userdata-shm.mount: Deactivated successfully.
Jan 27 08:50:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2039b354fcbd140b331e952c67daa725beaff54ea0acc54cb7b969f56969eeb6-merged.mount: Deactivated successfully.
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.818 238945 DEBUG nova.virt.libvirt.vif [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:49:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-365811881',display_name='tempest-ServersTestJSON-server-365811881',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-365811881',id=45,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKiCDKaWjVQivEDqU3EoeLMmBvKQjqFKawS16b9UtLcg366OiAVxi5zfMPJLWF8VdZYXGmdopeJZDeH+kDxj9AThmsqXf4XhiP8H8FKjti0H4tVMOh5j6gSyyILFFBO17Q==',key_name='tempest-keypair-1767690537',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='33ea67a44b80493cb75d174ebab96310',ramdisk_id='',reservation_id='r-ormgr0hq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-782629825',owner_user_name='tempest-ServersTestJSON-782629825-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:49:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c17e8011c1b44fa3beaccb9dacec4913',uuid=f4421f99-7c11-4331-a349-c0d9713d4dfc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.819 238945 DEBUG nova.network.os_vif_util [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Converting VIF {"id": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "address": "fa:16:3e:27:9b:a0", "network": {"id": "018af714-c949-4d8d-b260-666bb53f2891", "bridge": "br-int", "label": "tempest-ServersTestJSON-864984025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33ea67a44b80493cb75d174ebab96310", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6de0ab34-ff", "ovs_interfaceid": "6de0ab34-ff4c-4eee-a7d5-56df50d305ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.820 238945 DEBUG nova.network.os_vif_util [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.820 238945 DEBUG os_vif [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.823 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6de0ab34-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:24 np0005597378 podman[282955]: 2026-01-27 13:50:24.824649503 +0000 UTC m=+0.156083753 container cleanup 23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.825 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:24 np0005597378 systemd[1]: libpod-conmon-23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181.scope: Deactivated successfully.
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.858 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.889 238945 INFO os_vif [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:9b:a0,bridge_name='br-int',has_traffic_filtering=True,id=6de0ab34-ff4c-4eee-a7d5-56df50d305ac,network=Network(018af714-c949-4d8d-b260-666bb53f2891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6de0ab34-ff')#033[00m
Jan 27 08:50:24 np0005597378 podman[283009]: 2026-01-27 13:50:24.897368655 +0000 UTC m=+0.045201815 container remove 23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.904 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d162e0ef-05c7-4dd8-9faf-5b30668433f7]: (4, ('Tue Jan 27 01:50:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891 (23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181)\n23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181\nTue Jan 27 01:50:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-018af714-c949-4d8d-b260-666bb53f2891 (23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181)\n23c24f45180438e8d6a4822360274592d974443f814a37a4c44302c942b33181\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.906 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc60b325-9947-457a-bd6f-67ec4d47e97e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.906 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap018af714-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:24 np0005597378 kernel: tap018af714-c0: left promiscuous mode
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.908 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.929 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90b975b0-72dd-4e39-9ac4-6032785f9b40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.935 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] resizing rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.942 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[62744440-cee2-4ecf-8bbc-5f06ea35e18f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.944 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[59b5c0f9-7864-40e8-a52e-cae21a642e5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd6e94b-b481-43b0-991f-9df90e279f1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443710, 'reachable_time': 21687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283084, 'error': None, 'target': 'ovnmeta-018af714-c949-4d8d-b260-666bb53f2891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:24 np0005597378 systemd[1]: run-netns-ovnmeta\x2d018af714\x2dc949\x2d4d8d\x2db260\x2d666bb53f2891.mount: Deactivated successfully.
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.969 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-018af714-c949-4d8d-b260-666bb53f2891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:50:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:24.969 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[52b7b270-c90f-4ea1-a536-2a5855079195]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:24 np0005597378 nova_compute[238941]: 2026-01-27 13:50:24.971 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.043 238945 DEBUG nova.objects.instance [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'migration_context' on Instance uuid 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.065 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.067 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Ensure instance console log exists: /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.068 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.068 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.068 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.261 238945 INFO nova.virt.libvirt.driver [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Deleting instance files /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc_del#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.262 238945 INFO nova.virt.libvirt.driver [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Deletion of /var/lib/nova/instances/f4421f99-7c11-4331-a349-c0d9713d4dfc_del complete#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.315 238945 INFO nova.compute.manager [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.316 238945 DEBUG oslo.service.loopingcall [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.317 238945 DEBUG nova.compute.manager [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.317 238945 DEBUG nova.network.neutron [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.326 238945 DEBUG nova.network.neutron [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Successfully created port: 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:50:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:50:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644839987' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.400 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.407 238945 DEBUG nova.compute.provider_tree [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.493 238945 DEBUG nova.scheduler.client.report [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.612 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.613 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.686 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.687 238945 DEBUG nova.network.neutron [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.709 238945 INFO nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:50:25 np0005597378 nova_compute[238941]: 2026-01-27 13:50:25.934 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.117 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.119 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.119 238945 INFO nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Creating image(s)#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.138 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.158 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.179 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.183 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.232 238945 DEBUG nova.policy [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2275fd74011649b8b9de6b62ea5c6fc5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a0413d6d71e34cba95a1433946c34b12', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.255 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.256 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.257 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.257 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.279 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.284 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 195f21e5-7b85-4397-88db-891ef125522f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 268 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 96 op/s
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.670 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 195f21e5-7b85-4397-88db-891ef125522f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.732 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] resizing rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.807 238945 DEBUG nova.objects.instance [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lazy-loading 'migration_context' on Instance uuid 195f21e5-7b85-4397-88db-891ef125522f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.825 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.825 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Ensure instance console log exists: /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.826 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.826 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.827 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.906 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:26 np0005597378 nova_compute[238941]: 2026-01-27 13:50:26.983 238945 DEBUG nova.network.neutron [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:27 np0005597378 nova_compute[238941]: 2026-01-27 13:50:27.044 238945 INFO nova.compute.manager [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Took 1.73 seconds to deallocate network for instance.#033[00m
Jan 27 08:50:27 np0005597378 nova_compute[238941]: 2026-01-27 13:50:27.135 238945 DEBUG nova.network.neutron [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Successfully updated port: 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:50:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:50:27 np0005597378 nova_compute[238941]: 2026-01-27 13:50:27.227 238945 DEBUG nova.compute.manager [req-45478372-fd7f-4ca6-942c-a30f70b8654d req-45b6a7cf-4d8a-43b1-b8e3-ac6829853622 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Received event network-vif-deleted-6de0ab34-ff4c-4eee-a7d5-56df50d305ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:27 np0005597378 nova_compute[238941]: 2026-01-27 13:50:27.313 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:27 np0005597378 nova_compute[238941]: 2026-01-27 13:50:27.314 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:27 np0005597378 nova_compute[238941]: 2026-01-27 13:50:27.315 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:27 np0005597378 nova_compute[238941]: 2026-01-27 13:50:27.315 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquired lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:27 np0005597378 nova_compute[238941]: 2026-01-27 13:50:27.315 238945 DEBUG nova.network.neutron [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:50:27 np0005597378 nova_compute[238941]: 2026-01-27 13:50:27.447 238945 DEBUG oslo_concurrency.processutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020674025010277053 of space, bias 1.0, pg target 0.6202207503083116 quantized to 32 (current 32)
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00066818637975827 of space, bias 1.0, pg target 0.20045591392748102 quantized to 32 (current 32)
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1636354327551156e-06 of space, bias 4.0, pg target 0.0013963625193061388 quantized to 16 (current 16)
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:50:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:50:27 np0005597378 nova_compute[238941]: 2026-01-27 13:50:27.892 238945 DEBUG nova.network.neutron [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:50:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:50:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2663692838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.012 238945 DEBUG nova.compute.manager [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received event network-changed-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.012 238945 DEBUG nova.compute.manager [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Refreshing instance network info cache due to event network-changed-8a6b3097-3b81-4bf7-8197-4ae8263c57e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.013 238945 DEBUG oslo_concurrency.lockutils [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.024 238945 DEBUG oslo_concurrency.processutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.031 238945 DEBUG nova.compute.provider_tree [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.068 238945 DEBUG nova.scheduler.client.report [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:50:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 256 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 114 op/s
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.429 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.456 238945 INFO nova.scheduler.client.report [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Deleted allocations for instance f4421f99-7c11-4331-a349-c0d9713d4dfc#033[00m
Jan 27 08:50:28 np0005597378 podman[283452]: 2026-01-27 13:50:28.479459457 +0000 UTC m=+0.052372018 container create f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kirch, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.497 238945 DEBUG nova.network.neutron [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Successfully created port: 9c37d828-4d8b-4de7-a966-d2d71349bb46 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:50:28 np0005597378 systemd[1]: Started libpod-conmon-f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3.scope.
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.529 238945 DEBUG oslo_concurrency.lockutils [None req-ea71b94d-ec7a-4d3d-bce2-09c1dabd12fb c17e8011c1b44fa3beaccb9dacec4913 33ea67a44b80493cb75d174ebab96310 - - default default] Lock "f4421f99-7c11-4331-a349-c0d9713d4dfc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.535 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.538 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:50:28 np0005597378 podman[283452]: 2026-01-27 13:50:28.450997693 +0000 UTC m=+0.023910284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.563 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:50:28 np0005597378 podman[283452]: 2026-01-27 13:50:28.567488191 +0000 UTC m=+0.140400772 container init f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:50:28 np0005597378 podman[283452]: 2026-01-27 13:50:28.575059054 +0000 UTC m=+0.147971625 container start f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kirch, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:50:28 np0005597378 nostalgic_kirch[283468]: 167 167
Jan 27 08:50:28 np0005597378 podman[283452]: 2026-01-27 13:50:28.582348399 +0000 UTC m=+0.155260980 container attach f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kirch, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:50:28 np0005597378 systemd[1]: libpod-f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3.scope: Deactivated successfully.
Jan 27 08:50:28 np0005597378 podman[283452]: 2026-01-27 13:50:28.582961736 +0000 UTC m=+0.155874297 container died f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:50:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-94ea84855905236eabb8094952842a14331b626dd97f9c6378f9486e3f8f8274-merged.mount: Deactivated successfully.
Jan 27 08:50:28 np0005597378 podman[283452]: 2026-01-27 13:50:28.619805166 +0000 UTC m=+0.192717727 container remove f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 27 08:50:28 np0005597378 systemd[1]: libpod-conmon-f9bebff0baf5a67d3db57e922dae4c8528d7f262801f436f485d0523a96f8ab3.scope: Deactivated successfully.
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.636 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.637 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.647 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.647 238945 INFO nova.compute.claims [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:50:28 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:50:28 np0005597378 podman[283491]: 2026-01-27 13:50:28.786886523 +0000 UTC m=+0.037217870 container create 6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 08:50:28 np0005597378 nova_compute[238941]: 2026-01-27 13:50:28.821 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:28 np0005597378 systemd[1]: Started libpod-conmon-6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083.scope.
Jan 27 08:50:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:50:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6759e59b9427df605de6f4036d20f8ce68e3a9d33baf27c43df6f26a8b7a02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:28 np0005597378 podman[283491]: 2026-01-27 13:50:28.771401247 +0000 UTC m=+0.021732614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:50:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6759e59b9427df605de6f4036d20f8ce68e3a9d33baf27c43df6f26a8b7a02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6759e59b9427df605de6f4036d20f8ce68e3a9d33baf27c43df6f26a8b7a02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6759e59b9427df605de6f4036d20f8ce68e3a9d33baf27c43df6f26a8b7a02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de6759e59b9427df605de6f4036d20f8ce68e3a9d33baf27c43df6f26a8b7a02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:28 np0005597378 podman[283491]: 2026-01-27 13:50:28.888539983 +0000 UTC m=+0.138871350 container init 6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:50:28 np0005597378 podman[283491]: 2026-01-27 13:50:28.895726146 +0000 UTC m=+0.146057493 container start 6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:50:28 np0005597378 podman[283491]: 2026-01-27 13:50:28.911559391 +0000 UTC m=+0.161890738 container attach 6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.071 238945 DEBUG nova.network.neutron [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updating instance_info_cache with network_info: [{"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.177 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Releasing lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.178 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Instance network_info: |[{"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.179 238945 DEBUG oslo_concurrency.lockutils [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.179 238945 DEBUG nova.network.neutron [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Refreshing network info cache for port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.182 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Start _get_guest_xml network_info=[{"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.187 238945 WARNING nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.192 238945 DEBUG nova.virt.libvirt.host [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.193 238945 DEBUG nova.virt.libvirt.host [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.195 238945 DEBUG nova.virt.libvirt.host [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.196 238945 DEBUG nova.virt.libvirt.host [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.196 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.197 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.198 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.198 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.198 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.199 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.199 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.199 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.200 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.200 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.200 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.201 238945 DEBUG nova.virt.hardware [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.204 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:29 np0005597378 brave_villani[283508]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:50:29 np0005597378 brave_villani[283508]: --> All data devices are unavailable
Jan 27 08:50:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:50:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1876844393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:50:29 np0005597378 systemd[1]: libpod-6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083.scope: Deactivated successfully.
Jan 27 08:50:29 np0005597378 podman[283491]: 2026-01-27 13:50:29.410309816 +0000 UTC m=+0.660641193 container died 6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.429 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:29 np0005597378 systemd[1]: var-lib-containers-storage-overlay-de6759e59b9427df605de6f4036d20f8ce68e3a9d33baf27c43df6f26a8b7a02-merged.mount: Deactivated successfully.
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.440 238945 DEBUG nova.compute.provider_tree [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.479 238945 DEBUG nova.scheduler.client.report [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:50:29 np0005597378 podman[283491]: 2026-01-27 13:50:29.498445763 +0000 UTC m=+0.748777110 container remove 6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.498 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.499 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:50:29 np0005597378 systemd[1]: libpod-conmon-6789908f3d77ecd913363b020fdfb6cd99e0dbfb3e0d58df97f1174544041083.scope: Deactivated successfully.
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.547 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.548 238945 DEBUG nova.network.neutron [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.574 238945 INFO nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.591 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.689 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.692 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.693 238945 INFO nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Creating image(s)#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.717 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.740 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.767 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.771 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:50:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417864171' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.808 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.828 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.831 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.860 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.866 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.868 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.868 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.869 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.891 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.895 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3746a705-72ec-476a-a3c2-8cd4417b7367_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:29 np0005597378 nova_compute[238941]: 2026-01-27 13:50:29.939 238945 DEBUG nova.policy [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11a9e491e7f24607aa5d3d710b6607ab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:50:29 np0005597378 podman[283740]: 2026-01-27 13:50:29.96866414 +0000 UTC m=+0.041952417 container create e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:50:30 np0005597378 systemd[1]: Started libpod-conmon-e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0.scope.
Jan 27 08:50:30 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:50:30 np0005597378 podman[283740]: 2026-01-27 13:50:29.950100342 +0000 UTC m=+0.023388639 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:50:30 np0005597378 podman[283740]: 2026-01-27 13:50:30.077145004 +0000 UTC m=+0.150433301 container init e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euler, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:50:30 np0005597378 podman[283740]: 2026-01-27 13:50:30.086145446 +0000 UTC m=+0.159433723 container start e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 08:50:30 np0005597378 suspicious_euler[283790]: 167 167
Jan 27 08:50:30 np0005597378 systemd[1]: libpod-e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0.scope: Deactivated successfully.
Jan 27 08:50:30 np0005597378 podman[283740]: 2026-01-27 13:50:30.093424892 +0000 UTC m=+0.166713169 container attach e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euler, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 08:50:30 np0005597378 podman[283740]: 2026-01-27 13:50:30.095099857 +0000 UTC m=+0.168388144 container died e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euler, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:50:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-56eff34c174d4d65184f29c4c7308fe97fd83fcca5fbb15a91e130fad7ce9046-merged.mount: Deactivated successfully.
Jan 27 08:50:30 np0005597378 podman[283740]: 2026-01-27 13:50:30.280976698 +0000 UTC m=+0.354264975 container remove e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:50:30 np0005597378 systemd[1]: libpod-conmon-e0f65d63a98219d710f9f02345feec94a29ed4db1238384a2121a87106391ec0.scope: Deactivated successfully.
Jan 27 08:50:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 260 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 150 op/s
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.350 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3746a705-72ec-476a-a3c2-8cd4417b7367_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:50:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1012156914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.437 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] resizing rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.481 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:30 np0005597378 podman[283850]: 2026-01-27 13:50:30.482461209 +0000 UTC m=+0.049356636 container create af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cartwright, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.482 238945 DEBUG nova.virt.libvirt.vif [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-174010246',display_name='tempest-SecurityGroupsTestJSON-server-174010246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-174010246',id=47,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-ozsokgto',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:24Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=17b9acbe-02b3-41d7-af4b-fd8b3d902d47,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.483 238945 DEBUG nova.network.os_vif_util [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.484 238945 DEBUG nova.network.os_vif_util [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.489 238945 DEBUG nova.objects.instance [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:30 np0005597378 systemd[1]: Started libpod-conmon-af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03.scope.
Jan 27 08:50:30 np0005597378 podman[283850]: 2026-01-27 13:50:30.45865134 +0000 UTC m=+0.025546797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:50:30 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:50:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdaa8979ce7a443442a18468d35445a12797838750ce9ca6a8aeac88202ddde5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdaa8979ce7a443442a18468d35445a12797838750ce9ca6a8aeac88202ddde5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdaa8979ce7a443442a18468d35445a12797838750ce9ca6a8aeac88202ddde5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdaa8979ce7a443442a18468d35445a12797838750ce9ca6a8aeac88202ddde5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:30 np0005597378 podman[283850]: 2026-01-27 13:50:30.594663033 +0000 UTC m=+0.161558480 container init af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:50:30 np0005597378 podman[283850]: 2026-01-27 13:50:30.602630607 +0000 UTC m=+0.169526034 container start af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cartwright, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:50:30 np0005597378 podman[283850]: 2026-01-27 13:50:30.61543122 +0000 UTC m=+0.182326677 container attach af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cartwright, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.629 238945 DEBUG nova.objects.instance [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'migration_context' on Instance uuid 3746a705-72ec-476a-a3c2-8cd4417b7367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.647 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  <uuid>17b9acbe-02b3-41d7-af4b-fd8b3d902d47</uuid>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  <name>instance-0000002f</name>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <nova:name>tempest-SecurityGroupsTestJSON-server-174010246</nova:name>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:50:29</nova:creationTime>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:        <nova:user uuid="dc97508eec004685b1c36a85261430bd">tempest-SecurityGroupsTestJSON-915122805-project-member</nova:user>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:        <nova:project uuid="7fc23a96b5e44bf687aafd92e4199313">tempest-SecurityGroupsTestJSON-915122805</nova:project>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:        <nova:port uuid="8a6b3097-3b81-4bf7-8197-4ae8263c57e1">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <entry name="serial">17b9acbe-02b3-41d7-af4b-fd8b3d902d47</entry>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <entry name="uuid">17b9acbe-02b3-41d7-af4b-fd8b3d902d47</entry>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk.config">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:4b:1f:41"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <target dev="tap8a6b3097-3b"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/console.log" append="off"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:50:30 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:50:30 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:50:30 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:50:30 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.649 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Preparing to wait for external event network-vif-plugged-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.649 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.650 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.650 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.651 238945 DEBUG nova.virt.libvirt.vif [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-174010246',display_name='tempest-SecurityGroupsTestJSON-server-174010246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-174010246',id=47,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-ozsokgto',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:24Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=17b9acbe-02b3-41d7-af4b-fd8b3d902d47,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.651 238945 DEBUG nova.network.os_vif_util [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.652 238945 DEBUG nova.network.os_vif_util [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.653 238945 DEBUG os_vif [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.654 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.654 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.655 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.659 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.660 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a6b3097-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.660 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a6b3097-3b, col_values=(('external_ids', {'iface-id': '8a6b3097-3b81-4bf7-8197-4ae8263c57e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:1f:41', 'vm-uuid': '17b9acbe-02b3-41d7-af4b-fd8b3d902d47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:30 np0005597378 NetworkManager[48904]: <info>  [1769521830.6636] manager: (tap8a6b3097-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.668 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.669 238945 INFO os_vif [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b')
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.774 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.774 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Ensure instance console log exists: /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.775 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.775 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:50:30 np0005597378 nova_compute[238941]: 2026-01-27 13:50:30.775 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]: {
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:    "0": [
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:        {
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "devices": [
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "/dev/loop3"
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            ],
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_name": "ceph_lv0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_size": "21470642176",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "name": "ceph_lv0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "tags": {
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.cluster_name": "ceph",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.crush_device_class": "",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.encrypted": "0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.objectstore": "bluestore",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.osd_id": "0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.type": "block",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.vdo": "0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.with_tpm": "0"
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            },
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "type": "block",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "vg_name": "ceph_vg0"
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:        }
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:    ],
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:    "1": [
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:        {
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "devices": [
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "/dev/loop4"
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            ],
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_name": "ceph_lv1",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_size": "21470642176",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "name": "ceph_lv1",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "tags": {
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.cluster_name": "ceph",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.crush_device_class": "",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.encrypted": "0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.objectstore": "bluestore",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.osd_id": "1",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.type": "block",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.vdo": "0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.with_tpm": "0"
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            },
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "type": "block",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "vg_name": "ceph_vg1"
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:        }
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:    ],
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:    "2": [
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:        {
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "devices": [
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "/dev/loop5"
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            ],
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_name": "ceph_lv2",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_size": "21470642176",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "name": "ceph_lv2",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "tags": {
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.cluster_name": "ceph",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.crush_device_class": "",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.encrypted": "0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.objectstore": "bluestore",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.osd_id": "2",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.type": "block",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.vdo": "0",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:                "ceph.with_tpm": "0"
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            },
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "type": "block",
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:            "vg_name": "ceph_vg2"
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:        }
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]:    ]
Jan 27 08:50:30 np0005597378 confident_cartwright[283889]: }
Jan 27 08:50:30 np0005597378 systemd[1]: libpod-af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03.scope: Deactivated successfully.
Jan 27 08:50:30 np0005597378 podman[283850]: 2026-01-27 13:50:30.927699707 +0000 UTC m=+0.494595124 container died af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:50:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fdaa8979ce7a443442a18468d35445a12797838750ce9ca6a8aeac88202ddde5-merged.mount: Deactivated successfully.
Jan 27 08:50:30 np0005597378 podman[283850]: 2026-01-27 13:50:30.97772998 +0000 UTC m=+0.544625407 container remove af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cartwright, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:50:30 np0005597378 systemd[1]: libpod-conmon-af2fe52c54944c7e79b4db0931be689a298d0277e01e3010c657052ea0394e03.scope: Deactivated successfully.
Jan 27 08:50:31 np0005597378 nova_compute[238941]: 2026-01-27 13:50:31.118 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:50:31 np0005597378 nova_compute[238941]: 2026-01-27 13:50:31.119 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:50:31 np0005597378 nova_compute[238941]: 2026-01-27 13:50:31.120 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] No VIF found with MAC fa:16:3e:4b:1f:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 08:50:31 np0005597378 nova_compute[238941]: 2026-01-27 13:50:31.120 238945 INFO nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Using config drive
Jan 27 08:50:31 np0005597378 nova_compute[238941]: 2026-01-27 13:50:31.140 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:50:31 np0005597378 podman[284011]: 2026-01-27 13:50:31.398982874 +0000 UTC m=+0.037617022 container create 5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 08:50:31 np0005597378 systemd[1]: Started libpod-conmon-5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b.scope.
Jan 27 08:50:31 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:50:31 np0005597378 podman[284011]: 2026-01-27 13:50:31.477351569 +0000 UTC m=+0.115985727 container init 5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 08:50:31 np0005597378 podman[284011]: 2026-01-27 13:50:31.383530159 +0000 UTC m=+0.022164317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:50:31 np0005597378 podman[284011]: 2026-01-27 13:50:31.484750697 +0000 UTC m=+0.123384835 container start 5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:50:31 np0005597378 podman[284011]: 2026-01-27 13:50:31.488542319 +0000 UTC m=+0.127176487 container attach 5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 08:50:31 np0005597378 stoic_roentgen[284028]: 167 167
Jan 27 08:50:31 np0005597378 systemd[1]: libpod-5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b.scope: Deactivated successfully.
Jan 27 08:50:31 np0005597378 podman[284011]: 2026-01-27 13:50:31.490190493 +0000 UTC m=+0.128824631 container died 5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:50:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-dc892c9d27b1e1b283d9cd6f19531cd835e3b3a73fc06e5f939c5a643dc441c2-merged.mount: Deactivated successfully.
Jan 27 08:50:31 np0005597378 podman[284011]: 2026-01-27 13:50:31.5667648 +0000 UTC m=+0.205398938 container remove 5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:50:31 np0005597378 systemd[1]: libpod-conmon-5613dfa8ef47c3610ea64ea290054a5756bb71599e92fd57c984b0f3b5296d0b.scope: Deactivated successfully.
Jan 27 08:50:31 np0005597378 podman[284055]: 2026-01-27 13:50:31.754088451 +0000 UTC m=+0.045491753 container create c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 08:50:31 np0005597378 systemd[1]: Started libpod-conmon-c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca.scope.
Jan 27 08:50:31 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:50:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb53d971cba875d934313e12c661220fa7335815399b11b8326e0ff69b390706/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb53d971cba875d934313e12c661220fa7335815399b11b8326e0ff69b390706/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb53d971cba875d934313e12c661220fa7335815399b11b8326e0ff69b390706/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb53d971cba875d934313e12c661220fa7335815399b11b8326e0ff69b390706/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:31 np0005597378 podman[284055]: 2026-01-27 13:50:31.73395472 +0000 UTC m=+0.025358052 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:50:31 np0005597378 podman[284055]: 2026-01-27 13:50:31.868075522 +0000 UTC m=+0.159478824 container init c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elion, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:50:31 np0005597378 podman[284055]: 2026-01-27 13:50:31.874142774 +0000 UTC m=+0.165546076 container start c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:50:31 np0005597378 podman[284055]: 2026-01-27 13:50:31.877797043 +0000 UTC m=+0.169200345 container attach c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:50:31 np0005597378 nova_compute[238941]: 2026-01-27 13:50:31.907 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:31 np0005597378 nova_compute[238941]: 2026-01-27 13:50:31.993 238945 DEBUG nova.network.neutron [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Successfully updated port: 9c37d828-4d8b-4de7-a966-d2d71349bb46 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:50:32 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:32Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:33:f8 10.100.0.14
Jan 27 08:50:32 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:32Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:33:f8 10.100.0.14
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.115 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "refresh_cache-195f21e5-7b85-4397-88db-891ef125522f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.116 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquired lock "refresh_cache-195f21e5-7b85-4397-88db-891ef125522f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.116 238945 DEBUG nova.network.neutron [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.156 238945 DEBUG nova.compute.manager [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-changed-9c37d828-4d8b-4de7-a966-d2d71349bb46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.157 238945 DEBUG nova.compute.manager [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Refreshing instance network info cache due to event network-changed-9c37d828-4d8b-4de7-a966-d2d71349bb46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.157 238945 DEBUG oslo_concurrency.lockutils [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-195f21e5-7b85-4397-88db-891ef125522f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:50:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 260 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.5 MiB/s wr, 120 op/s
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.366 238945 DEBUG nova.network.neutron [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.470 238945 INFO nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Creating config drive at /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/disk.config#033[00m
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.475 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygj_0ky1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:32 np0005597378 lvm[284151]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:50:32 np0005597378 lvm[284151]: VG ceph_vg0 finished
Jan 27 08:50:32 np0005597378 lvm[284154]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:50:32 np0005597378 lvm[284154]: VG ceph_vg1 finished
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.592 238945 DEBUG nova.network.neutron [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Successfully created port: 3c6790eb-61b3-4e44-be64-1807d3342c68 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:50:32 np0005597378 lvm[284156]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:50:32 np0005597378 lvm[284156]: VG ceph_vg2 finished
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.612 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygj_0ky1" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.639 238945 DEBUG nova.storage.rbd_utils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.643 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/disk.config 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:32 np0005597378 optimistic_elion[284072]: {}
Jan 27 08:50:32 np0005597378 systemd[1]: libpod-c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca.scope: Deactivated successfully.
Jan 27 08:50:32 np0005597378 podman[284055]: 2026-01-27 13:50:32.737174533 +0000 UTC m=+1.028577835 container died c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:50:32 np0005597378 systemd[1]: libpod-c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca.scope: Consumed 1.305s CPU time.
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.764 238945 DEBUG nova.network.neutron [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updated VIF entry in instance network info cache for port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.765 238945 DEBUG nova.network.neutron [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updating instance_info_cache with network_info: [{"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cb53d971cba875d934313e12c661220fa7335815399b11b8326e0ff69b390706-merged.mount: Deactivated successfully.
Jan 27 08:50:32 np0005597378 podman[284055]: 2026-01-27 13:50:32.81859947 +0000 UTC m=+1.110002772 container remove c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elion, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.822 238945 DEBUG oslo_concurrency.processutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/disk.config 17b9acbe-02b3-41d7-af4b-fd8b3d902d47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.823 238945 INFO nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Deleting local config drive /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47/disk.config because it was imported into RBD.#033[00m
Jan 27 08:50:32 np0005597378 systemd[1]: libpod-conmon-c80878e3fa12cc3dc18a3050bee546c2a27440c0d3dab0432662e3443ed332ca.scope: Deactivated successfully.
Jan 27 08:50:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:50:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:50:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:50:32 np0005597378 kernel: tap8a6b3097-3b: entered promiscuous mode
Jan 27 08:50:32 np0005597378 NetworkManager[48904]: <info>  [1769521832.8778] manager: (tap8a6b3097-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/178)
Jan 27 08:50:32 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:32Z|00394|binding|INFO|Claiming lport 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 for this chassis.
Jan 27 08:50:32 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:32Z|00395|binding|INFO|8a6b3097-3b81-4bf7-8197-4ae8263c57e1: Claiming fa:16:3e:4b:1f:41 10.100.0.7
Jan 27 08:50:32 np0005597378 systemd-udevd[284155]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.880 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:50:32 np0005597378 NetworkManager[48904]: <info>  [1769521832.8918] device (tap8a6b3097-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:50:32 np0005597378 NetworkManager[48904]: <info>  [1769521832.8929] device (tap8a6b3097-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:50:32 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:32Z|00396|binding|INFO|Setting lport 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 ovn-installed in OVS
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.902 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.906 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:32 np0005597378 nova_compute[238941]: 2026-01-27 13:50:32.909 238945 DEBUG oslo_concurrency.lockutils [req-6a60a905-206b-4546-b20c-2d599e07fc97 req-190df82e-7c7d-4275-a301-2b4027dafe71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:50:32 np0005597378 systemd-machined[207425]: New machine qemu-53-instance-0000002f.
Jan 27 08:50:32 np0005597378 systemd[1]: Started Virtual Machine qemu-53-instance-0000002f.
Jan 27 08:50:33 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:33Z|00397|binding|INFO|Setting lport 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 up in Southbound
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.050 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:1f:41 10.100.0.7'], port_security=['fa:16:3e:4b:1f:41 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '17b9acbe-02b3-41d7-af4b-fd8b3d902d47', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7fc23a96b5e44bf687aafd92e4199313', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2aa93dd2-b6fa-4307-bde8-658361fd357a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24d97abc-1098-48ce-8d9c-90139c3050c9, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8a6b3097-3b81-4bf7-8197-4ae8263c57e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.051 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 in datapath 6fa17e2f-4576-4e68-b7d9-6d78705f8a05 bound to our chassis#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.053 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fa17e2f-4576-4e68-b7d9-6d78705f8a05#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.065 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[df17aee3-877b-4736-a3f2-13df9bc0fc75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.066 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6fa17e2f-41 in ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.068 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6fa17e2f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.068 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[63d4417c-903a-4d1e-a35e-dd0174853ae8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.068 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[22d0a44a-9688-4d65-ace4-ee1dbf194520]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.082 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[04907d93-e1dc-432d-826e-1dfabdf76a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.096 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f88b5b22-aee7-49ca-baba-f3db773e96ec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.125 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e287b2cb-aff0-4976-86dd-f50fb53a673c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.130 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7ead5e-d40c-46f3-ad9a-67acef5c0979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 NetworkManager[48904]: <info>  [1769521833.1323] manager: (tap6fa17e2f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/179)
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.157 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a489f2-7b0d-467a-833e-b2e38a112cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.160 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[045a91f6-d705-4229-b789-fea75a0b5b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 NetworkManager[48904]: <info>  [1769521833.1813] device (tap6fa17e2f-40): carrier: link connected
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.186 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[23b941c1-ba09-4c56-83e3-8d15ea1e2ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.202 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1050411f-e409-4e8b-8150-1148abdce191]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa17e2f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:db:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447280, 'reachable_time': 29382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284275, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.216 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8dcaa611-b908-43dd-ac33-82adb072371b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee7:db61'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447280, 'tstamp': 447280}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284276, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.234 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9f5d5c8f-d37d-4bbf-813b-a5be3d978318]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa17e2f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:db:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447280, 'reachable_time': 29382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284277, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e30f9c-8516-4ae1-9716-8c23c1afb368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.322 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0079d637-0a92-4ae6-8cee-59701374dc45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.323 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa17e2f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.323 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.324 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fa17e2f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:33 np0005597378 kernel: tap6fa17e2f-40: entered promiscuous mode
Jan 27 08:50:33 np0005597378 NetworkManager[48904]: <info>  [1769521833.3265] manager: (tap6fa17e2f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.328 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.329 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fa17e2f-40, col_values=(('external_ids', {'iface-id': '023f53bd-5452-48b1-a708-41a1d13bdb08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:33 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:33Z|00398|binding|INFO|Releasing lport 023f53bd-5452-48b1-a708-41a1d13bdb08 from this chassis (sb_readonly=0)
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.330 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.346 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.348 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.349 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6fa17e2f-4576-4e68-b7d9-6d78705f8a05.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6fa17e2f-4576-4e68-b7d9-6d78705f8a05.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.350 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a96c22-92df-4ebe-ba15-d255ae943152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.350 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-6fa17e2f-4576-4e68-b7d9-6d78705f8a05
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/6fa17e2f-4576-4e68-b7d9-6d78705f8a05.pid.haproxy
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 6fa17e2f-4576-4e68-b7d9-6d78705f8a05
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:50:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:33.351 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'env', 'PROCESS_TAG=haproxy-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6fa17e2f-4576-4e68-b7d9-6d78705f8a05.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.561 238945 DEBUG nova.network.neutron [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Updating instance_info_cache with network_info: [{"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:33 np0005597378 podman[284309]: 2026-01-27 13:50:33.683911538 +0000 UTC m=+0.021168159 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.798 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Releasing lock "refresh_cache-195f21e5-7b85-4397-88db-891ef125522f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.799 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Instance network_info: |[{"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.799 238945 DEBUG oslo_concurrency.lockutils [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-195f21e5-7b85-4397-88db-891ef125522f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.800 238945 DEBUG nova.network.neutron [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Refreshing network info cache for port 9c37d828-4d8b-4de7-a966-d2d71349bb46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.802 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Start _get_guest_xml network_info=[{"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.808 238945 WARNING nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.814 238945 DEBUG nova.virt.libvirt.host [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.814 238945 DEBUG nova.virt.libvirt.host [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.817 238945 DEBUG nova.virt.libvirt.host [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.817 238945 DEBUG nova.virt.libvirt.host [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.818 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.818 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.819 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.819 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.819 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.820 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.820 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.820 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.821 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.821 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.822 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.822 238945 DEBUG nova.virt.hardware [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:50:33 np0005597378 nova_compute[238941]: 2026-01-27 13:50:33.825 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:33 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:50:33 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:50:33 np0005597378 podman[284309]: 2026-01-27 13:50:33.907722329 +0000 UTC m=+0.244978930 container create 46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 08:50:34 np0005597378 systemd[1]: Started libpod-conmon-46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927.scope.
Jan 27 08:50:34 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:50:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab0fe96905189fab7283c3eb95c3fab982410fb113c0b0e10de31b1021d24e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:34 np0005597378 podman[284309]: 2026-01-27 13:50:34.141025924 +0000 UTC m=+0.478282525 container init 46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:50:34 np0005597378 podman[284309]: 2026-01-27 13:50:34.14717126 +0000 UTC m=+0.484427861 container start 46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:50:34 np0005597378 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [NOTICE]   (284386) : New worker (284391) forked
Jan 27 08:50:34 np0005597378 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [NOTICE]   (284386) : Loading success.
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.239 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521834.2387881, 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.240 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] VM Started (Lifecycle Event)#033[00m
Jan 27 08:50:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 318 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.3 MiB/s wr, 170 op/s
Jan 27 08:50:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:50:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024818540' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.475 238945 DEBUG nova.network.neutron [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Successfully updated port: 3c6790eb-61b3-4e44-be64-1807d3342c68 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.482 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.501 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.504 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.788 238945 DEBUG nova.compute.manager [req-bbc2ee4e-be7d-41fa-a60c-aea961c2fe6e req-9b90a003-8735-4987-8870-eb342c86cd26 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received event network-vif-plugged-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.789 238945 DEBUG oslo_concurrency.lockutils [req-bbc2ee4e-be7d-41fa-a60c-aea961c2fe6e req-9b90a003-8735-4987-8870-eb342c86cd26 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.790 238945 DEBUG oslo_concurrency.lockutils [req-bbc2ee4e-be7d-41fa-a60c-aea961c2fe6e req-9b90a003-8735-4987-8870-eb342c86cd26 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.790 238945 DEBUG oslo_concurrency.lockutils [req-bbc2ee4e-be7d-41fa-a60c-aea961c2fe6e req-9b90a003-8735-4987-8870-eb342c86cd26 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.790 238945 DEBUG nova.compute.manager [req-bbc2ee4e-be7d-41fa-a60c-aea961c2fe6e req-9b90a003-8735-4987-8870-eb342c86cd26 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Processing event network-vif-plugged-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.791 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.813 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.826 238945 INFO nova.virt.libvirt.driver [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Instance spawned successfully.#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.826 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.925 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:34 np0005597378 nova_compute[238941]: 2026-01-27 13:50:34.928 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:50:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:50:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859831611' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.055 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.057 238945 DEBUG nova.virt.libvirt.vif [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1954376749',display_name='tempest-InstanceActionsNegativeTestJSON-server-1954376749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1954376749',id=48,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0413d6d71e34cba95a1433946c34b12',ramdisk_id='',reservation_id='r-ygszc55r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1702192513',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1702192513-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:25Z,user_data=None,user_id='2275fd74011649b8b9de6b62ea5c6fc5',uuid=195f21e5-7b85-4397-88db-891ef125522f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.057 238945 DEBUG nova.network.os_vif_util [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Converting VIF {"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.058 238945 DEBUG nova.network.os_vif_util [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.059 238945 DEBUG nova.objects.instance [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lazy-loading 'pci_devices' on Instance uuid 195f21e5-7b85-4397-88db-891ef125522f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.205 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.206 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquired lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.206 238945 DEBUG nova.network.neutron [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.347 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  <uuid>195f21e5-7b85-4397-88db-891ef125522f</uuid>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  <name>instance-00000030</name>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1954376749</nova:name>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:50:33</nova:creationTime>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:        <nova:user uuid="2275fd74011649b8b9de6b62ea5c6fc5">tempest-InstanceActionsNegativeTestJSON-1702192513-project-member</nova:user>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:        <nova:project uuid="a0413d6d71e34cba95a1433946c34b12">tempest-InstanceActionsNegativeTestJSON-1702192513</nova:project>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:        <nova:port uuid="9c37d828-4d8b-4de7-a966-d2d71349bb46">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <entry name="serial">195f21e5-7b85-4397-88db-891ef125522f</entry>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <entry name="uuid">195f21e5-7b85-4397-88db-891ef125522f</entry>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/195f21e5-7b85-4397-88db-891ef125522f_disk">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/195f21e5-7b85-4397-88db-891ef125522f_disk.config">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:5f:39:0d"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <target dev="tap9c37d828-4d"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/console.log" append="off"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:50:35 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:50:35 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:50:35 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:50:35 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.347 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Preparing to wait for external event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.347 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.348 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.348 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.349 238945 DEBUG nova.virt.libvirt.vif [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1954376749',display_name='tempest-InstanceActionsNegativeTestJSON-server-1954376749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1954376749',id=48,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0413d6d71e34cba95a1433946c34b12',ramdisk_id='',reservation_id='r-ygszc55r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1702192513',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1702192513-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:25Z,user_data=None,user_id='2275fd74011649b8b9de6b62ea5c6fc5',uuid=195f21e5-7b85-4397-88db-891ef125522f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.349 238945 DEBUG nova.network.os_vif_util [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Converting VIF {"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.349 238945 DEBUG nova.network.os_vif_util [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.350 238945 DEBUG os_vif [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.351 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.352 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.352 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.352 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.353 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521834.2397187, 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.353 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.362 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.362 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c37d828-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.363 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c37d828-4d, col_values=(('external_ids', {'iface-id': '9c37d828-4d8b-4de7-a966-d2d71349bb46', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:39:0d', 'vm-uuid': '195f21e5-7b85-4397-88db-891ef125522f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.364 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:35 np0005597378 NetworkManager[48904]: <info>  [1769521835.3652] manager: (tap9c37d828-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.368 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.370 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.371 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.371 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.371 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.372 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.372 238945 DEBUG nova.virt.libvirt.driver [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.376 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.378 238945 INFO os_vif [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d')#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.411 238945 DEBUG nova.compute.manager [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received event network-changed-3c6790eb-61b3-4e44-be64-1807d3342c68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.412 238945 DEBUG nova.compute.manager [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Refreshing instance network info cache due to event network-changed-3c6790eb-61b3-4e44-be64-1807d3342c68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.412 238945 DEBUG oslo_concurrency.lockutils [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.472 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.475 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521834.794127, 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.475 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.504 238945 DEBUG nova.network.neutron [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.523 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.525 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.606 238945 INFO nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Took 11.27 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.606 238945 DEBUG nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.774 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.868 238945 INFO nova.compute.manager [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Took 12.90 seconds to build instance.#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.871 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.871 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.872 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] No VIF found with MAC fa:16:3e:5f:39:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.872 238945 INFO nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Using config drive#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.896 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.915 238945 DEBUG nova.network.neutron [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Updated VIF entry in instance network info cache for port 9c37d828-4d8b-4de7-a966-d2d71349bb46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.916 238945 DEBUG nova.network.neutron [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Updating instance_info_cache with network_info: [{"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:35 np0005597378 nova_compute[238941]: 2026-01-27 13:50:35.968 238945 DEBUG oslo_concurrency.lockutils [req-f833b984-065e-438f-8ecd-ef800b0dbea2 req-fe727899-f612-4c82-b91a-f6758cf7fb43 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-195f21e5-7b85-4397-88db-891ef125522f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.010 238945 DEBUG oslo_concurrency.lockutils [None req-cab98b90-ac6c-41b0-a60f-997a50e24404 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 7.5 MiB/s wr, 180 op/s
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.806 238945 INFO nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Creating config drive at /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/disk.config#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.814 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpefgvf14k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.910 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.929 238945 DEBUG nova.compute.manager [req-1c19905a-65a0-4ca7-ad80-09e5326327ac req-871040aa-9222-4204-a2c4-93a1ec40ac45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received event network-vif-plugged-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.930 238945 DEBUG oslo_concurrency.lockutils [req-1c19905a-65a0-4ca7-ad80-09e5326327ac req-871040aa-9222-4204-a2c4-93a1ec40ac45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.931 238945 DEBUG oslo_concurrency.lockutils [req-1c19905a-65a0-4ca7-ad80-09e5326327ac req-871040aa-9222-4204-a2c4-93a1ec40ac45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.932 238945 DEBUG oslo_concurrency.lockutils [req-1c19905a-65a0-4ca7-ad80-09e5326327ac req-871040aa-9222-4204-a2c4-93a1ec40ac45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.933 238945 DEBUG nova.compute.manager [req-1c19905a-65a0-4ca7-ad80-09e5326327ac req-871040aa-9222-4204-a2c4-93a1ec40ac45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] No waiting events found dispatching network-vif-plugged-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.933 238945 WARNING nova.compute.manager [req-1c19905a-65a0-4ca7-ad80-09e5326327ac req-871040aa-9222-4204-a2c4-93a1ec40ac45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received unexpected event network-vif-plugged-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.959 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpefgvf14k" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.982 238945 DEBUG nova.storage.rbd_utils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] rbd image 195f21e5-7b85-4397-88db-891ef125522f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:36 np0005597378 nova_compute[238941]: 2026-01-27 13:50:36.986 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/disk.config 195f21e5-7b85-4397-88db-891ef125522f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.126 238945 DEBUG oslo_concurrency.processutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/disk.config 195f21e5-7b85-4397-88db-891ef125522f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.128 238945 INFO nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Deleting local config drive /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f/disk.config because it was imported into RBD.#033[00m
Jan 27 08:50:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:50:37 np0005597378 kernel: tap9c37d828-4d: entered promiscuous mode
Jan 27 08:50:37 np0005597378 NetworkManager[48904]: <info>  [1769521837.1740] manager: (tap9c37d828-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Jan 27 08:50:37 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:37Z|00399|binding|INFO|Claiming lport 9c37d828-4d8b-4de7-a966-d2d71349bb46 for this chassis.
Jan 27 08:50:37 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:37Z|00400|binding|INFO|9c37d828-4d8b-4de7-a966-d2d71349bb46: Claiming fa:16:3e:5f:39:0d 10.100.0.7
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.179 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.206 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:39:0d 10.100.0.7'], port_security=['fa:16:3e:5f:39:0d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '195f21e5-7b85-4397-88db-891ef125522f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-058138e4-fae2-4d79-bd80-796b1eaa624c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0413d6d71e34cba95a1433946c34b12', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee269986-8444-4d10-a28d-c61c5ad76197', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d3bf4c4-e1c5-4174-bc6f-0954bf7750ab, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9c37d828-4d8b-4de7-a966-d2d71349bb46) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:50:37 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:37Z|00401|binding|INFO|Setting lport 9c37d828-4d8b-4de7-a966-d2d71349bb46 ovn-installed in OVS
Jan 27 08:50:37 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:37Z|00402|binding|INFO|Setting lport 9c37d828-4d8b-4de7-a966-d2d71349bb46 up in Southbound
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:37 np0005597378 systemd-udevd[284517]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:50:37 np0005597378 systemd-machined[207425]: New machine qemu-54-instance-00000030.
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.209 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9c37d828-4d8b-4de7-a966-d2d71349bb46 in datapath 058138e4-fae2-4d79-bd80-796b1eaa624c bound to our chassis#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.211 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 058138e4-fae2-4d79-bd80-796b1eaa624c#033[00m
Jan 27 08:50:37 np0005597378 systemd[1]: Started Virtual Machine qemu-54-instance-00000030.
Jan 27 08:50:37 np0005597378 NetworkManager[48904]: <info>  [1769521837.2204] device (tap9c37d828-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:50:37 np0005597378 NetworkManager[48904]: <info>  [1769521837.2212] device (tap9c37d828-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.226 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[47229c38-c824-4034-9f5f-02f905eed4a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.227 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap058138e4-f1 in ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.230 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap058138e4-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.230 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c407a20-4b00-4b82-a7f0-2954f7c0a2c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.231 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0879729e-f4a8-44d0-a682-9f224a7a1be0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.247 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[42d879cc-a838-4bb7-a236-7d5b7b635ac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.274 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f37701b6-170b-4cd2-820c-dfe10ecbff2a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.308 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b5415a-9501-4c7a-bbb5-921fe72f0fa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.315 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40e686f9-4d01-459d-9d6f-2ef8521eb55a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 NetworkManager[48904]: <info>  [1769521837.3169] manager: (tap058138e4-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/183)
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.345 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[addbfdbe-fe75-4a83-a1ae-f464afc652c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.347 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[51c018b8-743a-4b72-96df-b0e55667617a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 NetworkManager[48904]: <info>  [1769521837.3689] device (tap058138e4-f0): carrier: link connected
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.373 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e793d652-f807-4d37-b7c2-1c22311432f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[63243db9-a1f4-44e2-b669-b7da14893ae0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap058138e4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:6b:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447699, 'reachable_time': 30462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284550, 'error': None, 'target': 'ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.405 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[765d6050-07c1-420c-b355-998c649c3f39]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:6b0c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447699, 'tstamp': 447699}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284551, 'error': None, 'target': 'ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.420 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[79dfb502-b963-4e42-bde3-628478de809b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap058138e4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:6b:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447699, 'reachable_time': 30462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284552, 'error': None, 'target': 'ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.454 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebc2726-ec56-45e9-94ba-f5e4fcae874b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.495 238945 DEBUG nova.network.neutron [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Updating instance_info_cache with network_info: [{"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.510 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea22519-d648-416d-bfcd-f33dce5e5e01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.512 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058138e4-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.513 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.513 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058138e4-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:37 np0005597378 kernel: tap058138e4-f0: entered promiscuous mode
Jan 27 08:50:37 np0005597378 NetworkManager[48904]: <info>  [1769521837.5162] manager: (tap058138e4-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.515 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.522 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap058138e4-f0, col_values=(('external_ids', {'iface-id': '6a4dc730-966f-42e2-a074-1ac5f5c9e683'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.523 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:37 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:37Z|00403|binding|INFO|Releasing lport 6a4dc730-966f-42e2-a074-1ac5f5c9e683 from this chassis (sb_readonly=0)
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.524 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.526 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/058138e4-fae2-4d79-bd80-796b1eaa624c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/058138e4-fae2-4d79-bd80-796b1eaa624c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.527 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40d0b4f3-b83b-45b6-96d9-2eb2d6e3e94f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.528 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-058138e4-fae2-4d79-bd80-796b1eaa624c
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/058138e4-fae2-4d79-bd80-796b1eaa624c.pid.haproxy
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 058138e4-fae2-4d79-bd80-796b1eaa624c
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:50:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:37.529 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c', 'env', 'PROCESS_TAG=haproxy-058138e4-fae2-4d79-bd80-796b1eaa624c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/058138e4-fae2-4d79-bd80-796b1eaa624c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.540 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.638 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Releasing lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.639 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance network_info: |[{"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.639 238945 DEBUG oslo_concurrency.lockutils [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.639 238945 DEBUG nova.network.neutron [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Refreshing network info cache for port 3c6790eb-61b3-4e44-be64-1807d3342c68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.642 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Start _get_guest_xml network_info=[{"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.648 238945 WARNING nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.653 238945 DEBUG nova.virt.libvirt.host [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.654 238945 DEBUG nova.virt.libvirt.host [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.658 238945 DEBUG nova.virt.libvirt.host [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.658 238945 DEBUG nova.virt.libvirt.host [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.659 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.659 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.659 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.660 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.660 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.660 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.660 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.660 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.660 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.661 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.661 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.661 238945 DEBUG nova.virt.hardware [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.664 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.731 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521837.7309837, 195f21e5-7b85-4397-88db-891ef125522f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:37 np0005597378 nova_compute[238941]: 2026-01-27 13:50:37.732 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] VM Started (Lifecycle Event)#033[00m
Jan 27 08:50:37 np0005597378 podman[284646]: 2026-01-27 13:50:37.909873031 +0000 UTC m=+0.045218095 container create d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 08:50:37 np0005597378 systemd[1]: Started libpod-conmon-d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510.scope.
Jan 27 08:50:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:50:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5184fc400af7d75c6923064f4fee1f6ab740f3209234bf20381b879ac6d77692/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:50:37 np0005597378 podman[284646]: 2026-01-27 13:50:37.885656221 +0000 UTC m=+0.021001305 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:50:37 np0005597378 podman[284646]: 2026-01-27 13:50:37.994115064 +0000 UTC m=+0.129460148 container init d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:50:38 np0005597378 podman[284646]: 2026-01-27 13:50:38.000845965 +0000 UTC m=+0.136191029 container start d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:50:38 np0005597378 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [NOTICE]   (284665) : New worker (284667) forked
Jan 27 08:50:38 np0005597378 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [NOTICE]   (284665) : Loading success.
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.076 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.081 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521837.7334793, 195f21e5-7b85-4397-88db-891ef125522f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.081 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.141 238945 DEBUG nova.compute.manager [req-7693ebd9-46f3-4590-9a46-0a56ecc4bff0 req-a268b759-69eb-4606-b19b-fc9994cf4060 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.141 238945 DEBUG oslo_concurrency.lockutils [req-7693ebd9-46f3-4590-9a46-0a56ecc4bff0 req-a268b759-69eb-4606-b19b-fc9994cf4060 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.141 238945 DEBUG oslo_concurrency.lockutils [req-7693ebd9-46f3-4590-9a46-0a56ecc4bff0 req-a268b759-69eb-4606-b19b-fc9994cf4060 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.142 238945 DEBUG oslo_concurrency.lockutils [req-7693ebd9-46f3-4590-9a46-0a56ecc4bff0 req-a268b759-69eb-4606-b19b-fc9994cf4060 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.142 238945 DEBUG nova.compute.manager [req-7693ebd9-46f3-4590-9a46-0a56ecc4bff0 req-a268b759-69eb-4606-b19b-fc9994cf4060 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Processing event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.142 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.146 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.148 238945 INFO nova.virt.libvirt.driver [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Instance spawned successfully.#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.148 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.212 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.215 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521838.1449933, 195f21e5-7b85-4397-88db-891ef125522f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.215 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:50:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:50:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1319959860' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.258 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.259 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.259 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.260 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.260 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.261 238945 DEBUG nova.virt.libvirt.driver [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.276 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.300 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.303 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 864 KiB/s rd, 6.5 MiB/s wr, 190 op/s
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.346 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.350 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.412 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.495 238945 INFO nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Took 12.38 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.495 238945 DEBUG nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.739 238945 INFO nova.compute.manager [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Took 14.28 seconds to build instance.#033[00m
Jan 27 08:50:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:50:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/319348830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.883 238945 DEBUG oslo_concurrency.lockutils [None req-da89fe45-78a6-4db3-b7c4-0163df100b59 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.907 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.908 238945 DEBUG nova.virt.libvirt.vif [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2074236305',display_name='tempest-ServerActionsTestOtherB-server-2074236305',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2074236305',id=49,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-5b3s0esz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:29Z,user_data=None,user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=3746a705-72ec-476a-a3c2-8cd4417b7367,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.908 238945 DEBUG nova.network.os_vif_util [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.909 238945 DEBUG nova.network.os_vif_util [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:50:38 np0005597378 nova_compute[238941]: 2026-01-27 13:50:38.910 238945 DEBUG nova.objects.instance [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 3746a705-72ec-476a-a3c2-8cd4417b7367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.124 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  <uuid>3746a705-72ec-476a-a3c2-8cd4417b7367</uuid>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  <name>instance-00000031</name>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerActionsTestOtherB-server-2074236305</nova:name>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:50:37</nova:creationTime>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:        <nova:user uuid="11a9e491e7f24607aa5d3d710b6607ab">tempest-ServerActionsTestOtherB-1311443694-project-member</nova:user>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:        <nova:project uuid="89715d52c38241dbb1fdcc016ede5d3c">tempest-ServerActionsTestOtherB-1311443694</nova:project>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:        <nova:port uuid="3c6790eb-61b3-4e44-be64-1807d3342c68">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <entry name="serial">3746a705-72ec-476a-a3c2-8cd4417b7367</entry>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <entry name="uuid">3746a705-72ec-476a-a3c2-8cd4417b7367</entry>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3746a705-72ec-476a-a3c2-8cd4417b7367_disk">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3746a705-72ec-476a-a3c2-8cd4417b7367_disk.config">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:45:74:04"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <target dev="tap3c6790eb-61"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/console.log" append="off"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:50:39 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:50:39 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:50:39 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:50:39 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.125 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Preparing to wait for external event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.125 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.125 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.126 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.126 238945 DEBUG nova.virt.libvirt.vif [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2074236305',display_name='tempest-ServerActionsTestOtherB-server-2074236305',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2074236305',id=49,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-5b3s0esz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-S
erverActionsTestOtherB-1311443694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:29Z,user_data=None,user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=3746a705-72ec-476a-a3c2-8cd4417b7367,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.127 238945 DEBUG nova.network.os_vif_util [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.127 238945 DEBUG nova.network.os_vif_util [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.128 238945 DEBUG os_vif [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.129 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.129 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.133 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.134 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c6790eb-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.134 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c6790eb-61, col_values=(('external_ids', {'iface-id': '3c6790eb-61b3-4e44-be64-1807d3342c68', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:74:04', 'vm-uuid': '3746a705-72ec-476a-a3c2-8cd4417b7367'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:39 np0005597378 NetworkManager[48904]: <info>  [1769521839.1366] manager: (tap3c6790eb-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.138 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.142 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.143 238945 INFO os_vif [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61')#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.537 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.538 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.538 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No VIF found with MAC fa:16:3e:45:74:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.538 238945 INFO nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Using config drive#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.560 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.731 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521824.7292984, f4421f99-7c11-4331-a349-c0d9713d4dfc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.731 238945 INFO nova.compute.manager [-] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:50:39 np0005597378 nova_compute[238941]: 2026-01-27 13:50:39.883 238945 DEBUG nova.compute.manager [None req-12b81422-059a-4f02-9c9c-34b4076783d4 - - - - - -] [instance: f4421f99-7c11-4331-a349-c0d9713d4dfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.9 MiB/s wr, 243 op/s
Jan 27 08:50:40 np0005597378 nova_compute[238941]: 2026-01-27 13:50:40.506 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:50:40 np0005597378 nova_compute[238941]: 2026-01-27 13:50:40.997 238945 INFO nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Creating config drive at /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/disk.config#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.001 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyj23qcr8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.138 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyj23qcr8" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.165 238945 DEBUG nova.storage.rbd_utils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image 3746a705-72ec-476a-a3c2-8cd4417b7367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.169 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/disk.config 3746a705-72ec-476a-a3c2-8cd4417b7367_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.322 238945 DEBUG oslo_concurrency.processutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/disk.config 3746a705-72ec-476a-a3c2-8cd4417b7367_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.323 238945 INFO nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Deleting local config drive /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367/disk.config because it was imported into RBD.#033[00m
Jan 27 08:50:41 np0005597378 kernel: tap3c6790eb-61: entered promiscuous mode
Jan 27 08:50:41 np0005597378 NetworkManager[48904]: <info>  [1769521841.3788] manager: (tap3c6790eb-61): new Tun device (/org/freedesktop/NetworkManager/Devices/186)
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:41 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:41Z|00404|binding|INFO|Claiming lport 3c6790eb-61b3-4e44-be64-1807d3342c68 for this chassis.
Jan 27 08:50:41 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:41Z|00405|binding|INFO|3c6790eb-61b3-4e44-be64-1807d3342c68: Claiming fa:16:3e:45:74:04 10.100.0.3
Jan 27 08:50:41 np0005597378 systemd-udevd[284790]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:50:41 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:41Z|00406|binding|INFO|Setting lport 3c6790eb-61b3-4e44-be64-1807d3342c68 ovn-installed in OVS
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.415 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.419 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:41 np0005597378 NetworkManager[48904]: <info>  [1769521841.4292] device (tap3c6790eb-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:50:41 np0005597378 NetworkManager[48904]: <info>  [1769521841.4298] device (tap3c6790eb-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:50:41 np0005597378 systemd-machined[207425]: New machine qemu-55-instance-00000031.
Jan 27 08:50:41 np0005597378 systemd[1]: Started Virtual Machine qemu-55-instance-00000031.
Jan 27 08:50:41 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:41Z|00407|binding|INFO|Setting lport 3c6790eb-61b3-4e44-be64-1807d3342c68 up in Southbound
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.506 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:74:04 10.100.0.3'], port_security=['fa:16:3e:45:74:04 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3746a705-72ec-476a-a3c2-8cd4417b7367', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f3982b-0a1e-4454-92cd-6be83c00fc3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3c6790eb-61b3-4e44-be64-1807d3342c68) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.507 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3c6790eb-61b3-4e44-be64-1807d3342c68 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a bound to our chassis#033[00m
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.508 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.524 238945 DEBUG nova.network.neutron [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Updated VIF entry in instance network info cache for port 3c6790eb-61b3-4e44-be64-1807d3342c68. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.525 238945 DEBUG nova.network.neutron [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Updating instance_info_cache with network_info: [{"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.530 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1aa801e-5072-4028-8479-4d974c7cbe80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.558 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee644fa-8f83-4edb-baea-ccd52ed08297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.562 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1fbcf573-e3b9-4076-ad64-c181e42de37e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.591 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[67306533-3413-43fd-9066-baa78293f2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.607 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6eedbc23-40f2-421f-8349-1df37a4cf201]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284806, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.629 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee164c4-0171-4c26-a3f3-50ab3626a4fd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438203, 'tstamp': 438203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284807, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438206, 'tstamp': 438206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284807, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.632 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.635 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.635 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.636 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:41.636 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.674 238945 DEBUG oslo_concurrency.lockutils [req-bf5a6f04-13fa-4209-ab71-172147d4392f req-2521467c-133f-4886-8df8-3f5660b5dcd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.912 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.982 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521841.982172, 3746a705-72ec-476a-a3c2-8cd4417b7367 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:41 np0005597378 nova_compute[238941]: 2026-01-27 13:50:41.983 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] VM Started (Lifecycle Event)#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.053 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.057 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521841.982351, 3746a705-72ec-476a-a3c2-8cd4417b7367 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.057 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:50:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.208 238945 DEBUG nova.compute.manager [req-262584f5-f095-49d5-80c6-cd954f215d6b req-c2d8840a-5452-4dc9-9542-645c3721518b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.208 238945 DEBUG oslo_concurrency.lockutils [req-262584f5-f095-49d5-80c6-cd954f215d6b req-c2d8840a-5452-4dc9-9542-645c3721518b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.208 238945 DEBUG oslo_concurrency.lockutils [req-262584f5-f095-49d5-80c6-cd954f215d6b req-c2d8840a-5452-4dc9-9542-645c3721518b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.209 238945 DEBUG oslo_concurrency.lockutils [req-262584f5-f095-49d5-80c6-cd954f215d6b req-c2d8840a-5452-4dc9-9542-645c3721518b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.209 238945 DEBUG nova.compute.manager [req-262584f5-f095-49d5-80c6-cd954f215d6b req-c2d8840a-5452-4dc9-9542-645c3721518b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] No waiting events found dispatching network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.209 238945 WARNING nova.compute.manager [req-262584f5-f095-49d5-80c6-cd954f215d6b req-c2d8840a-5452-4dc9-9542-645c3721518b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received unexpected event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.221 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.225 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.298 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:50:42 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:42Z|00408|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 08:50:42 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:42Z|00409|binding|INFO|Releasing lport 6a4dc730-966f-42e2-a074-1ac5f5c9e683 from this chassis (sb_readonly=0)
Jan 27 08:50:42 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:42Z|00410|binding|INFO|Releasing lport 023f53bd-5452-48b1-a708-41a1d13bdb08 from this chassis (sb_readonly=0)
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.310 238945 DEBUG nova.compute.manager [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received event network-changed-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.310 238945 DEBUG nova.compute.manager [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Refreshing instance network info cache due to event network-changed-8a6b3097-3b81-4bf7-8197-4ae8263c57e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.310 238945 DEBUG oslo_concurrency.lockutils [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.310 238945 DEBUG oslo_concurrency.lockutils [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.311 238945 DEBUG nova.network.neutron [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Refreshing network info cache for port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:50:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 199 op/s
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.498 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 27 08:50:42 np0005597378 nova_compute[238941]: 2026-01-27 13:50:42.499 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.122 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.123 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.123 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.123 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.124 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.125 238945 INFO nova.compute.manager [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Terminating instance#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.126 238945 DEBUG nova.compute.manager [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:50:43 np0005597378 kernel: tap9c37d828-4d (unregistering): left promiscuous mode
Jan 27 08:50:43 np0005597378 NetworkManager[48904]: <info>  [1769521843.1702] device (tap9c37d828-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:50:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:43Z|00411|binding|INFO|Releasing lport 9c37d828-4d8b-4de7-a966-d2d71349bb46 from this chassis (sb_readonly=0)
Jan 27 08:50:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:43Z|00412|binding|INFO|Setting lport 9c37d828-4d8b-4de7-a966-d2d71349bb46 down in Southbound
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.176 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:43Z|00413|binding|INFO|Removing iface tap9c37d828-4d ovn-installed in OVS
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.180 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:43 np0005597378 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000030.scope: Deactivated successfully.
Jan 27 08:50:43 np0005597378 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000030.scope: Consumed 5.558s CPU time.
Jan 27 08:50:43 np0005597378 systemd-machined[207425]: Machine qemu-54-instance-00000030 terminated.
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.236 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:39:0d 10.100.0.7'], port_security=['fa:16:3e:5f:39:0d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '195f21e5-7b85-4397-88db-891ef125522f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-058138e4-fae2-4d79-bd80-796b1eaa624c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0413d6d71e34cba95a1433946c34b12', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee269986-8444-4d10-a28d-c61c5ad76197', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d3bf4c4-e1c5-4174-bc6f-0954bf7750ab, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9c37d828-4d8b-4de7-a966-d2d71349bb46) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.237 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9c37d828-4d8b-4de7-a966-d2d71349bb46 in datapath 058138e4-fae2-4d79-bd80-796b1eaa624c unbound from our chassis#033[00m
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.239 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 058138e4-fae2-4d79-bd80-796b1eaa624c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.240 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0bafe932-6d19-4213-a986-b6aed791a5ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.240 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c namespace which is not needed anymore#033[00m
Jan 27 08:50:43 np0005597378 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [NOTICE]   (284665) : haproxy version is 2.8.14-c23fe91
Jan 27 08:50:43 np0005597378 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [NOTICE]   (284665) : path to executable is /usr/sbin/haproxy
Jan 27 08:50:43 np0005597378 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [WARNING]  (284665) : Exiting Master process...
Jan 27 08:50:43 np0005597378 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [ALERT]    (284665) : Current worker (284667) exited with code 143 (Terminated)
Jan 27 08:50:43 np0005597378 neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c[284661]: [WARNING]  (284665) : All workers exited. Exiting... (0)
Jan 27 08:50:43 np0005597378 systemd[1]: libpod-d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510.scope: Deactivated successfully.
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.365 238945 INFO nova.virt.libvirt.driver [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Instance destroyed successfully.#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.366 238945 DEBUG nova.objects.instance [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lazy-loading 'resources' on Instance uuid 195f21e5-7b85-4397-88db-891ef125522f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:43 np0005597378 podman[284871]: 2026-01-27 13:50:43.371649654 +0000 UTC m=+0.048889684 container died d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:50:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510-userdata-shm.mount: Deactivated successfully.
Jan 27 08:50:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5184fc400af7d75c6923064f4fee1f6ab740f3209234bf20381b879ac6d77692-merged.mount: Deactivated successfully.
Jan 27 08:50:43 np0005597378 podman[284871]: 2026-01-27 13:50:43.412681466 +0000 UTC m=+0.089921496 container cleanup d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:50:43 np0005597378 systemd[1]: libpod-conmon-d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510.scope: Deactivated successfully.
Jan 27 08:50:43 np0005597378 podman[284911]: 2026-01-27 13:50:43.483445907 +0000 UTC m=+0.048512615 container remove d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.489 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd03fd9-c59a-4f2f-af3b-dc31c24a4814]: (4, ('Tue Jan 27 01:50:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c (d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510)\nd75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510\nTue Jan 27 01:50:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c (d75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510)\nd75a3bd2bbb0d17ea7f4eed17eed726221869ff1245a8badefeac9d469bc6510\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.491 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d457109-d512-462c-9e78-39eb90fc0a06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.492 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058138e4-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:43 np0005597378 kernel: tap058138e4-f0: left promiscuous mode
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.522 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.525 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9803fbba-118a-4f9c-a67a-e41737195dad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.539 238945 DEBUG nova.virt.libvirt.vif [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1954376749',display_name='tempest-InstanceActionsNegativeTestJSON-server-1954376749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1954376749',id=48,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:50:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a0413d6d71e34cba95a1433946c34b12',ramdisk_id='',reservation_id='r-ygszc55r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1702192513',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1702192513-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:50:38Z,user_data=None,user_id='2275fd74011649b8b9de6b62ea5c6fc5',uuid=195f21e5-7b85-4397-88db-891ef125522f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.540 238945 DEBUG nova.network.os_vif_util [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Converting VIF {"id": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "address": "fa:16:3e:5f:39:0d", "network": {"id": "058138e4-fae2-4d79-bd80-796b1eaa624c", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1255222002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0413d6d71e34cba95a1433946c34b12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c37d828-4d", "ovs_interfaceid": "9c37d828-4d8b-4de7-a966-d2d71349bb46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.540 238945 DEBUG nova.network.os_vif_util [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.541 238945 DEBUG os_vif [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.541 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01ec2f91-92d2-4958-b194-6515ec6479df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.542 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.542 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c37d828-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.542 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccb09e5-d9e8-4322-a656-407990c6791c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.547 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.549 238945 INFO os_vif [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:39:0d,bridge_name='br-int',has_traffic_filtering=True,id=9c37d828-4d8b-4de7-a966-d2d71349bb46,network=Network(058138e4-fae2-4d79-bd80-796b1eaa624c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c37d828-4d')#033[00m
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.560 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4fcee85a-17cf-4854-a2cf-6a56644b78f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447692, 'reachable_time': 21339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284929, 'error': None, 'target': 'ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.562 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-058138e4-fae2-4d79-bd80-796b1eaa624c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:50:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:43.562 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1ae69d-03ba-48b7-8bb5-3bd7ff216e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:43 np0005597378 systemd[1]: run-netns-ovnmeta\x2d058138e4\x2dfae2\x2d4d79\x2dbd80\x2d796b1eaa624c.mount: Deactivated successfully.
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.572 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.649 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.650 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.650 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.651 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.651 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.801 238945 INFO nova.virt.libvirt.driver [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Deleting instance files /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f_del#033[00m
Jan 27 08:50:43 np0005597378 nova_compute[238941]: 2026-01-27 13:50:43.802 238945 INFO nova.virt.libvirt.driver [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Deletion of /var/lib/nova/instances/195f21e5-7b85-4397-88db-891ef125522f_del complete#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.213 238945 DEBUG nova.network.neutron [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updated VIF entry in instance network info cache for port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.214 238945 DEBUG nova.network.neutron [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updating instance_info_cache with network_info: [{"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:50:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1868916465' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.236 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.314 238945 DEBUG nova.compute.manager [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-vif-unplugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.315 238945 DEBUG oslo_concurrency.lockutils [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.315 238945 DEBUG oslo_concurrency.lockutils [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.315 238945 DEBUG oslo_concurrency.lockutils [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.315 238945 DEBUG nova.compute.manager [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] No waiting events found dispatching network-vif-unplugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.316 238945 DEBUG nova.compute.manager [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-vif-unplugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.316 238945 DEBUG nova.compute.manager [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.316 238945 DEBUG oslo_concurrency.lockutils [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "195f21e5-7b85-4397-88db-891ef125522f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.316 238945 DEBUG oslo_concurrency.lockutils [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.316 238945 DEBUG oslo_concurrency.lockutils [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.316 238945 DEBUG nova.compute.manager [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] No waiting events found dispatching network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.317 238945 WARNING nova.compute.manager [req-19f77f4d-7fe3-4d34-8e7b-94d06b29656e req-1e3e1b9f-f0d6-4e9b-b6d6-8090055e30e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received unexpected event network-vif-plugged-9c37d828-4d8b-4de7-a966-d2d71349bb46 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:50:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 339 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 246 op/s
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.377 238945 INFO nova.compute.manager [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Took 1.25 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.378 238945 DEBUG oslo.service.loopingcall [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.378 238945 DEBUG nova.compute.manager [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.378 238945 DEBUG nova.network.neutron [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.386 238945 DEBUG oslo_concurrency.lockutils [req-d14b2832-0902-48cf-8ccd-bd9b3ca696ea req-c09ded22-8a57-4327-9c6b-fa052f167d65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.417 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.417 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.421 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.421 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.424 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.424 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.428 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.428 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.434 238945 DEBUG nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.435 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.435 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.435 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.435 238945 DEBUG nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Processing event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.436 238945 DEBUG nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.436 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.436 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.436 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.437 238945 DEBUG nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] No waiting events found dispatching network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.437 238945 WARNING nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received unexpected event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.438 238945 DEBUG nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received event network-changed-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.438 238945 DEBUG nova.compute.manager [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Refreshing instance network info cache due to event network-changed-8a6b3097-3b81-4bf7-8197-4ae8263c57e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.438 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.438 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.439 238945 DEBUG nova.network.neutron [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Refreshing network info cache for port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.440 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.442 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Error from libvirt while getting description of instance-00000030: [Error Code 42] Domain not found: no domain with matching uuid '195f21e5-7b85-4397-88db-891ef125522f' (instance-00000030): libvirt.libvirtError: Domain not found: no domain with matching uuid '195f21e5-7b85-4397-88db-891ef125522f' (instance-00000030)#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.445 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521844.444733, 3746a705-72ec-476a-a3c2-8cd4417b7367 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.445 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.448 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.464 238945 INFO nova.virt.libvirt.driver [-] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance spawned successfully.#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.466 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.470 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.475 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.494 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.495 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.495 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.496 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.496 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.497 238945 DEBUG nova.virt.libvirt.driver [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.505 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.559 238945 INFO nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Took 14.87 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.560 238945 DEBUG nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.687 238945 INFO nova.compute.manager [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Took 16.07 seconds to build instance.#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.703 238945 DEBUG oslo_concurrency.lockutils [None req-7a5f8b37-8dc9-4c71-9f33-7f1e8d12a64a 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.705 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.706 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3469MB free_disk=59.834361389279366GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.706 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.707 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.941 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e053f779-294f-4782-bb33-a14e40753795 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.942 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.942 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.942 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 195f21e5-7b85-4397-88db-891ef125522f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.942 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 3746a705-72ec-476a-a3c2-8cd4417b7367 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.942 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:50:44 np0005597378 nova_compute[238941]: 2026-01-27 13:50:44.943 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:50:45 np0005597378 nova_compute[238941]: 2026-01-27 13:50:45.178 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:50:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1762352290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:50:45 np0005597378 nova_compute[238941]: 2026-01-27 13:50:45.769 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:45 np0005597378 nova_compute[238941]: 2026-01-27 13:50:45.777 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:50:45 np0005597378 nova_compute[238941]: 2026-01-27 13:50:45.804 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:50:45 np0005597378 nova_compute[238941]: 2026-01-27 13:50:45.874 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:50:45 np0005597378 nova_compute[238941]: 2026-01-27 13:50:45.875 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:45 np0005597378 nova_compute[238941]: 2026-01-27 13:50:45.908 238945 DEBUG nova.network.neutron [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:45 np0005597378 nova_compute[238941]: 2026-01-27 13:50:45.930 238945 INFO nova.compute.manager [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Took 1.55 seconds to deallocate network for instance.#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.005 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.006 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.144 238945 DEBUG oslo_concurrency.processutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:46.298 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:46.299 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:46.300 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 317 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 203 op/s
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.416 238945 INFO nova.compute.manager [None req-6788041a-4e09-443d-bbd3-716d467ca8b4 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Pausing#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.417 238945 DEBUG nova.objects.instance [None req-6788041a-4e09-443d-bbd3-716d467ca8b4 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'flavor' on Instance uuid 3746a705-72ec-476a-a3c2-8cd4417b7367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.451 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521846.450257, 3746a705-72ec-476a-a3c2-8cd4417b7367 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.451 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.456 238945 DEBUG nova.network.neutron [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updated VIF entry in instance network info cache for port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.457 238945 DEBUG nova.network.neutron [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updating instance_info_cache with network_info: [{"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.460 238945 DEBUG nova.compute.manager [None req-6788041a-4e09-443d-bbd3-716d467ca8b4 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.485 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.488 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.514 238945 DEBUG oslo_concurrency.lockutils [req-c4d541c2-5b7b-489a-8a4b-1f735f411e7b req-08297541-4b7b-47e1-bc7e-fd4ec392a628 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-17b9acbe-02b3-41d7-af4b-fd8b3d902d47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.527 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.685 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.686 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.686 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.686 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:50:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:50:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300479716' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.756 238945 DEBUG oslo_concurrency.processutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.761 238945 DEBUG nova.compute.provider_tree [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.779 238945 DEBUG nova.scheduler.client.report [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.803 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.831 238945 INFO nova.scheduler.client.report [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Deleted allocations for instance 195f21e5-7b85-4397-88db-891ef125522f#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.880 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.881 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.881 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.882 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.889 238945 DEBUG nova.compute.manager [req-39979a26-ff93-48c6-bc41-9871c494d4d7 req-a38f88f8-bc11-4553-86f4-3b3d8b55dc74 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Received event network-vif-deleted-9c37d828-4d8b-4de7-a966-d2d71349bb46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:46 np0005597378 nova_compute[238941]: 2026-01-27 13:50:46.939 238945 DEBUG oslo_concurrency.lockutils [None req-fe63ce51-b0cf-4751-ba7c-9c9457d889e8 2275fd74011649b8b9de6b62ea5c6fc5 a0413d6d71e34cba95a1433946c34b12 - - default default] Lock "195f21e5-7b85-4397-88db-891ef125522f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:50:47 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:47Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:1f:41 10.100.0.7
Jan 27 08:50:47 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:47Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:1f:41 10.100.0.7
Jan 27 08:50:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:50:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:50:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:50:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:50:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:50:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.264 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.277 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.277 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.278 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.278 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.278 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.278 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:50:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 322 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 774 KiB/s wr, 210 op/s
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:48 np0005597378 podman[285016]: 2026-01-27 13:50:48.751815958 +0000 UTC m=+0.086648139 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.908 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.908 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.908 238945 INFO nova.compute.manager [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Shelving#033[00m
Jan 27 08:50:48 np0005597378 kernel: tap3c6790eb-61 (unregistering): left promiscuous mode
Jan 27 08:50:48 np0005597378 NetworkManager[48904]: <info>  [1769521848.9584] device (tap3c6790eb-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:50:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:48Z|00414|binding|INFO|Releasing lport 3c6790eb-61b3-4e44-be64-1807d3342c68 from this chassis (sb_readonly=0)
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.969 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:48Z|00415|binding|INFO|Setting lport 3c6790eb-61b3-4e44-be64-1807d3342c68 down in Southbound
Jan 27 08:50:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:48Z|00416|binding|INFO|Removing iface tap3c6790eb-61 ovn-installed in OVS
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.971 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:48.977 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:74:04 10.100.0.3'], port_security=['fa:16:3e:45:74:04 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3746a705-72ec-476a-a3c2-8cd4417b7367', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f3982b-0a1e-4454-92cd-6be83c00fc3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3c6790eb-61b3-4e44-be64-1807d3342c68) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:50:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:48.979 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3c6790eb-61b3-4e44-be64-1807d3342c68 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a unbound from our chassis#033[00m
Jan 27 08:50:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:48.980 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a#033[00m
Jan 27 08:50:48 np0005597378 nova_compute[238941]: 2026-01-27 13:50:48.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:48.997 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d34bc149-be53-4996-9bd6-898a501d7bf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:49 np0005597378 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Deactivated successfully.
Jan 27 08:50:49 np0005597378 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Consumed 2.550s CPU time.
Jan 27 08:50:49 np0005597378 systemd-machined[207425]: Machine qemu-55-instance-00000031 terminated.
Jan 27 08:50:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.024 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[494e3db8-6043-4671-9167-b35d8085f234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.029 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9531b638-ed1f-482c-9e93-7b6f5d4042b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.059 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0af90044-f41e-4790-b3c1-8c6f7a7bc515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.076 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e3fff5b3-a53f-4c4d-a590-60245ea0f8eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285054, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.093 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b7142fd1-d842-44dd-95c8-63cbade70631]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438203, 'tstamp': 438203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285055, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438206, 'tstamp': 438206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285055, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:50:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.094 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.096 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.101 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.101 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:50:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.102 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:50:49.102 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.144 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.156 238945 INFO nova.virt.libvirt.driver [-] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance destroyed successfully.#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.156 238945 DEBUG nova.objects.instance [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'numa_topology' on Instance uuid 3746a705-72ec-476a-a3c2-8cd4417b7367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.383 238945 INFO nova.virt.libvirt.driver [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Beginning cold snapshot process#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.519 238945 DEBUG nova.compute.manager [req-294041bb-aaa0-4028-b97f-85556fdf6132 req-c2478603-2483-47ae-85be-8e0b399df934 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received event network-vif-unplugged-3c6790eb-61b3-4e44-be64-1807d3342c68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.520 238945 DEBUG oslo_concurrency.lockutils [req-294041bb-aaa0-4028-b97f-85556fdf6132 req-c2478603-2483-47ae-85be-8e0b399df934 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.520 238945 DEBUG oslo_concurrency.lockutils [req-294041bb-aaa0-4028-b97f-85556fdf6132 req-c2478603-2483-47ae-85be-8e0b399df934 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.520 238945 DEBUG oslo_concurrency.lockutils [req-294041bb-aaa0-4028-b97f-85556fdf6132 req-c2478603-2483-47ae-85be-8e0b399df934 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.520 238945 DEBUG nova.compute.manager [req-294041bb-aaa0-4028-b97f-85556fdf6132 req-c2478603-2483-47ae-85be-8e0b399df934 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] No waiting events found dispatching network-vif-unplugged-3c6790eb-61b3-4e44-be64-1807d3342c68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.520 238945 WARNING nova.compute.manager [req-294041bb-aaa0-4028-b97f-85556fdf6132 req-c2478603-2483-47ae-85be-8e0b399df934 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received unexpected event network-vif-unplugged-3c6790eb-61b3-4e44-be64-1807d3342c68 for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.525 238945 DEBUG nova.virt.libvirt.imagebackend [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 08:50:49 np0005597378 nova_compute[238941]: 2026-01-27 13:50:49.709 238945 DEBUG nova.storage.rbd_utils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(a4beed0b4d6f4420bc191075b37e4668) on rbd image(3746a705-72ec-476a-a3c2-8cd4417b7367_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:50:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 326 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 2.1 MiB/s wr, 282 op/s
Jan 27 08:50:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Jan 27 08:50:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Jan 27 08:50:50 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Jan 27 08:50:50 np0005597378 nova_compute[238941]: 2026-01-27 13:50:50.519 238945 DEBUG nova.storage.rbd_utils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] cloning vms/3746a705-72ec-476a-a3c2-8cd4417b7367_disk@a4beed0b4d6f4420bc191075b37e4668 to images/382ed141-55e2-48f5-99ca-8b88b812c1b1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:50:50 np0005597378 nova_compute[238941]: 2026-01-27 13:50:50.593 238945 DEBUG nova.storage.rbd_utils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] flattening images/382ed141-55e2-48f5-99ca-8b88b812c1b1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:50:50 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:50Z|00417|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 08:50:50 np0005597378 ovn_controller[144812]: 2026-01-27T13:50:50Z|00418|binding|INFO|Releasing lport 023f53bd-5452-48b1-a708-41a1d13bdb08 from this chassis (sb_readonly=0)
Jan 27 08:50:50 np0005597378 nova_compute[238941]: 2026-01-27 13:50:50.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:50 np0005597378 podman[285172]: 2026-01-27 13:50:50.763295539 +0000 UTC m=+0.098643780 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 27 08:50:50 np0005597378 nova_compute[238941]: 2026-01-27 13:50:50.848 238945 DEBUG nova.storage.rbd_utils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] removing snapshot(a4beed0b4d6f4420bc191075b37e4668) on rbd image(3746a705-72ec-476a-a3c2-8cd4417b7367_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 08:50:51 np0005597378 nova_compute[238941]: 2026-01-27 13:50:51.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:50:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Jan 27 08:50:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Jan 27 08:50:51 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Jan 27 08:50:51 np0005597378 nova_compute[238941]: 2026-01-27 13:50:51.510 238945 DEBUG nova.storage.rbd_utils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(snap) on rbd image(382ed141-55e2-48f5-99ca-8b88b812c1b1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:50:51 np0005597378 nova_compute[238941]: 2026-01-27 13:50:51.715 238945 DEBUG nova.compute.manager [req-86d706cc-9af9-49bb-ba60-36d49833cc69 req-c3527368-d8ea-4959-b6b7-f6b23a3dedf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:51 np0005597378 nova_compute[238941]: 2026-01-27 13:50:51.715 238945 DEBUG oslo_concurrency.lockutils [req-86d706cc-9af9-49bb-ba60-36d49833cc69 req-c3527368-d8ea-4959-b6b7-f6b23a3dedf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:51 np0005597378 nova_compute[238941]: 2026-01-27 13:50:51.716 238945 DEBUG oslo_concurrency.lockutils [req-86d706cc-9af9-49bb-ba60-36d49833cc69 req-c3527368-d8ea-4959-b6b7-f6b23a3dedf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:51 np0005597378 nova_compute[238941]: 2026-01-27 13:50:51.717 238945 DEBUG oslo_concurrency.lockutils [req-86d706cc-9af9-49bb-ba60-36d49833cc69 req-c3527368-d8ea-4959-b6b7-f6b23a3dedf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:51 np0005597378 nova_compute[238941]: 2026-01-27 13:50:51.717 238945 DEBUG nova.compute.manager [req-86d706cc-9af9-49bb-ba60-36d49833cc69 req-c3527368-d8ea-4959-b6b7-f6b23a3dedf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] No waiting events found dispatching network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:50:51 np0005597378 nova_compute[238941]: 2026-01-27 13:50:51.717 238945 WARNING nova.compute.manager [req-86d706cc-9af9-49bb-ba60-36d49833cc69 req-c3527368-d8ea-4959-b6b7-f6b23a3dedf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received unexpected event network-vif-plugged-3c6790eb-61b3-4e44-be64-1807d3342c68 for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Jan 27 08:50:51 np0005597378 nova_compute[238941]: 2026-01-27 13:50:51.916 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:50:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 326 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 232 op/s
Jan 27 08:50:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Jan 27 08:50:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Jan 27 08:50:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Jan 27 08:50:53 np0005597378 nova_compute[238941]: 2026-01-27 13:50:53.053 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:53 np0005597378 nova_compute[238941]: 2026-01-27 13:50:53.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:54 np0005597378 nova_compute[238941]: 2026-01-27 13:50:54.166 238945 INFO nova.virt.libvirt.driver [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Snapshot image upload complete#033[00m
Jan 27 08:50:54 np0005597378 nova_compute[238941]: 2026-01-27 13:50:54.167 238945 DEBUG nova.compute.manager [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:54 np0005597378 nova_compute[238941]: 2026-01-27 13:50:54.226 238945 INFO nova.compute.manager [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Shelve offloading#033[00m
Jan 27 08:50:54 np0005597378 nova_compute[238941]: 2026-01-27 13:50:54.234 238945 INFO nova.virt.libvirt.driver [-] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance destroyed successfully.#033[00m
Jan 27 08:50:54 np0005597378 nova_compute[238941]: 2026-01-27 13:50:54.234 238945 DEBUG nova.compute.manager [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:54 np0005597378 nova_compute[238941]: 2026-01-27 13:50:54.237 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:54 np0005597378 nova_compute[238941]: 2026-01-27 13:50:54.237 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquired lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:54 np0005597378 nova_compute[238941]: 2026-01-27 13:50:54.237 238945 DEBUG nova.network.neutron [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:50:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 364 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.3 MiB/s wr, 291 op/s
Jan 27 08:50:54 np0005597378 nova_compute[238941]: 2026-01-27 13:50:54.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:50:55 np0005597378 nova_compute[238941]: 2026-01-27 13:50:55.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:50:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 372 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 113 op/s
Jan 27 08:50:56 np0005597378 nova_compute[238941]: 2026-01-27 13:50:56.851 238945 DEBUG nova.network.neutron [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Updating instance_info_cache with network_info: [{"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:56 np0005597378 nova_compute[238941]: 2026-01-27 13:50:56.870 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Releasing lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:50:56 np0005597378 nova_compute[238941]: 2026-01-27 13:50:56.919 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:50:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Jan 27 08:50:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Jan 27 08:50:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.141 238945 INFO nova.virt.libvirt.driver [-] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Instance destroyed successfully.#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.142 238945 DEBUG nova.objects.instance [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'resources' on Instance uuid 3746a705-72ec-476a-a3c2-8cd4417b7367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.159 238945 DEBUG nova.virt.libvirt.vif [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2074236305',display_name='tempest-ServerActionsTestOtherB-server-2074236305',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2074236305',id=49,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:50:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-5b3s0esz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member',shelved_at='2026-01-27T13:50:54.167408',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='382ed141-55e2-48f5-99ca-8b88b812c1b1'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:50:49Z,user_data=None,user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=3746a705-72ec-476a-a3c2-8cd4417b7367,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.160 238945 DEBUG nova.network.os_vif_util [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6790eb-61", "ovs_interfaceid": "3c6790eb-61b3-4e44-be64-1807d3342c68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.161 238945 DEBUG nova.network.os_vif_util [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.161 238945 DEBUG os_vif [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.163 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c6790eb-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.169 238945 INFO os_vif [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:74:04,bridge_name='br-int',has_traffic_filtering=True,id=3c6790eb-61b3-4e44-be64-1807d3342c68,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6790eb-61')#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.226 238945 DEBUG nova.compute.manager [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Received event network-changed-3c6790eb-61b3-4e44-be64-1807d3342c68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.226 238945 DEBUG nova.compute.manager [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Refreshing instance network info cache due to event network-changed-3c6790eb-61b3-4e44-be64-1807d3342c68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.226 238945 DEBUG oslo_concurrency.lockutils [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.227 238945 DEBUG oslo_concurrency.lockutils [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.227 238945 DEBUG nova.network.neutron [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Refreshing network info cache for port 3c6790eb-61b3-4e44-be64-1807d3342c68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:50:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 372 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.1 MiB/s wr, 100 op/s
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.361 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521843.3599315, 195f21e5-7b85-4397-88db-891ef125522f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.362 238945 INFO nova.compute.manager [-] [instance: 195f21e5-7b85-4397-88db-891ef125522f] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.394 238945 DEBUG nova.compute.manager [None req-48aa057c-9dea-4adb-8558-c86869346064 - - - - - -] [instance: 195f21e5-7b85-4397-88db-891ef125522f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.436 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.437 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.457 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.470 238945 INFO nova.virt.libvirt.driver [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Deleting instance files /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367_del#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.471 238945 INFO nova.virt.libvirt.driver [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Deletion of /var/lib/nova/instances/3746a705-72ec-476a-a3c2-8cd4417b7367_del complete#033[00m
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.544829) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521858544937, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2278, "num_deletes": 264, "total_data_size": 3333155, "memory_usage": 3398384, "flush_reason": "Manual Compaction"}
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521858570121, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3268760, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25974, "largest_seqno": 28251, "table_properties": {"data_size": 3258263, "index_size": 6735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22183, "raw_average_key_size": 21, "raw_value_size": 3237126, "raw_average_value_size": 3074, "num_data_blocks": 293, "num_entries": 1053, "num_filter_entries": 1053, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769521685, "oldest_key_time": 1769521685, "file_creation_time": 1769521858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 25368 microseconds, and 8092 cpu microseconds.
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.570206) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3268760 bytes OK
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.570241) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.572466) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.572484) EVENT_LOG_v1 {"time_micros": 1769521858572478, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.572514) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3323437, prev total WAL file size 3323437, number of live WAL files 2.
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.573984) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3192KB)], [59(7015KB)]
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521858574089, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10452303, "oldest_snapshot_seqno": -1}
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.577 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.578 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.579 238945 INFO nova.scheduler.client.report [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Deleted allocations for instance 3746a705-72ec-476a-a3c2-8cd4417b7367#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.588 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.589 238945 INFO nova.compute.claims [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5420 keys, 8763402 bytes, temperature: kUnknown
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521858625547, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8763402, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8725177, "index_size": 23587, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 134872, "raw_average_key_size": 24, "raw_value_size": 8625746, "raw_average_value_size": 1591, "num_data_blocks": 965, "num_entries": 5420, "num_filter_entries": 5420, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769521858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.625757) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8763402 bytes
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.627841) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.9 rd, 170.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 6.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5953, records dropped: 533 output_compression: NoCompression
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.627862) EVENT_LOG_v1 {"time_micros": 1769521858627853, "job": 32, "event": "compaction_finished", "compaction_time_micros": 51511, "compaction_time_cpu_micros": 22863, "output_level": 6, "num_output_files": 1, "total_output_size": 8763402, "num_input_records": 5953, "num_output_records": 5420, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521858628583, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769521858630281, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.573759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.630322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.630346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.630349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.630351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:50:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:50:58.630353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.646 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:58 np0005597378 nova_compute[238941]: 2026-01-27 13:50:58.772 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.292 238945 DEBUG nova.network.neutron [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Updated VIF entry in instance network info cache for port 3c6790eb-61b3-4e44-be64-1807d3342c68. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.293 238945 DEBUG nova.network.neutron [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Updating instance_info_cache with network_info: [{"id": "3c6790eb-61b3-4e44-be64-1807d3342c68", "address": "fa:16:3e:45:74:04", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap3c6790eb-61", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.312 238945 DEBUG oslo_concurrency.lockutils [req-50a610d1-d686-4655-8d6c-2ad608ce15bf req-4ea0855b-b36e-4e21-86d2-c8effe63566c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3746a705-72ec-476a-a3c2-8cd4417b7367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:50:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:50:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2172331166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.362 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.368 238945 DEBUG nova.compute.provider_tree [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.380 238945 DEBUG nova.scheduler.client.report [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.403 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.403 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.405 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.456 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.456 238945 DEBUG nova.network.neutron [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.484 238945 INFO nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.508 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.526 238945 DEBUG oslo_concurrency.processutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:50:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002905650' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:50:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:50:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002905650' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.611 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.613 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.613 238945 INFO nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Creating image(s)#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.636 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.657 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.684 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.688 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.721 238945 DEBUG nova.policy [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dc97508eec004685b1c36a85261430bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7fc23a96b5e44bf687aafd92e4199313', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.760 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.761 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.762 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.762 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.785 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:50:59 np0005597378 nova_compute[238941]: 2026-01-27 13:50:59.791 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3790451490' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.125 238945 DEBUG oslo_concurrency.processutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.132 238945 DEBUG nova.compute.provider_tree [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.137 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.167 238945 DEBUG nova.scheduler.client.report [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.199 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.206 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] resizing rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.249 238945 DEBUG oslo_concurrency.lockutils [None req-b821a089-26a1-4cc5-8ef8-db94d909e73c 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "3746a705-72ec-476a-a3c2-8cd4417b7367" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 11.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.286 238945 DEBUG nova.objects.instance [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'migration_context' on Instance uuid f433aa34-c04e-4ae6-8fd3-0999a41789fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.297 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.298 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Ensure instance console log exists: /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.298 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.299 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.299 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 337 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 127 op/s
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.450 238945 DEBUG nova.network.neutron [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Successfully created port: bf0b7102-1d3f-448b-912f-96a2c136df6b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:51:00 np0005597378 nova_compute[238941]: 2026-01-27 13:51:00.643 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:01 np0005597378 nova_compute[238941]: 2026-01-27 13:51:01.475 238945 DEBUG nova.network.neutron [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Successfully updated port: bf0b7102-1d3f-448b-912f-96a2c136df6b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:51:01 np0005597378 nova_compute[238941]: 2026-01-27 13:51:01.495 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:01 np0005597378 nova_compute[238941]: 2026-01-27 13:51:01.495 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquired lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:01 np0005597378 nova_compute[238941]: 2026-01-27 13:51:01.496 238945 DEBUG nova.network.neutron [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:51:01 np0005597378 nova_compute[238941]: 2026-01-27 13:51:01.566 238945 DEBUG nova.compute.manager [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-changed-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:01 np0005597378 nova_compute[238941]: 2026-01-27 13:51:01.566 238945 DEBUG nova.compute.manager [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Refreshing instance network info cache due to event network-changed-bf0b7102-1d3f-448b-912f-96a2c136df6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:51:01 np0005597378 nova_compute[238941]: 2026-01-27 13:51:01.566 238945 DEBUG oslo_concurrency.lockutils [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:01 np0005597378 nova_compute[238941]: 2026-01-27 13:51:01.648 238945 DEBUG nova.network.neutron [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:51:01 np0005597378 nova_compute[238941]: 2026-01-27 13:51:01.921 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:51:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 337 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 103 op/s
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.748 238945 DEBUG nova.network.neutron [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updating instance_info_cache with network_info: [{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.769 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Releasing lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.770 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance network_info: |[{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.770 238945 DEBUG oslo_concurrency.lockutils [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.770 238945 DEBUG nova.network.neutron [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Refreshing network info cache for port bf0b7102-1d3f-448b-912f-96a2c136df6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.772 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Start _get_guest_xml network_info=[{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.777 238945 WARNING nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.781 238945 DEBUG nova.virt.libvirt.host [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.782 238945 DEBUG nova.virt.libvirt.host [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.784 238945 DEBUG nova.virt.libvirt.host [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.784 238945 DEBUG nova.virt.libvirt.host [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.785 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.785 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.786 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.786 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.786 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.786 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.787 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.787 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.787 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.787 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.788 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.789 238945 DEBUG nova.virt.hardware [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:51:02 np0005597378 nova_compute[238941]: 2026-01-27 13:51:02.791 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:03 np0005597378 nova_compute[238941]: 2026-01-27 13:51:03.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115819615' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:03 np0005597378 nova_compute[238941]: 2026-01-27 13:51:03.353 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:03 np0005597378 nova_compute[238941]: 2026-01-27 13:51:03.377 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:03 np0005597378 nova_compute[238941]: 2026-01-27 13:51:03.384 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:03 np0005597378 nova_compute[238941]: 2026-01-27 13:51:03.451 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:03 np0005597378 nova_compute[238941]: 2026-01-27 13:51:03.452 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:03 np0005597378 nova_compute[238941]: 2026-01-27 13:51:03.452 238945 INFO nova.compute.manager [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Shelving#033[00m
Jan 27 08:51:03 np0005597378 nova_compute[238941]: 2026-01-27 13:51:03.483 238945 DEBUG nova.virt.libvirt.driver [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 08:51:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1761100498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.021 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.023 238945 DEBUG nova.virt.libvirt.vif [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-89666614',display_name='tempest-SecurityGroupsTestJSON-server-89666614',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-89666614',id=50,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-0l1r18b1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON
-915122805-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:59Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=f433aa34-c04e-4ae6-8fd3-0999a41789fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.024 238945 DEBUG nova.network.os_vif_util [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.025 238945 DEBUG nova.network.os_vif_util [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.027 238945 DEBUG nova.objects.instance [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'pci_devices' on Instance uuid f433aa34-c04e-4ae6-8fd3-0999a41789fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.042 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  <uuid>f433aa34-c04e-4ae6-8fd3-0999a41789fe</uuid>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  <name>instance-00000032</name>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <nova:name>tempest-SecurityGroupsTestJSON-server-89666614</nova:name>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:51:02</nova:creationTime>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:        <nova:user uuid="dc97508eec004685b1c36a85261430bd">tempest-SecurityGroupsTestJSON-915122805-project-member</nova:user>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:        <nova:project uuid="7fc23a96b5e44bf687aafd92e4199313">tempest-SecurityGroupsTestJSON-915122805</nova:project>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:        <nova:port uuid="bf0b7102-1d3f-448b-912f-96a2c136df6b">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <entry name="serial">f433aa34-c04e-4ae6-8fd3-0999a41789fe</entry>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <entry name="uuid">f433aa34-c04e-4ae6-8fd3-0999a41789fe</entry>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:c7:a7:77"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <target dev="tapbf0b7102-1d"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/console.log" append="off"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:51:04 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:51:04 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:51:04 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:51:04 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.044 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Preparing to wait for external event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.045 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.045 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.046 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.046 238945 DEBUG nova.virt.libvirt.vif [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-89666614',display_name='tempest-SecurityGroupsTestJSON-server-89666614',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-89666614',id=50,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-0l1r18b1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:50:59Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=f433aa34-c04e-4ae6-8fd3-0999a41789fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.047 238945 DEBUG nova.network.os_vif_util [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.047 238945 DEBUG nova.network.os_vif_util [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.048 238945 DEBUG os_vif [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.048 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.049 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.049 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.053 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.054 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf0b7102-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.054 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf0b7102-1d, col_values=(('external_ids', {'iface-id': 'bf0b7102-1d3f-448b-912f-96a2c136df6b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:a7:77', 'vm-uuid': 'f433aa34-c04e-4ae6-8fd3-0999a41789fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.056 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:04 np0005597378 NetworkManager[48904]: <info>  [1769521864.0568] manager: (tapbf0b7102-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.059 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.061 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.062 238945 INFO os_vif [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d')#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.119 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.120 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.121 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] No VIF found with MAC fa:16:3e:c7:a7:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.122 238945 INFO nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Using config drive#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.143 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.156 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521849.155015, 3746a705-72ec-476a-a3c2-8cd4417b7367 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.156 238945 INFO nova.compute.manager [-] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.173 238945 DEBUG nova.compute.manager [None req-9e1e64e6-0545-4fdc-b9ca-5c17ef2d9138 - - - - - -] [instance: 3746a705-72ec-476a-a3c2-8cd4417b7367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.208 238945 DEBUG nova.network.neutron [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updated VIF entry in instance network info cache for port bf0b7102-1d3f-448b-912f-96a2c136df6b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.208 238945 DEBUG nova.network.neutron [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updating instance_info_cache with network_info: [{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.237 238945 DEBUG oslo_concurrency.lockutils [req-ae579fe5-7a2e-4b7e-a7e1-135cf4d2bbd5 req-c7704e97-9e67-4e41-9a82-bc92d264dbec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 360 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 570 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.538 238945 INFO nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Creating config drive at /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/disk.config#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.544 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7u8ill18 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.686 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7u8ill18" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.717 238945 DEBUG nova.storage.rbd_utils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] rbd image f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.724 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/disk.config f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.904 238945 DEBUG oslo_concurrency.processutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/disk.config f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.904 238945 INFO nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Deleting local config drive /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/disk.config because it was imported into RBD.#033[00m
Jan 27 08:51:04 np0005597378 kernel: tapbf0b7102-1d: entered promiscuous mode
Jan 27 08:51:04 np0005597378 NetworkManager[48904]: <info>  [1769521864.9637] manager: (tapbf0b7102-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Jan 27 08:51:04 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:04Z|00419|binding|INFO|Claiming lport bf0b7102-1d3f-448b-912f-96a2c136df6b for this chassis.
Jan 27 08:51:04 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:04Z|00420|binding|INFO|bf0b7102-1d3f-448b-912f-96a2c136df6b: Claiming fa:16:3e:c7:a7:77 10.100.0.5
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.964 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:04.971 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:a7:77 10.100.0.5'], port_security=['fa:16:3e:c7:a7:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f433aa34-c04e-4ae6-8fd3-0999a41789fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7fc23a96b5e44bf687aafd92e4199313', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2aa93dd2-b6fa-4307-bde8-658361fd357a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24d97abc-1098-48ce-8d9c-90139c3050c9, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=bf0b7102-1d3f-448b-912f-96a2c136df6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:04.973 154802 INFO neutron.agent.ovn.metadata.agent [-] Port bf0b7102-1d3f-448b-912f-96a2c136df6b in datapath 6fa17e2f-4576-4e68-b7d9-6d78705f8a05 bound to our chassis#033[00m
Jan 27 08:51:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:04.975 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fa17e2f-4576-4e68-b7d9-6d78705f8a05#033[00m
Jan 27 08:51:04 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:04Z|00421|binding|INFO|Setting lport bf0b7102-1d3f-448b-912f-96a2c136df6b ovn-installed in OVS
Jan 27 08:51:04 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:04Z|00422|binding|INFO|Setting lport bf0b7102-1d3f-448b-912f-96a2c136df6b up in Southbound
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.986 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:04 np0005597378 nova_compute[238941]: 2026-01-27 13:51:04.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:04.991 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[80e54600-ed24-4fb4-ad54-842b06d23d31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:04 np0005597378 systemd-machined[207425]: New machine qemu-56-instance-00000032.
Jan 27 08:51:04 np0005597378 systemd-udevd[285591]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:51:05 np0005597378 systemd[1]: Started Virtual Machine qemu-56-instance-00000032.
Jan 27 08:51:05 np0005597378 NetworkManager[48904]: <info>  [1769521865.0100] device (tapbf0b7102-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:51:05 np0005597378 NetworkManager[48904]: <info>  [1769521865.0106] device (tapbf0b7102-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.025 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3be5a2ff-9ab8-47a8-b876-6b7e8e05b3b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.028 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4c1ab7-8c6b-4111-b5ea-5a81f5b36f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.063 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[25a44e2a-0f3a-4085-94d4-9d4ff011d656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.083 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[532b5713-c839-4853-b39f-3799086f0b2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa17e2f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:db:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447280, 'reachable_time': 29382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285603, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.102 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2063358f-dac8-40d0-bc0a-12f530facdd5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447291, 'tstamp': 447291}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285605, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447294, 'tstamp': 447294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285605, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.104 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa17e2f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.106 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.108 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fa17e2f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.108 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fa17e2f-40, col_values=(('external_ids', {'iface-id': '023f53bd-5452-48b1-a708-41a1d13bdb08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.399 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.432 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521865.4318151, f433aa34-c04e-4ae6-8fd3-0999a41789fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.432 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] VM Started (Lifecycle Event)#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.477 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.482 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521865.4345946, f433aa34-c04e-4ae6-8fd3-0999a41789fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.483 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.505 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.509 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.536 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:51:05 np0005597378 kernel: tapceb7b09e-b6 (unregistering): left promiscuous mode
Jan 27 08:51:05 np0005597378 NetworkManager[48904]: <info>  [1769521865.7249] device (tapceb7b09e-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:51:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:05Z|00423|binding|INFO|Releasing lport ceb7b09e-b635-4570-bcf2-a08115d41365 from this chassis (sb_readonly=0)
Jan 27 08:51:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:05Z|00424|binding|INFO|Setting lport ceb7b09e-b635-4570-bcf2-a08115d41365 down in Southbound
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.734 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:05Z|00425|binding|INFO|Removing iface tapceb7b09e-b6 ovn-installed in OVS
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.742 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:be:d8 10.100.0.7'], port_security=['fa:16:3e:ad:be:d8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e053f779-294f-4782-bb33-a14e40753795', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a0c34526-a874-4960-805d-36c3b59e9c05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ceb7b09e-b635-4570-bcf2-a08115d41365) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.744 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ceb7b09e-b635-4570-bcf2-a08115d41365 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a unbound from our chassis#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.745 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.766 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b12a93c7-53a7-489c-a9db-05372aefc9d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:05 np0005597378 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Jan 27 08:51:05 np0005597378 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002a.scope: Consumed 17.339s CPU time.
Jan 27 08:51:05 np0005597378 systemd-machined[207425]: Machine qemu-46-instance-0000002a terminated.
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.804 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ffca9518-60f6-4a6e-9b54-7acf9bec7bf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.808 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[36a64188-09cc-49b3-9a0c-3c2fad4b4801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.839 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae0218e-63bc-4fca-baca-7b7b4c3fda9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.863 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e539ad0-b3dd-4e1b-a6d3-9f20f408b488]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285656, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.890 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdc2018-b109-4ab0-aecb-ddb0823e19c2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438203, 'tstamp': 438203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285657, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438206, 'tstamp': 438206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285657, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.892 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.894 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.899 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.899 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.899 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:05.900 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.964 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:05 np0005597378 nova_compute[238941]: 2026-01-27 13:51:05.970 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 372 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Jan 27 08:51:06 np0005597378 nova_compute[238941]: 2026-01-27 13:51:06.505 238945 INFO nova.virt.libvirt.driver [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance shutdown successfully after 3 seconds.#033[00m
Jan 27 08:51:06 np0005597378 nova_compute[238941]: 2026-01-27 13:51:06.512 238945 INFO nova.virt.libvirt.driver [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance destroyed successfully.#033[00m
Jan 27 08:51:06 np0005597378 nova_compute[238941]: 2026-01-27 13:51:06.513 238945 DEBUG nova.objects.instance [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'numa_topology' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:06 np0005597378 nova_compute[238941]: 2026-01-27 13:51:06.850 238945 INFO nova.virt.libvirt.driver [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Beginning cold snapshot process#033[00m
Jan 27 08:51:06 np0005597378 nova_compute[238941]: 2026-01-27 13:51:06.924 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:07 np0005597378 nova_compute[238941]: 2026-01-27 13:51:07.028 238945 DEBUG nova.virt.libvirt.imagebackend [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 08:51:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:51:07 np0005597378 nova_compute[238941]: 2026-01-27 13:51:07.235 238945 DEBUG nova.storage.rbd_utils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(a3fa0c18e0f1415f97839a5cf8e2a2ad) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 08:51:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:07.482 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:07 np0005597378 nova_compute[238941]: 2026-01-27 13:51:07.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:07.484 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:51:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Jan 27 08:51:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Jan 27 08:51:07 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Jan 27 08:51:07 np0005597378 nova_compute[238941]: 2026-01-27 13:51:07.967 238945 DEBUG nova.storage.rbd_utils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] cloning vms/e053f779-294f-4782-bb33-a14e40753795_disk@a3fa0c18e0f1415f97839a5cf8e2a2ad to images/af1a2f6f-cd22-4a1a-b2d9-576a65db1604 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:51:08 np0005597378 nova_compute[238941]: 2026-01-27 13:51:08.079 238945 DEBUG nova.storage.rbd_utils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] flattening images/af1a2f6f-cd22-4a1a-b2d9-576a65db1604 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:51:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 372 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 27 08:51:08 np0005597378 nova_compute[238941]: 2026-01-27 13:51:08.552 238945 DEBUG nova.storage.rbd_utils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] removing snapshot(a3fa0c18e0f1415f97839a5cf8e2a2ad) on rbd image(e053f779-294f-4782-bb33-a14e40753795_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 27 08:51:08 np0005597378 nova_compute[238941]: 2026-01-27 13:51:08.854 238945 DEBUG nova.compute.manager [req-55830551-a787-473c-8111-fa5f68f7ba8e req-dfe7e668-cc64-4974-a5cd-8c5681933ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-unplugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:51:08 np0005597378 nova_compute[238941]: 2026-01-27 13:51:08.855 238945 DEBUG oslo_concurrency.lockutils [req-55830551-a787-473c-8111-fa5f68f7ba8e req-dfe7e668-cc64-4974-a5cd-8c5681933ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:08 np0005597378 nova_compute[238941]: 2026-01-27 13:51:08.855 238945 DEBUG oslo_concurrency.lockutils [req-55830551-a787-473c-8111-fa5f68f7ba8e req-dfe7e668-cc64-4974-a5cd-8c5681933ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:51:08 np0005597378 nova_compute[238941]: 2026-01-27 13:51:08.856 238945 DEBUG oslo_concurrency.lockutils [req-55830551-a787-473c-8111-fa5f68f7ba8e req-dfe7e668-cc64-4974-a5cd-8c5681933ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:51:08 np0005597378 nova_compute[238941]: 2026-01-27 13:51:08.856 238945 DEBUG nova.compute.manager [req-55830551-a787-473c-8111-fa5f68f7ba8e req-dfe7e668-cc64-4974-a5cd-8c5681933ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] No waiting events found dispatching network-vif-unplugged-ceb7b09e-b635-4570-bcf2-a08115d41365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 08:51:08 np0005597378 nova_compute[238941]: 2026-01-27 13:51:08.856 238945 WARNING nova.compute.manager [req-55830551-a787-473c-8111-fa5f68f7ba8e req-dfe7e668-cc64-4974-a5cd-8c5681933ec8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received unexpected event network-vif-unplugged-ceb7b09e-b635-4570-bcf2-a08115d41365 for instance with vm_state active and task_state shelving_image_uploading.
Jan 27 08:51:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Jan 27 08:51:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Jan 27 08:51:08 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Jan 27 08:51:08 np0005597378 nova_compute[238941]: 2026-01-27 13:51:08.968 238945 DEBUG nova.storage.rbd_utils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] creating snapshot(snap) on rbd image(af1a2f6f-cd22-4a1a-b2d9-576a65db1604) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.058 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.182 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.183 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.203 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.222 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.222 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.254 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.260 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.260 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.291 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.301 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.302 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.311 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.312 238945 INFO nova.compute.claims [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Claim successful on node compute-0.ctlplane.example.com
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.377 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.391 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.514 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.564 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.565 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.584 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.637 238945 DEBUG nova.compute.manager [req-d6619c32-af62-4413-9066-f537a0040643 req-6b0d38dc-c84f-4743-be4b-3157b47f660f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.637 238945 DEBUG oslo_concurrency.lockutils [req-d6619c32-af62-4413-9066-f537a0040643 req-6b0d38dc-c84f-4743-be4b-3157b47f660f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.638 238945 DEBUG oslo_concurrency.lockutils [req-d6619c32-af62-4413-9066-f537a0040643 req-6b0d38dc-c84f-4743-be4b-3157b47f660f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.638 238945 DEBUG oslo_concurrency.lockutils [req-d6619c32-af62-4413-9066-f537a0040643 req-6b0d38dc-c84f-4743-be4b-3157b47f660f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.638 238945 DEBUG nova.compute.manager [req-d6619c32-af62-4413-9066-f537a0040643 req-6b0d38dc-c84f-4743-be4b-3157b47f660f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Processing event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.639 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.647 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.648 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521869.6437912, f433aa34-c04e-4ae6-8fd3-0999a41789fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.648 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] VM Resumed (Lifecycle Event)
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.652 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.658 238945 INFO nova.virt.libvirt.driver [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance spawned successfully.
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.659 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.670 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.674 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.687 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.688 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.689 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.689 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.690 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.691 238945 DEBUG nova.virt.libvirt.driver [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.695 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.748 238945 INFO nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Took 10.14 seconds to spawn the instance on the hypervisor.
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.749 238945 DEBUG nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.812 238945 INFO nova.compute.manager [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Took 11.28 seconds to build instance.
Jan 27 08:51:09 np0005597378 nova_compute[238941]: 2026-01-27 13:51:09.828 238945 DEBUG oslo_concurrency.lockutils [None req-2ce6763a-94e5-4ad6-9ec2-5f6006ddd660 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:51:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Jan 27 08:51:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Jan 27 08:51:09 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Jan 27 08:51:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2651621802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.154 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.160 238945 DEBUG nova.compute.provider_tree [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.183 238945 DEBUG nova.scheduler.client.report [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.204 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.205 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.209 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.218 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.218 238945 INFO nova.compute.claims [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Claim successful on node compute-0.ctlplane.example.com
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.276 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.277 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.306 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.329 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 08:51:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 427 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.5 MiB/s wr, 119 op/s
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.447 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.449 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.450 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Creating image(s)
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.481 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.516 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.548 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.552 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.629 238945 DEBUG nova.policy [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2731f35d38de444e8d3fac25a4164453', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '14aa89c69a294999aab63771025b995a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.654 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.655 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.656 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.657 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.682 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.688 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:10 np0005597378 nova_compute[238941]: 2026-01-27 13:51:10.804 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.026 238945 DEBUG nova.compute.manager [req-a03c6564-e75a-4acc-a5ec-0e84858c0fb5 req-98c6ac40-3e27-48f2-8478-18039ff28dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.027 238945 DEBUG oslo_concurrency.lockutils [req-a03c6564-e75a-4acc-a5ec-0e84858c0fb5 req-98c6ac40-3e27-48f2-8478-18039ff28dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.028 238945 DEBUG oslo_concurrency.lockutils [req-a03c6564-e75a-4acc-a5ec-0e84858c0fb5 req-98c6ac40-3e27-48f2-8478-18039ff28dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.028 238945 DEBUG oslo_concurrency.lockutils [req-a03c6564-e75a-4acc-a5ec-0e84858c0fb5 req-98c6ac40-3e27-48f2-8478-18039ff28dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.029 238945 DEBUG nova.compute.manager [req-a03c6564-e75a-4acc-a5ec-0e84858c0fb5 req-98c6ac40-3e27-48f2-8478-18039ff28dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] No waiting events found dispatching network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.029 238945 WARNING nova.compute.manager [req-a03c6564-e75a-4acc-a5ec-0e84858c0fb5 req-98c6ac40-3e27-48f2-8478-18039ff28dfc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received unexpected event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.048 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.133 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] resizing rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.224 238945 DEBUG nova.objects.instance [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'migration_context' on Instance uuid 6696d934-5b11-43a6-828d-b968bbf1ba9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.241 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.241 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Ensure instance console log exists: /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.241 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.242 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.242 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.481 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Successfully created port: 15a02d2b-a26e-4680-91c7-6294785d6e82 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:51:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:11.485 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/659407022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.502 238945 INFO nova.virt.libvirt.driver [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Snapshot image upload complete#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.503 238945 DEBUG nova.compute.manager [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.516 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.712s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.522 238945 DEBUG nova.compute.provider_tree [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.540 238945 DEBUG nova.scheduler.client.report [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.570 238945 INFO nova.compute.manager [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Shelve offloading#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.576 238945 INFO nova.virt.libvirt.driver [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance destroyed successfully.#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.576 238945 DEBUG nova.compute.manager [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.578 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.578 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.578 238945 DEBUG nova.network.neutron [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.581 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.582 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.584 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.589 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.590 238945 INFO nova.compute.claims [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.657 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.658 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.677 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.736 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.814 238945 DEBUG nova.policy [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2731f35d38de444e8d3fac25a4164453', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '14aa89c69a294999aab63771025b995a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.915 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:11 np0005597378 nova_compute[238941]: 2026-01-27 13:51:11.999 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.000 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.001 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Creating image(s)#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.022 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.041 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.073 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.076 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.167 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.167 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.168 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.168 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.187 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.190 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.248 238945 DEBUG nova.compute.manager [req-1bd03600-9ec2-46d1-81a7-4ab36c51d1e9 req-23fffe65-ba20-4c66-902c-8eefcc8cb876 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.249 238945 DEBUG oslo_concurrency.lockutils [req-1bd03600-9ec2-46d1-81a7-4ab36c51d1e9 req-23fffe65-ba20-4c66-902c-8eefcc8cb876 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.249 238945 DEBUG oslo_concurrency.lockutils [req-1bd03600-9ec2-46d1-81a7-4ab36c51d1e9 req-23fffe65-ba20-4c66-902c-8eefcc8cb876 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.250 238945 DEBUG oslo_concurrency.lockutils [req-1bd03600-9ec2-46d1-81a7-4ab36c51d1e9 req-23fffe65-ba20-4c66-902c-8eefcc8cb876 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.250 238945 DEBUG nova.compute.manager [req-1bd03600-9ec2-46d1-81a7-4ab36c51d1e9 req-23fffe65-ba20-4c66-902c-8eefcc8cb876 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.250 238945 WARNING nova.compute.manager [req-1bd03600-9ec2-46d1-81a7-4ab36c51d1e9 req-23fffe65-ba20-4c66-902c-8eefcc8cb876 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received unexpected event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with vm_state active and task_state None.#033[00m
Jan 27 08:51:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 427 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.9 MiB/s wr, 81 op/s
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.490 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.548 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] resizing rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:51:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2516819127' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.627 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.712s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.631 238945 DEBUG nova.objects.instance [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'migration_context' on Instance uuid 4316dbd4-e3b9-4411-b921-6dbdd5a3197f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.637 238945 DEBUG nova.compute.provider_tree [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.683 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.684 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Ensure instance console log exists: /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.684 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.684 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.685 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.709 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Successfully created port: 7417a545-1c1e-4477-b4ff-72b924a65f11 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.742 238945 DEBUG nova.scheduler.client.report [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.801 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.803 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.806 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 3.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.815 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.815 238945 INFO nova.compute.claims [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:51:12 np0005597378 nova_compute[238941]: 2026-01-27 13:51:12.964 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Successfully updated port: 15a02d2b-a26e-4680-91c7-6294785d6e82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.046 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.046 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.134 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "refresh_cache-6696d934-5b11-43a6-828d-b968bbf1ba9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.134 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquired lock "refresh_cache-6696d934-5b11-43a6-828d-b968bbf1ba9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.134 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.423 238945 DEBUG nova.compute.manager [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-changed-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.424 238945 DEBUG nova.compute.manager [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Refreshing instance network info cache due to event network-changed-bf0b7102-1d3f-448b-912f-96a2c136df6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.424 238945 DEBUG oslo_concurrency.lockutils [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.424 238945 DEBUG oslo_concurrency.lockutils [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.424 238945 DEBUG nova.network.neutron [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Refreshing network info cache for port bf0b7102-1d3f-448b-912f-96a2c136df6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.436 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.548 238945 DEBUG oslo_concurrency.lockutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.548 238945 DEBUG oslo_concurrency.lockutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.548 238945 INFO nova.compute.manager [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Rebooting instance#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.614 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:51:13 np0005597378 nova_compute[238941]: 2026-01-27 13:51:13.620 238945 DEBUG oslo_concurrency.lockutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.061 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.137 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.138 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.138 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Creating image(s)#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.159 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.183 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.206 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.210 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.279 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.280 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.280 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.281 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.305 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.314 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7e8705e9-4e86-44aa-b532-55fcccac542c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 511 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 12 MiB/s wr, 322 op/s
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.655 238945 DEBUG nova.network.neutron [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.662 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.698 238945 DEBUG nova.policy [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2731f35d38de444e8d3fac25a4164453', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '14aa89c69a294999aab63771025b995a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.771 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:14 np0005597378 nova_compute[238941]: 2026-01-27 13:51:14.836 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.074 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7e8705e9-4e86-44aa-b532-55fcccac542c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.760s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.140 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] resizing rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:51:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/319059991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.235 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.241 238945 DEBUG nova.compute.provider_tree [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.294 238945 DEBUG nova.scheduler.client.report [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.303 238945 DEBUG nova.objects.instance [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'migration_context' on Instance uuid 7e8705e9-4e86-44aa-b532-55fcccac542c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.339 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.340 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Ensure instance console log exists: /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.340 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.340 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.341 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.343 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.343 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.407 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.407 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.448 238945 INFO nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.462 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Successfully updated port: 7417a545-1c1e-4477-b4ff-72b924a65f11 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.565 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "refresh_cache-4316dbd4-e3b9-4411-b921-6dbdd5a3197f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.565 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquired lock "refresh_cache-4316dbd4-e3b9-4411-b921-6dbdd5a3197f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.565 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.581 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.684 238945 DEBUG nova.compute.manager [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received event network-changed-7417a545-1c1e-4477-b4ff-72b924a65f11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.684 238945 DEBUG nova.compute.manager [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Refreshing instance network info cache due to event network-changed-7417a545-1c1e-4477-b4ff-72b924a65f11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.684 238945 DEBUG oslo_concurrency.lockutils [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-4316dbd4-e3b9-4411-b921-6dbdd5a3197f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.695 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.696 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.697 238945 INFO nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Creating image(s)
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.716 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.734 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.755 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.761 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.836 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.837 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.837 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.837 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.856 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.859 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b17763fd-bf68-45e0-84a4-579e1453d6cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:51:15 np0005597378 nova_compute[238941]: 2026-01-27 13:51:15.900 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.270 238945 DEBUG nova.policy [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ba812648bec43bbbd7489f6c33289cc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 08:51:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 560 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 13 MiB/s wr, 309 op/s
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.362 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b17763fd-bf68-45e0-84a4-579e1453d6cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.429 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] resizing rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.562 238945 DEBUG nova.objects.instance [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lazy-loading 'migration_context' on Instance uuid b17763fd-bf68-45e0-84a4-579e1453d6cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.706 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Successfully created port: 89a5b6ba-141b-45b8-b1ea-fc2a60970931 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.719 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.719 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Ensure instance console log exists: /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.720 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.720 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.720 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.738 238945 DEBUG nova.network.neutron [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updated VIF entry in instance network info cache for port bf0b7102-1d3f-448b-912f-96a2c136df6b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.738 238945 DEBUG nova.network.neutron [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updating instance_info_cache with network_info: [{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.925 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Updating instance_info_cache with network_info: [{"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.928 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:51:16 np0005597378 nova_compute[238941]: 2026-01-27 13:51:16.986 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Updating instance_info_cache with network_info: [{"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:51:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:51:17
Jan 27 08:51:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:51:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:51:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'images', 'backups', 'default.rgw.control']
Jan 27 08:51:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.151 238945 DEBUG oslo_concurrency.lockutils [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.151 238945 DEBUG nova.compute.manager [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-changed-15a02d2b-a26e-4680-91c7-6294785d6e82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.152 238945 DEBUG nova.compute.manager [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Refreshing instance network info cache due to event network-changed-15a02d2b-a26e-4680-91c7-6294785d6e82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.152 238945 DEBUG oslo_concurrency.lockutils [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6696d934-5b11-43a6-828d-b968bbf1ba9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.152 238945 DEBUG oslo_concurrency.lockutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquired lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.152 238945 DEBUG nova.network.neutron [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 08:51:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:51:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.178 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Releasing lock "refresh_cache-4316dbd4-e3b9-4411-b921-6dbdd5a3197f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.179 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Instance network_info: |[{"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.179 238945 DEBUG oslo_concurrency.lockutils [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-4316dbd4-e3b9-4411-b921-6dbdd5a3197f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.179 238945 DEBUG nova.network.neutron [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Refreshing network info cache for port 7417a545-1c1e-4477-b4ff-72b924a65f11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.182 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Start _get_guest_xml network_info=[{"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.187 238945 WARNING nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.192 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.193 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.195 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.195 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.196 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.196 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.196 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.197 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.197 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.197 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.197 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.197 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.197 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.198 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.198 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.198 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.201 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Jan 27 08:51:17 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.246 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Releasing lock "refresh_cache-6696d934-5b11-43a6-828d-b968bbf1ba9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.247 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Instance network_info: |[{"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.248 238945 DEBUG oslo_concurrency.lockutils [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6696d934-5b11-43a6-828d-b968bbf1ba9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.248 238945 DEBUG nova.network.neutron [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Refreshing network info cache for port 15a02d2b-a26e-4680-91c7-6294785d6e82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.252 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Start _get_guest_xml network_info=[{"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.258 238945 WARNING nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.268 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.268 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.272 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.272 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.273 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.273 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.273 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.274 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.274 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.274 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.274 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.274 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.275 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.275 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.275 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.275 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.278 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.435 238945 INFO nova.virt.libvirt.driver [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance destroyed successfully.#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.436 238945 DEBUG nova.objects.instance [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'resources' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.530 238945 DEBUG nova.virt.libvirt.vif [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:48:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1638292425',display_name='tempest-ServerActionsTestOtherB-server-1638292425',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1638292425',id=42,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDP03C0DYkDkDM16rv5xyWrKTfQIVUT5qLMxRMlYzm8hHmeSnMZhV7Wff2liK7vQEs3cYnPwrKMCJRSQi2claQqUZb9ipt64IX/AxK1O0DzECaHBkBTMxxg75MbSwKsocA==',key_name='tempest-keypair-848214420',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-dk0ibvk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member',shelved_at='2026-01-27T13:51:11.503493',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='af1a2f6f-cd22-4a1a-b2d9-576a65db1604'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=e053f779-294f-4782-bb33-a14e40753795,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": 
"25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.531 238945 DEBUG nova.network.os_vif_util [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.532 238945 DEBUG nova.network.os_vif_util [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.532 238945 DEBUG os_vif [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.535 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.535 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapceb7b09e-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.537 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.538 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.542 238945 INFO os_vif [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6')#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.597 238945 DEBUG nova.compute.manager [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.598 238945 DEBUG nova.compute.manager [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing instance network info cache due to event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.598 238945 DEBUG oslo_concurrency.lockutils [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.598 238945 DEBUG oslo_concurrency.lockutils [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.598 238945 DEBUG nova.network.neutron [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:51:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:51:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:51:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:51:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:51:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:51:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:51:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1348512881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.869 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.892 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.896 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3871646173' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:17 np0005597378 nova_compute[238941]: 2026-01-27 13:51:17.995 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.716s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.026 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.030 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.338 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Successfully created port: a00dfa6b-3d70-4dbd-b9c8-4817560c3488 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:51:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 571 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 12 MiB/s wr, 336 op/s
Jan 27 08:51:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459536770' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.522 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.523 238945 DEBUG nova.virt.libvirt.vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-2',id=52,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name=
'tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:11Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=4316dbd4-e3b9-4411-b921-6dbdd5a3197f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.523 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.524 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.525 238945 DEBUG nova.objects.instance [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'pci_devices' on Instance uuid 4316dbd4-e3b9-4411-b921-6dbdd5a3197f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.544 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <uuid>4316dbd4-e3b9-4411-b921-6dbdd5a3197f</uuid>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <name>instance-00000034</name>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:name>tempest-ListServersNegativeTestJSON-server-2140282589-2</nova:name>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:51:17</nova:creationTime>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:user uuid="2731f35d38de444e8d3fac25a4164453">tempest-ListServersNegativeTestJSON-2145054704-project-member</nova:user>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:project uuid="14aa89c69a294999aab63771025b995a">tempest-ListServersNegativeTestJSON-2145054704</nova:project>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:port uuid="7417a545-1c1e-4477-b4ff-72b924a65f11">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <entry name="serial">4316dbd4-e3b9-4411-b921-6dbdd5a3197f</entry>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <entry name="uuid">4316dbd4-e3b9-4411-b921-6dbdd5a3197f</entry>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk.config">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:0d:99:51"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <target dev="tap7417a545-1c"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/console.log" append="off"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:51:18 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:51:18 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.545 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Preparing to wait for external event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.545 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.545 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.545 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.546 238945 DEBUG nova.virt.libvirt.vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-2',id=52,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_
user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:11Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=4316dbd4-e3b9-4411-b921-6dbdd5a3197f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.546 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.547 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.547 238945 DEBUG os_vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.548 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.548 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.548 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.554 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.554 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7417a545-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.555 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7417a545-1c, col_values=(('external_ids', {'iface-id': '7417a545-1c1e-4477-b4ff-72b924a65f11', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:99:51', 'vm-uuid': '4316dbd4-e3b9-4411-b921-6dbdd5a3197f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:18 np0005597378 NetworkManager[48904]: <info>  [1769521878.5573] manager: (tap7417a545-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.559 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.561 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.563 238945 INFO os_vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c')#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.590 238945 DEBUG nova.network.neutron [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Updated VIF entry in instance network info cache for port 7417a545-1c1e-4477-b4ff-72b924a65f11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.590 238945 DEBUG nova.network.neutron [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Updating instance_info_cache with network_info: [{"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.607 238945 DEBUG oslo_concurrency.lockutils [req-c028d03f-b5bc-4538-a168-70225b4bdc5f req-cf39be60-02c6-4e66-902e-441134a563de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-4316dbd4-e3b9-4411-b921-6dbdd5a3197f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446940574' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.633 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.635 238945 DEBUG nova.virt.libvirt.vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-1',id=51,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name=
'tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:10Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=6696d934-5b11-43a6-828d-b968bbf1ba9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.635 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.635 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.636 238945 DEBUG nova.objects.instance [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'pci_devices' on Instance uuid 6696d934-5b11-43a6-828d-b968bbf1ba9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.640 238945 DEBUG nova.network.neutron [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Updated VIF entry in instance network info cache for port 15a02d2b-a26e-4680-91c7-6294785d6e82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.640 238945 DEBUG nova.network.neutron [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Updating instance_info_cache with network_info: [{"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.651 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <uuid>6696d934-5b11-43a6-828d-b968bbf1ba9d</uuid>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <name>instance-00000033</name>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:name>tempest-ListServersNegativeTestJSON-server-2140282589-1</nova:name>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:51:17</nova:creationTime>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:user uuid="2731f35d38de444e8d3fac25a4164453">tempest-ListServersNegativeTestJSON-2145054704-project-member</nova:user>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:project uuid="14aa89c69a294999aab63771025b995a">tempest-ListServersNegativeTestJSON-2145054704</nova:project>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <nova:port uuid="15a02d2b-a26e-4680-91c7-6294785d6e82">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <entry name="serial">6696d934-5b11-43a6-828d-b968bbf1ba9d</entry>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <entry name="uuid">6696d934-5b11-43a6-828d-b968bbf1ba9d</entry>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6696d934-5b11-43a6-828d-b968bbf1ba9d_disk">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6696d934-5b11-43a6-828d-b968bbf1ba9d_disk.config">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:4b:a9:8b"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <target dev="tap15a02d2b-a2"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/console.log" append="off"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:51:18 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:51:18 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:51:18 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:51:18 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.651 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Preparing to wait for external event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.652 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.652 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.652 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.653 238945 DEBUG nova.virt.libvirt.vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-1',id=51,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_
user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:10Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=6696d934-5b11-43a6-828d-b968bbf1ba9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.653 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.653 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.655 238945 DEBUG os_vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.656 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.656 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.657 238945 DEBUG oslo_concurrency.lockutils [req-0a437184-657e-455e-83ba-ce404fcab1c7 req-75127bf2-bf02-43c3-93da-7d50c82a9217 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6696d934-5b11-43a6-828d-b968bbf1ba9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.659 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.659 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15a02d2b-a2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.659 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15a02d2b-a2, col_values=(('external_ids', {'iface-id': '15a02d2b-a26e-4680-91c7-6294785d6e82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:a9:8b', 'vm-uuid': '6696d934-5b11-43a6-828d-b968bbf1ba9d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.661 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:18 np0005597378 NetworkManager[48904]: <info>  [1769521878.6620] manager: (tap15a02d2b-a2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.670 238945 INFO os_vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2')#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.829 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.829 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.829 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No VIF found with MAC fa:16:3e:0d:99:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.830 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Using config drive#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.897 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.938 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.938 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.938 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No VIF found with MAC fa:16:3e:4b:a9:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.939 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Using config drive#033[00m
Jan 27 08:51:18 np0005597378 nova_compute[238941]: 2026-01-27 13:51:18.993 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.225 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Creating config drive at /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/disk.config#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.231 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb2xr578r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.270 238945 INFO nova.virt.libvirt.driver [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deleting instance files /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795_del#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.271 238945 INFO nova.virt.libvirt.driver [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deletion of /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795_del complete#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.279 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Creating config drive at /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/disk.config#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.284 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj11rbhbx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.371 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb2xr578r" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.402 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.405 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/disk.config 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.446 238945 INFO nova.scheduler.client.report [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Deleted allocations for instance e053f779-294f-4782-bb33-a14e40753795#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.450 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj11rbhbx" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.472 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.475 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/disk.config 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.513 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.513 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.611 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Successfully updated port: 89a5b6ba-141b-45b8-b1ea-fc2a60970931 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.626 238945 DEBUG nova.network.neutron [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updating instance_info_cache with network_info: [{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.628 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "refresh_cache-7e8705e9-4e86-44aa-b532-55fcccac542c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.628 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquired lock "refresh_cache-7e8705e9-4e86-44aa-b532-55fcccac542c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.628 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.649 238945 DEBUG oslo_concurrency.lockutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Releasing lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.651 238945 DEBUG nova.compute.manager [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.665 238945 DEBUG oslo_concurrency.processutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:19 np0005597378 podman[286820]: 2026-01-27 13:51:19.779491817 +0000 UTC m=+0.111486186 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:51:19 np0005597378 kernel: tapbf0b7102-1d (unregistering): left promiscuous mode
Jan 27 08:51:19 np0005597378 NetworkManager[48904]: <info>  [1769521879.8989] device (tapbf0b7102-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:51:19 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:19Z|00426|binding|INFO|Releasing lport bf0b7102-1d3f-448b-912f-96a2c136df6b from this chassis (sb_readonly=0)
Jan 27 08:51:19 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:19Z|00427|binding|INFO|Setting lport bf0b7102-1d3f-448b-912f-96a2c136df6b down in Southbound
Jan 27 08:51:19 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:19Z|00428|binding|INFO|Removing iface tapbf0b7102-1d ovn-installed in OVS
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.905 238945 DEBUG nova.compute.manager [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-changed-89a5b6ba-141b-45b8-b1ea-fc2a60970931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.916 238945 DEBUG nova.compute.manager [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Refreshing instance network info cache due to event network-changed-89a5b6ba-141b-45b8-b1ea-fc2a60970931. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.917 238945 DEBUG oslo_concurrency.lockutils [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-7e8705e9-4e86-44aa-b532-55fcccac542c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.917 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.921 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:51:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:19.922 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:a7:77 10.100.0.5'], port_security=['fa:16:3e:c7:a7:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f433aa34-c04e-4ae6-8fd3-0999a41789fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7fc23a96b5e44bf687aafd92e4199313', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2aa93dd2-b6fa-4307-bde8-658361fd357a d69f7bb8-0f27-4330-919f-a99b9bc92557', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24d97abc-1098-48ce-8d9c-90139c3050c9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=bf0b7102-1d3f-448b-912f-96a2c136df6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:19.923 154802 INFO neutron.agent.ovn.metadata.agent [-] Port bf0b7102-1d3f-448b-912f-96a2c136df6b in datapath 6fa17e2f-4576-4e68-b7d9-6d78705f8a05 unbound from our chassis#033[00m
Jan 27 08:51:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:19.930 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fa17e2f-4576-4e68-b7d9-6d78705f8a05#033[00m
Jan 27 08:51:19 np0005597378 nova_compute[238941]: 2026-01-27 13:51:19.932 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:19.949 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d44eb2f2-4320-4198-9eaa-f7f95912db27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:19 np0005597378 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000032.scope: Deactivated successfully.
Jan 27 08:51:19 np0005597378 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000032.scope: Consumed 10.799s CPU time.
Jan 27 08:51:19 np0005597378 systemd-machined[207425]: Machine qemu-56-instance-00000032 terminated.
Jan 27 08:51:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:19.983 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bee100cc-275e-4864-9aa5-89bb0263ef3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:19.987 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[12fdccd5-ce67-49be-9d44-a5da24e11674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.017 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f28c6893-1ce6-48a9-ba5b-3023923ebfe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.040 238945 INFO nova.virt.libvirt.driver [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance destroyed successfully.#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.041 238945 DEBUG nova.objects.instance [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'resources' on Instance uuid f433aa34-c04e-4ae6-8fd3-0999a41789fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.044 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c4fc9112-4bb9-47cd-87d8-91ebd4f27831]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa17e2f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:db:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447280, 'reachable_time': 29382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286897, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.056 238945 DEBUG nova.virt.libvirt.vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-89666614',display_name='tempest-SecurityGroupsTestJSON-server-89666614',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-89666614',id=50,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-0l1r18b1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:19Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=f433aa34-c04e-4ae6-8fd3-0999a41789fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.057 238945 DEBUG nova.network.os_vif_util [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.058 238945 DEBUG nova.network.os_vif_util [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.058 238945 DEBUG os_vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.061 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf0b7102-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.062 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.065 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.062 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb8e0a9-ab45-4686-9ce5-0715c86fee36]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447291, 'tstamp': 447291}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286902, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447294, 'tstamp': 447294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286902, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.067 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa17e2f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.071 238945 INFO os_vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d')#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.077 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fa17e2f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.077 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.077 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.078 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fa17e2f-40, col_values=(('external_ids', {'iface-id': '023f53bd-5452-48b1-a708-41a1d13bdb08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.078 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.081 238945 DEBUG nova.virt.libvirt.driver [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Start _get_guest_xml network_info=[{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.086 238945 DEBUG nova.compute.manager [req-782a069d-3034-40e2-a47a-bcd474164e81 req-f47b5c1d-e7a3-4eaa-9255-3d2d39795578 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-unplugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.087 238945 DEBUG oslo_concurrency.lockutils [req-782a069d-3034-40e2-a47a-bcd474164e81 req-f47b5c1d-e7a3-4eaa-9255-3d2d39795578 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.087 238945 DEBUG oslo_concurrency.lockutils [req-782a069d-3034-40e2-a47a-bcd474164e81 req-f47b5c1d-e7a3-4eaa-9255-3d2d39795578 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.087 238945 DEBUG oslo_concurrency.lockutils [req-782a069d-3034-40e2-a47a-bcd474164e81 req-f47b5c1d-e7a3-4eaa-9255-3d2d39795578 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.088 238945 DEBUG nova.compute.manager [req-782a069d-3034-40e2-a47a-bcd474164e81 req-f47b5c1d-e7a3-4eaa-9255-3d2d39795578 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-unplugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.088 238945 WARNING nova.compute.manager [req-782a069d-3034-40e2-a47a-bcd474164e81 req-f47b5c1d-e7a3-4eaa-9255-3d2d39795578 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received unexpected event network-vif-unplugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.090 238945 WARNING nova.virt.libvirt.driver [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.096 238945 DEBUG nova.virt.libvirt.host [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.097 238945 DEBUG nova.virt.libvirt.host [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.104 238945 DEBUG nova.virt.libvirt.host [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.105 238945 DEBUG nova.virt.libvirt.host [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.105 238945 DEBUG nova.virt.libvirt.driver [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.105 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.106 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.106 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.107 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.107 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.107 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.108 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.108 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.109 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.109 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.109 238945 DEBUG nova.virt.hardware [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.110 238945 DEBUG nova.objects.instance [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f433aa34-c04e-4ae6-8fd3-0999a41789fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.128 238945 DEBUG oslo_concurrency.processutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.188 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Successfully created port: d7c86f5b-f6e4-4637-9ff2-1d6007449737 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:51:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1879606346' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.233 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/disk.config 4316dbd4-e3b9-4411-b921-6dbdd5a3197f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.827s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.234 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Deleting local config drive /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f/disk.config because it was imported into RBD.#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.253 238945 DEBUG oslo_concurrency.processutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.262 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/disk.config 6696d934-5b11-43a6-828d-b968bbf1ba9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.263 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Deleting local config drive /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d/disk.config because it was imported into RBD.#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.272 238945 DEBUG nova.compute.provider_tree [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:20 np0005597378 NetworkManager[48904]: <info>  [1769521880.2868] manager: (tap7417a545-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Jan 27 08:51:20 np0005597378 systemd-udevd[286878]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:51:20 np0005597378 kernel: tap7417a545-1c: entered promiscuous mode
Jan 27 08:51:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:20Z|00429|binding|INFO|Claiming lport 7417a545-1c1e-4477-b4ff-72b924a65f11 for this chassis.
Jan 27 08:51:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:20Z|00430|binding|INFO|7417a545-1c1e-4477-b4ff-72b924a65f11: Claiming fa:16:3e:0d:99:51 10.100.0.11
Jan 27 08:51:20 np0005597378 NetworkManager[48904]: <info>  [1769521880.3042] device (tap7417a545-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.299 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:99:51 10.100.0.11'], port_security=['fa:16:3e:0d:99:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4316dbd4-e3b9-4411-b921-6dbdd5a3197f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc0a286-57b7-4099-8601-e0f075cad96e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14aa89c69a294999aab63771025b995a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38299218-872e-42a3-bc48-5b780b8d4828', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3bdd11e-ecae-4f85-a5c8-f91378f5b71f, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=7417a545-1c1e-4477-b4ff-72b924a65f11) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:20 np0005597378 NetworkManager[48904]: <info>  [1769521880.3051] device (tap7417a545-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.307 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.300 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 7417a545-1c1e-4477-b4ff-72b924a65f11 in datapath cfc0a286-57b7-4099-8601-e0f075cad96e bound to our chassis#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.302 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfc0a286-57b7-4099-8601-e0f075cad96e#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.313 238945 DEBUG nova.scheduler.client.report [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.320 238945 DEBUG nova.network.neutron [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updated VIF entry in instance network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.320 238945 DEBUG nova.network.neutron [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[18d72971-7ba9-4e56-8b97-6dea7e1db834]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.321 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcfc0a286-51 in ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.323 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcfc0a286-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.323 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[02ef5f01-f62d-4f6f-a915-e9fc183973e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.324 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01ae9ce5-e882-4c85-a508-f63633c8384a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:20Z|00431|binding|INFO|Setting lport 7417a545-1c1e-4477-b4ff-72b924a65f11 ovn-installed in OVS
Jan 27 08:51:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:20Z|00432|binding|INFO|Setting lport 7417a545-1c1e-4477-b4ff-72b924a65f11 up in Southbound
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.326 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.337 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.336 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[ca07d523-66b6-4d0c-9062-5d6fa430afaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 NetworkManager[48904]: <info>  [1769521880.3395] manager: (tap15a02d2b-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/192)
Jan 27 08:51:20 np0005597378 kernel: tap15a02d2b-a2: entered promiscuous mode
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.343 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 557 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 9.7 MiB/s wr, 308 op/s
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.348 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 NetworkManager[48904]: <info>  [1769521880.3522] device (tap15a02d2b-a2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:51:20 np0005597378 NetworkManager[48904]: <info>  [1769521880.3528] device (tap15a02d2b-a2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.353 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:20Z|00433|binding|INFO|Claiming lport 15a02d2b-a26e-4680-91c7-6294785d6e82 for this chassis.
Jan 27 08:51:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:20Z|00434|binding|INFO|15a02d2b-a26e-4680-91c7-6294785d6e82: Claiming fa:16:3e:4b:a9:8b 10.100.0.8
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.356 238945 DEBUG oslo_concurrency.lockutils [req-41312e43-513d-483f-95e9-d677fcd07e84 req-cf2406d6-a6aa-4e83-9f60-90f30579cc1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:20 np0005597378 systemd-machined[207425]: New machine qemu-57-instance-00000034.
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.365 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:a9:8b 10.100.0.8'], port_security=['fa:16:3e:4b:a9:8b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6696d934-5b11-43a6-828d-b968bbf1ba9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc0a286-57b7-4099-8601-e0f075cad96e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14aa89c69a294999aab63771025b995a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38299218-872e-42a3-bc48-5b780b8d4828', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3bdd11e-ecae-4f85-a5c8-f91378f5b71f, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=15a02d2b-a26e-4680-91c7-6294785d6e82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.367 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e27c978a-a0a2-42dd-9a36-78125a3c4f55]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:20Z|00435|binding|INFO|Setting lport 15a02d2b-a26e-4680-91c7-6294785d6e82 ovn-installed in OVS
Jan 27 08:51:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:20Z|00436|binding|INFO|Setting lport 15a02d2b-a26e-4680-91c7-6294785d6e82 up in Southbound
Jan 27 08:51:20 np0005597378 systemd[1]: Started Virtual Machine qemu-57-instance-00000034.
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.382 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.387 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 systemd-machined[207425]: New machine qemu-58-instance-00000033.
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.396 238945 DEBUG oslo_concurrency.lockutils [None req-7603f754-463b-45c3-b1e6-4827987029bc 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 16.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.403 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[315f720c-e3a5-4a2d-b557-eb833b5b81fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 systemd[1]: Started Virtual Machine qemu-58-instance-00000033.
Jan 27 08:51:20 np0005597378 NetworkManager[48904]: <info>  [1769521880.4101] manager: (tapcfc0a286-50): new Veth device (/org/freedesktop/NetworkManager/Devices/193)
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.409 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[99999fe8-8b7a-445f-a228-d56bccaaa746]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.451 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0e7c00-32e2-4a9a-afa8-9c79b3151982]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.454 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[746a210c-35f2-4fc2-97f7-8da8c563bd14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 NetworkManager[48904]: <info>  [1769521880.4763] device (tapcfc0a286-50): carrier: link connected
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.481 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d1228a58-45c0-4215-8297-dbc00ac912bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.498 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[18b833b6-1566-48e0-bbdb-46c4111b6bf3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc0a286-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452010, 'reachable_time': 33825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286995, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.517 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f2887420-a4f7-4204-9870-019d9a215a77]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:d225'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452010, 'tstamp': 452010}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286996, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.535 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b19acb9c-517f-4ff0-85d9-32ca46c3c87e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc0a286-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452010, 'reachable_time': 33825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286997, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.566 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[89e59f7c-b457-44c6-a996-e411ff148312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.637 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8458378b-e1d7-4d95-823c-0bf3385ec2bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.639 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc0a286-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.639 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.640 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfc0a286-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:20 np0005597378 NetworkManager[48904]: <info>  [1769521880.6427] manager: (tapcfc0a286-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Jan 27 08:51:20 np0005597378 kernel: tapcfc0a286-50: entered promiscuous mode
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.642 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.646 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfc0a286-50, col_values=(('external_ids', {'iface-id': '7435efea-97d4-42e4-b8e7-2f77985e6cb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:20Z|00437|binding|INFO|Releasing lport 7435efea-97d4-42e4-b8e7-2f77985e6cb4 from this chassis (sb_readonly=0)
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.647 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.663 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cfc0a286-57b7-4099-8601-e0f075cad96e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cfc0a286-57b7-4099-8601-e0f075cad96e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.664 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[12843d12-d414-4d51-a6c7-87f5af1b75da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.665 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-cfc0a286-57b7-4099-8601-e0f075cad96e
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/cfc0a286-57b7-4099-8601-e0f075cad96e.pid.haproxy
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID cfc0a286-57b7-4099-8601-e0f075cad96e
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:51:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:20.667 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'env', 'PROCESS_TAG=haproxy-cfc0a286-57b7-4099-8601-e0f075cad96e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cfc0a286-57b7-4099-8601-e0f075cad96e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:51:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4179657857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.710 238945 DEBUG oslo_concurrency.processutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.744 238945 DEBUG oslo_concurrency.processutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.974 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521880.9737282, 6696d934-5b11-43a6-828d-b968bbf1ba9d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.975 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] VM Started (Lifecycle Event)#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.979 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521865.9785411, e053f779-294f-4782-bb33-a14e40753795 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:20 np0005597378 nova_compute[238941]: 2026-01-27 13:51:20.979 238945 INFO nova.compute.manager [-] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.005 238945 DEBUG nova.compute.manager [None req-2128276b-9dbd-46ac-b224-cc3e201e74dc - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.006 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.014 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521880.9751382, 6696d934-5b11-43a6-828d-b968bbf1ba9d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.015 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.032 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.039 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.065 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.073 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521881.0726242, 4316dbd4-e3b9-4411-b921-6dbdd5a3197f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.073 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] VM Started (Lifecycle Event)#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.098 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.103 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521881.0727692, 4316dbd4-e3b9-4411-b921-6dbdd5a3197f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.104 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.123 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.127 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:51:21 np0005597378 podman[287152]: 2026-01-27 13:51:21.135859354 +0000 UTC m=+0.097021328 container create 3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.146 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:51:21 np0005597378 podman[287152]: 2026-01-27 13:51:21.065542395 +0000 UTC m=+0.026704389 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:51:21 np0005597378 systemd[1]: Started libpod-conmon-3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb.scope.
Jan 27 08:51:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:51:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b190e352736a83f7d17ea65747b1d58325fe8f6470af34fb8178ac46c48bc5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:21 np0005597378 podman[287152]: 2026-01-27 13:51:21.223142518 +0000 UTC m=+0.184304502 container init 3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 27 08:51:21 np0005597378 podman[287166]: 2026-01-27 13:51:21.230276849 +0000 UTC m=+0.056013215 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:51:21 np0005597378 podman[287152]: 2026-01-27 13:51:21.230488285 +0000 UTC m=+0.191650259 container start 3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 08:51:21 np0005597378 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [NOTICE]   (287192) : New worker (287194) forked
Jan 27 08:51:21 np0005597378 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [NOTICE]   (287192) : Loading success.
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.300 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 15a02d2b-a26e-4680-91c7-6294785d6e82 in datapath cfc0a286-57b7-4099-8601-e0f075cad96e unbound from our chassis#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.303 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfc0a286-57b7-4099-8601-e0f075cad96e#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.324 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[39fd481b-954d-45b8-9e23-f159bb5c1576]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562914691' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.363 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c551ab-60ba-4e8a-92bd-8a251453e565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.370 238945 DEBUG oslo_concurrency.processutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.370 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[52dbfb69-1425-4e75-bf59-699c5e67f367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.371 238945 DEBUG nova.virt.libvirt.vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-89666614',display_name='tempest-SecurityGroupsTestJSON-server-89666614',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-89666614',id=50,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-0l1r18b1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:19Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=f433aa34-c04e-4ae6-8fd3-0999a41789fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.371 238945 DEBUG nova.network.os_vif_util [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.372 238945 DEBUG nova.network.os_vif_util [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.373 238945 DEBUG nova.objects.instance [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'pci_devices' on Instance uuid f433aa34-c04e-4ae6-8fd3-0999a41789fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.393 238945 DEBUG nova.virt.libvirt.driver [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  <uuid>f433aa34-c04e-4ae6-8fd3-0999a41789fe</uuid>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  <name>instance-00000032</name>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <nova:name>tempest-SecurityGroupsTestJSON-server-89666614</nova:name>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:51:20</nova:creationTime>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:        <nova:user uuid="dc97508eec004685b1c36a85261430bd">tempest-SecurityGroupsTestJSON-915122805-project-member</nova:user>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:        <nova:project uuid="7fc23a96b5e44bf687aafd92e4199313">tempest-SecurityGroupsTestJSON-915122805</nova:project>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:        <nova:port uuid="bf0b7102-1d3f-448b-912f-96a2c136df6b">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <entry name="serial">f433aa34-c04e-4ae6-8fd3-0999a41789fe</entry>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <entry name="uuid">f433aa34-c04e-4ae6-8fd3-0999a41789fe</entry>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/f433aa34-c04e-4ae6-8fd3-0999a41789fe_disk.config">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:c7:a7:77"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <target dev="tapbf0b7102-1d"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe/console.log" append="off"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <input type="keyboard" bus="usb"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:51:21 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:51:21 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:51:21 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:51:21 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.394 238945 DEBUG nova.virt.libvirt.driver [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.394 238945 DEBUG nova.virt.libvirt.driver [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.395 238945 DEBUG nova.virt.libvirt.vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-89666614',display_name='tempest-SecurityGroupsTestJSON-server-89666614',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-89666614',id=50,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-0l1r18b1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:19Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=f433aa34-c04e-4ae6-8fd3-0999a41789fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.395 238945 DEBUG nova.network.os_vif_util [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.395 238945 DEBUG nova.network.os_vif_util [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.396 238945 DEBUG os_vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.396 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.397 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.397 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.399 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.400 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf0b7102-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.400 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf0b7102-1d, col_values=(('external_ids', {'iface-id': 'bf0b7102-1d3f-448b-912f-96a2c136df6b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:a7:77', 'vm-uuid': 'f433aa34-c04e-4ae6-8fd3-0999a41789fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.402 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:21 np0005597378 NetworkManager[48904]: <info>  [1769521881.4036] manager: (tapbf0b7102-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.405 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.406 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[633425a9-6632-4e24-815f-d107840fc468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.412 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.413 238945 INFO os_vif [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d')#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.429 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a9044216-e51f-4850-83ac-1691f642a338]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc0a286-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452010, 'reachable_time': 33825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287211, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.450 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7b85a4b7-52cc-4da8-8a33-a51d4a2fefcb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452021, 'tstamp': 452021}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287213, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452025, 'tstamp': 452025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287213, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.453 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc0a286-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.459 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfc0a286-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.460 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.460 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfc0a286-50, col_values=(('external_ids', {'iface-id': '7435efea-97d4-42e4-b8e7-2f77985e6cb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.460 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:21 np0005597378 kernel: tapbf0b7102-1d: entered promiscuous mode
Jan 27 08:51:21 np0005597378 NetworkManager[48904]: <info>  [1769521881.4802] manager: (tapbf0b7102-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Jan 27 08:51:21 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:21Z|00438|binding|INFO|Claiming lport bf0b7102-1d3f-448b-912f-96a2c136df6b for this chassis.
Jan 27 08:51:21 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:21Z|00439|binding|INFO|bf0b7102-1d3f-448b-912f-96a2c136df6b: Claiming fa:16:3e:c7:a7:77 10.100.0.5
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:21 np0005597378 systemd-udevd[286980]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.490 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:a7:77 10.100.0.5'], port_security=['fa:16:3e:c7:a7:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f433aa34-c04e-4ae6-8fd3-0999a41789fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7fc23a96b5e44bf687aafd92e4199313', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2aa93dd2-b6fa-4307-bde8-658361fd357a d69f7bb8-0f27-4330-919f-a99b9bc92557', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24d97abc-1098-48ce-8d9c-90139c3050c9, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=bf0b7102-1d3f-448b-912f-96a2c136df6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.491 154802 INFO neutron.agent.ovn.metadata.agent [-] Port bf0b7102-1d3f-448b-912f-96a2c136df6b in datapath 6fa17e2f-4576-4e68-b7d9-6d78705f8a05 bound to our chassis#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.493 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fa17e2f-4576-4e68-b7d9-6d78705f8a05#033[00m
Jan 27 08:51:21 np0005597378 NetworkManager[48904]: <info>  [1769521881.5018] device (tapbf0b7102-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:51:21 np0005597378 NetworkManager[48904]: <info>  [1769521881.5024] device (tapbf0b7102-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:51:21 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:21Z|00440|binding|INFO|Setting lport bf0b7102-1d3f-448b-912f-96a2c136df6b ovn-installed in OVS
Jan 27 08:51:21 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:21Z|00441|binding|INFO|Setting lport bf0b7102-1d3f-448b-912f-96a2c136df6b up in Southbound
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.508 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.514 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[762ed693-e0f8-4e65-9350-f5b472b0b7fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:21 np0005597378 systemd-machined[207425]: New machine qemu-59-instance-00000032.
Jan 27 08:51:21 np0005597378 systemd[1]: Started Virtual Machine qemu-59-instance-00000032.
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.542 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b54f4882-a126-4893-97f2-e8af2f5e893a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.545 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d958985c-332d-41c4-864d-a60dc82574df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.571 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4aac4c-e852-4899-b526-e07ecdf6a1ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.588 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60f19faa-eca5-4c4c-b099-2d82af5b3761]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa17e2f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:db:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447280, 'reachable_time': 29382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287238, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.601 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d36d7f5-3e51-4f84-912f-a1a53ce35798]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447291, 'tstamp': 447291}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287239, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447294, 'tstamp': 447294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287239, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.602 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa17e2f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.604 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.605 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.605 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fa17e2f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.605 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.606 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fa17e2f-40, col_values=(('external_ids', {'iface-id': '023f53bd-5452-48b1-a708-41a1d13bdb08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:21.606 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.931 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:21 np0005597378 nova_compute[238941]: 2026-01-27 13:51:21.991 238945 DEBUG nova.network.neutron [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Updating instance_info_cache with network_info: [{"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.012 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Releasing lock "refresh_cache-7e8705e9-4e86-44aa-b532-55fcccac542c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.012 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Instance network_info: |[{"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.013 238945 DEBUG oslo_concurrency.lockutils [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-7e8705e9-4e86-44aa-b532-55fcccac542c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.013 238945 DEBUG nova.network.neutron [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Refreshing network info cache for port 89a5b6ba-141b-45b8-b1ea-fc2a60970931 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.016 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Start _get_guest_xml network_info=[{"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.023 238945 WARNING nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.029 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.030 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.034 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.035 238945 DEBUG nova.virt.libvirt.host [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.035 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.035 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.035 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.036 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.036 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.036 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.036 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.036 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.036 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.037 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.037 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.037 238945 DEBUG nova.virt.hardware [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.040 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.264 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for f433aa34-c04e-4ae6-8fd3-0999a41789fe due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.265 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521882.2638676, f433aa34-c04e-4ae6-8fd3-0999a41789fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.265 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.267 238945 DEBUG nova.compute.manager [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.271 238945 INFO nova.virt.libvirt.driver [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance rebooted successfully.#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.272 238945 DEBUG nova.compute.manager [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.284 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.291 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.316 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.316 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521882.2640152, f433aa34-c04e-4ae6-8fd3-0999a41789fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.316 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] VM Started (Lifecycle Event)#033[00m
Jan 27 08:51:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 557 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 9.7 MiB/s wr, 308 op/s
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.352 238945 DEBUG oslo_concurrency.lockutils [None req-abec1527-84c1-4d71-99c2-8101b5e27741 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.379 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.385 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.437 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Successfully created port: a7f80eaf-94c9-4184-9984-32cc6a6db6e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:51:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417957429' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.658 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.685 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:22 np0005597378 nova_compute[238941]: 2026-01-27 13:51:22.692 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.043 238945 DEBUG nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.044 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.045 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.045 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.045 238945 DEBUG nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.046 238945 WARNING nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received unexpected event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with vm_state active and task_state None.#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.046 238945 DEBUG nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.046 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.046 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.047 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.047 238945 DEBUG nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Processing event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.047 238945 DEBUG nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.048 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.048 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.048 238945 DEBUG oslo_concurrency.lockutils [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.049 238945 DEBUG nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] No waiting events found dispatching network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.049 238945 WARNING nova.compute.manager [req-325c8055-4a72-49b5-94f1-ccdcabc2e370 req-aa252e96-c018-4a2a-a11d-c2eb81d3595d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received unexpected event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.050 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.054 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521883.0539896, 4316dbd4-e3b9-4411-b921-6dbdd5a3197f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.054 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.057 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.060 238945 INFO nova.virt.libvirt.driver [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Instance spawned successfully.#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.060 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.132 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.141 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.147 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.148 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.149 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.150 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.150 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.151 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.180 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.236 238945 INFO nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Took 11.24 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.236 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/980673521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.315 238945 INFO nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Took 13.97 seconds to build instance.#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.348 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.350 238945 DEBUG nova.virt.libvirt.vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-3',id=53,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name=
'tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:13Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=7e8705e9-4e86-44aa-b532-55fcccac542c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.351 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.352 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.354 238945 DEBUG nova.objects.instance [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e8705e9-4e86-44aa-b532-55fcccac542c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.357 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.377 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  <uuid>7e8705e9-4e86-44aa-b532-55fcccac542c</uuid>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  <name>instance-00000035</name>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <nova:name>tempest-ListServersNegativeTestJSON-server-2140282589-3</nova:name>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:51:22</nova:creationTime>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:        <nova:user uuid="2731f35d38de444e8d3fac25a4164453">tempest-ListServersNegativeTestJSON-2145054704-project-member</nova:user>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:        <nova:project uuid="14aa89c69a294999aab63771025b995a">tempest-ListServersNegativeTestJSON-2145054704</nova:project>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:        <nova:port uuid="89a5b6ba-141b-45b8-b1ea-fc2a60970931">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <entry name="serial">7e8705e9-4e86-44aa-b532-55fcccac542c</entry>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <entry name="uuid">7e8705e9-4e86-44aa-b532-55fcccac542c</entry>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/7e8705e9-4e86-44aa-b532-55fcccac542c_disk">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/7e8705e9-4e86-44aa-b532-55fcccac542c_disk.config">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:d6:d7:e7"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <target dev="tap89a5b6ba-14"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/console.log" append="off"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:51:23 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:51:23 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:51:23 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:51:23 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.378 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Preparing to wait for external event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.378 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.378 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.378 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.379 238945 DEBUG nova.virt.libvirt.vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-3',id=53,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_
user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:13Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=7e8705e9-4e86-44aa-b532-55fcccac542c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.379 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.380 238945 DEBUG nova.network.os_vif_util [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.381 238945 DEBUG os_vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.381 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.382 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.382 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.386 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89a5b6ba-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.386 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap89a5b6ba-14, col_values=(('external_ids', {'iface-id': '89a5b6ba-141b-45b8-b1ea-fc2a60970931', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:d7:e7', 'vm-uuid': '7e8705e9-4e86-44aa-b532-55fcccac542c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:23 np0005597378 NetworkManager[48904]: <info>  [1769521883.3895] manager: (tap89a5b6ba-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.392 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.397 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.399 238945 INFO os_vif [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14')#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.563 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.564 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.564 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] No VIF found with MAC fa:16:3e:d6:d7:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.564 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Using config drive#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.589 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.653 238945 DEBUG nova.network.neutron [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Updated VIF entry in instance network info cache for port 89a5b6ba-141b-45b8-b1ea-fc2a60970931. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.653 238945 DEBUG nova.network.neutron [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Updating instance_info_cache with network_info: [{"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.656 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Successfully updated port: a00dfa6b-3d70-4dbd-b9c8-4817560c3488 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.688 238945 DEBUG oslo_concurrency.lockutils [req-6a16f6d1-9367-41b6-a769-5b4f51ced555 req-34e93d68-bd4d-4409-b443-ed2c94508776 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-7e8705e9-4e86-44aa-b532-55fcccac542c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.929 238945 DEBUG nova.compute.manager [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-changed-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.930 238945 DEBUG nova.compute.manager [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Refreshing instance network info cache due to event network-changed-a00dfa6b-3d70-4dbd-b9c8-4817560c3488. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.930 238945 DEBUG oslo_concurrency.lockutils [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.930 238945 DEBUG oslo_concurrency.lockutils [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:23 np0005597378 nova_compute[238941]: 2026-01-27 13:51:23.931 238945 DEBUG nova.network.neutron [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Refreshing network info cache for port a00dfa6b-3d70-4dbd-b9c8-4817560c3488 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:51:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 557 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.7 MiB/s wr, 211 op/s
Jan 27 08:51:24 np0005597378 nova_compute[238941]: 2026-01-27 13:51:24.880 238945 DEBUG nova.network.neutron [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:51:24 np0005597378 nova_compute[238941]: 2026-01-27 13:51:24.926 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Creating config drive at /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/disk.config#033[00m
Jan 27 08:51:24 np0005597378 nova_compute[238941]: 2026-01-27 13:51:24.934 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptpue0lwn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:25 np0005597378 nova_compute[238941]: 2026-01-27 13:51:25.099 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptpue0lwn" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:25 np0005597378 nova_compute[238941]: 2026-01-27 13:51:25.142 238945 DEBUG nova.storage.rbd_utils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] rbd image 7e8705e9-4e86-44aa-b532-55fcccac542c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:25 np0005597378 nova_compute[238941]: 2026-01-27 13:51:25.147 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/disk.config 7e8705e9-4e86-44aa-b532-55fcccac542c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:25 np0005597378 nova_compute[238941]: 2026-01-27 13:51:25.415 238945 DEBUG oslo_concurrency.processutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/disk.config 7e8705e9-4e86-44aa-b532-55fcccac542c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:25 np0005597378 nova_compute[238941]: 2026-01-27 13:51:25.416 238945 INFO nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Deleting local config drive /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c/disk.config because it was imported into RBD.#033[00m
Jan 27 08:51:25 np0005597378 kernel: tap89a5b6ba-14: entered promiscuous mode
Jan 27 08:51:25 np0005597378 NetworkManager[48904]: <info>  [1769521885.4918] manager: (tap89a5b6ba-14): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Jan 27 08:51:25 np0005597378 nova_compute[238941]: 2026-01-27 13:51:25.493 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:25 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:25Z|00442|binding|INFO|Claiming lport 89a5b6ba-141b-45b8-b1ea-fc2a60970931 for this chassis.
Jan 27 08:51:25 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:25Z|00443|binding|INFO|89a5b6ba-141b-45b8-b1ea-fc2a60970931: Claiming fa:16:3e:d6:d7:e7 10.100.0.6
Jan 27 08:51:25 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:25Z|00444|binding|INFO|Setting lport 89a5b6ba-141b-45b8-b1ea-fc2a60970931 ovn-installed in OVS
Jan 27 08:51:25 np0005597378 nova_compute[238941]: 2026-01-27 13:51:25.522 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:25 np0005597378 nova_compute[238941]: 2026-01-27 13:51:25.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:25 np0005597378 systemd-udevd[287419]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:51:25 np0005597378 systemd-machined[207425]: New machine qemu-60-instance-00000035.
Jan 27 08:51:25 np0005597378 systemd[1]: Started Virtual Machine qemu-60-instance-00000035.
Jan 27 08:51:25 np0005597378 NetworkManager[48904]: <info>  [1769521885.5688] device (tap89a5b6ba-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:51:25 np0005597378 NetworkManager[48904]: <info>  [1769521885.5695] device (tap89a5b6ba-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:51:25 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:25Z|00445|binding|INFO|Setting lport 89a5b6ba-141b-45b8-b1ea-fc2a60970931 up in Southbound
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.577 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:d7:e7 10.100.0.6'], port_security=['fa:16:3e:d6:d7:e7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e8705e9-4e86-44aa-b532-55fcccac542c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc0a286-57b7-4099-8601-e0f075cad96e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14aa89c69a294999aab63771025b995a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38299218-872e-42a3-bc48-5b780b8d4828', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3bdd11e-ecae-4f85-a5c8-f91378f5b71f, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=89a5b6ba-141b-45b8-b1ea-fc2a60970931) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.578 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 89a5b6ba-141b-45b8-b1ea-fc2a60970931 in datapath cfc0a286-57b7-4099-8601-e0f075cad96e bound to our chassis#033[00m
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.580 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfc0a286-57b7-4099-8601-e0f075cad96e#033[00m
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.601 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a31f48a-c165-444c-8747-a2f05f9a11eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:25 np0005597378 nova_compute[238941]: 2026-01-27 13:51:25.603 238945 DEBUG nova.network.neutron [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:25 np0005597378 nova_compute[238941]: 2026-01-27 13:51:25.637 238945 DEBUG oslo_concurrency.lockutils [req-4482b5cf-019b-4f5f-8aaa-6dd3cbe14fc9 req-95d0fd69-90d9-4d0c-a169-b2127969795b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.638 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[32b18f3c-a318-498a-be20-ca2c97069cce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.642 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[148ca149-760d-4e67-9f87-a6665c93e306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.672 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e3afbea7-aaf6-40cf-9ac4-648f710fd727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.690 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e544e3e-0aa8-4c2a-8e96-cd9efc07cac7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc0a286-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452010, 'reachable_time': 33825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287433, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.707 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[79fbcc00-3a71-4b7e-8134-47801bcea186]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452021, 'tstamp': 452021}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287434, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452025, 'tstamp': 452025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287434, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.709 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc0a286-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:25 np0005597378 nova_compute[238941]: 2026-01-27 13:51:25.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:25 np0005597378 nova_compute[238941]: 2026-01-27 13:51:25.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.712 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfc0a286-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.713 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.713 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfc0a286-50, col_values=(('external_ids', {'iface-id': '7435efea-97d4-42e4-b8e7-2f77985e6cb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:25.714 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.013 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.014 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.014 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.015 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.015 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Processing event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.016 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.017 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.018 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.019 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.020 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] No waiting events found dispatching network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.021 238945 WARNING nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received unexpected event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.022 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.024 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.024 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.025 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.026 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.027 238945 WARNING nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received unexpected event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with vm_state active and task_state None.#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.028 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.028 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.029 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.030 238945 DEBUG oslo_concurrency.lockutils [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.031 238945 DEBUG nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.032 238945 WARNING nova.compute.manager [req-f5c7b232-d6dc-485f-85ed-d3f2a0285c8b req-296c0a68-9ca4-4897-9694-bb56e3cd2229 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received unexpected event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with vm_state active and task_state None.#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.035 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.038 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521886.0374317, 7e8705e9-4e86-44aa-b532-55fcccac542c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.038 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] VM Started (Lifecycle Event)#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.044 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.049 238945 INFO nova.virt.libvirt.driver [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Instance spawned successfully.#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.049 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.057 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.065 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521886.0408742, 7e8705e9-4e86-44aa-b532-55fcccac542c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.066 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.073 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.074 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.075 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.075 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.076 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.076 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.088 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.092 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.120 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.121 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521886.0439801, 6696d934-5b11-43a6-828d-b968bbf1ba9d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.122 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.283 238945 INFO nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Took 15.83 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.283 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.284 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.294 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.316 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:51:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 557 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 201 op/s
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.355 238945 INFO nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Took 17.09 seconds to build instance.#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.374 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.437 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.438 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.438 238945 INFO nova.compute.manager [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Unshelving#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.530 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.531 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.536 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'pci_requests' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.559 238945 DEBUG nova.compute.manager [req-b954b9d4-dcd8-4989-a65a-ec25ca3b46d8 req-ac7cd19b-da64-4339-af38-c882ecf69e74 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.559 238945 DEBUG oslo_concurrency.lockutils [req-b954b9d4-dcd8-4989-a65a-ec25ca3b46d8 req-ac7cd19b-da64-4339-af38-c882ecf69e74 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.560 238945 DEBUG oslo_concurrency.lockutils [req-b954b9d4-dcd8-4989-a65a-ec25ca3b46d8 req-ac7cd19b-da64-4339-af38-c882ecf69e74 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.560 238945 DEBUG oslo_concurrency.lockutils [req-b954b9d4-dcd8-4989-a65a-ec25ca3b46d8 req-ac7cd19b-da64-4339-af38-c882ecf69e74 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.562 238945 DEBUG nova.compute.manager [req-b954b9d4-dcd8-4989-a65a-ec25ca3b46d8 req-ac7cd19b-da64-4339-af38-c882ecf69e74 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Processing event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.564 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.566 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'numa_topology' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.569 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521886.5692072, 7e8705e9-4e86-44aa-b532-55fcccac542c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.569 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.572 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.575 238945 INFO nova.virt.libvirt.driver [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Instance spawned successfully.#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.576 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.578 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.579 238945 INFO nova.compute.claims [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.605 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.612 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.616 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.616 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.617 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.618 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.618 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.619 238945 DEBUG nova.virt.libvirt.driver [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.655 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.696 238945 INFO nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Took 12.56 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.696 238945 DEBUG nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.727 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Successfully updated port: d7c86f5b-f6e4-4637-9ff2-1d6007449737 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.768 238945 INFO nova.compute.manager [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Took 17.41 seconds to build instance.#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.789 238945 DEBUG oslo_concurrency.lockutils [None req-a12ed97b-5224-4867-874c-c1bb73ed848e 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.914 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:26 np0005597378 nova_compute[238941]: 2026-01-27 13:51:26.969 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.047 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.048 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.049 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.049 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.050 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.051 238945 INFO nova.compute.manager [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Terminating instance#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.053 238945 DEBUG nova.compute.manager [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:51:27 np0005597378 kernel: tapbf0b7102-1d (unregistering): left promiscuous mode
Jan 27 08:51:27 np0005597378 NetworkManager[48904]: <info>  [1769521887.1281] device (tapbf0b7102-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:51:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:27Z|00446|binding|INFO|Releasing lport bf0b7102-1d3f-448b-912f-96a2c136df6b from this chassis (sb_readonly=0)
Jan 27 08:51:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:27Z|00447|binding|INFO|Setting lport bf0b7102-1d3f-448b-912f-96a2c136df6b down in Southbound
Jan 27 08:51:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:27Z|00448|binding|INFO|Removing iface tapbf0b7102-1d ovn-installed in OVS
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.151 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:a7:77 10.100.0.5'], port_security=['fa:16:3e:c7:a7:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f433aa34-c04e-4ae6-8fd3-0999a41789fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7fc23a96b5e44bf687aafd92e4199313', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2aa93dd2-b6fa-4307-bde8-658361fd357a bdbc0303-6f38-42ad-b94e-af8975653381 d69f7bb8-0f27-4330-919f-a99b9bc92557', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24d97abc-1098-48ce-8d9c-90139c3050c9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=bf0b7102-1d3f-448b-912f-96a2c136df6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.154 154802 INFO neutron.agent.ovn.metadata.agent [-] Port bf0b7102-1d3f-448b-912f-96a2c136df6b in datapath 6fa17e2f-4576-4e68-b7d9-6d78705f8a05 unbound from our chassis#033[00m
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.157 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fa17e2f-4576-4e68-b7d9-6d78705f8a05#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.171 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.182 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e273ac57-f442-4277-93d6-6bdeb8b3b14f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:27 np0005597378 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000032.scope: Deactivated successfully.
Jan 27 08:51:27 np0005597378 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000032.scope: Consumed 5.708s CPU time.
Jan 27 08:51:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:51:27 np0005597378 systemd-machined[207425]: Machine qemu-59-instance-00000032 terminated.
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.217 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f4052e4c-23f1-42ea-a6dc-a7c324c52c90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.221 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ce73dac2-6f7e-469d-9015-dcd6c9429d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.257 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6d6af17b-e879-4d81-83fc-dc8da8bc869f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.280 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0074d895-2fe9-429c-9507-10b02e3cf702]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa17e2f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:db:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447280, 'reachable_time': 29382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287506, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.295 238945 INFO nova.virt.libvirt.driver [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance destroyed successfully.#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.296 238945 DEBUG nova.objects.instance [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'resources' on Instance uuid f433aa34-c04e-4ae6-8fd3-0999a41789fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.302 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[23e07e34-9ffa-478e-ab0b-7ab3af7dc182]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447291, 'tstamp': 447291}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287516, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6fa17e2f-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447294, 'tstamp': 447294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287516, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.304 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa17e2f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.306 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.310 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.311 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fa17e2f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.311 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.312 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fa17e2f-40, col_values=(('external_ids', {'iface-id': '023f53bd-5452-48b1-a708-41a1d13bdb08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:27.312 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.315 238945 DEBUG nova.virt.libvirt.vif [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-89666614',display_name='tempest-SecurityGroupsTestJSON-server-89666614',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-89666614',id=50,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-0l1r18b1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:22Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=f433aa34-c04e-4ae6-8fd3-0999a41789fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.316 238945 DEBUG nova.network.os_vif_util [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.316 238945 DEBUG nova.network.os_vif_util [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.317 238945 DEBUG os_vif [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.318 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.318 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf0b7102-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.324 238945 INFO os_vif [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:a7:77,bridge_name='br-int',has_traffic_filtering=True,id=bf0b7102-1d3f-448b-912f-96a2c136df6b,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0b7102-1d')#033[00m
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0032588957715579416 of space, bias 1.0, pg target 0.9776687314673825 quantized to 32 (current 32)
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0017719396259695118 of space, bias 1.0, pg target 0.5315818877908536 quantized to 32 (current 32)
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1677340525951113e-06 of space, bias 4.0, pg target 0.0014012808631141335 quantized to 16 (current 16)
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:51:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.596 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Successfully updated port: a7f80eaf-94c9-4184-9984-32cc6a6db6e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.623 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.624 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquired lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.624 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:51:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2126744248' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.768 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.854s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.777 238945 DEBUG nova.compute.provider_tree [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.797 238945 DEBUG nova.scheduler.client.report [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.818 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.823 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.956 238945 INFO nova.virt.libvirt.driver [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Deleting instance files /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe_del#033[00m
Jan 27 08:51:27 np0005597378 nova_compute[238941]: 2026-01-27 13:51:27.957 238945 INFO nova.virt.libvirt.driver [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Deletion of /var/lib/nova/instances/f433aa34-c04e-4ae6-8fd3-0999a41789fe_del complete#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.007 238945 INFO nova.compute.manager [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.008 238945 DEBUG oslo.service.loopingcall [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.008 238945 DEBUG nova.compute.manager [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.008 238945 DEBUG nova.network.neutron [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.046 238945 INFO nova.network.neutron [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating port ceb7b09e-b635-4570-bcf2-a08115d41365 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.116 238945 DEBUG nova.compute.manager [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-unplugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.117 238945 DEBUG oslo_concurrency.lockutils [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.117 238945 DEBUG oslo_concurrency.lockutils [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.117 238945 DEBUG oslo_concurrency.lockutils [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.117 238945 DEBUG nova.compute.manager [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-unplugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.118 238945 DEBUG nova.compute.manager [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-unplugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.118 238945 DEBUG nova.compute.manager [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.118 238945 DEBUG oslo_concurrency.lockutils [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.118 238945 DEBUG oslo_concurrency.lockutils [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.119 238945 DEBUG oslo_concurrency.lockutils [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.119 238945 DEBUG nova.compute.manager [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] No waiting events found dispatching network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.119 238945 WARNING nova.compute.manager [req-3a359e90-f6f9-4975-a75a-61484eebdec3 req-773356dd-8a58-4e36-85e4-5a8f2af9638b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received unexpected event network-vif-plugged-bf0b7102-1d3f-448b-912f-96a2c136df6b for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:51:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 547 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 256 op/s
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.659 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-changed-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.660 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Refreshing instance network info cache due to event network-changed-bf0b7102-1d3f-448b-912f-96a2c136df6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.660 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.661 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:28 np0005597378 nova_compute[238941]: 2026-01-27 13:51:28.661 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Refreshing network info cache for port bf0b7102-1d3f-448b-912f-96a2c136df6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.323 238945 DEBUG nova.network.neutron [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.353 238945 INFO nova.compute.manager [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Took 1.34 seconds to deallocate network for instance.#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.412 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.413 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.539 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.540 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.540 238945 DEBUG nova.network.neutron [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.633 238945 DEBUG oslo_concurrency.processutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.859 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updated VIF entry in instance network info cache for port bf0b7102-1d3f-448b-912f-96a2c136df6b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.861 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Updating instance_info_cache with network_info: [{"id": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "address": "fa:16:3e:c7:a7:77", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0b7102-1d", "ovs_interfaceid": "bf0b7102-1d3f-448b-912f-96a2c136df6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.889 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-f433aa34-c04e-4ae6-8fd3-0999a41789fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.889 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.890 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.890 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.891 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.891 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] No waiting events found dispatching network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.891 238945 WARNING nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received unexpected event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.892 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-changed-d7c86f5b-f6e4-4637-9ff2-1d6007449737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.892 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Refreshing instance network info cache due to event network-changed-d7c86f5b-f6e4-4637-9ff2-1d6007449737. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.892 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.985 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.985 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.986 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.986 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.987 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.989 238945 INFO nova.compute.manager [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Terminating instance#033[00m
Jan 27 08:51:29 np0005597378 nova_compute[238941]: 2026-01-27 13:51:29.991 238945 DEBUG nova.compute.manager [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:51:30 np0005597378 kernel: tap15a02d2b-a2 (unregistering): left promiscuous mode
Jan 27 08:51:30 np0005597378 NetworkManager[48904]: <info>  [1769521890.0405] device (tap15a02d2b-a2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:51:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:30Z|00449|binding|INFO|Releasing lport 15a02d2b-a26e-4680-91c7-6294785d6e82 from this chassis (sb_readonly=0)
Jan 27 08:51:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:30Z|00450|binding|INFO|Setting lport 15a02d2b-a26e-4680-91c7-6294785d6e82 down in Southbound
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.048 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:30Z|00451|binding|INFO|Removing iface tap15a02d2b-a2 ovn-installed in OVS
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.073 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.096 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:a9:8b 10.100.0.8'], port_security=['fa:16:3e:4b:a9:8b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6696d934-5b11-43a6-828d-b968bbf1ba9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc0a286-57b7-4099-8601-e0f075cad96e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14aa89c69a294999aab63771025b995a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38299218-872e-42a3-bc48-5b780b8d4828', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3bdd11e-ecae-4f85-a5c8-f91378f5b71f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=15a02d2b-a26e-4680-91c7-6294785d6e82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.102 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 15a02d2b-a26e-4680-91c7-6294785d6e82 in datapath cfc0a286-57b7-4099-8601-e0f075cad96e unbound from our chassis#033[00m
Jan 27 08:51:30 np0005597378 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000033.scope: Deactivated successfully.
Jan 27 08:51:30 np0005597378 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000033.scope: Consumed 4.337s CPU time.
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.106 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfc0a286-57b7-4099-8601-e0f075cad96e#033[00m
Jan 27 08:51:30 np0005597378 systemd-machined[207425]: Machine qemu-58-instance-00000033 terminated.
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.133 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58ea753b-aeec-4127-bac1-6139124c8ec9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.167 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[025255ef-8691-45ef-bceb-a70e505c7c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.172 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7241d8-93f6-48bd-b020-204639053163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.209 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0e503dda-c5cb-437b-b42c-09988581af9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.229 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1573ca71-af70-443a-ac28-4e3656d830c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc0a286-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 10, 'rx_bytes': 532, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 10, 'rx_bytes': 532, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452010, 'reachable_time': 33825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287575, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.230 238945 INFO nova.virt.libvirt.driver [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Instance destroyed successfully.#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.231 238945 DEBUG nova.objects.instance [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'resources' on Instance uuid 6696d934-5b11-43a6-828d-b968bbf1ba9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.249 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a99c61-0ae0-4052-bcba-11a9d3f9066a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452021, 'tstamp': 452021}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287581, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452025, 'tstamp': 452025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287581, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.250 238945 DEBUG nova.virt.libvirt.vif [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-1',id=51,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:26Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=6696d934-5b11-43a6-828d-b968bbf1ba9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.251 238945 DEBUG nova.network.os_vif_util [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "15a02d2b-a26e-4680-91c7-6294785d6e82", "address": "fa:16:3e:4b:a9:8b", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a02d2b-a2", "ovs_interfaceid": "15a02d2b-a26e-4680-91c7-6294785d6e82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.252 238945 DEBUG nova.network.os_vif_util [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.252 238945 DEBUG os_vif [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.252 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc0a286-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.254 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.255 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15a02d2b-a2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.260 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfc0a286-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.260 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.261 238945 INFO os_vif [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:a9:8b,bridge_name='br-int',has_traffic_filtering=True,id=15a02d2b-a26e-4680-91c7-6294785d6e82,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a02d2b-a2')#033[00m
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.263 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfc0a286-50, col_values=(('external_ids', {'iface-id': '7435efea-97d4-42e4-b8e7-2f77985e6cb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:30.264 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 511 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.4 MiB/s wr, 353 op/s
Jan 27 08:51:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3162303298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.380 238945 DEBUG oslo_concurrency.processutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.386 238945 DEBUG nova.compute.provider_tree [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.457 238945 DEBUG nova.scheduler.client.report [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.485 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.508 238945 INFO nova.scheduler.client.report [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Deleted allocations for instance f433aa34-c04e-4ae6-8fd3-0999a41789fe#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.579 238945 DEBUG oslo_concurrency.lockutils [None req-aa625717-ba93-4cde-ab83-c1b21a1faca9 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "f433aa34-c04e-4ae6-8fd3-0999a41789fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.882 238945 DEBUG nova.compute.manager [req-642e649e-6a8e-4ee2-b039-dc9c14b7081c req-acbb714c-7919-462f-9650-1841b0e8d7a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-vif-unplugged-15a02d2b-a26e-4680-91c7-6294785d6e82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.882 238945 DEBUG oslo_concurrency.lockutils [req-642e649e-6a8e-4ee2-b039-dc9c14b7081c req-acbb714c-7919-462f-9650-1841b0e8d7a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.883 238945 DEBUG oslo_concurrency.lockutils [req-642e649e-6a8e-4ee2-b039-dc9c14b7081c req-acbb714c-7919-462f-9650-1841b0e8d7a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.883 238945 DEBUG oslo_concurrency.lockutils [req-642e649e-6a8e-4ee2-b039-dc9c14b7081c req-acbb714c-7919-462f-9650-1841b0e8d7a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.883 238945 DEBUG nova.compute.manager [req-642e649e-6a8e-4ee2-b039-dc9c14b7081c req-acbb714c-7919-462f-9650-1841b0e8d7a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] No waiting events found dispatching network-vif-unplugged-15a02d2b-a26e-4680-91c7-6294785d6e82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.884 238945 DEBUG nova.compute.manager [req-642e649e-6a8e-4ee2-b039-dc9c14b7081c req-acbb714c-7919-462f-9650-1841b0e8d7a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-vif-unplugged-15a02d2b-a26e-4680-91c7-6294785d6e82 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.981 238945 DEBUG nova.compute.manager [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Received event network-vif-deleted-bf0b7102-1d3f-448b-912f-96a2c136df6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.981 238945 INFO nova.compute.manager [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Neutron deleted interface bf0b7102-1d3f-448b-912f-96a2c136df6b; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.981 238945 DEBUG nova.network.neutron [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.984 238945 DEBUG nova.compute.manager [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Detach interface failed, port_id=bf0b7102-1d3f-448b-912f-96a2c136df6b, reason: Instance f433aa34-c04e-4ae6-8fd3-0999a41789fe could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.984 238945 DEBUG nova.compute.manager [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.985 238945 DEBUG nova.compute.manager [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing instance network info cache due to event network-changed-ceb7b09e-b635-4570-bcf2-a08115d41365. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:51:30 np0005597378 nova_compute[238941]: 2026-01-27 13:51:30.985 238945 DEBUG oslo_concurrency.lockutils [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.243 238945 DEBUG nova.network.neutron [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.255 238945 INFO nova.virt.libvirt.driver [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Deleting instance files /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d_del#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.256 238945 INFO nova.virt.libvirt.driver [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Deletion of /var/lib/nova/instances/6696d934-5b11-43a6-828d-b968bbf1ba9d_del complete#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.280 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.282 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.282 238945 INFO nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Creating image(s)#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.303 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.307 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'trusted_certs' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.309 238945 DEBUG oslo_concurrency.lockutils [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.309 238945 DEBUG nova.network.neutron [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Refreshing network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.326 238945 INFO nova.compute.manager [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Took 1.34 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.327 238945 DEBUG oslo.service.loopingcall [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.327 238945 DEBUG nova.compute.manager [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.327 238945 DEBUG nova.network.neutron [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.351 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.373 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.377 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "b416907720f2c494ff701725db8a8c045ca56bcf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.378 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "b416907720f2c494ff701725db8a8c045ca56bcf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.562 238945 DEBUG nova.virt.libvirt.imagebackend [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/af1a2f6f-cd22-4a1a-b2d9-576a65db1604/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/af1a2f6f-cd22-4a1a-b2d9-576a65db1604/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.604 238945 DEBUG nova.virt.libvirt.imagebackend [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Selected location: {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/af1a2f6f-cd22-4a1a-b2d9-576a65db1604/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.605 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] cloning images/af1a2f6f-cd22-4a1a-b2d9-576a65db1604@snap to None/e053f779-294f-4782-bb33-a14e40753795_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.898 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "b416907720f2c494ff701725db8a8c045ca56bcf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:31 np0005597378 nova_compute[238941]: 2026-01-27 13:51:31.982 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.038 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'migration_context' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.103 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] flattening vms/e053f779-294f-4782-bb33-a14e40753795_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.164 238945 DEBUG nova.network.neutron [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [{"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.195 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Releasing lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.195 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Instance network_info: |[{"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.196 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.196 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Refreshing network info cache for port d7c86f5b-f6e4-4637-9ff2-1d6007449737 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.201 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Start _get_guest_xml network_info=[{"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.209 238945 WARNING nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.219 238945 DEBUG nova.virt.libvirt.host [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.220 238945 DEBUG nova.virt.libvirt.host [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.223 238945 DEBUG nova.virt.libvirt.host [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.223 238945 DEBUG nova.virt.libvirt.host [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.224 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.224 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.224 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.224 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.224 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.224 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.225 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.225 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.225 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.225 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.225 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.225 238945 DEBUG nova.virt.hardware [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.227 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 511 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 40 KiB/s wr, 318 op/s
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.465 238945 DEBUG nova.network.neutron [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.490 238945 INFO nova.compute.manager [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Took 1.16 seconds to deallocate network for instance.#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.530 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.531 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.688 238945 DEBUG oslo_concurrency.processutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/889167286' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.885 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.906 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:32 np0005597378 nova_compute[238941]: 2026-01-27 13:51:32.909 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.009 238945 DEBUG nova.network.neutron [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updated VIF entry in instance network info cache for port ceb7b09e-b635-4570-bcf2-a08115d41365. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.010 238945 DEBUG nova.network.neutron [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.089 238945 DEBUG oslo_concurrency.lockutils [req-bb063df6-5b26-4571-a51b-ac45eed26078 req-10613897-7913-4d5a-82c0-5d94f05c7513 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e053f779-294f-4782-bb33-a14e40753795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.242 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Image rbd:vms/e053f779-294f-4782-bb33-a14e40753795_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.243 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.243 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Ensure instance console log exists: /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.244 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.244 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.244 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.246 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Start _get_guest_xml network_info=[{"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T13:51:03Z,direct_url=<?>,disk_format='raw',id=af1a2f6f-cd22-4a1a-b2d9-576a65db1604,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1638292425-shelved',owner='89715d52c38241dbb1fdcc016ede5d3c',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T13:51:11Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.259 238945 WARNING nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.266 238945 DEBUG nova.virt.libvirt.host [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.267 238945 DEBUG nova.virt.libvirt.host [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.270 238945 DEBUG nova.virt.libvirt.host [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.270 238945 DEBUG nova.virt.libvirt.host [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.271 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.271 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T13:51:03Z,direct_url=<?>,disk_format='raw',id=af1a2f6f-cd22-4a1a-b2d9-576a65db1604,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1638292425-shelved',owner='89715d52c38241dbb1fdcc016ede5d3c',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T13:51:11Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.271 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.271 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.272 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.272 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.272 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.272 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.272 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.273 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.273 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.273 238945 DEBUG nova.virt.hardware [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.273 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'vcpu_model' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/918019811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.323 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.364 238945 DEBUG oslo_concurrency.processutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.371 238945 DEBUG nova.compute.provider_tree [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.375 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updated VIF entry in instance network info cache for port d7c86f5b-f6e4-4637-9ff2-1d6007449737. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.375 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [{"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.395 238945 DEBUG nova.scheduler.client.report [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.428 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.428 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-changed-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.438 238945 DEBUG nova.compute.manager [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Refreshing instance network info cache due to event network-changed-a7f80eaf-94c9-4184-9984-32cc6a6db6e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.438 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.438 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.439 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Refreshing network info cache for port a7f80eaf-94c9-4184-9984-32cc6a6db6e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.530 238945 DEBUG nova.compute.manager [req-3bf648af-c754-429e-9f77-401b1bc273fb req-0fea0c7d-1680-43a3-aeff-8166c50a2bc5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.531 238945 DEBUG oslo_concurrency.lockutils [req-3bf648af-c754-429e-9f77-401b1bc273fb req-0fea0c7d-1680-43a3-aeff-8166c50a2bc5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.532 238945 DEBUG oslo_concurrency.lockutils [req-3bf648af-c754-429e-9f77-401b1bc273fb req-0fea0c7d-1680-43a3-aeff-8166c50a2bc5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.533 238945 DEBUG oslo_concurrency.lockutils [req-3bf648af-c754-429e-9f77-401b1bc273fb req-0fea0c7d-1680-43a3-aeff-8166c50a2bc5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.533 238945 DEBUG nova.compute.manager [req-3bf648af-c754-429e-9f77-401b1bc273fb req-0fea0c7d-1680-43a3-aeff-8166c50a2bc5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] No waiting events found dispatching network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.534 238945 WARNING nova.compute.manager [req-3bf648af-c754-429e-9f77-401b1bc273fb req-0fea0c7d-1680-43a3-aeff-8166c50a2bc5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received unexpected event network-vif-plugged-15a02d2b-a26e-4680-91c7-6294785d6e82 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.565 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547047224' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.601 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.691s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.610 238945 DEBUG nova.virt.libvirt.vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-43827183
1-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:15Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.611 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.613 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.617 238945 DEBUG nova.virt.libvirt.vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-43827183
1-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:15Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.618 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.619 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.619 238945 DEBUG nova.virt.libvirt.vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-43827183
1-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:15Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.620 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.622 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.624 238945 DEBUG nova.objects.instance [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lazy-loading 'pci_devices' on Instance uuid b17763fd-bf68-45e0-84a4-579e1453d6cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.775 238945 DEBUG nova.compute.manager [req-433b88a1-c645-475c-bd64-b1aff2a47d8d req-8caaf8c1-a441-419c-84d6-8e0391b89bd1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Received event network-vif-deleted-15a02d2b-a26e-4680-91c7-6294785d6e82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.787 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  <uuid>b17763fd-bf68-45e0-84a4-579e1453d6cc</uuid>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  <name>instance-00000036</name>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersTestMultiNic-server-1690429552</nova:name>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:51:32</nova:creationTime>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <nova:user uuid="0ba812648bec43bbbd7489f6c33289cc">tempest-ServersTestMultiNic-438271831-project-member</nova:user>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <nova:project uuid="ad39416b63df4f6194a01f4e91fdda1c">tempest-ServersTestMultiNic-438271831</nova:project>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <nova:port uuid="a00dfa6b-3d70-4dbd-b9c8-4817560c3488">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.84" ipVersion="4"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <nova:port uuid="d7c86f5b-f6e4-4637-9ff2-1d6007449737">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.1.130" ipVersion="4"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <nova:port uuid="a7f80eaf-94c9-4184-9984-32cc6a6db6e3">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.224" ipVersion="4"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <entry name="serial">b17763fd-bf68-45e0-84a4-579e1453d6cc</entry>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <entry name="uuid">b17763fd-bf68-45e0-84a4-579e1453d6cc</entry>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b17763fd-bf68-45e0-84a4-579e1453d6cc_disk">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b17763fd-bf68-45e0-84a4-579e1453d6cc_disk.config">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:17:42:8d"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <target dev="tapa00dfa6b-3d"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:95:bf:d5"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <target dev="tapd7c86f5b-f6"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:e4:95:9c"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <target dev="tapa7f80eaf-94"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/console.log" append="off"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:51:33 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:51:33 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:51:33 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:51:33 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.794 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Preparing to wait for external event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.794 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.794 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.795 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.795 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Preparing to wait for external event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.795 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.796 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.796 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.798 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Preparing to wait for external event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.799 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.799 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.799 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.800 238945 DEBUG nova.virt.libvirt.vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNi
c-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:15Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.802 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.804 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.804 238945 DEBUG os_vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.805 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.806 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.808 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.813 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.814 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa00dfa6b-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.814 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa00dfa6b-3d, col_values=(('external_ids', {'iface-id': 'a00dfa6b-3d70-4dbd-b9c8-4817560c3488', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:42:8d', 'vm-uuid': 'b17763fd-bf68-45e0-84a4-579e1453d6cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:33 np0005597378 NetworkManager[48904]: <info>  [1769521893.8180] manager: (tapa00dfa6b-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.816 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.826 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.827 238945 INFO os_vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d')#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.828 238945 DEBUG nova.virt.libvirt.vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNi
c-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:15Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.828 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.829 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.829 238945 DEBUG os_vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.830 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.830 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.831 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.833 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.833 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7c86f5b-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.833 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7c86f5b-f6, col_values=(('external_ids', {'iface-id': 'd7c86f5b-f6e4-4637-9ff2-1d6007449737', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:bf:d5', 'vm-uuid': 'b17763fd-bf68-45e0-84a4-579e1453d6cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.835 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:33 np0005597378 NetworkManager[48904]: <info>  [1769521893.8367] manager: (tapd7c86f5b-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.839 238945 INFO nova.scheduler.client.report [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Deleted allocations for instance 6696d934-5b11-43a6-828d-b968bbf1ba9d#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.840 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.847 238945 INFO os_vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6')#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.848 238945 DEBUG nova.virt.libvirt.vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNi
c-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:15Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.848 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.849 238945 DEBUG nova.network.os_vif_util [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.849 238945 DEBUG os_vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.849 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.850 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.850 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.852 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.852 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7f80eaf-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.852 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa7f80eaf-94, col_values=(('external_ids', {'iface-id': 'a7f80eaf-94c9-4184-9984-32cc6a6db6e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:95:9c', 'vm-uuid': 'b17763fd-bf68-45e0-84a4-579e1453d6cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:33 np0005597378 NetworkManager[48904]: <info>  [1769521893.8545] manager: (tapa7f80eaf-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.857 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.861 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.862 238945 INFO os_vif [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94')#033[00m
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826125761' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.936 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:33 np0005597378 nova_compute[238941]: 2026-01-27 13:51:33.958 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.000 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.051 238945 DEBUG oslo_concurrency.lockutils [None req-21f5ac4c-4d9b-424a-8e12-801e5d52c3eb 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "6696d934-5b11-43a6-828d-b968bbf1ba9d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:34 np0005597378 podman[288109]: 2026-01-27 13:51:34.234280995 +0000 UTC m=+0.053819127 container create 77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 08:51:34 np0005597378 systemd[1]: Started libpod-conmon-77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397.scope.
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.295 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.296 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.296 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No VIF found with MAC fa:16:3e:17:42:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.296 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No VIF found with MAC fa:16:3e:95:bf:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.297 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No VIF found with MAC fa:16:3e:e4:95:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:51:34 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:51:34 np0005597378 podman[288109]: 2026-01-27 13:51:34.201568496 +0000 UTC m=+0.021106658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.298 238945 INFO nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Using config drive#033[00m
Jan 27 08:51:34 np0005597378 podman[288109]: 2026-01-27 13:51:34.318553688 +0000 UTC m=+0.138091850 container init 77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.322 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:34 np0005597378 podman[288109]: 2026-01-27 13:51:34.325986607 +0000 UTC m=+0.145524739 container start 77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 08:51:34 np0005597378 podman[288109]: 2026-01-27 13:51:34.32979791 +0000 UTC m=+0.149336072 container attach 77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:51:34 np0005597378 systemd[1]: libpod-77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397.scope: Deactivated successfully.
Jan 27 08:51:34 np0005597378 silly_mclaren[288125]: 167 167
Jan 27 08:51:34 np0005597378 conmon[288125]: conmon 77074ebe1756e7d8eb68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397.scope/container/memory.events
Jan 27 08:51:34 np0005597378 podman[288109]: 2026-01-27 13:51:34.334672021 +0000 UTC m=+0.154210173 container died 77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:51:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 533 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 2.2 MiB/s wr, 365 op/s
Jan 27 08:51:34 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e42a42e995fc2d229595ab6eca62153a6a6163f9afd14343d8dd6358ebdd12f4-merged.mount: Deactivated successfully.
Jan 27 08:51:34 np0005597378 podman[288109]: 2026-01-27 13:51:34.38490119 +0000 UTC m=+0.204439322 container remove 77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:51:34 np0005597378 systemd[1]: libpod-conmon-77074ebe1756e7d8eb68499889479cf0a88ba54e2f685d0e44253ae8a7cf2397.scope: Deactivated successfully.
Jan 27 08:51:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:51:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1847068714' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:51:34 np0005597378 podman[288168]: 2026-01-27 13:51:34.6347374 +0000 UTC m=+0.087766059 container create bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.636 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.638 238945 DEBUG nova.virt.libvirt.vif [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:48:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1638292425',display_name='tempest-ServerActionsTestOtherB-server-1638292425',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1638292425',id=42,image_ref='af1a2f6f-cd22-4a1a-b2d9-576a65db1604',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-848214420',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-dk0ibvk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member',shelved_at='2026-01-27T13:51:11.503493',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='af1a2f6f-cd22-4a1a-b2d9-576a65db1604'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=e053f779-294f-4782-bb33-a14e40753795,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.638 238945 DEBUG nova.network.os_vif_util [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.639 238945 DEBUG nova.network.os_vif_util [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.640 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'pci_devices' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:34 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:51:34 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:51:34 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:51:34 np0005597378 podman[288168]: 2026-01-27 13:51:34.570792452 +0000 UTC m=+0.023821121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:51:34 np0005597378 systemd[1]: Started libpod-conmon-bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0.scope.
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.706 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  <uuid>e053f779-294f-4782-bb33-a14e40753795</uuid>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  <name>instance-0000002a</name>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerActionsTestOtherB-server-1638292425</nova:name>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:51:33</nova:creationTime>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:        <nova:user uuid="11a9e491e7f24607aa5d3d710b6607ab">tempest-ServerActionsTestOtherB-1311443694-project-member</nova:user>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:        <nova:project uuid="89715d52c38241dbb1fdcc016ede5d3c">tempest-ServerActionsTestOtherB-1311443694</nova:project>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="af1a2f6f-cd22-4a1a-b2d9-576a65db1604"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:        <nova:port uuid="ceb7b09e-b635-4570-bcf2-a08115d41365">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <entry name="serial">e053f779-294f-4782-bb33-a14e40753795</entry>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <entry name="uuid">e053f779-294f-4782-bb33-a14e40753795</entry>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e053f779-294f-4782-bb33-a14e40753795_disk">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e053f779-294f-4782-bb33-a14e40753795_disk.config">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:ad:be:d8"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <target dev="tapceb7b09e-b6"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/console.log" append="off"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <input type="keyboard" bus="usb"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:51:34 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:51:34 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:51:34 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:51:34 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.708 238945 DEBUG nova.compute.manager [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Preparing to wait for external event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.709 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.709 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.710 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.711 238945 DEBUG nova.virt.libvirt.vif [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:48:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1638292425',display_name='tempest-ServerActionsTestOtherB-server-1638292425',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1638292425',id=42,image_ref='af1a2f6f-cd22-4a1a-b2d9-576a65db1604',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-848214420',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:49:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-dk0ibvk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member',shelved_at='2026-01-27T13:51:11.503493',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='af1a2f6f-cd22-4a1a-b2d9-576a65db1604'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=e053f779-294f-4782-bb33-a14e40753795,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.712 238945 DEBUG nova.network.os_vif_util [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.713 238945 DEBUG nova.network.os_vif_util [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.714 238945 DEBUG os_vif [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.715 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.715 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.716 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.719 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapceb7b09e-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.720 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapceb7b09e-b6, col_values=(('external_ids', {'iface-id': 'ceb7b09e-b635-4570-bcf2-a08115d41365', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:be:d8', 'vm-uuid': 'e053f779-294f-4782-bb33-a14e40753795'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.722 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:34 np0005597378 NetworkManager[48904]: <info>  [1769521894.7237] manager: (tapceb7b09e-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.725 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:34 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:51:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360d0b31ac99e79ffbd98f04839076825dd2eb5569338cc3c028d438040f0cdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360d0b31ac99e79ffbd98f04839076825dd2eb5569338cc3c028d438040f0cdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360d0b31ac99e79ffbd98f04839076825dd2eb5569338cc3c028d438040f0cdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360d0b31ac99e79ffbd98f04839076825dd2eb5569338cc3c028d438040f0cdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360d0b31ac99e79ffbd98f04839076825dd2eb5569338cc3c028d438040f0cdd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.739 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.740 238945 INFO os_vif [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6')#033[00m
Jan 27 08:51:34 np0005597378 podman[288168]: 2026-01-27 13:51:34.811742223 +0000 UTC m=+0.264770892 container init bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:51:34 np0005597378 podman[288168]: 2026-01-27 13:51:34.819169102 +0000 UTC m=+0.272197751 container start bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_pascal, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:51:34 np0005597378 podman[288168]: 2026-01-27 13:51:34.853670269 +0000 UTC m=+0.306698918 container attach bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.944 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.945 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.945 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] No VIF found with MAC fa:16:3e:ad:be:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.946 238945 INFO nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Using config drive#033[00m
Jan 27 08:51:34 np0005597378 nova_compute[238941]: 2026-01-27 13:51:34.980 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.015 238945 INFO nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Creating config drive at /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/disk.config#033[00m
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.023 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpao96csom execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.069 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'ec2_ids' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.131 238945 DEBUG nova.objects.instance [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'keypairs' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.189 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpao96csom" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.226 238945 DEBUG nova.storage.rbd_utils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image b17763fd-bf68-45e0-84a4-579e1453d6cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.233 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/disk.config b17763fd-bf68-45e0-84a4-579e1453d6cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:35 np0005597378 sweet_pascal[288187]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:51:35 np0005597378 sweet_pascal[288187]: --> All data devices are unavailable
Jan 27 08:51:35 np0005597378 podman[288168]: 2026-01-27 13:51:35.308489684 +0000 UTC m=+0.761518333 container died bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 08:51:35 np0005597378 systemd[1]: libpod-bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0.scope: Deactivated successfully.
Jan 27 08:51:35 np0005597378 systemd[1]: var-lib-containers-storage-overlay-360d0b31ac99e79ffbd98f04839076825dd2eb5569338cc3c028d438040f0cdd-merged.mount: Deactivated successfully.
Jan 27 08:51:35 np0005597378 podman[288168]: 2026-01-27 13:51:35.569880214 +0000 UTC m=+1.022908863 container remove bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 27 08:51:35 np0005597378 systemd[1]: libpod-conmon-bd6f8b600ffd4f5b94f3e505fd0c6d4d5b7edd9897c725d3ce47d2bab352a9f0.scope: Deactivated successfully.
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.596 238945 DEBUG oslo_concurrency.processutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/disk.config b17763fd-bf68-45e0-84a4-579e1453d6cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.598 238945 INFO nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Deleting local config drive /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc/disk.config because it was imported into RBD.#033[00m
Jan 27 08:51:35 np0005597378 NetworkManager[48904]: <info>  [1769521895.6557] manager: (tapa00dfa6b-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Jan 27 08:51:35 np0005597378 kernel: tapa00dfa6b-3d: entered promiscuous mode
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:35Z|00452|binding|INFO|Claiming lport a00dfa6b-3d70-4dbd-b9c8-4817560c3488 for this chassis.
Jan 27 08:51:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:35Z|00453|binding|INFO|a00dfa6b-3d70-4dbd-b9c8-4817560c3488: Claiming fa:16:3e:17:42:8d 10.100.0.84
Jan 27 08:51:35 np0005597378 NetworkManager[48904]: <info>  [1769521895.6880] manager: (tapd7c86f5b-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.690 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:42:8d 10.100.0.84'], port_security=['fa:16:3e:17:42:8d 10.100.0.84'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.84/24', 'neutron:device_id': 'b17763fd-bf68-45e0-84a4-579e1453d6cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f5d2f82-3b77-4b4a-b319-c5cb2af5a026, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a00dfa6b-3d70-4dbd-b9c8-4817560c3488) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.691 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a00dfa6b-3d70-4dbd-b9c8-4817560c3488 in datapath 5e5870be-3451-43b4-b92c-dd5af9cc1291 bound to our chassis#033[00m
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.692 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e5870be-3451-43b4-b92c-dd5af9cc1291#033[00m
Jan 27 08:51:35 np0005597378 systemd-udevd[288320]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:51:35 np0005597378 systemd-udevd[288319]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:51:35 np0005597378 NetworkManager[48904]: <info>  [1769521895.7098] manager: (tapa7f80eaf-94): new Tun device (/org/freedesktop/NetworkManager/Devices/205)
Jan 27 08:51:35 np0005597378 NetworkManager[48904]: <info>  [1769521895.7111] device (tapa00dfa6b-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:51:35 np0005597378 NetworkManager[48904]: <info>  [1769521895.7120] device (tapa00dfa6b-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:51:35 np0005597378 systemd-udevd[288330]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.713 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[75e7e7e5-d1a1-4171-a85c-90628f90ad94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.716 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5e5870be-31 in ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.719 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5e5870be-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.719 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ca5020c2-3c79-476f-ba18-ed1236381318]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.721 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1db457d-dc73-40d9-a25f-b385acbc9abd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:35 np0005597378 kernel: tapa7f80eaf-94: entered promiscuous mode
Jan 27 08:51:35 np0005597378 kernel: tapd7c86f5b-f6: entered promiscuous mode
Jan 27 08:51:35 np0005597378 NetworkManager[48904]: <info>  [1769521895.7313] device (tapd7c86f5b-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:51:35 np0005597378 NetworkManager[48904]: <info>  [1769521895.7323] device (tapd7c86f5b-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:51:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:35Z|00454|binding|INFO|Claiming lport d7c86f5b-f6e4-4637-9ff2-1d6007449737 for this chassis.
Jan 27 08:51:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:35Z|00455|binding|INFO|d7c86f5b-f6e4-4637-9ff2-1d6007449737: Claiming fa:16:3e:95:bf:d5 10.100.1.130
Jan 27 08:51:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:35Z|00456|binding|INFO|Claiming lport a7f80eaf-94c9-4184-9984-32cc6a6db6e3 for this chassis.
Jan 27 08:51:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:35Z|00457|binding|INFO|a7f80eaf-94c9-4184-9984-32cc6a6db6e3: Claiming fa:16:3e:e4:95:9c 10.100.0.224
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.737 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[59c009c6-f867-48b2-b8c4-0f69dee11080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.736 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:35 np0005597378 NetworkManager[48904]: <info>  [1769521895.7419] device (tapa7f80eaf-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:51:35 np0005597378 NetworkManager[48904]: <info>  [1769521895.7429] device (tapa7f80eaf-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.746 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.745 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:95:9c 10.100.0.224'], port_security=['fa:16:3e:e4:95:9c 10.100.0.224'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.224/24', 'neutron:device_id': 'b17763fd-bf68-45e0-84a4-579e1453d6cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f5d2f82-3b77-4b4a-b319-c5cb2af5a026, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a7f80eaf-94c9-4184-9984-32cc6a6db6e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.746 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:bf:d5 10.100.1.130'], port_security=['fa:16:3e:95:bf:d5 10.100.1.130'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.130/24', 'neutron:device_id': 'b17763fd-bf68-45e0-84a4-579e1453d6cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2801040-e0d0-43bf-bfb6-870f7e78fec7, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d7c86f5b-f6e4-4637-9ff2-1d6007449737) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:35Z|00458|binding|INFO|Setting lport a00dfa6b-3d70-4dbd-b9c8-4817560c3488 ovn-installed in OVS
Jan 27 08:51:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:35Z|00459|binding|INFO|Setting lport a00dfa6b-3d70-4dbd-b9c8-4817560c3488 up in Southbound
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.753 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:35 np0005597378 systemd-machined[207425]: New machine qemu-61-instance-00000036.
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.766 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[94068d42-1c61-45a6-b9f3-d107f1a5dc10]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:35 np0005597378 systemd[1]: Started Virtual Machine qemu-61-instance-00000036.
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.798 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2289d5-b561-49fc-82bc-3fc95e43be37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:35Z|00460|binding|INFO|Setting lport d7c86f5b-f6e4-4637-9ff2-1d6007449737 ovn-installed in OVS
Jan 27 08:51:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:35Z|00461|binding|INFO|Setting lport d7c86f5b-f6e4-4637-9ff2-1d6007449737 up in Southbound
Jan 27 08:51:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:35Z|00462|binding|INFO|Setting lport a7f80eaf-94c9-4184-9984-32cc6a6db6e3 ovn-installed in OVS
Jan 27 08:51:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:35Z|00463|binding|INFO|Setting lport a7f80eaf-94c9-4184-9984-32cc6a6db6e3 up in Southbound
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.812 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:35 np0005597378 NetworkManager[48904]: <info>  [1769521895.8148] manager: (tap5e5870be-30): new Veth device (/org/freedesktop/NetworkManager/Devices/206)
Jan 27 08:51:35 np0005597378 systemd-udevd[288329]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.814 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dda5588a-5a2f-4490-9b8b-0f18e743ab65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.861 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4b87006b-69fe-40f8-b219-4795b2213965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.865 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[15dd99bf-8e13-4689-9552-bfa4f7c10b9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:35 np0005597378 NetworkManager[48904]: <info>  [1769521895.8938] device (tap5e5870be-30): carrier: link connected
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.900 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[055fb77f-7530-474a-9a04-b401cd0f056a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.921 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updated VIF entry in instance network info cache for port a7f80eaf-94c9-4184-9984-32cc6a6db6e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.919 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0c098637-765a-459b-b824-8147ea4ac514]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e5870be-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:7f:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453551, 'reachable_time': 34659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288392, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.922 238945 DEBUG nova.network.neutron [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [{"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.946 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f98941-f023-4084-86eb-5d42f3b27b63]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:7fbf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453551, 'tstamp': 453551}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288393, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:35 np0005597378 nova_compute[238941]: 2026-01-27 13:51:35.962 238945 DEBUG oslo_concurrency.lockutils [req-abe737b6-1faa-4463-b91d-a044d5e07126 req-4990af92-c8fb-410d-887b-850dc37f4fea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b17763fd-bf68-45e0-84a4-579e1453d6cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:35.966 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6f901942-32e6-4db3-aeae-a04498e6958a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e5870be-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:7f:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453551, 'reachable_time': 34659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288394, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.009 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[233e7c19-4cbd-4125-bdfb-7106ab34be4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.038 238945 INFO nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Creating config drive at /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.052 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp686o8x_5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.091 238945 DEBUG nova.compute.manager [req-2328b682-80c3-4657-801b-d9c3aecded71 req-b2baa115-7a87-4976-8afc-abd0cbbda026 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.091 238945 DEBUG oslo_concurrency.lockutils [req-2328b682-80c3-4657-801b-d9c3aecded71 req-b2baa115-7a87-4976-8afc-abd0cbbda026 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.091 238945 DEBUG oslo_concurrency.lockutils [req-2328b682-80c3-4657-801b-d9c3aecded71 req-b2baa115-7a87-4976-8afc-abd0cbbda026 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.092 238945 DEBUG oslo_concurrency.lockutils [req-2328b682-80c3-4657-801b-d9c3aecded71 req-b2baa115-7a87-4976-8afc-abd0cbbda026 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.092 238945 DEBUG nova.compute.manager [req-2328b682-80c3-4657-801b-d9c3aecded71 req-b2baa115-7a87-4976-8afc-abd0cbbda026 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Processing event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.105 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa926a7-eb71-4816-883c-fa448c05256d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.108 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e5870be-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e5870be-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.111 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:36 np0005597378 NetworkManager[48904]: <info>  [1769521896.1120] manager: (tap5e5870be-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Jan 27 08:51:36 np0005597378 kernel: tap5e5870be-30: entered promiscuous mode
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.119 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e5870be-30, col_values=(('external_ids', {'iface-id': 'cd6b8921-0b49-406f-b95a-637f6648e882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:36Z|00464|binding|INFO|Releasing lport cd6b8921-0b49-406f-b95a-637f6648e882 from this chassis (sb_readonly=0)
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.137 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.143 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.143 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5e5870be-3451-43b4-b92c-dd5af9cc1291.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5e5870be-3451-43b4-b92c-dd5af9cc1291.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.144 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ce5c0a-b472-4f66-8d41-047f0e374830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.145 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-5e5870be-3451-43b4-b92c-dd5af9cc1291
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/5e5870be-3451-43b4-b92c-dd5af9cc1291.pid.haproxy
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 5e5870be-3451-43b4-b92c-dd5af9cc1291
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.145 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'env', 'PROCESS_TAG=haproxy-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5e5870be-3451-43b4-b92c-dd5af9cc1291.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:51:36 np0005597378 podman[288410]: 2026-01-27 13:51:36.162728385 +0000 UTC m=+0.104046765 container create dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.175 238945 DEBUG nova.compute.manager [req-f2da3da2-ca26-443e-b760-5955c898d60d req-29dbc6eb-4607-4ab9-8d7f-3584e094754d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.175 238945 DEBUG oslo_concurrency.lockutils [req-f2da3da2-ca26-443e-b760-5955c898d60d req-29dbc6eb-4607-4ab9-8d7f-3584e094754d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.176 238945 DEBUG oslo_concurrency.lockutils [req-f2da3da2-ca26-443e-b760-5955c898d60d req-29dbc6eb-4607-4ab9-8d7f-3584e094754d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.176 238945 DEBUG oslo_concurrency.lockutils [req-f2da3da2-ca26-443e-b760-5955c898d60d req-29dbc6eb-4607-4ab9-8d7f-3584e094754d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.176 238945 DEBUG nova.compute.manager [req-f2da3da2-ca26-443e-b760-5955c898d60d req-29dbc6eb-4607-4ab9-8d7f-3584e094754d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Processing event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:51:36 np0005597378 podman[288410]: 2026-01-27 13:51:36.085686886 +0000 UTC m=+0.027005266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.196 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp686o8x_5" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.238 238945 DEBUG nova.storage.rbd_utils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] rbd image e053f779-294f-4782-bb33-a14e40753795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:51:36 np0005597378 systemd[1]: Started libpod-conmon-dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc.scope.
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.243 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config e053f779-294f-4782-bb33-a14e40753795_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:51:36 np0005597378 podman[288410]: 2026-01-27 13:51:36.289364736 +0000 UTC m=+0.230683116 container init dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 08:51:36 np0005597378 podman[288410]: 2026-01-27 13:51:36.299726905 +0000 UTC m=+0.241045285 container start dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_maxwell, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:51:36 np0005597378 podman[288410]: 2026-01-27 13:51:36.305806718 +0000 UTC m=+0.247125088 container attach dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 08:51:36 np0005597378 friendly_maxwell[288473]: 167 167
Jan 27 08:51:36 np0005597378 systemd[1]: libpod-dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc.scope: Deactivated successfully.
Jan 27 08:51:36 np0005597378 podman[288410]: 2026-01-27 13:51:36.312099617 +0000 UTC m=+0.253418007 container died dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:51:36 np0005597378 systemd[1]: var-lib-containers-storage-overlay-32102b260559c8f1ab375256962fe8cc188258eb8792bd1b4f6b39a1b5f7f84f-merged.mount: Deactivated successfully.
Jan 27 08:51:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 540 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 3.9 MiB/s wr, 371 op/s
Jan 27 08:51:36 np0005597378 podman[288410]: 2026-01-27 13:51:36.35947529 +0000 UTC m=+0.300793670 container remove dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 08:51:36 np0005597378 systemd[1]: libpod-conmon-dff982b74e9d01dfa4b9bd3a5e4c1859c15c18f3fcd3f85d68a89801e0c879fc.scope: Deactivated successfully.
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.451 238945 DEBUG oslo_concurrency.processutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config e053f779-294f-4782-bb33-a14e40753795_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.452 238945 INFO nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deleting local config drive /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795/disk.config because it was imported into RBD.#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.508 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521896.5073125, b17763fd-bf68-45e0-84a4-579e1453d6cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.509 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] VM Started (Lifecycle Event)#033[00m
Jan 27 08:51:36 np0005597378 kernel: tapceb7b09e-b6: entered promiscuous mode
Jan 27 08:51:36 np0005597378 NetworkManager[48904]: <info>  [1769521896.5289] manager: (tapceb7b09e-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.529 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:36Z|00465|binding|INFO|Claiming lport ceb7b09e-b635-4570-bcf2-a08115d41365 for this chassis.
Jan 27 08:51:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:36Z|00466|binding|INFO|ceb7b09e-b635-4570-bcf2-a08115d41365: Claiming fa:16:3e:ad:be:d8 10.100.0.7
Jan 27 08:51:36 np0005597378 systemd-udevd[288386]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:51:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:36Z|00467|binding|INFO|Setting lport ceb7b09e-b635-4570-bcf2-a08115d41365 ovn-installed in OVS
Jan 27 08:51:36 np0005597378 NetworkManager[48904]: <info>  [1769521896.5551] device (tapceb7b09e-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:51:36 np0005597378 NetworkManager[48904]: <info>  [1769521896.5563] device (tapceb7b09e-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.558 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.561 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:36 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:36Z|00468|binding|INFO|Setting lport ceb7b09e-b635-4570-bcf2-a08115d41365 up in Southbound
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.566 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:be:d8 10.100.0.7'], port_security=['fa:16:3e:ad:be:d8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e053f779-294f-4782-bb33-a14e40753795', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'a0c34526-a874-4960-805d-36c3b59e9c05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ceb7b09e-b635-4570-bcf2-a08115d41365) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.596 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:36 np0005597378 systemd-machined[207425]: New machine qemu-62-instance-0000002a.
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.608 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521896.5080807, b17763fd-bf68-45e0-84a4-579e1453d6cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.609 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:51:36 np0005597378 systemd[1]: Started Virtual Machine qemu-62-instance-0000002a.
Jan 27 08:51:36 np0005597378 podman[288572]: 2026-01-27 13:51:36.640053625 +0000 UTC m=+0.060102635 container create 37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.643 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.654 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:51:36 np0005597378 podman[288573]: 2026-01-27 13:51:36.669363112 +0000 UTC m=+0.080898544 container create 3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 27 08:51:36 np0005597378 systemd[1]: Started libpod-conmon-37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070.scope.
Jan 27 08:51:36 np0005597378 podman[288572]: 2026-01-27 13:51:36.61307564 +0000 UTC m=+0.033124670 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:51:36 np0005597378 podman[288573]: 2026-01-27 13:51:36.633583471 +0000 UTC m=+0.045118923 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:51:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:51:36 np0005597378 systemd[1]: Started libpod-conmon-3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab.scope.
Jan 27 08:51:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc4a4a8015a746dd956f32125462a808c2708de024f83f5714cf7c909a82509/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:36 np0005597378 podman[288572]: 2026-01-27 13:51:36.739824804 +0000 UTC m=+0.159873844 container init 37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 08:51:36 np0005597378 podman[288572]: 2026-01-27 13:51:36.747730137 +0000 UTC m=+0.167779147 container start 37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:51:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:51:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ceb460c8a959130adedaf644160b404555f0d8560a37b49c6ef978afa6b896/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ceb460c8a959130adedaf644160b404555f0d8560a37b49c6ef978afa6b896/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ceb460c8a959130adedaf644160b404555f0d8560a37b49c6ef978afa6b896/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ceb460c8a959130adedaf644160b404555f0d8560a37b49c6ef978afa6b896/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.774 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:51:36 np0005597378 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [NOTICE]   (288616) : New worker (288618) forked
Jan 27 08:51:36 np0005597378 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [NOTICE]   (288616) : Loading success.
Jan 27 08:51:36 np0005597378 podman[288573]: 2026-01-27 13:51:36.783448265 +0000 UTC m=+0.194983727 container init 3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 08:51:36 np0005597378 podman[288573]: 2026-01-27 13:51:36.793531347 +0000 UTC m=+0.205066779 container start 3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:51:36 np0005597378 podman[288573]: 2026-01-27 13:51:36.804788849 +0000 UTC m=+0.216324291 container attach 3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.822 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a7f80eaf-94c9-4184-9984-32cc6a6db6e3 in datapath 5e5870be-3451-43b4-b92c-dd5af9cc1291 unbound from our chassis#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.824 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e5870be-3451-43b4-b92c-dd5af9cc1291#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.843 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c05ee556-c5b3-4a3b-b1d9-1a7228932b57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.877 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[57e2bbeb-1577-40db-a24e-e97c49865f2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.880 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6f6da8-774b-496e-b1c8-aae67ba5590c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.907 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[65c94357-678a-499d-895b-f8675d9491c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.924 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee3e5c2-24a8-4cc3-84ed-357f9698ffa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e5870be-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:7f:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453551, 'reachable_time': 34659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288649, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.937 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.939 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5fff05a4-e71f-47ee-a32b-534402adb517]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap5e5870be-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453567, 'tstamp': 453567}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288653, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e5870be-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453572, 'tstamp': 453572}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288653, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.940 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e5870be-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.943 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:36 np0005597378 nova_compute[238941]: 2026-01-27 13:51:36.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.945 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e5870be-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.945 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.946 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e5870be-30, col_values=(('external_ids', {'iface-id': 'cd6b8921-0b49-406f-b95a-637f6648e882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.946 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.948 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d7c86f5b-f6e4-4637-9ff2-1d6007449737 in datapath 20fa5117-7a98-4fad-80b8-7654f1d826c9 unbound from our chassis#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.950 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20fa5117-7a98-4fad-80b8-7654f1d826c9#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.968 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[36b3e7aa-64ef-4af1-bd8b-93f324534ab9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.969 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20fa5117-71 in ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.971 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20fa5117-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.971 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d058a6bd-59f6-4280-81d2-49e9aa5c766a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.972 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b7dea24a-62ef-4b30-ba49-55bf08306407]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:36.995 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b15c4793-db5c-4a49-ad38-b4d788330371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.033 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c85c5f26-6f0f-442a-b8f7-4cf664d6dbdf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.077 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ea97137d-a137-4cc6-96d3-2303533e13f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.089 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44755dc8-877c-46ab-b52e-7eb68c3290bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:37 np0005597378 NetworkManager[48904]: <info>  [1769521897.0904] manager: (tap20fa5117-70): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Jan 27 08:51:37 np0005597378 nova_compute[238941]: 2026-01-27 13:51:37.136 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521897.1356142, e053f779-294f-4782-bb33-a14e40753795 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:37 np0005597378 nova_compute[238941]: 2026-01-27 13:51:37.137 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Started (Lifecycle Event)#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.141 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[25fc1c16-fd8e-4900-a819-623e9b8dacc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.146 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[feb31915-601b-4124-acea-96e2c9a654ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:37 np0005597378 NetworkManager[48904]: <info>  [1769521897.1715] device (tap20fa5117-70): carrier: link connected
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]: {
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.178 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4aaa322d-a794-4f04-bc50-61816088e415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:    "0": [
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:        {
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "devices": [
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "/dev/loop3"
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            ],
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_name": "ceph_lv0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_size": "21470642176",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "name": "ceph_lv0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "tags": {
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.cluster_name": "ceph",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.crush_device_class": "",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.encrypted": "0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.objectstore": "bluestore",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.osd_id": "0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.type": "block",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.vdo": "0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.with_tpm": "0"
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            },
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "type": "block",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "vg_name": "ceph_vg0"
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:        }
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:    ],
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:    "1": [
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:        {
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "devices": [
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "/dev/loop4"
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            ],
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_name": "ceph_lv1",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_size": "21470642176",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "name": "ceph_lv1",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "tags": {
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.cluster_name": "ceph",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.crush_device_class": "",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.encrypted": "0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.objectstore": "bluestore",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.osd_id": "1",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.type": "block",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.vdo": "0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.with_tpm": "0"
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            },
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "type": "block",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "vg_name": "ceph_vg1"
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:        }
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:    ],
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:    "2": [
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:        {
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "devices": [
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "/dev/loop5"
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            ],
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_name": "ceph_lv2",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_size": "21470642176",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "name": "ceph_lv2",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "tags": {
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.cluster_name": "ceph",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.crush_device_class": "",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.encrypted": "0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.objectstore": "bluestore",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.osd_id": "2",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.type": "block",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.vdo": "0",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:                "ceph.with_tpm": "0"
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            },
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "type": "block",
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:            "vg_name": "ceph_vg2"
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:        }
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]:    ]
Jan 27 08:51:37 np0005597378 hopeful_clarke[288612]: }
Jan 27 08:51:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.206 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7772677e-5f25-4c39-b0e9-73e21fdd061e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20fa5117-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:32:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453679, 'reachable_time': 25955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288692, 'error': None, 'target': 'ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:37 np0005597378 systemd[1]: libpod-3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab.scope: Deactivated successfully.
Jan 27 08:51:37 np0005597378 conmon[288612]: conmon 3ecfa46e8af21ebe1ca9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab.scope/container/memory.events
Jan 27 08:51:37 np0005597378 podman[288573]: 2026-01-27 13:51:37.228509628 +0000 UTC m=+0.640045080 container died 3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.226 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ab875c36-15cb-45e8-bda3-4b4130d65c28]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:326c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453679, 'tstamp': 453679}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288693, 'error': None, 'target': 'ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.250 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b57b4cf6-e0e8-4e7c-96f5-b71357c6a497]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20fa5117-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:32:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453679, 'reachable_time': 25955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288694, 'error': None, 'target': 'ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:37 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d4ceb460c8a959130adedaf644160b404555f0d8560a37b49c6ef978afa6b896-merged.mount: Deactivated successfully.
Jan 27 08:51:37 np0005597378 nova_compute[238941]: 2026-01-27 13:51:37.270 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:37 np0005597378 nova_compute[238941]: 2026-01-27 13:51:37.284 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521897.1361492, e053f779-294f-4782-bb33-a14e40753795 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:37 np0005597378 nova_compute[238941]: 2026-01-27 13:51:37.284 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:51:37 np0005597378 podman[288573]: 2026-01-27 13:51:37.287011059 +0000 UTC m=+0.698546491 container remove 3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.296 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fcfa6666-7e3c-4d25-9bf5-5104ab8be1b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:37 np0005597378 systemd[1]: libpod-conmon-3ecfa46e8af21ebe1ca941511db37acd483a1e8fa3ad40773c03da309c19b6ab.scope: Deactivated successfully.
Jan 27 08:51:37 np0005597378 nova_compute[238941]: 2026-01-27 13:51:37.365 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:37 np0005597378 nova_compute[238941]: 2026-01-27 13:51:37.369 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.383 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[989d7a89-4169-4793-a512-3f332bc8dd3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.385 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20fa5117-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.385 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.385 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20fa5117-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:37 np0005597378 NetworkManager[48904]: <info>  [1769521897.3886] manager: (tap20fa5117-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Jan 27 08:51:37 np0005597378 kernel: tap20fa5117-70: entered promiscuous mode
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.391 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20fa5117-70, col_values=(('external_ids', {'iface-id': '3bb1a69d-8aa3-4e09-b274-0acc4272a41a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:37 np0005597378 nova_compute[238941]: 2026-01-27 13:51:37.390 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:37 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:37Z|00469|binding|INFO|Releasing lport 3bb1a69d-8aa3-4e09-b274-0acc4272a41a from this chassis (sb_readonly=0)
Jan 27 08:51:37 np0005597378 nova_compute[238941]: 2026-01-27 13:51:37.414 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:51:37 np0005597378 nova_compute[238941]: 2026-01-27 13:51:37.414 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.417 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20fa5117-7a98-4fad-80b8-7654f1d826c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20fa5117-7a98-4fad-80b8-7654f1d826c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.418 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[88251774-8cc6-4986-b43d-3e0e3c8e41ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.419 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-20fa5117-7a98-4fad-80b8-7654f1d826c9
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/20fa5117-7a98-4fad-80b8-7654f1d826c9.pid.haproxy
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 20fa5117-7a98-4fad-80b8-7654f1d826c9
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:51:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:37.419 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'env', 'PROCESS_TAG=haproxy-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20fa5117-7a98-4fad-80b8-7654f1d826c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:51:37 np0005597378 podman[288788]: 2026-01-27 13:51:37.814934497 +0000 UTC m=+0.066918198 container create d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 08:51:37 np0005597378 podman[288788]: 2026-01-27 13:51:37.779022803 +0000 UTC m=+0.031006514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:51:37 np0005597378 systemd[1]: Started libpod-conmon-d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b.scope.
Jan 27 08:51:37 np0005597378 podman[288812]: 2026-01-27 13:51:37.914974025 +0000 UTC m=+0.122045770 container create 3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:51:37 np0005597378 podman[288812]: 2026-01-27 13:51:37.831292837 +0000 UTC m=+0.038364612 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:51:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:51:37 np0005597378 podman[288788]: 2026-01-27 13:51:37.966114807 +0000 UTC m=+0.218098539 container init d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:51:37 np0005597378 podman[288788]: 2026-01-27 13:51:37.974097242 +0000 UTC m=+0.226080943 container start d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 08:51:37 np0005597378 goofy_fermi[288827]: 167 167
Jan 27 08:51:37 np0005597378 systemd[1]: Started libpod-conmon-3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62.scope.
Jan 27 08:51:37 np0005597378 systemd[1]: libpod-d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b.scope: Deactivated successfully.
Jan 27 08:51:38 np0005597378 podman[288788]: 2026-01-27 13:51:38.00310698 +0000 UTC m=+0.255090691 container attach d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 08:51:38 np0005597378 podman[288788]: 2026-01-27 13:51:38.004985652 +0000 UTC m=+0.256969353 container died d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 27 08:51:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:51:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c17e318ecb7c2c2ae2ad0fa96b53f7b9b528afd30b8e9486a755e96c0015140/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:38 np0005597378 podman[288812]: 2026-01-27 13:51:38.114650977 +0000 UTC m=+0.321722752 container init 3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 08:51:38 np0005597378 podman[288812]: 2026-01-27 13:51:38.124131331 +0000 UTC m=+0.331203086 container start 3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:51:38 np0005597378 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [NOTICE]   (288850) : New worker (288852) forked
Jan 27 08:51:38 np0005597378 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [NOTICE]   (288850) : Loading success.
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.198 238945 DEBUG nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.200 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.200 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.200 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.200 238945 DEBUG nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No event matching network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 in dict_keys([('network-vif-plugged', 'a7f80eaf-94c9-4184-9984-32cc6a6db6e3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.201 238945 WARNING nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received unexpected event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.201 238945 DEBUG nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.201 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.201 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.202 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.202 238945 DEBUG nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Processing event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.202 238945 DEBUG nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.202 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.203 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.203 238945 DEBUG oslo_concurrency.lockutils [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.203 238945 DEBUG nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] No waiting events found dispatching network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.203 238945 WARNING nova.compute.manager [req-844c579c-e389-4493-aea4-e87948f2ddd9 req-ab897b5e-5adc-4c42-90f2-43231b9ffb84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received unexpected event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.204 238945 DEBUG nova.compute.manager [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.209 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521898.2091591, e053f779-294f-4782-bb33-a14e40753795 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.210 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.213 238945 DEBUG nova.virt.libvirt.driver [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.225 238945 INFO nova.virt.libvirt.driver [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance spawned successfully.#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.233 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.238 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.259 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:51:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7d9e89ac2803ff98473f117cc55810c10c30c7d73ac171f20e5fae1c939635d9-merged.mount: Deactivated successfully.
Jan 27 08:51:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.287 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ceb7b09e-b635-4570-bcf2-a08115d41365 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a unbound from our chassis#033[00m
Jan 27 08:51:38 np0005597378 podman[288788]: 2026-01-27 13:51:38.289450661 +0000 UTC m=+0.541434362 container remove d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:51:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.290 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.301 238945 DEBUG nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.302 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.302 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.302 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.302 238945 DEBUG nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No event matching network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 in dict_keys([('network-vif-plugged', 'a7f80eaf-94c9-4184-9984-32cc6a6db6e3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.302 238945 WARNING nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received unexpected event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.303 238945 DEBUG nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.303 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.303 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.303 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.303 238945 DEBUG nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Processing event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.303 238945 DEBUG nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.304 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.304 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.304 238945 DEBUG oslo_concurrency.lockutils [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.304 238945 DEBUG nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.304 238945 WARNING nova.compute.manager [req-3bd2fb9f-c0f9-4abd-880c-2e2562ec959d req-876894cb-df11-4b03-93ee-ca5248e499a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received unexpected event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.305 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.317 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521898.3173945, b17763fd-bf68-45e0-84a4-579e1453d6cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.318 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:51:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.318 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[489b79ed-2e2d-4c8e-afe0-80fe1f663387]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.321 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.341 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.342 238945 INFO nova.virt.libvirt.driver [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Instance spawned successfully.#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.342 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.346 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:51:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 544 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 3.9 MiB/s wr, 340 op/s
Jan 27 08:51:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.362 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c316ee70-d0d4-4ad6-9f21-d6c7ae830c54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.369 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:51:38 np0005597378 systemd[1]: libpod-conmon-d621853de96818d15c1d47d79c1bd6baf5d776c0f77a074c2f137aec2619723b.scope: Deactivated successfully.
Jan 27 08:51:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.368 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4e940877-9060-4f7b-be00-93e6cb4d2543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.376 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.377 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.377 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.377 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.378 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.379 238945 DEBUG nova.virt.libvirt.driver [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:51:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.417 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c4a4ae-a081-4cbd-8aa4-1b0666d750b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.451 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db24694d-c816-46f2-9319-402071b0065b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288868, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.455 238945 INFO nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Took 22.76 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.456 238945 DEBUG nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.479 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[69ab8bae-3647-4cd3-978d-d59e638734ef]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438203, 'tstamp': 438203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288869, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438206, 'tstamp': 438206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288869, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.481 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.485 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.485 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.486 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:38.486 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.522 238945 INFO nova.compute.manager [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Took 28.90 seconds to build instance.#033[00m
Jan 27 08:51:38 np0005597378 nova_compute[238941]: 2026-01-27 13:51:38.542 238945 DEBUG oslo_concurrency.lockutils [None req-5c4f0c85-f025-4947-936e-8933e7d7c626 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 28.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:38 np0005597378 podman[288875]: 2026-01-27 13:51:38.592488639 +0000 UTC m=+0.063530687 container create 14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:51:38 np0005597378 systemd[1]: Started libpod-conmon-14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4.scope.
Jan 27 08:51:38 np0005597378 podman[288875]: 2026-01-27 13:51:38.567234271 +0000 UTC m=+0.038276339 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:51:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Jan 27 08:51:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Jan 27 08:51:38 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Jan 27 08:51:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:51:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56fdb4ef3a8fa7782b0606c715ab86569bb07b480d2c7839c115b19e660d7fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56fdb4ef3a8fa7782b0606c715ab86569bb07b480d2c7839c115b19e660d7fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56fdb4ef3a8fa7782b0606c715ab86569bb07b480d2c7839c115b19e660d7fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56fdb4ef3a8fa7782b0606c715ab86569bb07b480d2c7839c115b19e660d7fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:51:38 np0005597378 podman[288875]: 2026-01-27 13:51:38.724449634 +0000 UTC m=+0.195491702 container init 14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brattain, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:51:38 np0005597378 podman[288875]: 2026-01-27 13:51:38.735762878 +0000 UTC m=+0.206804926 container start 14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:51:38 np0005597378 podman[288875]: 2026-01-27 13:51:38.73958999 +0000 UTC m=+0.210632038 container attach 14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brattain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.083 238945 DEBUG nova.compute.manager [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.170 238945 DEBUG oslo_concurrency.lockutils [None req-fffbe659-1776-4cd0-b7cf-4bb789ec1109 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:39 np0005597378 lvm[288970]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:51:39 np0005597378 lvm[288971]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:51:39 np0005597378 lvm[288967]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:51:39 np0005597378 lvm[288970]: VG ceph_vg2 finished
Jan 27 08:51:39 np0005597378 lvm[288971]: VG ceph_vg1 finished
Jan 27 08:51:39 np0005597378 lvm[288967]: VG ceph_vg0 finished
Jan 27 08:51:39 np0005597378 flamboyant_brattain[288890]: {}
Jan 27 08:51:39 np0005597378 systemd[1]: libpod-14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4.scope: Deactivated successfully.
Jan 27 08:51:39 np0005597378 systemd[1]: libpod-14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4.scope: Consumed 1.394s CPU time.
Jan 27 08:51:39 np0005597378 podman[288875]: 2026-01-27 13:51:39.700200269 +0000 UTC m=+1.171242337 container died 14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.722 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b56fdb4ef3a8fa7782b0606c715ab86569bb07b480d2c7839c115b19e660d7fd-merged.mount: Deactivated successfully.
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.742 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.743 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.743 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.744 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.744 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.746 238945 INFO nova.compute.manager [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Terminating instance#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.747 238945 DEBUG nova.compute.manager [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.749 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.750 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.750 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.751 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.751 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.752 238945 INFO nova.compute.manager [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Terminating instance#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.754 238945 DEBUG nova.compute.manager [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:51:39 np0005597378 podman[288875]: 2026-01-27 13:51:39.756589493 +0000 UTC m=+1.227631541 container remove 14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brattain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:51:39 np0005597378 systemd[1]: libpod-conmon-14283a14c0f11ecbc23b7e8e060752eee23bd1c18bb51a3a9efe327b4dad78d4.scope: Deactivated successfully.
Jan 27 08:51:39 np0005597378 kernel: tapa00dfa6b-3d (unregistering): left promiscuous mode
Jan 27 08:51:39 np0005597378 NetworkManager[48904]: <info>  [1769521899.8120] device (tapa00dfa6b-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:51:39 np0005597378 kernel: tap8a6b3097-3b (unregistering): left promiscuous mode
Jan 27 08:51:39 np0005597378 NetworkManager[48904]: <info>  [1769521899.8246] device (tap8a6b3097-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:51:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:39Z|00470|binding|INFO|Releasing lport a00dfa6b-3d70-4dbd-b9c8-4817560c3488 from this chassis (sb_readonly=0)
Jan 27 08:51:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:39Z|00471|binding|INFO|Setting lport a00dfa6b-3d70-4dbd-b9c8-4817560c3488 down in Southbound
Jan 27 08:51:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:39Z|00472|binding|INFO|Removing iface tapa00dfa6b-3d ovn-installed in OVS
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.828 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:39 np0005597378 kernel: tapd7c86f5b-f6 (unregistering): left promiscuous mode
Jan 27 08:51:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.839 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:42:8d 10.100.0.84'], port_security=['fa:16:3e:17:42:8d 10.100.0.84'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.84/24', 'neutron:device_id': 'b17763fd-bf68-45e0-84a4-579e1453d6cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f5d2f82-3b77-4b4a-b319-c5cb2af5a026, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a00dfa6b-3d70-4dbd-b9c8-4817560c3488) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:51:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.841 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a00dfa6b-3d70-4dbd-b9c8-4817560c3488 in datapath 5e5870be-3451-43b4-b92c-dd5af9cc1291 unbound from our chassis#033[00m
Jan 27 08:51:39 np0005597378 NetworkManager[48904]: <info>  [1769521899.8430] device (tapd7c86f5b-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:51:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.843 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e5870be-3451-43b4-b92c-dd5af9cc1291#033[00m
Jan 27 08:51:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:51:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:51:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:39Z|00473|binding|INFO|Releasing lport 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 from this chassis (sb_readonly=0)
Jan 27 08:51:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:39Z|00474|binding|INFO|Setting lport 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 down in Southbound
Jan 27 08:51:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:39Z|00475|binding|INFO|Releasing lport d7c86f5b-f6e4-4637-9ff2-1d6007449737 from this chassis (sb_readonly=0)
Jan 27 08:51:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:39Z|00476|binding|INFO|Setting lport d7c86f5b-f6e4-4637-9ff2-1d6007449737 down in Southbound
Jan 27 08:51:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.861 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:39Z|00477|binding|INFO|Removing iface tapd7c86f5b-f6 ovn-installed in OVS
Jan 27 08:51:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:39Z|00478|binding|INFO|Removing iface tap8a6b3097-3b ovn-installed in OVS
Jan 27 08:51:39 np0005597378 kernel: tapa7f80eaf-94 (unregistering): left promiscuous mode
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.866 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:39 np0005597378 NetworkManager[48904]: <info>  [1769521899.8741] device (tapa7f80eaf-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:51:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.874 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:1f:41 10.100.0.7'], port_security=['fa:16:3e:4b:1f:41 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '17b9acbe-02b3-41d7-af4b-fd8b3d902d47', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7fc23a96b5e44bf687aafd92e4199313', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2aa93dd2-b6fa-4307-bde8-658361fd357a b45e6abb-cfaa-4d65-a9b9-3a393d9b40b3 e04cc62d-050c-41c5-9ac8-83724a3ee20d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24d97abc-1098-48ce-8d9c-90139c3050c9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8a6b3097-3b81-4bf7-8197-4ae8263c57e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.876 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:bf:d5 10.100.1.130'], port_security=['fa:16:3e:95:bf:d5 10.100.1.130'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.130/24', 'neutron:device_id': 'b17763fd-bf68-45e0-84a4-579e1453d6cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2801040-e0d0-43bf-bfb6-870f7e78fec7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d7c86f5b-f6e4-4637-9ff2-1d6007449737) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.877 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0ac730-3d92-4f41-a54e-b1ca8e2d3636]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:39 np0005597378 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Jan 27 08:51:39 np0005597378 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002f.scope: Consumed 16.028s CPU time.
Jan 27 08:51:39 np0005597378 systemd-machined[207425]: Machine qemu-53-instance-0000002f terminated.
Jan 27 08:51:39 np0005597378 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000036.scope: Deactivated successfully.
Jan 27 08:51:39 np0005597378 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000036.scope: Consumed 1.806s CPU time.
Jan 27 08:51:39 np0005597378 systemd-machined[207425]: Machine qemu-61-instance-00000036 terminated.
Jan 27 08:51:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:39Z|00479|binding|INFO|Releasing lport a7f80eaf-94c9-4184-9984-32cc6a6db6e3 from this chassis (sb_readonly=0)
Jan 27 08:51:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:39Z|00480|binding|INFO|Setting lport a7f80eaf-94c9-4184-9984-32cc6a6db6e3 down in Southbound
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.919 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:39Z|00481|binding|INFO|Removing iface tapa7f80eaf-94 ovn-installed in OVS
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.922 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.924 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.927 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:95:9c 10.100.0.224'], port_security=['fa:16:3e:e4:95:9c 10.100.0.224'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.224/24', 'neutron:device_id': 'b17763fd-bf68-45e0-84a4-579e1453d6cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f5d2f82-3b77-4b4a-b319-c5cb2af5a026, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a7f80eaf-94c9-4184-9984-32cc6a6db6e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:39 np0005597378 nova_compute[238941]: 2026-01-27 13:51:39.940 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.942 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba40612-126c-4363-8d0b-88e54f6e6087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.948 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8180da22-fc98-4401-9600-9ba06d3e20fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:39 np0005597378 NetworkManager[48904]: <info>  [1769521899.9901] manager: (tapa00dfa6b-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/211)
Jan 27 08:51:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:39.997 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6d97c10d-a942-44cc-bd85-b58026e08f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 NetworkManager[48904]: <info>  [1769521900.0034] manager: (tapd7c86f5b-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.008 238945 INFO nova.virt.libvirt.driver [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Instance destroyed successfully.#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.009 238945 DEBUG nova.objects.instance [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lazy-loading 'resources' on Instance uuid 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:40 np0005597378 NetworkManager[48904]: <info>  [1769521900.0148] manager: (tapa7f80eaf-94): new Tun device (/org/freedesktop/NetworkManager/Devices/213)
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.021 238945 DEBUG nova.virt.libvirt.vif [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-174010246',display_name='tempest-SecurityGroupsTestJSON-server-174010246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-174010246',id=47,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:50:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7fc23a96b5e44bf687aafd92e4199313',ramdisk_id='',reservation_id='r-ozsokgto',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-915122805',owner_user_name='tempest-SecurityGroupsTestJSON-915122805-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:50:35Z,user_data=None,user_id='dc97508eec004685b1c36a85261430bd',uuid=17b9acbe-02b3-41d7-af4b-fd8b3d902d47,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.022 238945 DEBUG nova.network.os_vif_util [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converting VIF {"id": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "address": "fa:16:3e:4b:1f:41", "network": {"id": "6fa17e2f-4576-4e68-b7d9-6d78705f8a05", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1786233078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7fc23a96b5e44bf687aafd92e4199313", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a6b3097-3b", "ovs_interfaceid": "8a6b3097-3b81-4bf7-8197-4ae8263c57e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.023 238945 DEBUG nova.network.os_vif_util [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.023 238945 DEBUG os_vif [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.027 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08a55058-0ba9-4323-a3bc-b31ba5567457]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e5870be-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:7f:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453551, 'reachable_time': 34659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289054, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.028 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.029 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a6b3097-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.032 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.033 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.046 238945 INFO nova.virt.libvirt.driver [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Instance destroyed successfully.#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.048 238945 DEBUG nova.objects.instance [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lazy-loading 'resources' on Instance uuid b17763fd-bf68-45e0-84a4-579e1453d6cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.059 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.062 238945 INFO os_vif [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:1f:41,bridge_name='br-int',has_traffic_filtering=True,id=8a6b3097-3b81-4bf7-8197-4ae8263c57e1,network=Network(6fa17e2f-4576-4e68-b7d9-6d78705f8a05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a6b3097-3b')#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.064 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bb720e0f-b6fb-49bc-94cf-0d8bc249ac0d]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap5e5870be-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453567, 'tstamp': 453567}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289080, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e5870be-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453572, 'tstamp': 453572}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289080, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.066 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e5870be-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.078 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e5870be-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.079 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.081 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e5870be-30, col_values=(('external_ids', {'iface-id': 'cd6b8921-0b49-406f-b95a-637f6648e882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.082 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.083 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.085 238945 DEBUG nova.virt.libvirt.vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:38Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.085 238945 DEBUG nova.network.os_vif_util [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.086 238945 DEBUG nova.network.os_vif_util [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.086 238945 DEBUG os_vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.089 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa00dfa6b-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.089 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8a6b3097-3b81-4bf7-8197-4ae8263c57e1 in datapath 6fa17e2f-4576-4e68-b7d9-6d78705f8a05 unbound from our chassis#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.091 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.092 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6fa17e2f-4576-4e68-b7d9-6d78705f8a05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.094 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb8a935-77de-4c5d-938a-df34753dc786]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.094 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05 namespace which is not needed anymore#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.105 238945 INFO os_vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:42:8d,bridge_name='br-int',has_traffic_filtering=True,id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa00dfa6b-3d')#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.106 238945 DEBUG nova.virt.libvirt.vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:38Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.107 238945 DEBUG nova.network.os_vif_util [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.108 238945 DEBUG nova.network.os_vif_util [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.108 238945 DEBUG os_vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.110 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.110 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7c86f5b-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.113 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.120 238945 INFO os_vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:bf:d5,bridge_name='br-int',has_traffic_filtering=True,id=d7c86f5b-f6e4-4637-9ff2-1d6007449737,network=Network(20fa5117-7a98-4fad-80b8-7654f1d826c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7c86f5b-f6')#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.121 238945 DEBUG nova.virt.libvirt.vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1690429552',display_name='tempest-ServersTestMultiNic-server-1690429552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1690429552',id=54,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-tj80cbsg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:38Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=b17763fd-bf68-45e0-84a4-579e1453d6cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.121 238945 DEBUG nova.network.os_vif_util [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "address": "fa:16:3e:e4:95:9c", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f80eaf-94", "ovs_interfaceid": "a7f80eaf-94c9-4184-9984-32cc6a6db6e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.122 238945 DEBUG nova.network.os_vif_util [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.122 238945 DEBUG os_vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.124 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.125 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7f80eaf-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.132 238945 INFO os_vif [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3,network=Network(5e5870be-3451-43b4-b92c-dd5af9cc1291),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f80eaf-94')#033[00m
Jan 27 08:51:40 np0005597378 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [NOTICE]   (284386) : haproxy version is 2.8.14-c23fe91
Jan 27 08:51:40 np0005597378 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [NOTICE]   (284386) : path to executable is /usr/sbin/haproxy
Jan 27 08:51:40 np0005597378 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [WARNING]  (284386) : Exiting Master process...
Jan 27 08:51:40 np0005597378 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [WARNING]  (284386) : Exiting Master process...
Jan 27 08:51:40 np0005597378 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [ALERT]    (284386) : Current worker (284391) exited with code 143 (Terminated)
Jan 27 08:51:40 np0005597378 neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05[284362]: [WARNING]  (284386) : All workers exited. Exiting... (0)
Jan 27 08:51:40 np0005597378 systemd[1]: libpod-46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927.scope: Deactivated successfully.
Jan 27 08:51:40 np0005597378 podman[289143]: 2026-01-27 13:51:40.318213026 +0000 UTC m=+0.083979357 container died 46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 08:51:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 519 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.7 MiB/s wr, 183 op/s
Jan 27 08:51:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927-userdata-shm.mount: Deactivated successfully.
Jan 27 08:51:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5ab0fe96905189fab7283c3eb95c3fab982410fb113c0b0e10de31b1021d24e5-merged.mount: Deactivated successfully.
Jan 27 08:51:40 np0005597378 podman[289143]: 2026-01-27 13:51:40.389597753 +0000 UTC m=+0.155364064 container cleanup 46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 08:51:40 np0005597378 systemd[1]: libpod-conmon-46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927.scope: Deactivated successfully.
Jan 27 08:51:40 np0005597378 podman[289173]: 2026-01-27 13:51:40.503251615 +0000 UTC m=+0.072128918 container remove 46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.515 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d2c440-508a-45d3-bc2f-088a2941fcd9]: (4, ('Tue Jan 27 01:51:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05 (46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927)\n46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927\nTue Jan 27 01:51:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05 (46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927)\n46b27ceb25ec54bec2e8d4be2b1bc2717bb7de90801aee7ebc41f99fa5159927\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.518 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4307a3-0346-490c-975f-76f107548e51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.519 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa17e2f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.521 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 kernel: tap6fa17e2f-40: left promiscuous mode
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.542 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.546 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52296cc8-5126-4772-a48a-0e74be6c61b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.571 238945 INFO nova.virt.libvirt.driver [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Deleting instance files /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47_del#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.572 238945 INFO nova.virt.libvirt.driver [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Deletion of /var/lib/nova/instances/17b9acbe-02b3-41d7-af4b-fd8b3d902d47_del complete#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.569 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1427266d-75dd-4e80-83e1-cb5b5ebe1c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.574 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[84a92184-2ea2-4925-9793-afa3175779f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.592 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e147fb-51e6-4104-a945-104c66d89fb4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447274, 'reachable_time': 25817, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289188, 'error': None, 'target': 'ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 systemd[1]: run-netns-ovnmeta\x2d6fa17e2f\x2d4576\x2d4e68\x2db7d9\x2d6d78705f8a05.mount: Deactivated successfully.
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.601 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6fa17e2f-4576-4e68-b7d9-6d78705f8a05 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.601 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5cd9d5-10f5-4802-a431-c9c86fcf4e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.606 238945 INFO nova.virt.libvirt.driver [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Deleting instance files /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc_del#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.608 238945 INFO nova.virt.libvirt.driver [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Deletion of /var/lib/nova/instances/b17763fd-bf68-45e0-84a4-579e1453d6cc_del complete#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.605 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d7c86f5b-f6e4-4637-9ff2-1d6007449737 in datapath 20fa5117-7a98-4fad-80b8-7654f1d826c9 unbound from our chassis#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.613 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20fa5117-7a98-4fad-80b8-7654f1d826c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.614 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3160b48b-c7ea-45c3-a997-31fa4f6229cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.615 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9 namespace which is not needed anymore#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.645 238945 INFO nova.compute.manager [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.646 238945 DEBUG oslo.service.loopingcall [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.646 238945 DEBUG nova.compute.manager [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.646 238945 DEBUG nova.network.neutron [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.662 238945 INFO nova.compute.manager [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Took 0.91 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.663 238945 DEBUG oslo.service.loopingcall [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.663 238945 DEBUG nova.compute.manager [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.663 238945 DEBUG nova.network.neutron [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:51:40 np0005597378 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [NOTICE]   (288850) : haproxy version is 2.8.14-c23fe91
Jan 27 08:51:40 np0005597378 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [NOTICE]   (288850) : path to executable is /usr/sbin/haproxy
Jan 27 08:51:40 np0005597378 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [WARNING]  (288850) : Exiting Master process...
Jan 27 08:51:40 np0005597378 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [ALERT]    (288850) : Current worker (288852) exited with code 143 (Terminated)
Jan 27 08:51:40 np0005597378 neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9[288835]: [WARNING]  (288850) : All workers exited. Exiting... (0)
Jan 27 08:51:40 np0005597378 systemd[1]: libpod-3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62.scope: Deactivated successfully.
Jan 27 08:51:40 np0005597378 podman[289206]: 2026-01-27 13:51:40.763524655 +0000 UTC m=+0.046568771 container died 3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:51:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62-userdata-shm.mount: Deactivated successfully.
Jan 27 08:51:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5c17e318ecb7c2c2ae2ad0fa96b53f7b9b528afd30b8e9486a755e96c0015140-merged.mount: Deactivated successfully.
Jan 27 08:51:40 np0005597378 podman[289206]: 2026-01-27 13:51:40.801915946 +0000 UTC m=+0.084960062 container cleanup 3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:51:40 np0005597378 systemd[1]: libpod-conmon-3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62.scope: Deactivated successfully.
Jan 27 08:51:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:51:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:51:40 np0005597378 podman[289232]: 2026-01-27 13:51:40.88656915 +0000 UTC m=+0.064006160 container remove 3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.895 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3aee1649-9bb5-458a-bea7-70df73ab1ff2]: (4, ('Tue Jan 27 01:51:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9 (3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62)\n3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62\nTue Jan 27 01:51:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9 (3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62)\n3207ccfb43ac040316f74754e01bc3b1e2c793b3f5df046c8e328b1e80d0aa62\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.897 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[28acac06-348c-4289-8b43-0a0ce96c74ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.898 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20fa5117-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.900 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 kernel: tap20fa5117-70: left promiscuous mode
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.921 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 nova_compute[238941]: 2026-01-27 13:51:40.923 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.926 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4d534483-2207-4435-8b13-dd0dc0d3a843]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.940 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6ee307-ed04-4c7e-8677-b9cdb1cdbfa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.941 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6041eef-cf6c-4e19-becd-ef6a5dccdf4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.956 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc6d1dde-c058-4928-a2cd-8b4e186a03a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453669, 'reachable_time': 43290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289246, 'error': None, 'target': 'ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 systemd[1]: run-netns-ovnmeta\x2d20fa5117\x2d7a98\x2d4fad\x2d80b8\x2d7654f1d826c9.mount: Deactivated successfully.
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.958 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20fa5117-7a98-4fad-80b8-7654f1d826c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.958 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3e6b92-ed76-4123-8be9-a9a008af38aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.962 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a7f80eaf-94c9-4184-9984-32cc6a6db6e3 in datapath 5e5870be-3451-43b4-b92c-dd5af9cc1291 unbound from our chassis#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.964 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e5870be-3451-43b4-b92c-dd5af9cc1291, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b97ce248-478b-4267-9346-d1acd9dd8a2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:40.966 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291 namespace which is not needed anymore#033[00m
Jan 27 08:51:41 np0005597378 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [NOTICE]   (288616) : haproxy version is 2.8.14-c23fe91
Jan 27 08:51:41 np0005597378 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [NOTICE]   (288616) : path to executable is /usr/sbin/haproxy
Jan 27 08:51:41 np0005597378 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [WARNING]  (288616) : Exiting Master process...
Jan 27 08:51:41 np0005597378 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [ALERT]    (288616) : Current worker (288618) exited with code 143 (Terminated)
Jan 27 08:51:41 np0005597378 neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291[288605]: [WARNING]  (288616) : All workers exited. Exiting... (0)
Jan 27 08:51:41 np0005597378 systemd[1]: libpod-37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070.scope: Deactivated successfully.
Jan 27 08:51:41 np0005597378 podman[289263]: 2026-01-27 13:51:41.150901249 +0000 UTC m=+0.066157729 container died 37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 08:51:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070-userdata-shm.mount: Deactivated successfully.
Jan 27 08:51:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9cc4a4a8015a746dd956f32125462a808c2708de024f83f5714cf7c909a82509-merged.mount: Deactivated successfully.
Jan 27 08:51:41 np0005597378 podman[289263]: 2026-01-27 13:51:41.190278606 +0000 UTC m=+0.105535076 container cleanup 37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 27 08:51:41 np0005597378 systemd[1]: libpod-conmon-37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070.scope: Deactivated successfully.
Jan 27 08:51:41 np0005597378 podman[289293]: 2026-01-27 13:51:41.265831005 +0000 UTC m=+0.047731832 container remove 37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:51:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.272 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe7094a-cbfe-4e01-a765-34dcaccf2435]: (4, ('Tue Jan 27 01:51:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291 (37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070)\n37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070\nTue Jan 27 01:51:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291 (37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070)\n37e26151642f93fb10291bb6e20f3d521133dcf1a63b2f7df477c7b31f30a070\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.275 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5757d668-ffdd-487b-ad26-baad4afa63fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.276 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e5870be-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:41 np0005597378 nova_compute[238941]: 2026-01-27 13:51:41.277 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:41 np0005597378 kernel: tap5e5870be-30: left promiscuous mode
Jan 27 08:51:41 np0005597378 nova_compute[238941]: 2026-01-27 13:51:41.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.299 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db8d728f-a964-4e0a-ac38-59857ca22ea5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.314 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c983edbe-62eb-41b1-b602-0b5d40560f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.316 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e47994-de54-4195-b40a-99416a2208b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.343 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fad43f30-8ac0-4707-be20-271c7739412e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453541, 'reachable_time': 21735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289307, 'error': None, 'target': 'ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.345 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5e5870be-3451-43b4-b92c-dd5af9cc1291 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:51:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:41.345 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[02249652-94b1-477a-9bd9-f9d52d08a849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:41 np0005597378 systemd[1]: run-netns-ovnmeta\x2d5e5870be\x2d3451\x2d43b4\x2db92c\x2ddd5af9cc1291.mount: Deactivated successfully.
Jan 27 08:51:41 np0005597378 nova_compute[238941]: 2026-01-27 13:51:41.940 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.296 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521887.294683, f433aa34-c04e-4ae6-8fd3-0999a41789fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.296 238945 INFO nova.compute.manager [-] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.323 238945 DEBUG nova.compute.manager [None req-36cb5f7f-b0fc-4ccb-92a5-ac83a0387419 - - - - - -] [instance: f433aa34-c04e-4ae6-8fd3-0999a41789fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 519 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.7 MiB/s wr, 183 op/s
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.480 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.480 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.481 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.481 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.481 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.482 238945 INFO nova.compute.manager [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Terminating instance#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.483 238945 DEBUG nova.compute.manager [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:51:42 np0005597378 kernel: tap7417a545-1c (unregistering): left promiscuous mode
Jan 27 08:51:42 np0005597378 NetworkManager[48904]: <info>  [1769521902.5521] device (tap7417a545-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:42 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:42Z|00482|binding|INFO|Releasing lport 7417a545-1c1e-4477-b4ff-72b924a65f11 from this chassis (sb_readonly=0)
Jan 27 08:51:42 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:42Z|00483|binding|INFO|Setting lport 7417a545-1c1e-4477-b4ff-72b924a65f11 down in Southbound
Jan 27 08:51:42 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:42Z|00484|binding|INFO|Removing iface tap7417a545-1c ovn-installed in OVS
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.570 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:99:51 10.100.0.11'], port_security=['fa:16:3e:0d:99:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4316dbd4-e3b9-4411-b921-6dbdd5a3197f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc0a286-57b7-4099-8601-e0f075cad96e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14aa89c69a294999aab63771025b995a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38299218-872e-42a3-bc48-5b780b8d4828', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3bdd11e-ecae-4f85-a5c8-f91378f5b71f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=7417a545-1c1e-4477-b4ff-72b924a65f11) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.571 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 7417a545-1c1e-4477-b4ff-72b924a65f11 in datapath cfc0a286-57b7-4099-8601-e0f075cad96e unbound from our chassis#033[00m
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.573 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfc0a286-57b7-4099-8601-e0f075cad96e#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.591 238945 DEBUG nova.compute.manager [req-5a191e54-c096-4306-9974-62fb296f2250 req-d56ea4d5-593c-4a41-8778-2f3736e5855a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-unplugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.591 238945 DEBUG oslo_concurrency.lockutils [req-5a191e54-c096-4306-9974-62fb296f2250 req-d56ea4d5-593c-4a41-8778-2f3736e5855a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.591 238945 DEBUG oslo_concurrency.lockutils [req-5a191e54-c096-4306-9974-62fb296f2250 req-d56ea4d5-593c-4a41-8778-2f3736e5855a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.592 238945 DEBUG oslo_concurrency.lockutils [req-5a191e54-c096-4306-9974-62fb296f2250 req-d56ea4d5-593c-4a41-8778-2f3736e5855a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.592 238945 DEBUG nova.compute.manager [req-5a191e54-c096-4306-9974-62fb296f2250 req-d56ea4d5-593c-4a41-8778-2f3736e5855a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-unplugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.591 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e7dd26e9-446a-4498-8541-b2e1470a6f4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.592 238945 DEBUG nova.compute.manager [req-5a191e54-c096-4306-9974-62fb296f2250 req-d56ea4d5-593c-4a41-8778-2f3736e5855a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-unplugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.621 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac7a05d-cb5f-4aae-aed3-66001f24d72e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.624 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[36df4fc3-7f1b-4254-9fbd-d7c1606c539b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:42 np0005597378 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000034.scope: Deactivated successfully.
Jan 27 08:51:42 np0005597378 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000034.scope: Consumed 18.862s CPU time.
Jan 27 08:51:42 np0005597378 systemd-machined[207425]: Machine qemu-57-instance-00000034 terminated.
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.654 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[95e4451a-329a-48f0-b6ef-23fe1ba62ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.688 238945 DEBUG nova.compute.manager [req-8111fec9-2581-4e79-87a4-6bb049522f66 req-cefb7ddd-26f6-4013-b267-82a0baa79354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-unplugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.688 238945 DEBUG oslo_concurrency.lockutils [req-8111fec9-2581-4e79-87a4-6bb049522f66 req-cefb7ddd-26f6-4013-b267-82a0baa79354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.689 238945 DEBUG oslo_concurrency.lockutils [req-8111fec9-2581-4e79-87a4-6bb049522f66 req-cefb7ddd-26f6-4013-b267-82a0baa79354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.689 238945 DEBUG oslo_concurrency.lockutils [req-8111fec9-2581-4e79-87a4-6bb049522f66 req-cefb7ddd-26f6-4013-b267-82a0baa79354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.689 238945 DEBUG nova.compute.manager [req-8111fec9-2581-4e79-87a4-6bb049522f66 req-cefb7ddd-26f6-4013-b267-82a0baa79354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-unplugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.689 238945 DEBUG nova.compute.manager [req-8111fec9-2581-4e79-87a4-6bb049522f66 req-cefb7ddd-26f6-4013-b267-82a0baa79354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-unplugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.691 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[57172995-2e05-4f52-8955-ebafc4dc5f99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc0a286-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 12, 'rx_bytes': 532, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 12, 'rx_bytes': 532, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452010, 'reachable_time': 33825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289319, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.695 238945 DEBUG nova.network.neutron [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.714 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.717 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2eef76ee-6048-4d4c-981e-4cfddbe29be0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452021, 'tstamp': 452021}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289321, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfc0a286-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452025, 'tstamp': 452025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289321, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.722 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc0a286-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.728 238945 INFO nova.compute.manager [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Took 2.08 seconds to deallocate network for instance.#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.728 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.731 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfc0a286-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.732 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.732 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfc0a286-50, col_values=(('external_ids', {'iface-id': '7435efea-97d4-42e4-b8e7-2f77985e6cb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:42.732 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.739 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.747 238945 INFO nova.virt.libvirt.driver [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Instance destroyed successfully.#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.748 238945 DEBUG nova.objects.instance [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'resources' on Instance uuid 4316dbd4-e3b9-4411-b921-6dbdd5a3197f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.791 238945 DEBUG nova.virt.libvirt.vif [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-2',id=52,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-27T13:51:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:23Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=4316dbd4-e3b9-4411-b921-6dbdd5a3197f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.791 238945 DEBUG nova.network.os_vif_util [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "7417a545-1c1e-4477-b4ff-72b924a65f11", "address": "fa:16:3e:0d:99:51", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7417a545-1c", "ovs_interfaceid": "7417a545-1c1e-4477-b4ff-72b924a65f11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.792 238945 DEBUG nova.network.os_vif_util [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.793 238945 DEBUG os_vif [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.794 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.795 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7417a545-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.796 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.798 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.801 238945 INFO os_vif [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:99:51,bridge_name='br-int',has_traffic_filtering=True,id=7417a545-1c1e-4477-b4ff-72b924a65f11,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7417a545-1c')#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.818 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.823 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.823 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.856 238945 WARNING nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] While synchronizing instance power states, found 6 instances in the database and 4 instances on the hypervisor.#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.857 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.857 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 4316dbd4-e3b9-4411-b921-6dbdd5a3197f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.858 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 7e8705e9-4e86-44aa-b532-55fcccac542c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.859 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid e053f779-294f-4782-bb33-a14e40753795 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.859 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.859 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid b17763fd-bf68-45e0-84a4-579e1453d6cc _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.860 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.860 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.861 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.862 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.862 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.862 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.863 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.863 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.863 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "e053f779-294f-4782-bb33-a14e40753795" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.864 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.864 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.929 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "e053f779-294f-4782-bb33-a14e40753795" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.930 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.962 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.962 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.963 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.963 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.963 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.964 238945 INFO nova.compute.manager [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Terminating instance#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.965 238945 DEBUG nova.compute.manager [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:51:42 np0005597378 nova_compute[238941]: 2026-01-27 13:51:42.971 238945 DEBUG oslo_concurrency.processutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:43 np0005597378 kernel: tap89a5b6ba-14 (unregistering): left promiscuous mode
Jan 27 08:51:43 np0005597378 NetworkManager[48904]: <info>  [1769521903.0125] device (tap89a5b6ba-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.018 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:43Z|00485|binding|INFO|Releasing lport 89a5b6ba-141b-45b8-b1ea-fc2a60970931 from this chassis (sb_readonly=0)
Jan 27 08:51:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:43Z|00486|binding|INFO|Setting lport 89a5b6ba-141b-45b8-b1ea-fc2a60970931 down in Southbound
Jan 27 08:51:43 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:43Z|00487|binding|INFO|Removing iface tap89a5b6ba-14 ovn-installed in OVS
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.027 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.035 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:d7:e7 10.100.0.6'], port_security=['fa:16:3e:d6:d7:e7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7e8705e9-4e86-44aa-b532-55fcccac542c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc0a286-57b7-4099-8601-e0f075cad96e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14aa89c69a294999aab63771025b995a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38299218-872e-42a3-bc48-5b780b8d4828', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3bdd11e-ecae-4f85-a5c8-f91378f5b71f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=89a5b6ba-141b-45b8-b1ea-fc2a60970931) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.037 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 89a5b6ba-141b-45b8-b1ea-fc2a60970931 in datapath cfc0a286-57b7-4099-8601-e0f075cad96e unbound from our chassis#033[00m
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.039 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cfc0a286-57b7-4099-8601-e0f075cad96e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.041 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9ae249-ea0e-4b99-8984-54e57490d09e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.041 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e namespace which is not needed anymore#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:43 np0005597378 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Deactivated successfully.
Jan 27 08:51:43 np0005597378 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Consumed 15.903s CPU time.
Jan 27 08:51:43 np0005597378 systemd-machined[207425]: Machine qemu-60-instance-00000035 terminated.
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.124 238945 INFO nova.virt.libvirt.driver [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Deleting instance files /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f_del#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.125 238945 INFO nova.virt.libvirt.driver [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Deletion of /var/lib/nova/instances/4316dbd4-e3b9-4411-b921-6dbdd5a3197f_del complete#033[00m
Jan 27 08:51:43 np0005597378 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [NOTICE]   (287192) : haproxy version is 2.8.14-c23fe91
Jan 27 08:51:43 np0005597378 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [NOTICE]   (287192) : path to executable is /usr/sbin/haproxy
Jan 27 08:51:43 np0005597378 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [WARNING]  (287192) : Exiting Master process...
Jan 27 08:51:43 np0005597378 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [WARNING]  (287192) : Exiting Master process...
Jan 27 08:51:43 np0005597378 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [ALERT]    (287192) : Current worker (287194) exited with code 143 (Terminated)
Jan 27 08:51:43 np0005597378 neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e[287174]: [WARNING]  (287192) : All workers exited. Exiting... (0)
Jan 27 08:51:43 np0005597378 systemd[1]: libpod-3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb.scope: Deactivated successfully.
Jan 27 08:51:43 np0005597378 conmon[287174]: conmon 3e16f2eb02095e042931 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb.scope/container/memory.events
Jan 27 08:51:43 np0005597378 podman[289373]: 2026-01-27 13:51:43.179278813 +0000 UTC m=+0.047608640 container died 3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.202 238945 INFO nova.virt.libvirt.driver [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Instance destroyed successfully.#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.203 238945 DEBUG nova.objects.instance [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lazy-loading 'resources' on Instance uuid 7e8705e9-4e86-44aa-b532-55fcccac542c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb-userdata-shm.mount: Deactivated successfully.
Jan 27 08:51:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f6b190e352736a83f7d17ea65747b1d58325fe8f6470af34fb8178ac46c48bc5-merged.mount: Deactivated successfully.
Jan 27 08:51:43 np0005597378 podman[289373]: 2026-01-27 13:51:43.219481483 +0000 UTC m=+0.087811310 container cleanup 3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.226 238945 INFO nova.compute.manager [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.227 238945 DEBUG oslo.service.loopingcall [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.227 238945 DEBUG nova.compute.manager [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.228 238945 DEBUG nova.network.neutron [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:51:43 np0005597378 systemd[1]: libpod-conmon-3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb.scope: Deactivated successfully.
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.231 238945 DEBUG nova.virt.libvirt.vif [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-2140282589',display_name='tempest-ListServersNegativeTestJSON-server-2140282589-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-2140282589-3',id=53,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-01-27T13:51:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='14aa89c69a294999aab63771025b995a',ramdisk_id='',reservation_id='r-tbz20eb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-2145054704',owner_user_name='tempest-ListServersNegativeTestJSON-2145054704-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:26Z,user_data=None,user_id='2731f35d38de444e8d3fac25a4164453',uuid=7e8705e9-4e86-44aa-b532-55fcccac542c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.231 238945 DEBUG nova.network.os_vif_util [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converting VIF {"id": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "address": "fa:16:3e:d6:d7:e7", "network": {"id": "cfc0a286-57b7-4099-8601-e0f075cad96e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-410585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14aa89c69a294999aab63771025b995a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89a5b6ba-14", "ovs_interfaceid": "89a5b6ba-141b-45b8-b1ea-fc2a60970931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.232 238945 DEBUG nova.network.os_vif_util [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.232 238945 DEBUG os_vif [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.234 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89a5b6ba-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.236 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.238 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.240 238945 INFO os_vif [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:d7:e7,bridge_name='br-int',has_traffic_filtering=True,id=89a5b6ba-141b-45b8-b1ea-fc2a60970931,network=Network(cfc0a286-57b7-4099-8601-e0f075cad96e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89a5b6ba-14')#033[00m
Jan 27 08:51:43 np0005597378 podman[289430]: 2026-01-27 13:51:43.290012456 +0000 UTC m=+0.046461058 container remove 3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.300 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b8b249-d338-46e6-bfef-9d52ebd6c4d1]: (4, ('Tue Jan 27 01:51:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e (3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb)\n3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb\nTue Jan 27 01:51:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e (3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb)\n3e16f2eb02095e0429318040e937fc2f32055cdbfc660e893a9a8ebb1e7f37eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.302 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d405e6-b3a6-48cd-90ee-75510df2539f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.303 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc0a286-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:43 np0005597378 kernel: tapcfc0a286-50: left promiscuous mode
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.307 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.309 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0b836e32-608b-40f9-bd1d-54875abe625d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.324 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.327 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e448a67-bad3-4846-b85b-eae211bc2626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.329 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[faf29793-0343-4119-ba10-884663176795]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.348 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9507c264-76a8-4fe9-ac6d-416f9d7a5b17]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452002, 'reachable_time': 21216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289462, 'error': None, 'target': 'ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:43 np0005597378 systemd[1]: run-netns-ovnmeta\x2dcfc0a286\x2d57b7\x2d4099\x2d8601\x2de0f075cad96e.mount: Deactivated successfully.
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.350 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cfc0a286-57b7-4099-8601-e0f075cad96e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:51:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:43.351 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c99b13bb-35bd-4583-9169-20f63b97e72a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.512 238945 INFO nova.virt.libvirt.driver [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Deleting instance files /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c_del#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.513 238945 INFO nova.virt.libvirt.driver [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Deletion of /var/lib/nova/instances/7e8705e9-4e86-44aa-b532-55fcccac542c_del complete#033[00m
Jan 27 08:51:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1890856604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.613 238945 DEBUG oslo_concurrency.processutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.619 238945 DEBUG nova.compute.provider_tree [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.658 238945 INFO nova.compute.manager [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.659 238945 DEBUG oslo.service.loopingcall [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.661 238945 DEBUG nova.compute.manager [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.661 238945 DEBUG nova.network.neutron [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.663 238945 DEBUG nova.scheduler.client.report [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.760 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:43 np0005597378 nova_compute[238941]: 2026-01-27 13:51:43.985 238945 INFO nova.scheduler.client.report [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Deleted allocations for instance 17b9acbe-02b3-41d7-af4b-fd8b3d902d47#033[00m
Jan 27 08:51:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 331 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.1 MiB/s wr, 352 op/s
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.229 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521890.2268615, 6696d934-5b11-43a6-828d-b968bbf1ba9d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.229 238945 INFO nova.compute.manager [-] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.350 238945 DEBUG nova.network.neutron [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.395 238945 DEBUG nova.compute.manager [None req-14ae708c-3eaa-4763-bb2d-e1b1374b4377 - - - - - -] [instance: 6696d934-5b11-43a6-828d-b968bbf1ba9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.588 238945 INFO nova.compute.manager [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Took 2.36 seconds to deallocate network for instance.#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.595 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.595 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.595 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.595 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.596 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.596 238945 WARNING nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received unexpected event network-vif-plugged-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.596 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-deleted-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.596 238945 INFO nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Neutron deleted interface a7f80eaf-94c9-4184-9984-32cc6a6db6e3; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.596 238945 DEBUG nova.network.neutron [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [{"id": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "address": "fa:16:3e:17:42:8d", "network": {"id": "5e5870be-3451-43b4-b92c-dd5af9cc1291", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-138700370", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.84", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa00dfa6b-3d", "ovs_interfaceid": "a00dfa6b-3d70-4dbd-b9c8-4817560c3488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.599 238945 DEBUG oslo_concurrency.lockutils [None req-224b8e09-3182-4df5-9db8-2df38bf9c508 dc97508eec004685b1c36a85261430bd 7fc23a96b5e44bf687aafd92e4199313 - - default default] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.599 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.610 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.610 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.611 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.611 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.611 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.646 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "17b9acbe-02b3-41d7-af4b-fd8b3d902d47" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.649 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Detach interface failed, port_id=a7f80eaf-94c9-4184-9984-32cc6a6db6e3, reason: Instance b17763fd-bf68-45e0-84a4-579e1453d6cc could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.650 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-vif-unplugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.650 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.650 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.650 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.650 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] No waiting events found dispatching network-vif-unplugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-vif-unplugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 DEBUG oslo_concurrency.lockutils [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] No waiting events found dispatching network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.651 238945 WARNING nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received unexpected event network-vif-plugged-89a5b6ba-141b-45b8-b1ea-fc2a60970931 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.652 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-deleted-a00dfa6b-3d70-4dbd-b9c8-4817560c3488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.652 238945 INFO nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Neutron deleted interface a00dfa6b-3d70-4dbd-b9c8-4817560c3488; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.652 238945 DEBUG nova.network.neutron [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [{"id": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "address": "fa:16:3e:95:bf:d5", "network": {"id": "20fa5117-7a98-4fad-80b8-7654f1d826c9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2142398357", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7c86f5b-f6", "ovs_interfaceid": "d7c86f5b-f6e4-4637-9ff2-1d6007449737", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.655 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.655 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.684 238945 DEBUG nova.compute.manager [req-b4ac5968-91d8-4667-b372-ddd163e5e470 req-131cfbc2-4945-4f12-b99b-76a5ea3885df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Detach interface failed, port_id=a00dfa6b-3d70-4dbd-b9c8-4817560c3488, reason: Instance b17763fd-bf68-45e0-84a4-579e1453d6cc could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.703 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.704 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.704 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.704 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.704 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.704 238945 WARNING nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received unexpected event network-vif-plugged-d7c86f5b-f6e4-4637-9ff2-1d6007449737 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.705 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Received event network-vif-deleted-8a6b3097-3b81-4bf7-8197-4ae8263c57e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.705 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-unplugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.705 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.705 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.705 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.705 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-unplugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-unplugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] No waiting events found dispatching network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.706 238945 WARNING nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received unexpected event network-vif-plugged-a7f80eaf-94c9-4184-9984-32cc6a6db6e3 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.707 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received event network-vif-unplugged-7417a545-1c1e-4477-b4ff-72b924a65f11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.707 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.707 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.707 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.707 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] No waiting events found dispatching network-vif-unplugged-7417a545-1c1e-4477-b4ff-72b924a65f11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.707 238945 WARNING nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received unexpected event network-vif-unplugged-7417a545-1c1e-4477-b4ff-72b924a65f11 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.708 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.708 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.708 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.708 238945 DEBUG oslo_concurrency.lockutils [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.708 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] No waiting events found dispatching network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.708 238945 WARNING nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received unexpected event network-vif-plugged-7417a545-1c1e-4477-b4ff-72b924a65f11 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.709 238945 DEBUG nova.compute.manager [req-fd2fb2a3-20d6-4a12-bc7f-383521e0c670 req-c504c29c-896e-43d1-aa0a-2ae178efba14 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Received event network-vif-deleted-7417a545-1c1e-4477-b4ff-72b924a65f11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:45 np0005597378 nova_compute[238941]: 2026-01-27 13:51:45.808 238945 DEBUG oslo_concurrency.processutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Jan 27 08:51:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Jan 27 08:51:45 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.052 238945 DEBUG nova.network.neutron [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.053 238945 DEBUG nova.network.neutron [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.083 238945 INFO nova.compute.manager [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Took 2.42 seconds to deallocate network for instance.#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.090 238945 INFO nova.compute.manager [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Took 5.43 seconds to deallocate network for instance.#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.163 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.171 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500512299' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.243 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:46.299 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:46.299 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:46.300 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.324 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.325 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.329 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.329 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:51:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 269 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.2 MiB/s wr, 440 op/s
Jan 27 08:51:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3582023174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.450 238945 DEBUG oslo_concurrency.processutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.456 238945 DEBUG nova.compute.provider_tree [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.471 238945 DEBUG nova.scheduler.client.report [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.498 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.501 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.524 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.525 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3649MB free_disk=59.84511078521609GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.525 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.527 238945 INFO nova.scheduler.client.report [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Deleted allocations for instance 4316dbd4-e3b9-4411-b921-6dbdd5a3197f#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.617 238945 DEBUG oslo_concurrency.processutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.656 238945 DEBUG oslo_concurrency.lockutils [None req-9184cea5-770d-44fc-97e9-9195f9a8c0df 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.658 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.658 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.658 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "4316dbd4-e3b9-4411-b921-6dbdd5a3197f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:46 np0005597378 nova_compute[238941]: 2026-01-27 13:51:46.943 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/245511527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.175 238945 DEBUG oslo_concurrency.processutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.180 238945 DEBUG nova.compute.provider_tree [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:51:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Jan 27 08:51:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.195 238945 DEBUG nova.scheduler.client.report [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:47 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.218 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.220 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.247 238945 INFO nova.scheduler.client.report [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Deleted allocations for instance b17763fd-bf68-45e0-84a4-579e1453d6cc#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.310 238945 DEBUG oslo_concurrency.processutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.359 238945 DEBUG oslo_concurrency.lockutils [None req-9ad4cb19-157f-4d08-ac26-69e9f12f08f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.362 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.362 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.362 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "b17763fd-bf68-45e0-84a4-579e1453d6cc" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.675 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.675 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.675 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.675 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.676 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.677 238945 INFO nova.compute.manager [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Terminating instance#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.677 238945 DEBUG nova.compute.manager [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:51:47 np0005597378 kernel: tap15ed6f57-c4 (unregistering): left promiscuous mode
Jan 27 08:51:47 np0005597378 NetworkManager[48904]: <info>  [1769521907.7428] device (tap15ed6f57-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:51:47 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:47Z|00488|binding|INFO|Releasing lport 15ed6f57-c44c-4ee6-a349-3a8efc982101 from this chassis (sb_readonly=0)
Jan 27 08:51:47 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:47Z|00489|binding|INFO|Setting lport 15ed6f57-c44c-4ee6-a349-3a8efc982101 down in Southbound
Jan 27 08:51:47 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:47Z|00490|binding|INFO|Removing iface tap15ed6f57-c4 ovn-installed in OVS
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.751 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.761 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:33:f8 10.100.0.14'], port_security=['fa:16:3e:99:33:f8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '73a36ce7-38f6-4b8c-a3b7-bc84ad632778', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f3982b-0a1e-4454-92cd-6be83c00fc3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=15ed6f57-c44c-4ee6-a349-3a8efc982101) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.762 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 15ed6f57-c44c-4ee6-a349-3a8efc982101 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a unbound from our chassis#033[00m
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.764 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25155fe5-3d99-4510-9613-2ca9c8acc75a#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.784 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aef8857e-7f67-4cd6-b0a3-758b42c37d62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:47 np0005597378 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Jan 27 08:51:47 np0005597378 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002e.scope: Consumed 15.853s CPU time.
Jan 27 08:51:47 np0005597378 systemd-machined[207425]: Machine qemu-52-instance-0000002e terminated.
Jan 27 08:51:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:51:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:51:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:51:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.818 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[241bc54b-529f-47e2-be6d-505412a728d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:51:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.822 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d168b4-faf7-4626-814c-8ca8f2767dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.850 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed68b44-be43-4e17-a210-7ee70e324b13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/762189676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.869 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1efcd016-837e-4b4c-a3a0-230d8d5ffa47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25155fe5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:48:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438190, 'reachable_time': 24336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289564, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.894 238945 DEBUG oslo_concurrency.processutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.899 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[efec556c-41a4-4662-9655-226da4bc4b7c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438203, 'tstamp': 438203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289567, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap25155fe5-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438206, 'tstamp': 438206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289567, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.900 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.902 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.910 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25155fe5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.910 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.910 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25155fe5-30, col_values=(('external_ids', {'iface-id': '9be77910-ec7e-4258-ab0d-6b93cc735b2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:47.911 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.909 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.914 238945 DEBUG nova.compute.provider_tree [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.920 238945 INFO nova.virt.libvirt.driver [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Instance destroyed successfully.#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.921 238945 DEBUG nova.objects.instance [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'resources' on Instance uuid 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.939 238945 DEBUG nova.scheduler.client.report [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.943 238945 DEBUG nova.virt.libvirt.vif [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:50:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2022258544',display_name='tempest-ServerActionsTestOtherB-server-2022258544',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2022258544',id=46,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:50:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-v8la0m7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:50:19Z,user_data=None,user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=73a36ce7-38f6-4b8c-a3b7-bc84ad632778,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.944 238945 DEBUG nova.network.os_vif_util [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "address": "fa:16:3e:99:33:f8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15ed6f57-c4", "ovs_interfaceid": "15ed6f57-c44c-4ee6-a349-3a8efc982101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.944 238945 DEBUG nova.network.os_vif_util [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.945 238945 DEBUG os_vif [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.949 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15ed6f57-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.952 238945 DEBUG nova.compute.manager [req-43cda014-7226-4cb5-98a2-5dd479f95be6 req-e41d7c15-d99e-4f35-91fd-0ac2c00cba00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Received event network-vif-deleted-89a5b6ba-141b-45b8-b1ea-fc2a60970931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.952 238945 DEBUG nova.compute.manager [req-43cda014-7226-4cb5-98a2-5dd479f95be6 req-e41d7c15-d99e-4f35-91fd-0ac2c00cba00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Received event network-vif-deleted-d7c86f5b-f6e4-4637-9ff2-1d6007449737 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.955 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.958 238945 INFO os_vif [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:33:f8,bridge_name='br-int',has_traffic_filtering=True,id=15ed6f57-c44c-4ee6-a349-3a8efc982101,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15ed6f57-c4')#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.978 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:47 np0005597378 nova_compute[238941]: 2026-01-27 13:51:47.981 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.013 238945 INFO nova.scheduler.client.report [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Deleted allocations for instance 7e8705e9-4e86-44aa-b532-55fcccac542c#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.069 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.069 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e053f779-294f-4782-bb33-a14e40753795 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.069 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.070 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.102 238945 DEBUG oslo_concurrency.lockutils [None req-4139372d-c700-428a-a018-a8abfb138463 2731f35d38de444e8d3fac25a4164453 14aa89c69a294999aab63771025b995a - - default default] Lock "7e8705e9-4e86-44aa-b532-55fcccac542c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.126 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 220 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 6.1 MiB/s wr, 404 op/s
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.357 238945 INFO nova.virt.libvirt.driver [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Deleting instance files /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778_del#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.361 238945 INFO nova.virt.libvirt.driver [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Deletion of /var/lib/nova/instances/73a36ce7-38f6-4b8c-a3b7-bc84ad632778_del complete#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.402 238945 INFO nova.compute.manager [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.402 238945 DEBUG oslo.service.loopingcall [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.403 238945 DEBUG nova.compute.manager [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.403 238945 DEBUG nova.network.neutron [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:51:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506738362' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.701 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.706 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.727 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.754 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:51:48 np0005597378 nova_compute[238941]: 2026-01-27 13:51:48.754 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:49 np0005597378 nova_compute[238941]: 2026-01-27 13:51:49.304 238945 DEBUG nova.network.neutron [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:49 np0005597378 nova_compute[238941]: 2026-01-27 13:51:49.329 238945 INFO nova.compute.manager [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Took 0.93 seconds to deallocate network for instance.#033[00m
Jan 27 08:51:49 np0005597378 nova_compute[238941]: 2026-01-27 13:51:49.370 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:49 np0005597378 nova_compute[238941]: 2026-01-27 13:51:49.370 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:49 np0005597378 nova_compute[238941]: 2026-01-27 13:51:49.432 238945 DEBUG oslo_concurrency.processutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:49Z|00491|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 08:51:49 np0005597378 nova_compute[238941]: 2026-01-27 13:51:49.702 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:49 np0005597378 nova_compute[238941]: 2026-01-27 13:51:49.749 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:51:49 np0005597378 nova_compute[238941]: 2026-01-27 13:51:49.749 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:51:49 np0005597378 nova_compute[238941]: 2026-01-27 13:51:49.749 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:51:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:49Z|00492|binding|INFO|Releasing lport 9be77910-ec7e-4258-ab0d-6b93cc735b2a from this chassis (sb_readonly=0)
Jan 27 08:51:49 np0005597378 nova_compute[238941]: 2026-01-27 13:51:49.906 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:51:49 np0005597378 nova_compute[238941]: 2026-01-27 13:51:49.906 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:51:49 np0005597378 nova_compute[238941]: 2026-01-27 13:51:49.906 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 08:51:49 np0005597378 nova_compute[238941]: 2026-01-27 13:51:49.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3381154972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.045 238945 DEBUG nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received event network-vif-unplugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.046 238945 DEBUG oslo_concurrency.lockutils [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.046 238945 DEBUG oslo_concurrency.lockutils [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.046 238945 DEBUG oslo_concurrency.lockutils [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.046 238945 DEBUG nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] No waiting events found dispatching network-vif-unplugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.046 238945 WARNING nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received unexpected event network-vif-unplugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.047 238945 DEBUG nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.047 238945 DEBUG oslo_concurrency.lockutils [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.047 238945 DEBUG oslo_concurrency.lockutils [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.047 238945 DEBUG oslo_concurrency.lockutils [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.047 238945 DEBUG nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] No waiting events found dispatching network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.047 238945 WARNING nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received unexpected event network-vif-plugged-15ed6f57-c44c-4ee6-a349-3a8efc982101 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.048 238945 DEBUG nova.compute.manager [req-df42d39c-8e9e-46bd-971e-7125e533c6ca req-9c3ea239-04e1-479d-9119-9caa12dfcad5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Received event network-vif-deleted-15ed6f57-c44c-4ee6-a349-3a8efc982101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.066 238945 DEBUG oslo_concurrency.processutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.073 238945 DEBUG nova.compute.provider_tree [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.091 238945 DEBUG nova.scheduler.client.report [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.127 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.158 238945 INFO nova.scheduler.client.report [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Deleted allocations for instance 73a36ce7-38f6-4b8c-a3b7-bc84ad632778#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.171 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.248 238945 DEBUG oslo_concurrency.lockutils [None req-63944ded-de6f-4e8f-be1c-3d4340a62528 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "73a36ce7-38f6-4b8c-a3b7-bc84ad632778" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 158 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 6.1 MiB/s wr, 460 op/s
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.539 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.539 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.540 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.541 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.541 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.542 238945 INFO nova.compute.manager [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Terminating instance#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.543 238945 DEBUG nova.compute.manager [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:51:50 np0005597378 kernel: tapceb7b09e-b6 (unregistering): left promiscuous mode
Jan 27 08:51:50 np0005597378 NetworkManager[48904]: <info>  [1769521910.5796] device (tapceb7b09e-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:51:50 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:50Z|00493|binding|INFO|Releasing lport ceb7b09e-b635-4570-bcf2-a08115d41365 from this chassis (sb_readonly=0)
Jan 27 08:51:50 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:50Z|00494|binding|INFO|Setting lport ceb7b09e-b635-4570-bcf2-a08115d41365 down in Southbound
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:50 np0005597378 ovn_controller[144812]: 2026-01-27T13:51:50Z|00495|binding|INFO|Removing iface tapceb7b09e-b6 ovn-installed in OVS
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.615 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:be:d8 10.100.0.7'], port_security=['fa:16:3e:ad:be:d8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e053f779-294f-4782-bb33-a14e40753795', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89715d52c38241dbb1fdcc016ede5d3c', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'a0c34526-a874-4960-805d-36c3b59e9c05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85e669ce-9410-46ed-abaf-db841ce91264, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ceb7b09e-b635-4570-bcf2-a08115d41365) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.616 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ceb7b09e-b635-4570-bcf2-a08115d41365 in datapath 25155fe5-3d99-4510-9613-2ca9c8acc75a unbound from our chassis#033[00m
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.617 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 25155fe5-3d99-4510-9613-2ca9c8acc75a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.618 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b195158d-7591-4242-8076-7a6fd554da5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.619 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a namespace which is not needed anymore#033[00m
Jan 27 08:51:50 np0005597378 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Jan 27 08:51:50 np0005597378 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000002a.scope: Consumed 12.809s CPU time.
Jan 27 08:51:50 np0005597378 systemd-machined[207425]: Machine qemu-62-instance-0000002a terminated.
Jan 27 08:51:50 np0005597378 podman[289643]: 2026-01-27 13:51:50.688588865 +0000 UTC m=+0.083981987 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:51:50 np0005597378 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [NOTICE]   (278693) : haproxy version is 2.8.14-c23fe91
Jan 27 08:51:50 np0005597378 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [NOTICE]   (278693) : path to executable is /usr/sbin/haproxy
Jan 27 08:51:50 np0005597378 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [WARNING]  (278693) : Exiting Master process...
Jan 27 08:51:50 np0005597378 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [ALERT]    (278693) : Current worker (278695) exited with code 143 (Terminated)
Jan 27 08:51:50 np0005597378 neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a[278689]: [WARNING]  (278693) : All workers exited. Exiting... (0)
Jan 27 08:51:50 np0005597378 systemd[1]: libpod-0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed.scope: Deactivated successfully.
Jan 27 08:51:50 np0005597378 podman[289689]: 2026-01-27 13:51:50.754449273 +0000 UTC m=+0.045659377 container died 0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 08:51:50 np0005597378 NetworkManager[48904]: <info>  [1769521910.7616] manager: (tapceb7b09e-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.779 238945 INFO nova.virt.libvirt.driver [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Instance destroyed successfully.#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.781 238945 DEBUG nova.objects.instance [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lazy-loading 'resources' on Instance uuid e053f779-294f-4782-bb33-a14e40753795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:51:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed-userdata-shm.mount: Deactivated successfully.
Jan 27 08:51:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay-209ad30fbdfa4bb4da330a1cc3224f8070b7b1cb9b8d8a757768c254258b37b3-merged.mount: Deactivated successfully.
Jan 27 08:51:50 np0005597378 podman[289689]: 2026-01-27 13:51:50.795023983 +0000 UTC m=+0.086234077 container cleanup 0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 08:51:50 np0005597378 systemd[1]: libpod-conmon-0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed.scope: Deactivated successfully.
Jan 27 08:51:50 np0005597378 podman[289729]: 2026-01-27 13:51:50.874605389 +0000 UTC m=+0.049313255 container remove 0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.876 238945 DEBUG nova.virt.libvirt.vif [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T13:48:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1638292425',display_name='tempest-ServerActionsTestOtherB-server-1638292425',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1638292425',id=42,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDP03C0DYkDkDM16rv5xyWrKTfQIVUT5qLMxRMlYzm8hHmeSnMZhV7Wff2liK7vQEs3cYnPwrKMCJRSQi2claQqUZb9ipt64IX/AxK1O0DzECaHBkBTMxxg75MbSwKsocA==',key_name='tempest-keypair-848214420',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:51:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='89715d52c38241dbb1fdcc016ede5d3c',ramdisk_id='',reservation_id='r-dk0ibvk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1311443694',owner_user_name='tempest-ServerActionsTestOtherB-1311443694-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:51:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11a9e491e7f24607aa5d3d710b6607ab',uuid=e053f779-294f-4782-bb33-a14e40753795,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.877 238945 DEBUG nova.network.os_vif_util [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converting VIF {"id": "ceb7b09e-b635-4570-bcf2-a08115d41365", "address": "fa:16:3e:ad:be:d8", "network": {"id": "25155fe5-3d99-4510-9613-2ca9c8acc75a", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1899473384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89715d52c38241dbb1fdcc016ede5d3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb7b09e-b6", "ovs_interfaceid": "ceb7b09e-b635-4570-bcf2-a08115d41365", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.877 238945 DEBUG nova.network.os_vif_util [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.878 238945 DEBUG os_vif [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.879 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.880 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapceb7b09e-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.881 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.881 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[478a937d-a95d-44b1-96e8-9c5fc1055a72]: (4, ('Tue Jan 27 01:51:50 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a (0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed)\n0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed\nTue Jan 27 01:51:50 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a (0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed)\n0decac4e90f904cb41cd6b78fcc32d2e0966a7e75eaf44362e492d985546f9ed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.883 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.883 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[368b6dc8-d23a-405d-b4fa-f7512e4ca2db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.884 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25155fe5-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:50 np0005597378 kernel: tap25155fe5-30: left promiscuous mode
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.887 238945 INFO os_vif [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=ceb7b09e-b635-4570-bcf2-a08115d41365,network=Network(25155fe5-3d99-4510-9613-2ca9c8acc75a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb7b09e-b6')#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.904 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.906 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8032ff49-8e96-421f-bea1-22eb95224881]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.921 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7fe749-9791-40c0-a8a5-8b46d6687470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f3eb95e9-f03b-4217-9321-1d81d3603163]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.946 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f3988213-151c-4b27-aad1-b7b49beea9eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438182, 'reachable_time': 38908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289764, 'error': None, 'target': 'ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:50 np0005597378 systemd[1]: run-netns-ovnmeta\x2d25155fe5\x2d3d99\x2d4510\x2d9613\x2d2ca9c8acc75a.mount: Deactivated successfully.
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.949 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-25155fe5-3d99-4510-9613-2ca9c8acc75a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:51:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:51:50.950 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[dce5c8f1-1e90-489d-a6a7-f15473e049df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:51:50 np0005597378 nova_compute[238941]: 2026-01-27 13:51:50.982 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:51 np0005597378 nova_compute[238941]: 2026-01-27 13:51:51.112 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-73a36ce7-38f6-4b8c-a3b7-bc84ad632778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:51:51 np0005597378 nova_compute[238941]: 2026-01-27 13:51:51.112 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 08:51:51 np0005597378 nova_compute[238941]: 2026-01-27 13:51:51.113 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:51:51 np0005597378 nova_compute[238941]: 2026-01-27 13:51:51.113 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:51:51 np0005597378 nova_compute[238941]: 2026-01-27 13:51:51.113 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:51:51 np0005597378 nova_compute[238941]: 2026-01-27 13:51:51.202 238945 INFO nova.virt.libvirt.driver [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deleting instance files /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795_del#033[00m
Jan 27 08:51:51 np0005597378 nova_compute[238941]: 2026-01-27 13:51:51.203 238945 INFO nova.virt.libvirt.driver [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Deletion of /var/lib/nova/instances/e053f779-294f-4782-bb33-a14e40753795_del complete#033[00m
Jan 27 08:51:51 np0005597378 nova_compute[238941]: 2026-01-27 13:51:51.274 238945 INFO nova.compute.manager [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:51:51 np0005597378 nova_compute[238941]: 2026-01-27 13:51:51.275 238945 DEBUG oslo.service.loopingcall [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:51:51 np0005597378 nova_compute[238941]: 2026-01-27 13:51:51.275 238945 DEBUG nova.compute.manager [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:51:51 np0005597378 nova_compute[238941]: 2026-01-27 13:51:51.276 238945 DEBUG nova.network.neutron [-] [instance: e053f779-294f-4782-bb33-a14e40753795] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:51:51 np0005597378 podman[289766]: 2026-01-27 13:51:51.717381573 +0000 UTC m=+0.055921843 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Jan 27 08:51:51 np0005597378 nova_compute[238941]: 2026-01-27 13:51:51.945 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:51:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Jan 27 08:51:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Jan 27 08:51:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Jan 27 08:51:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 158 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 6.7 KiB/s wr, 109 op/s
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.561 238945 DEBUG nova.compute.manager [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-unplugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.562 238945 DEBUG oslo_concurrency.lockutils [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.562 238945 DEBUG oslo_concurrency.lockutils [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.562 238945 DEBUG oslo_concurrency.lockutils [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.562 238945 DEBUG nova.compute.manager [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] No waiting events found dispatching network-vif-unplugged-ceb7b09e-b635-4570-bcf2-a08115d41365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.562 238945 DEBUG nova.compute.manager [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-unplugged-ceb7b09e-b635-4570-bcf2-a08115d41365 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.562 238945 DEBUG nova.compute.manager [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.563 238945 DEBUG oslo_concurrency.lockutils [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e053f779-294f-4782-bb33-a14e40753795-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.563 238945 DEBUG oslo_concurrency.lockutils [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.563 238945 DEBUG oslo_concurrency.lockutils [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.563 238945 DEBUG nova.compute.manager [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] No waiting events found dispatching network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.563 238945 WARNING nova.compute.manager [req-59ef0677-e880-41d4-abb1-641a3250149b req-9f9efb9e-2d2d-490d-914b-6fdc225f3a2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received unexpected event network-vif-plugged-ceb7b09e-b635-4570-bcf2-a08115d41365 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:51:52 np0005597378 nova_compute[238941]: 2026-01-27 13:51:52.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:53 np0005597378 nova_compute[238941]: 2026-01-27 13:51:53.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:53 np0005597378 nova_compute[238941]: 2026-01-27 13:51:53.412 238945 DEBUG nova.network.neutron [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:51:53 np0005597378 nova_compute[238941]: 2026-01-27 13:51:53.432 238945 INFO nova.compute.manager [-] [instance: e053f779-294f-4782-bb33-a14e40753795] Took 2.16 seconds to deallocate network for instance.#033[00m
Jan 27 08:51:53 np0005597378 nova_compute[238941]: 2026-01-27 13:51:53.482 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:53 np0005597378 nova_compute[238941]: 2026-01-27 13:51:53.483 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:53 np0005597378 nova_compute[238941]: 2026-01-27 13:51:53.533 238945 DEBUG oslo_concurrency.processutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3232959847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:54 np0005597378 nova_compute[238941]: 2026-01-27 13:51:54.127 238945 DEBUG oslo_concurrency.processutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:54 np0005597378 nova_compute[238941]: 2026-01-27 13:51:54.134 238945 DEBUG nova.compute.provider_tree [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:54 np0005597378 nova_compute[238941]: 2026-01-27 13:51:54.158 238945 DEBUG nova.scheduler.client.report [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:54 np0005597378 nova_compute[238941]: 2026-01-27 13:51:54.179 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:54 np0005597378 nova_compute[238941]: 2026-01-27 13:51:54.212 238945 INFO nova.scheduler.client.report [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Deleted allocations for instance e053f779-294f-4782-bb33-a14e40753795#033[00m
Jan 27 08:51:54 np0005597378 nova_compute[238941]: 2026-01-27 13:51:54.269 238945 DEBUG oslo_concurrency.lockutils [None req-ca403d1a-aecf-438b-b219-b171901df664 11a9e491e7f24607aa5d3d710b6607ab 89715d52c38241dbb1fdcc016ede5d3c - - default default] Lock "e053f779-294f-4782-bb33-a14e40753795" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 74 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 7.4 KiB/s wr, 125 op/s
Jan 27 08:51:54 np0005597378 nova_compute[238941]: 2026-01-27 13:51:54.754 238945 DEBUG nova.compute.manager [req-1d0184cd-89b6-4279-9164-ac12a0a8ffdf req-36a27428-b2b4-4a03-a97a-cbdaa6407543 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e053f779-294f-4782-bb33-a14e40753795] Received event network-vif-deleted-ceb7b09e-b635-4570-bcf2-a08115d41365 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:51:55 np0005597378 nova_compute[238941]: 2026-01-27 13:51:55.004 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521900.002358, 17b9acbe-02b3-41d7-af4b-fd8b3d902d47 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:55 np0005597378 nova_compute[238941]: 2026-01-27 13:51:55.004 238945 INFO nova.compute.manager [-] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:51:55 np0005597378 nova_compute[238941]: 2026-01-27 13:51:55.024 238945 DEBUG nova.compute.manager [None req-b8d3a27a-ee9e-438e-b44a-12dd68f42f31 - - - - - -] [instance: 17b9acbe-02b3-41d7-af4b-fd8b3d902d47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:55 np0005597378 nova_compute[238941]: 2026-01-27 13:51:55.032 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521900.0311208, b17763fd-bf68-45e0-84a4-579e1453d6cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:55 np0005597378 nova_compute[238941]: 2026-01-27 13:51:55.032 238945 INFO nova.compute.manager [-] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:51:55 np0005597378 nova_compute[238941]: 2026-01-27 13:51:55.058 238945 DEBUG nova.compute.manager [None req-ac678de3-2b28-4c16-8c62-6f38ec9e7211 - - - - - -] [instance: b17763fd-bf68-45e0-84a4-579e1453d6cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:55 np0005597378 nova_compute[238941]: 2026-01-27 13:51:55.883 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 6.9 KiB/s wr, 126 op/s
Jan 27 08:51:56 np0005597378 nova_compute[238941]: 2026-01-27 13:51:56.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:51:56 np0005597378 nova_compute[238941]: 2026-01-27 13:51:56.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:51:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:51:57 np0005597378 nova_compute[238941]: 2026-01-27 13:51:57.729 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521902.7249506, 4316dbd4-e3b9-4411-b921-6dbdd5a3197f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:57 np0005597378 nova_compute[238941]: 2026-01-27 13:51:57.729 238945 INFO nova.compute.manager [-] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:51:57 np0005597378 nova_compute[238941]: 2026-01-27 13:51:57.867 238945 DEBUG nova.compute.manager [None req-79f53d9f-73d3-4c52-b216-b5523dc9d1df - - - - - -] [instance: 4316dbd4-e3b9-4411-b921-6dbdd5a3197f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:58 np0005597378 nova_compute[238941]: 2026-01-27 13:51:58.197 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521903.195577, 7e8705e9-4e86-44aa-b532-55fcccac542c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:51:58 np0005597378 nova_compute[238941]: 2026-01-27 13:51:58.197 238945 INFO nova.compute.manager [-] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:51:58 np0005597378 nova_compute[238941]: 2026-01-27 13:51:58.231 238945 DEBUG nova.compute.manager [None req-7bab8798-10f4-4030-b716-a36bf0f6098d - - - - - -] [instance: 7e8705e9-4e86-44aa-b532-55fcccac542c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:51:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.7 KiB/s wr, 89 op/s
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.016 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.016 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.043 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.117 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.118 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.126 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.127 238945 INFO nova.compute.claims [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.259 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:51:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:51:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/858398045' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:51:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:51:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/858398045' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:51:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:51:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489490435' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.820 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.825 238945 DEBUG nova.compute.provider_tree [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.846 238945 DEBUG nova.scheduler.client.report [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.869 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.869 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.911 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.911 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.936 238945 INFO nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:51:59 np0005597378 nova_compute[238941]: 2026-01-27 13:51:59.957 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.052 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.053 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.054 238945 INFO nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Creating image(s)#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.075 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.097 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.119 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.124 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.195 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.196 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.196 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.196 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.217 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.221 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 45 op/s
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.512 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.567 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] resizing rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.647 238945 DEBUG nova.policy [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ba812648bec43bbbd7489f6c33289cc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.658 238945 DEBUG nova.objects.instance [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lazy-loading 'migration_context' on Instance uuid 3a36add6-8f5a-4197-ba24-f5c29b83301e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.764 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.765 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Ensure instance console log exists: /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.765 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.765 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.766 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:00 np0005597378 nova_compute[238941]: 2026-01-27 13:52:00.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:01 np0005597378 nova_compute[238941]: 2026-01-27 13:52:01.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Jan 27 08:52:02 np0005597378 nova_compute[238941]: 2026-01-27 13:52:02.543 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Successfully created port: 424bfede-9a65-4656-87bc-4e1c9124e547 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:52:02 np0005597378 nova_compute[238941]: 2026-01-27 13:52:02.918 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521907.9173343, 73a36ce7-38f6-4b8c-a3b7-bc84ad632778 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:02 np0005597378 nova_compute[238941]: 2026-01-27 13:52:02.918 238945 INFO nova.compute.manager [-] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:52:02 np0005597378 nova_compute[238941]: 2026-01-27 13:52:02.951 238945 DEBUG nova.compute.manager [None req-de774998-40f7-4134-af3e-ec1cfa156a07 - - - - - -] [instance: 73a36ce7-38f6-4b8c-a3b7-bc84ad632778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:03 np0005597378 nova_compute[238941]: 2026-01-27 13:52:03.841 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "dca5ed40-f98a-4b4f-84be-dcc966896524" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:03 np0005597378 nova_compute[238941]: 2026-01-27 13:52:03.841 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "dca5ed40-f98a-4b4f-84be-dcc966896524" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:03 np0005597378 nova_compute[238941]: 2026-01-27 13:52:03.859 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:52:03 np0005597378 nova_compute[238941]: 2026-01-27 13:52:03.930 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:03 np0005597378 nova_compute[238941]: 2026-01-27 13:52:03.930 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:03 np0005597378 nova_compute[238941]: 2026-01-27 13:52:03.937 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:52:03 np0005597378 nova_compute[238941]: 2026-01-27 13:52:03.938 238945 INFO nova.compute.claims [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.060 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.252 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Successfully created port: 6fe4867f-99bb-4272-90ef-56a425b07f13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:52:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 86 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 MiB/s wr, 53 op/s
Jan 27 08:52:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:52:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1202252938' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.682 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.688 238945 DEBUG nova.compute.provider_tree [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.707 238945 DEBUG nova.scheduler.client.report [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.751 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.752 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.803 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.818 238945 INFO nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.842 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.933 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.935 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.935 238945 INFO nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Creating image(s)#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.955 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:04 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.978 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:04.999 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.002 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.073 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.074 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.074 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.075 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.097 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.101 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dca5ed40-f98a-4b4f-84be-dcc966896524_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.431 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dca5ed40-f98a-4b4f-84be-dcc966896524_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.519 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] resizing rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.617 238945 DEBUG nova.objects.instance [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lazy-loading 'migration_context' on Instance uuid dca5ed40-f98a-4b4f-84be-dcc966896524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.640 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.641 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Ensure instance console log exists: /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.641 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.641 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.642 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.643 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.648 238945 WARNING nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.652 238945 DEBUG nova.virt.libvirt.host [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.652 238945 DEBUG nova.virt.libvirt.host [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.655 238945 DEBUG nova.virt.libvirt.host [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.656 238945 DEBUG nova.virt.libvirt.host [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.656 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.656 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.657 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.657 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.657 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.657 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.658 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.658 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.658 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.658 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.659 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.659 238945 DEBUG nova.virt.hardware [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.662 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.777 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521910.776243, e053f779-294f-4782-bb33-a14e40753795 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.778 238945 INFO nova.compute.manager [-] [instance: e053f779-294f-4782-bb33-a14e40753795] VM Stopped (Lifecycle Event)
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.806 238945 DEBUG nova.compute.manager [None req-14cd2a3e-284f-42ca-a05c-ce8f8872323a - - - - - -] [instance: e053f779-294f-4782-bb33-a14e40753795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:52:05 np0005597378 nova_compute[238941]: 2026-01-27 13:52:05.889 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:52:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:52:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1674534191' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:52:06 np0005597378 nova_compute[238941]: 2026-01-27 13:52:06.240 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:52:06 np0005597378 nova_compute[238941]: 2026-01-27 13:52:06.271 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:52:06 np0005597378 nova_compute[238941]: 2026-01-27 13:52:06.277 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:52:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 88 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 27 08:52:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:52:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1674936149' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:52:06 np0005597378 nova_compute[238941]: 2026-01-27 13:52:06.857 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:52:06 np0005597378 nova_compute[238941]: 2026-01-27 13:52:06.859 238945 DEBUG nova.objects.instance [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lazy-loading 'pci_devices' on Instance uuid dca5ed40-f98a-4b4f-84be-dcc966896524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:52:06 np0005597378 nova_compute[238941]: 2026-01-27 13:52:06.875 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  <uuid>dca5ed40-f98a-4b4f-84be-dcc966896524</uuid>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  <name>instance-00000038</name>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersAaction247Test-server-702335819</nova:name>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:52:05</nova:creationTime>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:        <nova:user uuid="a5bbdc8b33bc4c3e9b558b5b1c007e9f">tempest-ServersAaction247Test-29006437-project-member</nova:user>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:        <nova:project uuid="1a21c3e6617a46f5a76a074e7d40140a">tempest-ServersAaction247Test-29006437</nova:project>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <entry name="serial">dca5ed40-f98a-4b4f-84be-dcc966896524</entry>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <entry name="uuid">dca5ed40-f98a-4b4f-84be-dcc966896524</entry>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/dca5ed40-f98a-4b4f-84be-dcc966896524_disk">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/dca5ed40-f98a-4b4f-84be-dcc966896524_disk.config">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/console.log" append="off"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:52:06 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:52:06 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:52:06 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:52:06 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 08:52:06 np0005597378 nova_compute[238941]: 2026-01-27 13:52:06.952 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.132 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.132 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.133 238945 INFO nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Using config drive
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.156 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:52:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.377 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Successfully updated port: 424bfede-9a65-4656-87bc-4e1c9124e547 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.409 238945 INFO nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Creating config drive at /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/disk.config
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.413 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ex1y0r4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.532 238945 DEBUG nova.compute.manager [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-changed-424bfede-9a65-4656-87bc-4e1c9124e547 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.532 238945 DEBUG nova.compute.manager [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Refreshing instance network info cache due to event network-changed-424bfede-9a65-4656-87bc-4e1c9124e547. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.532 238945 DEBUG oslo_concurrency.lockutils [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.533 238945 DEBUG oslo_concurrency.lockutils [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.533 238945 DEBUG nova.network.neutron [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Refreshing network info cache for port 424bfede-9a65-4656-87bc-4e1c9124e547 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.557 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ex1y0r4" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.584 238945 DEBUG nova.storage.rbd_utils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] rbd image dca5ed40-f98a-4b4f-84be-dcc966896524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.587 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/disk.config dca5ed40-f98a-4b4f-84be-dcc966896524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.728 238945 DEBUG oslo_concurrency.processutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/disk.config dca5ed40-f98a-4b4f-84be-dcc966896524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.729 238945 INFO nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Deleting local config drive /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524/disk.config because it was imported into RBD.
Jan 27 08:52:07 np0005597378 systemd-machined[207425]: New machine qemu-63-instance-00000038.
Jan 27 08:52:07 np0005597378 systemd[1]: Started Virtual Machine qemu-63-instance-00000038.
Jan 27 08:52:07 np0005597378 nova_compute[238941]: 2026-01-27 13:52:07.797 238945 DEBUG nova.network.neutron [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.334 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521928.3341548, dca5ed40-f98a-4b4f-84be-dcc966896524 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.335 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] VM Resumed (Lifecycle Event)
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.337 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.338 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.344 238945 INFO nova.virt.libvirt.driver [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Instance spawned successfully.
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.345 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.365 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:52:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 103 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.4 MiB/s wr, 31 op/s
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.369 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.379 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.380 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.380 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.381 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.381 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.381 238945 DEBUG nova.virt.libvirt.driver [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.387 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.388 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521928.334261, dca5ed40-f98a-4b4f-84be-dcc966896524 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.388 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] VM Started (Lifecycle Event)
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.417 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.422 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.441 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.452 238945 INFO nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Took 3.52 seconds to spawn the instance on the hypervisor.
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.452 238945 DEBUG nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:52:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:08.498 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:52:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:08.499 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.507 238945 INFO nova.compute.manager [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Took 4.60 seconds to build instance.
Jan 27 08:52:08 np0005597378 nova_compute[238941]: 2026-01-27 13:52:08.531 238945 DEBUG oslo_concurrency.lockutils [None req-7184099d-55aa-4a66-bedf-1b6f4c84e390 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "dca5ed40-f98a-4b4f-84be-dcc966896524" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:52:10 np0005597378 nova_compute[238941]: 2026-01-27 13:52:10.097 238945 DEBUG nova.network.neutron [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:52:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Jan 27 08:52:10 np0005597378 nova_compute[238941]: 2026-01-27 13:52:10.846 238945 DEBUG oslo_concurrency.lockutils [req-90378f10-ca1b-4111-b9d9-6f2a1ee458af req-dccb24f9-dae2-4b5b-a8a0-5e581d449173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:52:10 np0005597378 nova_compute[238941]: 2026-01-27 13:52:10.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:52:11 np0005597378 nova_compute[238941]: 2026-01-27 13:52:11.644 238945 DEBUG nova.compute.manager [None req-118e2d8d-1510-4f5c-93ab-2051e20459f3 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:52:11 np0005597378 nova_compute[238941]: 2026-01-27 13:52:11.923 238945 INFO nova.compute.manager [None req-118e2d8d-1510-4f5c-93ab-2051e20459f3 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] instance snapshotting
Jan 27 08:52:11 np0005597378 nova_compute[238941]: 2026-01-27 13:52:11.924 238945 DEBUG nova.objects.instance [None req-118e2d8d-1510-4f5c-93ab-2051e20459f3 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lazy-loading 'flavor' on Instance uuid dca5ed40-f98a-4b4f-84be-dcc966896524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:52:11 np0005597378 nova_compute[238941]: 2026-01-27 13:52:11.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:52:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Jan 27 08:52:12 np0005597378 nova_compute[238941]: 2026-01-27 13:52:12.662 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "dca5ed40-f98a-4b4f-84be-dcc966896524" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:52:12 np0005597378 nova_compute[238941]: 2026-01-27 13:52:12.663 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "dca5ed40-f98a-4b4f-84be-dcc966896524" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:52:12 np0005597378 nova_compute[238941]: 2026-01-27 13:52:12.663 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "dca5ed40-f98a-4b4f-84be-dcc966896524-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:52:12 np0005597378 nova_compute[238941]: 2026-01-27 13:52:12.664 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "dca5ed40-f98a-4b4f-84be-dcc966896524-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:52:12 np0005597378 nova_compute[238941]: 2026-01-27 13:52:12.664 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "dca5ed40-f98a-4b4f-84be-dcc966896524-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:52:12 np0005597378 nova_compute[238941]: 2026-01-27 13:52:12.665 238945 INFO nova.compute.manager [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Terminating instance
Jan 27 08:52:12 np0005597378 nova_compute[238941]: 2026-01-27 13:52:12.666 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "refresh_cache-dca5ed40-f98a-4b4f-84be-dcc966896524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:52:12 np0005597378 nova_compute[238941]: 2026-01-27 13:52:12.667 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquired lock "refresh_cache-dca5ed40-f98a-4b4f-84be-dcc966896524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:52:12 np0005597378 nova_compute[238941]: 2026-01-27 13:52:12.667 238945 DEBUG nova.network.neutron [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 08:52:13 np0005597378 nova_compute[238941]: 2026-01-27 13:52:13.018 238945 INFO nova.virt.libvirt.driver [None req-118e2d8d-1510-4f5c-93ab-2051e20459f3 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Beginning live snapshot process
Jan 27 08:52:13 np0005597378 nova_compute[238941]: 2026-01-27 13:52:13.107 238945 DEBUG nova.network.neutron [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 08:52:13 np0005597378 nova_compute[238941]: 2026-01-27 13:52:13.254 238945 DEBUG nova.compute.manager [None req-118e2d8d-1510-4f5c-93ab-2051e20459f3 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Jan 27 08:52:13 np0005597378 nova_compute[238941]: 2026-01-27 13:52:13.412 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Successfully updated port: 6fe4867f-99bb-4272-90ef-56a425b07f13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 08:52:14 np0005597378 nova_compute[238941]: 2026-01-27 13:52:14.223 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:52:14 np0005597378 nova_compute[238941]: 2026-01-27 13:52:14.224 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquired lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:52:14 np0005597378 nova_compute[238941]: 2026-01-27 13:52:14.224 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 08:52:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Jan 27 08:52:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:14.501 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:52:14 np0005597378 nova_compute[238941]: 2026-01-27 13:52:14.805 238945 DEBUG nova.network.neutron [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:52:15 np0005597378 nova_compute[238941]: 2026-01-27 13:52:15.040 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Releasing lock "refresh_cache-dca5ed40-f98a-4b4f-84be-dcc966896524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:52:15 np0005597378 nova_compute[238941]: 2026-01-27 13:52:15.041 238945 DEBUG nova.compute.manager [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 08:52:15 np0005597378 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000038.scope: Deactivated successfully.
Jan 27 08:52:15 np0005597378 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000038.scope: Consumed 7.405s CPU time.
Jan 27 08:52:15 np0005597378 systemd-machined[207425]: Machine qemu-63-instance-00000038 terminated.
Jan 27 08:52:15 np0005597378 nova_compute[238941]: 2026-01-27 13:52:15.262 238945 INFO nova.virt.libvirt.driver [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Instance destroyed successfully.
Jan 27 08:52:15 np0005597378 nova_compute[238941]: 2026-01-27 13:52:15.263 238945 DEBUG nova.objects.instance [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lazy-loading 'resources' on Instance uuid dca5ed40-f98a-4b4f-84be-dcc966896524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:52:15 np0005597378 nova_compute[238941]: 2026-01-27 13:52:15.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:52:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 111 op/s
Jan 27 08:52:16 np0005597378 nova_compute[238941]: 2026-01-27 13:52:16.955 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:52:17
Jan 27 08:52:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:52:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:52:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms']
Jan 27 08:52:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:52:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:52:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:52:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:52:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:52:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:52:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:52:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 08:52:18 np0005597378 nova_compute[238941]: 2026-01-27 13:52:18.730 238945 INFO nova.virt.libvirt.driver [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Deleting instance files /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524_del
Jan 27 08:52:18 np0005597378 nova_compute[238941]: 2026-01-27 13:52:18.732 238945 INFO nova.virt.libvirt.driver [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Deletion of /var/lib/nova/instances/dca5ed40-f98a-4b4f-84be-dcc966896524_del complete
Jan 27 08:52:19 np0005597378 nova_compute[238941]: 2026-01-27 13:52:19.750 238945 INFO nova.compute.manager [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Took 4.71 seconds to destroy the instance on the hypervisor.
Jan 27 08:52:19 np0005597378 nova_compute[238941]: 2026-01-27 13:52:19.750 238945 DEBUG oslo.service.loopingcall [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 08:52:19 np0005597378 nova_compute[238941]: 2026-01-27 13:52:19.751 238945 DEBUG nova.compute.manager [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 08:52:19 np0005597378 nova_compute[238941]: 2026-01-27 13:52:19.751 238945 DEBUG nova.network.neutron [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 08:52:20 np0005597378 nova_compute[238941]: 2026-01-27 13:52:20.361 238945 DEBUG nova.compute.manager [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-changed-6fe4867f-99bb-4272-90ef-56a425b07f13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:52:20 np0005597378 nova_compute[238941]: 2026-01-27 13:52:20.361 238945 DEBUG nova.compute.manager [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Refreshing instance network info cache due to event network-changed-6fe4867f-99bb-4272-90ef-56a425b07f13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 08:52:20 np0005597378 nova_compute[238941]: 2026-01-27 13:52:20.361 238945 DEBUG oslo_concurrency.lockutils [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:52:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 117 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 110 op/s
Jan 27 08:52:20 np0005597378 nova_compute[238941]: 2026-01-27 13:52:20.394 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:52:20 np0005597378 nova_compute[238941]: 2026-01-27 13:52:20.537 238945 DEBUG nova.network.neutron [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:52:20 np0005597378 nova_compute[238941]: 2026-01-27 13:52:20.554 238945 DEBUG nova.network.neutron [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:52:20 np0005597378 nova_compute[238941]: 2026-01-27 13:52:20.577 238945 INFO nova.compute.manager [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Took 0.83 seconds to deallocate network for instance.#033[00m
Jan 27 08:52:20 np0005597378 nova_compute[238941]: 2026-01-27 13:52:20.623 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:20 np0005597378 nova_compute[238941]: 2026-01-27 13:52:20.624 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:20 np0005597378 nova_compute[238941]: 2026-01-27 13:52:20.629 238945 DEBUG nova.compute.manager [None req-118e2d8d-1510-4f5c-93ab-2051e20459f3 a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Jan 27 08:52:20 np0005597378 nova_compute[238941]: 2026-01-27 13:52:20.724 238945 DEBUG oslo_concurrency.processutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:20 np0005597378 nova_compute[238941]: 2026-01-27 13:52:20.895 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:52:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2955412204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:52:21 np0005597378 nova_compute[238941]: 2026-01-27 13:52:21.375 238945 DEBUG oslo_concurrency.processutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:21 np0005597378 nova_compute[238941]: 2026-01-27 13:52:21.383 238945 DEBUG nova.compute.provider_tree [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:52:21 np0005597378 nova_compute[238941]: 2026-01-27 13:52:21.405 238945 DEBUG nova.scheduler.client.report [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:52:21 np0005597378 nova_compute[238941]: 2026-01-27 13:52:21.430 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:21 np0005597378 nova_compute[238941]: 2026-01-27 13:52:21.459 238945 INFO nova.scheduler.client.report [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Deleted allocations for instance dca5ed40-f98a-4b4f-84be-dcc966896524#033[00m
Jan 27 08:52:21 np0005597378 nova_compute[238941]: 2026-01-27 13:52:21.515 238945 DEBUG oslo_concurrency.lockutils [None req-556c9eb4-5cae-42d4-8541-a96edf71b8dc a5bbdc8b33bc4c3e9b558b5b1c007e9f 1a21c3e6617a46f5a76a074e7d40140a - - default default] Lock "dca5ed40-f98a-4b4f-84be-dcc966896524" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:21 np0005597378 podman[290407]: 2026-01-27 13:52:21.772966001 +0000 UTC m=+0.106797959 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:52:21 np0005597378 podman[290433]: 2026-01-27 13:52:21.858528989 +0000 UTC m=+0.051507864 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 08:52:21 np0005597378 nova_compute[238941]: 2026-01-27 13:52:21.958 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 117 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 341 B/s wr, 53 op/s
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.471 238945 DEBUG nova.network.neutron [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Updating instance_info_cache with network_info: [{"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.490 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Releasing lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.490 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Instance network_info: |[{"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.490 238945 DEBUG oslo_concurrency.lockutils [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.491 238945 DEBUG nova.network.neutron [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Refreshing network info cache for port 6fe4867f-99bb-4272-90ef-56a425b07f13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.494 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Start _get_guest_xml network_info=[{"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.498 238945 WARNING nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.503 238945 DEBUG nova.virt.libvirt.host [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.504 238945 DEBUG nova.virt.libvirt.host [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.513 238945 DEBUG nova.virt.libvirt.host [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.513 238945 DEBUG nova.virt.libvirt.host [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.514 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.514 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.514 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.515 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.515 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.515 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.515 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.515 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.515 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.516 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.516 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.516 238945 DEBUG nova.virt.hardware [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:52:23 np0005597378 nova_compute[238941]: 2026-01-27 13:52:23.519 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:52:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121488985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.070 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.090 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.094 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 88 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 KiB/s wr, 66 op/s
Jan 27 08:52:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:52:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1675916152' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.674 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.676 238945 DEBUG nova.virt.libvirt.vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1100055046',display_name='tempest-ServersTestMultiNic-server-1100055046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1100055046',id=55,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-n1qqmpdj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:59Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=3a36add6-8f5a-4197-ba24-f5c29b83301e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.676 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.677 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.678 238945 DEBUG nova.virt.libvirt.vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1100055046',display_name='tempest-ServersTestMultiNic-server-1100055046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1100055046',id=55,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-n1qqmpdj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:59Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=3a36add6-8f5a-4197-ba24-f5c29b83301e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.678 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.678 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.679 238945 DEBUG nova.objects.instance [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a36add6-8f5a-4197-ba24-f5c29b83301e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.700 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  <uuid>3a36add6-8f5a-4197-ba24-f5c29b83301e</uuid>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  <name>instance-00000037</name>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersTestMultiNic-server-1100055046</nova:name>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:52:23</nova:creationTime>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        <nova:user uuid="0ba812648bec43bbbd7489f6c33289cc">tempest-ServersTestMultiNic-438271831-project-member</nova:user>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        <nova:project uuid="ad39416b63df4f6194a01f4e91fdda1c">tempest-ServersTestMultiNic-438271831</nova:project>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        <nova:port uuid="424bfede-9a65-4656-87bc-4e1c9124e547">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.96" ipVersion="4"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        <nova:port uuid="6fe4867f-99bb-4272-90ef-56a425b07f13">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.1.218" ipVersion="4"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <entry name="serial">3a36add6-8f5a-4197-ba24-f5c29b83301e</entry>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <entry name="uuid">3a36add6-8f5a-4197-ba24-f5c29b83301e</entry>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3a36add6-8f5a-4197-ba24-f5c29b83301e_disk">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3a36add6-8f5a-4197-ba24-f5c29b83301e_disk.config">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:4d:d2:0a"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <target dev="tap424bfede-9a"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:45:86:7c"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <target dev="tap6fe4867f-99"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/console.log" append="off"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:52:24 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:52:24 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:52:24 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:52:24 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.701 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Preparing to wait for external event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.701 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.701 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.702 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.702 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Preparing to wait for external event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.702 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.702 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.702 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.703 238945 DEBUG nova.virt.libvirt.vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1100055046',display_name='tempest-ServersTestMultiNic-server-1100055046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1100055046',id=55,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-n1qqmpdj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:59Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=3a36add6-8f5a-4197-ba24-f5c29b83301e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.703 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.703 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.704 238945 DEBUG os_vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.704 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.705 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.705 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.708 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.709 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap424bfede-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.709 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap424bfede-9a, col_values=(('external_ids', {'iface-id': '424bfede-9a65-4656-87bc-4e1c9124e547', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:d2:0a', 'vm-uuid': '3a36add6-8f5a-4197-ba24-f5c29b83301e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:24 np0005597378 NetworkManager[48904]: <info>  [1769521944.7117] manager: (tap424bfede-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.712 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.717 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.718 238945 INFO os_vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a')#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.719 238945 DEBUG nova.virt.libvirt.vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1100055046',display_name='tempest-ServersTestMultiNic-server-1100055046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1100055046',id=55,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-n1qqmpdj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:51:59Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=3a36add6-8f5a-4197-ba24-f5c29b83301e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.719 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.719 238945 DEBUG nova.network.os_vif_util [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.720 238945 DEBUG os_vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.720 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.720 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.722 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.722 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fe4867f-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.723 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6fe4867f-99, col_values=(('external_ids', {'iface-id': '6fe4867f-99bb-4272-90ef-56a425b07f13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:86:7c', 'vm-uuid': '3a36add6-8f5a-4197-ba24-f5c29b83301e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.724 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:24 np0005597378 NetworkManager[48904]: <info>  [1769521944.7247] manager: (tap6fe4867f-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.726 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.730 238945 INFO os_vif [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99')#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.795 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.795 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.795 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No VIF found with MAC fa:16:3e:4d:d2:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.795 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] No VIF found with MAC fa:16:3e:45:86:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.796 238945 INFO nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Using config drive#033[00m
Jan 27 08:52:24 np0005597378 nova_compute[238941]: 2026-01-27 13:52:24.815 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:25 np0005597378 nova_compute[238941]: 2026-01-27 13:52:25.993 238945 INFO nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Creating config drive at /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/disk.config#033[00m
Jan 27 08:52:25 np0005597378 nova_compute[238941]: 2026-01-27 13:52:25.998 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn9tp8sx_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.139 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn9tp8sx_" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.167 238945 DEBUG nova.storage.rbd_utils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] rbd image 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.170 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/disk.config 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.259 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "e435204e-d1d1-4031-8984-a628dda926cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.259 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.268 238945 DEBUG nova.network.neutron [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Updated VIF entry in instance network info cache for port 6fe4867f-99bb-4272-90ef-56a425b07f13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.269 238945 DEBUG nova.network.neutron [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Updating instance_info_cache with network_info: [{"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.295 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.299 238945 DEBUG oslo_concurrency.lockutils [req-ad040078-4da2-458a-9a8f-4486d702c98e req-d00770c9-6683-4724-8211-f68a87c23dc3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a36add6-8f5a-4197-ba24-f5c29b83301e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.308 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.308 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.326 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:52:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 88 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.385 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.385 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.394 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.395 238945 INFO nova.compute.claims [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.398 238945 DEBUG oslo_concurrency.processutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/disk.config 3a36add6-8f5a-4197-ba24-f5c29b83301e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.398 238945 INFO nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Deleting local config drive /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e/disk.config because it was imported into RBD.#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.420 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:26 np0005597378 NetworkManager[48904]: <info>  [1769521946.4609] manager: (tap424bfede-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/217)
Jan 27 08:52:26 np0005597378 kernel: tap424bfede-9a: entered promiscuous mode
Jan 27 08:52:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:26Z|00496|binding|INFO|Claiming lport 424bfede-9a65-4656-87bc-4e1c9124e547 for this chassis.
Jan 27 08:52:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:26Z|00497|binding|INFO|424bfede-9a65-4656-87bc-4e1c9124e547: Claiming fa:16:3e:4d:d2:0a 10.100.0.96
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:26 np0005597378 NetworkManager[48904]: <info>  [1769521946.4764] manager: (tap6fe4867f-99): new Tun device (/org/freedesktop/NetworkManager/Devices/218)
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.482 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:d2:0a 10.100.0.96'], port_security=['fa:16:3e:4d:d2:0a 10.100.0.96'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.96/24', 'neutron:device_id': '3a36add6-8f5a-4197-ba24-f5c29b83301e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-689b1c21-664a-46df-b8a2-8b9a794dba22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2326baf1-79d9-4da2-af2a-c0fb6400b8b9, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=424bfede-9a65-4656-87bc-4e1c9124e547) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.484 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 424bfede-9a65-4656-87bc-4e1c9124e547 in datapath 689b1c21-664a-46df-b8a2-8b9a794dba22 bound to our chassis#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.485 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 689b1c21-664a-46df-b8a2-8b9a794dba22#033[00m
Jan 27 08:52:26 np0005597378 systemd-udevd[290597]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:52:26 np0005597378 systemd-udevd[290596]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.498 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[77a28353-8234-4229-813d-ae44d9f3dafa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.502 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap689b1c21-61 in ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.504 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap689b1c21-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.504 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[97130068-76dc-43e8-a576-25d786393d82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 systemd-machined[207425]: New machine qemu-64-instance-00000037.
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.506 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a741e02c-3de9-4098-bdbc-cc1f6927d51f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 NetworkManager[48904]: <info>  [1769521946.5100] device (tap424bfede-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:52:26 np0005597378 NetworkManager[48904]: <info>  [1769521946.5106] device (tap424bfede-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:52:26 np0005597378 systemd[1]: Started Virtual Machine qemu-64-instance-00000037.
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.522 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[462a68ae-3a4f-44a3-be76-6e69a7f3db86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 kernel: tap6fe4867f-99: entered promiscuous mode
Jan 27 08:52:26 np0005597378 NetworkManager[48904]: <info>  [1769521946.5248] device (tap6fe4867f-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:52:26 np0005597378 NetworkManager[48904]: <info>  [1769521946.5257] device (tap6fe4867f-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:52:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:26Z|00498|binding|INFO|Claiming lport 6fe4867f-99bb-4272-90ef-56a425b07f13 for this chassis.
Jan 27 08:52:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:26Z|00499|binding|INFO|6fe4867f-99bb-4272-90ef-56a425b07f13: Claiming fa:16:3e:45:86:7c 10.100.1.218
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.526 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:26Z|00500|binding|INFO|Setting lport 424bfede-9a65-4656-87bc-4e1c9124e547 ovn-installed in OVS
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.534 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:26Z|00501|binding|INFO|Setting lport 424bfede-9a65-4656-87bc-4e1c9124e547 up in Southbound
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.541 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:86:7c 10.100.1.218'], port_security=['fa:16:3e:45:86:7c 10.100.1.218'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.218/24', 'neutron:device_id': '3a36add6-8f5a-4197-ba24-f5c29b83301e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fe18587-d414-46eb-8958-e626dcc4e93a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22ce82a6-059b-4ca1-a897-b4a243ac5e0d, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6fe4867f-99bb-4272-90ef-56a425b07f13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.546 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d07a1e73-c2a3-4e53-9443-81e3ebcd5ff9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:26Z|00502|binding|INFO|Setting lport 6fe4867f-99bb-4272-90ef-56a425b07f13 ovn-installed in OVS
Jan 27 08:52:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:26Z|00503|binding|INFO|Setting lport 6fe4867f-99bb-4272-90ef-56a425b07f13 up in Southbound
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.577 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.582 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.582 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[abe0da8c-0ad1-4f3a-9e62-6a0dfc487ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 NetworkManager[48904]: <info>  [1769521946.5899] manager: (tap689b1c21-60): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.589 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6469c2-414c-4934-9edd-2df62e263397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.622 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3f80e74e-110f-4e8a-bbf2-333d92436441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.626 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd9778f-047e-4edd-af82-b45a9f47a798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 NetworkManager[48904]: <info>  [1769521946.6506] device (tap689b1c21-60): carrier: link connected
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.655 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ac566526-5102-40aa-bcdf-ccfcb85a50e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.671 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac631114-0ebe-4cba-b363-c2a218ee1a0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap689b1c21-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:af:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458627, 'reachable_time': 43219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290631, 'error': None, 'target': 'ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.688 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a40732cc-dd64-4d40-808e-03ff84332e22]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:afc8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458627, 'tstamp': 458627}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290632, 'error': None, 'target': 'ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.706 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d610b2-f50c-45bc-9aa8-c0551a9960cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap689b1c21-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:af:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458627, 'reachable_time': 43219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290633, 'error': None, 'target': 'ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.737 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1fd3fe-5aad-47ed-a61a-88c030239e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.799 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8840c42f-ee1e-4ed7-8a59-430ccd160a1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.801 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap689b1c21-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.801 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.803 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap689b1c21-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:26 np0005597378 NetworkManager[48904]: <info>  [1769521946.8059] manager: (tap689b1c21-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Jan 27 08:52:26 np0005597378 kernel: tap689b1c21-60: entered promiscuous mode
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.809 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap689b1c21-60, col_values=(('external_ids', {'iface-id': '5e6d4878-b27b-47d5-8d8c-069d907ed576'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:26Z|00504|binding|INFO|Releasing lport 5e6d4878-b27b-47d5-8d8c-069d907ed576 from this chassis (sb_readonly=0)
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.811 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.834 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/689b1c21-664a-46df-b8a2-8b9a794dba22.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/689b1c21-664a-46df-b8a2-8b9a794dba22.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.835 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d969e82b-37c0-4a00-b83b-0184ed050805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.836 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-689b1c21-664a-46df-b8a2-8b9a794dba22
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/689b1c21-664a-46df-b8a2-8b9a794dba22.pid.haproxy
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 689b1c21-664a-46df-b8a2-8b9a794dba22
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:52:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:26.837 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22', 'env', 'PROCESS_TAG=haproxy-689b1c21-664a-46df-b8a2-8b9a794dba22', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/689b1c21-664a-46df-b8a2-8b9a794dba22.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.905 238945 DEBUG nova.compute.manager [req-b324b98b-45f3-4a0b-9ba4-c13ce0c71d3a req-17728492-4659-4ad9-852f-263fd06e9402 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.905 238945 DEBUG oslo_concurrency.lockutils [req-b324b98b-45f3-4a0b-9ba4-c13ce0c71d3a req-17728492-4659-4ad9-852f-263fd06e9402 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.906 238945 DEBUG oslo_concurrency.lockutils [req-b324b98b-45f3-4a0b-9ba4-c13ce0c71d3a req-17728492-4659-4ad9-852f-263fd06e9402 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.906 238945 DEBUG oslo_concurrency.lockutils [req-b324b98b-45f3-4a0b-9ba4-c13ce0c71d3a req-17728492-4659-4ad9-852f-263fd06e9402 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.906 238945 DEBUG nova.compute.manager [req-b324b98b-45f3-4a0b-9ba4-c13ce0c71d3a req-17728492-4659-4ad9-852f-263fd06e9402 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Processing event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:52:26 np0005597378 nova_compute[238941]: 2026-01-27 13:52:26.959 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:52:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/577871330' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.171 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.180 238945 DEBUG nova.compute.provider_tree [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:52:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:27 np0005597378 podman[290685]: 2026-01-27 13:52:27.270302859 +0000 UTC m=+0.092993929 container create 7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.270 238945 DEBUG nova.scheduler.client.report [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:52:27 np0005597378 podman[290685]: 2026-01-27 13:52:27.202411885 +0000 UTC m=+0.025102975 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:52:27 np0005597378 systemd[1]: Started libpod-conmon-7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a.scope.
Jan 27 08:52:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:52:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b534e003e19face37d27cade80f084177c94c0ef542c988280e3336705100343/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:27 np0005597378 podman[290685]: 2026-01-27 13:52:27.383975272 +0000 UTC m=+0.206666362 container init 7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 27 08:52:27 np0005597378 podman[290685]: 2026-01-27 13:52:27.389734226 +0000 UTC m=+0.212425296 container start 7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 08:52:27 np0005597378 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [NOTICE]   (290705) : New worker (290714) forked
Jan 27 08:52:27 np0005597378 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [NOTICE]   (290705) : Loading success.
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.447 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.448 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.450 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.455 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6fe4867f-99bb-4272-90ef-56a425b07f13 in datapath 2fe18587-d414-46eb-8958-e626dcc4e93a unbound from our chassis#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.457 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2fe18587-d414-46eb-8958-e626dcc4e93a#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.463 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.463 238945 INFO nova.compute.claims [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.469 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[237f1d2a-202d-4783-b4e9-cca23d8d94bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.471 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2fe18587-d1 in ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.473 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2fe18587-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.473 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3e86de-89f2-4b9b-81eb-fe1f9764d50b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.474 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0513c9d4-5a24-4cdf-b389-8244bbd7afff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.487 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7cb314-2ca1-47c6-836b-b2c8cd9c9aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00035261794242603034 of space, bias 1.0, pg target 0.1057853827278091 quantized to 32 (current 32)
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006679923939442522 of space, bias 1.0, pg target 0.20039771818327565 quantized to 32 (current 32)
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1692555099599582e-06 of space, bias 4.0, pg target 0.0014031066119519497 quantized to 16 (current 16)
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:52:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.501 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d594a2e5-da99-4f3b-b22a-6d41d2aaf572]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.529 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c93108d9-943a-465e-9624-932e7e6bc7d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 NetworkManager[48904]: <info>  [1769521947.5386] manager: (tap2fe18587-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/221)
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.537 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[98f35b46-a6d8-440b-8788-9227517116c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.541 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.541 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:52:27 np0005597378 systemd-udevd[290624]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.570 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[75656ca4-bd07-43b2-8458-ac2a59813cf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.574 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.574 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ccce2646-e05a-4f80-bba8-6a265bc6fb69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.588 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521947.5881453, 3a36add6-8f5a-4197-ba24-f5c29b83301e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.588 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] VM Started (Lifecycle Event)#033[00m
Jan 27 08:52:27 np0005597378 NetworkManager[48904]: <info>  [1769521947.5964] device (tap2fe18587-d0): carrier: link connected
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.601 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e84217-66b4-4bc7-aabf-4974ca2b1ebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.617 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f47da9ea-2ec0-46de-bda1-4c233a2fad01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fe18587-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:f5:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458722, 'reachable_time': 21023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290771, 'error': None, 'target': 'ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.630 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.630 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.632 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1c2a0f79-43a4-4b15-97b2-c81dea7d140d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:f5e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458722, 'tstamp': 458722}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290772, 'error': None, 'target': 'ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.636 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521947.588262, 3a36add6-8f5a-4197-ba24-f5c29b83301e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.637 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.648 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[12c7cdd9-bfce-47f9-a413-8c2bf0691bf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fe18587-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:f5:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458722, 'reachable_time': 21023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290773, 'error': None, 'target': 'ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.678 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.679 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e6705134-b020-4cfe-9485-c23bf9ca0485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.682 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.706 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.741 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bbca031f-d0f3-4f74-95f6-d07c0c9daac3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.743 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fe18587-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.743 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.743 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fe18587-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:27 np0005597378 NetworkManager[48904]: <info>  [1769521947.7461] manager: (tap2fe18587-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:27 np0005597378 kernel: tap2fe18587-d0: entered promiscuous mode
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.749 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2fe18587-d0, col_values=(('external_ids', {'iface-id': '769e1173-bbb1-456f-946e-924c9c2fe2a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:27Z|00505|binding|INFO|Releasing lport 769e1173-bbb1-456f-946e-924c9c2fe2a7 from this chassis (sb_readonly=0)
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.750 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.765 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.770 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2fe18587-d414-46eb-8958-e626dcc4e93a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2fe18587-d414-46eb-8958-e626dcc4e93a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.771 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04ed3dd9-64eb-497e-8d4f-64b3665b82a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.771 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-2fe18587-d414-46eb-8958-e626dcc4e93a
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/2fe18587-d414-46eb-8958-e626dcc4e93a.pid.haproxy
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 2fe18587-d414-46eb-8958-e626dcc4e93a
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:52:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:27.772 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a', 'env', 'PROCESS_TAG=haproxy-2fe18587-d414-46eb-8958-e626dcc4e93a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2fe18587-d414-46eb-8958-e626dcc4e93a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.834 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.836 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.837 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Creating image(s)#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.861 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.881 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.902 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.905 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.944 238945 DEBUG nova.policy [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9e663079ce44f94a4dbe6125b395ce1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd333430b14814ea487cbd2af414c350f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.986 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.988 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.989 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:27 np0005597378 nova_compute[238941]: 2026-01-27 13:52:27.990 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.013 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.018 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e435204e-d1d1-4031-8984-a628dda926cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:28 np0005597378 podman[290913]: 2026-01-27 13:52:28.157169466 +0000 UTC m=+0.026852932 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:52:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:52:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2458948490' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.295 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:28 np0005597378 podman[290913]: 2026-01-27 13:52:28.299143779 +0000 UTC m=+0.168827225 container create f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.305 238945 DEBUG nova.compute.provider_tree [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:52:28 np0005597378 systemd[1]: Started libpod-conmon-f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f.scope.
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.339 238945 DEBUG nova.scheduler.client.report [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.347 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e435204e-d1d1-4031-8984-a628dda926cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:52:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8616905b6ac6fe093d9aaab4fbdbd8adbe33fb17506770533321203357840951/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 88 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Jan 27 08:52:28 np0005597378 podman[290913]: 2026-01-27 13:52:28.376516658 +0000 UTC m=+0.246200134 container init f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 08:52:28 np0005597378 podman[290913]: 2026-01-27 13:52:28.383463134 +0000 UTC m=+0.253146580 container start f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 08:52:28 np0005597378 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [NOTICE]   (290955) : New worker (290975) forked
Jan 27 08:52:28 np0005597378 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [NOTICE]   (290955) : Loading success.
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.414 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] resizing rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.482 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.483 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.491 238945 DEBUG nova.objects.instance [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'migration_context' on Instance uuid e435204e-d1d1-4031-8984-a628dda926cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.638 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.639 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Ensure instance console log exists: /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.639 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.640 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.640 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.716 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.716 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.793 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:52:28 np0005597378 nova_compute[238941]: 2026-01-27 13:52:28.924 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.065 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Successfully created port: 1b57c2a6-9156-4778-ad7e-2302f4523d88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.186 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.188 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.188 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Creating image(s)#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.209 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.234 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.261 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.265 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.313 238945 DEBUG nova.compute.manager [req-13063fad-834e-4fae-9f78-d2b049094a0d req-233dc3a7-f05d-4501-b4a7-f40bf631e525 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.314 238945 DEBUG oslo_concurrency.lockutils [req-13063fad-834e-4fae-9f78-d2b049094a0d req-233dc3a7-f05d-4501-b4a7-f40bf631e525 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.315 238945 DEBUG oslo_concurrency.lockutils [req-13063fad-834e-4fae-9f78-d2b049094a0d req-233dc3a7-f05d-4501-b4a7-f40bf631e525 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.315 238945 DEBUG oslo_concurrency.lockutils [req-13063fad-834e-4fae-9f78-d2b049094a0d req-233dc3a7-f05d-4501-b4a7-f40bf631e525 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.315 238945 DEBUG nova.compute.manager [req-13063fad-834e-4fae-9f78-d2b049094a0d req-233dc3a7-f05d-4501-b4a7-f40bf631e525 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] No event matching network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 in dict_keys([('network-vif-plugged', '6fe4867f-99bb-4272-90ef-56a425b07f13')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.315 238945 WARNING nova.compute.manager [req-13063fad-834e-4fae-9f78-d2b049094a0d req-233dc3a7-f05d-4501-b4a7-f40bf631e525 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received unexpected event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.316 238945 DEBUG nova.compute.manager [req-cb08e314-6b31-4230-ac8b-8f2e6f214a3e req-7204e8dc-1af5-45c0-b811-6ad26d8f61cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.316 238945 DEBUG oslo_concurrency.lockutils [req-cb08e314-6b31-4230-ac8b-8f2e6f214a3e req-7204e8dc-1af5-45c0-b811-6ad26d8f61cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.317 238945 DEBUG oslo_concurrency.lockutils [req-cb08e314-6b31-4230-ac8b-8f2e6f214a3e req-7204e8dc-1af5-45c0-b811-6ad26d8f61cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.317 238945 DEBUG oslo_concurrency.lockutils [req-cb08e314-6b31-4230-ac8b-8f2e6f214a3e req-7204e8dc-1af5-45c0-b811-6ad26d8f61cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.317 238945 DEBUG nova.compute.manager [req-cb08e314-6b31-4230-ac8b-8f2e6f214a3e req-7204e8dc-1af5-45c0-b811-6ad26d8f61cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Processing event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.318 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.322 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521949.3217926, 3a36add6-8f5a-4197-ba24-f5c29b83301e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.323 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.327 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.331 238945 INFO nova.virt.libvirt.driver [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Instance spawned successfully.#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.332 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.347 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.348 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.349 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.349 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.372 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.376 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d09fd69a-4503-4b5d-b452-b406d958ffab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.421 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.429 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.432 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.433 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.434 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.434 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.434 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.435 238945 DEBUG nova.virt.libvirt.driver [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.468 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.499 238945 DEBUG nova.policy [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9e663079ce44f94a4dbe6125b395ce1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd333430b14814ea487cbd2af414c350f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.507 238945 INFO nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Took 29.45 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.507 238945 DEBUG nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.620 238945 INFO nova.compute.manager [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Took 30.52 seconds to build instance.#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.644 238945 DEBUG oslo_concurrency.lockutils [None req-2266cd38-ce05-40ab-bb6d-ef12d8026da7 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 30.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.721 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d09fd69a-4503-4b5d-b452-b406d958ffab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.748 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.787 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] resizing rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.871 238945 DEBUG nova.objects.instance [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'migration_context' on Instance uuid d09fd69a-4503-4b5d-b452-b406d958ffab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.886 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.887 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Ensure instance console log exists: /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.888 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.888 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:29 np0005597378 nova_compute[238941]: 2026-01-27 13:52:29.889 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.260 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521935.2582304, dca5ed40-f98a-4b4f-84be-dcc966896524 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.261 238945 INFO nova.compute.manager [-] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.286 238945 DEBUG nova.compute.manager [None req-d2bdedf5-789e-4416-b21d-113263c29396 - - - - - -] [instance: dca5ed40-f98a-4b4f-84be-dcc966896524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 137 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.5 MiB/s wr, 64 op/s
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.846 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.848 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.848 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.849 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.849 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.850 238945 INFO nova.compute.manager [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Terminating instance#033[00m
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.851 238945 DEBUG nova.compute.manager [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:52:30 np0005597378 kernel: tap424bfede-9a (unregistering): left promiscuous mode
Jan 27 08:52:30 np0005597378 NetworkManager[48904]: <info>  [1769521950.8904] device (tap424bfede-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:52:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:30Z|00506|binding|INFO|Releasing lport 424bfede-9a65-4656-87bc-4e1c9124e547 from this chassis (sb_readonly=0)
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.896 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:30Z|00507|binding|INFO|Setting lport 424bfede-9a65-4656-87bc-4e1c9124e547 down in Southbound
Jan 27 08:52:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:30Z|00508|binding|INFO|Removing iface tap424bfede-9a ovn-installed in OVS
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:30.904 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:d2:0a 10.100.0.96'], port_security=['fa:16:3e:4d:d2:0a 10.100.0.96'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.96/24', 'neutron:device_id': '3a36add6-8f5a-4197-ba24-f5c29b83301e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-689b1c21-664a-46df-b8a2-8b9a794dba22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2326baf1-79d9-4da2-af2a-c0fb6400b8b9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=424bfede-9a65-4656-87bc-4e1c9124e547) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:52:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:30.905 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 424bfede-9a65-4656-87bc-4e1c9124e547 in datapath 689b1c21-664a-46df-b8a2-8b9a794dba22 unbound from our chassis#033[00m
Jan 27 08:52:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:30.906 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 689b1c21-664a-46df-b8a2-8b9a794dba22, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:52:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:30.907 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5802ae03-d887-4a33-9eff-f5fd61812652]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:30.908 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22 namespace which is not needed anymore#033[00m
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.915 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:30 np0005597378 kernel: tap6fe4867f-99 (unregistering): left promiscuous mode
Jan 27 08:52:30 np0005597378 NetworkManager[48904]: <info>  [1769521950.9263] device (tap6fe4867f-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.927 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Successfully created port: 11be2e1f-225a-49ab-9814-310e74c3f48a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:52:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:30Z|00509|binding|INFO|Releasing lport 6fe4867f-99bb-4272-90ef-56a425b07f13 from this chassis (sb_readonly=0)
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:30Z|00510|binding|INFO|Setting lport 6fe4867f-99bb-4272-90ef-56a425b07f13 down in Southbound
Jan 27 08:52:30 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:30Z|00511|binding|INFO|Removing iface tap6fe4867f-99 ovn-installed in OVS
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:30.949 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:86:7c 10.100.1.218'], port_security=['fa:16:3e:45:86:7c 10.100.1.218'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.218/24', 'neutron:device_id': '3a36add6-8f5a-4197-ba24-f5c29b83301e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fe18587-d414-46eb-8958-e626dcc4e93a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad39416b63df4f6194a01f4e91fdda1c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '86647fa6-2464-452c-a1c2-aafbd1a71d16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22ce82a6-059b-4ca1-a897-b4a243ac5e0d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6fe4867f-99bb-4272-90ef-56a425b07f13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:52:30 np0005597378 nova_compute[238941]: 2026-01-27 13:52:30.969 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:30 np0005597378 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000037.scope: Deactivated successfully.
Jan 27 08:52:30 np0005597378 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000037.scope: Consumed 2.625s CPU time.
Jan 27 08:52:30 np0005597378 systemd-machined[207425]: Machine qemu-64-instance-00000037 terminated.
Jan 27 08:52:31 np0005597378 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [NOTICE]   (290705) : haproxy version is 2.8.14-c23fe91
Jan 27 08:52:31 np0005597378 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [NOTICE]   (290705) : path to executable is /usr/sbin/haproxy
Jan 27 08:52:31 np0005597378 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [WARNING]  (290705) : Exiting Master process...
Jan 27 08:52:31 np0005597378 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [WARNING]  (290705) : Exiting Master process...
Jan 27 08:52:31 np0005597378 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [ALERT]    (290705) : Current worker (290714) exited with code 143 (Terminated)
Jan 27 08:52:31 np0005597378 neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22[290701]: [WARNING]  (290705) : All workers exited. Exiting... (0)
Jan 27 08:52:31 np0005597378 systemd[1]: libpod-7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a.scope: Deactivated successfully.
Jan 27 08:52:31 np0005597378 podman[291218]: 2026-01-27 13:52:31.051853377 +0000 UTC m=+0.045595886 container died 7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 08:52:31 np0005597378 NetworkManager[48904]: <info>  [1769521951.0774] manager: (tap424bfede-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Jan 27 08:52:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a-userdata-shm.mount: Deactivated successfully.
Jan 27 08:52:31 np0005597378 NetworkManager[48904]: <info>  [1769521951.0880] manager: (tap6fe4867f-99): new Tun device (/org/freedesktop/NetworkManager/Devices/224)
Jan 27 08:52:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b534e003e19face37d27cade80f084177c94c0ef542c988280e3336705100343-merged.mount: Deactivated successfully.
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.106 238945 INFO nova.virt.libvirt.driver [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Instance destroyed successfully.#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.108 238945 DEBUG nova.objects.instance [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lazy-loading 'resources' on Instance uuid 3a36add6-8f5a-4197-ba24-f5c29b83301e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:52:31 np0005597378 podman[291218]: 2026-01-27 13:52:31.114219721 +0000 UTC m=+0.107962220 container cleanup 7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:52:31 np0005597378 systemd[1]: libpod-conmon-7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a.scope: Deactivated successfully.
Jan 27 08:52:31 np0005597378 podman[291266]: 2026-01-27 13:52:31.198494554 +0000 UTC m=+0.052289184 container remove 7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.205 238945 DEBUG nova.virt.libvirt.vif [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1100055046',display_name='tempest-ServersTestMultiNic-server-1100055046',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1100055046',id=55,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:52:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-n1qqmpdj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:52:29Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=3a36add6-8f5a-4197-ba24-f5c29b83301e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.206 238945 DEBUG nova.network.os_vif_util [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.206 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6466e826-6084-4a27-bfda-32fe8b350785]: (4, ('Tue Jan 27 01:52:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22 (7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a)\n7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a\nTue Jan 27 01:52:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22 (7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a)\n7901893b945466689b5ca087adcb2c83d7a4632715e54b6f9d2071eb54fc770a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.207 238945 DEBUG nova.network.os_vif_util [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.207 238945 DEBUG os_vif [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.209 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58ac655c-17e3-47cb-803f-860dbece5988]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.209 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.209 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap424bfede-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.210 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap689b1c21-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.211 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.228 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:31 np0005597378 kernel: tap689b1c21-60: left promiscuous mode
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.232 238945 INFO os_vif [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d2:0a,bridge_name='br-int',has_traffic_filtering=True,id=424bfede-9a65-4656-87bc-4e1c9124e547,network=Network(689b1c21-664a-46df-b8a2-8b9a794dba22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424bfede-9a')#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.232 238945 DEBUG nova.virt.libvirt.vif [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1100055046',display_name='tempest-ServersTestMultiNic-server-1100055046',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1100055046',id=55,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:52:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ad39416b63df4f6194a01f4e91fdda1c',ramdisk_id='',reservation_id='r-n1qqmpdj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-438271831',owner_user_name='tempest-ServersTestMultiNic-438271831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:52:29Z,user_data=None,user_id='0ba812648bec43bbbd7489f6c33289cc',uuid=3a36add6-8f5a-4197-ba24-f5c29b83301e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.233 238945 DEBUG nova.network.os_vif_util [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converting VIF {"id": "6fe4867f-99bb-4272-90ef-56a425b07f13", "address": "fa:16:3e:45:86:7c", "network": {"id": "2fe18587-d414-46eb-8958-e626dcc4e93a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1455375427", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fe4867f-99", "ovs_interfaceid": "6fe4867f-99bb-4272-90ef-56a425b07f13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.233 238945 DEBUG nova.network.os_vif_util [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.234 238945 DEBUG os_vif [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.235 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.235 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fe4867f-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.236 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.237 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.240 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a18791-30ee-4130-b422-be7d200b9dc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.247 238945 INFO os_vif [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:86:7c,bridge_name='br-int',has_traffic_filtering=True,id=6fe4867f-99bb-4272-90ef-56a425b07f13,network=Network(2fe18587-d414-46eb-8958-e626dcc4e93a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fe4867f-99')#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.255 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f36bfd-1fb8-41cd-a47a-d5fa43399deb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.257 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[382aee84-5191-49d1-9273-3e2e3f3ed052]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.280 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbe9554-71d4-47e9-a303-8395fc2f9b08]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458619, 'reachable_time': 31716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291299, 'error': None, 'target': 'ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 systemd[1]: run-netns-ovnmeta\x2d689b1c21\x2d664a\x2d46df\x2db8a2\x2d8b9a794dba22.mount: Deactivated successfully.
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.283 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-689b1c21-664a-46df-b8a2-8b9a794dba22 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.284 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[f59edfe1-14bd-4e8b-a6b7-84220b3699b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.286 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6fe4867f-99bb-4272-90ef-56a425b07f13 in datapath 2fe18587-d414-46eb-8958-e626dcc4e93a unbound from our chassis#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.287 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2fe18587-d414-46eb-8958-e626dcc4e93a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.289 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ff4fc090-7d8a-4be1-8754-6f9c3476a83f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.290 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a namespace which is not needed anymore#033[00m
Jan 27 08:52:31 np0005597378 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [NOTICE]   (290955) : haproxy version is 2.8.14-c23fe91
Jan 27 08:52:31 np0005597378 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [NOTICE]   (290955) : path to executable is /usr/sbin/haproxy
Jan 27 08:52:31 np0005597378 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [WARNING]  (290955) : Exiting Master process...
Jan 27 08:52:31 np0005597378 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [WARNING]  (290955) : Exiting Master process...
Jan 27 08:52:31 np0005597378 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [ALERT]    (290955) : Current worker (290975) exited with code 143 (Terminated)
Jan 27 08:52:31 np0005597378 neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a[290933]: [WARNING]  (290955) : All workers exited. Exiting... (0)
Jan 27 08:52:31 np0005597378 systemd[1]: libpod-f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f.scope: Deactivated successfully.
Jan 27 08:52:31 np0005597378 podman[291320]: 2026-01-27 13:52:31.444789499 +0000 UTC m=+0.049310095 container died f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 08:52:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f-userdata-shm.mount: Deactivated successfully.
Jan 27 08:52:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8616905b6ac6fe093d9aaab4fbdbd8adbe33fb17506770533321203357840951-merged.mount: Deactivated successfully.
Jan 27 08:52:31 np0005597378 podman[291320]: 2026-01-27 13:52:31.492586733 +0000 UTC m=+0.097107329 container cleanup f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 08:52:31 np0005597378 systemd[1]: libpod-conmon-f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f.scope: Deactivated successfully.
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.536 238945 INFO nova.virt.libvirt.driver [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Deleting instance files /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e_del#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.536 238945 INFO nova.virt.libvirt.driver [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Deletion of /var/lib/nova/instances/3a36add6-8f5a-4197-ba24-f5c29b83301e_del complete#033[00m
Jan 27 08:52:31 np0005597378 podman[291350]: 2026-01-27 13:52:31.556114099 +0000 UTC m=+0.039016659 container remove f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.564 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[137d8348-0581-4fe7-9947-7d89119611a3]: (4, ('Tue Jan 27 01:52:31 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a (f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f)\nf8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f\nTue Jan 27 01:52:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a (f8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f)\nf8e90fc8138ee38444f119326fd46e2815a4efb8ab031e59e2613e44bf903b6f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.565 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5e4f94-e209-45d5-a9fa-4c23cb59f6fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.566 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fe18587-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.568 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:31 np0005597378 kernel: tap2fe18587-d0: left promiscuous mode
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.585 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.587 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[558ce2d9-eb4f-4740-9fc1-a2a3a8542742]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.608 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[987f11c4-c7fd-4e72-aba1-783923c7f71a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.609 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d3556bc3-9a35-467c-928d-586a2233f6b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.630 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d3627b11-99c3-4672-838b-94dfe5cc84ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458714, 'reachable_time': 23728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291363, 'error': None, 'target': 'ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.632 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2fe18587-d414-46eb-8958-e626dcc4e93a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:52:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:31.632 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[215f607c-914e-4d39-ad1c-675c3774902e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:31 np0005597378 nova_compute[238941]: 2026-01-27 13:52:31.961 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:32 np0005597378 systemd[1]: run-netns-ovnmeta\x2d2fe18587\x2dd414\x2d46eb\x2d8958\x2de626dcc4e93a.mount: Deactivated successfully.
Jan 27 08:52:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 137 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Jan 27 08:52:33 np0005597378 nova_compute[238941]: 2026-01-27 13:52:33.828 238945 INFO nova.compute.manager [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Took 2.98 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:52:33 np0005597378 nova_compute[238941]: 2026-01-27 13:52:33.828 238945 DEBUG oslo.service.loopingcall [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:52:33 np0005597378 nova_compute[238941]: 2026-01-27 13:52:33.829 238945 DEBUG nova.compute.manager [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:52:33 np0005597378 nova_compute[238941]: 2026-01-27 13:52:33.829 238945 DEBUG nova.network.neutron [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.010 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Successfully updated port: 1b57c2a6-9156-4778-ad7e-2302f4523d88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.070 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "refresh_cache-e435204e-d1d1-4031-8984-a628dda926cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.070 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquired lock "refresh_cache-e435204e-d1d1-4031-8984-a628dda926cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.070 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.138 238945 DEBUG nova.compute.manager [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.139 238945 DEBUG oslo_concurrency.lockutils [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.139 238945 DEBUG oslo_concurrency.lockutils [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.139 238945 DEBUG oslo_concurrency.lockutils [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.139 238945 DEBUG nova.compute.manager [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] No waiting events found dispatching network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.139 238945 WARNING nova.compute.manager [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received unexpected event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.140 238945 DEBUG nova.compute.manager [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-unplugged-6fe4867f-99bb-4272-90ef-56a425b07f13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.140 238945 DEBUG oslo_concurrency.lockutils [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.140 238945 DEBUG oslo_concurrency.lockutils [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.140 238945 DEBUG oslo_concurrency.lockutils [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.140 238945 DEBUG nova.compute.manager [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] No waiting events found dispatching network-vif-unplugged-6fe4867f-99bb-4272-90ef-56a425b07f13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.140 238945 DEBUG nova.compute.manager [req-c85fee92-254e-4417-b0e1-637005652306 req-0d324bbd-4927-4bdf-bc7e-c66f929bf252 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-unplugged-6fe4867f-99bb-4272-90ef-56a425b07f13 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.257 238945 DEBUG nova.compute.manager [req-142d44ad-8dd8-488f-aa15-96bf4f94ea61 req-4bdc8b6f-db42-4a3d-ae27-88e12c2b822a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-unplugged-424bfede-9a65-4656-87bc-4e1c9124e547 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.257 238945 DEBUG oslo_concurrency.lockutils [req-142d44ad-8dd8-488f-aa15-96bf4f94ea61 req-4bdc8b6f-db42-4a3d-ae27-88e12c2b822a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.258 238945 DEBUG oslo_concurrency.lockutils [req-142d44ad-8dd8-488f-aa15-96bf4f94ea61 req-4bdc8b6f-db42-4a3d-ae27-88e12c2b822a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.258 238945 DEBUG oslo_concurrency.lockutils [req-142d44ad-8dd8-488f-aa15-96bf4f94ea61 req-4bdc8b6f-db42-4a3d-ae27-88e12c2b822a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.258 238945 DEBUG nova.compute.manager [req-142d44ad-8dd8-488f-aa15-96bf4f94ea61 req-4bdc8b6f-db42-4a3d-ae27-88e12c2b822a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] No waiting events found dispatching network-vif-unplugged-424bfede-9a65-4656-87bc-4e1c9124e547 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.258 238945 DEBUG nova.compute.manager [req-142d44ad-8dd8-488f-aa15-96bf4f94ea61 req-4bdc8b6f-db42-4a3d-ae27-88e12c2b822a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-unplugged-424bfede-9a65-4656-87bc-4e1c9124e547 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:52:34 np0005597378 nova_compute[238941]: 2026-01-27 13:52:34.378 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:52:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 136 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 158 op/s
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.228 238945 DEBUG nova.compute.manager [req-2aaa41a0-576a-4be7-9d2c-a82ddaa98c52 req-f2569f33-a846-4a16-b8f3-6afeab82f015 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.228 238945 DEBUG oslo_concurrency.lockutils [req-2aaa41a0-576a-4be7-9d2c-a82ddaa98c52 req-f2569f33-a846-4a16-b8f3-6afeab82f015 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.228 238945 DEBUG oslo_concurrency.lockutils [req-2aaa41a0-576a-4be7-9d2c-a82ddaa98c52 req-f2569f33-a846-4a16-b8f3-6afeab82f015 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.228 238945 DEBUG oslo_concurrency.lockutils [req-2aaa41a0-576a-4be7-9d2c-a82ddaa98c52 req-f2569f33-a846-4a16-b8f3-6afeab82f015 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.228 238945 DEBUG nova.compute.manager [req-2aaa41a0-576a-4be7-9d2c-a82ddaa98c52 req-f2569f33-a846-4a16-b8f3-6afeab82f015 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] No waiting events found dispatching network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.229 238945 WARNING nova.compute.manager [req-2aaa41a0-576a-4be7-9d2c-a82ddaa98c52 req-f2569f33-a846-4a16-b8f3-6afeab82f015 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received unexpected event network-vif-plugged-6fe4867f-99bb-4272-90ef-56a425b07f13 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.238 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.380 238945 DEBUG nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Received event network-changed-1b57c2a6-9156-4778-ad7e-2302f4523d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.380 238945 DEBUG nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Refreshing instance network info cache due to event network-changed-1b57c2a6-9156-4778-ad7e-2302f4523d88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.380 238945 DEBUG oslo_concurrency.lockutils [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e435204e-d1d1-4031-8984-a628dda926cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.419 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Updating instance_info_cache with network_info: [{"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.444 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Releasing lock "refresh_cache-e435204e-d1d1-4031-8984-a628dda926cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.444 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Instance network_info: |[{"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.445 238945 DEBUG oslo_concurrency.lockutils [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e435204e-d1d1-4031-8984-a628dda926cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.445 238945 DEBUG nova.network.neutron [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Refreshing network info cache for port 1b57c2a6-9156-4778-ad7e-2302f4523d88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.448 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Start _get_guest_xml network_info=[{"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.453 238945 WARNING nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.460 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.460 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.467 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.468 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.468 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.469 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.469 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.469 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.469 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.470 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.470 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.470 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.470 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.470 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.470 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.471 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.474 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.582 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Successfully updated port: 11be2e1f-225a-49ab-9814-310e74c3f48a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.831 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "refresh_cache-d09fd69a-4503-4b5d-b452-b406d958ffab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.831 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquired lock "refresh_cache-d09fd69a-4503-4b5d-b452-b406d958ffab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.831 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:52:36 np0005597378 nova_compute[238941]: 2026-01-27 13:52:36.962 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:52:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/139985104' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.050 238945 DEBUG nova.network.neutron [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.052 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.072 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.075 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.109 238945 INFO nova.compute.manager [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Took 3.28 seconds to deallocate network for instance.#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.157 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.158 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.165 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:52:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.280 238945 DEBUG oslo_concurrency.processutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:52:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4102444953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.682 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.684 238945 DEBUG nova.virt.libvirt.vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2118013111',display_name='tempest-tempest.common.compute-instance-2118013111-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2118013111-1',id=57,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-a5lcdo8w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-Multip
leCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:27Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=e435204e-d1d1-4031-8984-a628dda926cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.685 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.686 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.688 238945 DEBUG nova.objects.instance [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'pci_devices' on Instance uuid e435204e-d1d1-4031-8984-a628dda926cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.706 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  <uuid>e435204e-d1d1-4031-8984-a628dda926cc</uuid>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  <name>instance-00000039</name>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <nova:name>tempest-tempest.common.compute-instance-2118013111-1</nova:name>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:52:36</nova:creationTime>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:        <nova:user uuid="f9e663079ce44f94a4dbe6125b395ce1">tempest-MultipleCreateTestJSON-644845764-project-member</nova:user>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:        <nova:project uuid="d333430b14814ea487cbd2af414c350f">tempest-MultipleCreateTestJSON-644845764</nova:project>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:        <nova:port uuid="1b57c2a6-9156-4778-ad7e-2302f4523d88">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <entry name="serial">e435204e-d1d1-4031-8984-a628dda926cc</entry>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <entry name="uuid">e435204e-d1d1-4031-8984-a628dda926cc</entry>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e435204e-d1d1-4031-8984-a628dda926cc_disk">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e435204e-d1d1-4031-8984-a628dda926cc_disk.config">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:2e:68:3e"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <target dev="tap1b57c2a6-91"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/console.log" append="off"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:52:37 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:52:37 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:52:37 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:52:37 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.708 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Preparing to wait for external event network-vif-plugged-1b57c2a6-9156-4778-ad7e-2302f4523d88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.708 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "e435204e-d1d1-4031-8984-a628dda926cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.710 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.710 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.711 238945 DEBUG nova.virt.libvirt.vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2118013111',display_name='tempest-tempest.common.compute-instance-2118013111-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2118013111-1',id=57,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-a5lcdo8w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='temp
est-MultipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:27Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=e435204e-d1d1-4031-8984-a628dda926cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.711 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.712 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.712 238945 DEBUG os_vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.714 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.714 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.720 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b57c2a6-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.720 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b57c2a6-91, col_values=(('external_ids', {'iface-id': '1b57c2a6-9156-4778-ad7e-2302f4523d88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:68:3e', 'vm-uuid': 'e435204e-d1d1-4031-8984-a628dda926cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:37 np0005597378 NetworkManager[48904]: <info>  [1769521957.7234] manager: (tap1b57c2a6-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.724 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.732 238945 INFO os_vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91')#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.797 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.798 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.798 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No VIF found with MAC fa:16:3e:2e:68:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.799 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Using config drive#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.819 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:52:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897723617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.883 238945 DEBUG oslo_concurrency.processutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.890 238945 DEBUG nova.compute.provider_tree [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.911 238945 DEBUG nova.scheduler.client.report [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:52:37 np0005597378 nova_compute[238941]: 2026-01-27 13:52:37.941 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.003 238945 INFO nova.scheduler.client.report [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Deleted allocations for instance 3a36add6-8f5a-4197-ba24-f5c29b83301e#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.126 238945 DEBUG oslo_concurrency.lockutils [None req-d8056f91-e87b-4ddb-80ed-1d16b40c06f9 0ba812648bec43bbbd7489f6c33289cc ad39416b63df4f6194a01f4e91fdda1c - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.314 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Creating config drive at /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/disk.config#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.320 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5g_0ycy1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.364 238945 DEBUG nova.network.neutron [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Updated VIF entry in instance network info cache for port 1b57c2a6-9156-4778-ad7e-2302f4523d88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.365 238945 DEBUG nova.network.neutron [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Updating instance_info_cache with network_info: [{"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:52:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.386 238945 DEBUG oslo_concurrency.lockutils [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e435204e-d1d1-4031-8984-a628dda926cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.386 238945 DEBUG nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.387 238945 DEBUG oslo_concurrency.lockutils [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.387 238945 DEBUG oslo_concurrency.lockutils [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.387 238945 DEBUG oslo_concurrency.lockutils [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a36add6-8f5a-4197-ba24-f5c29b83301e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.388 238945 DEBUG nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] No waiting events found dispatching network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.388 238945 WARNING nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received unexpected event network-vif-plugged-424bfede-9a65-4656-87bc-4e1c9124e547 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.388 238945 DEBUG nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-deleted-6fe4867f-99bb-4272-90ef-56a425b07f13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.388 238945 INFO nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Neutron deleted interface 6fe4867f-99bb-4272-90ef-56a425b07f13; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.388 238945 DEBUG nova.network.neutron [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Updating instance_info_cache with network_info: [{"id": "424bfede-9a65-4656-87bc-4e1c9124e547", "address": "fa:16:3e:4d:d2:0a", "network": {"id": "689b1c21-664a-46df-b8a2-8b9a794dba22", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1732597975", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.96", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad39416b63df4f6194a01f4e91fdda1c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424bfede-9a", "ovs_interfaceid": "424bfede-9a65-4656-87bc-4e1c9124e547", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.412 238945 DEBUG nova.compute.manager [req-b5a28381-f38d-4268-99a1-c1fa51019c74 req-68a75ef3-4e06-4a4f-80b2-7b6bd71a809a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Detach interface failed, port_id=6fe4867f-99bb-4272-90ef-56a425b07f13, reason: Instance 3a36add6-8f5a-4197-ba24-f5c29b83301e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.463 238945 DEBUG nova.network.neutron [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Updating instance_info_cache with network_info: [{"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.471 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5g_0ycy1" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.503 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image e435204e-d1d1-4031-8984-a628dda926cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.507 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/disk.config e435204e-d1d1-4031-8984-a628dda926cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.551 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Releasing lock "refresh_cache-d09fd69a-4503-4b5d-b452-b406d958ffab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.552 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Instance network_info: |[{"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.555 238945 DEBUG nova.compute.manager [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-changed-11be2e1f-225a-49ab-9814-310e74c3f48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.556 238945 DEBUG nova.compute.manager [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Refreshing instance network info cache due to event network-changed-11be2e1f-225a-49ab-9814-310e74c3f48a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.556 238945 DEBUG oslo_concurrency.lockutils [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d09fd69a-4503-4b5d-b452-b406d958ffab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.556 238945 DEBUG oslo_concurrency.lockutils [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d09fd69a-4503-4b5d-b452-b406d958ffab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.557 238945 DEBUG nova.network.neutron [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Refreshing network info cache for port 11be2e1f-225a-49ab-9814-310e74c3f48a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.562 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Start _get_guest_xml network_info=[{"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.568 238945 WARNING nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.581 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.582 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.587 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.587 238945 DEBUG nova.virt.libvirt.host [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.588 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.588 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.589 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.589 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.589 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.590 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.590 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.590 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.590 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.591 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.591 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.591 238945 DEBUG nova.virt.hardware [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.595 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.661 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/disk.config e435204e-d1d1-4031-8984-a628dda926cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.662 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Deleting local config drive /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc/disk.config because it was imported into RBD.#033[00m
Jan 27 08:52:38 np0005597378 kernel: tap1b57c2a6-91: entered promiscuous mode
Jan 27 08:52:38 np0005597378 NetworkManager[48904]: <info>  [1769521958.7389] manager: (tap1b57c2a6-91): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.744 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:38 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:38Z|00512|binding|INFO|Claiming lport 1b57c2a6-9156-4778-ad7e-2302f4523d88 for this chassis.
Jan 27 08:52:38 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:38Z|00513|binding|INFO|1b57c2a6-9156-4778-ad7e-2302f4523d88: Claiming fa:16:3e:2e:68:3e 10.100.0.7
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.766 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:68:3e 10.100.0.7'], port_security=['fa:16:3e:2e:68:3e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e435204e-d1d1-4031-8984-a628dda926cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1b57c2a6-9156-4778-ad7e-2302f4523d88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.767 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1b57c2a6-9156-4778-ad7e-2302f4523d88 in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f bound to our chassis#033[00m
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.768 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f#033[00m
Jan 27 08:52:38 np0005597378 systemd-udevd[291534]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:52:38 np0005597378 systemd-machined[207425]: New machine qemu-65-instance-00000039.
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.782 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[081a40ee-e8dc-4746-844e-8acd12038a15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.783 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0b1231fc-f1 in ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.787 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0b1231fc-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.787 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40ea030f-f6f2-45ae-bfd5-8071b37e77ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.788 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be6d5848-161a-4575-b35e-50cfc2ccb2a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:38 np0005597378 NetworkManager[48904]: <info>  [1769521958.7992] device (tap1b57c2a6-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:52:38 np0005597378 NetworkManager[48904]: <info>  [1769521958.7997] device (tap1b57c2a6-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:52:38 np0005597378 systemd[1]: Started Virtual Machine qemu-65-instance-00000039.
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.801 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[6f74b34a-aff0-4a88-be74-dd23815b29bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.833 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bd99ed-ef18-469f-84d4-9c3553e33ae1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.835 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:38 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:38Z|00514|binding|INFO|Setting lport 1b57c2a6-9156-4778-ad7e-2302f4523d88 ovn-installed in OVS
Jan 27 08:52:38 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:38Z|00515|binding|INFO|Setting lport 1b57c2a6-9156-4778-ad7e-2302f4523d88 up in Southbound
Jan 27 08:52:38 np0005597378 nova_compute[238941]: 2026-01-27 13:52:38.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.862 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb08994-a926-476d-8aae-6517cbeebf78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:38 np0005597378 NetworkManager[48904]: <info>  [1769521958.8688] manager: (tap0b1231fc-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/227)
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.871 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0c66dd-7185-4e40-9dc0-ac31466b1261]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.906 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[054309e7-ac70-41ef-a9b5-ef57daa9f55a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.908 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5444b8-fa33-46b2-9e37-7e07179ca315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:38 np0005597378 NetworkManager[48904]: <info>  [1769521958.9316] device (tap0b1231fc-f0): carrier: link connected
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.938 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6e77afd1-4f56-406c-a61c-b2aea90dea10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.954 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d295f73e-9e05-4ad4-ba33-3a5c10896a0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459855, 'reachable_time': 29424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291575, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.969 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c55a46-c36d-4a1e-b00b-2806da1f0ea8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6b:ccd6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459855, 'tstamp': 459855}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291576, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:38.985 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6492a4be-8e78-4e98-a227-328711b48c09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459855, 'reachable_time': 29424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291577, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.011 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa07080b-9703-4d79-8ca4-d34752719c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.067 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3021cb95-190e-4b65-b730-9c35ce1f39d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.068 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.068 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.068 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b1231fc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:39 np0005597378 kernel: tap0b1231fc-f0: entered promiscuous mode
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.070 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:39 np0005597378 NetworkManager[48904]: <info>  [1769521959.0708] manager: (tap0b1231fc-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.072 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.074 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b1231fc-f0, col_values=(('external_ids', {'iface-id': '9d3a2d95-6e13-45c9-8614-b22897c037b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:39Z|00516|binding|INFO|Releasing lport 9d3a2d95-6e13-45c9-8614-b22897c037b4 from this chassis (sb_readonly=0)
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.076 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.091 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.093 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9aed2cfb-0c97-4d7a-b646-a44dc653397f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.093 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.pid.haproxy
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:52:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:39.094 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'env', 'PROCESS_TAG=haproxy-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:52:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:52:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1375277792' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.273 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.297 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.302 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:39 np0005597378 podman[291629]: 2026-01-27 13:52:39.457911 +0000 UTC m=+0.033539302 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:52:39 np0005597378 podman[291629]: 2026-01-27 13:52:39.582175457 +0000 UTC m=+0.157803739 container create aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.586 238945 DEBUG nova.network.neutron [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Updated VIF entry in instance network info cache for port 11be2e1f-225a-49ab-9814-310e74c3f48a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.588 238945 DEBUG nova.network.neutron [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Updating instance_info_cache with network_info: [{"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.613 238945 DEBUG oslo_concurrency.lockutils [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d09fd69a-4503-4b5d-b452-b406d958ffab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.614 238945 DEBUG nova.compute.manager [req-edb3e662-84e2-4597-ba8d-964a012021a6 req-3405cb44-bf1c-4a34-892c-3a922299e50e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Received event network-vif-deleted-424bfede-9a65-4656-87bc-4e1c9124e547 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:39 np0005597378 systemd[1]: Started libpod-conmon-aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db.scope.
Jan 27 08:52:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:52:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91026fc61b30e3b7ddfd806d44501a78ca7450503fcde21b50ceaf45dff9768/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:39 np0005597378 podman[291629]: 2026-01-27 13:52:39.720898573 +0000 UTC m=+0.296526875 container init aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:52:39 np0005597378 podman[291629]: 2026-01-27 13:52:39.728066275 +0000 UTC m=+0.303694557 container start aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 08:52:39 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [NOTICE]   (291667) : New worker (291669) forked
Jan 27 08:52:39 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [NOTICE]   (291667) : Loading success.
Jan 27 08:52:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:52:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3022766058' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.910 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.913 238945 DEBUG nova.virt.libvirt.vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2118013111',display_name='tempest-tempest.common.compute-instance-2118013111-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2118013111-2',id=58,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-a5lcdo8w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:29Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=d09fd69a-4503-4b5d-b452-b406d958ffab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.914 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.915 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.917 238945 DEBUG nova.objects.instance [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'pci_devices' on Instance uuid d09fd69a-4503-4b5d-b452-b406d958ffab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.944 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  <uuid>d09fd69a-4503-4b5d-b452-b406d958ffab</uuid>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  <name>instance-0000003a</name>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <nova:name>tempest-tempest.common.compute-instance-2118013111-2</nova:name>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:52:38</nova:creationTime>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:        <nova:user uuid="f9e663079ce44f94a4dbe6125b395ce1">tempest-MultipleCreateTestJSON-644845764-project-member</nova:user>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:        <nova:project uuid="d333430b14814ea487cbd2af414c350f">tempest-MultipleCreateTestJSON-644845764</nova:project>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:        <nova:port uuid="11be2e1f-225a-49ab-9814-310e74c3f48a">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <entry name="serial">d09fd69a-4503-4b5d-b452-b406d958ffab</entry>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <entry name="uuid">d09fd69a-4503-4b5d-b452-b406d958ffab</entry>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d09fd69a-4503-4b5d-b452-b406d958ffab_disk">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d09fd69a-4503-4b5d-b452-b406d958ffab_disk.config">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:98:de:a4"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <target dev="tap11be2e1f-22"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/console.log" append="off"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:52:39 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:52:39 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:52:39 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:52:39 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.945 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Preparing to wait for external event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.945 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.946 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.946 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.947 238945 DEBUG nova.virt.libvirt.vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2118013111',display_name='tempest-tempest.common.compute-instance-2118013111-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2118013111-2',id=58,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-a5lcdo8w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:29Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=d09fd69a-4503-4b5d-b452-b406d958ffab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.947 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.948 238945 DEBUG nova.network.os_vif_util [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.948 238945 DEBUG os_vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.949 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.949 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.950 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.954 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11be2e1f-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.954 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11be2e1f-22, col_values=(('external_ids', {'iface-id': '11be2e1f-225a-49ab-9814-310e74c3f48a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:de:a4', 'vm-uuid': 'd09fd69a-4503-4b5d-b452-b406d958ffab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:39 np0005597378 NetworkManager[48904]: <info>  [1769521959.9581] manager: (tap11be2e1f-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.956 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.962 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.966 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:39 np0005597378 nova_compute[238941]: 2026-01-27 13:52:39.967 238945 INFO os_vif [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22')#033[00m
Jan 27 08:52:40 np0005597378 nova_compute[238941]: 2026-01-27 13:52:40.003 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521960.0034084, e435204e-d1d1-4031-8984-a628dda926cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:40 np0005597378 nova_compute[238941]: 2026-01-27 13:52:40.004 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] VM Started (Lifecycle Event)#033[00m
Jan 27 08:52:40 np0005597378 nova_compute[238941]: 2026-01-27 13:52:40.027 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:40 np0005597378 nova_compute[238941]: 2026-01-27 13:52:40.031 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521960.0042934, e435204e-d1d1-4031-8984-a628dda926cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:40 np0005597378 nova_compute[238941]: 2026-01-27 13:52:40.031 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:52:40 np0005597378 nova_compute[238941]: 2026-01-27 13:52:40.034 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:52:40 np0005597378 nova_compute[238941]: 2026-01-27 13:52:40.034 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:52:40 np0005597378 nova_compute[238941]: 2026-01-27 13:52:40.034 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No VIF found with MAC fa:16:3e:98:de:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:52:40 np0005597378 nova_compute[238941]: 2026-01-27 13:52:40.034 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Using config drive#033[00m
Jan 27 08:52:40 np0005597378 nova_compute[238941]: 2026-01-27 13:52:40.055 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:40 np0005597378 nova_compute[238941]: 2026-01-27 13:52:40.061 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:40 np0005597378 nova_compute[238941]: 2026-01-27 13:52:40.065 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:52:40 np0005597378 nova_compute[238941]: 2026-01-27 13:52:40.083 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:52:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Jan 27 08:52:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:52:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:52:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:52:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:52:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:52:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:52:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:52:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:52:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:52:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:52:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:52:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:52:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.132 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Creating config drive at /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/disk.config#033[00m
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.138 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfvuy9m5z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.299 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfvuy9m5z" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.339 238945 DEBUG nova.storage.rbd_utils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image d09fd69a-4503-4b5d-b452-b406d958ffab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.343 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/disk.config d09fd69a-4503-4b5d-b452-b406d958ffab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:41 np0005597378 podman[291887]: 2026-01-27 13:52:41.269371379 +0000 UTC m=+0.026295807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:52:41 np0005597378 podman[291887]: 2026-01-27 13:52:41.428683728 +0000 UTC m=+0.185608126 container create a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:52:41 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:41Z|00517|binding|INFO|Releasing lport 9d3a2d95-6e13-45c9-8614-b22897c037b4 from this chassis (sb_readonly=0)
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.518 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:41 np0005597378 systemd[1]: Started libpod-conmon-a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199.scope.
Jan 27 08:52:41 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.729 238945 DEBUG oslo_concurrency.processutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/disk.config d09fd69a-4503-4b5d-b452-b406d958ffab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.731 238945 INFO nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Deleting local config drive /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab/disk.config because it was imported into RBD.#033[00m
Jan 27 08:52:41 np0005597378 podman[291887]: 2026-01-27 13:52:41.748524578 +0000 UTC m=+0.505448976 container init a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:52:41 np0005597378 podman[291887]: 2026-01-27 13:52:41.756532013 +0000 UTC m=+0.513456401 container start a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:52:41 np0005597378 elated_bell[291941]: 167 167
Jan 27 08:52:41 np0005597378 systemd[1]: libpod-a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199.scope: Deactivated successfully.
Jan 27 08:52:41 np0005597378 podman[291887]: 2026-01-27 13:52:41.777753753 +0000 UTC m=+0.534678151 container attach a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 08:52:41 np0005597378 podman[291887]: 2026-01-27 13:52:41.77837434 +0000 UTC m=+0.535298738 container died a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 08:52:41 np0005597378 kernel: tap11be2e1f-22: entered promiscuous mode
Jan 27 08:52:41 np0005597378 NetworkManager[48904]: <info>  [1769521961.7884] manager: (tap11be2e1f-22): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Jan 27 08:52:41 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:41Z|00518|binding|INFO|Claiming lport 11be2e1f-225a-49ab-9814-310e74c3f48a for this chassis.
Jan 27 08:52:41 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:41Z|00519|binding|INFO|11be2e1f-225a-49ab-9814-310e74c3f48a: Claiming fa:16:3e:98:de:a4 10.100.0.8
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:41 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:41Z|00520|binding|INFO|Setting lport 11be2e1f-225a-49ab-9814-310e74c3f48a ovn-installed in OVS
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.815 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-497105c0ba6d4e053c39173d05b2d672215a9c778240957e726a0573d122bb2a-merged.mount: Deactivated successfully.
Jan 27 08:52:41 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:41Z|00521|binding|INFO|Setting lport 11be2e1f-225a-49ab-9814-310e74c3f48a up in Southbound
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.825 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:de:a4 10.100.0.8'], port_security=['fa:16:3e:98:de:a4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd09fd69a-4503-4b5d-b452-b406d958ffab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=11be2e1f-225a-49ab-9814-310e74c3f48a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.827 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 11be2e1f-225a-49ab-9814-310e74c3f48a in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f bound to our chassis#033[00m
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.828 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f#033[00m
Jan 27 08:52:41 np0005597378 systemd-udevd[291967]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:52:41 np0005597378 systemd-machined[207425]: New machine qemu-66-instance-0000003a.
Jan 27 08:52:41 np0005597378 podman[291887]: 2026-01-27 13:52:41.842029739 +0000 UTC m=+0.598954137 container remove a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:52:41 np0005597378 NetworkManager[48904]: <info>  [1769521961.8504] device (tap11be2e1f-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:52:41 np0005597378 NetworkManager[48904]: <info>  [1769521961.8512] device (tap11be2e1f-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:52:41 np0005597378 systemd[1]: Started Virtual Machine qemu-66-instance-0000003a.
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.851 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe6dbdc-cbd0-4a13-9513-e7fa8098a1a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:41 np0005597378 systemd[1]: libpod-conmon-a6df67af8d08757cb8e9d7eb30635d0efd3b096f3a055f75a5ebf44e719ce199.scope: Deactivated successfully.
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.890 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fde8fb7b-0d93-4d55-a015-c636a758ee31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.893 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[05c032f6-77a1-4b7f-81d2-8a7e8bd7e1d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.932 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c01bc5e1-4a37-472d-b344-4dbad71b99cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.951 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7e84d9-6daa-42df-9592-66934f71a38b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459855, 'reachable_time': 29424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291986, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.963 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:41 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:52:41 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.973 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[96ec07b9-9c8a-4cd0-b40f-9fcc33ec6f26]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459865, 'tstamp': 459865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291988, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459868, 'tstamp': 459868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291988, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.975 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.977 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:41 np0005597378 nova_compute[238941]: 2026-01-27 13:52:41.978 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.978 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b1231fc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.978 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.979 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b1231fc-f0, col_values=(('external_ids', {'iface-id': '9d3a2d95-6e13-45c9-8614-b22897c037b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:41.979 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:52:42 np0005597378 podman[291993]: 2026-01-27 13:52:42.053375875 +0000 UTC m=+0.056236551 container create 131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_booth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:52:42 np0005597378 systemd[1]: Started libpod-conmon-131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b.scope.
Jan 27 08:52:42 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:52:42 np0005597378 podman[291993]: 2026-01-27 13:52:42.028957729 +0000 UTC m=+0.031818435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:52:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f337e66d2db782f2cd871f2185507481d2eda6e93db6476ce191cf0cf4a78a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f337e66d2db782f2cd871f2185507481d2eda6e93db6476ce191cf0cf4a78a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f337e66d2db782f2cd871f2185507481d2eda6e93db6476ce191cf0cf4a78a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f337e66d2db782f2cd871f2185507481d2eda6e93db6476ce191cf0cf4a78a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f337e66d2db782f2cd871f2185507481d2eda6e93db6476ce191cf0cf4a78a96/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:42 np0005597378 podman[291993]: 2026-01-27 13:52:42.148971972 +0000 UTC m=+0.151832668 container init 131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_booth, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 08:52:42 np0005597378 podman[291993]: 2026-01-27 13:52:42.159852425 +0000 UTC m=+0.162713101 container start 131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_booth, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:52:42 np0005597378 podman[291993]: 2026-01-27 13:52:42.163123722 +0000 UTC m=+0.165984428 container attach 131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.207 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521962.2068362, d09fd69a-4503-4b5d-b452-b406d958ffab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.208 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] VM Started (Lifecycle Event)#033[00m
Jan 27 08:52:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.239 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.243 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521962.207203, d09fd69a-4503-4b5d-b452-b406d958ffab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.244 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.262 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.265 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.291 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:52:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.389 238945 DEBUG nova.compute.manager [req-a116158c-e904-467c-8779-a2a6b8af70dd req-3c5858de-a460-4205-8ecc-5dade0ddab19 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.390 238945 DEBUG oslo_concurrency.lockutils [req-a116158c-e904-467c-8779-a2a6b8af70dd req-3c5858de-a460-4205-8ecc-5dade0ddab19 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.391 238945 DEBUG oslo_concurrency.lockutils [req-a116158c-e904-467c-8779-a2a6b8af70dd req-3c5858de-a460-4205-8ecc-5dade0ddab19 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.391 238945 DEBUG oslo_concurrency.lockutils [req-a116158c-e904-467c-8779-a2a6b8af70dd req-3c5858de-a460-4205-8ecc-5dade0ddab19 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.391 238945 DEBUG nova.compute.manager [req-a116158c-e904-467c-8779-a2a6b8af70dd req-3c5858de-a460-4205-8ecc-5dade0ddab19 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Processing event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.392 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.396 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521962.3957639, d09fd69a-4503-4b5d-b452-b406d958ffab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.396 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.398 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.402 238945 INFO nova.virt.libvirt.driver [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Instance spawned successfully.#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.402 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.420 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.425 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.431 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.431 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.431 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.432 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.432 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.433 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.455 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.523 238945 INFO nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Took 13.34 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.523 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.624 238945 INFO nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Took 16.24 seconds to build instance.#033[00m
Jan 27 08:52:42 np0005597378 nova_compute[238941]: 2026-01-27 13:52:42.645 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:42 np0005597378 upbeat_booth[292047]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:52:42 np0005597378 upbeat_booth[292047]: --> All data devices are unavailable
Jan 27 08:52:42 np0005597378 systemd[1]: libpod-131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b.scope: Deactivated successfully.
Jan 27 08:52:42 np0005597378 podman[292071]: 2026-01-27 13:52:42.748165274 +0000 UTC m=+0.025891585 container died 131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:52:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f337e66d2db782f2cd871f2185507481d2eda6e93db6476ce191cf0cf4a78a96-merged.mount: Deactivated successfully.
Jan 27 08:52:42 np0005597378 podman[292071]: 2026-01-27 13:52:42.798009454 +0000 UTC m=+0.075735745 container remove 131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:52:42 np0005597378 systemd[1]: libpod-conmon-131726be84587ec8903acc442cc36f85fd5c3e7c78a351e21b55f92d0c44b18b.scope: Deactivated successfully.
Jan 27 08:52:43 np0005597378 podman[292151]: 2026-01-27 13:52:43.350481781 +0000 UTC m=+0.059060127 container create d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_snyder, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:52:43 np0005597378 nova_compute[238941]: 2026-01-27 13:52:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:52:43 np0005597378 systemd[1]: Started libpod-conmon-d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470.scope.
Jan 27 08:52:43 np0005597378 podman[292151]: 2026-01-27 13:52:43.323748623 +0000 UTC m=+0.032326989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:52:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:52:43 np0005597378 podman[292151]: 2026-01-27 13:52:43.454796353 +0000 UTC m=+0.163374719 container init d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_snyder, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:52:43 np0005597378 podman[292151]: 2026-01-27 13:52:43.465465849 +0000 UTC m=+0.174044185 container start d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_snyder, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:52:43 np0005597378 systemd[1]: libpod-d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470.scope: Deactivated successfully.
Jan 27 08:52:43 np0005597378 vigorous_snyder[292168]: 167 167
Jan 27 08:52:43 np0005597378 podman[292151]: 2026-01-27 13:52:43.473497355 +0000 UTC m=+0.182075721 container attach d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 08:52:43 np0005597378 conmon[292168]: conmon d3849e4167f4e1c4b296 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470.scope/container/memory.events
Jan 27 08:52:43 np0005597378 podman[292151]: 2026-01-27 13:52:43.474777719 +0000 UTC m=+0.183356085 container died d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:52:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2dd24f57df7fb16afa115c9a2e6391e5f566b23759a92450fe0f33a5c3f47f38-merged.mount: Deactivated successfully.
Jan 27 08:52:43 np0005597378 podman[292151]: 2026-01-27 13:52:43.524606878 +0000 UTC m=+0.233185224 container remove d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:52:43 np0005597378 systemd[1]: libpod-conmon-d3849e4167f4e1c4b2962d7a0b7de88151e20869e7d9aeed28401989b2e78470.scope: Deactivated successfully.
Jan 27 08:52:43 np0005597378 podman[292191]: 2026-01-27 13:52:43.7310066 +0000 UTC m=+0.046238122 container create de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:52:43 np0005597378 systemd[1]: Started libpod-conmon-de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3.scope.
Jan 27 08:52:43 np0005597378 podman[292191]: 2026-01-27 13:52:43.709311078 +0000 UTC m=+0.024542620 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:52:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:52:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb3aa7cdda0ba8316ed9242569c0e4bdec3ddf636e20355e2899a4bd3b79fd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb3aa7cdda0ba8316ed9242569c0e4bdec3ddf636e20355e2899a4bd3b79fd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb3aa7cdda0ba8316ed9242569c0e4bdec3ddf636e20355e2899a4bd3b79fd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb3aa7cdda0ba8316ed9242569c0e4bdec3ddf636e20355e2899a4bd3b79fd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:43 np0005597378 podman[292191]: 2026-01-27 13:52:43.847124539 +0000 UTC m=+0.162356081 container init de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:52:43 np0005597378 podman[292191]: 2026-01-27 13:52:43.85462133 +0000 UTC m=+0.169852852 container start de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:52:43 np0005597378 podman[292191]: 2026-01-27 13:52:43.872035988 +0000 UTC m=+0.187267540 container attach de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]: {
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:    "0": [
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:        {
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "devices": [
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "/dev/loop3"
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            ],
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_name": "ceph_lv0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_size": "21470642176",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "name": "ceph_lv0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "tags": {
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.cluster_name": "ceph",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.crush_device_class": "",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.encrypted": "0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.objectstore": "bluestore",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.osd_id": "0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.type": "block",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.vdo": "0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.with_tpm": "0"
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            },
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "type": "block",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "vg_name": "ceph_vg0"
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:        }
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:    ],
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:    "1": [
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:        {
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "devices": [
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "/dev/loop4"
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            ],
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_name": "ceph_lv1",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_size": "21470642176",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "name": "ceph_lv1",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "tags": {
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.cluster_name": "ceph",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.crush_device_class": "",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.encrypted": "0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.objectstore": "bluestore",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.osd_id": "1",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.type": "block",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.vdo": "0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.with_tpm": "0"
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            },
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "type": "block",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "vg_name": "ceph_vg1"
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:        }
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:    ],
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:    "2": [
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:        {
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "devices": [
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "/dev/loop5"
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            ],
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_name": "ceph_lv2",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_size": "21470642176",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "name": "ceph_lv2",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "tags": {
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.cluster_name": "ceph",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.crush_device_class": "",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.encrypted": "0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.objectstore": "bluestore",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.osd_id": "2",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.type": "block",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.vdo": "0",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:                "ceph.with_tpm": "0"
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            },
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "type": "block",
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:            "vg_name": "ceph_vg2"
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:        }
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]:    ]
Jan 27 08:52:44 np0005597378 amazing_stonebraker[292207]: }
Jan 27 08:52:44 np0005597378 systemd[1]: libpod-de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3.scope: Deactivated successfully.
Jan 27 08:52:44 np0005597378 podman[292191]: 2026-01-27 13:52:44.173206906 +0000 UTC m=+0.488438428 container died de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:52:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 08:52:44 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5cb3aa7cdda0ba8316ed9242569c0e4bdec3ddf636e20355e2899a4bd3b79fd6-merged.mount: Deactivated successfully.
Jan 27 08:52:44 np0005597378 podman[292191]: 2026-01-27 13:52:44.608139367 +0000 UTC m=+0.923370909 container remove de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 08:52:44 np0005597378 systemd[1]: libpod-conmon-de344c61de7f8ce008fee81862c5a2556127898f59d4a76288506a5059bc55c3.scope: Deactivated successfully.
Jan 27 08:52:44 np0005597378 nova_compute[238941]: 2026-01-27 13:52:44.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:45 np0005597378 podman[292293]: 2026-01-27 13:52:45.075835638 +0000 UTC m=+0.027823328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:52:45 np0005597378 podman[292293]: 2026-01-27 13:52:45.367815719 +0000 UTC m=+0.319803389 container create c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kowalevski, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:52:45 np0005597378 nova_compute[238941]: 2026-01-27 13:52:45.533 238945 DEBUG nova.compute.manager [req-6ca1c6f9-5a87-49c1-af1c-7c8117192b95 req-39ba3ab0-6c9b-4919-9b58-c56aa430d96d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:45 np0005597378 nova_compute[238941]: 2026-01-27 13:52:45.534 238945 DEBUG oslo_concurrency.lockutils [req-6ca1c6f9-5a87-49c1-af1c-7c8117192b95 req-39ba3ab0-6c9b-4919-9b58-c56aa430d96d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:45 np0005597378 nova_compute[238941]: 2026-01-27 13:52:45.535 238945 DEBUG oslo_concurrency.lockutils [req-6ca1c6f9-5a87-49c1-af1c-7c8117192b95 req-39ba3ab0-6c9b-4919-9b58-c56aa430d96d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:45 np0005597378 nova_compute[238941]: 2026-01-27 13:52:45.535 238945 DEBUG oslo_concurrency.lockutils [req-6ca1c6f9-5a87-49c1-af1c-7c8117192b95 req-39ba3ab0-6c9b-4919-9b58-c56aa430d96d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:45 np0005597378 nova_compute[238941]: 2026-01-27 13:52:45.536 238945 DEBUG nova.compute.manager [req-6ca1c6f9-5a87-49c1-af1c-7c8117192b95 req-39ba3ab0-6c9b-4919-9b58-c56aa430d96d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] No waiting events found dispatching network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:52:45 np0005597378 nova_compute[238941]: 2026-01-27 13:52:45.536 238945 WARNING nova.compute.manager [req-6ca1c6f9-5a87-49c1-af1c-7c8117192b95 req-39ba3ab0-6c9b-4919-9b58-c56aa430d96d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received unexpected event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a for instance with vm_state active and task_state None.#033[00m
Jan 27 08:52:45 np0005597378 systemd[1]: Started libpod-conmon-c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a.scope.
Jan 27 08:52:45 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:52:45 np0005597378 podman[292293]: 2026-01-27 13:52:45.945993387 +0000 UTC m=+0.897981077 container init c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kowalevski, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:52:45 np0005597378 podman[292293]: 2026-01-27 13:52:45.95619686 +0000 UTC m=+0.908184530 container start c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kowalevski, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:52:45 np0005597378 silly_kowalevski[292309]: 167 167
Jan 27 08:52:45 np0005597378 systemd[1]: libpod-c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a.scope: Deactivated successfully.
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.105 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521951.1034722, 3a36add6-8f5a-4197-ba24-f5c29b83301e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.106 238945 INFO nova.compute.manager [-] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:52:46 np0005597378 podman[292293]: 2026-01-27 13:52:46.127128951 +0000 UTC m=+1.079116651 container attach c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:52:46 np0005597378 podman[292293]: 2026-01-27 13:52:46.127734948 +0000 UTC m=+1.079722638 container died c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 08:52:46 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5d38d5aabaf01104576e9ab7c9304620fc74c8ea5ee40a7899be4fc05796baf4-merged.mount: Deactivated successfully.
Jan 27 08:52:46 np0005597378 podman[292293]: 2026-01-27 13:52:46.288616118 +0000 UTC m=+1.240603788 container remove c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:52:46 np0005597378 systemd[1]: libpod-conmon-c276c1784589cc29e46cfd197446bd4ceb7d5d2092e3df2c5e3e1dabd4951c3a.scope: Deactivated successfully.
Jan 27 08:52:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:46.300 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:46.301 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:46.302 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:52:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 956 KiB/s rd, 25 KiB/s wr, 56 op/s
Jan 27 08:52:46 np0005597378 podman[292336]: 2026-01-27 13:52:46.493255834 +0000 UTC m=+0.068129611 container create d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lehmann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 27 08:52:46 np0005597378 podman[292336]: 2026-01-27 13:52:46.454613096 +0000 UTC m=+0.029486903 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:52:46 np0005597378 systemd[1]: Started libpod-conmon-d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd.scope.
Jan 27 08:52:46 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:52:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0a8ab74ad8b35bc26647a06d5d13e6061baf421a18682c0b55ab90b559e47a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0a8ab74ad8b35bc26647a06d5d13e6061baf421a18682c0b55ab90b559e47a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0a8ab74ad8b35bc26647a06d5d13e6061baf421a18682c0b55ab90b559e47a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:46 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0a8ab74ad8b35bc26647a06d5d13e6061baf421a18682c0b55ab90b559e47a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:46 np0005597378 podman[292336]: 2026-01-27 13:52:46.607955054 +0000 UTC m=+0.182828861 container init d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lehmann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:52:46 np0005597378 podman[292336]: 2026-01-27 13:52:46.614530501 +0000 UTC m=+0.189404278 container start d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:52:46 np0005597378 podman[292336]: 2026-01-27 13:52:46.644210348 +0000 UTC m=+0.219084135 container attach d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.830 238945 DEBUG nova.compute.manager [req-bbc46191-1cae-4cad-a4e7-6b72ff91e19f req-719680ae-40d7-4e13-bd2e-9ba88394c3a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Received event network-vif-plugged-1b57c2a6-9156-4778-ad7e-2302f4523d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.833 238945 DEBUG oslo_concurrency.lockutils [req-bbc46191-1cae-4cad-a4e7-6b72ff91e19f req-719680ae-40d7-4e13-bd2e-9ba88394c3a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e435204e-d1d1-4031-8984-a628dda926cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.833 238945 DEBUG oslo_concurrency.lockutils [req-bbc46191-1cae-4cad-a4e7-6b72ff91e19f req-719680ae-40d7-4e13-bd2e-9ba88394c3a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.834 238945 DEBUG oslo_concurrency.lockutils [req-bbc46191-1cae-4cad-a4e7-6b72ff91e19f req-719680ae-40d7-4e13-bd2e-9ba88394c3a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.834 238945 DEBUG nova.compute.manager [req-bbc46191-1cae-4cad-a4e7-6b72ff91e19f req-719680ae-40d7-4e13-bd2e-9ba88394c3a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Processing event network-vif-plugged-1b57c2a6-9156-4778-ad7e-2302f4523d88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.835 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.839 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521966.839737, e435204e-d1d1-4031-8984-a628dda926cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.840 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] VM Resumed (Lifecycle Event)
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.843 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.847 238945 INFO nova.virt.libvirt.driver [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Instance spawned successfully.
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.848 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.854 238945 DEBUG nova.compute.manager [None req-2b2c5eb0-92a9-44df-9716-5e5dc13fb82d - - - - - -] [instance: 3a36add6-8f5a-4197-ba24-f5c29b83301e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.875 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.882 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.883 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.884 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.884 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.885 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.885 238945 DEBUG nova.virt.libvirt.driver [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.891 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.925 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.967 238945 INFO nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Took 19.13 seconds to spawn the instance on the hypervisor.
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.970 238945 DEBUG nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:52:46 np0005597378 nova_compute[238941]: 2026-01-27 13:52:46.970 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.060 238945 INFO nova.compute.manager [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Took 20.70 seconds to build instance.
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.087 238945 DEBUG oslo_concurrency.lockutils [None req-6dd9e687-fedc-4333-b23f-8031b7da7d7b f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:52:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.455 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.456 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.486 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.487 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.487 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.488 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.488 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:52:47 np0005597378 lvm[292433]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:52:47 np0005597378 lvm[292433]: VG ceph_vg1 finished
Jan 27 08:52:47 np0005597378 lvm[292432]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:52:47 np0005597378 lvm[292432]: VG ceph_vg0 finished
Jan 27 08:52:47 np0005597378 lvm[292435]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:52:47 np0005597378 lvm[292435]: VG ceph_vg2 finished
Jan 27 08:52:47 np0005597378 eager_lehmann[292354]: {}
Jan 27 08:52:47 np0005597378 systemd[1]: libpod-d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd.scope: Deactivated successfully.
Jan 27 08:52:47 np0005597378 systemd[1]: libpod-d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd.scope: Consumed 1.586s CPU time.
Jan 27 08:52:47 np0005597378 podman[292446]: 2026-01-27 13:52:47.714551943 +0000 UTC m=+0.036170202 container died d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lehmann, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 08:52:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-aa0a8ab74ad8b35bc26647a06d5d13e6061baf421a18682c0b55ab90b559e47a-merged.mount: Deactivated successfully.
Jan 27 08:52:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:52:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:52:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:52:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:52:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:52:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.828 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "511a49bc-bc87-444f-8323-95e4c88313c6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.830 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.859 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 08:52:47 np0005597378 podman[292446]: 2026-01-27 13:52:47.859857046 +0000 UTC m=+0.181475285 container remove d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 08:52:47 np0005597378 systemd[1]: libpod-conmon-d009762227bd59bf617dbe203b460346216b6085384edc9437d0217c5aab51fd.scope: Deactivated successfully.
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.945 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.946 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:52:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.961 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 08:52:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:52:47 np0005597378 nova_compute[238941]: 2026-01-27 13:52:47.963 238945 INFO nova.compute.claims [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Claim successful on node compute-0.ctlplane.example.com
Jan 27 08:52:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:52:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.035 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "e435204e-d1d1-4031-8984-a628dda926cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.036 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.037 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "e435204e-d1d1-4031-8984-a628dda926cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.037 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.037 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.039 238945 INFO nova.compute.manager [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Terminating instance
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.041 238945 DEBUG nova.compute.manager [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 08:52:48 np0005597378 kernel: tap1b57c2a6-91 (unregistering): left promiscuous mode
Jan 27 08:52:48 np0005597378 NetworkManager[48904]: <info>  [1769521968.0970] device (tap1b57c2a6-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:52:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:48Z|00522|binding|INFO|Releasing lport 1b57c2a6-9156-4778-ad7e-2302f4523d88 from this chassis (sb_readonly=0)
Jan 27 08:52:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:48Z|00523|binding|INFO|Setting lport 1b57c2a6-9156-4778-ad7e-2302f4523d88 down in Southbound
Jan 27 08:52:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:48Z|00524|binding|INFO|Removing iface tap1b57c2a6-91 ovn-installed in OVS
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.120 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:68:3e 10.100.0.7'], port_security=['fa:16:3e:2e:68:3e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e435204e-d1d1-4031-8984-a628dda926cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1b57c2a6-9156-4778-ad7e-2302f4523d88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.122 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1b57c2a6-9156-4778-ad7e-2302f4523d88 in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f unbound from our chassis
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.124 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.137 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.157 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.161 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9256cb72-2fc6-4b4f-9b30-56e1779156c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:52:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:52:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3559789162' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:52:48 np0005597378 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Deactivated successfully.
Jan 27 08:52:48 np0005597378 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Consumed 2.330s CPU time.
Jan 27 08:52:48 np0005597378 systemd-machined[207425]: Machine qemu-65-instance-00000039 terminated.
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.206 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bca399-d76e-452e-b36c-b886bcc3754d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.209 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6d7f871f-937c-4909-b269-8177ea47c016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.212 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.234 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.235 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.235 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.236 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.236 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.237 238945 INFO nova.compute.manager [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Terminating instance#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.238 238945 DEBUG nova.compute.manager [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.241 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0872e314-f1a2-4105-85c0-831ce7a38a61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.270 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.271 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90f77ec6-eece-4acb-9e2d-9c4ee988e6d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459855, 'reachable_time': 29424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292510, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.290 238945 INFO nova.virt.libvirt.driver [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Instance destroyed successfully.#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.291 238945 DEBUG nova.objects.instance [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'resources' on Instance uuid e435204e-d1d1-4031-8984-a628dda926cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.294 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[23d2795f-ee21-486b-b0b0-c7c9a378c209]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459865, 'tstamp': 459865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292518, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459868, 'tstamp': 459868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292518, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.296 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.297 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:48 np0005597378 kernel: tap11be2e1f-22 (unregistering): left promiscuous mode
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.306 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b1231fc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.307 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.306 238945 DEBUG nova.virt.libvirt.vif [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2118013111',display_name='tempest-tempest.common.compute-instance-2118013111-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2118013111-1',id=57,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:52:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-a5lcdo8w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:52:47Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=e435204e-d1d1-4031-8984-a628dda926cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.307 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b1231fc-f0, col_values=(('external_ids', {'iface-id': '9d3a2d95-6e13-45c9-8614-b22897c037b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.308 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.307 238945 DEBUG nova.network.os_vif_util [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "address": "fa:16:3e:2e:68:3e", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b57c2a6-91", "ovs_interfaceid": "1b57c2a6-9156-4778-ad7e-2302f4523d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.309 238945 DEBUG nova.network.os_vif_util [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.309 238945 DEBUG os_vif [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:52:48 np0005597378 NetworkManager[48904]: <info>  [1769521968.3100] device (tap11be2e1f-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.312 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b57c2a6-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.313 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.314 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:52:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:48Z|00525|binding|INFO|Releasing lport 11be2e1f-225a-49ab-9814-310e74c3f48a from this chassis (sb_readonly=0)
Jan 27 08:52:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:48Z|00526|binding|INFO|Setting lport 11be2e1f-225a-49ab-9814-310e74c3f48a down in Southbound
Jan 27 08:52:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:48Z|00527|binding|INFO|Removing iface tap11be2e1f-22 ovn-installed in OVS
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.328 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:de:a4 10.100.0.8'], port_security=['fa:16:3e:98:de:a4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd09fd69a-4503-4b5d-b452-b406d958ffab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=11be2e1f-225a-49ab-9814-310e74c3f48a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.330 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 11be2e1f-225a-49ab-9814-310e74c3f48a in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f unbound from our chassis#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.331 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.333 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef8b7bc-3371-427f-82ea-28fc13caf10e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.334 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f namespace which is not needed anymore#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.342 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.345 238945 INFO os_vif [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:68:3e,bridge_name='br-int',has_traffic_filtering=True,id=1b57c2a6-9156-4778-ad7e-2302f4523d88,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b57c2a6-91')#033[00m
Jan 27 08:52:48 np0005597378 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Jan 27 08:52:48 np0005597378 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003a.scope: Consumed 6.304s CPU time.
Jan 27 08:52:48 np0005597378 systemd-machined[207425]: Machine qemu-66-instance-0000003a terminated.
Jan 27 08:52:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 134 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 25 KiB/s wr, 78 op/s
Jan 27 08:52:48 np0005597378 NetworkManager[48904]: <info>  [1769521968.4595] manager: (tap11be2e1f-22): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.480 238945 INFO nova.virt.libvirt.driver [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Instance destroyed successfully.#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.481 238945 DEBUG nova.objects.instance [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'resources' on Instance uuid d09fd69a-4503-4b5d-b452-b406d958ffab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:52:48 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [NOTICE]   (291667) : haproxy version is 2.8.14-c23fe91
Jan 27 08:52:48 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [NOTICE]   (291667) : path to executable is /usr/sbin/haproxy
Jan 27 08:52:48 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [WARNING]  (291667) : Exiting Master process...
Jan 27 08:52:48 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [WARNING]  (291667) : Exiting Master process...
Jan 27 08:52:48 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [ALERT]    (291667) : Current worker (291669) exited with code 143 (Terminated)
Jan 27 08:52:48 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[291663]: [WARNING]  (291667) : All workers exited. Exiting... (0)
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.498 238945 DEBUG nova.virt.libvirt.vif [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2118013111',display_name='tempest-tempest.common.compute-instance-2118013111-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2118013111-2',id=58,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-27T13:52:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-a5lcdo8w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:52:42Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=d09fd69a-4503-4b5d-b452-b406d958ffab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.499 238945 DEBUG nova.network.os_vif_util [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "11be2e1f-225a-49ab-9814-310e74c3f48a", "address": "fa:16:3e:98:de:a4", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11be2e1f-22", "ovs_interfaceid": "11be2e1f-225a-49ab-9814-310e74c3f48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.499 238945 DEBUG nova.network.os_vif_util [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:48 np0005597378 systemd[1]: libpod-aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db.scope: Deactivated successfully.
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.500 238945 DEBUG os_vif [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.503 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11be2e1f-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.508 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:48 np0005597378 podman[292580]: 2026-01-27 13:52:48.508098305 +0000 UTC m=+0.076843165 container died aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.510 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.514 238945 INFO os_vif [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:de:a4,bridge_name='br-int',has_traffic_filtering=True,id=11be2e1f-225a-49ab-9814-310e74c3f48a,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11be2e1f-22')#033[00m
Jan 27 08:52:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db-userdata-shm.mount: Deactivated successfully.
Jan 27 08:52:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b91026fc61b30e3b7ddfd806d44501a78ca7450503fcde21b50ceaf45dff9768-merged.mount: Deactivated successfully.
Jan 27 08:52:48 np0005597378 podman[292580]: 2026-01-27 13:52:48.584556679 +0000 UTC m=+0.153301519 container cleanup aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:52:48 np0005597378 systemd[1]: libpod-conmon-aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db.scope: Deactivated successfully.
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.602 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.602 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.607 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.608 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:52:48 np0005597378 podman[292640]: 2026-01-27 13:52:48.665208084 +0000 UTC m=+0.055007558 container remove aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.678 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c92f92-045a-44c5-99ef-296074c6e58b]: (4, ('Tue Jan 27 01:52:48 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f (aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db)\naee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db\nTue Jan 27 01:52:48 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f (aee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db)\naee2e540bace41b11d8f02cba21bdf704248f840ab3630c6edb23f6688ee34db\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.681 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c62f2425-2b82-465b-9e31-8f6faa0d94e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.683 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.688 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:48 np0005597378 kernel: tap0b1231fc-f0: left promiscuous mode
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.708 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.717 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f16c04-e603-4388-b93a-ada8770337fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.737 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[93ca9b8b-2472-4213-aeef-c164bc158dbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.740 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61b60286-0906-4038-b9d4-b351661e57ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.768 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cce5211f-11f6-453e-9f40-f5c7c6ea796f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459848, 'reachable_time': 37942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292656, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:48 np0005597378 systemd[1]: run-netns-ovnmeta\x2d0b1231fc\x2df48c\x2d438b\x2d9fe3\x2d1ac6cd8a496f.mount: Deactivated successfully.
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.774 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:52:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:48.774 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b0683e48-3e79-4e43-9781-f300784838b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.792 238945 DEBUG nova.compute.manager [req-26260f96-c57c-4d67-b34f-d5617b8b910a req-920e28f1-9cdf-46c5-a7d3-7fffd986fedc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-vif-unplugged-11be2e1f-225a-49ab-9814-310e74c3f48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.793 238945 DEBUG oslo_concurrency.lockutils [req-26260f96-c57c-4d67-b34f-d5617b8b910a req-920e28f1-9cdf-46c5-a7d3-7fffd986fedc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.794 238945 DEBUG oslo_concurrency.lockutils [req-26260f96-c57c-4d67-b34f-d5617b8b910a req-920e28f1-9cdf-46c5-a7d3-7fffd986fedc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.794 238945 DEBUG oslo_concurrency.lockutils [req-26260f96-c57c-4d67-b34f-d5617b8b910a req-920e28f1-9cdf-46c5-a7d3-7fffd986fedc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.794 238945 DEBUG nova.compute.manager [req-26260f96-c57c-4d67-b34f-d5617b8b910a req-920e28f1-9cdf-46c5-a7d3-7fffd986fedc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] No waiting events found dispatching network-vif-unplugged-11be2e1f-225a-49ab-9814-310e74c3f48a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.794 238945 DEBUG nova.compute.manager [req-26260f96-c57c-4d67-b34f-d5617b8b910a req-920e28f1-9cdf-46c5-a7d3-7fffd986fedc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-vif-unplugged-11be2e1f-225a-49ab-9814-310e74c3f48a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:52:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:52:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3037340085' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.822 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.833 238945 INFO nova.virt.libvirt.driver [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Deleting instance files /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc_del#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.834 238945 INFO nova.virt.libvirt.driver [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Deletion of /var/lib/nova/instances/e435204e-d1d1-4031-8984-a628dda926cc_del complete#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.840 238945 DEBUG nova.compute.provider_tree [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.859 238945 DEBUG nova.scheduler.client.report [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.894 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.895 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.909 238945 INFO nova.compute.manager [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.910 238945 DEBUG oslo.service.loopingcall [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.912 238945 DEBUG nova.compute.manager [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.912 238945 DEBUG nova.network.neutron [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.946 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.947 238945 DEBUG nova.network.neutron [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.969 238945 INFO nova.virt.libvirt.driver [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Deleting instance files /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab_del#033[00m
Jan 27 08:52:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:52:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.971 238945 INFO nova.virt.libvirt.driver [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Deletion of /var/lib/nova/instances/d09fd69a-4503-4b5d-b452-b406d958ffab_del complete#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.987 238945 DEBUG nova.compute.manager [req-2e1db4c2-c2f1-4021-b3f8-9064b277e898 req-c8e036c7-7d85-4e44-88f9-8afd7678ef2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Received event network-vif-plugged-1b57c2a6-9156-4778-ad7e-2302f4523d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.988 238945 DEBUG oslo_concurrency.lockutils [req-2e1db4c2-c2f1-4021-b3f8-9064b277e898 req-c8e036c7-7d85-4e44-88f9-8afd7678ef2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e435204e-d1d1-4031-8984-a628dda926cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.988 238945 DEBUG oslo_concurrency.lockutils [req-2e1db4c2-c2f1-4021-b3f8-9064b277e898 req-c8e036c7-7d85-4e44-88f9-8afd7678ef2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.988 238945 DEBUG oslo_concurrency.lockutils [req-2e1db4c2-c2f1-4021-b3f8-9064b277e898 req-c8e036c7-7d85-4e44-88f9-8afd7678ef2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.989 238945 DEBUG nova.compute.manager [req-2e1db4c2-c2f1-4021-b3f8-9064b277e898 req-c8e036c7-7d85-4e44-88f9-8afd7678ef2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] No waiting events found dispatching network-vif-plugged-1b57c2a6-9156-4778-ad7e-2302f4523d88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:52:48 np0005597378 nova_compute[238941]: 2026-01-27 13:52:48.989 238945 WARNING nova.compute.manager [req-2e1db4c2-c2f1-4021-b3f8-9064b277e898 req-c8e036c7-7d85-4e44-88f9-8afd7678ef2c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Received unexpected event network-vif-plugged-1b57c2a6-9156-4778-ad7e-2302f4523d88 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.017 238945 INFO nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.069 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.079 238945 INFO nova.compute.manager [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.080 238945 DEBUG oslo.service.loopingcall [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.081 238945 DEBUG nova.compute.manager [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.081 238945 DEBUG nova.network.neutron [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.089 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.091 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3907MB free_disk=59.94608336687088GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.091 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.091 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.187 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e435204e-d1d1-4031-8984-a628dda926cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.188 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance d09fd69a-4503-4b5d-b452-b406d958ffab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.188 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 511a49bc-bc87-444f-8323-95e4c88313c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.188 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.188 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.196 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.199 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.201 238945 INFO nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Creating image(s)#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.224 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.259 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.287 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.292 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.340 238945 DEBUG nova.policy [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5ced810fd6141b292a2237ebe49cfc9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c910283aa95c4275954bee4904b21d1e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.382 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.383 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.383 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.384 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.407 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.411 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 511a49bc-bc87-444f-8323-95e4c88313c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.554 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.853 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 511a49bc-bc87-444f-8323-95e4c88313c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.922 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] resizing rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:52:49 np0005597378 nova_compute[238941]: 2026-01-27 13:52:49.984 238945 DEBUG nova.network.neutron [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.021 238945 INFO nova.compute.manager [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Took 0.94 seconds to deallocate network for instance.#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.032 238945 DEBUG nova.objects.instance [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'migration_context' on Instance uuid 511a49bc-bc87-444f-8323-95e4c88313c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.035 238945 DEBUG nova.network.neutron [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.057 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.059 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Ensure instance console log exists: /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.059 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.060 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.060 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.064 238945 INFO nova.compute.manager [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Took 1.15 seconds to deallocate network for instance.#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.078 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.113 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:52:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745581939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.251 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.259 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.273 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.296 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.297 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.297 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.368 238945 DEBUG oslo_concurrency.processutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 116 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 815 KiB/s wr, 130 op/s
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.889 238945 DEBUG nova.compute.manager [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.890 238945 DEBUG oslo_concurrency.lockutils [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.890 238945 DEBUG oslo_concurrency.lockutils [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.890 238945 DEBUG oslo_concurrency.lockutils [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.891 238945 DEBUG nova.compute.manager [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] No waiting events found dispatching network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.891 238945 WARNING nova.compute.manager [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received unexpected event network-vif-plugged-11be2e1f-225a-49ab-9814-310e74c3f48a for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.891 238945 DEBUG nova.compute.manager [req-530091cc-9c25-4201-922c-17902f9d765f req-2f823c3d-6f5e-427e-b09e-6ec282ef154a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Received event network-vif-deleted-11be2e1f-225a-49ab-9814-310e74c3f48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:50 np0005597378 nova_compute[238941]: 2026-01-27 13:52:50.956 238945 DEBUG nova.network.neutron [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Successfully created port: ff596883-7a7a-4226-a61f-de4382f6ff0e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:52:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:52:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1240676182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.020 238945 DEBUG oslo_concurrency.processutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.026 238945 DEBUG nova.compute.provider_tree [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.053 238945 DEBUG nova.scheduler.client.report [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.081 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.083 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.109 238945 INFO nova.scheduler.client.report [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Deleted allocations for instance d09fd69a-4503-4b5d-b452-b406d958ffab#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.166 238945 DEBUG oslo_concurrency.processutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.202 238945 DEBUG oslo_concurrency.lockutils [None req-30fb6e03-742d-409c-b8fd-034b3ec05252 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "d09fd69a-4503-4b5d-b452-b406d958ffab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.205 238945 DEBUG nova.compute.manager [req-bd439719-cd90-4d74-bd24-0b1c4229e945 req-05eebae8-dc3a-46e7-978a-fb616a5acc93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Received event network-vif-deleted-1b57c2a6-9156-4778-ad7e-2302f4523d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.222 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.223 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.223 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.223 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:52:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:52:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283709805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.791 238945 DEBUG oslo_concurrency.processutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.800 238945 DEBUG nova.compute.provider_tree [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.816 238945 DEBUG nova.scheduler.client.report [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.843 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.867 238945 INFO nova.scheduler.client.report [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Deleted allocations for instance e435204e-d1d1-4031-8984-a628dda926cc#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.935 238945 DEBUG oslo_concurrency.lockutils [None req-b4bc489e-88a0-4947-8397-0ff45c1dfea5 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "e435204e-d1d1-4031-8984-a628dda926cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:51 np0005597378 nova_compute[238941]: 2026-01-27 13:52:51.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 116 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 814 KiB/s wr, 129 op/s
Jan 27 08:52:52 np0005597378 podman[292893]: 2026-01-27 13:52:52.716270778 +0000 UTC m=+0.053511538 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:52:52 np0005597378 podman[292892]: 2026-01-27 13:52:52.748225446 +0000 UTC m=+0.087950692 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:52:52 np0005597378 nova_compute[238941]: 2026-01-27 13:52:52.850 238945 DEBUG nova.network.neutron [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Successfully updated port: ff596883-7a7a-4226-a61f-de4382f6ff0e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:52:52 np0005597378 nova_compute[238941]: 2026-01-27 13:52:52.873 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:52:52 np0005597378 nova_compute[238941]: 2026-01-27 13:52:52.873 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquired lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:52:52 np0005597378 nova_compute[238941]: 2026-01-27 13:52:52.873 238945 DEBUG nova.network.neutron [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:52:53 np0005597378 nova_compute[238941]: 2026-01-27 13:52:53.039 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:53 np0005597378 nova_compute[238941]: 2026-01-27 13:52:53.040 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:53 np0005597378 nova_compute[238941]: 2026-01-27 13:52:53.042 238945 DEBUG nova.network.neutron [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:52:53 np0005597378 nova_compute[238941]: 2026-01-27 13:52:53.083 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:52:53 np0005597378 nova_compute[238941]: 2026-01-27 13:52:53.199 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:53 np0005597378 nova_compute[238941]: 2026-01-27 13:52:53.199 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:53 np0005597378 nova_compute[238941]: 2026-01-27 13:52:53.207 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:52:53 np0005597378 nova_compute[238941]: 2026-01-27 13:52:53.207 238945 INFO nova.compute.claims [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:52:53 np0005597378 nova_compute[238941]: 2026-01-27 13:52:53.375 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:53 np0005597378 nova_compute[238941]: 2026-01-27 13:52:53.509 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:52:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352043833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.005 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.013 238945 DEBUG nova.compute.provider_tree [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.083 238945 DEBUG nova.scheduler.client.report [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.106 238945 DEBUG nova.compute.manager [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Received event network-changed-ff596883-7a7a-4226-a61f-de4382f6ff0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.107 238945 DEBUG nova.compute.manager [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Refreshing instance network info cache due to event network-changed-ff596883-7a7a-4226-a61f-de4382f6ff0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.107 238945 DEBUG oslo_concurrency.lockutils [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.156 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.158 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.236 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.236 238945 DEBUG nova.network.neutron [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.269 238945 INFO nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.315 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.335 238945 DEBUG nova.network.neutron [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Updating instance_info_cache with network_info: [{"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:52:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 88 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.459 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Releasing lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.460 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Instance network_info: |[{"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.460 238945 DEBUG oslo_concurrency.lockutils [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.460 238945 DEBUG nova.network.neutron [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Refreshing network info cache for port ff596883-7a7a-4226-a61f-de4382f6ff0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.463 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Start _get_guest_xml network_info=[{"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.469 238945 WARNING nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.474 238945 DEBUG nova.virt.libvirt.host [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.475 238945 DEBUG nova.virt.libvirt.host [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.478 238945 DEBUG nova.virt.libvirt.host [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.479 238945 DEBUG nova.virt.libvirt.host [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.479 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.480 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.480 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.480 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.480 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.481 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.481 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.481 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.481 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.481 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.482 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.482 238945 DEBUG nova.virt.hardware [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.486 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.527 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.530 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.530 238945 INFO nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Creating image(s)#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.563 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.588 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.611 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.615 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.654 238945 DEBUG nova.policy [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2689eaf31d4443a7a0885f648f53d3b4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4bab4841f97143a08a3ba0eeacba626a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.687 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.688 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.688 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.689 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.711 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:54 np0005597378 nova_compute[238941]: 2026-01-27 13:52:54.715 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.004 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:52:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596592666' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.079 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] resizing rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.111 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.139 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.146 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.226 238945 DEBUG nova.objects.instance [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lazy-loading 'migration_context' on Instance uuid c03e1ba1-3e7e-4cb8-847e-07c85da05427 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.281 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.282 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Ensure instance console log exists: /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.282 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.283 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.283 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:52:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/940255027' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.760 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.761 238945 DEBUG nova.virt.libvirt.vif [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-785618824',display_name='tempest-₡-785618824',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--785618824',id=59,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-f2pqaeem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=
None,updated_at=2026-01-27T13:52:49Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=511a49bc-bc87-444f-8323-95e4c88313c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.762 238945 DEBUG nova.network.os_vif_util [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.763 238945 DEBUG nova.network.os_vif_util [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:d4:f7,bridge_name='br-int',has_traffic_filtering=True,id=ff596883-7a7a-4226-a61f-de4382f6ff0e,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff596883-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.764 238945 DEBUG nova.objects.instance [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'pci_devices' on Instance uuid 511a49bc-bc87-444f-8323-95e4c88313c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.805 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  <uuid>511a49bc-bc87-444f-8323-95e4c88313c6</uuid>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  <name>instance-0000003b</name>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <nova:name>tempest-₡-785618824</nova:name>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:52:54</nova:creationTime>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:        <nova:user uuid="e5ced810fd6141b292a2237ebe49cfc9">tempest-ServersTestJSON-467936823-project-member</nova:user>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:        <nova:project uuid="c910283aa95c4275954bee4904b21d1e">tempest-ServersTestJSON-467936823</nova:project>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:        <nova:port uuid="ff596883-7a7a-4226-a61f-de4382f6ff0e">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <entry name="serial">511a49bc-bc87-444f-8323-95e4c88313c6</entry>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <entry name="uuid">511a49bc-bc87-444f-8323-95e4c88313c6</entry>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/511a49bc-bc87-444f-8323-95e4c88313c6_disk">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/511a49bc-bc87-444f-8323-95e4c88313c6_disk.config">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:e4:d4:f7"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <target dev="tapff596883-7a"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/console.log" append="off"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:52:55 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:52:55 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:52:55 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:52:55 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.807 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Preparing to wait for external event network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.807 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.807 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.807 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.808 238945 DEBUG nova.virt.libvirt.vif [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-785618824',display_name='tempest-₡-785618824',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--785618824',id=59,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-f2pqaeem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:49Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=511a49bc-bc87-444f-8323-95e4c88313c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.808 238945 DEBUG nova.network.os_vif_util [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.809 238945 DEBUG nova.network.os_vif_util [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:d4:f7,bridge_name='br-int',has_traffic_filtering=True,id=ff596883-7a7a-4226-a61f-de4382f6ff0e,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff596883-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.809 238945 DEBUG os_vif [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:d4:f7,bridge_name='br-int',has_traffic_filtering=True,id=ff596883-7a7a-4226-a61f-de4382f6ff0e,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff596883-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.810 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.810 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.816 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.816 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.817 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.818 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff596883-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.818 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff596883-7a, col_values=(('external_ids', {'iface-id': 'ff596883-7a7a-4226-a61f-de4382f6ff0e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:d4:f7', 'vm-uuid': '511a49bc-bc87-444f-8323-95e4c88313c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.820 238945 DEBUG nova.network.neutron [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Successfully created port: 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:52:55 np0005597378 NetworkManager[48904]: <info>  [1769521975.8214] manager: (tapff596883-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.827 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.827 238945 INFO os_vif [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:d4:f7,bridge_name='br-int',has_traffic_filtering=True,id=ff596883-7a7a-4226-a61f-de4382f6ff0e,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff596883-7a')#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.857 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.900 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.900 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.901 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No VIF found with MAC fa:16:3e:e4:d4:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.901 238945 INFO nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Using config drive#033[00m
Jan 27 08:52:55 np0005597378 nova_compute[238941]: 2026-01-27 13:52:55.921 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.018 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.019 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.036 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.037 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.045 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.046 238945 INFO nova.compute.claims [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.069 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.184 238945 DEBUG nova.network.neutron [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Updated VIF entry in instance network info cache for port ff596883-7a7a-4226-a61f-de4382f6ff0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.185 238945 DEBUG nova.network.neutron [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Updating instance_info_cache with network_info: [{"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.212 238945 DEBUG oslo_concurrency.lockutils [req-3dd16949-bf9f-4f1a-bdfd-4c3f9700048e req-4d4d4049-f12e-415f-9591-7b9e05b222c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.226 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.280 238945 INFO nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Creating config drive at /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/disk.config#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.288 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpznw50_oy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.337 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:52:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 104 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.445 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpznw50_oy" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.471 238945 DEBUG nova.storage.rbd_utils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 511a49bc-bc87-444f-8323-95e4c88313c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.477 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/disk.config 511a49bc-bc87-444f-8323-95e4c88313c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.646 238945 DEBUG oslo_concurrency.processutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/disk.config 511a49bc-bc87-444f-8323-95e4c88313c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.648 238945 INFO nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Deleting local config drive /var/lib/nova/instances/511a49bc-bc87-444f-8323-95e4c88313c6/disk.config because it was imported into RBD.#033[00m
Jan 27 08:52:56 np0005597378 kernel: tapff596883-7a: entered promiscuous mode
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.721 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:56 np0005597378 NetworkManager[48904]: <info>  [1769521976.7246] manager: (tapff596883-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Jan 27 08:52:56 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:56Z|00528|binding|INFO|Claiming lport ff596883-7a7a-4226-a61f-de4382f6ff0e for this chassis.
Jan 27 08:52:56 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:56Z|00529|binding|INFO|ff596883-7a7a-4226-a61f-de4382f6ff0e: Claiming fa:16:3e:e4:d4:f7 10.100.0.3
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.749 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:d4:f7 10.100.0.3'], port_security=['fa:16:3e:e4:d4:f7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '511a49bc-bc87-444f-8323-95e4c88313c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ff596883-7a7a-4226-a61f-de4382f6ff0e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.751 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ff596883-7a7a-4226-a61f-de4382f6ff0e in datapath 13754bbc-8f22-4885-aa27-198718585636 bound to our chassis#033[00m
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.753 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636#033[00m
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.767 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9585b46d-7f44-4f2b-a8fe-81d8e2cd2900]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.768 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap13754bbc-81 in ovnmeta-13754bbc-8f22-4885-aa27-198718585636 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.770 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap13754bbc-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.770 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[593a5e3d-9145-40ff-a7b4-c9398e5dc26b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.771 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41d1ddc2-ca9a-4ff9-b97a-0a9ca43972b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:56 np0005597378 systemd-machined[207425]: New machine qemu-67-instance-0000003b.
Jan 27 08:52:56 np0005597378 systemd[1]: Started Virtual Machine qemu-67-instance-0000003b.
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.790 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[78e6fec3-8977-46dd-bcca-11e853362fe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:56 np0005597378 systemd-udevd[293279]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:52:56 np0005597378 NetworkManager[48904]: <info>  [1769521976.8145] device (tapff596883-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:52:56 np0005597378 NetworkManager[48904]: <info>  [1769521976.8156] device (tapff596883-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.817 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b581a944-a41a-400b-99cc-d9a5954faf46]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:56 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:56Z|00530|binding|INFO|Setting lport ff596883-7a7a-4226-a61f-de4382f6ff0e ovn-installed in OVS
Jan 27 08:52:56 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:56Z|00531|binding|INFO|Setting lport ff596883-7a7a-4226-a61f-de4382f6ff0e up in Southbound
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.837 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.854 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[52a54d41-a82d-46fd-ad3e-b7a43757f823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:56 np0005597378 systemd-udevd[293282]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:52:56 np0005597378 NetworkManager[48904]: <info>  [1769521976.8746] manager: (tap13754bbc-80): new Veth device (/org/freedesktop/NetworkManager/Devices/234)
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.873 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[888a480f-1478-433e-811a-c0f4d30253b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.916 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b8843811-aa44-4c1d-a701-465038fe9290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.919 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fb11b6e1-31ce-4aca-9fa4-15a5a47a2508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:56 np0005597378 NetworkManager[48904]: <info>  [1769521976.9496] device (tap13754bbc-80): carrier: link connected
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.957 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e510222d-5ba2-4575-bd2e-bcc020979ccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:52:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/821255933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.969 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:56.975 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b67bbf46-bbed-4654-a686-935d276d82ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 19783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293311, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:56 np0005597378 nova_compute[238941]: 2026-01-27 13:52:56.992 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:57 np0005597378 nova_compute[238941]: 2026-01-27 13:52:57.000 238945 DEBUG nova.compute.provider_tree [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.002 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1622c49f-c6dc-4078-befc-4e2f92256017]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:c363'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461657, 'tstamp': 461657}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293313, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.028 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f7af03e2-cab9-450a-90dd-b19dc94d8e19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 19783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293314, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:57 np0005597378 nova_compute[238941]: 2026-01-27 13:52:57.072 238945 DEBUG nova.scheduler.client.report [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.075 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cf05efe6-5edd-40aa-a0fb-7c25465867f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.137 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c565a313-c72a-4719-97e3-d08f20bb3e77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.139 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.139 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.140 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:57 np0005597378 kernel: tap13754bbc-80: entered promiscuous mode
Jan 27 08:52:57 np0005597378 NetworkManager[48904]: <info>  [1769521977.1424] manager: (tap13754bbc-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Jan 27 08:52:57 np0005597378 nova_compute[238941]: 2026-01-27 13:52:57.143 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.144 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:52:57 np0005597378 nova_compute[238941]: 2026-01-27 13:52:57.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:57 np0005597378 ovn_controller[144812]: 2026-01-27T13:52:57Z|00532|binding|INFO|Releasing lport 1a4e395a-c1da-494c-a8bb-160c38bbc6e6 from this chassis (sb_readonly=0)
Jan 27 08:52:57 np0005597378 nova_compute[238941]: 2026-01-27 13:52:57.164 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.165 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/13754bbc-8f22-4885-aa27-198718585636.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/13754bbc-8f22-4885-aa27-198718585636.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.166 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c74c0be5-60b1-4b6f-918c-70c3edf26121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.167 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-13754bbc-8f22-4885-aa27-198718585636
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/13754bbc-8f22-4885-aa27-198718585636.pid.haproxy
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 13754bbc-8f22-4885-aa27-198718585636
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:52:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:52:57.169 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'env', 'PROCESS_TAG=haproxy-13754bbc-8f22-4885-aa27-198718585636', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/13754bbc-8f22-4885-aa27-198718585636.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:52:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:52:57 np0005597378 nova_compute[238941]: 2026-01-27 13:52:57.352 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:57 np0005597378 nova_compute[238941]: 2026-01-27 13:52:57.353 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:52:57 np0005597378 nova_compute[238941]: 2026-01-27 13:52:57.355 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:57 np0005597378 nova_compute[238941]: 2026-01-27 13:52:57.371 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:52:57 np0005597378 nova_compute[238941]: 2026-01-27 13:52:57.372 238945 INFO nova.compute.claims [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:52:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:52:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 6552 writes, 29K keys, 6552 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6552 writes, 6552 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1734 writes, 7971 keys, 1734 commit groups, 1.0 writes per commit group, ingest: 10.41 MB, 0.02 MB/s#012Interval WAL: 1734 writes, 1734 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     64.9      0.52              0.09        16    0.032       0      0       0.0       0.0#012  L6      1/0    8.36 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     88.5     72.3      1.55              0.28        15    0.103     71K   8379       0.0       0.0#012 Sum      1/0    8.36 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     66.4     70.4      2.07              0.37        31    0.067     71K   8379       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7     86.9     89.5      0.47              0.10         8    0.059     22K   2591       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     88.5     72.3      1.55              0.28        15    0.103     71K   8379       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     66.0      0.51              0.09        15    0.034       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.1 total, 600.0 interval#012Flush(GB): cumulative 0.033, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 2.1 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 16.00 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000268 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(996,15.43 MB,5.07501%) FilterBlock(32,206.42 KB,0.0663105%) IndexBlock(32,379.45 KB,0.121895%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 27 08:52:57 np0005597378 podman[293346]: 2026-01-27 13:52:57.671553463 +0000 UTC m=+0.114620879 container create f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:52:57 np0005597378 podman[293346]: 2026-01-27 13:52:57.581132624 +0000 UTC m=+0.024200060 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:52:57 np0005597378 systemd[1]: Started libpod-conmon-f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25.scope.
Jan 27 08:52:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:52:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d26458ed319b87c4855c82bd4e6cfd65252f73eee4b63c1f3d01d794246af85/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:52:57 np0005597378 podman[293346]: 2026-01-27 13:52:57.831464967 +0000 UTC m=+0.274532403 container init f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 08:52:57 np0005597378 nova_compute[238941]: 2026-01-27 13:52:57.836 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:52:57 np0005597378 nova_compute[238941]: 2026-01-27 13:52:57.836 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:52:57 np0005597378 podman[293346]: 2026-01-27 13:52:57.838151967 +0000 UTC m=+0.281219383 container start f2bec12d6ba446e8ce4f2a2dbda01a75486497164c3d70cee6aa14239db89e25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 08:52:57 np0005597378 neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636[293361]: [NOTICE]   (293365) : New worker (293367) forked
Jan 27 08:52:57 np0005597378 neutron-haproxy-ovnmeta-13754bbc-8f22-4885-aa27-198718585636[293361]: [NOTICE]   (293365) : Loading success.
Jan 27 08:52:57 np0005597378 nova_compute[238941]: 2026-01-27 13:52:57.985 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.019 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.153 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.154 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.155 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Creating image(s)#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.179 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.212 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.247 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.252 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.301 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521978.1985476, 511a49bc-bc87-444f-8323-95e4c88313c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.303 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] VM Started (Lifecycle Event)#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.319 238945 DEBUG nova.policy [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9e663079ce44f94a4dbe6125b395ce1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd333430b14814ea487cbd2af414c350f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.333 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.336 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.384 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.385 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.386 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.386 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 116 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.9 MiB/s wr, 156 op/s
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.415 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.421 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 762ca3c0-2865-41c8-89fc-445573c554c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.478 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521978.1988213, 511a49bc-bc87-444f-8323-95e4c88313c6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.479 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.505 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.511 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.534 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.540 238945 DEBUG nova.compute.manager [req-c81a2c20-0436-40ed-8a68-3d54e6e8bd42 req-d9bde4bf-f914-45f1-84d7-c64d6ad2c567 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Received event network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.541 238945 DEBUG oslo_concurrency.lockutils [req-c81a2c20-0436-40ed-8a68-3d54e6e8bd42 req-d9bde4bf-f914-45f1-84d7-c64d6ad2c567 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.543 238945 DEBUG oslo_concurrency.lockutils [req-c81a2c20-0436-40ed-8a68-3d54e6e8bd42 req-d9bde4bf-f914-45f1-84d7-c64d6ad2c567 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.543 238945 DEBUG oslo_concurrency.lockutils [req-c81a2c20-0436-40ed-8a68-3d54e6e8bd42 req-d9bde4bf-f914-45f1-84d7-c64d6ad2c567 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.543 238945 DEBUG nova.compute.manager [req-c81a2c20-0436-40ed-8a68-3d54e6e8bd42 req-d9bde4bf-f914-45f1-84d7-c64d6ad2c567 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Processing event network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.544 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.550 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521978.5498133, 511a49bc-bc87-444f-8323-95e4c88313c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.550 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.553 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.559 238945 INFO nova.virt.libvirt.driver [-] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Instance spawned successfully.#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.560 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.623 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.629 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.635 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.635 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.635 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.636 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.636 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.637 238945 DEBUG nova.virt.libvirt.driver [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.658 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.747 238945 INFO nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Took 9.55 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.748 238945 DEBUG nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.762 238945 DEBUG nova.network.neutron [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Successfully updated port: 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.805 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.806 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquired lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.806 238945 DEBUG nova.network.neutron [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.840 238945 INFO nova.compute.manager [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Took 10.93 seconds to build instance.#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.860 238945 DEBUG oslo_concurrency.lockutils [None req-4082f77a-4e5b-4a63-9cc3-83c53a2a9562 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:58 np0005597378 nova_compute[238941]: 2026-01-27 13:52:58.999 238945 DEBUG nova.network.neutron [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.004 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 762ca3c0-2865-41c8-89fc-445573c554c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:52:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3279501735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.071 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] resizing rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.099 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.763s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.108 238945 DEBUG nova.compute.provider_tree [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.166 238945 DEBUG nova.scheduler.client.report [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.184 238945 DEBUG nova.objects.instance [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'migration_context' on Instance uuid 762ca3c0-2865-41c8-89fc-445573c554c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.195 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.197 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.203 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.204 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Ensure instance console log exists: /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.205 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.205 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.206 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.261 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.262 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.297 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.324 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.359 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Successfully created port: 4ef8a620-c221-4b3f-ba32-3ab574e341af _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.442 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.448 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.449 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Creating image(s)#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.485 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.515 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:52:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1475588033' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:52:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:52:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1475588033' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.547 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.551 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.593 238945 DEBUG nova.policy [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9e663079ce44f94a4dbe6125b395ce1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd333430b14814ea487cbd2af414c350f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.632 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.635 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.636 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.637 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.663 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:52:59 np0005597378 nova_compute[238941]: 2026-01-27 13:52:59.669 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f de740849-c0ca-4217-974b-693a30f63855_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.167 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f de740849-c0ca-4217-974b-693a30f63855_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.235 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] resizing rbd image de740849-c0ca-4217-974b-693a30f63855_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.343 238945 DEBUG nova.network.neutron [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updating instance_info_cache with network_info: [{"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.351 238945 DEBUG nova.objects.instance [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'migration_context' on Instance uuid de740849-c0ca-4217-974b-693a30f63855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.374 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.374 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Ensure instance console log exists: /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.375 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.375 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.375 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.381 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Releasing lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.382 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Instance network_info: |[{"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.385 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Start _get_guest_xml network_info=[{"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:53:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 172 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.3 MiB/s wr, 177 op/s
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.392 238945 WARNING nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.398 238945 DEBUG nova.virt.libvirt.host [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.399 238945 DEBUG nova.virt.libvirt.host [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.403 238945 DEBUG nova.virt.libvirt.host [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.404 238945 DEBUG nova.virt.libvirt.host [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.404 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.405 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.405 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.405 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.406 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.406 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.406 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.407 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.407 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.407 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.408 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.408 238945 DEBUG nova.virt.hardware [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.411 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.453 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Successfully created port: e2362203-8b31-4317-a96a-2089dfc590a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.656 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Successfully updated port: 4ef8a620-c221-4b3f-ba32-3ab574e341af _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.703 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "refresh_cache-762ca3c0-2865-41c8-89fc-445573c554c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.704 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquired lock "refresh_cache-762ca3c0-2865-41c8-89fc-445573c554c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.704 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:00 np0005597378 nova_compute[238941]: 2026-01-27 13:53:00.863 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:53:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:53:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155039619' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.047 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.079 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.084 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.421 238945 DEBUG nova.compute.manager [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-changed-4ef8a620-c221-4b3f-ba32-3ab574e341af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.422 238945 DEBUG nova.compute.manager [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Refreshing instance network info cache due to event network-changed-4ef8a620-c221-4b3f-ba32-3ab574e341af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.422 238945 DEBUG oslo_concurrency.lockutils [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-762ca3c0-2865-41c8-89fc-445573c554c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.504 238945 DEBUG nova.compute.manager [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-changed-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.505 238945 DEBUG nova.compute.manager [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Refreshing instance network info cache due to event network-changed-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.505 238945 DEBUG oslo_concurrency.lockutils [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.505 238945 DEBUG oslo_concurrency.lockutils [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.505 238945 DEBUG nova.network.neutron [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Refreshing network info cache for port 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:53:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:53:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380608990' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.675 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.677 238945 DEBUG nova.virt.libvirt.vif [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-604088264',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-604088264',id=60,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bab4841f97143a08a3ba0eeacba626a',ramdisk_id='',reservation_id='r-vty0bhj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1808141455',owner_user_name='tempest-AttachInterfacesV270Test-1808141455-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:54Z,user_data=None,user_id='2689eaf31d4443a7a0885f648f53d3b4',uuid=c03e1ba1-3e7e-4cb8-847e-07c85da05427,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.677 238945 DEBUG nova.network.os_vif_util [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converting VIF {"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.678 238945 DEBUG nova.network.os_vif_util [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.679 238945 DEBUG nova.objects.instance [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lazy-loading 'pci_devices' on Instance uuid c03e1ba1-3e7e-4cb8-847e-07c85da05427 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.894 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Updating instance_info_cache with network_info: [{"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.898 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  <uuid>c03e1ba1-3e7e-4cb8-847e-07c85da05427</uuid>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  <name>instance-0000003c</name>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <nova:name>tempest-AttachInterfacesV270Test-server-604088264</nova:name>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:53:00</nova:creationTime>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:        <nova:user uuid="2689eaf31d4443a7a0885f648f53d3b4">tempest-AttachInterfacesV270Test-1808141455-project-member</nova:user>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:        <nova:project uuid="4bab4841f97143a08a3ba0eeacba626a">tempest-AttachInterfacesV270Test-1808141455</nova:project>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:        <nova:port uuid="3e1005c5-9cfa-4994-99cc-4c1d6d7d171d">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <entry name="serial">c03e1ba1-3e7e-4cb8-847e-07c85da05427</entry>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <entry name="uuid">c03e1ba1-3e7e-4cb8-847e-07c85da05427</entry>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk.config">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:1f:77:54"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <target dev="tap3e1005c5-9c"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/console.log" append="off"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:53:01 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:53:01 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:53:01 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:53:01 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.900 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Preparing to wait for external event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.900 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.901 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.901 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.902 238945 DEBUG nova.virt.libvirt.vif [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-604088264',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-604088264',id=60,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bab4841f97143a08a3ba0eeacba626a',ramdisk_id='',reservation_id='r-vty0bhj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1808141455',owner_user_name='tempest-AttachInterfacesV270Test-1808141455-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:54Z,user_data=None,user_id='2689eaf31d4443a7a0885f648f53d3b4',uuid=c03e1ba1-3e7e-4cb8-847e-07c85da05427,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.902 238945 DEBUG nova.network.os_vif_util [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converting VIF {"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.903 238945 DEBUG nova.network.os_vif_util [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.903 238945 DEBUG os_vif [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.905 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Successfully updated port: e2362203-8b31-4317-a96a-2089dfc590a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.907 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.907 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.908 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.911 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.911 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e1005c5-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.911 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e1005c5-9c, col_values=(('external_ids', {'iface-id': '3e1005c5-9cfa-4994-99cc-4c1d6d7d171d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:77:54', 'vm-uuid': 'c03e1ba1-3e7e-4cb8-847e-07c85da05427'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:01 np0005597378 NetworkManager[48904]: <info>  [1769521981.9152] manager: (tap3e1005c5-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.917 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.922 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.923 238945 INFO os_vif [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c')#033[00m
Jan 27 08:53:01 np0005597378 nova_compute[238941]: 2026-01-27 13:53:01.971 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.059 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "refresh_cache-de740849-c0ca-4217-974b-693a30f63855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.060 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquired lock "refresh_cache-de740849-c0ca-4217-974b-693a30f63855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.060 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.066 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Releasing lock "refresh_cache-762ca3c0-2865-41c8-89fc-445573c554c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.066 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Instance network_info: |[{"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.067 238945 DEBUG oslo_concurrency.lockutils [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-762ca3c0-2865-41c8-89fc-445573c554c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.067 238945 DEBUG nova.network.neutron [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Refreshing network info cache for port 4ef8a620-c221-4b3f-ba32-3ab574e341af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.069 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Start _get_guest_xml network_info=[{"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.074 238945 WARNING nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.080 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.081 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.085 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.085 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.085 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.086 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.086 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.086 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.087 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.087 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.087 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.087 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.087 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.088 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.088 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.088 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.091 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.150 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.152 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.153 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No VIF found with MAC fa:16:3e:1f:77:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.154 238945 INFO nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Using config drive#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.180 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.322 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:53:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 172 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 125 op/s
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.671 238945 INFO nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Creating config drive at /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/disk.config#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.678 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprb_k0o8b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:53:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3110412392' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.720 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.750 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.757 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.826 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprb_k0o8b" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.854 238945 DEBUG nova.storage.rbd_utils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] rbd image c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:02 np0005597378 nova_compute[238941]: 2026-01-27 13:53:02.861 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/disk.config c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.088 238945 DEBUG oslo_concurrency.processutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/disk.config c03e1ba1-3e7e-4cb8-847e-07c85da05427_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.089 238945 INFO nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Deleting local config drive /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427/disk.config because it was imported into RBD.#033[00m
Jan 27 08:53:03 np0005597378 kernel: tap3e1005c5-9c: entered promiscuous mode
Jan 27 08:53:03 np0005597378 NetworkManager[48904]: <info>  [1769521983.1493] manager: (tap3e1005c5-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:03 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:03Z|00533|binding|INFO|Claiming lport 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d for this chassis.
Jan 27 08:53:03 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:03Z|00534|binding|INFO|3e1005c5-9cfa-4994-99cc-4c1d6d7d171d: Claiming fa:16:3e:1f:77:54 10.100.0.12
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.181 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:77:54 10.100.0.12'], port_security=['fa:16:3e:1f:77:54 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c03e1ba1-3e7e-4cb8-847e-07c85da05427', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bab4841f97143a08a3ba0eeacba626a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7da167e-a929-4104-9685-e5234ac2ada1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac2d5b79-2f6e-4d23-96d5-6332de9c033d, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.183 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d in datapath 3e4604ca-eef3-4b48-8b54-479f9b2b30c2 bound to our chassis#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.184 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e4604ca-eef3-4b48-8b54-479f9b2b30c2#033[00m
Jan 27 08:53:03 np0005597378 systemd-udevd[293965]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.197 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f63dc0c7-7940-431b-8cd6-6f69941637c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.198 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3e4604ca-e1 in ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.200 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3e4604ca-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.200 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6e75678c-06a4-4cff-b2e3-36f17a259a29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.201 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7105940f-428a-4f6a-af98-2cfdf87aff68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 NetworkManager[48904]: <info>  [1769521983.2095] device (tap3e1005c5-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:53:03 np0005597378 NetworkManager[48904]: <info>  [1769521983.2111] device (tap3e1005c5-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:53:03 np0005597378 systemd-machined[207425]: New machine qemu-68-instance-0000003c.
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.217 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a6dbeb62-39a7-4e8c-b49a-5e4c8295c213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 systemd[1]: Started Virtual Machine qemu-68-instance-0000003c.
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.245 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ebcab13-a4f4-46de-b045-01e764ef33b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.254 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:03 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:03Z|00535|binding|INFO|Setting lport 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d ovn-installed in OVS
Jan 27 08:53:03 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:03Z|00536|binding|INFO|Setting lport 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d up in Southbound
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.261 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.279 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e9539b69-f589-463d-bea4-d54b6ba298c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.285 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521968.2832658, e435204e-d1d1-4031-8984-a628dda926cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.285 238945 INFO nova.compute.manager [-] [instance: e435204e-d1d1-4031-8984-a628dda926cc] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:53:03 np0005597378 NetworkManager[48904]: <info>  [1769521983.2970] manager: (tap3e4604ca-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/238)
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.295 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f27452fe-2f0a-4eef-8b03-f132cccf0d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.307 238945 DEBUG nova.compute.manager [None req-dd5b858b-bc30-4fcf-a2fd-4acd6b35d78f - - - - - -] [instance: e435204e-d1d1-4031-8984-a628dda926cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.342 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc4d94b-7cd5-4ed5-b5d3-09b2dd8df43f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.346 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[93ba5fe0-3198-4866-8a25-773c5dce8cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 NetworkManager[48904]: <info>  [1769521983.3724] device (tap3e4604ca-e0): carrier: link connected
Jan 27 08:53:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:53:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195805417' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.378 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[02ed7c79-9a0e-475c-871e-a833eefba675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.396 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6420127b-9b46-4dcf-8d40-424f6f4f508f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e4604ca-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:75:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462299, 'reachable_time': 41286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294003, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.406 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.408 238945 DEBUG nova.virt.libvirt.vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1479146776',display_name='tempest-MultipleCreateTestJSON-server-1479146776-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1479146776-1',id=61,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-4g0jf4bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCrea
teTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:58Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=762ca3c0-2865-41c8-89fc-445573c554c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.409 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.411 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.413 238945 DEBUG nova.objects.instance [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'pci_devices' on Instance uuid 762ca3c0-2865-41c8-89fc-445573c554c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.414 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6b281319-76bd-4ab0-a53e-67b49505576b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed1:75d6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462299, 'tstamp': 462299}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294004, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.431 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b96873ab-e425-40ff-bf4f-3ce56a749ee1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e4604ca-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:75:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462299, 'reachable_time': 41286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294005, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.434 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  <uuid>762ca3c0-2865-41c8-89fc-445573c554c9</uuid>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  <name>instance-0000003d</name>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <nova:name>tempest-MultipleCreateTestJSON-server-1479146776-1</nova:name>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:53:02</nova:creationTime>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:        <nova:user uuid="f9e663079ce44f94a4dbe6125b395ce1">tempest-MultipleCreateTestJSON-644845764-project-member</nova:user>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:        <nova:project uuid="d333430b14814ea487cbd2af414c350f">tempest-MultipleCreateTestJSON-644845764</nova:project>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:        <nova:port uuid="4ef8a620-c221-4b3f-ba32-3ab574e341af">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <entry name="serial">762ca3c0-2865-41c8-89fc-445573c554c9</entry>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <entry name="uuid">762ca3c0-2865-41c8-89fc-445573c554c9</entry>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/762ca3c0-2865-41c8-89fc-445573c554c9_disk">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/762ca3c0-2865-41c8-89fc-445573c554c9_disk.config">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:cf:4e:76"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <target dev="tap4ef8a620-c2"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/console.log" append="off"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:53:03 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:53:03 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:53:03 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:53:03 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.436 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Preparing to wait for external event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.437 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.437 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.437 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.438 238945 DEBUG nova.virt.libvirt.vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1479146776',display_name='tempest-MultipleCreateTestJSON-server-1479146776-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1479146776-1',id=61,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-4g0jf4bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-Mu
ltipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:58Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=762ca3c0-2865-41c8-89fc-445573c554c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.438 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.439 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.440 238945 DEBUG os_vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.440 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.441 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.441 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.444 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.445 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ef8a620-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.445 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ef8a620-c2, col_values=(('external_ids', {'iface-id': '4ef8a620-c221-4b3f-ba32-3ab574e341af', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:4e:76', 'vm-uuid': '762ca3c0-2865-41c8-89fc-445573c554c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.447 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:03 np0005597378 NetworkManager[48904]: <info>  [1769521983.4480] manager: (tap4ef8a620-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.449 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.452 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.453 238945 INFO os_vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2')#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.463 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf3245a-10d8-47e1-96e1-b83301b7cab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.474 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521968.4739003, d09fd69a-4503-4b5d-b452-b406d958ffab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.475 238945 INFO nova.compute.manager [-] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.495 238945 DEBUG nova.compute.manager [None req-b0f1fd6a-d773-4871-b096-1badd4240fa6 - - - - - -] [instance: d09fd69a-4503-4b5d-b452-b406d958ffab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.521 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9bd29c-774c-43db-8ffd-5166f1448362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.523 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e4604ca-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.523 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.523 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e4604ca-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:03 np0005597378 kernel: tap3e4604ca-e0: entered promiscuous mode
Jan 27 08:53:03 np0005597378 NetworkManager[48904]: <info>  [1769521983.5262] manager: (tap3e4604ca-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.525 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.532 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.533 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e4604ca-e0, col_values=(('external_ids', {'iface-id': 'd5faa58d-a805-4b68-958a-189d413602e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.534 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:03 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:03Z|00537|binding|INFO|Releasing lport d5faa58d-a805-4b68-958a-189d413602e2 from this chassis (sb_readonly=0)
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.550 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.557 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3e4604ca-eef3-4b48-8b54-479f9b2b30c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3e4604ca-eef3-4b48-8b54-479f9b2b30c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.557 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.558 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[87de7df4-2062-40e5-9903-3d5c0a114500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.559 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-3e4604ca-eef3-4b48-8b54-479f9b2b30c2
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/3e4604ca-eef3-4b48-8b54-479f9b2b30c2.pid.haproxy
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 3e4604ca-eef3-4b48-8b54-479f9b2b30c2
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:53:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:03.560 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'env', 'PROCESS_TAG=haproxy-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3e4604ca-eef3-4b48-8b54-479f9b2b30c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.584 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.584 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.584 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No VIF found with MAC fa:16:3e:cf:4e:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.585 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Using config drive#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.720 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.726 238945 DEBUG nova.network.neutron [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updated VIF entry in instance network info cache for port 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.727 238945 DEBUG nova.network.neutron [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updating instance_info_cache with network_info: [{"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.754 238945 DEBUG oslo_concurrency.lockutils [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.754 238945 DEBUG nova.compute.manager [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Received event network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.755 238945 DEBUG oslo_concurrency.lockutils [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.755 238945 DEBUG oslo_concurrency.lockutils [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.755 238945 DEBUG oslo_concurrency.lockutils [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "511a49bc-bc87-444f-8323-95e4c88313c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.755 238945 DEBUG nova.compute.manager [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] No waiting events found dispatching network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.756 238945 WARNING nova.compute.manager [req-b753900d-0f90-45a9-9895-35059522b36a req-eb4cade0-0625-4c07-b592-274fb809740e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Received unexpected event network-vif-plugged-ff596883-7a7a-4226-a61f-de4382f6ff0e for instance with vm_state active and task_state None.#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.911 238945 DEBUG nova.compute.manager [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-changed-e2362203-8b31-4317-a96a-2089dfc590a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.912 238945 DEBUG nova.compute.manager [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Refreshing instance network info cache due to event network-changed-e2362203-8b31-4317-a96a-2089dfc590a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.912 238945 DEBUG oslo_concurrency.lockutils [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-de740849-c0ca-4217-974b-693a30f63855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.956 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521983.9555895, c03e1ba1-3e7e-4cb8-847e-07c85da05427 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.956 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] VM Started (Lifecycle Event)#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.975 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.980 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521983.9557958, c03e1ba1-3e7e-4cb8-847e-07c85da05427 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.980 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.994 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:03 np0005597378 nova_compute[238941]: 2026-01-27 13:53:03.997 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.017 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.078 238945 DEBUG nova.network.neutron [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Updating instance_info_cache with network_info: [{"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:04 np0005597378 podman[294103]: 2026-01-27 13:53:03.988622317 +0000 UTC m=+0.028924157 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.090 238945 DEBUG nova.network.neutron [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Updated VIF entry in instance network info cache for port 4ef8a620-c221-4b3f-ba32-3ab574e341af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.091 238945 DEBUG nova.network.neutron [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Updating instance_info_cache with network_info: [{"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.098 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Releasing lock "refresh_cache-de740849-c0ca-4217-974b-693a30f63855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.098 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Instance network_info: |[{"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.099 238945 DEBUG oslo_concurrency.lockutils [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-de740849-c0ca-4217-974b-693a30f63855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.099 238945 DEBUG nova.network.neutron [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Refreshing network info cache for port e2362203-8b31-4317-a96a-2089dfc590a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.102 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Start _get_guest_xml network_info=[{"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.105 238945 DEBUG oslo_concurrency.lockutils [req-e20017c2-594a-4b58-94b3-312d66b376b5 req-d9b6c1e4-eb51-45bb-b1c5-47128d2cc66d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-762ca3c0-2865-41c8-89fc-445573c554c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.108 238945 WARNING nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.114 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.116 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.119 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.119 238945 DEBUG nova.virt.libvirt.host [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.120 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.120 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.120 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.121 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.121 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.121 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.121 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.121 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.122 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.122 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.122 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.122 238945 DEBUG nova.virt.hardware [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.125 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:04 np0005597378 podman[294103]: 2026-01-27 13:53:04.205520612 +0000 UTC m=+0.245822452 container create 800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 08:53:04 np0005597378 systemd[1]: Started libpod-conmon-800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41.scope.
Jan 27 08:53:04 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:53:04 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/329b82253451ec637a0da697ce0bba50afb18575ee7d2a92ef0b867bca85ec94/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:04 np0005597378 podman[294103]: 2026-01-27 13:53:04.387171851 +0000 UTC m=+0.427473711 container init 800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:53:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 217 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.8 MiB/s wr, 211 op/s
Jan 27 08:53:04 np0005597378 podman[294103]: 2026-01-27 13:53:04.393751588 +0000 UTC m=+0.434053438 container start 800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 08:53:04 np0005597378 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [NOTICE]   (294142) : New worker (294144) forked
Jan 27 08:53:04 np0005597378 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [NOTICE]   (294142) : Loading success.
Jan 27 08:53:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:53:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2499780639' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.706 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.738 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.743 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.925 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Creating config drive at /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/disk.config#033[00m
Jan 27 08:53:04 np0005597378 nova_compute[238941]: 2026-01-27 13:53:04.929 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_cyx5za execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.067 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_cyx5za" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.091 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image 762ca3c0-2865-41c8-89fc-445573c554c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.094 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/disk.config 762ca3c0-2865-41c8-89fc-445573c554c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:53:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572236976' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.297 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.299 238945 DEBUG nova.virt.libvirt.vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1479146776',display_name='tempest-MultipleCreateTestJSON-server-1479146776-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1479146776-2',id=62,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-4g0jf4bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:59Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=de740849-c0ca-4217-974b-693a30f63855,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.299 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.300 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.301 238945 DEBUG nova.objects.instance [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'pci_devices' on Instance uuid de740849-c0ca-4217-974b-693a30f63855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.316 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  <uuid>de740849-c0ca-4217-974b-693a30f63855</uuid>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  <name>instance-0000003e</name>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <nova:name>tempest-MultipleCreateTestJSON-server-1479146776-2</nova:name>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:53:04</nova:creationTime>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:        <nova:user uuid="f9e663079ce44f94a4dbe6125b395ce1">tempest-MultipleCreateTestJSON-644845764-project-member</nova:user>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:        <nova:project uuid="d333430b14814ea487cbd2af414c350f">tempest-MultipleCreateTestJSON-644845764</nova:project>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:        <nova:port uuid="e2362203-8b31-4317-a96a-2089dfc590a2">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <entry name="serial">de740849-c0ca-4217-974b-693a30f63855</entry>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <entry name="uuid">de740849-c0ca-4217-974b-693a30f63855</entry>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/de740849-c0ca-4217-974b-693a30f63855_disk">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/de740849-c0ca-4217-974b-693a30f63855_disk.config">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:45:fb:82"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <target dev="tape2362203-8b"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/console.log" append="off"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:53:05 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:53:05 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:53:05 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:53:05 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.316 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Preparing to wait for external event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.317 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.317 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.317 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.318 238945 DEBUG nova.virt.libvirt.vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:52:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1479146776',display_name='tempest-MultipleCreateTestJSON-server-1479146776-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1479146776-2',id=62,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-4g0jf4bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:52:59Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=de740849-c0ca-4217-974b-693a30f63855,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.318 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.319 238945 DEBUG nova.network.os_vif_util [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.319 238945 DEBUG os_vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.320 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.321 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.323 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape2362203-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.324 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape2362203-8b, col_values=(('external_ids', {'iface-id': 'e2362203-8b31-4317-a96a-2089dfc590a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:fb:82', 'vm-uuid': 'de740849-c0ca-4217-974b-693a30f63855'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:05 np0005597378 NetworkManager[48904]: <info>  [1769521985.3265] manager: (tape2362203-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.328 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.334 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.335 238945 INFO os_vif [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b')#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.493 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.493 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.493 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] No VIF found with MAC fa:16:3e:45:fb:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.494 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Using config drive#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.513 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.900 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/disk.config 762ca3c0-2865-41c8-89fc-445573c554c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.806s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.901 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.901 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.902 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Deleting local config drive /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9/disk.config because it was imported into RBD.#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.918 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:53:05 np0005597378 NetworkManager[48904]: <info>  [1769521985.9615] manager: (tap4ef8a620-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Jan 27 08:53:05 np0005597378 systemd-udevd[293986]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:53:05 np0005597378 kernel: tap4ef8a620-c2: entered promiscuous mode
Jan 27 08:53:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:05Z|00538|binding|INFO|Claiming lport 4ef8a620-c221-4b3f-ba32-3ab574e341af for this chassis.
Jan 27 08:53:05 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:05Z|00539|binding|INFO|4ef8a620-c221-4b3f-ba32-3ab574e341af: Claiming fa:16:3e:cf:4e:76 10.100.0.4
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.976 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:05 np0005597378 NetworkManager[48904]: <info>  [1769521985.9825] device (tap4ef8a620-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:53:05 np0005597378 NetworkManager[48904]: <info>  [1769521985.9837] device (tap4ef8a620-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:53:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:05.983 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:4e:76 10.100.0.4'], port_security=['fa:16:3e:cf:4e:76 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '762ca3c0-2865-41c8-89fc-445573c554c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4ef8a620-c221-4b3f-ba32-3ab574e341af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:05.984 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4ef8a620-c221-4b3f-ba32-3ab574e341af in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f bound to our chassis#033[00m
Jan 27 08:53:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:05.986 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.995 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:05 np0005597378 nova_compute[238941]: 2026-01-27 13:53:05.996 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:05.998 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[327974d2-2ccd-40e0-a523-0e20e0d456d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:05.999 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0b1231fc-f1 in ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.001 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0b1231fc-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.001 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5b3cb6-a344-4325-9e66-8ec51cd437cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:06Z|00540|binding|INFO|Setting lport 4ef8a620-c221-4b3f-ba32-3ab574e341af ovn-installed in OVS
Jan 27 08:53:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:06Z|00541|binding|INFO|Setting lport 4ef8a620-c221-4b3f-ba32-3ab574e341af up in Southbound
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.002 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[741089ff-c10a-4cd0-a9d1-02bb35fd5ccf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.005 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.005 238945 INFO nova.compute.claims [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.010 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Creating config drive at /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/disk.config#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.016 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbsdy6gf3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:06 np0005597378 systemd-machined[207425]: New machine qemu-69-instance-0000003d.
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.017 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d73c38e6-ed43-4a9c-a393-97795ecf7932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 systemd[1]: Started Virtual Machine qemu-69-instance-0000003d.
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.050 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b41d3ecb-b2a7-407f-8693-563f2cc6194b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.076 238945 DEBUG nova.compute.manager [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.076 238945 DEBUG oslo_concurrency.lockutils [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.077 238945 DEBUG oslo_concurrency.lockutils [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.077 238945 DEBUG oslo_concurrency.lockutils [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.077 238945 DEBUG nova.compute.manager [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Processing event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.077 238945 DEBUG nova.compute.manager [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.078 238945 DEBUG oslo_concurrency.lockutils [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.078 238945 DEBUG oslo_concurrency.lockutils [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.078 238945 DEBUG oslo_concurrency.lockutils [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.078 238945 DEBUG nova.compute.manager [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.078 238945 WARNING nova.compute.manager [req-a5978700-43cf-4eec-842c-8d335afa8da0 req-6ca76724-4e5b-4f99-9ac4-d524917e7b31 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received unexpected event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.079 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.086 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521986.0858755, c03e1ba1-3e7e-4cb8-847e-07c85da05427 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.086 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.093 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.104 238945 INFO nova.virt.libvirt.driver [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Instance spawned successfully.#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.104 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.103 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a886e1ab-567f-419e-bc54-02aafa533475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 NetworkManager[48904]: <info>  [1769521986.1153] manager: (tap0b1231fc-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/243)
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.116 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d36d2667-090c-4b75-942b-8a85ebf30c91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.123 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.139 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.143 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.143 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.144 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.144 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.145 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.145 238945 DEBUG nova.virt.libvirt.driver [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.159 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a14ac967-050a-41e9-9711-190c7a50f121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.163 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a58a21a5-fecf-41e1-8746-769e86869612]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.181 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbsdy6gf3" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:06 np0005597378 NetworkManager[48904]: <info>  [1769521986.1857] device (tap0b1231fc-f0): carrier: link connected
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.191 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4503fe7b-6793-458b-9623-65dae6197895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.211 238945 DEBUG nova.storage.rbd_utils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] rbd image de740849-c0ca-4217-974b-693a30f63855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.212 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[769bd188-3884-4e72-94cf-d675092837f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462580, 'reachable_time': 15186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294309, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.224 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/disk.config de740849-c0ca-4217-974b-693a30f63855_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.233 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c22473ed-0a70-46df-aaba-c607d17963cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6b:ccd6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462580, 'tstamp': 462580}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294329, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.249 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aab5413f-8acb-400d-87ce-9e3253ad14e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462580, 'reachable_time': 15186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294334, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.265 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.267 238945 INFO nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Took 11.74 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.268 238945 DEBUG nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.283 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8a34af-3db0-487a-9d37-da3202a71daa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.347 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[19c44f4e-d110-491f-a377-9b4a2c94fb27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.349 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.349 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.349 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b1231fc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:06 np0005597378 NetworkManager[48904]: <info>  [1769521986.3515] manager: (tap0b1231fc-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.351 238945 INFO nova.compute.manager [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Took 13.17 seconds to build instance.#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.353 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:06 np0005597378 kernel: tap0b1231fc-f0: entered promiscuous mode
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.365 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b1231fc-f0, col_values=(('external_ids', {'iface-id': '9d3a2d95-6e13-45c9-8614-b22897c037b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.365 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:06Z|00542|binding|INFO|Releasing lport 9d3a2d95-6e13-45c9-8614-b22897c037b4 from this chassis (sb_readonly=0)
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.369 238945 DEBUG oslo_concurrency.lockutils [None req-63a5cdf3-80e5-4bde-b0c4-70d0b6de3e38 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.387 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.389 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.390 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0bb032-00cb-4338-a9b5-25279d4ad624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.390 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.pid.haproxy
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:53:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:06.391 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'env', 'PROCESS_TAG=haproxy-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0b1231fc-f48c-438b-9fe3-1ac6cd8a496f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:53:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 227 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 159 op/s
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.434 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.478 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521986.4785008, 762ca3c0-2865-41c8-89fc-445573c554c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.479 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] VM Started (Lifecycle Event)#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.506 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.512 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521986.4785795, 762ca3c0-2865-41c8-89fc-445573c554c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.512 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.543 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.547 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.566 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:53:06 np0005597378 podman[294433]: 2026-01-27 13:53:06.741812195 +0000 UTC m=+0.025301970 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.964 238945 DEBUG oslo_concurrency.processutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/disk.config de740849-c0ca-4217-974b-693a30f63855_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.965 238945 INFO nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Deleting local config drive /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855/disk.config because it was imported into RBD.#033[00m
Jan 27 08:53:06 np0005597378 nova_compute[238941]: 2026-01-27 13:53:06.973 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:07 np0005597378 NetworkManager[48904]: <info>  [1769521987.0198] manager: (tape2362203-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Jan 27 08:53:07 np0005597378 kernel: tape2362203-8b: entered promiscuous mode
Jan 27 08:53:07 np0005597378 systemd-udevd[294381]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:53:07 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:07Z|00543|binding|INFO|Claiming lport e2362203-8b31-4317-a96a-2089dfc590a2 for this chassis.
Jan 27 08:53:07 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:07Z|00544|binding|INFO|e2362203-8b31-4317-a96a-2089dfc590a2: Claiming fa:16:3e:45:fb:82 10.100.0.12
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.021 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:07 np0005597378 NetworkManager[48904]: <info>  [1769521987.0304] device (tape2362203-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:53:07 np0005597378 NetworkManager[48904]: <info>  [1769521987.0312] device (tape2362203-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:53:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.030 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:fb:82 10.100.0.12'], port_security=['fa:16:3e:45:fb:82 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'de740849-c0ca-4217-974b-693a30f63855', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e2362203-8b31-4317-a96a-2089dfc590a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:07 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:07Z|00545|binding|INFO|Setting lport e2362203-8b31-4317-a96a-2089dfc590a2 ovn-installed in OVS
Jan 27 08:53:07 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:07Z|00546|binding|INFO|Setting lport e2362203-8b31-4317-a96a-2089dfc590a2 up in Southbound
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1467497674' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.047 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:07 np0005597378 systemd-machined[207425]: New machine qemu-70-instance-0000003e.
Jan 27 08:53:07 np0005597378 systemd[1]: Started Virtual Machine qemu-70-instance-0000003e.
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.074 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.083 238945 DEBUG nova.compute.provider_tree [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.108 238945 DEBUG nova.scheduler.client.report [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.138 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.139 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.191 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.192 238945 DEBUG nova.network.neutron [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.211 238945 INFO nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:53:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.229 238945 DEBUG nova.network.neutron [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Updated VIF entry in instance network info cache for port e2362203-8b31-4317-a96a-2089dfc590a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.231 238945 DEBUG nova.network.neutron [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Updating instance_info_cache with network_info: [{"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.235 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.249 238945 DEBUG oslo_concurrency.lockutils [req-11ec6984-0067-46e2-bd66-a26a521389e2 req-590adf84-fee1-4acd-92f9-d3d2f274070d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-de740849-c0ca-4217-974b-693a30f63855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:53:07 np0005597378 podman[294433]: 2026-01-27 13:53:07.271299294 +0000 UTC m=+0.554789049 container create 28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.344 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.346 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.347 238945 INFO nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Creating image(s)#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.374 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.412 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.446 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.450 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.487 238945 DEBUG nova.policy [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5ced810fd6141b292a2237ebe49cfc9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c910283aa95c4275954bee4904b21d1e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.528 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.529 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.530 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.530 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:07 np0005597378 systemd[1]: Started libpod-conmon-28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7.scope.
Jan 27 08:53:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:53:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/359d450ac27da225132b38663b330b92ce8c29085a06427feb6092dc808c2ac9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:07 np0005597378 podman[294433]: 2026-01-27 13:53:07.701680812 +0000 UTC m=+0.985170577 container init 28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.719 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:07 np0005597378 podman[294433]: 2026-01-27 13:53:07.723717854 +0000 UTC m=+1.007207609 container start 28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.731 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:07 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [NOTICE]   (294565) : New worker (294570) forked
Jan 27 08:53:07 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [NOTICE]   (294565) : Loading success.
Jan 27 08:53:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.848 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e2362203-8b31-4317-a96a-2089dfc590a2 in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f unbound from our chassis#033[00m
Jan 27 08:53:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.850 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.862 238945 DEBUG oslo_concurrency.lockutils [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "interface-c03e1ba1-3e7e-4cb8-847e-07c85da05427-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.863 238945 DEBUG oslo_concurrency.lockutils [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "interface-c03e1ba1-3e7e-4cb8-847e-07c85da05427-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.864 238945 DEBUG nova.objects.instance [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lazy-loading 'flavor' on Instance uuid c03e1ba1-3e7e-4cb8-847e-07c85da05427 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.870 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f25bb46-0507-42a4-b828-88ecdd1e0c92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.911 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b011197a-85bd-48a1-a675-a1652c5a6c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.912 238945 DEBUG nova.objects.instance [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lazy-loading 'pci_requests' on Instance uuid c03e1ba1-3e7e-4cb8-847e-07c85da05427 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.917 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[06efbbf6-c131-43b8-9969-51c8ca0e2dd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.963 238945 DEBUG nova.network.neutron [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:53:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.963 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[42255f43-6032-4da0-9851-bd9cadf2ed37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.967 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521987.9660118, de740849-c0ca-4217-974b-693a30f63855 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:07 np0005597378 nova_compute[238941]: 2026-01-27 13:53:07.968 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] VM Started (Lifecycle Event)#033[00m
Jan 27 08:53:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:07.986 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[621afe24-ce7b-4810-86cb-4b1f6ad2e5b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462580, 'reachable_time': 15186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294627, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:08 np0005597378 nova_compute[238941]: 2026-01-27 13:53:08.011 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.010 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[651d7bec-d4a5-47f3-aac3-4684feeccf0f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462593, 'tstamp': 462593}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294628, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462596, 'tstamp': 462596}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294628, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.014 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:08 np0005597378 nova_compute[238941]: 2026-01-27 13:53:08.015 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521987.9661336, de740849-c0ca-4217-974b-693a30f63855 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:08 np0005597378 nova_compute[238941]: 2026-01-27 13:53:08.016 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:53:08 np0005597378 nova_compute[238941]: 2026-01-27 13:53:08.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.018 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b1231fc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.019 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.020 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b1231fc-f0, col_values=(('external_ids', {'iface-id': '9d3a2d95-6e13-45c9-8614-b22897c037b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.021 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:08 np0005597378 nova_compute[238941]: 2026-01-27 13:53:08.203 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:08 np0005597378 nova_compute[238941]: 2026-01-27 13:53:08.207 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:53:08 np0005597378 nova_compute[238941]: 2026-01-27 13:53:08.273 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:53:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 227 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.9 MiB/s wr, 188 op/s
Jan 27 08:53:08 np0005597378 nova_compute[238941]: 2026-01-27 13:53:08.438 238945 DEBUG nova.policy [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2689eaf31d4443a7a0885f648f53d3b4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4bab4841f97143a08a3ba0eeacba626a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:53:08 np0005597378 nova_compute[238941]: 2026-01-27 13:53:08.538 238945 DEBUG nova.network.neutron [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Successfully created port: 64542a94-92c4-4d0e-94ac-b711946dae41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:53:08 np0005597378 nova_compute[238941]: 2026-01-27 13:53:08.757 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:08 np0005597378 nova_compute[238941]: 2026-01-27 13:53:08.829 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] resizing rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:53:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.975 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:08 np0005597378 nova_compute[238941]: 2026-01-27 13:53:08.975 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:08.976 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:53:08 np0005597378 nova_compute[238941]: 2026-01-27 13:53:08.987 238945 DEBUG nova.objects.instance [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'migration_context' on Instance uuid 3ec8ab83-8d5f-4296-bc39-61f269193e6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.000 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.001 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Ensure instance console log exists: /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.002 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.002 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.002 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.068 238945 DEBUG nova.network.neutron [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Successfully created port: 6886b862-923c-424e-b362-982c324a598c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.498 238945 DEBUG nova.network.neutron [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Successfully updated port: 64542a94-92c4-4d0e-94ac-b711946dae41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.514 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "refresh_cache-3ec8ab83-8d5f-4296-bc39-61f269193e6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.514 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquired lock "refresh_cache-3ec8ab83-8d5f-4296-bc39-61f269193e6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.515 238945 DEBUG nova.network.neutron [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.642 238945 DEBUG nova.network.neutron [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.774 238945 DEBUG nova.network.neutron [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Successfully updated port: 6886b862-923c-424e-b362-982c324a598c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.804 238945 DEBUG oslo_concurrency.lockutils [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.805 238945 DEBUG oslo_concurrency.lockutils [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquired lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.805 238945 DEBUG nova.network.neutron [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:53:09 np0005597378 nova_compute[238941]: 2026-01-27 13:53:09.961 238945 WARNING nova.network.neutron [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] 3e4604ca-eef3-4b48-8b54-479f9b2b30c2 already exists in list: networks containing: ['3e4604ca-eef3-4b48-8b54-479f9b2b30c2']. ignoring it#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.304 238945 DEBUG nova.network.neutron [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Updating instance_info_cache with network_info: [{"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.322 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Releasing lock "refresh_cache-3ec8ab83-8d5f-4296-bc39-61f269193e6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.323 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Instance network_info: |[{"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.325 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Start _get_guest_xml network_info=[{"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.326 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.329 238945 WARNING nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.333 238945 DEBUG nova.virt.libvirt.host [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.334 238945 DEBUG nova.virt.libvirt.host [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.336 238945 DEBUG nova.virt.libvirt.host [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.337 238945 DEBUG nova.virt.libvirt.host [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.337 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.337 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.338 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.338 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.339 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.339 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.339 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.339 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.340 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.340 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.340 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.340 238945 DEBUG nova.virt.hardware [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.343 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 249 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.6 MiB/s wr, 255 op/s
Jan 27 08:53:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:53:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2111616998' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.927 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.948 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:10 np0005597378 nova_compute[238941]: 2026-01-27 13:53:10.953 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.095 238945 DEBUG nova.compute.manager [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-changed-64542a94-92c4-4d0e-94ac-b711946dae41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.098 238945 DEBUG nova.compute.manager [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Refreshing instance network info cache due to event network-changed-64542a94-92c4-4d0e-94ac-b711946dae41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.098 238945 DEBUG oslo_concurrency.lockutils [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3ec8ab83-8d5f-4296-bc39-61f269193e6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.099 238945 DEBUG oslo_concurrency.lockutils [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3ec8ab83-8d5f-4296-bc39-61f269193e6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.099 238945 DEBUG nova.network.neutron [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Refreshing network info cache for port 64542a94-92c4-4d0e-94ac-b711946dae41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.506 238945 DEBUG nova.compute.manager [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-changed-6886b862-923c-424e-b362-982c324a598c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.506 238945 DEBUG nova.compute.manager [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Refreshing instance network info cache due to event network-changed-6886b862-923c-424e-b362-982c324a598c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.506 238945 DEBUG oslo_concurrency.lockutils [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:53:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:53:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452893136' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.585 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.585 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.590 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.591 238945 DEBUG nova.virt.libvirt.vif [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-739165555',display_name='tempest-ServersTestJSON-server-739165555',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-739165555',id=63,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-0bjr1zu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:07Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=3ec8ab83-8d5f-4296-bc39-61f269193e6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.592 238945 DEBUG nova.network.os_vif_util [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.592 238945 DEBUG nova.network.os_vif_util [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.593 238945 DEBUG nova.objects.instance [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ec8ab83-8d5f-4296-bc39-61f269193e6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.607 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.617 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  <uuid>3ec8ab83-8d5f-4296-bc39-61f269193e6a</uuid>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  <name>instance-0000003f</name>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersTestJSON-server-739165555</nova:name>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:53:10</nova:creationTime>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:        <nova:user uuid="e5ced810fd6141b292a2237ebe49cfc9">tempest-ServersTestJSON-467936823-project-member</nova:user>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:        <nova:project uuid="c910283aa95c4275954bee4904b21d1e">tempest-ServersTestJSON-467936823</nova:project>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:        <nova:port uuid="64542a94-92c4-4d0e-94ac-b711946dae41">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <entry name="serial">3ec8ab83-8d5f-4296-bc39-61f269193e6a</entry>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <entry name="uuid">3ec8ab83-8d5f-4296-bc39-61f269193e6a</entry>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk.config">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:03:2b:0e"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <target dev="tap64542a94-92"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/console.log" append="off"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:53:11 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:53:11 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:53:11 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:53:11 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.618 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Preparing to wait for external event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.619 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.619 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.619 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.620 238945 DEBUG nova.virt.libvirt.vif [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-739165555',display_name='tempest-ServersTestJSON-server-739165555',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-739165555',id=63,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-0bjr1zu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:07Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=3ec8ab83-8d5f-4296-bc39-61f269193e6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.621 238945 DEBUG nova.network.os_vif_util [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.622 238945 DEBUG nova.network.os_vif_util [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.622 238945 DEBUG os_vif [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.623 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.623 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.624 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.627 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.628 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64542a94-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.628 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap64542a94-92, col_values=(('external_ids', {'iface-id': '64542a94-92c4-4d0e-94ac-b711946dae41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:2b:0e', 'vm-uuid': '3ec8ab83-8d5f-4296-bc39-61f269193e6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:11 np0005597378 NetworkManager[48904]: <info>  [1769521991.6313] manager: (tap64542a94-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.638 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.640 238945 INFO os_vif [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92')#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.682 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.683 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.690 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.691 238945 INFO nova.compute.claims [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.697 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.698 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.698 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No VIF found with MAC fa:16:3e:03:2b:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.698 238945 INFO nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Using config drive#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.719 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.949 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:11 np0005597378 nova_compute[238941]: 2026-01-27 13:53:11.980 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.049 238945 DEBUG nova.network.neutron [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updating instance_info_cache with network_info: [{"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.072 238945 DEBUG oslo_concurrency.lockutils [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Releasing lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.074 238945 DEBUG oslo_concurrency.lockutils [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.075 238945 DEBUG nova.network.neutron [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Refreshing network info cache for port 6886b862-923c-424e-b362-982c324a598c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.079 238945 DEBUG nova.virt.libvirt.vif [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-604088264',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-604088264',id=60,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4bab4841f97143a08a3ba0eeacba626a',ramdisk_id='',reservation_id='r-vty0bhj7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram=
'0',owner_project_name='tempest-AttachInterfacesV270Test-1808141455',owner_user_name='tempest-AttachInterfacesV270Test-1808141455-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:06Z,user_data=None,user_id='2689eaf31d4443a7a0885f648f53d3b4',uuid=c03e1ba1-3e7e-4cb8-847e-07c85da05427,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.079 238945 DEBUG nova.network.os_vif_util [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converting VIF {"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.080 238945 DEBUG nova.network.os_vif_util [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.081 238945 DEBUG os_vif [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.081 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.082 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.082 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.087 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.087 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6886b862-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.088 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6886b862-92, col_values=(('external_ids', {'iface-id': '6886b862-923c-424e-b362-982c324a598c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:4a:5e', 'vm-uuid': 'c03e1ba1-3e7e-4cb8-847e-07c85da05427'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:12 np0005597378 NetworkManager[48904]: <info>  [1769521992.0910] manager: (tap6886b862-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.104 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.106 238945 INFO os_vif [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92')#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.107 238945 DEBUG nova.virt.libvirt.vif [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-604088264',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-604088264',id=60,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4bab4841f97143a08a3ba0eeacba626a',ramdisk_id='',reservation_id='r-vty0bhj7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram=
'0',owner_project_name='tempest-AttachInterfacesV270Test-1808141455',owner_user_name='tempest-AttachInterfacesV270Test-1808141455-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:06Z,user_data=None,user_id='2689eaf31d4443a7a0885f648f53d3b4',uuid=c03e1ba1-3e7e-4cb8-847e-07c85da05427,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.107 238945 DEBUG nova.network.os_vif_util [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converting VIF {"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.108 238945 DEBUG nova.network.os_vif_util [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.112 238945 DEBUG nova.virt.libvirt.guest [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] attach device xml: <interface type="ethernet">
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:f8:4a:5e"/>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  <target dev="tap6886b862-92"/>
Jan 27 08:53:12 np0005597378 nova_compute[238941]: </interface>
Jan 27 08:53:12 np0005597378 nova_compute[238941]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 27 08:53:12 np0005597378 NetworkManager[48904]: <info>  [1769521992.1260] manager: (tap6886b862-92): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Jan 27 08:53:12 np0005597378 kernel: tap6886b862-92: entered promiscuous mode
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.133 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:12Z|00547|binding|INFO|Claiming lport 6886b862-923c-424e-b362-982c324a598c for this chassis.
Jan 27 08:53:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:12Z|00548|binding|INFO|6886b862-923c-424e-b362-982c324a598c: Claiming fa:16:3e:f8:4a:5e 10.100.0.8
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.137 238945 INFO nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Creating config drive at /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/disk.config#033[00m
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.144 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:4a:5e 10.100.0.8'], port_security=['fa:16:3e:f8:4a:5e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c03e1ba1-3e7e-4cb8-847e-07c85da05427', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bab4841f97143a08a3ba0eeacba626a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7da167e-a929-4104-9685-e5234ac2ada1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac2d5b79-2f6e-4d23-96d5-6332de9c033d, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6886b862-923c-424e-b362-982c324a598c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.144 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoarerywn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.146 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6886b862-923c-424e-b362-982c324a598c in datapath 3e4604ca-eef3-4b48-8b54-479f9b2b30c2 bound to our chassis#033[00m
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.148 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e4604ca-eef3-4b48-8b54-479f9b2b30c2#033[00m
Jan 27 08:53:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:12Z|00549|binding|INFO|Setting lport 6886b862-923c-424e-b362-982c324a598c ovn-installed in OVS
Jan 27 08:53:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:12Z|00550|binding|INFO|Setting lport 6886b862-923c-424e-b362-982c324a598c up in Southbound
Jan 27 08:53:12 np0005597378 systemd-udevd[294813]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.177 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c879d2f7-6a95-44e6-bdf6-a3166a4eec64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:12 np0005597378 NetworkManager[48904]: <info>  [1769521992.1814] device (tap6886b862-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:53:12 np0005597378 NetworkManager[48904]: <info>  [1769521992.1851] device (tap6886b862-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.219 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e8beb628-d077-4ce2-bb5c-dfeb09258cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.229 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3ed6e1-d881-49fa-9334-82059b21f13e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.266 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cd17a040-23b2-4c38-b67e-b010ca22fd8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.273 238945 DEBUG nova.virt.libvirt.driver [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.274 238945 DEBUG nova.virt.libvirt.driver [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.274 238945 DEBUG nova.virt.libvirt.driver [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No VIF found with MAC fa:16:3e:1f:77:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.274 238945 DEBUG nova.virt.libvirt.driver [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] No VIF found with MAC fa:16:3e:f8:4a:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.280 238945 DEBUG nova.network.neutron [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Updated VIF entry in instance network info cache for port 64542a94-92c4-4d0e-94ac-b711946dae41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.281 238945 DEBUG nova.network.neutron [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Updating instance_info_cache with network_info: [{"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.294 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cec4914c-50de-464c-b175-2cd407b625f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e4604ca-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:75:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462299, 'reachable_time': 41286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294824, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.299 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoarerywn" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.320 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[06802074-c7fa-492b-8fb8-a6ea354622ff]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3e4604ca-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462311, 'tstamp': 462311}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294825, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3e4604ca-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462314, 'tstamp': 462314}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294825, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.324 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e4604ca-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.332 238945 DEBUG nova.storage.rbd_utils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.335 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e4604ca-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.336 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.337 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e4604ca-e0, col_values=(('external_ids', {'iface-id': 'd5faa58d-a805-4b68-958a-189d413602e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.337 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.338 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/disk.config 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.390 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:53:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 249 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 203 op/s
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.397 238945 DEBUG nova.virt.libvirt.guest [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  <nova:name>tempest-AttachInterfacesV270Test-server-604088264</nova:name>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 13:53:12</nova:creationTime>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 08:53:12 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:    <nova:user uuid="2689eaf31d4443a7a0885f648f53d3b4">tempest-AttachInterfacesV270Test-1808141455-project-member</nova:user>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:    <nova:project uuid="4bab4841f97143a08a3ba0eeacba626a">tempest-AttachInterfacesV270Test-1808141455</nova:project>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:    <nova:port uuid="3e1005c5-9cfa-4994-99cc-4c1d6d7d171d">
Jan 27 08:53:12 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:    <nova:port uuid="6886b862-923c-424e-b362-982c324a598c">
Jan 27 08:53:12 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 08:53:12 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 08:53:12 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 08:53:12 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.398 238945 DEBUG oslo_concurrency.lockutils [req-5cf570cd-2fe9-462d-9f9d-5e032d9cbc31 req-f95f7cae-ba3a-4551-8b5b-0ce6c4625bac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3ec8ab83-8d5f-4296-bc39-61f269193e6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.422 238945 DEBUG oslo_concurrency.lockutils [None req-d7e9d1a4-49d1-40ae-86a2-ab600f09ab41 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "interface-c03e1ba1-3e7e-4cb8-847e-07c85da05427-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1912545389' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.544 238945 DEBUG oslo_concurrency.processutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/disk.config 3ec8ab83-8d5f-4296-bc39-61f269193e6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.545 238945 INFO nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Deleting local config drive /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a/disk.config because it was imported into RBD.
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.546 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.550 238945 DEBUG nova.compute.provider_tree [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.566 238945 DEBUG nova.scheduler.client.report [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:53:12 np0005597378 NetworkManager[48904]: <info>  [1769521992.5881] manager: (tap64542a94-92): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Jan 27 08:53:12 np0005597378 kernel: tap64542a94-92: entered promiscuous mode
Jan 27 08:53:12 np0005597378 systemd-udevd[294817]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:53:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:12Z|00551|binding|INFO|Claiming lport 64542a94-92c4-4d0e-94ac-b711946dae41 for this chassis.
Jan 27 08:53:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:12Z|00552|binding|INFO|64542a94-92c4-4d0e-94ac-b711946dae41: Claiming fa:16:3e:03:2b:0e 10.100.0.5
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.593 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.596 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.596 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.603 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:2b:0e 10.100.0.5'], port_security=['fa:16:3e:03:2b:0e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ec8ab83-8d5f-4296-bc39-61f269193e6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=64542a94-92c4-4d0e-94ac-b711946dae41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.604 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 64542a94-92c4-4d0e-94ac-b711946dae41 in datapath 13754bbc-8f22-4885-aa27-198718585636 bound to our chassis
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.606 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636
Jan 27 08:53:12 np0005597378 NetworkManager[48904]: <info>  [1769521992.6064] device (tap64542a94-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:53:12 np0005597378 NetworkManager[48904]: <info>  [1769521992.6073] device (tap64542a94-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:53:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:12Z|00553|binding|INFO|Setting lport 64542a94-92c4-4d0e-94ac-b711946dae41 ovn-installed in OVS
Jan 27 08:53:12 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:12Z|00554|binding|INFO|Setting lport 64542a94-92c4-4d0e-94ac-b711946dae41 up in Southbound
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.617 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.625 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9064a5a-f7f3-44ec-b494-33535d78a5ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:53:12 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 27 08:53:12 np0005597378 systemd-machined[207425]: New machine qemu-71-instance-0000003f.
Jan 27 08:53:12 np0005597378 systemd[1]: Started Virtual Machine qemu-71-instance-0000003f.
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.654 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.654 238945 DEBUG nova.network.neutron [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.657 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a0663456-2f65-482e-80aa-0e538712624f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.661 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[656ebc46-a8b4-436f-9506-6f3a110b6e28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.671 238945 INFO nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.687 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.712 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd79da0-8593-4e3a-becd-2161c74975d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.736 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e7971b9b-2d28-4d5f-926d-804681f03982]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 19783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294890, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.757 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[54fea9a3-d452-42dc-918b-200253247da7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294891, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294891, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.759 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.761 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.762 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.765 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.766 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.767 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.767 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.782 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.783 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.784 238945 INFO nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Creating image(s)
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.808 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.832 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.862 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.866 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.906 238945 DEBUG nova.policy [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c69d0d1031754a3ea963316c805e1662', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3f4bad8f405b4cdcbb174936852069ed', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.943 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.943 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.944 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.944 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.969 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:53:12 np0005597378 nova_compute[238941]: 2026-01-27 13:53:12.973 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 347f7116-1ca3-4a98-be15-0e50d25961d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:53:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:12.978 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.224 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521993.2236822, 3ec8ab83-8d5f-4296-bc39-61f269193e6a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.224 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] VM Started (Lifecycle Event)
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.340 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 347f7116-1ca3-4a98-be15-0e50d25961d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.373 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.415 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521993.2239063, 3ec8ab83-8d5f-4296-bc39-61f269193e6a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.416 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] VM Paused (Lifecycle Event)
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.422 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] resizing rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.455 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.459 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.481 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.529 238945 DEBUG nova.objects.instance [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lazy-loading 'migration_context' on Instance uuid 347f7116-1ca3-4a98-be15-0e50d25961d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.545 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.546 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Ensure instance console log exists: /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.546 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.546 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.547 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:13 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:13Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:d4:f7 10.100.0.3
Jan 27 08:53:13 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:13Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:d4:f7 10.100.0.3
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.889 238945 DEBUG nova.network.neutron [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updated VIF entry in instance network info cache for port 6886b862-923c-424e-b362-982c324a598c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.889 238945 DEBUG nova.network.neutron [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updating instance_info_cache with network_info: [{"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:53:13 np0005597378 nova_compute[238941]: 2026-01-27 13:53:13.950 238945 DEBUG oslo_concurrency.lockutils [req-3d9438cd-6cb1-432c-b962-81a1820ad2b3 req-d205b3f7-da07-4893-95f7-e87e4d106247 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c03e1ba1-3e7e-4cb8-847e-07c85da05427" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.315 238945 DEBUG nova.compute.manager [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-plugged-6886b862-923c-424e-b362-982c324a598c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.316 238945 DEBUG oslo_concurrency.lockutils [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.316 238945 DEBUG oslo_concurrency.lockutils [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.316 238945 DEBUG oslo_concurrency.lockutils [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.316 238945 DEBUG nova.compute.manager [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-plugged-6886b862-923c-424e-b362-982c324a598c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.317 238945 WARNING nova.compute.manager [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received unexpected event network-vif-plugged-6886b862-923c-424e-b362-982c324a598c for instance with vm_state active and task_state None.
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.317 238945 DEBUG nova.compute.manager [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-plugged-6886b862-923c-424e-b362-982c324a598c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.317 238945 DEBUG oslo_concurrency.lockutils [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.317 238945 DEBUG oslo_concurrency.lockutils [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.317 238945 DEBUG oslo_concurrency.lockutils [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.318 238945 DEBUG nova.compute.manager [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-plugged-6886b862-923c-424e-b362-982c324a598c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.318 238945 WARNING nova.compute.manager [req-61f8cc78-ed0d-456e-86c6-c8953abdf395 req-bd2c0167-37bd-41ea-84fb-c6c9cf07075d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received unexpected event network-vif-plugged-6886b862-923c-424e-b362-982c324a598c for instance with vm_state active and task_state None.
Jan 27 08:53:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 301 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.2 MiB/s wr, 244 op/s
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.484 238945 DEBUG nova.compute.manager [req-924e26f1-d94b-4bab-8d57-3fed7efbfd5f req-cf2c594f-3c31-4827-9c00-bb55fd469de3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.484 238945 DEBUG oslo_concurrency.lockutils [req-924e26f1-d94b-4bab-8d57-3fed7efbfd5f req-cf2c594f-3c31-4827-9c00-bb55fd469de3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.485 238945 DEBUG oslo_concurrency.lockutils [req-924e26f1-d94b-4bab-8d57-3fed7efbfd5f req-cf2c594f-3c31-4827-9c00-bb55fd469de3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.485 238945 DEBUG oslo_concurrency.lockutils [req-924e26f1-d94b-4bab-8d57-3fed7efbfd5f req-cf2c594f-3c31-4827-9c00-bb55fd469de3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.486 238945 DEBUG nova.compute.manager [req-924e26f1-d94b-4bab-8d57-3fed7efbfd5f req-cf2c594f-3c31-4827-9c00-bb55fd469de3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Processing event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.486 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.497 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521994.4972327, 762ca3c0-2865-41c8-89fc-445573c554c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.497 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] VM Resumed (Lifecycle Event)
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.505 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.508 238945 INFO nova.virt.libvirt.driver [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Instance spawned successfully.
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.508 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.536 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.545 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.546 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.546 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.546 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.547 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.547 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.551 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.611 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.652 238945 INFO nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Took 16.50 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.652 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.731 238945 INFO nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Took 18.72 seconds to build instance.#033[00m
Jan 27 08:53:14 np0005597378 nova_compute[238941]: 2026-01-27 13:53:14.767 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.213 238945 DEBUG nova.network.neutron [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Successfully created port: 1627368c-ee5c-442e-9ad3-4b7789309df9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.318 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.320 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.320 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.321 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.321 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.323 238945 INFO nova.compute.manager [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Terminating instance#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.324 238945 DEBUG nova.compute.manager [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:53:15 np0005597378 kernel: tap3e1005c5-9c (unregistering): left promiscuous mode
Jan 27 08:53:15 np0005597378 NetworkManager[48904]: <info>  [1769521995.3711] device (tap3e1005c5-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:53:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:15Z|00555|binding|INFO|Releasing lport 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d from this chassis (sb_readonly=0)
Jan 27 08:53:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:15Z|00556|binding|INFO|Setting lport 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d down in Southbound
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.378 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:15Z|00557|binding|INFO|Removing iface tap3e1005c5-9c ovn-installed in OVS
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.380 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.390 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:77:54 10.100.0.12'], port_security=['fa:16:3e:1f:77:54 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c03e1ba1-3e7e-4cb8-847e-07c85da05427', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bab4841f97143a08a3ba0eeacba626a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7da167e-a929-4104-9685-e5234ac2ada1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac2d5b79-2f6e-4d23-96d5-6332de9c033d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.391 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 3e1005c5-9cfa-4994-99cc-4c1d6d7d171d in datapath 3e4604ca-eef3-4b48-8b54-479f9b2b30c2 unbound from our chassis#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.392 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e4604ca-eef3-4b48-8b54-479f9b2b30c2#033[00m
Jan 27 08:53:15 np0005597378 kernel: tap6886b862-92 (unregistering): left promiscuous mode
Jan 27 08:53:15 np0005597378 NetworkManager[48904]: <info>  [1769521995.4045] device (tap6886b862-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.420 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:15Z|00558|binding|INFO|Releasing lport 6886b862-923c-424e-b362-982c324a598c from this chassis (sb_readonly=0)
Jan 27 08:53:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:15Z|00559|binding|INFO|Setting lport 6886b862-923c-424e-b362-982c324a598c down in Southbound
Jan 27 08:53:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:15Z|00560|binding|INFO|Removing iface tap6886b862-92 ovn-installed in OVS
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.423 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.423 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[11030df2-6a17-46d0-862c-125b2c6f0112]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.442 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:15 np0005597378 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Jan 27 08:53:15 np0005597378 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003c.scope: Consumed 9.915s CPU time.
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.462 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d28dcf1d-eb86-4224-9ea2-85544e390036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:15 np0005597378 systemd-machined[207425]: Machine qemu-68-instance-0000003c terminated.
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.466 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b8548404-a50a-40fa-9ab0-58fe0c28a8da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.499 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cf58b4aa-c53f-4e7c-b0eb-51174e08ab90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.517 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4232ef48-1078-40da-afd0-6b42fbdadfca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e4604ca-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:75:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462299, 'reachable_time': 41286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295116, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.537 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[869dbf06-0851-4709-af7c-c90d35cf981e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3e4604ca-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462311, 'tstamp': 462311}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295117, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3e4604ca-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462314, 'tstamp': 462314}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295117, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.539 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e4604ca-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.542 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.550 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.552 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e4604ca-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.552 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.553 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e4604ca-e0, col_values=(('external_ids', {'iface-id': 'd5faa58d-a805-4b68-958a-189d413602e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:15 np0005597378 NetworkManager[48904]: <info>  [1769521995.5533] manager: (tap6886b862-92): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.553 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.567 238945 INFO nova.virt.libvirt.driver [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Instance destroyed successfully.#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.568 238945 DEBUG nova.objects.instance [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lazy-loading 'resources' on Instance uuid c03e1ba1-3e7e-4cb8-847e-07c85da05427 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.723 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:4a:5e 10.100.0.8'], port_security=['fa:16:3e:f8:4a:5e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c03e1ba1-3e7e-4cb8-847e-07c85da05427', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bab4841f97143a08a3ba0eeacba626a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7da167e-a929-4104-9685-e5234ac2ada1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac2d5b79-2f6e-4d23-96d5-6332de9c033d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=6886b862-923c-424e-b362-982c324a598c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.725 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 6886b862-923c-424e-b362-982c324a598c in datapath 3e4604ca-eef3-4b48-8b54-479f9b2b30c2 unbound from our chassis#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.728 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e4604ca-eef3-4b48-8b54-479f9b2b30c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.729 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1be7e507-f564-4898-a7e3-f24121334ddb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:15.729 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2 namespace which is not needed anymore#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.822 238945 DEBUG nova.virt.libvirt.vif [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-604088264',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-604088264',id=60,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4bab4841f97143a08a3ba0eeacba626a',ramdisk_id='',reservation_id='r-vty0bhj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proj
ect_name='tempest-AttachInterfacesV270Test-1808141455',owner_user_name='tempest-AttachInterfacesV270Test-1808141455-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:06Z,user_data=None,user_id='2689eaf31d4443a7a0885f648f53d3b4',uuid=c03e1ba1-3e7e-4cb8-847e-07c85da05427,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.823 238945 DEBUG nova.network.os_vif_util [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converting VIF {"id": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "address": "fa:16:3e:1f:77:54", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e1005c5-9c", "ovs_interfaceid": "3e1005c5-9cfa-4994-99cc-4c1d6d7d171d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.823 238945 DEBUG nova.network.os_vif_util [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.824 238945 DEBUG os_vif [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.826 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e1005c5-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.827 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.833 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.834 238945 INFO os_vif [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:77:54,bridge_name='br-int',has_traffic_filtering=True,id=3e1005c5-9cfa-4994-99cc-4c1d6d7d171d,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e1005c5-9c')#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.835 238945 DEBUG nova.virt.libvirt.vif [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-604088264',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-604088264',id=60,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4bab4841f97143a08a3ba0eeacba626a',ramdisk_id='',reservation_id='r-vty0bhj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1808141455',owner_user_name='tempest-AttachInterfacesV270Test-1808141455-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:06Z,user_data=None,user_id='2689eaf31d4443a7a0885f648f53d3b4',uuid=c03e1ba1-3e7e-4cb8-847e-07c85da05427,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.835 238945 DEBUG nova.network.os_vif_util [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converting VIF {"id": "6886b862-923c-424e-b362-982c324a598c", "address": "fa:16:3e:f8:4a:5e", "network": {"id": "3e4604ca-eef3-4b48-8b54-479f9b2b30c2", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-826140706-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bab4841f97143a08a3ba0eeacba626a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6886b862-92", "ovs_interfaceid": "6886b862-923c-424e-b362-982c324a598c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.837 238945 DEBUG nova.network.os_vif_util [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.837 238945 DEBUG os_vif [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.839 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.839 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6886b862-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.843 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:53:15 np0005597378 nova_compute[238941]: 2026-01-27 13:53:15.850 238945 INFO os_vif [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:4a:5e,bridge_name='br-int',has_traffic_filtering=True,id=6886b862-923c-424e-b362-982c324a598c,network=Network(3e4604ca-eef3-4b48-8b54-479f9b2b30c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6886b862-92')#033[00m
Jan 27 08:53:15 np0005597378 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [NOTICE]   (294142) : haproxy version is 2.8.14-c23fe91
Jan 27 08:53:15 np0005597378 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [NOTICE]   (294142) : path to executable is /usr/sbin/haproxy
Jan 27 08:53:15 np0005597378 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [WARNING]  (294142) : Exiting Master process...
Jan 27 08:53:15 np0005597378 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [ALERT]    (294142) : Current worker (294144) exited with code 143 (Terminated)
Jan 27 08:53:15 np0005597378 neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2[294138]: [WARNING]  (294142) : All workers exited. Exiting... (0)
Jan 27 08:53:15 np0005597378 systemd[1]: libpod-800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41.scope: Deactivated successfully.
Jan 27 08:53:15 np0005597378 podman[295158]: 2026-01-27 13:53:15.92316715 +0000 UTC m=+0.053974690 container died 800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:53:15 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41-userdata-shm.mount: Deactivated successfully.
Jan 27 08:53:15 np0005597378 systemd[1]: var-lib-containers-storage-overlay-329b82253451ec637a0da697ce0bba50afb18575ee7d2a92ef0b867bca85ec94-merged.mount: Deactivated successfully.
Jan 27 08:53:15 np0005597378 podman[295158]: 2026-01-27 13:53:15.988929847 +0000 UTC m=+0.119737387 container cleanup 800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:53:15 np0005597378 systemd[1]: libpod-conmon-800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41.scope: Deactivated successfully.
Jan 27 08:53:16 np0005597378 podman[295207]: 2026-01-27 13:53:16.10783934 +0000 UTC m=+0.093733128 container remove 800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 08:53:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.117 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2808c92a-8f07-40fb-9c48-76002d4058f9]: (4, ('Tue Jan 27 01:53:15 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2 (800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41)\n800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41\nTue Jan 27 01:53:15 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2 (800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41)\n800766bb7c05509f5baddb8e492ce8d2b5322b7ff79075018d5a9a8f5160ca41\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.118 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0b2ca2-f588-4984-8e27-041c802cb8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.120 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e4604ca-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:16 np0005597378 kernel: tap3e4604ca-e0: left promiscuous mode
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.143 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb4a6b7-fa04-4c9e-9bdc-06869b4f53c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.147 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.166 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[711ed75c-fbd9-4cfa-aa9e-fc9d94e38026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.167 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d627fd-3f13-48b3-97c3-c3e0b16f9895]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.189 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[57af9049-802a-4057-b3d1-b8c67d3141b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462289, 'reachable_time': 29752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295222, 'error': None, 'target': 'ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.192 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3e4604ca-eef3-4b48-8b54-479f9b2b30c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:53:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:16.192 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9c29fe56-de24-433c-81f0-ce86906255a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:16 np0005597378 systemd[1]: run-netns-ovnmeta\x2d3e4604ca\x2deef3\x2d4b48\x2d8b54\x2d479f9b2b30c2.mount: Deactivated successfully.
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.287 238945 INFO nova.virt.libvirt.driver [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Deleting instance files /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427_del#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.288 238945 INFO nova.virt.libvirt.driver [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Deletion of /var/lib/nova/instances/c03e1ba1-3e7e-4cb8-847e-07c85da05427_del complete#033[00m
Jan 27 08:53:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 343 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 218 op/s
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.448 238945 INFO nova.compute.manager [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.448 238945 DEBUG oslo.service.loopingcall [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.449 238945 DEBUG nova.compute.manager [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.449 238945 DEBUG nova.network.neutron [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.512 238945 DEBUG nova.compute.manager [req-99d1dd5a-e45d-4a48-a397-91f6f08aae3d req-4c7474db-3a5b-4fa3-a816-41ebe63eff67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-unplugged-6886b862-923c-424e-b362-982c324a598c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.513 238945 DEBUG oslo_concurrency.lockutils [req-99d1dd5a-e45d-4a48-a397-91f6f08aae3d req-4c7474db-3a5b-4fa3-a816-41ebe63eff67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.513 238945 DEBUG oslo_concurrency.lockutils [req-99d1dd5a-e45d-4a48-a397-91f6f08aae3d req-4c7474db-3a5b-4fa3-a816-41ebe63eff67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.513 238945 DEBUG oslo_concurrency.lockutils [req-99d1dd5a-e45d-4a48-a397-91f6f08aae3d req-4c7474db-3a5b-4fa3-a816-41ebe63eff67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.513 238945 DEBUG nova.compute.manager [req-99d1dd5a-e45d-4a48-a397-91f6f08aae3d req-4c7474db-3a5b-4fa3-a816-41ebe63eff67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-unplugged-6886b862-923c-424e-b362-982c324a598c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.514 238945 DEBUG nova.compute.manager [req-99d1dd5a-e45d-4a48-a397-91f6f08aae3d req-4c7474db-3a5b-4fa3-a816-41ebe63eff67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-unplugged-6886b862-923c-424e-b362-982c324a598c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.643 238945 DEBUG nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.643 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.643 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.643 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.644 238945 DEBUG nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] No waiting events found dispatching network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.644 238945 WARNING nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received unexpected event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af for instance with vm_state active and task_state None.
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.644 238945 DEBUG nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.644 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.644 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.644 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.645 238945 DEBUG nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Processing event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.645 238945 DEBUG nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.645 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.645 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.645 238945 DEBUG oslo_concurrency.lockutils [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.646 238945 DEBUG nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] No waiting events found dispatching network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.646 238945 WARNING nova.compute.manager [req-797753b1-bcf5-4cc8-93f0-a9b49e88c23b req-ebe66537-beaa-4c1b-a54d-a7cb17aa3062 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received unexpected event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 for instance with vm_state building and task_state spawning.
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.647 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.650 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521996.650105, de740849-c0ca-4217-974b-693a30f63855 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.650 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] VM Resumed (Lifecycle Event)
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.652 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.655 238945 INFO nova.virt.libvirt.driver [-] [instance: de740849-c0ca-4217-974b-693a30f63855] Instance spawned successfully.
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.656 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.671 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.673 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.695 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.696 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.696 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.696 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.697 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.697 238945 DEBUG nova.virt.libvirt.driver [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.703 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.856 238945 INFO nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Took 17.41 seconds to spawn the instance on the hypervisor.
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.857 238945 DEBUG nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.975 238945 INFO nova.compute.manager [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Took 20.77 seconds to build instance.
Jan 27 08:53:16 np0005597378 nova_compute[238941]: 2026-01-27 13:53:16.978 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:53:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:53:17
Jan 27 08:53:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:53:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:53:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'volumes', 'default.rgw.log', 'images', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', '.mgr']
Jan 27 08:53:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:53:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:17 np0005597378 nova_compute[238941]: 2026-01-27 13:53:17.470 238945 DEBUG oslo_concurrency.lockutils [None req-80730d6f-303e-488c-8e86-f81f7c41f86f f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:53:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:53:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:53:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:53:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:53:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.006 238945 DEBUG nova.network.neutron [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Successfully updated port: 1627368c-ee5c-442e-9ad3-4b7789309df9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 08:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.152 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "refresh_cache-347f7116-1ca3-4a98-be15-0e50d25961d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.153 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquired lock "refresh_cache-347f7116-1ca3-4a98-be15-0e50d25961d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.153 238945 DEBUG nova.network.neutron [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.381 238945 DEBUG nova.network.neutron [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.394 238945 DEBUG nova.network.neutron [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:53:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 340 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.7 MiB/s wr, 268 op/s
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.458 238945 INFO nova.compute.manager [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Took 2.01 seconds to deallocate network for instance.
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.550 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.551 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.636 238945 DEBUG nova.compute.manager [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-plugged-6886b862-923c-424e-b362-982c324a598c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.637 238945 DEBUG oslo_concurrency.lockutils [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.638 238945 DEBUG oslo_concurrency.lockutils [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.638 238945 DEBUG oslo_concurrency.lockutils [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.639 238945 DEBUG nova.compute.manager [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-plugged-6886b862-923c-424e-b362-982c324a598c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.642 238945 WARNING nova.compute.manager [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received unexpected event network-vif-plugged-6886b862-923c-424e-b362-982c324a598c for instance with vm_state deleted and task_state None.
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.644 238945 DEBUG nova.compute.manager [req-22ee3734-7649-4c07-af29-8781a22783c6 req-60983160-18bd-4718-aa71-c4164a3bc7e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-deleted-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:53:18 np0005597378 nova_compute[238941]: 2026-01-27 13:53:18.735 238945 DEBUG oslo_concurrency.processutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.206 238945 DEBUG nova.network.neutron [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Updating instance_info_cache with network_info: [{"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:53:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1915702451' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.346 238945 DEBUG oslo_concurrency.processutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.351 238945 DEBUG nova.compute.provider_tree [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.428 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.429 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.429 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.429 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.430 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.431 238945 INFO nova.compute.manager [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Terminating instance
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.432 238945 DEBUG nova.compute.manager [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.477 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.477 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.478 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.478 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.478 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Processing event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.478 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.478 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.478 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.479 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.479 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] No waiting events found dispatching network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.479 238945 WARNING nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received unexpected event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 for instance with vm_state building and task_state spawning.
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.479 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-deleted-6886b862-923c-424e-b362-982c324a598c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.480 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received event network-changed-1627368c-ee5c-442e-9ad3-4b7789309df9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.480 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Refreshing instance network info cache due to event network-changed-1627368c-ee5c-442e-9ad3-4b7789309df9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.480 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-347f7116-1ca3-4a98-be15-0e50d25961d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.481 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.486 238945 DEBUG nova.scheduler.client.report [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.489 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769521999.4842935, 3ec8ab83-8d5f-4296-bc39-61f269193e6a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.489 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.492 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.498 238945 INFO nova.virt.libvirt.driver [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Instance spawned successfully.#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.498 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:53:19 np0005597378 kernel: tap4ef8a620-c2 (unregistering): left promiscuous mode
Jan 27 08:53:19 np0005597378 NetworkManager[48904]: <info>  [1769521999.5202] device (tap4ef8a620-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.529 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:19 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:19Z|00561|binding|INFO|Releasing lport 4ef8a620-c221-4b3f-ba32-3ab574e341af from this chassis (sb_readonly=0)
Jan 27 08:53:19 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:19Z|00562|binding|INFO|Setting lport 4ef8a620-c221-4b3f-ba32-3ab574e341af down in Southbound
Jan 27 08:53:19 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:19Z|00563|binding|INFO|Removing iface tap4ef8a620-c2 ovn-installed in OVS
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.531 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.555 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:19 np0005597378 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Jan 27 08:53:19 np0005597378 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003d.scope: Consumed 5.366s CPU time.
Jan 27 08:53:19 np0005597378 systemd-machined[207425]: Machine qemu-69-instance-0000003d terminated.
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.667 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:4e:76 10.100.0.4'], port_security=['fa:16:3e:cf:4e:76 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '762ca3c0-2865-41c8-89fc-445573c554c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4ef8a620-c221-4b3f-ba32-3ab574e341af) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.668 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4ef8a620-c221-4b3f-ba32-3ab574e341af in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f unbound from our chassis#033[00m
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.669 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.680 238945 INFO nova.virt.libvirt.driver [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Instance destroyed successfully.#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.681 238945 DEBUG nova.objects.instance [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'resources' on Instance uuid 762ca3c0-2865-41c8-89fc-445573c554c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.685 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Releasing lock "refresh_cache-347f7116-1ca3-4a98-be15-0e50d25961d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.685 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Instance network_info: |[{"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.687 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-347f7116-1ca3-4a98-be15-0e50d25961d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.687 238945 DEBUG nova.network.neutron [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Refreshing network info cache for port 1627368c-ee5c-442e-9ad3-4b7789309df9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.690 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Start _get_guest_xml network_info=[{"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.691 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.702 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.707 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f23fbf11-6e05-418a-8edd-2f349eb969f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.708 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.709 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.709 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.710 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.710 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.711 238945 DEBUG nova.virt.libvirt.driver [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.716 238945 WARNING nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.720 238945 DEBUG nova.virt.libvirt.host [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.721 238945 DEBUG nova.virt.libvirt.host [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.725 238945 DEBUG nova.virt.libvirt.host [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.725 238945 DEBUG nova.virt.libvirt.host [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.725 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.726 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.726 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.726 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.726 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.726 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.727 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.727 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.727 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.727 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.727 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.728 238945 DEBUG nova.virt.hardware [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.732 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.753 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[18ba4da1-a768-4077-abfe-35f51dea6c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.756 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2ed991-6dfb-464f-a30c-eb722d41fe26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.783 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.795 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d55fb793-12e4-4d09-a73d-a348e3142eee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.821 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cf04e888-0f17-4bd0-bdd9-58d93a30a397]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b1231fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:cc:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462580, 'reachable_time': 15186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295268, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.843 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[48633a1a-5d73-4c44-a5b2-f014d2dfedd9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462593, 'tstamp': 462593}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295269, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0b1231fc-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462596, 'tstamp': 462596}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295269, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.846 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.847 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.853 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b1231fc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.853 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.853 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b1231fc-f0, col_values=(('external_ids', {'iface-id': '9d3a2d95-6e13-45c9-8614-b22897c037b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:19.854 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.866 238945 INFO nova.scheduler.client.report [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Deleted allocations for instance c03e1ba1-3e7e-4cb8-847e-07c85da05427#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.886 238945 DEBUG nova.virt.libvirt.vif [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1479146776',display_name='tempest-MultipleCreateTestJSON-server-1479146776-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1479146776-1',id=61,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-4g0jf4bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:14Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=762ca3c0-2865-41c8-89fc-445573c554c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.886 238945 DEBUG nova.network.os_vif_util [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "address": "fa:16:3e:cf:4e:76", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef8a620-c2", "ovs_interfaceid": "4ef8a620-c221-4b3f-ba32-3ab574e341af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.887 238945 DEBUG nova.network.os_vif_util [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.888 238945 DEBUG os_vif [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.894 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ef8a620-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.896 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.897 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.900 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.903 238945 INFO os_vif [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:4e:76,bridge_name='br-int',has_traffic_filtering=True,id=4ef8a620-c221-4b3f-ba32-3ab574e341af,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef8a620-c2')#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.949 238945 INFO nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Took 12.60 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.949 238945 DEBUG nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.960 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.961 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.961 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.962 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.962 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.964 238945 INFO nova.compute.manager [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Terminating instance#033[00m
Jan 27 08:53:19 np0005597378 nova_compute[238941]: 2026-01-27 13:53:19.965 238945 DEBUG nova.compute.manager [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.014 238945 DEBUG oslo_concurrency.lockutils [None req-a2234247-a9fc-4c2f-9f4a-5b990b0fb890 2689eaf31d4443a7a0885f648f53d3b4 4bab4841f97143a08a3ba0eeacba626a - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.068 238945 INFO nova.compute.manager [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Took 14.10 seconds to build instance.#033[00m
Jan 27 08:53:20 np0005597378 kernel: tape2362203-8b (unregistering): left promiscuous mode
Jan 27 08:53:20 np0005597378 NetworkManager[48904]: <info>  [1769522000.1132] device (tape2362203-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:53:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:20Z|00564|binding|INFO|Releasing lport e2362203-8b31-4317-a96a-2089dfc590a2 from this chassis (sb_readonly=0)
Jan 27 08:53:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:20Z|00565|binding|INFO|Setting lport e2362203-8b31-4317-a96a-2089dfc590a2 down in Southbound
Jan 27 08:53:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:20Z|00566|binding|INFO|Removing iface tape2362203-8b ovn-installed in OVS
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.131 238945 DEBUG oslo_concurrency.lockutils [None req-1d8c66b9-323c-4eae-b62e-cab0f6cf3283 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:20.138 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:fb:82 10.100.0.12'], port_security=['fa:16:3e:45:fb:82 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'de740849-c0ca-4217-974b-693a30f63855', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd333430b14814ea487cbd2af414c350f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6912d520-bac9-4dbc-8621-a6eda576ed63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0aaafa11-7826-468a-b8bf-1210da9401ed, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e2362203-8b31-4317-a96a-2089dfc590a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:20.140 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e2362203-8b31-4317-a96a-2089dfc590a2 in datapath 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f unbound from our chassis#033[00m
Jan 27 08:53:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:20.142 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:53:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:20.143 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1b07ee42-97f6-4149-b2d0-3cc957512e69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:20.144 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f namespace which is not needed anymore#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:20 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 27 08:53:20 np0005597378 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Jan 27 08:53:20 np0005597378 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003e.scope: Consumed 4.122s CPU time.
Jan 27 08:53:20 np0005597378 systemd-machined[207425]: Machine qemu-70-instance-0000003e terminated.
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.205 238945 INFO nova.virt.libvirt.driver [-] [instance: de740849-c0ca-4217-974b-693a30f63855] Instance destroyed successfully.#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.206 238945 DEBUG nova.objects.instance [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lazy-loading 'resources' on Instance uuid de740849-c0ca-4217-974b-693a30f63855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.229 238945 DEBUG nova.virt.libvirt.vif [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:52:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1479146776',display_name='tempest-MultipleCreateTestJSON-server-1479146776-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1479146776-2',id=62,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-27T13:53:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d333430b14814ea487cbd2af414c350f',ramdisk_id='',reservation_id='r-4g0jf4bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-644845764',owner_user_name='tempest-MultipleCreateTestJSON-644845764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:16Z,user_data=None,user_id='f9e663079ce44f94a4dbe6125b395ce1',uuid=de740849-c0ca-4217-974b-693a30f63855,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.230 238945 DEBUG nova.network.os_vif_util [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converting VIF {"id": "e2362203-8b31-4317-a96a-2089dfc590a2", "address": "fa:16:3e:45:fb:82", "network": {"id": "0b1231fc-f48c-438b-9fe3-1ac6cd8a496f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1350678511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d333430b14814ea487cbd2af414c350f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2362203-8b", "ovs_interfaceid": "e2362203-8b31-4317-a96a-2089dfc590a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.231 238945 DEBUG nova.network.os_vif_util [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.231 238945 DEBUG os_vif [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.232 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.233 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2362203-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.237 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.240 238945 INFO os_vif [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:fb:82,bridge_name='br-int',has_traffic_filtering=True,id=e2362203-8b31-4317-a96a-2089dfc590a2,network=Network(0b1231fc-f48c-438b-9fe3-1ac6cd8a496f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2362203-8b')#033[00m
Jan 27 08:53:20 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [NOTICE]   (294565) : haproxy version is 2.8.14-c23fe91
Jan 27 08:53:20 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [NOTICE]   (294565) : path to executable is /usr/sbin/haproxy
Jan 27 08:53:20 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [WARNING]  (294565) : Exiting Master process...
Jan 27 08:53:20 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [ALERT]    (294565) : Current worker (294570) exited with code 143 (Terminated)
Jan 27 08:53:20 np0005597378 neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f[294535]: [WARNING]  (294565) : All workers exited. Exiting... (0)
Jan 27 08:53:20 np0005597378 systemd[1]: libpod-28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7.scope: Deactivated successfully.
Jan 27 08:53:20 np0005597378 conmon[294535]: conmon 28a3afe162f8765afbb0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7.scope/container/memory.events
Jan 27 08:53:20 np0005597378 podman[295337]: 2026-01-27 13:53:20.34120576 +0000 UTC m=+0.084702137 container died 28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:53:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 306 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.7 MiB/s wr, 347 op/s
Jan 27 08:53:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:53:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1414337084' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.438 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.460 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:20 np0005597378 nova_compute[238941]: 2026-01-27 13:53:20.463 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7-userdata-shm.mount: Deactivated successfully.
Jan 27 08:53:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay-359d450ac27da225132b38663b330b92ce8c29085a06427feb6092dc808c2ac9-merged.mount: Deactivated successfully.
Jan 27 08:53:20 np0005597378 podman[295337]: 2026-01-27 13:53:20.725620563 +0000 UTC m=+0.469116940 container cleanup 28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:53:20 np0005597378 systemd[1]: libpod-conmon-28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7.scope: Deactivated successfully.
Jan 27 08:53:21 np0005597378 podman[295425]: 2026-01-27 13:53:21.009178969 +0000 UTC m=+0.256386117 container remove 28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 08:53:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.016 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[25b2f564-d8bf-4faa-bdde-128f53187747]: (4, ('Tue Jan 27 01:53:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f (28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7)\n28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7\nTue Jan 27 01:53:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f (28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7)\n28a3afe162f8765afbb04b925c109940701d0f0959d6815e8e792857335d01d7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.017 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2312d653-8416-48e0-aec5-26c129a5203a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.018 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b1231fc-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.020 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:21 np0005597378 kernel: tap0b1231fc-f0: left promiscuous mode
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.022 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.026 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[19a9dcc1-61f3-4c70-8f8b-4ea41fbe0754]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.041 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.041 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4fb8a2-5b4a-4a37-9d78-fb683f7a8ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.044 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[970d2aee-73e9-4dc3-9966-f38fa20b722c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.059 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[046708d3-6f3f-47bf-baff-da971a124c1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462572, 'reachable_time': 20467, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295439, 'error': None, 'target': 'ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.061 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0b1231fc-f48c-438b-9fe3-1ac6cd8a496f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:53:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:21.061 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a50a31-d6c1-49a0-ba76-0c115a3558e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:21 np0005597378 systemd[1]: run-netns-ovnmeta\x2d0b1231fc\x2df48c\x2d438b\x2d9fe3\x2d1ac6cd8a496f.mount: Deactivated successfully.
Jan 27 08:53:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:53:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3325252525' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.143 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.145 238945 DEBUG nova.virt.libvirt.vif [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1505918091',display_name='tempest-ServerAddressesTestJSON-server-1505918091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1505918091',id=64,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f4bad8f405b4cdcbb174936852069ed',ramdisk_id='',reservation_id='r-de000eqq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-43382846',owner_user_name='tempest-ServerAddress
esTestJSON-43382846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:12Z,user_data=None,user_id='c69d0d1031754a3ea963316c805e1662',uuid=347f7116-1ca3-4a98-be15-0e50d25961d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.145 238945 DEBUG nova.network.os_vif_util [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Converting VIF {"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.146 238945 DEBUG nova.network.os_vif_util [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.147 238945 DEBUG nova.objects.instance [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lazy-loading 'pci_devices' on Instance uuid 347f7116-1ca3-4a98-be15-0e50d25961d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.261 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  <uuid>347f7116-1ca3-4a98-be15-0e50d25961d3</uuid>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  <name>instance-00000040</name>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerAddressesTestJSON-server-1505918091</nova:name>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:53:19</nova:creationTime>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:        <nova:user uuid="c69d0d1031754a3ea963316c805e1662">tempest-ServerAddressesTestJSON-43382846-project-member</nova:user>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:        <nova:project uuid="3f4bad8f405b4cdcbb174936852069ed">tempest-ServerAddressesTestJSON-43382846</nova:project>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:        <nova:port uuid="1627368c-ee5c-442e-9ad3-4b7789309df9">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <entry name="serial">347f7116-1ca3-4a98-be15-0e50d25961d3</entry>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <entry name="uuid">347f7116-1ca3-4a98-be15-0e50d25961d3</entry>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/347f7116-1ca3-4a98-be15-0e50d25961d3_disk">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/347f7116-1ca3-4a98-be15-0e50d25961d3_disk.config">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:78:40:ba"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <target dev="tap1627368c-ee"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/console.log" append="off"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:53:21 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:53:21 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:53:21 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:53:21 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.262 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Preparing to wait for external event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.262 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.262 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.262 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.263 238945 DEBUG nova.virt.libvirt.vif [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1505918091',display_name='tempest-ServerAddressesTestJSON-server-1505918091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1505918091',id=64,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f4bad8f405b4cdcbb174936852069ed',ramdisk_id='',reservation_id='r-de000eqq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-43382846',owner_user_name='tempest-ServerAddressesTestJSON-43382846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:12Z,user_data=None,user_id='c69d0d1031754a3ea963316c805e1662',uuid=347f7116-1ca3-4a98-be15-0e50d25961d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.263 238945 DEBUG nova.network.os_vif_util [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Converting VIF {"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.264 238945 DEBUG nova.network.os_vif_util [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.264 238945 DEBUG os_vif [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.265 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.265 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.266 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.271 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.272 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1627368c-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.273 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1627368c-ee, col_values=(('external_ids', {'iface-id': '1627368c-ee5c-442e-9ad3-4b7789309df9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:40:ba', 'vm-uuid': '347f7116-1ca3-4a98-be15-0e50d25961d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:21 np0005597378 NetworkManager[48904]: <info>  [1769522001.2754] manager: (tap1627368c-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.277 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.279 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.281 238945 INFO os_vif [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee')#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.374 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.374 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.375 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] No VIF found with MAC fa:16:3e:78:40:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.375 238945 INFO nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Using config drive#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.395 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.865 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-vif-unplugged-4ef8a620-c221-4b3f-ba32-3ab574e341af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.866 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.866 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.866 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.866 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] No waiting events found dispatching network-vif-unplugged-4ef8a620-c221-4b3f-ba32-3ab574e341af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.866 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-vif-unplugged-4ef8a620-c221-4b3f-ba32-3ab574e341af for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.866 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] No waiting events found dispatching network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 WARNING nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received unexpected event network-vif-plugged-4ef8a620-c221-4b3f-ba32-3ab574e341af for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-vif-unplugged-e2362203-8b31-4317-a96a-2089dfc590a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.867 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.868 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.868 238945 DEBUG oslo_concurrency.lockutils [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.868 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] No waiting events found dispatching network-vif-unplugged-e2362203-8b31-4317-a96a-2089dfc590a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.868 238945 DEBUG nova.compute.manager [req-0e59b435-3c2e-479b-b10f-1af8045fc4e3 req-ac46b9d8-9965-46d0-8acb-3bd5a1afafe8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-vif-unplugged-e2362203-8b31-4317-a96a-2089dfc590a2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.981 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.988 238945 INFO nova.virt.libvirt.driver [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Deleting instance files /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9_del#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.988 238945 INFO nova.virt.libvirt.driver [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Deletion of /var/lib/nova/instances/762ca3c0-2865-41c8-89fc-445573c554c9_del complete#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.995 238945 INFO nova.virt.libvirt.driver [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Deleting instance files /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855_del#033[00m
Jan 27 08:53:21 np0005597378 nova_compute[238941]: 2026-01-27 13:53:21.995 238945 INFO nova.virt.libvirt.driver [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Deletion of /var/lib/nova/instances/de740849-c0ca-4217-974b-693a30f63855_del complete#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.125 238945 INFO nova.compute.manager [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Took 2.69 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.125 238945 DEBUG oslo.service.loopingcall [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.126 238945 DEBUG nova.compute.manager [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.127 238945 DEBUG nova.network.neutron [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.135 238945 INFO nova.compute.manager [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Took 2.17 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.136 238945 DEBUG oslo.service.loopingcall [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.137 238945 DEBUG nova.compute.manager [-] [instance: de740849-c0ca-4217-974b-693a30f63855] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.138 238945 DEBUG nova.network.neutron [-] [instance: de740849-c0ca-4217-974b-693a30f63855] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.208 238945 DEBUG nova.network.neutron [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Updated VIF entry in instance network info cache for port 1627368c-ee5c-442e-9ad3-4b7789309df9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.209 238945 DEBUG nova.network.neutron [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Updating instance_info_cache with network_info: [{"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.229 238945 INFO nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Creating config drive at /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/disk.config#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.234 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz9gaqiq1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.310 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-347f7116-1ca3-4a98-be15-0e50d25961d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.311 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-unplugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.311 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.312 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.312 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.312 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-unplugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.312 238945 WARNING nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received unexpected event network-vif-unplugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.312 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.313 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.313 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.313 238945 DEBUG oslo_concurrency.lockutils [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c03e1ba1-3e7e-4cb8-847e-07c85da05427-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.313 238945 DEBUG nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] No waiting events found dispatching network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.313 238945 WARNING nova.compute.manager [req-5e0394b2-2dfb-4032-9d53-94d71a30a484 req-c5ecdb9d-6162-40b9-8079-9741435c3d72 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Received unexpected event network-vif-plugged-3e1005c5-9cfa-4994-99cc-4c1d6d7d171d for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.373 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz9gaqiq1" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.396 238945 DEBUG nova.storage.rbd_utils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] rbd image 347f7116-1ca3-4a98-be15-0e50d25961d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 306 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.3 MiB/s wr, 264 op/s
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.404 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/disk.config 347f7116-1ca3-4a98-be15-0e50d25961d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.570 238945 DEBUG oslo_concurrency.processutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/disk.config 347f7116-1ca3-4a98-be15-0e50d25961d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.571 238945 INFO nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Deleting local config drive /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3/disk.config because it was imported into RBD.#033[00m
Jan 27 08:53:22 np0005597378 kernel: tap1627368c-ee: entered promiscuous mode
Jan 27 08:53:22 np0005597378 NetworkManager[48904]: <info>  [1769522002.6289] manager: (tap1627368c-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/252)
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.630 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:22Z|00567|binding|INFO|Claiming lport 1627368c-ee5c-442e-9ad3-4b7789309df9 for this chassis.
Jan 27 08:53:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:22Z|00568|binding|INFO|1627368c-ee5c-442e-9ad3-4b7789309df9: Claiming fa:16:3e:78:40:ba 10.100.0.8
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:22 np0005597378 systemd-udevd[295517]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:53:22 np0005597378 systemd-machined[207425]: New machine qemu-72-instance-00000040.
Jan 27 08:53:22 np0005597378 NetworkManager[48904]: <info>  [1769522002.6780] device (tap1627368c-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:53:22 np0005597378 NetworkManager[48904]: <info>  [1769522002.6793] device (tap1627368c-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:53:22 np0005597378 systemd[1]: Started Virtual Machine qemu-72-instance-00000040.
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:22Z|00569|binding|INFO|Setting lport 1627368c-ee5c-442e-9ad3-4b7789309df9 ovn-installed in OVS
Jan 27 08:53:22 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:22Z|00570|binding|INFO|Setting lport 1627368c-ee5c-442e-9ad3-4b7789309df9 up in Southbound
Jan 27 08:53:22 np0005597378 nova_compute[238941]: 2026-01-27 13:53:22.715 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.717 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:40:ba 10.100.0.8'], port_security=['fa:16:3e:78:40:ba 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '347f7116-1ca3-4a98-be15-0e50d25961d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f4bad8f405b4cdcbb174936852069ed', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7d73eac7-7a8f-4239-9741-7eab87827cb0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c948a0a0-103f-4031-923b-65a0b07643ba, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1627368c-ee5c-442e-9ad3-4b7789309df9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.719 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1627368c-ee5c-442e-9ad3-4b7789309df9 in datapath e120265a-8c77-490f-9cb1-b93e6252e0c3 bound to our chassis#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.720 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e120265a-8c77-490f-9cb1-b93e6252e0c3#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.731 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa9105e-1f43-40a0-9588-372016f4fb2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.734 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape120265a-81 in ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.735 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape120265a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.736 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d73b7a-cbd6-40e9-8b03-3c0a5f0cfe12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.736 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3d746b1a-1421-4088-8503-3065aabe2bb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.749 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[42a6812c-5b15-4890-93da-bf2f36741634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.771 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd8c079-580d-43e7-9226-fe5bd9fa7921]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.805 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b277f081-ace9-4c5d-973a-871bff84463d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:22 np0005597378 systemd-udevd[295520]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:53:22 np0005597378 NetworkManager[48904]: <info>  [1769522002.8149] manager: (tape120265a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/253)
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.816 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9d84b0eb-efd6-4dd1-87f5-7bfb50b53f54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:22 np0005597378 podman[295529]: 2026-01-27 13:53:22.852333088 +0000 UTC m=+0.070234838 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.852 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4a96428a-6447-4344-a3d7-6ba00af7e0d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.857 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cd239bec-4543-4ba7-8c3d-4d8cf77cda44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:22 np0005597378 podman[295531]: 2026-01-27 13:53:22.878636964 +0000 UTC m=+0.092304730 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:53:22 np0005597378 NetworkManager[48904]: <info>  [1769522002.8856] device (tape120265a-80): carrier: link connected
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.892 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[35966968-776f-48d0-9697-e1e33ed81773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.913 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6de597d0-4065-4466-a2fc-596c4acdb1a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape120265a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:b1:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464250, 'reachable_time': 23898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295592, 'error': None, 'target': 'ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.929 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf821a1-570b-485d-a23f-48dcf6092daf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:b16a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464250, 'tstamp': 464250}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295593, 'error': None, 'target': 'ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.951 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[92edf2c2-f44c-468f-9b95-5217a244d52e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape120265a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:b1:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464250, 'reachable_time': 23898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295594, 'error': None, 'target': 'ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:22.991 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5713b7aa-7a91-44d6-adc2-28a0c3e94b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.054 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[00563ce8-186f-409d-99a6-bcda7ddd0067]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.055 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape120265a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.055 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.056 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape120265a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.057 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:23 np0005597378 NetworkManager[48904]: <info>  [1769522003.0582] manager: (tape120265a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Jan 27 08:53:23 np0005597378 kernel: tape120265a-80: entered promiscuous mode
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.062 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape120265a-80, col_values=(('external_ids', {'iface-id': '656dc8af-0c2c-427c-9f50-66ca9dddd3cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:23 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:23Z|00571|binding|INFO|Releasing lport 656dc8af-0c2c-427c-9f50-66ca9dddd3cd from this chassis (sb_readonly=0)
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.063 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.064 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.065 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e120265a-8c77-490f-9cb1-b93e6252e0c3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e120265a-8c77-490f-9cb1-b93e6252e0c3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.065 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c42bd98-60e1-432d-a8dc-cfe43a93e51e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.066 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-e120265a-8c77-490f-9cb1-b93e6252e0c3
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/e120265a-8c77-490f-9cb1-b93e6252e0c3.pid.haproxy
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID e120265a-8c77-490f-9cb1-b93e6252e0c3
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:53:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:23.066 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'env', 'PROCESS_TAG=haproxy-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e120265a-8c77-490f-9cb1-b93e6252e0c3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.081 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.382 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522003.3814676, 347f7116-1ca3-4a98-be15-0e50d25961d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.382 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] VM Started (Lifecycle Event)#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.453 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.459 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522003.3816738, 347f7116-1ca3-4a98-be15-0e50d25961d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.459 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:53:23 np0005597378 podman[295666]: 2026-01-27 13:53:23.482805639 +0000 UTC m=+0.078009426 container create b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:53:23 np0005597378 podman[295666]: 2026-01-27 13:53:23.430037702 +0000 UTC m=+0.025241509 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:53:23 np0005597378 systemd[1]: Started libpod-conmon-b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91.scope.
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.539 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.543 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:53:23 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:53:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85cfd289a5e37cbb715378c71af1ef650710fa5985fe43f52cb74ac6fbc72b45/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:23 np0005597378 podman[295666]: 2026-01-27 13:53:23.581172711 +0000 UTC m=+0.176376518 container init b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:53:23 np0005597378 podman[295666]: 2026-01-27 13:53:23.587769438 +0000 UTC m=+0.182973225 container start b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 27 08:53:23 np0005597378 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [NOTICE]   (295685) : New worker (295687) forked
Jan 27 08:53:23 np0005597378 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [NOTICE]   (295685) : Loading success.
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.697 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.910 238945 DEBUG nova.compute.manager [req-dd175363-cf9e-4a6e-af2e-40e8b0befe2d req-81e2fc25-dfd2-4eb6-b099-26bb44970acd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.910 238945 DEBUG oslo_concurrency.lockutils [req-dd175363-cf9e-4a6e-af2e-40e8b0befe2d req-81e2fc25-dfd2-4eb6-b099-26bb44970acd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.910 238945 DEBUG oslo_concurrency.lockutils [req-dd175363-cf9e-4a6e-af2e-40e8b0befe2d req-81e2fc25-dfd2-4eb6-b099-26bb44970acd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.911 238945 DEBUG oslo_concurrency.lockutils [req-dd175363-cf9e-4a6e-af2e-40e8b0befe2d req-81e2fc25-dfd2-4eb6-b099-26bb44970acd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.911 238945 DEBUG nova.compute.manager [req-dd175363-cf9e-4a6e-af2e-40e8b0befe2d req-81e2fc25-dfd2-4eb6-b099-26bb44970acd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Processing event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.911 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.915 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.915 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522003.9149766, 347f7116-1ca3-4a98-be15-0e50d25961d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.915 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.920 238945 INFO nova.virt.libvirt.driver [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Instance spawned successfully.#033[00m
Jan 27 08:53:23 np0005597378 nova_compute[238941]: 2026-01-27 13:53:23.921 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.051 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.054 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.086 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.087 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.087 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.087 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.088 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.088 238945 DEBUG nova.virt.libvirt.driver [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.186 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:53:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 261 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.3 MiB/s wr, 365 op/s
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.488 238945 INFO nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Took 11.71 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.489 238945 DEBUG nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.511 238945 DEBUG nova.compute.manager [req-e972aac8-b56d-46b7-aef6-3f69add3d4c2 req-f287dca2-3df6-4505-8dd9-86f7eb5dd4eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.511 238945 DEBUG oslo_concurrency.lockutils [req-e972aac8-b56d-46b7-aef6-3f69add3d4c2 req-f287dca2-3df6-4505-8dd9-86f7eb5dd4eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "de740849-c0ca-4217-974b-693a30f63855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.511 238945 DEBUG oslo_concurrency.lockutils [req-e972aac8-b56d-46b7-aef6-3f69add3d4c2 req-f287dca2-3df6-4505-8dd9-86f7eb5dd4eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.512 238945 DEBUG oslo_concurrency.lockutils [req-e972aac8-b56d-46b7-aef6-3f69add3d4c2 req-f287dca2-3df6-4505-8dd9-86f7eb5dd4eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.512 238945 DEBUG nova.compute.manager [req-e972aac8-b56d-46b7-aef6-3f69add3d4c2 req-f287dca2-3df6-4505-8dd9-86f7eb5dd4eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] No waiting events found dispatching network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.512 238945 WARNING nova.compute.manager [req-e972aac8-b56d-46b7-aef6-3f69add3d4c2 req-f287dca2-3df6-4505-8dd9-86f7eb5dd4eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received unexpected event network-vif-plugged-e2362203-8b31-4317-a96a-2089dfc590a2 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.518 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.519 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.519 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.519 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.519 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.521 238945 INFO nova.compute.manager [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Terminating instance#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.522 238945 DEBUG nova.compute.manager [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.537 238945 DEBUG nova.network.neutron [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.584 238945 DEBUG nova.network.neutron [-] [instance: de740849-c0ca-4217-974b-693a30f63855] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.586 238945 INFO nova.compute.manager [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Took 2.46 seconds to deallocate network for instance.#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.588 238945 INFO nova.compute.manager [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Took 12.93 seconds to build instance.#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.601 238945 INFO nova.compute.manager [-] [instance: de740849-c0ca-4217-974b-693a30f63855] Took 2.46 seconds to deallocate network for instance.#033[00m
Jan 27 08:53:24 np0005597378 kernel: tap64542a94-92 (unregistering): left promiscuous mode
Jan 27 08:53:24 np0005597378 NetworkManager[48904]: <info>  [1769522004.6279] device (tap64542a94-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.643 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:24 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:24Z|00572|binding|INFO|Releasing lport 64542a94-92c4-4d0e-94ac-b711946dae41 from this chassis (sb_readonly=0)
Jan 27 08:53:24 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:24Z|00573|binding|INFO|Setting lport 64542a94-92c4-4d0e-94ac-b711946dae41 down in Southbound
Jan 27 08:53:24 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:24Z|00574|binding|INFO|Removing iface tap64542a94-92 ovn-installed in OVS
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.647 238945 DEBUG oslo_concurrency.lockutils [None req-7746e849-ea2b-4488-97bd-8559bfd72643 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.648 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.654 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.654 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.654 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:2b:0e 10.100.0.5'], port_security=['fa:16:3e:03:2b:0e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ec8ab83-8d5f-4296-bc39-61f269193e6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=64542a94-92c4-4d0e-94ac-b711946dae41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.656 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 64542a94-92c4-4d0e-94ac-b711946dae41 in datapath 13754bbc-8f22-4885-aa27-198718585636 unbound from our chassis#033[00m
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.657 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.662 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.668 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.681 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a67dd412-4c45-4920-bee5-c2fa6740f49e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:24 np0005597378 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Jan 27 08:53:24 np0005597378 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003f.scope: Consumed 5.563s CPU time.
Jan 27 08:53:24 np0005597378 systemd-machined[207425]: Machine qemu-71-instance-0000003f terminated.
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.720 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ddcea254-533b-4422-89c7-a2e512e7459c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.726 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[717ae6e3-c0bc-424d-8d50-46787f048f04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.750 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.764 238945 DEBUG oslo_concurrency.processutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.765 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e20903-e11c-4e8a-ae3f-5ab027e12110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.793 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[02f20609-72e2-4cb8-8025-7a6156fa7e61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 19783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295715, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.816 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a4d20c6e-da78-4e46-857f-419940b88a13]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295717, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295717, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.818 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.820 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.822 238945 INFO nova.virt.libvirt.driver [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Instance destroyed successfully.#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.823 238945 DEBUG nova.objects.instance [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'resources' on Instance uuid 3ec8ab83-8d5f-4296-bc39-61f269193e6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.824 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.825 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.825 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:24.826 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.852 238945 DEBUG nova.virt.libvirt.vif [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:53:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-739165555',display_name='tempest-ServersTestJSON-server-739165555',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-739165555',id=63,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-0bjr1zu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:19Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=3ec8ab83-8d5f-4296-bc39-61f269193e6a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.852 238945 DEBUG nova.network.os_vif_util [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "64542a94-92c4-4d0e-94ac-b711946dae41", "address": "fa:16:3e:03:2b:0e", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64542a94-92", "ovs_interfaceid": "64542a94-92c4-4d0e-94ac-b711946dae41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.853 238945 DEBUG nova.network.os_vif_util [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.854 238945 DEBUG os_vif [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.857 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64542a94-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.860 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:53:24 np0005597378 nova_compute[238941]: 2026-01-27 13:53:24.866 238945 INFO os_vif [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:2b:0e,bridge_name='br-int',has_traffic_filtering=True,id=64542a94-92c4-4d0e-94ac-b711946dae41,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64542a94-92')#033[00m
Jan 27 08:53:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340226665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.401 238945 DEBUG oslo_concurrency.processutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.406 238945 DEBUG nova.compute.provider_tree [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.423 238945 DEBUG nova.scheduler.client.report [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.441 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.443 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.481 238945 INFO nova.scheduler.client.report [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Deleted allocations for instance 762ca3c0-2865-41c8-89fc-445573c554c9#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.547 238945 DEBUG oslo_concurrency.lockutils [None req-4a8f856d-80aa-4710-8932-ef9524a4dc6c f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "762ca3c0-2865-41c8-89fc-445573c554c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.566 238945 DEBUG oslo_concurrency.processutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.900 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.901 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.901 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.901 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.901 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.903 238945 INFO nova.compute.manager [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Terminating instance#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.904 238945 DEBUG nova.compute.manager [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.986 238945 DEBUG nova.compute.manager [req-b55635f8-a883-45e7-b4ab-53146e00e5e6 req-84ec659b-22a3-4f9a-9203-4fb7666d46c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.986 238945 DEBUG oslo_concurrency.lockutils [req-b55635f8-a883-45e7-b4ab-53146e00e5e6 req-84ec659b-22a3-4f9a-9203-4fb7666d46c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.987 238945 DEBUG oslo_concurrency.lockutils [req-b55635f8-a883-45e7-b4ab-53146e00e5e6 req-84ec659b-22a3-4f9a-9203-4fb7666d46c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.987 238945 DEBUG oslo_concurrency.lockutils [req-b55635f8-a883-45e7-b4ab-53146e00e5e6 req-84ec659b-22a3-4f9a-9203-4fb7666d46c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.987 238945 DEBUG nova.compute.manager [req-b55635f8-a883-45e7-b4ab-53146e00e5e6 req-84ec659b-22a3-4f9a-9203-4fb7666d46c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] No waiting events found dispatching network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:25 np0005597378 nova_compute[238941]: 2026-01-27 13:53:25.987 238945 WARNING nova.compute.manager [req-b55635f8-a883-45e7-b4ab-53146e00e5e6 req-84ec659b-22a3-4f9a-9203-4fb7666d46c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received unexpected event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:53:26 np0005597378 kernel: tap1627368c-ee (unregistering): left promiscuous mode
Jan 27 08:53:26 np0005597378 NetworkManager[48904]: <info>  [1769522006.0894] device (tap1627368c-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:53:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:26Z|00575|binding|INFO|Releasing lport 1627368c-ee5c-442e-9ad3-4b7789309df9 from this chassis (sb_readonly=0)
Jan 27 08:53:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:26Z|00576|binding|INFO|Setting lport 1627368c-ee5c-442e-9ad3-4b7789309df9 down in Southbound
Jan 27 08:53:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:26Z|00577|binding|INFO|Removing iface tap1627368c-ee ovn-installed in OVS
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.109 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:40:ba 10.100.0.8'], port_security=['fa:16:3e:78:40:ba 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '347f7116-1ca3-4a98-be15-0e50d25961d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f4bad8f405b4cdcbb174936852069ed', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7d73eac7-7a8f-4239-9741-7eab87827cb0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c948a0a0-103f-4031-923b-65a0b07643ba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1627368c-ee5c-442e-9ad3-4b7789309df9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.111 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1627368c-ee5c-442e-9ad3-4b7789309df9 in datapath e120265a-8c77-490f-9cb1-b93e6252e0c3 unbound from our chassis#033[00m
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.112 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e120265a-8c77-490f-9cb1-b93e6252e0c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.113 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fe89df57-3642-4cd9-8536-5f7ff34002ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.113 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3 namespace which is not needed anymore#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:26 np0005597378 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000040.scope: Deactivated successfully.
Jan 27 08:53:26 np0005597378 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000040.scope: Consumed 2.632s CPU time.
Jan 27 08:53:26 np0005597378 systemd-machined[207425]: Machine qemu-72-instance-00000040 terminated.
Jan 27 08:53:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510926290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.217 238945 DEBUG oslo_concurrency.processutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.224 238945 DEBUG nova.compute.provider_tree [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.238 238945 DEBUG nova.scheduler.client.report [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.274 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.297 238945 INFO nova.scheduler.client.report [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Deleted allocations for instance de740849-c0ca-4217-974b-693a30f63855#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.335 238945 INFO nova.virt.libvirt.driver [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Instance destroyed successfully.#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.336 238945 DEBUG nova.objects.instance [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lazy-loading 'resources' on Instance uuid 347f7116-1ca3-4a98-be15-0e50d25961d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.355 238945 DEBUG nova.virt.libvirt.vif [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:53:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1505918091',display_name='tempest-ServerAddressesTestJSON-server-1505918091',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1505918091',id=64,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3f4bad8f405b4cdcbb174936852069ed',ramdisk_id='',reservation_id='r-de000eqq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-43382846',owner_user_name='tempest-ServerAddressesTestJSON-43382846-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:24Z,user_data=None,user_id='c69d0d1031754a3ea963316c805e1662',uuid=347f7116-1ca3-4a98-be15-0e50d25961d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.356 238945 DEBUG nova.network.os_vif_util [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Converting VIF {"id": "1627368c-ee5c-442e-9ad3-4b7789309df9", "address": "fa:16:3e:78:40:ba", "network": {"id": "e120265a-8c77-490f-9cb1-b93e6252e0c3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1061329593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f4bad8f405b4cdcbb174936852069ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1627368c-ee", "ovs_interfaceid": "1627368c-ee5c-442e-9ad3-4b7789309df9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.357 238945 DEBUG nova.network.os_vif_util [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.357 238945 DEBUG os_vif [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.359 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1627368c-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.360 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.362 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.364 238945 INFO os_vif [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:40:ba,bridge_name='br-int',has_traffic_filtering=True,id=1627368c-ee5c-442e-9ad3-4b7789309df9,network=Network(e120265a-8c77-490f-9cb1-b93e6252e0c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1627368c-ee')#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.379 238945 DEBUG oslo_concurrency.lockutils [None req-5728018c-c67b-47cd-b18f-c7437e5f7cd2 f9e663079ce44f94a4dbe6125b395ce1 d333430b14814ea487cbd2af414c350f - - default default] Lock "de740849-c0ca-4217-974b-693a30f63855" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 214 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 2.4 MiB/s wr, 372 op/s
Jan 27 08:53:26 np0005597378 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [NOTICE]   (295685) : haproxy version is 2.8.14-c23fe91
Jan 27 08:53:26 np0005597378 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [NOTICE]   (295685) : path to executable is /usr/sbin/haproxy
Jan 27 08:53:26 np0005597378 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [WARNING]  (295685) : Exiting Master process...
Jan 27 08:53:26 np0005597378 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [ALERT]    (295685) : Current worker (295687) exited with code 143 (Terminated)
Jan 27 08:53:26 np0005597378 neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3[295681]: [WARNING]  (295685) : All workers exited. Exiting... (0)
Jan 27 08:53:26 np0005597378 systemd[1]: libpod-b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91.scope: Deactivated successfully.
Jan 27 08:53:26 np0005597378 podman[295802]: 2026-01-27 13:53:26.5220939 +0000 UTC m=+0.322514252 container died b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:53:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:26Z|00578|binding|INFO|Releasing lport 656dc8af-0c2c-427c-9f50-66ca9dddd3cd from this chassis (sb_readonly=0)
Jan 27 08:53:26 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:26Z|00579|binding|INFO|Releasing lport 1a4e395a-c1da-494c-a8bb-160c38bbc6e6 from this chassis (sb_readonly=0)
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.671 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Received event network-vif-deleted-4ef8a620-c221-4b3f-ba32-3ab574e341af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.671 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: de740849-c0ca-4217-974b-693a30f63855] Received event network-vif-deleted-e2362203-8b31-4317-a96a-2089dfc590a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.671 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-vif-unplugged-64542a94-92c4-4d0e-94ac-b711946dae41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.672 238945 DEBUG oslo_concurrency.lockutils [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.672 238945 DEBUG oslo_concurrency.lockutils [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.672 238945 DEBUG oslo_concurrency.lockutils [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.672 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] No waiting events found dispatching network-vif-unplugged-64542a94-92c4-4d0e-94ac-b711946dae41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.672 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-vif-unplugged-64542a94-92c4-4d0e-94ac-b711946dae41 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.673 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.673 238945 DEBUG oslo_concurrency.lockutils [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.673 238945 DEBUG oslo_concurrency.lockutils [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.673 238945 DEBUG oslo_concurrency.lockutils [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.673 238945 DEBUG nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] No waiting events found dispatching network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.674 238945 WARNING nova.compute.manager [req-7fece9af-7ef2-40ff-9e1c-502ca706a826 req-83856ebb-6ad5-4314-9e16-e1ee3e9b2dbd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received unexpected event network-vif-plugged-64542a94-92c4-4d0e-94ac-b711946dae41 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.726 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91-userdata-shm.mount: Deactivated successfully.
Jan 27 08:53:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay-85cfd289a5e37cbb715378c71af1ef650710fa5985fe43f52cb74ac6fbc72b45-merged.mount: Deactivated successfully.
Jan 27 08:53:26 np0005597378 podman[295802]: 2026-01-27 13:53:26.799396517 +0000 UTC m=+0.599816869 container cleanup b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 08:53:26 np0005597378 systemd[1]: libpod-conmon-b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91.scope: Deactivated successfully.
Jan 27 08:53:26 np0005597378 podman[295864]: 2026-01-27 13:53:26.886829475 +0000 UTC m=+0.060966488 container remove b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.893 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[14a0a655-9027-489b-8368-100ca754a398]: (4, ('Tue Jan 27 01:53:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3 (b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91)\nb49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91\nTue Jan 27 01:53:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3 (b49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91)\nb49f6b955a6821dfcda46fa24199f6e776a665279218e2155e7df7f931fcdf91\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.895 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4118b726-869c-44de-ab8b-ebde00fb46d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.896 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape120265a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:26 np0005597378 kernel: tape120265a-80: left promiscuous mode
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.917 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[13892dc3-426b-48fd-803a-3cba99c79403]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.931 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f7b5b6-0c1d-446a-a98c-ecb6773db054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.933 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e47a8a5-b1f3-4b56-8cbe-c183f00cb2fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.934 238945 INFO nova.virt.libvirt.driver [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Deleting instance files /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a_del#033[00m
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.935 238945 INFO nova.virt.libvirt.driver [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Deletion of /var/lib/nova/instances/3ec8ab83-8d5f-4296-bc39-61f269193e6a_del complete#033[00m
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.950 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6d0aa540-390e-4602-8546-6acefc51f8ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464242, 'reachable_time': 22013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295878, 'error': None, 'target': 'ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.952 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e120265a-8c77-490f-9cb1-b93e6252e0c3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:53:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:26.953 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8eb565-9b9e-46c5-8cba-fbaff8679e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:26 np0005597378 systemd[1]: run-netns-ovnmeta\x2de120265a\x2d8c77\x2d490f\x2d9cb1\x2db93e6252e0c3.mount: Deactivated successfully.
Jan 27 08:53:26 np0005597378 nova_compute[238941]: 2026-01-27 13:53:26.982 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.001 238945 INFO nova.compute.manager [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Took 2.48 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.002 238945 DEBUG oslo.service.loopingcall [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.002 238945 DEBUG nova.compute.manager [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.002 238945 DEBUG nova.network.neutron [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.083 238945 INFO nova.virt.libvirt.driver [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Deleting instance files /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3_del#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.084 238945 INFO nova.virt.libvirt.driver [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Deletion of /var/lib/nova/instances/347f7116-1ca3-4a98-be15-0e50d25961d3_del complete#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.148 238945 INFO nova.compute.manager [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Took 1.24 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.149 238945 DEBUG oslo.service.loopingcall [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.150 238945 DEBUG nova.compute.manager [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.150 238945 DEBUG nova.network.neutron [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:53:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014625526991348182 of space, bias 1.0, pg target 0.43876580974044543 quantized to 32 (current 32)
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006679662652427722 of space, bias 1.0, pg target 0.20038987957283166 quantized to 32 (current 32)
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.150873820980584e-06 of space, bias 4.0, pg target 0.0013810485851767007 quantized to 16 (current 16)
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:53:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.672 238945 DEBUG nova.network.neutron [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.675 238945 DEBUG nova.network.neutron [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.689 238945 INFO nova.compute.manager [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Took 0.69 seconds to deallocate network for instance.#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.692 238945 INFO nova.compute.manager [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Took 0.54 seconds to deallocate network for instance.#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.758 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.758 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.763 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:27 np0005597378 nova_compute[238941]: 2026-01-27 13:53:27.840 238945 DEBUG oslo_concurrency.processutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.127 238945 DEBUG nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received event network-vif-unplugged-1627368c-ee5c-442e-9ad3-4b7789309df9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.127 238945 DEBUG oslo_concurrency.lockutils [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.127 238945 DEBUG oslo_concurrency.lockutils [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.128 238945 DEBUG oslo_concurrency.lockutils [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.128 238945 DEBUG nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] No waiting events found dispatching network-vif-unplugged-1627368c-ee5c-442e-9ad3-4b7789309df9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.128 238945 WARNING nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received unexpected event network-vif-unplugged-1627368c-ee5c-442e-9ad3-4b7789309df9 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.128 238945 DEBUG nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.129 238945 DEBUG oslo_concurrency.lockutils [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.129 238945 DEBUG oslo_concurrency.lockutils [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.129 238945 DEBUG oslo_concurrency.lockutils [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.129 238945 DEBUG nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] No waiting events found dispatching network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.129 238945 WARNING nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received unexpected event network-vif-plugged-1627368c-ee5c-442e-9ad3-4b7789309df9 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.130 238945 DEBUG nova.compute.manager [req-d4cca52f-853a-47c0-a573-bcb8c12fd3a6 req-8bb16d24-143b-44b0-b391-582e3d0161b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Received event network-vif-deleted-1627368c-ee5c-442e-9ad3-4b7789309df9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 188 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 466 KiB/s wr, 336 op/s
Jan 27 08:53:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93224568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.425 238945 DEBUG oslo_concurrency.processutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.431 238945 DEBUG nova.compute.provider_tree [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.457 238945 DEBUG nova.scheduler.client.report [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.475 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.478 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.509 238945 INFO nova.scheduler.client.report [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Deleted allocations for instance 3ec8ab83-8d5f-4296-bc39-61f269193e6a#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.548 238945 DEBUG oslo_concurrency.processutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:28 np0005597378 nova_compute[238941]: 2026-01-27 13:53:28.585 238945 DEBUG oslo_concurrency.lockutils [None req-3445488b-ed65-47cd-8792-aa71c449647d e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "3ec8ab83-8d5f-4296-bc39-61f269193e6a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1845406754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:29 np0005597378 nova_compute[238941]: 2026-01-27 13:53:29.095 238945 DEBUG oslo_concurrency.processutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:29 np0005597378 nova_compute[238941]: 2026-01-27 13:53:29.102 238945 DEBUG nova.compute.provider_tree [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:53:29 np0005597378 nova_compute[238941]: 2026-01-27 13:53:29.112 238945 DEBUG nova.compute.manager [req-de28b8ac-8c78-41cf-9ef0-27354c5d6451 req-fc78a2e1-7ca3-471b-a8be-1ca10d68673b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Received event network-vif-deleted-64542a94-92c4-4d0e-94ac-b711946dae41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:29 np0005597378 nova_compute[238941]: 2026-01-27 13:53:29.124 238945 DEBUG nova.scheduler.client.report [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:53:29 np0005597378 nova_compute[238941]: 2026-01-27 13:53:29.151 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:29 np0005597378 nova_compute[238941]: 2026-01-27 13:53:29.179 238945 INFO nova.scheduler.client.report [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Deleted allocations for instance 347f7116-1ca3-4a98-be15-0e50d25961d3#033[00m
Jan 27 08:53:29 np0005597378 nova_compute[238941]: 2026-01-27 13:53:29.249 238945 DEBUG oslo_concurrency.lockutils [None req-574d5217-f4c3-484f-a695-ef9fdd41f2e1 c69d0d1031754a3ea963316c805e1662 3f4bad8f405b4cdcbb174936852069ed - - default default] Lock "347f7116-1ca3-4a98-be15-0e50d25961d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 121 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 32 KiB/s wr, 353 op/s
Jan 27 08:53:30 np0005597378 nova_compute[238941]: 2026-01-27 13:53:30.567 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521995.5655468, c03e1ba1-3e7e-4cb8-847e-07c85da05427 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:30 np0005597378 nova_compute[238941]: 2026-01-27 13:53:30.567 238945 INFO nova.compute.manager [-] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:53:30 np0005597378 nova_compute[238941]: 2026-01-27 13:53:30.603 238945 DEBUG nova.compute.manager [None req-e3104687-984c-496b-907b-14b3a6a87c88 - - - - - -] [instance: c03e1ba1-3e7e-4cb8-847e-07c85da05427] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:31 np0005597378 nova_compute[238941]: 2026-01-27 13:53:31.362 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:31 np0005597378 nova_compute[238941]: 2026-01-27 13:53:31.984 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 121 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 241 op/s
Jan 27 08:53:33 np0005597378 nova_compute[238941]: 2026-01-27 13:53:33.220 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:33 np0005597378 nova_compute[238941]: 2026-01-27 13:53:33.221 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:33 np0005597378 nova_compute[238941]: 2026-01-27 13:53:33.314 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:53:33 np0005597378 nova_compute[238941]: 2026-01-27 13:53:33.425 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:33 np0005597378 nova_compute[238941]: 2026-01-27 13:53:33.426 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:33 np0005597378 nova_compute[238941]: 2026-01-27 13:53:33.434 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:53:33 np0005597378 nova_compute[238941]: 2026-01-27 13:53:33.435 238945 INFO nova.compute.claims [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:53:33 np0005597378 nova_compute[238941]: 2026-01-27 13:53:33.536 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2311873693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.099 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.105 238945 DEBUG nova.compute.provider_tree [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.123 238945 DEBUG nova.scheduler.client.report [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.150 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.151 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.196 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.196 238945 DEBUG nova.network.neutron [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.220 238945 INFO nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.265 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.365 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.367 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.367 238945 INFO nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Creating image(s)#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.388 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 121 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 241 op/s
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.412 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.435 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.438 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.510 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.511 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.512 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.512 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.534 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.537 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.703 238945 DEBUG nova.policy [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5ced810fd6141b292a2237ebe49cfc9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c910283aa95c4275954bee4904b21d1e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.897 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769521999.665489, 762ca3c0-2865-41c8-89fc-445573c554c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.898 238945 INFO nova.compute.manager [-] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.923 238945 DEBUG nova.compute.manager [None req-8ac4ed61-1866-451b-8526-195b4096564f - - - - - -] [instance: 762ca3c0-2865-41c8-89fc-445573c554c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.927 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:34 np0005597378 nova_compute[238941]: 2026-01-27 13:53:34.983 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] resizing rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:53:35 np0005597378 nova_compute[238941]: 2026-01-27 13:53:35.090 238945 DEBUG nova.objects.instance [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'migration_context' on Instance uuid afe7a605-0545-4e95-9e9a-4938d17f4a8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:35 np0005597378 nova_compute[238941]: 2026-01-27 13:53:35.111 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:53:35 np0005597378 nova_compute[238941]: 2026-01-27 13:53:35.112 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Ensure instance console log exists: /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:53:35 np0005597378 nova_compute[238941]: 2026-01-27 13:53:35.113 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:35 np0005597378 nova_compute[238941]: 2026-01-27 13:53:35.113 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:35 np0005597378 nova_compute[238941]: 2026-01-27 13:53:35.113 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:35 np0005597378 nova_compute[238941]: 2026-01-27 13:53:35.202 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522000.200662, de740849-c0ca-4217-974b-693a30f63855 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:35 np0005597378 nova_compute[238941]: 2026-01-27 13:53:35.204 238945 INFO nova.compute.manager [-] [instance: de740849-c0ca-4217-974b-693a30f63855] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:53:35 np0005597378 nova_compute[238941]: 2026-01-27 13:53:35.222 238945 DEBUG nova.compute.manager [None req-e238fa5a-376e-409e-b487-96e5431e19e6 - - - - - -] [instance: de740849-c0ca-4217-974b-693a30f63855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:35 np0005597378 nova_compute[238941]: 2026-01-27 13:53:35.317 238945 DEBUG nova.network.neutron [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Successfully created port: 1c387633-89d6-40ae-a2cf-102af6198381 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:53:36 np0005597378 nova_compute[238941]: 2026-01-27 13:53:36.236 238945 DEBUG nova.network.neutron [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Successfully updated port: 1c387633-89d6-40ae-a2cf-102af6198381 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:53:36 np0005597378 nova_compute[238941]: 2026-01-27 13:53:36.255 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "refresh_cache-afe7a605-0545-4e95-9e9a-4938d17f4a8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:53:36 np0005597378 nova_compute[238941]: 2026-01-27 13:53:36.255 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquired lock "refresh_cache-afe7a605-0545-4e95-9e9a-4938d17f4a8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:53:36 np0005597378 nova_compute[238941]: 2026-01-27 13:53:36.255 238945 DEBUG nova.network.neutron [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:53:36 np0005597378 nova_compute[238941]: 2026-01-27 13:53:36.338 238945 DEBUG nova.compute.manager [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Received event network-changed-1c387633-89d6-40ae-a2cf-102af6198381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:36 np0005597378 nova_compute[238941]: 2026-01-27 13:53:36.338 238945 DEBUG nova.compute.manager [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Refreshing instance network info cache due to event network-changed-1c387633-89d6-40ae-a2cf-102af6198381. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:53:36 np0005597378 nova_compute[238941]: 2026-01-27 13:53:36.339 238945 DEBUG oslo_concurrency.lockutils [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-afe7a605-0545-4e95-9e9a-4938d17f4a8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:53:36 np0005597378 nova_compute[238941]: 2026-01-27 13:53:36.366 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 140 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 648 KiB/s wr, 153 op/s
Jan 27 08:53:36 np0005597378 nova_compute[238941]: 2026-01-27 13:53:36.444 238945 DEBUG nova.network.neutron [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:53:36 np0005597378 nova_compute[238941]: 2026-01-27 13:53:36.985 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.554 238945 DEBUG nova.network.neutron [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Updating instance_info_cache with network_info: [{"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.576 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Releasing lock "refresh_cache-afe7a605-0545-4e95-9e9a-4938d17f4a8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.576 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Instance network_info: |[{"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.577 238945 DEBUG oslo_concurrency.lockutils [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-afe7a605-0545-4e95-9e9a-4938d17f4a8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.577 238945 DEBUG nova.network.neutron [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Refreshing network info cache for port 1c387633-89d6-40ae-a2cf-102af6198381 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.580 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Start _get_guest_xml network_info=[{"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.583 238945 WARNING nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.588 238945 DEBUG nova.virt.libvirt.host [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.589 238945 DEBUG nova.virt.libvirt.host [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.592 238945 DEBUG nova.virt.libvirt.host [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.593 238945 DEBUG nova.virt.libvirt.host [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.593 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.594 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.594 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.594 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.595 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.595 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.595 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.596 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.596 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.596 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.596 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.597 238945 DEBUG nova.virt.hardware [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:53:37 np0005597378 nova_compute[238941]: 2026-01-27 13:53:37.599 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:53:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2535241180' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.150 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.172 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.178 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 148 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.0 MiB/s wr, 106 op/s
Jan 27 08:53:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:53:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/373586063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.772 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.774 238945 DEBUG nova.virt.libvirt.vif [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1612128163',display_name='tempest-ServersTestJSON-server-1612128163',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1612128163',id=65,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGe8ucoZvXk9LyMLWsJjESvrH9cQbWgf6qGhTQ+ehjY9kwB90isGVG/mBPiE+MhJP0Cg2/AEEyPfe4xUyx5m/URrPDagK21Ume1U+6jptE1fu2xZideFG+s8qqN0nVYPZQ==',key_name='tempest-key-9636339',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-lswldz3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:34Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=afe7a605-0545-4e95-9e9a-4938d17f4a8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.774 238945 DEBUG nova.network.os_vif_util [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.776 238945 DEBUG nova.network.os_vif_util [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.777 238945 DEBUG nova.objects.instance [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'pci_devices' on Instance uuid afe7a605-0545-4e95-9e9a-4938d17f4a8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.804 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  <uuid>afe7a605-0545-4e95-9e9a-4938d17f4a8c</uuid>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  <name>instance-00000041</name>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersTestJSON-server-1612128163</nova:name>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:53:37</nova:creationTime>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:        <nova:user uuid="e5ced810fd6141b292a2237ebe49cfc9">tempest-ServersTestJSON-467936823-project-member</nova:user>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:        <nova:project uuid="c910283aa95c4275954bee4904b21d1e">tempest-ServersTestJSON-467936823</nova:project>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:        <nova:port uuid="1c387633-89d6-40ae-a2cf-102af6198381">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <entry name="serial">afe7a605-0545-4e95-9e9a-4938d17f4a8c</entry>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <entry name="uuid">afe7a605-0545-4e95-9e9a-4938d17f4a8c</entry>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk.config">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:05:c2:a7"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <target dev="tap1c387633-89"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/console.log" append="off"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:53:38 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:53:38 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:53:38 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:53:38 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.807 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Preparing to wait for external event network-vif-plugged-1c387633-89d6-40ae-a2cf-102af6198381 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.807 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.807 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.808 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.808 238945 DEBUG nova.virt.libvirt.vif [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1612128163',display_name='tempest-ServersTestJSON-server-1612128163',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1612128163',id=65,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGe8ucoZvXk9LyMLWsJjESvrH9cQbWgf6qGhTQ+ehjY9kwB90isGVG/mBPiE+MhJP0Cg2/AEEyPfe4xUyx5m/URrPDagK21Ume1U+6jptE1fu2xZideFG+s8qqN0nVYPZQ==',key_name='tempest-key-9636339',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-lswldz3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:34Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=afe7a605-0545-4e95-9e9a-4938d17f4a8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.809 238945 DEBUG nova.network.os_vif_util [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.809 238945 DEBUG nova.network.os_vif_util [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.810 238945 DEBUG os_vif [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.811 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.811 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.815 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.815 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c387633-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.815 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c387633-89, col_values=(('external_ids', {'iface-id': '1c387633-89d6-40ae-a2cf-102af6198381', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:c2:a7', 'vm-uuid': 'afe7a605-0545-4e95-9e9a-4938d17f4a8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:38 np0005597378 NetworkManager[48904]: <info>  [1769522018.8186] manager: (tap1c387633-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.820 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:38 np0005597378 nova_compute[238941]: 2026-01-27 13:53:38.824 238945 INFO os_vif [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89')#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.024 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.025 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.025 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No VIF found with MAC fa:16:3e:05:c2:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.025 238945 INFO nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Using config drive#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.051 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.079 238945 DEBUG nova.network.neutron [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Updated VIF entry in instance network info cache for port 1c387633-89d6-40ae-a2cf-102af6198381. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.079 238945 DEBUG nova.network.neutron [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Updating instance_info_cache with network_info: [{"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.104 238945 DEBUG oslo_concurrency.lockutils [req-a2474499-2fee-433c-9b4f-a5ec8c50fdab req-cc69941f-700f-47ef-98ff-42bd7d433778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-afe7a605-0545-4e95-9e9a-4938d17f4a8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.587 238945 INFO nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Creating config drive at /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/disk.config#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.592 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpwavrqv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.734 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpwavrqv" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.763 238945 DEBUG nova.storage.rbd_utils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.767 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/disk.config afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.800 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522004.7626312, 3ec8ab83-8d5f-4296-bc39-61f269193e6a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.801 238945 INFO nova.compute.manager [-] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.819 238945 DEBUG nova.compute.manager [None req-d1e6c78c-220d-4bed-b124-df2bf8e889f2 - - - - - -] [instance: 3ec8ab83-8d5f-4296-bc39-61f269193e6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.914 238945 DEBUG oslo_concurrency.processutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/disk.config afe7a605-0545-4e95-9e9a-4938d17f4a8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.915 238945 INFO nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Deleting local config drive /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c/disk.config because it was imported into RBD.#033[00m
Jan 27 08:53:39 np0005597378 kernel: tap1c387633-89: entered promiscuous mode
Jan 27 08:53:39 np0005597378 NetworkManager[48904]: <info>  [1769522019.9679] manager: (tap1c387633-89): new Tun device (/org/freedesktop/NetworkManager/Devices/256)
Jan 27 08:53:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:39Z|00580|binding|INFO|Claiming lport 1c387633-89d6-40ae-a2cf-102af6198381 for this chassis.
Jan 27 08:53:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:39Z|00581|binding|INFO|1c387633-89d6-40ae-a2cf-102af6198381: Claiming fa:16:3e:05:c2:a7 10.100.0.6
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:39.977 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:c2:a7 10.100.0.6'], port_security=['fa:16:3e:05:c2:a7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'afe7a605-0545-4e95-9e9a-4938d17f4a8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1c387633-89d6-40ae-a2cf-102af6198381) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:39.979 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1c387633-89d6-40ae-a2cf-102af6198381 in datapath 13754bbc-8f22-4885-aa27-198718585636 bound to our chassis#033[00m
Jan 27 08:53:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:39.981 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636#033[00m
Jan 27 08:53:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:39Z|00582|binding|INFO|Setting lport 1c387633-89d6-40ae-a2cf-102af6198381 ovn-installed in OVS
Jan 27 08:53:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:39Z|00583|binding|INFO|Setting lport 1c387633-89d6-40ae-a2cf-102af6198381 up in Southbound
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:39 np0005597378 nova_compute[238941]: 2026-01-27 13:53:39.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:39.997 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[645d05ba-7448-4ee9-99b1-c0b525852903]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:40 np0005597378 systemd-udevd[296248]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:53:40 np0005597378 systemd-machined[207425]: New machine qemu-73-instance-00000041.
Jan 27 08:53:40 np0005597378 systemd[1]: Started Virtual Machine qemu-73-instance-00000041.
Jan 27 08:53:40 np0005597378 NetworkManager[48904]: <info>  [1769522020.0210] device (tap1c387633-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:53:40 np0005597378 NetworkManager[48904]: <info>  [1769522020.0220] device (tap1c387633-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:53:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.030 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[82be7c23-c032-4af0-a9ef-4d98239a5140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.033 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[187c6ca3-93f7-4b1b-982d-0fdeffd42824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.067 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5af384-1230-447a-94a4-59332613083c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.087 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac030320-fc97-4d37-8cf3-778cbca2f228]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 19783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296260, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.104 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58aebdb4-a516-42a5-ac1a-5071058c09ba]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296262, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296262, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.106 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:40 np0005597378 nova_compute[238941]: 2026-01-27 13:53:40.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:40 np0005597378 nova_compute[238941]: 2026-01-27 13:53:40.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:40.109 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 1016 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Jan 27 08:53:40 np0005597378 nova_compute[238941]: 2026-01-27 13:53:40.554 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522020.5539052, afe7a605-0545-4e95-9e9a-4938d17f4a8c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:40 np0005597378 nova_compute[238941]: 2026-01-27 13:53:40.554 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] VM Started (Lifecycle Event)#033[00m
Jan 27 08:53:40 np0005597378 nova_compute[238941]: 2026-01-27 13:53:40.627 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:40 np0005597378 nova_compute[238941]: 2026-01-27 13:53:40.632 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522020.5547209, afe7a605-0545-4e95-9e9a-4938d17f4a8c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:40 np0005597378 nova_compute[238941]: 2026-01-27 13:53:40.632 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:53:40 np0005597378 nova_compute[238941]: 2026-01-27 13:53:40.671 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:40 np0005597378 nova_compute[238941]: 2026-01-27 13:53:40.675 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:53:40 np0005597378 nova_compute[238941]: 2026-01-27 13:53:40.862 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:53:41 np0005597378 nova_compute[238941]: 2026-01-27 13:53:41.335 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522006.3336933, 347f7116-1ca3-4a98-be15-0e50d25961d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:41 np0005597378 nova_compute[238941]: 2026-01-27 13:53:41.335 238945 INFO nova.compute.manager [-] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:53:41 np0005597378 nova_compute[238941]: 2026-01-27 13:53:41.363 238945 DEBUG nova.compute.manager [None req-c3aa1db9-86a2-4a7d-8e1b-06b58ed36de3 - - - - - -] [instance: 347f7116-1ca3-4a98-be15-0e50d25961d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:41 np0005597378 nova_compute[238941]: 2026-01-27 13:53:41.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 08:53:43 np0005597378 nova_compute[238941]: 2026-01-27 13:53:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:53:43 np0005597378 nova_compute[238941]: 2026-01-27 13:53:43.819 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.115 238945 DEBUG nova.compute.manager [req-7c797441-3a5c-485d-9977-3012283593c5 req-7ec829aa-087f-47b7-b48c-69367d6bad58 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Received event network-vif-plugged-1c387633-89d6-40ae-a2cf-102af6198381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.116 238945 DEBUG oslo_concurrency.lockutils [req-7c797441-3a5c-485d-9977-3012283593c5 req-7ec829aa-087f-47b7-b48c-69367d6bad58 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.117 238945 DEBUG oslo_concurrency.lockutils [req-7c797441-3a5c-485d-9977-3012283593c5 req-7ec829aa-087f-47b7-b48c-69367d6bad58 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.117 238945 DEBUG oslo_concurrency.lockutils [req-7c797441-3a5c-485d-9977-3012283593c5 req-7ec829aa-087f-47b7-b48c-69367d6bad58 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.117 238945 DEBUG nova.compute.manager [req-7c797441-3a5c-485d-9977-3012283593c5 req-7ec829aa-087f-47b7-b48c-69367d6bad58 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Processing event network-vif-plugged-1c387633-89d6-40ae-a2cf-102af6198381 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.118 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.122 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522026.1222925, afe7a605-0545-4e95-9e9a-4938d17f4a8c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.123 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.125 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.128 238945 INFO nova.virt.libvirt.driver [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Instance spawned successfully.#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.129 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.154 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.160 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.161 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.162 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.162 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.162 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.163 238945 DEBUG nova.virt.libvirt.driver [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.170 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.208 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.235 238945 INFO nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Took 11.87 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.235 238945 DEBUG nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.298 238945 INFO nova.compute.manager [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Took 12.89 seconds to build instance.#033[00m
Jan 27 08:53:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:46.301 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:46.302 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:46.302 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.318 238945 DEBUG oslo_concurrency.lockutils [None req-01277b95-8ea0-4026-b140-2a5c6d9e1475 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:53:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 27 08:53:46 np0005597378 nova_compute[238941]: 2026-01-27 13:53:46.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:47 np0005597378 nova_compute[238941]: 2026-01-27 13:53:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:53:47 np0005597378 nova_compute[238941]: 2026-01-27 13:53:47.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:53:47 np0005597378 nova_compute[238941]: 2026-01-27 13:53:47.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:53:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:53:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:53:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:53:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:53:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:53:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:53:47 np0005597378 nova_compute[238941]: 2026-01-27 13:53:47.896 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:53:47 np0005597378 nova_compute[238941]: 2026-01-27 13:53:47.897 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:53:47 np0005597378 nova_compute[238941]: 2026-01-27 13:53:47.897 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 08:53:47 np0005597378 nova_compute[238941]: 2026-01-27 13:53:47.897 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 511a49bc-bc87-444f-8323-95e4c88313c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.091 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.093 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.093 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.095 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.095 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.096 238945 INFO nova.compute.manager [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Terminating instance#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.097 238945 DEBUG nova.compute.manager [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:53:48 np0005597378 kernel: tap1c387633-89 (unregistering): left promiscuous mode
Jan 27 08:53:48 np0005597378 NetworkManager[48904]: <info>  [1769522028.1813] device (tap1c387633-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.189 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:48Z|00584|binding|INFO|Releasing lport 1c387633-89d6-40ae-a2cf-102af6198381 from this chassis (sb_readonly=0)
Jan 27 08:53:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:48Z|00585|binding|INFO|Setting lport 1c387633-89d6-40ae-a2cf-102af6198381 down in Southbound
Jan 27 08:53:48 np0005597378 ovn_controller[144812]: 2026-01-27T13:53:48Z|00586|binding|INFO|Removing iface tap1c387633-89 ovn-installed in OVS
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.191 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.198 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:c2:a7 10.100.0.6'], port_security=['fa:16:3e:05:c2:a7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'afe7a605-0545-4e95-9e9a-4938d17f4a8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1c387633-89d6-40ae-a2cf-102af6198381) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.199 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1c387633-89d6-40ae-a2cf-102af6198381 in datapath 13754bbc-8f22-4885-aa27-198718585636 unbound from our chassis#033[00m
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.200 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.203 238945 DEBUG nova.compute.manager [req-13bf1acb-1a80-406f-8566-7d32f848fea1 req-80770b7c-1237-435a-b06a-4fd1c27fd40d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Received event network-vif-plugged-1c387633-89d6-40ae-a2cf-102af6198381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.204 238945 DEBUG oslo_concurrency.lockutils [req-13bf1acb-1a80-406f-8566-7d32f848fea1 req-80770b7c-1237-435a-b06a-4fd1c27fd40d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.204 238945 DEBUG oslo_concurrency.lockutils [req-13bf1acb-1a80-406f-8566-7d32f848fea1 req-80770b7c-1237-435a-b06a-4fd1c27fd40d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.204 238945 DEBUG oslo_concurrency.lockutils [req-13bf1acb-1a80-406f-8566-7d32f848fea1 req-80770b7c-1237-435a-b06a-4fd1c27fd40d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.205 238945 DEBUG nova.compute.manager [req-13bf1acb-1a80-406f-8566-7d32f848fea1 req-80770b7c-1237-435a-b06a-4fd1c27fd40d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] No waiting events found dispatching network-vif-plugged-1c387633-89d6-40ae-a2cf-102af6198381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.205 238945 WARNING nova.compute.manager [req-13bf1acb-1a80-406f-8566-7d32f848fea1 req-80770b7c-1237-435a-b06a-4fd1c27fd40d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Received unexpected event network-vif-plugged-1c387633-89d6-40ae-a2cf-102af6198381 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.208 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.218 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[31e31684-1df1-4d75-968d-ec721dbcb4aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:48 np0005597378 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000041.scope: Deactivated successfully.
Jan 27 08:53:48 np0005597378 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000041.scope: Consumed 2.600s CPU time.
Jan 27 08:53:48 np0005597378 systemd-machined[207425]: Machine qemu-73-instance-00000041 terminated.
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.247 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[06817abd-7762-45aa-9fdb-16fee2194a9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.250 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3cd2c4-55ea-46ba-aa38-47a43308f84b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.277 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ad15afd3-2877-47e9-8843-1e59db9b4ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.295 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b84995-00f0-4578-8c82-2ae61093986d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 19783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296366, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.318 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72ae70a0-a426-4818-a5ba-c2f871c32f71]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296367, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296367, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.320 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.321 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.327 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.327 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.328 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.328 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:53:48.328 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.331 238945 INFO nova.virt.libvirt.driver [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Instance destroyed successfully.#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.332 238945 DEBUG nova.objects.instance [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'resources' on Instance uuid afe7a605-0545-4e95-9e9a-4938d17f4a8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.348 238945 DEBUG nova.virt.libvirt.vif [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1612128163',display_name='tempest-ServersTestJSON-server-1612128163',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1612128163',id=65,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGe8ucoZvXk9LyMLWsJjESvrH9cQbWgf6qGhTQ+ehjY9kwB90isGVG/mBPiE+MhJP0Cg2/AEEyPfe4xUyx5m/URrPDagK21Ume1U+6jptE1fu2xZideFG+s8qqN0nVYPZQ==',key_name='tempest-key-9636339',keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:53:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-lswldz3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:53:46Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=afe7a605-0545-4e95-9e9a-4938d17f4a8c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.349 238945 DEBUG nova.network.os_vif_util [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "1c387633-89d6-40ae-a2cf-102af6198381", "address": "fa:16:3e:05:c2:a7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c387633-89", "ovs_interfaceid": "1c387633-89d6-40ae-a2cf-102af6198381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.350 238945 DEBUG nova.network.os_vif_util [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.351 238945 DEBUG os_vif [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.354 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.355 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c387633-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.360 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:53:48 np0005597378 nova_compute[238941]: 2026-01-27 13:53:48.362 238945 INFO os_vif [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:c2:a7,bridge_name='br-int',has_traffic_filtering=True,id=1c387633-89d6-40ae-a2cf-102af6198381,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c387633-89')#033[00m
Jan 27 08:53:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 1.2 MiB/s wr, 40 op/s
Jan 27 08:53:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:53:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:53:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:53:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:53:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:53:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:53:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:53:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:53:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:53:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:53:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:53:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:53:49 np0005597378 nova_compute[238941]: 2026-01-27 13:53:49.068 238945 INFO nova.virt.libvirt.driver [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Deleting instance files /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c_del#033[00m
Jan 27 08:53:49 np0005597378 nova_compute[238941]: 2026-01-27 13:53:49.069 238945 INFO nova.virt.libvirt.driver [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Deletion of /var/lib/nova/instances/afe7a605-0545-4e95-9e9a-4938d17f4a8c_del complete#033[00m
Jan 27 08:53:49 np0005597378 nova_compute[238941]: 2026-01-27 13:53:49.157 238945 INFO nova.compute.manager [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Took 1.06 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:53:49 np0005597378 nova_compute[238941]: 2026-01-27 13:53:49.158 238945 DEBUG oslo.service.loopingcall [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:53:49 np0005597378 nova_compute[238941]: 2026-01-27 13:53:49.158 238945 DEBUG nova.compute.manager [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:53:49 np0005597378 nova_compute[238941]: 2026-01-27 13:53:49.158 238945 DEBUG nova.network.neutron [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:53:49 np0005597378 podman[296491]: 2026-01-27 13:53:49.284453846 +0000 UTC m=+0.046391617 container create 5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:53:49 np0005597378 systemd[1]: Started libpod-conmon-5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e.scope.
Jan 27 08:53:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:53:49 np0005597378 podman[296491]: 2026-01-27 13:53:49.261167901 +0000 UTC m=+0.023105702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:53:49 np0005597378 podman[296491]: 2026-01-27 13:53:49.373031074 +0000 UTC m=+0.134968865 container init 5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:53:49 np0005597378 podman[296491]: 2026-01-27 13:53:49.381886693 +0000 UTC m=+0.143824484 container start 5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 08:53:49 np0005597378 podman[296491]: 2026-01-27 13:53:49.386506596 +0000 UTC m=+0.148444467 container attach 5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:53:49 np0005597378 naughty_golick[296507]: 167 167
Jan 27 08:53:49 np0005597378 systemd[1]: libpod-5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e.scope: Deactivated successfully.
Jan 27 08:53:49 np0005597378 podman[296491]: 2026-01-27 13:53:49.392064796 +0000 UTC m=+0.154002567 container died 5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:53:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-6ae3e1d3165858844d9c11d65b805ac04615bdefdafc5e472a3a482ef449360f-merged.mount: Deactivated successfully.
Jan 27 08:53:49 np0005597378 podman[296491]: 2026-01-27 13:53:49.447531246 +0000 UTC m=+0.209469017 container remove 5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 27 08:53:49 np0005597378 systemd[1]: libpod-conmon-5e66ff20d3b4edeb3c5fdafd8ff729a19b7a6c01e4ca6d879f16aef51143e11e.scope: Deactivated successfully.
Jan 27 08:53:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:53:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:53:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:53:49 np0005597378 podman[296531]: 2026-01-27 13:53:49.633903701 +0000 UTC m=+0.043082959 container create 5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shannon, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 08:53:49 np0005597378 systemd[1]: Started libpod-conmon-5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e.scope.
Jan 27 08:53:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:53:49 np0005597378 podman[296531]: 2026-01-27 13:53:49.615410174 +0000 UTC m=+0.024589452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:53:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0034f26f71d3cba4a878979a0f17e4fee59a1f3ad2e96129211a6f058a65280b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0034f26f71d3cba4a878979a0f17e4fee59a1f3ad2e96129211a6f058a65280b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0034f26f71d3cba4a878979a0f17e4fee59a1f3ad2e96129211a6f058a65280b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0034f26f71d3cba4a878979a0f17e4fee59a1f3ad2e96129211a6f058a65280b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0034f26f71d3cba4a878979a0f17e4fee59a1f3ad2e96129211a6f058a65280b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:49 np0005597378 podman[296531]: 2026-01-27 13:53:49.89380698 +0000 UTC m=+0.302986258 container init 5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:53:49 np0005597378 podman[296531]: 2026-01-27 13:53:49.902473413 +0000 UTC m=+0.311652671 container start 5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 08:53:49 np0005597378 nova_compute[238941]: 2026-01-27 13:53:49.993 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Updating instance_info_cache with network_info: [{"id": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "address": "fa:16:3e:e4:d4:f7", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff596883-7a", "ovs_interfaceid": "ff596883-7a7a-4226-a61f-de4382f6ff0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.015 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-511a49bc-bc87-444f-8323-95e4c88313c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.015 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 511a49bc-bc87-444f-8323-95e4c88313c6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.016 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.035 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.036 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.036 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.037 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.037 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:50 np0005597378 podman[296531]: 2026-01-27 13:53:50.039627376 +0000 UTC m=+0.448806664 container attach 5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shannon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:53:50 np0005597378 agitated_shannon[296548]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:53:50 np0005597378 agitated_shannon[296548]: --> All data devices are unavailable
Jan 27 08:53:50 np0005597378 systemd[1]: libpod-5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e.scope: Deactivated successfully.
Jan 27 08:53:50 np0005597378 podman[296531]: 2026-01-27 13:53:50.372781842 +0000 UTC m=+0.781961110 container died 5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shannon, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:53:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0034f26f71d3cba4a878979a0f17e4fee59a1f3ad2e96129211a6f058a65280b-merged.mount: Deactivated successfully.
Jan 27 08:53:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 148 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 786 KiB/s wr, 97 op/s
Jan 27 08:53:50 np0005597378 podman[296531]: 2026-01-27 13:53:50.420925196 +0000 UTC m=+0.830104454 container remove 5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shannon, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:53:50 np0005597378 systemd[1]: libpod-conmon-5f1fde261c64b4ad2663b7b1443999b28f1122ea96de0cda74900cdf6c7bd72e.scope: Deactivated successfully.
Jan 27 08:53:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3539312475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.611 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.673 238945 DEBUG nova.network.neutron [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.711 238945 INFO nova.compute.manager [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Took 1.55 seconds to deallocate network for instance.
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.717 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.717 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.756 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.757 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.771 238945 DEBUG nova.compute.manager [req-80e2776f-76d2-4280-941b-843359743c2f req-54af0b64-055e-4d42-8ca9-80bbdce1006e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Received event network-vif-deleted-1c387633-89d6-40ae-a2cf-102af6198381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.788 238945 DEBUG nova.scheduler.client.report [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.823 238945 DEBUG nova.scheduler.client.report [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.825 238945 DEBUG nova.compute.provider_tree [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.846 238945 DEBUG nova.scheduler.client.report [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.867 238945 DEBUG nova.scheduler.client.report [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 08:53:50 np0005597378 podman[296665]: 2026-01-27 13:53:50.906037843 +0000 UTC m=+0.045816331 container create d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.919 238945 DEBUG oslo_concurrency.processutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:53:50 np0005597378 systemd[1]: Started libpod-conmon-d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4.scope.
Jan 27 08:53:50 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.966 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.968 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3758MB free_disk=59.92862272169441GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 08:53:50 np0005597378 nova_compute[238941]: 2026-01-27 13:53:50.968 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:53:50 np0005597378 podman[296665]: 2026-01-27 13:53:50.976540297 +0000 UTC m=+0.116318805 container init d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:53:50 np0005597378 podman[296665]: 2026-01-27 13:53:50.887598248 +0000 UTC m=+0.027376756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:53:50 np0005597378 podman[296665]: 2026-01-27 13:53:50.984868401 +0000 UTC m=+0.124646899 container start d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:53:50 np0005597378 podman[296665]: 2026-01-27 13:53:50.988389155 +0000 UTC m=+0.128167663 container attach d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 08:53:50 np0005597378 elated_hugle[296682]: 167 167
Jan 27 08:53:50 np0005597378 systemd[1]: libpod-d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4.scope: Deactivated successfully.
Jan 27 08:53:50 np0005597378 conmon[296682]: conmon d5df5b141ae3bb2a5c91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4.scope/container/memory.events
Jan 27 08:53:50 np0005597378 podman[296665]: 2026-01-27 13:53:50.99269365 +0000 UTC m=+0.132472158 container died d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:53:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e1c76afd0e483918b21d3be3e1bd8c52be307746fa81315f1e28e482922a26f2-merged.mount: Deactivated successfully.
Jan 27 08:53:51 np0005597378 podman[296665]: 2026-01-27 13:53:51.02990067 +0000 UTC m=+0.169679158 container remove d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 08:53:51 np0005597378 systemd[1]: libpod-conmon-d5df5b141ae3bb2a5c91477b75f7ab2685384e7c06cecba3cc873044efb491a4.scope: Deactivated successfully.
Jan 27 08:53:51 np0005597378 podman[296726]: 2026-01-27 13:53:51.221741132 +0000 UTC m=+0.039557673 container create bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 08:53:51 np0005597378 systemd[1]: Started libpod-conmon-bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2.scope.
Jan 27 08:53:51 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:53:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d438ba2286c2a10c8665f15064520b4f395fad48fc4aa9ec04c4faf79283d223/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d438ba2286c2a10c8665f15064520b4f395fad48fc4aa9ec04c4faf79283d223/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d438ba2286c2a10c8665f15064520b4f395fad48fc4aa9ec04c4faf79283d223/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d438ba2286c2a10c8665f15064520b4f395fad48fc4aa9ec04c4faf79283d223/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:51 np0005597378 podman[296726]: 2026-01-27 13:53:51.204402766 +0000 UTC m=+0.022219327 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:53:51 np0005597378 podman[296726]: 2026-01-27 13:53:51.311637486 +0000 UTC m=+0.129454047 container init bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:53:51 np0005597378 podman[296726]: 2026-01-27 13:53:51.321947642 +0000 UTC m=+0.139764183 container start bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:53:51 np0005597378 podman[296726]: 2026-01-27 13:53:51.326806883 +0000 UTC m=+0.144623444 container attach bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 08:53:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710103550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:51 np0005597378 nova_compute[238941]: 2026-01-27 13:53:51.539 238945 DEBUG oslo_concurrency.processutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:53:51 np0005597378 nova_compute[238941]: 2026-01-27 13:53:51.547 238945 DEBUG nova.compute.provider_tree [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:53:51 np0005597378 nova_compute[238941]: 2026-01-27 13:53:51.567 238945 DEBUG nova.scheduler.client.report [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:53:51 np0005597378 nova_compute[238941]: 2026-01-27 13:53:51.588 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:53:51 np0005597378 nova_compute[238941]: 2026-01-27 13:53:51.591 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]: {
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:    "0": [
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:        {
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "devices": [
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "/dev/loop3"
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            ],
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_name": "ceph_lv0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_size": "21470642176",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "name": "ceph_lv0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "tags": {
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.cluster_name": "ceph",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.crush_device_class": "",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.encrypted": "0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.objectstore": "bluestore",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.osd_id": "0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.type": "block",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.vdo": "0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.with_tpm": "0"
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            },
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "type": "block",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "vg_name": "ceph_vg0"
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:        }
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:    ],
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:    "1": [
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:        {
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "devices": [
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "/dev/loop4"
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            ],
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_name": "ceph_lv1",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_size": "21470642176",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "name": "ceph_lv1",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "tags": {
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.cluster_name": "ceph",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.crush_device_class": "",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.encrypted": "0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.objectstore": "bluestore",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.osd_id": "1",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.type": "block",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.vdo": "0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.with_tpm": "0"
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            },
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "type": "block",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "vg_name": "ceph_vg1"
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:        }
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:    ],
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:    "2": [
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:        {
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "devices": [
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "/dev/loop5"
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            ],
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_name": "ceph_lv2",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_size": "21470642176",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "name": "ceph_lv2",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "tags": {
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.cluster_name": "ceph",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.crush_device_class": "",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.encrypted": "0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.objectstore": "bluestore",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.osd_id": "2",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.type": "block",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.vdo": "0",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:                "ceph.with_tpm": "0"
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            },
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "type": "block",
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:            "vg_name": "ceph_vg2"
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:        }
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]:    ]
Jan 27 08:53:51 np0005597378 inspiring_hodgkin[296743]: }
Jan 27 08:53:51 np0005597378 systemd[1]: libpod-bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2.scope: Deactivated successfully.
Jan 27 08:53:51 np0005597378 podman[296726]: 2026-01-27 13:53:51.645565154 +0000 UTC m=+0.463381715 container died bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 08:53:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d438ba2286c2a10c8665f15064520b4f395fad48fc4aa9ec04c4faf79283d223-merged.mount: Deactivated successfully.
Jan 27 08:53:51 np0005597378 nova_compute[238941]: 2026-01-27 13:53:51.698 238945 INFO nova.scheduler.client.report [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Deleted allocations for instance afe7a605-0545-4e95-9e9a-4938d17f4a8c#033[00m
Jan 27 08:53:51 np0005597378 nova_compute[238941]: 2026-01-27 13:53:51.739 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 511a49bc-bc87-444f-8323-95e4c88313c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:53:51 np0005597378 nova_compute[238941]: 2026-01-27 13:53:51.740 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:53:51 np0005597378 nova_compute[238941]: 2026-01-27 13:53:51.740 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:53:51 np0005597378 podman[296726]: 2026-01-27 13:53:51.755161737 +0000 UTC m=+0.572978268 container remove bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 08:53:51 np0005597378 systemd[1]: libpod-conmon-bb498026a2a40ade438bf965a02ce69d55f5ce16e2e48b20761d9e303ab885f2.scope: Deactivated successfully.
Jan 27 08:53:51 np0005597378 nova_compute[238941]: 2026-01-27 13:53:51.783 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:51 np0005597378 nova_compute[238941]: 2026-01-27 13:53:51.926 238945 DEBUG oslo_concurrency.lockutils [None req-2367738d-2d97-4411-815d-981caa21e775 e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "afe7a605-0545-4e95-9e9a-4938d17f4a8c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:51 np0005597378 nova_compute[238941]: 2026-01-27 13:53:51.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:52 np0005597378 podman[296847]: 2026-01-27 13:53:52.247408206 +0000 UTC m=+0.078770596 container create 74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:53:52 np0005597378 podman[296847]: 2026-01-27 13:53:52.190142218 +0000 UTC m=+0.021504638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:53:52 np0005597378 systemd[1]: Started libpod-conmon-74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0.scope.
Jan 27 08:53:52 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:53:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121286299' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:52 np0005597378 podman[296847]: 2026-01-27 13:53:52.372222878 +0000 UTC m=+0.203585288 container init 74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 08:53:52 np0005597378 podman[296847]: 2026-01-27 13:53:52.380054238 +0000 UTC m=+0.211416628 container start 74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 08:53:52 np0005597378 angry_beaver[296863]: 167 167
Jan 27 08:53:52 np0005597378 systemd[1]: libpod-74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0.scope: Deactivated successfully.
Jan 27 08:53:52 np0005597378 nova_compute[238941]: 2026-01-27 13:53:52.394 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:52 np0005597378 nova_compute[238941]: 2026-01-27 13:53:52.400 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:53:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 148 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 84 op/s
Jan 27 08:53:52 np0005597378 podman[296847]: 2026-01-27 13:53:52.414862213 +0000 UTC m=+0.246224623 container attach 74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 27 08:53:52 np0005597378 podman[296847]: 2026-01-27 13:53:52.415258733 +0000 UTC m=+0.246621123 container died 74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 08:53:52 np0005597378 nova_compute[238941]: 2026-01-27 13:53:52.427 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:53:52 np0005597378 nova_compute[238941]: 2026-01-27 13:53:52.463 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:53:52 np0005597378 nova_compute[238941]: 2026-01-27 13:53:52.463 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:52 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7ded9caef6219aa8b5aa8af8152c7feed33e94dc5b72e3df37423e86972bdaab-merged.mount: Deactivated successfully.
Jan 27 08:53:52 np0005597378 podman[296847]: 2026-01-27 13:53:52.823061806 +0000 UTC m=+0.654424196 container remove 74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 08:53:52 np0005597378 nova_compute[238941]: 2026-01-27 13:53:52.829 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:53:52 np0005597378 nova_compute[238941]: 2026-01-27 13:53:52.830 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:53:52 np0005597378 nova_compute[238941]: 2026-01-27 13:53:52.830 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:53:52 np0005597378 nova_compute[238941]: 2026-01-27 13:53:52.830 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:53:52 np0005597378 systemd[1]: libpod-conmon-74181678c8e6629de4419527a9fc0456b23e9c28f6abbfe10126e701751a74d0.scope: Deactivated successfully.
Jan 27 08:53:53 np0005597378 podman[296889]: 2026-01-27 13:53:53.026448117 +0000 UTC m=+0.080677808 container create 4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:53:53 np0005597378 podman[296889]: 2026-01-27 13:53:52.974426211 +0000 UTC m=+0.028655932 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:53:53 np0005597378 systemd[1]: Started libpod-conmon-4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a.scope.
Jan 27 08:53:53 np0005597378 podman[296904]: 2026-01-27 13:53:53.15167964 +0000 UTC m=+0.085064696 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 08:53:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:53:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcc274eac5483a57915784a3f0b64daa6ccc751748a4acaa302114fb1f5ef42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcc274eac5483a57915784a3f0b64daa6ccc751748a4acaa302114fb1f5ef42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcc274eac5483a57915784a3f0b64daa6ccc751748a4acaa302114fb1f5ef42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcc274eac5483a57915784a3f0b64daa6ccc751748a4acaa302114fb1f5ef42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:53:53 np0005597378 podman[296889]: 2026-01-27 13:53:53.251252674 +0000 UTC m=+0.305482395 container init 4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:53:53 np0005597378 podman[296889]: 2026-01-27 13:53:53.259589578 +0000 UTC m=+0.313819269 container start 4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 08:53:53 np0005597378 podman[296889]: 2026-01-27 13:53:53.268454177 +0000 UTC m=+0.322683878 container attach 4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:53:53 np0005597378 podman[296903]: 2026-01-27 13:53:53.292011948 +0000 UTC m=+0.225294530 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 08:53:53 np0005597378 nova_compute[238941]: 2026-01-27 13:53:53.357 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:53 np0005597378 lvm[297024]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:53:53 np0005597378 lvm[297025]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:53:53 np0005597378 lvm[297024]: VG ceph_vg0 finished
Jan 27 08:53:53 np0005597378 lvm[297027]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:53:53 np0005597378 lvm[297027]: VG ceph_vg2 finished
Jan 27 08:53:53 np0005597378 lvm[297025]: VG ceph_vg1 finished
Jan 27 08:53:54 np0005597378 flamboyant_booth[296939]: {}
Jan 27 08:53:54 np0005597378 systemd[1]: libpod-4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a.scope: Deactivated successfully.
Jan 27 08:53:54 np0005597378 systemd[1]: libpod-4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a.scope: Consumed 1.401s CPU time.
Jan 27 08:53:54 np0005597378 podman[296889]: 2026-01-27 13:53:54.12577026 +0000 UTC m=+1.179999961 container died 4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:53:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ddcc274eac5483a57915784a3f0b64daa6ccc751748a4acaa302114fb1f5ef42-merged.mount: Deactivated successfully.
Jan 27 08:53:54 np0005597378 nova_compute[238941]: 2026-01-27 13:53:54.334 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "29a2c604-0230-4d68-a604-b8762babfe58" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:54 np0005597378 nova_compute[238941]: 2026-01-27 13:53:54.335 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:54 np0005597378 podman[296889]: 2026-01-27 13:53:54.361781607 +0000 UTC m=+1.416011298 container remove 4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:53:54 np0005597378 systemd[1]: libpod-conmon-4a5f6f6fd0962e40721fc320eaef834435f85e0a6e3762ba8c9407210a4d019a.scope: Deactivated successfully.
Jan 27 08:53:54 np0005597378 nova_compute[238941]: 2026-01-27 13:53:54.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:53:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:53:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 121 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Jan 27 08:53:54 np0005597378 nova_compute[238941]: 2026-01-27 13:53:54.414 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:53:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:53:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:53:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:53:54 np0005597378 nova_compute[238941]: 2026-01-27 13:53:54.643 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:54 np0005597378 nova_compute[238941]: 2026-01-27 13:53:54.643 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:54 np0005597378 nova_compute[238941]: 2026-01-27 13:53:54.651 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:53:54 np0005597378 nova_compute[238941]: 2026-01-27 13:53:54.651 238945 INFO nova.compute.claims [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:53:54 np0005597378 nova_compute[238941]: 2026-01-27 13:53:54.794 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:54 np0005597378 nova_compute[238941]: 2026-01-27 13:53:54.990 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "89992af6-c9c9-4948-a4e8-cf46814953c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:54 np0005597378 nova_compute[238941]: 2026-01-27 13:53:54.991 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.006 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.080 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156263871' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.373 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.379 238945 DEBUG nova.compute.provider_tree [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.408 238945 DEBUG nova.scheduler.client.report [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.446 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.447 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.455 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.455 238945 INFO nova.compute.claims [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.484 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "21e2b15c-599b-4ee7-aa0a-71e2d90ea288" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.485 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "21e2b15c-599b-4ee7-aa0a-71e2d90ea288" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.506 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "21e2b15c-599b-4ee7-aa0a-71e2d90ea288" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.507 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.609 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.665 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.665 238945 DEBUG nova.network.neutron [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.689 238945 INFO nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:53:55 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:53:55 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.727 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.951 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.953 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.953 238945 INFO nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Creating image(s)#033[00m
Jan 27 08:53:55 np0005597378 nova_compute[238941]: 2026-01-27 13:53:55.977 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.003 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.025 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.028 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.101 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.102 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.103 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.103 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.130 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.134 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 29a2c604-0230-4d68-a604-b8762babfe58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.173 238945 DEBUG nova.policy [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'febf3fa2bf644f59bdabf84c75d6aca3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f426e9a7cb05472cbc3b92502e087f8b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:53:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040657554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.237 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.242 238945 DEBUG nova.compute.provider_tree [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.261 238945 DEBUG nova.scheduler.client.report [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.295 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.296 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.411 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.412 238945 DEBUG nova.network.neutron [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:53:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 121 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 KiB/s wr, 92 op/s
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.442 238945 INFO nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.495 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.648 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.650 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.650 238945 INFO nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Creating image(s)#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.676 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.705 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.797 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.803 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.849 238945 DEBUG nova.policy [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5ced810fd6141b292a2237ebe49cfc9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c910283aa95c4275954bee4904b21d1e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.869 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 29a2c604-0230-4d68-a604-b8762babfe58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.901 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.902 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.902 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.903 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.924 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:53:56 np0005597378 nova_compute[238941]: 2026-01-27 13:53:56.931 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 89992af6-c9c9-4948-a4e8-cf46814953c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:57 np0005597378 nova_compute[238941]: 2026-01-27 13:53:57.031 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:57 np0005597378 nova_compute[238941]: 2026-01-27 13:53:57.040 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] resizing rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:53:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:53:57 np0005597378 nova_compute[238941]: 2026-01-27 13:53:57.257 238945 DEBUG nova.network.neutron [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Successfully created port: d0059d5e-00d8-4be4-a689-76944e28fe37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:53:57 np0005597378 nova_compute[238941]: 2026-01-27 13:53:57.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:53:57 np0005597378 nova_compute[238941]: 2026-01-27 13:53:57.628 238945 DEBUG nova.objects.instance [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lazy-loading 'migration_context' on Instance uuid 29a2c604-0230-4d68-a604-b8762babfe58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:57 np0005597378 nova_compute[238941]: 2026-01-27 13:53:57.685 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 89992af6-c9c9-4948-a4e8-cf46814953c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.755s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:57 np0005597378 nova_compute[238941]: 2026-01-27 13:53:57.759 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] resizing rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:53:57 np0005597378 nova_compute[238941]: 2026-01-27 13:53:57.803 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:53:57 np0005597378 nova_compute[238941]: 2026-01-27 13:53:57.804 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Ensure instance console log exists: /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:53:57 np0005597378 nova_compute[238941]: 2026-01-27 13:53:57.804 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:57 np0005597378 nova_compute[238941]: 2026-01-27 13:53:57.804 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:57 np0005597378 nova_compute[238941]: 2026-01-27 13:53:57.805 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.004 238945 DEBUG nova.objects.instance [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'migration_context' on Instance uuid 89992af6-c9c9-4948-a4e8-cf46814953c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.034 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.035 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Ensure instance console log exists: /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.036 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.036 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.036 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.361 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:53:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 153 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 105 op/s
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.687 238945 DEBUG nova.network.neutron [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Successfully created port: 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.843 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "0545c86a-1cc2-486f-acb1-883a7dc19420" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.843 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "0545c86a-1cc2-486f-acb1-883a7dc19420" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.861 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.923 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.924 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.933 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:53:58 np0005597378 nova_compute[238941]: 2026-01-27 13:53:58.933 238945 INFO nova.compute.claims [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:53:59 np0005597378 nova_compute[238941]: 2026-01-27 13:53:59.087 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:53:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:53:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1789824652' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:53:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:53:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1789824652' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:53:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:53:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1160092989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:53:59 np0005597378 nova_compute[238941]: 2026-01-27 13:53:59.701 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:53:59 np0005597378 nova_compute[238941]: 2026-01-27 13:53:59.709 238945 DEBUG nova.compute.provider_tree [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:53:59 np0005597378 nova_compute[238941]: 2026-01-27 13:53:59.724 238945 DEBUG nova.scheduler.client.report [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:53:59 np0005597378 nova_compute[238941]: 2026-01-27 13:53:59.761 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:53:59 np0005597378 nova_compute[238941]: 2026-01-27 13:53:59.763 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:53:59 np0005597378 nova_compute[238941]: 2026-01-27 13:53:59.846 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:53:59 np0005597378 nova_compute[238941]: 2026-01-27 13:53:59.847 238945 DEBUG nova.network.neutron [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:53:59 np0005597378 nova_compute[238941]: 2026-01-27 13:53:59.870 238945 INFO nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:53:59 np0005597378 nova_compute[238941]: 2026-01-27 13:53:59.893 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.007 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.008 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.009 238945 INFO nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Creating image(s)#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.028 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.054 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.076 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.079 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.157 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.158 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.159 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.159 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.186 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.191 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 0545c86a-1cc2-486f-acb1-883a7dc19420_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:54:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 213 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.597 238945 DEBUG nova.policy [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7b04c035f0fe4ea19948e498881aef64', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f5c4dad659994db39d3522a0f84aa97f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.601 238945 DEBUG nova.network.neutron [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Successfully updated port: d0059d5e-00d8-4be4-a689-76944e28fe37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.683 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "refresh_cache-29a2c604-0230-4d68-a604-b8762babfe58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.684 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquired lock "refresh_cache-29a2c604-0230-4d68-a604-b8762babfe58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.684 238945 DEBUG nova.network.neutron [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.722 238945 DEBUG nova.compute.manager [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Received event network-changed-d0059d5e-00d8-4be4-a689-76944e28fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.723 238945 DEBUG nova.compute.manager [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Refreshing instance network info cache due to event network-changed-d0059d5e-00d8-4be4-a689-76944e28fe37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.723 238945 DEBUG oslo_concurrency.lockutils [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-29a2c604-0230-4d68-a604-b8762babfe58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:54:00 np0005597378 nova_compute[238941]: 2026-01-27 13:54:00.923 238945 DEBUG nova.network.neutron [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:54:01 np0005597378 nova_compute[238941]: 2026-01-27 13:54:01.487 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 0545c86a-1cc2-486f-acb1-883a7dc19420_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:54:01 np0005597378 nova_compute[238941]: 2026-01-27 13:54:01.555 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] resizing rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:54:01 np0005597378 nova_compute[238941]: 2026-01-27 13:54:01.837 238945 DEBUG nova.objects.instance [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'migration_context' on Instance uuid 0545c86a-1cc2-486f-acb1-883a7dc19420 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:54:01 np0005597378 nova_compute[238941]: 2026-01-27 13:54:01.852 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:54:01 np0005597378 nova_compute[238941]: 2026-01-27 13:54:01.852 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Ensure instance console log exists: /var/lib/nova/instances/0545c86a-1cc2-486f-acb1-883a7dc19420/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:54:01 np0005597378 nova_compute[238941]: 2026-01-27 13:54:01.853 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:54:01 np0005597378 nova_compute[238941]: 2026-01-27 13:54:01.853 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:54:01 np0005597378 nova_compute[238941]: 2026-01-27 13:54:01.854 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:54:01 np0005597378 nova_compute[238941]: 2026-01-27 13:54:01.997 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:02 np0005597378 nova_compute[238941]: 2026-01-27 13:54:02.408 238945 DEBUG nova.network.neutron [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Successfully updated port: 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:54:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 213 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.6 MiB/s wr, 71 op/s
Jan 27 08:54:02 np0005597378 nova_compute[238941]: 2026-01-27 13:54:02.424 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "refresh_cache-89992af6-c9c9-4948-a4e8-cf46814953c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:54:02 np0005597378 nova_compute[238941]: 2026-01-27 13:54:02.424 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquired lock "refresh_cache-89992af6-c9c9-4948-a4e8-cf46814953c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:54:02 np0005597378 nova_compute[238941]: 2026-01-27 13:54:02.424 238945 DEBUG nova.network.neutron [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:54:02 np0005597378 nova_compute[238941]: 2026-01-27 13:54:02.701 238945 DEBUG nova.network.neutron [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:54:02 np0005597378 nova_compute[238941]: 2026-01-27 13:54:02.807 238945 DEBUG nova.compute.manager [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Received event network-changed-64ba69cb-72cb-418c-ae2c-a3019b84a9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:54:02 np0005597378 nova_compute[238941]: 2026-01-27 13:54:02.807 238945 DEBUG nova.compute.manager [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Refreshing instance network info cache due to event network-changed-64ba69cb-72cb-418c-ae2c-a3019b84a9d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:54:02 np0005597378 nova_compute[238941]: 2026-01-27 13:54:02.808 238945 DEBUG oslo_concurrency.lockutils [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-89992af6-c9c9-4948-a4e8-cf46814953c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:54:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.142 238945 DEBUG nova.network.neutron [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Updating instance_info_cache with network_info: [{"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:54:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 21K writes, 86K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s#012Cumulative WAL: 21K writes, 7221 syncs, 3.02 writes per sync, written: 0.08 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 42.53 MB, 0.07 MB/s#012Interval WAL: 10K writes, 4161 syncs, 2.54 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.166 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Releasing lock "refresh_cache-29a2c604-0230-4d68-a604-b8762babfe58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.167 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Instance network_info: |[{"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.167 238945 DEBUG oslo_concurrency.lockutils [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-29a2c604-0230-4d68-a604-b8762babfe58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.167 238945 DEBUG nova.network.neutron [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Refreshing network info cache for port d0059d5e-00d8-4be4-a689-76944e28fe37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.171 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Start _get_guest_xml network_info=[{"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.176 238945 WARNING nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.182 238945 DEBUG nova.virt.libvirt.host [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.182 238945 DEBUG nova.virt.libvirt.host [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.191 238945 DEBUG nova.virt.libvirt.host [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.192 238945 DEBUG nova.virt.libvirt.host [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.193 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.193 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.193 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.194 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.194 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.194 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.194 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.195 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.195 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.195 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.195 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.195 238945 DEBUG nova.virt.hardware [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.198 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.329 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522028.3277516, afe7a605-0545-4e95-9e9a-4938d17f4a8c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.330 238945 INFO nova.compute.manager [-] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.367 238945 DEBUG nova.compute.manager [None req-6748caad-a538-4c22-846b-ac89b33a3340 - - - - - -] [instance: afe7a605-0545-4e95-9e9a-4938d17f4a8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.385 238945 DEBUG nova.network.neutron [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Successfully created port: 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:54:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:54:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/704753370' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.819 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.843 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.846 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.880 238945 DEBUG nova.network.neutron [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Updating instance_info_cache with network_info: [{"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.903 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Releasing lock "refresh_cache-89992af6-c9c9-4948-a4e8-cf46814953c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.904 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Instance network_info: |[{"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.904 238945 DEBUG oslo_concurrency.lockutils [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-89992af6-c9c9-4948-a4e8-cf46814953c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.904 238945 DEBUG nova.network.neutron [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Refreshing network info cache for port 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:54:03 np0005597378 nova_compute[238941]: 2026-01-27 13:54:03.908 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Start _get_guest_xml network_info=[{"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.042 238945 WARNING nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.052 238945 DEBUG nova.virt.libvirt.host [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.054 238945 DEBUG nova.virt.libvirt.host [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.059 238945 DEBUG nova.virt.libvirt.host [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.060 238945 DEBUG nova.virt.libvirt.host [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.060 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.060 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.061 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.061 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.061 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.061 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.062 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.062 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.062 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.062 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.062 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.063 238945 DEBUG nova.virt.hardware [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.066 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:54:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 247 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 4.8 MiB/s wr, 97 op/s
Jan 27 08:54:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:54:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2676771485' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.470 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.472 238945 DEBUG nova.virt.libvirt.vif [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-517968752',display_name='tempest-ServerGroupTestJSON-server-517968752',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-517968752',id=66,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f426e9a7cb05472cbc3b92502e087f8b',ramdisk_id='',reservation_id='r-oc540pm7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1004301058',owner_user_name='tempest-ServerGroupTestJSON-1004301058
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:55Z,user_data=None,user_id='febf3fa2bf644f59bdabf84c75d6aca3',uuid=29a2c604-0230-4d68-a604-b8762babfe58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.472 238945 DEBUG nova.network.os_vif_util [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Converting VIF {"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.473 238945 DEBUG nova.network.os_vif_util [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:18:0e,bridge_name='br-int',has_traffic_filtering=True,id=d0059d5e-00d8-4be4-a689-76944e28fe37,network=Network(30ece4e7-a802-462e-83bb-9819891d2636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0059d5e-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.474 238945 DEBUG nova.objects.instance [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lazy-loading 'pci_devices' on Instance uuid 29a2c604-0230-4d68-a604-b8762babfe58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.489 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  <uuid>29a2c604-0230-4d68-a604-b8762babfe58</uuid>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  <name>instance-00000042</name>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerGroupTestJSON-server-517968752</nova:name>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:54:03</nova:creationTime>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:        <nova:user uuid="febf3fa2bf644f59bdabf84c75d6aca3">tempest-ServerGroupTestJSON-1004301058-project-member</nova:user>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:        <nova:project uuid="f426e9a7cb05472cbc3b92502e087f8b">tempest-ServerGroupTestJSON-1004301058</nova:project>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:        <nova:port uuid="d0059d5e-00d8-4be4-a689-76944e28fe37">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <entry name="serial">29a2c604-0230-4d68-a604-b8762babfe58</entry>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <entry name="uuid">29a2c604-0230-4d68-a604-b8762babfe58</entry>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/29a2c604-0230-4d68-a604-b8762babfe58_disk">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/29a2c604-0230-4d68-a604-b8762babfe58_disk.config">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:82:18:0e"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <target dev="tapd0059d5e-00"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/console.log" append="off"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:54:04 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:54:04 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:54:04 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:54:04 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.490 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Preparing to wait for external event network-vif-plugged-d0059d5e-00d8-4be4-a689-76944e28fe37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.491 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Acquiring lock "29a2c604-0230-4d68-a604-b8762babfe58-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.491 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.491 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.492 238945 DEBUG nova.virt.libvirt.vif [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-517968752',display_name='tempest-ServerGroupTestJSON-server-517968752',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-517968752',id=66,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f426e9a7cb05472cbc3b92502e087f8b',ramdisk_id='',reservation_id='r-oc540pm7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1004301058',owner_user_name='tempest-ServerGroupTestJSON-
1004301058-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:55Z,user_data=None,user_id='febf3fa2bf644f59bdabf84c75d6aca3',uuid=29a2c604-0230-4d68-a604-b8762babfe58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.493 238945 DEBUG nova.network.os_vif_util [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Converting VIF {"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.493 238945 DEBUG nova.network.os_vif_util [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:18:0e,bridge_name='br-int',has_traffic_filtering=True,id=d0059d5e-00d8-4be4-a689-76944e28fe37,network=Network(30ece4e7-a802-462e-83bb-9819891d2636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0059d5e-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.494 238945 DEBUG os_vif [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:18:0e,bridge_name='br-int',has_traffic_filtering=True,id=d0059d5e-00d8-4be4-a689-76944e28fe37,network=Network(30ece4e7-a802-462e-83bb-9819891d2636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0059d5e-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.495 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.496 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.502 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.502 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0059d5e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.503 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0059d5e-00, col_values=(('external_ids', {'iface-id': 'd0059d5e-00d8-4be4-a689-76944e28fe37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:18:0e', 'vm-uuid': '29a2c604-0230-4d68-a604-b8762babfe58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:04 np0005597378 NetworkManager[48904]: <info>  [1769522044.5055] manager: (tapd0059d5e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.506 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.511 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.513 238945 INFO os_vif [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:18:0e,bridge_name='br-int',has_traffic_filtering=True,id=d0059d5e-00d8-4be4-a689-76944e28fe37,network=Network(30ece4e7-a802-462e-83bb-9819891d2636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0059d5e-00')#033[00m
Jan 27 08:54:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:54:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/556227327' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.606 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.818 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.825 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.895 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.896 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.896 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] No VIF found with MAC fa:16:3e:82:18:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.897 238945 INFO nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Using config drive#033[00m
Jan 27 08:54:04 np0005597378 nova_compute[238941]: 2026-01-27 13:54:04.924 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:54:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:54:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024308707' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.433 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.435 238945 DEBUG nova.virt.libvirt.vif [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-263917086',display_name='tempest-ServersTestJSON-server-263917086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-263917086',id=67,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-eu5tlwup',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:56Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=89992af6-c9c9-4948-a4e8-cf46814953c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.435 238945 DEBUG nova.network.os_vif_util [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.436 238945 DEBUG nova.network.os_vif_util [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:a8:da,bridge_name='br-int',has_traffic_filtering=True,id=64ba69cb-72cb-418c-ae2c-a3019b84a9d9,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64ba69cb-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.436 238945 DEBUG nova.objects.instance [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lazy-loading 'pci_devices' on Instance uuid 89992af6-c9c9-4948-a4e8-cf46814953c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.453 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  <uuid>89992af6-c9c9-4948-a4e8-cf46814953c3</uuid>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  <name>instance-00000043</name>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersTestJSON-server-263917086</nova:name>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:54:04</nova:creationTime>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:        <nova:user uuid="e5ced810fd6141b292a2237ebe49cfc9">tempest-ServersTestJSON-467936823-project-member</nova:user>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:        <nova:project uuid="c910283aa95c4275954bee4904b21d1e">tempest-ServersTestJSON-467936823</nova:project>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:        <nova:port uuid="64ba69cb-72cb-418c-ae2c-a3019b84a9d9">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <entry name="serial">89992af6-c9c9-4948-a4e8-cf46814953c3</entry>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <entry name="uuid">89992af6-c9c9-4948-a4e8-cf46814953c3</entry>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/89992af6-c9c9-4948-a4e8-cf46814953c3_disk">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/89992af6-c9c9-4948-a4e8-cf46814953c3_disk.config">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:33:a8:da"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <target dev="tap64ba69cb-72"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/console.log" append="off"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:54:05 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:54:05 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:54:05 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:54:05 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.455 238945 DEBUG nova.compute.manager [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Preparing to wait for external event network-vif-plugged-64ba69cb-72cb-418c-ae2c-a3019b84a9d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.455 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Acquiring lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.455 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.456 238945 DEBUG oslo_concurrency.lockutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Lock "89992af6-c9c9-4948-a4e8-cf46814953c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.456 238945 DEBUG nova.virt.libvirt.vif [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-263917086',display_name='tempest-ServersTestJSON-server-263917086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-263917086',id=67,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c910283aa95c4275954bee4904b21d1e',ramdisk_id='',reservation_id='r-eu5tlwup',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-467936823',owner_user_name='tempest-ServersTestJSON-467936823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:56Z,user_data=None,user_id='e5ced810fd6141b292a2237ebe49cfc9',uuid=89992af6-c9c9-4948-a4e8-cf46814953c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.456 238945 DEBUG nova.network.os_vif_util [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converting VIF {"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.457 238945 DEBUG nova.network.os_vif_util [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:a8:da,bridge_name='br-int',has_traffic_filtering=True,id=64ba69cb-72cb-418c-ae2c-a3019b84a9d9,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64ba69cb-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.457 238945 DEBUG os_vif [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:a8:da,bridge_name='br-int',has_traffic_filtering=True,id=64ba69cb-72cb-418c-ae2c-a3019b84a9d9,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64ba69cb-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.458 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.458 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.458 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.462 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64ba69cb-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.463 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap64ba69cb-72, col_values=(('external_ids', {'iface-id': '64ba69cb-72cb-418c-ae2c-a3019b84a9d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:a8:da', 'vm-uuid': '89992af6-c9c9-4948-a4e8-cf46814953c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:05 np0005597378 NetworkManager[48904]: <info>  [1769522045.4652] manager: (tap64ba69cb-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.474 238945 INFO os_vif [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:a8:da,bridge_name='br-int',has_traffic_filtering=True,id=64ba69cb-72cb-418c-ae2c-a3019b84a9d9,network=Network(13754bbc-8f22-4885-aa27-198718585636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64ba69cb-72')#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.533 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.534 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.534 238945 DEBUG nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] No VIF found with MAC fa:16:3e:33:a8:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.534 238945 INFO nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Using config drive#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.557 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.776 238945 INFO nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Creating config drive at /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/disk.config#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.781 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe8ktfr8l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.820 238945 DEBUG nova.network.neutron [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Updated VIF entry in instance network info cache for port d0059d5e-00d8-4be4-a689-76944e28fe37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.821 238945 DEBUG nova.network.neutron [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Updating instance_info_cache with network_info: [{"id": "d0059d5e-00d8-4be4-a689-76944e28fe37", "address": "fa:16:3e:82:18:0e", "network": {"id": "30ece4e7-a802-462e-83bb-9819891d2636", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-887526114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f426e9a7cb05472cbc3b92502e087f8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0059d5e-00", "ovs_interfaceid": "d0059d5e-00d8-4be4-a689-76944e28fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.842 238945 DEBUG oslo_concurrency.lockutils [req-eabe748a-4875-4e83-ae14-7af4d929abf9 req-4640c538-d10f-4ad8-928f-865aad90fded 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-29a2c604-0230-4d68-a604-b8762babfe58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.927 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe8ktfr8l" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.956 238945 DEBUG nova.storage.rbd_utils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] rbd image 29a2c604-0230-4d68-a604-b8762babfe58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:54:05 np0005597378 nova_compute[238941]: 2026-01-27 13:54:05.962 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/disk.config 29a2c604-0230-4d68-a604-b8762babfe58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.029 238945 DEBUG nova.network.neutron [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Successfully updated port: 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.058 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquiring lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.058 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Acquired lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.059 238945 DEBUG nova.network.neutron [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.121 238945 DEBUG nova.compute.manager [req-e0c69686-f20a-4001-99c3-71a07d225354 req-aec3b3c9-5d58-4dda-88af-b0df0a825a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Received event network-changed-7b6cf19e-6e20-4087-9dd8-bf2f099a9522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.121 238945 DEBUG nova.compute.manager [req-e0c69686-f20a-4001-99c3-71a07d225354 req-aec3b3c9-5d58-4dda-88af-b0df0a825a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Refreshing instance network info cache due to event network-changed-7b6cf19e-6e20-4087-9dd8-bf2f099a9522. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.122 238945 DEBUG oslo_concurrency.lockutils [req-e0c69686-f20a-4001-99c3-71a07d225354 req-aec3b3c9-5d58-4dda-88af-b0df0a825a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.128 238945 DEBUG oslo_concurrency.processutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/disk.config 29a2c604-0230-4d68-a604-b8762babfe58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.129 238945 INFO nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Deleting local config drive /var/lib/nova/instances/29a2c604-0230-4d68-a604-b8762babfe58/disk.config because it was imported into RBD.#033[00m
Jan 27 08:54:06 np0005597378 NetworkManager[48904]: <info>  [1769522046.2075] manager: (tapd0059d5e-00): new Tun device (/org/freedesktop/NetworkManager/Devices/259)
Jan 27 08:54:06 np0005597378 kernel: tapd0059d5e-00: entered promiscuous mode
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:54:06Z|00587|binding|INFO|Claiming lport d0059d5e-00d8-4be4-a689-76944e28fe37 for this chassis.
Jan 27 08:54:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:54:06Z|00588|binding|INFO|d0059d5e-00d8-4be4-a689-76944e28fe37: Claiming fa:16:3e:82:18:0e 10.100.0.7
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.243 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:18:0e 10.100.0.7'], port_security=['fa:16:3e:82:18:0e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '29a2c604-0230-4d68-a604-b8762babfe58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30ece4e7-a802-462e-83bb-9819891d2636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f426e9a7cb05472cbc3b92502e087f8b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38b1acc0-c3b8-4073-88ac-ebcad7676184', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdf19f81-ee11-4086-a021-c3f4df96d385, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d0059d5e-00d8-4be4-a689-76944e28fe37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.244 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d0059d5e-00d8-4be4-a689-76944e28fe37 in datapath 30ece4e7-a802-462e-83bb-9819891d2636 bound to our chassis#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.245 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30ece4e7-a802-462e-83bb-9819891d2636#033[00m
Jan 27 08:54:06 np0005597378 systemd-udevd[297851]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:54:06 np0005597378 systemd-machined[207425]: New machine qemu-74-instance-00000042.
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.260 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7da78f-9746-49a2-85e9-60ec00348da9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.261 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap30ece4e7-a1 in ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.264 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap30ece4e7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c390d2-c04d-472c-9d91-2a1be248023a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 NetworkManager[48904]: <info>  [1769522046.2665] device (tapd0059d5e-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:54:06 np0005597378 systemd[1]: Started Virtual Machine qemu-74-instance-00000042.
Jan 27 08:54:06 np0005597378 NetworkManager[48904]: <info>  [1769522046.2672] device (tapd0059d5e-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.265 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff21307-cfb4-414f-bfde-69eb86f5639e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.281 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[1694c503-694f-45ea-81ce-3904ebbd1200]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.308 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a2cf0d0b-e0ce-4f38-a25b-3a3d3860aeed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.309 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:54:06Z|00589|binding|INFO|Setting lport d0059d5e-00d8-4be4-a689-76944e28fe37 ovn-installed in OVS
Jan 27 08:54:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:54:06Z|00590|binding|INFO|Setting lport d0059d5e-00d8-4be4-a689-76944e28fe37 up in Southbound
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.315 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.340 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a7108a7c-844f-4271-b854-6533b62664ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 NetworkManager[48904]: <info>  [1769522046.3468] manager: (tap30ece4e7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/260)
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.346 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e42d9f6e-f325-40bf-a40c-9e28311ca575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.376 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[29d8e10f-56d4-4b04-bd23-12adfd2abb54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.381 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[baac8007-7f44-4b62-b0aa-b9e5716ff917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 NetworkManager[48904]: <info>  [1769522046.4071] device (tap30ece4e7-a0): carrier: link connected
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.413 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ac00ca36-3776-405a-b690-7b91b3ae9772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 259 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 82 op/s
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.431 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[65382a89-a118-484b-be08-11c741ec9d61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30ece4e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:0a:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468603, 'reachable_time': 15223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297889, 'error': None, 'target': 'ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.446 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[14f7ff1f-6c61-47d3-b916-00e9f4e3a22b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:aa0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468603, 'tstamp': 468603}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297890, 'error': None, 'target': 'ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.462 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[344c4f78-3d60-4109-96c8-baf668a4b079]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30ece4e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:0a:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468603, 'reachable_time': 15223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297891, 'error': None, 'target': 'ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.491 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[577c8736-db67-4ba5-8ebe-ad1e2ea8a3b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.550 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[980674ec-08b9-4cf7-9572-b549c42d895c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.551 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30ece4e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.551 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.552 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30ece4e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.554 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:06 np0005597378 NetworkManager[48904]: <info>  [1769522046.5547] manager: (tap30ece4e7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Jan 27 08:54:06 np0005597378 kernel: tap30ece4e7-a0: entered promiscuous mode
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.559 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.560 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30ece4e7-a0, col_values=(('external_ids', {'iface-id': '3740f342-d355-4372-a2aa-dfc620ef9998'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:54:06 np0005597378 ovn_controller[144812]: 2026-01-27T13:54:06Z|00591|binding|INFO|Releasing lport 3740f342-d355-4372-a2aa-dfc620ef9998 from this chassis (sb_readonly=0)
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.561 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.582 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.586 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.587 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/30ece4e7-a802-462e-83bb-9819891d2636.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/30ece4e7-a802-462e-83bb-9819891d2636.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.588 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d705f55-2c3c-46d5-978a-19ac636f3f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.589 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:54:06 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-30ece4e7-a802-462e-83bb-9819891d2636
Jan 27 08:54:06 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/30ece4e7-a802-462e-83bb-9819891d2636.pid.haproxy
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 30ece4e7-a802-462e-83bb-9819891d2636
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:54:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:06.590 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636', 'env', 'PROCESS_TAG=haproxy-30ece4e7-a802-462e-83bb-9819891d2636', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/30ece4e7-a802-462e-83bb-9819891d2636.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.795 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522046.7953851, 29a2c604-0230-4d68-a604-b8762babfe58 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.796 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] VM Started (Lifecycle Event)#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.818 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.824 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522046.798261, 29a2c604-0230-4d68-a604-b8762babfe58 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.824 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.853 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.857 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.877 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.993 238945 DEBUG nova.network.neutron [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:54:06 np0005597378 nova_compute[238941]: 2026-01-27 13:54:06.997 238945 INFO nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Creating config drive at /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/disk.config#033[00m
Jan 27 08:54:07 np0005597378 podman[297971]: 2026-01-27 13:54:07.001001191 +0000 UTC m=+0.076446474 container create 3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.003 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoq63sinw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:07 np0005597378 podman[297971]: 2026-01-27 13:54:06.94809309 +0000 UTC m=+0.023538373 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:54:07 np0005597378 systemd[1]: Started libpod-conmon-3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e.scope.
Jan 27 08:54:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:54:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d825417ff90abafb8bc9c588e69b89a46d6d57648b19a75d1876036a60502230/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:54:07 np0005597378 podman[297971]: 2026-01-27 13:54:07.096620179 +0000 UTC m=+0.172065462 container init 3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 08:54:07 np0005597378 podman[297971]: 2026-01-27 13:54:07.103652988 +0000 UTC m=+0.179098251 container start 3253b4d6ead1fdd9e447540f26b437ffb8c520e2547923fd8957b25d1c69951e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 08:54:07 np0005597378 neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636[297990]: [NOTICE]   (297994) : New worker (297996) forked
Jan 27 08:54:07 np0005597378 neutron-haproxy-ovnmeta-30ece4e7-a802-462e-83bb-9819891d2636[297990]: [NOTICE]   (297994) : Loading success.
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.145 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoq63sinw" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.175 238945 DEBUG nova.storage.rbd_utils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] rbd image 89992af6-c9c9-4948-a4e8-cf46814953c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.179 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/disk.config 89992af6-c9c9-4948-a4e8-cf46814953c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:54:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.331 238945 DEBUG oslo_concurrency.processutils [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/disk.config 89992af6-c9c9-4948-a4e8-cf46814953c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.332 238945 INFO nova.virt.libvirt.driver [None req-af4ae2a8-5807-4268-9170-e02f3f9f4e7f e5ced810fd6141b292a2237ebe49cfc9 c910283aa95c4275954bee4904b21d1e - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Deleting local config drive /var/lib/nova/instances/89992af6-c9c9-4948-a4e8-cf46814953c3/disk.config because it was imported into RBD.#033[00m
Jan 27 08:54:07 np0005597378 kernel: tap64ba69cb-72: entered promiscuous mode
Jan 27 08:54:07 np0005597378 NetworkManager[48904]: <info>  [1769522047.3993] manager: (tap64ba69cb-72): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Jan 27 08:54:07 np0005597378 systemd-udevd[297878]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:54:07 np0005597378 ovn_controller[144812]: 2026-01-27T13:54:07Z|00592|binding|INFO|Claiming lport 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 for this chassis.
Jan 27 08:54:07 np0005597378 ovn_controller[144812]: 2026-01-27T13:54:07Z|00593|binding|INFO|64ba69cb-72cb-418c-ae2c-a3019b84a9d9: Claiming fa:16:3e:33:a8:da 10.100.0.5
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.402 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.411 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:a8:da 10.100.0.5'], port_security=['fa:16:3e:33:a8:da 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '89992af6-c9c9-4948-a4e8-cf46814953c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13754bbc-8f22-4885-aa27-198718585636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c910283aa95c4275954bee4904b21d1e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04fdcba1-ff93-4c7e-ba72-71ff9b0df48f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4f8372-3041-457d-9bc5-0030d734c8e1, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=64ba69cb-72cb-418c-ae2c-a3019b84a9d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.412 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 in datapath 13754bbc-8f22-4885-aa27-198718585636 bound to our chassis#033[00m
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.413 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13754bbc-8f22-4885-aa27-198718585636#033[00m
Jan 27 08:54:07 np0005597378 NetworkManager[48904]: <info>  [1769522047.4162] device (tap64ba69cb-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:54:07 np0005597378 NetworkManager[48904]: <info>  [1769522047.4168] device (tap64ba69cb-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:54:07 np0005597378 ovn_controller[144812]: 2026-01-27T13:54:07Z|00594|binding|INFO|Setting lport 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 ovn-installed in OVS
Jan 27 08:54:07 np0005597378 ovn_controller[144812]: 2026-01-27T13:54:07Z|00595|binding|INFO|Setting lport 64ba69cb-72cb-418c-ae2c-a3019b84a9d9 up in Southbound
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.420 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.422 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.434 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[07bbeae5-dbff-44c8-86fc-8c8167127e90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:07 np0005597378 systemd-machined[207425]: New machine qemu-75-instance-00000043.
Jan 27 08:54:07 np0005597378 systemd[1]: Started Virtual Machine qemu-75-instance-00000043.
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.470 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dd438961-ce8f-434e-9219-7b01f2d9467f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.474 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6940c5ed-bd8d-407b-a56d-71190648f27a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.506 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[454476fe-26a2-4f8f-b7c2-9bfb17b395b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.511 238945 DEBUG nova.network.neutron [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Updated VIF entry in instance network info cache for port 64ba69cb-72cb-418c-ae2c-a3019b84a9d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.512 238945 DEBUG nova.network.neutron [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Updating instance_info_cache with network_info: [{"id": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "address": "fa:16:3e:33:a8:da", "network": {"id": "13754bbc-8f22-4885-aa27-198718585636", "bridge": "br-int", "label": "tempest-ServersTestJSON-1342989129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c910283aa95c4275954bee4904b21d1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64ba69cb-72", "ovs_interfaceid": "64ba69cb-72cb-418c-ae2c-a3019b84a9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.532 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8d74ee6b-4662-4946-a435-f83452bd4783]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13754bbc-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 616, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 616, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461657, 'reachable_time': 22874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298066, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.552 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[99873121-c9c4-4b01-bc5f-48e684999f48]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461672, 'tstamp': 461672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298068, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13754bbc-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461675, 'tstamp': 461675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298068, 'error': None, 'target': 'ovnmeta-13754bbc-8f22-4885-aa27-198718585636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.554 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13754bbc-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.557 238945 DEBUG oslo_concurrency.lockutils [req-c33c072a-1d39-4c97-8c9c-9e96440c3fe3 req-aafdeacf-494d-45d1-8c69-785b4f93158c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-89992af6-c9c9-4948-a4e8-cf46814953c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.557 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.557 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13754bbc-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.558 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.558 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13754bbc-80, col_values=(('external_ids', {'iface-id': '1a4e395a-c1da-494c-a8bb-160c38bbc6e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:54:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:54:07.559 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.628 238945 DEBUG nova.compute.manager [req-1e938103-b843-4dc4-9b5d-aa5a06b07d3a req-aba7c19f-2d58-45c8-9a61-f76f199ef667 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Received event network-vif-plugged-d0059d5e-00d8-4be4-a689-76944e28fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.629 238945 DEBUG oslo_concurrency.lockutils [req-1e938103-b843-4dc4-9b5d-aa5a06b07d3a req-aba7c19f-2d58-45c8-9a61-f76f199ef667 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "29a2c604-0230-4d68-a604-b8762babfe58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.629 238945 DEBUG oslo_concurrency.lockutils [req-1e938103-b843-4dc4-9b5d-aa5a06b07d3a req-aba7c19f-2d58-45c8-9a61-f76f199ef667 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.629 238945 DEBUG oslo_concurrency.lockutils [req-1e938103-b843-4dc4-9b5d-aa5a06b07d3a req-aba7c19f-2d58-45c8-9a61-f76f199ef667 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.629 238945 DEBUG nova.compute.manager [req-1e938103-b843-4dc4-9b5d-aa5a06b07d3a req-aba7c19f-2d58-45c8-9a61-f76f199ef667 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Processing event network-vif-plugged-d0059d5e-00d8-4be4-a689-76944e28fe37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.630 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.642 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522047.6420422, 29a2c604-0230-4d68-a604-b8762babfe58 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.643 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.650 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.659 238945 INFO nova.virt.libvirt.driver [-] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Instance spawned successfully.#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.659 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.667 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.672 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.684 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.685 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.686 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.686 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.687 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.687 238945 DEBUG nova.virt.libvirt.driver [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.700 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.754 238945 INFO nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Took 11.80 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.755 238945 DEBUG nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.843 238945 INFO nova.compute.manager [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] [instance: 29a2c604-0230-4d68-a604-b8762babfe58] Took 13.22 seconds to build instance.#033[00m
Jan 27 08:54:07 np0005597378 nova_compute[238941]: 2026-01-27 13:54:07.860 238945 DEBUG oslo_concurrency.lockutils [None req-7837ed74-f4f4-4567-b968-2d56aba44a99 febf3fa2bf644f59bdabf84c75d6aca3 f426e9a7cb05472cbc3b92502e087f8b - - default default] Lock "29a2c604-0230-4d68-a604-b8762babfe58" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.134 238945 DEBUG nova.network.neutron [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Updating instance_info_cache with network_info: [{"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.152 238945 DEBUG oslo_concurrency.lockutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Releasing lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.152 238945 DEBUG nova.compute.manager [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Instance network_info: |[{"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.153 238945 DEBUG oslo_concurrency.lockutils [req-e0c69686-f20a-4001-99c3-71a07d225354 req-aec3b3c9-5d58-4dda-88af-b0df0a825a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0545c86a-1cc2-486f-acb1-883a7dc19420" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.153 238945 DEBUG nova.network.neutron [req-e0c69686-f20a-4001-99c3-71a07d225354 req-aec3b3c9-5d58-4dda-88af-b0df0a825a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Refreshing network info cache for port 7b6cf19e-6e20-4087-9dd8-bf2f099a9522 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.157 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] Start _get_guest_xml network_info=[{"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.164 238945 WARNING nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.169 238945 DEBUG nova.virt.libvirt.host [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.170 238945 DEBUG nova.virt.libvirt.host [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.182 238945 DEBUG nova.virt.libvirt.host [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.183 238945 DEBUG nova.virt.libvirt.host [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.184 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.184 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.185 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.185 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.186 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.186 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.186 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:54:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 08:54:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.3 total, 600.0 interval#012Cumulative writes: 24K writes, 93K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s#012Cumulative WAL: 24K writes, 8304 syncs, 2.95 writes per sync, written: 0.08 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 41.74 MB, 0.07 MB/s#012Interval WAL: 11K writes, 4837 syncs, 2.47 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.186 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.187 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.187 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.188 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.188 238945 DEBUG nova.virt.hardware [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.193 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.359 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522048.3578956, 89992af6-c9c9-4948-a4e8-cf46814953c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.359 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] VM Started (Lifecycle Event)#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.387 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.393 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522048.3586593, 89992af6-c9c9-4948-a4e8-cf46814953c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.394 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.414 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.419 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:54:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 5.3 MiB/s wr, 88 op/s
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.443 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 89992af6-c9c9-4948-a4e8-cf46814953c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:54:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:54:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019732869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.816 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.851 238945 DEBUG nova.storage.rbd_utils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] rbd image 0545c86a-1cc2-486f-acb1-883a7dc19420_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:54:08 np0005597378 nova_compute[238941]: 2026-01-27 13:54:08.859 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:54:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:54:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417433759' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:54:09 np0005597378 nova_compute[238941]: 2026-01-27 13:54:09.453 238945 DEBUG oslo_concurrency.processutils [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:54:09 np0005597378 nova_compute[238941]: 2026-01-27 13:54:09.455 238945 DEBUG nova.virt.libvirt.vif [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:53:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1204512618',display_name='tempest-ServerActionsTestOtherA-server-1204512618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1204512618',id=68,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN0FaCZuDrai6iqi5GMT0wvAFmJ0EXWF0gbk9T1zAV2c9kc/C2vn1dDnr2FFSLDIdbemo5/iNiAB2e70D7rRYKUqN0RgIM+SVBfBtqUayj1M2AtBdHI7i6G7kgn+lmhDkQ==',key_name='tempest-keypair-263819358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5c4dad659994db39d3522a0f84aa97f',ramdisk_id='',reservation_id='r-q3kn515y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1897291814',owner_user_name='tempest-ServerActionsTestOtherA-1897291814-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:53:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b04c035f0fe4ea19948e498881aef64',uuid=0545c86a-1cc2-486f-acb1-883a7dc19420,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:54:09 np0005597378 nova_compute[238941]: 2026-01-27 13:54:09.456 238945 DEBUG nova.network.os_vif_util [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converting VIF {"id": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "address": "fa:16:3e:aa:b7:5a", "network": {"id": "f362614c-341a-4a1f-86f4-af47e7df36ff", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-18730263-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5c4dad659994db39d3522a0f84aa97f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b6cf19e-6e", "ovs_interfaceid": "7b6cf19e-6e20-4087-9dd8-bf2f099a9522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:54:09 np0005597378 nova_compute[238941]: 2026-01-27 13:54:09.457 238945 DEBUG nova.network.os_vif_util [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=7b6cf19e-6e20-4087-9dd8-bf2f099a9522,network=Network(f362614c-341a-4a1f-86f4-af47e7df36ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b6cf19e-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:54:09 np0005597378 nova_compute[238941]: 2026-01-27 13:54:09.458 238945 DEBUG nova.objects.instance [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] Lazy-loading 'pci_devices' on Instance uuid 0545c86a-1cc2-486f-acb1-883a7dc19420 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:54:09 np0005597378 nova_compute[238941]: 2026-01-27 13:54:09.474 238945 DEBUG nova.virt.libvirt.driver [None req-27c6df46-7f53-47bb-8e4e-a1e327c44b51 7b04c035f0fe4ea19948e498881aef64 f5c4dad659994db39d3522a0f84aa97f - - default default] [instance: 0545c86a-1cc2-486f-acb1-883a7dc19420] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:54:09 np0005597378 nova_compute[238941]:  <uuid>0545c86a-1cc2-486f-acb1-883a7dc19420</uuid>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:  <name>instance-00000044</name>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerActionsTestOtherA-server-1204512618</nova:name>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:54:08</nova:creationTime>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:54:09 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:        <nova:user uuid="7b04c035f0fe4ea19948e498881aef64">tempest-ServerActionsTestOtherA-1897291814-project-member</nova:user>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:        <nova:project uuid="f5c4dad659994db39d3522a0f84aa97f">tempest-ServerActionsTestOtherA-1897291814</nova:project>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:        <nova:port uuid="7b6cf19e-6e20-4087-9dd8-bf2f099a9522">
Jan 27 08:54:09 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:54:09 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      <entry name="serial">0545c86a-1cc2-486f-acb1-883a7dc19420</entry>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      <entry name="uuid">0545c86a-1cc2-486f-acb1-883a7dc19420</entry>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:54:09 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:54:09 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:57:19 np0005597378 nova_compute[238941]: 2026-01-27 13:57:19.268 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:19 np0005597378 nova_compute[238941]: 2026-01-27 13:57:19.268 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:19 np0005597378 nova_compute[238941]: 2026-01-27 13:57:19.285 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:57:19 np0005597378 nova_compute[238941]: 2026-01-27 13:57:19.362 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:19 np0005597378 nova_compute[238941]: 2026-01-27 13:57:19.363 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:19 np0005597378 nova_compute[238941]: 2026-01-27 13:57:19.370 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:57:19 np0005597378 nova_compute[238941]: 2026-01-27 13:57:19.371 238945 INFO nova.compute.claims [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:57:19 np0005597378 nova_compute[238941]: 2026-01-27 13:57:19.484 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:19 np0005597378 rsyslogd[1006]: imjournal: 9239 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 27 08:57:19 np0005597378 nova_compute[238941]: 2026-01-27 13:57:19.897 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:19 np0005597378 nova_compute[238941]: 2026-01-27 13:57:19.897 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:19 np0005597378 nova_compute[238941]: 2026-01-27 13:57:19.919 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.005 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:57:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134498071' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.038 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.044 238945 DEBUG nova.compute.provider_tree [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.064 238945 DEBUG nova.scheduler.client.report [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.093 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.093 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.096 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.101 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.101 238945 INFO nova.compute.claims [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.179 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.179 238945 DEBUG nova.network.neutron [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.202 238945 INFO nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.257 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.289 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.342 238945 DEBUG nova.policy [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b49f56e21cd44451a1c542f97cb11a9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '71ad88aa5cfe42bdb12bd409ad2842de', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.391 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.393 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.394 238945 INFO nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Creating image(s)#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.418 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.440 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.465 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.468 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 88 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.540 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.541 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.542 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.542 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.568 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.575 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.630 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.816 238945 DEBUG nova.network.neutron [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Successfully created port: 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:57:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:57:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3578381796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.932 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.937 238945 DEBUG nova.compute.provider_tree [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.952 238945 DEBUG nova.scheduler.client.report [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.970 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:20 np0005597378 nova_compute[238941]: 2026-01-27 13:57:20.970 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.014 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.015 238945 DEBUG nova.network.neutron [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.042 238945 INFO nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.059 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.068 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.128 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] resizing rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.173 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.174 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.174 238945 INFO nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Creating image(s)#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.194 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.214 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.233 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.236 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.270 238945 DEBUG nova.policy [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9cea2582f56e4f2ab221ea9bac7c3dfd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee59a9cf9e64e7e8bb75a0d9b609a0c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.312 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.313 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.313 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.314 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.332 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.335 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.371 238945 DEBUG nova.objects.instance [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'migration_context' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.390 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.390 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Ensure instance console log exists: /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.391 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.391 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.391 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.624 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.676 238945 DEBUG nova.network.neutron [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Successfully updated port: 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.681 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] resizing rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.706 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.706 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.706 238945 DEBUG nova.network.neutron [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.758 238945 DEBUG nova.objects.instance [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lazy-loading 'migration_context' on Instance uuid a4189f17-0ade-4e17-9182-4ba1f5dd35b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.780 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.780 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Ensure instance console log exists: /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.781 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.781 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.781 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.790 238945 DEBUG nova.compute.manager [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.791 238945 DEBUG nova.compute.manager [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing instance network info cache due to event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.791 238945 DEBUG oslo_concurrency.lockutils [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:57:21 np0005597378 nova_compute[238941]: 2026-01-27 13:57:21.943 238945 DEBUG nova.network.neutron [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:57:22 np0005597378 nova_compute[238941]: 2026-01-27 13:57:22.089 238945 DEBUG nova.network.neutron [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Successfully created port: 5ca0e79f-7590-4e89-8b25-605d37e60cbb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:57:22 np0005597378 nova_compute[238941]: 2026-01-27 13:57:22.092 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 88 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 27 08:57:22 np0005597378 nova_compute[238941]: 2026-01-27 13:57:22.984 238945 DEBUG nova.network.neutron [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:57:22 np0005597378 nova_compute[238941]: 2026-01-27 13:57:22.988 238945 DEBUG nova.network.neutron [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Successfully updated port: 5ca0e79f-7590-4e89-8b25-605d37e60cbb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.003 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.004 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance network_info: |[{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.004 238945 DEBUG oslo_concurrency.lockutils [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.005 238945 DEBUG nova.network.neutron [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.008 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Start _get_guest_xml network_info=[{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.012 238945 WARNING nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.016 238945 DEBUG nova.virt.libvirt.host [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.017 238945 DEBUG nova.virt.libvirt.host [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.018 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "refresh_cache-a4189f17-0ade-4e17-9182-4ba1f5dd35b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.018 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquired lock "refresh_cache-a4189f17-0ade-4e17-9182-4ba1f5dd35b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.019 238945 DEBUG nova.network.neutron [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.028 238945 DEBUG nova.virt.libvirt.host [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.029 238945 DEBUG nova.virt.libvirt.host [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.029 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.029 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.030 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.030 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.030 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.031 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.031 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.031 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.031 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.032 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.032 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.032 238945 DEBUG nova.virt.hardware [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.035 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.082 238945 DEBUG nova.compute.manager [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-changed-5ca0e79f-7590-4e89-8b25-605d37e60cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.082 238945 DEBUG nova.compute.manager [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Refreshing instance network info cache due to event network-changed-5ca0e79f-7590-4e89-8b25-605d37e60cbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.083 238945 DEBUG oslo_concurrency.lockutils [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-a4189f17-0ade-4e17-9182-4ba1f5dd35b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.349 238945 DEBUG nova.network.neutron [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:57:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:57:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/858089320' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.601 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.621 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:23 np0005597378 nova_compute[238941]: 2026-01-27 13:57:23.626 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:57:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2127662304' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.239 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.241 238945 DEBUG nova.virt.libvirt.vif [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1986971874',display_name='tempest-ServerRescueTestJSONUnderV235-server-1986971874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1986971874',id=84,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71ad88aa5cfe42bdb12bd409ad2842de',ramdisk_id='',reservation_id='r-kytri84c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-508111280',owner_user_na
me='tempest-ServerRescueTestJSONUnderV235-508111280-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:20Z,user_data=None,user_id='b49f56e21cd44451a1c542f97cb11a9c',uuid=9a2cac55-b28d-4d71-b091-6a3c39cdfe14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.241 238945 DEBUG nova.network.os_vif_util [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converting VIF {"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.242 238945 DEBUG nova.network.os_vif_util [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.243 238945 DEBUG nova.objects.instance [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.259 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  <uuid>9a2cac55-b28d-4d71-b091-6a3c39cdfe14</uuid>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  <name>instance-00000054</name>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1986971874</nova:name>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:57:23</nova:creationTime>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:        <nova:user uuid="b49f56e21cd44451a1c542f97cb11a9c">tempest-ServerRescueTestJSONUnderV235-508111280-project-member</nova:user>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:        <nova:project uuid="71ad88aa5cfe42bdb12bd409ad2842de">tempest-ServerRescueTestJSONUnderV235-508111280</nova:project>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:        <nova:port uuid="5e99824f-f686-4cd9-a3dd-e1e0690fc68f">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <entry name="serial">9a2cac55-b28d-4d71-b091-6a3c39cdfe14</entry>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <entry name="uuid">9a2cac55-b28d-4d71-b091-6a3c39cdfe14</entry>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:8d:02:e1"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <target dev="tap5e99824f-f6"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/console.log" append="off"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:57:24 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:57:24 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:57:24 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:57:24 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.260 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Preparing to wait for external event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.261 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.261 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.261 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.262 238945 DEBUG nova.virt.libvirt.vif [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1986971874',display_name='tempest-ServerRescueTestJSONUnderV235-server-1986971874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1986971874',id=84,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71ad88aa5cfe42bdb12bd409ad2842de',ramdisk_id='',reservation_id='r-kytri84c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-508111280',own
er_user_name='tempest-ServerRescueTestJSONUnderV235-508111280-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:20Z,user_data=None,user_id='b49f56e21cd44451a1c542f97cb11a9c',uuid=9a2cac55-b28d-4d71-b091-6a3c39cdfe14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.262 238945 DEBUG nova.network.os_vif_util [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converting VIF {"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.263 238945 DEBUG nova.network.os_vif_util [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.263 238945 DEBUG os_vif [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.264 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.264 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.264 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.267 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.267 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e99824f-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.268 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5e99824f-f6, col_values=(('external_ids', {'iface-id': '5e99824f-f686-4cd9-a3dd-e1e0690fc68f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:02:e1', 'vm-uuid': '9a2cac55-b28d-4d71-b091-6a3c39cdfe14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.270 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:24 np0005597378 NetworkManager[48904]: <info>  [1769522244.2707] manager: (tap5e99824f-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.277 238945 INFO os_vif [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6')#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.419 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.420 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.420 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No VIF found with MAC fa:16:3e:8d:02:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.421 238945 INFO nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Using config drive#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.436 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.445 238945 DEBUG nova.network.neutron [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Updating instance_info_cache with network_info: [{"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.469 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Releasing lock "refresh_cache-a4189f17-0ade-4e17-9182-4ba1f5dd35b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.470 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Instance network_info: |[{"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.471 238945 DEBUG oslo_concurrency.lockutils [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-a4189f17-0ade-4e17-9182-4ba1f5dd35b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.471 238945 DEBUG nova.network.neutron [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Refreshing network info cache for port 5ca0e79f-7590-4e89-8b25-605d37e60cbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.475 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Start _get_guest_xml network_info=[{"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.481 238945 WARNING nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.486 238945 DEBUG nova.virt.libvirt.host [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.487 238945 DEBUG nova.virt.libvirt.host [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.496 238945 DEBUG nova.virt.libvirt.host [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.497 238945 DEBUG nova.virt.libvirt.host [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.498 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.498 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.499 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.499 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.499 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.499 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.499 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.499 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.500 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.500 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.500 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.500 238945 DEBUG nova.virt.hardware [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.503 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 144 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 4.0 MiB/s wr, 55 op/s
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.858 238945 DEBUG nova.network.neutron [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updated VIF entry in instance network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.859 238945 DEBUG nova.network.neutron [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:57:24 np0005597378 nova_compute[238941]: 2026-01-27 13:57:24.886 238945 DEBUG oslo_concurrency.lockutils [req-0aaecc34-df5f-44f8-bec4-76c7abffc75e req-2fae2e5f-5495-4316-8c2d-e428e9708b1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:57:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:57:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1997669551' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.101 238945 INFO nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Creating config drive at /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.105 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpih63unjr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.135 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.161 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.166 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.242 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpih63unjr" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.268 238945 DEBUG nova.storage.rbd_utils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.272 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.497 238945 DEBUG oslo_concurrency.processutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.499 238945 INFO nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Deleting local config drive /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config because it was imported into RBD.#033[00m
Jan 27 08:57:25 np0005597378 NetworkManager[48904]: <info>  [1769522245.5641] manager: (tap5e99824f-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/320)
Jan 27 08:57:25 np0005597378 kernel: tap5e99824f-f6: entered promiscuous mode
Jan 27 08:57:25 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:25Z|00755|binding|INFO|Claiming lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f for this chassis.
Jan 27 08:57:25 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:25Z|00756|binding|INFO|5e99824f-f686-4cd9-a3dd-e1e0690fc68f: Claiming fa:16:3e:8d:02:e1 10.100.0.11
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.566 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.571 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:25.584 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:02:e1 10.100.0.11'], port_security=['fa:16:3e:8d:02:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9a2cac55-b28d-4d71-b091-6a3c39cdfe14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44ebe489-75a7-40e8-9613-68b01eb29b28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71ad88aa5cfe42bdb12bd409ad2842de', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7f1b77e0-421f-4420-8a9a-51183baa7071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38bf5827-4194-4494-af88-b3e7b8a5e805, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5e99824f-f686-4cd9-a3dd-e1e0690fc68f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:57:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:25.585 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f in datapath 44ebe489-75a7-40e8-9613-68b01eb29b28 bound to our chassis#033[00m
Jan 27 08:57:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:25.586 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 44ebe489-75a7-40e8-9613-68b01eb29b28 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 27 08:57:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:25.587 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[475b4655-603f-4d5b-b359-94788a398075]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:25 np0005597378 systemd-udevd[310030]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:57:25 np0005597378 NetworkManager[48904]: <info>  [1769522245.6012] device (tap5e99824f-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:57:25 np0005597378 NetworkManager[48904]: <info>  [1769522245.6019] device (tap5e99824f-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:57:25 np0005597378 systemd-machined[207425]: New machine qemu-95-instance-00000054.
Jan 27 08:57:25 np0005597378 systemd[1]: Started Virtual Machine qemu-95-instance-00000054.
Jan 27 08:57:25 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:25Z|00757|binding|INFO|Setting lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f ovn-installed in OVS
Jan 27 08:57:25 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:25Z|00758|binding|INFO|Setting lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f up in Southbound
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.650 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.654 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:57:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3717715618' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.831 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.833 238945 DEBUG nova.virt.libvirt.vif [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1037770242',display_name='tempest-ServerMetadataTestJSON-server-1037770242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1037770242',id=85,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee59a9cf9e64e7e8bb75a0d9b609a0c',ramdisk_id='',reservation_id='r-qisaeu5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-749660720',owner_user_name='tempest-ServerMetadataTestJSON-749660720-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:21Z,user_data=None,user_id='9cea2582f56e4f2ab221ea9bac7c3dfd',uuid=a4189f17-0ade-4e17-9182-4ba1f5dd35b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.833 238945 DEBUG nova.network.os_vif_util [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Converting VIF {"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.834 238945 DEBUG nova.network.os_vif_util [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.835 238945 DEBUG nova.objects.instance [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lazy-loading 'pci_devices' on Instance uuid a4189f17-0ade-4e17-9182-4ba1f5dd35b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.855 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  <uuid>a4189f17-0ade-4e17-9182-4ba1f5dd35b5</uuid>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  <name>instance-00000055</name>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerMetadataTestJSON-server-1037770242</nova:name>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:57:24</nova:creationTime>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:        <nova:user uuid="9cea2582f56e4f2ab221ea9bac7c3dfd">tempest-ServerMetadataTestJSON-749660720-project-member</nova:user>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:        <nova:project uuid="bee59a9cf9e64e7e8bb75a0d9b609a0c">tempest-ServerMetadataTestJSON-749660720</nova:project>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:        <nova:port uuid="5ca0e79f-7590-4e89-8b25-605d37e60cbb">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <entry name="serial">a4189f17-0ade-4e17-9182-4ba1f5dd35b5</entry>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <entry name="uuid">a4189f17-0ade-4e17-9182-4ba1f5dd35b5</entry>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:6e:5d:cd"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <target dev="tap5ca0e79f-75"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/console.log" append="off"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:57:25 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:57:25 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:57:25 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:57:25 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.862 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Preparing to wait for external event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.863 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.863 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.864 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.865 238945 DEBUG nova.virt.libvirt.vif [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1037770242',display_name='tempest-ServerMetadataTestJSON-server-1037770242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1037770242',id=85,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee59a9cf9e64e7e8bb75a0d9b609a0c',ramdisk_id='',reservation_id='r-qisaeu5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-749660720',owner_user_name='tempest-ServerMetadataTestJSON-749660720-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:21Z,user_data=None,user_id='9cea2582f56e4f2ab221ea9bac7c3dfd',uuid=a4189f17-0ade-4e17-9182-4ba1f5dd35b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.865 238945 DEBUG nova.network.os_vif_util [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Converting VIF {"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.866 238945 DEBUG nova.network.os_vif_util [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.866 238945 DEBUG os_vif [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.867 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.868 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.868 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.874 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.875 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ca0e79f-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.875 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ca0e79f-75, col_values=(('external_ids', {'iface-id': '5ca0e79f-7590-4e89-8b25-605d37e60cbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:5d:cd', 'vm-uuid': 'a4189f17-0ade-4e17-9182-4ba1f5dd35b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:57:25 np0005597378 NetworkManager[48904]: <info>  [1769522245.8783] manager: (tap5ca0e79f-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.879 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.883 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.884 238945 INFO os_vif [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75')#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.964 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.965 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.965 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] No VIF found with MAC fa:16:3e:6e:5d:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.965 238945 INFO nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Using config drive#033[00m
Jan 27 08:57:25 np0005597378 nova_compute[238941]: 2026-01-27 13:57:25.996 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.171 238945 DEBUG nova.compute.manager [req-b13bd85c-727b-4f17-b40c-33ed01585667 req-c3a9eed4-4f22-4715-8572-2562d51ab487 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.172 238945 DEBUG oslo_concurrency.lockutils [req-b13bd85c-727b-4f17-b40c-33ed01585667 req-c3a9eed4-4f22-4715-8572-2562d51ab487 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.172 238945 DEBUG oslo_concurrency.lockutils [req-b13bd85c-727b-4f17-b40c-33ed01585667 req-c3a9eed4-4f22-4715-8572-2562d51ab487 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.172 238945 DEBUG oslo_concurrency.lockutils [req-b13bd85c-727b-4f17-b40c-33ed01585667 req-c3a9eed4-4f22-4715-8572-2562d51ab487 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.173 238945 DEBUG nova.compute.manager [req-b13bd85c-727b-4f17-b40c-33ed01585667 req-c3a9eed4-4f22-4715-8572-2562d51ab487 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Processing event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.357 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522246.3566008, 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.357 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] VM Started (Lifecycle Event)#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.359 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.363 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.366 238945 INFO nova.virt.libvirt.driver [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance spawned successfully.#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.367 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.386 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.392 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.397 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.398 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.398 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.399 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.399 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.400 238945 DEBUG nova.virt.libvirt.driver [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.421 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.422 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522246.3567321, 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.423 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.457 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.459 238945 DEBUG nova.network.neutron [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Updated VIF entry in instance network info cache for port 5ca0e79f-7590-4e89-8b25-605d37e60cbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.460 238945 DEBUG nova.network.neutron [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Updating instance_info_cache with network_info: [{"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.466 238945 INFO nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Took 6.07 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.467 238945 DEBUG nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.468 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522246.361752, 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.469 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.472 238945 DEBUG oslo_concurrency.lockutils [req-b7c6fafd-f34b-4174-b41d-c042d8bafe80 req-202a7117-1c87-4394-a470-66efe446c9f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-a4189f17-0ade-4e17-9182-4ba1f5dd35b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:57:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 180 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 4.5 MiB/s wr, 65 op/s
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.515 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.518 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.562 238945 INFO nova.compute.manager [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Took 7.23 seconds to build instance.#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.578 238945 DEBUG oslo_concurrency.lockutils [None req-52dbd01c-2486-43cc-8b2e-ce4b2a53f3e0 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.830 238945 INFO nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Creating config drive at /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/disk.config#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.835 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8pw50v62 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:26 np0005597378 nova_compute[238941]: 2026-01-27 13:57:26.976 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8pw50v62" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.001 238945 DEBUG nova.storage.rbd_utils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] rbd image a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.005 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/disk.config a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.094 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.532 238945 DEBUG oslo_concurrency.processutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/disk.config a4189f17-0ade-4e17-9182-4ba1f5dd35b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.533 238945 INFO nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Deleting local config drive /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5/disk.config because it was imported into RBD.#033[00m
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010482810504146018 of space, bias 1.0, pg target 0.3144843151243806 quantized to 32 (current 32)
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683654770252178 of space, bias 1.0, pg target 0.20050964310756533 quantized to 32 (current 32)
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.087407313761258e-06 of space, bias 4.0, pg target 0.0013048887765135097 quantized to 16 (current 16)
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:57:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:57:27 np0005597378 kernel: tap5ca0e79f-75: entered promiscuous mode
Jan 27 08:57:27 np0005597378 NetworkManager[48904]: <info>  [1769522247.5859] manager: (tap5ca0e79f-75): new Tun device (/org/freedesktop/NetworkManager/Devices/322)
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.584 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:27Z|00759|binding|INFO|Claiming lport 5ca0e79f-7590-4e89-8b25-605d37e60cbb for this chassis.
Jan 27 08:57:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:27Z|00760|binding|INFO|5ca0e79f-7590-4e89-8b25-605d37e60cbb: Claiming fa:16:3e:6e:5d:cd 10.100.0.14
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.591 238945 DEBUG nova.compute.manager [req-773b4139-4697-4912-964e-04a665ed2eff req-98b9a5e6-e422-42d6-9788-f13192732882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received event network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.592 238945 DEBUG oslo_concurrency.lockutils [req-773b4139-4697-4912-964e-04a665ed2eff req-98b9a5e6-e422-42d6-9788-f13192732882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.592 238945 DEBUG oslo_concurrency.lockutils [req-773b4139-4697-4912-964e-04a665ed2eff req-98b9a5e6-e422-42d6-9788-f13192732882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.592 238945 DEBUG oslo_concurrency.lockutils [req-773b4139-4697-4912-964e-04a665ed2eff req-98b9a5e6-e422-42d6-9788-f13192732882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.592 238945 DEBUG nova.compute.manager [req-773b4139-4697-4912-964e-04a665ed2eff req-98b9a5e6-e422-42d6-9788-f13192732882 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Processing event network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.593 238945 DEBUG nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance event wait completed in 9 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.597 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:5d:cd 10.100.0.14'], port_security=['fa:16:3e:6e:5d:cd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a4189f17-0ade-4e17-9182-4ba1f5dd35b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee59a9cf9e64e7e8bb75a0d9b609a0c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d0b619f-293d-4dca-9ee1-a53206fc13f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dedbe79-aef6-4af3-addd-88084c697a07, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5ca0e79f-7590-4e89-8b25-605d37e60cbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.598 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5ca0e79f-7590-4e89-8b25-605d37e60cbb in datapath 93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 bound to our chassis#033[00m
Jan 27 08:57:27 np0005597378 NetworkManager[48904]: <info>  [1769522247.6014] device (tap5ca0e79f-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:57:27 np0005597378 NetworkManager[48904]: <info>  [1769522247.6070] device (tap5ca0e79f-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.607 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93d3b0a9-1ff9-4f8a-906c-0ba5fe888125#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.611 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522247.6113675, 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.612 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.619 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.623 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[efcd08b2-fde3-4f2d-8568-042fa0f8ac0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.625 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap93d3b0a9-11 in ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.627 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap93d3b0a9-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.627 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[857d7067-3379-4e28-a92e-a81d3e9bdd16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.628 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[16384e3b-ca44-448f-89f7-a3e5569bb3c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.629 238945 INFO nova.virt.libvirt.driver [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance spawned successfully.#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.630 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.636 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:57:27 np0005597378 systemd-machined[207425]: New machine qemu-96-instance-00000055.
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.645 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[0b53a8a6-0bbc-44c9-ad02-6a463aae098f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 podman[310151]: 2026-01-27 13:57:27.649279546 +0000 UTC m=+0.078307074 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.649 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:57:27 np0005597378 systemd[1]: Started Virtual Machine qemu-96-instance-00000055.
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.658 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.659 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.659 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.660 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.660 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.661 238945 DEBUG nova.virt.libvirt.driver [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.666 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.673 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9375a132-a085-44a6-8f76-36eaf67148a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 podman[310148]: 2026-01-27 13:57:27.691921509 +0000 UTC m=+0.122808226 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.691 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:27Z|00761|binding|INFO|Setting lport 5ca0e79f-7590-4e89-8b25-605d37e60cbb ovn-installed in OVS
Jan 27 08:57:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:27Z|00762|binding|INFO|Setting lport 5ca0e79f-7590-4e89-8b25-605d37e60cbb up in Southbound
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.704 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.707 238945 INFO nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Took 16.93 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.707 238945 DEBUG nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.709 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1beaa73f-3431-4572-b99d-af41e58ba777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 NetworkManager[48904]: <info>  [1769522247.7220] manager: (tap93d3b0a9-10): new Veth device (/org/freedesktop/NetworkManager/Devices/323)
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.721 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eeeea4b5-ee70-4ba3-9ab5-05d8d65a4e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.757 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6dfb05b1-da27-4bad-9dd4-79aa4fcb3d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.761 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[807b8ea9-a846-470c-9f18-2682a0d67baa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 NetworkManager[48904]: <info>  [1769522247.7820] device (tap93d3b0a9-10): carrier: link connected
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.787 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6d74b927-84b0-4206-a47a-30b7c7b24472]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.791 238945 INFO nova.compute.manager [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Took 17.97 seconds to build instance.#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.811 238945 DEBUG oslo_concurrency.lockutils [None req-030e855b-0e76-48f5-b0a2-79acc741cf03 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.813 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f15e5409-0a3a-4bdf-896d-e7d32ce470ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93d3b0a9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:40:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488740, 'reachable_time': 36402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310233, 'error': None, 'target': 'ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.830 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[64ea5f9d-635a-4bc0-859e-d3c14b02f4a3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:40e0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 488740, 'tstamp': 488740}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310234, 'error': None, 'target': 'ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.846 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f227903-053e-4095-95a7-e41293472918]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93d3b0a9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:40:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488740, 'reachable_time': 36402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310235, 'error': None, 'target': 'ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.903 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[26d96b7d-ddf9-4734-ad89-ce7ea42275cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.970 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[328fb514-c609-4036-a6e3-6403a4919101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.972 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93d3b0a9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.972 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.972 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93d3b0a9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.974 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:27 np0005597378 kernel: tap93d3b0a9-10: entered promiscuous mode
Jan 27 08:57:27 np0005597378 NetworkManager[48904]: <info>  [1769522247.9746] manager: (tap93d3b0a9-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.979 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93d3b0a9-10, col_values=(('external_ids', {'iface-id': '9bbbe4b2-e59d-4ca7-ad7f-ad80699fe9e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.979 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:27 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:27Z|00763|binding|INFO|Releasing lport 9bbbe4b2-e59d-4ca7-ad7f-ad80699fe9e2 from this chassis (sb_readonly=0)
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.980 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/93d3b0a9-1ff9-4f8a-906c-0ba5fe888125.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/93d3b0a9-1ff9-4f8a-906c-0ba5fe888125.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.982 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7094c85-c1ac-49ff-bcad-9b199b0d5334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.983 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/93d3b0a9-1ff9-4f8a-906c-0ba5fe888125.pid.haproxy
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 93d3b0a9-1ff9-4f8a-906c-0ba5fe888125
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:57:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:27.983 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'env', 'PROCESS_TAG=haproxy-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/93d3b0a9-1ff9-4f8a-906c-0ba5fe888125.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:57:27 np0005597378 nova_compute[238941]: 2026-01-27 13:57:27.997 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.070 238945 INFO nova.compute.manager [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Rescuing#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.073 238945 DEBUG oslo_concurrency.lockutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.074 238945 DEBUG oslo_concurrency.lockutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.075 238945 DEBUG nova.network.neutron [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.078 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522248.0724347, a4189f17-0ade-4e17-9182-4ba1f5dd35b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.079 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] VM Started (Lifecycle Event)#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.111 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.135 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522248.0725644, a4189f17-0ade-4e17-9182-4ba1f5dd35b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.136 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.155 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.159 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.177 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:57:28 np0005597378 podman[310306]: 2026-01-27 13:57:28.365793711 +0000 UTC m=+0.072532871 container create 7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 08:57:28 np0005597378 systemd[1]: Started libpod-conmon-7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7.scope.
Jan 27 08:57:28 np0005597378 podman[310306]: 2026-01-27 13:57:28.320003435 +0000 UTC m=+0.026742615 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:57:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:57:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968d3fc2611e501c7926a9bc6a40a0a08439b2447c901bf06394d4ad8bd1fa81/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:57:28 np0005597378 podman[310306]: 2026-01-27 13:57:28.454634071 +0000 UTC m=+0.161373251 container init 7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 08:57:28 np0005597378 podman[310306]: 2026-01-27 13:57:28.460121426 +0000 UTC m=+0.166860586 container start 7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:57:28 np0005597378 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [NOTICE]   (310325) : New worker (310327) forked
Jan 27 08:57:28 np0005597378 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [NOTICE]   (310325) : Loading success.
Jan 27 08:57:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 180 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 107 op/s
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.878 238945 DEBUG nova.compute.manager [req-79da6f0e-4389-4ba0-b6c8-e7ae0d5495b4 req-9e23ab21-1ffb-4eda-a65e-34473a2978f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.878 238945 DEBUG oslo_concurrency.lockutils [req-79da6f0e-4389-4ba0-b6c8-e7ae0d5495b4 req-9e23ab21-1ffb-4eda-a65e-34473a2978f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.879 238945 DEBUG oslo_concurrency.lockutils [req-79da6f0e-4389-4ba0-b6c8-e7ae0d5495b4 req-9e23ab21-1ffb-4eda-a65e-34473a2978f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.879 238945 DEBUG oslo_concurrency.lockutils [req-79da6f0e-4389-4ba0-b6c8-e7ae0d5495b4 req-9e23ab21-1ffb-4eda-a65e-34473a2978f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.879 238945 DEBUG nova.compute.manager [req-79da6f0e-4389-4ba0-b6c8-e7ae0d5495b4 req-9e23ab21-1ffb-4eda-a65e-34473a2978f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:57:28 np0005597378 nova_compute[238941]: 2026-01-27 13:57:28.879 238945 WARNING nova.compute.manager [req-79da6f0e-4389-4ba0-b6c8-e7ae0d5495b4 req-9e23ab21-1ffb-4eda-a65e-34473a2978f4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received unexpected event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with vm_state active and task_state rescuing.#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.727 238945 DEBUG nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received event network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.727 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.727 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.727 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 DEBUG nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] No waiting events found dispatching network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 WARNING nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received unexpected event network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde for instance with vm_state active and task_state None.#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 DEBUG nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.728 238945 DEBUG nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Processing event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.729 238945 DEBUG nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.729 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.729 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.729 238945 DEBUG oslo_concurrency.lockutils [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.729 238945 DEBUG nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] No waiting events found dispatching network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.729 238945 WARNING nova.compute.manager [req-d19dd16a-35cf-4563-b867-962071dc1840 req-02b6d780-a933-482a-89ab-021377050836 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received unexpected event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.730 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.732 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522249.7326035, a4189f17-0ade-4e17-9182-4ba1f5dd35b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.733 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.734 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.738 238945 INFO nova.virt.libvirt.driver [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Instance spawned successfully.#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.739 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.758 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.762 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.762 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.763 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.763 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.763 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.764 238945 DEBUG nova.virt.libvirt.driver [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.768 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.811 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.845 238945 INFO nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Took 8.67 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.845 238945 DEBUG nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.909 238945 INFO nova.compute.manager [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Took 9.92 seconds to build instance.#033[00m
Jan 27 08:57:29 np0005597378 nova_compute[238941]: 2026-01-27 13:57:29.929 238945 DEBUG oslo_concurrency.lockutils [None req-f8e1b433-8866-4f21-836d-499c2c5a7ab1 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 181 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 186 op/s
Jan 27 08:57:30 np0005597378 nova_compute[238941]: 2026-01-27 13:57:30.513 238945 DEBUG nova.network.neutron [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:57:30 np0005597378 nova_compute[238941]: 2026-01-27 13:57:30.546 238945 DEBUG oslo_concurrency.lockutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:57:30 np0005597378 nova_compute[238941]: 2026-01-27 13:57:30.643 238945 DEBUG oslo_concurrency.lockutils [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:30 np0005597378 nova_compute[238941]: 2026-01-27 13:57:30.643 238945 DEBUG oslo_concurrency.lockutils [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:30 np0005597378 nova_compute[238941]: 2026-01-27 13:57:30.643 238945 DEBUG nova.compute.manager [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:57:30 np0005597378 nova_compute[238941]: 2026-01-27 13:57:30.648 238945 DEBUG nova.compute.manager [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 27 08:57:30 np0005597378 nova_compute[238941]: 2026-01-27 13:57:30.649 238945 DEBUG nova.objects.instance [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'flavor' on Instance uuid 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:57:30 np0005597378 nova_compute[238941]: 2026-01-27 13:57:30.670 238945 DEBUG nova.virt.libvirt.driver [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 08:57:30 np0005597378 nova_compute[238941]: 2026-01-27 13:57:30.880 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:31 np0005597378 nova_compute[238941]: 2026-01-27 13:57:31.057 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 08:57:32 np0005597378 nova_compute[238941]: 2026-01-27 13:57:32.098 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 181 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 181 op/s
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.644511) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522252644550, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1339, "num_deletes": 253, "total_data_size": 1865762, "memory_usage": 1895952, "flush_reason": "Manual Compaction"}
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522252656700, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1834132, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31675, "largest_seqno": 33013, "table_properties": {"data_size": 1827859, "index_size": 3412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14242, "raw_average_key_size": 20, "raw_value_size": 1814894, "raw_average_value_size": 2603, "num_data_blocks": 152, "num_entries": 697, "num_filter_entries": 697, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522142, "oldest_key_time": 1769522142, "file_creation_time": 1769522252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 12240 microseconds, and 4616 cpu microseconds.
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.656748) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1834132 bytes OK
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.656769) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.659199) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.659221) EVENT_LOG_v1 {"time_micros": 1769522252659215, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.659241) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1859664, prev total WAL file size 1859664, number of live WAL files 2.
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.659980) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1791KB)], [68(7908KB)]
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522252660030, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 9932710, "oldest_snapshot_seqno": -1}
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5882 keys, 8245540 bytes, temperature: kUnknown
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522252708295, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8245540, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8206431, "index_size": 23319, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 147810, "raw_average_key_size": 25, "raw_value_size": 8101147, "raw_average_value_size": 1377, "num_data_blocks": 944, "num_entries": 5882, "num_filter_entries": 5882, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.708542) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8245540 bytes
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.712120) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.4 rd, 170.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.7 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(9.9) write-amplify(4.5) OK, records in: 6404, records dropped: 522 output_compression: NoCompression
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.712146) EVENT_LOG_v1 {"time_micros": 1769522252712135, "job": 38, "event": "compaction_finished", "compaction_time_micros": 48352, "compaction_time_cpu_micros": 20772, "output_level": 6, "num_output_files": 1, "total_output_size": 8245540, "num_input_records": 6404, "num_output_records": 5882, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522252712588, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522252713804, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.659903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.713831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.713834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.713836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.713838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:57:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:57:32.713840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:57:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 181 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Jan 27 08:57:34 np0005597378 nova_compute[238941]: 2026-01-27 13:57:34.776 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:34 np0005597378 nova_compute[238941]: 2026-01-27 13:57:34.777 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:34 np0005597378 nova_compute[238941]: 2026-01-27 13:57:34.778 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:34 np0005597378 nova_compute[238941]: 2026-01-27 13:57:34.778 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:34 np0005597378 nova_compute[238941]: 2026-01-27 13:57:34.778 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:34 np0005597378 nova_compute[238941]: 2026-01-27 13:57:34.780 238945 INFO nova.compute.manager [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Terminating instance#033[00m
Jan 27 08:57:34 np0005597378 nova_compute[238941]: 2026-01-27 13:57:34.780 238945 DEBUG nova.compute.manager [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:57:34 np0005597378 kernel: tap5ca0e79f-75 (unregistering): left promiscuous mode
Jan 27 08:57:34 np0005597378 NetworkManager[48904]: <info>  [1769522254.8310] device (tap5ca0e79f-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:57:34 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:34Z|00764|binding|INFO|Releasing lport 5ca0e79f-7590-4e89-8b25-605d37e60cbb from this chassis (sb_readonly=0)
Jan 27 08:57:34 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:34Z|00765|binding|INFO|Setting lport 5ca0e79f-7590-4e89-8b25-605d37e60cbb down in Southbound
Jan 27 08:57:34 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:34Z|00766|binding|INFO|Removing iface tap5ca0e79f-75 ovn-installed in OVS
Jan 27 08:57:34 np0005597378 nova_compute[238941]: 2026-01-27 13:57:34.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:34.856 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:5d:cd 10.100.0.14'], port_security=['fa:16:3e:6e:5d:cd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a4189f17-0ade-4e17-9182-4ba1f5dd35b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee59a9cf9e64e7e8bb75a0d9b609a0c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d0b619f-293d-4dca-9ee1-a53206fc13f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dedbe79-aef6-4af3-addd-88084c697a07, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5ca0e79f-7590-4e89-8b25-605d37e60cbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:57:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:34.857 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5ca0e79f-7590-4e89-8b25-605d37e60cbb in datapath 93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 unbound from our chassis#033[00m
Jan 27 08:57:34 np0005597378 nova_compute[238941]: 2026-01-27 13:57:34.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:34.864 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:57:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:34.866 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9355f2-c077-4084-9f35-acfce6f5b31b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:34.867 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 namespace which is not needed anymore#033[00m
Jan 27 08:57:34 np0005597378 nova_compute[238941]: 2026-01-27 13:57:34.875 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:34 np0005597378 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d00000055.scope: Deactivated successfully.
Jan 27 08:57:34 np0005597378 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d00000055.scope: Consumed 5.488s CPU time.
Jan 27 08:57:34 np0005597378 systemd-machined[207425]: Machine qemu-96-instance-00000055 terminated.
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.013 238945 INFO nova.virt.libvirt.driver [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Instance destroyed successfully.#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.014 238945 DEBUG nova.objects.instance [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lazy-loading 'resources' on Instance uuid a4189f17-0ade-4e17-9182-4ba1f5dd35b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.033 238945 DEBUG nova.virt.libvirt.vif [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:57:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1037770242',display_name='tempest-ServerMetadataTestJSON-server-1037770242',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1037770242',id=85,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:57:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee59a9cf9e64e7e8bb75a0d9b609a0c',ramdisk_id='',reservation_id='r-qisaeu5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='
virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-749660720',owner_user_name='tempest-ServerMetadataTestJSON-749660720-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:57:34Z,user_data=None,user_id='9cea2582f56e4f2ab221ea9bac7c3dfd',uuid=a4189f17-0ade-4e17-9182-4ba1f5dd35b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.034 238945 DEBUG nova.network.os_vif_util [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Converting VIF {"id": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "address": "fa:16:3e:6e:5d:cd", "network": {"id": "93d3b0a9-1ff9-4f8a-906c-0ba5fe888125", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1985671803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee59a9cf9e64e7e8bb75a0d9b609a0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca0e79f-75", "ovs_interfaceid": "5ca0e79f-7590-4e89-8b25-605d37e60cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.034 238945 DEBUG nova.network.os_vif_util [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.035 238945 DEBUG os_vif [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.036 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.036 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ca0e79f-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.038 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.042 238945 INFO os_vif [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:5d:cd,bridge_name='br-int',has_traffic_filtering=True,id=5ca0e79f-7590-4e89-8b25-605d37e60cbb,network=Network(93d3b0a9-1ff9-4f8a-906c-0ba5fe888125),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca0e79f-75')#033[00m
Jan 27 08:57:35 np0005597378 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [NOTICE]   (310325) : haproxy version is 2.8.14-c23fe91
Jan 27 08:57:35 np0005597378 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [NOTICE]   (310325) : path to executable is /usr/sbin/haproxy
Jan 27 08:57:35 np0005597378 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [WARNING]  (310325) : Exiting Master process...
Jan 27 08:57:35 np0005597378 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [WARNING]  (310325) : Exiting Master process...
Jan 27 08:57:35 np0005597378 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [ALERT]    (310325) : Current worker (310327) exited with code 143 (Terminated)
Jan 27 08:57:35 np0005597378 neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125[310321]: [WARNING]  (310325) : All workers exited. Exiting... (0)
Jan 27 08:57:35 np0005597378 systemd[1]: libpod-7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7.scope: Deactivated successfully.
Jan 27 08:57:35 np0005597378 podman[310359]: 2026-01-27 13:57:35.114855245 +0000 UTC m=+0.161616998 container died 7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.258 238945 DEBUG nova.compute.manager [req-986e0516-c977-48a6-a44d-65ad79c1d5a2 req-d7599b71-9552-4033-aaf0-de95ce7730d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-vif-unplugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.259 238945 DEBUG oslo_concurrency.lockutils [req-986e0516-c977-48a6-a44d-65ad79c1d5a2 req-d7599b71-9552-4033-aaf0-de95ce7730d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.260 238945 DEBUG oslo_concurrency.lockutils [req-986e0516-c977-48a6-a44d-65ad79c1d5a2 req-d7599b71-9552-4033-aaf0-de95ce7730d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.260 238945 DEBUG oslo_concurrency.lockutils [req-986e0516-c977-48a6-a44d-65ad79c1d5a2 req-d7599b71-9552-4033-aaf0-de95ce7730d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.260 238945 DEBUG nova.compute.manager [req-986e0516-c977-48a6-a44d-65ad79c1d5a2 req-d7599b71-9552-4033-aaf0-de95ce7730d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] No waiting events found dispatching network-vif-unplugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:57:35 np0005597378 nova_compute[238941]: 2026-01-27 13:57:35.260 238945 DEBUG nova.compute.manager [req-986e0516-c977-48a6-a44d-65ad79c1d5a2 req-d7599b71-9552-4033-aaf0-de95ce7730d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-vif-unplugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:57:35 np0005597378 systemd[1]: var-lib-containers-storage-overlay-968d3fc2611e501c7926a9bc6a40a0a08439b2447c901bf06394d4ad8bd1fa81-merged.mount: Deactivated successfully.
Jan 27 08:57:35 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7-userdata-shm.mount: Deactivated successfully.
Jan 27 08:57:36 np0005597378 podman[310359]: 2026-01-27 13:57:36.269959034 +0000 UTC m=+1.316720787 container cleanup 7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Jan 27 08:57:36 np0005597378 systemd[1]: libpod-conmon-7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7.scope: Deactivated successfully.
Jan 27 08:57:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 181 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.4 MiB/s wr, 249 op/s
Jan 27 08:57:36 np0005597378 podman[310418]: 2026-01-27 13:57:36.603298876 +0000 UTC m=+0.309238978 container remove 7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:57:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.609 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b86d6edd-2e6f-4ccb-832b-d03699d2f44a]: (4, ('Tue Jan 27 01:57:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 (7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7)\n7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7\nTue Jan 27 01:57:36 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 (7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7)\n7ff927978194a0b8e46dcc2193b496b418d3fe91a79a6ecff42b5005d8f410b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.610 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[464efda2-4ed6-4a93-91db-84558422e0f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.611 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93d3b0a9-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:57:36 np0005597378 nova_compute[238941]: 2026-01-27 13:57:36.613 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:36 np0005597378 kernel: tap93d3b0a9-10: left promiscuous mode
Jan 27 08:57:36 np0005597378 nova_compute[238941]: 2026-01-27 13:57:36.620 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.621 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f41017ea-8217-454e-9500-5bd3cc6cfe05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:36 np0005597378 nova_compute[238941]: 2026-01-27 13:57:36.641 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.646 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[065c720e-29ba-4260-bd6c-315fd9095f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.647 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4974efdc-4897-4a62-b463-971e44c8574b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.663 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5e22f005-b042-4345-ab0b-fce03c8ad200]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488732, 'reachable_time': 38397, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310434, 'error': None, 'target': 'ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:36 np0005597378 systemd[1]: run-netns-ovnmeta\x2d93d3b0a9\x2d1ff9\x2d4f8a\x2d906c\x2d0ba5fe888125.mount: Deactivated successfully.
Jan 27 08:57:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.668 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-93d3b0a9-1ff9-4f8a-906c-0ba5fe888125 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:57:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:36.668 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a51e62cc-d035-484f-a2ee-b45dfe7c57f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:37 np0005597378 nova_compute[238941]: 2026-01-27 13:57:37.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:37 np0005597378 nova_compute[238941]: 2026-01-27 13:57:37.525 238945 INFO nova.virt.libvirt.driver [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Deleting instance files /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5_del#033[00m
Jan 27 08:57:37 np0005597378 nova_compute[238941]: 2026-01-27 13:57:37.526 238945 INFO nova.virt.libvirt.driver [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Deletion of /var/lib/nova/instances/a4189f17-0ade-4e17-9182-4ba1f5dd35b5_del complete#033[00m
Jan 27 08:57:37 np0005597378 nova_compute[238941]: 2026-01-27 13:57:37.635 238945 INFO nova.compute.manager [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Took 2.85 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:57:37 np0005597378 nova_compute[238941]: 2026-01-27 13:57:37.636 238945 DEBUG oslo.service.loopingcall [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:57:37 np0005597378 nova_compute[238941]: 2026-01-27 13:57:37.637 238945 DEBUG nova.compute.manager [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:57:37 np0005597378 nova_compute[238941]: 2026-01-27 13:57:37.637 238945 DEBUG nova.network.neutron [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:57:38 np0005597378 nova_compute[238941]: 2026-01-27 13:57:38.382 238945 DEBUG nova.compute.manager [req-d798fa26-4eba-45f1-bb3e-452b12a9cac3 req-8fed9d70-b9a2-4b2f-ae21-8bce594fc037 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:38 np0005597378 nova_compute[238941]: 2026-01-27 13:57:38.382 238945 DEBUG oslo_concurrency.lockutils [req-d798fa26-4eba-45f1-bb3e-452b12a9cac3 req-8fed9d70-b9a2-4b2f-ae21-8bce594fc037 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:38 np0005597378 nova_compute[238941]: 2026-01-27 13:57:38.383 238945 DEBUG oslo_concurrency.lockutils [req-d798fa26-4eba-45f1-bb3e-452b12a9cac3 req-8fed9d70-b9a2-4b2f-ae21-8bce594fc037 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:38 np0005597378 nova_compute[238941]: 2026-01-27 13:57:38.383 238945 DEBUG oslo_concurrency.lockutils [req-d798fa26-4eba-45f1-bb3e-452b12a9cac3 req-8fed9d70-b9a2-4b2f-ae21-8bce594fc037 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:38 np0005597378 nova_compute[238941]: 2026-01-27 13:57:38.383 238945 DEBUG nova.compute.manager [req-d798fa26-4eba-45f1-bb3e-452b12a9cac3 req-8fed9d70-b9a2-4b2f-ae21-8bce594fc037 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] No waiting events found dispatching network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:57:38 np0005597378 nova_compute[238941]: 2026-01-27 13:57:38.383 238945 WARNING nova.compute.manager [req-d798fa26-4eba-45f1-bb3e-452b12a9cac3 req-8fed9d70-b9a2-4b2f-ae21-8bce594fc037 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received unexpected event network-vif-plugged-5ca0e79f-7590-4e89-8b25-605d37e60cbb for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:57:38 np0005597378 nova_compute[238941]: 2026-01-27 13:57:38.433 238945 DEBUG nova.network.neutron [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:57:38 np0005597378 nova_compute[238941]: 2026-01-27 13:57:38.453 238945 INFO nova.compute.manager [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Took 0.82 seconds to deallocate network for instance.#033[00m
Jan 27 08:57:38 np0005597378 nova_compute[238941]: 2026-01-27 13:57:38.495 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:38 np0005597378 nova_compute[238941]: 2026-01-27 13:57:38.496 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 171 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 377 KiB/s wr, 218 op/s
Jan 27 08:57:38 np0005597378 nova_compute[238941]: 2026-01-27 13:57:38.522 238945 DEBUG nova.compute.manager [req-5f9b5b51-a51c-4911-b41b-c5410ec602eb req-afa4f6f1-ba70-41dc-9f14-f829ba34edef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Received event network-vif-deleted-5ca0e79f-7590-4e89-8b25-605d37e60cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:38 np0005597378 nova_compute[238941]: 2026-01-27 13:57:38.617 238945 DEBUG oslo_concurrency.processutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:57:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972992309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:57:39 np0005597378 nova_compute[238941]: 2026-01-27 13:57:39.180 238945 DEBUG oslo_concurrency.processutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:39 np0005597378 nova_compute[238941]: 2026-01-27 13:57:39.185 238945 DEBUG nova.compute.provider_tree [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:57:39 np0005597378 nova_compute[238941]: 2026-01-27 13:57:39.255 238945 DEBUG nova.scheduler.client.report [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:57:39 np0005597378 nova_compute[238941]: 2026-01-27 13:57:39.287 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:39 np0005597378 nova_compute[238941]: 2026-01-27 13:57:39.328 238945 INFO nova.scheduler.client.report [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Deleted allocations for instance a4189f17-0ade-4e17-9182-4ba1f5dd35b5#033[00m
Jan 27 08:57:39 np0005597378 nova_compute[238941]: 2026-01-27 13:57:39.399 238945 DEBUG oslo_concurrency.lockutils [None req-90b7e84a-21cb-42d5-b226-5b3d321b76ea 9cea2582f56e4f2ab221ea9bac7c3dfd bee59a9cf9e64e7e8bb75a0d9b609a0c - - default default] Lock "a4189f17-0ade-4e17-9182-4ba1f5dd35b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:40 np0005597378 nova_compute[238941]: 2026-01-27 13:57:40.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 151 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Jan 27 08:57:40 np0005597378 nova_compute[238941]: 2026-01-27 13:57:40.715 238945 DEBUG nova.virt.libvirt.driver [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 27 08:57:41 np0005597378 nova_compute[238941]: 2026-01-27 13:57:41.096 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 27 08:57:42 np0005597378 nova_compute[238941]: 2026-01-27 13:57:42.103 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 151 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 135 op/s
Jan 27 08:57:42 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:42Z|00767|binding|INFO|Releasing lport 626d013d-3067-4c30-b108-52be84db907e from this chassis (sb_readonly=0)
Jan 27 08:57:42 np0005597378 nova_compute[238941]: 2026-01-27 13:57:42.758 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 170 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 174 op/s
Jan 27 08:57:45 np0005597378 nova_compute[238941]: 2026-01-27 13:57:45.041 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:45 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:45Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:8f:2d 10.100.0.11
Jan 27 08:57:45 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:45Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:8f:2d 10.100.0.11
Jan 27 08:57:45 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 27 08:57:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:46.307 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:46.308 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:46.308 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 177 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 152 op/s
Jan 27 08:57:47 np0005597378 nova_compute[238941]: 2026-01-27 13:57:47.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:57:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:57:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:57:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:57:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:57:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:57:48 np0005597378 nova_compute[238941]: 2026-01-27 13:57:48.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:57:48 np0005597378 nova_compute[238941]: 2026-01-27 13:57:48.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:57:48 np0005597378 nova_compute[238941]: 2026-01-27 13:57:48.408 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 08:57:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 185 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 475 KiB/s rd, 4.1 MiB/s wr, 111 op/s
Jan 27 08:57:49 np0005597378 nova_compute[238941]: 2026-01-27 13:57:49.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:57:49 np0005597378 nova_compute[238941]: 2026-01-27 13:57:49.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:57:50 np0005597378 nova_compute[238941]: 2026-01-27 13:57:50.012 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522255.0111954, a4189f17-0ade-4e17-9182-4ba1f5dd35b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:57:50 np0005597378 nova_compute[238941]: 2026-01-27 13:57:50.012 238945 INFO nova.compute.manager [-] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:57:50 np0005597378 nova_compute[238941]: 2026-01-27 13:57:50.033 238945 DEBUG nova.compute.manager [None req-c07d72a7-5fe6-4c31-9ab6-ca1bc1b168f3 - - - - - -] [instance: a4189f17-0ade-4e17-9182-4ba1f5dd35b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:57:50 np0005597378 nova_compute[238941]: 2026-01-27 13:57:50.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 3.9 MiB/s wr, 141 op/s
Jan 27 08:57:51 np0005597378 nova_compute[238941]: 2026-01-27 13:57:51.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:57:51 np0005597378 nova_compute[238941]: 2026-01-27 13:57:51.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:51 np0005597378 nova_compute[238941]: 2026-01-27 13:57:51.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:51 np0005597378 nova_compute[238941]: 2026-01-27 13:57:51.414 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:51 np0005597378 nova_compute[238941]: 2026-01-27 13:57:51.414 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:57:51 np0005597378 nova_compute[238941]: 2026-01-27 13:57:51.414 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:51 np0005597378 nova_compute[238941]: 2026-01-27 13:57:51.757 238945 DEBUG nova.virt.libvirt.driver [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 27 08:57:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:57:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876945735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.004 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.074 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.074 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.078 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.079 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.142 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.277 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.279 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3515MB free_disk=59.897104900330305GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.280 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.280 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.377 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.378 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.379 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.379 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:57:52 np0005597378 nova_compute[238941]: 2026-01-27 13:57:52.445 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 2.5 MiB/s wr, 96 op/s
Jan 27 08:57:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:57:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2490262620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:57:53 np0005597378 nova_compute[238941]: 2026-01-27 13:57:53.088 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:53 np0005597378 nova_compute[238941]: 2026-01-27 13:57:53.094 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:57:53 np0005597378 nova_compute[238941]: 2026-01-27 13:57:53.109 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:57:53 np0005597378 nova_compute[238941]: 2026-01-27 13:57:53.141 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:57:53 np0005597378 nova_compute[238941]: 2026-01-27 13:57:53.142 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:54 np0005597378 nova_compute[238941]: 2026-01-27 13:57:54.136 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:57:54 np0005597378 nova_compute[238941]: 2026-01-27 13:57:54.137 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:57:54 np0005597378 nova_compute[238941]: 2026-01-27 13:57:54.137 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:57:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 545 KiB/s rd, 2.5 MiB/s wr, 99 op/s
Jan 27 08:57:54 np0005597378 nova_compute[238941]: 2026-01-27 13:57:54.776 238945 INFO nova.virt.libvirt.driver [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance shutdown successfully after 24 seconds.#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.044 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:55 np0005597378 kernel: tap5492ae81-fe (unregistering): left promiscuous mode
Jan 27 08:57:55 np0005597378 NetworkManager[48904]: <info>  [1769522275.1051] device (tap5492ae81-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:55 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:55Z|00768|binding|INFO|Releasing lport 5492ae81-fead-4d0c-9f4b-b83fee610fde from this chassis (sb_readonly=0)
Jan 27 08:57:55 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:55Z|00769|binding|INFO|Setting lport 5492ae81-fead-4d0c-9f4b-b83fee610fde down in Southbound
Jan 27 08:57:55 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:55Z|00770|binding|INFO|Removing iface tap5492ae81-fe ovn-installed in OVS
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.119 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:55.127 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:8f:2d 10.100.0.11'], port_security=['fa:16:3e:3d:8f:2d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4d47c02a-bb54-4f2d-8bdd-456beb3d6deb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c183494c4b924098a08e3761a240af9d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd816528e-b7eb-4ffc-9235-0b8dcdb029e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0290b5c0-79de-4044-be1f-027446a5556e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5492ae81-fead-4d0c-9f4b-b83fee610fde) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:57:55 np0005597378 kernel: tap5e99824f-f6 (unregistering): left promiscuous mode
Jan 27 08:57:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:55.130 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5492ae81-fead-4d0c-9f4b-b83fee610fde in datapath 67e37534-4454-4424-9d8a-edc9ec7fdcca unbound from our chassis#033[00m
Jan 27 08:57:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:55.132 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67e37534-4454-4424-9d8a-edc9ec7fdcca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:57:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:55.133 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a20f5431-2106-4cd8-90b2-35501c3e6108]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:55.133 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca namespace which is not needed anymore#033[00m
Jan 27 08:57:55 np0005597378 NetworkManager[48904]: <info>  [1769522275.1370] device (tap5e99824f-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:55 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:55Z|00771|binding|INFO|Releasing lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f from this chassis (sb_readonly=0)
Jan 27 08:57:55 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:55Z|00772|binding|INFO|Setting lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f down in Southbound
Jan 27 08:57:55 np0005597378 ovn_controller[144812]: 2026-01-27T13:57:55Z|00773|binding|INFO|Removing iface tap5e99824f-f6 ovn-installed in OVS
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:55.160 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:02:e1 10.100.0.11'], port_security=['fa:16:3e:8d:02:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9a2cac55-b28d-4d71-b091-6a3c39cdfe14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44ebe489-75a7-40e8-9613-68b01eb29b28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71ad88aa5cfe42bdb12bd409ad2842de', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7f1b77e0-421f-4420-8a9a-51183baa7071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38bf5827-4194-4494-af88-b3e7b8a5e805, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5e99824f-f686-4cd9-a3dd-e1e0690fc68f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.179 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:55 np0005597378 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d00000053.scope: Deactivated successfully.
Jan 27 08:57:55 np0005597378 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d00000053.scope: Consumed 14.499s CPU time.
Jan 27 08:57:55 np0005597378 systemd-machined[207425]: Machine qemu-94-instance-00000053 terminated.
Jan 27 08:57:55 np0005597378 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d00000054.scope: Deactivated successfully.
Jan 27 08:57:55 np0005597378 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d00000054.scope: Consumed 13.756s CPU time.
Jan 27 08:57:55 np0005597378 systemd-machined[207425]: Machine qemu-95-instance-00000054 terminated.
Jan 27 08:57:55 np0005597378 NetworkManager[48904]: <info>  [1769522275.4012] manager: (tap5492ae81-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Jan 27 08:57:55 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [NOTICE]   (309408) : haproxy version is 2.8.14-c23fe91
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.402 238945 INFO nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance shutdown successfully after 24 seconds.#033[00m
Jan 27 08:57:55 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [NOTICE]   (309408) : path to executable is /usr/sbin/haproxy
Jan 27 08:57:55 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [WARNING]  (309408) : Exiting Master process...
Jan 27 08:57:55 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [WARNING]  (309408) : Exiting Master process...
Jan 27 08:57:55 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [ALERT]    (309408) : Current worker (309426) exited with code 143 (Terminated)
Jan 27 08:57:55 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[309404]: [WARNING]  (309408) : All workers exited. Exiting... (0)
Jan 27 08:57:55 np0005597378 systemd[1]: libpod-aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef.scope: Deactivated successfully.
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.411 238945 INFO nova.virt.libvirt.driver [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance destroyed successfully.#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.411 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'numa_topology' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:57:55 np0005597378 podman[310535]: 2026-01-27 13:57:55.415469283 +0000 UTC m=+0.173226884 container died aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.419 238945 INFO nova.virt.libvirt.driver [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance destroyed successfully.#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.419 238945 DEBUG nova.objects.instance [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'numa_topology' on Instance uuid 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.430 238945 DEBUG nova.compute.manager [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.434 238945 INFO nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Attempting rescue#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.436 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.445 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.446 238945 INFO nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Creating image(s)#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.469 238945 DEBUG nova.storage.rbd_utils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.473 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.523 238945 DEBUG nova.storage.rbd_utils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef-userdata-shm.mount: Deactivated successfully.
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.548 238945 DEBUG nova.storage.rbd_utils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7e638a601724a2a8d240737b8d14d1ec3c349419d8686ebd321aa9d805c099f4-merged.mount: Deactivated successfully.
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.558 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.603 238945 DEBUG oslo_concurrency.lockutils [None req-1acc691b-ac8d-4004-9fe4-21ff933e6c31 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.642 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.642 238945 DEBUG oslo_concurrency.lockutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.643 238945 DEBUG oslo_concurrency.lockutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.643 238945 DEBUG oslo_concurrency.lockutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.671 238945 DEBUG nova.storage.rbd_utils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.675 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.708 238945 DEBUG nova.compute.manager [req-d1aef779-7e4e-4dca-9c26-b6f74cf93fe1 req-5e9c296e-697d-4df2-9118-c108af57c1a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-unplugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.710 238945 DEBUG oslo_concurrency.lockutils [req-d1aef779-7e4e-4dca-9c26-b6f74cf93fe1 req-5e9c296e-697d-4df2-9118-c108af57c1a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.710 238945 DEBUG oslo_concurrency.lockutils [req-d1aef779-7e4e-4dca-9c26-b6f74cf93fe1 req-5e9c296e-697d-4df2-9118-c108af57c1a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.711 238945 DEBUG oslo_concurrency.lockutils [req-d1aef779-7e4e-4dca-9c26-b6f74cf93fe1 req-5e9c296e-697d-4df2-9118-c108af57c1a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.711 238945 DEBUG nova.compute.manager [req-d1aef779-7e4e-4dca-9c26-b6f74cf93fe1 req-5e9c296e-697d-4df2-9118-c108af57c1a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-unplugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:57:55 np0005597378 nova_compute[238941]: 2026-01-27 13:57:55.711 238945 WARNING nova.compute.manager [req-d1aef779-7e4e-4dca-9c26-b6f74cf93fe1 req-5e9c296e-697d-4df2-9118-c108af57c1a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received unexpected event network-vif-unplugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with vm_state active and task_state rescuing.#033[00m
Jan 27 08:57:55 np0005597378 podman[310535]: 2026-01-27 13:57:55.985991273 +0000 UTC m=+0.743748864 container cleanup aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:57:55 np0005597378 systemd[1]: libpod-conmon-aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef.scope: Deactivated successfully.
Jan 27 08:57:56 np0005597378 nova_compute[238941]: 2026-01-27 13:57:56.286 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:56 np0005597378 nova_compute[238941]: 2026-01-27 13:57:56.286 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:56 np0005597378 nova_compute[238941]: 2026-01-27 13:57:56.308 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:57:56 np0005597378 nova_compute[238941]: 2026-01-27 13:57:56.390 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:56 np0005597378 nova_compute[238941]: 2026-01-27 13:57:56.391 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:56 np0005597378 nova_compute[238941]: 2026-01-27 13:57:56.398 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:57:56 np0005597378 nova_compute[238941]: 2026-01-27 13:57:56.399 238945 INFO nova.compute.claims [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:57:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 753 KiB/s wr, 63 op/s
Jan 27 08:57:56 np0005597378 nova_compute[238941]: 2026-01-27 13:57:56.606 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:56 np0005597378 podman[310681]: 2026-01-27 13:57:56.947856661 +0000 UTC m=+0.935745321 container remove aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 08:57:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:56.955 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e772965d-787a-4c53-afbd-4b702890fbdd]: (4, ('Tue Jan 27 01:57:55 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef)\naad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef\nTue Jan 27 01:57:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (aad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef)\naad5a3e414f974908701005c7ea48b1dda5ea3e29fc8e3f2a62de8d4f796b1ef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:56.956 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2673d37b-1ca9-443b-b652-ad6c05ed6287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:56.957 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e37534-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:57:56 np0005597378 kernel: tap67e37534-40: left promiscuous mode
Jan 27 08:57:56 np0005597378 nova_compute[238941]: 2026-01-27 13:57:56.960 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:56 np0005597378 nova_compute[238941]: 2026-01-27 13:57:56.991 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:56.996 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b47124-4b8c-4663-a02d-ccfa6528f2f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.011 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e67182a8-219a-4696-b1cd-728ceeb424bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.014 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[416aff4e-86d3-47b7-9bda-93fd22855ae6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.033 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6b5be2-7ae2-4894-a503-eacd29100dd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487648, 'reachable_time': 17412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310725, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:57 np0005597378 systemd[1]: run-netns-ovnmeta\x2d67e37534\x2d4454\x2d4424\x2d9d8a\x2dedc9ec7fdcca.mount: Deactivated successfully.
Jan 27 08:57:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.037 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:57:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.038 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[39e62f18-8d9c-4db2-94f6-e3d03066226a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.039 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f in datapath 44ebe489-75a7-40e8-9613-68b01eb29b28 unbound from our chassis#033[00m
Jan 27 08:57:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.039 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 44ebe489-75a7-40e8-9613-68b01eb29b28 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 27 08:57:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:57:57.040 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e403046c-57ba-4c1c-a3b6-55ef6894cc0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.078 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.079 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.080 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.080 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.081 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.084 238945 INFO nova.compute.manager [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Terminating instance#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.086 238945 DEBUG nova.compute.manager [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.097 238945 INFO nova.virt.libvirt.driver [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Instance destroyed successfully.#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.097 238945 DEBUG nova.objects.instance [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'resources' on Instance uuid 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.114 238945 DEBUG nova.virt.libvirt.vif [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:57:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1816959880',display_name='tempest-DeleteServersTestJSON-server-1816959880',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1816959880',id=83,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:57:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-697xv1z2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:57:55Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=4d47c02a-bb54-4f2d-8bdd-456beb3d6deb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "address": "fa:16:3e:3d:8f:2d", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5492ae81-fe", "ovs_interfaceid": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.115 238945 DEBUG nova.network.os_vif_util [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "address": "fa:16:3e:3d:8f:2d", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5492ae81-fe", "ovs_interfaceid": "5492ae81-fead-4d0c-9f4b-b83fee610fde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.117 238945 DEBUG nova.network.os_vif_util [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=5492ae81-fead-4d0c-9f4b-b83fee610fde,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5492ae81-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.118 238945 DEBUG os_vif [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=5492ae81-fead-4d0c-9f4b-b83fee610fde,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5492ae81-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.122 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.123 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5492ae81-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.129 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.144 238945 INFO os_vif [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=5492ae81-fead-4d0c-9f4b-b83fee610fde,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5492ae81-fe')#033[00m
Jan 27 08:57:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:57:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391114718' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.182 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.189 238945 DEBUG nova.compute.provider_tree [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.208 238945 DEBUG nova.scheduler.client.report [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.237 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.239 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.284 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.284 238945 DEBUG nova.network.neutron [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.303 238945 INFO nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.326 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.426 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.427 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.427 238945 INFO nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Creating image(s)#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.473 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.497 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.522 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.525 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.573 238945 DEBUG nova.policy [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6d810ffa0b094acc95ac627960258a9f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5329b351a44765b175b708e70517cc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.577 238945 DEBUG nova.compute.manager [req-ab66cf06-133f-414f-a999-fa00e57b63ec req-9961a061-87a5-4c4f-a2df-c8c642fb7288 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received event network-vif-unplugged-5492ae81-fead-4d0c-9f4b-b83fee610fde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.577 238945 DEBUG oslo_concurrency.lockutils [req-ab66cf06-133f-414f-a999-fa00e57b63ec req-9961a061-87a5-4c4f-a2df-c8c642fb7288 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.577 238945 DEBUG oslo_concurrency.lockutils [req-ab66cf06-133f-414f-a999-fa00e57b63ec req-9961a061-87a5-4c4f-a2df-c8c642fb7288 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.577 238945 DEBUG oslo_concurrency.lockutils [req-ab66cf06-133f-414f-a999-fa00e57b63ec req-9961a061-87a5-4c4f-a2df-c8c642fb7288 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.578 238945 DEBUG nova.compute.manager [req-ab66cf06-133f-414f-a999-fa00e57b63ec req-9961a061-87a5-4c4f-a2df-c8c642fb7288 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] No waiting events found dispatching network-vif-unplugged-5492ae81-fead-4d0c-9f4b-b83fee610fde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.578 238945 DEBUG nova.compute.manager [req-ab66cf06-133f-414f-a999-fa00e57b63ec req-9961a061-87a5-4c4f-a2df-c8c642fb7288 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received event network-vif-unplugged-5492ae81-fead-4d0c-9f4b-b83fee610fde for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.623 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.624 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.624 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.625 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.644 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.647 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.800 238945 DEBUG nova.compute.manager [req-abd7cf3f-8f40-4cb5-8085-614cd22b8e2a req-6b9b8e51-eba7-4f59-a8fe-fa7cf7facec9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.801 238945 DEBUG oslo_concurrency.lockutils [req-abd7cf3f-8f40-4cb5-8085-614cd22b8e2a req-6b9b8e51-eba7-4f59-a8fe-fa7cf7facec9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.801 238945 DEBUG oslo_concurrency.lockutils [req-abd7cf3f-8f40-4cb5-8085-614cd22b8e2a req-6b9b8e51-eba7-4f59-a8fe-fa7cf7facec9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.802 238945 DEBUG oslo_concurrency.lockutils [req-abd7cf3f-8f40-4cb5-8085-614cd22b8e2a req-6b9b8e51-eba7-4f59-a8fe-fa7cf7facec9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.802 238945 DEBUG nova.compute.manager [req-abd7cf3f-8f40-4cb5-8085-614cd22b8e2a req-6b9b8e51-eba7-4f59-a8fe-fa7cf7facec9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:57:57 np0005597378 nova_compute[238941]: 2026-01-27 13:57:57.802 238945 WARNING nova.compute.manager [req-abd7cf3f-8f40-4cb5-8085-614cd22b8e2a req-6b9b8e51-eba7-4f59-a8fe-fa7cf7facec9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received unexpected event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with vm_state active and task_state rescuing.#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.178 238945 DEBUG nova.network.neutron [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Successfully created port: e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:57:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 215 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 688 KiB/s wr, 48 op/s
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.704 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.705 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'migration_context' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.721 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.722 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Start _get_guest_xml network_info=[{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "vif_mac": "fa:16:3e:8d:02:e1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.723 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'resources' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.756 238945 WARNING nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.763 238945 DEBUG nova.virt.libvirt.host [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.764 238945 DEBUG nova.virt.libvirt.host [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.767 238945 DEBUG nova.virt.libvirt.host [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.768 238945 DEBUG nova.virt.libvirt.host [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.769 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.769 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.770 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.770 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.771 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.771 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.772 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.772 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.773 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.774 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:57:58 np0005597378 podman[310840]: 2026-01-27 13:57:58.774316297 +0000 UTC m=+0.091628995 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.774 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.776 238945 DEBUG nova.virt.hardware [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.776 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:57:58 np0005597378 nova_compute[238941]: 2026-01-27 13:57:58.794 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:58 np0005597378 podman[310839]: 2026-01-27 13:57:58.847449324 +0000 UTC m=+0.164003442 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base 
Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.015 238945 DEBUG nova.network.neutron [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Successfully updated port: e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.041 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "refresh_cache-327a26c8-ebd5-4f42-ad95-3905ab2e1248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.041 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquired lock "refresh_cache-327a26c8-ebd5-4f42-ad95-3905ab2e1248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.042 238945 DEBUG nova.network.neutron [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.225 238945 DEBUG nova.network.neutron [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:57:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:57:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/978760482' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.404 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.406 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.545 238945 DEBUG nova.compute.manager [req-41486bac-d9fa-498d-8143-1f586c2fa8d8 req-b3366e2d-a370-4ac3-8e34-4b4c9f1e56ed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received event network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.546 238945 DEBUG oslo_concurrency.lockutils [req-41486bac-d9fa-498d-8143-1f586c2fa8d8 req-b3366e2d-a370-4ac3-8e34-4b4c9f1e56ed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.546 238945 DEBUG oslo_concurrency.lockutils [req-41486bac-d9fa-498d-8143-1f586c2fa8d8 req-b3366e2d-a370-4ac3-8e34-4b4c9f1e56ed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.547 238945 DEBUG oslo_concurrency.lockutils [req-41486bac-d9fa-498d-8143-1f586c2fa8d8 req-b3366e2d-a370-4ac3-8e34-4b4c9f1e56ed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.547 238945 DEBUG nova.compute.manager [req-41486bac-d9fa-498d-8143-1f586c2fa8d8 req-b3366e2d-a370-4ac3-8e34-4b4c9f1e56ed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] No waiting events found dispatching network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.548 238945 WARNING nova.compute.manager [req-41486bac-d9fa-498d-8143-1f586c2fa8d8 req-b3366e2d-a370-4ac3-8e34-4b4c9f1e56ed 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received unexpected event network-vif-plugged-5492ae81-fead-4d0c-9f4b-b83fee610fde for instance with vm_state stopped and task_state deleting.#033[00m
Jan 27 08:57:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:57:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1740043150' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:57:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:57:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1740043150' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.916 238945 DEBUG nova.compute.manager [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-changed-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.917 238945 DEBUG nova.compute.manager [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Refreshing instance network info cache due to event network-changed-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:57:59 np0005597378 nova_compute[238941]: 2026-01-27 13:57:59.917 238945 DEBUG oslo_concurrency.lockutils [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-327a26c8-ebd5-4f42-ad95-3905ab2e1248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:58:00 np0005597378 nova_compute[238941]: 2026-01-27 13:58:00.006 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:00 np0005597378 nova_compute[238941]: 2026-01-27 13:58:00.068 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] resizing rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:58:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:58:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2952035847' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:58:00 np0005597378 nova_compute[238941]: 2026-01-27 13:58:00.347 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.941s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:00 np0005597378 nova_compute[238941]: 2026-01-27 13:58:00.349 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 231 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 2.3 MiB/s wr, 84 op/s
Jan 27 08:58:00 np0005597378 nova_compute[238941]: 2026-01-27 13:58:00.743 238945 DEBUG nova.objects.instance [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lazy-loading 'migration_context' on Instance uuid 327a26c8-ebd5-4f42-ad95-3905ab2e1248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:58:00 np0005597378 nova_compute[238941]: 2026-01-27 13:58:00.756 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:58:00 np0005597378 nova_compute[238941]: 2026-01-27 13:58:00.757 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Ensure instance console log exists: /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:58:00 np0005597378 nova_compute[238941]: 2026-01-27 13:58:00.757 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:00 np0005597378 nova_compute[238941]: 2026-01-27 13:58:00.758 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:00 np0005597378 nova_compute[238941]: 2026-01-27 13:58:00.758 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:58:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4259911168' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.077 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.079 238945 DEBUG nova.virt.libvirt.vif [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:57:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1986971874',display_name='tempest-ServerRescueTestJSONUnderV235-server-1986971874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1986971874',id=84,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:57:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71ad88aa5cfe42bdb12bd409ad2842de',ramdisk_id='',reservation_id='r-kytri84c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-508111280',owner_user_name='tempest-ServerRescueTestJSONUnderV235-508111280-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:26Z,user_data=None,user_id='b49f56e21cd44451a1c542f97cb11a9c',uuid=9a2cac55-b28d-4d71-b091-6a3c39cdfe14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "vif_mac": "fa:16:3e:8d:02:e1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.080 238945 DEBUG nova.network.os_vif_util [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converting VIF {"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "vif_mac": "fa:16:3e:8d:02:e1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.081 238945 DEBUG nova.network.os_vif_util [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.082 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.099 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  <uuid>9a2cac55-b28d-4d71-b091-6a3c39cdfe14</uuid>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  <name>instance-00000054</name>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1986971874</nova:name>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:57:58</nova:creationTime>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <nova:user uuid="b49f56e21cd44451a1c542f97cb11a9c">tempest-ServerRescueTestJSONUnderV235-508111280-project-member</nova:user>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <nova:project uuid="71ad88aa5cfe42bdb12bd409ad2842de">tempest-ServerRescueTestJSONUnderV235-508111280</nova:project>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <nova:port uuid="5e99824f-f686-4cd9-a3dd-e1e0690fc68f">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <entry name="serial">9a2cac55-b28d-4d71-b091-6a3c39cdfe14</entry>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <entry name="uuid">9a2cac55-b28d-4d71-b091-6a3c39cdfe14</entry>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.rescue">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <target dev="vdb" bus="virtio"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config.rescue">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:8d:02:e1"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <target dev="tap5e99824f-f6"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/console.log" append="off"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:58:01 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:58:01 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:58:01 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:58:01 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.109 238945 INFO nova.virt.libvirt.driver [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance destroyed successfully.#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.201 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.202 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.202 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.202 238945 DEBUG nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] No VIF found with MAC fa:16:3e:8d:02:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.203 238945 INFO nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Using config drive#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.221 238945 DEBUG nova.storage.rbd_utils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.241 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.267 238945 DEBUG nova.objects.instance [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'keypairs' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.315 238945 DEBUG nova.network.neutron [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Updating instance_info_cache with network_info: [{"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.332 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Releasing lock "refresh_cache-327a26c8-ebd5-4f42-ad95-3905ab2e1248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.333 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Instance network_info: |[{"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.333 238945 DEBUG oslo_concurrency.lockutils [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-327a26c8-ebd5-4f42-ad95-3905ab2e1248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.333 238945 DEBUG nova.network.neutron [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Refreshing network info cache for port e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.336 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Start _get_guest_xml network_info=[{"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.668 238945 WARNING nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.672 238945 DEBUG nova.virt.libvirt.host [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.673 238945 DEBUG nova.virt.libvirt.host [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.675 238945 DEBUG nova.virt.libvirt.host [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.676 238945 DEBUG nova.virt.libvirt.host [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.676 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.676 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.676 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.677 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.677 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.677 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.677 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.678 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.678 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.679 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.679 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.679 238945 DEBUG nova.virt.hardware [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:58:01 np0005597378 nova_compute[238941]: 2026-01-27 13:58:01.681 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.060 238945 INFO nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Creating config drive at /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config.rescue#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.066 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpddzp6weq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.113 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:58:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006897974' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.217 238945 INFO nova.virt.libvirt.driver [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Deleting instance files /var/lib/nova/instances/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_del#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.217 238945 INFO nova.virt.libvirt.driver [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Deletion of /var/lib/nova/instances/4d47c02a-bb54-4f2d-8bdd-456beb3d6deb_del complete#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.221 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpddzp6weq" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.244 238945 DEBUG nova.storage.rbd_utils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] rbd image 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.248 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config.rescue 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.281 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.307 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.311 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.349 238945 INFO nova.compute.manager [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Took 5.26 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.350 238945 DEBUG oslo.service.loopingcall [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.351 238945 DEBUG nova.compute.manager [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.351 238945 DEBUG nova.network.neutron [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:58:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 231 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.2 MiB/s wr, 49 op/s
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.643 238945 DEBUG oslo_concurrency.processutils [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config.rescue 9a2cac55-b28d-4d71-b091-6a3c39cdfe14_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.644 238945 INFO nova.virt.libvirt.driver [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Deleting local config drive /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14/disk.config.rescue because it was imported into RBD.#033[00m
Jan 27 08:58:02 np0005597378 kernel: tap5e99824f-f6: entered promiscuous mode
Jan 27 08:58:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:02Z|00774|binding|INFO|Claiming lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f for this chassis.
Jan 27 08:58:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:02Z|00775|binding|INFO|5e99824f-f686-4cd9-a3dd-e1e0690fc68f: Claiming fa:16:3e:8d:02:e1 10.100.0.11
Jan 27 08:58:02 np0005597378 NetworkManager[48904]: <info>  [1769522282.7037] manager: (tap5e99824f-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Jan 27 08:58:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:02.708 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:02:e1 10.100.0.11'], port_security=['fa:16:3e:8d:02:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9a2cac55-b28d-4d71-b091-6a3c39cdfe14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44ebe489-75a7-40e8-9613-68b01eb29b28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71ad88aa5cfe42bdb12bd409ad2842de', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7f1b77e0-421f-4420-8a9a-51183baa7071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38bf5827-4194-4494-af88-b3e7b8a5e805, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5e99824f-f686-4cd9-a3dd-e1e0690fc68f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:58:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:02.709 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f in datapath 44ebe489-75a7-40e8-9613-68b01eb29b28 bound to our chassis#033[00m
Jan 27 08:58:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:02.710 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 44ebe489-75a7-40e8-9613-68b01eb29b28 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 27 08:58:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:02.711 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c9002012-01f9-4c7c-aef1-476c13eb4e6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:02Z|00776|binding|INFO|Setting lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f up in Southbound
Jan 27 08:58:02 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:02Z|00777|binding|INFO|Setting lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f ovn-installed in OVS
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.725 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.726 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:02 np0005597378 systemd-machined[207425]: New machine qemu-97-instance-00000054.
Jan 27 08:58:02 np0005597378 systemd[1]: Started Virtual Machine qemu-97-instance-00000054.
Jan 27 08:58:02 np0005597378 systemd-udevd[311161]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:58:02 np0005597378 NetworkManager[48904]: <info>  [1769522282.7751] device (tap5e99824f-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:58:02 np0005597378 NetworkManager[48904]: <info>  [1769522282.7760] device (tap5e99824f-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:58:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:58:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1902361660' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.900 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.902 238945 DEBUG nova.virt.libvirt.vif [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-202038910',display_name='tempest-ServerPasswordTestJSON-server-202038910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-202038910',id=86,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5329b351a44765b175b708e70517cc',ramdisk_id='',reservation_id='r-8c2gjojj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1054775625',owner_user_name='tempest-ServerPasswordTestJSON-1054775625-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:57Z,user_data=None,user_id='6d810ffa0b094acc95ac627960258a9f',uuid=327a26c8-ebd5-4f42-ad95-3905ab2e1248,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.903 238945 DEBUG nova.network.os_vif_util [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Converting VIF {"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.904 238945 DEBUG nova.network.os_vif_util [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.905 238945 DEBUG nova.objects.instance [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 327a26c8-ebd5-4f42-ad95-3905ab2e1248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.922 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  <uuid>327a26c8-ebd5-4f42-ad95-3905ab2e1248</uuid>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  <name>instance-00000056</name>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerPasswordTestJSON-server-202038910</nova:name>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:58:01</nova:creationTime>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:        <nova:user uuid="6d810ffa0b094acc95ac627960258a9f">tempest-ServerPasswordTestJSON-1054775625-project-member</nova:user>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:        <nova:project uuid="ca5329b351a44765b175b708e70517cc">tempest-ServerPasswordTestJSON-1054775625</nova:project>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:        <nova:port uuid="e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <entry name="serial">327a26c8-ebd5-4f42-ad95-3905ab2e1248</entry>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <entry name="uuid">327a26c8-ebd5-4f42-ad95-3905ab2e1248</entry>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk.config">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:3d:f9:fd"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <target dev="tape4496765-86"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/console.log" append="off"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:58:02 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:58:02 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:58:02 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:58:02 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.923 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Preparing to wait for external event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.924 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.924 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.924 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.925 238945 DEBUG nova.virt.libvirt.vif [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-202038910',display_name='tempest-ServerPasswordTestJSON-server-202038910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-202038910',id=86,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5329b351a44765b175b708e70517cc',ramdisk_id='',reservation_id='r-8c2gjojj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1054775625',owner_user_name='tempest-ServerPasswordTestJSON-1054775625-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:57:57Z,user_data=None,user_id='6d810ffa0b094acc95ac627960258a9f',uuid=327a26c8-ebd5-4f42-ad95-3905ab2e1248,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.925 238945 DEBUG nova.network.os_vif_util [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Converting VIF {"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.926 238945 DEBUG nova.network.os_vif_util [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.926 238945 DEBUG os_vif [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.927 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.927 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.927 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.931 238945 DEBUG nova.compute.manager [req-32770a16-b940-4f6b-aaa4-21ea4da4a22f req-3debcee0-1575-4d08-a526-cbf2e7ebad71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.931 238945 DEBUG oslo_concurrency.lockutils [req-32770a16-b940-4f6b-aaa4-21ea4da4a22f req-3debcee0-1575-4d08-a526-cbf2e7ebad71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.931 238945 DEBUG oslo_concurrency.lockutils [req-32770a16-b940-4f6b-aaa4-21ea4da4a22f req-3debcee0-1575-4d08-a526-cbf2e7ebad71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.932 238945 DEBUG oslo_concurrency.lockutils [req-32770a16-b940-4f6b-aaa4-21ea4da4a22f req-3debcee0-1575-4d08-a526-cbf2e7ebad71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.932 238945 DEBUG nova.compute.manager [req-32770a16-b940-4f6b-aaa4-21ea4da4a22f req-3debcee0-1575-4d08-a526-cbf2e7ebad71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.932 238945 WARNING nova.compute.manager [req-32770a16-b940-4f6b-aaa4-21ea4da4a22f req-3debcee0-1575-4d08-a526-cbf2e7ebad71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received unexpected event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with vm_state active and task_state rescuing.#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.933 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.934 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4496765-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.934 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4496765-86, col_values=(('external_ids', {'iface-id': 'e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:f9:fd', 'vm-uuid': '327a26c8-ebd5-4f42-ad95-3905ab2e1248'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:02 np0005597378 NetworkManager[48904]: <info>  [1769522282.9367] manager: (tape4496765-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:02 np0005597378 nova_compute[238941]: 2026-01-27 13:58:02.942 238945 INFO os_vif [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86')#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.021 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.021 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.021 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] No VIF found with MAC fa:16:3e:3d:f9:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.022 238945 INFO nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Using config drive#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.040 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.130 238945 DEBUG nova.network.neutron [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.149 238945 INFO nova.compute.manager [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Took 0.80 seconds to deallocate network for instance.#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.202 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.202 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.236 238945 DEBUG nova.compute.manager [req-1ccda954-5a80-4167-ba46-0ad1c1e14a1f req-57e5c3ed-2b8b-4acb-83b7-650d1821b0d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Received event network-vif-deleted-5492ae81-fead-4d0c-9f4b-b83fee610fde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.280 238945 DEBUG oslo_concurrency.processutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.389 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.390 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522283.3891976, 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.390 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.395 238945 DEBUG nova.compute.manager [None req-4346a91b-3627-4185-9223-127b59fcba60 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.409 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.411 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.440 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.441 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522283.3975635, 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.441 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] VM Started (Lifecycle Event)#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.451 238945 INFO nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Creating config drive at /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/disk.config#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.456 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptli28cmc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.496 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.501 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.543 238945 DEBUG nova.network.neutron [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Updated VIF entry in instance network info cache for port e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.544 238945 DEBUG nova.network.neutron [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Updating instance_info_cache with network_info: [{"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.556 238945 DEBUG oslo_concurrency.lockutils [req-408bcda8-24b1-4856-9772-db84f327d030 req-5078c513-3e08-4faf-9e54-c81564a1900c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-327a26c8-ebd5-4f42-ad95-3905ab2e1248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.600 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptli28cmc" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.623 238945 DEBUG nova.storage.rbd_utils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] rbd image 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.626 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/disk.config 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.841 238945 DEBUG oslo_concurrency.processutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/disk.config 327a26c8-ebd5-4f42-ad95-3905ab2e1248_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.842 238945 INFO nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Deleting local config drive /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248/disk.config because it was imported into RBD.#033[00m
Jan 27 08:58:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:58:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3355990754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.870 238945 DEBUG oslo_concurrency.processutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.879 238945 DEBUG nova.compute.provider_tree [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:58:03 np0005597378 NetworkManager[48904]: <info>  [1769522283.8978] manager: (tape4496765-86): new Tun device (/org/freedesktop/NetworkManager/Devices/328)
Jan 27 08:58:03 np0005597378 kernel: tape4496765-86: entered promiscuous mode
Jan 27 08:58:03 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:03Z|00778|binding|INFO|Claiming lport e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e for this chassis.
Jan 27 08:58:03 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:03Z|00779|binding|INFO|e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e: Claiming fa:16:3e:3d:f9:fd 10.100.0.11
Jan 27 08:58:03 np0005597378 NetworkManager[48904]: <info>  [1769522283.9125] device (tape4496765-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:58:03 np0005597378 NetworkManager[48904]: <info>  [1769522283.9137] device (tape4496765-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.922 238945 DEBUG nova.scheduler.client.report [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:58:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.922 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:f9:fd 10.100.0.11'], port_security=['fa:16:3e:3d:f9:fd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '327a26c8-ebd5-4f42-ad95-3905ab2e1248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5329b351a44765b175b708e70517cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6266d667-96e8-4bfe-9ae9-83e66ad1bb5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86f7a126-6ed6-4c1b-893d-bfec6c017d78, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:58:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.923 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e in datapath 205e3bd3-6e57-4ae5-8b1f-aa1f1099410e bound to our chassis#033[00m
Jan 27 08:58:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.928 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 205e3bd3-6e57-4ae5-8b1f-aa1f1099410e#033[00m
Jan 27 08:58:03 np0005597378 systemd-machined[207425]: New machine qemu-98-instance-00000056.
Jan 27 08:58:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.943 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9e92bd18-b23f-496c-9f16-e9894fd96cf7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.944 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap205e3bd3-61 in ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:58:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.946 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap205e3bd3-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:58:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.946 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44e27ca7-3df2-41d4-99f2-319cc30722ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.947 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[82689ba8-f908-41dc-b636-ff3928bd58ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.949 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:03 np0005597378 systemd[1]: Started Virtual Machine qemu-98-instance-00000056.
Jan 27 08:58:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.972 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[62fe9f27-fd04-4be9-8a77-f3adb63994c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.984 238945 INFO nova.scheduler.client.report [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Deleted allocations for instance 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb#033[00m
Jan 27 08:58:03 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:03Z|00780|binding|INFO|Setting lport e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e ovn-installed in OVS
Jan 27 08:58:03 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:03Z|00781|binding|INFO|Setting lport e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e up in Southbound
Jan 27 08:58:03 np0005597378 nova_compute[238941]: 2026-01-27 13:58:03.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:03.996 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ebbab1-70a6-46a7-bb29-77d5e23c4081]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.026 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5e10e19d-5398-4994-af19-0eaaea176401]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:04 np0005597378 NetworkManager[48904]: <info>  [1769522284.0463] manager: (tap205e3bd3-60): new Veth device (/org/freedesktop/NetworkManager/Devices/329)
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.045 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be4da3cc-96ff-41d4-85db-7131144299bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:04 np0005597378 nova_compute[238941]: 2026-01-27 13:58:04.052 238945 DEBUG oslo_concurrency.lockutils [None req-1e37f388-04b9-4480-be80-344337313f6f 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "4d47c02a-bb54-4f2d-8bdd-456beb3d6deb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.081 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9dbdfd67-c690-4ae6-8197-3a64ea5c0d0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.085 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[024a3a17-ab80-4252-98f4-709a6870dd7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:04 np0005597378 NetworkManager[48904]: <info>  [1769522284.1083] device (tap205e3bd3-60): carrier: link connected
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.120 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6649dc8c-a615-45a2-a74e-4046c7013974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.147 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a36397-682f-49b4-9965-499e20c7fbfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap205e3bd3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:ca:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492373, 'reachable_time': 31214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311357, 'error': None, 'target': 'ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.161 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0ecaf16-9b60-4adf-8e2c-9b31233ccc7e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:ca26'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492373, 'tstamp': 492373}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311358, 'error': None, 'target': 'ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.179 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa14bdb2-a206-43fd-9e8e-2c812fa76a75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap205e3bd3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:ca:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492373, 'reachable_time': 31214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311359, 'error': None, 'target': 'ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.210 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51f651c4-c127-4817-bb56-28860b3233b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.276 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4ab193-64a6-41ae-b7da-0314a1e580b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.277 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap205e3bd3-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.278 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.278 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap205e3bd3-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:04 np0005597378 kernel: tap205e3bd3-60: entered promiscuous mode
Jan 27 08:58:04 np0005597378 NetworkManager[48904]: <info>  [1769522284.2802] manager: (tap205e3bd3-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.282 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap205e3bd3-60, col_values=(('external_ids', {'iface-id': 'dd4d09e4-448b-4eb8-bb43-00ab29e0c33c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:04 np0005597378 nova_compute[238941]: 2026-01-27 13:58:04.279 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:04 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:04Z|00782|binding|INFO|Releasing lport dd4d09e4-448b-4eb8-bb43-00ab29e0c33c from this chassis (sb_readonly=0)
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.303 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/205e3bd3-6e57-4ae5-8b1f-aa1f1099410e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/205e3bd3-6e57-4ae5-8b1f-aa1f1099410e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:58:04 np0005597378 nova_compute[238941]: 2026-01-27 13:58:04.303 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.304 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6132d0a2-15bf-4b51-88bd-ade58735633c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.305 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/205e3bd3-6e57-4ae5-8b1f-aa1f1099410e.pid.haproxy
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 205e3bd3-6e57-4ae5-8b1f-aa1f1099410e
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:58:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:04.306 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'env', 'PROCESS_TAG=haproxy-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/205e3bd3-6e57-4ae5-8b1f-aa1f1099410e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:58:04 np0005597378 nova_compute[238941]: 2026-01-27 13:58:04.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:58:04 np0005597378 nova_compute[238941]: 2026-01-27 13:58:04.440 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522284.4393897, 327a26c8-ebd5-4f42-ad95-3905ab2e1248 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:58:04 np0005597378 nova_compute[238941]: 2026-01-27 13:58:04.440 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] VM Started (Lifecycle Event)#033[00m
Jan 27 08:58:04 np0005597378 nova_compute[238941]: 2026-01-27 13:58:04.458 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:04 np0005597378 nova_compute[238941]: 2026-01-27 13:58:04.462 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522284.4399815, 327a26c8-ebd5-4f42-ad95-3905ab2e1248 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:58:04 np0005597378 nova_compute[238941]: 2026-01-27 13:58:04.462 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:58:04 np0005597378 nova_compute[238941]: 2026-01-27 13:58:04.477 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:04 np0005597378 nova_compute[238941]: 2026-01-27 13:58:04.480 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:58:04 np0005597378 nova_compute[238941]: 2026-01-27 13:58:04.497 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:58:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 222 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.6 MiB/s wr, 79 op/s
Jan 27 08:58:04 np0005597378 podman[311431]: 2026-01-27 13:58:04.691218769 +0000 UTC m=+0.023483830 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:58:04 np0005597378 podman[311431]: 2026-01-27 13:58:04.916960526 +0000 UTC m=+0.249225557 container create a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.004 238945 DEBUG nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.005 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.005 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.005 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.006 238945 DEBUG nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.006 238945 WARNING nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received unexpected event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with vm_state rescued and task_state None.#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.006 238945 DEBUG nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.006 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.006 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.006 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.007 238945 DEBUG nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Processing event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.007 238945 DEBUG nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.007 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.007 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.007 238945 DEBUG oslo_concurrency.lockutils [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.007 238945 DEBUG nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] No waiting events found dispatching network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.008 238945 WARNING nova.compute.manager [req-7382bc33-b4d8-46fb-99dc-71e3332d722d req-1a24b453-6f12-4db3-9b56-90b6f76005d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received unexpected event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e for instance with vm_state building and task_state spawning.#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.008 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.012 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.013 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522285.0134988, 327a26c8-ebd5-4f42-ad95-3905ab2e1248 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.014 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.019 238945 INFO nova.virt.libvirt.driver [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Instance spawned successfully.#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.019 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.041 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.047 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.048 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.048 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.049 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.049 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.049 238945 DEBUG nova.virt.libvirt.driver [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.054 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.099 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:58:05 np0005597378 systemd[1]: Started libpod-conmon-a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d.scope.
Jan 27 08:58:05 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.141 238945 INFO nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Took 7.72 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.142 238945 DEBUG nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85514a151e4429dffcee8834542d802bd14226496fac216ff9c580bb64c81ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.204 238945 INFO nova.compute.manager [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Took 8.84 seconds to build instance.#033[00m
Jan 27 08:58:05 np0005597378 nova_compute[238941]: 2026-01-27 13:58:05.220 238945 DEBUG oslo_concurrency.lockutils [None req-775b68eb-1b0c-4c8b-b918-bf41125e2aec 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:05 np0005597378 podman[311431]: 2026-01-27 13:58:05.225807672 +0000 UTC m=+0.558072733 container init a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:58:05 np0005597378 podman[311431]: 2026-01-27 13:58:05.23141005 +0000 UTC m=+0.563675081 container start a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:58:05 np0005597378 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [NOTICE]   (311451) : New worker (311453) forked
Jan 27 08:58:05 np0005597378 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [NOTICE]   (311451) : Loading success.
Jan 27 08:58:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 213 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Jan 27 08:58:07 np0005597378 nova_compute[238941]: 2026-01-27 13:58:07.114 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:07 np0005597378 nova_compute[238941]: 2026-01-27 13:58:07.628 238945 DEBUG nova.compute.manager [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:07 np0005597378 nova_compute[238941]: 2026-01-27 13:58:07.629 238945 DEBUG nova.compute.manager [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing instance network info cache due to event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:58:07 np0005597378 nova_compute[238941]: 2026-01-27 13:58:07.630 238945 DEBUG oslo_concurrency.lockutils [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:58:07 np0005597378 nova_compute[238941]: 2026-01-27 13:58:07.630 238945 DEBUG oslo_concurrency.lockutils [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:58:07 np0005597378 nova_compute[238941]: 2026-01-27 13:58:07.631 238945 DEBUG nova.network.neutron [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:58:07 np0005597378 nova_compute[238941]: 2026-01-27 13:58:07.709 238945 DEBUG nova.compute.manager [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:07 np0005597378 nova_compute[238941]: 2026-01-27 13:58:07.709 238945 DEBUG nova.compute.manager [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing instance network info cache due to event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:58:07 np0005597378 nova_compute[238941]: 2026-01-27 13:58:07.709 238945 DEBUG oslo_concurrency.lockutils [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:58:07 np0005597378 nova_compute[238941]: 2026-01-27 13:58:07.937 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.014 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.015 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.015 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.015 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.015 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.016 238945 INFO nova.compute.manager [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Terminating instance#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.017 238945 DEBUG nova.compute.manager [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:58:08 np0005597378 kernel: tape4496765-86 (unregistering): left promiscuous mode
Jan 27 08:58:08 np0005597378 NetworkManager[48904]: <info>  [1769522288.1184] device (tape4496765-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:58:08 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:08Z|00783|binding|INFO|Releasing lport e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e from this chassis (sb_readonly=0)
Jan 27 08:58:08 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:08Z|00784|binding|INFO|Setting lport e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e down in Southbound
Jan 27 08:58:08 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:08Z|00785|binding|INFO|Removing iface tape4496765-86 ovn-installed in OVS
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.129 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:08.134 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:f9:fd 10.100.0.11'], port_security=['fa:16:3e:3d:f9:fd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '327a26c8-ebd5-4f42-ad95-3905ab2e1248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5329b351a44765b175b708e70517cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6266d667-96e8-4bfe-9ae9-83e66ad1bb5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86f7a126-6ed6-4c1b-893d-bfec6c017d78, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:58:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:08.136 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e in datapath 205e3bd3-6e57-4ae5-8b1f-aa1f1099410e unbound from our chassis#033[00m
Jan 27 08:58:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:08.137 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:58:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:08.138 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4979f8-fd1d-43c5-a402-e9ae07538410]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:08.139 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e namespace which is not needed anymore#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.153 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:08 np0005597378 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000056.scope: Deactivated successfully.
Jan 27 08:58:08 np0005597378 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000056.scope: Consumed 3.502s CPU time.
Jan 27 08:58:08 np0005597378 systemd-machined[207425]: Machine qemu-98-instance-00000056 terminated.
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.252 238945 INFO nova.virt.libvirt.driver [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Instance destroyed successfully.#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.252 238945 DEBUG nova.objects.instance [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lazy-loading 'resources' on Instance uuid 327a26c8-ebd5-4f42-ad95-3905ab2e1248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.264 238945 DEBUG nova.virt.libvirt.vif [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-202038910',display_name='tempest-ServerPasswordTestJSON-server-202038910',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-202038910',id=86,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:58:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5329b351a44765b175b708e70517cc',ramdisk_id='',reservation_id='r-8c2gjojj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1054775625',owner_user_name='tempest-ServerPasswordTestJSON-1054775625-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:58:06Z,user_data=None,user_id='6d810ffa0b094acc95ac627960258a9f',uuid=327a26c8-ebd5-4f42-ad95-3905ab2e1248,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.265 238945 DEBUG nova.network.os_vif_util [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Converting VIF {"id": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "address": "fa:16:3e:3d:f9:fd", "network": {"id": "205e3bd3-6e57-4ae5-8b1f-aa1f1099410e", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-2070974873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5329b351a44765b175b708e70517cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4496765-86", "ovs_interfaceid": "e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.266 238945 DEBUG nova.network.os_vif_util [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.267 238945 DEBUG os_vif [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.272 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4496765-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.278 238945 INFO os_vif [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f9:fd,bridge_name='br-int',has_traffic_filtering=True,id=e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e,network=Network(205e3bd3-6e57-4ae5-8b1f-aa1f1099410e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4496765-86')#033[00m
Jan 27 08:58:08 np0005597378 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [NOTICE]   (311451) : haproxy version is 2.8.14-c23fe91
Jan 27 08:58:08 np0005597378 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [NOTICE]   (311451) : path to executable is /usr/sbin/haproxy
Jan 27 08:58:08 np0005597378 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [WARNING]  (311451) : Exiting Master process...
Jan 27 08:58:08 np0005597378 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [WARNING]  (311451) : Exiting Master process...
Jan 27 08:58:08 np0005597378 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [ALERT]    (311451) : Current worker (311453) exited with code 143 (Terminated)
Jan 27 08:58:08 np0005597378 neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e[311447]: [WARNING]  (311451) : All workers exited. Exiting... (0)
Jan 27 08:58:08 np0005597378 systemd[1]: libpod-a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d.scope: Deactivated successfully.
Jan 27 08:58:08 np0005597378 podman[311484]: 2026-01-27 13:58:08.427647799 +0000 UTC m=+0.191725522 container died a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:58:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 214 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 173 op/s
Jan 27 08:58:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d-userdata-shm.mount: Deactivated successfully.
Jan 27 08:58:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d85514a151e4429dffcee8834542d802bd14226496fac216ff9c580bb64c81ea-merged.mount: Deactivated successfully.
Jan 27 08:58:08 np0005597378 podman[311484]: 2026-01-27 13:58:08.822890202 +0000 UTC m=+0.586967925 container cleanup a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 08:58:08 np0005597378 systemd[1]: libpod-conmon-a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d.scope: Deactivated successfully.
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.830 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.831 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.857 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.922 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.922 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.931 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:58:08 np0005597378 nova_compute[238941]: 2026-01-27 13:58:08.932 238945 INFO nova.compute.claims [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.059 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:09 np0005597378 podman[311541]: 2026-01-27 13:58:09.076240856 +0000 UTC m=+0.230210076 container remove a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:58:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.084 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0c69f786-a228-47e3-a135-78e712f02ee5]: (4, ('Tue Jan 27 01:58:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e (a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d)\na134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d\nTue Jan 27 01:58:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e (a134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d)\na134a09ecdae5917cbd405eb6f017b05069dc252ed6d4dfd0a622cf725f5255d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.088 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[26a187db-98d8-40e6-a565-c2b69bc7d0f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.089 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap205e3bd3-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:09 np0005597378 kernel: tap205e3bd3-60: left promiscuous mode
Jan 27 08:58:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.109 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1f9f8a88-4867-44b6-ac05-a38b9124411a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.116 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.120 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aff54115-3441-46f3-ac6c-3fb96641aaae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.123 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c1600331-5625-47b7-ae1d-fc348b29d1b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.142 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aebb5cd4-df04-43e3-8d06-15cef7d7aa45]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492364, 'reachable_time': 24099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311557, 'error': None, 'target': 'ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:09 np0005597378 systemd[1]: run-netns-ovnmeta\x2d205e3bd3\x2d6e57\x2d4ae5\x2d8b1f\x2daa1f1099410e.mount: Deactivated successfully.
Jan 27 08:58:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.145 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-205e3bd3-6e57-4ae5-8b1f-aa1f1099410e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:58:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:09.145 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb5b8e0-8a0a-42ef-a70f-bf5a02d1c164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:58:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3110077055' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.625 238945 DEBUG nova.network.neutron [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updated VIF entry in instance network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.626 238945 DEBUG nova.network.neutron [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.635 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.639 238945 DEBUG nova.compute.provider_tree [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.643 238945 DEBUG oslo_concurrency.lockutils [req-eec7a19a-d521-4e4d-9203-364f3383a169 req-45da8ec5-96cf-4c8d-a0b6-125f6fc5c3fd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.644 238945 DEBUG oslo_concurrency.lockutils [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.644 238945 DEBUG nova.network.neutron [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.656 238945 DEBUG nova.scheduler.client.report [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.681 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.682 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.742 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.742 238945 DEBUG nova.network.neutron [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.760 238945 INFO nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.782 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.887 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.888 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:58:09 np0005597378 nova_compute[238941]: 2026-01-27 13:58:09.889 238945 INFO nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Creating image(s)#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.087 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.107 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.126 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.129 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.171 238945 DEBUG nova.policy [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5201d6a9a2c345a5a44f7478f19936be', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c183494c4b924098a08e3761a240af9d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.178 238945 DEBUG nova.compute.manager [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-vif-unplugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.179 238945 DEBUG oslo_concurrency.lockutils [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.179 238945 DEBUG oslo_concurrency.lockutils [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.179 238945 DEBUG oslo_concurrency.lockutils [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.180 238945 DEBUG nova.compute.manager [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] No waiting events found dispatching network-vif-unplugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.180 238945 DEBUG nova.compute.manager [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-vif-unplugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.180 238945 DEBUG nova.compute.manager [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.180 238945 DEBUG oslo_concurrency.lockutils [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.181 238945 DEBUG oslo_concurrency.lockutils [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.181 238945 DEBUG oslo_concurrency.lockutils [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.181 238945 DEBUG nova.compute.manager [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] No waiting events found dispatching network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.182 238945 WARNING nova.compute.manager [req-947dc022-b462-4f69-9528-cb41b2ecfca7 req-c4c5bcf4-048c-4a07-85a5-56db39931bff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received unexpected event network-vif-plugged-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.200 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.201 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.202 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.202 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.271 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.274 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.417 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522275.4166574, 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.418 238945 INFO nova.compute.manager [-] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:58:10 np0005597378 nova_compute[238941]: 2026-01-27 13:58:10.441 238945 DEBUG nova.compute.manager [None req-252665ff-8f16-40b1-8e34-5b995862ea24 - - - - - -] [instance: 4d47c02a-bb54-4f2d-8bdd-456beb3d6deb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 198 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 232 op/s
Jan 27 08:58:11 np0005597378 nova_compute[238941]: 2026-01-27 13:58:11.125 238945 DEBUG nova.network.neutron [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Successfully created port: ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 08:58:11 np0005597378 nova_compute[238941]: 2026-01-27 13:58:11.708 238945 DEBUG nova.network.neutron [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updated VIF entry in instance network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 08:58:11 np0005597378 nova_compute[238941]: 2026-01-27 13:58:11.709 238945 DEBUG nova.network.neutron [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:58:11 np0005597378 nova_compute[238941]: 2026-01-27 13:58:11.808 238945 DEBUG oslo_concurrency.lockutils [req-4277ab5a-d2a4-45e0-982c-8ed7ceb295c6 req-18dcc054-37f7-4499-b521-59fc2fbcb347 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.099 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.825s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.159 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] resizing rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.204 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:58:12 np0005597378 NetworkManager[48904]: <info>  [1769522292.2082] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Jan 27 08:58:12 np0005597378 NetworkManager[48904]: <info>  [1769522292.2094] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.228 238945 INFO nova.virt.libvirt.driver [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Deleting instance files /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248_del
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.228 238945 INFO nova.virt.libvirt.driver [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Deletion of /var/lib/nova/instances/327a26c8-ebd5-4f42-ad95-3905ab2e1248_del complete
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.265 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.328 238945 DEBUG nova.objects.instance [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'migration_context' on Instance uuid b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.451 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.452 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Ensure instance console log exists: /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.452 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.453 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.453 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:58:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 198 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.4 MiB/s wr, 189 op/s
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.533 238945 INFO nova.compute.manager [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Took 4.52 seconds to destroy the instance on the hypervisor.
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.534 238945 DEBUG oslo.service.loopingcall [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.534 238945 DEBUG nova.compute.manager [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 08:58:12 np0005597378 nova_compute[238941]: 2026-01-27 13:58:12.535 238945 DEBUG nova.network.neutron [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 08:58:13 np0005597378 nova_compute[238941]: 2026-01-27 13:58:13.065 238945 DEBUG nova.network.neutron [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Successfully updated port: ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 08:58:13 np0005597378 nova_compute[238941]: 2026-01-27 13:58:13.144 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "refresh_cache-b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:58:13 np0005597378 nova_compute[238941]: 2026-01-27 13:58:13.145 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquired lock "refresh_cache-b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:58:13 np0005597378 nova_compute[238941]: 2026-01-27 13:58:13.145 238945 DEBUG nova.network.neutron [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 08:58:13 np0005597378 nova_compute[238941]: 2026-01-27 13:58:13.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:58:13 np0005597378 nova_compute[238941]: 2026-01-27 13:58:13.389 238945 DEBUG nova.compute.manager [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:58:13 np0005597378 nova_compute[238941]: 2026-01-27 13:58:13.390 238945 DEBUG nova.compute.manager [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing instance network info cache due to event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 08:58:13 np0005597378 nova_compute[238941]: 2026-01-27 13:58:13.391 238945 DEBUG oslo_concurrency.lockutils [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:58:13 np0005597378 nova_compute[238941]: 2026-01-27 13:58:13.391 238945 DEBUG oslo_concurrency.lockutils [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:58:13 np0005597378 nova_compute[238941]: 2026-01-27 13:58:13.391 238945 DEBUG nova.network.neutron [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 08:58:13 np0005597378 nova_compute[238941]: 2026-01-27 13:58:13.397 238945 DEBUG nova.network.neutron [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 08:58:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Jan 27 08:58:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Jan 27 08:58:13 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.331 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:58:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:14.330 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:58:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:14.331 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.396 238945 DEBUG nova.network.neutron [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.411 238945 INFO nova.compute.manager [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Took 1.88 seconds to deallocate network for instance.
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.456 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.457 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:58:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 199 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.735 238945 DEBUG oslo_concurrency.processutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.899 238945 DEBUG nova.network.neutron [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Updating instance_info_cache with network_info: [{"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.926 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Releasing lock "refresh_cache-b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.926 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Instance network_info: |[{"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.928 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Start _get_guest_xml network_info=[{"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.939 238945 WARNING nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.947 238945 DEBUG nova.virt.libvirt.host [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.948 238945 DEBUG nova.virt.libvirt.host [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.951 238945 DEBUG nova.virt.libvirt.host [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.951 238945 DEBUG nova.virt.libvirt.host [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.952 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.952 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.953 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.953 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.954 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.954 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.954 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.954 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.954 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.955 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.955 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.955 238945 DEBUG nova.virt.hardware [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 08:58:14 np0005597378 nova_compute[238941]: 2026-01-27 13:58:14.958 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:58:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:58:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/679084035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.351 238945 DEBUG oslo_concurrency.processutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.357 238945 DEBUG nova.compute.provider_tree [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.379 238945 DEBUG nova.scheduler.client.report [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.410 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.447 238945 INFO nova.scheduler.client.report [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Deleted allocations for instance 327a26c8-ebd5-4f42-ad95-3905ab2e1248
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.526 238945 DEBUG nova.network.neutron [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updated VIF entry in instance network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.527 238945 DEBUG nova.network.neutron [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:58:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:58:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2228747760' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.593 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.615 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.618 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.650 238945 DEBUG oslo_concurrency.lockutils [req-b8930688-b40c-4350-9a59-2cbd7c12ee36 req-b41aef3c-000c-4330-8e44-871c569e3004 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.653 238945 DEBUG oslo_concurrency.lockutils [None req-f70b6847-be95-4f28-98c2-111333cdf8da 6d810ffa0b094acc95ac627960258a9f ca5329b351a44765b175b708e70517cc - - default default] Lock "327a26c8-ebd5-4f42-ad95-3905ab2e1248" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.656 238945 DEBUG nova.compute.manager [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received event network-changed-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.657 238945 DEBUG nova.compute.manager [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Refreshing instance network info cache due to event network-changed-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.657 238945 DEBUG oslo_concurrency.lockutils [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.657 238945 DEBUG oslo_concurrency.lockutils [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:58:15 np0005597378 nova_compute[238941]: 2026-01-27 13:58:15.658 238945 DEBUG nova.network.neutron [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Refreshing network info cache for port ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:58:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Jan 27 08:58:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Jan 27 08:58:15 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Jan 27 08:58:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:58:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2513445940' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.180 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.182 238945 DEBUG nova.virt.libvirt.vif [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:58:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-322268330',display_name='tempest-DeleteServersTestJSON-server-322268330',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-322268330',id=87,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-azf7gc5o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-
1703372962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:58:09Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=b562c8b8-55ba-4f30-b87c-2a7d87bf4a87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.182 238945 DEBUG nova.network.os_vif_util [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.183 238945 DEBUG nova.network.os_vif_util [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.184 238945 DEBUG nova.objects.instance [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'pci_devices' on Instance uuid b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.201 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  <uuid>b562c8b8-55ba-4f30-b87c-2a7d87bf4a87</uuid>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  <name>instance-00000057</name>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <nova:name>tempest-DeleteServersTestJSON-server-322268330</nova:name>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:58:14</nova:creationTime>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:        <nova:user uuid="5201d6a9a2c345a5a44f7478f19936be">tempest-DeleteServersTestJSON-1703372962-project-member</nova:user>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:        <nova:project uuid="c183494c4b924098a08e3761a240af9d">tempest-DeleteServersTestJSON-1703372962</nova:project>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:        <nova:port uuid="ad22e8f6-95c6-4527-ac19-dfc0ae20aed4">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <entry name="serial">b562c8b8-55ba-4f30-b87c-2a7d87bf4a87</entry>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <entry name="uuid">b562c8b8-55ba-4f30-b87c-2a7d87bf4a87</entry>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk.config">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:28:4c:8e"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <target dev="tapad22e8f6-95"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/console.log" append="off"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:58:16 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:58:16 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:58:16 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:58:16 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.207 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Preparing to wait for external event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.207 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.207 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.208 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.209 238945 DEBUG nova.virt.libvirt.vif [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:58:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-322268330',display_name='tempest-DeleteServersTestJSON-server-322268330',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-322268330',id=87,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-azf7gc5o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServer
sTestJSON-1703372962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:58:09Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=b562c8b8-55ba-4f30-b87c-2a7d87bf4a87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.209 238945 DEBUG nova.network.os_vif_util [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.210 238945 DEBUG nova.network.os_vif_util [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.211 238945 DEBUG os_vif [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.211 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.212 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.213 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.216 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.216 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad22e8f6-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.217 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapad22e8f6-95, col_values=(('external_ids', {'iface-id': 'ad22e8f6-95c6-4527-ac19-dfc0ae20aed4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:4c:8e', 'vm-uuid': 'b562c8b8-55ba-4f30-b87c-2a7d87bf4a87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.218 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:16 np0005597378 NetworkManager[48904]: <info>  [1769522296.2194] manager: (tapad22e8f6-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.223 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.224 238945 INFO os_vif [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95')#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.290 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.291 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.291 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] No VIF found with MAC fa:16:3e:28:4c:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.292 238945 INFO nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Using config drive#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.313 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 213 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 202 op/s
Jan 27 08:58:16 np0005597378 podman[311945]: 2026-01-27 13:58:16.834012752 +0000 UTC m=+0.156418601 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:58:16 np0005597378 podman[311945]: 2026-01-27 13:58:16.94931105 +0000 UTC m=+0.271716879 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.954 238945 DEBUG nova.compute.manager [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.957 238945 DEBUG nova.compute.manager [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing instance network info cache due to event network-changed-5e99824f-f686-4cd9-a3dd-e1e0690fc68f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.957 238945 DEBUG oslo_concurrency.lockutils [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.957 238945 DEBUG oslo_concurrency.lockutils [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:58:16 np0005597378 nova_compute[238941]: 2026-01-27 13:58:16.958 238945 DEBUG nova.network.neutron [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Refreshing network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.011 238945 INFO nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Creating config drive at /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/disk.config#033[00m
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.017 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7srpq7hv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:58:17
Jan 27 08:58:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:58:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:58:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['backups', '.rgw.root', 'images', '.mgr', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 27 08:58:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.158 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7srpq7hv" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.187 238945 DEBUG nova.storage.rbd_utils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] rbd image b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.192 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/disk.config b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.276 238945 DEBUG nova.network.neutron [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Updated VIF entry in instance network info cache for port ad22e8f6-95c6-4527-ac19-dfc0ae20aed4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.277 238945 DEBUG nova.network.neutron [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Updating instance_info_cache with network_info: [{"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.301 238945 DEBUG oslo_concurrency.lockutils [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.302 238945 DEBUG nova.compute.manager [req-8cd1b7ba-42a6-4a34-9519-37d9177bc79e req-bd223379-33bc-4b4f-9963-39b8807fae89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Received event network-vif-deleted-e4496765-869c-4d1f-a0eb-9f8c1d6b1e2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.651030) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522297651259, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 638, "num_deletes": 255, "total_data_size": 688830, "memory_usage": 700776, "flush_reason": "Manual Compaction"}
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522297682938, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 682224, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33014, "largest_seqno": 33651, "table_properties": {"data_size": 678815, "index_size": 1253, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7741, "raw_average_key_size": 18, "raw_value_size": 671994, "raw_average_value_size": 1627, "num_data_blocks": 56, "num_entries": 413, "num_filter_entries": 413, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522253, "oldest_key_time": 1769522253, "file_creation_time": 1769522297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 31951 microseconds, and 2587 cpu microseconds.
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.682986) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 682224 bytes OK
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.683011) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.690607) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.690633) EVENT_LOG_v1 {"time_micros": 1769522297690626, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.690651) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 685381, prev total WAL file size 685831, number of live WAL files 2.
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.691092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303037' seq:72057594037927935, type:22 .. '6C6F676D0031323538' seq:0, type:0; will stop at (end)
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(666KB)], [71(8052KB)]
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522297691114, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 8927764, "oldest_snapshot_seqno": -1}
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.707 238945 DEBUG oslo_concurrency.processutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/disk.config b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.708 238945 INFO nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Deleting local config drive /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87/disk.config because it was imported into RBD.#033[00m
Jan 27 08:58:17 np0005597378 kernel: tapad22e8f6-95: entered promiscuous mode
Jan 27 08:58:17 np0005597378 NetworkManager[48904]: <info>  [1769522297.7775] manager: (tapad22e8f6-95): new Tun device (/org/freedesktop/NetworkManager/Devices/334)
Jan 27 08:58:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:17Z|00786|binding|INFO|Claiming lport ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 for this chassis.
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:17Z|00787|binding|INFO|ad22e8f6-95c6-4527-ac19-dfc0ae20aed4: Claiming fa:16:3e:28:4c:8e 10.100.0.12
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.782 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:4c:8e 10.100.0.12'], port_security=['fa:16:3e:28:4c:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b562c8b8-55ba-4f30-b87c-2a7d87bf4a87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c183494c4b924098a08e3761a240af9d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd816528e-b7eb-4ffc-9235-0b8dcdb029e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0290b5c0-79de-4044-be1f-027446a5556e, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.783 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 in datapath 67e37534-4454-4424-9d8a-edc9ec7fdcca bound to our chassis#033[00m
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.784 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67e37534-4454-4424-9d8a-edc9ec7fdcca#033[00m
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.799 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e30be7-1f24-4447-9a1f-61fe82b8f925]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.800 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67e37534-41 in ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.803 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67e37534-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.803 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7b276ce8-d05d-40e2-b02e-e37210d65973]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.804 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f1725bd4-d5b6-4c65-8a11-0649d0bca082]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:17Z|00788|binding|INFO|Setting lport ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 ovn-installed in OVS
Jan 27 08:58:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:17Z|00789|binding|INFO|Setting lport ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 up in Southbound
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:17 np0005597378 systemd-udevd[312187]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:58:17 np0005597378 nova_compute[238941]: 2026-01-27 13:58:17.820 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.818 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[02b892a0-0ec8-4b9b-837c-b51d3acfcb59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:58:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:58:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:58:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:58:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:58:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:58:17 np0005597378 systemd-machined[207425]: New machine qemu-99-instance-00000057.
Jan 27 08:58:17 np0005597378 NetworkManager[48904]: <info>  [1769522297.8310] device (tapad22e8f6-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:58:17 np0005597378 NetworkManager[48904]: <info>  [1769522297.8315] device (tapad22e8f6-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:58:17 np0005597378 systemd[1]: Started Virtual Machine qemu-99-instance-00000057.
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.837 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[779f2d0a-9eb4-4fa2-9abf-5807e19bb65a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.867 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[880e40d4-68f6-4c4e-987d-3afc3b903474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:17 np0005597378 systemd-udevd[312190]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.872 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0f2921-5ab4-40bd-8ded-f8328f50efa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:17 np0005597378 NetworkManager[48904]: <info>  [1769522297.8742] manager: (tap67e37534-40): new Veth device (/org/freedesktop/NetworkManager/Devices/335)
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5773 keys, 8807595 bytes, temperature: kUnknown
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522297888440, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8807595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8768052, "index_size": 24009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 146517, "raw_average_key_size": 25, "raw_value_size": 8663607, "raw_average_value_size": 1500, "num_data_blocks": 970, "num_entries": 5773, "num_filter_entries": 5773, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 27 08:58:17 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.906 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[742226c1-2655-41a8-a1ea-869dd93748ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.909 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7045cb18-e5b6-4516-81f8-689693056134]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:17 np0005597378 NetworkManager[48904]: <info>  [1769522297.9341] device (tap67e37534-40): carrier: link connected
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.939 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[39523db7-f7ca-40df-849f-d75d8ee06c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.959 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef8dc9f-19da-4cc5-bc18-772d7eda187b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e37534-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:85:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493755, 'reachable_time': 32823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312218, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.977 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c720c415-5249-4635-9f95-054c6133f765]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:8594'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493755, 'tstamp': 493755}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312219, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:17.996 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e519d1aa-8354-4212-859e-e4cddfcaa846]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e37534-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:85:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493755, 'reachable_time': 32823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312220, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.889771) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8807595 bytes
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.012711) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.0 rd, 44.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 7.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(26.0) write-amplify(12.9) OK, records in: 6295, records dropped: 522 output_compression: NoCompression
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.012750) EVENT_LOG_v1 {"time_micros": 1769522298012735, "job": 40, "event": "compaction_finished", "compaction_time_micros": 198436, "compaction_time_cpu_micros": 21735, "output_level": 6, "num_output_files": 1, "total_output_size": 8807595, "num_input_records": 6295, "num_output_records": 5773, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522298013386, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522298015145, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:17.691034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.015260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.015266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.015267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.015271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-13:58:18.015274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.028 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[35472fe6-54a5-46d8-a9dc-8aab00a6ea94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:58:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:58:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:58:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:58:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:58:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:58:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:58:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:58:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:58:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:58:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.103 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9dd833a-44c4-4a77-8eee-9b1f49a2f086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.104 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e37534-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.105 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.105 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67e37534-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:18 np0005597378 NetworkManager[48904]: <info>  [1769522298.1077] manager: (tap67e37534-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Jan 27 08:58:18 np0005597378 kernel: tap67e37534-40: entered promiscuous mode
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.110 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67e37534-40, col_values=(('external_ids', {'iface-id': '626d013d-3067-4c30-b108-52be84db907e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:18 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:18Z|00790|binding|INFO|Releasing lport 626d013d-3067-4c30-b108-52be84db907e from this chassis (sb_readonly=0)
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.129 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.130 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a50cb2-0ab4-4a3e-b7a0-ba07be0ac8d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.131 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/67e37534-4454-4424-9d8a-edc9ec7fdcca.pid.haproxy
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 67e37534-4454-4424-9d8a-edc9ec7fdcca
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:58:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:18.133 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'env', 'PROCESS_TAG=haproxy-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67e37534-4454-4424-9d8a-edc9ec7fdcca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:58:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:58:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 213 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 613 KiB/s rd, 2.7 MiB/s wr, 144 op/s
Jan 27 08:58:18 np0005597378 podman[312336]: 2026-01-27 13:58:18.485133549 +0000 UTC m=+0.020578304 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.587 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522298.5869346, b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.588 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] VM Started (Lifecycle Event)#033[00m
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.617 238945 DEBUG nova.network.neutron [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updated VIF entry in instance network info cache for port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.618 238945 DEBUG nova.network.neutron [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [{"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.638 238945 DEBUG oslo_concurrency.lockutils [req-bdd5cb3d-26cf-4f82-a211-b0eec5f7f300 req-47b13341-1104-4503-ab71-9ca16f0321e5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9a2cac55-b28d-4d71-b091-6a3c39cdfe14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.641 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.646 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522298.587287, b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.646 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] VM Paused (Lifecycle Event)
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.671 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.676 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:58:18 np0005597378 nova_compute[238941]: 2026-01-27 13:58:18.698 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.072 238945 DEBUG nova.compute.manager [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.072 238945 DEBUG oslo_concurrency.lockutils [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.073 238945 DEBUG oslo_concurrency.lockutils [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.073 238945 DEBUG oslo_concurrency.lockutils [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.074 238945 DEBUG nova.compute.manager [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Processing event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.074 238945 DEBUG nova.compute.manager [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.075 238945 DEBUG oslo_concurrency.lockutils [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.075 238945 DEBUG oslo_concurrency.lockutils [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.075 238945 DEBUG oslo_concurrency.lockutils [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.076 238945 DEBUG nova.compute.manager [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] No waiting events found dispatching network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.076 238945 WARNING nova.compute.manager [req-6a24653a-1a05-4028-a354-62bc548b8af4 req-48844acc-7760-461f-ab14-cd6727e10ef3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received unexpected event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 for instance with vm_state building and task_state spawning.
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.077 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.081 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522299.0812929, b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.082 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] VM Resumed (Lifecycle Event)
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.083 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.086 238945 INFO nova.virt.libvirt.driver [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Instance spawned successfully.
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.086 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.102 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.108 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.111 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.111 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.111 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.112 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.112 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.112 238945 DEBUG nova.virt.libvirt.driver [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 08:58:19 np0005597378 podman[312336]: 2026-01-27 13:58:19.115930666 +0000 UTC m=+0.651375391 container create 1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.160 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.192 238945 INFO nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Took 9.30 seconds to spawn the instance on the hypervisor.
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.193 238945 DEBUG nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.254 238945 INFO nova.compute.manager [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Took 10.35 seconds to build instance.
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:58:19 np0005597378 nova_compute[238941]: 2026-01-27 13:58:19.279 238945 DEBUG oslo_concurrency.lockutils [None req-43858b20-488d-4b04-bc67-a1c66ff0c9fe 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:58:19 np0005597378 systemd[1]: Started libpod-conmon-1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a.scope.
Jan 27 08:58:19 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:58:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6639664b159e8c234da186332bf76f55790b65232d261ef6282b681650fb56d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:58:19 np0005597378 podman[312336]: 2026-01-27 13:58:19.417396847 +0000 UTC m=+0.952841602 container init 1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 08:58:19 np0005597378 podman[312336]: 2026-01-27 13:58:19.423617352 +0000 UTC m=+0.959062087 container start 1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:58:19 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [NOTICE]   (312390) : New worker (312392) forked
Jan 27 08:58:19 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [NOTICE]   (312390) : Loading success.
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:58:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:58:19 np0005597378 podman[312463]: 2026-01-27 13:58:19.880106267 +0000 UTC m=+0.060205837 container create d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_clarke, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 08:58:19 np0005597378 podman[312463]: 2026-01-27 13:58:19.843213855 +0000 UTC m=+0.023313415 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:58:19 np0005597378 systemd[1]: Started libpod-conmon-d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95.scope.
Jan 27 08:58:19 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:58:20 np0005597378 podman[312463]: 2026-01-27 13:58:20.008849729 +0000 UTC m=+0.188949289 container init d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:58:20 np0005597378 podman[312463]: 2026-01-27 13:58:20.018953245 +0000 UTC m=+0.199052785 container start d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_clarke, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 08:58:20 np0005597378 brave_clarke[312480]: 167 167
Jan 27 08:58:20 np0005597378 systemd[1]: libpod-d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95.scope: Deactivated successfully.
Jan 27 08:58:20 np0005597378 conmon[312480]: conmon d5b3f3a46e7b00c4f58f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95.scope/container/memory.events
Jan 27 08:58:20 np0005597378 podman[312463]: 2026-01-27 13:58:20.038540611 +0000 UTC m=+0.218640151 container attach d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:58:20 np0005597378 podman[312463]: 2026-01-27 13:58:20.038848188 +0000 UTC m=+0.218947738 container died d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:58:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cbd907ca83a8c9d435e0f1d320d31d3c1690a400c1927ff734cc7f66686a74c8-merged.mount: Deactivated successfully.
Jan 27 08:58:20 np0005597378 podman[312463]: 2026-01-27 13:58:20.269513475 +0000 UTC m=+0.449613015 container remove d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 08:58:20 np0005597378 systemd[1]: libpod-conmon-d5b3f3a46e7b00c4f58f61bb60eee03dcad4dfe8830fc1a774b96e6f7c455f95.scope: Deactivated successfully.
Jan 27 08:58:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:58:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:58:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:58:20 np0005597378 podman[312503]: 2026-01-27 13:58:20.48529489 +0000 UTC m=+0.072125731 container create 354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:58:20 np0005597378 nova_compute[238941]: 2026-01-27 13:58:20.489 238945 DEBUG nova.objects.instance [None req-fbfa8cc9-c399-4e81-aa27-100e14273f6b 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'pci_devices' on Instance uuid b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:58:20 np0005597378 nova_compute[238941]: 2026-01-27 13:58:20.509 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522300.5093212, b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 08:58:20 np0005597378 nova_compute[238941]: 2026-01-27 13:58:20.509 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] VM Paused (Lifecycle Event)
Jan 27 08:58:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 227 op/s
Jan 27 08:58:20 np0005597378 nova_compute[238941]: 2026-01-27 13:58:20.526 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 08:58:20 np0005597378 nova_compute[238941]: 2026-01-27 13:58:20.530 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 08:58:20 np0005597378 podman[312503]: 2026-01-27 13:58:20.443296613 +0000 UTC m=+0.030127454 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:58:20 np0005597378 systemd[1]: Started libpod-conmon-354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e.scope.
Jan 27 08:58:20 np0005597378 nova_compute[238941]: 2026-01-27 13:58:20.550 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 27 08:58:20 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:58:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4d294ed975da37b71e18c50c0310cf1db5b2824111a0d972a5bca295ac22fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4d294ed975da37b71e18c50c0310cf1db5b2824111a0d972a5bca295ac22fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4d294ed975da37b71e18c50c0310cf1db5b2824111a0d972a5bca295ac22fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4d294ed975da37b71e18c50c0310cf1db5b2824111a0d972a5bca295ac22fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4d294ed975da37b71e18c50c0310cf1db5b2824111a0d972a5bca295ac22fc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:20 np0005597378 podman[312503]: 2026-01-27 13:58:20.755385905 +0000 UTC m=+0.342216776 container init 354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:58:20 np0005597378 podman[312503]: 2026-01-27 13:58:20.76240386 +0000 UTC m=+0.349234701 container start 354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:58:20 np0005597378 podman[312503]: 2026-01-27 13:58:20.78899982 +0000 UTC m=+0.375830661 container attach 354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:58:20 np0005597378 kernel: tapad22e8f6-95 (unregistering): left promiscuous mode
Jan 27 08:58:20 np0005597378 NetworkManager[48904]: <info>  [1769522300.8240] device (tapad22e8f6-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:58:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:20Z|00791|binding|INFO|Releasing lport ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 from this chassis (sb_readonly=0)
Jan 27 08:58:20 np0005597378 nova_compute[238941]: 2026-01-27 13:58:20.834 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:20Z|00792|binding|INFO|Setting lport ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 down in Southbound
Jan 27 08:58:20 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:20Z|00793|binding|INFO|Removing iface tapad22e8f6-95 ovn-installed in OVS
Jan 27 08:58:20 np0005597378 nova_compute[238941]: 2026-01-27 13:58:20.836 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:20.841 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:4c:8e 10.100.0.12'], port_security=['fa:16:3e:28:4c:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b562c8b8-55ba-4f30-b87c-2a7d87bf4a87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c183494c4b924098a08e3761a240af9d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd816528e-b7eb-4ffc-9235-0b8dcdb029e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0290b5c0-79de-4044-be1f-027446a5556e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:58:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:20.843 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 in datapath 67e37534-4454-4424-9d8a-edc9ec7fdcca unbound from our chassis#033[00m
Jan 27 08:58:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:20.844 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67e37534-4454-4424-9d8a-edc9ec7fdcca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:58:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:20.845 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[717b333a-ea88-4bc3-9392-f63b839af34a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:20.845 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca namespace which is not needed anymore#033[00m
Jan 27 08:58:20 np0005597378 nova_compute[238941]: 2026-01-27 13:58:20.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:20 np0005597378 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000057.scope: Deactivated successfully.
Jan 27 08:58:20 np0005597378 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000057.scope: Consumed 2.058s CPU time.
Jan 27 08:58:20 np0005597378 systemd-machined[207425]: Machine qemu-99-instance-00000057 terminated.
Jan 27 08:58:20 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [NOTICE]   (312390) : haproxy version is 2.8.14-c23fe91
Jan 27 08:58:20 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [NOTICE]   (312390) : path to executable is /usr/sbin/haproxy
Jan 27 08:58:20 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [WARNING]  (312390) : Exiting Master process...
Jan 27 08:58:20 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [WARNING]  (312390) : Exiting Master process...
Jan 27 08:58:20 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [ALERT]    (312390) : Current worker (312392) exited with code 143 (Terminated)
Jan 27 08:58:20 np0005597378 neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca[312386]: [WARNING]  (312390) : All workers exited. Exiting... (0)
Jan 27 08:58:20 np0005597378 systemd[1]: libpod-1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a.scope: Deactivated successfully.
Jan 27 08:58:20 np0005597378 nova_compute[238941]: 2026-01-27 13:58:20.987 238945 DEBUG nova.compute.manager [None req-fbfa8cc9-c399-4e81-aa27-100e14273f6b 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:20 np0005597378 podman[312550]: 2026-01-27 13:58:20.988245769 +0000 UTC m=+0.057288381 container died 1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 08:58:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a-userdata-shm.mount: Deactivated successfully.
Jan 27 08:58:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-6639664b159e8c234da186332bf76f55790b65232d261ef6282b681650fb56d2-merged.mount: Deactivated successfully.
Jan 27 08:58:21 np0005597378 podman[312550]: 2026-01-27 13:58:21.126744307 +0000 UTC m=+0.195786899 container cleanup 1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 27 08:58:21 np0005597378 systemd[1]: libpod-conmon-1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a.scope: Deactivated successfully.
Jan 27 08:58:21 np0005597378 funny_wing[312521]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:58:21 np0005597378 funny_wing[312521]: --> All data devices are unavailable
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:21 np0005597378 systemd[1]: libpod-354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e.scope: Deactivated successfully.
Jan 27 08:58:21 np0005597378 podman[312503]: 2026-01-27 13:58:21.24221768 +0000 UTC m=+0.829048521 container died 354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:58:21 np0005597378 podman[312601]: 2026-01-27 13:58:21.42065589 +0000 UTC m=+0.271426151 container remove 1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:58:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.426 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[769be594-1171-4ebc-8c5f-5eb9280712c4]: (4, ('Tue Jan 27 01:58:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a)\n1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a\nTue Jan 27 01:58:21 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca (1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a)\n1c2be479dab0b8d1bb5f4e7ba4e2ea1e1cfb08332193ebd6e923623ac415b70a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.428 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ca199fd6-ecad-4ac0-9973-60ef32eeeff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.429 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e37534-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.431 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:21 np0005597378 kernel: tap67e37534-40: left promiscuous mode
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.436 238945 DEBUG nova.compute.manager [req-32e98c06-e4be-4c3c-ba66-a1e073eb4d97 req-0dfeae4d-32fb-4f46-93cf-8770204435c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received event network-vif-unplugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.437 238945 DEBUG oslo_concurrency.lockutils [req-32e98c06-e4be-4c3c-ba66-a1e073eb4d97 req-0dfeae4d-32fb-4f46-93cf-8770204435c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.437 238945 DEBUG oslo_concurrency.lockutils [req-32e98c06-e4be-4c3c-ba66-a1e073eb4d97 req-0dfeae4d-32fb-4f46-93cf-8770204435c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.437 238945 DEBUG oslo_concurrency.lockutils [req-32e98c06-e4be-4c3c-ba66-a1e073eb4d97 req-0dfeae4d-32fb-4f46-93cf-8770204435c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.437 238945 DEBUG nova.compute.manager [req-32e98c06-e4be-4c3c-ba66-a1e073eb4d97 req-0dfeae4d-32fb-4f46-93cf-8770204435c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] No waiting events found dispatching network-vif-unplugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.438 238945 WARNING nova.compute.manager [req-32e98c06-e4be-4c3c-ba66-a1e073eb4d97 req-0dfeae4d-32fb-4f46-93cf-8770204435c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received unexpected event network-vif-unplugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 for instance with vm_state suspended and task_state None.#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.450 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ac4d294ed975da37b71e18c50c0310cf1db5b2824111a0d972a5bca295ac22fc-merged.mount: Deactivated successfully.
Jan 27 08:58:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.455 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fea50fa4-868d-46d1-8046-73641385598b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.472 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60545f7d-4557-4b2a-aa60-9db0fd9c0455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.473 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb45354-ccfb-4c34-b56f-99bb17866a03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.495 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[13016f50-f6dd-46e0-9a19-a0d9b2125e3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493748, 'reachable_time': 19119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312635, 'error': None, 'target': 'ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.498 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67e37534-4454-4424-9d8a-edc9ec7fdcca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:58:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.498 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4726ea28-9cd6-43ca-8c9b-9007d35c90da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:21 np0005597378 systemd[1]: run-netns-ovnmeta\x2d67e37534\x2d4454\x2d4424\x2d9d8a\x2dedc9ec7fdcca.mount: Deactivated successfully.
Jan 27 08:58:21 np0005597378 podman[312503]: 2026-01-27 13:58:21.556745025 +0000 UTC m=+1.143575866 container remove 354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:58:21 np0005597378 systemd[1]: libpod-conmon-354bf0d5092ec71a3f416e5228a49ef4af651d597cbc48409a974b290af3885e.scope: Deactivated successfully.
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.632 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.632 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.633 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.633 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.633 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.634 238945 INFO nova.compute.manager [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Terminating instance#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.635 238945 DEBUG nova.compute.manager [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:58:21 np0005597378 kernel: tap5e99824f-f6 (unregistering): left promiscuous mode
Jan 27 08:58:21 np0005597378 NetworkManager[48904]: <info>  [1769522301.8266] device (tap5e99824f-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:58:21 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:21Z|00794|binding|INFO|Releasing lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f from this chassis (sb_readonly=0)
Jan 27 08:58:21 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:21Z|00795|binding|INFO|Setting lport 5e99824f-f686-4cd9-a3dd-e1e0690fc68f down in Southbound
Jan 27 08:58:21 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:21Z|00796|binding|INFO|Removing iface tap5e99824f-f6 ovn-installed in OVS
Jan 27 08:58:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.840 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:02:e1 10.100.0.11'], port_security=['fa:16:3e:8d:02:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9a2cac55-b28d-4d71-b091-6a3c39cdfe14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44ebe489-75a7-40e8-9613-68b01eb29b28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71ad88aa5cfe42bdb12bd409ad2842de', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7f1b77e0-421f-4420-8a9a-51183baa7071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38bf5827-4194-4494-af88-b3e7b8a5e805, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5e99824f-f686-4cd9-a3dd-e1e0690fc68f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:58:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.841 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5e99824f-f686-4cd9-a3dd-e1e0690fc68f in datapath 44ebe489-75a7-40e8-9613-68b01eb29b28 unbound from our chassis#033[00m
Jan 27 08:58:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.842 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 44ebe489-75a7-40e8-9613-68b01eb29b28 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 27 08:58:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:21.843 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[53955f4d-c08f-475e-9f1a-63c5e0e0fca7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:21 np0005597378 nova_compute[238941]: 2026-01-27 13:58:21.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:21 np0005597378 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000054.scope: Deactivated successfully.
Jan 27 08:58:21 np0005597378 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000054.scope: Consumed 13.419s CPU time.
Jan 27 08:58:21 np0005597378 systemd-machined[207425]: Machine qemu-97-instance-00000054 terminated.
Jan 27 08:58:22 np0005597378 podman[312704]: 2026-01-27 13:58:22.042950104 +0000 UTC m=+0.075260784 container create 611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:58:22 np0005597378 NetworkManager[48904]: <info>  [1769522302.0545] manager: (tap5e99824f-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/337)
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.076 238945 INFO nova.virt.libvirt.driver [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Instance destroyed successfully.#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.077 238945 DEBUG nova.objects.instance [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lazy-loading 'resources' on Instance uuid 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:58:22 np0005597378 podman[312704]: 2026-01-27 13:58:21.98699956 +0000 UTC m=+0.019310220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.091 238945 DEBUG nova.compute.manager [req-eee83406-f617-41f3-a188-e6c457fbf6d4 req-76e553e0-497b-4c6d-b358-c44ee588c8e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-unplugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.092 238945 DEBUG oslo_concurrency.lockutils [req-eee83406-f617-41f3-a188-e6c457fbf6d4 req-76e553e0-497b-4c6d-b358-c44ee588c8e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.092 238945 DEBUG oslo_concurrency.lockutils [req-eee83406-f617-41f3-a188-e6c457fbf6d4 req-76e553e0-497b-4c6d-b358-c44ee588c8e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.092 238945 DEBUG oslo_concurrency.lockutils [req-eee83406-f617-41f3-a188-e6c457fbf6d4 req-76e553e0-497b-4c6d-b358-c44ee588c8e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.092 238945 DEBUG nova.compute.manager [req-eee83406-f617-41f3-a188-e6c457fbf6d4 req-76e553e0-497b-4c6d-b358-c44ee588c8e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-unplugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.093 238945 DEBUG nova.compute.manager [req-eee83406-f617-41f3-a188-e6c457fbf6d4 req-76e553e0-497b-4c6d-b358-c44ee588c8e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-unplugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.108 238945 DEBUG nova.virt.libvirt.vif [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:57:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1986971874',display_name='tempest-ServerRescueTestJSONUnderV235-server-1986971874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1986971874',id=84,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:58:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71ad88aa5cfe42bdb12bd409ad2842de',ramdisk_id='',reservation_id='r-kytri84c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-508111280',owner_user_name='tempest-ServerRescueTestJSONUnderV235-508111280-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:58:03Z,user_data=None,user_id='b49f56e21cd44451a1c542f97cb11a9c',uuid=9a2cac55-b28d-4d71-b091-6a3c39cdfe14,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.109 238945 DEBUG nova.network.os_vif_util [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converting VIF {"id": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "address": "fa:16:3e:8d:02:e1", "network": {"id": "44ebe489-75a7-40e8-9613-68b01eb29b28", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1500941585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71ad88aa5cfe42bdb12bd409ad2842de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e99824f-f6", "ovs_interfaceid": "5e99824f-f686-4cd9-a3dd-e1e0690fc68f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.109 238945 DEBUG nova.network.os_vif_util [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.110 238945 DEBUG os_vif [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.112 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e99824f-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.113 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.116 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.118 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.120 238945 INFO os_vif [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:02:e1,bridge_name='br-int',has_traffic_filtering=True,id=5e99824f-f686-4cd9-a3dd-e1e0690fc68f,network=Network(44ebe489-75a7-40e8-9613-68b01eb29b28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e99824f-f6')#033[00m
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:22 np0005597378 systemd[1]: Started libpod-conmon-611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c.scope.
Jan 27 08:58:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:58:22 np0005597378 nova_compute[238941]: 2026-01-27 13:58:22.346 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:22 np0005597378 podman[312704]: 2026-01-27 13:58:22.411434031 +0000 UTC m=+0.443744681 container init 611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:58:22 np0005597378 podman[312704]: 2026-01-27 13:58:22.419444912 +0000 UTC m=+0.451755572 container start 611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:58:22 np0005597378 xenodochial_moser[312753]: 167 167
Jan 27 08:58:22 np0005597378 systemd[1]: libpod-611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c.scope: Deactivated successfully.
Jan 27 08:58:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 436 KiB/s wr, 152 op/s
Jan 27 08:58:22 np0005597378 podman[312704]: 2026-01-27 13:58:22.538696664 +0000 UTC m=+0.571007334 container attach 611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:58:22 np0005597378 podman[312704]: 2026-01-27 13:58:22.539085074 +0000 UTC m=+0.571395714 container died 611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 27 08:58:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Jan 27 08:58:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Jan 27 08:58:22 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Jan 27 08:58:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-bd56289b244bb8aeaebc2b3e210ad93bbaf515e41141c4e87a2cfcdddf188e37-merged.mount: Deactivated successfully.
Jan 27 08:58:23 np0005597378 podman[312704]: 2026-01-27 13:58:23.13264179 +0000 UTC m=+1.164952440 container remove 611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:58:23 np0005597378 systemd[1]: libpod-conmon-611ea554a5b78b9557758db192e7200794096ea81ae18624f22020d8e26f8b5c.scope: Deactivated successfully.
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.250 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522288.249965, 327a26c8-ebd5-4f42-ad95-3905ab2e1248 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.252 238945 INFO nova.compute.manager [-] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.272 238945 DEBUG nova.compute.manager [None req-84fa78c1-5143-42d0-a69d-340fe7fcfbba - - - - - -] [instance: 327a26c8-ebd5-4f42-ad95-3905ab2e1248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:23.334 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:23 np0005597378 podman[312778]: 2026-01-27 13:58:23.31143497 +0000 UTC m=+0.034876919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:58:23 np0005597378 podman[312778]: 2026-01-27 13:58:23.621633272 +0000 UTC m=+0.345075231 container create a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.641 238945 DEBUG nova.compute.manager [req-9ac7b4de-9115-43f1-a84a-3ca0cf948613 req-1d038a75-d38e-4569-b58f-2b91256aef93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.641 238945 DEBUG oslo_concurrency.lockutils [req-9ac7b4de-9115-43f1-a84a-3ca0cf948613 req-1d038a75-d38e-4569-b58f-2b91256aef93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.642 238945 DEBUG oslo_concurrency.lockutils [req-9ac7b4de-9115-43f1-a84a-3ca0cf948613 req-1d038a75-d38e-4569-b58f-2b91256aef93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.642 238945 DEBUG oslo_concurrency.lockutils [req-9ac7b4de-9115-43f1-a84a-3ca0cf948613 req-1d038a75-d38e-4569-b58f-2b91256aef93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.642 238945 DEBUG nova.compute.manager [req-9ac7b4de-9115-43f1-a84a-3ca0cf948613 req-1d038a75-d38e-4569-b58f-2b91256aef93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] No waiting events found dispatching network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.642 238945 WARNING nova.compute.manager [req-9ac7b4de-9115-43f1-a84a-3ca0cf948613 req-1d038a75-d38e-4569-b58f-2b91256aef93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received unexpected event network-vif-plugged-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 for instance with vm_state suspended and task_state None.#033[00m
Jan 27 08:58:23 np0005597378 systemd[1]: Started libpod-conmon-a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a.scope.
Jan 27 08:58:23 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:58:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee0c27318d33be8eca62898aee23a1bfa1151a5e723166311d5818fb4712f0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee0c27318d33be8eca62898aee23a1bfa1151a5e723166311d5818fb4712f0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee0c27318d33be8eca62898aee23a1bfa1151a5e723166311d5818fb4712f0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee0c27318d33be8eca62898aee23a1bfa1151a5e723166311d5818fb4712f0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.886 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.887 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.887 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.887 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.887 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.889 238945 INFO nova.compute.manager [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Terminating instance#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.890 238945 DEBUG nova.compute.manager [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.897 238945 INFO nova.virt.libvirt.driver [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Instance destroyed successfully.#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.898 238945 DEBUG nova.objects.instance [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lazy-loading 'resources' on Instance uuid b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.911 238945 DEBUG nova.virt.libvirt.vif [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:58:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-322268330',display_name='tempest-DeleteServersTestJSON-server-322268330',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-322268330',id=87,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:58:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c183494c4b924098a08e3761a240af9d',ramdisk_id='',reservation_id='r-azf7gc5o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1703372962',owner_user_name='tempest-DeleteServersTestJSON-1703372962-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:58:21Z,user_data=None,user_id='5201d6a9a2c345a5a44f7478f19936be',uuid=b562c8b8-55ba-4f30-b87c-2a7d87bf4a87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.912 238945 DEBUG nova.network.os_vif_util [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converting VIF {"id": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "address": "fa:16:3e:28:4c:8e", "network": {"id": "67e37534-4454-4424-9d8a-edc9ec7fdcca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-871937218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c183494c4b924098a08e3761a240af9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad22e8f6-95", "ovs_interfaceid": "ad22e8f6-95c6-4527-ac19-dfc0ae20aed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.912 238945 DEBUG nova.network.os_vif_util [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.913 238945 DEBUG os_vif [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.915 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad22e8f6-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.916 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.919 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:58:23 np0005597378 nova_compute[238941]: 2026-01-27 13:58:23.921 238945 INFO os_vif [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:4c:8e,bridge_name='br-int',has_traffic_filtering=True,id=ad22e8f6-95c6-4527-ac19-dfc0ae20aed4,network=Network(67e37534-4454-4424-9d8a-edc9ec7fdcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad22e8f6-95')#033[00m
Jan 27 08:58:23 np0005597378 podman[312778]: 2026-01-27 13:58:23.948198554 +0000 UTC m=+0.671640563 container init a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_jang, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:58:23 np0005597378 podman[312778]: 2026-01-27 13:58:23.958798754 +0000 UTC m=+0.682240703 container start a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_jang, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:58:24 np0005597378 podman[312778]: 2026-01-27 13:58:24.026305182 +0000 UTC m=+0.749747151 container attach a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_jang, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 08:58:24 np0005597378 pensive_jang[312794]: {
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:    "0": [
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:        {
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "devices": [
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "/dev/loop3"
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            ],
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_name": "ceph_lv0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_size": "21470642176",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "name": "ceph_lv0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "tags": {
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.cluster_name": "ceph",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.crush_device_class": "",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.encrypted": "0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.objectstore": "bluestore",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.osd_id": "0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.type": "block",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.vdo": "0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.with_tpm": "0"
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            },
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "type": "block",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "vg_name": "ceph_vg0"
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:        }
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:    ],
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:    "1": [
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:        {
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "devices": [
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "/dev/loop4"
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            ],
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_name": "ceph_lv1",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_size": "21470642176",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "name": "ceph_lv1",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "tags": {
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.cluster_name": "ceph",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.crush_device_class": "",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.encrypted": "0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.objectstore": "bluestore",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.osd_id": "1",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.type": "block",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.vdo": "0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.with_tpm": "0"
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            },
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "type": "block",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "vg_name": "ceph_vg1"
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:        }
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:    ],
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:    "2": [
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:        {
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "devices": [
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "/dev/loop5"
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            ],
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_name": "ceph_lv2",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_size": "21470642176",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "name": "ceph_lv2",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "tags": {
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.cluster_name": "ceph",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.crush_device_class": "",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.encrypted": "0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.objectstore": "bluestore",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.osd_id": "2",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.type": "block",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.vdo": "0",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:                "ceph.with_tpm": "0"
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            },
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "type": "block",
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:            "vg_name": "ceph_vg2"
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:        }
Jan 27 08:58:24 np0005597378 pensive_jang[312794]:    ]
Jan 27 08:58:24 np0005597378 pensive_jang[312794]: }
Jan 27 08:58:24 np0005597378 systemd[1]: libpod-a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a.scope: Deactivated successfully.
Jan 27 08:58:24 np0005597378 podman[312778]: 2026-01-27 13:58:24.24981312 +0000 UTC m=+0.973255079 container died a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:58:24 np0005597378 nova_compute[238941]: 2026-01-27 13:58:24.489 238945 DEBUG nova.compute.manager [req-92ee463a-e57a-4283-8d17-3402ca6bd6b8 req-1e945eb0-642c-4788-9299-b8ca9f9e0cd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:24 np0005597378 nova_compute[238941]: 2026-01-27 13:58:24.490 238945 DEBUG oslo_concurrency.lockutils [req-92ee463a-e57a-4283-8d17-3402ca6bd6b8 req-1e945eb0-642c-4788-9299-b8ca9f9e0cd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:24 np0005597378 nova_compute[238941]: 2026-01-27 13:58:24.490 238945 DEBUG oslo_concurrency.lockutils [req-92ee463a-e57a-4283-8d17-3402ca6bd6b8 req-1e945eb0-642c-4788-9299-b8ca9f9e0cd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:24 np0005597378 nova_compute[238941]: 2026-01-27 13:58:24.490 238945 DEBUG oslo_concurrency.lockutils [req-92ee463a-e57a-4283-8d17-3402ca6bd6b8 req-1e945eb0-642c-4788-9299-b8ca9f9e0cd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:24 np0005597378 nova_compute[238941]: 2026-01-27 13:58:24.490 238945 DEBUG nova.compute.manager [req-92ee463a-e57a-4283-8d17-3402ca6bd6b8 req-1e945eb0-642c-4788-9299-b8ca9f9e0cd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] No waiting events found dispatching network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:58:24 np0005597378 nova_compute[238941]: 2026-01-27 13:58:24.491 238945 WARNING nova.compute.manager [req-92ee463a-e57a-4283-8d17-3402ca6bd6b8 req-1e945eb0-642c-4788-9299-b8ca9f9e0cd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received unexpected event network-vif-plugged-5e99824f-f686-4cd9-a3dd-e1e0690fc68f for instance with vm_state rescued and task_state deleting.#033[00m
Jan 27 08:58:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 179 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 47 KiB/s wr, 184 op/s
Jan 27 08:58:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay-bee0c27318d33be8eca62898aee23a1bfa1151a5e723166311d5818fb4712f0d-merged.mount: Deactivated successfully.
Jan 27 08:58:24 np0005597378 podman[312778]: 2026-01-27 13:58:24.775147409 +0000 UTC m=+1.498589358 container remove a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:58:24 np0005597378 systemd[1]: libpod-conmon-a7042234176299ffec213bf052c9d8e1ac190da4b8bdd95bcc3dd7649796e68a.scope: Deactivated successfully.
Jan 27 08:58:25 np0005597378 podman[312897]: 2026-01-27 13:58:25.312866534 +0000 UTC m=+0.085549304 container create c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 08:58:25 np0005597378 podman[312897]: 2026-01-27 13:58:25.247862762 +0000 UTC m=+0.020545562 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:58:25 np0005597378 systemd[1]: Started libpod-conmon-c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c.scope.
Jan 27 08:58:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:58:25 np0005597378 podman[312897]: 2026-01-27 13:58:25.546200342 +0000 UTC m=+0.318883142 container init c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_tharp, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:58:25 np0005597378 podman[312897]: 2026-01-27 13:58:25.553651278 +0000 UTC m=+0.326334048 container start c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 08:58:25 np0005597378 vibrant_tharp[312914]: 167 167
Jan 27 08:58:25 np0005597378 systemd[1]: libpod-c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c.scope: Deactivated successfully.
Jan 27 08:58:25 np0005597378 conmon[312914]: conmon c9657df56462011e8d1b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c.scope/container/memory.events
Jan 27 08:58:25 np0005597378 podman[312897]: 2026-01-27 13:58:25.70520681 +0000 UTC m=+0.477889670 container attach c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:58:25 np0005597378 podman[312897]: 2026-01-27 13:58:25.705757255 +0000 UTC m=+0.478440055 container died c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_tharp, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 08:58:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1c1094477e7b6e6791d3cd8b425189900bd622adf92720101c790cfa8b63ed6b-merged.mount: Deactivated successfully.
Jan 27 08:58:26 np0005597378 podman[312897]: 2026-01-27 13:58:26.420914544 +0000 UTC m=+1.193597314 container remove c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:58:26 np0005597378 systemd[1]: libpod-conmon-c9657df56462011e8d1b6926ff88a694954e12c6709d1de887db2e9294957a0c.scope: Deactivated successfully.
Jan 27 08:58:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 152 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 41 KiB/s wr, 177 op/s
Jan 27 08:58:26 np0005597378 podman[312939]: 2026-01-27 13:58:26.585702695 +0000 UTC m=+0.024130966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:58:26 np0005597378 podman[312939]: 2026-01-27 13:58:26.812373767 +0000 UTC m=+0.250802018 container create afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_almeida, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:58:26 np0005597378 systemd[1]: Started libpod-conmon-afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e.scope.
Jan 27 08:58:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:58:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a435a6df4f723b9177a022747cfca7fd179ee6b29baeeef5fdd53bc66e82f81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a435a6df4f723b9177a022747cfca7fd179ee6b29baeeef5fdd53bc66e82f81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a435a6df4f723b9177a022747cfca7fd179ee6b29baeeef5fdd53bc66e82f81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a435a6df4f723b9177a022747cfca7fd179ee6b29baeeef5fdd53bc66e82f81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:27 np0005597378 podman[312939]: 2026-01-27 13:58:27.070792455 +0000 UTC m=+0.509220726 container init afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_almeida, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:58:27 np0005597378 podman[312939]: 2026-01-27 13:58:27.077278735 +0000 UTC m=+0.515706986 container start afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_almeida, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:58:27 np0005597378 nova_compute[238941]: 2026-01-27 13:58:27.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:27 np0005597378 podman[312939]: 2026-01-27 13:58:27.127122549 +0000 UTC m=+0.565550830 container attach afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006970230890467676 of space, bias 1.0, pg target 0.20910692671403028 quantized to 32 (current 32)
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668449529782088 of space, bias 1.0, pg target 0.2005348589346264 quantized to 32 (current 32)
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1010072795939708e-06 of space, bias 4.0, pg target 0.001321208735512765 quantized to 16 (current 16)
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:58:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:58:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:27 np0005597378 nova_compute[238941]: 2026-01-27 13:58:27.705 238945 INFO nova.virt.libvirt.driver [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Deleting instance files /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_del#033[00m
Jan 27 08:58:27 np0005597378 nova_compute[238941]: 2026-01-27 13:58:27.706 238945 INFO nova.virt.libvirt.driver [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Deletion of /var/lib/nova/instances/b562c8b8-55ba-4f30-b87c-2a7d87bf4a87_del complete#033[00m
Jan 27 08:58:27 np0005597378 lvm[313035]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:58:27 np0005597378 lvm[313034]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:58:27 np0005597378 lvm[313035]: VG ceph_vg1 finished
Jan 27 08:58:27 np0005597378 lvm[313034]: VG ceph_vg0 finished
Jan 27 08:58:27 np0005597378 lvm[313037]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:58:27 np0005597378 lvm[313037]: VG ceph_vg2 finished
Jan 27 08:58:27 np0005597378 lvm[313038]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:58:27 np0005597378 lvm[313038]: VG ceph_vg0 finished
Jan 27 08:58:27 np0005597378 nova_compute[238941]: 2026-01-27 13:58:27.786 238945 INFO nova.compute.manager [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Took 3.90 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:58:27 np0005597378 nova_compute[238941]: 2026-01-27 13:58:27.787 238945 DEBUG oslo.service.loopingcall [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:58:27 np0005597378 nova_compute[238941]: 2026-01-27 13:58:27.787 238945 DEBUG nova.compute.manager [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:58:27 np0005597378 nova_compute[238941]: 2026-01-27 13:58:27.787 238945 DEBUG nova.network.neutron [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:58:27 np0005597378 pedantic_almeida[312955]: {}
Jan 27 08:58:27 np0005597378 systemd[1]: libpod-afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e.scope: Deactivated successfully.
Jan 27 08:58:27 np0005597378 systemd[1]: libpod-afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e.scope: Consumed 1.258s CPU time.
Jan 27 08:58:27 np0005597378 podman[312939]: 2026-01-27 13:58:27.837558394 +0000 UTC m=+1.275986645 container died afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:58:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1a435a6df4f723b9177a022747cfca7fd179ee6b29baeeef5fdd53bc66e82f81-merged.mount: Deactivated successfully.
Jan 27 08:58:28 np0005597378 podman[312939]: 2026-01-27 13:58:28.32484164 +0000 UTC m=+1.763269931 container remove afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 08:58:28 np0005597378 systemd[1]: libpod-conmon-afada150f27a51d4be8b4cf2dc674c7e862430c86f10834f8a5b5065792c063e.scope: Deactivated successfully.
Jan 27 08:58:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:58:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:58:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:58:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:58:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 112 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 43 KiB/s wr, 189 op/s
Jan 27 08:58:28 np0005597378 nova_compute[238941]: 2026-01-27 13:58:28.605 238945 DEBUG nova.network.neutron [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:58:28 np0005597378 nova_compute[238941]: 2026-01-27 13:58:28.618 238945 INFO nova.compute.manager [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Took 0.83 seconds to deallocate network for instance.#033[00m
Jan 27 08:58:28 np0005597378 nova_compute[238941]: 2026-01-27 13:58:28.659 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:28 np0005597378 nova_compute[238941]: 2026-01-27 13:58:28.660 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:28 np0005597378 nova_compute[238941]: 2026-01-27 13:58:28.724 238945 DEBUG oslo_concurrency.processutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:28 np0005597378 nova_compute[238941]: 2026-01-27 13:58:28.772 238945 DEBUG nova.compute.manager [req-fbd55d95-a4a6-4471-897b-a59fc690ee79 req-c4078884-731f-47b5-a1bc-fc163e697bb1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Received event network-vif-deleted-ad22e8f6-95c6-4527-ac19-dfc0ae20aed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:28 np0005597378 nova_compute[238941]: 2026-01-27 13:58:28.917 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:29 np0005597378 nova_compute[238941]: 2026-01-27 13:58:29.206 238945 INFO nova.virt.libvirt.driver [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Deleting instance files /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_del#033[00m
Jan 27 08:58:29 np0005597378 nova_compute[238941]: 2026-01-27 13:58:29.207 238945 INFO nova.virt.libvirt.driver [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Deletion of /var/lib/nova/instances/9a2cac55-b28d-4d71-b091-6a3c39cdfe14_del complete#033[00m
Jan 27 08:58:29 np0005597378 nova_compute[238941]: 2026-01-27 13:58:29.265 238945 INFO nova.compute.manager [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Took 7.63 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:58:29 np0005597378 nova_compute[238941]: 2026-01-27 13:58:29.265 238945 DEBUG oslo.service.loopingcall [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:58:29 np0005597378 nova_compute[238941]: 2026-01-27 13:58:29.266 238945 DEBUG nova.compute.manager [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:58:29 np0005597378 nova_compute[238941]: 2026-01-27 13:58:29.266 238945 DEBUG nova.network.neutron [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:58:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:58:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/256165188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:58:29 np0005597378 nova_compute[238941]: 2026-01-27 13:58:29.305 238945 DEBUG oslo_concurrency.processutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:29 np0005597378 nova_compute[238941]: 2026-01-27 13:58:29.310 238945 DEBUG nova.compute.provider_tree [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:58:29 np0005597378 nova_compute[238941]: 2026-01-27 13:58:29.327 238945 DEBUG nova.scheduler.client.report [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:58:29 np0005597378 nova_compute[238941]: 2026-01-27 13:58:29.350 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:29 np0005597378 nova_compute[238941]: 2026-01-27 13:58:29.384 238945 INFO nova.scheduler.client.report [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Deleted allocations for instance b562c8b8-55ba-4f30-b87c-2a7d87bf4a87#033[00m
Jan 27 08:58:29 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:58:29 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:58:29 np0005597378 nova_compute[238941]: 2026-01-27 13:58:29.447 238945 DEBUG oslo_concurrency.lockutils [None req-4108d0eb-45cd-432d-9030-228ac3f8f273 5201d6a9a2c345a5a44f7478f19936be c183494c4b924098a08e3761a240af9d - - default default] Lock "b562c8b8-55ba-4f30-b87c-2a7d87bf4a87" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:29 np0005597378 podman[313101]: 2026-01-27 13:58:29.723294031 +0000 UTC m=+0.057537207 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 27 08:58:29 np0005597378 podman[313100]: 2026-01-27 13:58:29.748803062 +0000 UTC m=+0.087786133 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 08:58:30 np0005597378 nova_compute[238941]: 2026-01-27 13:58:30.379 238945 DEBUG nova.network.neutron [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:58:30 np0005597378 nova_compute[238941]: 2026-01-27 13:58:30.396 238945 INFO nova.compute.manager [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Took 1.13 seconds to deallocate network for instance.#033[00m
Jan 27 08:58:30 np0005597378 nova_compute[238941]: 2026-01-27 13:58:30.438 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:30 np0005597378 nova_compute[238941]: 2026-01-27 13:58:30.439 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:30 np0005597378 nova_compute[238941]: 2026-01-27 13:58:30.508 238945 DEBUG oslo_concurrency.processutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 68 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 161 op/s
Jan 27 08:58:30 np0005597378 nova_compute[238941]: 2026-01-27 13:58:30.833 238945 DEBUG nova.compute.manager [req-409842d5-1cfb-4fb1-a223-0ecca673238a req-53e459f9-2d18-45bd-b783-4108517f9377 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Received event network-vif-deleted-5e99824f-f686-4cd9-a3dd-e1e0690fc68f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:58:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3487919496' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:58:31 np0005597378 nova_compute[238941]: 2026-01-27 13:58:31.062 238945 DEBUG oslo_concurrency.processutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:31 np0005597378 nova_compute[238941]: 2026-01-27 13:58:31.068 238945 DEBUG nova.compute.provider_tree [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:58:31 np0005597378 nova_compute[238941]: 2026-01-27 13:58:31.210 238945 DEBUG nova.scheduler.client.report [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:58:31 np0005597378 nova_compute[238941]: 2026-01-27 13:58:31.242 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:31 np0005597378 nova_compute[238941]: 2026-01-27 13:58:31.281 238945 INFO nova.scheduler.client.report [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Deleted allocations for instance 9a2cac55-b28d-4d71-b091-6a3c39cdfe14#033[00m
Jan 27 08:58:31 np0005597378 nova_compute[238941]: 2026-01-27 13:58:31.343 238945 DEBUG oslo_concurrency.lockutils [None req-903f1d0a-eaf2-4266-86e5-1bd15c9a7900 b49f56e21cd44451a1c542f97cb11a9c 71ad88aa5cfe42bdb12bd409ad2842de - - default default] Lock "9a2cac55-b28d-4d71-b091-6a3c39cdfe14" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:32 np0005597378 nova_compute[238941]: 2026-01-27 13:58:32.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 68 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 161 op/s
Jan 27 08:58:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:33 np0005597378 nova_compute[238941]: 2026-01-27 13:58:33.921 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 140 op/s
Jan 27 08:58:35 np0005597378 nova_compute[238941]: 2026-01-27 13:58:35.988 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522300.9856296, b562c8b8-55ba-4f30-b87c-2a7d87bf4a87 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:58:35 np0005597378 nova_compute[238941]: 2026-01-27 13:58:35.988 238945 INFO nova.compute.manager [-] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:58:36 np0005597378 nova_compute[238941]: 2026-01-27 13:58:36.014 238945 DEBUG nova.compute.manager [None req-cabdd297-6143-4e08-8f16-69b5cfe51894 - - - - - -] [instance: b562c8b8-55ba-4f30-b87c-2a7d87bf4a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 3.1 KiB/s wr, 80 op/s
Jan 27 08:58:37 np0005597378 nova_compute[238941]: 2026-01-27 13:58:37.073 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522302.0723975, 9a2cac55-b28d-4d71-b091-6a3c39cdfe14 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:58:37 np0005597378 nova_compute[238941]: 2026-01-27 13:58:37.074 238945 INFO nova.compute.manager [-] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:58:37 np0005597378 nova_compute[238941]: 2026-01-27 13:58:37.092 238945 DEBUG nova.compute.manager [None req-31887229-ffe4-4d85-ab34-d7ac701dbdac - - - - - -] [instance: 9a2cac55-b28d-4d71-b091-6a3c39cdfe14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:37 np0005597378 nova_compute[238941]: 2026-01-27 13:58:37.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.1 KiB/s wr, 65 op/s
Jan 27 08:58:38 np0005597378 nova_compute[238941]: 2026-01-27 13:58:38.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:40 np0005597378 nova_compute[238941]: 2026-01-27 13:58:40.111 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:40 np0005597378 nova_compute[238941]: 2026-01-27 13:58:40.111 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:40 np0005597378 nova_compute[238941]: 2026-01-27 13:58:40.128 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:58:40 np0005597378 nova_compute[238941]: 2026-01-27 13:58:40.197 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:40 np0005597378 nova_compute[238941]: 2026-01-27 13:58:40.197 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:40 np0005597378 nova_compute[238941]: 2026-01-27 13:58:40.204 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:58:40 np0005597378 nova_compute[238941]: 2026-01-27 13:58:40.204 238945 INFO nova.compute.claims [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:58:40 np0005597378 nova_compute[238941]: 2026-01-27 13:58:40.317 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 767 B/s wr, 33 op/s
Jan 27 08:58:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:58:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1697833126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:58:40 np0005597378 nova_compute[238941]: 2026-01-27 13:58:40.939 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:40 np0005597378 nova_compute[238941]: 2026-01-27 13:58:40.945 238945 DEBUG nova.compute.provider_tree [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:58:40 np0005597378 nova_compute[238941]: 2026-01-27 13:58:40.961 238945 DEBUG nova.scheduler.client.report [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:58:40 np0005597378 nova_compute[238941]: 2026-01-27 13:58:40.983 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:40 np0005597378 nova_compute[238941]: 2026-01-27 13:58:40.984 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.026 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.027 238945 DEBUG nova.network.neutron [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.047 238945 INFO nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.067 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.178 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.179 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.179 238945 INFO nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Creating image(s)#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.197 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.216 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.239 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.243 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.320 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.321 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.322 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.322 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.341 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.345 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b4f9558a-acfd-48f7-974d-003be7605ede_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.434 238945 DEBUG nova.policy [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f15d118498f406a8f37e6740b9a193c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f648e7d1298f439294591a8ee545b15b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.923 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b4f9558a-acfd-48f7-974d-003be7605ede_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:41 np0005597378 nova_compute[238941]: 2026-01-27 13:58:41.977 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] resizing rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:58:42 np0005597378 nova_compute[238941]: 2026-01-27 13:58:42.055 238945 DEBUG nova.objects.instance [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lazy-loading 'migration_context' on Instance uuid b4f9558a-acfd-48f7-974d-003be7605ede obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:58:42 np0005597378 nova_compute[238941]: 2026-01-27 13:58:42.078 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:58:42 np0005597378 nova_compute[238941]: 2026-01-27 13:58:42.078 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Ensure instance console log exists: /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:58:42 np0005597378 nova_compute[238941]: 2026-01-27 13:58:42.079 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:42 np0005597378 nova_compute[238941]: 2026-01-27 13:58:42.079 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:42 np0005597378 nova_compute[238941]: 2026-01-27 13:58:42.080 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:42 np0005597378 nova_compute[238941]: 2026-01-27 13:58:42.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 1 op/s
Jan 27 08:58:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:42 np0005597378 nova_compute[238941]: 2026-01-27 13:58:42.790 238945 DEBUG nova.network.neutron [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Successfully created port: a964944f-dff4-47b5-8ba0-d9a3d830032b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:58:43 np0005597378 nova_compute[238941]: 2026-01-27 13:58:43.933 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:44 np0005597378 nova_compute[238941]: 2026-01-27 13:58:44.449 238945 DEBUG nova.network.neutron [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Successfully updated port: a964944f-dff4-47b5-8ba0-d9a3d830032b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:58:44 np0005597378 nova_compute[238941]: 2026-01-27 13:58:44.464 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "refresh_cache-b4f9558a-acfd-48f7-974d-003be7605ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:58:44 np0005597378 nova_compute[238941]: 2026-01-27 13:58:44.464 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquired lock "refresh_cache-b4f9558a-acfd-48f7-974d-003be7605ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:58:44 np0005597378 nova_compute[238941]: 2026-01-27 13:58:44.464 238945 DEBUG nova.network.neutron [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:58:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 69 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 27 op/s
Jan 27 08:58:44 np0005597378 nova_compute[238941]: 2026-01-27 13:58:44.676 238945 DEBUG nova.compute.manager [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-changed-a964944f-dff4-47b5-8ba0-d9a3d830032b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:44 np0005597378 nova_compute[238941]: 2026-01-27 13:58:44.676 238945 DEBUG nova.compute.manager [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Refreshing instance network info cache due to event network-changed-a964944f-dff4-47b5-8ba0-d9a3d830032b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:58:44 np0005597378 nova_compute[238941]: 2026-01-27 13:58:44.676 238945 DEBUG oslo_concurrency.lockutils [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b4f9558a-acfd-48f7-974d-003be7605ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:58:45 np0005597378 nova_compute[238941]: 2026-01-27 13:58:45.118 238945 DEBUG nova.network.neutron [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:58:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:46.308 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:46.308 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:46.308 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.311 238945 DEBUG nova.network.neutron [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Updating instance_info_cache with network_info: [{"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.330 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Releasing lock "refresh_cache-b4f9558a-acfd-48f7-974d-003be7605ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.330 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Instance network_info: |[{"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.330 238945 DEBUG oslo_concurrency.lockutils [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b4f9558a-acfd-48f7-974d-003be7605ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.331 238945 DEBUG nova.network.neutron [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Refreshing network info cache for port a964944f-dff4-47b5-8ba0-d9a3d830032b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.333 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Start _get_guest_xml network_info=[{"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.337 238945 WARNING nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.341 238945 DEBUG nova.virt.libvirt.host [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.342 238945 DEBUG nova.virt.libvirt.host [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.345 238945 DEBUG nova.virt.libvirt.host [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.345 238945 DEBUG nova.virt.libvirt.host [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.345 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.346 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.346 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.346 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.346 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.347 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.347 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.347 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.347 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.347 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.348 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.348 238945 DEBUG nova.virt.hardware [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.350 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:58:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:58:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:58:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:58:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:58:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:58:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:58:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017806681' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.887 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.907 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:47 np0005597378 nova_compute[238941]: 2026-01-27 13:58:47.911 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:58:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814404287' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.496 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.499 238945 DEBUG nova.virt.libvirt.vif [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-822533793',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-822533793',id=88,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f648e7d1298f439294591a8ee545b15b',ramdisk_id='',reservation_id='r-7rqf0hdr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1691246908',owner_user_name='tempest-ServerTagsTestJSON-1691246908-project-member'},tags=TagList,task_state='spa
wning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:58:41Z,user_data=None,user_id='1f15d118498f406a8f37e6740b9a193c',uuid=b4f9558a-acfd-48f7-974d-003be7605ede,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.499 238945 DEBUG nova.network.os_vif_util [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Converting VIF {"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.500 238945 DEBUG nova.network.os_vif_util [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.501 238945 DEBUG nova.objects.instance [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lazy-loading 'pci_devices' on Instance uuid b4f9558a-acfd-48f7-974d-003be7605ede obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.522 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  <uuid>b4f9558a-acfd-48f7-974d-003be7605ede</uuid>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  <name>instance-00000058</name>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerTagsTestJSON-server-822533793</nova:name>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:58:47</nova:creationTime>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:        <nova:user uuid="1f15d118498f406a8f37e6740b9a193c">tempest-ServerTagsTestJSON-1691246908-project-member</nova:user>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:        <nova:project uuid="f648e7d1298f439294591a8ee545b15b">tempest-ServerTagsTestJSON-1691246908</nova:project>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:        <nova:port uuid="a964944f-dff4-47b5-8ba0-d9a3d830032b">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <entry name="serial">b4f9558a-acfd-48f7-974d-003be7605ede</entry>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <entry name="uuid">b4f9558a-acfd-48f7-974d-003be7605ede</entry>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b4f9558a-acfd-48f7-974d-003be7605ede_disk">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b4f9558a-acfd-48f7-974d-003be7605ede_disk.config">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:97:9f:1e"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <target dev="tapa964944f-df"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/console.log" append="off"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:58:48 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:58:48 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:58:48 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:58:48 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.523 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Preparing to wait for external event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.523 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.523 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.524 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.524 238945 DEBUG nova.virt.libvirt.vif [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-822533793',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-822533793',id=88,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f648e7d1298f439294591a8ee545b15b',ramdisk_id='',reservation_id='r-7rqf0hdr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1691246908',owner_user_name='tempest-ServerTagsTestJSON-1691246908-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:58:41Z,user_data=None,user_id='1f15d118498f406a8f37e6740b9a193c',uuid=b4f9558a-acfd-48f7-974d-003be7605ede,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.524 238945 DEBUG nova.network.os_vif_util [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Converting VIF {"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.525 238945 DEBUG nova.network.os_vif_util [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.525 238945 DEBUG os_vif [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.526 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.526 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.526 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.529 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.530 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa964944f-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.530 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa964944f-df, col_values=(('external_ids', {'iface-id': 'a964944f-dff4-47b5-8ba0-d9a3d830032b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:9f:1e', 'vm-uuid': 'b4f9558a-acfd-48f7-974d-003be7605ede'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.532 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:48 np0005597378 NetworkManager[48904]: <info>  [1769522328.5330] manager: (tapa964944f-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.534 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:58:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.539 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.540 238945 INFO os_vif [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df')#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.599 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.599 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.599 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] No VIF found with MAC fa:16:3e:97:9f:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.600 238945 INFO nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Using config drive#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.620 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.947 238945 INFO nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Creating config drive at /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/disk.config#033[00m
Jan 27 08:58:48 np0005597378 nova_compute[238941]: 2026-01-27 13:58:48.952 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0scjhkxf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.093 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0scjhkxf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.136 238945 DEBUG nova.storage.rbd_utils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] rbd image b4f9558a-acfd-48f7-974d-003be7605ede_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.140 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/disk.config b4f9558a-acfd-48f7-974d-003be7605ede_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.329 238945 DEBUG oslo_concurrency.processutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/disk.config b4f9558a-acfd-48f7-974d-003be7605ede_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.330 238945 INFO nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Deleting local config drive /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede/disk.config because it was imported into RBD.#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:58:49 np0005597378 kernel: tapa964944f-df: entered promiscuous mode
Jan 27 08:58:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:49Z|00797|binding|INFO|Claiming lport a964944f-dff4-47b5-8ba0-d9a3d830032b for this chassis.
Jan 27 08:58:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:49Z|00798|binding|INFO|a964944f-dff4-47b5-8ba0-d9a3d830032b: Claiming fa:16:3e:97:9f:1e 10.100.0.10
Jan 27 08:58:49 np0005597378 NetworkManager[48904]: <info>  [1769522329.4366] manager: (tapa964944f-df): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.440 238945 DEBUG nova.network.neutron [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Updated VIF entry in instance network info cache for port a964944f-dff4-47b5-8ba0-d9a3d830032b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.441 238945 DEBUG nova.network.neutron [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Updating instance_info_cache with network_info: [{"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.444 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.444 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.444 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.444 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.452 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:9f:1e 10.100.0.10'], port_security=['fa:16:3e:97:9f:1e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b4f9558a-acfd-48f7-974d-003be7605ede', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f648e7d1298f439294591a8ee545b15b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d9fb182-c08f-4821-b406-6dc437cbe9cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=686942ec-7481-45ae-a50a-94b249d7ebe1, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a964944f-dff4-47b5-8ba0-d9a3d830032b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.453 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a964944f-dff4-47b5-8ba0-d9a3d830032b in datapath 5c0e4370-54f6-4299-9eca-c6ff40d0b355 bound to our chassis#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.454 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c0e4370-54f6-4299-9eca-c6ff40d0b355#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.467 238945 DEBUG oslo_concurrency.lockutils [req-d2e7d04a-26e2-4695-bfa6-0f5ace373fba req-71c91e43-f28b-40a6-a766-f9fa60a51f13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b4f9558a-acfd-48f7-974d-003be7605ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.469 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2bcc69c8-00e6-425f-9a83-7e654a9c87ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 systemd-udevd[313489]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.470 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c0e4370-51 in ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.473 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c0e4370-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.473 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a662900d-f254-4896-b482-93ea6fa02b9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.474 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fab641c7-4b11-4623-8135-5490bd8cc4ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 systemd-machined[207425]: New machine qemu-100-instance-00000058.
Jan 27 08:58:49 np0005597378 NetworkManager[48904]: <info>  [1769522329.4890] device (tapa964944f-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:58:49 np0005597378 NetworkManager[48904]: <info>  [1769522329.4897] device (tapa964944f-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.489 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[2efadcd0-8617-447d-8baf-8ebdd048acbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.510 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:49 np0005597378 systemd[1]: Started Virtual Machine qemu-100-instance-00000058.
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.514 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6e75d19c-7aef-49b5-8532-245765b616c4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:49Z|00799|binding|INFO|Setting lport a964944f-dff4-47b5-8ba0-d9a3d830032b ovn-installed in OVS
Jan 27 08:58:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:49Z|00800|binding|INFO|Setting lport a964944f-dff4-47b5-8ba0-d9a3d830032b up in Southbound
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.516 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.541 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7eaf89-0b11-435c-bdc6-8e8bc41c9d68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.546 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a5d248-a469-45eb-a610-127adfb17e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 NetworkManager[48904]: <info>  [1769522329.5473] manager: (tap5c0e4370-50): new Veth device (/org/freedesktop/NetworkManager/Devices/340)
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.573 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[faa07ad0-5a06-4c78-ad97-eb043d7dd6bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.576 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[28774078-0d13-4244-92fe-68f33b791b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 NetworkManager[48904]: <info>  [1769522329.5959] device (tap5c0e4370-50): carrier: link connected
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.600 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2275d0-a429-4760-a602-dcbb75f3ec88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.718 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[07093f68-6bf3-44e4-a3cf-d0d336ce3d1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c0e4370-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:c1:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496922, 'reachable_time': 32960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313522, 'error': None, 'target': 'ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.735 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30f682fa-51f5-4b02-95bd-a8f1b31db068]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:c1d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496922, 'tstamp': 496922}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313523, 'error': None, 'target': 'ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.753 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ae8d18-58b2-45a8-990c-4ad9babb9763]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c0e4370-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:c1:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496922, 'reachable_time': 32960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313524, 'error': None, 'target': 'ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.783 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[424e9128-d5ef-4a23-8f58-aab567814303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.840 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61c04582-6efb-476a-8338-f356d8b918eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.841 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c0e4370-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.842 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.842 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c0e4370-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:49 np0005597378 kernel: tap5c0e4370-50: entered promiscuous mode
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.844 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:49 np0005597378 NetworkManager[48904]: <info>  [1769522329.8477] manager: (tap5c0e4370-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.847 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.849 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c0e4370-50, col_values=(('external_ids', {'iface-id': 'e6690f8f-6bcf-496e-a56a-fcdd15c83a47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:49 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:49Z|00801|binding|INFO|Releasing lport e6690f8f-6bcf-496e-a56a-fcdd15c83a47 from this chassis (sb_readonly=0)
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.851 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.853 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c0e4370-54f6-4299-9eca-c6ff40d0b355.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c0e4370-54f6-4299-9eca-c6ff40d0b355.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.854 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04b737a1-d638-4f3c-8aed-30185d1fe954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.855 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-5c0e4370-54f6-4299-9eca-c6ff40d0b355
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/5c0e4370-54f6-4299-9eca-c6ff40d0b355.pid.haproxy
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 5c0e4370-54f6-4299-9eca-c6ff40d0b355
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:58:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:49.855 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'env', 'PROCESS_TAG=haproxy-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c0e4370-54f6-4299-9eca-c6ff40d0b355.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:58:49 np0005597378 nova_compute[238941]: 2026-01-27 13:58:49.865 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.025 238945 DEBUG nova.compute.manager [req-1c437c4c-8bbb-426b-953b-4dc208def745 req-048a3478-3a87-4cc8-a56e-4104de8ab0fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.026 238945 DEBUG oslo_concurrency.lockutils [req-1c437c4c-8bbb-426b-953b-4dc208def745 req-048a3478-3a87-4cc8-a56e-4104de8ab0fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.026 238945 DEBUG oslo_concurrency.lockutils [req-1c437c4c-8bbb-426b-953b-4dc208def745 req-048a3478-3a87-4cc8-a56e-4104de8ab0fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.026 238945 DEBUG oslo_concurrency.lockutils [req-1c437c4c-8bbb-426b-953b-4dc208def745 req-048a3478-3a87-4cc8-a56e-4104de8ab0fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.026 238945 DEBUG nova.compute.manager [req-1c437c4c-8bbb-426b-953b-4dc208def745 req-048a3478-3a87-4cc8-a56e-4104de8ab0fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Processing event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.083 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.083 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522330.0825434, b4f9558a-acfd-48f7-974d-003be7605ede => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.084 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] VM Started (Lifecycle Event)#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.087 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.090 238945 INFO nova.virt.libvirt.driver [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Instance spawned successfully.#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.090 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.119 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.125 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.128 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.129 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.129 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.130 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.130 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.131 238945 DEBUG nova.virt.libvirt.driver [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.156 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.156 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522330.0844104, b4f9558a-acfd-48f7-974d-003be7605ede => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.156 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.182 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.185 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522330.0859032, b4f9558a-acfd-48f7-974d-003be7605ede => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.185 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.209 238945 INFO nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Took 9.03 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.209 238945 DEBUG nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.211 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.216 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:58:50 np0005597378 podman[313597]: 2026-01-27 13:58:50.225224503 +0000 UTC m=+0.052350131 container create 332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.251 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:58:50 np0005597378 systemd[1]: Started libpod-conmon-332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530.scope.
Jan 27 08:58:50 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.292 238945 INFO nova.compute.manager [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Took 10.12 seconds to build instance.#033[00m
Jan 27 08:58:50 np0005597378 podman[313597]: 2026-01-27 13:58:50.197573124 +0000 UTC m=+0.024698782 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:58:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e826a204548baa46d3d416ec131518a490de796a2b21bded38e5b4f3839f2f8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.307 238945 DEBUG oslo_concurrency.lockutils [None req-be00dfcb-0aba-48b5-add7-95471293e72e 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:50 np0005597378 podman[313597]: 2026-01-27 13:58:50.310078887 +0000 UTC m=+0.137204515 container init 332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:58:50 np0005597378 podman[313597]: 2026-01-27 13:58:50.317614367 +0000 UTC m=+0.144740005 container start 332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:58:50 np0005597378 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [NOTICE]   (313616) : New worker (313618) forked
Jan 27 08:58:50 np0005597378 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [NOTICE]   (313616) : Loading success.
Jan 27 08:58:50 np0005597378 nova_compute[238941]: 2026-01-27 13:58:50.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:58:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 08:58:51 np0005597378 nova_compute[238941]: 2026-01-27 13:58:51.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:58:51 np0005597378 nova_compute[238941]: 2026-01-27 13:58:51.407 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:51 np0005597378 nova_compute[238941]: 2026-01-27 13:58:51.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:51 np0005597378 nova_compute[238941]: 2026-01-27 13:58:51.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:51 np0005597378 nova_compute[238941]: 2026-01-27 13:58:51.408 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:58:51 np0005597378 nova_compute[238941]: 2026-01-27 13:58:51.409 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:58:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/912087594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:58:51 np0005597378 nova_compute[238941]: 2026-01-27 13:58:51.991 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.078 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.079 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.131 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.164 238945 DEBUG nova.compute.manager [req-0015bcf4-10c4-4099-ab7f-a3af49627280 req-5dabb2ea-ba1d-4f3a-aac1-7595664bdf11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.164 238945 DEBUG oslo_concurrency.lockutils [req-0015bcf4-10c4-4099-ab7f-a3af49627280 req-5dabb2ea-ba1d-4f3a-aac1-7595664bdf11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.165 238945 DEBUG oslo_concurrency.lockutils [req-0015bcf4-10c4-4099-ab7f-a3af49627280 req-5dabb2ea-ba1d-4f3a-aac1-7595664bdf11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.165 238945 DEBUG oslo_concurrency.lockutils [req-0015bcf4-10c4-4099-ab7f-a3af49627280 req-5dabb2ea-ba1d-4f3a-aac1-7595664bdf11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.165 238945 DEBUG nova.compute.manager [req-0015bcf4-10c4-4099-ab7f-a3af49627280 req-5dabb2ea-ba1d-4f3a-aac1-7595664bdf11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] No waiting events found dispatching network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.165 238945 WARNING nova.compute.manager [req-0015bcf4-10c4-4099-ab7f-a3af49627280 req-5dabb2ea-ba1d-4f3a-aac1-7595664bdf11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received unexpected event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b for instance with vm_state active and task_state None.#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.232 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.234 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3761MB free_disk=59.967015714384615GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.234 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.234 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.306 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b4f9558a-acfd-48f7-974d-003be7605ede actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.306 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.307 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.325 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.346 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.347 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.365 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.387 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 08:58:52 np0005597378 nova_compute[238941]: 2026-01-27 13:58:52.433 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 08:58:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:58:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4024125684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:58:53 np0005597378 nova_compute[238941]: 2026-01-27 13:58:53.020 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:53 np0005597378 nova_compute[238941]: 2026-01-27 13:58:53.026 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:58:53 np0005597378 nova_compute[238941]: 2026-01-27 13:58:53.042 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:58:53 np0005597378 nova_compute[238941]: 2026-01-27 13:58:53.065 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:58:53 np0005597378 nova_compute[238941]: 2026-01-27 13:58:53.066 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:53 np0005597378 nova_compute[238941]: 2026-01-27 13:58:53.534 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.382 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.383 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.383 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.384 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.385 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.386 238945 INFO nova.compute.manager [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Terminating instance#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.387 238945 DEBUG nova.compute.manager [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:58:54 np0005597378 kernel: tapa964944f-df (unregistering): left promiscuous mode
Jan 27 08:58:54 np0005597378 NetworkManager[48904]: <info>  [1769522334.4916] device (tapa964944f-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:58:54 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:54Z|00802|binding|INFO|Releasing lport a964944f-dff4-47b5-8ba0-d9a3d830032b from this chassis (sb_readonly=0)
Jan 27 08:58:54 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:54Z|00803|binding|INFO|Setting lport a964944f-dff4-47b5-8ba0-d9a3d830032b down in Southbound
Jan 27 08:58:54 np0005597378 ovn_controller[144812]: 2026-01-27T13:58:54Z|00804|binding|INFO|Removing iface tapa964944f-df ovn-installed in OVS
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.507 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.511 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 27 08:58:54 np0005597378 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000058.scope: Deactivated successfully.
Jan 27 08:58:54 np0005597378 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000058.scope: Consumed 4.931s CPU time.
Jan 27 08:58:54 np0005597378 systemd-machined[207425]: Machine qemu-100-instance-00000058 terminated.
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.582 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:9f:1e 10.100.0.10'], port_security=['fa:16:3e:97:9f:1e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b4f9558a-acfd-48f7-974d-003be7605ede', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f648e7d1298f439294591a8ee545b15b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d9fb182-c08f-4821-b406-6dc437cbe9cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=686942ec-7481-45ae-a50a-94b249d7ebe1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a964944f-dff4-47b5-8ba0-d9a3d830032b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.583 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a964944f-dff4-47b5-8ba0-d9a3d830032b in datapath 5c0e4370-54f6-4299-9eca-c6ff40d0b355 unbound from our chassis#033[00m
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.584 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c0e4370-54f6-4299-9eca-c6ff40d0b355, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.586 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9660e22a-8852-4d2d-b88c-edfc54c24dc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.586 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355 namespace which is not needed anymore#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.608 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.612 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.623 238945 INFO nova.virt.libvirt.driver [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Instance destroyed successfully.#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.623 238945 DEBUG nova.objects.instance [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lazy-loading 'resources' on Instance uuid b4f9558a-acfd-48f7-974d-003be7605ede obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.736 238945 DEBUG nova.virt.libvirt.vif [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-822533793',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-822533793',id=88,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:58:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f648e7d1298f439294591a8ee545b15b',ramdisk_id='',reservation_id='r-7rqf0hdr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-1691246908',owner_user_name='tempest-ServerTagsTestJSON-1691246908-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:58:50Z,user_data=None,user_id='1f15d118498f406a8f37e6740b9a193c',uuid=b4f9558a-acfd-48f7-974d-003be7605ede,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.737 238945 DEBUG nova.network.os_vif_util [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Converting VIF {"id": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "address": "fa:16:3e:97:9f:1e", "network": {"id": "5c0e4370-54f6-4299-9eca-c6ff40d0b355", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1985071593-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f648e7d1298f439294591a8ee545b15b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa964944f-df", "ovs_interfaceid": "a964944f-dff4-47b5-8ba0-d9a3d830032b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.737 238945 DEBUG nova.network.os_vif_util [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.738 238945 DEBUG os_vif [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.739 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.739 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa964944f-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.740 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.742 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.746 238945 INFO os_vif [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:9f:1e,bridge_name='br-int',has_traffic_filtering=True,id=a964944f-dff4-47b5-8ba0-d9a3d830032b,network=Network(5c0e4370-54f6-4299-9eca-c6ff40d0b355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa964944f-df')#033[00m
Jan 27 08:58:54 np0005597378 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [NOTICE]   (313616) : haproxy version is 2.8.14-c23fe91
Jan 27 08:58:54 np0005597378 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [NOTICE]   (313616) : path to executable is /usr/sbin/haproxy
Jan 27 08:58:54 np0005597378 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [WARNING]  (313616) : Exiting Master process...
Jan 27 08:58:54 np0005597378 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [ALERT]    (313616) : Current worker (313618) exited with code 143 (Terminated)
Jan 27 08:58:54 np0005597378 neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355[313612]: [WARNING]  (313616) : All workers exited. Exiting... (0)
Jan 27 08:58:54 np0005597378 systemd[1]: libpod-332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530.scope: Deactivated successfully.
Jan 27 08:58:54 np0005597378 podman[313708]: 2026-01-27 13:58:54.758428432 +0000 UTC m=+0.065485315 container died 332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 08:58:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8e826a204548baa46d3d416ec131518a490de796a2b21bded38e5b4f3839f2f8-merged.mount: Deactivated successfully.
Jan 27 08:58:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530-userdata-shm.mount: Deactivated successfully.
Jan 27 08:58:54 np0005597378 podman[313708]: 2026-01-27 13:58:54.849884882 +0000 UTC m=+0.156941745 container cleanup 332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 08:58:54 np0005597378 systemd[1]: libpod-conmon-332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530.scope: Deactivated successfully.
Jan 27 08:58:54 np0005597378 podman[313757]: 2026-01-27 13:58:54.922792723 +0000 UTC m=+0.052168865 container remove 332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.928 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd1edcb-bdb9-4a1f-b32e-d4bcc3d90026]: (4, ('Tue Jan 27 01:58:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355 (332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530)\n332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530\nTue Jan 27 01:58:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355 (332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530)\n332f08ec524ad3c79561472b01f7ff36d82652a4c20a6353b0dbfbc2b8a50530\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[76f80cb1-d80f-4b66-ad31-2fbdbae72ba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.931 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c0e4370-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.932 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:54 np0005597378 kernel: tap5c0e4370-50: left promiscuous mode
Jan 27 08:58:54 np0005597378 nova_compute[238941]: 2026-01-27 13:58:54.946 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.949 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2492c9-8cb7-4588-8220-38e34e0a5938]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.964 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e393fcd5-7950-4739-9cbb-b71d1b2869c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c38d1461-ebed-4669-84b5-28bbcc6714c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.982 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9923c898-eeba-4515-a0b7-375c918822ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496916, 'reachable_time': 42133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313771, 'error': None, 'target': 'ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:54 np0005597378 systemd[1]: run-netns-ovnmeta\x2d5c0e4370\x2d54f6\x2d4299\x2d9eca\x2dc6ff40d0b355.mount: Deactivated successfully.
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.986 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c0e4370-54f6-4299-9eca-c6ff40d0b355 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:58:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:58:54.987 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[41295104-26db-49b9-9c75-cbfa9846d58e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:58:55 np0005597378 nova_compute[238941]: 2026-01-27 13:58:55.084 238945 INFO nova.virt.libvirt.driver [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Deleting instance files /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede_del#033[00m
Jan 27 08:58:55 np0005597378 nova_compute[238941]: 2026-01-27 13:58:55.086 238945 INFO nova.virt.libvirt.driver [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Deletion of /var/lib/nova/instances/b4f9558a-acfd-48f7-974d-003be7605ede_del complete#033[00m
Jan 27 08:58:55 np0005597378 nova_compute[238941]: 2026-01-27 13:58:55.177 238945 INFO nova.compute.manager [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:58:55 np0005597378 nova_compute[238941]: 2026-01-27 13:58:55.177 238945 DEBUG oslo.service.loopingcall [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:58:55 np0005597378 nova_compute[238941]: 2026-01-27 13:58:55.178 238945 DEBUG nova.compute.manager [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:58:55 np0005597378 nova_compute[238941]: 2026-01-27 13:58:55.178 238945 DEBUG nova.network.neutron [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:58:55 np0005597378 nova_compute[238941]: 2026-01-27 13:58:55.397 238945 DEBUG nova.compute.manager [req-839d696a-f731-4f1a-a4f3-7fc1555735e5 req-6e4ba443-862f-4b6f-8d5d-e0cae8eeae9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-vif-unplugged-a964944f-dff4-47b5-8ba0-d9a3d830032b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:55 np0005597378 nova_compute[238941]: 2026-01-27 13:58:55.398 238945 DEBUG oslo_concurrency.lockutils [req-839d696a-f731-4f1a-a4f3-7fc1555735e5 req-6e4ba443-862f-4b6f-8d5d-e0cae8eeae9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:55 np0005597378 nova_compute[238941]: 2026-01-27 13:58:55.398 238945 DEBUG oslo_concurrency.lockutils [req-839d696a-f731-4f1a-a4f3-7fc1555735e5 req-6e4ba443-862f-4b6f-8d5d-e0cae8eeae9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:55 np0005597378 nova_compute[238941]: 2026-01-27 13:58:55.398 238945 DEBUG oslo_concurrency.lockutils [req-839d696a-f731-4f1a-a4f3-7fc1555735e5 req-6e4ba443-862f-4b6f-8d5d-e0cae8eeae9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:55 np0005597378 nova_compute[238941]: 2026-01-27 13:58:55.399 238945 DEBUG nova.compute.manager [req-839d696a-f731-4f1a-a4f3-7fc1555735e5 req-6e4ba443-862f-4b6f-8d5d-e0cae8eeae9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] No waiting events found dispatching network-vif-unplugged-a964944f-dff4-47b5-8ba0-d9a3d830032b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:58:55 np0005597378 nova_compute[238941]: 2026-01-27 13:58:55.399 238945 DEBUG nova.compute.manager [req-839d696a-f731-4f1a-a4f3-7fc1555735e5 req-6e4ba443-862f-4b6f-8d5d-e0cae8eeae9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-vif-unplugged-a964944f-dff4-47b5-8ba0-d9a3d830032b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:58:56 np0005597378 nova_compute[238941]: 2026-01-27 13:58:56.060 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:58:56 np0005597378 nova_compute[238941]: 2026-01-27 13:58:56.061 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:58:56 np0005597378 nova_compute[238941]: 2026-01-27 13:58:56.062 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:58:56 np0005597378 nova_compute[238941]: 2026-01-27 13:58:56.289 238945 DEBUG nova.network.neutron [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:58:56 np0005597378 nova_compute[238941]: 2026-01-27 13:58:56.311 238945 INFO nova.compute.manager [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Took 1.13 seconds to deallocate network for instance.#033[00m
Jan 27 08:58:56 np0005597378 nova_compute[238941]: 2026-01-27 13:58:56.353 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:56 np0005597378 nova_compute[238941]: 2026-01-27 13:58:56.354 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:56 np0005597378 nova_compute[238941]: 2026-01-27 13:58:56.403 238945 DEBUG oslo_concurrency.processutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 69 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 697 KiB/s wr, 87 op/s
Jan 27 08:58:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:58:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508457173' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.017 238945 DEBUG oslo_concurrency.processutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.025 238945 DEBUG nova.compute.provider_tree [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.049 238945 DEBUG nova.scheduler.client.report [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.082 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.125 238945 INFO nova.scheduler.client.report [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Deleted allocations for instance b4f9558a-acfd-48f7-974d-003be7605ede#033[00m
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.132 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.223 238945 DEBUG oslo_concurrency.lockutils [None req-aa50ff64-db70-4840-a427-5783e8f37977 1f15d118498f406a8f37e6740b9a193c f648e7d1298f439294591a8ee545b15b - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.654 238945 DEBUG nova.compute.manager [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.654 238945 DEBUG oslo_concurrency.lockutils [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.655 238945 DEBUG oslo_concurrency.lockutils [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.655 238945 DEBUG oslo_concurrency.lockutils [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b4f9558a-acfd-48f7-974d-003be7605ede-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.655 238945 DEBUG nova.compute.manager [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] No waiting events found dispatching network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.656 238945 WARNING nova.compute.manager [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received unexpected event network-vif-plugged-a964944f-dff4-47b5-8ba0-d9a3d830032b for instance with vm_state deleted and task_state None.#033[00m
Jan 27 08:58:57 np0005597378 nova_compute[238941]: 2026-01-27 13:58:57.656 238945 DEBUG nova.compute.manager [req-ac6b31e6-d240-4a58-b118-2f31d6faf297 req-43ce0ec4-7b02-4a69-bdcd-a618daa4112c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Received event network-vif-deleted-a964944f-dff4-47b5-8ba0-d9a3d830032b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:58:58 np0005597378 nova_compute[238941]: 2026-01-27 13:58:58.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:58:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 55 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 91 op/s
Jan 27 08:58:59 np0005597378 nova_compute[238941]: 2026-01-27 13:58:59.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:58:59 np0005597378 nova_compute[238941]: 2026-01-27 13:58:59.489 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:59 np0005597378 nova_compute[238941]: 2026-01-27 13:58:59.490 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:59 np0005597378 nova_compute[238941]: 2026-01-27 13:58:59.510 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:58:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:58:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2223092159' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:58:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:58:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2223092159' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:58:59 np0005597378 nova_compute[238941]: 2026-01-27 13:58:59.580 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:58:59 np0005597378 nova_compute[238941]: 2026-01-27 13:58:59.581 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:58:59 np0005597378 nova_compute[238941]: 2026-01-27 13:58:59.587 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:58:59 np0005597378 nova_compute[238941]: 2026-01-27 13:58:59.587 238945 INFO nova.compute.claims [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:58:59 np0005597378 nova_compute[238941]: 2026-01-27 13:58:59.680 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:58:59 np0005597378 nova_compute[238941]: 2026-01-27 13:58:59.741 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:59:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1026445992' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:59:00 np0005597378 nova_compute[238941]: 2026-01-27 13:59:00.264 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:00 np0005597378 nova_compute[238941]: 2026-01-27 13:59:00.272 238945 DEBUG nova.compute.provider_tree [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:59:00 np0005597378 nova_compute[238941]: 2026-01-27 13:59:00.292 238945 DEBUG nova.scheduler.client.report [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:59:00 np0005597378 nova_compute[238941]: 2026-01-27 13:59:00.319 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:00 np0005597378 nova_compute[238941]: 2026-01-27 13:59:00.321 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:59:00 np0005597378 nova_compute[238941]: 2026-01-27 13:59:00.502 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:59:00 np0005597378 nova_compute[238941]: 2026-01-27 13:59:00.503 238945 DEBUG nova.network.neutron [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:59:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 27 08:59:00 np0005597378 nova_compute[238941]: 2026-01-27 13:59:00.690 238945 INFO nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:59:00 np0005597378 podman[313817]: 2026-01-27 13:59:00.726264436 +0000 UTC m=+0.057657990 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 27 08:59:00 np0005597378 podman[313816]: 2026-01-27 13:59:00.764111653 +0000 UTC m=+0.095367003 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 08:59:00 np0005597378 nova_compute[238941]: 2026-01-27 13:59:00.833 238945 DEBUG nova.policy [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb4fa068674a79bbe5079fd5113d85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6b7675f34a66499383b81c1799f8ef4e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:59:00 np0005597378 nova_compute[238941]: 2026-01-27 13:59:00.943 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.445 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.447 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.447 238945 INFO nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Creating image(s)#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.469 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.490 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.512 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.516 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.622 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.623 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.624 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.625 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.649 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.654 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:01 np0005597378 nova_compute[238941]: 2026-01-27 13:59:01.989 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.071 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] resizing rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.135 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.184 238945 DEBUG nova.objects.instance [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lazy-loading 'migration_context' on Instance uuid 19f85ef5-f10f-49b4-b970-ad91d542cbe8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.345 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.346 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Ensure instance console log exists: /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.346 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.347 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.347 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.484 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.484 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.503 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 08:59:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 93 op/s
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.567 238945 DEBUG nova.network.neutron [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Successfully created port: ac2842da-30db-4e63-af6c-ba1f0abe6de9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.601 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.601 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.608 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.609 238945 INFO nova.compute.claims [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:59:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:02 np0005597378 nova_compute[238941]: 2026-01-27 13:59:02.730 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:59:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2286702703' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:59:03 np0005597378 nova_compute[238941]: 2026-01-27 13:59:03.284 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:03 np0005597378 nova_compute[238941]: 2026-01-27 13:59:03.292 238945 DEBUG nova.compute.provider_tree [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:59:03 np0005597378 nova_compute[238941]: 2026-01-27 13:59:03.313 238945 DEBUG nova.scheduler.client.report [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:59:03 np0005597378 nova_compute[238941]: 2026-01-27 13:59:03.444 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:03 np0005597378 nova_compute[238941]: 2026-01-27 13:59:03.446 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:59:03 np0005597378 nova_compute[238941]: 2026-01-27 13:59:03.629 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:59:03 np0005597378 nova_compute[238941]: 2026-01-27 13:59:03.630 238945 DEBUG nova.network.neutron [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:59:03 np0005597378 nova_compute[238941]: 2026-01-27 13:59:03.989 238945 INFO nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.027 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.146 238945 DEBUG nova.policy [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '84aa975dea454d9dafe5d1583c4d0f0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '393fd88e226e4f0e95954956b0fc8f40', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.176 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.177 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.177 238945 INFO nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Creating image(s)
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.199 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.222 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.242 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.246 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.324 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.325 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.326 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.327 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.349 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.353 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.388 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 61 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 768 KiB/s wr, 108 op/s
Jan 27 08:59:04 np0005597378 nova_compute[238941]: 2026-01-27 13:59:04.743 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.181 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.829s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.251 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] resizing rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.415 238945 DEBUG nova.objects.instance [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'migration_context' on Instance uuid b869d848-1a7e-4a04-95f2-cedc16ebe1f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.430 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.431 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Ensure instance console log exists: /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.431 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.432 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.432 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.793 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.794 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.813 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.875 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.876 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.884 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 08:59:05 np0005597378 nova_compute[238941]: 2026-01-27 13:59:05.884 238945 INFO nova.compute.claims [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Claim successful on node compute-0.ctlplane.example.com
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.042 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.303 238945 DEBUG nova.network.neutron [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Successfully updated port: ac2842da-30db-4e63-af6c-ba1f0abe6de9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.320 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.321 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquired lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.321 238945 DEBUG nova.network.neutron [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.443 238945 DEBUG nova.compute.manager [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-changed-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.444 238945 DEBUG nova.compute.manager [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Refreshing instance network info cache due to event network-changed-ac2842da-30db-4e63-af6c-ba1f0abe6de9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.444 238945 DEBUG oslo_concurrency.lockutils [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 08:59:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 103 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 813 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Jan 27 08:59:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:59:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691464662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.604 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.610 238945 DEBUG nova.compute.provider_tree [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.613 238945 DEBUG nova.network.neutron [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.618 238945 DEBUG nova.network.neutron [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Successfully created port: b405c0ca-029a-4203-9890-f05309eea795 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.624 238945 DEBUG nova.scheduler.client.report [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.662 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.663 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.733 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.734 238945 DEBUG nova.network.neutron [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.768 238945 INFO nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.811 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.935 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.936 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.937 238945 INFO nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Creating image(s)
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.954 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.973 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.993 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:59:06 np0005597378 nova_compute[238941]: 2026-01-27 13:59:06.996 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.025 238945 DEBUG nova.policy [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '84aa975dea454d9dafe5d1583c4d0f0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '393fd88e226e4f0e95954956b0fc8f40', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.060 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.061 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.062 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.062 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.083 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.086 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.137 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.603 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:59:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.673 238945 DEBUG nova.network.neutron [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Successfully created port: c3e32fae-fe60-4d39-980d-58000d56deee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.681 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] resizing rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.758 238945 DEBUG nova.objects.instance [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'migration_context' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.785 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.786 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Ensure instance console log exists: /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.786 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.786 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:59:07 np0005597378 nova_compute[238941]: 2026-01-27 13:59:07.787 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.369 238945 DEBUG nova.network.neutron [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Updating instance_info_cache with network_info: [{"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.392 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Releasing lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.393 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance network_info: |[{"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.393 238945 DEBUG oslo_concurrency.lockutils [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.393 238945 DEBUG nova.network.neutron [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Refreshing network info cache for port ac2842da-30db-4e63-af6c-ba1f0abe6de9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.396 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Start _get_guest_xml network_info=[{"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.400 238945 WARNING nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.405 238945 DEBUG nova.virt.libvirt.host [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.405 238945 DEBUG nova.virt.libvirt.host [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.408 238945 DEBUG nova.virt.libvirt.host [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.408 238945 DEBUG nova.virt.libvirt.host [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.409 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.409 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.409 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.410 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.410 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.410 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.410 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.410 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.411 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.411 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.411 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.411 238945 DEBUG nova.virt.hardware [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.414 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.536 238945 DEBUG nova.network.neutron [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Successfully updated port: b405c0ca-029a-4203-9890-f05309eea795 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:59:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 126 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.0 MiB/s wr, 61 op/s
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.553 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.553 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquired lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.554 238945 DEBUG nova.network.neutron [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.827 238945 DEBUG nova.network.neutron [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:59:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:59:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523316528' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.949 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.972 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:08 np0005597378 nova_compute[238941]: 2026-01-27 13:59:08.976 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.098 238945 DEBUG nova.network.neutron [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Successfully updated port: c3e32fae-fe60-4d39-980d-58000d56deee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.123 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.124 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquired lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.124 238945 DEBUG nova.network.neutron [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.238 238945 DEBUG nova.compute.manager [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-changed-b405c0ca-029a-4203-9890-f05309eea795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.238 238945 DEBUG nova.compute.manager [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Refreshing instance network info cache due to event network-changed-b405c0ca-029a-4203-9890-f05309eea795. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.239 238945 DEBUG oslo_concurrency.lockutils [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.307 238945 DEBUG nova.compute.manager [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-changed-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.308 238945 DEBUG nova.compute.manager [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Refreshing instance network info cache due to event network-changed-c3e32fae-fe60-4d39-980d-58000d56deee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.308 238945 DEBUG oslo_concurrency.lockutils [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:59:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:59:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/88169052' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.518 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.519 238945 DEBUG nova.virt.libvirt.vif [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:58:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1913905016',display_name='tempest-InstanceActionsTestJSON-server-1913905016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1913905016',id=89,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b7675f34a66499383b81c1799f8ef4e',ramdisk_id='',reservation_id='r-uoy7688n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-2136507556',owner_user_name='tempest-InstanceAct
ionsTestJSON-2136507556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:01Z,user_data=None,user_id='f8eb4fa068674a79bbe5079fd5113d85',uuid=19f85ef5-f10f-49b4-b970-ad91d542cbe8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.519 238945 DEBUG nova.network.os_vif_util [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converting VIF {"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.520 238945 DEBUG nova.network.os_vif_util [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.521 238945 DEBUG nova.objects.instance [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lazy-loading 'pci_devices' on Instance uuid 19f85ef5-f10f-49b4-b970-ad91d542cbe8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.537 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  <uuid>19f85ef5-f10f-49b4-b970-ad91d542cbe8</uuid>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  <name>instance-00000059</name>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <nova:name>tempest-InstanceActionsTestJSON-server-1913905016</nova:name>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:59:08</nova:creationTime>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:        <nova:user uuid="f8eb4fa068674a79bbe5079fd5113d85">tempest-InstanceActionsTestJSON-2136507556-project-member</nova:user>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:        <nova:project uuid="6b7675f34a66499383b81c1799f8ef4e">tempest-InstanceActionsTestJSON-2136507556</nova:project>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:        <nova:port uuid="ac2842da-30db-4e63-af6c-ba1f0abe6de9">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <entry name="serial">19f85ef5-f10f-49b4-b970-ad91d542cbe8</entry>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <entry name="uuid">19f85ef5-f10f-49b4-b970-ad91d542cbe8</entry>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:8b:2c:21"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <target dev="tapac2842da-30"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/console.log" append="off"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:59:09 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:59:09 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:59:09 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:59:09 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.538 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Preparing to wait for external event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.539 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.539 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.539 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.540 238945 DEBUG nova.virt.libvirt.vif [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:58:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1913905016',display_name='tempest-InstanceActionsTestJSON-server-1913905016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1913905016',id=89,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b7675f34a66499383b81c1799f8ef4e',ramdisk_id='',reservation_id='r-uoy7688n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-2136507556',owner_user_name='tempest-InstanceActionsTestJSON-2136507556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:01Z,user_data=None,user_id='f8eb4fa068674a79bbe5079fd5113d85',uuid=19f85ef5-f10f-49b4-b970-ad91d542cbe8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.540 238945 DEBUG nova.network.os_vif_util [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converting VIF {"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.541 238945 DEBUG nova.network.os_vif_util [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.542 238945 DEBUG os_vif [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.542 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.543 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.543 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.546 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac2842da-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.546 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac2842da-30, col_values=(('external_ids', {'iface-id': 'ac2842da-30db-4e63-af6c-ba1f0abe6de9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:2c:21', 'vm-uuid': '19f85ef5-f10f-49b4-b970-ad91d542cbe8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.548 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:09 np0005597378 NetworkManager[48904]: <info>  [1769522349.5488] manager: (tapac2842da-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.551 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.554 238945 INFO os_vif [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30')#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.605 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.606 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.606 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] No VIF found with MAC fa:16:3e:8b:2c:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.607 238945 INFO nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Using config drive#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.626 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.631 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522334.620691, b4f9558a-acfd-48f7-974d-003be7605ede => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.631 238945 INFO nova.compute.manager [-] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:59:09 np0005597378 nova_compute[238941]: 2026-01-27 13:59:09.661 238945 DEBUG nova.compute.manager [None req-b1d9710c-f20d-46fe-86f0-eb7c9929a066 - - - - - -] [instance: b4f9558a-acfd-48f7-974d-003be7605ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:10 np0005597378 nova_compute[238941]: 2026-01-27 13:59:10.134 238945 DEBUG nova.network.neutron [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:59:10 np0005597378 nova_compute[238941]: 2026-01-27 13:59:10.481 238945 INFO nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Creating config drive at /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/disk.config#033[00m
Jan 27 08:59:10 np0005597378 nova_compute[238941]: 2026-01-27 13:59:10.487 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv5nhg7jq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 180 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 5.3 MiB/s wr, 89 op/s
Jan 27 08:59:10 np0005597378 nova_compute[238941]: 2026-01-27 13:59:10.632 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv5nhg7jq" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:10 np0005597378 nova_compute[238941]: 2026-01-27 13:59:10.655 238945 DEBUG nova.storage.rbd_utils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] rbd image 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:10 np0005597378 nova_compute[238941]: 2026-01-27 13:59:10.658 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/disk.config 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:10 np0005597378 nova_compute[238941]: 2026-01-27 13:59:10.775 238945 DEBUG oslo_concurrency.processutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/disk.config 19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:10 np0005597378 nova_compute[238941]: 2026-01-27 13:59:10.776 238945 INFO nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Deleting local config drive /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/disk.config because it was imported into RBD.#033[00m
Jan 27 08:59:10 np0005597378 kernel: tapac2842da-30: entered promiscuous mode
Jan 27 08:59:10 np0005597378 NetworkManager[48904]: <info>  [1769522350.8497] manager: (tapac2842da-30): new Tun device (/org/freedesktop/NetworkManager/Devices/343)
Jan 27 08:59:10 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:10Z|00805|binding|INFO|Claiming lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 for this chassis.
Jan 27 08:59:10 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:10Z|00806|binding|INFO|ac2842da-30db-4e63-af6c-ba1f0abe6de9: Claiming fa:16:3e:8b:2c:21 10.100.0.6
Jan 27 08:59:10 np0005597378 nova_compute[238941]: 2026-01-27 13:59:10.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:10 np0005597378 nova_compute[238941]: 2026-01-27 13:59:10.860 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.863 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:2c:21 10.100.0.6'], port_security=['fa:16:3e:8b:2c:21 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '19f85ef5-f10f-49b4-b970-ad91d542cbe8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b7675f34a66499383b81c1799f8ef4e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7f8ffba3-b38d-486b-8940-fd84531a1608', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3085d5-267d-4977-a3d2-08eab226ca76, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ac2842da-30db-4e63-af6c-ba1f0abe6de9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:59:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.864 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ac2842da-30db-4e63-af6c-ba1f0abe6de9 in datapath de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b bound to our chassis#033[00m
Jan 27 08:59:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.865 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b#033[00m
Jan 27 08:59:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.879 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e81f4fd9-29dd-4d02-9398-6a6676f5e8f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.879 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapde664c2c-c1 in ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:59:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.881 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapde664c2c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:59:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.881 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[07a15c85-13bd-4cb5-9e28-1e6e23a954a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.882 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1f4ecb-57cf-49de-af64-d937127433db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:10 np0005597378 systemd-machined[207425]: New machine qemu-101-instance-00000059.
Jan 27 08:59:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.896 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[6f092847-118e-494b-9a54-c49b24e8ddbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.921 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f8ee13f-dfe4-4317-96d4-b288b91de125]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:10 np0005597378 systemd[1]: Started Virtual Machine qemu-101-instance-00000059.
Jan 27 08:59:10 np0005597378 nova_compute[238941]: 2026-01-27 13:59:10.930 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:10 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:10Z|00807|binding|INFO|Setting lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 ovn-installed in OVS
Jan 27 08:59:10 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:10Z|00808|binding|INFO|Setting lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 up in Southbound
Jan 27 08:59:10 np0005597378 nova_compute[238941]: 2026-01-27 13:59:10.939 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:10 np0005597378 systemd-udevd[314543]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:59:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.950 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[355ffed0-f2d0-4faf-aa4e-182fb430b3ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:10 np0005597378 NetworkManager[48904]: <info>  [1769522350.9550] device (tapac2842da-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:59:10 np0005597378 NetworkManager[48904]: <info>  [1769522350.9557] device (tapac2842da-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:59:10 np0005597378 NetworkManager[48904]: <info>  [1769522350.9591] manager: (tapde664c2c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/344)
Jan 27 08:59:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:10.959 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[942397df-f5b6-4766-8247-5d04e528daf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.001 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b9298d22-6cdf-4d9c-9117-512318079199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.005 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d09e048c-bc50-4137-8e91-48be64e8af06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:11 np0005597378 NetworkManager[48904]: <info>  [1769522351.0295] device (tapde664c2c-c0): carrier: link connected
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.035 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[325bd3d4-2e47-4d08-b3a6-57c95a28f807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.055 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9b07b104-33f5-4745-a1a6-ebf3f1133877]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde664c2c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:4d:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499065, 'reachable_time': 40979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314571, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.079 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eda4d985-19f7-4ba1-8ae2-9fbe2c5ccdc6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:4da3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499065, 'tstamp': 499065}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314572, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.100 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe59278-b031-4e04-ad41-09208d422215]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde664c2c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:4d:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499065, 'reachable_time': 40979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314573, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.134 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[22a96137-c525-4ef1-b434-29e1e7040664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.178 238945 DEBUG nova.network.neutron [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Updating instance_info_cache with network_info: [{"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.199 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[33a4e13d-6b88-4740-b521-053ddb3c679d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.200 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde664c2c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.200 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.201 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde664c2c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.202 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:11 np0005597378 kernel: tapde664c2c-c0: entered promiscuous mode
Jan 27 08:59:11 np0005597378 NetworkManager[48904]: <info>  [1769522351.2042] manager: (tapde664c2c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.205 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde664c2c-c0, col_values=(('external_ids', {'iface-id': '31d712d1-bf3a-4ae0-b986-fb5558dfacd2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.206 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:11 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:11Z|00809|binding|INFO|Releasing lport 31d712d1-bf3a-4ae0-b986-fb5558dfacd2 from this chassis (sb_readonly=0)
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.210 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Releasing lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.211 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Instance network_info: |[{"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.212 238945 DEBUG oslo_concurrency.lockutils [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.213 238945 DEBUG nova.network.neutron [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Refreshing network info cache for port b405c0ca-029a-4203-9890-f05309eea795 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.216 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Start _get_guest_xml network_info=[{"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.221 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.223 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.224 238945 WARNING nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.225 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[336e7a5b-ea50-4acb-a92d-9edd746fc71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.226 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.pid.haproxy
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:59:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:11.226 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'env', 'PROCESS_TAG=haproxy-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.236 238945 DEBUG nova.virt.libvirt.host [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.237 238945 DEBUG nova.virt.libvirt.host [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.241 238945 DEBUG nova.virt.libvirt.host [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.242 238945 DEBUG nova.virt.libvirt.host [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.243 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.243 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.245 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.245 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.245 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.246 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.246 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.247 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.247 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.248 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.248 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.249 238945 DEBUG nova.virt.hardware [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.254 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:11 np0005597378 podman[314625]: 2026-01-27 13:59:11.591266778 +0000 UTC m=+0.025566965 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.808 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522351.8074107, 19f85ef5-f10f-49b4-b970-ad91d542cbe8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.809 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] VM Started (Lifecycle Event)#033[00m
Jan 27 08:59:11 np0005597378 podman[314625]: 2026-01-27 13:59:11.830051998 +0000 UTC m=+0.264352145 container create e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.842 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.847 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522351.8076944, 19f85ef5-f10f-49b4-b970-ad91d542cbe8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.848 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:59:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:59:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3064805762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.867 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.871 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:11 np0005597378 systemd[1]: Started libpod-conmon-e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95.scope.
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.890 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:59:11 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:59:11 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3eec43a45986069145bb96e053f5db3e385456c60228ff105a88d7af5f201fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.909 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:11 np0005597378 podman[314625]: 2026-01-27 13:59:11.921732494 +0000 UTC m=+0.356032691 container init e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:59:11 np0005597378 podman[314625]: 2026-01-27 13:59:11.926827858 +0000 UTC m=+0.361128025 container start e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.929 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:11 np0005597378 nova_compute[238941]: 2026-01-27 13:59:11.933 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:11 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [NOTICE]   (314704) : New worker (314708) forked
Jan 27 08:59:11 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [NOTICE]   (314704) : Loading success.
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.065 238945 DEBUG nova.network.neutron [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Updating instance_info_cache with network_info: [{"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.095 238945 DEBUG nova.network.neutron [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Updated VIF entry in instance network info cache for port ac2842da-30db-4e63-af6c-ba1f0abe6de9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.096 238945 DEBUG nova.network.neutron [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Updating instance_info_cache with network_info: [{"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.098 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Releasing lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.099 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance network_info: |[{"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.099 238945 DEBUG oslo_concurrency.lockutils [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.099 238945 DEBUG nova.network.neutron [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Refreshing network info cache for port c3e32fae-fe60-4d39-980d-58000d56deee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.103 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Start _get_guest_xml network_info=[{"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.108 238945 WARNING nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.113 238945 DEBUG nova.virt.libvirt.host [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.114 238945 DEBUG nova.virt.libvirt.host [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.118 238945 DEBUG nova.virt.libvirt.host [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.119 238945 DEBUG nova.virt.libvirt.host [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.119 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.119 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.120 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.120 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.121 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.121 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.121 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.121 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.122 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.122 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.122 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.122 238945 DEBUG nova.virt.hardware [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.126 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.168 238945 DEBUG oslo_concurrency.lockutils [req-376f5679-8d23-44af-a496-aea286394b88 req-65a524cd-e382-4b20-b9ec-f8cf092dc243 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.509 238945 DEBUG nova.compute.manager [req-5f21ec0a-33a8-401a-b778-f4ab857742c0 req-b99dc5d3-a145-41ce-a90d-b93da2557b3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.509 238945 DEBUG oslo_concurrency.lockutils [req-5f21ec0a-33a8-401a-b778-f4ab857742c0 req-b99dc5d3-a145-41ce-a90d-b93da2557b3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.510 238945 DEBUG oslo_concurrency.lockutils [req-5f21ec0a-33a8-401a-b778-f4ab857742c0 req-b99dc5d3-a145-41ce-a90d-b93da2557b3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.510 238945 DEBUG oslo_concurrency.lockutils [req-5f21ec0a-33a8-401a-b778-f4ab857742c0 req-b99dc5d3-a145-41ce-a90d-b93da2557b3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.510 238945 DEBUG nova.compute.manager [req-5f21ec0a-33a8-401a-b778-f4ab857742c0 req-b99dc5d3-a145-41ce-a90d-b93da2557b3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Processing event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.511 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:59:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:59:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3044580342' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.529 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522352.5147648, 19f85ef5-f10f-49b4-b970-ad91d542cbe8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.529 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.539 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:59:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 180 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.547 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.554 238945 DEBUG nova.virt.libvirt.vif [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:59:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1523303383',display_name='tempest-ServerRescueNegativeTestJSON-server-1523303383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1523303383',id=90,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-4pp3oufn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_user_name=
'tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:04Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=b869d848-1a7e-4a04-95f2-cedc16ebe1f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.555 238945 DEBUG nova.network.os_vif_util [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.555 238945 DEBUG nova.network.os_vif_util [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.557 238945 DEBUG nova.objects.instance [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'pci_devices' on Instance uuid b869d848-1a7e-4a04-95f2-cedc16ebe1f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.560 238945 INFO nova.virt.libvirt.driver [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance spawned successfully.#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.560 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.579 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.579 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.579 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.580 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.580 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.580 238945 DEBUG nova.virt.libvirt.driver [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.584 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.586 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  <uuid>b869d848-1a7e-4a04-95f2-cedc16ebe1f7</uuid>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  <name>instance-0000005a</name>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1523303383</nova:name>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:59:11</nova:creationTime>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:        <nova:user uuid="84aa975dea454d9dafe5d1583c4d0f0e">tempest-ServerRescueNegativeTestJSON-1362523506-project-member</nova:user>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:        <nova:project uuid="393fd88e226e4f0e95954956b0fc8f40">tempest-ServerRescueNegativeTestJSON-1362523506</nova:project>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:        <nova:port uuid="b405c0ca-029a-4203-9890-f05309eea795">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <entry name="serial">b869d848-1a7e-4a04-95f2-cedc16ebe1f7</entry>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <entry name="uuid">b869d848-1a7e-4a04-95f2-cedc16ebe1f7</entry>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk.config">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:10:c2:09"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <target dev="tapb405c0ca-02"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/console.log" append="off"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:59:12 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:59:12 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:59:12 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:59:12 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.586 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Preparing to wait for external event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.586 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.587 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.587 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.587 238945 DEBUG nova.virt.libvirt.vif [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:59:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1523303383',display_name='tempest-ServerRescueNegativeTestJSON-server-1523303383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1523303383',id=90,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-4pp3oufn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_
user_name='tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:04Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=b869d848-1a7e-4a04-95f2-cedc16ebe1f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.588 238945 DEBUG nova.network.os_vif_util [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.588 238945 DEBUG nova.network.os_vif_util [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.588 238945 DEBUG os_vif [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.589 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.589 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.590 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.592 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb405c0ca-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.593 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb405c0ca-02, col_values=(('external_ids', {'iface-id': 'b405c0ca-029a-4203-9890-f05309eea795', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:c2:09', 'vm-uuid': 'b869d848-1a7e-4a04-95f2-cedc16ebe1f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:12 np0005597378 NetworkManager[48904]: <info>  [1769522352.5950] manager: (tapb405c0ca-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.596 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.600 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.601 238945 INFO os_vif [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02')#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.602 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:12 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.635 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:59:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.651 238945 INFO nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Took 11.20 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.651 238945 DEBUG nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.691 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.692 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.692 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No VIF found with MAC fa:16:3e:10:c2:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.693 238945 INFO nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Using config drive#033[00m
Jan 27 08:59:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:59:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145044891' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.716 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.729 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.755 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.760 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.802 238945 INFO nova.compute.manager [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Took 13.24 seconds to build instance.#033[00m
Jan 27 08:59:12 np0005597378 nova_compute[238941]: 2026-01-27 13:59:12.829 238945 DEBUG oslo_concurrency.lockutils [None req-c55587fa-72b6-47ee-ba4f-b7621f6449b2 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.383 238945 INFO nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Creating config drive at /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/disk.config#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.387 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn6cn3rsd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:59:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1973299670' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.425 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.427 238945 DEBUG nova.virt.libvirt.vif [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:59:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-157909793',display_name='tempest-ServerRescueNegativeTestJSON-server-157909793',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-157909793',id=91,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-d8t4fgro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_user_name='tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:06Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=975c9bc3-152a-44ef-843b-135ecb2d18d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.428 238945 DEBUG nova.network.os_vif_util [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.429 238945 DEBUG nova.network.os_vif_util [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.430 238945 DEBUG nova.objects.instance [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'pci_devices' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.450 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  <uuid>975c9bc3-152a-44ef-843b-135ecb2d18d3</uuid>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  <name>instance-0000005b</name>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-157909793</nova:name>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:59:12</nova:creationTime>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:        <nova:user uuid="84aa975dea454d9dafe5d1583c4d0f0e">tempest-ServerRescueNegativeTestJSON-1362523506-project-member</nova:user>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:        <nova:project uuid="393fd88e226e4f0e95954956b0fc8f40">tempest-ServerRescueNegativeTestJSON-1362523506</nova:project>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:        <nova:port uuid="c3e32fae-fe60-4d39-980d-58000d56deee">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <entry name="serial">975c9bc3-152a-44ef-843b-135ecb2d18d3</entry>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <entry name="uuid">975c9bc3-152a-44ef-843b-135ecb2d18d3</entry>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/975c9bc3-152a-44ef-843b-135ecb2d18d3_disk">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:80:f6:7e"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <target dev="tapc3e32fae-fe"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/console.log" append="off"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:59:13 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:59:13 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:59:13 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:59:13 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.451 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Preparing to wait for external event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.451 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.452 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.452 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.453 238945 DEBUG nova.virt.libvirt.vif [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:59:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-157909793',display_name='tempest-ServerRescueNegativeTestJSON-server-157909793',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-157909793',id=91,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-d8t4fgro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_user_name='tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:06Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=975c9bc3-152a-44ef-843b-135ecb2d18d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.453 238945 DEBUG nova.network.os_vif_util [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.454 238945 DEBUG nova.network.os_vif_util [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.454 238945 DEBUG os_vif [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.456 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.456 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.460 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.460 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3e32fae-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.461 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc3e32fae-fe, col_values=(('external_ids', {'iface-id': 'c3e32fae-fe60-4d39-980d-58000d56deee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:f6:7e', 'vm-uuid': '975c9bc3-152a-44ef-843b-135ecb2d18d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:13 np0005597378 NetworkManager[48904]: <info>  [1769522353.4639] manager: (tapc3e32fae-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.475 238945 INFO os_vif [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe')#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.533 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn6cn3rsd" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.560 238945 DEBUG nova.storage.rbd_utils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.564 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/disk.config b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.628 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.628 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.628 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No VIF found with MAC fa:16:3e:80:f6:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.630 238945 INFO nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Using config drive#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.653 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.744 238945 DEBUG nova.network.neutron [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Updated VIF entry in instance network info cache for port b405c0ca-029a-4203-9890-f05309eea795. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.745 238945 DEBUG nova.network.neutron [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Updating instance_info_cache with network_info: [{"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.765 238945 DEBUG oslo_concurrency.lockutils [req-aea81bf1-3c34-426d-8d15-d0ee3a21a712 req-81493f0a-ca5c-42b7-b279-843a7e6851af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.774 238945 DEBUG oslo_concurrency.processutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/disk.config b869d848-1a7e-4a04-95f2-cedc16ebe1f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.775 238945 INFO nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Deleting local config drive /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7/disk.config because it was imported into RBD.#033[00m
Jan 27 08:59:13 np0005597378 NetworkManager[48904]: <info>  [1769522353.8231] manager: (tapb405c0ca-02): new Tun device (/org/freedesktop/NetworkManager/Devices/348)
Jan 27 08:59:13 np0005597378 kernel: tapb405c0ca-02: entered promiscuous mode
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.832 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:13 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:13Z|00810|binding|INFO|Claiming lport b405c0ca-029a-4203-9890-f05309eea795 for this chassis.
Jan 27 08:59:13 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:13Z|00811|binding|INFO|b405c0ca-029a-4203-9890-f05309eea795: Claiming fa:16:3e:10:c2:09 10.100.0.11
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.846 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:c2:09 10.100.0.11'], port_security=['fa:16:3e:10:c2:09 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b869d848-1a7e-4a04-95f2-cedc16ebe1f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '393fd88e226e4f0e95954956b0fc8f40', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45b9f9a1-27c6-4b65-9b9c-83d53e36f3ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1434688d-9957-44d1-b6e9-ebbee6df300a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b405c0ca-029a-4203-9890-f05309eea795) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:59:13 np0005597378 NetworkManager[48904]: <info>  [1769522353.8485] device (tapb405c0ca-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:59:13 np0005597378 NetworkManager[48904]: <info>  [1769522353.8496] device (tapb405c0ca-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.849 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b405c0ca-029a-4203-9890-f05309eea795 in datapath 8c0471fd-a164-4ef9-bcee-a05e6b2d5892 bound to our chassis#033[00m
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.850 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c0471fd-a164-4ef9-bcee-a05e6b2d5892#033[00m
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.861 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc6245b7-5570-45d9-ac15-2cd48cc9c94a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.862 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8c0471fd-a1 in ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.864 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8c0471fd-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.864 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f19a52-688a-4908-a463-8e80fb50df02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.865 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f01cf2ab-f130-49b9-be72-82f0ccfd9709]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:13 np0005597378 systemd-machined[207425]: New machine qemu-102-instance-0000005a.
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.876 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fb232e91-c536-48cd-bc8e-6fc05ddb9494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:13 np0005597378 systemd[1]: Started Virtual Machine qemu-102-instance-0000005a.
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.903 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bb18bdf7-3b76-4c36-bb2d-34ef0b096d10]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.906 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:13 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:13Z|00812|binding|INFO|Setting lport b405c0ca-029a-4203-9890-f05309eea795 ovn-installed in OVS
Jan 27 08:59:13 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:13Z|00813|binding|INFO|Setting lport b405c0ca-029a-4203-9890-f05309eea795 up in Southbound
Jan 27 08:59:13 np0005597378 nova_compute[238941]: 2026-01-27 13:59:13.911 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.934 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[863c40ee-35e6-4096-b50d-c0700f39285a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.939 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[92e36773-a579-4e0c-89c9-dcf46d113e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:13 np0005597378 NetworkManager[48904]: <info>  [1769522353.9398] manager: (tap8c0471fd-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/349)
Jan 27 08:59:13 np0005597378 systemd-udevd[314912]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.970 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5de50baf-add2-4799-bfc9-b404cb131665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:13.975 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f45409af-e60c-47fe-a21e-0da563a70564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:14 np0005597378 NetworkManager[48904]: <info>  [1769522354.0021] device (tap8c0471fd-a0): carrier: link connected
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.008 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f5e88f-8de9-4221-8dc7-c3e653892740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.027 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f9e069-394a-430a-9327-9a6a59cbf938]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c0471fd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:41:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499362, 'reachable_time': 37055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314933, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.044 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec0f3bf-6b50-4a21-8461-fffca2428a04]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:41d0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499362, 'tstamp': 499362}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314934, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.064 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[acbf5fdc-85dd-47a7-8e1f-cb69dd86a227]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c0471fd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:41:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499362, 'reachable_time': 37055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314935, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.094 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[474d7a65-c845-4ac7-bb68-9ea6201681dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.162 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[761d5fce-f9f1-476e-a704-6e32c3628c8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.164 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c0471fd-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.165 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.165 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c0471fd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:14 np0005597378 NetworkManager[48904]: <info>  [1769522354.1677] manager: (tap8c0471fd-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Jan 27 08:59:14 np0005597378 kernel: tap8c0471fd-a0: entered promiscuous mode
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.167 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.172 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.173 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c0471fd-a0, col_values=(('external_ids', {'iface-id': '1d87c77e-a625-4816-9d54-732ad4d6236a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:14 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:14Z|00814|binding|INFO|Releasing lport 1d87c77e-a625-4816-9d54-732ad4d6236a from this chassis (sb_readonly=0)
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.195 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.201 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8c0471fd-a164-4ef9-bcee-a05e6b2d5892.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8c0471fd-a164-4ef9-bcee-a05e6b2d5892.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.202 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd596643-cc51-4847-a522-4b791a5b7a7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.203 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-8c0471fd-a164-4ef9-bcee-a05e6b2d5892
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/8c0471fd-a164-4ef9-bcee-a05e6b2d5892.pid.haproxy
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 8c0471fd-a164-4ef9-bcee-a05e6b2d5892
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.203 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'env', 'PROCESS_TAG=haproxy-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8c0471fd-a164-4ef9-bcee-a05e6b2d5892.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.221 238945 DEBUG nova.network.neutron [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Updated VIF entry in instance network info cache for port c3e32fae-fe60-4d39-980d-58000d56deee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.221 238945 DEBUG nova.network.neutron [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Updating instance_info_cache with network_info: [{"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.247 238945 DEBUG oslo_concurrency.lockutils [req-60142e73-9d0a-4faa-857c-ddbf3519333d req-d4b46c3c-0a39-46d2-92df-52d5929499d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.329 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522354.329002, b869d848-1a7e-4a04-95f2-cedc16ebe1f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.330 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] VM Started (Lifecycle Event)#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.355 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.359 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522354.329182, b869d848-1a7e-4a04-95f2-cedc16ebe1f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.360 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.365 238945 DEBUG oslo_concurrency.lockutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.365 238945 DEBUG oslo_concurrency.lockutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.365 238945 INFO nova.compute.manager [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Rebooting instance#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.389 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.391 238945 DEBUG oslo_concurrency.lockutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.392 238945 DEBUG oslo_concurrency.lockutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquired lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.392 238945 DEBUG nova.network.neutron [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.395 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.419 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:59:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 180 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 5.3 MiB/s wr, 95 op/s
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.572 238945 INFO nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Creating config drive at /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.577 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk0nzo_gc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:14 np0005597378 podman[315014]: 2026-01-27 13:59:14.561035352 +0000 UTC m=+0.026280043 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:59:14 np0005597378 podman[315014]: 2026-01-27 13:59:14.661563521 +0000 UTC m=+0.126808182 container create e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 08:59:14 np0005597378 systemd[1]: Started libpod-conmon-e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855.scope.
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.712 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk0nzo_gc" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:14 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.740 238945 DEBUG nova.storage.rbd_utils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136136e60f46ff752a39b467fb9a07c82501b719407e9df982ae54b1df12eb7b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.751 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.772 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:59:14 np0005597378 podman[315014]: 2026-01-27 13:59:14.784141229 +0000 UTC m=+0.249385930 container init e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.788 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:14 np0005597378 podman[315014]: 2026-01-27 13:59:14.790806905 +0000 UTC m=+0.256051566 container start e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:59:14 np0005597378 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [NOTICE]   (315056) : New worker (315058) forked
Jan 27 08:59:14 np0005597378 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [NOTICE]   (315056) : Loading success.
Jan 27 08:59:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:14.905 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.911 238945 DEBUG nova.compute.manager [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.912 238945 DEBUG oslo_concurrency.lockutils [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.913 238945 DEBUG oslo_concurrency.lockutils [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.913 238945 DEBUG oslo_concurrency.lockutils [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.914 238945 DEBUG nova.compute.manager [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.914 238945 WARNING nova.compute.manager [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received unexpected event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with vm_state active and task_state rebooting_hard.#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.914 238945 DEBUG nova.compute.manager [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.915 238945 DEBUG oslo_concurrency.lockutils [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.915 238945 DEBUG oslo_concurrency.lockutils [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.915 238945 DEBUG oslo_concurrency.lockutils [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.915 238945 DEBUG nova.compute.manager [req-8fdf93da-a993-4614-8bfc-9405419a8a36 req-b0b2bd4e-4098-4bf4-a0e6-8f3a2d607a76 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Processing event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.916 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.921 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522354.9202712, b869d848-1a7e-4a04-95f2-cedc16ebe1f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.922 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.925 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.928 238945 INFO nova.virt.libvirt.driver [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Instance spawned successfully.#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.929 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.952 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.953 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.954 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.954 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.955 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.955 238945 DEBUG nova.virt.libvirt.driver [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.960 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.968 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:14 np0005597378 nova_compute[238941]: 2026-01-27 13:59:14.995 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.040 238945 INFO nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Took 10.86 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.040 238945 DEBUG nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.056 238945 DEBUG oslo_concurrency.processutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.058 238945 INFO nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Deleting local config drive /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config because it was imported into RBD.#033[00m
Jan 27 08:59:15 np0005597378 kernel: tapc3e32fae-fe: entered promiscuous mode
Jan 27 08:59:15 np0005597378 NetworkManager[48904]: <info>  [1769522355.1134] manager: (tapc3e32fae-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/351)
Jan 27 08:59:15 np0005597378 systemd-udevd[314921]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:59:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:15Z|00815|binding|INFO|Claiming lport c3e32fae-fe60-4d39-980d-58000d56deee for this chassis.
Jan 27 08:59:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:15Z|00816|binding|INFO|c3e32fae-fe60-4d39-980d-58000d56deee: Claiming fa:16:3e:80:f6:7e 10.100.0.12
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.112 238945 INFO nova.compute.manager [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Took 12.53 seconds to build instance.#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.125 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:f6:7e 10.100.0.12'], port_security=['fa:16:3e:80:f6:7e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '975c9bc3-152a-44ef-843b-135ecb2d18d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '393fd88e226e4f0e95954956b0fc8f40', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45b9f9a1-27c6-4b65-9b9c-83d53e36f3ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1434688d-9957-44d1-b6e9-ebbee6df300a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c3e32fae-fe60-4d39-980d-58000d56deee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.126 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c3e32fae-fe60-4d39-980d-58000d56deee in datapath 8c0471fd-a164-4ef9-bcee-a05e6b2d5892 bound to our chassis#033[00m
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.127 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c0471fd-a164-4ef9-bcee-a05e6b2d5892#033[00m
Jan 27 08:59:15 np0005597378 NetworkManager[48904]: <info>  [1769522355.1317] device (tapc3e32fae-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:59:15 np0005597378 NetworkManager[48904]: <info>  [1769522355.1328] device (tapc3e32fae-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:15Z|00817|binding|INFO|Setting lport c3e32fae-fe60-4d39-980d-58000d56deee ovn-installed in OVS
Jan 27 08:59:15 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:15Z|00818|binding|INFO|Setting lport c3e32fae-fe60-4d39-980d-58000d56deee up in Southbound
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.150 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.151 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01bf8a05-f63b-4128-88ee-3a5e959ec72e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:15 np0005597378 systemd-machined[207425]: New machine qemu-103-instance-0000005b.
Jan 27 08:59:15 np0005597378 systemd[1]: Started Virtual Machine qemu-103-instance-0000005b.
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.186 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a9a956-6131-4255-a163-e914200eacdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.190 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0dfa1f-4be7-4911-bcab-7658834248d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.219 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8614fe-6284-4930-bd44-cf057c428c26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.240 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[973f0126-0d96-4ce3-83ba-223401adf44d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c0471fd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:41:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499362, 'reachable_time': 37055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315109, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.256 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5bcb5011-a39f-4d9a-9ddc-b3136c26e7c3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499375, 'tstamp': 499375}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315110, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499378, 'tstamp': 499378}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315110, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.258 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c0471fd-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.260 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c0471fd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.261 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.261 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c0471fd-a0, col_values=(('external_ids', {'iface-id': '1d87c77e-a625-4816-9d54-732ad4d6236a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:15 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:15.261 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.345 238945 DEBUG oslo_concurrency.lockutils [None req-a4ede59e-98ed-4dd6-93b7-bbd56aeada7d 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.560 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522355.5597427, 975c9bc3-152a-44ef-843b-135ecb2d18d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.560 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] VM Started (Lifecycle Event)#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.594 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.598 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522355.560536, 975c9bc3-152a-44ef-843b-135ecb2d18d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.599 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.618 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.620 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.646 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.865 238945 DEBUG nova.network.neutron [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Updating instance_info_cache with network_info: [{"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.889 238945 DEBUG oslo_concurrency.lockutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Releasing lock "refresh_cache-19f85ef5-f10f-49b4-b970-ad91d542cbe8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:59:15 np0005597378 nova_compute[238941]: 2026-01-27 13:59:15.890 238945 DEBUG nova.compute.manager [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:16 np0005597378 kernel: tapac2842da-30 (unregistering): left promiscuous mode
Jan 27 08:59:16 np0005597378 NetworkManager[48904]: <info>  [1769522356.0519] device (tapac2842da-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.057 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:16 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:16Z|00819|binding|INFO|Releasing lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 from this chassis (sb_readonly=0)
Jan 27 08:59:16 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:16Z|00820|binding|INFO|Setting lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 down in Southbound
Jan 27 08:59:16 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:16Z|00821|binding|INFO|Removing iface tapac2842da-30 ovn-installed in OVS
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.077 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:2c:21 10.100.0.6'], port_security=['fa:16:3e:8b:2c:21 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '19f85ef5-f10f-49b4-b970-ad91d542cbe8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b7675f34a66499383b81c1799f8ef4e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7f8ffba3-b38d-486b-8940-fd84531a1608', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3085d5-267d-4977-a3d2-08eab226ca76, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ac2842da-30db-4e63-af6c-ba1f0abe6de9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.078 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ac2842da-30db-4e63-af6c-ba1f0abe6de9 in datapath de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b unbound from our chassis#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.079 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.079 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.080 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[55ea2b17-4cb2-4358-a3e0-df4f6bbe30ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.081 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b namespace which is not needed anymore#033[00m
Jan 27 08:59:16 np0005597378 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000059.scope: Deactivated successfully.
Jan 27 08:59:16 np0005597378 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000059.scope: Consumed 4.361s CPU time.
Jan 27 08:59:16 np0005597378 systemd-machined[207425]: Machine qemu-101-instance-00000059 terminated.
Jan 27 08:59:16 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [NOTICE]   (314704) : haproxy version is 2.8.14-c23fe91
Jan 27 08:59:16 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [NOTICE]   (314704) : path to executable is /usr/sbin/haproxy
Jan 27 08:59:16 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [WARNING]  (314704) : Exiting Master process...
Jan 27 08:59:16 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [ALERT]    (314704) : Current worker (314708) exited with code 143 (Terminated)
Jan 27 08:59:16 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[314684]: [WARNING]  (314704) : All workers exited. Exiting... (0)
Jan 27 08:59:16 np0005597378 systemd[1]: libpod-e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95.scope: Deactivated successfully.
Jan 27 08:59:16 np0005597378 podman[315175]: 2026-01-27 13:59:16.225944002 +0000 UTC m=+0.052170865 container died e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.255 238945 INFO nova.virt.libvirt.driver [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance destroyed successfully.#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.255 238945 DEBUG nova.objects.instance [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lazy-loading 'resources' on Instance uuid 19f85ef5-f10f-49b4-b970-ad91d542cbe8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.268 238945 DEBUG nova.virt.libvirt.vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:58:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1913905016',display_name='tempest-InstanceActionsTestJSON-server-1913905016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1913905016',id=89,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b7675f34a66499383b81c1799f8ef4e',ramdisk_id='',reservation_id='r-uoy7688n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-2136507556',owner_user_name='tempest-InstanceActionsTestJSON-2136507556-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:15Z,user_data=None,user_id='f8eb4fa068674a79bbe5079fd5113d85',uuid=19f85ef5-f10f-49b4-b970-ad91d542cbe8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.269 238945 DEBUG nova.network.os_vif_util [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converting VIF {"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.270 238945 DEBUG nova.network.os_vif_util [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.270 238945 DEBUG os_vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:16 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95-userdata-shm.mount: Deactivated successfully.
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.277 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac2842da-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:16 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c3eec43a45986069145bb96e053f5db3e385456c60228ff105a88d7af5f201fa-merged.mount: Deactivated successfully.
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.283 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.286 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.288 238945 INFO os_vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30')#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.295 238945 DEBUG nova.virt.libvirt.driver [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Start _get_guest_xml network_info=[{"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.299 238945 WARNING nova.virt.libvirt.driver [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.304 238945 DEBUG nova.virt.libvirt.host [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.306 238945 DEBUG nova.virt.libvirt.host [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.309 238945 DEBUG nova.virt.libvirt.host [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.310 238945 DEBUG nova.virt.libvirt.host [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.310 238945 DEBUG nova.virt.libvirt.driver [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.310 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.311 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.311 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.311 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.311 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.311 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.311 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.312 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.312 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.312 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.312 238945 DEBUG nova.virt.hardware [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.312 238945 DEBUG nova.objects.instance [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 19f85ef5-f10f-49b4-b970-ad91d542cbe8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:16 np0005597378 podman[315175]: 2026-01-27 13:59:16.320948585 +0000 UTC m=+0.147175458 container cleanup e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 08:59:16 np0005597378 systemd[1]: libpod-conmon-e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95.scope: Deactivated successfully.
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.334 238945 DEBUG oslo_concurrency.processutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:16 np0005597378 podman[315213]: 2026-01-27 13:59:16.419319126 +0000 UTC m=+0.076122546 container remove e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.425 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52a80d84-07aa-4775-81d7-a9b5f84a7aa5]: (4, ('Tue Jan 27 01:59:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b (e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95)\ne813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95\nTue Jan 27 01:59:16 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b (e813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95)\ne813f4dd36b1518cc5871cf79117dd6b4a3c98c9d27d1835346499aba3f34a95\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.427 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[911ef9e1-52fc-487f-b559-6500f6606397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.428 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde664c2c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:16 np0005597378 kernel: tapde664c2c-c0: left promiscuous mode
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.447 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.451 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[adbb85ce-6221-463e-b19d-76f21607cb1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.475 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa738b8-8ac8-4003-9dc1-01d09dfa9d65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.477 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9e095fa4-de72-4fb9-b49e-145d1c34ad30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.492 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fae6d7ef-1893-4524-bd4d-85fbe68bf4f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499056, 'reachable_time': 23558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315239, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:16 np0005597378 systemd[1]: run-netns-ovnmeta\x2dde664c2c\x2dc86a\x2d4e4a\x2dbf03\x2dc4ddfd1ba82b.mount: Deactivated successfully.
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.495 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:59:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:16.495 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[24682b41-cfe4-4889-b2e4-b73d3e837afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 180 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.6 MiB/s wr, 127 op/s
Jan 27 08:59:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:59:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1001890213' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.941 238945 DEBUG oslo_concurrency.processutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:16 np0005597378 nova_compute[238941]: 2026-01-27 13:59:16.973 238945 DEBUG oslo_concurrency.processutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.018 238945 DEBUG nova.compute.manager [req-d8e2be49-07fe-4fdd-a03e-3ba4947e00cf req-93f99634-4ca5-45e6-8884-52ad239add12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.019 238945 DEBUG oslo_concurrency.lockutils [req-d8e2be49-07fe-4fdd-a03e-3ba4947e00cf req-93f99634-4ca5-45e6-8884-52ad239add12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.020 238945 DEBUG oslo_concurrency.lockutils [req-d8e2be49-07fe-4fdd-a03e-3ba4947e00cf req-93f99634-4ca5-45e6-8884-52ad239add12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.020 238945 DEBUG oslo_concurrency.lockutils [req-d8e2be49-07fe-4fdd-a03e-3ba4947e00cf req-93f99634-4ca5-45e6-8884-52ad239add12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.020 238945 DEBUG nova.compute.manager [req-d8e2be49-07fe-4fdd-a03e-3ba4947e00cf req-93f99634-4ca5-45e6-8884-52ad239add12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] No waiting events found dispatching network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.020 238945 WARNING nova.compute.manager [req-d8e2be49-07fe-4fdd-a03e-3ba4947e00cf req-93f99634-4ca5-45e6-8884-52ad239add12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received unexpected event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:59:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_13:59:17
Jan 27 08:59:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 08:59:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 08:59:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.rgw.root', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.control']
Jan 27 08:59:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.140 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:59:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065712073' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.549 238945 DEBUG oslo_concurrency.processutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.551 238945 DEBUG nova.virt.libvirt.vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:58:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1913905016',display_name='tempest-InstanceActionsTestJSON-server-1913905016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1913905016',id=89,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b7675f34a66499383b81c1799f8ef4e',ramdisk_id='',reservation_id='r-uoy7688n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-2136507556',owner_user_name='tempest-InstanceActionsTestJSON-2136507556-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:15Z,user_data=None,user_id='f8eb4fa068674a79bbe5079fd5113d85',uuid=19f85ef5-f10f-49b4-b970-ad91d542cbe8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.551 238945 DEBUG nova.network.os_vif_util [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converting VIF {"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.552 238945 DEBUG nova.network.os_vif_util [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.554 238945 DEBUG nova.objects.instance [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lazy-loading 'pci_devices' on Instance uuid 19f85ef5-f10f-49b4-b970-ad91d542cbe8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.618 238945 DEBUG nova.virt.libvirt.driver [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  <uuid>19f85ef5-f10f-49b4-b970-ad91d542cbe8</uuid>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  <name>instance-00000059</name>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <nova:name>tempest-InstanceActionsTestJSON-server-1913905016</nova:name>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:59:16</nova:creationTime>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:        <nova:user uuid="f8eb4fa068674a79bbe5079fd5113d85">tempest-InstanceActionsTestJSON-2136507556-project-member</nova:user>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:        <nova:project uuid="6b7675f34a66499383b81c1799f8ef4e">tempest-InstanceActionsTestJSON-2136507556</nova:project>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:        <nova:port uuid="ac2842da-30db-4e63-af6c-ba1f0abe6de9">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <entry name="serial">19f85ef5-f10f-49b4-b970-ad91d542cbe8</entry>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <entry name="uuid">19f85ef5-f10f-49b4-b970-ad91d542cbe8</entry>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/19f85ef5-f10f-49b4-b970-ad91d542cbe8_disk.config">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:8b:2c:21"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <target dev="tapac2842da-30"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8/console.log" append="off"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <input type="keyboard" bus="usb"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:59:17 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:59:17 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:59:17 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:59:17 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.621 238945 DEBUG nova.virt.libvirt.driver [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.622 238945 DEBUG nova.virt.libvirt.driver [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.623 238945 DEBUG nova.virt.libvirt.vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:58:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1913905016',display_name='tempest-InstanceActionsTestJSON-server-1913905016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1913905016',id=89,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='6b7675f34a66499383b81c1799f8ef4e',ramdisk_id='',reservation_id='r-uoy7688n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-2136507556',owner_user_name='tempest-InstanceActionsTestJSON-2136507556-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:15Z,user_data=None,user_id='f8eb4fa068674a79bbe5079fd5113d85',uuid=19f85ef5-f10f-49b4-b970-ad91d542cbe8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.623 238945 DEBUG nova.network.os_vif_util [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converting VIF {"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.624 238945 DEBUG nova.network.os_vif_util [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.624 238945 DEBUG os_vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.625 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.625 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.627 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.628 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac2842da-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.629 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac2842da-30, col_values=(('external_ids', {'iface-id': 'ac2842da-30db-4e63-af6c-ba1f0abe6de9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:2c:21', 'vm-uuid': '19f85ef5-f10f-49b4-b970-ad91d542cbe8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.631 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:17 np0005597378 NetworkManager[48904]: <info>  [1769522357.6326] manager: (tapac2842da-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.640 238945 INFO os_vif [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30')#033[00m
Jan 27 08:59:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:17 np0005597378 kernel: tapac2842da-30: entered promiscuous mode
Jan 27 08:59:17 np0005597378 NetworkManager[48904]: <info>  [1769522357.7271] manager: (tapac2842da-30): new Tun device (/org/freedesktop/NetworkManager/Devices/353)
Jan 27 08:59:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:17Z|00822|binding|INFO|Claiming lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 for this chassis.
Jan 27 08:59:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:17Z|00823|binding|INFO|ac2842da-30db-4e63-af6c-ba1f0abe6de9: Claiming fa:16:3e:8b:2c:21 10.100.0.6
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:17Z|00824|binding|INFO|Setting lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 ovn-installed in OVS
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.747 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:17 np0005597378 nova_compute[238941]: 2026-01-27 13:59:17.752 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:17 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:17Z|00825|binding|INFO|Setting lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 up in Southbound
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.757 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:2c:21 10.100.0.6'], port_security=['fa:16:3e:8b:2c:21 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '19f85ef5-f10f-49b4-b970-ad91d542cbe8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b7675f34a66499383b81c1799f8ef4e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7f8ffba3-b38d-486b-8940-fd84531a1608', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3085d5-267d-4977-a3d2-08eab226ca76, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ac2842da-30db-4e63-af6c-ba1f0abe6de9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.758 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ac2842da-30db-4e63-af6c-ba1f0abe6de9 in datapath de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b bound to our chassis#033[00m
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.759 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b#033[00m
Jan 27 08:59:17 np0005597378 systemd-machined[207425]: New machine qemu-104-instance-00000059.
Jan 27 08:59:17 np0005597378 systemd-udevd[315307]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.772 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[54e195c3-eb9f-4c70-993c-1eea80bf10b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.773 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapde664c2c-c1 in ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.775 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapde664c2c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.775 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[16bb4b5d-f90e-4aa9-b2e6-d8233aa793e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:17 np0005597378 systemd[1]: Started Virtual Machine qemu-104-instance-00000059.
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.775 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cb1256c1-d7b6-42ff-adaf-462a297f103a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:17 np0005597378 NetworkManager[48904]: <info>  [1769522357.7873] device (tapac2842da-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:59:17 np0005597378 NetworkManager[48904]: <info>  [1769522357.7878] device (tapac2842da-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.788 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd34a37-d71d-4c75-bf3d-65f64c8df00b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.816 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6877c2-eae2-4b76-88f8-bd82ebfc049b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:59:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:59:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:59:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:59:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:59:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.847 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[48bbad07-3edd-43c0-8ef4-1d96a1ca1193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:17 np0005597378 NetworkManager[48904]: <info>  [1769522357.8547] manager: (tapde664c2c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/354)
Jan 27 08:59:17 np0005597378 systemd-udevd[315310]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.856 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d13439b2-56d8-468a-a39a-9210544d830e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.885 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e52caa65-ffe2-4c68-8c59-e943b2e02b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.888 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6426bd-4601-480b-9c98-24105fa6909a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:17 np0005597378 NetworkManager[48904]: <info>  [1769522357.9110] device (tapde664c2c-c0): carrier: link connected
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.915 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bb56db12-ae42-4d35-a72f-aa5761e718e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.934 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eb06bdde-67b8-4658-ae57-a466a1538b9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde664c2c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:4d:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499753, 'reachable_time': 43221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315339, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.949 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[03a3c552-5ff9-44dd-851d-e7bb666c24b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:4da3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499753, 'tstamp': 499753}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315340, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:17.970 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[96a1451f-a2a5-42ce-9e0d-8a648ba028cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde664c2c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:4d:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499753, 'reachable_time': 43221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315341, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.007 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[431c54ec-f3fe-46e5-af0c-2bbd9fb2a5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 08:59:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 08:59:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:59:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 08:59:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:59:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 08:59:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:59:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 08:59:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:59:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.080 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[67676ec0-f809-4db9-a534-f79220373033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.082 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde664c2c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.082 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.082 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde664c2c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.084 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:18 np0005597378 NetworkManager[48904]: <info>  [1769522358.0849] manager: (tapde664c2c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Jan 27 08:59:18 np0005597378 kernel: tapde664c2c-c0: entered promiscuous mode
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.088 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde664c2c-c0, col_values=(('external_ids', {'iface-id': '31d712d1-bf3a-4ae0-b986-fb5558dfacd2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.089 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:18 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:18Z|00826|binding|INFO|Releasing lport 31d712d1-bf3a-4ae0-b986-fb5558dfacd2 from this chassis (sb_readonly=0)
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.110 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.112 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.113 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0850fe-d2be-4991-863a-feff5b99dbe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.114 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.pid.haproxy
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:59:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:18.115 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'env', 'PROCESS_TAG=haproxy-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.470 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 19f85ef5-f10f-49b4-b970-ad91d542cbe8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.472 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522358.469637, 19f85ef5-f10f-49b4-b970-ad91d542cbe8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.473 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.480 238945 DEBUG nova.compute.manager [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.483 238945 INFO nova.virt.libvirt.driver [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance rebooted successfully.#033[00m
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.483 238945 DEBUG nova.compute.manager [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.506 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.509 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:18 np0005597378 podman[315414]: 2026-01-27 13:59:18.542696752 +0000 UTC m=+0.097746595 container create 339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 08:59:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 181 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 132 op/s
Jan 27 08:59:18 np0005597378 podman[315414]: 2026-01-27 13:59:18.466719891 +0000 UTC m=+0.021769764 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:59:18 np0005597378 systemd[1]: Started libpod-conmon-339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226.scope.
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.618 238945 DEBUG oslo_concurrency.lockutils [None req-533078d0-4105-4cdc-a0b9-e43ce060bac5 f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:18 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:59:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d71d6c393c7a2fe190f0e4f954be799542f1501c444fcac40f9149c1996adab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.641 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522358.4703019, 19f85ef5-f10f-49b4-b970-ad91d542cbe8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.643 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] VM Started (Lifecycle Event)#033[00m
Jan 27 08:59:18 np0005597378 podman[315414]: 2026-01-27 13:59:18.661089522 +0000 UTC m=+0.216139385 container init 339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 08:59:18 np0005597378 podman[315414]: 2026-01-27 13:59:18.666215787 +0000 UTC m=+0.221265630 container start 339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:59:18 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [NOTICE]   (315434) : New worker (315436) forked
Jan 27 08:59:18 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [NOTICE]   (315434) : Loading success.
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.768 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:18 np0005597378 nova_compute[238941]: 2026-01-27 13:59:18.774 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 181 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.3 MiB/s wr, 236 op/s
Jan 27 08:59:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:20.907 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:20 np0005597378 nova_compute[238941]: 2026-01-27 13:59:20.971 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:20 np0005597378 nova_compute[238941]: 2026-01-27 13:59:20.972 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:20 np0005597378 nova_compute[238941]: 2026-01-27 13:59:20.972 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:20 np0005597378 nova_compute[238941]: 2026-01-27 13:59:20.973 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:20 np0005597378 nova_compute[238941]: 2026-01-27 13:59:20.973 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:20 np0005597378 nova_compute[238941]: 2026-01-27 13:59:20.974 238945 INFO nova.compute.manager [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Terminating instance#033[00m
Jan 27 08:59:20 np0005597378 nova_compute[238941]: 2026-01-27 13:59:20.975 238945 DEBUG nova.compute.manager [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:59:21 np0005597378 kernel: tapac2842da-30 (unregistering): left promiscuous mode
Jan 27 08:59:21 np0005597378 NetworkManager[48904]: <info>  [1769522361.0973] device (tapac2842da-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:59:21 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:21Z|00827|binding|INFO|Releasing lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 from this chassis (sb_readonly=0)
Jan 27 08:59:21 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:21Z|00828|binding|INFO|Setting lport ac2842da-30db-4e63-af6c-ba1f0abe6de9 down in Southbound
Jan 27 08:59:21 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:21Z|00829|binding|INFO|Removing iface tapac2842da-30 ovn-installed in OVS
Jan 27 08:59:21 np0005597378 nova_compute[238941]: 2026-01-27 13:59:21.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:21 np0005597378 nova_compute[238941]: 2026-01-27 13:59:21.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:21 np0005597378 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000059.scope: Deactivated successfully.
Jan 27 08:59:21 np0005597378 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000059.scope: Consumed 3.194s CPU time.
Jan 27 08:59:21 np0005597378 systemd-machined[207425]: Machine qemu-104-instance-00000059 terminated.
Jan 27 08:59:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:21.169 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:2c:21 10.100.0.6'], port_security=['fa:16:3e:8b:2c:21 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '19f85ef5-f10f-49b4-b970-ad91d542cbe8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b7675f34a66499383b81c1799f8ef4e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7f8ffba3-b38d-486b-8940-fd84531a1608', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3085d5-267d-4977-a3d2-08eab226ca76, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ac2842da-30db-4e63-af6c-ba1f0abe6de9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:59:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:21.170 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ac2842da-30db-4e63-af6c-ba1f0abe6de9 in datapath de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b unbound from our chassis#033[00m
Jan 27 08:59:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:21.171 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:59:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:21.173 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c96755f8-7d39-458c-87b7-fcf12690d27b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:21.173 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b namespace which is not needed anymore#033[00m
Jan 27 08:59:21 np0005597378 nova_compute[238941]: 2026-01-27 13:59:21.213 238945 INFO nova.virt.libvirt.driver [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Instance destroyed successfully.#033[00m
Jan 27 08:59:21 np0005597378 nova_compute[238941]: 2026-01-27 13:59:21.214 238945 DEBUG nova.objects.instance [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lazy-loading 'resources' on Instance uuid 19f85ef5-f10f-49b4-b970-ad91d542cbe8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:21 np0005597378 nova_compute[238941]: 2026-01-27 13:59:21.228 238945 DEBUG nova.virt.libvirt.vif [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:58:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1913905016',display_name='tempest-InstanceActionsTestJSON-server-1913905016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1913905016',id=89,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b7675f34a66499383b81c1799f8ef4e',ramdisk_id='',reservation_id='r-uoy7688n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-2136507556',owner_user_name='tempest-InstanceActionsTestJSON-2136507556-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:18Z,user_data=None,user_id='f8eb4fa068674a79bbe5079fd5113d85',uuid=19f85ef5-f10f-49b4-b970-ad91d542cbe8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:59:21 np0005597378 nova_compute[238941]: 2026-01-27 13:59:21.228 238945 DEBUG nova.network.os_vif_util [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converting VIF {"id": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "address": "fa:16:3e:8b:2c:21", "network": {"id": "de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1116131274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b7675f34a66499383b81c1799f8ef4e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac2842da-30", "ovs_interfaceid": "ac2842da-30db-4e63-af6c-ba1f0abe6de9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:21 np0005597378 nova_compute[238941]: 2026-01-27 13:59:21.229 238945 DEBUG nova.network.os_vif_util [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:21 np0005597378 nova_compute[238941]: 2026-01-27 13:59:21.229 238945 DEBUG os_vif [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:59:21 np0005597378 nova_compute[238941]: 2026-01-27 13:59:21.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:21 np0005597378 nova_compute[238941]: 2026-01-27 13:59:21.231 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac2842da-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:21 np0005597378 nova_compute[238941]: 2026-01-27 13:59:21.232 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:21 np0005597378 nova_compute[238941]: 2026-01-27 13:59:21.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:21 np0005597378 nova_compute[238941]: 2026-01-27 13:59:21.236 238945 INFO os_vif [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:2c:21,bridge_name='br-int',has_traffic_filtering=True,id=ac2842da-30db-4e63-af6c-ba1f0abe6de9,network=Network(de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac2842da-30')#033[00m
Jan 27 08:59:21 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [NOTICE]   (315434) : haproxy version is 2.8.14-c23fe91
Jan 27 08:59:21 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [NOTICE]   (315434) : path to executable is /usr/sbin/haproxy
Jan 27 08:59:21 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [WARNING]  (315434) : Exiting Master process...
Jan 27 08:59:21 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [ALERT]    (315434) : Current worker (315436) exited with code 143 (Terminated)
Jan 27 08:59:21 np0005597378 neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b[315430]: [WARNING]  (315434) : All workers exited. Exiting... (0)
Jan 27 08:59:21 np0005597378 systemd[1]: libpod-339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226.scope: Deactivated successfully.
Jan 27 08:59:21 np0005597378 podman[315489]: 2026-01-27 13:59:21.405019887 +0000 UTC m=+0.132875462 container died 339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 08:59:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226-userdata-shm.mount: Deactivated successfully.
Jan 27 08:59:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3d71d6c393c7a2fe190f0e4f954be799542f1501c444fcac40f9149c1996adab-merged.mount: Deactivated successfully.
Jan 27 08:59:22 np0005597378 podman[315489]: 2026-01-27 13:59:22.118230995 +0000 UTC m=+0.846086540 container cleanup 339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:59:22 np0005597378 systemd[1]: libpod-conmon-339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226.scope: Deactivated successfully.
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.143 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.205 238945 DEBUG nova.compute.manager [req-ec244d3e-91c0-4933-b495-45e4450ac531 req-f593f9fe-9563-457d-a46c-83a32bda616d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.205 238945 DEBUG oslo_concurrency.lockutils [req-ec244d3e-91c0-4933-b495-45e4450ac531 req-f593f9fe-9563-457d-a46c-83a32bda616d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.205 238945 DEBUG oslo_concurrency.lockutils [req-ec244d3e-91c0-4933-b495-45e4450ac531 req-f593f9fe-9563-457d-a46c-83a32bda616d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.205 238945 DEBUG oslo_concurrency.lockutils [req-ec244d3e-91c0-4933-b495-45e4450ac531 req-f593f9fe-9563-457d-a46c-83a32bda616d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.206 238945 DEBUG nova.compute.manager [req-ec244d3e-91c0-4933-b495-45e4450ac531 req-f593f9fe-9563-457d-a46c-83a32bda616d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Processing event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.206 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.210 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.212 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522362.212745, 975c9bc3-152a-44ef-843b-135ecb2d18d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.213 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.216 238945 INFO nova.virt.libvirt.driver [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance spawned successfully.#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.217 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.248 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.251 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.263 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.263 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.264 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.264 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.264 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.265 238945 DEBUG nova.virt.libvirt.driver [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.277 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.357 238945 INFO nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Took 15.42 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.357 238945 DEBUG nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:22 np0005597378 podman[315528]: 2026-01-27 13:59:22.433260974 +0000 UTC m=+0.294717175 container remove 339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 08:59:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.441 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[32bfed9f-5bf9-40e7-9f20-a4b6d207ce53]: (4, ('Tue Jan 27 01:59:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b (339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226)\n339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226\nTue Jan 27 01:59:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b (339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226)\n339b44ae57c1ee2fe6378d37264049796e32007114a8454eb8e0760abcbd4226\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.443 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af999568-a7a7-4408-aba6-2cebbc0dbf29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.443 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde664c2c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:22 np0005597378 kernel: tapde664c2c-c0: left promiscuous mode
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.449 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.452 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8299ece0-4d2a-46d5-bbab-7b09742f7c88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.455 238945 INFO nova.compute.manager [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Took 16.60 seconds to build instance.#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.468 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.469 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6fff7550-dd0a-4ad4-bc39-dba0219591b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.471 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf9792a-794f-405f-9875-501b7ec6f05c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:22 np0005597378 nova_compute[238941]: 2026-01-27 13:59:22.473 238945 DEBUG oslo_concurrency.lockutils [None req-a6104aa3-9f3a-4e48-a585-66c5f077225a 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.487 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f01c3497-7f10-45bb-b0fc-a18df54027d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499746, 'reachable_time': 19357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315544, 'error': None, 'target': 'ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.490 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-de664c2c-c86a-4e4a-bf03-c4ddfd1ba82b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 08:59:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:22.490 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[47733199-b620-42a8-a6a0-40f8cfcad7bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:22 np0005597378 systemd[1]: run-netns-ovnmeta\x2dde664c2c\x2dc86a\x2d4e4a\x2dbf03\x2dc4ddfd1ba82b.mount: Deactivated successfully.
Jan 27 08:59:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 181 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 37 KiB/s wr, 202 op/s
Jan 27 08:59:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:23 np0005597378 nova_compute[238941]: 2026-01-27 13:59:23.532 238945 INFO nova.virt.libvirt.driver [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Deleting instance files /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8_del#033[00m
Jan 27 08:59:23 np0005597378 nova_compute[238941]: 2026-01-27 13:59:23.533 238945 INFO nova.virt.libvirt.driver [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Deletion of /var/lib/nova/instances/19f85ef5-f10f-49b4-b970-ad91d542cbe8_del complete#033[00m
Jan 27 08:59:23 np0005597378 nova_compute[238941]: 2026-01-27 13:59:23.595 238945 INFO nova.compute.manager [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Took 2.62 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:59:23 np0005597378 nova_compute[238941]: 2026-01-27 13:59:23.596 238945 DEBUG oslo.service.loopingcall [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:59:23 np0005597378 nova_compute[238941]: 2026-01-27 13:59:23.596 238945 DEBUG nova.compute.manager [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:59:23 np0005597378 nova_compute[238941]: 2026-01-27 13:59:23.596 238945 DEBUG nova.network.neutron [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.407 238945 INFO nova.compute.manager [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Rescuing#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.408 238945 DEBUG oslo_concurrency.lockutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.408 238945 DEBUG oslo_concurrency.lockutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquired lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.409 238945 DEBUG nova.network.neutron [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.511 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.512 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.512 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.512 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.513 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.513 238945 WARNING nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received unexpected event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with vm_state active and task_state rescuing.#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.513 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-unplugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.513 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.514 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.514 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.514 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-unplugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.514 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-unplugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.515 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.515 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.515 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.515 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.516 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.516 238945 WARNING nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received unexpected event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.516 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.516 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.517 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.517 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.517 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.517 238945 WARNING nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received unexpected event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.518 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.518 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.518 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.519 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.519 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.519 238945 WARNING nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received unexpected event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.519 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-unplugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.520 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.520 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.520 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.520 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-unplugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.521 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-unplugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.521 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.521 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.521 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.522 238945 DEBUG oslo_concurrency.lockutils [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.522 238945 DEBUG nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] No waiting events found dispatching network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.522 238945 WARNING nova.compute.manager [req-442dd455-0952-4676-b186-6dd96877bd4c req-c1b33a28-2e35-4a4e-b107-acd6b41f7371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received unexpected event network-vif-plugged-ac2842da-30db-4e63-af6c-ba1f0abe6de9 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.544 238945 DEBUG nova.network.neutron [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:59:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 161 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 38 KiB/s wr, 265 op/s
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.575 238945 INFO nova.compute.manager [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Took 0.98 seconds to deallocate network for instance.#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.654 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.655 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:24 np0005597378 nova_compute[238941]: 2026-01-27 13:59:24.740 238945 DEBUG oslo_concurrency.processutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:59:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/782640873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:59:25 np0005597378 nova_compute[238941]: 2026-01-27 13:59:25.304 238945 DEBUG oslo_concurrency.processutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:25 np0005597378 nova_compute[238941]: 2026-01-27 13:59:25.310 238945 DEBUG nova.compute.provider_tree [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:59:25 np0005597378 nova_compute[238941]: 2026-01-27 13:59:25.343 238945 DEBUG nova.scheduler.client.report [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:59:25 np0005597378 nova_compute[238941]: 2026-01-27 13:59:25.385 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:25 np0005597378 nova_compute[238941]: 2026-01-27 13:59:25.416 238945 INFO nova.scheduler.client.report [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Deleted allocations for instance 19f85ef5-f10f-49b4-b970-ad91d542cbe8#033[00m
Jan 27 08:59:25 np0005597378 nova_compute[238941]: 2026-01-27 13:59:25.500 238945 DEBUG oslo_concurrency.lockutils [None req-24e5e395-2b23-4097-a4bc-f1c63d032f0b f8eb4fa068674a79bbe5079fd5113d85 6b7675f34a66499383b81c1799f8ef4e - - default default] Lock "19f85ef5-f10f-49b4-b970-ad91d542cbe8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:26 np0005597378 nova_compute[238941]: 2026-01-27 13:59:26.233 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 134 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 26 KiB/s wr, 282 op/s
Jan 27 08:59:26 np0005597378 nova_compute[238941]: 2026-01-27 13:59:26.708 238945 DEBUG nova.compute.manager [req-3d1fac77-2a1c-4ac3-ad91-c4f06b910e05 req-3b34fee5-8c43-48d0-9ad2-8291ad63c9f0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Received event network-vif-deleted-ac2842da-30db-4e63-af6c-ba1f0abe6de9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:27 np0005597378 nova_compute[238941]: 2026-01-27 13:59:27.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:27 np0005597378 nova_compute[238941]: 2026-01-27 13:59:27.182 238945 DEBUG nova.network.neutron [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Updating instance_info_cache with network_info: [{"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:59:27 np0005597378 nova_compute[238941]: 2026-01-27 13:59:27.216 238945 DEBUG oslo_concurrency.lockutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Releasing lock "refresh_cache-975c9bc3-152a-44ef-843b-135ecb2d18d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:59:27 np0005597378 nova_compute[238941]: 2026-01-27 13:59:27.522 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007075806214271163 of space, bias 1.0, pg target 0.2122741864281349 quantized to 32 (current 32)
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006684570283933862 of space, bias 1.0, pg target 0.20053710851801584 quantized to 32 (current 32)
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0961324059964003e-06 of space, bias 4.0, pg target 0.0013153588871956804 quantized to 16 (current 16)
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 08:59:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 08:59:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:28 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:28Z|00830|binding|INFO|Releasing lport 1d87c77e-a625-4816-9d54-732ad4d6236a from this chassis (sb_readonly=0)
Jan 27 08:59:28 np0005597378 nova_compute[238941]: 2026-01-27 13:59:28.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 158 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 1.5 MiB/s wr, 296 op/s
Jan 27 08:59:28 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:28Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:c2:09 10.100.0.11
Jan 27 08:59:28 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:28Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:c2:09 10.100.0.11
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:59:29 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 08:59:29 np0005597378 podman[315711]: 2026-01-27 13:59:29.705692065 +0000 UTC m=+0.056904760 container create 5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:59:29 np0005597378 systemd[1]: Started libpod-conmon-5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319.scope.
Jan 27 08:59:29 np0005597378 podman[315711]: 2026-01-27 13:59:29.672478849 +0000 UTC m=+0.023691564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:59:29 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:59:29 np0005597378 podman[315711]: 2026-01-27 13:59:29.812860658 +0000 UTC m=+0.164073373 container init 5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 08:59:29 np0005597378 podman[315711]: 2026-01-27 13:59:29.819622367 +0000 UTC m=+0.170835062 container start 5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:59:29 np0005597378 strange_mirzakhani[315727]: 167 167
Jan 27 08:59:29 np0005597378 systemd[1]: libpod-5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319.scope: Deactivated successfully.
Jan 27 08:59:29 np0005597378 podman[315711]: 2026-01-27 13:59:29.827568085 +0000 UTC m=+0.178780810 container attach 5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 08:59:29 np0005597378 podman[315711]: 2026-01-27 13:59:29.828364717 +0000 UTC m=+0.179577412 container died 5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:59:29 np0005597378 systemd[1]: var-lib-containers-storage-overlay-950b8f877cb43c8a0d781b71994785ccbe435db1864df473c62c1e62f6b47393-merged.mount: Deactivated successfully.
Jan 27 08:59:29 np0005597378 podman[315711]: 2026-01-27 13:59:29.893695797 +0000 UTC m=+0.244908482 container remove 5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:59:29 np0005597378 systemd[1]: libpod-conmon-5fa1999d6b151fa9e01861dd5f21040285558ffcf6ccc430faacd20ed9d2c319.scope: Deactivated successfully.
Jan 27 08:59:30 np0005597378 podman[315751]: 2026-01-27 13:59:30.071776049 +0000 UTC m=+0.050126942 container create d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 08:59:30 np0005597378 systemd[1]: Started libpod-conmon-d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c.scope.
Jan 27 08:59:30 np0005597378 podman[315751]: 2026-01-27 13:59:30.050662263 +0000 UTC m=+0.029013186 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:59:30 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:59:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462fe9afc61c7af0226e4fc2d1e77e6885bcfe49c878ad7c151b8d68a29d46c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462fe9afc61c7af0226e4fc2d1e77e6885bcfe49c878ad7c151b8d68a29d46c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462fe9afc61c7af0226e4fc2d1e77e6885bcfe49c878ad7c151b8d68a29d46c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462fe9afc61c7af0226e4fc2d1e77e6885bcfe49c878ad7c151b8d68a29d46c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462fe9afc61c7af0226e4fc2d1e77e6885bcfe49c878ad7c151b8d68a29d46c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:30 np0005597378 podman[315751]: 2026-01-27 13:59:30.187915759 +0000 UTC m=+0.166266672 container init d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 08:59:30 np0005597378 podman[315751]: 2026-01-27 13:59:30.194249925 +0000 UTC m=+0.172600818 container start d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:59:30 np0005597378 podman[315751]: 2026-01-27 13:59:30.197692736 +0000 UTC m=+0.176043629 container attach d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 08:59:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 167 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.1 MiB/s wr, 295 op/s
Jan 27 08:59:30 np0005597378 epic_wiles[315769]: --> passed data devices: 0 physical, 3 LVM
Jan 27 08:59:30 np0005597378 epic_wiles[315769]: --> All data devices are unavailable
Jan 27 08:59:30 np0005597378 systemd[1]: libpod-d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c.scope: Deactivated successfully.
Jan 27 08:59:30 np0005597378 podman[315751]: 2026-01-27 13:59:30.699958367 +0000 UTC m=+0.678309280 container died d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:59:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-462fe9afc61c7af0226e4fc2d1e77e6885bcfe49c878ad7c151b8d68a29d46c0-merged.mount: Deactivated successfully.
Jan 27 08:59:30 np0005597378 podman[315751]: 2026-01-27 13:59:30.854459848 +0000 UTC m=+0.832810761 container remove d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 08:59:30 np0005597378 systemd[1]: libpod-conmon-d7eb0b76f4d13d483319bf40598a59b814b9e9f4af4b3e58b03ff8154580614c.scope: Deactivated successfully.
Jan 27 08:59:30 np0005597378 podman[315801]: 2026-01-27 13:59:30.886037879 +0000 UTC m=+0.104100053 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 08:59:30 np0005597378 podman[315812]: 2026-01-27 13:59:30.926657459 +0000 UTC m=+0.085591896 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 27 08:59:31 np0005597378 nova_compute[238941]: 2026-01-27 13:59:31.235 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:31 np0005597378 podman[315907]: 2026-01-27 13:59:31.343960562 +0000 UTC m=+0.062256620 container create 166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hermann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 08:59:31 np0005597378 podman[315907]: 2026-01-27 13:59:31.303004024 +0000 UTC m=+0.021300082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:59:31 np0005597378 systemd[1]: Started libpod-conmon-166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8.scope.
Jan 27 08:59:31 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:59:31 np0005597378 podman[315907]: 2026-01-27 13:59:31.444737657 +0000 UTC m=+0.163033715 container init 166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 08:59:31 np0005597378 podman[315907]: 2026-01-27 13:59:31.452509462 +0000 UTC m=+0.170805500 container start 166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hermann, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 08:59:31 np0005597378 sharp_hermann[315924]: 167 167
Jan 27 08:59:31 np0005597378 systemd[1]: libpod-166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8.scope: Deactivated successfully.
Jan 27 08:59:31 np0005597378 podman[315907]: 2026-01-27 13:59:31.460526954 +0000 UTC m=+0.178823022 container attach 166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hermann, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 08:59:31 np0005597378 podman[315907]: 2026-01-27 13:59:31.461310524 +0000 UTC m=+0.179606562 container died 166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hermann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 27 08:59:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ccc45188e142b9ab3f048cee29f82df0ad9b8fce01a2b3eb77c255248381f22d-merged.mount: Deactivated successfully.
Jan 27 08:59:31 np0005597378 podman[315907]: 2026-01-27 13:59:31.528281128 +0000 UTC m=+0.246577156 container remove 166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hermann, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 08:59:31 np0005597378 systemd[1]: libpod-conmon-166ba878549a70d1de32d63be860df2ee2e7ef8c102f44721d689f6f7535aed8.scope: Deactivated successfully.
Jan 27 08:59:31 np0005597378 podman[315950]: 2026-01-27 13:59:31.671507531 +0000 UTC m=+0.022897574 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:59:31 np0005597378 podman[315950]: 2026-01-27 13:59:31.963471283 +0000 UTC m=+0.314861306 container create 0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 08:59:32 np0005597378 systemd[1]: Started libpod-conmon-0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89.scope.
Jan 27 08:59:32 np0005597378 nova_compute[238941]: 2026-01-27 13:59:32.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:59:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30695dbc1d03377108fbde0671d506d6b94a071a809b69d163ce937998dd7ca9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30695dbc1d03377108fbde0671d506d6b94a071a809b69d163ce937998dd7ca9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30695dbc1d03377108fbde0671d506d6b94a071a809b69d163ce937998dd7ca9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30695dbc1d03377108fbde0671d506d6b94a071a809b69d163ce937998dd7ca9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:32 np0005597378 podman[315950]: 2026-01-27 13:59:32.190512014 +0000 UTC m=+0.541902037 container init 0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 08:59:32 np0005597378 podman[315950]: 2026-01-27 13:59:32.198342749 +0000 UTC m=+0.549732762 container start 0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 08:59:32 np0005597378 podman[315950]: 2026-01-27 13:59:32.204942714 +0000 UTC m=+0.556332727 container attach 0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 08:59:32 np0005597378 determined_bohr[315966]: {
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:    "0": [
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:        {
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "devices": [
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "/dev/loop3"
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            ],
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_name": "ceph_lv0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_size": "21470642176",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "name": "ceph_lv0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "tags": {
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.cluster_name": "ceph",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.crush_device_class": "",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.encrypted": "0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.objectstore": "bluestore",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.osd_id": "0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.type": "block",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.vdo": "0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.with_tpm": "0"
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            },
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "type": "block",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "vg_name": "ceph_vg0"
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:        }
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:    ],
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:    "1": [
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:        {
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "devices": [
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "/dev/loop4"
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            ],
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_name": "ceph_lv1",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_size": "21470642176",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "name": "ceph_lv1",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "tags": {
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.cluster_name": "ceph",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.crush_device_class": "",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.encrypted": "0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.objectstore": "bluestore",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.osd_id": "1",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.type": "block",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.vdo": "0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.with_tpm": "0"
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            },
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "type": "block",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "vg_name": "ceph_vg1"
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:        }
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:    ],
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:    "2": [
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:        {
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "devices": [
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "/dev/loop5"
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            ],
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_name": "ceph_lv2",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_size": "21470642176",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "name": "ceph_lv2",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "tags": {
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.cephx_lockbox_secret": "",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.cluster_name": "ceph",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.crush_device_class": "",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.encrypted": "0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.objectstore": "bluestore",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.osd_id": "2",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.type": "block",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.vdo": "0",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:                "ceph.with_tpm": "0"
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            },
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "type": "block",
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:            "vg_name": "ceph_vg2"
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:        }
Jan 27 08:59:32 np0005597378 determined_bohr[315966]:    ]
Jan 27 08:59:32 np0005597378 determined_bohr[315966]: }
Jan 27 08:59:32 np0005597378 systemd[1]: libpod-0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89.scope: Deactivated successfully.
Jan 27 08:59:32 np0005597378 podman[315950]: 2026-01-27 13:59:32.507417582 +0000 UTC m=+0.858807595 container died 0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bohr, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 08:59:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-30695dbc1d03377108fbde0671d506d6b94a071a809b69d163ce937998dd7ca9-merged.mount: Deactivated successfully.
Jan 27 08:59:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 167 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 177 op/s
Jan 27 08:59:32 np0005597378 podman[315950]: 2026-01-27 13:59:32.563582871 +0000 UTC m=+0.914972874 container remove 0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 08:59:32 np0005597378 systemd[1]: libpod-conmon-0c2e4472b3b5a986798154fc80628c097ea2f9252da8a1fb3a5264322cfbbb89.scope: Deactivated successfully.
Jan 27 08:59:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:33 np0005597378 podman[316048]: 2026-01-27 13:59:33.008760579 +0000 UTC m=+0.050144792 container create 01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:59:33 np0005597378 systemd[1]: Started libpod-conmon-01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2.scope.
Jan 27 08:59:33 np0005597378 podman[316048]: 2026-01-27 13:59:32.97881253 +0000 UTC m=+0.020196743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:59:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:59:33 np0005597378 podman[316048]: 2026-01-27 13:59:33.094175429 +0000 UTC m=+0.135559672 container init 01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mirzakhani, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 08:59:33 np0005597378 podman[316048]: 2026-01-27 13:59:33.101551494 +0000 UTC m=+0.142935707 container start 01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mirzakhani, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:59:33 np0005597378 podman[316048]: 2026-01-27 13:59:33.107048068 +0000 UTC m=+0.148432271 container attach 01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 08:59:33 np0005597378 systemd[1]: libpod-01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2.scope: Deactivated successfully.
Jan 27 08:59:33 np0005597378 compassionate_mirzakhani[316064]: 167 167
Jan 27 08:59:33 np0005597378 conmon[316064]: conmon 01eccb5495013b937006 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2.scope/container/memory.events
Jan 27 08:59:33 np0005597378 podman[316048]: 2026-01-27 13:59:33.109068132 +0000 UTC m=+0.150452345 container died 01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 08:59:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-49fbcdd9e269a21a403c99797441d69b035ea26cbe763fdaff6ffdab70df88df-merged.mount: Deactivated successfully.
Jan 27 08:59:33 np0005597378 podman[316048]: 2026-01-27 13:59:33.281944515 +0000 UTC m=+0.323328728 container remove 01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mirzakhani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 08:59:33 np0005597378 systemd[1]: libpod-conmon-01eccb5495013b937006bfb24c4cd05aa828f5be15a46f1dca7d976c8b1197f2.scope: Deactivated successfully.
Jan 27 08:59:33 np0005597378 podman[316088]: 2026-01-27 13:59:33.487634574 +0000 UTC m=+0.048868108 container create 41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:59:33 np0005597378 systemd[1]: Started libpod-conmon-41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c.scope.
Jan 27 08:59:33 np0005597378 podman[316088]: 2026-01-27 13:59:33.465000178 +0000 UTC m=+0.026233722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 08:59:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:59:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddbbd13f1e6a01ca1e35f7e51d376a967f5fd74331f2957a8e7caa7eeb1e71c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddbbd13f1e6a01ca1e35f7e51d376a967f5fd74331f2957a8e7caa7eeb1e71c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddbbd13f1e6a01ca1e35f7e51d376a967f5fd74331f2957a8e7caa7eeb1e71c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddbbd13f1e6a01ca1e35f7e51d376a967f5fd74331f2957a8e7caa7eeb1e71c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:33 np0005597378 podman[316088]: 2026-01-27 13:59:33.604894583 +0000 UTC m=+0.166128147 container init 41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 08:59:33 np0005597378 podman[316088]: 2026-01-27 13:59:33.615844392 +0000 UTC m=+0.177077916 container start 41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:59:33 np0005597378 podman[316088]: 2026-01-27 13:59:33.631512195 +0000 UTC m=+0.192745719 container attach 41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 08:59:34 np0005597378 lvm[316184]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 08:59:34 np0005597378 lvm[316184]: VG ceph_vg1 finished
Jan 27 08:59:34 np0005597378 lvm[316183]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 08:59:34 np0005597378 lvm[316183]: VG ceph_vg0 finished
Jan 27 08:59:34 np0005597378 lvm[316186]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 08:59:34 np0005597378 lvm[316186]: VG ceph_vg2 finished
Jan 27 08:59:34 np0005597378 pensive_perlman[316104]: {}
Jan 27 08:59:34 np0005597378 systemd[1]: libpod-41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c.scope: Deactivated successfully.
Jan 27 08:59:34 np0005597378 systemd[1]: libpod-41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c.scope: Consumed 1.289s CPU time.
Jan 27 08:59:34 np0005597378 podman[316088]: 2026-01-27 13:59:34.497155238 +0000 UTC m=+1.058388792 container died 41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_perlman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 08:59:34 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ddbbd13f1e6a01ca1e35f7e51d376a967f5fd74331f2957a8e7caa7eeb1e71c1-merged.mount: Deactivated successfully.
Jan 27 08:59:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 167 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Jan 27 08:59:34 np0005597378 podman[316088]: 2026-01-27 13:59:34.578355577 +0000 UTC m=+1.139589141 container remove 41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 08:59:34 np0005597378 systemd[1]: libpod-conmon-41aade46f65fd057531455ea49f70f98209eb7e6700ddcb026c3d8ada665b97c.scope: Deactivated successfully.
Jan 27 08:59:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 08:59:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:59:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 08:59:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:59:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:59:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 08:59:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:35Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:f6:7e 10.100.0.12
Jan 27 08:59:35 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:35Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:f6:7e 10.100.0.12
Jan 27 08:59:36 np0005597378 nova_compute[238941]: 2026-01-27 13:59:36.211 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522361.2108717, 19f85ef5-f10f-49b4-b970-ad91d542cbe8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:36 np0005597378 nova_compute[238941]: 2026-01-27 13:59:36.212 238945 INFO nova.compute.manager [-] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] VM Stopped (Lifecycle Event)#033[00m
Jan 27 08:59:36 np0005597378 nova_compute[238941]: 2026-01-27 13:59:36.232 238945 DEBUG nova.compute.manager [None req-1677bd73-5c72-4024-8df0-10edd4898f9b - - - - - -] [instance: 19f85ef5-f10f-49b4-b970-ad91d542cbe8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:36 np0005597378 nova_compute[238941]: 2026-01-27 13:59:36.237 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 172 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 126 op/s
Jan 27 08:59:37 np0005597378 nova_compute[238941]: 2026-01-27 13:59:37.149 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:37 np0005597378 nova_compute[238941]: 2026-01-27 13:59:37.573 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 27 08:59:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 197 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 144 op/s
Jan 27 08:59:39 np0005597378 kernel: tapc3e32fae-fe (unregistering): left promiscuous mode
Jan 27 08:59:39 np0005597378 NetworkManager[48904]: <info>  [1769522379.8359] device (tapc3e32fae-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:59:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:39Z|00831|binding|INFO|Releasing lport c3e32fae-fe60-4d39-980d-58000d56deee from this chassis (sb_readonly=0)
Jan 27 08:59:39 np0005597378 nova_compute[238941]: 2026-01-27 13:59:39.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:39Z|00832|binding|INFO|Setting lport c3e32fae-fe60-4d39-980d-58000d56deee down in Southbound
Jan 27 08:59:39 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:39Z|00833|binding|INFO|Removing iface tapc3e32fae-fe ovn-installed in OVS
Jan 27 08:59:39 np0005597378 nova_compute[238941]: 2026-01-27 13:59:39.849 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.854 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:f6:7e 10.100.0.12'], port_security=['fa:16:3e:80:f6:7e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '975c9bc3-152a-44ef-843b-135ecb2d18d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '393fd88e226e4f0e95954956b0fc8f40', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45b9f9a1-27c6-4b65-9b9c-83d53e36f3ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1434688d-9957-44d1-b6e9-ebbee6df300a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c3e32fae-fe60-4d39-980d-58000d56deee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.855 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c3e32fae-fe60-4d39-980d-58000d56deee in datapath 8c0471fd-a164-4ef9-bcee-a05e6b2d5892 unbound from our chassis#033[00m
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.856 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c0471fd-a164-4ef9-bcee-a05e6b2d5892#033[00m
Jan 27 08:59:39 np0005597378 nova_compute[238941]: 2026-01-27 13:59:39.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.878 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[78edf52b-50d5-40be-b940-a740921099c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:39 np0005597378 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Jan 27 08:59:39 np0005597378 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d0000005b.scope: Consumed 13.144s CPU time.
Jan 27 08:59:39 np0005597378 systemd-machined[207425]: Machine qemu-103-instance-0000005b terminated.
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.908 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[89a8741a-32a3-41c0-85fb-aea97a4319e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.910 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8c50cffd-ebe0-42d2-becd-b7cca29111c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.933 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a86f6b-2bb5-4530-aeb6-e276ab41f4ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.949 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bce7ce36-88c8-40b1-b9fc-2be4f98b9897]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c0471fd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:41:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499362, 'reachable_time': 37055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316238, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.964 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e5ab5e-53c1-4e61-adb6-38ef300c922a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499375, 'tstamp': 499375}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316239, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499378, 'tstamp': 499378}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316239, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.966 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c0471fd-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:39 np0005597378 nova_compute[238941]: 2026-01-27 13:59:39.967 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:39 np0005597378 nova_compute[238941]: 2026-01-27 13:59:39.972 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.973 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c0471fd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.973 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.974 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c0471fd-a0, col_values=(('external_ids', {'iface-id': '1d87c77e-a625-4816-9d54-732ad4d6236a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:39.974 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.063 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 501 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.578 238945 DEBUG nova.compute.manager [req-27dace4e-b8ee-4268-a040-242e1a5d88ec req-58bfd1ae-9fcf-46c2-8595-522219d50d3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-unplugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.579 238945 DEBUG oslo_concurrency.lockutils [req-27dace4e-b8ee-4268-a040-242e1a5d88ec req-58bfd1ae-9fcf-46c2-8595-522219d50d3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.579 238945 DEBUG oslo_concurrency.lockutils [req-27dace4e-b8ee-4268-a040-242e1a5d88ec req-58bfd1ae-9fcf-46c2-8595-522219d50d3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.579 238945 DEBUG oslo_concurrency.lockutils [req-27dace4e-b8ee-4268-a040-242e1a5d88ec req-58bfd1ae-9fcf-46c2-8595-522219d50d3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.579 238945 DEBUG nova.compute.manager [req-27dace4e-b8ee-4268-a040-242e1a5d88ec req-58bfd1ae-9fcf-46c2-8595-522219d50d3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-unplugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.579 238945 WARNING nova.compute.manager [req-27dace4e-b8ee-4268-a040-242e1a5d88ec req-58bfd1ae-9fcf-46c2-8595-522219d50d3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received unexpected event network-vif-unplugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with vm_state active and task_state rescuing.#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.588 238945 INFO nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance shutdown successfully after 13 seconds.#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.594 238945 INFO nova.virt.libvirt.driver [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance destroyed successfully.#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.595 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'numa_topology' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.669 238945 INFO nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Attempting rescue#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.670 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.675 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.675 238945 INFO nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Creating image(s)#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.696 238945 DEBUG nova.storage.rbd_utils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.699 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.829 238945 DEBUG nova.storage.rbd_utils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.853 238945 DEBUG nova.storage.rbd_utils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.857 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.954 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.956 238945 DEBUG oslo_concurrency.lockutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.957 238945 DEBUG oslo_concurrency.lockutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.957 238945 DEBUG oslo_concurrency.lockutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.982 238945 DEBUG nova.storage.rbd_utils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:40 np0005597378 nova_compute[238941]: 2026-01-27 13:59:40.987 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:41 np0005597378 nova_compute[238941]: 2026-01-27 13:59:41.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.152 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.310 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.312 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'migration_context' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.328 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.329 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Start _get_guest_xml network_info=[{"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "vif_mac": "fa:16:3e:80:f6:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.329 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'resources' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.346 238945 WARNING nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.357 238945 DEBUG nova.virt.libvirt.host [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.358 238945 DEBUG nova.virt.libvirt.host [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.360 238945 DEBUG nova.virt.libvirt.host [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.361 238945 DEBUG nova.virt.libvirt.host [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.361 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.361 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.362 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.362 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.362 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.362 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.363 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.363 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.363 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.363 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.363 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.364 238945 DEBUG nova.virt.hardware [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.364 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.381 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 200 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Jan 27 08:59:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.740 238945 DEBUG nova.compute.manager [req-c0902a4d-7e45-4fa1-8e3f-4e0343f60666 req-57f9cf23-505c-4728-b8cc-355fd94ae3e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.740 238945 DEBUG oslo_concurrency.lockutils [req-c0902a4d-7e45-4fa1-8e3f-4e0343f60666 req-57f9cf23-505c-4728-b8cc-355fd94ae3e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.741 238945 DEBUG oslo_concurrency.lockutils [req-c0902a4d-7e45-4fa1-8e3f-4e0343f60666 req-57f9cf23-505c-4728-b8cc-355fd94ae3e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.741 238945 DEBUG oslo_concurrency.lockutils [req-c0902a4d-7e45-4fa1-8e3f-4e0343f60666 req-57f9cf23-505c-4728-b8cc-355fd94ae3e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.741 238945 DEBUG nova.compute.manager [req-c0902a4d-7e45-4fa1-8e3f-4e0343f60666 req-57f9cf23-505c-4728-b8cc-355fd94ae3e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.742 238945 WARNING nova.compute.manager [req-c0902a4d-7e45-4fa1-8e3f-4e0343f60666 req-57f9cf23-505c-4728-b8cc-355fd94ae3e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received unexpected event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with vm_state active and task_state rescuing.#033[00m
Jan 27 08:59:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:59:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3646451708' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.924 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:42 np0005597378 nova_compute[238941]: 2026-01-27 13:59:42.925 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:59:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3620141688' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:59:43 np0005597378 nova_compute[238941]: 2026-01-27 13:59:43.508 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:43 np0005597378 nova_compute[238941]: 2026-01-27 13:59:43.509 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:59:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019010762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.151 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.153 238945 DEBUG nova.virt.libvirt.vif [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:59:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-157909793',display_name='tempest-ServerRescueNegativeTestJSON-server-157909793',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-157909793',id=91,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-d8t4fgro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_user_name='tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:22Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=975c9bc3-152a-44ef-843b-135ecb2d18d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "vif_mac": "fa:16:3e:80:f6:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.154 238945 DEBUG nova.network.os_vif_util [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "vif_mac": "fa:16:3e:80:f6:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.155 238945 DEBUG nova.network.os_vif_util [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.156 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'pci_devices' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.174 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  <uuid>975c9bc3-152a-44ef-843b-135ecb2d18d3</uuid>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  <name>instance-0000005b</name>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-157909793</nova:name>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:59:42</nova:creationTime>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <nova:user uuid="84aa975dea454d9dafe5d1583c4d0f0e">tempest-ServerRescueNegativeTestJSON-1362523506-project-member</nova:user>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <nova:project uuid="393fd88e226e4f0e95954956b0fc8f40">tempest-ServerRescueNegativeTestJSON-1362523506</nova:project>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <nova:port uuid="c3e32fae-fe60-4d39-980d-58000d56deee">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <entry name="serial">975c9bc3-152a-44ef-843b-135ecb2d18d3</entry>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <entry name="uuid">975c9bc3-152a-44ef-843b-135ecb2d18d3</entry>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.rescue">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/975c9bc3-152a-44ef-843b-135ecb2d18d3_disk">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <target dev="vdb" bus="virtio"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config.rescue">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:80:f6:7e"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <target dev="tapc3e32fae-fe"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/console.log" append="off"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:59:44 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:59:44 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:59:44 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:59:44 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.183 238945 INFO nova.virt.libvirt.driver [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance destroyed successfully.
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.247 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.247 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.248 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.248 238945 DEBUG nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] No VIF found with MAC fa:16:3e:80:f6:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.248 238945 INFO nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Using config drive
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.269 238945 DEBUG nova.storage.rbd_utils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.291 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.323 238945 DEBUG nova.objects.instance [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'keypairs' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 08:59:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 229 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.9 MiB/s wr, 71 op/s
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.755 238945 INFO nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Creating config drive at /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config.rescue
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.762 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnnyf91q1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.900 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnnyf91q1" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.934 238945 DEBUG nova.storage.rbd_utils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] rbd image 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 08:59:44 np0005597378 nova_compute[238941]: 2026-01-27 13:59:44.939 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config.rescue 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.118 238945 DEBUG oslo_concurrency.processutils [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config.rescue 975c9bc3-152a-44ef-843b-135ecb2d18d3_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.119 238945 INFO nova.virt.libvirt.driver [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Deleting local config drive /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3/disk.config.rescue because it was imported into RBD.
Jan 27 08:59:45 np0005597378 kernel: tapc3e32fae-fe: entered promiscuous mode
Jan 27 08:59:45 np0005597378 NetworkManager[48904]: <info>  [1769522385.1622] manager: (tapc3e32fae-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Jan 27 08:59:45 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:45Z|00834|binding|INFO|Claiming lport c3e32fae-fe60-4d39-980d-58000d56deee for this chassis.
Jan 27 08:59:45 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:45Z|00835|binding|INFO|c3e32fae-fe60-4d39-980d-58000d56deee: Claiming fa:16:3e:80:f6:7e 10.100.0.12
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:59:45 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:45Z|00836|binding|INFO|Setting lport c3e32fae-fe60-4d39-980d-58000d56deee ovn-installed in OVS
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.183 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:59:45 np0005597378 systemd-udevd[316483]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:59:45 np0005597378 systemd-machined[207425]: New machine qemu-105-instance-0000005b.
Jan 27 08:59:45 np0005597378 NetworkManager[48904]: <info>  [1769522385.1974] device (tapc3e32fae-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:59:45 np0005597378 NetworkManager[48904]: <info>  [1769522385.1980] device (tapc3e32fae-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:59:45 np0005597378 systemd[1]: Started Virtual Machine qemu-105-instance-0000005b.
Jan 27 08:59:45 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:45Z|00837|binding|INFO|Setting lport c3e32fae-fe60-4d39-980d-58000d56deee up in Southbound
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.209 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:f6:7e 10.100.0.12'], port_security=['fa:16:3e:80:f6:7e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '975c9bc3-152a-44ef-843b-135ecb2d18d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '393fd88e226e4f0e95954956b0fc8f40', 'neutron:revision_number': '5', 'neutron:security_group_ids': '45b9f9a1-27c6-4b65-9b9c-83d53e36f3ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1434688d-9957-44d1-b6e9-ebbee6df300a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c3e32fae-fe60-4d39-980d-58000d56deee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.210 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c3e32fae-fe60-4d39-980d-58000d56deee in datapath 8c0471fd-a164-4ef9-bcee-a05e6b2d5892 bound to our chassis
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.212 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c0471fd-a164-4ef9-bcee-a05e6b2d5892
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.229 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e28a153-f3ac-4ffe-95ef-360280359a20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.260 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[56674d07-7f92-4594-8250-5628dc1400dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.263 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[83729c83-6a70-4113-b331-e7ef344333b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.291 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f28badd1-5c8c-47a8-a7ba-923153a84949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.310 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4ad389-18a5-48ac-a8a5-3f680b751625]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c0471fd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:41:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499362, 'reachable_time': 37055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316497, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.325 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b89972b9-78d7-4fa2-8e3a-545d28db568f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499375, 'tstamp': 499375}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316498, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499378, 'tstamp': 499378}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316498, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.327 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c0471fd-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.330 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c0471fd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.330 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.331 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c0471fd-a0, col_values=(('external_ids', {'iface-id': '1d87c77e-a625-4816-9d54-732ad4d6236a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 08:59:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:45.331 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.737 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.738 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.751 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.761 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 975c9bc3-152a-44ef-843b-135ecb2d18d3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.762 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522385.7604687, 975c9bc3-152a-44ef-843b-135ecb2d18d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.763 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.767 238945 DEBUG nova.compute.manager [None req-3229c39b-3150-484c-bf15-c22a3eb5cf7e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.796 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.801 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.840 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.841 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522385.7619092, 975c9bc3-152a-44ef-843b-135ecb2d18d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.841 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] VM Started (Lifecycle Event)#033[00m
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.870 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.871 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.872 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.879 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.880 238945 INFO nova.compute.claims [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 08:59:45 np0005597378 nova_compute[238941]: 2026-01-27 13:59:45.884 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:46 np0005597378 nova_compute[238941]: 2026-01-27 13:59:46.021 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:46 np0005597378 nova_compute[238941]: 2026-01-27 13:59:46.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:46.309 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:46.309 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:46.309 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 246 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 82 op/s
Jan 27 08:59:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:59:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/988512527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:59:46 np0005597378 nova_compute[238941]: 2026-01-27 13:59:46.632 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:46 np0005597378 nova_compute[238941]: 2026-01-27 13:59:46.637 238945 DEBUG nova.compute.provider_tree [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:59:46 np0005597378 nova_compute[238941]: 2026-01-27 13:59:46.941 238945 DEBUG nova.scheduler.client.report [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:59:46 np0005597378 nova_compute[238941]: 2026-01-27 13:59:46.972 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:46 np0005597378 nova_compute[238941]: 2026-01-27 13:59:46.973 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.031 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.032 238945 DEBUG nova.network.neutron [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.052 238945 INFO nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.067 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.155 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.166 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.167 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.167 238945 INFO nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Creating image(s)#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.186 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.206 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.229 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.232 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.308 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.309 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.310 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.310 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.330 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.333 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:47 np0005597378 nova_compute[238941]: 2026-01-27 13:59:47.372 238945 DEBUG nova.policy [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6f4a784901be46db82915ff7ad73491a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c321b947f7be4f1fa44ee9f6341fd754', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 08:59:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:59:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:59:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:59:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:59:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 08:59:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 08:59:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 246 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 125 op/s
Jan 27 08:59:48 np0005597378 nova_compute[238941]: 2026-01-27 13:59:48.597 238945 INFO nova.compute.manager [None req-45e96117-602a-4afd-b4c5-8b0c443fb145 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Pausing#033[00m
Jan 27 08:59:48 np0005597378 nova_compute[238941]: 2026-01-27 13:59:48.598 238945 DEBUG nova.objects.instance [None req-45e96117-602a-4afd-b4c5-8b0c443fb145 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'flavor' on Instance uuid b869d848-1a7e-4a04-95f2-cedc16ebe1f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:48 np0005597378 nova_compute[238941]: 2026-01-27 13:59:48.625 238945 DEBUG nova.network.neutron [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Successfully created port: ee863bd0-e205-45e3-ac75-ed5c113dfc42 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 08:59:48 np0005597378 nova_compute[238941]: 2026-01-27 13:59:48.629 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522388.628553, b869d848-1a7e-4a04-95f2-cedc16ebe1f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:48 np0005597378 nova_compute[238941]: 2026-01-27 13:59:48.629 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:59:48 np0005597378 nova_compute[238941]: 2026-01-27 13:59:48.631 238945 DEBUG nova.compute.manager [None req-45e96117-602a-4afd-b4c5-8b0c443fb145 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:48 np0005597378 nova_compute[238941]: 2026-01-27 13:59:48.657 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:48 np0005597378 nova_compute[238941]: 2026-01-27 13:59:48.661 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:48 np0005597378 nova_compute[238941]: 2026-01-27 13:59:48.690 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Jan 27 08:59:49 np0005597378 nova_compute[238941]: 2026-01-27 13:59:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:59:49 np0005597378 nova_compute[238941]: 2026-01-27 13:59:49.994 238945 DEBUG nova.network.neutron [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Successfully updated port: ee863bd0-e205-45e3-ac75-ed5c113dfc42 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 08:59:50 np0005597378 nova_compute[238941]: 2026-01-27 13:59:50.008 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "refresh_cache-d42b53d1-610a-435d-bb8a-2bac1fcef51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:59:50 np0005597378 nova_compute[238941]: 2026-01-27 13:59:50.008 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquired lock "refresh_cache-d42b53d1-610a-435d-bb8a-2bac1fcef51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:59:50 np0005597378 nova_compute[238941]: 2026-01-27 13:59:50.008 238945 DEBUG nova.network.neutron [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 08:59:50 np0005597378 nova_compute[238941]: 2026-01-27 13:59:50.096 238945 DEBUG nova.compute.manager [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-changed-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:50 np0005597378 nova_compute[238941]: 2026-01-27 13:59:50.096 238945 DEBUG nova.compute.manager [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Refreshing instance network info cache due to event network-changed-ee863bd0-e205-45e3-ac75-ed5c113dfc42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 08:59:50 np0005597378 nova_compute[238941]: 2026-01-27 13:59:50.096 238945 DEBUG oslo_concurrency.lockutils [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d42b53d1-610a-435d-bb8a-2bac1fcef51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:59:50 np0005597378 nova_compute[238941]: 2026-01-27 13:59:50.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:59:50 np0005597378 nova_compute[238941]: 2026-01-27 13:59:50.532 238945 DEBUG nova.network.neutron [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 08:59:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 267 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 113 op/s
Jan 27 08:59:50 np0005597378 nova_compute[238941]: 2026-01-27 13:59:50.775 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:50 np0005597378 nova_compute[238941]: 2026-01-27 13:59:50.830 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] resizing rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.245 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.319 238945 INFO nova.compute.manager [None req-e5b38660-3c5e-42e7-8367-cab3ea51bb04 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Unpausing#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.320 238945 DEBUG nova.objects.instance [None req-e5b38660-3c5e-42e7-8367-cab3ea51bb04 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'flavor' on Instance uuid b869d848-1a7e-4a04-95f2-cedc16ebe1f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.347 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522391.3465142, b869d848-1a7e-4a04-95f2-cedc16ebe1f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.347 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:59:51 np0005597378 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.350 238945 DEBUG nova.virt.libvirt.guest [None req-e5b38660-3c5e-42e7-8367-cab3ea51bb04 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.351 238945 DEBUG nova.compute.manager [None req-e5b38660-3c5e-42e7-8367-cab3ea51bb04 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.372 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.376 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.421 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.421 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.421 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.421 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.422 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b869d848-1a7e-4a04-95f2-cedc16ebe1f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.423 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.888 238945 DEBUG nova.objects.instance [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lazy-loading 'migration_context' on Instance uuid d42b53d1-610a-435d-bb8a-2bac1fcef51c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.904 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.904 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Ensure instance console log exists: /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.905 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.905 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:51 np0005597378 nova_compute[238941]: 2026-01-27 13:59:51.905 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.209 238945 DEBUG nova.network.neutron [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Updating instance_info_cache with network_info: [{"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.228 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Releasing lock "refresh_cache-d42b53d1-610a-435d-bb8a-2bac1fcef51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.229 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Instance network_info: |[{"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.230 238945 DEBUG oslo_concurrency.lockutils [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d42b53d1-610a-435d-bb8a-2bac1fcef51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.231 238945 DEBUG nova.network.neutron [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Refreshing network info cache for port ee863bd0-e205-45e3-ac75-ed5c113dfc42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.241 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Start _get_guest_xml network_info=[{"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.247 238945 WARNING nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.253 238945 DEBUG nova.virt.libvirt.host [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.254 238945 DEBUG nova.virt.libvirt.host [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.262 238945 DEBUG nova.virt.libvirt.host [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.262 238945 DEBUG nova.virt.libvirt.host [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.263 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.263 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.263 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.264 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.264 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.264 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.264 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.264 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.265 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.265 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.265 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.265 238945 DEBUG nova.virt.hardware [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.268 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.309 238945 DEBUG nova.compute.manager [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.310 238945 DEBUG oslo_concurrency.lockutils [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.310 238945 DEBUG oslo_concurrency.lockutils [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.310 238945 DEBUG oslo_concurrency.lockutils [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.311 238945 DEBUG nova.compute.manager [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.311 238945 WARNING nova.compute.manager [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received unexpected event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with vm_state rescued and task_state None.#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.311 238945 DEBUG nova.compute.manager [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.311 238945 DEBUG oslo_concurrency.lockutils [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.311 238945 DEBUG oslo_concurrency.lockutils [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.312 238945 DEBUG oslo_concurrency.lockutils [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.312 238945 DEBUG nova.compute.manager [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.312 238945 WARNING nova.compute.manager [req-de1a205c-1083-4b24-8ed0-79e9ce0d8682 req-7a8d4962-18a6-4fc2-961e-9fb0632d609f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received unexpected event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with vm_state rescued and task_state None.#033[00m
Jan 27 08:59:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 267 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 107 op/s
Jan 27 08:59:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:59:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/409405214' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.844 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.863 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:52 np0005597378 nova_compute[238941]: 2026-01-27 13:59:52.867 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 08:59:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187322582' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.426 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.428 238945 DEBUG nova.virt.libvirt.vif [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:59:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1248618057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1248618057',id=92,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c321b947f7be4f1fa44ee9f6341fd754',ramdisk_id='',reservation_id='r-hmahkx02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-106270881',owner_user_name='tempest-InstanceActionsV221TestJSON-106270881-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:47Z,user_data=None,user_id='6f4a784901be46db82915ff7ad73491a',uuid=d42b53d1-610a-435d-bb8a-2bac1fcef51c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.428 238945 DEBUG nova.network.os_vif_util [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Converting VIF {"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.429 238945 DEBUG nova.network.os_vif_util [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.431 238945 DEBUG nova.objects.instance [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lazy-loading 'pci_devices' on Instance uuid d42b53d1-610a-435d-bb8a-2bac1fcef51c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.446 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  <uuid>d42b53d1-610a-435d-bb8a-2bac1fcef51c</uuid>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  <name>instance-0000005c</name>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <nova:name>tempest-InstanceActionsV221TestJSON-server-1248618057</nova:name>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 13:59:52</nova:creationTime>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:        <nova:user uuid="6f4a784901be46db82915ff7ad73491a">tempest-InstanceActionsV221TestJSON-106270881-project-member</nova:user>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:        <nova:project uuid="c321b947f7be4f1fa44ee9f6341fd754">tempest-InstanceActionsV221TestJSON-106270881</nova:project>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:        <nova:port uuid="ee863bd0-e205-45e3-ac75-ed5c113dfc42">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <system>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <entry name="serial">d42b53d1-610a-435d-bb8a-2bac1fcef51c</entry>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <entry name="uuid">d42b53d1-610a-435d-bb8a-2bac1fcef51c</entry>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    </system>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  <os>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  </os>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  <features>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  </features>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  </clock>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  <devices>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk.config">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      </source>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      </auth>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    </disk>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:0a:31:27"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <target dev="tapee863bd0-e2"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    </interface>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/console.log" append="off"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    </serial>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <video>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    </video>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    </rng>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 08:59:53 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 08:59:53 np0005597378 nova_compute[238941]:  </devices>
Jan 27 08:59:53 np0005597378 nova_compute[238941]: </domain>
Jan 27 08:59:53 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.448 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Preparing to wait for external event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.448 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.448 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.448 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.449 238945 DEBUG nova.virt.libvirt.vif [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T13:59:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1248618057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1248618057',id=92,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c321b947f7be4f1fa44ee9f6341fd754',ramdisk_id='',reservation_id='r-hmahkx02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-106270881',owner_user_name='tempest-InstanceActionsV221TestJSON-106270881-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T13:59:47Z,user_data=None,user_id='6f4a784901be46db82915ff7ad73491a',uuid=d42b53d1-610a-435d-bb8a-2bac1fcef51c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.450 238945 DEBUG nova.network.os_vif_util [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Converting VIF {"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.450 238945 DEBUG nova.network.os_vif_util [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.451 238945 DEBUG os_vif [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.451 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.452 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.453 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.457 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee863bd0-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.458 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee863bd0-e2, col_values=(('external_ids', {'iface-id': 'ee863bd0-e205-45e3-ac75-ed5c113dfc42', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:31:27', 'vm-uuid': 'd42b53d1-610a-435d-bb8a-2bac1fcef51c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:53 np0005597378 NetworkManager[48904]: <info>  [1769522393.4604] manager: (tapee863bd0-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.465 238945 INFO os_vif [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2')#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.645 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.646 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.646 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] No VIF found with MAC fa:16:3e:0a:31:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.647 238945 INFO nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Using config drive#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.665 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.790 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Updating instance_info_cache with network_info: [{"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.822 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-b869d848-1a7e-4a04-95f2-cedc16ebe1f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.822 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.823 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.848 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.849 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.849 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.849 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 08:59:53 np0005597378 nova_compute[238941]: 2026-01-27 13:59:53.850 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.004 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.004 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.004 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.005 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.005 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.006 238945 INFO nova.compute.manager [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Terminating instance#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.007 238945 DEBUG nova.compute.manager [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.055 238945 DEBUG nova.network.neutron [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Updated VIF entry in instance network info cache for port ee863bd0-e205-45e3-ac75-ed5c113dfc42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.056 238945 DEBUG nova.network.neutron [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Updating instance_info_cache with network_info: [{"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.085 238945 DEBUG oslo_concurrency.lockutils [req-7a7abd2c-06b8-4be6-93ee-a0573e2135f0 req-360288ea-d602-4b15-a833-741333603bb4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d42b53d1-610a-435d-bb8a-2bac1fcef51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.179 238945 INFO nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Creating config drive at /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/disk.config#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.185 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ofrx5z8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:54 np0005597378 kernel: tapc3e32fae-fe (unregistering): left promiscuous mode
Jan 27 08:59:54 np0005597378 NetworkManager[48904]: <info>  [1769522394.2866] device (tapc3e32fae-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:59:54 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:54Z|00838|binding|INFO|Releasing lport c3e32fae-fe60-4d39-980d-58000d56deee from this chassis (sb_readonly=0)
Jan 27 08:59:54 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:54Z|00839|binding|INFO|Setting lport c3e32fae-fe60-4d39-980d-58000d56deee down in Southbound
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.294 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:54 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:54Z|00840|binding|INFO|Removing iface tapc3e32fae-fe ovn-installed in OVS
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.301 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:f6:7e 10.100.0.12'], port_security=['fa:16:3e:80:f6:7e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '975c9bc3-152a-44ef-843b-135ecb2d18d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '393fd88e226e4f0e95954956b0fc8f40', 'neutron:revision_number': '6', 'neutron:security_group_ids': '45b9f9a1-27c6-4b65-9b9c-83d53e36f3ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1434688d-9957-44d1-b6e9-ebbee6df300a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c3e32fae-fe60-4d39-980d-58000d56deee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.302 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c3e32fae-fe60-4d39-980d-58000d56deee in datapath 8c0471fd-a164-4ef9-bcee-a05e6b2d5892 unbound from our chassis#033[00m
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.303 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c0471fd-a164-4ef9-bcee-a05e6b2d5892#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.327 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ofrx5z8" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.333 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7cfd0c-2693-4308-b4a7-d51e5dbedeb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:54 np0005597378 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Jan 27 08:59:54 np0005597378 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d0000005b.scope: Consumed 8.814s CPU time.
Jan 27 08:59:54 np0005597378 systemd-machined[207425]: Machine qemu-105-instance-0000005b terminated.
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.358 238945 DEBUG nova.storage.rbd_utils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] rbd image d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.364 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/disk.config d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.365 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e0251901-815a-4816-af2b-81eaf01fa327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.369 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a5aa4d22-02f1-4232-bc1a-8a3e8309aa8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.394 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8c03ae-ac96-4e5e-9bd1-c183976d6734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.412 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[57052122-5817-445d-ac87-df83690e7e93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c0471fd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:41:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499362, 'reachable_time': 37055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316885, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:59:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962048030' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.428 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[578532d6-07dd-4f12-a563-8276058de956]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499375, 'tstamp': 499375}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316887, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c0471fd-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499378, 'tstamp': 499378}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316887, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.430 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c0471fd-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.432 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.440 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.441 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c0471fd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.441 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.442 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c0471fd-a0, col_values=(('external_ids', {'iface-id': '1d87c77e-a625-4816-9d54-732ad4d6236a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:54.442 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.446 238945 INFO nova.virt.libvirt.driver [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Instance destroyed successfully.#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.447 238945 DEBUG nova.objects.instance [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'resources' on Instance uuid 975c9bc3-152a-44ef-843b-135ecb2d18d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.453 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.459 238945 DEBUG nova.virt.libvirt.vif [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:59:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-157909793',display_name='tempest-ServerRescueNegativeTestJSON-server-157909793',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-157909793',id=91,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-d8t4fgro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_user_name='tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:45Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=975c9bc3-152a-44ef-843b-135ecb2d18d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.459 238945 DEBUG nova.network.os_vif_util [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "c3e32fae-fe60-4d39-980d-58000d56deee", "address": "fa:16:3e:80:f6:7e", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3e32fae-fe", "ovs_interfaceid": "c3e32fae-fe60-4d39-980d-58000d56deee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.460 238945 DEBUG nova.network.os_vif_util [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.460 238945 DEBUG os_vif [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.462 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3e32fae-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.468 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.470 238945 INFO os_vif [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:f6:7e,bridge_name='br-int',has_traffic_filtering=True,id=c3e32fae-fe60-4d39-980d-58000d56deee,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3e32fae-fe')#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.545 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.545 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.547 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.551 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.551 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.554 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.554 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 08:59:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 289 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 120 op/s
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.705 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.707 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3679MB free_disk=59.86080729216337GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.707 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.707 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.774 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b869d848-1a7e-4a04-95f2-cedc16ebe1f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.775 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 975c9bc3-152a-44ef-843b-135ecb2d18d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.775 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance d42b53d1-610a-435d-bb8a-2bac1fcef51c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.775 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.775 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 08:59:54 np0005597378 nova_compute[238941]: 2026-01-27 13:59:54.833 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:59:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/607880696' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.400 238945 DEBUG nova.compute.manager [req-b1bd11f8-a463-4b1f-8a91-c27fe94890fe req-52891ca7-d821-444b-b949-91d8e99d39d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-unplugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.401 238945 DEBUG oslo_concurrency.lockutils [req-b1bd11f8-a463-4b1f-8a91-c27fe94890fe req-52891ca7-d821-444b-b949-91d8e99d39d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.401 238945 DEBUG oslo_concurrency.lockutils [req-b1bd11f8-a463-4b1f-8a91-c27fe94890fe req-52891ca7-d821-444b-b949-91d8e99d39d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.401 238945 DEBUG oslo_concurrency.lockutils [req-b1bd11f8-a463-4b1f-8a91-c27fe94890fe req-52891ca7-d821-444b-b949-91d8e99d39d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.401 238945 DEBUG nova.compute.manager [req-b1bd11f8-a463-4b1f-8a91-c27fe94890fe req-52891ca7-d821-444b-b949-91d8e99d39d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-unplugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.401 238945 DEBUG nova.compute.manager [req-b1bd11f8-a463-4b1f-8a91-c27fe94890fe req-52891ca7-d821-444b-b949-91d8e99d39d4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-unplugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.415 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.423 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.426 238945 DEBUG oslo_concurrency.processutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/disk.config d42b53d1-610a-435d-bb8a-2bac1fcef51c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.426 238945 INFO nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Deleting local config drive /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c/disk.config because it was imported into RBD.#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.444 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.478 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.479 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:55 np0005597378 systemd-udevd[316857]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 08:59:55 np0005597378 kernel: tapee863bd0-e2: entered promiscuous mode
Jan 27 08:59:55 np0005597378 NetworkManager[48904]: <info>  [1769522395.4817] manager: (tapee863bd0-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Jan 27 08:59:55 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:55Z|00841|binding|INFO|Claiming lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 for this chassis.
Jan 27 08:59:55 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:55Z|00842|binding|INFO|ee863bd0-e205-45e3-ac75-ed5c113dfc42: Claiming fa:16:3e:0a:31:27 10.100.0.7
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:55 np0005597378 NetworkManager[48904]: <info>  [1769522395.4964] device (tapee863bd0-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.495 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:31:27 10.100.0.7'], port_security=['fa:16:3e:0a:31:27 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd42b53d1-610a-435d-bb8a-2bac1fcef51c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f1b0862-f2c6-4664-975b-93692f20a206', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c321b947f7be4f1fa44ee9f6341fd754', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac1ee23a-3469-4fa6-8f1d-b508d5c15c3f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d480fec-d775-40a4-ad98-1669e1f95707, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ee863bd0-e205-45e3-ac75-ed5c113dfc42) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.497 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ee863bd0-e205-45e3-ac75-ed5c113dfc42 in datapath 0f1b0862-f2c6-4664-975b-93692f20a206 bound to our chassis#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.498 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0f1b0862-f2c6-4664-975b-93692f20a206#033[00m
Jan 27 08:59:55 np0005597378 NetworkManager[48904]: <info>  [1769522395.4985] device (tapee863bd0-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.515 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fefbf698-2b9d-4c1d-8646-5a0ce01b1e7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.516 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0f1b0862-f1 in ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.518 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0f1b0862-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.518 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f953b88-3287-4005-8f88-f1a7f4393f2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.519 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b06dcd6-4c2d-441d-89c6-436b139bba50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 systemd-machined[207425]: New machine qemu-106-instance-0000005c.
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.537 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[338e86ad-b9d3-403c-b41b-979b98d1861f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 systemd[1]: Started Virtual Machine qemu-106-instance-0000005c.
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.552 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:55 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:55Z|00843|binding|INFO|Setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 ovn-installed in OVS
Jan 27 08:59:55 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:55Z|00844|binding|INFO|Setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 up in Southbound
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.560 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.567 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbffe3d-361f-4a41-909b-94f462ddc3eb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.599 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e5dbfc2b-0e61-4577-8285-f22bcded8e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 NetworkManager[48904]: <info>  [1769522395.6055] manager: (tap0f1b0862-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.604 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[920b09b6-3380-4dd8-bcad-b13063be532f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.637 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7bdaff-fbc4-4aea-b2d7-0125f1811825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.641 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[608b7e44-808a-4c4a-a83a-ca0aaa91d073]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 NetworkManager[48904]: <info>  [1769522395.6645] device (tap0f1b0862-f0): carrier: link connected
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.673 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8ad483-1b64-44b7-b93d-2b8c2ecde096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.695 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bd342963-223e-499e-b593-cb62eed14532]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f1b0862-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:f1:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503528, 'reachable_time': 33758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317008, 'error': None, 'target': 'ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.720 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b1447030-b497-4443-8251-a01dd23b0d4f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:f1e4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503528, 'tstamp': 503528}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317009, 'error': None, 'target': 'ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.741 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0e1f05-5ac5-4cf9-8fd8-1b0cf6f9cc2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f1b0862-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:f1:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503528, 'reachable_time': 33758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317010, 'error': None, 'target': 'ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.777 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb51421-bb70-46f9-82d6-1840d9a97918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.841 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a2037d-5a09-41ae-9aef-06e2916c1023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.843 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f1b0862-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.843 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.843 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f1b0862-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:55 np0005597378 NetworkManager[48904]: <info>  [1769522395.8459] manager: (tap0f1b0862-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:55 np0005597378 kernel: tap0f1b0862-f0: entered promiscuous mode
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.849 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0f1b0862-f0, col_values=(('external_ids', {'iface-id': '35c98048-869e-42cb-a9a6-294f6b1200b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:55 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:55Z|00845|binding|INFO|Releasing lport 35c98048-869e-42cb-a9a6-294f6b1200b5 from this chassis (sb_readonly=0)
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.867 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.867 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0f1b0862-f2c6-4664-975b-93692f20a206.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0f1b0862-f2c6-4664-975b-93692f20a206.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.869 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6db328-6b1f-40f9-90fb-2952f8c4738a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.870 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-0f1b0862-f2c6-4664-975b-93692f20a206
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/0f1b0862-f2c6-4664-975b-93692f20a206.pid.haproxy
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 0f1b0862-f2c6-4664-975b-93692f20a206
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 08:59:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:55.872 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206', 'env', 'PROCESS_TAG=haproxy-0f1b0862-f2c6-4664-975b-93692f20a206', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0f1b0862-f2c6-4664-975b-93692f20a206.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.917 238945 DEBUG nova.compute.manager [req-6518a559-4d2f-4679-be00-f0db9a963ca9 req-d53ed5b4-e748-4d47-b197-7798c359d9c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.917 238945 DEBUG oslo_concurrency.lockutils [req-6518a559-4d2f-4679-be00-f0db9a963ca9 req-d53ed5b4-e748-4d47-b197-7798c359d9c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.918 238945 DEBUG oslo_concurrency.lockutils [req-6518a559-4d2f-4679-be00-f0db9a963ca9 req-d53ed5b4-e748-4d47-b197-7798c359d9c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.918 238945 DEBUG oslo_concurrency.lockutils [req-6518a559-4d2f-4679-be00-f0db9a963ca9 req-d53ed5b4-e748-4d47-b197-7798c359d9c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:55 np0005597378 nova_compute[238941]: 2026-01-27 13:59:55.919 238945 DEBUG nova.compute.manager [req-6518a559-4d2f-4679-be00-f0db9a963ca9 req-d53ed5b4-e748-4d47-b197-7798c359d9c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Processing event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 08:59:56 np0005597378 podman[317042]: 2026-01-27 13:59:56.340302893 +0000 UTC m=+0.100573601 container create 1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 08:59:56 np0005597378 podman[317042]: 2026-01-27 13:59:56.267637719 +0000 UTC m=+0.027908447 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 08:59:56 np0005597378 systemd[1]: Started libpod-conmon-1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627.scope.
Jan 27 08:59:56 np0005597378 systemd[1]: Started libcrun container.
Jan 27 08:59:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422a14eb3bc6d694487d804915015cb9c6f37c1118a704d0e330dff6ee550da6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 08:59:56 np0005597378 podman[317042]: 2026-01-27 13:59:56.435187573 +0000 UTC m=+0.195458311 container init 1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 08:59:56 np0005597378 nova_compute[238941]: 2026-01-27 13:59:56.433 238945 INFO nova.virt.libvirt.driver [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Deleting instance files /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3_del#033[00m
Jan 27 08:59:56 np0005597378 nova_compute[238941]: 2026-01-27 13:59:56.435 238945 INFO nova.virt.libvirt.driver [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Deletion of /var/lib/nova/instances/975c9bc3-152a-44ef-843b-135ecb2d18d3_del complete#033[00m
Jan 27 08:59:56 np0005597378 podman[317042]: 2026-01-27 13:59:56.44078907 +0000 UTC m=+0.201059778 container start 1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 08:59:56 np0005597378 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [NOTICE]   (317061) : New worker (317063) forked
Jan 27 08:59:56 np0005597378 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [NOTICE]   (317061) : Loading success.
Jan 27 08:59:56 np0005597378 nova_compute[238941]: 2026-01-27 13:59:56.483 238945 INFO nova.compute.manager [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Took 2.48 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 08:59:56 np0005597378 nova_compute[238941]: 2026-01-27 13:59:56.484 238945 DEBUG oslo.service.loopingcall [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 08:59:56 np0005597378 nova_compute[238941]: 2026-01-27 13:59:56.485 238945 DEBUG nova.compute.manager [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 08:59:56 np0005597378 nova_compute[238941]: 2026-01-27 13:59:56.487 238945 DEBUG nova.network.neutron [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 08:59:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 293 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 116 op/s
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.004 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522397.0036306, d42b53d1-610a-435d-bb8a-2bac1fcef51c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.004 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] VM Started (Lifecycle Event)#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.006 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.008 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.011 238945 INFO nova.virt.libvirt.driver [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Instance spawned successfully.#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.011 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.027 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.031 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.037 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.037 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.038 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.038 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.038 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.039 238945 DEBUG nova.virt.libvirt.driver [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.059 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.059 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522397.0038774, d42b53d1-610a-435d-bb8a-2bac1fcef51c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.060 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] VM Paused (Lifecycle Event)#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.087 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.090 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522397.008152, d42b53d1-610a-435d-bb8a-2bac1fcef51c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.090 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] VM Resumed (Lifecycle Event)#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.095 238945 INFO nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Took 9.93 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.096 238945 DEBUG nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.107 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.110 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.145 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.165 238945 INFO nova.compute.manager [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Took 11.33 seconds to build instance.#033[00m
Jan 27 08:59:57 np0005597378 nova_compute[238941]: 2026-01-27 13:59:57.181 238945 DEBUG oslo_concurrency.lockutils [None req-fdc5ac11-efca-4767-9711-b6bf840d465a 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.167 238945 DEBUG nova.compute.manager [req-9e8383e2-3069-4dea-8ff3-8288716789ef req-1cbe87cb-d209-4b60-9459-61390a8b9612 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.168 238945 DEBUG oslo_concurrency.lockutils [req-9e8383e2-3069-4dea-8ff3-8288716789ef req-1cbe87cb-d209-4b60-9459-61390a8b9612 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.168 238945 DEBUG oslo_concurrency.lockutils [req-9e8383e2-3069-4dea-8ff3-8288716789ef req-1cbe87cb-d209-4b60-9459-61390a8b9612 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.168 238945 DEBUG oslo_concurrency.lockutils [req-9e8383e2-3069-4dea-8ff3-8288716789ef req-1cbe87cb-d209-4b60-9459-61390a8b9612 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.168 238945 DEBUG nova.compute.manager [req-9e8383e2-3069-4dea-8ff3-8288716789ef req-1cbe87cb-d209-4b60-9459-61390a8b9612 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] No waiting events found dispatching network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.169 238945 WARNING nova.compute.manager [req-9e8383e2-3069-4dea-8ff3-8288716789ef req-1cbe87cb-d209-4b60-9459-61390a8b9612 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received unexpected event network-vif-plugged-c3e32fae-fe60-4d39-980d-58000d56deee for instance with vm_state rescued and task_state deleting.#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.300 238945 DEBUG nova.network.neutron [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.318 238945 INFO nova.compute.manager [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Took 1.83 seconds to deallocate network for instance.#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.325 238945 DEBUG nova.compute.manager [req-ac85ce02-0c4d-4af0-b1db-78e21047cd71 req-7442de6d-057c-468e-89fd-f54d632b5885 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.325 238945 DEBUG oslo_concurrency.lockutils [req-ac85ce02-0c4d-4af0-b1db-78e21047cd71 req-7442de6d-057c-468e-89fd-f54d632b5885 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.326 238945 DEBUG oslo_concurrency.lockutils [req-ac85ce02-0c4d-4af0-b1db-78e21047cd71 req-7442de6d-057c-468e-89fd-f54d632b5885 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.326 238945 DEBUG oslo_concurrency.lockutils [req-ac85ce02-0c4d-4af0-b1db-78e21047cd71 req-7442de6d-057c-468e-89fd-f54d632b5885 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.326 238945 DEBUG nova.compute.manager [req-ac85ce02-0c4d-4af0-b1db-78e21047cd71 req-7442de6d-057c-468e-89fd-f54d632b5885 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.326 238945 WARNING nova.compute.manager [req-ac85ce02-0c4d-4af0-b1db-78e21047cd71 req-7442de6d-057c-468e-89fd-f54d632b5885 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received unexpected event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with vm_state active and task_state None.#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.362 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.362 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.425 238945 DEBUG oslo_concurrency.processutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 08:59:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 206 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 191 op/s
Jan 27 08:59:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 08:59:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/37589321' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.972 238945 DEBUG oslo_concurrency.processutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.978 238945 DEBUG nova.compute.provider_tree [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 08:59:58 np0005597378 nova_compute[238941]: 2026-01-27 13:59:58.992 238945 DEBUG nova.scheduler.client.report [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.016 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.039 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.039 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.040 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.048 238945 INFO nova.scheduler.client.report [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Deleted allocations for instance 975c9bc3-152a-44ef-843b-135ecb2d18d3#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.163 238945 DEBUG oslo_concurrency.lockutils [None req-2ace5e93-76c2-4ba9-992c-f1ddaaad832c 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "975c9bc3-152a-44ef-843b-135ecb2d18d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.401 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.402 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.402 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.402 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.403 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.404 238945 INFO nova.compute.manager [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Terminating instance#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.405 238945 DEBUG nova.compute.manager [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 08:59:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4129247967' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 08:59:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 08:59:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4129247967' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 08:59:59 np0005597378 kernel: tapb405c0ca-02 (unregistering): left promiscuous mode
Jan 27 08:59:59 np0005597378 NetworkManager[48904]: <info>  [1769522399.6552] device (tapb405c0ca-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 08:59:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:59Z|00846|binding|INFO|Releasing lport b405c0ca-029a-4203-9890-f05309eea795 from this chassis (sb_readonly=0)
Jan 27 08:59:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:59Z|00847|binding|INFO|Setting lport b405c0ca-029a-4203-9890-f05309eea795 down in Southbound
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:59 np0005597378 ovn_controller[144812]: 2026-01-27T13:59:59Z|00848|binding|INFO|Removing iface tapb405c0ca-02 ovn-installed in OVS
Jan 27 08:59:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.676 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:c2:09 10.100.0.11'], port_security=['fa:16:3e:10:c2:09 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b869d848-1a7e-4a04-95f2-cedc16ebe1f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '393fd88e226e4f0e95954956b0fc8f40', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45b9f9a1-27c6-4b65-9b9c-83d53e36f3ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1434688d-9957-44d1-b6e9-ebbee6df300a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b405c0ca-029a-4203-9890-f05309eea795) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 08:59:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.677 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b405c0ca-029a-4203-9890-f05309eea795 in datapath 8c0471fd-a164-4ef9-bcee-a05e6b2d5892 unbound from our chassis#033[00m
Jan 27 08:59:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.678 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8c0471fd-a164-4ef9-bcee-a05e6b2d5892, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 08:59:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.681 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e4274ce9-9523-4ced-a69e-b1ee9471627b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.684 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892 namespace which is not needed anymore#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.693 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:59 np0005597378 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Jan 27 08:59:59 np0005597378 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d0000005a.scope: Consumed 13.777s CPU time.
Jan 27 08:59:59 np0005597378 systemd-machined[207425]: Machine qemu-102-instance-0000005a terminated.
Jan 27 08:59:59 np0005597378 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [NOTICE]   (315056) : haproxy version is 2.8.14-c23fe91
Jan 27 08:59:59 np0005597378 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [NOTICE]   (315056) : path to executable is /usr/sbin/haproxy
Jan 27 08:59:59 np0005597378 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [WARNING]  (315056) : Exiting Master process...
Jan 27 08:59:59 np0005597378 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [WARNING]  (315056) : Exiting Master process...
Jan 27 08:59:59 np0005597378 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [ALERT]    (315056) : Current worker (315058) exited with code 143 (Terminated)
Jan 27 08:59:59 np0005597378 neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892[315033]: [WARNING]  (315056) : All workers exited. Exiting... (0)
Jan 27 08:59:59 np0005597378 systemd[1]: libpod-e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855.scope: Deactivated successfully.
Jan 27 08:59:59 np0005597378 podman[317159]: 2026-01-27 13:59:59.821320965 +0000 UTC m=+0.043515947 container died e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.838 238945 INFO nova.virt.libvirt.driver [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Instance destroyed successfully.#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.840 238945 DEBUG nova.objects.instance [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lazy-loading 'resources' on Instance uuid b869d848-1a7e-4a04-95f2-cedc16ebe1f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 08:59:59 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855-userdata-shm.mount: Deactivated successfully.
Jan 27 08:59:59 np0005597378 systemd[1]: var-lib-containers-storage-overlay-136136e60f46ff752a39b467fb9a07c82501b719407e9df982ae54b1df12eb7b-merged.mount: Deactivated successfully.
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.856 238945 DEBUG nova.virt.libvirt.vif [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:59:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1523303383',display_name='tempest-ServerRescueNegativeTestJSON-server-1523303383',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1523303383',id=90,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='393fd88e226e4f0e95954956b0fc8f40',ramdisk_id='',reservation_id='r-4pp3oufn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1362523506',owner_user_name='tempest-ServerRescueNegativeTestJSON-1362523506-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:51Z,user_data=None,user_id='84aa975dea454d9dafe5d1583c4d0f0e',uuid=b869d848-1a7e-4a04-95f2-cedc16ebe1f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.857 238945 DEBUG nova.network.os_vif_util [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converting VIF {"id": "b405c0ca-029a-4203-9890-f05309eea795", "address": "fa:16:3e:10:c2:09", "network": {"id": "8c0471fd-a164-4ef9-bcee-a05e6b2d5892", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1659907153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "393fd88e226e4f0e95954956b0fc8f40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb405c0ca-02", "ovs_interfaceid": "b405c0ca-029a-4203-9890-f05309eea795", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.857 238945 DEBUG nova.network.os_vif_util [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.858 238945 DEBUG os_vif [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.861 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.864 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb405c0ca-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.868 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.870 238945 INFO os_vif [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:c2:09,bridge_name='br-int',has_traffic_filtering=True,id=b405c0ca-029a-4203-9890-f05309eea795,network=Network(8c0471fd-a164-4ef9-bcee-a05e6b2d5892),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb405c0ca-02')#033[00m
Jan 27 08:59:59 np0005597378 podman[317159]: 2026-01-27 13:59:59.875716858 +0000 UTC m=+0.097911840 container cleanup e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 08:59:59 np0005597378 systemd[1]: libpod-conmon-e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855.scope: Deactivated successfully.
Jan 27 08:59:59 np0005597378 podman[317211]: 2026-01-27 13:59:59.951054583 +0000 UTC m=+0.051237241 container remove e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.955 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.955 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.956 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.956 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.956 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.957 238945 INFO nova.compute.manager [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Terminating instance#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.959 238945 DEBUG nova.compute.manager [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 08:59:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.966 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[de4d190f-9445-4cdd-8902-1560a7ed3fe9]: (4, ('Tue Jan 27 01:59:59 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892 (e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855)\ne7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855\nTue Jan 27 01:59:59 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892 (e7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855)\ne7aa3dc236e118d8d63052dc952ab66ccbce37ce1e016287b4f8a0361ddda855\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.968 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66061dec-4cc6-43f1-97fe-d40a58f7e64c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 08:59:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.969 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c0471fd-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.970 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:59 np0005597378 kernel: tap8c0471fd-a0: left promiscuous mode
Jan 27 08:59:59 np0005597378 nova_compute[238941]: 2026-01-27 13:59:59.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 08:59:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 13:59:59.991 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[15e40439-cecd-4265-8ee3-2962c9ac4627]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.009 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1063008-c5e0-4861-aa36-175ecb59d3aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.010 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[54a28bee-2628-414c-8cd1-82f56d87cb6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.026 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf3a2d2-c521-4af2-baaa-18f725109430]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499355, 'reachable_time': 43281, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317228, 'error': None, 'target': 'ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:00 np0005597378 systemd[1]: run-netns-ovnmeta\x2d8c0471fd\x2da164\x2d4ef9\x2dbcee\x2da05e6b2d5892.mount: Deactivated successfully.
Jan 27 09:00:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.030 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8c0471fd-a164-4ef9-bcee-a05e6b2d5892 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:00:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.031 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[566d9f4e-9ca6-421d-b900-175d47ed8cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:00 np0005597378 kernel: tapee863bd0-e2 (unregistering): left promiscuous mode
Jan 27 09:00:00 np0005597378 NetworkManager[48904]: <info>  [1769522400.0611] device (tapee863bd0-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:00:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:00Z|00849|binding|INFO|Releasing lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 from this chassis (sb_readonly=0)
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:00Z|00850|binding|INFO|Setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 down in Southbound
Jan 27 09:00:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:00Z|00851|binding|INFO|Removing iface tapee863bd0-e2 ovn-installed in OVS
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.071 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.079 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:31:27 10.100.0.7'], port_security=['fa:16:3e:0a:31:27 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd42b53d1-610a-435d-bb8a-2bac1fcef51c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f1b0862-f2c6-4664-975b-93692f20a206', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c321b947f7be4f1fa44ee9f6341fd754', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac1ee23a-3469-4fa6-8f1d-b508d5c15c3f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d480fec-d775-40a4-ad98-1669e1f95707, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ee863bd0-e205-45e3-ac75-ed5c113dfc42) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:00:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.080 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ee863bd0-e205-45e3-ac75-ed5c113dfc42 in datapath 0f1b0862-f2c6-4664-975b-93692f20a206 unbound from our chassis#033[00m
Jan 27 09:00:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.081 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f1b0862-f2c6-4664-975b-93692f20a206, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:00:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.082 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd7f164-4fe7-404c-ada8-92a538a81d31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.083 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206 namespace which is not needed anymore#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.089 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:00 np0005597378 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Jan 27 09:00:00 np0005597378 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d0000005c.scope: Consumed 4.411s CPU time.
Jan 27 09:00:00 np0005597378 systemd-machined[207425]: Machine qemu-106-instance-0000005c terminated.
Jan 27 09:00:00 np0005597378 NetworkManager[48904]: <info>  [1769522400.1781] manager: (tapee863bd0-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/361)
Jan 27 09:00:00 np0005597378 kernel: tapee863bd0-e2: entered promiscuous mode
Jan 27 09:00:00 np0005597378 kernel: tapee863bd0-e2 (unregistering): left promiscuous mode
Jan 27 09:00:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:00Z|00852|binding|INFO|Claiming lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 for this chassis.
Jan 27 09:00:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:00Z|00853|binding|INFO|ee863bd0-e205-45e3-ac75-ed5c113dfc42: Claiming fa:16:3e:0a:31:27 10.100.0.7
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.183 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.201 238945 INFO nova.virt.libvirt.driver [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Instance destroyed successfully.#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.201 238945 DEBUG nova.objects.instance [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lazy-loading 'resources' on Instance uuid d42b53d1-610a-435d-bb8a-2bac1fcef51c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:00Z|00854|binding|INFO|Setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 ovn-installed in OVS
Jan 27 09:00:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:00Z|00855|binding|INFO|Setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 up in Southbound
Jan 27 09:00:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:00Z|00856|binding|INFO|Releasing lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 from this chassis (sb_readonly=1)
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:00Z|00857|if_status|INFO|Dropped 2 log messages in last 606 seconds (most recently, 606 seconds ago) due to excessive rate
Jan 27 09:00:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:00Z|00858|if_status|INFO|Not setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 down as sb is readonly
Jan 27 09:00:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:00Z|00859|binding|INFO|Removing iface tapee863bd0-e2 ovn-installed in OVS
Jan 27 09:00:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.209 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:31:27 10.100.0.7'], port_security=['fa:16:3e:0a:31:27 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd42b53d1-610a-435d-bb8a-2bac1fcef51c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f1b0862-f2c6-4664-975b-93692f20a206', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c321b947f7be4f1fa44ee9f6341fd754', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac1ee23a-3469-4fa6-8f1d-b508d5c15c3f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d480fec-d775-40a4-ad98-1669e1f95707, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ee863bd0-e205-45e3-ac75-ed5c113dfc42) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:00Z|00860|binding|INFO|Releasing lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 from this chassis (sb_readonly=0)
Jan 27 09:00:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:00Z|00861|binding|INFO|Setting lport ee863bd0-e205-45e3-ac75-ed5c113dfc42 down in Southbound
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.217 238945 DEBUG nova.virt.libvirt.vif [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T13:59:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1248618057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1248618057',id=92,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T13:59:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c321b947f7be4f1fa44ee9f6341fd754',ramdisk_id='',reservation_id='r-hmahkx02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',ow
ner_project_name='tempest-InstanceActionsV221TestJSON-106270881',owner_user_name='tempest-InstanceActionsV221TestJSON-106270881-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T13:59:57Z,user_data=None,user_id='6f4a784901be46db82915ff7ad73491a',uuid=d42b53d1-610a-435d-bb8a-2bac1fcef51c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.218 238945 DEBUG nova.network.os_vif_util [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Converting VIF {"id": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "address": "fa:16:3e:0a:31:27", "network": {"id": "0f1b0862-f2c6-4664-975b-93692f20a206", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1086849123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c321b947f7be4f1fa44ee9f6341fd754", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee863bd0-e2", "ovs_interfaceid": "ee863bd0-e205-45e3-ac75-ed5c113dfc42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:00:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:00.218 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:31:27 10.100.0.7'], port_security=['fa:16:3e:0a:31:27 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd42b53d1-610a-435d-bb8a-2bac1fcef51c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f1b0862-f2c6-4664-975b-93692f20a206', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c321b947f7be4f1fa44ee9f6341fd754', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac1ee23a-3469-4fa6-8f1d-b508d5c15c3f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d480fec-d775-40a4-ad98-1669e1f95707, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ee863bd0-e205-45e3-ac75-ed5c113dfc42) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.218 238945 DEBUG nova.network.os_vif_util [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.219 238945 DEBUG os_vif [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.220 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee863bd0-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.223 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.225 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.227 238945 INFO os_vif [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:31:27,bridge_name='br-int',has_traffic_filtering=True,id=ee863bd0-e205-45e3-ac75-ed5c113dfc42,network=Network(0f1b0862-f2c6-4664-975b-93692f20a206),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee863bd0-e2')#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.292 238945 DEBUG nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Received event network-vif-deleted-c3e32fae-fe60-4d39-980d-58000d56deee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.293 238945 DEBUG nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-vif-unplugged-b405c0ca-029a-4203-9890-f05309eea795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.293 238945 DEBUG oslo_concurrency.lockutils [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.293 238945 DEBUG oslo_concurrency.lockutils [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.294 238945 DEBUG oslo_concurrency.lockutils [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.294 238945 DEBUG nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] No waiting events found dispatching network-vif-unplugged-b405c0ca-029a-4203-9890-f05309eea795 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.294 238945 DEBUG nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-vif-unplugged-b405c0ca-029a-4203-9890-f05309eea795 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.294 238945 DEBUG nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.294 238945 DEBUG oslo_concurrency.lockutils [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.295 238945 DEBUG oslo_concurrency.lockutils [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.295 238945 DEBUG oslo_concurrency.lockutils [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.295 238945 DEBUG nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] No waiting events found dispatching network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.295 238945 WARNING nova.compute.manager [req-5ae0fa3a-88a9-490e-9af5-d2f26e0f9e3b req-f2621ab2-9fbc-4a5e-aec6-e164a3f2780d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received unexpected event network-vif-plugged-b405c0ca-029a-4203-9890-f05309eea795 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:00:00 np0005597378 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [NOTICE]   (317061) : haproxy version is 2.8.14-c23fe91
Jan 27 09:00:00 np0005597378 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [NOTICE]   (317061) : path to executable is /usr/sbin/haproxy
Jan 27 09:00:00 np0005597378 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [WARNING]  (317061) : Exiting Master process...
Jan 27 09:00:00 np0005597378 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [ALERT]    (317061) : Current worker (317063) exited with code 143 (Terminated)
Jan 27 09:00:00 np0005597378 neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206[317057]: [WARNING]  (317061) : All workers exited. Exiting... (0)
Jan 27 09:00:00 np0005597378 systemd[1]: libpod-1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627.scope: Deactivated successfully.
Jan 27 09:00:00 np0005597378 podman[317253]: 2026-01-27 14:00:00.446145485 +0000 UTC m=+0.258643264 container died 1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.446 238945 DEBUG nova.compute.manager [req-039ad7f7-f109-4296-a70f-ece21d6f3557 req-95d50dec-2059-45e4-a7f6-8961e564f561 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-unplugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.446 238945 DEBUG oslo_concurrency.lockutils [req-039ad7f7-f109-4296-a70f-ece21d6f3557 req-95d50dec-2059-45e4-a7f6-8961e564f561 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.446 238945 DEBUG oslo_concurrency.lockutils [req-039ad7f7-f109-4296-a70f-ece21d6f3557 req-95d50dec-2059-45e4-a7f6-8961e564f561 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.447 238945 DEBUG oslo_concurrency.lockutils [req-039ad7f7-f109-4296-a70f-ece21d6f3557 req-95d50dec-2059-45e4-a7f6-8961e564f561 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.447 238945 DEBUG nova.compute.manager [req-039ad7f7-f109-4296-a70f-ece21d6f3557 req-95d50dec-2059-45e4-a7f6-8961e564f561 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-unplugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:00:00 np0005597378 nova_compute[238941]: 2026-01-27 14:00:00.447 238945 DEBUG nova.compute.manager [req-039ad7f7-f109-4296-a70f-ece21d6f3557 req-95d50dec-2059-45e4-a7f6-8961e564f561 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-unplugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:00:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 167 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 27 09:00:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627-userdata-shm.mount: Deactivated successfully.
Jan 27 09:00:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay-422a14eb3bc6d694487d804915015cb9c6f37c1118a704d0e330dff6ee550da6-merged.mount: Deactivated successfully.
Jan 27 09:00:01 np0005597378 podman[317253]: 2026-01-27 14:00:01.025396054 +0000 UTC m=+0.837893833 container cleanup 1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:00:01 np0005597378 podman[317307]: 2026-01-27 14:00:01.031961918 +0000 UTC m=+0.071660279 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 09:00:01 np0005597378 podman[317308]: 2026-01-27 14:00:01.101389396 +0000 UTC m=+0.137450672 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:00:01 np0005597378 podman[317340]: 2026-01-27 14:00:01.288384213 +0000 UTC m=+0.241103783 container remove 1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.294 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e895d2e-fde6-4128-91ec-bfa7be9c158b]: (4, ('Tue Jan 27 02:00:00 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206 (1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627)\n1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627\nTue Jan 27 02:00:01 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206 (1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627)\n1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.295 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[756c4863-fa66-4e59-a14f-901249646212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.296 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f1b0862-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:01 np0005597378 kernel: tap0f1b0862-f0: left promiscuous mode
Jan 27 09:00:01 np0005597378 systemd[1]: libpod-conmon-1c774faeffb45648af70f296410854276f437a1cde6bc0678fcbd39e6a3c6627.scope: Deactivated successfully.
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.315 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8844bc52-379e-4c27-8bf7-890dcfb9b312]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.328 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8a879223-89c1-4a02-bb62-6791084ffddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.330 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3bcdf405-7af4-4cac-98c1-c581add1f566]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.346 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ea4316c9-5674-4823-bff8-a6ef78f022d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503521, 'reachable_time': 22474, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317370, 'error': None, 'target': 'ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.348 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0f1b0862-f2c6-4664-975b-93692f20a206 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.348 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[41aea779-71ad-48ef-b861-b390af602146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.349 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ee863bd0-e205-45e3-ac75-ed5c113dfc42 in datapath 0f1b0862-f2c6-4664-975b-93692f20a206 unbound from our chassis#033[00m
Jan 27 09:00:01 np0005597378 systemd[1]: run-netns-ovnmeta\x2d0f1b0862\x2df2c6\x2d4664\x2d975b\x2d93692f20a206.mount: Deactivated successfully.
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.350 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f1b0862-f2c6-4664-975b-93692f20a206, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.351 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d00ff53-2a0e-4b25-a675-04d27b7b916e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.351 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ee863bd0-e205-45e3-ac75-ed5c113dfc42 in datapath 0f1b0862-f2c6-4664-975b-93692f20a206 unbound from our chassis#033[00m
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.352 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f1b0862-f2c6-4664-975b-93692f20a206, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:00:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:01.353 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ad035b3a-a5be-4e5c-98e0-59d78c6decc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.808 238945 INFO nova.virt.libvirt.driver [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Deleting instance files /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7_del#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.810 238945 INFO nova.virt.libvirt.driver [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Deletion of /var/lib/nova/instances/b869d848-1a7e-4a04-95f2-cedc16ebe1f7_del complete#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.874 238945 INFO nova.compute.manager [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Took 2.47 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.875 238945 DEBUG oslo.service.loopingcall [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.876 238945 DEBUG nova.compute.manager [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.876 238945 DEBUG nova.network.neutron [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.926 238945 INFO nova.virt.libvirt.driver [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Deleting instance files /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c_del#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.927 238945 INFO nova.virt.libvirt.driver [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Deletion of /var/lib/nova/instances/d42b53d1-610a-435d-bb8a-2bac1fcef51c_del complete#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.980 238945 INFO nova.compute.manager [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Took 2.02 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.981 238945 DEBUG oslo.service.loopingcall [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.981 238945 DEBUG nova.compute.manager [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:00:01 np0005597378 nova_compute[238941]: 2026-01-27 14:00:01.981 238945 DEBUG nova.network.neutron [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.565 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.565 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.566 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.566 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.566 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.566 238945 WARNING nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received unexpected event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.566 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 WARNING nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received unexpected event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with vm_state active and task_state deleting.
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:00:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 167 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 993 KiB/s wr, 138 op/s
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.567 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.568 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.568 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.568 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.568 238945 WARNING nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received unexpected event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with vm_state active and task_state deleting.
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.569 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-unplugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.569 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.569 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.569 238945 DEBUG oslo_concurrency.lockutils [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.569 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-unplugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.569 238945 DEBUG nova.compute.manager [req-b4dfc817-f994-4f26-b480-0f7fde4bdb1f req-9df912fe-d8f4-4d82-8fc5-52c6f6b361a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-unplugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.680 238945 DEBUG nova.network.neutron [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.703 238945 INFO nova.compute.manager [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Took 0.72 seconds to deallocate network for instance.
Jan 27 09:00:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.764 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.764 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.847 238945 DEBUG nova.network.neutron [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.866 238945 INFO nova.compute.manager [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Took 0.99 seconds to deallocate network for instance.
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.876 238945 DEBUG oslo_concurrency.processutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.937 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:00:02 np0005597378 nova_compute[238941]: 2026-01-27 14:00:02.938 238945 DEBUG nova.compute.manager [req-08cc5002-b27a-4be6-8d72-18411ee90cb5 req-6313b646-8cb6-42db-9a66-b5d3a0fd9fe7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Received event network-vif-deleted-b405c0ca-029a-4203-9890-f05309eea795 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:00:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:00:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/202811022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:00:03 np0005597378 nova_compute[238941]: 2026-01-27 14:00:03.477 238945 DEBUG oslo_concurrency.processutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:00:03 np0005597378 nova_compute[238941]: 2026-01-27 14:00:03.483 238945 DEBUG nova.compute.provider_tree [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:00:03 np0005597378 nova_compute[238941]: 2026-01-27 14:00:03.514 238945 DEBUG nova.scheduler.client.report [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:00:03 np0005597378 nova_compute[238941]: 2026-01-27 14:00:03.538 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:03 np0005597378 nova_compute[238941]: 2026-01-27 14:00:03.541 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:03 np0005597378 nova_compute[238941]: 2026-01-27 14:00:03.567 238945 INFO nova.scheduler.client.report [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Deleted allocations for instance d42b53d1-610a-435d-bb8a-2bac1fcef51c
Jan 27 09:00:03 np0005597378 nova_compute[238941]: 2026-01-27 14:00:03.585 238945 DEBUG oslo_concurrency.processutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:00:03 np0005597378 nova_compute[238941]: 2026-01-27 14:00:03.643 238945 DEBUG oslo_concurrency.lockutils [None req-eea49ce1-fc3a-46ab-b0a1-5dc4cc89de3d 6f4a784901be46db82915ff7ad73491a c321b947f7be4f1fa44ee9f6341fd754 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:00:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3205467216' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:00:04 np0005597378 nova_compute[238941]: 2026-01-27 14:00:04.160 238945 DEBUG oslo_concurrency.processutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:00:04 np0005597378 nova_compute[238941]: 2026-01-27 14:00:04.166 238945 DEBUG nova.compute.provider_tree [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:00:04 np0005597378 nova_compute[238941]: 2026-01-27 14:00:04.183 238945 DEBUG nova.scheduler.client.report [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:00:04 np0005597378 nova_compute[238941]: 2026-01-27 14:00:04.203 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:04 np0005597378 nova_compute[238941]: 2026-01-27 14:00:04.229 238945 INFO nova.scheduler.client.report [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Deleted allocations for instance b869d848-1a7e-4a04-95f2-cedc16ebe1f7
Jan 27 09:00:04 np0005597378 nova_compute[238941]: 2026-01-27 14:00:04.287 238945 DEBUG oslo_concurrency.lockutils [None req-bde8c4f7-bd38-463c-9635-39666619f11e 84aa975dea454d9dafe5d1583c4d0f0e 393fd88e226e4f0e95954956b0fc8f40 - - default default] Lock "b869d848-1a7e-4a04-95f2-cedc16ebe1f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 91 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 995 KiB/s wr, 190 op/s
Jan 27 09:00:04 np0005597378 nova_compute[238941]: 2026-01-27 14:00:04.666 238945 DEBUG nova.compute.manager [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:00:04 np0005597378 nova_compute[238941]: 2026-01-27 14:00:04.666 238945 DEBUG oslo_concurrency.lockutils [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:00:04 np0005597378 nova_compute[238941]: 2026-01-27 14:00:04.666 238945 DEBUG oslo_concurrency.lockutils [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:04 np0005597378 nova_compute[238941]: 2026-01-27 14:00:04.667 238945 DEBUG oslo_concurrency.lockutils [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d42b53d1-610a-435d-bb8a-2bac1fcef51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:04 np0005597378 nova_compute[238941]: 2026-01-27 14:00:04.667 238945 DEBUG nova.compute.manager [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] No waiting events found dispatching network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:00:04 np0005597378 nova_compute[238941]: 2026-01-27 14:00:04.667 238945 WARNING nova.compute.manager [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received unexpected event network-vif-plugged-ee863bd0-e205-45e3-ac75-ed5c113dfc42 for instance with vm_state deleted and task_state None.
Jan 27 09:00:04 np0005597378 nova_compute[238941]: 2026-01-27 14:00:04.667 238945 DEBUG nova.compute.manager [req-7f253195-7f53-4cfd-aa04-aab9fd29d52b req-34aaf64a-7dc9-45bb-8dcd-b16619f52cd3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Received event network-vif-deleted-ee863bd0-e205-45e3-ac75-ed5c113dfc42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:00:05 np0005597378 nova_compute[238941]: 2026-01-27 14:00:05.224 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:00:06 np0005597378 nova_compute[238941]: 2026-01-27 14:00:06.223 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:00:06 np0005597378 nova_compute[238941]: 2026-01-27 14:00:06.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:00:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 41 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 61 KiB/s wr, 197 op/s
Jan 27 09:00:07 np0005597378 nova_compute[238941]: 2026-01-27 14:00:07.164 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:00:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 195 op/s
Jan 27 09:00:09 np0005597378 nova_compute[238941]: 2026-01-27 14:00:09.440 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522394.4382403, 975c9bc3-152a-44ef-843b-135ecb2d18d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:00:09 np0005597378 nova_compute[238941]: 2026-01-27 14:00:09.440 238945 INFO nova.compute.manager [-] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] VM Stopped (Lifecycle Event)
Jan 27 09:00:09 np0005597378 nova_compute[238941]: 2026-01-27 14:00:09.460 238945 DEBUG nova.compute.manager [None req-2e3bef07-abae-4861-8bc3-379456d4fbd4 - - - - - -] [instance: 975c9bc3-152a-44ef-843b-135ecb2d18d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:00:10 np0005597378 nova_compute[238941]: 2026-01-27 14:00:10.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:00:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.8 KiB/s wr, 109 op/s
Jan 27 09:00:12 np0005597378 nova_compute[238941]: 2026-01-27 14:00:12.167 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:00:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 2.3 KiB/s wr, 71 op/s
Jan 27 09:00:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 2.3 KiB/s wr, 71 op/s
Jan 27 09:00:14 np0005597378 nova_compute[238941]: 2026-01-27 14:00:14.799 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:00:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:14.799 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:00:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:14.800 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:00:14 np0005597378 nova_compute[238941]: 2026-01-27 14:00:14.836 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522399.8355806, b869d848-1a7e-4a04-95f2-cedc16ebe1f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:00:14 np0005597378 nova_compute[238941]: 2026-01-27 14:00:14.836 238945 INFO nova.compute.manager [-] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] VM Stopped (Lifecycle Event)
Jan 27 09:00:14 np0005597378 nova_compute[238941]: 2026-01-27 14:00:14.860 238945 DEBUG nova.compute.manager [None req-b7b6e139-850c-4540-b526-d681e893b46d - - - - - -] [instance: b869d848-1a7e-4a04-95f2-cedc16ebe1f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:00:14 np0005597378 nova_compute[238941]: 2026-01-27 14:00:14.926 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "cc034275-7dd9-4d59-82ed-28755e2c6559" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:00:14 np0005597378 nova_compute[238941]: 2026-01-27 14:00:14.927 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "cc034275-7dd9-4d59-82ed-28755e2c6559" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:14 np0005597378 nova_compute[238941]: 2026-01-27 14:00:14.941 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.056 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.057 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.063 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.064 238945 INFO nova.compute.claims [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Claim successful on node compute-0.ctlplane.example.com
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.180 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.212 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522400.1982243, d42b53d1-610a-435d-bb8a-2bac1fcef51c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.213 238945 INFO nova.compute.manager [-] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] VM Stopped (Lifecycle Event)
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.228 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.240 238945 DEBUG nova.compute.manager [None req-47ae4ff5-4670-4278-9ca5-76a0996d86c2 - - - - - -] [instance: d42b53d1-610a-435d-bb8a-2bac1fcef51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:00:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:00:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1259577543' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.792 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.800 238945 DEBUG nova.compute.provider_tree [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.818 238945 DEBUG nova.scheduler.client.report [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.851 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.852 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.903 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.921 238945 INFO nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:00:15 np0005597378 nova_compute[238941]: 2026-01-27 14:00:15.945 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.054 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.055 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.055 238945 INFO nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Creating image(s)#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.077 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.100 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.121 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.126 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.191 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.192 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.210 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.210 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.211 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.211 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.231 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.235 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f cc034275-7dd9-4d59-82ed-28755e2c6559_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.270 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.335 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.336 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.343 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.344 238945 INFO nova.compute.claims [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.467 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.503 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f cc034275-7dd9-4d59-82ed-28755e2c6559_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.557 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] resizing rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:00:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 41 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 341 B/s wr, 19 op/s
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.609 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.610 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.635 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.643 238945 DEBUG nova.objects.instance [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'migration_context' on Instance uuid cc034275-7dd9-4d59-82ed-28755e2c6559 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.660 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.660 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Ensure instance console log exists: /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.661 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.661 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.661 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.663 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.667 238945 WARNING nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.672 238945 DEBUG nova.virt.libvirt.host [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.672 238945 DEBUG nova.virt.libvirt.host [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.676 238945 DEBUG nova.virt.libvirt.host [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.677 238945 DEBUG nova.virt.libvirt.host [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.677 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.677 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.678 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.678 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.678 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.679 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.679 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.679 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.679 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.680 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.680 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.680 238945 DEBUG nova.virt.hardware [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.683 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:16 np0005597378 nova_compute[238941]: 2026-01-27 14:00:16.738 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:00:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3761244372' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.006 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.012 238945 DEBUG nova.compute.provider_tree [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.037 238945 DEBUG nova.scheduler.client.report [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.060 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.061 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.064 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.068 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.069 238945 INFO nova.compute.claims [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:00:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:00:17
Jan 27 09:00:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:00:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:00:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control']
Jan 27 09:00:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.130 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.160 238945 INFO nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.167 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.181 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 09:00:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389980491' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.238 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.259 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.263 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.298 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.335 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.336 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.337 238945 INFO nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Creating image(s)
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.357 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.379 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.398 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.404 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.479 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.480 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.481 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.481 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.503 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.507 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:00:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.780 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:00:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1270907515' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:00:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:00:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:00:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:00:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:00:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.834 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] resizing rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 09:00:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:00:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147550781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.860 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.862 238945 DEBUG nova.objects.instance [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'pci_devices' on Instance uuid cc034275-7dd9-4d59-82ed-28755e2c6559 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.869 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.873 238945 DEBUG nova.compute.provider_tree [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.887 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  <uuid>cc034275-7dd9-4d59-82ed-28755e2c6559</uuid>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  <name>instance-0000005d</name>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerShowV247Test-server-478856327</nova:name>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:00:16</nova:creationTime>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:        <nova:user uuid="93260199bb344997ae7449060a9adee6">tempest-ServerShowV247Test-29714096-project-member</nova:user>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:        <nova:project uuid="6f0de6d14fb34a0b805053a94d5e8a6c">tempest-ServerShowV247Test-29714096</nova:project>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <entry name="serial">cc034275-7dd9-4d59-82ed-28755e2c6559</entry>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <entry name="uuid">cc034275-7dd9-4d59-82ed-28755e2c6559</entry>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/cc034275-7dd9-4d59-82ed-28755e2c6559_disk">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/cc034275-7dd9-4d59-82ed-28755e2c6559_disk.config">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/console.log" append="off"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:00:17 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:00:17 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:00:17 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:00:17 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.917 238945 DEBUG nova.scheduler.client.report [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.930 238945 DEBUG nova.objects.instance [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'migration_context' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.952 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.952 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Ensure instance console log exists: /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.952 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.953 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.953 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.954 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.956 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.956 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.961 238945 WARNING nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.967 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.967 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.968 238945 INFO nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Using config drive
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.988 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.993 238945 DEBUG nova.virt.libvirt.host [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 09:00:17 np0005597378 nova_compute[238941]: 2026-01-27 14:00:17.994 238945 DEBUG nova.virt.libvirt.host [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.005 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.005 238945 DEBUG nova.network.neutron [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.010 238945 DEBUG nova.virt.libvirt.host [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.011 238945 DEBUG nova.virt.libvirt.host [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.011 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.011 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.012 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.012 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.012 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.012 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.013 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.013 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.013 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.013 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.013 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.014 238945 DEBUG nova.virt.hardware [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.017 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.056 238945 INFO nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:00:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:00:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:00:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:00:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:00:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:00:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:00:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:00:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:00:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:00:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.078 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.180 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.181 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.182 238945 INFO nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Creating image(s)#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.201 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.220 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.245 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.252 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.322 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.323 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.323 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.324 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.341 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.344 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d133d5f9-1c2b-4996-955c-be57e53a44ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.495 238945 INFO nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Creating config drive at /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/disk.config#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.501 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjx3f_w5e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883461732' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.571 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 98 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.0 MiB/s wr, 40 op/s
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.594 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.597 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.634 238945 DEBUG nova.policy [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e4411258cb6240ddb5365fb25e762594', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.636 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d133d5f9-1c2b-4996-955c-be57e53a44ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.664 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjx3f_w5e" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.690 238945 DEBUG nova.storage.rbd_utils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image cc034275-7dd9-4d59-82ed-28755e2c6559_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.694 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/disk.config cc034275-7dd9-4d59-82ed-28755e2c6559_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.763 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] resizing rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.851 238945 DEBUG oslo_concurrency.processutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/disk.config cc034275-7dd9-4d59-82ed-28755e2c6559_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.852 238945 INFO nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Deleting local config drive /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559/disk.config because it was imported into RBD.#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.862 238945 DEBUG nova.objects.instance [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'migration_context' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.889 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.890 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Ensure instance console log exists: /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.890 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.891 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:18 np0005597378 nova_compute[238941]: 2026-01-27 14:00:18.891 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:18 np0005597378 systemd-machined[207425]: New machine qemu-107-instance-0000005d.
Jan 27 09:00:18 np0005597378 systemd[1]: Started Virtual Machine qemu-107-instance-0000005d.
Jan 27 09:00:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111433340' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.175 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.176 238945 DEBUG nova.objects.instance [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.196 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  <uuid>5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2</uuid>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  <name>instance-0000005e</name>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerShowV247Test-server-2098162892</nova:name>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:00:17</nova:creationTime>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:        <nova:user uuid="93260199bb344997ae7449060a9adee6">tempest-ServerShowV247Test-29714096-project-member</nova:user>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:        <nova:project uuid="6f0de6d14fb34a0b805053a94d5e8a6c">tempest-ServerShowV247Test-29714096</nova:project>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <entry name="serial">5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2</entry>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <entry name="uuid">5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2</entry>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/console.log" append="off"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:00:19 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:00:19 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:00:19 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:00:19 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.240 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.241 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.241 238945 INFO nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Using config drive#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.261 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.358 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.359 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.377 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.418 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522419.4179404, cc034275-7dd9-4d59-82ed-28755e2c6559 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.419 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.422 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.422 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.427 238945 INFO nova.virt.libvirt.driver [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Instance spawned successfully.#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.427 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.432 238945 INFO nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Creating config drive at /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.438 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6g7lh31m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.502 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.508 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.508 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.509 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.509 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.509 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.510 238945 DEBUG nova.virt.libvirt.driver [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.513 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.517 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.517 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.523 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.523 238945 INFO nova.compute.claims [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.553 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.553 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522419.421813, cc034275-7dd9-4d59-82ed-28755e2c6559 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.553 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] VM Started (Lifecycle Event)#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.598 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.601 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6g7lh31m" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.631 238945 DEBUG nova.storage.rbd_utils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.635 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.679 238945 INFO nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Took 3.62 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.680 238945 DEBUG nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.683 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.708 238945 DEBUG nova.network.neutron [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Successfully created port: e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.727 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.761 238945 INFO nova.compute.manager [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Took 4.73 seconds to build instance.#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.787 238945 DEBUG oslo_concurrency.lockutils [None req-40e3fdf8-2fe3-493d-8aeb-6c9155122ccc 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "cc034275-7dd9-4d59-82ed-28755e2c6559" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.824 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.989 238945 DEBUG oslo_concurrency.processutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:19 np0005597378 nova_compute[238941]: 2026-01-27 14:00:19.991 238945 INFO nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deleting local config drive /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config because it was imported into RBD.#033[00m
Jan 27 09:00:20 np0005597378 systemd-machined[207425]: New machine qemu-108-instance-0000005e.
Jan 27 09:00:20 np0005597378 systemd[1]: Started Virtual Machine qemu-108-instance-0000005e.
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.229 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:00:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422940735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.413 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.420 238945 DEBUG nova.compute.provider_tree [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.458 238945 DEBUG nova.scheduler.client.report [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.490 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.491 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.569 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.570 238945 DEBUG nova.network.neutron [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:00:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 137 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.6 MiB/s wr, 80 op/s
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.601 238945 INFO nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.621 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.682 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522420.6820095, 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.683 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.684 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.685 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.688 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance spawned successfully.#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.688 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.711 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.714 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.731 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.731 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.732 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.732 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.733 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.733 238945 DEBUG nova.virt.libvirt.driver [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.743 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.744 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522420.6827462, 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.745 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] VM Started (Lifecycle Event)#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.749 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.750 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.751 238945 INFO nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Creating image(s)#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.770 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.792 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:20.802 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.814 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.817 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.854 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.856 238945 INFO nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Took 3.52 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.857 238945 DEBUG nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.861 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.891 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.893 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.893 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.894 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.894 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.919 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.924 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.965 238945 DEBUG nova.network.neutron [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Successfully updated port: e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.979 238945 INFO nova.compute.manager [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Took 4.66 seconds to build instance.#033[00m
Jan 27 09:00:20 np0005597378 nova_compute[238941]: 2026-01-27 14:00:20.998 238945 DEBUG nova.policy [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e4411258cb6240ddb5365fb25e762594', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.001 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.002 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquired lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.002 238945 DEBUG nova.network.neutron [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.004 238945 DEBUG oslo_concurrency.lockutils [None req-b8de63f4-36b3-4555-b152-9455bb621847 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.192 238945 DEBUG nova.compute.manager [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-changed-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.193 238945 DEBUG nova.compute.manager [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Refreshing instance network info cache due to event network-changed-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.193 238945 DEBUG oslo_concurrency.lockutils [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.206 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.266 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] resizing rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.374 238945 DEBUG nova.objects.instance [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'migration_context' on Instance uuid 5e1a13b1-a322-4bcd-a54b-0e4061979313 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.422 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.422 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Ensure instance console log exists: /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.423 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.423 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.423 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.454 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.455 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.488 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.545 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.546 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.553 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.553 238945 INFO nova.compute.claims [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.576 238945 DEBUG nova.network.neutron [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.703 238945 DEBUG nova.network.neutron [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Successfully created port: a2303563-a056-42f8-a941-7a95b6258e2c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:00:21 np0005597378 nova_compute[238941]: 2026-01-27 14:00:21.751 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.169 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:00:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/424745161' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.335 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.349 238945 DEBUG nova.compute.provider_tree [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.381 238945 DEBUG nova.scheduler.client.report [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.417 238945 DEBUG nova.network.neutron [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Successfully updated port: a2303563-a056-42f8-a941-7a95b6258e2c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.443 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.445 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.454 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "refresh_cache-5e1a13b1-a322-4bcd-a54b-0e4061979313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.455 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquired lock "refresh_cache-5e1a13b1-a322-4bcd-a54b-0e4061979313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.456 238945 DEBUG nova.network.neutron [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.524 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.526 238945 DEBUG nova.network.neutron [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.544 238945 INFO nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.561 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:00:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 137 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.6 MiB/s wr, 80 op/s
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.645 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.647 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.647 238945 INFO nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Creating image(s)#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.667 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.689 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.711 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.715 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.750 238945 DEBUG nova.network.neutron [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:00:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.777 238945 INFO nova.compute.manager [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Rebuilding instance#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.784 238945 DEBUG nova.network.neutron [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Updating instance_info_cache with network_info: [{"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.787 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.787 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.788 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.788 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.805 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.809 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5606aadf-848a-49fc-9cfd-897be16be855_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.962 238945 DEBUG nova.compute.manager [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-changed-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.962 238945 DEBUG nova.compute.manager [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Refreshing instance network info cache due to event network-changed-a2303563-a056-42f8-a941-7a95b6258e2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.963 238945 DEBUG oslo_concurrency.lockutils [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5e1a13b1-a322-4bcd-a54b-0e4061979313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.965 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Releasing lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.966 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance network_info: |[{"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.967 238945 DEBUG oslo_concurrency.lockutils [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.968 238945 DEBUG nova.network.neutron [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Refreshing network info cache for port e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.971 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Start _get_guest_xml network_info=[{"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.976 238945 WARNING nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.993 238945 DEBUG nova.virt.libvirt.host [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.993 238945 DEBUG nova.virt.libvirt.host [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:00:22 np0005597378 nova_compute[238941]: 2026-01-27 14:00:22.996 238945 DEBUG nova.policy [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e4411258cb6240ddb5365fb25e762594', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.001 238945 DEBUG nova.virt.libvirt.host [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.002 238945 DEBUG nova.virt.libvirt.host [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.002 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.002 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.002 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.003 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.003 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.003 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.003 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.003 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.003 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.004 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.004 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.004 238945 DEBUG nova.virt.hardware [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.007 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.110 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5606aadf-848a-49fc-9cfd-897be16be855_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.176 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] resizing rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.211 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.266 238945 DEBUG nova.objects.instance [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'migration_context' on Instance uuid 5606aadf-848a-49fc-9cfd-897be16be855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.312 238945 DEBUG nova.compute.manager [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.346 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.346 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Ensure instance console log exists: /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.347 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.347 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.348 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.380 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'pci_requests' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.391 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.401 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'resources' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.413 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'migration_context' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.427 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.430 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 09:00:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4827397' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.644 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.663 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:23 np0005597378 nova_compute[238941]: 2026-01-27 14:00:23.668 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1008564448' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.230 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.233 238945 DEBUG nova.virt.libvirt.vif [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:00:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-523711638',display_name='tempest-ListServerFiltersTestJSON-instance-523711638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-523711638',id=95,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-47u0a24r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-
ListServerFiltersTestJSON-1240027263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:00:18Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=d133d5f9-1c2b-4996-955c-be57e53a44ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.233 238945 DEBUG nova.network.os_vif_util [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.234 238945 DEBUG nova.network.os_vif_util [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.235 238945 DEBUG nova.objects.instance [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'pci_devices' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.250 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  <uuid>d133d5f9-1c2b-4996-955c-be57e53a44ec</uuid>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  <name>instance-0000005f</name>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-523711638</nova:name>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:00:22</nova:creationTime>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:        <nova:user uuid="e4411258cb6240ddb5365fb25e762594">tempest-ListServerFiltersTestJSON-1240027263-project-member</nova:user>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:        <nova:project uuid="d302ba58879d43258f0a8abe2d81f03a">tempest-ListServerFiltersTestJSON-1240027263</nova:project>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:        <nova:port uuid="e50fbfa4-a9d5-403e-a3ce-e3cd499555b4">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <entry name="serial">d133d5f9-1c2b-4996-955c-be57e53a44ec</entry>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <entry name="uuid">d133d5f9-1c2b-4996-955c-be57e53a44ec</entry>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d133d5f9-1c2b-4996-955c-be57e53a44ec_disk">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:fe:6a:3d"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <target dev="tape50fbfa4-a9"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/console.log" append="off"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:00:24 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:00:24 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:00:24 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:00:24 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.251 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Preparing to wait for external event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.252 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.252 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.252 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.253 238945 DEBUG nova.virt.libvirt.vif [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:00:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-523711638',display_name='tempest-ListServerFiltersTestJSON-instance-523711638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-523711638',id=95,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-47u0a24r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name
='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:00:18Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=d133d5f9-1c2b-4996-955c-be57e53a44ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.253 238945 DEBUG nova.network.os_vif_util [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.254 238945 DEBUG nova.network.os_vif_util [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.254 238945 DEBUG os_vif [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.255 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.255 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.256 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.259 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape50fbfa4-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.259 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape50fbfa4-a9, col_values=(('external_ids', {'iface-id': 'e50fbfa4-a9d5-403e-a3ce-e3cd499555b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:6a:3d', 'vm-uuid': 'd133d5f9-1c2b-4996-955c-be57e53a44ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:24 np0005597378 NetworkManager[48904]: <info>  [1769522424.2615] manager: (tape50fbfa4-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.265 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.268 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.269 238945 INFO os_vif [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9')#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.341 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.342 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.342 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No VIF found with MAC fa:16:3e:fe:6a:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.343 238945 INFO nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Using config drive#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.360 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.407 238945 DEBUG nova.network.neutron [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Updating instance_info_cache with network_info: [{"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:00:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 227 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.8 MiB/s wr, 200 op/s
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.641 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Releasing lock "refresh_cache-5e1a13b1-a322-4bcd-a54b-0e4061979313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.641 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Instance network_info: |[{"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.642 238945 DEBUG oslo_concurrency.lockutils [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5e1a13b1-a322-4bcd-a54b-0e4061979313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.642 238945 DEBUG nova.network.neutron [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Refreshing network info cache for port a2303563-a056-42f8-a941-7a95b6258e2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.644 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Start _get_guest_xml network_info=[{"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '0ee8954b-88fb-4f95-ac2f-0ee07bab09cc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.648 238945 WARNING nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.652 238945 DEBUG nova.virt.libvirt.host [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.653 238945 DEBUG nova.virt.libvirt.host [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.656 238945 DEBUG nova.virt.libvirt.host [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.656 238945 DEBUG nova.virt.libvirt.host [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.657 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.657 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.657 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.657 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.658 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.658 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.658 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.658 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.659 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.659 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.659 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.659 238945 DEBUG nova.virt.hardware [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:00:24 np0005597378 nova_compute[238941]: 2026-01-27 14:00:24.662 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/858937623' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.250 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.271 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.279 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.318 238945 INFO nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Creating config drive at /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/disk.config#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.323 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb966ocuq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.461 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb966ocuq" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.489 238945 DEBUG nova.storage.rbd_utils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.494 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/disk.config d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.806 238945 DEBUG oslo_concurrency.processutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/disk.config d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.807 238945 INFO nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Deleting local config drive /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/disk.config because it was imported into RBD.#033[00m
Jan 27 09:00:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3667058186' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.846 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.848 238945 DEBUG nova.virt.libvirt.vif [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:00:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1050553987',display_name='tempest-ListServerFiltersTestJSON-instance-1050553987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1050553987',id=96,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-12ac9n4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:00:20Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=5e1a13b1-a322-4bcd-a54b-0e4061979313,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.848 238945 DEBUG nova.network.os_vif_util [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.849 238945 DEBUG nova.network.os_vif_util [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:00:25 np0005597378 kernel: tape50fbfa4-a9: entered promiscuous mode
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.851 238945 DEBUG nova.objects.instance [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e1a13b1-a322-4bcd-a54b-0e4061979313 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:25 np0005597378 NetworkManager[48904]: <info>  [1769522425.8525] manager: (tape50fbfa4-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/363)
Jan 27 09:00:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:25Z|00862|binding|INFO|Claiming lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for this chassis.
Jan 27 09:00:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:25Z|00863|binding|INFO|e50fbfa4-a9d5-403e-a3ce-e3cd499555b4: Claiming fa:16:3e:fe:6a:3d 10.100.0.8
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.859 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.867 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  <uuid>5e1a13b1-a322-4bcd-a54b-0e4061979313</uuid>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  <name>instance-00000060</name>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1050553987</nova:name>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:00:24</nova:creationTime>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:        <nova:user uuid="e4411258cb6240ddb5365fb25e762594">tempest-ListServerFiltersTestJSON-1240027263-project-member</nova:user>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:        <nova:project uuid="d302ba58879d43258f0a8abe2d81f03a">tempest-ListServerFiltersTestJSON-1240027263</nova:project>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:        <nova:port uuid="a2303563-a056-42f8-a941-7a95b6258e2c">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <entry name="serial">5e1a13b1-a322-4bcd-a54b-0e4061979313</entry>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <entry name="uuid">5e1a13b1-a322-4bcd-a54b-0e4061979313</entry>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5e1a13b1-a322-4bcd-a54b-0e4061979313_disk">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:58:bd:ff"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <target dev="tapa2303563-a0"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/console.log" append="off"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:00:25 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:00:25 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:00:25 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:00:25 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.868 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Preparing to wait for external event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:00:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.876 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:6a:3d 10.100.0.8'], port_security=['fa:16:3e:fe:6a:3d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd133d5f9-1c2b-4996-955c-be57e53a44ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:00:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.877 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 bound to our chassis#033[00m
Jan 27 09:00:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.878 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.879 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.879 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.880 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.880 238945 DEBUG nova.virt.libvirt.vif [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:00:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1050553987',display_name='tempest-ListServerFiltersTestJSON-instance-1050553987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1050553987',id=96,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-12ac9n4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_n
ame='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:00:20Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=5e1a13b1-a322-4bcd-a54b-0e4061979313,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.881 238945 DEBUG nova.network.os_vif_util [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.881 238945 DEBUG nova.network.os_vif_util [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.882 238945 DEBUG os_vif [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:00:25 np0005597378 systemd-udevd[318906]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.882 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.883 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.883 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.886 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa2303563-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.887 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa2303563-a0, col_values=(('external_ids', {'iface-id': 'a2303563-a056-42f8-a941-7a95b6258e2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:bd:ff', 'vm-uuid': '5e1a13b1-a322-4bcd-a54b-0e4061979313'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:25 np0005597378 NetworkManager[48904]: <info>  [1769522425.8897] manager: (tapa2303563-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:00:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.896 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d2e9c4-eeec-4014-98a4-109c7b72db06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.897 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap17bd977f-b1 in ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:00:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.900 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap17bd977f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:00:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.900 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7b146687-2b82-46db-ad38-6c4c7a14ad3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.901 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc3f297-e5a5-43ba-be7c-edf246e565b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:25 np0005597378 NetworkManager[48904]: <info>  [1769522425.9070] device (tape50fbfa4-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:00:25 np0005597378 NetworkManager[48904]: <info>  [1769522425.9075] device (tape50fbfa4-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:00:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.914 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe2fbae-61ee-46f2-a69c-26f5ca29acaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:25 np0005597378 systemd-machined[207425]: New machine qemu-109-instance-0000005f.
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.923 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.924 238945 INFO os_vif [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0')#033[00m
Jan 27 09:00:25 np0005597378 systemd[1]: Started Virtual Machine qemu-109-instance-0000005f.
Jan 27 09:00:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:25Z|00864|binding|INFO|Setting lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 ovn-installed in OVS
Jan 27 09:00:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:25Z|00865|binding|INFO|Setting lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 up in Southbound
Jan 27 09:00:25 np0005597378 nova_compute[238941]: 2026-01-27 14:00:25.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.944 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2b34d2-8c79-4663-8f69-18d67a324377]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.976 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b61c5251-52e8-4853-8451-f7450216eb35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:25.981 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea63845-4095-4b7e-8e57-a3d3c87d4d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:25 np0005597378 NetworkManager[48904]: <info>  [1769522425.9843] manager: (tap17bd977f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/365)
Jan 27 09:00:25 np0005597378 systemd-udevd[318912]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.018 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.018 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.019 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No VIF found with MAC fa:16:3e:58:bd:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.019 238945 INFO nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Using config drive#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.022 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb2ce7f-8fe6-4036-942c-15be41dc4c42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.025 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[95b01ed5-9c40-4ce4-896a-643b4aa069e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.046 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:26 np0005597378 NetworkManager[48904]: <info>  [1769522426.0503] device (tap17bd977f-b0): carrier: link connected
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.054 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0d6221-7dc2-41ed-9529-f2d1ef587f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.074 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1fe26f-5366-44c1-bb68-7f9d905c66c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318965, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.103 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fda9716e-b974-4960-9228-61edb2efd9d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:b243'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506567, 'tstamp': 506567}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318966, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.119 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[519c11ec-6cd0-4db5-b2fa-cc1eb7b73f2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318967, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.165 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae5587a-d963-4162-a122-2e302cea88df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.235 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[94a5aa41-fe42-4e34-9163-f1af8f0b08ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.236 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.236 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.237 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:26 np0005597378 NetworkManager[48904]: <info>  [1769522426.2392] manager: (tap17bd977f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Jan 27 09:00:26 np0005597378 kernel: tap17bd977f-b0: entered promiscuous mode
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.240 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.243 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.244 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:26 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:26Z|00866|binding|INFO|Releasing lport c126ea8f-5f2e-4185-8f97-068a91ffc3c0 from this chassis (sb_readonly=0)
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.262 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/17bd977f-b066-45e7-b87f-f20ad7836858.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/17bd977f-b066-45e7-b87f-f20ad7836858.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.262 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.263 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[82f9e364-ff16-44f2-ad81-31a85fa2b076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.264 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-17bd977f-b066-45e7-b87f-f20ad7836858
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/17bd977f-b066-45e7-b87f-f20ad7836858.pid.haproxy
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 17bd977f-b066-45e7-b87f-f20ad7836858
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:00:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:26.264 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'env', 'PROCESS_TAG=haproxy-17bd977f-b066-45e7-b87f-f20ad7836858', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/17bd977f-b066-45e7-b87f-f20ad7836858.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:00:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 258 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 8.1 MiB/s wr, 272 op/s
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.584 238945 DEBUG nova.network.neutron [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Updated VIF entry in instance network info cache for port e50fbfa4-a9d5-403e-a3ce-e3cd499555b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.585 238945 DEBUG nova.network.neutron [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Updating instance_info_cache with network_info: [{"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.614 238945 DEBUG oslo_concurrency.lockutils [req-472426eb-3c50-40f0-b955-fb213cf23a8f req-c174c3a3-5d1d-4a9d-bec9-e9ea4e22fc4e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.638 238945 DEBUG nova.network.neutron [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Successfully created port: be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:00:26 np0005597378 podman[318998]: 2026-01-27 14:00:26.614900361 +0000 UTC m=+0.021617060 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:00:26 np0005597378 podman[318998]: 2026-01-27 14:00:26.786921082 +0000 UTC m=+0.193637761 container create 1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:00:26 np0005597378 systemd[1]: Started libpod-conmon-1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad.scope.
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.850 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522426.849555, d133d5f9-1c2b-4996-955c-be57e53a44ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.851 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] VM Started (Lifecycle Event)#033[00m
Jan 27 09:00:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:00:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d64967d2ad845ead7e18bb62e24f5d4b27c23a3fab6242d89ed99e913dea666/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.874 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.877 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522426.8504746, d133d5f9-1c2b-4996-955c-be57e53a44ec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.878 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.899 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:26 np0005597378 podman[318998]: 2026-01-27 14:00:26.901023919 +0000 UTC m=+0.307740628 container init 1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.902 238945 INFO nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Creating config drive at /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.907 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ojeggs3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:26 np0005597378 podman[318998]: 2026-01-27 14:00:26.911550456 +0000 UTC m=+0.318267145 container start 1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 09:00:26 np0005597378 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [NOTICE]   (319061) : New worker (319064) forked
Jan 27 09:00:26 np0005597378 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [NOTICE]   (319061) : Loading success.
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.952 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.962 238945 DEBUG nova.compute.manager [req-6fad9dad-2f3b-44c6-b879-524d8a061d06 req-9ddc3939-3911-4d91-baa0-0c220c9ffa50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.963 238945 DEBUG oslo_concurrency.lockutils [req-6fad9dad-2f3b-44c6-b879-524d8a061d06 req-9ddc3939-3911-4d91-baa0-0c220c9ffa50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.963 238945 DEBUG oslo_concurrency.lockutils [req-6fad9dad-2f3b-44c6-b879-524d8a061d06 req-9ddc3939-3911-4d91-baa0-0c220c9ffa50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.964 238945 DEBUG oslo_concurrency.lockutils [req-6fad9dad-2f3b-44c6-b879-524d8a061d06 req-9ddc3939-3911-4d91-baa0-0c220c9ffa50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.964 238945 DEBUG nova.compute.manager [req-6fad9dad-2f3b-44c6-b879-524d8a061d06 req-9ddc3939-3911-4d91-baa0-0c220c9ffa50 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Processing event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.965 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.970 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.971 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.972 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522426.969821, d133d5f9-1c2b-4996-955c-be57e53a44ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.972 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.978 238945 INFO nova.virt.libvirt.driver [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance spawned successfully.#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.978 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:00:26 np0005597378 nova_compute[238941]: 2026-01-27 14:00:26.994 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.000 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.007 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.007 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.008 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.009 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.009 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.010 238945 DEBUG nova.virt.libvirt.driver [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.038 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.057 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ojeggs3" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.082 238945 DEBUG nova.storage.rbd_utils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.086 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.129 238945 INFO nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Took 8.95 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.129 238945 DEBUG nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.170 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.189 238945 INFO nova.compute.manager [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Took 10.47 seconds to build instance.#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.203 238945 DEBUG oslo_concurrency.lockutils [None req-293cec9e-5750-4491-950a-fa59648a3cbf e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.451 238945 DEBUG nova.network.neutron [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Updated VIF entry in instance network info cache for port a2303563-a056-42f8-a941-7a95b6258e2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.452 238945 DEBUG nova.network.neutron [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Updating instance_info_cache with network_info: [{"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.471 238945 DEBUG oslo_concurrency.lockutils [req-7ce28646-7968-49de-8cfe-6696ff0963df req-fdfc66da-b95b-4f18-89f0-578ef6d25da6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5e1a13b1-a322-4bcd-a54b-0e4061979313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.490 238945 DEBUG oslo_concurrency.processutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config 5e1a13b1-a322-4bcd-a54b-0e4061979313_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.491 238945 INFO nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Deleting local config drive /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313/disk.config because it was imported into RBD.#033[00m
Jan 27 09:00:27 np0005597378 kernel: tapa2303563-a0: entered promiscuous mode
Jan 27 09:00:27 np0005597378 NetworkManager[48904]: <info>  [1769522427.5516] manager: (tapa2303563-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/367)
Jan 27 09:00:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:27Z|00867|binding|INFO|Claiming lport a2303563-a056-42f8-a941-7a95b6258e2c for this chassis.
Jan 27 09:00:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:27Z|00868|binding|INFO|a2303563-a056-42f8-a941-7a95b6258e2c: Claiming fa:16:3e:58:bd:ff 10.100.0.10
Jan 27 09:00:27 np0005597378 systemd-udevd[318929]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.563 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:bd:ff 10.100.0.10'], port_security=['fa:16:3e:58:bd:ff 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5e1a13b1-a322-4bcd-a54b-0e4061979313', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a2303563-a056-42f8-a941-7a95b6258e2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.564 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a2303563-a056-42f8-a941-7a95b6258e2c in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 bound to our chassis#033[00m
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.566 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858#033[00m
Jan 27 09:00:27 np0005597378 NetworkManager[48904]: <info>  [1769522427.5675] device (tapa2303563-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:00:27 np0005597378 NetworkManager[48904]: <info>  [1769522427.5681] device (tapa2303563-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:00:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:27Z|00869|binding|INFO|Setting lport a2303563-a056-42f8-a941-7a95b6258e2c ovn-installed in OVS
Jan 27 09:00:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:27Z|00870|binding|INFO|Setting lport a2303563-a056-42f8-a941-7a95b6258e2c up in Southbound
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.574 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001594667160830057 of space, bias 1.0, pg target 0.4784001482490171 quantized to 32 (current 32)
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006684644649043838 of space, bias 1.0, pg target 0.20053933947131514 quantized to 32 (current 32)
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0918164351042836e-06 of space, bias 4.0, pg target 0.0013101797221251404 quantized to 16 (current 16)
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:00:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.592 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf12416-2d8c-4fd2-83d0-c94aa14e1042]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:27 np0005597378 systemd-machined[207425]: New machine qemu-110-instance-00000060.
Jan 27 09:00:27 np0005597378 systemd[1]: Started Virtual Machine qemu-110-instance-00000060.
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.631 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7ef965-b295-4c4c-b983-04ba9b2a60e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.634 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[424d7d54-b103-43dc-b0c4-451bc4120e11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.675 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f28a7cfc-66ea-49aa-8b66-0b638ce9ffdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.708 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6b509bc1-f964-408b-ab90-39445bcb307c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319137, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.729 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a343f6-4804-4319-a355-d830b0405ab8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319138, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319138, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.731 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:27 np0005597378 nova_compute[238941]: 2026-01-27 14:00:27.734 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.735 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.736 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.736 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:27.736 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:00:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:28 np0005597378 nova_compute[238941]: 2026-01-27 14:00:28.043 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522428.0427442, 5e1a13b1-a322-4bcd-a54b-0e4061979313 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:28 np0005597378 nova_compute[238941]: 2026-01-27 14:00:28.043 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] VM Started (Lifecycle Event)#033[00m
Jan 27 09:00:28 np0005597378 nova_compute[238941]: 2026-01-27 14:00:28.120 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:28 np0005597378 nova_compute[238941]: 2026-01-27 14:00:28.124 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522428.0428174, 5e1a13b1-a322-4bcd-a54b-0e4061979313 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:28 np0005597378 nova_compute[238941]: 2026-01-27 14:00:28.124 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:00:28 np0005597378 nova_compute[238941]: 2026-01-27 14:00:28.269 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:28 np0005597378 nova_compute[238941]: 2026-01-27 14:00:28.273 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:00:28 np0005597378 nova_compute[238941]: 2026-01-27 14:00:28.294 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:00:28 np0005597378 nova_compute[238941]: 2026-01-27 14:00:28.402 238945 DEBUG nova.network.neutron [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Successfully updated port: be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:00:28 np0005597378 nova_compute[238941]: 2026-01-27 14:00:28.420 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "refresh_cache-5606aadf-848a-49fc-9cfd-897be16be855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:00:28 np0005597378 nova_compute[238941]: 2026-01-27 14:00:28.420 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquired lock "refresh_cache-5606aadf-848a-49fc-9cfd-897be16be855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:00:28 np0005597378 nova_compute[238941]: 2026-01-27 14:00:28.420 238945 DEBUG nova.network.neutron [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:00:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 273 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 8.9 MiB/s wr, 340 op/s
Jan 27 09:00:28 np0005597378 nova_compute[238941]: 2026-01-27 14:00:28.652 238945 DEBUG nova.network.neutron [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.067 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.076 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.084 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.091 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.105 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.106 238945 WARNING nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received unexpected event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.107 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.107 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.107 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.107 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.108 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Processing event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.108 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.108 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.108 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.109 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.109 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.109 238945 WARNING nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received unexpected event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.109 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-changed-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.109 238945 DEBUG nova.compute.manager [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Refreshing instance network info cache due to event network-changed-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.110 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5606aadf-848a-49fc-9cfd-897be16be855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.111 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.115 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522429.1152878, 5e1a13b1-a322-4bcd-a54b-0e4061979313 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.116 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.118 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.122 238945 INFO nova.virt.libvirt.driver [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Instance spawned successfully.#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.123 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.143 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.152 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.158 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.159 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.160 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.160 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.161 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.161 238945 DEBUG nova.virt.libvirt.driver [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.207 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.244 238945 INFO nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Took 8.49 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.245 238945 DEBUG nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.323 238945 INFO nova.compute.manager [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Took 9.82 seconds to build instance.#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.349 238945 DEBUG oslo_concurrency.lockutils [None req-9949e103-7d37-4084-92d7-d37c96cc2b14 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.465 238945 DEBUG nova.network.neutron [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Updating instance_info_cache with network_info: [{"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.487 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Releasing lock "refresh_cache-5606aadf-848a-49fc-9cfd-897be16be855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.488 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Instance network_info: |[{"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.489 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5606aadf-848a-49fc-9cfd-897be16be855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.489 238945 DEBUG nova.network.neutron [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Refreshing network info cache for port be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.491 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Start _get_guest_xml network_info=[{"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.495 238945 WARNING nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.499 238945 DEBUG nova.virt.libvirt.host [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.500 238945 DEBUG nova.virt.libvirt.host [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.504 238945 DEBUG nova.virt.libvirt.host [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.505 238945 DEBUG nova.virt.libvirt.host [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.505 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.505 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:08Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3c1ce45b-a317-4d15-b8ae-032b726ecff3',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.506 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.506 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.507 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.507 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.507 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.507 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.508 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.508 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.509 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.509 238945 DEBUG nova.virt.hardware [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:00:29 np0005597378 nova_compute[238941]: 2026-01-27 14:00:29.512 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3261448363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:30 np0005597378 nova_compute[238941]: 2026-01-27 14:00:30.246 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.734s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:30 np0005597378 nova_compute[238941]: 2026-01-27 14:00:30.274 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:30 np0005597378 nova_compute[238941]: 2026-01-27 14:00:30.280 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 273 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 6.9 MiB/s wr, 310 op/s
Jan 27 09:00:30 np0005597378 nova_compute[238941]: 2026-01-27 14:00:30.889 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/708617905' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:30 np0005597378 nova_compute[238941]: 2026-01-27 14:00:30.981 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:30 np0005597378 nova_compute[238941]: 2026-01-27 14:00:30.983 238945 DEBUG nova.virt.libvirt.vif [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-959422175',display_name='tempest-ListServerFiltersTestJSON-instance-959422175',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-959422175',id=97,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-4cpwvl1w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:00:22Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=5606aadf-848a-49fc-9cfd-897be16be855,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:00:30 np0005597378 nova_compute[238941]: 2026-01-27 14:00:30.984 238945 DEBUG nova.network.os_vif_util [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:00:30 np0005597378 nova_compute[238941]: 2026-01-27 14:00:30.985 238945 DEBUG nova.network.os_vif_util [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:00:30 np0005597378 nova_compute[238941]: 2026-01-27 14:00:30.988 238945 DEBUG nova.objects.instance [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5606aadf-848a-49fc-9cfd-897be16be855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.013 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  <uuid>5606aadf-848a-49fc-9cfd-897be16be855</uuid>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  <name>instance-00000061</name>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  <memory>196608</memory>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-959422175</nova:name>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:00:29</nova:creationTime>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.micro">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:        <nova:memory>192</nova:memory>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:        <nova:user uuid="e4411258cb6240ddb5365fb25e762594">tempest-ListServerFiltersTestJSON-1240027263-project-member</nova:user>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:        <nova:project uuid="d302ba58879d43258f0a8abe2d81f03a">tempest-ListServerFiltersTestJSON-1240027263</nova:project>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:        <nova:port uuid="be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <entry name="serial">5606aadf-848a-49fc-9cfd-897be16be855</entry>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <entry name="uuid">5606aadf-848a-49fc-9cfd-897be16be855</entry>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5606aadf-848a-49fc-9cfd-897be16be855_disk">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5606aadf-848a-49fc-9cfd-897be16be855_disk.config">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:7e:52:2e"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <target dev="tapbe0e12b2-58"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/console.log" append="off"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:00:31 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:00:31 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:00:31 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:00:31 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.014 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Preparing to wait for external event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.014 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.014 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.015 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.015 238945 DEBUG nova.virt.libvirt.vif [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-959422175',display_name='tempest-ListServerFiltersTestJSON-instance-959422175',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-959422175',id=97,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-4cpwvl1w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:00:22Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=5606aadf-848a-49fc-9cfd-897be16be855,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.015 238945 DEBUG nova.network.os_vif_util [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.016 238945 DEBUG nova.network.os_vif_util [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.016 238945 DEBUG os_vif [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.017 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.018 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.020 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.021 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe0e12b2-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.021 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe0e12b2-58, col_values=(('external_ids', {'iface-id': 'be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:52:2e', 'vm-uuid': '5606aadf-848a-49fc-9cfd-897be16be855'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:31 np0005597378 NetworkManager[48904]: <info>  [1769522431.0255] manager: (tapbe0e12b2-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.027 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.032 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.034 238945 INFO os_vif [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58')#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.119 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.119 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.119 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] No VIF found with MAC fa:16:3e:7e:52:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.120 238945 INFO nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Using config drive#033[00m
Jan 27 09:00:31 np0005597378 podman[319245]: 2026-01-27 14:00:31.149882108 +0000 UTC m=+0.072746868 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.154 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:31 np0005597378 podman[319281]: 2026-01-27 14:00:31.794310645 +0000 UTC m=+0.139271350 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.943 238945 INFO nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Creating config drive at /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/disk.config#033[00m
Jan 27 09:00:31 np0005597378 nova_compute[238941]: 2026-01-27 14:00:31.951 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpooaxbjft execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.106 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpooaxbjft" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.138 238945 DEBUG nova.storage.rbd_utils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] rbd image 5606aadf-848a-49fc-9cfd-897be16be855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.143 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/disk.config 5606aadf-848a-49fc-9cfd-897be16be855_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.194 238945 DEBUG nova.network.neutron [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Updated VIF entry in instance network info cache for port be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.195 238945 DEBUG nova.network.neutron [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Updating instance_info_cache with network_info: [{"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.197 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.213 238945 DEBUG oslo_concurrency.lockutils [req-f09046cb-0131-4017-9f5b-a1ac72719516 req-40cc6232-dcaf-4870-8845-496629dccecf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5606aadf-848a-49fc-9cfd-897be16be855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.314 238945 DEBUG oslo_concurrency.processutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/disk.config 5606aadf-848a-49fc-9cfd-897be16be855_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.315 238945 INFO nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Deleting local config drive /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855/disk.config because it was imported into RBD.#033[00m
Jan 27 09:00:32 np0005597378 NetworkManager[48904]: <info>  [1769522432.3716] manager: (tapbe0e12b2-58): new Tun device (/org/freedesktop/NetworkManager/Devices/369)
Jan 27 09:00:32 np0005597378 kernel: tapbe0e12b2-58: entered promiscuous mode
Jan 27 09:00:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:32Z|00871|binding|INFO|Claiming lport be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 for this chassis.
Jan 27 09:00:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:32Z|00872|binding|INFO|be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98: Claiming fa:16:3e:7e:52:2e 10.100.0.6
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.381 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.385 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:52:2e 10.100.0.6'], port_security=['fa:16:3e:7e:52:2e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5606aadf-848a-49fc-9cfd-897be16be855', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.386 154802 INFO neutron.agent.ovn.metadata.agent [-] Port be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 bound to our chassis#033[00m
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.388 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858#033[00m
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.409 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9f465037-57cc-4468-b3df-10ea61bf5f46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:32Z|00873|binding|INFO|Setting lport be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 ovn-installed in OVS
Jan 27 09:00:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:32Z|00874|binding|INFO|Setting lport be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 up in Southbound
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.419 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:32 np0005597378 systemd-machined[207425]: New machine qemu-111-instance-00000061.
Jan 27 09:00:32 np0005597378 systemd[1]: Started Virtual Machine qemu-111-instance-00000061.
Jan 27 09:00:32 np0005597378 systemd-udevd[319362]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.454 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[be70d9e1-8e6a-478b-ab79-951e1d2459c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.458 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[85393839-8054-44c9-8e47-2b7e7baa2e3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:32 np0005597378 NetworkManager[48904]: <info>  [1769522432.4685] device (tapbe0e12b2-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:00:32 np0005597378 NetworkManager[48904]: <info>  [1769522432.4695] device (tapbe0e12b2-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.501 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0c566921-4a63-4830-86f1-6895ee30c93f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.540 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a1648b33-ad43-429f-8d32-8d27b0ce4505]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319372, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.565 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44b279fa-fbc6-4646-b02a-29a5852731ad]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319374, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319374, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.567 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.568 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.570 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.574 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.574 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.575 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:32.577 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:00:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 273 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 5.4 MiB/s wr, 270 op/s
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.674 238945 DEBUG nova.compute.manager [req-1f61dad4-0307-4e3a-89dd-006ba4ad49b2 req-9e1a99e6-ffee-4243-9f14-29660e962254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.675 238945 DEBUG oslo_concurrency.lockutils [req-1f61dad4-0307-4e3a-89dd-006ba4ad49b2 req-9e1a99e6-ffee-4243-9f14-29660e962254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.675 238945 DEBUG oslo_concurrency.lockutils [req-1f61dad4-0307-4e3a-89dd-006ba4ad49b2 req-9e1a99e6-ffee-4243-9f14-29660e962254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.675 238945 DEBUG oslo_concurrency.lockutils [req-1f61dad4-0307-4e3a-89dd-006ba4ad49b2 req-9e1a99e6-ffee-4243-9f14-29660e962254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:32 np0005597378 nova_compute[238941]: 2026-01-27 14:00:32.676 238945 DEBUG nova.compute.manager [req-1f61dad4-0307-4e3a-89dd-006ba4ad49b2 req-9e1a99e6-ffee-4243-9f14-29660e962254 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Processing event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:00:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.119 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.120 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522433.1205235, 5606aadf-848a-49fc-9cfd-897be16be855 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.121 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] VM Started (Lifecycle Event)#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.123 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.127 238945 INFO nova.virt.libvirt.driver [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Instance spawned successfully.#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.127 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.143 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.148 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.154 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.154 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.155 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.155 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.156 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.156 238945 DEBUG nova.virt.libvirt.driver [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.179 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.180 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522433.122855, 5606aadf-848a-49fc-9cfd-897be16be855 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.180 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.211 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.214 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522433.1230502, 5606aadf-848a-49fc-9cfd-897be16be855 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.214 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] VM Resumed (Lifecycle Event)
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.225 238945 INFO nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Took 10.58 seconds to spawn the instance on the hypervisor.
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.226 238945 DEBUG nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.236 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.240 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.270 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.295 238945 INFO nova.compute.manager [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Took 11.77 seconds to build instance.
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.316 238945 DEBUG oslo_concurrency.lockutils [None req-04d8811b-1d44-4118-84b7-fd9f2d7b6154 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:33 np0005597378 nova_compute[238941]: 2026-01-27 14:00:33.499 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 09:00:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 294 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 7.1 MiB/s wr, 385 op/s
Jan 27 09:00:34 np0005597378 nova_compute[238941]: 2026-01-27 14:00:34.852 238945 DEBUG nova.compute.manager [req-011e91c7-fe36-4bd9-a0fd-ed3103a40b6e req-b5d4c58a-8832-4e91-ad46-fbca91c82b52 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:00:34 np0005597378 nova_compute[238941]: 2026-01-27 14:00:34.852 238945 DEBUG oslo_concurrency.lockutils [req-011e91c7-fe36-4bd9-a0fd-ed3103a40b6e req-b5d4c58a-8832-4e91-ad46-fbca91c82b52 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:00:34 np0005597378 nova_compute[238941]: 2026-01-27 14:00:34.852 238945 DEBUG oslo_concurrency.lockutils [req-011e91c7-fe36-4bd9-a0fd-ed3103a40b6e req-b5d4c58a-8832-4e91-ad46-fbca91c82b52 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:00:34 np0005597378 nova_compute[238941]: 2026-01-27 14:00:34.853 238945 DEBUG oslo_concurrency.lockutils [req-011e91c7-fe36-4bd9-a0fd-ed3103a40b6e req-b5d4c58a-8832-4e91-ad46-fbca91c82b52 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:00:34 np0005597378 nova_compute[238941]: 2026-01-27 14:00:34.853 238945 DEBUG nova.compute.manager [req-011e91c7-fe36-4bd9-a0fd-ed3103a40b6e req-b5d4c58a-8832-4e91-ad46-fbca91c82b52 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] No waiting events found dispatching network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:00:34 np0005597378 nova_compute[238941]: 2026-01-27 14:00:34.853 238945 WARNING nova.compute.manager [req-011e91c7-fe36-4bd9-a0fd-ed3103a40b6e req-b5d4c58a-8832-4e91-ad46-fbca91c82b52 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received unexpected event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 for instance with vm_state active and task_state None.
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:00:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:00:36 np0005597378 nova_compute[238941]: 2026-01-27 14:00:36.024 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:00:36 np0005597378 podman[319557]: 2026-01-27 14:00:36.10072222 +0000 UTC m=+0.052820872 container create f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_jepsen, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:00:36 np0005597378 systemd[1]: Started libpod-conmon-f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083.scope.
Jan 27 09:00:36 np0005597378 podman[319557]: 2026-01-27 14:00:36.077048936 +0000 UTC m=+0.029147608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:00:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:00:36 np0005597378 podman[319557]: 2026-01-27 14:00:36.194644494 +0000 UTC m=+0.146743176 container init f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:00:36 np0005597378 podman[319557]: 2026-01-27 14:00:36.201716911 +0000 UTC m=+0.153815563 container start f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:00:36 np0005597378 podman[319557]: 2026-01-27 14:00:36.205224703 +0000 UTC m=+0.157323375 container attach f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:00:36 np0005597378 mystifying_jepsen[319572]: 167 167
Jan 27 09:00:36 np0005597378 systemd[1]: libpod-f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083.scope: Deactivated successfully.
Jan 27 09:00:36 np0005597378 podman[319557]: 2026-01-27 14:00:36.208479309 +0000 UTC m=+0.160577961 container died f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_jepsen, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:00:36 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c1e9d0f07578f4dbef2e3177a2de46833582f700d462cfab8b9dcece9b2f8cb3-merged.mount: Deactivated successfully.
Jan 27 09:00:36 np0005597378 podman[319557]: 2026-01-27 14:00:36.254627255 +0000 UTC m=+0.206725907 container remove f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:00:36 np0005597378 systemd[1]: libpod-conmon-f12feea341245cdf83e3525ea170a6ca28dd601b8a3d6762b90f751e7c007083.scope: Deactivated successfully.
Jan 27 09:00:36 np0005597378 podman[319597]: 2026-01-27 14:00:36.488192788 +0000 UTC m=+0.057433644 container create 763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 09:00:36 np0005597378 systemd[1]: Started libpod-conmon-763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3.scope.
Jan 27 09:00:36 np0005597378 podman[319597]: 2026-01-27 14:00:36.463204319 +0000 UTC m=+0.032445185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:00:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:00:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd560e2c7dbcc860860f7f07a3bbc2c5c26b5c1039aa5ab7c41bf10a53e3517/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd560e2c7dbcc860860f7f07a3bbc2c5c26b5c1039aa5ab7c41bf10a53e3517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd560e2c7dbcc860860f7f07a3bbc2c5c26b5c1039aa5ab7c41bf10a53e3517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd560e2c7dbcc860860f7f07a3bbc2c5c26b5c1039aa5ab7c41bf10a53e3517/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd560e2c7dbcc860860f7f07a3bbc2c5c26b5c1039aa5ab7c41bf10a53e3517/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 322 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 6.0 MiB/s wr, 367 op/s
Jan 27 09:00:36 np0005597378 podman[319597]: 2026-01-27 14:00:36.593700247 +0000 UTC m=+0.162941123 container init 763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:00:36 np0005597378 podman[319597]: 2026-01-27 14:00:36.60330808 +0000 UTC m=+0.172548926 container start 763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 09:00:36 np0005597378 podman[319597]: 2026-01-27 14:00:36.607564712 +0000 UTC m=+0.176805558 container attach 763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 09:00:37 np0005597378 funny_cartwright[319614]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:00:37 np0005597378 funny_cartwright[319614]: --> All data devices are unavailable
Jan 27 09:00:37 np0005597378 systemd[1]: libpod-763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3.scope: Deactivated successfully.
Jan 27 09:00:37 np0005597378 podman[319597]: 2026-01-27 14:00:37.168934501 +0000 UTC m=+0.738175367 container died 763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 09:00:37 np0005597378 nova_compute[238941]: 2026-01-27 14:00:37.174 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:00:37 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3fd560e2c7dbcc860860f7f07a3bbc2c5c26b5c1039aa5ab7c41bf10a53e3517-merged.mount: Deactivated successfully.
Jan 27 09:00:37 np0005597378 podman[319597]: 2026-01-27 14:00:37.261983252 +0000 UTC m=+0.831224098 container remove 763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 09:00:37 np0005597378 systemd[1]: libpod-conmon-763b25b9af6d770bd87b73cf0f4bbdb46a807c3b2fe22606ff222a4c685da1d3.scope: Deactivated successfully.
Jan 27 09:00:37 np0005597378 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Jan 27 09:00:37 np0005597378 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d0000005e.scope: Consumed 14.125s CPU time.
Jan 27 09:00:37 np0005597378 systemd-machined[207425]: Machine qemu-108-instance-0000005e terminated.
Jan 27 09:00:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:37 np0005597378 podman[319706]: 2026-01-27 14:00:37.814686682 +0000 UTC m=+0.050295715 container create c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bardeen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:00:37 np0005597378 systemd[1]: Started libpod-conmon-c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d.scope.
Jan 27 09:00:37 np0005597378 podman[319706]: 2026-01-27 14:00:37.794594963 +0000 UTC m=+0.030204026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:00:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:00:37 np0005597378 podman[319706]: 2026-01-27 14:00:37.906910301 +0000 UTC m=+0.142519364 container init c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bardeen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:00:37 np0005597378 podman[319706]: 2026-01-27 14:00:37.915515699 +0000 UTC m=+0.151124732 container start c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bardeen, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:00:37 np0005597378 podman[319706]: 2026-01-27 14:00:37.919226666 +0000 UTC m=+0.154835699 container attach c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bardeen, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:00:37 np0005597378 systemd[1]: libpod-c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d.scope: Deactivated successfully.
Jan 27 09:00:37 np0005597378 relaxed_bardeen[319722]: 167 167
Jan 27 09:00:37 np0005597378 podman[319706]: 2026-01-27 14:00:37.933658026 +0000 UTC m=+0.169267069 container died c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bardeen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:00:37 np0005597378 conmon[319722]: conmon c22fd3d47aef7be06c3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d.scope/container/memory.events
Jan 27 09:00:37 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3a1602233fc7c638fe8a473ae1790cf8aa942e2c2925da1aaade035a54eb7737-merged.mount: Deactivated successfully.
Jan 27 09:00:37 np0005597378 podman[319706]: 2026-01-27 14:00:37.975014916 +0000 UTC m=+0.210623949 container remove c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 09:00:37 np0005597378 systemd[1]: libpod-conmon-c22fd3d47aef7be06c3c469c0b79e9f80351ff252d063eb26434a2540217893d.scope: Deactivated successfully.
Jan 27 09:00:38 np0005597378 podman[319745]: 2026-01-27 14:00:38.195892945 +0000 UTC m=+0.046886697 container create 58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wescoff, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:00:38 np0005597378 systemd[1]: Started libpod-conmon-58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa.scope.
Jan 27 09:00:38 np0005597378 podman[319745]: 2026-01-27 14:00:38.172794006 +0000 UTC m=+0.023787778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:00:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:00:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20758e506f18be614996ed56ed0259b900a4f3dc82219afd4e1a2e61ee577c77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20758e506f18be614996ed56ed0259b900a4f3dc82219afd4e1a2e61ee577c77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20758e506f18be614996ed56ed0259b900a4f3dc82219afd4e1a2e61ee577c77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20758e506f18be614996ed56ed0259b900a4f3dc82219afd4e1a2e61ee577c77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:38 np0005597378 podman[319745]: 2026-01-27 14:00:38.296290529 +0000 UTC m=+0.147284311 container init 58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wescoff, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:00:38 np0005597378 podman[319745]: 2026-01-27 14:00:38.303559231 +0000 UTC m=+0.154552983 container start 58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 09:00:38 np0005597378 podman[319745]: 2026-01-27 14:00:38.307226607 +0000 UTC m=+0.158220359 container attach 58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wescoff, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:00:38 np0005597378 nova_compute[238941]: 2026-01-27 14:00:38.543 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance shutdown successfully after 15 seconds.#033[00m
Jan 27 09:00:38 np0005597378 nova_compute[238941]: 2026-01-27 14:00:38.551 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance destroyed successfully.#033[00m
Jan 27 09:00:38 np0005597378 nova_compute[238941]: 2026-01-27 14:00:38.578 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance destroyed successfully.#033[00m
Jan 27 09:00:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 336 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 5.1 MiB/s wr, 356 op/s
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]: {
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:    "0": [
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:        {
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "devices": [
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "/dev/loop3"
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            ],
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_name": "ceph_lv0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_size": "21470642176",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "name": "ceph_lv0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "tags": {
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.cluster_name": "ceph",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.crush_device_class": "",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.encrypted": "0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.objectstore": "bluestore",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.osd_id": "0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.type": "block",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.vdo": "0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.with_tpm": "0"
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            },
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "type": "block",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "vg_name": "ceph_vg0"
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:        }
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:    ],
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:    "1": [
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:        {
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "devices": [
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "/dev/loop4"
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            ],
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_name": "ceph_lv1",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_size": "21470642176",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "name": "ceph_lv1",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "tags": {
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.cluster_name": "ceph",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.crush_device_class": "",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.encrypted": "0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.objectstore": "bluestore",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.osd_id": "1",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.type": "block",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.vdo": "0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.with_tpm": "0"
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            },
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "type": "block",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "vg_name": "ceph_vg1"
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:        }
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:    ],
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:    "2": [
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:        {
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "devices": [
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "/dev/loop5"
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            ],
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_name": "ceph_lv2",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_size": "21470642176",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "name": "ceph_lv2",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "tags": {
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.cluster_name": "ceph",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.crush_device_class": "",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.encrypted": "0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.objectstore": "bluestore",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.osd_id": "2",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.type": "block",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.vdo": "0",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:                "ceph.with_tpm": "0"
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            },
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "type": "block",
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:            "vg_name": "ceph_vg2"
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:        }
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]:    ]
Jan 27 09:00:38 np0005597378 heuristic_wescoff[319761]: }
Jan 27 09:00:38 np0005597378 systemd[1]: libpod-58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa.scope: Deactivated successfully.
Jan 27 09:00:38 np0005597378 conmon[319761]: conmon 58326eb3035e0accb318 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa.scope/container/memory.events
Jan 27 09:00:38 np0005597378 podman[319745]: 2026-01-27 14:00:38.648229401 +0000 UTC m=+0.499223153 container died 58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wescoff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:00:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-20758e506f18be614996ed56ed0259b900a4f3dc82219afd4e1a2e61ee577c77-merged.mount: Deactivated successfully.
Jan 27 09:00:38 np0005597378 podman[319745]: 2026-01-27 14:00:38.701454543 +0000 UTC m=+0.552448295 container remove 58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:00:38 np0005597378 systemd[1]: libpod-conmon-58326eb3035e0accb318e8d4ad045d0e4eae57970235a7e69f3d827bcd18e3fa.scope: Deactivated successfully.
Jan 27 09:00:38 np0005597378 nova_compute[238941]: 2026-01-27 14:00:38.940 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deleting instance files /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_del#033[00m
Jan 27 09:00:38 np0005597378 nova_compute[238941]: 2026-01-27 14:00:38.941 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deletion of /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_del complete#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.066 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.066 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Creating image(s)#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.091 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.119 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.151 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.158 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:39 np0005597378 podman[319914]: 2026-01-27 14:00:39.210226895 +0000 UTC m=+0.042715005 container create bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_easley, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:00:39 np0005597378 systemd[1]: Started libpod-conmon-bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3.scope.
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.254 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.256 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.256 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.257 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:00:39 np0005597378 podman[319914]: 2026-01-27 14:00:39.277524628 +0000 UTC m=+0.110012748 container init bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 09:00:39 np0005597378 podman[319914]: 2026-01-27 14:00:39.283476386 +0000 UTC m=+0.115964506 container start bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_easley, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:00:39 np0005597378 podman[319914]: 2026-01-27 14:00:39.288697463 +0000 UTC m=+0.121185603 container attach bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:00:39 np0005597378 condescending_easley[319931]: 167 167
Jan 27 09:00:39 np0005597378 systemd[1]: libpod-bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3.scope: Deactivated successfully.
Jan 27 09:00:39 np0005597378 podman[319914]: 2026-01-27 14:00:39.195200359 +0000 UTC m=+0.027688499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:00:39 np0005597378 conmon[319931]: conmon bc579fb5da35947e7ae0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3.scope/container/memory.events
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.291 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.299 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:39 np0005597378 podman[319956]: 2026-01-27 14:00:39.333456192 +0000 UTC m=+0.026430448 container died bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_easley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 09:00:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c7d5763180754f4d9f55d92de632b48d7e0ce18e612233f36b4d546c6202e3af-merged.mount: Deactivated successfully.
Jan 27 09:00:39 np0005597378 podman[319956]: 2026-01-27 14:00:39.436571488 +0000 UTC m=+0.129545704 container remove bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_easley, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:00:39 np0005597378 systemd[1]: libpod-conmon-bc579fb5da35947e7ae02a57f02bf9d9351b39da7a44b47176e9cca137e21da3.scope: Deactivated successfully.
Jan 27 09:00:39 np0005597378 podman[319996]: 2026-01-27 14:00:39.653700038 +0000 UTC m=+0.049527775 container create e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.674 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:39 np0005597378 systemd[1]: Started libpod-conmon-e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c.scope.
Jan 27 09:00:39 np0005597378 podman[319996]: 2026-01-27 14:00:39.628852453 +0000 UTC m=+0.024680220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:00:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:00:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c207c1444e89707f75db48c4f91a6108e25bc19a6e645f347f7f0259c993fd88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c207c1444e89707f75db48c4f91a6108e25bc19a6e645f347f7f0259c993fd88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c207c1444e89707f75db48c4f91a6108e25bc19a6e645f347f7f0259c993fd88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c207c1444e89707f75db48c4f91a6108e25bc19a6e645f347f7f0259c993fd88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:00:39 np0005597378 podman[319996]: 2026-01-27 14:00:39.775800415 +0000 UTC m=+0.171628162 container init e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jang, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:00:39 np0005597378 podman[319996]: 2026-01-27 14:00:39.785637374 +0000 UTC m=+0.181465111 container start e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:00:39 np0005597378 podman[319996]: 2026-01-27 14:00:39.792488154 +0000 UTC m=+0.188315911 container attach e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jang, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.792 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] resizing rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:00:39 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.990 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.992 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Ensure instance console log exists: /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.993 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.993 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.994 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:39 np0005597378 nova_compute[238941]: 2026-01-27 14:00:39.995 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.005 238945 WARNING nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.013 238945 DEBUG nova.virt.libvirt.host [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.015 238945 DEBUG nova.virt.libvirt.host [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.018 238945 DEBUG nova.virt.libvirt.host [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.019 238945 DEBUG nova.virt.libvirt.host [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.019 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.019 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.020 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.020 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.020 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.020 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.021 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.021 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.021 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.021 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.021 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.022 238945 DEBUG nova.virt.hardware [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.022 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.041 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:40Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:6a:3d 10.100.0.8
Jan 27 09:00:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:40Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:6a:3d 10.100.0.8
Jan 27 09:00:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 324 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.3 MiB/s wr, 311 op/s
Jan 27 09:00:40 np0005597378 lvm[320180]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:00:40 np0005597378 lvm[320180]: VG ceph_vg0 finished
Jan 27 09:00:40 np0005597378 lvm[320183]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:00:40 np0005597378 lvm[320183]: VG ceph_vg1 finished
Jan 27 09:00:40 np0005597378 lvm[320185]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:00:40 np0005597378 lvm[320185]: VG ceph_vg2 finished
Jan 27 09:00:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1567052827' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.762 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.721s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:40 np0005597378 thirsty_jang[320031]: {}
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.803 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:40 np0005597378 nova_compute[238941]: 2026-01-27 14:00:40.808 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:40 np0005597378 systemd[1]: libpod-e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c.scope: Deactivated successfully.
Jan 27 09:00:40 np0005597378 systemd[1]: libpod-e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c.scope: Consumed 1.471s CPU time.
Jan 27 09:00:40 np0005597378 podman[320209]: 2026-01-27 14:00:40.857959423 +0000 UTC m=+0.029570430 container died e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jang, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 09:00:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c207c1444e89707f75db48c4f91a6108e25bc19a6e645f347f7f0259c993fd88-merged.mount: Deactivated successfully.
Jan 27 09:00:40 np0005597378 podman[320209]: 2026-01-27 14:00:40.922863292 +0000 UTC m=+0.094474269 container remove e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jang, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:00:40 np0005597378 systemd[1]: libpod-conmon-e65e864f1296924fe51a46794634b8523ac017a1683924bafb8568ad48b2559c.scope: Deactivated successfully.
Jan 27 09:00:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:00:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:00:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:00:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:00:41 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:00:41 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.028 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1466128197' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.460 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.464 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  <uuid>5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2</uuid>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  <name>instance-0000005e</name>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerShowV247Test-server-2098162892</nova:name>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:00:40</nova:creationTime>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:        <nova:user uuid="93260199bb344997ae7449060a9adee6">tempest-ServerShowV247Test-29714096-project-member</nova:user>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:        <nova:project uuid="6f0de6d14fb34a0b805053a94d5e8a6c">tempest-ServerShowV247Test-29714096</nova:project>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <entry name="serial">5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2</entry>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <entry name="uuid">5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2</entry>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/console.log" append="off"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:00:41 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:00:41 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:00:41 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:00:41 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.531 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.531 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.532 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Using config drive#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.553 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.568 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.602 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'keypairs' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.870 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Creating config drive at /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.876 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7bcr0joy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.912 238945 DEBUG oslo_concurrency.lockutils [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.913 238945 DEBUG oslo_concurrency.lockutils [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.913 238945 DEBUG nova.compute.manager [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.916 238945 DEBUG nova.compute.manager [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.917 238945 DEBUG nova.objects.instance [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'flavor' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:41 np0005597378 nova_compute[238941]: 2026-01-27 14:00:41.941 238945 DEBUG nova.virt.libvirt.driver [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.018 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7bcr0joy" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.045 238945 DEBUG nova.storage.rbd_utils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] rbd image 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.049 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.177 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.193 238945 DEBUG oslo_concurrency.processutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.193 238945 INFO nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deleting local config drive /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2/disk.config because it was imported into RBD.#033[00m
Jan 27 09:00:42 np0005597378 systemd-machined[207425]: New machine qemu-112-instance-0000005e.
Jan 27 09:00:42 np0005597378 systemd[1]: Started Virtual Machine qemu-112-instance-0000005e.
Jan 27 09:00:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 324 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.3 MiB/s wr, 301 op/s
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.662 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.663 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522442.6617205, 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.664 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.667 238945 DEBUG nova.compute.manager [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.668 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.672 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance spawned successfully.#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.672 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.690 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.697 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.701 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.701 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.702 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.702 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.703 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.703 238945 DEBUG nova.virt.libvirt.driver [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.726 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.727 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522442.666904, 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.727 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] VM Started (Lifecycle Event)#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.752 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.755 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.769 238945 DEBUG nova.compute.manager [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.781 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.829 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.829 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.829 238945 DEBUG nova.objects.instance [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 09:00:42 np0005597378 nova_compute[238941]: 2026-01-27 14:00:42.888 238945 DEBUG oslo_concurrency.lockutils [None req-eb92cc99-17ad-4eb8-9ca0-aad6402f8a97 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:44 np0005597378 kernel: tape50fbfa4-a9 (unregistering): left promiscuous mode
Jan 27 09:00:44 np0005597378 NetworkManager[48904]: <info>  [1769522444.2451] device (tape50fbfa4-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:00:44 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:44Z|00875|binding|INFO|Releasing lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 from this chassis (sb_readonly=0)
Jan 27 09:00:44 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:44Z|00876|binding|INFO|Setting lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 down in Southbound
Jan 27 09:00:44 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:44Z|00877|binding|INFO|Removing iface tape50fbfa4-a9 ovn-installed in OVS
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.279 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:6a:3d 10.100.0.8'], port_security=['fa:16:3e:fe:6a:3d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd133d5f9-1c2b-4996-955c-be57e53a44ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.280 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 unbound from our chassis#033[00m
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.281 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.323 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3975d559-8df0-4cc8-8ef8-9df7c9188142]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:44 np0005597378 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Jan 27 09:00:44 np0005597378 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005f.scope: Consumed 14.560s CPU time.
Jan 27 09:00:44 np0005597378 systemd-machined[207425]: Machine qemu-109-instance-0000005f terminated.
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.376 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[eb82a8d3-64e0-49fb-95fa-a0eb420fed5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.379 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8c34ab80-6d8d-4c73-9d0b-e96cf5b67eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.417 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf1df01-b24d-42ef-9039-ae65e163a022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.441 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e09f028-11d2-4bc1-b54a-5c24e5b3893c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320397, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.465 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[09fcc4d6-4b25-4c88-b76f-a4fcf55a8f0a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320398, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320398, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.467 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.475 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.475 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.476 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:44.476 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.490 238945 DEBUG nova.compute.manager [req-b4232771-e743-4d5f-8245-277e08096372 req-f92d7dc0-ef61-43bf-9c68-d9c489203110 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-unplugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.491 238945 DEBUG oslo_concurrency.lockutils [req-b4232771-e743-4d5f-8245-277e08096372 req-f92d7dc0-ef61-43bf-9c68-d9c489203110 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.492 238945 DEBUG oslo_concurrency.lockutils [req-b4232771-e743-4d5f-8245-277e08096372 req-f92d7dc0-ef61-43bf-9c68-d9c489203110 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.492 238945 DEBUG oslo_concurrency.lockutils [req-b4232771-e743-4d5f-8245-277e08096372 req-f92d7dc0-ef61-43bf-9c68-d9c489203110 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.492 238945 DEBUG nova.compute.manager [req-b4232771-e743-4d5f-8245-277e08096372 req-f92d7dc0-ef61-43bf-9c68-d9c489203110 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-unplugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.493 238945 WARNING nova.compute.manager [req-b4232771-e743-4d5f-8245-277e08096372 req-f92d7dc0-ef61-43bf-9c68-d9c489203110 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received unexpected event network-vif-unplugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with vm_state active and task_state powering-off.#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.493 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.498 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 328 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 8.0 MiB/s wr, 399 op/s
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.659 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.660 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.660 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.660 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.660 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.661 238945 INFO nova.compute.manager [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Terminating instance#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.662 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "refresh_cache-5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.662 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquired lock "refresh_cache-5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.662 238945 DEBUG nova.network.neutron [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.829 238945 DEBUG nova.network.neutron [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.957 238945 INFO nova.virt.libvirt.driver [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance shutdown successfully after 3 seconds.#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.964 238945 INFO nova.virt.libvirt.driver [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance destroyed successfully.#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.965 238945 DEBUG nova.objects.instance [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'numa_topology' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:44 np0005597378 nova_compute[238941]: 2026-01-27 14:00:44.979 238945 DEBUG nova.compute.manager [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:45 np0005597378 nova_compute[238941]: 2026-01-27 14:00:45.034 238945 DEBUG oslo_concurrency.lockutils [None req-a5f83445-849d-49d1-8dbc-c62802493575 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:45 np0005597378 nova_compute[238941]: 2026-01-27 14:00:45.143 238945 DEBUG nova.network.neutron [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:00:45 np0005597378 nova_compute[238941]: 2026-01-27 14:00:45.157 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Releasing lock "refresh_cache-5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:00:45 np0005597378 nova_compute[238941]: 2026-01-27 14:00:45.159 238945 DEBUG nova.compute.manager [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:00:45 np0005597378 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Jan 27 09:00:45 np0005597378 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005e.scope: Consumed 2.905s CPU time.
Jan 27 09:00:45 np0005597378 systemd-machined[207425]: Machine qemu-112-instance-0000005e terminated.
Jan 27 09:00:45 np0005597378 nova_compute[238941]: 2026-01-27 14:00:45.382 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance destroyed successfully.#033[00m
Jan 27 09:00:45 np0005597378 nova_compute[238941]: 2026-01-27 14:00:45.382 238945 DEBUG nova.objects.instance [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'resources' on Instance uuid 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:46 np0005597378 nova_compute[238941]: 2026-01-27 14:00:46.031 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:46.309 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:46.310 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:46.311 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:46Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:bd:ff 10.100.0.10
Jan 27 09:00:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:46Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:bd:ff 10.100.0.10
Jan 27 09:00:46 np0005597378 nova_compute[238941]: 2026-01-27 14:00:46.575 238945 DEBUG nova.objects.instance [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'flavor' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 340 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.6 MiB/s wr, 340 op/s
Jan 27 09:00:46 np0005597378 nova_compute[238941]: 2026-01-27 14:00:46.596 238945 DEBUG nova.compute.manager [req-6932f2e9-7983-4040-9211-ad7ca0dbb0b3 req-be08013e-0096-4e1a-9790-6282ef732990 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:46 np0005597378 nova_compute[238941]: 2026-01-27 14:00:46.596 238945 DEBUG oslo_concurrency.lockutils [req-6932f2e9-7983-4040-9211-ad7ca0dbb0b3 req-be08013e-0096-4e1a-9790-6282ef732990 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:46 np0005597378 nova_compute[238941]: 2026-01-27 14:00:46.596 238945 DEBUG oslo_concurrency.lockutils [req-6932f2e9-7983-4040-9211-ad7ca0dbb0b3 req-be08013e-0096-4e1a-9790-6282ef732990 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:46 np0005597378 nova_compute[238941]: 2026-01-27 14:00:46.597 238945 DEBUG oslo_concurrency.lockutils [req-6932f2e9-7983-4040-9211-ad7ca0dbb0b3 req-be08013e-0096-4e1a-9790-6282ef732990 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:46 np0005597378 nova_compute[238941]: 2026-01-27 14:00:46.597 238945 DEBUG nova.compute.manager [req-6932f2e9-7983-4040-9211-ad7ca0dbb0b3 req-be08013e-0096-4e1a-9790-6282ef732990 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:00:46 np0005597378 nova_compute[238941]: 2026-01-27 14:00:46.597 238945 WARNING nova.compute.manager [req-6932f2e9-7983-4040-9211-ad7ca0dbb0b3 req-be08013e-0096-4e1a-9790-6282ef732990 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received unexpected event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 27 09:00:46 np0005597378 nova_compute[238941]: 2026-01-27 14:00:46.599 238945 DEBUG oslo_concurrency.lockutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:00:46 np0005597378 nova_compute[238941]: 2026-01-27 14:00:46.599 238945 DEBUG oslo_concurrency.lockutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquired lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:00:46 np0005597378 nova_compute[238941]: 2026-01-27 14:00:46.600 238945 DEBUG nova.network.neutron [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:00:46 np0005597378 nova_compute[238941]: 2026-01-27 14:00:46.600 238945 DEBUG nova.objects.instance [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'info_cache' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:47 np0005597378 nova_compute[238941]: 2026-01-27 14:00:47.178 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:47 np0005597378 nova_compute[238941]: 2026-01-27 14:00:47.343 238945 INFO nova.virt.libvirt.driver [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deleting instance files /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_del#033[00m
Jan 27 09:00:47 np0005597378 nova_compute[238941]: 2026-01-27 14:00:47.344 238945 INFO nova.virt.libvirt.driver [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deletion of /var/lib/nova/instances/5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2_del complete#033[00m
Jan 27 09:00:47 np0005597378 nova_compute[238941]: 2026-01-27 14:00:47.531 238945 INFO nova.compute.manager [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Took 2.37 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:00:47 np0005597378 nova_compute[238941]: 2026-01-27 14:00:47.532 238945 DEBUG oslo.service.loopingcall [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:00:47 np0005597378 nova_compute[238941]: 2026-01-27 14:00:47.532 238945 DEBUG nova.compute.manager [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:00:47 np0005597378 nova_compute[238941]: 2026-01-27 14:00:47.532 238945 DEBUG nova.network.neutron [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:00:47 np0005597378 nova_compute[238941]: 2026-01-27 14:00:47.686 238945 DEBUG nova.network.neutron [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:00:47 np0005597378 nova_compute[238941]: 2026-01-27 14:00:47.699 238945 DEBUG nova.network.neutron [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:00:47 np0005597378 nova_compute[238941]: 2026-01-27 14:00:47.712 238945 INFO nova.compute.manager [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Took 0.18 seconds to deallocate network for instance.#033[00m
Jan 27 09:00:47 np0005597378 nova_compute[238941]: 2026-01-27 14:00:47.751 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:47 np0005597378 nova_compute[238941]: 2026-01-27 14:00:47.752 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:00:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:00:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:00:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:00:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:00:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:00:47 np0005597378 nova_compute[238941]: 2026-01-27 14:00:47.876 238945 DEBUG oslo_concurrency.processutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.160 238945 DEBUG nova.network.neutron [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Updating instance_info_cache with network_info: [{"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.176 238945 DEBUG oslo_concurrency.lockutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Releasing lock "refresh_cache-d133d5f9-1c2b-4996-955c-be57e53a44ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.204 238945 INFO nova.virt.libvirt.driver [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance destroyed successfully.#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.204 238945 DEBUG nova.objects.instance [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'numa_topology' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.223 238945 DEBUG nova.objects.instance [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'resources' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.238 238945 DEBUG nova.virt.libvirt.vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:00:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-523711638',display_name='tempest-ListServerFiltersTestJSON-instance-523711638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-523711638',id=95,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:00:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-47u0a24r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:00:45Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=d133d5f9-1c2b-4996-955c-be57e53a44ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.238 238945 DEBUG nova.network.os_vif_util [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.239 238945 DEBUG nova.network.os_vif_util [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.240 238945 DEBUG os_vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.243 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape50fbfa4-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.249 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.251 238945 INFO os_vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9')#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.259 238945 DEBUG nova.virt.libvirt.driver [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Start _get_guest_xml network_info=[{"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.264 238945 WARNING nova.virt.libvirt.driver [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.271 238945 DEBUG nova.virt.libvirt.host [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.272 238945 DEBUG nova.virt.libvirt.host [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.275 238945 DEBUG nova.virt.libvirt.host [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.276 238945 DEBUG nova.virt.libvirt.host [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.276 238945 DEBUG nova.virt.libvirt.driver [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.282 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.283 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.283 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.284 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.284 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.285 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.285 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.286 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.286 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.287 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.287 238945 DEBUG nova.virt.hardware [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.288 238945 DEBUG nova.objects.instance [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'vcpu_model' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.304 238945 DEBUG oslo_concurrency.processutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:00:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4216676968' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.496 238945 DEBUG oslo_concurrency.processutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.504 238945 DEBUG nova.compute.provider_tree [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.526 238945 DEBUG nova.scheduler.client.report [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.549 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.577 238945 INFO nova.scheduler.client.report [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Deleted allocations for instance 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2#033[00m
Jan 27 09:00:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 339 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.0 MiB/s wr, 337 op/s
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.648 238945 DEBUG oslo_concurrency.lockutils [None req-5b33ce89-323f-411c-8356-daede7e7f7c6 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2479545851' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.887 238945 DEBUG oslo_concurrency.processutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:48 np0005597378 nova_compute[238941]: 2026-01-27 14:00:48.918 238945 DEBUG oslo_concurrency.processutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.119 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "cc034275-7dd9-4d59-82ed-28755e2c6559" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.120 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "cc034275-7dd9-4d59-82ed-28755e2c6559" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.120 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "cc034275-7dd9-4d59-82ed-28755e2c6559-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.121 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "cc034275-7dd9-4d59-82ed-28755e2c6559-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.121 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "cc034275-7dd9-4d59-82ed-28755e2c6559-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.123 238945 INFO nova.compute.manager [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Terminating instance#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.124 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "refresh_cache-cc034275-7dd9-4d59-82ed-28755e2c6559" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.124 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquired lock "refresh_cache-cc034275-7dd9-4d59-82ed-28755e2c6559" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.124 238945 DEBUG nova.network.neutron [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.421 238945 DEBUG nova.network.neutron [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:00:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:00:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2651612069' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.532 238945 DEBUG oslo_concurrency.processutils [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.533 238945 DEBUG nova.virt.libvirt.vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:00:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-523711638',display_name='tempest-ListServerFiltersTestJSON-instance-523711638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-523711638',id=95,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:00:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-47u0a24r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:00:45Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=d133d5f9-1c2b-4996-955c-be57e53a44ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.534 238945 DEBUG nova.network.os_vif_util [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.535 238945 DEBUG nova.network.os_vif_util [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.536 238945 DEBUG nova.objects.instance [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'pci_devices' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.550 238945 DEBUG nova.virt.libvirt.driver [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  <uuid>d133d5f9-1c2b-4996-955c-be57e53a44ec</uuid>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  <name>instance-0000005f</name>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-523711638</nova:name>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:00:48</nova:creationTime>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:        <nova:user uuid="e4411258cb6240ddb5365fb25e762594">tempest-ListServerFiltersTestJSON-1240027263-project-member</nova:user>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:        <nova:project uuid="d302ba58879d43258f0a8abe2d81f03a">tempest-ListServerFiltersTestJSON-1240027263</nova:project>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:        <nova:port uuid="e50fbfa4-a9d5-403e-a3ce-e3cd499555b4">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <entry name="serial">d133d5f9-1c2b-4996-955c-be57e53a44ec</entry>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <entry name="uuid">d133d5f9-1c2b-4996-955c-be57e53a44ec</entry>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d133d5f9-1c2b-4996-955c-be57e53a44ec_disk">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d133d5f9-1c2b-4996-955c-be57e53a44ec_disk.config">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:fe:6a:3d"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <target dev="tape50fbfa4-a9"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec/console.log" append="off"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <input type="keyboard" bus="usb"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:00:49 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:00:49 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:00:49 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:00:49 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.552 238945 DEBUG nova.virt.libvirt.driver [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.552 238945 DEBUG nova.virt.libvirt.driver [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.554 238945 DEBUG nova.virt.libvirt.vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:00:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-523711638',display_name='tempest-ListServerFiltersTestJSON-instance-523711638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-523711638',id=95,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:00:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-47u0a24r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:00:45Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=d133d5f9-1c2b-4996-955c-be57e53a44ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.554 238945 DEBUG nova.network.os_vif_util [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.555 238945 DEBUG nova.network.os_vif_util [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.555 238945 DEBUG os_vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.556 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.557 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.559 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.559 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape50fbfa4-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.560 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape50fbfa4-a9, col_values=(('external_ids', {'iface-id': 'e50fbfa4-a9d5-403e-a3ce-e3cd499555b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:6a:3d', 'vm-uuid': 'd133d5f9-1c2b-4996-955c-be57e53a44ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.561 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:49 np0005597378 NetworkManager[48904]: <info>  [1769522449.5623] manager: (tape50fbfa4-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.564 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.565 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.566 238945 INFO os_vif [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9')#033[00m
Jan 27 09:00:49 np0005597378 kernel: tape50fbfa4-a9: entered promiscuous mode
Jan 27 09:00:49 np0005597378 NetworkManager[48904]: <info>  [1769522449.6449] manager: (tape50fbfa4-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/371)
Jan 27 09:00:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:49Z|00878|binding|INFO|Claiming lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for this chassis.
Jan 27 09:00:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:49Z|00879|binding|INFO|e50fbfa4-a9d5-403e-a3ce-e3cd499555b4: Claiming fa:16:3e:fe:6a:3d 10.100.0.8
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.647 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:49Z|00880|binding|INFO|Setting lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 ovn-installed in OVS
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.675 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:49 np0005597378 systemd-udevd[320531]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:00:49 np0005597378 systemd-machined[207425]: New machine qemu-113-instance-0000005f.
Jan 27 09:00:49 np0005597378 NetworkManager[48904]: <info>  [1769522449.6925] device (tape50fbfa4-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:00:49 np0005597378 NetworkManager[48904]: <info>  [1769522449.6935] device (tape50fbfa4-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:00:49 np0005597378 systemd[1]: Started Virtual Machine qemu-113-instance-0000005f.
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.698 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:6a:3d 10.100.0.8'], port_security=['fa:16:3e:fe:6a:3d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd133d5f9-1c2b-4996-955c-be57e53a44ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.699 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 bound to our chassis#033[00m
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.700 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858#033[00m
Jan 27 09:00:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:49Z|00881|binding|INFO|Setting lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 up in Southbound
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.717 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7a76a5cb-bdbe-492b-9d0b-e1fa8086ce9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.744 238945 DEBUG nova.network.neutron [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.748 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0a950a-6588-4d83-8926-18e88538d21c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.751 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2f3bd7-dbb2-4dd2-9662-7e7a7a44dbad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.761 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Releasing lock "refresh_cache-cc034275-7dd9-4d59-82ed-28755e2c6559" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.762 238945 DEBUG nova.compute.manager [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.792 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bbcb95bc-4f51-4ba4-9549-5c418870ed09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.823 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb723b5-2a52-4f65-87cd-939d0fe1c9ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320545, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.842 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[367be611-d3ef-46ba-9387-0e78fb220439]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320546, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320546, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:00:49 np0005597378 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Jan 27 09:00:49 np0005597378 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d0000005d.scope: Consumed 13.585s CPU time.
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.844 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:49 np0005597378 systemd-machined[207425]: Machine qemu-107-instance-0000005d terminated.
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.849 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.849 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.850 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:00:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:00:49.850 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.982 238945 DEBUG nova.compute.manager [req-26d940a2-9207-4b65-ad70-990a19696470 req-48e530f9-7dcd-4165-8d28-5d1e4193bc09 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.982 238945 DEBUG oslo_concurrency.lockutils [req-26d940a2-9207-4b65-ad70-990a19696470 req-48e530f9-7dcd-4165-8d28-5d1e4193bc09 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.982 238945 DEBUG oslo_concurrency.lockutils [req-26d940a2-9207-4b65-ad70-990a19696470 req-48e530f9-7dcd-4165-8d28-5d1e4193bc09 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.982 238945 DEBUG oslo_concurrency.lockutils [req-26d940a2-9207-4b65-ad70-990a19696470 req-48e530f9-7dcd-4165-8d28-5d1e4193bc09 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.983 238945 DEBUG nova.compute.manager [req-26d940a2-9207-4b65-ad70-990a19696470 req-48e530f9-7dcd-4165-8d28-5d1e4193bc09 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.983 238945 WARNING nova.compute.manager [req-26d940a2-9207-4b65-ad70-990a19696470 req-48e530f9-7dcd-4165-8d28-5d1e4193bc09 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received unexpected event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.991 238945 INFO nova.virt.libvirt.driver [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Instance destroyed successfully.#033[00m
Jan 27 09:00:49 np0005597378 nova_compute[238941]: 2026-01-27 14:00:49.991 238945 DEBUG nova.objects.instance [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lazy-loading 'resources' on Instance uuid cc034275-7dd9-4d59-82ed-28755e2c6559 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.278 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for d133d5f9-1c2b-4996-955c-be57e53a44ec due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.279 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522450.2777545, d133d5f9-1c2b-4996-955c-be57e53a44ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.279 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.282 238945 DEBUG nova.compute.manager [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.287 238945 INFO nova.virt.libvirt.driver [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance rebooted successfully.#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.287 238945 DEBUG nova.compute.manager [None req-5b4b3d83-e50f-482a-b737-4efe049e616c e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.317 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.321 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.346 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.347 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522450.2818313, d133d5f9-1c2b-4996-955c-be57e53a44ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.347 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] VM Started (Lifecycle Event)#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.372 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.375 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.411 238945 INFO nova.virt.libvirt.driver [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Deleting instance files /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559_del#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.412 238945 INFO nova.virt.libvirt.driver [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Deletion of /var/lib/nova/instances/cc034275-7dd9-4d59-82ed-28755e2c6559_del complete#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.483 238945 INFO nova.compute.manager [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.483 238945 DEBUG oslo.service.loopingcall [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.484 238945 DEBUG nova.compute.manager [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.484 238945 DEBUG nova.network.neutron [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:00:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 342 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 7.3 MiB/s wr, 312 op/s
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.661 238945 DEBUG nova.network.neutron [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.678 238945 DEBUG nova.network.neutron [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.693 238945 INFO nova.compute.manager [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Took 0.21 seconds to deallocate network for instance.#033[00m
Jan 27 09:00:50 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:50Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:52:2e 10.100.0.6
Jan 27 09:00:50 np0005597378 ovn_controller[144812]: 2026-01-27T14:00:50Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:52:2e 10.100.0.6
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.742 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.742 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:50 np0005597378 nova_compute[238941]: 2026-01-27 14:00:50.836 238945 DEBUG oslo_concurrency.processutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:51 np0005597378 nova_compute[238941]: 2026-01-27 14:00:51.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:00:51 np0005597378 nova_compute[238941]: 2026-01-27 14:00:51.404 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:00:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909640981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:00:51 np0005597378 nova_compute[238941]: 2026-01-27 14:00:51.525 238945 DEBUG oslo_concurrency.processutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.689s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:51 np0005597378 nova_compute[238941]: 2026-01-27 14:00:51.530 238945 DEBUG nova.compute.provider_tree [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:00:51 np0005597378 nova_compute[238941]: 2026-01-27 14:00:51.543 238945 DEBUG nova.scheduler.client.report [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:00:51 np0005597378 nova_compute[238941]: 2026-01-27 14:00:51.965 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:51 np0005597378 nova_compute[238941]: 2026-01-27 14:00:51.967 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:51 np0005597378 nova_compute[238941]: 2026-01-27 14:00:51.968 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:51 np0005597378 nova_compute[238941]: 2026-01-27 14:00:51.968 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:00:51 np0005597378 nova_compute[238941]: 2026-01-27 14:00:51.968 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.125 238945 INFO nova.scheduler.client.report [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Deleted allocations for instance cc034275-7dd9-4d59-82ed-28755e2c6559#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.181 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.187 238945 DEBUG oslo_concurrency.lockutils [None req-a7b1416f-d9e5-4c10-a58e-b82a814b2d3b 93260199bb344997ae7449060a9adee6 6f0de6d14fb34a0b805053a94d5e8a6c - - default default] Lock "cc034275-7dd9-4d59-82ed-28755e2c6559" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.307 238945 DEBUG nova.compute.manager [req-6d987747-74a0-425b-ad31-b5447859cdbe req-28a53107-9e71-4280-9255-7d549fd6664e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.307 238945 DEBUG oslo_concurrency.lockutils [req-6d987747-74a0-425b-ad31-b5447859cdbe req-28a53107-9e71-4280-9255-7d549fd6664e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.308 238945 DEBUG oslo_concurrency.lockutils [req-6d987747-74a0-425b-ad31-b5447859cdbe req-28a53107-9e71-4280-9255-7d549fd6664e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.308 238945 DEBUG oslo_concurrency.lockutils [req-6d987747-74a0-425b-ad31-b5447859cdbe req-28a53107-9e71-4280-9255-7d549fd6664e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.308 238945 DEBUG nova.compute.manager [req-6d987747-74a0-425b-ad31-b5447859cdbe req-28a53107-9e71-4280-9255-7d549fd6664e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.308 238945 WARNING nova.compute.manager [req-6d987747-74a0-425b-ad31-b5447859cdbe req-28a53107-9e71-4280-9255-7d549fd6664e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received unexpected event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:00:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 342 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.3 MiB/s wr, 290 op/s
Jan 27 09:00:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:00:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259943344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.617 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.695 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.696 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.701 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.701 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.705 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.705 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:00:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.893 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.895 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3320MB free_disk=59.81635571271181GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.895 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:00:52 np0005597378 nova_compute[238941]: 2026-01-27 14:00:52.896 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.060 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance d133d5f9-1c2b-4996-955c-be57e53a44ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.061 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 5e1a13b1-a322-4bcd-a54b-0e4061979313 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.061 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 5606aadf-848a-49fc-9cfd-897be16be855 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.061 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.061 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.193 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:00:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:00:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2194822179' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.744 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.749 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.767 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.796 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.797 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.797 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.797 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 27 09:00:53 np0005597378 nova_compute[238941]: 2026-01-27 14:00:53.808 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 27 09:00:54 np0005597378 nova_compute[238941]: 2026-01-27 14:00:54.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 304 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 7.1 MiB/s wr, 407 op/s
Jan 27 09:00:54 np0005597378 nova_compute[238941]: 2026-01-27 14:00:54.807 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:00:54 np0005597378 nova_compute[238941]: 2026-01-27 14:00:54.808 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:00:54 np0005597378 nova_compute[238941]: 2026-01-27 14:00:54.851 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:00:54 np0005597378 nova_compute[238941]: 2026-01-27 14:00:54.851 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:00:56 np0005597378 nova_compute[238941]: 2026-01-27 14:00:56.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:00:56 np0005597378 nova_compute[238941]: 2026-01-27 14:00:56.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 09:00:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 279 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.5 MiB/s wr, 323 op/s
Jan 27 09:00:57 np0005597378 nova_compute[238941]: 2026-01-27 14:00:57.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:57 np0005597378 nova_compute[238941]: 2026-01-27 14:00:57.395 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:00:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:00:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.1 MiB/s wr, 267 op/s
Jan 27 09:00:59 np0005597378 nova_compute[238941]: 2026-01-27 14:00:59.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:00:59 np0005597378 nova_compute[238941]: 2026-01-27 14:00:59.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:00:59 np0005597378 nova_compute[238941]: 2026-01-27 14:00:59.565 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:00:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:00:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3641174817' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:00:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:00:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3641174817' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.000 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.000 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.000 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.001 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.001 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.002 238945 INFO nova.compute.manager [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Terminating instance#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.003 238945 DEBUG nova.compute.manager [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.380 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522445.3790467, 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.380 238945 INFO nova.compute.manager [-] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.411 238945 DEBUG nova.compute.manager [None req-7ecc608d-8ebb-4322-955c-daa5980d9e08 - - - - - -] [instance: 5b023ef8-0a6c-4bf1-8a2e-5c7789e361a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:00 np0005597378 kernel: tapbe0e12b2-58 (unregistering): left promiscuous mode
Jan 27 09:01:00 np0005597378 NetworkManager[48904]: <info>  [1769522460.5568] device (tapbe0e12b2-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:01:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:00Z|00882|binding|INFO|Releasing lport be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 from this chassis (sb_readonly=0)
Jan 27 09:01:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:00Z|00883|binding|INFO|Setting lport be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 down in Southbound
Jan 27 09:01:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:00Z|00884|binding|INFO|Removing iface tapbe0e12b2-58 ovn-installed in OVS
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.566 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.580 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:52:2e 10.100.0.6'], port_security=['fa:16:3e:7e:52:2e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5606aadf-848a-49fc-9cfd-897be16be855', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.582 154802 INFO neutron.agent.ovn.metadata.agent [-] Port be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 unbound from our chassis#033[00m
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.583 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858#033[00m
Jan 27 09:01:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 279 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 168 op/s
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.593 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.602 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8abd078-9b2e-4780-9a3c-6b63e7ff9d6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.629 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8b03452d-32a9-4c09-b471-9ec6b60bfeda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.632 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[296f1a0c-9ba8-4cd3-810b-007741c370c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:00 np0005597378 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000061.scope: Deactivated successfully.
Jan 27 09:01:00 np0005597378 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000061.scope: Consumed 17.866s CPU time.
Jan 27 09:01:00 np0005597378 systemd-machined[207425]: Machine qemu-111-instance-00000061 terminated.
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.659 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[29316ec6-4ee0-4cef-8262-22b0fcf681c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.677 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db83cb72-9d2e-4dd4-ada4-e2043f10b427]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320688, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.694 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cc39d059-49d8-42eb-92d2-e7b060ce6857]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320689, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320689, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.696 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.698 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.703 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.704 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.704 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.705 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:00.705 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.833 238945 INFO nova.virt.libvirt.driver [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Instance destroyed successfully.#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.834 238945 DEBUG nova.objects.instance [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'resources' on Instance uuid 5606aadf-848a-49fc-9cfd-897be16be855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.849 238945 DEBUG nova.virt.libvirt.vif [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-959422175',display_name='tempest-ListServerFiltersTestJSON-instance-959422175',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-959422175',id=97,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:00:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-4cpwvl1w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:00:33Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=5606aadf-848a-49fc-9cfd-897be16be855,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.850 238945 DEBUG nova.network.os_vif_util [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "address": "fa:16:3e:7e:52:2e", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe0e12b2-58", "ovs_interfaceid": "be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.851 238945 DEBUG nova.network.os_vif_util [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.851 238945 DEBUG os_vif [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.854 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe0e12b2-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.855 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.857 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:00 np0005597378 nova_compute[238941]: 2026-01-27 14:01:00.860 238945 INFO os_vif [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:52:2e,bridge_name='br-int',has_traffic_filtering=True,id=be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe0e12b2-58')#033[00m
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.306705) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522461306735, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1719, "num_deletes": 252, "total_data_size": 2650694, "memory_usage": 2689008, "flush_reason": "Manual Compaction"}
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 27 09:01:01 np0005597378 nova_compute[238941]: 2026-01-27 14:01:01.400 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:01:01 np0005597378 nova_compute[238941]: 2026-01-27 14:01:01.401 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522461463057, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2589708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33652, "largest_seqno": 35370, "table_properties": {"data_size": 2581914, "index_size": 4608, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17178, "raw_average_key_size": 20, "raw_value_size": 2565837, "raw_average_value_size": 3047, "num_data_blocks": 205, "num_entries": 842, "num_filter_entries": 842, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522297, "oldest_key_time": 1769522297, "file_creation_time": 1769522461, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 156412 microseconds, and 5713 cpu microseconds.
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:01:01 np0005597378 nova_compute[238941]: 2026-01-27 14:01:01.465 238945 DEBUG nova.compute.manager [req-32295a79-02c4-4b71-8087-e43ed28d0080 req-36683219-73e7-4266-ade0-841bc750d373 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-vif-unplugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:01 np0005597378 nova_compute[238941]: 2026-01-27 14:01:01.465 238945 DEBUG oslo_concurrency.lockutils [req-32295a79-02c4-4b71-8087-e43ed28d0080 req-36683219-73e7-4266-ade0-841bc750d373 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:01 np0005597378 nova_compute[238941]: 2026-01-27 14:01:01.466 238945 DEBUG oslo_concurrency.lockutils [req-32295a79-02c4-4b71-8087-e43ed28d0080 req-36683219-73e7-4266-ade0-841bc750d373 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:01 np0005597378 nova_compute[238941]: 2026-01-27 14:01:01.466 238945 DEBUG oslo_concurrency.lockutils [req-32295a79-02c4-4b71-8087-e43ed28d0080 req-36683219-73e7-4266-ade0-841bc750d373 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:01 np0005597378 nova_compute[238941]: 2026-01-27 14:01:01.466 238945 DEBUG nova.compute.manager [req-32295a79-02c4-4b71-8087-e43ed28d0080 req-36683219-73e7-4266-ade0-841bc750d373 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] No waiting events found dispatching network-vif-unplugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:01:01 np0005597378 nova_compute[238941]: 2026-01-27 14:01:01.466 238945 DEBUG nova.compute.manager [req-32295a79-02c4-4b71-8087-e43ed28d0080 req-36683219-73e7-4266-ade0-841bc750d373 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-vif-unplugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.463111) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2589708 bytes OK
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.463156) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.585026) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.585070) EVENT_LOG_v1 {"time_micros": 1769522461585060, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.585094) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2643197, prev total WAL file size 2643197, number of live WAL files 2.
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.586042) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2529KB)], [74(8601KB)]
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522461586099, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11397303, "oldest_snapshot_seqno": -1}
Jan 27 09:01:01 np0005597378 podman[320728]: 2026-01-27 14:01:01.709463593 +0000 UTC m=+0.051023185 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6092 keys, 9784354 bytes, temperature: kUnknown
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522461808769, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9784354, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9741840, "index_size": 26212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 153993, "raw_average_key_size": 25, "raw_value_size": 9630989, "raw_average_value_size": 1580, "num_data_blocks": 1059, "num_entries": 6092, "num_filter_entries": 6092, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522461, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.809026) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9784354 bytes
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.905261) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 51.2 rd, 43.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.4 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 6615, records dropped: 523 output_compression: NoCompression
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.905300) EVENT_LOG_v1 {"time_micros": 1769522461905284, "job": 42, "event": "compaction_finished", "compaction_time_micros": 222743, "compaction_time_cpu_micros": 27262, "output_level": 6, "num_output_files": 1, "total_output_size": 9784354, "num_input_records": 6615, "num_output_records": 6092, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522461906168, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522461908405, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.585949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.908491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.908497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.908500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.908503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:01:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:01:01.908506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:01:02 np0005597378 nova_compute[238941]: 2026-01-27 14:01:02.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 279 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 951 KiB/s wr, 131 op/s
Jan 27 09:01:02 np0005597378 podman[320747]: 2026-01-27 14:01:02.733797318 +0000 UTC m=+0.077755319 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 09:01:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:03 np0005597378 nova_compute[238941]: 2026-01-27 14:01:03.566 238945 DEBUG nova.compute.manager [req-b6c07a1e-744c-47b1-bb3d-72e5e9cd8489 req-6bf33113-257c-4c70-ba98-430d16749341 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:03 np0005597378 nova_compute[238941]: 2026-01-27 14:01:03.566 238945 DEBUG oslo_concurrency.lockutils [req-b6c07a1e-744c-47b1-bb3d-72e5e9cd8489 req-6bf33113-257c-4c70-ba98-430d16749341 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5606aadf-848a-49fc-9cfd-897be16be855-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:03 np0005597378 nova_compute[238941]: 2026-01-27 14:01:03.566 238945 DEBUG oslo_concurrency.lockutils [req-b6c07a1e-744c-47b1-bb3d-72e5e9cd8489 req-6bf33113-257c-4c70-ba98-430d16749341 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:03 np0005597378 nova_compute[238941]: 2026-01-27 14:01:03.566 238945 DEBUG oslo_concurrency.lockutils [req-b6c07a1e-744c-47b1-bb3d-72e5e9cd8489 req-6bf33113-257c-4c70-ba98-430d16749341 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:03 np0005597378 nova_compute[238941]: 2026-01-27 14:01:03.566 238945 DEBUG nova.compute.manager [req-b6c07a1e-744c-47b1-bb3d-72e5e9cd8489 req-6bf33113-257c-4c70-ba98-430d16749341 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] No waiting events found dispatching network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:01:03 np0005597378 nova_compute[238941]: 2026-01-27 14:01:03.567 238945 WARNING nova.compute.manager [req-b6c07a1e-744c-47b1-bb3d-72e5e9cd8489 req-6bf33113-257c-4c70-ba98-430d16749341 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received unexpected event network-vif-plugged-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:01:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:04Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:6a:3d 10.100.0.8
Jan 27 09:01:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:04Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:6a:3d 10.100.0.8
Jan 27 09:01:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 279 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 960 KiB/s wr, 151 op/s
Jan 27 09:01:04 np0005597378 nova_compute[238941]: 2026-01-27 14:01:04.724 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:04 np0005597378 nova_compute[238941]: 2026-01-27 14:01:04.725 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:04 np0005597378 nova_compute[238941]: 2026-01-27 14:01:04.745 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:01:04 np0005597378 nova_compute[238941]: 2026-01-27 14:01:04.818 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:04 np0005597378 nova_compute[238941]: 2026-01-27 14:01:04.819 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:04 np0005597378 nova_compute[238941]: 2026-01-27 14:01:04.828 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:01:04 np0005597378 nova_compute[238941]: 2026-01-27 14:01:04.829 238945 INFO nova.compute.claims [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:01:04 np0005597378 nova_compute[238941]: 2026-01-27 14:01:04.989 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522449.9884176, cc034275-7dd9-4d59-82ed-28755e2c6559 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:01:04 np0005597378 nova_compute[238941]: 2026-01-27 14:01:04.990 238945 INFO nova.compute.manager [-] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.002 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.035 238945 DEBUG nova.compute.manager [None req-50199baf-bda2-4202-b2a6-40609919f8b5 - - - - - -] [instance: cc034275-7dd9-4d59-82ed-28755e2c6559] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:01:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/456084830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.689 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.695 238945 DEBUG nova.compute.provider_tree [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.716 238945 DEBUG nova.scheduler.client.report [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.740 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.741 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.802 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.820 238945 INFO nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.861 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.962 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.963 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.964 238945 INFO nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Creating image(s)#033[00m
Jan 27 09:01:05 np0005597378 nova_compute[238941]: 2026-01-27 14:01:05.988 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:06 np0005597378 nova_compute[238941]: 2026-01-27 14:01:06.018 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:06 np0005597378 nova_compute[238941]: 2026-01-27 14:01:06.043 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:06 np0005597378 nova_compute[238941]: 2026-01-27 14:01:06.047 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:06 np0005597378 nova_compute[238941]: 2026-01-27 14:01:06.152 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:06 np0005597378 nova_compute[238941]: 2026-01-27 14:01:06.153 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:06 np0005597378 nova_compute[238941]: 2026-01-27 14:01:06.154 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:06 np0005597378 nova_compute[238941]: 2026-01-27 14:01:06.154 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:06 np0005597378 nova_compute[238941]: 2026-01-27 14:01:06.178 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:06 np0005597378 nova_compute[238941]: 2026-01-27 14:01:06.182 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 239 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 68 KiB/s wr, 57 op/s
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.189 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.261 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.329 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] resizing rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.548 238945 INFO nova.virt.libvirt.driver [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Deleting instance files /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855_del#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.550 238945 INFO nova.virt.libvirt.driver [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Deletion of /var/lib/nova/instances/5606aadf-848a-49fc-9cfd-897be16be855_del complete#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.609 238945 INFO nova.compute.manager [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Took 7.61 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.610 238945 DEBUG oslo.service.loopingcall [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.610 238945 DEBUG nova.compute.manager [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.611 238945 DEBUG nova.network.neutron [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.703 238945 DEBUG nova.objects.instance [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'migration_context' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.718 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.719 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Ensure instance console log exists: /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.719 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.720 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.720 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.722 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.726 238945 WARNING nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.731 238945 DEBUG nova.virt.libvirt.host [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.733 238945 DEBUG nova.virt.libvirt.host [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.736 238945 DEBUG nova.virt.libvirt.host [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.736 238945 DEBUG nova.virt.libvirt.host [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.737 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.737 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.738 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.738 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.738 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.739 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.739 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.739 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.739 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.740 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.740 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.740 238945 DEBUG nova.virt.hardware [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:01:07 np0005597378 nova_compute[238941]: 2026-01-27 14:01:07.743 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:01:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862327271' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:01:08 np0005597378 nova_compute[238941]: 2026-01-27 14:01:08.331 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:08 np0005597378 nova_compute[238941]: 2026-01-27 14:01:08.353 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:08 np0005597378 nova_compute[238941]: 2026-01-27 14:01:08.357 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:08 np0005597378 nova_compute[238941]: 2026-01-27 14:01:08.390 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:01:08 np0005597378 nova_compute[238941]: 2026-01-27 14:01:08.567 238945 DEBUG nova.network.neutron [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:01:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 224 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 846 KiB/s wr, 94 op/s
Jan 27 09:01:08 np0005597378 nova_compute[238941]: 2026-01-27 14:01:08.596 238945 INFO nova.compute.manager [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Took 0.99 seconds to deallocate network for instance.#033[00m
Jan 27 09:01:08 np0005597378 nova_compute[238941]: 2026-01-27 14:01:08.636 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:08 np0005597378 nova_compute[238941]: 2026-01-27 14:01:08.637 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:08 np0005597378 nova_compute[238941]: 2026-01-27 14:01:08.641 238945 DEBUG nova.compute.manager [req-0483e9db-d9ff-4c49-be3c-7b7e5a5d5f3f req-3fd29977-a0b9-448e-91ed-5093b9232371 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Received event network-vif-deleted-be0e12b2-58e6-4a72-b8ae-3e3dfafc9c98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:08 np0005597378 nova_compute[238941]: 2026-01-27 14:01:08.718 238945 DEBUG oslo_concurrency.processutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:01:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145816648' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:01:08 np0005597378 nova_compute[238941]: 2026-01-27 14:01:08.931 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:08 np0005597378 nova_compute[238941]: 2026-01-27 14:01:08.934 238945 DEBUG nova.objects.instance [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:08 np0005597378 nova_compute[238941]: 2026-01-27 14:01:08.949 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  <uuid>7dfb3234-e54d-417e-93b5-5b1f17a4820a</uuid>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  <name>instance-00000062</name>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerShowV254Test-server-927234309</nova:name>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:01:07</nova:creationTime>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:        <nova:user uuid="bd2f6f5d6ce541cc88ea7e10a215c460">tempest-ServerShowV254Test-1314669382-project-member</nova:user>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:        <nova:project uuid="6d2d99cc193e4d4e8444b64eff3dcf72">tempest-ServerShowV254Test-1314669382</nova:project>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <entry name="serial">7dfb3234-e54d-417e-93b5-5b1f17a4820a</entry>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <entry name="uuid">7dfb3234-e54d-417e-93b5-5b1f17a4820a</entry>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/console.log" append="off"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:01:08 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:01:08 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:01:08 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:01:08 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.013 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.014 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.014 238945 INFO nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Using config drive#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.032 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:01:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357896872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.298 238945 DEBUG oslo_concurrency.processutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.303 238945 DEBUG nova.compute.provider_tree [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.326 238945 DEBUG nova.scheduler.client.report [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.334 238945 INFO nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Creating config drive at /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.340 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4n1rid4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.374 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.415 238945 INFO nova.scheduler.client.report [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Deleted allocations for instance 5606aadf-848a-49fc-9cfd-897be16be855#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.479 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4n1rid4" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.503 238945 DEBUG nova.storage.rbd_utils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.506 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.542 238945 DEBUG oslo_concurrency.lockutils [None req-ac98ffa0-7c74-4d90-8a98-bab074eaccb4 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5606aadf-848a-49fc-9cfd-897be16be855" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.865 238945 DEBUG oslo_concurrency.processutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.866 238945 INFO nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deleting local config drive /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config because it was imported into RBD.#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.913 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.914 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.914 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.914 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.915 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.916 238945 INFO nova.compute.manager [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Terminating instance#033[00m
Jan 27 09:01:09 np0005597378 nova_compute[238941]: 2026-01-27 14:01:09.917 238945 DEBUG nova.compute.manager [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:01:09 np0005597378 systemd-machined[207425]: New machine qemu-114-instance-00000062.
Jan 27 09:01:09 np0005597378 systemd[1]: Started Virtual Machine qemu-114-instance-00000062.
Jan 27 09:01:09 np0005597378 kernel: tapa2303563-a0 (unregistering): left promiscuous mode
Jan 27 09:01:09 np0005597378 NetworkManager[48904]: <info>  [1769522469.9979] device (tapa2303563-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:01:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:10Z|00885|binding|INFO|Releasing lport a2303563-a056-42f8-a941-7a95b6258e2c from this chassis (sb_readonly=0)
Jan 27 09:01:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:10Z|00886|binding|INFO|Setting lport a2303563-a056-42f8-a941-7a95b6258e2c down in Southbound
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:10Z|00887|binding|INFO|Removing iface tapa2303563-a0 ovn-installed in OVS
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.021 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:bd:ff 10.100.0.10'], port_security=['fa:16:3e:58:bd:ff 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5e1a13b1-a322-4bcd-a54b-0e4061979313', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a2303563-a056-42f8-a941-7a95b6258e2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.022 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a2303563-a056-42f8-a941-7a95b6258e2c in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 unbound from our chassis#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.023 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.023 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.041 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c49ea51e-dd61-4338-8424-a4e54ac60e0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000060.scope: Deactivated successfully.
Jan 27 09:01:10 np0005597378 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000060.scope: Consumed 17.708s CPU time.
Jan 27 09:01:10 np0005597378 systemd-machined[207425]: Machine qemu-110-instance-00000060 terminated.
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.082 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7fdd8ca7-1174-4ef5-a00d-4aa7acdde3c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.085 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[79e4014f-cceb-41e6-863f-e1f8ff1bdef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.116 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f5756675-4a81-41aa-ab4c-19eb414b7ac6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 kernel: tapa2303563-a0: entered promiscuous mode
Jan 27 09:01:10 np0005597378 systemd-udevd[321124]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:01:10 np0005597378 NetworkManager[48904]: <info>  [1769522470.1400] manager: (tapa2303563-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/372)
Jan 27 09:01:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:10Z|00888|binding|INFO|Claiming lport a2303563-a056-42f8-a941-7a95b6258e2c for this chassis.
Jan 27 09:01:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:10Z|00889|binding|INFO|a2303563-a056-42f8-a941-7a95b6258e2c: Claiming fa:16:3e:58:bd:ff 10.100.0.10
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.140 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:10 np0005597378 kernel: tapa2303563-a0 (unregistering): left promiscuous mode
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.141 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d446cbb0-fe88-4083-a138-819fa9206fd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321132, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.148 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:bd:ff 10.100.0.10'], port_security=['fa:16:3e:58:bd:ff 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5e1a13b1-a322-4bcd-a54b-0e4061979313', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a2303563-a056-42f8-a941-7a95b6258e2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.161 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a623e0e2-4105-4525-bf01-6d4fcdd3a0ef]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321136, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321136, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.163 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.164 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.175 238945 INFO nova.virt.libvirt.driver [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Instance destroyed successfully.#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.175 238945 DEBUG nova.objects.instance [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'resources' on Instance uuid 5e1a13b1-a322-4bcd-a54b-0e4061979313 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:10Z|00890|binding|INFO|Setting lport a2303563-a056-42f8-a941-7a95b6258e2c ovn-installed in OVS
Jan 27 09:01:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:10Z|00891|binding|INFO|Setting lport a2303563-a056-42f8-a941-7a95b6258e2c up in Southbound
Jan 27 09:01:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:10Z|00892|binding|INFO|Releasing lport a2303563-a056-42f8-a941-7a95b6258e2c from this chassis (sb_readonly=1)
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.188 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:10Z|00893|if_status|INFO|Dropped 2 log messages in last 70 seconds (most recently, 70 seconds ago) due to excessive rate
Jan 27 09:01:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:10Z|00894|if_status|INFO|Not setting lport a2303563-a056-42f8-a941-7a95b6258e2c down as sb is readonly
Jan 27 09:01:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:10Z|00895|binding|INFO|Removing iface tapa2303563-a0 ovn-installed in OVS
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:10Z|00896|binding|INFO|Releasing lport a2303563-a056-42f8-a941-7a95b6258e2c from this chassis (sb_readonly=0)
Jan 27 09:01:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:10Z|00897|binding|INFO|Setting lport a2303563-a056-42f8-a941-7a95b6258e2c down in Southbound
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.203 238945 DEBUG nova.virt.libvirt.vif [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:00:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1050553987',display_name='tempest-ListServerFiltersTestJSON-instance-1050553987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1050553987',id=96,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:00:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-12ac9n4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:00:29Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=5e1a13b1-a322-4bcd-a54b-0e4061979313,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.203 238945 DEBUG nova.network.os_vif_util [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "a2303563-a056-42f8-a941-7a95b6258e2c", "address": "fa:16:3e:58:bd:ff", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2303563-a0", "ovs_interfaceid": "a2303563-a056-42f8-a941-7a95b6258e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.204 238945 DEBUG nova.network.os_vif_util [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.205 238945 DEBUG os_vif [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.206 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.207 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2303563-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.209 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.210 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:bd:ff 10.100.0.10'], port_security=['fa:16:3e:58:bd:ff 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5e1a13b1-a322-4bcd-a54b-0e4061979313', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a2303563-a056-42f8-a941-7a95b6258e2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.213 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.217 238945 INFO os_vif [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:bd:ff,bridge_name='br-int',has_traffic_filtering=True,id=a2303563-a056-42f8-a941-7a95b6258e2c,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2303563-a0')#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.217 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.218 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.219 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.220 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.222 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a2303563-a056-42f8-a941-7a95b6258e2c in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 unbound from our chassis#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.224 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.249 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dabee5eb-fffd-42e6-8aa7-7f83634b975e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.278 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2763a3cb-1b20-4207-8641-4507acb32758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.281 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3081716f-5a87-489d-bec5-587642d23c19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.306 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[feee690e-26a4-4f5e-be75-7a2cf584bfed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.322 238945 DEBUG nova.compute.manager [req-f339add2-0b99-42b2-b97c-59ac772f54bf req-d6f0a024-3772-4d52-bfaa-83a6175110b3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-unplugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.322 238945 DEBUG oslo_concurrency.lockutils [req-f339add2-0b99-42b2-b97c-59ac772f54bf req-d6f0a024-3772-4d52-bfaa-83a6175110b3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.322 238945 DEBUG oslo_concurrency.lockutils [req-f339add2-0b99-42b2-b97c-59ac772f54bf req-d6f0a024-3772-4d52-bfaa-83a6175110b3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.323 238945 DEBUG oslo_concurrency.lockutils [req-f339add2-0b99-42b2-b97c-59ac772f54bf req-d6f0a024-3772-4d52-bfaa-83a6175110b3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.323 238945 DEBUG nova.compute.manager [req-f339add2-0b99-42b2-b97c-59ac772f54bf req-d6f0a024-3772-4d52-bfaa-83a6175110b3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-unplugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.323 238945 DEBUG nova.compute.manager [req-f339add2-0b99-42b2-b97c-59ac772f54bf req-d6f0a024-3772-4d52-bfaa-83a6175110b3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-unplugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.324 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[536c78f3-c434-4a6e-9f53-6e476c856b81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 17, 'rx_bytes': 742, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 17, 'rx_bytes': 742, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321162, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.342 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51628122-d682-4822-884c-91bda21de360]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321163, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321163, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.344 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.345 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.346 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.348 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.349 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a2303563-a056-42f8-a941-7a95b6258e2c in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 unbound from our chassis#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.350 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bd977f-b066-45e7-b87f-f20ad7836858#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.366 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c948440d-1459-4519-a87d-537e37f6d2f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.398 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e1365c-0136-4c41-be80-594d5e22f990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.400 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c496ed28-cbb7-4305-854a-ac998e942bd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.433 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd8744e-5d8e-4ffb-8029-3adfdc472d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.450 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7cc590d4-ae72-42a4-8589-04686397b0f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bd977f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:b2:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 19, 'rx_bytes': 742, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 19, 'rx_bytes': 742, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506567, 'reachable_time': 31130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321169, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.470 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b8c8c8-648b-4afe-92a6-10c7289d2b6a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506581, 'tstamp': 506581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321170, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap17bd977f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506585, 'tstamp': 506585}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321170, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.472 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.475 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bd977f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.475 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.475 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bd977f-b0, col_values=(('external_ids', {'iface-id': 'c126ea8f-5f2e-4185-8f97-068a91ffc3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:10.476 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:01:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 248 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.907 238945 INFO nova.virt.libvirt.driver [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Deleting instance files /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313_del#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.908 238945 INFO nova.virt.libvirt.driver [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Deletion of /var/lib/nova/instances/5e1a13b1-a322-4bcd-a54b-0e4061979313_del complete#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.952 238945 INFO nova.compute.manager [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.953 238945 DEBUG oslo.service.loopingcall [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.954 238945 DEBUG nova.compute.manager [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:01:10 np0005597378 nova_compute[238941]: 2026-01-27 14:01:10.954 238945 DEBUG nova.network.neutron [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.505 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522471.5054586, 7dfb3234-e54d-417e-93b5-5b1f17a4820a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.506 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.509 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.509 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.512 238945 INFO nova.virt.libvirt.driver [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance spawned successfully.#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.512 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.527 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.532 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.535 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.535 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.536 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.536 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.536 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.537 238945 DEBUG nova.virt.libvirt.driver [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.562 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.563 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522471.5065012, 7dfb3234-e54d-417e-93b5-5b1f17a4820a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.563 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] VM Started (Lifecycle Event)#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.605 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.606 238945 DEBUG nova.network.neutron [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.610 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.614 238945 INFO nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Took 5.65 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.614 238945 DEBUG nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.624 238945 INFO nova.compute.manager [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Took 0.67 seconds to deallocate network for instance.#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.645 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.661 238945 DEBUG nova.compute.manager [req-4f7f7149-162c-4fdd-86f9-e2ac74ad9172 req-37e0d0fb-9f2e-400c-b339-5b5951d885af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-deleted-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.678 238945 INFO nova.compute.manager [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Took 6.89 seconds to build instance.#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.682 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.682 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.697 238945 DEBUG oslo_concurrency.lockutils [None req-d4d6a199-e8fb-4f58-8211-7c093b0070fc bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:11 np0005597378 nova_compute[238941]: 2026-01-27 14:01:11.766 238945 DEBUG oslo_concurrency.processutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Jan 27 09:01:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Jan 27 09:01:11 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:01:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3094434450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.422 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.423 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.424 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.424 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.424 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.425 238945 WARNING nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received unexpected event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.425 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.425 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.426 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.426 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.426 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.426 238945 WARNING nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received unexpected event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.427 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.427 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.427 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.428 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.428 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.428 238945 WARNING nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received unexpected event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.429 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-unplugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.429 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.429 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.430 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.430 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-unplugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.430 238945 WARNING nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received unexpected event network-vif-unplugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.431 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.431 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.431 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.432 238945 DEBUG oslo_concurrency.lockutils [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.432 238945 DEBUG nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] No waiting events found dispatching network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.432 238945 WARNING nova.compute.manager [req-be9cdf23-c93d-4d35-b82e-efcbdd66e5ae req-e7bd5db8-273d-4f05-a793-f7290cc36c95 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Received unexpected event network-vif-plugged-a2303563-a056-42f8-a941-7a95b6258e2c for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.435 238945 DEBUG oslo_concurrency.processutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.442 238945 DEBUG nova.compute.provider_tree [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.462 238945 DEBUG nova.scheduler.client.report [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.495 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.517 238945 INFO nova.scheduler.client.report [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Deleted allocations for instance 5e1a13b1-a322-4bcd-a54b-0e4061979313#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.592 238945 DEBUG oslo_concurrency.lockutils [None req-eefac0c1-7a1b-4a87-b06d-977e937e2cef e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "5e1a13b1-a322-4bcd-a54b-0e4061979313" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 248 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 2.2 MiB/s wr, 121 op/s
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.853 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.854 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.854 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.854 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.855 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.856 238945 INFO nova.compute.manager [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Terminating instance#033[00m
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.857 238945 DEBUG nova.compute.manager [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:01:12 np0005597378 kernel: tape50fbfa4-a9 (unregistering): left promiscuous mode
Jan 27 09:01:12 np0005597378 NetworkManager[48904]: <info>  [1769522472.9511] device (tape50fbfa4-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:01:12 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:12Z|00898|binding|INFO|Releasing lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 from this chassis (sb_readonly=0)
Jan 27 09:01:12 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:12Z|00899|binding|INFO|Setting lport e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 down in Southbound
Jan 27 09:01:12 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:12Z|00900|binding|INFO|Removing iface tape50fbfa4-a9 ovn-installed in OVS
Jan 27 09:01:12 np0005597378 nova_compute[238941]: 2026-01-27 14:01:12.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:13.006 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:6a:3d 10.100.0.8'], port_security=['fa:16:3e:fe:6a:3d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd133d5f9-1c2b-4996-955c-be57e53a44ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bd977f-b066-45e7-b87f-f20ad7836858', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd302ba58879d43258f0a8abe2d81f03a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '54d81df8-d2ad-4a9d-b344-3490050189eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294dce1a-9527-4ff7-aba9-c84fa71e31bb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:01:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:13.008 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 in datapath 17bd977f-b066-45e7-b87f-f20ad7836858 unbound from our chassis#033[00m
Jan 27 09:01:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:13.009 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17bd977f-b066-45e7-b87f-f20ad7836858, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.009 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:13.011 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30cc5f56-2799-44a1-ac5e-2df5f93f085a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:13.011 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858 namespace which is not needed anymore#033[00m
Jan 27 09:01:13 np0005597378 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Jan 27 09:01:13 np0005597378 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005f.scope: Consumed 13.445s CPU time.
Jan 27 09:01:13 np0005597378 systemd-machined[207425]: Machine qemu-113-instance-0000005f terminated.
Jan 27 09:01:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:13 np0005597378 NetworkManager[48904]: <info>  [1769522473.0816] manager: (tape50fbfa4-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/373)
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.096 238945 INFO nova.virt.libvirt.driver [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Instance destroyed successfully.#033[00m
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.097 238945 DEBUG nova.objects.instance [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lazy-loading 'resources' on Instance uuid d133d5f9-1c2b-4996-955c-be57e53a44ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.116 238945 DEBUG nova.virt.libvirt.vif [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:00:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-523711638',display_name='tempest-ListServerFiltersTestJSON-instance-523711638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-523711638',id=95,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:00:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d302ba58879d43258f0a8abe2d81f03a',ramdisk_id='',reservation_id='r-47u0a24r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1240027263',owner_user_name='tempest-ListServerFiltersTestJSON-1240027263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:00:50Z,user_data=None,user_id='e4411258cb6240ddb5365fb25e762594',uuid=d133d5f9-1c2b-4996-955c-be57e53a44ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.116 238945 DEBUG nova.network.os_vif_util [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converting VIF {"id": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "address": "fa:16:3e:fe:6a:3d", "network": {"id": "17bd977f-b066-45e7-b87f-f20ad7836858", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1716995349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d302ba58879d43258f0a8abe2d81f03a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50fbfa4-a9", "ovs_interfaceid": "e50fbfa4-a9d5-403e-a3ce-e3cd499555b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.119 238945 DEBUG nova.network.os_vif_util [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.120 238945 DEBUG os_vif [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.123 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape50fbfa4-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.128 238945 INFO os_vif [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:6a:3d,bridge_name='br-int',has_traffic_filtering=True,id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4,network=Network(17bd977f-b066-45e7-b87f-f20ad7836858),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50fbfa4-a9')#033[00m
Jan 27 09:01:13 np0005597378 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [NOTICE]   (319061) : haproxy version is 2.8.14-c23fe91
Jan 27 09:01:13 np0005597378 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [NOTICE]   (319061) : path to executable is /usr/sbin/haproxy
Jan 27 09:01:13 np0005597378 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [WARNING]  (319061) : Exiting Master process...
Jan 27 09:01:13 np0005597378 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [ALERT]    (319061) : Current worker (319064) exited with code 143 (Terminated)
Jan 27 09:01:13 np0005597378 neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858[319057]: [WARNING]  (319061) : All workers exited. Exiting... (0)
Jan 27 09:01:13 np0005597378 systemd[1]: libpod-1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad.scope: Deactivated successfully.
Jan 27 09:01:13 np0005597378 podman[321266]: 2026-01-27 14:01:13.470236193 +0000 UTC m=+0.333875726 container died 1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.626 238945 INFO nova.compute.manager [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Rebuilding instance#033[00m
Jan 27 09:01:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad-userdata-shm.mount: Deactivated successfully.
Jan 27 09:01:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1d64967d2ad845ead7e18bb62e24f5d4b27c23a3fab6242d89ed99e913dea666-merged.mount: Deactivated successfully.
Jan 27 09:01:13 np0005597378 nova_compute[238941]: 2026-01-27 14:01:13.890 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:13 np0005597378 podman[321266]: 2026-01-27 14:01:13.966407575 +0000 UTC m=+0.830047108 container cleanup 1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 09:01:13 np0005597378 systemd[1]: libpod-conmon-1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad.scope: Deactivated successfully.
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.021 238945 DEBUG nova.compute.manager [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:14 np0005597378 podman[321312]: 2026-01-27 14:01:14.063731318 +0000 UTC m=+0.070099428 container remove 1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:01:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.070 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc3ed9a-3b01-4bc1-a771-55347edc31dd]: (4, ('Tue Jan 27 02:01:13 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858 (1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad)\n1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad\nTue Jan 27 02:01:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858 (1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad)\n1ea0e1a034690c0c7c944ee2984961ed5f84505f199beca6051e4904964e5dad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.071 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[03a2d8ee-5e34-4a76-a34a-8eb146e91d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.072 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bd977f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:14 np0005597378 kernel: tap17bd977f-b0: left promiscuous mode
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.074 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.093 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5de802-89fb-48b1-8377-c163cd828eef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.113 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b84a109-2191-457a-a13f-d5c104278618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.114 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7d01588a-aa25-4f65-85c6-028351ff7762]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.135 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6482b24-537d-4257-a9e4-a2009a502aa1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506559, 'reachable_time': 36283, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321327, 'error': None, 'target': 'ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.136 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'pci_requests' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:14 np0005597378 systemd[1]: run-netns-ovnmeta\x2d17bd977f\x2db066\x2d45e7\x2db87f\x2df20ad7836858.mount: Deactivated successfully.
Jan 27 09:01:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.140 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-17bd977f-b066-45e7-b87f-f20ad7836858 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:01:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:14.140 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[dccd0cf9-0857-4687-8e15-f54e410c3f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.154 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.176 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'resources' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.194 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'migration_context' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.210 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.213 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.512 238945 DEBUG nova.compute.manager [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-unplugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.512 238945 DEBUG oslo_concurrency.lockutils [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.513 238945 DEBUG oslo_concurrency.lockutils [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.513 238945 DEBUG oslo_concurrency.lockutils [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.513 238945 DEBUG nova.compute.manager [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-unplugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.514 238945 DEBUG nova.compute.manager [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-unplugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.514 238945 DEBUG nova.compute.manager [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.514 238945 DEBUG oslo_concurrency.lockutils [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.514 238945 DEBUG oslo_concurrency.lockutils [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.515 238945 DEBUG oslo_concurrency.lockutils [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.515 238945 DEBUG nova.compute.manager [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] No waiting events found dispatching network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.515 238945 WARNING nova.compute.manager [req-4d9c069d-4947-4f11-8ab3-4f5bce516197 req-fc4f3e89-e843-4737-b980-c35842fe009d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received unexpected event network-vif-plugged-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:01:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 190 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 181 op/s
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.820 238945 INFO nova.virt.libvirt.driver [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Deleting instance files /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec_del#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.821 238945 INFO nova.virt.libvirt.driver [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Deletion of /var/lib/nova/instances/d133d5f9-1c2b-4996-955c-be57e53a44ec_del complete#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.826 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.890 238945 INFO nova.compute.manager [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Took 2.03 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.891 238945 DEBUG oslo.service.loopingcall [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.891 238945 DEBUG nova.compute.manager [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:01:14 np0005597378 nova_compute[238941]: 2026-01-27 14:01:14.891 238945 DEBUG nova.network.neutron [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:01:15 np0005597378 nova_compute[238941]: 2026-01-27 14:01:15.832 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522460.8319595, 5606aadf-848a-49fc-9cfd-897be16be855 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:01:15 np0005597378 nova_compute[238941]: 2026-01-27 14:01:15.833 238945 INFO nova.compute.manager [-] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:01:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 152 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 223 op/s
Jan 27 09:01:16 np0005597378 nova_compute[238941]: 2026-01-27 14:01:16.698 238945 DEBUG nova.network.neutron [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:01:16 np0005597378 nova_compute[238941]: 2026-01-27 14:01:16.725 238945 DEBUG nova.compute.manager [req-2f1f1c92-078c-4239-b4de-0538293b926c req-044344a5-630a-4997-aff3-549cb7af7ac9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Received event network-vif-deleted-e50fbfa4-a9d5-403e-a3ce-e3cd499555b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:16 np0005597378 nova_compute[238941]: 2026-01-27 14:01:16.725 238945 INFO nova.compute.manager [req-2f1f1c92-078c-4239-b4de-0538293b926c req-044344a5-630a-4997-aff3-549cb7af7ac9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Neutron deleted interface e50fbfa4-a9d5-403e-a3ce-e3cd499555b4; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:01:16 np0005597378 nova_compute[238941]: 2026-01-27 14:01:16.725 238945 DEBUG nova.network.neutron [req-2f1f1c92-078c-4239-b4de-0538293b926c req-044344a5-630a-4997-aff3-549cb7af7ac9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:01:16 np0005597378 nova_compute[238941]: 2026-01-27 14:01:16.751 238945 DEBUG nova.compute.manager [None req-7b1ad74f-9481-4552-bf28-1d16a28376fb - - - - - -] [instance: 5606aadf-848a-49fc-9cfd-897be16be855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:16 np0005597378 nova_compute[238941]: 2026-01-27 14:01:16.760 238945 INFO nova.compute.manager [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Took 1.87 seconds to deallocate network for instance.#033[00m
Jan 27 09:01:16 np0005597378 nova_compute[238941]: 2026-01-27 14:01:16.766 238945 DEBUG nova.compute.manager [req-2f1f1c92-078c-4239-b4de-0538293b926c req-044344a5-630a-4997-aff3-549cb7af7ac9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Detach interface failed, port_id=e50fbfa4-a9d5-403e-a3ce-e3cd499555b4, reason: Instance d133d5f9-1c2b-4996-955c-be57e53a44ec could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 09:01:16 np0005597378 nova_compute[238941]: 2026-01-27 14:01:16.808 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:16.809 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:01:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:16.811 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:01:16 np0005597378 nova_compute[238941]: 2026-01-27 14:01:16.835 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:16 np0005597378 nova_compute[238941]: 2026-01-27 14:01:16.836 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:16 np0005597378 nova_compute[238941]: 2026-01-27 14:01:16.906 238945 DEBUG oslo_concurrency.processutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Jan 27 09:01:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Jan 27 09:01:17 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Jan 27 09:01:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:01:17
Jan 27 09:01:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:01:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:01:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['images', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta']
Jan 27 09:01:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:01:17 np0005597378 nova_compute[238941]: 2026-01-27 14:01:17.193 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:01:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2175467007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:01:17 np0005597378 nova_compute[238941]: 2026-01-27 14:01:17.453 238945 DEBUG oslo_concurrency.processutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:17 np0005597378 nova_compute[238941]: 2026-01-27 14:01:17.460 238945 DEBUG nova.compute.provider_tree [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:01:17 np0005597378 nova_compute[238941]: 2026-01-27 14:01:17.473 238945 DEBUG nova.scheduler.client.report [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:01:17 np0005597378 nova_compute[238941]: 2026-01-27 14:01:17.515 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:01:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:01:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:01:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:01:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:01:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:01:18 np0005597378 nova_compute[238941]: 2026-01-27 14:01:18.001 238945 INFO nova.scheduler.client.report [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Deleted allocations for instance d133d5f9-1c2b-4996-955c-be57e53a44ec#033[00m
Jan 27 09:01:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:01:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:01:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:01:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:01:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:01:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:01:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:01:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:01:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:01:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:01:18 np0005597378 nova_compute[238941]: 2026-01-27 14:01:18.086 238945 DEBUG oslo_concurrency.lockutils [None req-2555e1eb-c7b2-42cb-a081-15cf9a5db799 e4411258cb6240ddb5365fb25e762594 d302ba58879d43258f0a8abe2d81f03a - - default default] Lock "d133d5f9-1c2b-4996-955c-be57e53a44ec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:18 np0005597378 nova_compute[238941]: 2026-01-27 14:01:18.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 184 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 12 MiB/s wr, 238 op/s
Jan 27 09:01:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Jan 27 09:01:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Jan 27 09:01:19 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Jan 27 09:01:19 np0005597378 nova_compute[238941]: 2026-01-27 14:01:19.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Jan 27 09:01:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Jan 27 09:01:20 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Jan 27 09:01:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 184 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 19 MiB/s wr, 207 op/s
Jan 27 09:01:22 np0005597378 nova_compute[238941]: 2026-01-27 14:01:22.194 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 184 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 19 MiB/s wr, 90 op/s
Jan 27 09:01:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:23 np0005597378 nova_compute[238941]: 2026-01-27 14:01:23.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:24 np0005597378 nova_compute[238941]: 2026-01-27 14:01:24.253 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 27 09:01:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 96 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 121 KiB/s rd, 16 MiB/s wr, 149 op/s
Jan 27 09:01:25 np0005597378 nova_compute[238941]: 2026-01-27 14:01:25.162 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522470.1574664, 5e1a13b1-a322-4bcd-a54b-0e4061979313 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:01:25 np0005597378 nova_compute[238941]: 2026-01-27 14:01:25.162 238945 INFO nova.compute.manager [-] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:01:25 np0005597378 nova_compute[238941]: 2026-01-27 14:01:25.188 238945 DEBUG nova.compute.manager [None req-388a85d7-4365-4a17-b275-7f43fffa5500 - - - - - -] [instance: 5e1a13b1-a322-4bcd-a54b-0e4061979313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 110 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 4.2 MiB/s wr, 114 op/s
Jan 27 09:01:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:26.814 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:27 np0005597378 nova_compute[238941]: 2026-01-27 14:01:27.196 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:27 np0005597378 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000062.scope: Deactivated successfully.
Jan 27 09:01:27 np0005597378 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000062.scope: Consumed 13.887s CPU time.
Jan 27 09:01:27 np0005597378 systemd-machined[207425]: Machine qemu-114-instance-00000062 terminated.
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006516771235176926 of space, bias 1.0, pg target 0.19550313705530778 quantized to 32 (current 32)
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675511713084465 of space, bias 1.0, pg target 0.20026535139253396 quantized to 32 (current 32)
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.088618269623075e-06 of space, bias 4.0, pg target 0.00130634192354769 quantized to 16 (current 16)
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:01:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:01:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Jan 27 09:01:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Jan 27 09:01:28 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.094 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522473.0939064, d133d5f9-1c2b-4996-955c-be57e53a44ec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.095 238945 INFO nova.compute.manager [-] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.116 238945 DEBUG nova.compute.manager [None req-969e8a2f-bda7-4383-9a3a-080b4e4f6a48 - - - - - -] [instance: d133d5f9-1c2b-4996-955c-be57e53a44ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.130 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.267 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance shutdown successfully after 14 seconds.#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.272 238945 INFO nova.virt.libvirt.driver [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance destroyed successfully.#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.276 238945 INFO nova.virt.libvirt.driver [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance destroyed successfully.#033[00m
Jan 27 09:01:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 490 KiB/s rd, 3.1 MiB/s wr, 144 op/s
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.631 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deleting instance files /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a_del#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.632 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deletion of /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a_del complete#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.786 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.787 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Creating image(s)#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.809 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.834 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.865 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.869 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.949 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.950 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.951 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.952 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.974 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:28 np0005597378 nova_compute[238941]: 2026-01-27 14:01:28.977 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.314 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.372 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] resizing rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.447 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.448 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Ensure instance console log exists: /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.448 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.449 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.449 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.450 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.454 238945 WARNING nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.464 238945 DEBUG nova.virt.libvirt.host [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.464 238945 DEBUG nova.virt.libvirt.host [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.472 238945 DEBUG nova.virt.libvirt.host [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.472 238945 DEBUG nova.virt.libvirt.host [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.473 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.473 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.473 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.474 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.474 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.474 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.474 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.474 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.475 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.475 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.475 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.475 238945 DEBUG nova.virt.hardware [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.476 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:29 np0005597378 nova_compute[238941]: 2026-01-27 14:01:29.491 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:01:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150274681' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:01:30 np0005597378 nova_compute[238941]: 2026-01-27 14:01:30.075 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:30 np0005597378 nova_compute[238941]: 2026-01-27 14:01:30.155 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:30 np0005597378 nova_compute[238941]: 2026-01-27 14:01:30.161 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 98 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 421 KiB/s rd, 3.0 MiB/s wr, 127 op/s
Jan 27 09:01:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:01:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/655292869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:01:30 np0005597378 nova_compute[238941]: 2026-01-27 14:01:30.721 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:30 np0005597378 nova_compute[238941]: 2026-01-27 14:01:30.724 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  <uuid>7dfb3234-e54d-417e-93b5-5b1f17a4820a</uuid>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  <name>instance-00000062</name>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerShowV254Test-server-927234309</nova:name>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:01:29</nova:creationTime>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:        <nova:user uuid="bd2f6f5d6ce541cc88ea7e10a215c460">tempest-ServerShowV254Test-1314669382-project-member</nova:user>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:        <nova:project uuid="6d2d99cc193e4d4e8444b64eff3dcf72">tempest-ServerShowV254Test-1314669382</nova:project>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <entry name="serial">7dfb3234-e54d-417e-93b5-5b1f17a4820a</entry>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <entry name="uuid">7dfb3234-e54d-417e-93b5-5b1f17a4820a</entry>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/console.log" append="off"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:01:30 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:01:30 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:01:30 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:01:30 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:01:30 np0005597378 nova_compute[238941]: 2026-01-27 14:01:30.830 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:01:30 np0005597378 nova_compute[238941]: 2026-01-27 14:01:30.832 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:01:30 np0005597378 nova_compute[238941]: 2026-01-27 14:01:30.832 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Using config drive#033[00m
Jan 27 09:01:30 np0005597378 nova_compute[238941]: 2026-01-27 14:01:30.855 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:30 np0005597378 nova_compute[238941]: 2026-01-27 14:01:30.884 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:31 np0005597378 nova_compute[238941]: 2026-01-27 14:01:31.122 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Creating config drive at /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config#033[00m
Jan 27 09:01:31 np0005597378 nova_compute[238941]: 2026-01-27 14:01:31.128 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkqs545ba execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:31 np0005597378 nova_compute[238941]: 2026-01-27 14:01:31.271 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkqs545ba" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:31 np0005597378 nova_compute[238941]: 2026-01-27 14:01:31.300 238945 DEBUG nova.storage.rbd_utils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] rbd image 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:31 np0005597378 nova_compute[238941]: 2026-01-27 14:01:31.304 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:31 np0005597378 nova_compute[238941]: 2026-01-27 14:01:31.779 238945 DEBUG oslo_concurrency.processutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config 7dfb3234-e54d-417e-93b5-5b1f17a4820a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:31 np0005597378 nova_compute[238941]: 2026-01-27 14:01:31.780 238945 INFO nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deleting local config drive /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a/disk.config because it was imported into RBD.#033[00m
Jan 27 09:01:31 np0005597378 systemd-machined[207425]: New machine qemu-115-instance-00000062.
Jan 27 09:01:31 np0005597378 systemd[1]: Started Virtual Machine qemu-115-instance-00000062.
Jan 27 09:01:31 np0005597378 podman[321667]: 2026-01-27 14:01:31.938754038 +0000 UTC m=+0.067620523 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.257 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 7dfb3234-e54d-417e-93b5-5b1f17a4820a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.258 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522492.257292, 7dfb3234-e54d-417e-93b5-5b1f17a4820a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.259 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.262 238945 DEBUG nova.compute.manager [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.263 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.266 238945 INFO nova.virt.libvirt.driver [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance spawned successfully.#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.267 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.301 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.306 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.307 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.307 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.308 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.308 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.309 238945 DEBUG nova.virt.libvirt.driver [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.314 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.352 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.353 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522492.258535, 7dfb3234-e54d-417e-93b5-5b1f17a4820a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.353 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] VM Started (Lifecycle Event)#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.386 238945 DEBUG nova.compute.manager [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.389 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.396 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.435 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.458 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.459 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.459 238945 DEBUG nova.objects.instance [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 09:01:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 98 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 421 KiB/s rd, 3.0 MiB/s wr, 127 op/s
Jan 27 09:01:32 np0005597378 nova_compute[238941]: 2026-01-27 14:01:32.613 238945 DEBUG oslo_concurrency.lockutils [None req-03a919a7-638e-4de6-9e37-19e87e73d4e0 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:33 np0005597378 nova_compute[238941]: 2026-01-27 14:01:33.133 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:33 np0005597378 podman[321736]: 2026-01-27 14:01:33.744057656 +0000 UTC m=+0.080249405 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 09:01:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.7 MiB/s wr, 157 op/s
Jan 27 09:01:34 np0005597378 nova_compute[238941]: 2026-01-27 14:01:34.857 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:34 np0005597378 nova_compute[238941]: 2026-01-27 14:01:34.858 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:34 np0005597378 nova_compute[238941]: 2026-01-27 14:01:34.858 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:34 np0005597378 nova_compute[238941]: 2026-01-27 14:01:34.859 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:34 np0005597378 nova_compute[238941]: 2026-01-27 14:01:34.859 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:34 np0005597378 nova_compute[238941]: 2026-01-27 14:01:34.860 238945 INFO nova.compute.manager [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Terminating instance#033[00m
Jan 27 09:01:34 np0005597378 nova_compute[238941]: 2026-01-27 14:01:34.861 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "refresh_cache-7dfb3234-e54d-417e-93b5-5b1f17a4820a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:01:34 np0005597378 nova_compute[238941]: 2026-01-27 14:01:34.861 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquired lock "refresh_cache-7dfb3234-e54d-417e-93b5-5b1f17a4820a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:01:34 np0005597378 nova_compute[238941]: 2026-01-27 14:01:34.863 238945 DEBUG nova.network.neutron [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:01:35 np0005597378 nova_compute[238941]: 2026-01-27 14:01:35.184 238945 DEBUG nova.network.neutron [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:01:35 np0005597378 nova_compute[238941]: 2026-01-27 14:01:35.622 238945 DEBUG nova.network.neutron [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:01:35 np0005597378 nova_compute[238941]: 2026-01-27 14:01:35.739 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Releasing lock "refresh_cache-7dfb3234-e54d-417e-93b5-5b1f17a4820a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:01:35 np0005597378 nova_compute[238941]: 2026-01-27 14:01:35.740 238945 DEBUG nova.compute.manager [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:01:35 np0005597378 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000062.scope: Deactivated successfully.
Jan 27 09:01:35 np0005597378 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000062.scope: Consumed 3.913s CPU time.
Jan 27 09:01:35 np0005597378 systemd-machined[207425]: Machine qemu-115-instance-00000062 terminated.
Jan 27 09:01:35 np0005597378 nova_compute[238941]: 2026-01-27 14:01:35.960 238945 INFO nova.virt.libvirt.driver [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance destroyed successfully.#033[00m
Jan 27 09:01:35 np0005597378 nova_compute[238941]: 2026-01-27 14:01:35.961 238945 DEBUG nova.objects.instance [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lazy-loading 'resources' on Instance uuid 7dfb3234-e54d-417e-93b5-5b1f17a4820a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 165 op/s
Jan 27 09:01:37 np0005597378 nova_compute[238941]: 2026-01-27 14:01:37.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:37 np0005597378 nova_compute[238941]: 2026-01-27 14:01:37.282 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:37 np0005597378 nova_compute[238941]: 2026-01-27 14:01:37.282 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:37 np0005597378 nova_compute[238941]: 2026-01-27 14:01:37.299 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:01:37 np0005597378 nova_compute[238941]: 2026-01-27 14:01:37.383 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:37 np0005597378 nova_compute[238941]: 2026-01-27 14:01:37.384 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:37 np0005597378 nova_compute[238941]: 2026-01-27 14:01:37.392 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:01:37 np0005597378 nova_compute[238941]: 2026-01-27 14:01:37.392 238945 INFO nova.compute.claims [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:01:37 np0005597378 nova_compute[238941]: 2026-01-27 14:01:37.522 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.136 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:01:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627631841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.245 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.723s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.252 238945 DEBUG nova.compute.provider_tree [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.274 238945 DEBUG nova.scheduler.client.report [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.294 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.295 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.446 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.446 238945 DEBUG nova.network.neutron [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.480 238945 INFO nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.515 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:01:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 62 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 179 op/s
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.665 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.667 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.667 238945 INFO nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Creating image(s)#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.687 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.706 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.727 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.730 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.762 238945 DEBUG nova.policy [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8b6fd848f3a4701b63086a5fb386473', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.798 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.799 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.799 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.799 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.816 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:38 np0005597378 nova_compute[238941]: 2026-01-27 14:01:38.820 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:40 np0005597378 nova_compute[238941]: 2026-01-27 14:01:40.018 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:40 np0005597378 nova_compute[238941]: 2026-01-27 14:01:40.048 238945 DEBUG nova.network.neutron [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Successfully created port: 058b32ea-7973-4220-91fa-58dc678da20a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:01:40 np0005597378 nova_compute[238941]: 2026-01-27 14:01:40.090 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] resizing rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:01:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 54 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Jan 27 09:01:40 np0005597378 nova_compute[238941]: 2026-01-27 14:01:40.886 238945 DEBUG nova.objects.instance [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'migration_context' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:40 np0005597378 nova_compute[238941]: 2026-01-27 14:01:40.911 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:01:40 np0005597378 nova_compute[238941]: 2026-01-27 14:01:40.911 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Ensure instance console log exists: /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:01:40 np0005597378 nova_compute[238941]: 2026-01-27 14:01:40.912 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:40 np0005597378 nova_compute[238941]: 2026-01-27 14:01:40.912 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:40 np0005597378 nova_compute[238941]: 2026-01-27 14:01:40.912 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:40 np0005597378 nova_compute[238941]: 2026-01-27 14:01:40.957 238945 INFO nova.virt.libvirt.driver [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deleting instance files /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a_del#033[00m
Jan 27 09:01:40 np0005597378 nova_compute[238941]: 2026-01-27 14:01:40.958 238945 INFO nova.virt.libvirt.driver [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deletion of /var/lib/nova/instances/7dfb3234-e54d-417e-93b5-5b1f17a4820a_del complete#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.028 238945 INFO nova.compute.manager [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Took 5.29 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.028 238945 DEBUG oslo.service.loopingcall [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.028 238945 DEBUG nova.compute.manager [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.029 238945 DEBUG nova.network.neutron [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.237 238945 DEBUG nova.network.neutron [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.254 238945 DEBUG nova.network.neutron [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.283 238945 INFO nova.compute.manager [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Took 0.25 seconds to deallocate network for instance.#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.379 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.380 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.473 238945 DEBUG oslo_concurrency.processutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.680 238945 DEBUG nova.network.neutron [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Successfully updated port: 058b32ea-7973-4220-91fa-58dc678da20a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.710 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.710 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.711 238945 DEBUG nova.network.neutron [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.896 238945 DEBUG nova.compute.manager [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-changed-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.898 238945 DEBUG nova.compute.manager [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Refreshing instance network info cache due to event network-changed-058b32ea-7973-4220-91fa-58dc678da20a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.898 238945 DEBUG oslo_concurrency.lockutils [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:01:41 np0005597378 nova_compute[238941]: 2026-01-27 14:01:41.933 238945 DEBUG nova.network.neutron [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:01:42 np0005597378 nova_compute[238941]: 2026-01-27 14:01:42.201 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 54 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Jan 27 09:01:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:01:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:01:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:01:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1570022551' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:01:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:01:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:01:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:01:42 np0005597378 nova_compute[238941]: 2026-01-27 14:01:42.677 238945 DEBUG oslo_concurrency.processutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:42 np0005597378 nova_compute[238941]: 2026-01-27 14:01:42.683 238945 DEBUG nova.compute.provider_tree [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:01:42 np0005597378 nova_compute[238941]: 2026-01-27 14:01:42.880 238945 DEBUG nova.scheduler.client.report [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:01:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:01:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:01:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:01:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:01:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:01:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:01:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.138 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.237 238945 DEBUG nova.network.neutron [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.248 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.314 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.314 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance network_info: |[{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.314 238945 DEBUG oslo_concurrency.lockutils [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.315 238945 DEBUG nova.network.neutron [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Refreshing network info cache for port 058b32ea-7973-4220-91fa-58dc678da20a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.318 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start _get_guest_xml network_info=[{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.323 238945 WARNING nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.353 238945 DEBUG nova.virt.libvirt.host [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.354 238945 DEBUG nova.virt.libvirt.host [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.355 238945 INFO nova.scheduler.client.report [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Deleted allocations for instance 7dfb3234-e54d-417e-93b5-5b1f17a4820a#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.360 238945 DEBUG nova.virt.libvirt.host [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.361 238945 DEBUG nova.virt.libvirt.host [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.361 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.361 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.362 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.362 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.362 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.362 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.363 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.363 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.363 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.363 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.363 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.364 238945 DEBUG nova.virt.hardware [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.366 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.612 238945 DEBUG oslo_concurrency.lockutils [None req-b2e47546-4406-4cd1-bd7a-91131552fc27 bd2f6f5d6ce541cc88ea7e10a215c460 6d2d99cc193e4d4e8444b64eff3dcf72 - - default default] Lock "7dfb3234-e54d-417e-93b5-5b1f17a4820a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:43 np0005597378 podman[322138]: 2026-01-27 14:01:43.513903288 +0000 UTC m=+0.019400942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:01:43 np0005597378 podman[322138]: 2026-01-27 14:01:43.686419183 +0000 UTC m=+0.191916807 container create 663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:01:43 np0005597378 systemd[1]: Started libpod-conmon-663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a.scope.
Jan 27 09:01:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:01:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:01:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2374057659' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:01:43 np0005597378 nova_compute[238941]: 2026-01-27 14:01:43.922 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:01:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:01:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:01:43 np0005597378 podman[322138]: 2026-01-27 14:01:43.954812383 +0000 UTC m=+0.460310017 container init 663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:01:43 np0005597378 podman[322138]: 2026-01-27 14:01:43.962719801 +0000 UTC m=+0.468217425 container start 663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:01:43 np0005597378 reverent_ritchie[322171]: 167 167
Jan 27 09:01:43 np0005597378 systemd[1]: libpod-663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a.scope: Deactivated successfully.
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.036 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.041 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:44 np0005597378 podman[322138]: 2026-01-27 14:01:44.074515356 +0000 UTC m=+0.580012980 container attach 663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:01:44 np0005597378 podman[322138]: 2026-01-27 14:01:44.07507678 +0000 UTC m=+0.580574404 container died 663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:01:44 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d4c764aa483f4a9280ef7dc5a2adfba9d686259b707d9be0b10d0914298225e0-merged.mount: Deactivated successfully.
Jan 27 09:01:44 np0005597378 podman[322138]: 2026-01-27 14:01:44.271246309 +0000 UTC m=+0.776743923 container remove 663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 09:01:44 np0005597378 systemd[1]: libpod-conmon-663d2729bc5b2628bca22f20567b172ca9193746677944f49aba77b368e6368a.scope: Deactivated successfully.
Jan 27 09:01:44 np0005597378 podman[322240]: 2026-01-27 14:01:44.451054405 +0000 UTC m=+0.059889038 container create 7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:01:44 np0005597378 systemd[1]: Started libpod-conmon-7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1.scope.
Jan 27 09:01:44 np0005597378 podman[322240]: 2026-01-27 14:01:44.414718368 +0000 UTC m=+0.023553031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:01:44 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:01:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225a1bc34f05a0c93923f8cc7d3609f0b84621e6e1f18a084839ead3d9c3205e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225a1bc34f05a0c93923f8cc7d3609f0b84621e6e1f18a084839ead3d9c3205e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225a1bc34f05a0c93923f8cc7d3609f0b84621e6e1f18a084839ead3d9c3205e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225a1bc34f05a0c93923f8cc7d3609f0b84621e6e1f18a084839ead3d9c3205e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:44 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225a1bc34f05a0c93923f8cc7d3609f0b84621e6e1f18a084839ead3d9c3205e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:44 np0005597378 podman[322240]: 2026-01-27 14:01:44.60686035 +0000 UTC m=+0.215694983 container init 7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 09:01:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 88 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 174 op/s
Jan 27 09:01:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:01:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1071017558' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:01:44 np0005597378 podman[322240]: 2026-01-27 14:01:44.617996123 +0000 UTC m=+0.226830746 container start 7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.634 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.638 238945 DEBUG nova.virt.libvirt.vif [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:01:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.639 238945 DEBUG nova.network.os_vif_util [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.640 238945 DEBUG nova.network.os_vif_util [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.642 238945 DEBUG nova.objects.instance [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:44 np0005597378 podman[322240]: 2026-01-27 14:01:44.642745075 +0000 UTC m=+0.251579708 container attach 7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.719 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  <uuid>2b352ec7-34b6-47bb-af67-779b4d1f27cd</uuid>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  <name>instance-00000063</name>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerActionsTestJSON-server-281114499</nova:name>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:01:43</nova:creationTime>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:        <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:        <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:        <nova:port uuid="058b32ea-7973-4220-91fa-58dc678da20a">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <entry name="serial">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <entry name="uuid">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:76:b6:89"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <target dev="tap058b32ea-79"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/console.log" append="off"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:01:44 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:01:44 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:01:44 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:01:44 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.722 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Preparing to wait for external event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.722 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.723 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.723 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.724 238945 DEBUG nova.virt.libvirt.vif [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:01:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.724 238945 DEBUG nova.network.os_vif_util [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.725 238945 DEBUG nova.network.os_vif_util [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.726 238945 DEBUG os_vif [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.727 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.727 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.728 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.732 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.733 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058b32ea-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.734 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap058b32ea-79, col_values=(('external_ids', {'iface-id': '058b32ea-7973-4220-91fa-58dc678da20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:b6:89', 'vm-uuid': '2b352ec7-34b6-47bb-af67-779b4d1f27cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:44 np0005597378 NetworkManager[48904]: <info>  [1769522504.7376] manager: (tap058b32ea-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.741 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.747 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.748 238945 INFO os_vif [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.894 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.895 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.896 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No VIF found with MAC fa:16:3e:76:b6:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.896 238945 INFO nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Using config drive#033[00m
Jan 27 09:01:44 np0005597378 nova_compute[238941]: 2026-01-27 14:01:44.925 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:45 np0005597378 adoring_heisenberg[322257]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:01:45 np0005597378 adoring_heisenberg[322257]: --> All data devices are unavailable
Jan 27 09:01:45 np0005597378 systemd[1]: libpod-7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1.scope: Deactivated successfully.
Jan 27 09:01:45 np0005597378 conmon[322257]: conmon 7e99f250787ebe342e81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1.scope/container/memory.events
Jan 27 09:01:45 np0005597378 podman[322240]: 2026-01-27 14:01:45.139345328 +0000 UTC m=+0.748179961 container died 7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 09:01:45 np0005597378 nova_compute[238941]: 2026-01-27 14:01:45.439 238945 INFO nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Creating config drive at /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/disk.config#033[00m
Jan 27 09:01:45 np0005597378 nova_compute[238941]: 2026-01-27 14:01:45.445 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw_3h0zni execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:45 np0005597378 nova_compute[238941]: 2026-01-27 14:01:45.591 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw_3h0zni" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:45 np0005597378 nova_compute[238941]: 2026-01-27 14:01:45.619 238945 DEBUG nova.storage.rbd_utils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:01:45 np0005597378 nova_compute[238941]: 2026-01-27 14:01:45.625 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/disk.config 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:45 np0005597378 systemd[1]: var-lib-containers-storage-overlay-225a1bc34f05a0c93923f8cc7d3609f0b84621e6e1f18a084839ead3d9c3205e-merged.mount: Deactivated successfully.
Jan 27 09:01:45 np0005597378 nova_compute[238941]: 2026-01-27 14:01:45.837 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:01:45 np0005597378 nova_compute[238941]: 2026-01-27 14:01:45.858 238945 DEBUG nova.network.neutron [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updated VIF entry in instance network info cache for port 058b32ea-7973-4220-91fa-58dc678da20a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:01:45 np0005597378 nova_compute[238941]: 2026-01-27 14:01:45.860 238945 DEBUG nova.network.neutron [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:01:45 np0005597378 nova_compute[238941]: 2026-01-27 14:01:45.888 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 09:01:45 np0005597378 nova_compute[238941]: 2026-01-27 14:01:45.888 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:45 np0005597378 nova_compute[238941]: 2026-01-27 14:01:45.902 238945 DEBUG oslo_concurrency.lockutils [req-f0a34542-19df-4299-bb6c-31c7dc03c81e req-907a8ef3-6c06-4d65-83c3-276c2b54ee9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:01:46 np0005597378 podman[322240]: 2026-01-27 14:01:46.126311698 +0000 UTC m=+1.735146331 container remove 7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_heisenberg, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:01:46 np0005597378 systemd[1]: libpod-conmon-7e99f250787ebe342e816af276ef826cd9cb48a7bb8664d78a182e7af77cfdf1.scope: Deactivated successfully.
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.309 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.310 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.310 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 27 09:01:46 np0005597378 podman[322416]: 2026-01-27 14:01:46.68023426 +0000 UTC m=+0.073508487 container create ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:01:46 np0005597378 nova_compute[238941]: 2026-01-27 14:01:46.681 238945 DEBUG oslo_concurrency.processutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/disk.config 2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:46 np0005597378 nova_compute[238941]: 2026-01-27 14:01:46.681 238945 INFO nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Deleting local config drive /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/disk.config because it was imported into RBD.#033[00m
Jan 27 09:01:46 np0005597378 podman[322416]: 2026-01-27 14:01:46.626742261 +0000 UTC m=+0.020016508 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:01:46 np0005597378 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 09:01:46 np0005597378 systemd[1]: Started libpod-conmon-ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582.scope.
Jan 27 09:01:46 np0005597378 NetworkManager[48904]: <info>  [1769522506.7488] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/375)
Jan 27 09:01:46 np0005597378 nova_compute[238941]: 2026-01-27 14:01:46.749 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:46Z|00901|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 09:01:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:46Z|00902|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 09:01:46 np0005597378 nova_compute[238941]: 2026-01-27 14:01:46.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:46 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:01:46 np0005597378 systemd-udevd[322448]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:01:46 np0005597378 systemd-machined[207425]: New machine qemu-116-instance-00000063.
Jan 27 09:01:46 np0005597378 NetworkManager[48904]: <info>  [1769522506.8240] device (tap058b32ea-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:01:46 np0005597378 NetworkManager[48904]: <info>  [1769522506.8253] device (tap058b32ea-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:01:46 np0005597378 nova_compute[238941]: 2026-01-27 14:01:46.832 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:46 np0005597378 systemd[1]: Started Virtual Machine qemu-116-instance-00000063.
Jan 27 09:01:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:46Z|00903|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 09:01:46 np0005597378 nova_compute[238941]: 2026-01-27 14:01:46.840 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:46Z|00904|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a up in Southbound
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.843 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.845 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis#033[00m
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.847 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151#033[00m
Jan 27 09:01:46 np0005597378 podman[322416]: 2026-01-27 14:01:46.85943733 +0000 UTC m=+0.252711577 container init ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lumiere, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.869 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e7f100-79be-4b24-94d9-d441852d3121]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:46 np0005597378 podman[322416]: 2026-01-27 14:01:46.871248232 +0000 UTC m=+0.264522459 container start ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lumiere, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.871 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5dcf6e0-71 in ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.874 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5dcf6e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.874 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f237f809-1290-4fc0-931e-f26ee696a54c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.876 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d0fcfa37-36c5-4259-98d1-2afd562a9409]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:46 np0005597378 kind_lumiere[322442]: 167 167
Jan 27 09:01:46 np0005597378 systemd[1]: libpod-ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582.scope: Deactivated successfully.
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.896 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9de115b6-8257-4013-af8c-1f6180005da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.912 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9410c38e-8dfb-4136-ba54-a23bfe844a95]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.943 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6a159830-a4a6-48c9-8e2f-e29ddc9ee363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.950 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc784de-3db7-44b1-99dd-8002bf92f698]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:46 np0005597378 systemd-udevd[322450]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:01:46 np0005597378 NetworkManager[48904]: <info>  [1769522506.9515] manager: (tapb5dcf6e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/376)
Jan 27 09:01:46 np0005597378 podman[322416]: 2026-01-27 14:01:46.974925863 +0000 UTC m=+0.368200120 container attach ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lumiere, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:01:46 np0005597378 podman[322416]: 2026-01-27 14:01:46.976422823 +0000 UTC m=+0.369697070 container died ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.987 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[79d635ac-c1bf-41d3-875f-e0366f411601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:46.991 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b8828d94-91a1-4a01-a9ba-9db850a832e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:47 np0005597378 NetworkManager[48904]: <info>  [1769522507.0188] device (tapb5dcf6e0-70): carrier: link connected
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.025 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[99b379c3-c913-4e41-b462-61647b77876b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.047 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f476d254-6071-4b8f-b4d5-cba471a37e10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514664, 'reachable_time': 28125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322494, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.069 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9972c52e-ad1b-4110-803f-a80aa32a4e35]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:8cf1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514664, 'tstamp': 514664}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322495, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.089 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08f09857-2b81-40f4-bf82-715bcda9a489]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514664, 'reachable_time': 28125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322496, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.126 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c1fa3a-0160-4ef0-a798-317949e54f40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:47 np0005597378 nova_compute[238941]: 2026-01-27 14:01:47.203 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.212 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c731a684-c0c3-43ab-b303-b598c080dfc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.214 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.214 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.215 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:47 np0005597378 nova_compute[238941]: 2026-01-27 14:01:47.218 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:47 np0005597378 kernel: tapb5dcf6e0-70: entered promiscuous mode
Jan 27 09:01:47 np0005597378 NetworkManager[48904]: <info>  [1769522507.2204] manager: (tapb5dcf6e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Jan 27 09:01:47 np0005597378 nova_compute[238941]: 2026-01-27 14:01:47.221 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.222 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:01:47 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:47Z|00905|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 09:01:47 np0005597378 nova_compute[238941]: 2026-01-27 14:01:47.223 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:47 np0005597378 nova_compute[238941]: 2026-01-27 14:01:47.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.243 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.244 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[983b9779-f159-404c-bffd-654a27599a3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.245 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:01:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:01:47.246 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'env', 'PROCESS_TAG=haproxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:01:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-784f4d6019907ce19a1405e97578d120255cd7eb6d1cdf474fc161d0e142aaf7-merged.mount: Deactivated successfully.
Jan 27 09:01:47 np0005597378 podman[322416]: 2026-01-27 14:01:47.342483846 +0000 UTC m=+0.735758083 container remove ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:01:47 np0005597378 systemd[1]: libpod-conmon-ee1e2e6d94e7afc1ce5491848e6af87671fc8f6679492a0beaf126a179dd0582.scope: Deactivated successfully.
Jan 27 09:01:47 np0005597378 nova_compute[238941]: 2026-01-27 14:01:47.457 238945 DEBUG nova.compute.manager [req-83e78431-ea95-4fd9-a150-6bf8ec8c8447 req-7aa14841-bee8-4929-92e2-9f16e3b2b95c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:47 np0005597378 nova_compute[238941]: 2026-01-27 14:01:47.457 238945 DEBUG oslo_concurrency.lockutils [req-83e78431-ea95-4fd9-a150-6bf8ec8c8447 req-7aa14841-bee8-4929-92e2-9f16e3b2b95c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:47 np0005597378 nova_compute[238941]: 2026-01-27 14:01:47.457 238945 DEBUG oslo_concurrency.lockutils [req-83e78431-ea95-4fd9-a150-6bf8ec8c8447 req-7aa14841-bee8-4929-92e2-9f16e3b2b95c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:47 np0005597378 nova_compute[238941]: 2026-01-27 14:01:47.457 238945 DEBUG oslo_concurrency.lockutils [req-83e78431-ea95-4fd9-a150-6bf8ec8c8447 req-7aa14841-bee8-4929-92e2-9f16e3b2b95c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:47 np0005597378 nova_compute[238941]: 2026-01-27 14:01:47.458 238945 DEBUG nova.compute.manager [req-83e78431-ea95-4fd9-a150-6bf8ec8c8447 req-7aa14841-bee8-4929-92e2-9f16e3b2b95c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Processing event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:01:47 np0005597378 podman[322514]: 2026-01-27 14:01:47.519481729 +0000 UTC m=+0.041765541 container create 081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:01:47 np0005597378 systemd[1]: Started libpod-conmon-081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f.scope.
Jan 27 09:01:47 np0005597378 podman[322514]: 2026-01-27 14:01:47.50207939 +0000 UTC m=+0.024363222 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:01:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:01:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6271f44c6c69f5c34a1fb7e3fda4644d2c4d2ae578cabae782c6fb6850ce2332/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6271f44c6c69f5c34a1fb7e3fda4644d2c4d2ae578cabae782c6fb6850ce2332/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6271f44c6c69f5c34a1fb7e3fda4644d2c4d2ae578cabae782c6fb6850ce2332/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6271f44c6c69f5c34a1fb7e3fda4644d2c4d2ae578cabae782c6fb6850ce2332/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:47 np0005597378 podman[322514]: 2026-01-27 14:01:47.634002365 +0000 UTC m=+0.156286177 container init 081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:01:47 np0005597378 podman[322514]: 2026-01-27 14:01:47.644748488 +0000 UTC m=+0.167032300 container start 081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 09:01:47 np0005597378 podman[322514]: 2026-01-27 14:01:47.654909936 +0000 UTC m=+0.177193748 container attach 081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lederberg, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:01:47 np0005597378 podman[322555]: 2026-01-27 14:01:47.738881148 +0000 UTC m=+0.134329150 container create 4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:01:47 np0005597378 podman[322555]: 2026-01-27 14:01:47.64672657 +0000 UTC m=+0.042174592 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:01:47 np0005597378 systemd[1]: Started libpod-conmon-4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2.scope.
Jan 27 09:01:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:01:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:01:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:01:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:01:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:01:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:01:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:01:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9260db7a4e48af278aa48fcde7df8149d485c16599cc482c919caace961c902c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:47 np0005597378 podman[322555]: 2026-01-27 14:01:47.862600808 +0000 UTC m=+0.258048830 container init 4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:01:47 np0005597378 podman[322555]: 2026-01-27 14:01:47.869726756 +0000 UTC m=+0.265174768 container start 4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 09:01:47 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [NOTICE]   (322614) : New worker (322619) forked
Jan 27 09:01:47 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [NOTICE]   (322614) : Loading success.
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]: {
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:    "0": [
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:        {
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "devices": [
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "/dev/loop3"
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            ],
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_name": "ceph_lv0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_size": "21470642176",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "name": "ceph_lv0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "tags": {
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.cluster_name": "ceph",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.crush_device_class": "",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.encrypted": "0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.objectstore": "bluestore",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.osd_id": "0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.type": "block",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.vdo": "0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.with_tpm": "0"
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            },
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "type": "block",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "vg_name": "ceph_vg0"
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:        }
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:    ],
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:    "1": [
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:        {
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "devices": [
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "/dev/loop4"
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            ],
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_name": "ceph_lv1",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_size": "21470642176",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "name": "ceph_lv1",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "tags": {
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.cluster_name": "ceph",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.crush_device_class": "",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.encrypted": "0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.objectstore": "bluestore",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.osd_id": "1",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.type": "block",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.vdo": "0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.with_tpm": "0"
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            },
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "type": "block",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "vg_name": "ceph_vg1"
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:        }
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:    ],
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:    "2": [
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:        {
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "devices": [
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "/dev/loop5"
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            ],
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_name": "ceph_lv2",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_size": "21470642176",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "name": "ceph_lv2",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "tags": {
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.cluster_name": "ceph",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.crush_device_class": "",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.encrypted": "0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.objectstore": "bluestore",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.osd_id": "2",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.type": "block",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.vdo": "0",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:                "ceph.with_tpm": "0"
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            },
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "type": "block",
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:            "vg_name": "ceph_vg2"
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:        }
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]:    ]
Jan 27 09:01:47 np0005597378 stupefied_lederberg[322547]: }
Jan 27 09:01:47 np0005597378 systemd[1]: libpod-081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f.scope: Deactivated successfully.
Jan 27 09:01:47 np0005597378 podman[322514]: 2026-01-27 14:01:47.995635172 +0000 UTC m=+0.517918984 container died 081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.007 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522508.0069208, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.007 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Started (Lifecycle Event)#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.010 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.014 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.019 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance spawned successfully.#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.020 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:01:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay-6271f44c6c69f5c34a1fb7e3fda4644d2c4d2ae578cabae782c6fb6850ce2332-merged.mount: Deactivated successfully.
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.101 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.105 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.139 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.139 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.140 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.140 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.141 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.141 238945 DEBUG nova.virt.libvirt.driver [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.155 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.156 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522508.0070324, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.156 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.221 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.225 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522508.0137374, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:01:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.225 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.274 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:48 np0005597378 podman[322514]: 2026-01-27 14:01:48.274874238 +0000 UTC m=+0.797158050 container remove 081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.279 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:01:48 np0005597378 systemd[1]: libpod-conmon-081e7167056b277a4b0d559682e1e69854fd6ca339401471ccf018f07417853f.scope: Deactivated successfully.
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.291 238945 INFO nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Took 9.63 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.291 238945 DEBUG nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.414 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.434 238945 INFO nova.compute.manager [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Took 11.08 seconds to build instance.#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.484 238945 DEBUG oslo_concurrency.lockutils [None req-0f04e3bc-59cb-4b69-8b80-6fbeb8f288d2 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.484 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.485 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:01:48 np0005597378 nova_compute[238941]: 2026-01-27 14:01:48.485 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Jan 27 09:01:48 np0005597378 podman[322706]: 2026-01-27 14:01:48.869247287 +0000 UTC m=+0.092709665 container create ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:01:48 np0005597378 podman[322706]: 2026-01-27 14:01:48.801983165 +0000 UTC m=+0.025445563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:01:48 np0005597378 systemd[1]: Started libpod-conmon-ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860.scope.
Jan 27 09:01:48 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:01:48 np0005597378 podman[322706]: 2026-01-27 14:01:48.981181474 +0000 UTC m=+0.204643882 container init ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_varahamihira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:01:48 np0005597378 podman[322706]: 2026-01-27 14:01:48.991154758 +0000 UTC m=+0.214617136 container start ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_varahamihira, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:01:48 np0005597378 xenodochial_varahamihira[322722]: 167 167
Jan 27 09:01:48 np0005597378 systemd[1]: libpod-ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860.scope: Deactivated successfully.
Jan 27 09:01:49 np0005597378 podman[322706]: 2026-01-27 14:01:49.046648759 +0000 UTC m=+0.270111167 container attach ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_varahamihira, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:01:49 np0005597378 podman[322706]: 2026-01-27 14:01:49.046981758 +0000 UTC m=+0.270444136 container died ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 09:01:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-616e4518b2322a583cb192e8c31fff83c4cb8be17d7eafffb2941f1ae93be2fd-merged.mount: Deactivated successfully.
Jan 27 09:01:49 np0005597378 podman[322706]: 2026-01-27 14:01:49.190734735 +0000 UTC m=+0.414197113 container remove ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:01:49 np0005597378 systemd[1]: libpod-conmon-ce6d7858dca4a5ae7a8410a653cd7db0c2b5681a7041d878c3774705cbfd3860.scope: Deactivated successfully.
Jan 27 09:01:49 np0005597378 podman[322748]: 2026-01-27 14:01:49.387613922 +0000 UTC m=+0.066646657 container create 894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 09:01:49 np0005597378 podman[322748]: 2026-01-27 14:01:49.344784373 +0000 UTC m=+0.023817118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:01:49 np0005597378 systemd[1]: Started libpod-conmon-894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882.scope.
Jan 27 09:01:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:01:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcb03013e07fc4f6a04be9e17b35d025623acafb04eb1cabbe3d0734df3d506/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcb03013e07fc4f6a04be9e17b35d025623acafb04eb1cabbe3d0734df3d506/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcb03013e07fc4f6a04be9e17b35d025623acafb04eb1cabbe3d0734df3d506/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcb03013e07fc4f6a04be9e17b35d025623acafb04eb1cabbe3d0734df3d506/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:01:49 np0005597378 podman[322748]: 2026-01-27 14:01:49.598547238 +0000 UTC m=+0.277579993 container init 894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:01:49 np0005597378 podman[322748]: 2026-01-27 14:01:49.606166289 +0000 UTC m=+0.285199024 container start 894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_burnell, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:01:49 np0005597378 podman[322748]: 2026-01-27 14:01:49.632708329 +0000 UTC m=+0.311741094 container attach 894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_burnell, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 09:01:49 np0005597378 nova_compute[238941]: 2026-01-27 14:01:49.677 238945 DEBUG nova.compute.manager [req-98e1cfd1-2f25-47e6-a049-4ed90547d81e req-3ade9d1c-007e-4fda-906f-070a1ab23718 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:49 np0005597378 nova_compute[238941]: 2026-01-27 14:01:49.678 238945 DEBUG oslo_concurrency.lockutils [req-98e1cfd1-2f25-47e6-a049-4ed90547d81e req-3ade9d1c-007e-4fda-906f-070a1ab23718 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:49 np0005597378 nova_compute[238941]: 2026-01-27 14:01:49.678 238945 DEBUG oslo_concurrency.lockutils [req-98e1cfd1-2f25-47e6-a049-4ed90547d81e req-3ade9d1c-007e-4fda-906f-070a1ab23718 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:49 np0005597378 nova_compute[238941]: 2026-01-27 14:01:49.678 238945 DEBUG oslo_concurrency.lockutils [req-98e1cfd1-2f25-47e6-a049-4ed90547d81e req-3ade9d1c-007e-4fda-906f-070a1ab23718 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:49 np0005597378 nova_compute[238941]: 2026-01-27 14:01:49.678 238945 DEBUG nova.compute.manager [req-98e1cfd1-2f25-47e6-a049-4ed90547d81e req-3ade9d1c-007e-4fda-906f-070a1ab23718 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:01:49 np0005597378 nova_compute[238941]: 2026-01-27 14:01:49.678 238945 WARNING nova.compute.manager [req-98e1cfd1-2f25-47e6-a049-4ed90547d81e req-3ade9d1c-007e-4fda-906f-070a1ab23718 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.#033[00m
Jan 27 09:01:49 np0005597378 nova_compute[238941]: 2026-01-27 14:01:49.736 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:50 np0005597378 lvm[322845]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:01:50 np0005597378 lvm[322843]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:01:50 np0005597378 lvm[322843]: VG ceph_vg0 finished
Jan 27 09:01:50 np0005597378 lvm[322845]: VG ceph_vg1 finished
Jan 27 09:01:50 np0005597378 lvm[322846]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:01:50 np0005597378 lvm[322846]: VG ceph_vg2 finished
Jan 27 09:01:50 np0005597378 xenodochial_burnell[322765]: {}
Jan 27 09:01:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Jan 27 09:01:50 np0005597378 systemd[1]: libpod-894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882.scope: Deactivated successfully.
Jan 27 09:01:50 np0005597378 systemd[1]: libpod-894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882.scope: Consumed 1.641s CPU time.
Jan 27 09:01:50 np0005597378 podman[322849]: 2026-01-27 14:01:50.700093927 +0000 UTC m=+0.034222442 container died 894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:01:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1fcb03013e07fc4f6a04be9e17b35d025623acafb04eb1cabbe3d0734df3d506-merged.mount: Deactivated successfully.
Jan 27 09:01:50 np0005597378 podman[322849]: 2026-01-27 14:01:50.814211274 +0000 UTC m=+0.148339779 container remove 894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_burnell, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 09:01:50 np0005597378 systemd[1]: libpod-conmon-894505171e110e4b48b2a6d95276f3b482eb5c97b6efd4111a73987c017d4882.scope: Deactivated successfully.
Jan 27 09:01:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:01:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:01:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:01:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:01:50 np0005597378 nova_compute[238941]: 2026-01-27 14:01:50.959 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522495.9581244, 7dfb3234-e54d-417e-93b5-5b1f17a4820a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:01:50 np0005597378 nova_compute[238941]: 2026-01-27 14:01:50.960 238945 INFO nova.compute.manager [-] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:01:50 np0005597378 nova_compute[238941]: 2026-01-27 14:01:50.978 238945 DEBUG nova.compute.manager [None req-ab0b97f8-10b1-44c7-afcf-adacb982e3ca - - - - - -] [instance: 7dfb3234-e54d-417e-93b5-5b1f17a4820a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:01:51 np0005597378 nova_compute[238941]: 2026-01-27 14:01:51.433 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:01:51 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:01:51 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:01:52 np0005597378 nova_compute[238941]: 2026-01-27 14:01:52.204 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:52Z|00906|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 09:01:52 np0005597378 NetworkManager[48904]: <info>  [1769522512.2293] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Jan 27 09:01:52 np0005597378 nova_compute[238941]: 2026-01-27 14:01:52.228 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:52 np0005597378 NetworkManager[48904]: <info>  [1769522512.2305] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Jan 27 09:01:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:01:52Z|00907|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 09:01:52 np0005597378 nova_compute[238941]: 2026-01-27 14:01:52.269 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:52 np0005597378 nova_compute[238941]: 2026-01-27 14:01:52.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:52 np0005597378 nova_compute[238941]: 2026-01-27 14:01:52.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:01:52 np0005597378 nova_compute[238941]: 2026-01-27 14:01:52.457 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:52 np0005597378 nova_compute[238941]: 2026-01-27 14:01:52.458 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:52 np0005597378 nova_compute[238941]: 2026-01-27 14:01:52.458 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:52 np0005597378 nova_compute[238941]: 2026-01-27 14:01:52.458 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:01:52 np0005597378 nova_compute[238941]: 2026-01-27 14:01:52.458 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 1.5 MiB/s wr, 55 op/s
Jan 27 09:01:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:01:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3869481147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.076 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.220 238945 DEBUG nova.compute.manager [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-changed-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.221 238945 DEBUG nova.compute.manager [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Refreshing instance network info cache due to event network-changed-058b32ea-7973-4220-91fa-58dc678da20a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.222 238945 DEBUG oslo_concurrency.lockutils [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.222 238945 DEBUG oslo_concurrency.lockutils [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.222 238945 DEBUG nova.network.neutron [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Refreshing network info cache for port 058b32ea-7973-4220-91fa-58dc678da20a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:01:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.259 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.259 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.445 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.446 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3699MB free_disk=59.96680904366076GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.446 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.447 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.676 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 2b352ec7-34b6-47bb-af67-779b4d1f27cd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.676 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.676 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:01:53 np0005597378 nova_compute[238941]: 2026-01-27 14:01:53.714 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:01:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:01:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/546916635' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:01:54 np0005597378 nova_compute[238941]: 2026-01-27 14:01:54.334 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:01:54 np0005597378 nova_compute[238941]: 2026-01-27 14:01:54.339 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:01:54 np0005597378 nova_compute[238941]: 2026-01-27 14:01:54.354 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:01:54 np0005597378 nova_compute[238941]: 2026-01-27 14:01:54.383 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:01:54 np0005597378 nova_compute[238941]: 2026-01-27 14:01:54.383 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:01:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 109 op/s
Jan 27 09:01:54 np0005597378 nova_compute[238941]: 2026-01-27 14:01:54.738 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:55 np0005597378 nova_compute[238941]: 2026-01-27 14:01:55.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:01:55 np0005597378 nova_compute[238941]: 2026-01-27 14:01:55.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:01:55 np0005597378 nova_compute[238941]: 2026-01-27 14:01:55.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:01:55 np0005597378 nova_compute[238941]: 2026-01-27 14:01:55.623 238945 DEBUG nova.network.neutron [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updated VIF entry in instance network info cache for port 058b32ea-7973-4220-91fa-58dc678da20a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:01:55 np0005597378 nova_compute[238941]: 2026-01-27 14:01:55.624 238945 DEBUG nova.network.neutron [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:01:55 np0005597378 nova_compute[238941]: 2026-01-27 14:01:55.645 238945 DEBUG oslo_concurrency.lockutils [req-f47a5b95-9815-47b5-bf71-b7a09f0c6c0b req-028468a6-af15-4bac-ae17-08665adbc7a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:01:55 np0005597378 nova_compute[238941]: 2026-01-27 14:01:55.688 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:01:55 np0005597378 nova_compute[238941]: 2026-01-27 14:01:55.689 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:01:55 np0005597378 nova_compute[238941]: 2026-01-27 14:01:55.689 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:01:55 np0005597378 nova_compute[238941]: 2026-01-27 14:01:55.689 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:01:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Jan 27 09:01:57 np0005597378 nova_compute[238941]: 2026-01-27 14:01:57.208 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:01:57 np0005597378 nova_compute[238941]: 2026-01-27 14:01:57.447 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:01:57 np0005597378 nova_compute[238941]: 2026-01-27 14:01:57.464 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:01:57 np0005597378 nova_compute[238941]: 2026-01-27 14:01:57.464 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:01:57 np0005597378 nova_compute[238941]: 2026-01-27 14:01:57.465 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:01:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:01:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 27 09:01:59 np0005597378 nova_compute[238941]: 2026-01-27 14:01:59.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:01:59 np0005597378 nova_compute[238941]: 2026-01-27 14:01:59.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:01:59 np0005597378 nova_compute[238941]: 2026-01-27 14:01:59.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:01:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:01:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/597734705' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:01:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:01:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/597734705' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:01:59 np0005597378 nova_compute[238941]: 2026-01-27 14:01:59.741 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 66 op/s
Jan 27 09:02:01 np0005597378 nova_compute[238941]: 2026-01-27 14:02:01.671 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:02:01 np0005597378 nova_compute[238941]: 2026-01-27 14:02:01.671 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:02:02 np0005597378 nova_compute[238941]: 2026-01-27 14:02:02.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 88 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 53 op/s
Jan 27 09:02:02 np0005597378 podman[322935]: 2026-01-27 14:02:02.750385904 +0000 UTC m=+0.085503084 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:02:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 98 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 90 op/s
Jan 27 09:02:04 np0005597378 nova_compute[238941]: 2026-01-27 14:02:04.744 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:04 np0005597378 podman[322955]: 2026-01-27 14:02:04.752173937 +0000 UTC m=+0.092867777 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 27 09:02:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:04Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:b6:89 10.100.0.5
Jan 27 09:02:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:04Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:b6:89 10.100.0.5
Jan 27 09:02:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 109 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 296 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Jan 27 09:02:07 np0005597378 nova_compute[238941]: 2026-01-27 14:02:07.211 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 118 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 09:02:09 np0005597378 nova_compute[238941]: 2026-01-27 14:02:09.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:02:09 np0005597378 nova_compute[238941]: 2026-01-27 14:02:09.746 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 09:02:12 np0005597378 nova_compute[238941]: 2026-01-27 14:02:12.214 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 09:02:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 09:02:14 np0005597378 nova_compute[238941]: 2026-01-27 14:02:14.747 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 859 KiB/s wr, 29 op/s
Jan 27 09:02:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:02:17
Jan 27 09:02:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:02:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:02:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'backups', 'vms']
Jan 27 09:02:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:02:17 np0005597378 nova_compute[238941]: 2026-01-27 14:02:17.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:17 np0005597378 nova_compute[238941]: 2026-01-27 14:02:17.706 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:17.706 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:02:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:17.709 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:02:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:02:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:02:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:02:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:02:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:02:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:02:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:02:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:02:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:02:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:02:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:02:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:02:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:02:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:02:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:02:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:02:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 98 KiB/s wr, 20 op/s
Jan 27 09:02:19 np0005597378 nova_compute[238941]: 2026-01-27 14:02:19.750 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 27 09:02:22 np0005597378 nova_compute[238941]: 2026-01-27 14:02:22.219 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Jan 27 09:02:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Jan 27 09:02:24 np0005597378 nova_compute[238941]: 2026-01-27 14:02:24.751 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 27 09:02:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:26.711 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:02:27 np0005597378 nova_compute[238941]: 2026-01-27 14:02:27.222 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:27 np0005597378 nova_compute[238941]: 2026-01-27 14:02:27.443 238945 DEBUG oslo_concurrency.lockutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:02:27 np0005597378 nova_compute[238941]: 2026-01-27 14:02:27.443 238945 DEBUG oslo_concurrency.lockutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:02:27 np0005597378 nova_compute[238941]: 2026-01-27 14:02:27.444 238945 INFO nova.compute.manager [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Rebooting instance#033[00m
Jan 27 09:02:27 np0005597378 nova_compute[238941]: 2026-01-27 14:02:27.458 238945 DEBUG oslo_concurrency.lockutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:02:27 np0005597378 nova_compute[238941]: 2026-01-27 14:02:27.459 238945 DEBUG oslo_concurrency.lockutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:02:27 np0005597378 nova_compute[238941]: 2026-01-27 14:02:27.459 238945 DEBUG nova.network.neutron [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007688631697496555 of space, bias 1.0, pg target 0.23065895092489663 quantized to 32 (current 32)
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675515594353253 of space, bias 1.0, pg target 0.2002654678305976 quantized to 32 (current 32)
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0840538975285344e-06 of space, bias 4.0, pg target 0.0013008646770342413 quantized to 16 (current 16)
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:02:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:02:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Jan 27 09:02:29 np0005597378 nova_compute[238941]: 2026-01-27 14:02:29.623 238945 DEBUG nova.network.neutron [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:02:29 np0005597378 nova_compute[238941]: 2026-01-27 14:02:29.644 238945 DEBUG oslo_concurrency.lockutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:02:29 np0005597378 nova_compute[238941]: 2026-01-27 14:02:29.645 238945 DEBUG nova.compute.manager [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:02:29 np0005597378 nova_compute[238941]: 2026-01-27 14:02:29.753 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:30 np0005597378 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 09:02:30 np0005597378 NetworkManager[48904]: <info>  [1769522550.1470] device (tap058b32ea-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:02:30 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:30Z|00908|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 09:02:30 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:30Z|00909|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a down in Southbound
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.155 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:30 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:30Z|00910|binding|INFO|Removing iface tap058b32ea-79 ovn-installed in OVS
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:30.162 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:02:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:30.163 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis#033[00m
Jan 27 09:02:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:30.164 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:02:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:30.166 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[27759a2f-2dfc-43dc-b20a-a0fb09db6c13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:30.166 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace which is not needed anymore#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.175 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:30 np0005597378 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 27 09:02:30 np0005597378 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000063.scope: Consumed 15.398s CPU time.
Jan 27 09:02:30 np0005597378 systemd-machined[207425]: Machine qemu-116-instance-00000063 terminated.
Jan 27 09:02:30 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [NOTICE]   (322614) : haproxy version is 2.8.14-c23fe91
Jan 27 09:02:30 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [NOTICE]   (322614) : path to executable is /usr/sbin/haproxy
Jan 27 09:02:30 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [WARNING]  (322614) : Exiting Master process...
Jan 27 09:02:30 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [ALERT]    (322614) : Current worker (322619) exited with code 143 (Terminated)
Jan 27 09:02:30 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[322579]: [WARNING]  (322614) : All workers exited. Exiting... (0)
Jan 27 09:02:30 np0005597378 systemd[1]: libpod-4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2.scope: Deactivated successfully.
Jan 27 09:02:30 np0005597378 podman[323010]: 2026-01-27 14:02:30.389526044 +0000 UTC m=+0.127740075 container died 4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.391 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.397 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.405 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.406 238945 DEBUG nova.objects.instance [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.421 238945 DEBUG nova.virt.libvirt.vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:02:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.422 238945 DEBUG nova.network.os_vif_util [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.423 238945 DEBUG nova.network.os_vif_util [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.423 238945 DEBUG os_vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.425 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.426 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058b32ea-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.431 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.436 238945 INFO os_vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.450 238945 DEBUG nova.virt.libvirt.driver [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start _get_guest_xml network_info=[{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.456 238945 WARNING nova.virt.libvirt.driver [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.463 238945 DEBUG nova.virt.libvirt.host [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.465 238945 DEBUG nova.virt.libvirt.host [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.469 238945 DEBUG nova.virt.libvirt.host [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.471 238945 DEBUG nova.virt.libvirt.host [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.472 238945 DEBUG nova.virt.libvirt.driver [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.472 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.473 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.473 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.473 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.474 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.474 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.474 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.475 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.475 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.476 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.476 238945 DEBUG nova.virt.hardware [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.476 238945 DEBUG nova.objects.instance [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.481 238945 DEBUG nova.compute.manager [req-1ebc849e-8c5c-4d42-8cce-4d59d385d5a5 req-0a777250-c173-4919-b2f1-f324ed294819 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.482 238945 DEBUG oslo_concurrency.lockutils [req-1ebc849e-8c5c-4d42-8cce-4d59d385d5a5 req-0a777250-c173-4919-b2f1-f324ed294819 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.482 238945 DEBUG oslo_concurrency.lockutils [req-1ebc849e-8c5c-4d42-8cce-4d59d385d5a5 req-0a777250-c173-4919-b2f1-f324ed294819 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.483 238945 DEBUG oslo_concurrency.lockutils [req-1ebc849e-8c5c-4d42-8cce-4d59d385d5a5 req-0a777250-c173-4919-b2f1-f324ed294819 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.483 238945 DEBUG nova.compute.manager [req-1ebc849e-8c5c-4d42-8cce-4d59d385d5a5 req-0a777250-c173-4919-b2f1-f324ed294819 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.483 238945 WARNING nova.compute.manager [req-1ebc849e-8c5c-4d42-8cce-4d59d385d5a5 req-0a777250-c173-4919-b2f1-f324ed294819 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 27 09:02:30 np0005597378 nova_compute[238941]: 2026-01-27 14:02:30.514 238945 DEBUG oslo_concurrency.processutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:02:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2-userdata-shm.mount: Deactivated successfully.
Jan 27 09:02:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9260db7a4e48af278aa48fcde7df8149d485c16599cc482c919caace961c902c-merged.mount: Deactivated successfully.
Jan 27 09:02:30 np0005597378 podman[323010]: 2026-01-27 14:02:30.579887459 +0000 UTC m=+0.318101490 container cleanup 4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 09:02:30 np0005597378 systemd[1]: libpod-conmon-4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2.scope: Deactivated successfully.
Jan 27 09:02:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Jan 27 09:02:30 np0005597378 podman[323052]: 2026-01-27 14:02:30.989783868 +0000 UTC m=+0.376975133 container remove 4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:02:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.000 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7794fc-089e-4175-a5ee-79f4d4c1b430]: (4, ('Tue Jan 27 02:02:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2)\n4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2\nTue Jan 27 02:02:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2)\n4e633bc2365c7af302a3780a20800b5837bcde9c8ff3accc3673f2b02d7e01c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.002 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab8770f-c5bc-4104-a18b-f903ba86391a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.003 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.005 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:31 np0005597378 kernel: tapb5dcf6e0-70: left promiscuous mode
Jan 27 09:02:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.037 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d86489eb-9211-4b1f-bdea-35c2995849fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.035 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:02:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1354875236' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:02:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.052 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1deccffd-e7bf-4513-ba13-252dc6847f84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.053 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[68e4025c-cbf1-4b8f-a406-ea53340afa88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.070 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9608b43f-fc99-419f-b9bd-6e0c87f439dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514656, 'reachable_time': 19127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323085, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.074 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:02:31 np0005597378 systemd[1]: run-netns-ovnmeta\x2db5dcf6e0\x2d7c15\x2d42fb\x2d8f7e\x2d747d7fd9f151.mount: Deactivated successfully.
Jan 27 09:02:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:31.074 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[592ed6bb-73a3-448e-8b8b-a7d5b1738adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.077 238945 DEBUG oslo_concurrency.processutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.213 238945 DEBUG oslo_concurrency.processutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:02:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:02:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432898004' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.893 238945 DEBUG oslo_concurrency.processutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.895 238945 DEBUG nova.virt.libvirt.vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:02:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.896 238945 DEBUG nova.network.os_vif_util [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.897 238945 DEBUG nova.network.os_vif_util [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.898 238945 DEBUG nova.objects.instance [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.921 238945 DEBUG nova.virt.libvirt.driver [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  <uuid>2b352ec7-34b6-47bb-af67-779b4d1f27cd</uuid>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  <name>instance-00000063</name>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerActionsTestJSON-server-281114499</nova:name>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:02:30</nova:creationTime>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:        <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:        <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:        <nova:port uuid="058b32ea-7973-4220-91fa-58dc678da20a">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <entry name="serial">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <entry name="uuid">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:76:b6:89"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <target dev="tap058b32ea-79"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/console.log" append="off"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <input type="keyboard" bus="usb"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:02:31 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:02:31 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:02:31 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:02:31 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.924 238945 DEBUG nova.virt.libvirt.driver [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.925 238945 DEBUG nova.virt.libvirt.driver [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.926 238945 DEBUG nova.virt.libvirt.vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:02:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.927 238945 DEBUG nova.network.os_vif_util [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.929 238945 DEBUG nova.network.os_vif_util [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.930 238945 DEBUG os_vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.931 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.932 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.933 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.937 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.938 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058b32ea-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.940 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap058b32ea-79, col_values=(('external_ids', {'iface-id': '058b32ea-7973-4220-91fa-58dc678da20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:b6:89', 'vm-uuid': '2b352ec7-34b6-47bb-af67-779b4d1f27cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.943 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:31 np0005597378 NetworkManager[48904]: <info>  [1769522551.9437] manager: (tap058b32ea-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.946 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:31 np0005597378 nova_compute[238941]: 2026-01-27 14:02:31.951 238945 INFO os_vif [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')#033[00m
Jan 27 09:02:32 np0005597378 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 09:02:32 np0005597378 systemd-udevd[322989]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:32Z|00911|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 09:02:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:32Z|00912|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 09:02:32 np0005597378 NetworkManager[48904]: <info>  [1769522552.1659] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/381)
Jan 27 09:02:32 np0005597378 NetworkManager[48904]: <info>  [1769522552.1787] device (tap058b32ea-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:02:32 np0005597378 NetworkManager[48904]: <info>  [1769522552.1795] device (tap058b32ea-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:02:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:32Z|00913|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.180 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:32Z|00914|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a up in Southbound
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.183 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.185 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.187 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.202 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3cc6bab2-e8ab-497b-b304-83d68cf85378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.203 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5dcf6e0-71 in ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.206 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5dcf6e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.206 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6da10b83-53e1-4e82-b778-5a209a3623db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.207 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3e3686-3532-4994-869b-57e3fd03c754]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 systemd-machined[207425]: New machine qemu-117-instance-00000063.
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.223 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[26a4aae8-507e-4ac5-b5d2-8b1a4557a0d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 systemd[1]: Started Virtual Machine qemu-117-instance-00000063.
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.251 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fac779c2-1850-4e27-ae78-c2af049d2a0d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.286 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6507e4aa-6da8-47e5-840c-dcbf0d9321b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.295 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb297ae-8df9-4aca-b009-26cbf891442a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 NetworkManager[48904]: <info>  [1769522552.2965] manager: (tapb5dcf6e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/382)
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.331 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a974868b-f192-4b1b-a795-920e49831f54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.335 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7a375346-b2f9-47ca-8e5d-34f991210774]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 NetworkManager[48904]: <info>  [1769522552.3596] device (tapb5dcf6e0-70): carrier: link connected
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.369 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9a97c419-9d56-496c-9955-214d9a8ecd75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[664ee6e4-d0a1-4773-8c2f-7bd921e80cac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519198, 'reachable_time': 33998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323170, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.407 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c264febf-92dc-4c62-845a-e7d81b0ce29f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:8cf1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519198, 'tstamp': 519198}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323171, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.427 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ded54e-dfc7-4237-81cb-d0f4093d37fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519198, 'reachable_time': 33998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323172, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.463 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e549641a-e75c-4d88-87d0-4c0ee8591a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.544 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[00c6189a-f83e-4d1a-8d20-0549ea8a2d92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.545 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.545 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.546 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:02:32 np0005597378 kernel: tapb5dcf6e0-70: entered promiscuous mode
Jan 27 09:02:32 np0005597378 NetworkManager[48904]: <info>  [1769522552.5482] manager: (tapb5dcf6e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.549 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.551 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:02:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:32Z|00915|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.571 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.572 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.573 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce3f711-ce10-45bc-90bb-05e03974d313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.574 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:02:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:32.575 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'env', 'PROCESS_TAG=haproxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:02:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 681 B/s rd, 1022 B/s wr, 0 op/s
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.709 238945 DEBUG nova.compute.manager [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.709 238945 DEBUG oslo_concurrency.lockutils [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.710 238945 DEBUG oslo_concurrency.lockutils [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.710 238945 DEBUG oslo_concurrency.lockutils [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.710 238945 DEBUG nova.compute.manager [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.710 238945 WARNING nova.compute.manager [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.711 238945 DEBUG nova.compute.manager [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.711 238945 DEBUG oslo_concurrency.lockutils [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.711 238945 DEBUG oslo_concurrency.lockutils [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.711 238945 DEBUG oslo_concurrency.lockutils [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.711 238945 DEBUG nova.compute.manager [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.712 238945 WARNING nova.compute.manager [req-71c5169e-f948-44ad-9c52-939cd5e36ca1 req-b529c75a-3393-4dc4-a8f2-1c7852d327b5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.929 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 2b352ec7-34b6-47bb-af67-779b4d1f27cd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.930 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522552.929021, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.930 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.933 238945 DEBUG nova.compute.manager [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.936 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance rebooted successfully.#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.936 238945 DEBUG nova.compute.manager [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.973 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:02:32 np0005597378 nova_compute[238941]: 2026-01-27 14:02:32.977 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:02:33 np0005597378 podman[323246]: 2026-01-27 14:02:32.929702931 +0000 UTC m=+0.024683501 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:02:33 np0005597378 nova_compute[238941]: 2026-01-27 14:02:33.050 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Jan 27 09:02:33 np0005597378 nova_compute[238941]: 2026-01-27 14:02:33.051 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522552.929798, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:02:33 np0005597378 nova_compute[238941]: 2026-01-27 14:02:33.051 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Started (Lifecycle Event)#033[00m
Jan 27 09:02:33 np0005597378 nova_compute[238941]: 2026-01-27 14:02:33.093 238945 DEBUG oslo_concurrency.lockutils [None req-f6098760-a07e-4501-a23f-39cc63c01c04 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:02:33 np0005597378 nova_compute[238941]: 2026-01-27 14:02:33.135 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:02:33 np0005597378 nova_compute[238941]: 2026-01-27 14:02:33.138 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:02:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:33 np0005597378 podman[323246]: 2026-01-27 14:02:33.408531565 +0000 UTC m=+0.503512115 container create e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 27 09:02:33 np0005597378 systemd[1]: Started libpod-conmon-e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f.scope.
Jan 27 09:02:33 np0005597378 podman[323257]: 2026-01-27 14:02:33.531658549 +0000 UTC m=+0.078113199 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:02:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:02:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7797aa0fbea487b1536a99dc83b5b252bef43895afdb62771da050cb62e53c53/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:33 np0005597378 podman[323246]: 2026-01-27 14:02:33.640207029 +0000 UTC m=+0.735187629 container init e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:02:33 np0005597378 podman[323246]: 2026-01-27 14:02:33.646753241 +0000 UTC m=+0.741733791 container start e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:02:33 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [NOTICE]   (323282) : New worker (323284) forked
Jan 27 09:02:33 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [NOTICE]   (323282) : Loading success.
Jan 27 09:02:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 1022 B/s wr, 7 op/s
Jan 27 09:02:34 np0005597378 nova_compute[238941]: 2026-01-27 14:02:34.933 238945 DEBUG nova.compute.manager [req-ea485ddc-ed5c-4058-803b-1606c3739d20 req-c56010be-7c05-462d-95d9-af861b051715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:02:34 np0005597378 nova_compute[238941]: 2026-01-27 14:02:34.933 238945 DEBUG oslo_concurrency.lockutils [req-ea485ddc-ed5c-4058-803b-1606c3739d20 req-c56010be-7c05-462d-95d9-af861b051715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:02:34 np0005597378 nova_compute[238941]: 2026-01-27 14:02:34.933 238945 DEBUG oslo_concurrency.lockutils [req-ea485ddc-ed5c-4058-803b-1606c3739d20 req-c56010be-7c05-462d-95d9-af861b051715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:02:34 np0005597378 nova_compute[238941]: 2026-01-27 14:02:34.934 238945 DEBUG oslo_concurrency.lockutils [req-ea485ddc-ed5c-4058-803b-1606c3739d20 req-c56010be-7c05-462d-95d9-af861b051715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:02:34 np0005597378 nova_compute[238941]: 2026-01-27 14:02:34.934 238945 DEBUG nova.compute.manager [req-ea485ddc-ed5c-4058-803b-1606c3739d20 req-c56010be-7c05-462d-95d9-af861b051715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:02:34 np0005597378 nova_compute[238941]: 2026-01-27 14:02:34.934 238945 WARNING nova.compute.manager [req-ea485ddc-ed5c-4058-803b-1606c3739d20 req-c56010be-7c05-462d-95d9-af861b051715 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.#033[00m
Jan 27 09:02:35 np0005597378 podman[323293]: 2026-01-27 14:02:35.738080794 +0000 UTC m=+0.081019046 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 09:02:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 0 B/s wr, 17 op/s
Jan 27 09:02:36 np0005597378 nova_compute[238941]: 2026-01-27 14:02:36.691 238945 INFO nova.compute.manager [None req-a83ce509-9b02-4243-9350-af88d45c3cdb d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Get console output#033[00m
Jan 27 09:02:36 np0005597378 nova_compute[238941]: 2026-01-27 14:02:36.699 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:02:36 np0005597378 nova_compute[238941]: 2026-01-27 14:02:36.943 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:37 np0005597378 nova_compute[238941]: 2026-01-27 14:02:37.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 09:02:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 09:02:41 np0005597378 nova_compute[238941]: 2026-01-27 14:02:41.046 238945 DEBUG oslo_concurrency.lockutils [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:02:41 np0005597378 nova_compute[238941]: 2026-01-27 14:02:41.048 238945 DEBUG oslo_concurrency.lockutils [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:02:41 np0005597378 nova_compute[238941]: 2026-01-27 14:02:41.048 238945 DEBUG nova.compute.manager [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:02:41 np0005597378 nova_compute[238941]: 2026-01-27 14:02:41.053 238945 DEBUG nova.compute.manager [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 27 09:02:41 np0005597378 nova_compute[238941]: 2026-01-27 14:02:41.054 238945 DEBUG nova.objects.instance [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:02:41 np0005597378 nova_compute[238941]: 2026-01-27 14:02:41.396 238945 DEBUG nova.virt.libvirt.driver [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 09:02:41 np0005597378 nova_compute[238941]: 2026-01-27 14:02:41.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:42 np0005597378 nova_compute[238941]: 2026-01-27 14:02:42.229 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 09:02:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 09:02:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:45Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:b6:89 10.100.0.5
Jan 27 09:02:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:46.310 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:02:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:46.310 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:02:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:46.311 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:02:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 67 op/s
Jan 27 09:02:46 np0005597378 nova_compute[238941]: 2026-01-27 14:02:46.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:47 np0005597378 nova_compute[238941]: 2026-01-27 14:02:47.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:02:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:02:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:02:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:02:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:02:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:02:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 93 op/s
Jan 27 09:02:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 12 KiB/s wr, 45 op/s
Jan 27 09:02:51 np0005597378 nova_compute[238941]: 2026-01-27 14:02:51.437 238945 DEBUG nova.virt.libvirt.driver [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:02:51 np0005597378 nova_compute[238941]: 2026-01-27 14:02:51.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:02:51 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:02:52 np0005597378 nova_compute[238941]: 2026-01-27 14:02:52.232 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:52 np0005597378 nova_compute[238941]: 2026-01-27 14:02:52.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:02:52 np0005597378 nova_compute[238941]: 2026-01-27 14:02:52.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:02:52 np0005597378 podman[323465]: 2026-01-27 14:02:52.329299424 +0000 UTC m=+0.021352074 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:02:52 np0005597378 nova_compute[238941]: 2026-01-27 14:02:52.457 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:02:52 np0005597378 nova_compute[238941]: 2026-01-27 14:02:52.458 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:02:52 np0005597378 nova_compute[238941]: 2026-01-27 14:02:52.458 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:02:52 np0005597378 nova_compute[238941]: 2026-01-27 14:02:52.458 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:02:52 np0005597378 nova_compute[238941]: 2026-01-27 14:02:52.459 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:02:52 np0005597378 podman[323465]: 2026-01-27 14:02:52.469648012 +0000 UTC m=+0.161700652 container create 822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 09:02:52 np0005597378 systemd[1]: Started libpod-conmon-822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec.scope.
Jan 27 09:02:52 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:02:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 12 KiB/s wr, 45 op/s
Jan 27 09:02:52 np0005597378 podman[323465]: 2026-01-27 14:02:52.654429669 +0000 UTC m=+0.346482329 container init 822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 09:02:52 np0005597378 podman[323465]: 2026-01-27 14:02:52.661257659 +0000 UTC m=+0.353310289 container start 822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_moser, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:02:52 np0005597378 blissful_moser[323482]: 167 167
Jan 27 09:02:52 np0005597378 systemd[1]: libpod-822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec.scope: Deactivated successfully.
Jan 27 09:02:52 np0005597378 podman[323465]: 2026-01-27 14:02:52.713758252 +0000 UTC m=+0.405810902 container attach 822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:02:52 np0005597378 podman[323465]: 2026-01-27 14:02:52.71482783 +0000 UTC m=+0.406880500 container died 822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_moser, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:02:52 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3a26e77ade9b0b9b9a83753da1b8876f80301a06c115d4e027fa4324322cbc2d-merged.mount: Deactivated successfully.
Jan 27 09:02:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:02:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1489343226' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:02:53 np0005597378 nova_compute[238941]: 2026-01-27 14:02:53.071 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:02:53 np0005597378 nova_compute[238941]: 2026-01-27 14:02:53.241 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:02:53 np0005597378 nova_compute[238941]: 2026-01-27 14:02:53.242 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:02:53 np0005597378 podman[323465]: 2026-01-27 14:02:53.309769163 +0000 UTC m=+1.001821823 container remove 822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:02:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:53 np0005597378 systemd[1]: libpod-conmon-822bb195067d13287cb110adba12a328cd17c2c2615b8ab1703c41b806aab8ec.scope: Deactivated successfully.
Jan 27 09:02:53 np0005597378 nova_compute[238941]: 2026-01-27 14:02:53.430 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:02:53 np0005597378 nova_compute[238941]: 2026-01-27 14:02:53.432 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3669MB free_disk=59.942154655233026GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:02:53 np0005597378 nova_compute[238941]: 2026-01-27 14:02:53.432 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:02:53 np0005597378 nova_compute[238941]: 2026-01-27 14:02:53.433 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:02:53 np0005597378 podman[323529]: 2026-01-27 14:02:53.470792875 +0000 UTC m=+0.024572039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:02:53 np0005597378 podman[323529]: 2026-01-27 14:02:53.615187809 +0000 UTC m=+0.168966913 container create 76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 09:02:53 np0005597378 nova_compute[238941]: 2026-01-27 14:02:53.683 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 2b352ec7-34b6-47bb-af67-779b4d1f27cd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:02:53 np0005597378 nova_compute[238941]: 2026-01-27 14:02:53.683 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:02:53 np0005597378 nova_compute[238941]: 2026-01-27 14:02:53.683 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:02:53 np0005597378 nova_compute[238941]: 2026-01-27 14:02:53.721 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:02:53 np0005597378 systemd[1]: Started libpod-conmon-76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9.scope.
Jan 27 09:02:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:02:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a16e52982584442ac0ccfea1dd08a84ddfc2c101ab26e889ddad9578420fca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a16e52982584442ac0ccfea1dd08a84ddfc2c101ab26e889ddad9578420fca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a16e52982584442ac0ccfea1dd08a84ddfc2c101ab26e889ddad9578420fca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a16e52982584442ac0ccfea1dd08a84ddfc2c101ab26e889ddad9578420fca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a16e52982584442ac0ccfea1dd08a84ddfc2c101ab26e889ddad9578420fca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:53 np0005597378 podman[323529]: 2026-01-27 14:02:53.842570528 +0000 UTC m=+0.396349742 container init 76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ishizaka, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 09:02:53 np0005597378 podman[323529]: 2026-01-27 14:02:53.857922703 +0000 UTC m=+0.411701817 container start 76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ishizaka, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:02:53 np0005597378 podman[323529]: 2026-01-27 14:02:53.881958806 +0000 UTC m=+0.435738010 container attach 76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ishizaka, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:02:54 np0005597378 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 09:02:54 np0005597378 NetworkManager[48904]: <info>  [1769522574.1487] device (tap058b32ea-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:02:54 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:54Z|00916|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 09:02:54 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:54Z|00917|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a down in Southbound
Jan 27 09:02:54 np0005597378 ovn_controller[144812]: 2026-01-27T14:02:54Z|00918|binding|INFO|Removing iface tap058b32ea-79 ovn-installed in OVS
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.164 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.178 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:54 np0005597378 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 27 09:02:54 np0005597378 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000063.scope: Consumed 14.150s CPU time.
Jan 27 09:02:54 np0005597378 systemd-machined[207425]: Machine qemu-117-instance-00000063 terminated.
Jan 27 09:02:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:54.276 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:02:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:54.278 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis#033[00m
Jan 27 09:02:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:54.279 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:02:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:54.280 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[09c4df1d-710c-4c96-b0c6-dce3c92da2fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:54.281 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace which is not needed anymore#033[00m
Jan 27 09:02:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:02:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065746240' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.310 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.320 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:02:54 np0005597378 bold_ishizaka[323546]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:02:54 np0005597378 bold_ishizaka[323546]: --> All data devices are unavailable
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.358 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:02:54 np0005597378 systemd[1]: libpod-76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9.scope: Deactivated successfully.
Jan 27 09:02:54 np0005597378 conmon[323546]: conmon 76e8ab3a4118712634e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9.scope/container/memory.events
Jan 27 09:02:54 np0005597378 podman[323529]: 2026-01-27 14:02:54.389413465 +0000 UTC m=+0.943192599 container died 76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.451 238945 INFO nova.virt.libvirt.driver [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance shutdown successfully after 13 seconds.#033[00m
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.456 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.#033[00m
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.457 238945 DEBUG nova.objects.instance [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.470 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.492 238945 DEBUG nova.compute.manager [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:02:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-32a16e52982584442ac0ccfea1dd08a84ddfc2c101ab26e889ddad9578420fca-merged.mount: Deactivated successfully.
Jan 27 09:02:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 38 KiB/s wr, 48 op/s
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.673 238945 DEBUG oslo_concurrency.lockutils [None req-779c3f2d-9bba-4161-a5c5-9c995abb0cbf d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:02:54 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [NOTICE]   (323282) : haproxy version is 2.8.14-c23fe91
Jan 27 09:02:54 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [NOTICE]   (323282) : path to executable is /usr/sbin/haproxy
Jan 27 09:02:54 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [WARNING]  (323282) : Exiting Master process...
Jan 27 09:02:54 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [ALERT]    (323282) : Current worker (323284) exited with code 143 (Terminated)
Jan 27 09:02:54 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[323275]: [WARNING]  (323282) : All workers exited. Exiting... (0)
Jan 27 09:02:54 np0005597378 systemd[1]: libpod-e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f.scope: Deactivated successfully.
Jan 27 09:02:54 np0005597378 podman[323529]: 2026-01-27 14:02:54.950978288 +0000 UTC m=+1.504757412 container remove 76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ishizaka, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.952 238945 DEBUG nova.compute.manager [req-a0ad8a65-c766-4532-ae74-667b9d632616 req-b969b9e4-41d5-4634-a222-8df5e4f46a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.953 238945 DEBUG oslo_concurrency.lockutils [req-a0ad8a65-c766-4532-ae74-667b9d632616 req-b969b9e4-41d5-4634-a222-8df5e4f46a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.953 238945 DEBUG oslo_concurrency.lockutils [req-a0ad8a65-c766-4532-ae74-667b9d632616 req-b969b9e4-41d5-4634-a222-8df5e4f46a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.954 238945 DEBUG oslo_concurrency.lockutils [req-a0ad8a65-c766-4532-ae74-667b9d632616 req-b969b9e4-41d5-4634-a222-8df5e4f46a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.954 238945 DEBUG nova.compute.manager [req-a0ad8a65-c766-4532-ae74-667b9d632616 req-b969b9e4-41d5-4634-a222-8df5e4f46a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:02:54 np0005597378 nova_compute[238941]: 2026-01-27 14:02:54.955 238945 WARNING nova.compute.manager [req-a0ad8a65-c766-4532-ae74-667b9d632616 req-b969b9e4-41d5-4634-a222-8df5e4f46a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state stopped and task_state None.#033[00m
Jan 27 09:02:54 np0005597378 systemd[1]: libpod-conmon-76e8ab3a4118712634e4a55fc4dc362296366a760f549a8551d1f15463ab14d9.scope: Deactivated successfully.
Jan 27 09:02:54 np0005597378 podman[323612]: 2026-01-27 14:02:54.98789556 +0000 UTC m=+0.602016520 container died e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:02:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f-userdata-shm.mount: Deactivated successfully.
Jan 27 09:02:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7797aa0fbea487b1536a99dc83b5b252bef43895afdb62771da050cb62e53c53-merged.mount: Deactivated successfully.
Jan 27 09:02:56 np0005597378 podman[323612]: 2026-01-27 14:02:56.411866893 +0000 UTC m=+2.025987883 container cleanup e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 09:02:56 np0005597378 systemd[1]: libpod-conmon-e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f.scope: Deactivated successfully.
Jan 27 09:02:56 np0005597378 nova_compute[238941]: 2026-01-27 14:02:56.471 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:02:56 np0005597378 nova_compute[238941]: 2026-01-27 14:02:56.472 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:02:56 np0005597378 nova_compute[238941]: 2026-01-27 14:02:56.472 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:02:56 np0005597378 podman[323724]: 2026-01-27 14:02:56.429675552 +0000 UTC m=+0.456396374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:02:56 np0005597378 podman[323724]: 2026-01-27 14:02:56.605647398 +0000 UTC m=+0.632368190 container create 317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:02:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 532 KiB/s rd, 38 KiB/s wr, 48 op/s
Jan 27 09:02:56 np0005597378 systemd[1]: Started libpod-conmon-317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed.scope.
Jan 27 09:02:56 np0005597378 nova_compute[238941]: 2026-01-27 14:02:56.811 238945 DEBUG nova.objects.instance [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:02:56 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:02:56 np0005597378 nova_compute[238941]: 2026-01-27 14:02:56.852 238945 DEBUG oslo_concurrency.lockutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:02:56 np0005597378 nova_compute[238941]: 2026-01-27 14:02:56.852 238945 DEBUG oslo_concurrency.lockutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:02:56 np0005597378 nova_compute[238941]: 2026-01-27 14:02:56.852 238945 DEBUG nova.network.neutron [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:02:56 np0005597378 nova_compute[238941]: 2026-01-27 14:02:56.853 238945 DEBUG nova.objects.instance [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'info_cache' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:02:56 np0005597378 nova_compute[238941]: 2026-01-27 14:02:56.958 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:56 np0005597378 podman[323724]: 2026-01-27 14:02:56.973118028 +0000 UTC m=+0.999838890 container init 317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:02:56 np0005597378 podman[323724]: 2026-01-27 14:02:56.986872431 +0000 UTC m=+1.013593213 container start 317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 09:02:56 np0005597378 magical_faraday[323750]: 167 167
Jan 27 09:02:56 np0005597378 systemd[1]: libpod-317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed.scope: Deactivated successfully.
Jan 27 09:02:56 np0005597378 conmon[323750]: conmon 317a23fca91222849614 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed.scope/container/memory.events
Jan 27 09:02:57 np0005597378 nova_compute[238941]: 2026-01-27 14:02:57.049 238945 DEBUG nova.compute.manager [req-17a0ddca-1713-47e4-9ea6-24723a191462 req-6ea8fb42-8ef7-4ff2-a458-2b518207f91c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:02:57 np0005597378 nova_compute[238941]: 2026-01-27 14:02:57.050 238945 DEBUG oslo_concurrency.lockutils [req-17a0ddca-1713-47e4-9ea6-24723a191462 req-6ea8fb42-8ef7-4ff2-a458-2b518207f91c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:02:57 np0005597378 nova_compute[238941]: 2026-01-27 14:02:57.050 238945 DEBUG oslo_concurrency.lockutils [req-17a0ddca-1713-47e4-9ea6-24723a191462 req-6ea8fb42-8ef7-4ff2-a458-2b518207f91c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:02:57 np0005597378 nova_compute[238941]: 2026-01-27 14:02:57.050 238945 DEBUG oslo_concurrency.lockutils [req-17a0ddca-1713-47e4-9ea6-24723a191462 req-6ea8fb42-8ef7-4ff2-a458-2b518207f91c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:02:57 np0005597378 nova_compute[238941]: 2026-01-27 14:02:57.050 238945 DEBUG nova.compute.manager [req-17a0ddca-1713-47e4-9ea6-24723a191462 req-6ea8fb42-8ef7-4ff2-a458-2b518207f91c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:02:57 np0005597378 nova_compute[238941]: 2026-01-27 14:02:57.052 238945 WARNING nova.compute.manager [req-17a0ddca-1713-47e4-9ea6-24723a191462 req-6ea8fb42-8ef7-4ff2-a458-2b518207f91c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 27 09:02:57 np0005597378 podman[323724]: 2026-01-27 14:02:57.134070718 +0000 UTC m=+1.160791530 container attach 317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:02:57 np0005597378 podman[323724]: 2026-01-27 14:02:57.13452115 +0000 UTC m=+1.161241942 container died 317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 09:02:57 np0005597378 nova_compute[238941]: 2026-01-27 14:02:57.231 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:02:57 np0005597378 nova_compute[238941]: 2026-01-27 14:02:57.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:57 np0005597378 podman[323737]: 2026-01-27 14:02:57.492998193 +0000 UTC m=+1.048916693 container remove e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 09:02:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.500 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f44889b2-eb7e-486a-b138-4e1e81a849d7]: (4, ('Tue Jan 27 02:02:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f)\ne719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f\nTue Jan 27 02:02:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (e719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f)\ne719e0b56c0564e64bcc31fe52b25ef29dee41b978685c10470d876ea3db7e0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.502 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d412d10-75b0-4ec8-a9eb-7aebe009d842]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.502 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:02:57 np0005597378 nova_compute[238941]: 2026-01-27 14:02:57.505 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:57 np0005597378 kernel: tapb5dcf6e0-70: left promiscuous mode
Jan 27 09:02:57 np0005597378 nova_compute[238941]: 2026-01-27 14:02:57.538 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.543 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fdddd9-3ac8-4999-80f6-035012b95d9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.571 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea1b3e5-1c97-48f6-a48b-fc94901b04a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.573 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c358c3a2-57db-4b0e-8ec8-200ed9b8938b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:02:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 8026 writes, 36K keys, 8026 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 8026 writes, 8026 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1474 writes, 6659 keys, 1474 commit groups, 1.0 writes per commit group, ingest: 9.13 MB, 0.02 MB/s#012Interval WAL: 1474 writes, 1474 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     49.3      0.86              0.11        21    0.041       0      0       0.0       0.0#012  L6      1/0    9.33 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   3.7     86.0     70.9      2.19              0.40        20    0.109    103K    11K       0.0       0.0#012 Sum      1/0    9.33 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.7     61.8     64.8      3.04              0.51        41    0.074    103K    11K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   5.9     52.0     53.0      0.97              0.14        10    0.097     32K   3076       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0     86.0     70.9      2.19              0.40        20    0.109    103K    11K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     49.8      0.85              0.11        20    0.042       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.1 total, 600.0 interval#012Flush(GB): cumulative 0.041, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.19 GB write, 0.07 MB/s write, 0.18 GB read, 0.06 MB/s read, 3.0 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 22.95 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.00035 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1451,22.15 MB,7.28489%) FilterBlock(42,296.73 KB,0.0953223%) IndexBlock(42,521.73 KB,0.167601%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 27 09:02:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.596 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9df03b-b784-4903-ae19-ee0105251c42]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519190, 'reachable_time': 17190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323774, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:57 np0005597378 systemd[1]: run-netns-ovnmeta\x2db5dcf6e0\x2d7c15\x2d42fb\x2d8f7e\x2d747d7fd9f151.mount: Deactivated successfully.
Jan 27 09:02:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.600 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:02:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:02:57.601 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[27a63ced-83fe-4a08-8438-a79aa738ba42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:02:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8aaaeac167c51135efc9be3b024a7ac1c80aa2ca03c11a1441a2359542a54817-merged.mount: Deactivated successfully.
Jan 27 09:02:57 np0005597378 podman[323724]: 2026-01-27 14:02:57.922927399 +0000 UTC m=+1.949648211 container remove 317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 09:02:57 np0005597378 systemd[1]: libpod-conmon-317a23fca9122284961411d1afbc36b2d0f40ac4e5a64125d25ac6e6da756fed.scope: Deactivated successfully.
Jan 27 09:02:58 np0005597378 podman[323786]: 2026-01-27 14:02:58.071137883 +0000 UTC m=+0.024905846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:02:58 np0005597378 podman[323786]: 2026-01-27 14:02:58.210954866 +0000 UTC m=+0.164722809 container create c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.290 238945 DEBUG nova.network.neutron [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:02:58 np0005597378 systemd[1]: Started libpod-conmon-c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0.scope.
Jan 27 09:02:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.365 238945 DEBUG oslo_concurrency.lockutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.367 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.368 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.368 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:02:58 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:02:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a420bd1e6e336bc5402618610ccdce81746dceb820f7a3283f302fb48810564e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a420bd1e6e336bc5402618610ccdce81746dceb820f7a3283f302fb48810564e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a420bd1e6e336bc5402618610ccdce81746dceb820f7a3283f302fb48810564e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a420bd1e6e336bc5402618610ccdce81746dceb820f7a3283f302fb48810564e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:02:58 np0005597378 podman[323786]: 2026-01-27 14:02:58.401642329 +0000 UTC m=+0.355410322 container init c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:02:58 np0005597378 podman[323786]: 2026-01-27 14:02:58.409684132 +0000 UTC m=+0.363452075 container start c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamarr, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.410 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.411 238945 DEBUG nova.objects.instance [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:02:58 np0005597378 podman[323786]: 2026-01-27 14:02:58.441654154 +0000 UTC m=+0.395422127 container attach c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.470 238945 DEBUG nova.objects.instance [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.509 238945 DEBUG nova.virt.libvirt.vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:02:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.510 238945 DEBUG nova.network.os_vif_util [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.511 238945 DEBUG nova.network.os_vif_util [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.511 238945 DEBUG os_vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.512 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.513 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058b32ea-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.516 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.518 238945 INFO os_vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.524 238945 DEBUG nova.virt.libvirt.driver [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start _get_guest_xml network_info=[{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.527 238945 WARNING nova.virt.libvirt.driver [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.532 238945 DEBUG nova.virt.libvirt.host [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.534 238945 DEBUG nova.virt.libvirt.host [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.537 238945 DEBUG nova.virt.libvirt.host [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.538 238945 DEBUG nova.virt.libvirt.host [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.538 238945 DEBUG nova.virt.libvirt.driver [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.538 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.539 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.539 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.539 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.540 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.540 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.540 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.540 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.540 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.540 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.541 238945 DEBUG nova.virt.hardware [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.541 238945 DEBUG nova.objects.instance [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:02:58 np0005597378 nova_compute[238941]: 2026-01-27 14:02:58.572 238945 DEBUG oslo_concurrency.processutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:02:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 487 KiB/s rd, 38 KiB/s wr, 45 op/s
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]: {
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:    "0": [
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:        {
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "devices": [
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "/dev/loop3"
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            ],
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_name": "ceph_lv0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_size": "21470642176",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "name": "ceph_lv0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "tags": {
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.cluster_name": "ceph",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.crush_device_class": "",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.encrypted": "0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.objectstore": "bluestore",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.osd_id": "0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.type": "block",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.vdo": "0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.with_tpm": "0"
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            },
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "type": "block",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "vg_name": "ceph_vg0"
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:        }
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:    ],
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:    "1": [
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:        {
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "devices": [
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "/dev/loop4"
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            ],
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_name": "ceph_lv1",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_size": "21470642176",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "name": "ceph_lv1",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "tags": {
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.cluster_name": "ceph",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.crush_device_class": "",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.encrypted": "0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.objectstore": "bluestore",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.osd_id": "1",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.type": "block",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.vdo": "0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.with_tpm": "0"
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            },
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "type": "block",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "vg_name": "ceph_vg1"
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:        }
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:    ],
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:    "2": [
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:        {
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "devices": [
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "/dev/loop5"
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            ],
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_name": "ceph_lv2",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_size": "21470642176",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "name": "ceph_lv2",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "tags": {
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.cluster_name": "ceph",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.crush_device_class": "",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.encrypted": "0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.objectstore": "bluestore",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.osd_id": "2",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.type": "block",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.vdo": "0",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:                "ceph.with_tpm": "0"
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            },
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "type": "block",
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:            "vg_name": "ceph_vg2"
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:        }
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]:    ]
Jan 27 09:02:58 np0005597378 pedantic_lamarr[323803]: }
Jan 27 09:02:58 np0005597378 systemd[1]: libpod-c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0.scope: Deactivated successfully.
Jan 27 09:02:58 np0005597378 podman[323786]: 2026-01-27 14:02:58.719452002 +0000 UTC m=+0.673219975 container died c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamarr, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:02:58 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a420bd1e6e336bc5402618610ccdce81746dceb820f7a3283f302fb48810564e-merged.mount: Deactivated successfully.
Jan 27 09:02:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:02:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3098694995' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:02:59 np0005597378 nova_compute[238941]: 2026-01-27 14:02:59.156 238945 DEBUG oslo_concurrency.processutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:02:59 np0005597378 nova_compute[238941]: 2026-01-27 14:02:59.205 238945 DEBUG oslo_concurrency.processutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:02:59 np0005597378 podman[323786]: 2026-01-27 14:02:59.474659097 +0000 UTC m=+1.428427040 container remove c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 27 09:02:59 np0005597378 systemd[1]: libpod-conmon-c5cf93973dba8f396951e6c0428e074c77130e8376fd3cc0932beed48a86e6a0.scope: Deactivated successfully.
Jan 27 09:02:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:02:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096396240' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:02:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:02:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096396240' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:02:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:02:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24644441' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:02:59 np0005597378 nova_compute[238941]: 2026-01-27 14:02:59.853 238945 DEBUG oslo_concurrency.processutils [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:02:59 np0005597378 nova_compute[238941]: 2026-01-27 14:02:59.855 238945 DEBUG nova.virt.libvirt.vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:02:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:02:59 np0005597378 nova_compute[238941]: 2026-01-27 14:02:59.856 238945 DEBUG nova.network.os_vif_util [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:02:59 np0005597378 nova_compute[238941]: 2026-01-27 14:02:59.857 238945 DEBUG nova.network.os_vif_util [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:02:59 np0005597378 nova_compute[238941]: 2026-01-27 14:02:59.858 238945 DEBUG nova.objects.instance [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.012 238945 DEBUG nova.virt.libvirt.driver [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  <uuid>2b352ec7-34b6-47bb-af67-779b4d1f27cd</uuid>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  <name>instance-00000063</name>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerActionsTestJSON-server-281114499</nova:name>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:02:58</nova:creationTime>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:        <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:        <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:        <nova:port uuid="058b32ea-7973-4220-91fa-58dc678da20a">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <entry name="serial">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <entry name="uuid">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:76:b6:89"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <target dev="tap058b32ea-79"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/console.log" append="off"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <input type="keyboard" bus="usb"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:03:00 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:03:00 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:03:00 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:03:00 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.013 238945 DEBUG nova.virt.libvirt.driver [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.014 238945 DEBUG nova.virt.libvirt.driver [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.015 238945 DEBUG nova.virt.libvirt.vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:02:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.016 238945 DEBUG nova.network.os_vif_util [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.017 238945 DEBUG nova.network.os_vif_util [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.018 238945 DEBUG os_vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.019 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.020 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.021 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.025 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.026 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058b32ea-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.026 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap058b32ea-79, col_values=(('external_ids', {'iface-id': '058b32ea-7973-4220-91fa-58dc678da20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:b6:89', 'vm-uuid': '2b352ec7-34b6-47bb-af67-779b4d1f27cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.028 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:00 np0005597378 NetworkManager[48904]: <info>  [1769522580.0299] manager: (tap058b32ea-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.031 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:00 np0005597378 podman[323948]: 2026-01-27 14:03:00.03809532 +0000 UTC m=+0.082172325 container create 30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.041 238945 INFO os_vif [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')#033[00m
Jan 27 09:03:00 np0005597378 podman[323948]: 2026-01-27 14:02:59.97925486 +0000 UTC m=+0.023331915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:03:00 np0005597378 systemd[1]: Started libpod-conmon-30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0.scope.
Jan 27 09:03:00 np0005597378 NetworkManager[48904]: <info>  [1769522580.1261] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Jan 27 09:03:00 np0005597378 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:00 np0005597378 systemd-udevd[323775]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:03:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:00Z|00919|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 09:03:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:00Z|00920|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 09:03:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:03:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:00Z|00921|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 09:03:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:00Z|00922|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a up in Southbound
Jan 27 09:03:00 np0005597378 NetworkManager[48904]: <info>  [1769522580.1466] device (tap058b32ea-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:00 np0005597378 NetworkManager[48904]: <info>  [1769522580.1478] device (tap058b32ea-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.147 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.148 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.149 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.150 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.163 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d18b2cf1-6f4e-4a54-a65e-e6a0332ed0ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.164 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5dcf6e0-71 in ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.166 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5dcf6e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.166 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4c743c83-af16-4e1d-a05d-c90d95069287]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.167 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b94227a5-ef12-424e-9a15-0d290529c74d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 systemd-machined[207425]: New machine qemu-118-instance-00000063.
Jan 27 09:03:00 np0005597378 systemd[1]: Started Virtual Machine qemu-118-instance-00000063.
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.184 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9c3fa8ea-40f1-4466-8801-7b94c19c3ead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 podman[323948]: 2026-01-27 14:03:00.197011637 +0000 UTC m=+0.241088752 container init 30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:03:00 np0005597378 podman[323948]: 2026-01-27 14:03:00.205916481 +0000 UTC m=+0.249993486 container start 30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:03:00 np0005597378 gifted_turing[323971]: 167 167
Jan 27 09:03:00 np0005597378 systemd[1]: libpod-30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0.scope: Deactivated successfully.
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.216 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f89b8562-9b7f-4f5f-8e17-bed039ac2d89]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.247 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[65e74c17-0e0b-4733-91bd-01ac0fa03658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 NetworkManager[48904]: <info>  [1769522580.2526] manager: (tapb5dcf6e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/386)
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.251 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac1058a-8a22-4dfd-a9c3-01fc7e4aaa0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 podman[323948]: 2026-01-27 14:03:00.255570439 +0000 UTC m=+0.299647484 container attach 30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:03:00 np0005597378 podman[323948]: 2026-01-27 14:03:00.257211192 +0000 UTC m=+0.301288207 container died 30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.289 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e6194edb-7fd6-4cda-99a4-ba5707fc895f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.292 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac7ca57-7443-4659-a694-efd5c69397b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 NetworkManager[48904]: <info>  [1769522580.3244] device (tapb5dcf6e0-70): carrier: link connected
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.331 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6180dd90-bd14-4323-8a8f-1f9a440c0b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.349 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[42de23ba-6573-4124-bfda-ae908d405d3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521994, 'reachable_time': 32096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324027, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.370 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6d9d25-8add-4445-9ae3-fde868f001f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:8cf1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 521994, 'tstamp': 521994}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324028, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40dac7ed-5e9c-4a60-9b1c-3c919a27ee79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521994, 'reachable_time': 32096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324029, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3ecab06cebefef47ea934193e2c1c331fdcba84ef274a4d426c5d81faaa31d05-merged.mount: Deactivated successfully.
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.427 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1f94ca72-c60f-4c91-af77-9e0d4fd98a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.508 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccd5bd8-1ae7-4dd2-820e-60c131647526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.510 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.511 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.511 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:00 np0005597378 NetworkManager[48904]: <info>  [1769522580.5143] manager: (tapb5dcf6e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Jan 27 09:03:00 np0005597378 kernel: tapb5dcf6e0-70: entered promiscuous mode
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.518 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.520 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:00Z|00923|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.521 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.525 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[195e2608-8204-4271-bbeb-3297c8d59635]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.526 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:03:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:00.527 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'env', 'PROCESS_TAG=haproxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:03:00 np0005597378 podman[323948]: 2026-01-27 14:03:00.537228719 +0000 UTC m=+0.581305724 container remove 30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.538 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:00 np0005597378 systemd[1]: libpod-conmon-30360a80cefb1be92a5704ec72f7a47e83dca604653a0f88a561764cd4b7c2a0.scope: Deactivated successfully.
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.570 238945 DEBUG nova.compute.manager [req-e56aff82-c4e8-422a-9d0e-d49417c9a188 req-f2752b37-aa09-40e6-9947-2b2428d89e38 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.571 238945 DEBUG oslo_concurrency.lockutils [req-e56aff82-c4e8-422a-9d0e-d49417c9a188 req-f2752b37-aa09-40e6-9947-2b2428d89e38 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.571 238945 DEBUG oslo_concurrency.lockutils [req-e56aff82-c4e8-422a-9d0e-d49417c9a188 req-f2752b37-aa09-40e6-9947-2b2428d89e38 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.571 238945 DEBUG oslo_concurrency.lockutils [req-e56aff82-c4e8-422a-9d0e-d49417c9a188 req-f2752b37-aa09-40e6-9947-2b2428d89e38 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.572 238945 DEBUG nova.compute.manager [req-e56aff82-c4e8-422a-9d0e-d49417c9a188 req-f2752b37-aa09-40e6-9947-2b2428d89e38 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.572 238945 WARNING nova.compute.manager [req-e56aff82-c4e8-422a-9d0e-d49417c9a188 req-f2752b37-aa09-40e6-9947-2b2428d89e38 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 27 09:03:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 26 KiB/s wr, 9 op/s
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.658 238945 DEBUG nova.compute.manager [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.659 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 2b352ec7-34b6-47bb-af67-779b4d1f27cd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.660 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522580.6520667, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.660 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.666 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance rebooted successfully.#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.667 238945 DEBUG nova.compute.manager [None req-40e6e8c4-3137-4703-9b55-66630c6a8cfa d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.671 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.682 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.694 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.695 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.696 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.698 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.731 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.732 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522580.6574752, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.732 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Started (Lifecycle Event)#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.770 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:00 np0005597378 nova_compute[238941]: 2026-01-27 14:03:00.773 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:03:00 np0005597378 podman[324088]: 2026-01-27 14:03:00.715874345 +0000 UTC m=+0.031320486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:03:00 np0005597378 podman[324088]: 2026-01-27 14:03:00.810690563 +0000 UTC m=+0.126136694 container create bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 09:03:00 np0005597378 systemd[1]: Started libpod-conmon-bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1.scope.
Jan 27 09:03:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:03:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88b75ea5e2ee92353945d9168f821e5bba77d7de960381cbbbfd49226732cbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:03:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88b75ea5e2ee92353945d9168f821e5bba77d7de960381cbbbfd49226732cbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:03:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88b75ea5e2ee92353945d9168f821e5bba77d7de960381cbbbfd49226732cbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:03:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88b75ea5e2ee92353945d9168f821e5bba77d7de960381cbbbfd49226732cbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:03:01 np0005597378 podman[324127]: 2026-01-27 14:03:00.924041269 +0000 UTC m=+0.025866872 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:03:01 np0005597378 podman[324088]: 2026-01-27 14:03:01.042210521 +0000 UTC m=+0.357656682 container init bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:03:01 np0005597378 podman[324088]: 2026-01-27 14:03:01.05542835 +0000 UTC m=+0.370874471 container start bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:03:01 np0005597378 podman[324088]: 2026-01-27 14:03:01.109062453 +0000 UTC m=+0.424508614 container attach bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermi, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:03:01 np0005597378 podman[324127]: 2026-01-27 14:03:01.186115023 +0000 UTC m=+0.287940596 container create 977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:03:01 np0005597378 systemd[1]: Started libpod-conmon-977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0.scope.
Jan 27 09:03:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:03:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbac1f1ac7b8c531f2f4372ea529418e222e4513abb94c092828b6fe51e4850a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:03:01 np0005597378 nova_compute[238941]: 2026-01-27 14:03:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:03:01 np0005597378 nova_compute[238941]: 2026-01-27 14:03:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:03:01 np0005597378 nova_compute[238941]: 2026-01-27 14:03:01.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:03:01 np0005597378 podman[324127]: 2026-01-27 14:03:01.446677197 +0000 UTC m=+0.548502770 container init 977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:03:01 np0005597378 podman[324127]: 2026-01-27 14:03:01.455342205 +0000 UTC m=+0.557167778 container start 977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:03:01 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [NOTICE]   (324170) : New worker (324178) forked
Jan 27 09:03:01 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [NOTICE]   (324170) : Loading success.
Jan 27 09:03:01 np0005597378 lvm[324233]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:03:01 np0005597378 lvm[324233]: VG ceph_vg0 finished
Jan 27 09:03:01 np0005597378 lvm[324234]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:03:01 np0005597378 lvm[324234]: VG ceph_vg1 finished
Jan 27 09:03:01 np0005597378 lvm[324236]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:03:01 np0005597378 lvm[324236]: VG ceph_vg2 finished
Jan 27 09:03:01 np0005597378 bold_fermi[324128]: {}
Jan 27 09:03:01 np0005597378 systemd[1]: libpod-bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1.scope: Deactivated successfully.
Jan 27 09:03:01 np0005597378 systemd[1]: libpod-bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1.scope: Consumed 1.319s CPU time.
Jan 27 09:03:01 np0005597378 podman[324088]: 2026-01-27 14:03:01.940596359 +0000 UTC m=+1.256042480 container died bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 09:03:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c88b75ea5e2ee92353945d9168f821e5bba77d7de960381cbbbfd49226732cbc-merged.mount: Deactivated successfully.
Jan 27 09:03:02 np0005597378 nova_compute[238941]: 2026-01-27 14:03:02.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:02 np0005597378 nova_compute[238941]: 2026-01-27 14:03:02.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:03:02 np0005597378 podman[324088]: 2026-01-27 14:03:02.551724517 +0000 UTC m=+1.867170638 container remove bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:03:02 np0005597378 systemd[1]: libpod-conmon-bda565d53353da8eb0cdf02473035c40884f0c9643199e3f12fcf0245c960ed1.scope: Deactivated successfully.
Jan 27 09:03:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:03:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.9 KiB/s rd, 26 KiB/s wr, 3 op/s
Jan 27 09:03:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:03:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:03:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:03:02 np0005597378 nova_compute[238941]: 2026-01-27 14:03:02.875 238945 DEBUG nova.compute.manager [req-be0d787c-7feb-4907-91ed-5a4f7008c28c req-9f97060e-e71e-45d7-a223-927d2036ffdb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:03:02 np0005597378 nova_compute[238941]: 2026-01-27 14:03:02.876 238945 DEBUG oslo_concurrency.lockutils [req-be0d787c-7feb-4907-91ed-5a4f7008c28c req-9f97060e-e71e-45d7-a223-927d2036ffdb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:02 np0005597378 nova_compute[238941]: 2026-01-27 14:03:02.876 238945 DEBUG oslo_concurrency.lockutils [req-be0d787c-7feb-4907-91ed-5a4f7008c28c req-9f97060e-e71e-45d7-a223-927d2036ffdb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:02 np0005597378 nova_compute[238941]: 2026-01-27 14:03:02.876 238945 DEBUG oslo_concurrency.lockutils [req-be0d787c-7feb-4907-91ed-5a4f7008c28c req-9f97060e-e71e-45d7-a223-927d2036ffdb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:02 np0005597378 nova_compute[238941]: 2026-01-27 14:03:02.876 238945 DEBUG nova.compute.manager [req-be0d787c-7feb-4907-91ed-5a4f7008c28c req-9f97060e-e71e-45d7-a223-927d2036ffdb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:03:02 np0005597378 nova_compute[238941]: 2026-01-27 14:03:02.877 238945 WARNING nova.compute.manager [req-be0d787c-7feb-4907-91ed-5a4f7008c28c req-9f97060e-e71e-45d7-a223-927d2036ffdb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.#033[00m
Jan 27 09:03:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:03 np0005597378 nova_compute[238941]: 2026-01-27 14:03:03.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:03:03 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:03:03 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:03:03 np0005597378 podman[324276]: 2026-01-27 14:03:03.749299206 +0000 UTC m=+0.066045380 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:03:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 26 KiB/s wr, 64 op/s
Jan 27 09:03:05 np0005597378 nova_compute[238941]: 2026-01-27 14:03:05.030 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:06 np0005597378 podman[324295]: 2026-01-27 14:03:06.256514436 +0000 UTC m=+0.093543046 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:03:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 71 op/s
Jan 27 09:03:06 np0005597378 nova_compute[238941]: 2026-01-27 14:03:06.968 238945 INFO nova.compute.manager [None req-81677a3d-6d98-408c-bc92-10e93adb2298 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Pausing#033[00m
Jan 27 09:03:06 np0005597378 nova_compute[238941]: 2026-01-27 14:03:06.969 238945 DEBUG nova.objects.instance [None req-81677a3d-6d98-408c-bc92-10e93adb2298 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:07 np0005597378 nova_compute[238941]: 2026-01-27 14:03:07.031 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522587.0313025, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:03:07 np0005597378 nova_compute[238941]: 2026-01-27 14:03:07.032 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:03:07 np0005597378 nova_compute[238941]: 2026-01-27 14:03:07.034 238945 DEBUG nova.compute.manager [None req-81677a3d-6d98-408c-bc92-10e93adb2298 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:07 np0005597378 nova_compute[238941]: 2026-01-27 14:03:07.085 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:07 np0005597378 nova_compute[238941]: 2026-01-27 14:03:07.088 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:03:07 np0005597378 nova_compute[238941]: 2026-01-27 14:03:07.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:08 np0005597378 nova_compute[238941]: 2026-01-27 14:03:08.154 238945 INFO nova.compute.manager [None req-14ec4a56-3570-459a-8fbc-66d7521a94d7 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Unpausing#033[00m
Jan 27 09:03:08 np0005597378 nova_compute[238941]: 2026-01-27 14:03:08.156 238945 DEBUG nova.objects.instance [None req-14ec4a56-3570-459a-8fbc-66d7521a94d7 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:08 np0005597378 nova_compute[238941]: 2026-01-27 14:03:08.202 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522588.2021832, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:03:08 np0005597378 nova_compute[238941]: 2026-01-27 14:03:08.203 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:03:08 np0005597378 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 09:03:08 np0005597378 nova_compute[238941]: 2026-01-27 14:03:08.208 238945 DEBUG nova.virt.libvirt.guest [None req-14ec4a56-3570-459a-8fbc-66d7521a94d7 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 27 09:03:08 np0005597378 nova_compute[238941]: 2026-01-27 14:03:08.208 238945 DEBUG nova.compute.manager [None req-14ec4a56-3570-459a-8fbc-66d7521a94d7 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:08 np0005597378 nova_compute[238941]: 2026-01-27 14:03:08.236 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:08 np0005597378 nova_compute[238941]: 2026-01-27 14:03:08.240 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:03:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 27 09:03:10 np0005597378 nova_compute[238941]: 2026-01-27 14:03:10.033 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:10 np0005597378 nova_compute[238941]: 2026-01-27 14:03:10.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:03:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 71 op/s
Jan 27 09:03:12 np0005597378 nova_compute[238941]: 2026-01-27 14:03:12.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 70 op/s
Jan 27 09:03:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:13 np0005597378 nova_compute[238941]: 2026-01-27 14:03:13.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:03:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 73 op/s
Jan 27 09:03:15 np0005597378 nova_compute[238941]: 2026-01-27 14:03:15.036 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:15 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:15Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:b6:89 10.100.0.5
Jan 27 09:03:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 442 KiB/s rd, 18 op/s
Jan 27 09:03:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:03:17
Jan 27 09:03:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:03:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:03:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['.mgr', 'backups', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'images', 'cephfs.cephfs.meta']
Jan 27 09:03:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:03:17 np0005597378 nova_compute[238941]: 2026-01-27 14:03:17.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:03:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:03:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:03:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:03:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:03:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:03:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:03:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:03:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:03:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:03:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:03:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:03:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:03:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:03:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:03:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:03:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 473 KiB/s rd, 10 KiB/s wr, 41 op/s
Jan 27 09:03:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:18.701 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:03:18 np0005597378 nova_compute[238941]: 2026-01-27 14:03:18.701 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:18.703 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:03:20 np0005597378 nova_compute[238941]: 2026-01-27 14:03:20.037 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:20.313 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:72:ec 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fd4494bd-8350-41f2-ab4f-0218f7fca0e8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4494bd-8350-41f2-ab4f-0218f7fca0e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74069726c924807876fe3ea269c8310', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7c4a7f8-7b28-4afa-b992-c5eb7c2d40d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f01e198b-d36a-4c2a-9a41-905ff3e8b9af) old=Port_Binding(mac=['fa:16:3e:db:72:ec 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fd4494bd-8350-41f2-ab4f-0218f7fca0e8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4494bd-8350-41f2-ab4f-0218f7fca0e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74069726c924807876fe3ea269c8310', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:03:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:20.315 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f01e198b-d36a-4c2a-9a41-905ff3e8b9af in datapath fd4494bd-8350-41f2-ab4f-0218f7fca0e8 updated#033[00m
Jan 27 09:03:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:20.316 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4494bd-8350-41f2-ab4f-0218f7fca0e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:03:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:20.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c55bf91f-5ac4-4755-bfef-d7cf19875732]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 10 KiB/s wr, 44 op/s
Jan 27 09:03:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:20.706 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:22 np0005597378 nova_compute[238941]: 2026-01-27 14:03:22.246 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 10 KiB/s wr, 44 op/s
Jan 27 09:03:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:24 np0005597378 nova_compute[238941]: 2026-01-27 14:03:24.569 238945 DEBUG oslo_concurrency.lockutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:24 np0005597378 nova_compute[238941]: 2026-01-27 14:03:24.570 238945 DEBUG oslo_concurrency.lockutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:24 np0005597378 nova_compute[238941]: 2026-01-27 14:03:24.570 238945 INFO nova.compute.manager [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Rebooting instance#033[00m
Jan 27 09:03:24 np0005597378 nova_compute[238941]: 2026-01-27 14:03:24.590 238945 DEBUG oslo_concurrency.lockutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:03:24 np0005597378 nova_compute[238941]: 2026-01-27 14:03:24.591 238945 DEBUG oslo_concurrency.lockutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:03:24 np0005597378 nova_compute[238941]: 2026-01-27 14:03:24.591 238945 DEBUG nova.network.neutron [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:03:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 33 KiB/s wr, 46 op/s
Jan 27 09:03:25 np0005597378 nova_compute[238941]: 2026-01-27 14:03:25.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:26 np0005597378 nova_compute[238941]: 2026-01-27 14:03:26.299 238945 DEBUG nova.network.neutron [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:03:26 np0005597378 nova_compute[238941]: 2026-01-27 14:03:26.356 238945 DEBUG oslo_concurrency.lockutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:03:26 np0005597378 nova_compute[238941]: 2026-01-27 14:03:26.358 238945 DEBUG nova.compute.manager [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 525 KiB/s rd, 33 KiB/s wr, 44 op/s
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.249 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:27 np0005597378 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 09:03:27 np0005597378 NetworkManager[48904]: <info>  [1769522607.3038] device (tap058b32ea-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:03:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:27Z|00924|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 09:03:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:27Z|00925|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a down in Southbound
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.316 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:27Z|00926|binding|INFO|Removing iface tap058b32ea-79 ovn-installed in OVS
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:27.328 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:03:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:27.329 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis#033[00m
Jan 27 09:03:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:27.330 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:03:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:27.332 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9248a557-ba22-4351-916a-91403282d820]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:27.332 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace which is not needed anymore#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.337 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:27 np0005597378 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 27 09:03:27 np0005597378 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000063.scope: Consumed 14.656s CPU time.
Jan 27 09:03:27 np0005597378 systemd-machined[207425]: Machine qemu-118-instance-00000063 terminated.
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.559 238945 DEBUG nova.compute.manager [req-5a6699f3-7ecb-4b8f-8cf5-6ac4cfb2841b req-56eacb6c-140b-4799-93a2-448db778d786 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.560 238945 DEBUG oslo_concurrency.lockutils [req-5a6699f3-7ecb-4b8f-8cf5-6ac4cfb2841b req-56eacb6c-140b-4799-93a2-448db778d786 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.560 238945 DEBUG oslo_concurrency.lockutils [req-5a6699f3-7ecb-4b8f-8cf5-6ac4cfb2841b req-56eacb6c-140b-4799-93a2-448db778d786 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.560 238945 DEBUG oslo_concurrency.lockutils [req-5a6699f3-7ecb-4b8f-8cf5-6ac4cfb2841b req-56eacb6c-140b-4799-93a2-448db778d786 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.560 238945 DEBUG nova.compute.manager [req-5a6699f3-7ecb-4b8f-8cf5-6ac4cfb2841b req-56eacb6c-140b-4799-93a2-448db778d786 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.560 238945 WARNING nova.compute.manager [req-5a6699f3-7ecb-4b8f-8cf5-6ac4cfb2841b req-56eacb6c-140b-4799-93a2-448db778d786 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.564 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.564 238945 DEBUG nova.objects.instance [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007700713931982457 of space, bias 1.0, pg target 0.2310214179594737 quantized to 32 (current 32)
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675515594353253 of space, bias 1.0, pg target 0.2002654678305976 quantized to 32 (current 32)
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0840538975285344e-06 of space, bias 4.0, pg target 0.0013008646770342413 quantized to 16 (current 16)
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:03:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.663 238945 DEBUG nova.virt.libvirt.vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:03:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.663 238945 DEBUG nova.network.os_vif_util [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.664 238945 DEBUG nova.network.os_vif_util [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.664 238945 DEBUG os_vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.666 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.666 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058b32ea-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.667 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.671 238945 INFO os_vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.680 238945 DEBUG nova.virt.libvirt.driver [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start _get_guest_xml network_info=[{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.684 238945 WARNING nova.virt.libvirt.driver [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.692 238945 DEBUG nova.virt.libvirt.host [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.693 238945 DEBUG nova.virt.libvirt.host [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.696 238945 DEBUG nova.virt.libvirt.host [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.696 238945 DEBUG nova.virt.libvirt.host [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.697 238945 DEBUG nova.virt.libvirt.driver [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.697 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.697 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.698 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.698 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.698 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.698 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.699 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.699 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.699 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.699 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.699 238945 DEBUG nova.virt.hardware [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.700 238945 DEBUG nova.objects.instance [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:27 np0005597378 nova_compute[238941]: 2026-01-27 14:03:27.736 238945 DEBUG oslo_concurrency.processutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:03:27 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [NOTICE]   (324170) : haproxy version is 2.8.14-c23fe91
Jan 27 09:03:27 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [NOTICE]   (324170) : path to executable is /usr/sbin/haproxy
Jan 27 09:03:27 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [WARNING]  (324170) : Exiting Master process...
Jan 27 09:03:27 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [WARNING]  (324170) : Exiting Master process...
Jan 27 09:03:27 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [ALERT]    (324170) : Current worker (324178) exited with code 143 (Terminated)
Jan 27 09:03:27 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324155]: [WARNING]  (324170) : All workers exited. Exiting... (0)
Jan 27 09:03:27 np0005597378 systemd[1]: libpod-977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0.scope: Deactivated successfully.
Jan 27 09:03:27 np0005597378 podman[324347]: 2026-01-27 14:03:27.8149603 +0000 UTC m=+0.369282170 container died 977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:03:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:03:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1234153909' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:03:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0-userdata-shm.mount: Deactivated successfully.
Jan 27 09:03:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fbac1f1ac7b8c531f2f4372ea529418e222e4513abb94c092828b6fe51e4850a-merged.mount: Deactivated successfully.
Jan 27 09:03:28 np0005597378 nova_compute[238941]: 2026-01-27 14:03:28.376 238945 DEBUG oslo_concurrency.processutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:03:28 np0005597378 nova_compute[238941]: 2026-01-27 14:03:28.416 238945 DEBUG oslo_concurrency.processutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:03:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 34 KiB/s wr, 47 op/s
Jan 27 09:03:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Jan 27 09:03:28 np0005597378 podman[324347]: 2026-01-27 14:03:28.774744663 +0000 UTC m=+1.329066533 container cleanup 977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 09:03:28 np0005597378 systemd[1]: libpod-conmon-977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0.scope: Deactivated successfully.
Jan 27 09:03:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:03:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925966041' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.260 238945 DEBUG oslo_concurrency.processutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.843s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.263 238945 DEBUG nova.virt.libvirt.vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:03:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.263 238945 DEBUG nova.network.os_vif_util [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.264 238945 DEBUG nova.network.os_vif_util [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.266 238945 DEBUG nova.objects.instance [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.294 238945 DEBUG nova.virt.libvirt.driver [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  <uuid>2b352ec7-34b6-47bb-af67-779b4d1f27cd</uuid>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  <name>instance-00000063</name>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerActionsTestJSON-server-281114499</nova:name>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:03:27</nova:creationTime>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:        <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:        <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:        <nova:port uuid="058b32ea-7973-4220-91fa-58dc678da20a">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <entry name="serial">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <entry name="uuid">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:76:b6:89"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <target dev="tap058b32ea-79"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/console.log" append="off"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <input type="keyboard" bus="usb"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:03:29 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:03:29 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:03:29 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:03:29 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.297 238945 DEBUG nova.virt.libvirt.driver [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.298 238945 DEBUG nova.virt.libvirt.driver [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.300 238945 DEBUG nova.virt.libvirt.vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:03:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.301 238945 DEBUG nova.network.os_vif_util [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.302 238945 DEBUG nova.network.os_vif_util [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.303 238945 DEBUG os_vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.304 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.306 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.307 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.312 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.313 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058b32ea-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.314 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap058b32ea-79, col_values=(('external_ids', {'iface-id': '058b32ea-7973-4220-91fa-58dc678da20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:b6:89', 'vm-uuid': '2b352ec7-34b6-47bb-af67-779b4d1f27cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.316 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:29 np0005597378 NetworkManager[48904]: <info>  [1769522609.3186] manager: (tap058b32ea-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.324 238945 INFO os_vif [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')#033[00m
Jan 27 09:03:29 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Jan 27 09:03:29 np0005597378 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 09:03:29 np0005597378 NetworkManager[48904]: <info>  [1769522609.5156] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Jan 27 09:03:29 np0005597378 systemd-udevd[324328]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:03:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:29Z|00927|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 09:03:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:29Z|00928|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.519 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:29 np0005597378 NetworkManager[48904]: <info>  [1769522609.5284] device (tap058b32ea-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:03:29 np0005597378 NetworkManager[48904]: <info>  [1769522609.5297] device (tap058b32ea-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:03:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:29Z|00929|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.536 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.539 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:29 np0005597378 systemd-machined[207425]: New machine qemu-119-instance-00000063.
Jan 27 09:03:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:29Z|00930|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a up in Southbound
Jan 27 09:03:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:29.564 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:03:29 np0005597378 systemd[1]: Started Virtual Machine qemu-119-instance-00000063.
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.666 238945 DEBUG nova.compute.manager [req-291d0f2d-1b3d-46ad-b075-a72ced8100f5 req-537e7983-3aa6-4b72-92e9-670ced97f822 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.667 238945 DEBUG oslo_concurrency.lockutils [req-291d0f2d-1b3d-46ad-b075-a72ced8100f5 req-537e7983-3aa6-4b72-92e9-670ced97f822 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.668 238945 DEBUG oslo_concurrency.lockutils [req-291d0f2d-1b3d-46ad-b075-a72ced8100f5 req-537e7983-3aa6-4b72-92e9-670ced97f822 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.668 238945 DEBUG oslo_concurrency.lockutils [req-291d0f2d-1b3d-46ad-b075-a72ced8100f5 req-537e7983-3aa6-4b72-92e9-670ced97f822 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.669 238945 DEBUG nova.compute.manager [req-291d0f2d-1b3d-46ad-b075-a72ced8100f5 req-537e7983-3aa6-4b72-92e9-670ced97f822 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:03:29 np0005597378 nova_compute[238941]: 2026-01-27 14:03:29.669 238945 WARNING nova.compute.manager [req-291d0f2d-1b3d-46ad-b075-a72ced8100f5 req-537e7983-3aa6-4b72-92e9-670ced97f822 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 27 09:03:30 np0005597378 podman[324449]: 2026-01-27 14:03:30.121692126 +0000 UTC m=+1.324465172 container remove 977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.131 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a71644ea-edb0-43e4-ae0c-e7480cde3d1c]: (4, ('Tue Jan 27 02:03:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0)\n977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0\nTue Jan 27 02:03:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0)\n977b1cdeaa21d124f71ca0d04000421bdc8f21e212923dbabffe0a3279aa0ba0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.134 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2268e6a2-6e75-4427-a205-970decdb3a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.135 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:30 np0005597378 nova_compute[238941]: 2026-01-27 14:03:30.137 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:30 np0005597378 kernel: tapb5dcf6e0-70: left promiscuous mode
Jan 27 09:03:30 np0005597378 nova_compute[238941]: 2026-01-27 14:03:30.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.144 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f479f17-0945-4eca-a290-20c4b4a950bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 nova_compute[238941]: 2026-01-27 14:03:30.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.169 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[076c2e5e-5434-4a79-ab2e-a14c018d50bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.171 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[49902622-c4f3-4dde-a524-3e6f6b8ea401]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.190 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff9bb2d-f26f-473d-8b9d-ebccbcf6d6cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521986, 'reachable_time': 18332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324488, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 systemd[1]: run-netns-ovnmeta\x2db5dcf6e0\x2d7c15\x2d42fb\x2d8f7e\x2d747d7fd9f151.mount: Deactivated successfully.
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.195 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.195 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[0986e638-9779-4e70-adaf-cddae422b97e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.196 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.197 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.208 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f67116b-a2f6-4e5e-b65a-c86d04a8c075]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.208 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5dcf6e0-71 in ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.210 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5dcf6e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.210 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b67b5134-50bd-4ebe-9325-31f103fe96d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.211 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[27b95f3f-f489-4d4f-93c0-f71d70632874]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.224 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8a585d-d86f-4858-a31b-a1a306f71eca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.247 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b3018297-fff7-4adc-b20b-64f2aa1f105c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.276 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7b821e03-2f91-4354-bd95-fbd92ad1800a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.282 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[015ab68c-2bf5-41bb-8eb2-52d376004f7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 NetworkManager[48904]: <info>  [1769522610.2832] manager: (tapb5dcf6e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/390)
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.320 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e52eef16-14d9-40c1-8f8b-1f98da31f51a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.324 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[80dc11a6-6d08-41cf-9f37-07ad8d97e6ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 NetworkManager[48904]: <info>  [1769522610.3546] device (tapb5dcf6e0-70): carrier: link connected
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.365 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4039753d-e163-4df7-b125-95f9da7ae1f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.385 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a67b5da-ac7b-42fd-b13d-9a339f0218b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524997, 'reachable_time': 30450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324514, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.400 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0aca37a0-68f4-44fc-8427-6d8f6006b8c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:8cf1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524997, 'tstamp': 524997}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324515, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.417 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fc836d5d-a2db-42b1-a583-498c50b444a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524997, 'reachable_time': 30450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324516, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.446 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db597220-2597-4c3f-b637-8f4e7c5c68b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.508 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[846c7e80-bd79-42bf-b074-fadd7bcfbe09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.510 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.510 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.511 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:30 np0005597378 nova_compute[238941]: 2026-01-27 14:03:30.513 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:30 np0005597378 NetworkManager[48904]: <info>  [1769522610.5137] manager: (tapb5dcf6e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Jan 27 09:03:30 np0005597378 kernel: tapb5dcf6e0-70: entered promiscuous mode
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.516 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:30 np0005597378 nova_compute[238941]: 2026-01-27 14:03:30.517 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:30 np0005597378 nova_compute[238941]: 2026-01-27 14:03:30.518 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:30 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:30Z|00931|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.519 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.530 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dce1d59b-cccb-409d-a736-6095eaed3de2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:30 np0005597378 nova_compute[238941]: 2026-01-27 14:03:30.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.531 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:03:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:30.533 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'env', 'PROCESS_TAG=haproxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:03:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 28 KiB/s wr, 14 op/s
Jan 27 09:03:30 np0005597378 podman[324584]: 2026-01-27 14:03:30.845013602 +0000 UTC m=+0.021095407 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:03:30 np0005597378 nova_compute[238941]: 2026-01-27 14:03:30.955 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 2b352ec7-34b6-47bb-af67-779b4d1f27cd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:03:30 np0005597378 nova_compute[238941]: 2026-01-27 14:03:30.955 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522610.9544122, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:03:30 np0005597378 nova_compute[238941]: 2026-01-27 14:03:30.956 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:03:30 np0005597378 nova_compute[238941]: 2026-01-27 14:03:30.958 238945 DEBUG nova.compute.manager [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:03:30 np0005597378 nova_compute[238941]: 2026-01-27 14:03:30.961 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance rebooted successfully.#033[00m
Jan 27 09:03:30 np0005597378 nova_compute[238941]: 2026-01-27 14:03:30.961 238945 DEBUG nova.compute.manager [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.005 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.008 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.076 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.077 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522610.9563923, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.077 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Started (Lifecycle Event)#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.086 238945 DEBUG oslo_concurrency.lockutils [None req-14785e51-7093-4d66-94f6-e0fb320faee3 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.097 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.100 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.822 238945 DEBUG nova.compute.manager [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.823 238945 DEBUG oslo_concurrency.lockutils [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.823 238945 DEBUG oslo_concurrency.lockutils [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.823 238945 DEBUG oslo_concurrency.lockutils [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.823 238945 DEBUG nova.compute.manager [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.824 238945 WARNING nova.compute.manager [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.824 238945 DEBUG nova.compute.manager [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.824 238945 DEBUG oslo_concurrency.lockutils [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.824 238945 DEBUG oslo_concurrency.lockutils [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.825 238945 DEBUG oslo_concurrency.lockutils [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.825 238945 DEBUG nova.compute.manager [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:03:31 np0005597378 nova_compute[238941]: 2026-01-27 14:03:31.825 238945 WARNING nova.compute.manager [req-dddcce9e-969d-427c-81b8-17a0069fa662 req-8353b8a7-b819-4ef0-847d-5e02db0e81f9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.#033[00m
Jan 27 09:03:32 np0005597378 nova_compute[238941]: 2026-01-27 14:03:32.251 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 28 KiB/s wr, 14 op/s
Jan 27 09:03:33 np0005597378 podman[324584]: 2026-01-27 14:03:33.31426439 +0000 UTC m=+2.490346215 container create d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:03:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:33 np0005597378 systemd[1]: Started libpod-conmon-d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58.scope.
Jan 27 09:03:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:03:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eae9f3409a5182768442d5b02286bc3bddc323cd6f6bdee8d95f13a71b74b1a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:03:34 np0005597378 nova_compute[238941]: 2026-01-27 14:03:34.318 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 88 op/s
Jan 27 09:03:34 np0005597378 podman[324584]: 2026-01-27 14:03:34.942905344 +0000 UTC m=+4.118987159 container init d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 09:03:34 np0005597378 podman[324584]: 2026-01-27 14:03:34.950705219 +0000 UTC m=+4.126787014 container start d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 09:03:34 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [NOTICE]   (324619) : New worker (324621) forked
Jan 27 09:03:34 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [NOTICE]   (324619) : Loading success.
Jan 27 09:03:35 np0005597378 podman[324605]: 2026-01-27 14:03:35.46430617 +0000 UTC m=+1.512407593 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:03:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 KiB/s wr, 100 op/s
Jan 27 09:03:36 np0005597378 podman[324637]: 2026-01-27 14:03:36.756642544 +0000 UTC m=+0.099898162 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 27 09:03:37 np0005597378 nova_compute[238941]: 2026-01-27 14:03:37.254 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Jan 27 09:03:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Jan 27 09:03:38 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Jan 27 09:03:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.7 KiB/s wr, 116 op/s
Jan 27 09:03:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:39 np0005597378 nova_compute[238941]: 2026-01-27 14:03:39.321 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Jan 27 09:03:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Jan 27 09:03:40 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Jan 27 09:03:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.0 KiB/s wr, 134 op/s
Jan 27 09:03:40 np0005597378 nova_compute[238941]: 2026-01-27 14:03:40.782 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:03:40 np0005597378 nova_compute[238941]: 2026-01-27 14:03:40.783 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:03:40 np0005597378 nova_compute[238941]: 2026-01-27 14:03:40.800 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 09:03:40 np0005597378 nova_compute[238941]: 2026-01-27 14:03:40.868 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:03:40 np0005597378 nova_compute[238941]: 2026-01-27 14:03:40.869 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:03:40 np0005597378 nova_compute[238941]: 2026-01-27 14:03:40.877 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 09:03:40 np0005597378 nova_compute[238941]: 2026-01-27 14:03:40.878 238945 INFO nova.compute.claims [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Claim successful on node compute-0.ctlplane.example.com
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.019 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:03:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:03:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2808314240' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.590 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.596 238945 DEBUG nova.compute.provider_tree [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.613 238945 DEBUG nova.scheduler.client.report [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.635 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.636 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.685 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.685 238945 DEBUG nova.network.neutron [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.702 238945 INFO nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.719 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.811 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.812 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.813 238945 INFO nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Creating image(s)
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.842 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.870 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.893 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.896 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.968 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.969 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.970 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.970 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:03:41 np0005597378 nova_compute[238941]: 2026-01-27 14:03:41.995 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:03:42 np0005597378 nova_compute[238941]: 2026-01-27 14:03:42.000 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 58152b7a-295a-46c3-a454-95a08d597abd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:03:42 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 27 09:03:42 np0005597378 nova_compute[238941]: 2026-01-27 14:03:42.256 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:03:42 np0005597378 nova_compute[238941]: 2026-01-27 14:03:42.302 238945 DEBUG nova.policy [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '17c3813514ef4adaa908639e29e969ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '87e722e7579646e9924cc852bfd49285', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 09:03:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 123 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 503 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.090 238945 DEBUG nova.network.neutron [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Successfully created port: a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.125 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 58152b7a-295a-46c3-a454-95a08d597abd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.196 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] resizing rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.342 238945 DEBUG nova.objects.instance [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'migration_context' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:03:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:43Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:b6:89 10.100.0.5
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.362 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.362 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Ensure instance console log exists: /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.363 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.363 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.363 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:03:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.862 238945 DEBUG nova.network.neutron [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Successfully updated port: a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.884 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.885 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquired lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.885 238945 DEBUG nova.network.neutron [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.964 238945 DEBUG nova.compute.manager [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-changed-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.964 238945 DEBUG nova.compute.manager [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Refreshing instance network info cache due to event network-changed-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:03:43 np0005597378 nova_compute[238941]: 2026-01-27 14:03:43.965 238945 DEBUG oslo_concurrency.lockutils [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:03:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.047 238945 DEBUG nova.network.neutron [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 09:03:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Jan 27 09:03:44 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:03:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 156 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 480 KiB/s rd, 2.5 MiB/s wr, 134 op/s
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.746 238945 DEBUG nova.network.neutron [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Updating instance_info_cache with network_info: [{"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.769 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Releasing lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.770 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance network_info: |[{"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.770 238945 DEBUG oslo_concurrency.lockutils [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.771 238945 DEBUG nova.network.neutron [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Refreshing network info cache for port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.773 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Start _get_guest_xml network_info=[{"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.777 238945 WARNING nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.782 238945 DEBUG nova.virt.libvirt.host [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.783 238945 DEBUG nova.virt.libvirt.host [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.786 238945 DEBUG nova.virt.libvirt.host [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.786 238945 DEBUG nova.virt.libvirt.host [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.787 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.787 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.787 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.788 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.788 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.788 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.789 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.789 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.789 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.790 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.790 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.790 238945 DEBUG nova.virt.hardware [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:03:44 np0005597378 nova_compute[238941]: 2026-01-27 14:03:44.793 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:03:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:03:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3578570652' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:03:45 np0005597378 nova_compute[238941]: 2026-01-27 14:03:45.451 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:03:45 np0005597378 nova_compute[238941]: 2026-01-27 14:03:45.476 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:03:45 np0005597378 nova_compute[238941]: 2026-01-27 14:03:45.480 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:03:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:03:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657495127' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.139 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.142 238945 DEBUG nova.virt.libvirt.vif [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:03:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-702662030',display_name='tempest-TestServerAdvancedOps-server-702662030',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-702662030',id=100,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='87e722e7579646e9924cc852bfd49285',ramdisk_id='',reservation_id='r-44i0u8y1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1351312611',owner_user_name='tempest-TestServerAdvancedOps-1351312611-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:03:41Z,user_data=None,user_id='17c3813514ef4adaa908639e29e969ba',uuid=58152b7a-295a-46c3-a454-95a08d597abd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.142 238945 DEBUG nova.network.os_vif_util [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converting VIF {"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.144 238945 DEBUG nova.network.os_vif_util [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.145 238945 DEBUG nova.objects.instance [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.172 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  <uuid>58152b7a-295a-46c3-a454-95a08d597abd</uuid>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  <name>instance-00000064</name>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestServerAdvancedOps-server-702662030</nova:name>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:03:44</nova:creationTime>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:        <nova:user uuid="17c3813514ef4adaa908639e29e969ba">tempest-TestServerAdvancedOps-1351312611-project-member</nova:user>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:        <nova:project uuid="87e722e7579646e9924cc852bfd49285">tempest-TestServerAdvancedOps-1351312611</nova:project>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:        <nova:port uuid="a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <entry name="serial">58152b7a-295a-46c3-a454-95a08d597abd</entry>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <entry name="uuid">58152b7a-295a-46c3-a454-95a08d597abd</entry>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/58152b7a-295a-46c3-a454-95a08d597abd_disk">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/58152b7a-295a-46c3-a454-95a08d597abd_disk.config">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:38:aa:8d"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <target dev="tapa3a5102d-0f"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/console.log" append="off"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:03:46 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:03:46 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:03:46 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:03:46 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.173 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Preparing to wait for external event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.174 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.175 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.175 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.176 238945 DEBUG nova.virt.libvirt.vif [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:03:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-702662030',display_name='tempest-TestServerAdvancedOps-server-702662030',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-702662030',id=100,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='87e722e7579646e9924cc852bfd49285',ramdisk_id='',reservation_id='r-44i0u8y1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1351312611',owner_user_name='tempest-TestServerAdvancedOps-1351312611-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:03:41Z,user_data=None,user_id='17c3813514ef4adaa908639e29e969ba',uuid=58152b7a-295a-46c3-a454-95a08d597abd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.176 238945 DEBUG nova.network.os_vif_util [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converting VIF {"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.177 238945 DEBUG nova.network.os_vif_util [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.177 238945 DEBUG os_vif [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.178 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.179 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.179 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.183 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.183 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3a5102d-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.184 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3a5102d-0f, col_values=(('external_ids', {'iface-id': 'a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:aa:8d', 'vm-uuid': '58152b7a-295a-46c3-a454-95a08d597abd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.185 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:46 np0005597378 NetworkManager[48904]: <info>  [1769522626.1863] manager: (tapa3a5102d-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.187 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.193 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.194 238945 INFO os_vif [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f')#033[00m
Jan 27 09:03:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:46.311 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:46.313 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.360 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.360 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.361 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] No VIF found with MAC fa:16:3e:38:aa:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.362 238945 INFO nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Using config drive#033[00m
Jan 27 09:03:46 np0005597378 nova_compute[238941]: 2026-01-27 14:03:46.390 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:03:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 165 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 637 KiB/s rd, 2.6 MiB/s wr, 136 op/s
Jan 27 09:03:47 np0005597378 nova_compute[238941]: 2026-01-27 14:03:47.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:47 np0005597378 nova_compute[238941]: 2026-01-27 14:03:47.443 238945 INFO nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Creating config drive at /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/disk.config#033[00m
Jan 27 09:03:47 np0005597378 nova_compute[238941]: 2026-01-27 14:03:47.449 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2xulfik6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:03:47 np0005597378 nova_compute[238941]: 2026-01-27 14:03:47.597 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2xulfik6" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:03:47 np0005597378 nova_compute[238941]: 2026-01-27 14:03:47.622 238945 DEBUG nova.storage.rbd_utils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] rbd image 58152b7a-295a-46c3-a454-95a08d597abd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:03:47 np0005597378 nova_compute[238941]: 2026-01-27 14:03:47.627 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/disk.config 58152b7a-295a-46c3-a454-95a08d597abd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:03:47 np0005597378 nova_compute[238941]: 2026-01-27 14:03:47.802 238945 DEBUG oslo_concurrency.processutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/disk.config 58152b7a-295a-46c3-a454-95a08d597abd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:03:47 np0005597378 nova_compute[238941]: 2026-01-27 14:03:47.803 238945 INFO nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Deleting local config drive /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd/disk.config because it was imported into RBD.#033[00m
Jan 27 09:03:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:03:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:03:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:03:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:03:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:03:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:03:47 np0005597378 kernel: tapa3a5102d-0f: entered promiscuous mode
Jan 27 09:03:47 np0005597378 NetworkManager[48904]: <info>  [1769522627.8618] manager: (tapa3a5102d-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Jan 27 09:03:47 np0005597378 nova_compute[238941]: 2026-01-27 14:03:47.862 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:47 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:47Z|00932|binding|INFO|Claiming lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for this chassis.
Jan 27 09:03:47 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:47Z|00933|binding|INFO|a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6: Claiming fa:16:3e:38:aa:8d 10.100.0.14
Jan 27 09:03:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:47.877 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:aa:8d 10.100.0.14'], port_security=['fa:16:3e:38:aa:8d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58152b7a-295a-46c3-a454-95a08d597abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d012e71b-f9f0-438e-bd5a-bbabeb4df913', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87e722e7579646e9924cc852bfd49285', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc54617d-885a-48f9-9a6b-1fd6e982d1f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40feac3b-bbdb-4afc-af91-70e591134b04, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:03:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:47.878 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 in datapath d012e71b-f9f0-438e-bd5a-bbabeb4df913 bound to our chassis#033[00m
Jan 27 09:03:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:47.879 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d012e71b-f9f0-438e-bd5a-bbabeb4df913 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 27 09:03:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:47.881 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fca965d2-12aa-4f05-825f-eec1f51b0a04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:47 np0005597378 systemd-udevd[324987]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:03:47 np0005597378 nova_compute[238941]: 2026-01-27 14:03:47.900 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:47 np0005597378 systemd-machined[207425]: New machine qemu-120-instance-00000064.
Jan 27 09:03:47 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:47Z|00934|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 ovn-installed in OVS
Jan 27 09:03:47 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:47Z|00935|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 up in Southbound
Jan 27 09:03:47 np0005597378 nova_compute[238941]: 2026-01-27 14:03:47.907 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:47 np0005597378 NetworkManager[48904]: <info>  [1769522627.9090] device (tapa3a5102d-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:03:47 np0005597378 NetworkManager[48904]: <info>  [1769522627.9101] device (tapa3a5102d-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:03:47 np0005597378 systemd[1]: Started Virtual Machine qemu-120-instance-00000064.
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.046 238945 DEBUG nova.network.neutron [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Updated VIF entry in instance network info cache for port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.047 238945 DEBUG nova.network.neutron [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Updating instance_info_cache with network_info: [{"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.068 238945 DEBUG oslo_concurrency.lockutils [req-cf87cc9a-91ba-4599-9dad-e9470150b125 req-4d5ea474-e45c-43e0-bfe2-a37443d2bdf0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.241 238945 DEBUG nova.compute.manager [req-201265e6-44dd-4d4b-ad04-40fcaca81d2e req-e026b852-636d-4842-860b-51536808b262 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.241 238945 DEBUG oslo_concurrency.lockutils [req-201265e6-44dd-4d4b-ad04-40fcaca81d2e req-e026b852-636d-4842-860b-51536808b262 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.242 238945 DEBUG oslo_concurrency.lockutils [req-201265e6-44dd-4d4b-ad04-40fcaca81d2e req-e026b852-636d-4842-860b-51536808b262 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.242 238945 DEBUG oslo_concurrency.lockutils [req-201265e6-44dd-4d4b-ad04-40fcaca81d2e req-e026b852-636d-4842-860b-51536808b262 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.242 238945 DEBUG nova.compute.manager [req-201265e6-44dd-4d4b-ad04-40fcaca81d2e req-e026b852-636d-4842-860b-51536808b262 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Processing event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:03:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:48.283 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:d4:93 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74069726c924807876fe3ea269c8310', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d952093-4df1-49df-9dd5-e09c8ee2177b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cf8fc2d-25f9-4c7e-ac01-221510bbe9f8) old=Port_Binding(mac=['fa:16:3e:95:d4:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74069726c924807876fe3ea269c8310', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:03:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:48.285 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cf8fc2d-25f9-4c7e-ac01-221510bbe9f8 in datapath 29f09a2e-6e23-4fac-a0c4-d62d8979c94e updated#033[00m
Jan 27 09:03:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:48.286 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 29f09a2e-6e23-4fac-a0c4-d62d8979c94e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:03:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:48.287 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1cd17a-a0ac-4eef-b75d-985756155204]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.631 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522628.631039, 58152b7a-295a-46c3-a454-95a08d597abd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.632 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Started (Lifecycle Event)#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.633 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.637 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.641 238945 INFO nova.virt.libvirt.driver [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance spawned successfully.#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.641 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.665 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.668 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:03:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 878 KiB/s rd, 2.7 MiB/s wr, 195 op/s
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.686 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.687 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.688 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.688 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.688 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.689 238945 DEBUG nova.virt.libvirt.driver [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.694 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.695 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522628.6311798, 58152b7a-295a-46c3-a454-95a08d597abd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.695 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:03:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Jan 27 09:03:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Jan 27 09:03:48 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.724 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.729 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522628.6359546, 58152b7a-295a-46c3-a454-95a08d597abd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.729 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.753 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.758 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.772 238945 INFO nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Took 6.96 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.773 238945 DEBUG nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.782 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.853 238945 INFO nova.compute.manager [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Took 8.01 seconds to build instance.#033[00m
Jan 27 09:03:48 np0005597378 nova_compute[238941]: 2026-01-27 14:03:48.872 238945 DEBUG oslo_concurrency.lockutils [None req-4a0514ac-5355-45cf-83ba-8e39f8fef529 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:50 np0005597378 nova_compute[238941]: 2026-01-27 14:03:50.372 238945 DEBUG nova.compute.manager [req-909a08b6-9686-46d7-bf5a-9963b3187b57 req-034ed5a2-c2b9-4b80-91b4-f08be5dc94a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:03:50 np0005597378 nova_compute[238941]: 2026-01-27 14:03:50.372 238945 DEBUG oslo_concurrency.lockutils [req-909a08b6-9686-46d7-bf5a-9963b3187b57 req-034ed5a2-c2b9-4b80-91b4-f08be5dc94a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:50 np0005597378 nova_compute[238941]: 2026-01-27 14:03:50.373 238945 DEBUG oslo_concurrency.lockutils [req-909a08b6-9686-46d7-bf5a-9963b3187b57 req-034ed5a2-c2b9-4b80-91b4-f08be5dc94a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:50 np0005597378 nova_compute[238941]: 2026-01-27 14:03:50.373 238945 DEBUG oslo_concurrency.lockutils [req-909a08b6-9686-46d7-bf5a-9963b3187b57 req-034ed5a2-c2b9-4b80-91b4-f08be5dc94a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:50 np0005597378 nova_compute[238941]: 2026-01-27 14:03:50.373 238945 DEBUG nova.compute.manager [req-909a08b6-9686-46d7-bf5a-9963b3187b57 req-034ed5a2-c2b9-4b80-91b4-f08be5dc94a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:03:50 np0005597378 nova_compute[238941]: 2026-01-27 14:03:50.374 238945 WARNING nova.compute.manager [req-909a08b6-9686-46d7-bf5a-9963b3187b57 req-034ed5a2-c2b9-4b80-91b4-f08be5dc94a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:03:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.7 MiB/s wr, 223 op/s
Jan 27 09:03:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Jan 27 09:03:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Jan 27 09:03:50 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Jan 27 09:03:51 np0005597378 nova_compute[238941]: 2026-01-27 14:03:51.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:51 np0005597378 nova_compute[238941]: 2026-01-27 14:03:51.732 238945 DEBUG nova.objects.instance [None req-153478ab-90dc-4ed9-895c-eed04365b4e0 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:51 np0005597378 nova_compute[238941]: 2026-01-27 14:03:51.763 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522631.7637513, 58152b7a-295a-46c3-a454-95a08d597abd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:03:51 np0005597378 nova_compute[238941]: 2026-01-27 14:03:51.764 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:03:51 np0005597378 nova_compute[238941]: 2026-01-27 14:03:51.829 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:51 np0005597378 nova_compute[238941]: 2026-01-27 14:03:51.834 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:03:51 np0005597378 nova_compute[238941]: 2026-01-27 14:03:51.855 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 27 09:03:52 np0005597378 nova_compute[238941]: 2026-01-27 14:03:52.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:52 np0005597378 kernel: tapa3a5102d-0f (unregistering): left promiscuous mode
Jan 27 09:03:52 np0005597378 NetworkManager[48904]: <info>  [1769522632.5145] device (tapa3a5102d-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:03:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:52Z|00936|binding|INFO|Releasing lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 from this chassis (sb_readonly=0)
Jan 27 09:03:52 np0005597378 nova_compute[238941]: 2026-01-27 14:03:52.524 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:52Z|00937|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 down in Southbound
Jan 27 09:03:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:52Z|00938|binding|INFO|Removing iface tapa3a5102d-0f ovn-installed in OVS
Jan 27 09:03:52 np0005597378 nova_compute[238941]: 2026-01-27 14:03:52.526 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:52.532 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:aa:8d 10.100.0.14'], port_security=['fa:16:3e:38:aa:8d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58152b7a-295a-46c3-a454-95a08d597abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d012e71b-f9f0-438e-bd5a-bbabeb4df913', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87e722e7579646e9924cc852bfd49285', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc54617d-885a-48f9-9a6b-1fd6e982d1f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40feac3b-bbdb-4afc-af91-70e591134b04, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:03:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:52.533 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 in datapath d012e71b-f9f0-438e-bd5a-bbabeb4df913 unbound from our chassis#033[00m
Jan 27 09:03:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:52.533 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d012e71b-f9f0-438e-bd5a-bbabeb4df913 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 27 09:03:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:52.534 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9c32d01c-3f89-46ab-93d8-113e39ee2a0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:52 np0005597378 nova_compute[238941]: 2026-01-27 14:03:52.540 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:52 np0005597378 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000064.scope: Deactivated successfully.
Jan 27 09:03:52 np0005597378 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000064.scope: Consumed 3.996s CPU time.
Jan 27 09:03:52 np0005597378 systemd-machined[207425]: Machine qemu-120-instance-00000064 terminated.
Jan 27 09:03:52 np0005597378 nova_compute[238941]: 2026-01-27 14:03:52.613 238945 DEBUG nova.compute.manager [None req-153478ab-90dc-4ed9-895c-eed04365b4e0 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 802 KiB/s wr, 121 op/s
Jan 27 09:03:52 np0005597378 nova_compute[238941]: 2026-01-27 14:03:52.718 238945 DEBUG nova.compute.manager [req-8118659d-6781-4f95-be56-546bac05a714 req-059e50d1-e64b-41dd-b66e-e5e45d6dd572 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:03:52 np0005597378 nova_compute[238941]: 2026-01-27 14:03:52.719 238945 DEBUG oslo_concurrency.lockutils [req-8118659d-6781-4f95-be56-546bac05a714 req-059e50d1-e64b-41dd-b66e-e5e45d6dd572 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:52 np0005597378 nova_compute[238941]: 2026-01-27 14:03:52.719 238945 DEBUG oslo_concurrency.lockutils [req-8118659d-6781-4f95-be56-546bac05a714 req-059e50d1-e64b-41dd-b66e-e5e45d6dd572 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:52 np0005597378 nova_compute[238941]: 2026-01-27 14:03:52.719 238945 DEBUG oslo_concurrency.lockutils [req-8118659d-6781-4f95-be56-546bac05a714 req-059e50d1-e64b-41dd-b66e-e5e45d6dd572 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:52 np0005597378 nova_compute[238941]: 2026-01-27 14:03:52.719 238945 DEBUG nova.compute.manager [req-8118659d-6781-4f95-be56-546bac05a714 req-059e50d1-e64b-41dd-b66e-e5e45d6dd572 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:03:52 np0005597378 nova_compute[238941]: 2026-01-27 14:03:52.720 238945 WARNING nova.compute.manager [req-8118659d-6781-4f95-be56-546bac05a714 req-059e50d1-e64b-41dd-b66e-e5e45d6dd572 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state suspended and task_state None.#033[00m
Jan 27 09:03:53 np0005597378 nova_compute[238941]: 2026-01-27 14:03:53.653 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "99cd19f8-17dc-4d81-980f-4cf584356571" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:53 np0005597378 nova_compute[238941]: 2026-01-27 14:03:53.654 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "99cd19f8-17dc-4d81-980f-4cf584356571" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:53 np0005597378 nova_compute[238941]: 2026-01-27 14:03:53.677 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:03:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Jan 27 09:03:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Jan 27 09:03:53 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Jan 27 09:03:53 np0005597378 nova_compute[238941]: 2026-01-27 14:03:53.787 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:53 np0005597378 nova_compute[238941]: 2026-01-27 14:03:53.787 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:53 np0005597378 nova_compute[238941]: 2026-01-27 14:03:53.795 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:03:53 np0005597378 nova_compute[238941]: 2026-01-27 14:03:53.795 238945 INFO nova.compute.claims [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:03:53 np0005597378 nova_compute[238941]: 2026-01-27 14:03:53.919 238945 DEBUG nova.scheduler.client.report [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 09:03:53 np0005597378 nova_compute[238941]: 2026-01-27 14:03:53.945 238945 DEBUG nova.scheduler.client.report [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 09:03:53 np0005597378 nova_compute[238941]: 2026-01-27 14:03:53.945 238945 DEBUG nova.compute.provider_tree [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 09:03:53 np0005597378 nova_compute[238941]: 2026-01-27 14:03:53.961 238945 DEBUG nova.scheduler.client.report [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 09:03:53 np0005597378 nova_compute[238941]: 2026-01-27 14:03:53.978 238945 DEBUG nova.scheduler.client.report [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.070 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.450 238945 INFO nova.compute.manager [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Resuming#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.451 238945 DEBUG nova.objects.instance [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'flavor' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.454 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.576 238945 DEBUG oslo_concurrency.lockutils [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.576 238945 DEBUG oslo_concurrency.lockutils [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquired lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.577 238945 DEBUG nova.network.neutron [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:03:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:03:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055730449' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.678 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:03:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 166 op/s
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.683 238945 DEBUG nova.compute.provider_tree [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.713 238945 DEBUG nova.scheduler.client.report [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.776 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.777 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.779 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.780 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.780 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.780 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.974 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:03:54 np0005597378 nova_compute[238941]: 2026-01-27 14:03:54.974 238945 DEBUG nova.network.neutron [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.013 238945 INFO nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.067 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.251 238945 DEBUG nova.compute.manager [req-e49d34dc-28cb-4775-aa56-54b7d569c00e req-5b2e84f3-b916-4ec7-950b-405693a18ed4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.252 238945 DEBUG oslo_concurrency.lockutils [req-e49d34dc-28cb-4775-aa56-54b7d569c00e req-5b2e84f3-b916-4ec7-950b-405693a18ed4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.252 238945 DEBUG oslo_concurrency.lockutils [req-e49d34dc-28cb-4775-aa56-54b7d569c00e req-5b2e84f3-b916-4ec7-950b-405693a18ed4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.253 238945 DEBUG oslo_concurrency.lockutils [req-e49d34dc-28cb-4775-aa56-54b7d569c00e req-5b2e84f3-b916-4ec7-950b-405693a18ed4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.253 238945 DEBUG nova.compute.manager [req-e49d34dc-28cb-4775-aa56-54b7d569c00e req-5b2e84f3-b916-4ec7-950b-405693a18ed4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.253 238945 WARNING nova.compute.manager [req-e49d34dc-28cb-4775-aa56-54b7d569c00e req-5b2e84f3-b916-4ec7-950b-405693a18ed4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state suspended and task_state resuming.#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.263 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.264 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.265 238945 INFO nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Creating image(s)#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.290 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.312 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.335 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.339 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "e8093fe093c11961060649e4cf798940b7c4f681" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.341 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "e8093fe093c11961060649e4cf798940b7c4f681" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.345 238945 DEBUG nova.network.neutron [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.346 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:03:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:03:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3734414685' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.379 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.489 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.489 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.494 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.494 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.645 238945 DEBUG nova.virt.libvirt.imagebackend [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/ec504a42-336d-446e-acdb-e50fafec22d3/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/ec504a42-336d-446e-acdb-e50fafec22d3/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.690 238945 DEBUG nova.virt.libvirt.imagebackend [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Selected location: {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/ec504a42-336d-446e-acdb-e50fafec22d3/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.691 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] cloning images/ec504a42-336d-446e-acdb-e50fafec22d3@snap to None/99cd19f8-17dc-4d81-980f-4cf584356571_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.757 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.759 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3616MB free_disk=59.92115879803896GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.759 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.759 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.855 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 2b352ec7-34b6-47bb-af67-779b4d1f27cd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.856 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 58152b7a-295a-46c3-a454-95a08d597abd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.856 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 99cd19f8-17dc-4d81-980f-4cf584356571 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.856 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.856 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:03:55 np0005597378 nova_compute[238941]: 2026-01-27 14:03:55.949 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.045 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "e8093fe093c11961060649e4cf798940b7c4f681" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.167 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] resizing rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.204 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.411 238945 DEBUG nova.network.neutron [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Updating instance_info_cache with network_info: [{"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.438 238945 DEBUG nova.objects.instance [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lazy-loading 'migration_context' on Instance uuid 99cd19f8-17dc-4d81-980f-4cf584356571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.523 238945 DEBUG oslo_concurrency.lockutils [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Releasing lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.528 238945 DEBUG nova.virt.libvirt.vif [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:03:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-702662030',display_name='tempest-TestServerAdvancedOps-server-702662030',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-702662030',id=100,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:03:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='87e722e7579646e9924cc852bfd49285',ramdisk_id='',reservation_id='r-44i0u8y1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1351312611',owner_user_name='tempest-TestServerAdvancedOps-1351312611-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:03:52Z,user_data=None,user_id='17c3813514ef4adaa908639e29e969ba',uuid=58152b7a-295a-46c3-a454-95a08d597abd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.529 238945 DEBUG nova.network.os_vif_util [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converting VIF {"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.530 238945 DEBUG nova.network.os_vif_util [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.530 238945 DEBUG os_vif [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.531 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.531 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.532 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.535 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.535 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3a5102d-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.536 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3a5102d-0f, col_values=(('external_ids', {'iface-id': 'a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:aa:8d', 'vm-uuid': '58152b7a-295a-46c3-a454-95a08d597abd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:03:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:03:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3189361405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.536 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.536 238945 INFO os_vif [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f')#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.558 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.563 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.654 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.655 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Ensure instance console log exists: /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.656 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.656 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.656 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.658 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='94d8c731dc4c18c75f830e72d8bdf48e',container_format='bare',created_at=2026-01-27T14:03:49Z,direct_url=<?>,disk_format='raw',id=ec504a42-336d-446e-acdb-e50fafec22d3,min_disk=0,min_ram=0,name='tempest-image-dependency-test-849932242',owner='9cf05c7851f3406f80db37818456ad04',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-01-27T14:03:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'ec504a42-336d-446e-acdb-e50fafec22d3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.663 238945 WARNING nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.667 238945 DEBUG nova.objects.instance [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'numa_topology' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.669 238945 DEBUG nova.virt.libvirt.host [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.670 238945 DEBUG nova.virt.libvirt.host [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.675 238945 DEBUG nova.virt.libvirt.host [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.676 238945 DEBUG nova.virt.libvirt.host [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.676 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.676 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='94d8c731dc4c18c75f830e72d8bdf48e',container_format='bare',created_at=2026-01-27T14:03:49Z,direct_url=<?>,disk_format='raw',id=ec504a42-336d-446e-acdb-e50fafec22d3,min_disk=0,min_ram=0,name='tempest-image-dependency-test-849932242',owner='9cf05c7851f3406f80db37818456ad04',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-01-27T14:03:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.677 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.677 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.677 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.677 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.678 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.678 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.678 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.678 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.679 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.680 238945 DEBUG nova.virt.hardware [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:03:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 125 op/s
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.683 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.718 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:03:56 np0005597378 kernel: tapa3a5102d-0f: entered promiscuous mode
Jan 27 09:03:56 np0005597378 NetworkManager[48904]: <info>  [1769522636.7712] manager: (tapa3a5102d-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Jan 27 09:03:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:56Z|00939|binding|INFO|Claiming lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for this chassis.
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:56Z|00940|binding|INFO|a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6: Claiming fa:16:3e:38:aa:8d 10.100.0.14
Jan 27 09:03:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:56Z|00941|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 ovn-installed in OVS
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.791 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.800 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.801 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:56 np0005597378 nova_compute[238941]: 2026-01-27 14:03:56.801 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:56 np0005597378 systemd-machined[207425]: New machine qemu-121-instance-00000064.
Jan 27 09:03:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:56Z|00942|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 up in Southbound
Jan 27 09:03:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:56.813 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:aa:8d 10.100.0.14'], port_security=['fa:16:3e:38:aa:8d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58152b7a-295a-46c3-a454-95a08d597abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d012e71b-f9f0-438e-bd5a-bbabeb4df913', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87e722e7579646e9924cc852bfd49285', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fc54617d-885a-48f9-9a6b-1fd6e982d1f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40feac3b-bbdb-4afc-af91-70e591134b04, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:03:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:56.814 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 in datapath d012e71b-f9f0-438e-bd5a-bbabeb4df913 bound to our chassis#033[00m
Jan 27 09:03:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:56.815 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d012e71b-f9f0-438e-bd5a-bbabeb4df913 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 27 09:03:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:56.817 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e363c04-2c45-4a9e-aab0-1000eb87f820]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:56 np0005597378 systemd[1]: Started Virtual Machine qemu-121-instance-00000064.
Jan 27 09:03:56 np0005597378 systemd-udevd[325338]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:03:56 np0005597378 NetworkManager[48904]: <info>  [1769522636.8442] device (tapa3a5102d-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:03:56 np0005597378 NetworkManager[48904]: <info>  [1769522636.8447] device (tapa3a5102d-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:03:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:03:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/745081590' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.261 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.284 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.313 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.321 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.375 238945 DEBUG nova.compute.manager [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.376 238945 DEBUG nova.objects.instance [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.378 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 58152b7a-295a-46c3-a454-95a08d597abd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.379 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522637.3478193, 58152b7a-295a-46c3-a454-95a08d597abd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.379 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Started (Lifecycle Event)#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.394 238945 DEBUG nova.compute.manager [req-eaf0fa4c-ee61-485f-a1a9-178d1e4a57cf req-97d1b9a2-9ee9-4131-bb5a-d9f635aa31e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.395 238945 DEBUG oslo_concurrency.lockutils [req-eaf0fa4c-ee61-485f-a1a9-178d1e4a57cf req-97d1b9a2-9ee9-4131-bb5a-d9f635aa31e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.395 238945 DEBUG oslo_concurrency.lockutils [req-eaf0fa4c-ee61-485f-a1a9-178d1e4a57cf req-97d1b9a2-9ee9-4131-bb5a-d9f635aa31e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.395 238945 DEBUG oslo_concurrency.lockutils [req-eaf0fa4c-ee61-485f-a1a9-178d1e4a57cf req-97d1b9a2-9ee9-4131-bb5a-d9f635aa31e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.395 238945 DEBUG nova.compute.manager [req-eaf0fa4c-ee61-485f-a1a9-178d1e4a57cf req-97d1b9a2-9ee9-4131-bb5a-d9f635aa31e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.396 238945 WARNING nova.compute.manager [req-eaf0fa4c-ee61-485f-a1a9-178d1e4a57cf req-97d1b9a2-9ee9-4131-bb5a-d9f635aa31e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state suspended and task_state resuming.#033[00m
Jan 27 09:03:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:57.430 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:d4:93 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74069726c924807876fe3ea269c8310', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d952093-4df1-49df-9dd5-e09c8ee2177b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cf8fc2d-25f9-4c7e-ac01-221510bbe9f8) old=Port_Binding(mac=['fa:16:3e:95:d4:93 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29f09a2e-6e23-4fac-a0c4-d62d8979c94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74069726c924807876fe3ea269c8310', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:03:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:57.431 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cf8fc2d-25f9-4c7e-ac01-221510bbe9f8 in datapath 29f09a2e-6e23-4fac-a0c4-d62d8979c94e updated#033[00m
Jan 27 09:03:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:57.432 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 29f09a2e-6e23-4fac-a0c4-d62d8979c94e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:03:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:57.433 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7ed35f-a608-41da-b83e-12568b78e520]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.484 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.489 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.492 238945 INFO nova.virt.libvirt.driver [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance running successfully.#033[00m
Jan 27 09:03:57 np0005597378 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.495 238945 DEBUG nova.virt.libvirt.guest [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.495 238945 DEBUG nova.compute.manager [None req-5be8f261-8692-43a6-9097-4eaf241ec37b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.577 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.578 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522637.3483732, 58152b7a-295a-46c3-a454-95a08d597abd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.578 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.629 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.633 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:03:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:03:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535498131' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.947 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.948 238945 DEBUG nova.objects.instance [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 99cd19f8-17dc-4d81-980f-4cf584356571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:57 np0005597378 nova_compute[238941]: 2026-01-27 14:03:57.983 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  <uuid>99cd19f8-17dc-4d81-980f-4cf584356571</uuid>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  <name>instance-00000065</name>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <nova:name>instance-depend-image</nova:name>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:03:56</nova:creationTime>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:        <nova:user uuid="a9d0ea5b6b714a968e03b217f04e9718">tempest-ImageDependencyTests-2127868411-project-member</nova:user>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:        <nova:project uuid="9cf05c7851f3406f80db37818456ad04">tempest-ImageDependencyTests-2127868411</nova:project>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="ec504a42-336d-446e-acdb-e50fafec22d3"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <entry name="serial">99cd19f8-17dc-4d81-980f-4cf584356571</entry>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <entry name="uuid">99cd19f8-17dc-4d81-980f-4cf584356571</entry>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/99cd19f8-17dc-4d81-980f-4cf584356571_disk">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/99cd19f8-17dc-4d81-980f-4cf584356571_disk.config">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/console.log" append="off"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:03:57 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:03:57 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:03:57 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:03:57 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.132 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.133 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.133 238945 INFO nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Using config drive#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.154 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.329 238945 INFO nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Creating config drive at /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/disk.config#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.336 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjchvhko2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.481 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjchvhko2" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.507 238945 DEBUG nova.storage.rbd_utils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] rbd image 99cd19f8-17dc-4d81-980f-4cf584356571_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.510 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/disk.config 99cd19f8-17dc-4d81-980f-4cf584356571_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.548 238945 DEBUG nova.objects.instance [None req-65c33d90-089c-483f-ab6d-49ab7fd2074b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.654 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522638.6545825, 58152b7a-295a-46c3-a454-95a08d597abd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.655 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:03:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 154 op/s
Jan 27 09:03:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.714 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.719 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.801 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.801 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.802 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.888 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 27 09:03:58 np0005597378 nova_compute[238941]: 2026-01-27 14:03:58.893 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.031 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.032 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.032 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.032 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:03:59 np0005597378 kernel: tapa3a5102d-0f (unregistering): left promiscuous mode
Jan 27 09:03:59 np0005597378 NetworkManager[48904]: <info>  [1769522639.3892] device (tapa3a5102d-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:03:59 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:59Z|00943|binding|INFO|Releasing lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 from this chassis (sb_readonly=0)
Jan 27 09:03:59 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:59Z|00944|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 down in Southbound
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.401 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:59 np0005597378 ovn_controller[144812]: 2026-01-27T14:03:59Z|00945|binding|INFO|Removing iface tapa3a5102d-0f ovn-installed in OVS
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.415 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:03:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:59.429 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:aa:8d 10.100.0.14'], port_security=['fa:16:3e:38:aa:8d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58152b7a-295a-46c3-a454-95a08d597abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d012e71b-f9f0-438e-bd5a-bbabeb4df913', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87e722e7579646e9924cc852bfd49285', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fc54617d-885a-48f9-9a6b-1fd6e982d1f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40feac3b-bbdb-4afc-af91-70e591134b04, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:03:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:59.430 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 in datapath d012e71b-f9f0-438e-bd5a-bbabeb4df913 unbound from our chassis#033[00m
Jan 27 09:03:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:59.431 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d012e71b-f9f0-438e-bd5a-bbabeb4df913 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 27 09:03:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:03:59.432 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2bbc2beb-ce96-4243-86e4-e23557415628]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:03:59 np0005597378 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000064.scope: Deactivated successfully.
Jan 27 09:03:59 np0005597378 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000064.scope: Consumed 1.773s CPU time.
Jan 27 09:03:59 np0005597378 systemd-machined[207425]: Machine qemu-121-instance-00000064 terminated.
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.466 238945 DEBUG oslo_concurrency.processutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/disk.config 99cd19f8-17dc-4d81-980f-4cf584356571_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.956s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.466 238945 INFO nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Deleting local config drive /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571/disk.config because it was imported into RBD.#033[00m
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.515 238945 DEBUG nova.compute.manager [req-a7665eeb-e244-4ea0-9a41-9cb33054f71d req-481f9d42-fcf4-4c78-8e6d-9a740d265e12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.515 238945 DEBUG oslo_concurrency.lockutils [req-a7665eeb-e244-4ea0-9a41-9cb33054f71d req-481f9d42-fcf4-4c78-8e6d-9a740d265e12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.515 238945 DEBUG oslo_concurrency.lockutils [req-a7665eeb-e244-4ea0-9a41-9cb33054f71d req-481f9d42-fcf4-4c78-8e6d-9a740d265e12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.515 238945 DEBUG oslo_concurrency.lockutils [req-a7665eeb-e244-4ea0-9a41-9cb33054f71d req-481f9d42-fcf4-4c78-8e6d-9a740d265e12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.516 238945 DEBUG nova.compute.manager [req-a7665eeb-e244-4ea0-9a41-9cb33054f71d req-481f9d42-fcf4-4c78-8e6d-9a740d265e12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.516 238945 WARNING nova.compute.manager [req-a7665eeb-e244-4ea0-9a41-9cb33054f71d req-481f9d42-fcf4-4c78-8e6d-9a740d265e12 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state active and task_state suspending.#033[00m
Jan 27 09:03:59 np0005597378 nova_compute[238941]: 2026-01-27 14:03:59.528 238945 DEBUG nova.compute.manager [None req-65c33d90-089c-483f-ab6d-49ab7fd2074b 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:03:59 np0005597378 systemd-machined[207425]: New machine qemu-122-instance-00000065.
Jan 27 09:03:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:03:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4018768589' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:03:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:03:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4018768589' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:03:59 np0005597378 systemd[1]: Started Virtual Machine qemu-122-instance-00000065.
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.295 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.321 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.321 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.598 238945 INFO nova.compute.manager [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Resuming#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.599 238945 DEBUG nova.objects.instance [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'flavor' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.638 238945 DEBUG oslo_concurrency.lockutils [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.639 238945 DEBUG oslo_concurrency.lockutils [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquired lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.639 238945 DEBUG nova.network.neutron [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:04:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 130 op/s
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.705 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522640.705238, 99cd19f8-17dc-4d81-980f-4cf584356571 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.706 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.708 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.709 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.713 238945 INFO nova.virt.libvirt.driver [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Instance spawned successfully.#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.713 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.756 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.759 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.766 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.767 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.767 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.768 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.768 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.768 238945 DEBUG nova.virt.libvirt.driver [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.805 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.806 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522640.707653, 99cd19f8-17dc-4d81-980f-4cf584356571 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.806 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] VM Started (Lifecycle Event)#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.833 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.837 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.842 238945 INFO nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Took 5.58 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.843 238945 DEBUG nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.865 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.897 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.900 238945 INFO nova.compute.manager [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Took 7.15 seconds to build instance.#033[00m
Jan 27 09:04:00 np0005597378 nova_compute[238941]: 2026-01-27 14:04:00.923 238945 DEBUG oslo_concurrency.lockutils [None req-25fbb0f1-a8b7-4afa-929c-3e8401aa1129 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "99cd19f8-17dc-4d81-980f-4cf584356571" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.599 238945 DEBUG nova.network.neutron [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Updating instance_info_cache with network_info: [{"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.605 238945 DEBUG nova.compute.manager [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.605 238945 DEBUG oslo_concurrency.lockutils [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.606 238945 DEBUG oslo_concurrency.lockutils [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.606 238945 DEBUG oslo_concurrency.lockutils [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.606 238945 DEBUG nova.compute.manager [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.606 238945 WARNING nova.compute.manager [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state suspended and task_state resuming.#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.606 238945 DEBUG nova.compute.manager [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.607 238945 DEBUG oslo_concurrency.lockutils [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.607 238945 DEBUG oslo_concurrency.lockutils [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.607 238945 DEBUG oslo_concurrency.lockutils [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.608 238945 DEBUG nova.compute.manager [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.608 238945 WARNING nova.compute.manager [req-674ee45c-44f6-4d55-ab72-bc8a948392f5 req-9030b370-f3b0-44e7-8b01-91b820f12343 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state suspended and task_state resuming.#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.615 238945 DEBUG oslo_concurrency.lockutils [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Releasing lock "refresh_cache-58152b7a-295a-46c3-a454-95a08d597abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.620 238945 DEBUG nova.virt.libvirt.vif [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:03:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-702662030',display_name='tempest-TestServerAdvancedOps-server-702662030',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-702662030',id=100,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:03:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='87e722e7579646e9924cc852bfd49285',ramdisk_id='',reservation_id='r-44i0u8y1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1351312611',owner_user_name='tempest-TestServerAdvancedOps-1351312611-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:03:59Z,user_data=None,user_id='17c3813514ef4adaa908639e29e969ba',uuid=58152b7a-295a-46c3-a454-95a08d597abd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.620 238945 DEBUG nova.network.os_vif_util [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converting VIF {"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.621 238945 DEBUG nova.network.os_vif_util [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.622 238945 DEBUG os_vif [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.623 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.623 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.623 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.626 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.626 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3a5102d-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.626 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3a5102d-0f, col_values=(('external_ids', {'iface-id': 'a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:aa:8d', 'vm-uuid': '58152b7a-295a-46c3-a454-95a08d597abd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.627 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.627 238945 INFO os_vif [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f')#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.742 238945 DEBUG nova.objects.instance [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'numa_topology' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:01 np0005597378 kernel: tapa3a5102d-0f: entered promiscuous mode
Jan 27 09:04:01 np0005597378 NetworkManager[48904]: <info>  [1769522641.8009] manager: (tapa3a5102d-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Jan 27 09:04:01 np0005597378 systemd-udevd[325583]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:04:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:01Z|00946|binding|INFO|Claiming lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for this chassis.
Jan 27 09:04:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:01Z|00947|binding|INFO|a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6: Claiming fa:16:3e:38:aa:8d 10.100.0.14
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.802 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:01.809 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:aa:8d 10.100.0.14'], port_security=['fa:16:3e:38:aa:8d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58152b7a-295a-46c3-a454-95a08d597abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d012e71b-f9f0-438e-bd5a-bbabeb4df913', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87e722e7579646e9924cc852bfd49285', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'fc54617d-885a-48f9-9a6b-1fd6e982d1f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40feac3b-bbdb-4afc-af91-70e591134b04, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:04:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:01.811 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 in datapath d012e71b-f9f0-438e-bd5a-bbabeb4df913 bound to our chassis#033[00m
Jan 27 09:04:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:01.811 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d012e71b-f9f0-438e-bd5a-bbabeb4df913 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 27 09:04:01 np0005597378 NetworkManager[48904]: <info>  [1769522641.8134] device (tapa3a5102d-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:04:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:01.812 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[275f8a4e-9865-471e-a963-ce8a6a940350]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:01 np0005597378 NetworkManager[48904]: <info>  [1769522641.8140] device (tapa3a5102d-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:04:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:01Z|00948|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 up in Southbound
Jan 27 09:04:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:01Z|00949|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 ovn-installed in OVS
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.819 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:01 np0005597378 nova_compute[238941]: 2026-01-27 14:04:01.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:01 np0005597378 systemd-machined[207425]: New machine qemu-123-instance-00000064.
Jan 27 09:04:01 np0005597378 systemd[1]: Started Virtual Machine qemu-123-instance-00000064.
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.263 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.421 238945 DEBUG nova.compute.manager [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.469 238945 INFO nova.compute.manager [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] instance snapshotting#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.663 238945 INFO nova.virt.libvirt.driver [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Beginning live snapshot process#033[00m
Jan 27 09:04:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 129 op/s
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.825 238945 DEBUG nova.storage.rbd_utils [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] creating snapshot(7752bca01b484d5cac6c003de91f22b2) on rbd image(99cd19f8-17dc-4d81-980f-4cf584356571_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.938 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 58152b7a-295a-46c3-a454-95a08d597abd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.939 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522642.9383054, 58152b7a-295a-46c3-a454-95a08d597abd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.939 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Started (Lifecycle Event)#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.957 238945 DEBUG nova.compute.manager [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.959 238945 DEBUG nova.objects.instance [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.962 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.969 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.979 238945 INFO nova.virt.libvirt.driver [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance running successfully.#033[00m
Jan 27 09:04:02 np0005597378 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.983 238945 DEBUG nova.virt.libvirt.guest [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.983 238945 DEBUG nova.compute.manager [None req-07fc70ba-47fe-4ae8-bfa3-3744f165ef7d 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.990 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.990 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522642.9411652, 58152b7a-295a-46c3-a454-95a08d597abd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:04:02 np0005597378 nova_compute[238941]: 2026-01-27 14:04:02.990 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.024 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.027 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:04:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:04:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 29K writes, 117K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s#012Cumulative WAL: 29K writes, 10K syncs, 2.93 writes per sync, written: 0.11 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7494 writes, 31K keys, 7494 commit groups, 1.0 writes per commit group, ingest: 33.07 MB, 0.06 MB/s#012Interval WAL: 7494 writes, 2795 syncs, 2.68 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.683 238945 DEBUG nova.compute.manager [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.684 238945 DEBUG oslo_concurrency.lockutils [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.684 238945 DEBUG oslo_concurrency.lockutils [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.684 238945 DEBUG oslo_concurrency.lockutils [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.684 238945 DEBUG nova.compute.manager [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.684 238945 WARNING nova.compute.manager [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.685 238945 DEBUG nova.compute.manager [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.685 238945 DEBUG oslo_concurrency.lockutils [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.685 238945 DEBUG oslo_concurrency.lockutils [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.685 238945 DEBUG oslo_concurrency.lockutils [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.686 238945 DEBUG nova.compute.manager [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:04:03 np0005597378 nova_compute[238941]: 2026-01-27 14:04:03.686 238945 WARNING nova.compute.manager [req-7e1eb311-d241-499f-955b-e3190d762a4f req-b2ca905e-f1a5-428d-8c40-0734e17fe0ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.779956) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522643780000, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1859, "num_deletes": 253, "total_data_size": 2931506, "memory_usage": 2973352, "flush_reason": "Manual Compaction"}
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522643994603, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1757472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35371, "largest_seqno": 37229, "table_properties": {"data_size": 1751087, "index_size": 3331, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16610, "raw_average_key_size": 21, "raw_value_size": 1736958, "raw_average_value_size": 2201, "num_data_blocks": 151, "num_entries": 789, "num_filter_entries": 789, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522462, "oldest_key_time": 1769522462, "file_creation_time": 1769522643, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 214719 microseconds, and 4688 cpu microseconds.
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.994669) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1757472 bytes OK
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.994691) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.999199) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.999213) EVENT_LOG_v1 {"time_micros": 1769522643999209, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.999229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:04:03 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2923544, prev total WAL file size 2944059, number of live WAL files 2.
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.999993) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323534' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1716KB)], [77(9555KB)]
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522644000014, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11541826, "oldest_snapshot_seqno": -1}
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6445 keys, 9306603 bytes, temperature: kUnknown
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522644159307, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 9306603, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9263797, "index_size": 25608, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 161621, "raw_average_key_size": 25, "raw_value_size": 9148896, "raw_average_value_size": 1419, "num_data_blocks": 1041, "num_entries": 6445, "num_filter_entries": 6445, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522643, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.159671) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 9306603 bytes
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.196584) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.4 rd, 58.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(11.9) write-amplify(5.3) OK, records in: 6881, records dropped: 436 output_compression: NoCompression
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.196653) EVENT_LOG_v1 {"time_micros": 1769522644196622, "job": 44, "event": "compaction_finished", "compaction_time_micros": 159426, "compaction_time_cpu_micros": 23147, "output_level": 6, "num_output_files": 1, "total_output_size": 9306603, "num_input_records": 6881, "num_output_records": 6445, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522644197173, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522644199281, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:03.999940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.199320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.199342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.199344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.199345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:04:04 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:04.199347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:04:04 np0005597378 podman[325844]: 2026-01-27 14:04:04.21973367 +0000 UTC m=+0.076537351 container create dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 09:04:04 np0005597378 podman[325844]: 2026-01-27 14:04:04.17330827 +0000 UTC m=+0.030111951 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:04:04 np0005597378 systemd[1]: Started libpod-conmon-dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad.scope.
Jan 27 09:04:04 np0005597378 nova_compute[238941]: 2026-01-27 14:04:04.295 238945 DEBUG nova.storage.rbd_utils [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] cloning vms/99cd19f8-17dc-4d81-980f-4cf584356571_disk@7752bca01b484d5cac6c003de91f22b2 to images/baf12a49-0498-4a02-be98-e917d08a8d94 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 09:04:04 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:04:04 np0005597378 podman[325844]: 2026-01-27 14:04:04.344856476 +0000 UTC m=+0.201660177 container init dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_satoshi, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:04:04 np0005597378 podman[325844]: 2026-01-27 14:04:04.351922287 +0000 UTC m=+0.208725968 container start dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:04:04 np0005597378 hungry_satoshi[325861]: 167 167
Jan 27 09:04:04 np0005597378 systemd[1]: libpod-dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad.scope: Deactivated successfully.
Jan 27 09:04:04 np0005597378 podman[325844]: 2026-01-27 14:04:04.416239087 +0000 UTC m=+0.273042768 container attach dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_satoshi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:04:04 np0005597378 podman[325844]: 2026-01-27 14:04:04.419180336 +0000 UTC m=+0.275984017 container died dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 09:04:04 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b6398fb95757971c9bdcffcd686576717d54bc46c005797f53e37d3e5f23aff5-merged.mount: Deactivated successfully.
Jan 27 09:04:04 np0005597378 nova_compute[238941]: 2026-01-27 14:04:04.553 238945 DEBUG nova.storage.rbd_utils [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] flattening images/baf12a49-0498-4a02-be98-e917d08a8d94 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 09:04:04 np0005597378 podman[325844]: 2026-01-27 14:04:04.63707819 +0000 UTC m=+0.493881871 container remove dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_satoshi, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:04:04 np0005597378 systemd[1]: libpod-conmon-dfce2b4c62e39945a9ea754b725db758a12cbab5d67db84b6e950346a061c5ad.scope: Deactivated successfully.
Jan 27 09:04:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 27 KiB/s wr, 79 op/s
Jan 27 09:04:04 np0005597378 nova_compute[238941]: 2026-01-27 14:04:04.824 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:04 np0005597378 nova_compute[238941]: 2026-01-27 14:04:04.824 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:04 np0005597378 nova_compute[238941]: 2026-01-27 14:04:04.828 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:04 np0005597378 nova_compute[238941]: 2026-01-27 14:04:04.828 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:04 np0005597378 nova_compute[238941]: 2026-01-27 14:04:04.831 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:04 np0005597378 nova_compute[238941]: 2026-01-27 14:04:04.833 238945 INFO nova.compute.manager [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Terminating instance#033[00m
Jan 27 09:04:04 np0005597378 nova_compute[238941]: 2026-01-27 14:04:04.834 238945 DEBUG nova.compute.manager [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:04:04 np0005597378 podman[325955]: 2026-01-27 14:04:04.862851744 +0000 UTC m=+0.076816148 container create a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:04:04 np0005597378 nova_compute[238941]: 2026-01-27 14:04:04.873 238945 DEBUG nova.storage.rbd_utils [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] removing snapshot(7752bca01b484d5cac6c003de91f22b2) on rbd image(99cd19f8-17dc-4d81-980f-4cf584356571_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 09:04:04 np0005597378 kernel: tapa3a5102d-0f (unregistering): left promiscuous mode
Jan 27 09:04:04 np0005597378 NetworkManager[48904]: <info>  [1769522644.8975] device (tapa3a5102d-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:04:04 np0005597378 systemd[1]: Started libpod-conmon-a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947.scope.
Jan 27 09:04:04 np0005597378 podman[325955]: 2026-01-27 14:04:04.812056888 +0000 UTC m=+0.026021312 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:04:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:04Z|00950|binding|INFO|Releasing lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 from this chassis (sb_readonly=0)
Jan 27 09:04:04 np0005597378 nova_compute[238941]: 2026-01-27 14:04:04.909 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:04Z|00951|binding|INFO|Setting lport a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 down in Southbound
Jan 27 09:04:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:04Z|00952|binding|INFO|Removing iface tapa3a5102d-0f ovn-installed in OVS
Jan 27 09:04:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:04.918 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:aa:8d 10.100.0.14'], port_security=['fa:16:3e:38:aa:8d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58152b7a-295a-46c3-a454-95a08d597abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d012e71b-f9f0-438e-bd5a-bbabeb4df913', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87e722e7579646e9924cc852bfd49285', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'fc54617d-885a-48f9-9a6b-1fd6e982d1f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40feac3b-bbdb-4afc-af91-70e591134b04, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:04:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:04.919 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 in datapath d012e71b-f9f0-438e-bd5a-bbabeb4df913 unbound from our chassis#033[00m
Jan 27 09:04:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:04.920 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d012e71b-f9f0-438e-bd5a-bbabeb4df913 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 27 09:04:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:04.921 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7132ef9-f62e-4274-94ee-f843c584b7fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:04 np0005597378 nova_compute[238941]: 2026-01-27 14:04:04.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:04 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:04:04 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904521c72d01be3daa9ad237fd297e8d88f0558aa73db20b9ddb5b287651b3b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:04 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904521c72d01be3daa9ad237fd297e8d88f0558aa73db20b9ddb5b287651b3b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:04 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904521c72d01be3daa9ad237fd297e8d88f0558aa73db20b9ddb5b287651b3b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:04 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904521c72d01be3daa9ad237fd297e8d88f0558aa73db20b9ddb5b287651b3b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:04 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904521c72d01be3daa9ad237fd297e8d88f0558aa73db20b9ddb5b287651b3b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:04 np0005597378 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Deactivated successfully.
Jan 27 09:04:04 np0005597378 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Consumed 2.878s CPU time.
Jan 27 09:04:04 np0005597378 systemd-machined[207425]: Machine qemu-123-instance-00000064 terminated.
Jan 27 09:04:05 np0005597378 podman[325955]: 2026-01-27 14:04:05.016737715 +0000 UTC m=+0.230702149 container init a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_taussig, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:04:05 np0005597378 podman[325955]: 2026-01-27 14:04:05.024022301 +0000 UTC m=+0.237986705 container start a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:04:05 np0005597378 podman[325955]: 2026-01-27 14:04:05.032861538 +0000 UTC m=+0.246825942 container attach a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.055 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.073 238945 INFO nova.virt.libvirt.driver [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Instance destroyed successfully.#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.073 238945 DEBUG nova.objects.instance [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lazy-loading 'resources' on Instance uuid 58152b7a-295a-46c3-a454-95a08d597abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.095 238945 DEBUG nova.virt.libvirt.vif [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:03:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-702662030',display_name='tempest-TestServerAdvancedOps-server-702662030',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-702662030',id=100,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:03:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='87e722e7579646e9924cc852bfd49285',ramdisk_id='',reservation_id='r-44i0u8y1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-1351312611',owner_user_name='tempest-TestServerAdvancedOps-1351312611-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:04:03Z,user_data=None,user_id='17c3813514ef4adaa908639e29e969ba',uuid=58152b7a-295a-46c3-a454-95a08d597abd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.095 238945 DEBUG nova.network.os_vif_util [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converting VIF {"id": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "address": "fa:16:3e:38:aa:8d", "network": {"id": "d012e71b-f9f0-438e-bd5a-bbabeb4df913", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1750979719-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "87e722e7579646e9924cc852bfd49285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3a5102d-0f", "ovs_interfaceid": "a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.096 238945 DEBUG nova.network.os_vif_util [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.096 238945 DEBUG os_vif [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.099 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3a5102d-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.104 238945 INFO os_vif [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:aa:8d,bridge_name='br-int',has_traffic_filtering=True,id=a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6,network=Network(d012e71b-f9f0-438e-bd5a-bbabeb4df913),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3a5102d-0f')#033[00m
Jan 27 09:04:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:04:05 np0005597378 gracious_taussig[325976]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:04:05 np0005597378 gracious_taussig[325976]: --> All data devices are unavailable
Jan 27 09:04:05 np0005597378 systemd[1]: libpod-a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947.scope: Deactivated successfully.
Jan 27 09:04:05 np0005597378 podman[325955]: 2026-01-27 14:04:05.541162226 +0000 UTC m=+0.755126630 container died a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_taussig, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:04:05 np0005597378 systemd[1]: var-lib-containers-storage-overlay-904521c72d01be3daa9ad237fd297e8d88f0558aa73db20b9ddb5b287651b3b9-merged.mount: Deactivated successfully.
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.778 238945 DEBUG nova.compute.manager [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.779 238945 DEBUG oslo_concurrency.lockutils [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.779 238945 DEBUG oslo_concurrency.lockutils [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.779 238945 DEBUG oslo_concurrency.lockutils [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.780 238945 DEBUG nova.compute.manager [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.780 238945 DEBUG nova.compute.manager [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-unplugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.780 238945 DEBUG nova.compute.manager [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.781 238945 DEBUG oslo_concurrency.lockutils [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "58152b7a-295a-46c3-a454-95a08d597abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.781 238945 DEBUG oslo_concurrency.lockutils [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.781 238945 DEBUG oslo_concurrency.lockutils [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.781 238945 DEBUG nova.compute.manager [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] No waiting events found dispatching network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:04:05 np0005597378 nova_compute[238941]: 2026-01-27 14:04:05.782 238945 WARNING nova.compute.manager [req-22419f16-c6e8-4834-8200-3efe8c18ab8e req-f597dd23-30cd-413e-9eea-73108b26dd8b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received unexpected event network-vif-plugged-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:04:05 np0005597378 podman[325955]: 2026-01-27 14:04:05.91963086 +0000 UTC m=+1.133595264 container remove a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 09:04:06 np0005597378 podman[326030]: 2026-01-27 14:04:06.010148905 +0000 UTC m=+0.430409302 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:04:06 np0005597378 systemd[1]: libpod-conmon-a481ac6fba2e4c2529ea3590147fc081779adf36bd40f3896ac218ef29f32947.scope: Deactivated successfully.
Jan 27 09:04:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Jan 27 09:04:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Jan 27 09:04:06 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Jan 27 09:04:06 np0005597378 podman[326124]: 2026-01-27 14:04:06.454677026 +0000 UTC m=+0.115109768 container create debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:04:06 np0005597378 podman[326124]: 2026-01-27 14:04:06.365172888 +0000 UTC m=+0.025605650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:04:06 np0005597378 nova_compute[238941]: 2026-01-27 14:04:06.484 238945 DEBUG nova.storage.rbd_utils [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] creating snapshot(snap) on rbd image(baf12a49-0498-4a02-be98-e917d08a8d94) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 09:04:06 np0005597378 systemd[1]: Started libpod-conmon-debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7.scope.
Jan 27 09:04:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:04:06 np0005597378 podman[326124]: 2026-01-27 14:04:06.62688873 +0000 UTC m=+0.287321462 container init debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:04:06 np0005597378 podman[326124]: 2026-01-27 14:04:06.634543645 +0000 UTC m=+0.294976387 container start debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:04:06 np0005597378 priceless_wilson[326158]: 167 167
Jan 27 09:04:06 np0005597378 systemd[1]: libpod-debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7.scope: Deactivated successfully.
Jan 27 09:04:06 np0005597378 conmon[326158]: conmon debe4d3ae8a1489c25e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7.scope/container/memory.events
Jan 27 09:04:06 np0005597378 podman[326124]: 2026-01-27 14:04:06.647642488 +0000 UTC m=+0.308075290 container attach debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_wilson, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 09:04:06 np0005597378 podman[326124]: 2026-01-27 14:04:06.649222441 +0000 UTC m=+0.309655183 container died debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 09:04:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 21 KiB/s wr, 57 op/s
Jan 27 09:04:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-724ffb45a9afd70c7addcb06925665df36aeff1ac59f04e77f2bf35de135dd92-merged.mount: Deactivated successfully.
Jan 27 09:04:07 np0005597378 podman[326124]: 2026-01-27 14:04:07.058093662 +0000 UTC m=+0.718526404 container remove debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_wilson, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:04:07 np0005597378 systemd[1]: libpod-conmon-debe4d3ae8a1489c25e45c08fe67372c5ebe4d7c2b4054338fc299bc1c2b22a7.scope: Deactivated successfully.
Jan 27 09:04:07 np0005597378 podman[326174]: 2026-01-27 14:04:07.145687758 +0000 UTC m=+0.210199866 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:04:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Jan 27 09:04:07 np0005597378 nova_compute[238941]: 2026-01-27 14:04:07.265 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:07 np0005597378 podman[326206]: 2026-01-27 14:04:07.225856856 +0000 UTC m=+0.030948484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:04:07 np0005597378 podman[326206]: 2026-01-27 14:04:07.376993073 +0000 UTC m=+0.182084681 container create 741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_solomon, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:04:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Jan 27 09:04:07 np0005597378 systemd[1]: Started libpod-conmon-741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063.scope.
Jan 27 09:04:07 np0005597378 nova_compute[238941]: 2026-01-27 14:04:07.478 238945 INFO nova.virt.libvirt.driver [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Deleting instance files /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd_del#033[00m
Jan 27 09:04:07 np0005597378 nova_compute[238941]: 2026-01-27 14:04:07.479 238945 INFO nova.virt.libvirt.driver [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Deletion of /var/lib/nova/instances/58152b7a-295a-46c3-a454-95a08d597abd_del complete#033[00m
Jan 27 09:04:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:04:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8391b3decb64c6ef14351b226e5df1475004393df7c3d2c70cfb7053a596f948/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8391b3decb64c6ef14351b226e5df1475004393df7c3d2c70cfb7053a596f948/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8391b3decb64c6ef14351b226e5df1475004393df7c3d2c70cfb7053a596f948/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8391b3decb64c6ef14351b226e5df1475004393df7c3d2c70cfb7053a596f948/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:07 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Jan 27 09:04:07 np0005597378 nova_compute[238941]: 2026-01-27 14:04:07.528 238945 INFO nova.compute.manager [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Took 2.69 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:04:07 np0005597378 nova_compute[238941]: 2026-01-27 14:04:07.528 238945 DEBUG oslo.service.loopingcall [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:04:07 np0005597378 nova_compute[238941]: 2026-01-27 14:04:07.530 238945 DEBUG nova.compute.manager [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:04:07 np0005597378 nova_compute[238941]: 2026-01-27 14:04:07.530 238945 DEBUG nova.network.neutron [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:04:07 np0005597378 podman[326206]: 2026-01-27 14:04:07.59802715 +0000 UTC m=+0.403118838 container init 741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 09:04:07 np0005597378 podman[326206]: 2026-01-27 14:04:07.604663158 +0000 UTC m=+0.409754776 container start 741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:04:07 np0005597378 podman[326206]: 2026-01-27 14:04:07.63889137 +0000 UTC m=+0.443983018 container attach 741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]: {
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:    "0": [
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:        {
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "devices": [
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "/dev/loop3"
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            ],
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_name": "ceph_lv0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_size": "21470642176",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "name": "ceph_lv0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "tags": {
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.cluster_name": "ceph",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.crush_device_class": "",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.encrypted": "0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.objectstore": "bluestore",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.osd_id": "0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.type": "block",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.vdo": "0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.with_tpm": "0"
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            },
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "type": "block",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "vg_name": "ceph_vg0"
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:        }
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:    ],
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:    "1": [
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:        {
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "devices": [
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "/dev/loop4"
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            ],
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_name": "ceph_lv1",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_size": "21470642176",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "name": "ceph_lv1",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "tags": {
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.cluster_name": "ceph",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.crush_device_class": "",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.encrypted": "0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.objectstore": "bluestore",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.osd_id": "1",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.type": "block",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.vdo": "0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.with_tpm": "0"
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            },
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "type": "block",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "vg_name": "ceph_vg1"
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:        }
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:    ],
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:    "2": [
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:        {
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "devices": [
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "/dev/loop5"
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            ],
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_name": "ceph_lv2",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_size": "21470642176",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "name": "ceph_lv2",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "tags": {
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.cluster_name": "ceph",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.crush_device_class": "",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.encrypted": "0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.objectstore": "bluestore",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.osd_id": "2",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.type": "block",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.vdo": "0",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:                "ceph.with_tpm": "0"
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            },
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "type": "block",
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:            "vg_name": "ceph_vg2"
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:        }
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]:    ]
Jan 27 09:04:07 np0005597378 nifty_solomon[326223]: }
Jan 27 09:04:07 np0005597378 systemd[1]: libpod-741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063.scope: Deactivated successfully.
Jan 27 09:04:07 np0005597378 podman[326206]: 2026-01-27 14:04:07.933574769 +0000 UTC m=+0.738666427 container died 741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:04:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8391b3decb64c6ef14351b226e5df1475004393df7c3d2c70cfb7053a596f948-merged.mount: Deactivated successfully.
Jan 27 09:04:08 np0005597378 podman[326206]: 2026-01-27 14:04:08.073858683 +0000 UTC m=+0.878950291 container remove 741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:04:08 np0005597378 systemd[1]: libpod-conmon-741f92351e2ca5bc97e29921815ddadc15bc886b8c9e5431462824465c929063.scope: Deactivated successfully.
Jan 27 09:04:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:04:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.3 total, 600.0 interval#012Cumulative writes: 31K writes, 121K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s#012Cumulative WAL: 31K writes, 11K syncs, 2.84 writes per sync, written: 0.11 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7430 writes, 28K keys, 7430 commit groups, 1.0 writes per commit group, ingest: 26.83 MB, 0.04 MB/s#012Interval WAL: 7430 writes, 2953 syncs, 2.52 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:04:08 np0005597378 podman[326304]: 2026-01-27 14:04:08.516635537 +0000 UTC m=+0.052719729 container create e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:04:08 np0005597378 systemd[1]: Started libpod-conmon-e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64.scope.
Jan 27 09:04:08 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:04:08 np0005597378 podman[326304]: 2026-01-27 14:04:08.485390746 +0000 UTC m=+0.021474958 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:04:08 np0005597378 podman[326304]: 2026-01-27 14:04:08.641403575 +0000 UTC m=+0.177487787 container init e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:04:08 np0005597378 podman[326304]: 2026-01-27 14:04:08.646769228 +0000 UTC m=+0.182853460 container start e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_turing, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:04:08 np0005597378 xenodochial_turing[326320]: 167 167
Jan 27 09:04:08 np0005597378 systemd[1]: libpod-e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64.scope: Deactivated successfully.
Jan 27 09:04:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 132 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 32 KiB/s wr, 169 op/s
Jan 27 09:04:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:08 np0005597378 podman[326304]: 2026-01-27 14:04:08.715037216 +0000 UTC m=+0.251121428 container attach e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:04:08 np0005597378 podman[326304]: 2026-01-27 14:04:08.715451166 +0000 UTC m=+0.251535378 container died e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:04:08 np0005597378 nova_compute[238941]: 2026-01-27 14:04:08.734 238945 DEBUG nova.network.neutron [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:04:08 np0005597378 nova_compute[238941]: 2026-01-27 14:04:08.755 238945 INFO nova.compute.manager [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Took 1.22 seconds to deallocate network for instance.#033[00m
Jan 27 09:04:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e05b1dab97bf5d346f861682bbcc2d840a3cc186170a740a77117e90d8ca5b15-merged.mount: Deactivated successfully.
Jan 27 09:04:08 np0005597378 nova_compute[238941]: 2026-01-27 14:04:08.808 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:08 np0005597378 nova_compute[238941]: 2026-01-27 14:04:08.810 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:08 np0005597378 nova_compute[238941]: 2026-01-27 14:04:08.818 238945 DEBUG nova.compute.manager [req-89a4ac22-076f-4175-899e-9616bb35832f req-87826862-d36a-48ff-a7e2-8deb47e63306 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Received event network-vif-deleted-a3a5102d-0f87-4479-8a2c-cf5d2e3d3eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:08 np0005597378 podman[326304]: 2026-01-27 14:04:08.861523197 +0000 UTC m=+0.397607399 container remove e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:04:08 np0005597378 systemd[1]: libpod-conmon-e59890a36cde7a056c6d84d3d64e3ca751006b8487464ba5b69e42a3ed7cdf64.scope: Deactivated successfully.
Jan 27 09:04:08 np0005597378 nova_compute[238941]: 2026-01-27 14:04:08.902 238945 DEBUG oslo_concurrency.processutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:09 np0005597378 podman[326347]: 2026-01-27 14:04:09.040159743 +0000 UTC m=+0.027260173 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:04:09 np0005597378 podman[326347]: 2026-01-27 14:04:09.150426171 +0000 UTC m=+0.137526611 container create be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:04:09 np0005597378 systemd[1]: Started libpod-conmon-be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce.scope.
Jan 27 09:04:09 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:04:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d95062768cfa0e47e7980802b7e62033412d2211ce111c7d13b58b85a7d04f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d95062768cfa0e47e7980802b7e62033412d2211ce111c7d13b58b85a7d04f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d95062768cfa0e47e7980802b7e62033412d2211ce111c7d13b58b85a7d04f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d95062768cfa0e47e7980802b7e62033412d2211ce111c7d13b58b85a7d04f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:09 np0005597378 podman[326347]: 2026-01-27 14:04:09.499480193 +0000 UTC m=+0.486580623 container init be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 09:04:09 np0005597378 podman[326347]: 2026-01-27 14:04:09.50717239 +0000 UTC m=+0.494272790 container start be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hypatia, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:04:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:04:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2418163666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.552 238945 DEBUG oslo_concurrency.processutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.559 238945 DEBUG nova.compute.provider_tree [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.576 238945 DEBUG nova.scheduler.client.report [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.582 238945 INFO nova.virt.libvirt.driver [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Snapshot image upload complete#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.582 238945 INFO nova.compute.manager [None req-18b2b3b6-0a70-4420-adc8-6bf2583992a4 a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Took 7.11 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 27 09:04:09 np0005597378 podman[326347]: 2026-01-27 14:04:09.595341263 +0000 UTC m=+0.582441693 container attach be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hypatia, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.606 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.633 238945 INFO nova.scheduler.client.report [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Deleted allocations for instance 58152b7a-295a-46c3-a454-95a08d597abd#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.660 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.660 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.686 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.697 238945 DEBUG oslo_concurrency.lockutils [None req-ca717671-e72d-4b6e-be71-560de75b6a98 17c3813514ef4adaa908639e29e969ba 87e722e7579646e9924cc852bfd49285 - - default default] Lock "58152b7a-295a-46c3-a454-95a08d597abd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.743 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.743 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.749 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.749 238945 INFO nova.compute.claims [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:04:09 np0005597378 nova_compute[238941]: 2026-01-27 14:04:09.881 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:10 np0005597378 lvm[326484]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:04:10 np0005597378 lvm[326485]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:04:10 np0005597378 lvm[326485]: VG ceph_vg1 finished
Jan 27 09:04:10 np0005597378 lvm[326484]: VG ceph_vg0 finished
Jan 27 09:04:10 np0005597378 lvm[326487]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:04:10 np0005597378 lvm[326487]: VG ceph_vg2 finished
Jan 27 09:04:10 np0005597378 nice_hypatia[326384]: {}
Jan 27 09:04:10 np0005597378 systemd[1]: libpod-be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce.scope: Deactivated successfully.
Jan 27 09:04:10 np0005597378 systemd[1]: libpod-be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce.scope: Consumed 1.425s CPU time.
Jan 27 09:04:10 np0005597378 podman[326347]: 2026-01-27 14:04:10.373133 +0000 UTC m=+1.360233430 container died be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:04:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:04:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190476339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.459 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.467 238945 DEBUG nova.compute.provider_tree [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.487 238945 DEBUG nova.scheduler.client.report [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.517 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.518 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.565 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.566 238945 DEBUG nova.network.neutron [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.597 238945 INFO nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.615 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:04:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 123 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 6.6 KiB/s wr, 146 op/s
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.719 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.720 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.720 238945 INFO nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Creating image(s)#033[00m
Jan 27 09:04:10 np0005597378 systemd[1]: var-lib-containers-storage-overlay-6d95062768cfa0e47e7980802b7e62033412d2211ce111c7d13b58b85a7d04f9-merged.mount: Deactivated successfully.
Jan 27 09:04:10 np0005597378 nova_compute[238941]: 2026-01-27 14:04:10.998 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:11 np0005597378 nova_compute[238941]: 2026-01-27 14:04:11.033 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:11 np0005597378 nova_compute[238941]: 2026-01-27 14:04:11.063 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:11 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:11Z|00953|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 09:04:11 np0005597378 nova_compute[238941]: 2026-01-27 14:04:11.069 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:11 np0005597378 nova_compute[238941]: 2026-01-27 14:04:11.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:11 np0005597378 nova_compute[238941]: 2026-01-27 14:04:11.113 238945 DEBUG nova.policy [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8b6fd848f3a4701b63086a5fb386473', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:04:11 np0005597378 nova_compute[238941]: 2026-01-27 14:04:11.152 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:11 np0005597378 nova_compute[238941]: 2026-01-27 14:04:11.154 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:11 np0005597378 nova_compute[238941]: 2026-01-27 14:04:11.155 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:11 np0005597378 nova_compute[238941]: 2026-01-27 14:04:11.155 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:11 np0005597378 nova_compute[238941]: 2026-01-27 14:04:11.181 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:11 np0005597378 nova_compute[238941]: 2026-01-27 14:04:11.186 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5762172c-e837-4a63-95dc-1559956fcef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:11 np0005597378 podman[326347]: 2026-01-27 14:04:11.305354724 +0000 UTC m=+2.292455124 container remove be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hypatia, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:04:11 np0005597378 systemd[1]: libpod-conmon-be4193c6ebbabada0ed1e91fce0658269dabd16449c33b4f9ee26f5a0e8771ce.scope: Deactivated successfully.
Jan 27 09:04:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:04:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:04:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:04:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:04:12 np0005597378 nova_compute[238941]: 2026-01-27 14:04:12.268 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:12 np0005597378 nova_compute[238941]: 2026-01-27 14:04:12.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:04:12 np0005597378 nova_compute[238941]: 2026-01-27 14:04:12.450 238945 DEBUG nova.network.neutron [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Successfully created port: d95ffe66-8325-4632-8e27-469ee216e988 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:04:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 123 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 5.5 KiB/s wr, 122 op/s
Jan 27 09:04:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:04:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:04:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Jan 27 09:04:13 np0005597378 nova_compute[238941]: 2026-01-27 14:04:13.976 238945 DEBUG nova.network.neutron [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Successfully updated port: d95ffe66-8325-4632-8e27-469ee216e988 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:04:13 np0005597378 nova_compute[238941]: 2026-01-27 14:04:13.996 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-5762172c-e837-4a63-95dc-1559956fcef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:04:13 np0005597378 nova_compute[238941]: 2026-01-27 14:04:13.996 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-5762172c-e837-4a63-95dc-1559956fcef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:04:13 np0005597378 nova_compute[238941]: 2026-01-27 14:04:13.997 238945 DEBUG nova.network.neutron [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:04:14 np0005597378 nova_compute[238941]: 2026-01-27 14:04:14.122 238945 DEBUG nova.compute.manager [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-changed-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:14 np0005597378 nova_compute[238941]: 2026-01-27 14:04:14.122 238945 DEBUG nova.compute.manager [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Refreshing instance network info cache due to event network-changed-d95ffe66-8325-4632-8e27-469ee216e988. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:04:14 np0005597378 nova_compute[238941]: 2026-01-27 14:04:14.123 238945 DEBUG oslo_concurrency.lockutils [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5762172c-e837-4a63-95dc-1559956fcef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:04:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Jan 27 09:04:14 np0005597378 nova_compute[238941]: 2026-01-27 14:04:14.222 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5762172c-e837-4a63-95dc-1559956fcef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:04:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3001.0 total, 600.0 interval#012Cumulative writes: 25K writes, 102K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 25K writes, 8777 syncs, 2.95 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6858 writes, 26K keys, 6858 commit groups, 1.0 writes per commit group, ingest: 27.89 MB, 0.05 MB/s#012Interval WAL: 6857 writes, 2671 syncs, 2.57 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:04:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 151 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 109 KiB/s rd, 1.6 MiB/s wr, 144 op/s
Jan 27 09:04:14 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Jan 27 09:04:14 np0005597378 nova_compute[238941]: 2026-01-27 14:04:14.742 238945 DEBUG nova.network.neutron [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:04:14 np0005597378 nova_compute[238941]: 2026-01-27 14:04:14.784 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] resizing rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:04:15 np0005597378 nova_compute[238941]: 2026-01-27 14:04:15.103 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:15 np0005597378 nova_compute[238941]: 2026-01-27 14:04:15.934 238945 DEBUG nova.objects.instance [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'migration_context' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:15 np0005597378 nova_compute[238941]: 2026-01-27 14:04:15.950 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:04:15 np0005597378 nova_compute[238941]: 2026-01-27 14:04:15.950 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Ensure instance console log exists: /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:04:15 np0005597378 nova_compute[238941]: 2026-01-27 14:04:15.951 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:15 np0005597378 nova_compute[238941]: 2026-01-27 14:04:15.951 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:15 np0005597378 nova_compute[238941]: 2026-01-27 14:04:15.952 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.292 238945 DEBUG nova.network.neutron [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Updating instance_info_cache with network_info: [{"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.308 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-5762172c-e837-4a63-95dc-1559956fcef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.309 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance network_info: |[{"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.309 238945 DEBUG oslo_concurrency.lockutils [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5762172c-e837-4a63-95dc-1559956fcef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.310 238945 DEBUG nova.network.neutron [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Refreshing network info cache for port d95ffe66-8325-4632-8e27-469ee216e988 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.314 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Start _get_guest_xml network_info=[{"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.319 238945 WARNING nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.331 238945 DEBUG nova.virt.libvirt.host [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.332 238945 DEBUG nova.virt.libvirt.host [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.337 238945 DEBUG nova.virt.libvirt.host [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.338 238945 DEBUG nova.virt.libvirt.host [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.338 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.338 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.339 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.339 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.339 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.339 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.339 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.340 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.340 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.340 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.340 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.341 238945 DEBUG nova.virt.hardware [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.343 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:04:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Jan 27 09:04:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Jan 27 09:04:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 165 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 104 KiB/s rd, 2.3 MiB/s wr, 143 op/s
Jan 27 09:04:16 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Jan 27 09:04:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:04:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4192182202' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:04:16 np0005597378 nova_compute[238941]: 2026-01-27 14:04:16.942 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.056 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.060 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:04:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:04:17
Jan 27 09:04:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:04:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:04:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.control', 'volumes', 'vms']
Jan 27 09:04:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.270 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:04:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:04:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/922717309' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.632 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.634 238945 DEBUG nova.virt.libvirt.vif [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-620412269',display_name='tempest-tempest.common.compute-instance-620412269',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-620412269',id=102,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-3av400n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:10Z,user_data=None,user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=5762172c-e837-4a63-95dc-1559956fcef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.635 238945 DEBUG nova.network.os_vif_util [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.636 238945 DEBUG nova.network.os_vif_util [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.637 238945 DEBUG nova.objects.instance [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.662 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  <uuid>5762172c-e837-4a63-95dc-1559956fcef5</uuid>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  <name>instance-00000066</name>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <nova:name>tempest-tempest.common.compute-instance-620412269</nova:name>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:04:16</nova:creationTime>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:        <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:        <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:        <nova:port uuid="d95ffe66-8325-4632-8e27-469ee216e988">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <entry name="serial">5762172c-e837-4a63-95dc-1559956fcef5</entry>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <entry name="uuid">5762172c-e837-4a63-95dc-1559956fcef5</entry>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5762172c-e837-4a63-95dc-1559956fcef5_disk">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5762172c-e837-4a63-95dc-1559956fcef5_disk.config">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:0d:0c:a7"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <target dev="tapd95ffe66-83"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/console.log" append="off"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:04:17 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:04:17 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:04:17 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:04:17 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.664 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Preparing to wait for external event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.665 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.666 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.666 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.667 238945 DEBUG nova.virt.libvirt.vif [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-620412269',display_name='tempest-tempest.common.compute-instance-620412269',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-620412269',id=102,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-3av400n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:10Z,user_data=None,user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=5762172c-e837-4a63-95dc-1559956fcef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.668 238945 DEBUG nova.network.os_vif_util [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.668 238945 DEBUG nova.network.os_vif_util [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.669 238945 DEBUG os_vif [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.670 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.670 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.674 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd95ffe66-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.674 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd95ffe66-83, col_values=(('external_ids', {'iface-id': 'd95ffe66-8325-4632-8e27-469ee216e988', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:0c:a7', 'vm-uuid': '5762172c-e837-4a63-95dc-1559956fcef5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.676 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:17 np0005597378 NetworkManager[48904]: <info>  [1769522657.6773] manager: (tapd95ffe66-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.679 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.685 238945 INFO os_vif [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83')#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.740 238945 DEBUG nova.network.neutron [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Updated VIF entry in instance network info cache for port d95ffe66-8325-4632-8e27-469ee216e988. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.741 238945 DEBUG nova.network.neutron [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Updating instance_info_cache with network_info: [{"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:04:17 np0005597378 nova_compute[238941]: 2026-01-27 14:04:17.759 238945 DEBUG oslo_concurrency.lockutils [req-28fe4bdb-36e1-4b72-b61c-291b039aa520 req-6dce8209-13c4-4295-8f23-7f9461d320cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5762172c-e837-4a63-95dc-1559956fcef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:04:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:04:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:04:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:04:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:04:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:04:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:04:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:04:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:04:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:04:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:04:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:04:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:04:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:04:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:04:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:04:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:04:18 np0005597378 nova_compute[238941]: 2026-01-27 14:04:18.170 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:04:18 np0005597378 nova_compute[238941]: 2026-01-27 14:04:18.170 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:04:18 np0005597378 nova_compute[238941]: 2026-01-27 14:04:18.170 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No VIF found with MAC fa:16:3e:0d:0c:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:04:18 np0005597378 nova_compute[238941]: 2026-01-27 14:04:18.171 238945 INFO nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Using config drive#033[00m
Jan 27 09:04:18 np0005597378 nova_compute[238941]: 2026-01-27 14:04:18.344 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 2.7 MiB/s wr, 73 op/s
Jan 27 09:04:18 np0005597378 nova_compute[238941]: 2026-01-27 14:04:18.742 238945 INFO nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Creating config drive at /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config#033[00m
Jan 27 09:04:18 np0005597378 nova_compute[238941]: 2026-01-27 14:04:18.748 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptohifa64 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:18 np0005597378 nova_compute[238941]: 2026-01-27 14:04:18.891 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptohifa64" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:18 np0005597378 nova_compute[238941]: 2026-01-27 14:04:18.914 238945 DEBUG nova.storage.rbd_utils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:18 np0005597378 nova_compute[238941]: 2026-01-27 14:04:18.917 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config 5762172c-e837-4a63-95dc-1559956fcef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:19 np0005597378 nova_compute[238941]: 2026-01-27 14:04:19.901 238945 DEBUG oslo_concurrency.processutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config 5762172c-e837-4a63-95dc-1559956fcef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.983s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:19 np0005597378 nova_compute[238941]: 2026-01-27 14:04:19.901 238945 INFO nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deleting local config drive /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config because it was imported into RBD.#033[00m
Jan 27 09:04:19 np0005597378 kernel: tapd95ffe66-83: entered promiscuous mode
Jan 27 09:04:19 np0005597378 nova_compute[238941]: 2026-01-27 14:04:19.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:19Z|00954|binding|INFO|Claiming lport d95ffe66-8325-4632-8e27-469ee216e988 for this chassis.
Jan 27 09:04:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:19Z|00955|binding|INFO|d95ffe66-8325-4632-8e27-469ee216e988: Claiming fa:16:3e:0d:0c:a7 10.100.0.9
Jan 27 09:04:19 np0005597378 NetworkManager[48904]: <info>  [1769522659.9706] manager: (tapd95ffe66-83): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Jan 27 09:04:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:19.977 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0c:a7 10.100.0.9'], port_security=['fa:16:3e:0d:0c:a7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5762172c-e837-4a63-95dc-1559956fcef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '593f0b96-57a7-4da0-9813-56121a32a356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d95ffe66-8325-4632-8e27-469ee216e988) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:04:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:19.978 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d95ffe66-8325-4632-8e27-469ee216e988 in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis#033[00m
Jan 27 09:04:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:19.979 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151#033[00m
Jan 27 09:04:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:19Z|00956|binding|INFO|Setting lport d95ffe66-8325-4632-8e27-469ee216e988 ovn-installed in OVS
Jan 27 09:04:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:19Z|00957|binding|INFO|Setting lport d95ffe66-8325-4632-8e27-469ee216e988 up in Southbound
Jan 27 09:04:19 np0005597378 nova_compute[238941]: 2026-01-27 14:04:19.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:19 np0005597378 nova_compute[238941]: 2026-01-27 14:04:19.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.004 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea081b3-d3cd-4455-b886-de3e653321d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:20 np0005597378 systemd-udevd[326831]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:04:20 np0005597378 systemd-machined[207425]: New machine qemu-124-instance-00000066.
Jan 27 09:04:20 np0005597378 NetworkManager[48904]: <info>  [1769522660.0283] device (tapd95ffe66-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:04:20 np0005597378 NetworkManager[48904]: <info>  [1769522660.0291] device (tapd95ffe66-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:04:20 np0005597378 systemd[1]: Started Virtual Machine qemu-124-instance-00000066.
Jan 27 09:04:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.043 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[32f21628-5d17-4cd0-8121-1f1844927121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.046 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[20c7aa05-b679-4375-8ae3-3c1d5f81b4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.070 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522645.069207, 58152b7a-295a-46c3-a454-95a08d597abd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.071 238945 INFO nova.compute.manager [-] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:04:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.079 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9afaecaa-eb41-4773-842d-17ab07c1f896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.099 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a040ae3-0461-4132-8910-759e687441dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524997, 'reachable_time': 29372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326844, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.118 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2057ee-bfd0-4de0-be5a-bd560832a50b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525009, 'tstamp': 525009}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326846, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525012, 'tstamp': 525012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326846, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.120 238945 DEBUG nova.compute.manager [None req-d32ff1a0-ead5-4f6d-80f2-e3bd120a3d03 - - - - - -] [instance: 58152b7a-295a-46c3-a454-95a08d597abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.121 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.122 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.124 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.124 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:04:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.125 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:04:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.125 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:04:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.387 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:04:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:20.389 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.671 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522660.6709075, 5762172c-e837-4a63-95dc-1559956fcef5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.672 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Started (Lifecycle Event)
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.691 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:04:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 2.7 MiB/s wr, 103 op/s
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.694 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522660.6710153, 5762172c-e837-4a63-95dc-1559956fcef5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.695 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Paused (Lifecycle Event)
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.716 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.719 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.737 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.867 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "99cd19f8-17dc-4d81-980f-4cf584356571" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.867 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "99cd19f8-17dc-4d81-980f-4cf584356571" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.868 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "99cd19f8-17dc-4d81-980f-4cf584356571-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.868 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "99cd19f8-17dc-4d81-980f-4cf584356571-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.868 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "99cd19f8-17dc-4d81-980f-4cf584356571-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.869 238945 INFO nova.compute.manager [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Terminating instance
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.870 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "refresh_cache-99cd19f8-17dc-4d81-980f-4cf584356571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.870 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquired lock "refresh_cache-99cd19f8-17dc-4d81-980f-4cf584356571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:04:20 np0005597378 nova_compute[238941]: 2026-01-27 14:04:20.871 238945 DEBUG nova.network.neutron [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 09:04:21 np0005597378 nova_compute[238941]: 2026-01-27 14:04:21.006 238945 DEBUG nova.network.neutron [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 09:04:21 np0005597378 nova_compute[238941]: 2026-01-27 14:04:21.212 238945 DEBUG nova.network.neutron [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:04:21 np0005597378 nova_compute[238941]: 2026-01-27 14:04:21.223 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Releasing lock "refresh_cache-99cd19f8-17dc-4d81-980f-4cf584356571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:04:21 np0005597378 nova_compute[238941]: 2026-01-27 14:04:21.224 238945 DEBUG nova.compute.manager [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 09:04:21 np0005597378 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000065.scope: Deactivated successfully.
Jan 27 09:04:21 np0005597378 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000065.scope: Consumed 1.597s CPU time.
Jan 27 09:04:21 np0005597378 systemd-machined[207425]: Machine qemu-122-instance-00000065 terminated.
Jan 27 09:04:21 np0005597378 nova_compute[238941]: 2026-01-27 14:04:21.382 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:04:21 np0005597378 nova_compute[238941]: 2026-01-27 14:04:21.449 238945 INFO nova.virt.libvirt.driver [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Instance destroyed successfully.
Jan 27 09:04:21 np0005597378 nova_compute[238941]: 2026-01-27 14:04:21.450 238945 DEBUG nova.objects.instance [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lazy-loading 'resources' on Instance uuid 99cd19f8-17dc-4d81-980f-4cf584356571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.271 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:04:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.0 MiB/s wr, 63 op/s
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.851 238945 DEBUG nova.compute.manager [req-6b1b322a-3e23-43ae-b377-fc00b0f00d1a req-545fb06a-1b7e-4328-afa1-dd52ce7fa17f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.851 238945 DEBUG oslo_concurrency.lockutils [req-6b1b322a-3e23-43ae-b377-fc00b0f00d1a req-545fb06a-1b7e-4328-afa1-dd52ce7fa17f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.851 238945 DEBUG oslo_concurrency.lockutils [req-6b1b322a-3e23-43ae-b377-fc00b0f00d1a req-545fb06a-1b7e-4328-afa1-dd52ce7fa17f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.852 238945 DEBUG oslo_concurrency.lockutils [req-6b1b322a-3e23-43ae-b377-fc00b0f00d1a req-545fb06a-1b7e-4328-afa1-dd52ce7fa17f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.852 238945 DEBUG nova.compute.manager [req-6b1b322a-3e23-43ae-b377-fc00b0f00d1a req-545fb06a-1b7e-4328-afa1-dd52ce7fa17f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Processing event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.852 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.855 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522662.8557913, 5762172c-e837-4a63-95dc-1559956fcef5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.856 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Resumed (Lifecycle Event)
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.857 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.860 238945 INFO nova.virt.libvirt.driver [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance spawned successfully.
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.861 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.877 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.881 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.974 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.976 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.976 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.977 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.977 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.977 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:04:22 np0005597378 nova_compute[238941]: 2026-01-27 14:04:22.978 238945 DEBUG nova.virt.libvirt.driver [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:04:23 np0005597378 nova_compute[238941]: 2026-01-27 14:04:23.071 238945 INFO nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Took 12.35 seconds to spawn the instance on the hypervisor.
Jan 27 09:04:23 np0005597378 nova_compute[238941]: 2026-01-27 14:04:23.072 238945 DEBUG nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:04:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Jan 27 09:04:23 np0005597378 nova_compute[238941]: 2026-01-27 14:04:23.213 238945 INFO nova.compute.manager [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Took 13.48 seconds to build instance.
Jan 27 09:04:23 np0005597378 nova_compute[238941]: 2026-01-27 14:04:23.285 238945 DEBUG oslo_concurrency.lockutils [None req-3fcf8d23-8902-4a67-9984-69607c6c17a6 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:04:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Jan 27 09:04:23 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Jan 27 09:04:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:23.392 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:04:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Jan 27 09:04:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Jan 27 09:04:24 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Jan 27 09:04:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 87 KiB/s wr, 79 op/s
Jan 27 09:04:24 np0005597378 nova_compute[238941]: 2026-01-27 14:04:24.969 238945 DEBUG nova.compute.manager [req-fc5effa7-2b40-47ce-bc87-9f9e97c69c0a req-31108db1-b517-4601-a2c6-41af7f56f020 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:04:24 np0005597378 nova_compute[238941]: 2026-01-27 14:04:24.969 238945 DEBUG oslo_concurrency.lockutils [req-fc5effa7-2b40-47ce-bc87-9f9e97c69c0a req-31108db1-b517-4601-a2c6-41af7f56f020 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:04:24 np0005597378 nova_compute[238941]: 2026-01-27 14:04:24.973 238945 DEBUG oslo_concurrency.lockutils [req-fc5effa7-2b40-47ce-bc87-9f9e97c69c0a req-31108db1-b517-4601-a2c6-41af7f56f020 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:04:24 np0005597378 nova_compute[238941]: 2026-01-27 14:04:24.973 238945 DEBUG oslo_concurrency.lockutils [req-fc5effa7-2b40-47ce-bc87-9f9e97c69c0a req-31108db1-b517-4601-a2c6-41af7f56f020 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:04:24 np0005597378 nova_compute[238941]: 2026-01-27 14:04:24.973 238945 DEBUG nova.compute.manager [req-fc5effa7-2b40-47ce-bc87-9f9e97c69c0a req-31108db1-b517-4601-a2c6-41af7f56f020 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] No waiting events found dispatching network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:04:24 np0005597378 nova_compute[238941]: 2026-01-27 14:04:24.974 238945 WARNING nova.compute.manager [req-fc5effa7-2b40-47ce-bc87-9f9e97c69c0a req-31108db1-b517-4601-a2c6-41af7f56f020 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received unexpected event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 for instance with vm_state active and task_state None.
Jan 27 09:04:25 np0005597378 nova_compute[238941]: 2026-01-27 14:04:25.883 238945 INFO nova.virt.libvirt.driver [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Deleting instance files /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571_del
Jan 27 09:04:25 np0005597378 nova_compute[238941]: 2026-01-27 14:04:25.884 238945 INFO nova.virt.libvirt.driver [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Deletion of /var/lib/nova/instances/99cd19f8-17dc-4d81-980f-4cf584356571_del complete
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.011 238945 INFO nova.compute.manager [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Took 4.79 seconds to destroy the instance on the hypervisor.
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.011 238945 DEBUG oslo.service.loopingcall [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.012 238945 DEBUG nova.compute.manager [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.012 238945 DEBUG nova.network.neutron [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.137 238945 DEBUG nova.network.neutron [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.156 238945 DEBUG nova.network.neutron [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.201 238945 INFO nova.compute.manager [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Took 0.19 seconds to deallocate network for instance.
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.214 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.283 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.284 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.383 238945 DEBUG oslo_concurrency.processutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.414 238945 INFO nova.compute.manager [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Rebuilding instance
Jan 27 09:04:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 989 KiB/s rd, 21 KiB/s wr, 102 op/s
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.751 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.781 238945 DEBUG nova.compute.manager [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.843 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_requests' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.865 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.885 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.898 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'migration_context' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.914 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 09:04:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:04:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3467018336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.919 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.939 238945 DEBUG oslo_concurrency.processutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.945 238945 DEBUG nova.compute.provider_tree [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.967 238945 DEBUG nova.scheduler.client.report [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:04:26 np0005597378 nova_compute[238941]: 2026-01-27 14:04:26.987 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:27 np0005597378 nova_compute[238941]: 2026-01-27 14:04:27.040 238945 INFO nova.scheduler.client.report [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Deleted allocations for instance 99cd19f8-17dc-4d81-980f-4cf584356571#033[00m
Jan 27 09:04:27 np0005597378 nova_compute[238941]: 2026-01-27 14:04:27.105 238945 DEBUG oslo_concurrency.lockutils [None req-fca27eb1-9e98-4eee-b0f4-9083cf2572aa a9d0ea5b6b714a968e03b217f04e9718 9cf05c7851f3406f80db37818456ad04 - - default default] Lock "99cd19f8-17dc-4d81-980f-4cf584356571" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 09:04:27 np0005597378 nova_compute[238941]: 2026-01-27 14:04:27.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011215771658156482 of space, bias 1.0, pg target 0.33647314974469444 quantized to 32 (current 32)
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667746135202199 of space, bias 1.0, pg target 0.20032384056065972 quantized to 32 (current 32)
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0743973007843023e-06 of space, bias 4.0, pg target 0.0012892767609411627 quantized to 16 (current 16)
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:04:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:04:27 np0005597378 nova_compute[238941]: 2026-01-27 14:04:27.678 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 KiB/s wr, 146 op/s
Jan 27 09:04:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.213646) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522669213685, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 558, "num_deletes": 252, "total_data_size": 552977, "memory_usage": 564344, "flush_reason": "Manual Compaction"}
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522669249197, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 547425, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37230, "largest_seqno": 37787, "table_properties": {"data_size": 544199, "index_size": 1134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7648, "raw_average_key_size": 19, "raw_value_size": 537695, "raw_average_value_size": 1389, "num_data_blocks": 49, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522643, "oldest_key_time": 1769522643, "file_creation_time": 1769522669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 35633 microseconds, and 2642 cpu microseconds.
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.249270) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 547425 bytes OK
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.249298) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.269528) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.269556) EVENT_LOG_v1 {"time_micros": 1769522669269548, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.269577) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 549767, prev total WAL file size 549767, number of live WAL files 2.
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.270192) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(534KB)], [80(9088KB)]
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522669270256, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 9854028, "oldest_snapshot_seqno": -1}
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6314 keys, 8198219 bytes, temperature: kUnknown
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522669349069, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8198219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8157154, "index_size": 24187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 159715, "raw_average_key_size": 25, "raw_value_size": 8045277, "raw_average_value_size": 1274, "num_data_blocks": 972, "num_entries": 6314, "num_filter_entries": 6314, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.349377) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8198219 bytes
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.357911) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.9 rd, 103.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 8.9 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(33.0) write-amplify(15.0) OK, records in: 6832, records dropped: 518 output_compression: NoCompression
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.357959) EVENT_LOG_v1 {"time_micros": 1769522669357945, "job": 46, "event": "compaction_finished", "compaction_time_micros": 78922, "compaction_time_cpu_micros": 19125, "output_level": 6, "num_output_files": 1, "total_output_size": 8198219, "num_input_records": 6832, "num_output_records": 6314, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522669358438, "job": 46, "event": "table_file_deletion", "file_number": 82}
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522669360195, "job": 46, "event": "table_file_deletion", "file_number": 80}
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.270136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.360408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.360414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.360416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.360419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:04:29 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:04:29.360421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:04:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 KiB/s wr, 155 op/s
Jan 27 09:04:31 np0005597378 nova_compute[238941]: 2026-01-27 14:04:31.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:32 np0005597378 nova_compute[238941]: 2026-01-27 14:04:32.274 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:32 np0005597378 nova_compute[238941]: 2026-01-27 14:04:32.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.2 KiB/s wr, 121 op/s
Jan 27 09:04:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Jan 27 09:04:34 np0005597378 nova_compute[238941]: 2026-01-27 14:04:34.032 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Jan 27 09:04:34 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Jan 27 09:04:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 KiB/s wr, 98 op/s
Jan 27 09:04:35 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:35Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:0c:a7 10.100.0.9
Jan 27 09:04:35 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:35Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:0c:a7 10.100.0.9
Jan 27 09:04:36 np0005597378 nova_compute[238941]: 2026-01-27 14:04:36.448 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522661.446494, 99cd19f8-17dc-4d81-980f-4cf584356571 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:04:36 np0005597378 nova_compute[238941]: 2026-01-27 14:04:36.449 238945 INFO nova.compute.manager [-] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:04:36 np0005597378 nova_compute[238941]: 2026-01-27 14:04:36.470 238945 DEBUG nova.compute.manager [None req-db737cea-6c0c-4b4e-b73d-9faa19e4a154 - - - - - -] [instance: 99cd19f8-17dc-4d81-980f-4cf584356571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 172 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 176 KiB/s wr, 72 op/s
Jan 27 09:04:36 np0005597378 podman[326934]: 2026-01-27 14:04:36.722197373 +0000 UTC m=+0.058094954 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:04:36 np0005597378 nova_compute[238941]: 2026-01-27 14:04:36.962 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 27 09:04:37 np0005597378 nova_compute[238941]: 2026-01-27 14:04:37.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:37 np0005597378 nova_compute[238941]: 2026-01-27 14:04:37.682 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:37 np0005597378 podman[326953]: 2026-01-27 14:04:37.750502593 +0000 UTC m=+0.091563396 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 09:04:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 191 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Jan 27 09:04:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:39 np0005597378 kernel: tapd95ffe66-83 (unregistering): left promiscuous mode
Jan 27 09:04:39 np0005597378 NetworkManager[48904]: <info>  [1769522679.2096] device (tapd95ffe66-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:39 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:39Z|00958|binding|INFO|Releasing lport d95ffe66-8325-4632-8e27-469ee216e988 from this chassis (sb_readonly=0)
Jan 27 09:04:39 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:39Z|00959|binding|INFO|Setting lport d95ffe66-8325-4632-8e27-469ee216e988 down in Southbound
Jan 27 09:04:39 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:39Z|00960|binding|INFO|Removing iface tapd95ffe66-83 ovn-installed in OVS
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.222 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0c:a7 10.100.0.9'], port_security=['fa:16:3e:0d:0c:a7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5762172c-e837-4a63-95dc-1559956fcef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '593f0b96-57a7-4da0-9813-56121a32a356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d95ffe66-8325-4632-8e27-469ee216e988) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.224 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d95ffe66-8325-4632-8e27-469ee216e988 in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis#033[00m
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.226 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.244 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d1a9b5-789c-4bd0-a590-b4a6759e82ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:39 np0005597378 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000066.scope: Deactivated successfully.
Jan 27 09:04:39 np0005597378 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000066.scope: Consumed 13.087s CPU time.
Jan 27 09:04:39 np0005597378 systemd-machined[207425]: Machine qemu-124-instance-00000066 terminated.
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.277 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[80084ecb-292a-4275-9c6a-eda9e5a37569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.280 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ad74ad7e-642c-4585-a5e7-569a86035da9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.311 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b6595b69-e238-4289-9976-fb9fc1bc3adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.326 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c50d5285-a413-4ffc-bbb5-2ac609ae1c5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524997, 'reachable_time': 29372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326991, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.340 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3218b533-0597-49d5-afee-943244cab32d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525009, 'tstamp': 525009}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326992, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525012, 'tstamp': 525012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326992, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.341 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.342 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.346 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:39.347 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.439 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.443 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.977 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance shutdown successfully after 13 seconds.#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.982 238945 INFO nova.virt.libvirt.driver [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance destroyed successfully.#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.986 238945 INFO nova.virt.libvirt.driver [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance destroyed successfully.#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.987 238945 DEBUG nova.virt.libvirt.vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-620412269',display_name='tempest-ServerActionsTestJSON-server-895035170',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-620412269',id=102,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:04:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-3av400n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-p
roject-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:25Z,user_data=None,user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=5762172c-e837-4a63-95dc-1559956fcef5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.987 238945 DEBUG nova.network.os_vif_util [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.989 238945 DEBUG nova.network.os_vif_util [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.989 238945 DEBUG os_vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.991 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd95ffe66-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.993 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.994 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.998 238945 DEBUG nova.compute.manager [req-9437efb6-c249-41a4-9609-58f95e1b3f2a req-de9dec59-0592-4e10-be29-f48f2eef5204 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-unplugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.998 238945 DEBUG oslo_concurrency.lockutils [req-9437efb6-c249-41a4-9609-58f95e1b3f2a req-de9dec59-0592-4e10-be29-f48f2eef5204 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.998 238945 DEBUG oslo_concurrency.lockutils [req-9437efb6-c249-41a4-9609-58f95e1b3f2a req-de9dec59-0592-4e10-be29-f48f2eef5204 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.998 238945 DEBUG oslo_concurrency.lockutils [req-9437efb6-c249-41a4-9609-58f95e1b3f2a req-de9dec59-0592-4e10-be29-f48f2eef5204 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.999 238945 DEBUG nova.compute.manager [req-9437efb6-c249-41a4-9609-58f95e1b3f2a req-de9dec59-0592-4e10-be29-f48f2eef5204 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] No waiting events found dispatching network-vif-unplugged-d95ffe66-8325-4632-8e27-469ee216e988 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:04:39 np0005597378 nova_compute[238941]: 2026-01-27 14:04:39.999 238945 WARNING nova.compute.manager [req-9437efb6-c249-41a4-9609-58f95e1b3f2a req-de9dec59-0592-4e10-be29-f48f2eef5204 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received unexpected event network-vif-unplugged-d95ffe66-8325-4632-8e27-469ee216e988 for instance with vm_state active and task_state rebuilding.#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.000 238945 INFO os_vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83')#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.310 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deleting instance files /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5_del#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.311 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deletion of /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5_del complete#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.485 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.486 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Creating image(s)#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.509 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.537 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.564 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.569 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.644 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.645 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.646 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.646 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.669 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:40 np0005597378 nova_compute[238941]: 2026-01-27 14:04:40.673 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5762172c-e837-4a63-95dc-1559956fcef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 202 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.6 MiB/s wr, 81 op/s
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.069 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5762172c-e837-4a63-95dc-1559956fcef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.123 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] resizing rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.261 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.262 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Ensure instance console log exists: /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.263 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.263 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.263 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.265 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Start _get_guest_xml network_info=[{"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.270 238945 WARNING nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.282 238945 DEBUG nova.virt.libvirt.host [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.282 238945 DEBUG nova.virt.libvirt.host [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.286 238945 DEBUG nova.virt.libvirt.host [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.287 238945 DEBUG nova.virt.libvirt.host [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.288 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.288 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.288 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.289 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.289 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.289 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.289 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.289 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.290 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.290 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.290 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.290 238945 DEBUG nova.virt.hardware [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.291 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.312 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:04:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2548361971' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.936 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.955 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:41 np0005597378 nova_compute[238941]: 2026-01-27 14:04:41.958 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.277 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:04:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/97615626' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.544 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.546 238945 DEBUG nova.virt.libvirt.vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-620412269',display_name='tempest-ServerActionsTestJSON-server-895035170',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-620412269',id=102,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:04:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-3av400n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:40Z,user_data=None,user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=5762172c-e837-4a63-95dc-1559956fcef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.546 238945 DEBUG nova.network.os_vif_util [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.547 238945 DEBUG nova.network.os_vif_util [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.549 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  <uuid>5762172c-e837-4a63-95dc-1559956fcef5</uuid>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  <name>instance-00000066</name>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerActionsTestJSON-server-895035170</nova:name>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:04:41</nova:creationTime>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:        <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:        <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:        <nova:port uuid="d95ffe66-8325-4632-8e27-469ee216e988">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <entry name="serial">5762172c-e837-4a63-95dc-1559956fcef5</entry>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <entry name="uuid">5762172c-e837-4a63-95dc-1559956fcef5</entry>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5762172c-e837-4a63-95dc-1559956fcef5_disk">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5762172c-e837-4a63-95dc-1559956fcef5_disk.config">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:0d:0c:a7"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <target dev="tapd95ffe66-83"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/console.log" append="off"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:04:42 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:04:42 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:04:42 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:04:42 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.550 238945 DEBUG nova.compute.manager [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Preparing to wait for external event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.550 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.550 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.550 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.551 238945 DEBUG nova.virt.libvirt.vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-620412269',display_name='tempest-ServerActionsTestJSON-server-895035170',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-620412269',id=102,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:04:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-3av400n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:40Z,user_data=None,user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=5762172c-e837-4a63-95dc-1559956fcef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.551 238945 DEBUG nova.network.os_vif_util [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.552 238945 DEBUG nova.network.os_vif_util [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.552 238945 DEBUG os_vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.553 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.553 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.555 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.555 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd95ffe66-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.556 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd95ffe66-83, col_values=(('external_ids', {'iface-id': 'd95ffe66-8325-4632-8e27-469ee216e988', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:0c:a7', 'vm-uuid': '5762172c-e837-4a63-95dc-1559956fcef5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.557 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:42 np0005597378 NetworkManager[48904]: <info>  [1769522682.5580] manager: (tapd95ffe66-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.560 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.562 238945 INFO os_vif [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83')#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.640 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.641 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.641 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] No VIF found with MAC fa:16:3e:0d:0c:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.641 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Using config drive#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.661 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.669 238945 DEBUG nova.compute.manager [req-8a3b18b3-a67f-4e36-9815-9ac72a2b84bc req-4817f2bb-4096-4c3a-877d-613793337cde 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.669 238945 DEBUG oslo_concurrency.lockutils [req-8a3b18b3-a67f-4e36-9815-9ac72a2b84bc req-4817f2bb-4096-4c3a-877d-613793337cde 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.670 238945 DEBUG oslo_concurrency.lockutils [req-8a3b18b3-a67f-4e36-9815-9ac72a2b84bc req-4817f2bb-4096-4c3a-877d-613793337cde 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.670 238945 DEBUG oslo_concurrency.lockutils [req-8a3b18b3-a67f-4e36-9815-9ac72a2b84bc req-4817f2bb-4096-4c3a-877d-613793337cde 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.670 238945 DEBUG nova.compute.manager [req-8a3b18b3-a67f-4e36-9815-9ac72a2b84bc req-4817f2bb-4096-4c3a-877d-613793337cde 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Processing event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.684 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 202 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.6 MiB/s wr, 81 op/s
Jan 27 09:04:42 np0005597378 nova_compute[238941]: 2026-01-27 14:04:42.718 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'keypairs' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.422 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Creating config drive at /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config#033[00m
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.428 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplh1wxtxo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.567 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplh1wxtxo" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.591 238945 DEBUG nova.storage.rbd_utils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] rbd image 5762172c-e837-4a63-95dc-1559956fcef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.594 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config 5762172c-e837-4a63-95dc-1559956fcef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.724 238945 DEBUG oslo_concurrency.processutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config 5762172c-e837-4a63-95dc-1559956fcef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.725 238945 INFO nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deleting local config drive /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5/disk.config because it was imported into RBD.#033[00m
Jan 27 09:04:43 np0005597378 kernel: tapd95ffe66-83: entered promiscuous mode
Jan 27 09:04:43 np0005597378 NetworkManager[48904]: <info>  [1769522683.7753] manager: (tapd95ffe66-83): new Tun device (/org/freedesktop/NetworkManager/Devices/399)
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:43Z|00961|binding|INFO|Claiming lport d95ffe66-8325-4632-8e27-469ee216e988 for this chassis.
Jan 27 09:04:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:43Z|00962|binding|INFO|d95ffe66-8325-4632-8e27-469ee216e988: Claiming fa:16:3e:0d:0c:a7 10.100.0.9
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.782 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0c:a7 10.100.0.9'], port_security=['fa:16:3e:0d:0c:a7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5762172c-e837-4a63-95dc-1559956fcef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '593f0b96-57a7-4da0-9813-56121a32a356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d95ffe66-8325-4632-8e27-469ee216e988) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.784 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d95ffe66-8325-4632-8e27-469ee216e988 in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis#033[00m
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.785 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151#033[00m
Jan 27 09:04:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:43Z|00963|binding|INFO|Setting lport d95ffe66-8325-4632-8e27-469ee216e988 ovn-installed in OVS
Jan 27 09:04:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:43Z|00964|binding|INFO|Setting lport d95ffe66-8325-4632-8e27-469ee216e988 up in Southbound
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.794 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.798 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.801 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd989bce-d376-43a5-bec3-ce640bd28672]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:43 np0005597378 systemd-udevd[327325]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:04:43 np0005597378 NetworkManager[48904]: <info>  [1769522683.8150] device (tapd95ffe66-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:04:43 np0005597378 NetworkManager[48904]: <info>  [1769522683.8155] device (tapd95ffe66-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:04:43 np0005597378 systemd-machined[207425]: New machine qemu-125-instance-00000066.
Jan 27 09:04:43 np0005597378 systemd[1]: Started Virtual Machine qemu-125-instance-00000066.
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.831 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7456e5-c869-4538-9561-b64411df2f6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.835 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b115d122-a361-483d-9bc7-3c1af5267000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.864 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[47136c8a-f076-46a6-862a-a7b9eb1861c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.887 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5924455c-fc4c-4c1d-b296-9fddc4a030f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524997, 'reachable_time': 29372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327337, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.907 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1f536e-22dd-4c9e-a05c-4f2faaf0cdbe]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525009, 'tstamp': 525009}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327340, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525012, 'tstamp': 525012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327340, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.909 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.911 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.911 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.913 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.914 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.914 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:43.914 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.942 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.943 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:43 np0005597378 nova_compute[238941]: 2026-01-27 14:04:43.957 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:04:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.029 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.029 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.037 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.037 238945 INFO nova.compute.claims [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.200 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.574 238945 DEBUG nova.compute.manager [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.576 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 5762172c-e837-4a63-95dc-1559956fcef5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.577 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522684.5759957, 5762172c-e837-4a63-95dc-1559956fcef5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.577 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Started (Lifecycle Event)#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.582 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.589 238945 INFO nova.virt.libvirt.driver [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance spawned successfully.#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.591 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.606 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.612 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.616 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.617 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.617 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.618 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.618 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.619 238945 DEBUG nova.virt.libvirt.driver [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.643 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.643 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522684.576973, 5762172c-e837-4a63-95dc-1559956fcef5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.643 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.674 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.678 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522684.581266, 5762172c-e837-4a63-95dc-1559956fcef5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.678 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.685 238945 DEBUG nova.compute.manager [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.694 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.697 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:04:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 174 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 3.4 MiB/s wr, 117 op/s
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.733 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.751 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:04:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031420400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.803 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.809 238945 DEBUG nova.compute.provider_tree [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.833 238945 DEBUG nova.scheduler.client.report [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.875 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.877 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.880 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:44 np0005597378 nova_compute[238941]: 2026-01-27 14:04:44.880 238945 DEBUG nova.objects.instance [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.029 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.030 238945 DEBUG nova.network.neutron [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.052 238945 DEBUG oslo_concurrency.lockutils [None req-39478780-d80e-49b0-ad58-c46a850d30f9 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.100 238945 INFO nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.118 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.225 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.227 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.228 238945 INFO nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Creating image(s)#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.254 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.297 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.325 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.329 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.374 238945 DEBUG nova.policy [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ce6bdc56696940428e2cdd474d4d48de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '126cdd69cb3d443c8ce2da310e0d0ba7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.383 238945 DEBUG nova.compute.manager [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.383 238945 DEBUG oslo_concurrency.lockutils [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.384 238945 DEBUG oslo_concurrency.lockutils [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.384 238945 DEBUG oslo_concurrency.lockutils [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.384 238945 DEBUG nova.compute.manager [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] No waiting events found dispatching network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.384 238945 WARNING nova.compute.manager [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received unexpected event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.385 238945 DEBUG nova.compute.manager [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.385 238945 DEBUG oslo_concurrency.lockutils [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.385 238945 DEBUG oslo_concurrency.lockutils [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.385 238945 DEBUG oslo_concurrency.lockutils [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.386 238945 DEBUG nova.compute.manager [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] No waiting events found dispatching network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.386 238945 WARNING nova.compute.manager [req-658f7b7b-3c66-43d6-a40c-f54c55d72da6 req-39050f2c-8f86-4986-a692-fca72e100d8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received unexpected event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.429 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.430 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.430 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.431 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.455 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.459 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.887 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:45 np0005597378 nova_compute[238941]: 2026-01-27 14:04:45.943 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] resizing rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:04:46 np0005597378 nova_compute[238941]: 2026-01-27 14:04:46.170 238945 DEBUG nova.objects.instance [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lazy-loading 'migration_context' on Instance uuid bf112e8f-c8b9-4e70-a0ee-3024945722aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:46 np0005597378 nova_compute[238941]: 2026-01-27 14:04:46.252 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:04:46 np0005597378 nova_compute[238941]: 2026-01-27 14:04:46.252 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Ensure instance console log exists: /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:04:46 np0005597378 nova_compute[238941]: 2026-01-27 14:04:46.253 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:46 np0005597378 nova_compute[238941]: 2026-01-27 14:04:46.253 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:46 np0005597378 nova_compute[238941]: 2026-01-27 14:04:46.253 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:46.311 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:46.312 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:46.312 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:46 np0005597378 nova_compute[238941]: 2026-01-27 14:04:46.392 238945 DEBUG nova.network.neutron [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Successfully created port: ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:04:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 127 op/s
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.139 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.139 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.140 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.140 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.140 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.141 238945 INFO nova.compute.manager [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Terminating instance#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.142 238945 DEBUG nova.compute.manager [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:04:47 np0005597378 kernel: tapd95ffe66-83 (unregistering): left promiscuous mode
Jan 27 09:04:47 np0005597378 NetworkManager[48904]: <info>  [1769522687.1797] device (tapd95ffe66-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:04:47 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:47Z|00965|binding|INFO|Releasing lport d95ffe66-8325-4632-8e27-469ee216e988 from this chassis (sb_readonly=0)
Jan 27 09:04:47 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:47Z|00966|binding|INFO|Setting lport d95ffe66-8325-4632-8e27-469ee216e988 down in Southbound
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:47 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:47Z|00967|binding|INFO|Removing iface tapd95ffe66-83 ovn-installed in OVS
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.191 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.200 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0c:a7 10.100.0.9'], port_security=['fa:16:3e:0d:0c:a7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5762172c-e837-4a63-95dc-1559956fcef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '593f0b96-57a7-4da0-9813-56121a32a356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d95ffe66-8325-4632-8e27-469ee216e988) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.201 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d95ffe66-8325-4632-8e27-469ee216e988 in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis#033[00m
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.202 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.213 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.219 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be32e4d5-805b-402b-970b-4b9d99842c11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:47 np0005597378 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000066.scope: Deactivated successfully.
Jan 27 09:04:47 np0005597378 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000066.scope: Consumed 3.355s CPU time.
Jan 27 09:04:47 np0005597378 systemd-machined[207425]: Machine qemu-125-instance-00000066 terminated.
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.255 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a11a1480-a058-4932-8d94-cd6eba374b8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.259 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1653a416-f560-46ac-8483-cdb127d787ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.288 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[815f0c4e-425b-478d-8e68-e01bf1906c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.308 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[29b60326-3e78-4181-b05c-6347aa0f954b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524997, 'reachable_time': 29372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327583, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.325 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[620c2521-b4a2-445d-8afe-596eab4ec678]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525009, 'tstamp': 525009}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327584, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb5dcf6e0-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525012, 'tstamp': 525012}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327584, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.328 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.333 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.334 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.334 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.334 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:47.335 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.378 238945 INFO nova.virt.libvirt.driver [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Instance destroyed successfully.#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.378 238945 DEBUG nova.objects.instance [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 5762172c-e837-4a63-95dc-1559956fcef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.391 238945 DEBUG nova.virt.libvirt.vif [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-620412269',display_name='tempest-ServerActionsTestJSON-server-895035170',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-620412269',id=102,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:04:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-3av400n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:04:44Z,user_data=None,user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=5762172c-e837-4a63-95dc-1559956fcef5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.392 238945 DEBUG nova.network.os_vif_util [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "d95ffe66-8325-4632-8e27-469ee216e988", "address": "fa:16:3e:0d:0c:a7", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd95ffe66-83", "ovs_interfaceid": "d95ffe66-8325-4632-8e27-469ee216e988", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.392 238945 DEBUG nova.network.os_vif_util [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.393 238945 DEBUG os_vif [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.395 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd95ffe66-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.397 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.399 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.401 238945 INFO os_vif [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0c:a7,bridge_name='br-int',has_traffic_filtering=True,id=d95ffe66-8325-4632-8e27-469ee216e988,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd95ffe66-83')#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.537 238945 DEBUG nova.network.neutron [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Successfully updated port: ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.601 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.602 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquired lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.602 238945 DEBUG nova.network.neutron [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.681 238945 INFO nova.virt.libvirt.driver [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deleting instance files /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5_del#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.682 238945 INFO nova.virt.libvirt.driver [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deletion of /var/lib/nova/instances/5762172c-e837-4a63-95dc-1559956fcef5_del complete#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.744 238945 DEBUG nova.compute.manager [req-2582c06f-7bb0-44c7-9f37-b7a5bd40f9fa req-e1a93d0a-0f5a-4d46-96ea-0982bce1f82f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-unplugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.745 238945 DEBUG oslo_concurrency.lockutils [req-2582c06f-7bb0-44c7-9f37-b7a5bd40f9fa req-e1a93d0a-0f5a-4d46-96ea-0982bce1f82f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.745 238945 DEBUG oslo_concurrency.lockutils [req-2582c06f-7bb0-44c7-9f37-b7a5bd40f9fa req-e1a93d0a-0f5a-4d46-96ea-0982bce1f82f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.746 238945 DEBUG oslo_concurrency.lockutils [req-2582c06f-7bb0-44c7-9f37-b7a5bd40f9fa req-e1a93d0a-0f5a-4d46-96ea-0982bce1f82f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.746 238945 DEBUG nova.compute.manager [req-2582c06f-7bb0-44c7-9f37-b7a5bd40f9fa req-e1a93d0a-0f5a-4d46-96ea-0982bce1f82f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] No waiting events found dispatching network-vif-unplugged-d95ffe66-8325-4632-8e27-469ee216e988 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.746 238945 DEBUG nova.compute.manager [req-2582c06f-7bb0-44c7-9f37-b7a5bd40f9fa req-e1a93d0a-0f5a-4d46-96ea-0982bce1f82f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-unplugged-d95ffe66-8325-4632-8e27-469ee216e988 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.749 238945 INFO nova.compute.manager [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.750 238945 DEBUG oslo.service.loopingcall [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.750 238945 DEBUG nova.compute.manager [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.750 238945 DEBUG nova.network.neutron [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.827 238945 DEBUG nova.compute.manager [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-changed-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.828 238945 DEBUG nova.compute.manager [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Refreshing instance network info cache due to event network-changed-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.828 238945 DEBUG oslo_concurrency.lockutils [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:04:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:04:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:04:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:04:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:04:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:04:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:04:47 np0005597378 nova_compute[238941]: 2026-01-27 14:04:47.854 238945 DEBUG nova.network.neutron [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:04:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 182 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.3 MiB/s wr, 180 op/s
Jan 27 09:04:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.657 238945 DEBUG nova.network.neutron [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.681 238945 INFO nova.compute.manager [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Took 1.93 seconds to deallocate network for instance.#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.741 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.742 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.846 238945 DEBUG oslo_concurrency.processutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.888 238945 DEBUG nova.compute.manager [req-a8141336-ba82-49b3-b9cc-b19e31cad6c9 req-d00f3216-126c-42db-89c7-1e702b62c67c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.888 238945 DEBUG oslo_concurrency.lockutils [req-a8141336-ba82-49b3-b9cc-b19e31cad6c9 req-d00f3216-126c-42db-89c7-1e702b62c67c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5762172c-e837-4a63-95dc-1559956fcef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.889 238945 DEBUG oslo_concurrency.lockutils [req-a8141336-ba82-49b3-b9cc-b19e31cad6c9 req-d00f3216-126c-42db-89c7-1e702b62c67c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.889 238945 DEBUG oslo_concurrency.lockutils [req-a8141336-ba82-49b3-b9cc-b19e31cad6c9 req-d00f3216-126c-42db-89c7-1e702b62c67c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.889 238945 DEBUG nova.compute.manager [req-a8141336-ba82-49b3-b9cc-b19e31cad6c9 req-d00f3216-126c-42db-89c7-1e702b62c67c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] No waiting events found dispatching network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.889 238945 WARNING nova.compute.manager [req-a8141336-ba82-49b3-b9cc-b19e31cad6c9 req-d00f3216-126c-42db-89c7-1e702b62c67c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received unexpected event network-vif-plugged-d95ffe66-8325-4632-8e27-469ee216e988 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.925 238945 DEBUG nova.compute.manager [req-789fb05a-60f5-4651-a605-13c61f019b3b req-4e702c47-874f-402c-9433-2801315ba3cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Received event network-vif-deleted-d95ffe66-8325-4632-8e27-469ee216e988 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.943 238945 DEBUG nova.network.neutron [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Updating instance_info_cache with network_info: [{"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.968 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Releasing lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.969 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Instance network_info: |[{"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.969 238945 DEBUG oslo_concurrency.lockutils [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.969 238945 DEBUG nova.network.neutron [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Refreshing network info cache for port ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.972 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Start _get_guest_xml network_info=[{"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.976 238945 WARNING nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.984 238945 DEBUG nova.virt.libvirt.host [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.985 238945 DEBUG nova.virt.libvirt.host [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.989 238945 DEBUG nova.virt.libvirt.host [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.989 238945 DEBUG nova.virt.libvirt.host [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.990 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.990 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.990 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.990 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.991 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.991 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.991 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.991 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.992 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.992 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.992 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.992 238945 DEBUG nova.virt.hardware [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:04:49 np0005597378 nova_compute[238941]: 2026-01-27 14:04:49.996 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:04:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862197083' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:04:50 np0005597378 nova_compute[238941]: 2026-01-27 14:04:50.429 238945 DEBUG oslo_concurrency.processutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:50 np0005597378 nova_compute[238941]: 2026-01-27 14:04:50.435 238945 DEBUG nova.compute.provider_tree [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:04:50 np0005597378 nova_compute[238941]: 2026-01-27 14:04:50.461 238945 DEBUG nova.scheduler.client.report [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:04:50 np0005597378 nova_compute[238941]: 2026-01-27 14:04:50.502 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:04:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4275571859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:04:50 np0005597378 nova_compute[238941]: 2026-01-27 14:04:50.535 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:50 np0005597378 nova_compute[238941]: 2026-01-27 14:04:50.556 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:50 np0005597378 nova_compute[238941]: 2026-01-27 14:04:50.560 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:50 np0005597378 nova_compute[238941]: 2026-01-27 14:04:50.592 238945 INFO nova.scheduler.client.report [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Deleted allocations for instance 5762172c-e837-4a63-95dc-1559956fcef5#033[00m
Jan 27 09:04:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 169 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 212 op/s
Jan 27 09:04:50 np0005597378 nova_compute[238941]: 2026-01-27 14:04:50.741 238945 DEBUG oslo_concurrency.lockutils [None req-68ba4455-a33e-426e-90a1-5b2ad9d21ef0 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "5762172c-e837-4a63-95dc-1559956fcef5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:04:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692667543' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.166 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.168 238945 DEBUG nova.virt.libvirt.vif [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1731374456',display_name='tempest-TestServerBasicOps-server-1731374456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1731374456',id=103,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJYCiWTKcHUlPm+XAxp5EGDuYWOZbR/Sgh/L3oWq5tGrIoiHD+N+kLQ55ZP7QxRv/5HMcwgKFb3+Sd+ixC35turrRyVFex50LDNIdV9vs6C6I+w6n/gReHuAdGrtc7shg==',key_name='tempest-TestServerBasicOps-819815299',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='126cdd69cb3d443c8ce2da310e0d0ba7',ramdisk_id='',reservation_id='r-roae2lh3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-574754001',owner_user_name='tempest-TestServerBasicOps-574754001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce6bdc56696940428e2cdd474d4d48de',uuid=bf112e8f-c8b9-4e70-a0ee-3024945722aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.168 238945 DEBUG nova.network.os_vif_util [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Converting VIF {"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.169 238945 DEBUG nova.network.os_vif_util [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.170 238945 DEBUG nova.objects.instance [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lazy-loading 'pci_devices' on Instance uuid bf112e8f-c8b9-4e70-a0ee-3024945722aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.280 238945 DEBUG nova.network.neutron [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Updated VIF entry in instance network info cache for port ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.281 238945 DEBUG nova.network.neutron [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Updating instance_info_cache with network_info: [{"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.305 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  <uuid>bf112e8f-c8b9-4e70-a0ee-3024945722aa</uuid>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  <name>instance-00000067</name>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestServerBasicOps-server-1731374456</nova:name>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:04:49</nova:creationTime>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:        <nova:user uuid="ce6bdc56696940428e2cdd474d4d48de">tempest-TestServerBasicOps-574754001-project-member</nova:user>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:        <nova:project uuid="126cdd69cb3d443c8ce2da310e0d0ba7">tempest-TestServerBasicOps-574754001</nova:project>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:        <nova:port uuid="ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <entry name="serial">bf112e8f-c8b9-4e70-a0ee-3024945722aa</entry>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <entry name="uuid">bf112e8f-c8b9-4e70-a0ee-3024945722aa</entry>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk.config">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:4b:c4:6a"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <target dev="tapddc57d7c-5a"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/console.log" append="off"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:04:51 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:04:51 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:04:51 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:04:51 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.306 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Preparing to wait for external event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.306 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.306 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.306 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.307 238945 DEBUG nova.virt.libvirt.vif [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1731374456',display_name='tempest-TestServerBasicOps-server-1731374456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1731374456',id=103,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJYCiWTKcHUlPm+XAxp5EGDuYWOZbR/Sgh/L3oWq5tGrIoiHD+N+kLQ55ZP7QxRv/5HMcwgKFb3+Sd+ixC35turrRyVFex50LDNIdV9vs6C6I+w6n/gReHuAdGrtc7shg==',key_name='tempest-TestServerBasicOps-819815299',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='126cdd69cb3d443c8ce2da310e0d0ba7',ramdisk_id='',reservation_id='r-roae2lh3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-574754001',owner_user_name='tempest-TestServerBasicOps-574754001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:04:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce6bdc56696940428e2cdd474d4d48de',uuid=bf112e8f-c8b9-4e70-a0ee-3024945722aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.308 238945 DEBUG nova.network.os_vif_util [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Converting VIF {"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.308 238945 DEBUG nova.network.os_vif_util [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.309 238945 DEBUG os_vif [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.309 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.310 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.310 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.313 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.314 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddc57d7c-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.314 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapddc57d7c-5a, col_values=(('external_ids', {'iface-id': 'ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:c4:6a', 'vm-uuid': 'bf112e8f-c8b9-4e70-a0ee-3024945722aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.316 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:51 np0005597378 NetworkManager[48904]: <info>  [1769522691.3177] manager: (tapddc57d7c-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.318 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.321 238945 INFO os_vif [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a')#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.324 238945 DEBUG oslo_concurrency.lockutils [req-fa5c0a84-31a4-413a-afb6-1462e80c6a04 req-bb746984-172c-4c27-bbab-bf138ba0f3e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.388 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.389 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.389 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] No VIF found with MAC fa:16:3e:4b:c4:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.389 238945 INFO nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Using config drive#033[00m
Jan 27 09:04:51 np0005597378 nova_compute[238941]: 2026-01-27 14:04:51.415 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:52 np0005597378 nova_compute[238941]: 2026-01-27 14:04:52.281 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:52 np0005597378 nova_compute[238941]: 2026-01-27 14:04:52.533 238945 INFO nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Creating config drive at /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/disk.config#033[00m
Jan 27 09:04:52 np0005597378 nova_compute[238941]: 2026-01-27 14:04:52.543 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4w5yx5f1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:52 np0005597378 nova_compute[238941]: 2026-01-27 14:04:52.682 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4w5yx5f1" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 169 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 182 op/s
Jan 27 09:04:52 np0005597378 nova_compute[238941]: 2026-01-27 14:04:52.706 238945 DEBUG nova.storage.rbd_utils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] rbd image bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:04:52 np0005597378 nova_compute[238941]: 2026-01-27 14:04:52.709 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/disk.config bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:52 np0005597378 nova_compute[238941]: 2026-01-27 14:04:52.835 238945 DEBUG oslo_concurrency.processutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/disk.config bf112e8f-c8b9-4e70-a0ee-3024945722aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:52 np0005597378 nova_compute[238941]: 2026-01-27 14:04:52.836 238945 INFO nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Deleting local config drive /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa/disk.config because it was imported into RBD.#033[00m
Jan 27 09:04:52 np0005597378 kernel: tapddc57d7c-5a: entered promiscuous mode
Jan 27 09:04:52 np0005597378 NetworkManager[48904]: <info>  [1769522692.8822] manager: (tapddc57d7c-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Jan 27 09:04:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:52Z|00968|binding|INFO|Claiming lport ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 for this chassis.
Jan 27 09:04:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:52Z|00969|binding|INFO|ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2: Claiming fa:16:3e:4b:c4:6a 10.100.0.10
Jan 27 09:04:52 np0005597378 nova_compute[238941]: 2026-01-27 14:04:52.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.890 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:c4:6a 10.100.0.10'], port_security=['fa:16:3e:4b:c4:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bf112e8f-c8b9-4e70-a0ee-3024945722aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8227184-a0b2-457f-9458-e3d8638d23a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '126cdd69cb3d443c8ce2da310e0d0ba7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2866f282-f823-4d38-9ed0-28ed718ea4d3 b347b5f2-7cf0-4389-9ef4-8349a580e7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bd8e682-d603-4ddb-8447-eea4c78d8c2e, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:04:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.891 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 in datapath b8227184-a0b2-457f-9458-e3d8638d23a8 bound to our chassis#033[00m
Jan 27 09:04:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.893 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8227184-a0b2-457f-9458-e3d8638d23a8#033[00m
Jan 27 09:04:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:52Z|00970|binding|INFO|Setting lport ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 ovn-installed in OVS
Jan 27 09:04:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:52Z|00971|binding|INFO|Setting lport ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 up in Southbound
Jan 27 09:04:52 np0005597378 nova_compute[238941]: 2026-01-27 14:04:52.904 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:52 np0005597378 systemd-udevd[327774]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:04:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.906 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8cbd24ff-be8b-447e-857d-54f2b087feba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.907 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8227184-a1 in ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:04:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.909 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8227184-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:04:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.909 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f5249ee7-2ac6-41ce-ac24-f9fa8311a654]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.910 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eabe04c2-0ae1-42cb-b5e4-c84880256c04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:52 np0005597378 NetworkManager[48904]: <info>  [1769522692.9170] device (tapddc57d7c-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:04:52 np0005597378 NetworkManager[48904]: <info>  [1769522692.9176] device (tapddc57d7c-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:04:52 np0005597378 systemd-machined[207425]: New machine qemu-126-instance-00000067.
Jan 27 09:04:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.925 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fc512c74-4b40-4545-b2d7-e072d06046e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:52 np0005597378 systemd[1]: Started Virtual Machine qemu-126-instance-00000067.
Jan 27 09:04:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.940 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2d36a8-b370-4e83-a0be-1d1fca1e6b95]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.973 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[798335ac-b7d0-4f47-a81f-84b11d9086f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:52 np0005597378 systemd-udevd[327778]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:04:52 np0005597378 NetworkManager[48904]: <info>  [1769522692.9801] manager: (tapb8227184-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/402)
Jan 27 09:04:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:52.979 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b586411c-281f-406b-bfa6-5ffb31bc016d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.013 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[34279744-47bd-4868-9536-3c4be575c180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.017 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9c91936d-3354-42ff-9fcc-f42469acb468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:53 np0005597378 NetworkManager[48904]: <info>  [1769522693.0446] device (tapb8227184-a0): carrier: link connected
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.050 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[70a00a2b-2b15-435c-bdc1-720b14fefd21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.067 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec764979-41c2-4b4c-bae9-bd41aa88eb5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8227184-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:68:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533266, 'reachable_time': 39657, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327807, 'error': None, 'target': 'ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.080 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8325d4-f5a3-4ef4-852a-3185486936cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:6884'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533266, 'tstamp': 533266}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327808, 'error': None, 'target': 'ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.096 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[63d2d7b4-05c5-45e9-a308-ac0e4366d24f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8227184-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:68:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533266, 'reachable_time': 39657, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327809, 'error': None, 'target': 'ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.127 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd2f75d-4545-40d4-aeff-c0c17fe34a6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.189 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e02c4405-e108-4089-9d8b-b53c2e78bac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.191 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8227184-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.191 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.192 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8227184-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.193 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:53 np0005597378 NetworkManager[48904]: <info>  [1769522693.1943] manager: (tapb8227184-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Jan 27 09:04:53 np0005597378 kernel: tapb8227184-a0: entered promiscuous mode
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.197 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.202 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8227184-a0, col_values=(('external_ids', {'iface-id': 'a5a4d358-d6d7-4d0a-b8fd-21631aca1cd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.203 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:53Z|00972|binding|INFO|Releasing lport a5a4d358-d6d7-4d0a-b8fd-21631aca1cd4 from this chassis (sb_readonly=0)
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.207 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8227184-a0b2-457f-9458-e3d8638d23a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8227184-a0b2-457f-9458-e3d8638d23a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.208 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58fdf7f5-843a-4ef8-b027-6d907c9e14de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.209 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-b8227184-a0b2-457f-9458-e3d8638d23a8
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/b8227184-a0b2-457f-9458-e3d8638d23a8.pid.haproxy
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID b8227184-a0b2-457f-9458-e3d8638d23a8
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:04:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:53.209 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8', 'env', 'PROCESS_TAG=haproxy-b8227184-a0b2-457f-9458-e3d8638d23a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8227184-a0b2-457f-9458-e3d8638d23a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.221 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.594 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522693.5928829, bf112e8f-c8b9-4e70-a0ee-3024945722aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.594 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] VM Started (Lifecycle Event)#033[00m
Jan 27 09:04:53 np0005597378 podman[327879]: 2026-01-27 14:04:53.600958393 +0000 UTC m=+0.058214597 container create 25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.625 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.629 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522693.5933666, bf112e8f-c8b9-4e70-a0ee-3024945722aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.629 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:04:53 np0005597378 systemd[1]: Started libpod-conmon-25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85.scope.
Jan 27 09:04:53 np0005597378 podman[327879]: 2026-01-27 14:04:53.567114422 +0000 UTC m=+0.024370646 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:04:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.682 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.685 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:04:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9d59e33be80c52a289ab36ae73467414f6cf2208d0fcfd15c0a3f7dcb92608/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:04:53 np0005597378 podman[327879]: 2026-01-27 14:04:53.706361558 +0000 UTC m=+0.163617782 container init 25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.709 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:04:53 np0005597378 podman[327879]: 2026-01-27 14:04:53.713011028 +0000 UTC m=+0.170267232 container start 25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:04:53 np0005597378 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [NOTICE]   (327900) : New worker (327902) forked
Jan 27 09:04:53 np0005597378 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [NOTICE]   (327900) : Loading success.
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.751 238945 DEBUG oslo_concurrency.lockutils [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.752 238945 DEBUG oslo_concurrency.lockutils [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.752 238945 DEBUG nova.compute.manager [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.756 238945 DEBUG nova.compute.manager [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.757 238945 DEBUG nova.objects.instance [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:53 np0005597378 nova_compute[238941]: 2026-01-27 14:04:53.790 238945 DEBUG nova.virt.libvirt.driver [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 09:04:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:54 np0005597378 nova_compute[238941]: 2026-01-27 14:04:54.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:04:54 np0005597378 nova_compute[238941]: 2026-01-27 14:04:54.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:04:54 np0005597378 nova_compute[238941]: 2026-01-27 14:04:54.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:54 np0005597378 nova_compute[238941]: 2026-01-27 14:04:54.419 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:54 np0005597378 nova_compute[238941]: 2026-01-27 14:04:54.420 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:54 np0005597378 nova_compute[238941]: 2026-01-27 14:04:54.420 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:04:54 np0005597378 nova_compute[238941]: 2026-01-27 14:04:54.420 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 169 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Jan 27 09:04:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:04:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2460515122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.025 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.148 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.148 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.151 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.152 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.306 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.307 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3543MB free_disk=59.92128160409629GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.308 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.308 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.442 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 2b352ec7-34b6-47bb-af67-779b4d1f27cd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.442 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance bf112e8f-c8b9-4e70-a0ee-3024945722aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.442 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.442 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:04:55 np0005597378 nova_compute[238941]: 2026-01-27 14:04:55.513 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:04:56 np0005597378 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 09:04:56 np0005597378 NetworkManager[48904]: <info>  [1769522696.0216] device (tap058b32ea-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.034 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:56Z|00973|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 09:04:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:56Z|00974|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a down in Southbound
Jan 27 09:04:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:56Z|00975|binding|INFO|Removing iface tap058b32ea-79 ovn-installed in OVS
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.037 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:56 np0005597378 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 27 09:04:56 np0005597378 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000063.scope: Consumed 17.698s CPU time.
Jan 27 09:04:56 np0005597378 systemd-machined[207425]: Machine qemu-119-instance-00000063 terminated.
Jan 27 09:04:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:04:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078511115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.151 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.157 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.204 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.206 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis#033[00m
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.207 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.208 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea0b660-5246-4ba5-8462-2a5a955cabdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.208 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace which is not needed anymore#033[00m
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.211 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:04:56 np0005597378 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 09:04:56 np0005597378 systemd-udevd[327956]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:04:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:56Z|00976|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 09:04:56 np0005597378 NetworkManager[48904]: <info>  [1769522696.2565] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Jan 27 09:04:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:56Z|00977|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.256 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:56 np0005597378 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.276 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.276 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:04:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:56Z|00978|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 09:04:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:56Z|00979|if_status|INFO|Dropped 2 log messages in last 226 seconds (most recently, 226 seconds ago) due to excessive rate
Jan 27 09:04:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:56Z|00980|if_status|INFO|Not setting lport 058b32ea-7973-4220-91fa-58dc678da20a down as sb is readonly
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:04:56Z|00981|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.287 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.300 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.312 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.316 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:04:56 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [NOTICE]   (324619) : haproxy version is 2.8.14-c23fe91
Jan 27 09:04:56 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [NOTICE]   (324619) : path to executable is /usr/sbin/haproxy
Jan 27 09:04:56 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [WARNING]  (324619) : Exiting Master process...
Jan 27 09:04:56 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [ALERT]    (324619) : Current worker (324621) exited with code 143 (Terminated)
Jan 27 09:04:56 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[324606]: [WARNING]  (324619) : All workers exited. Exiting... (0)
Jan 27 09:04:56 np0005597378 systemd[1]: libpod-d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58.scope: Deactivated successfully.
Jan 27 09:04:56 np0005597378 podman[327983]: 2026-01-27 14:04:56.352755126 +0000 UTC m=+0.053573373 container died d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.411 238945 DEBUG nova.compute.manager [req-bc497dd3-c15f-4c5f-9435-3b376507fd7b req-94feacef-9828-4da9-925d-0189e3154f4d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.412 238945 DEBUG oslo_concurrency.lockutils [req-bc497dd3-c15f-4c5f-9435-3b376507fd7b req-94feacef-9828-4da9-925d-0189e3154f4d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.412 238945 DEBUG oslo_concurrency.lockutils [req-bc497dd3-c15f-4c5f-9435-3b376507fd7b req-94feacef-9828-4da9-925d-0189e3154f4d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.412 238945 DEBUG oslo_concurrency.lockutils [req-bc497dd3-c15f-4c5f-9435-3b376507fd7b req-94feacef-9828-4da9-925d-0189e3154f4d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.413 238945 DEBUG nova.compute.manager [req-bc497dd3-c15f-4c5f-9435-3b376507fd7b req-94feacef-9828-4da9-925d-0189e3154f4d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Processing event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.413 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.418 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522696.4175448, bf112e8f-c8b9-4e70-a0ee-3024945722aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.418 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] VM Resumed (Lifecycle Event)
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.420 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 09:04:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58-userdata-shm.mount: Deactivated successfully.
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.424 238945 INFO nova.virt.libvirt.driver [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Instance spawned successfully.
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.425 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 09:04:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5eae9f3409a5182768442d5b02286bc3bddc323cd6f6bdee8d95f13a71b74b1a-merged.mount: Deactivated successfully.
Jan 27 09:04:56 np0005597378 podman[327983]: 2026-01-27 14:04:56.438904873 +0000 UTC m=+0.139723120 container cleanup d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 09:04:56 np0005597378 systemd[1]: libpod-conmon-d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58.scope: Deactivated successfully.
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.491 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:04:56 np0005597378 podman[328011]: 2026-01-27 14:04:56.497631774 +0000 UTC m=+0.039332740 container remove d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.499 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.499 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.500 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.500 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.501 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.501 238945 DEBUG nova.virt.libvirt.driver [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.503 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[edd09831-c9d2-4539-9dd9-fa486c8b3d82]: (4, ('Tue Jan 27 02:04:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58)\nd8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58\nTue Jan 27 02:04:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (d8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58)\nd8ad11b9815980b0ceaae90c0c3275741828e2e6c5f9ce98d268bcd50d7d8e58\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.505 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[38c1215b-4677-42a1-9346-2320b27aec98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.506 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.507 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.509 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:04:56 np0005597378 kernel: tapb5dcf6e0-70: left promiscuous mode
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.527 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.529 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d12d3075-225a-4811-b75a-9931ad69f076]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.546 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d3b8f8-fcd4-4160-9de7-645fb3b0b34c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.549 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[47960fbd-82eb-4342-aa54-b9ef9ecff307]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.567 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be35fad5-496b-448e-8b82-f59eaa1d517a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524989, 'reachable_time': 18574, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328029, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:04:56 np0005597378 systemd[1]: run-netns-ovnmeta\x2db5dcf6e0\x2d7c15\x2d42fb\x2d8f7e\x2d747d7fd9f151.mount: Deactivated successfully.
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.570 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.570 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[468576b3-8091-4c53-a1f2-e37ec9088f3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.572 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.573 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.575 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.574 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ea0985-08d4-4dbd-a523-b9f2f627495d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.576 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.577 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 09:04:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:04:56.578 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2875cfc6-1d15-4261-be9a-5c94a958330b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.647 238945 INFO nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Took 11.42 seconds to spawn the instance on the hypervisor.
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.647 238945 DEBUG nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:04:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 155 op/s
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.771 238945 INFO nova.compute.manager [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Took 12.77 seconds to build instance.
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.805 238945 INFO nova.virt.libvirt.driver [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance shutdown successfully after 3 seconds.
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.811 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.812 238945 DEBUG nova.objects.instance [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.849 238945 DEBUG oslo_concurrency.lockutils [None req-b83e07c1-075a-46c4-8b9d-52d0d02b181c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:04:56 np0005597378 nova_compute[238941]: 2026-01-27 14:04:56.909 238945 DEBUG nova.compute.manager [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:04:57 np0005597378 nova_compute[238941]: 2026-01-27 14:04:57.068 238945 DEBUG oslo_concurrency.lockutils [None req-581e1314-4349-4582-b308-65e73b550b32 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:04:57 np0005597378 nova_compute[238941]: 2026-01-27 14:04:57.285 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.275 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.447 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.639 238945 DEBUG nova.compute.manager [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.640 238945 DEBUG oslo_concurrency.lockutils [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.641 238945 DEBUG oslo_concurrency.lockutils [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.641 238945 DEBUG oslo_concurrency.lockutils [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.642 238945 DEBUG nova.compute.manager [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] No waiting events found dispatching network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.643 238945 WARNING nova.compute.manager [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received unexpected event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 for instance with vm_state active and task_state None.
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.643 238945 DEBUG nova.compute.manager [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.644 238945 DEBUG oslo_concurrency.lockutils [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.645 238945 DEBUG oslo_concurrency.lockutils [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.646 238945 DEBUG oslo_concurrency.lockutils [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.646 238945 DEBUG nova.compute.manager [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.647 238945 WARNING nova.compute.manager [req-a522bfd5-53ee-4099-9e87-4626a7c03ac2 req-83013001-8e16-49f2-b3c3-21b116a932cb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state stopped and task_state None.
Jan 27 09:04:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.898 238945 DEBUG nova.objects.instance [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.989 238945 DEBUG oslo_concurrency.lockutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.989 238945 DEBUG oslo_concurrency.lockutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.989 238945 DEBUG nova.network.neutron [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:04:58 np0005597378 nova_compute[238941]: 2026-01-27 14:04:58.990 238945 DEBUG nova.objects.instance [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'info_cache' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:04:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:04:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:04:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1906508496' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:04:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:04:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1906508496' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:05:00 np0005597378 nova_compute[238941]: 2026-01-27 14:05:00.438 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:05:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 253 KiB/s wr, 140 op/s
Jan 27 09:05:01 np0005597378 nova_compute[238941]: 2026-01-27 14:05:01.091 238945 DEBUG nova.compute.manager [req-81d7b523-941b-4176-b40c-3c110dafdb86 req-cd6593d5-8f31-4cf8-b9ad-f7650734563d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:05:01 np0005597378 nova_compute[238941]: 2026-01-27 14:05:01.092 238945 DEBUG oslo_concurrency.lockutils [req-81d7b523-941b-4176-b40c-3c110dafdb86 req-cd6593d5-8f31-4cf8-b9ad-f7650734563d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:01 np0005597378 nova_compute[238941]: 2026-01-27 14:05:01.092 238945 DEBUG oslo_concurrency.lockutils [req-81d7b523-941b-4176-b40c-3c110dafdb86 req-cd6593d5-8f31-4cf8-b9ad-f7650734563d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:01 np0005597378 nova_compute[238941]: 2026-01-27 14:05:01.092 238945 DEBUG oslo_concurrency.lockutils [req-81d7b523-941b-4176-b40c-3c110dafdb86 req-cd6593d5-8f31-4cf8-b9ad-f7650734563d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:01 np0005597378 nova_compute[238941]: 2026-01-27 14:05:01.092 238945 DEBUG nova.compute.manager [req-81d7b523-941b-4176-b40c-3c110dafdb86 req-cd6593d5-8f31-4cf8-b9ad-f7650734563d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:05:01 np0005597378 nova_compute[238941]: 2026-01-27 14:05:01.093 238945 WARNING nova.compute.manager [req-81d7b523-941b-4176-b40c-3c110dafdb86 req-cd6593d5-8f31-4cf8-b9ad-f7650734563d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 27 09:05:01 np0005597378 nova_compute[238941]: 2026-01-27 14:05:01.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:02 np0005597378 nova_compute[238941]: 2026-01-27 14:05:02.287 238945 DEBUG nova.compute.manager [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-changed-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:05:02 np0005597378 nova_compute[238941]: 2026-01-27 14:05:02.287 238945 DEBUG nova.compute.manager [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Refreshing instance network info cache due to event network-changed-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:05:02 np0005597378 nova_compute[238941]: 2026-01-27 14:05:02.288 238945 DEBUG oslo_concurrency.lockutils [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:05:02 np0005597378 nova_compute[238941]: 2026-01-27 14:05:02.288 238945 DEBUG oslo_concurrency.lockutils [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:05:02 np0005597378 nova_compute[238941]: 2026-01-27 14:05:02.288 238945 DEBUG nova.network.neutron [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Refreshing network info cache for port ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:05:02 np0005597378 nova_compute[238941]: 2026-01-27 14:05:02.289 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:02 np0005597378 nova_compute[238941]: 2026-01-27 14:05:02.376 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522687.3756921, 5762172c-e837-4a63-95dc-1559956fcef5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:05:02 np0005597378 nova_compute[238941]: 2026-01-27 14:05:02.377 238945 INFO nova.compute.manager [-] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:05:02 np0005597378 nova_compute[238941]: 2026-01-27 14:05:02.399 238945 DEBUG nova.compute.manager [None req-87d88b3e-9b6d-435e-80ed-30caf7c699ba - - - - - -] [instance: 5762172c-e837-4a63-95dc-1559956fcef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:05:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 75 op/s
Jan 27 09:05:02 np0005597378 nova_compute[238941]: 2026-01-27 14:05:02.926 238945 DEBUG nova.network.neutron [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.025 238945 DEBUG oslo_concurrency.lockutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.110 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.111 238945 DEBUG nova.objects.instance [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.148 238945 DEBUG nova.objects.instance [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.169 238945 DEBUG nova.virt.libvirt.vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:04:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.170 238945 DEBUG nova.network.os_vif_util [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.172 238945 DEBUG nova.network.os_vif_util [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.172 238945 DEBUG os_vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.174 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.174 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058b32ea-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.176 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.178 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.180 238945 INFO os_vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.187 238945 DEBUG nova.virt.libvirt.driver [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start _get_guest_xml network_info=[{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.190 238945 WARNING nova.virt.libvirt.driver [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.195 238945 DEBUG nova.virt.libvirt.host [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.196 238945 DEBUG nova.virt.libvirt.host [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.210 238945 DEBUG nova.virt.libvirt.host [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.211 238945 DEBUG nova.virt.libvirt.host [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.212 238945 DEBUG nova.virt.libvirt.driver [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.212 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.212 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.213 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.213 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.213 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.214 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.214 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.214 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.214 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.215 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.215 238945 DEBUG nova.virt.hardware [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.215 238945 DEBUG nova.objects.instance [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.245 238945 DEBUG oslo_concurrency.processutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:05:03 np0005597378 nova_compute[238941]: 2026-01-27 14:05:03.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:05:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:05:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398685501' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.020 238945 DEBUG oslo_concurrency.processutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.775s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:05:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.103 238945 DEBUG oslo_concurrency.processutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.339 238945 DEBUG nova.network.neutron [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Updated VIF entry in instance network info cache for port ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.341 238945 DEBUG nova.network.neutron [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Updating instance_info_cache with network_info: [{"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.421 238945 DEBUG oslo_concurrency.lockutils [req-d48a62f3-5c6e-4119-9bf9-4ed04683615a req-151e783a-8658-499b-8585-ff9328c0b34d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bf112e8f-c8b9-4e70-a0ee-3024945722aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:05:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:05:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/378391053' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.652 238945 DEBUG oslo_concurrency.processutils [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.653 238945 DEBUG nova.virt.libvirt.vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:04:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.654 238945 DEBUG nova.network.os_vif_util [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.655 238945 DEBUG nova.network.os_vif_util [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.656 238945 DEBUG nova.objects.instance [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.691 238945 DEBUG nova.virt.libvirt.driver [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  <uuid>2b352ec7-34b6-47bb-af67-779b4d1f27cd</uuid>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  <name>instance-00000063</name>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerActionsTestJSON-server-281114499</nova:name>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:05:03</nova:creationTime>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:        <nova:user uuid="d8b6fd848f3a4701b63086a5fb386473">tempest-ServerActionsTestJSON-1485108214-project-member</nova:user>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:        <nova:project uuid="6d73908bf91048dd99fbe4b9a8bcce9a">tempest-ServerActionsTestJSON-1485108214</nova:project>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:        <nova:port uuid="058b32ea-7973-4220-91fa-58dc678da20a">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <entry name="serial">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <entry name="uuid">2b352ec7-34b6-47bb-af67-779b4d1f27cd</entry>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/2b352ec7-34b6-47bb-af67-779b4d1f27cd_disk.config">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:76:b6:89"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <target dev="tap058b32ea-79"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd/console.log" append="off"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <input type="keyboard" bus="usb"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:05:04 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:05:04 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:05:04 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:05:04 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.700 238945 DEBUG nova.virt.libvirt.driver [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.701 238945 DEBUG nova.virt.libvirt.driver [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.703 238945 DEBUG nova.virt.libvirt.vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:04:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.704 238945 DEBUG nova.network.os_vif_util [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.705 238945 DEBUG nova.network.os_vif_util [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.706 238945 DEBUG os_vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.707 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.707 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.708 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:05:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 79 op/s
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.712 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058b32ea-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.713 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap058b32ea-79, col_values=(('external_ids', {'iface-id': '058b32ea-7973-4220-91fa-58dc678da20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:b6:89', 'vm-uuid': '2b352ec7-34b6-47bb-af67-779b4d1f27cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:04 np0005597378 NetworkManager[48904]: <info>  [1769522704.7163] manager: (tap058b32ea-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.716 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.718 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.721 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.722 238945 INFO os_vif [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')#033[00m
Jan 27 09:05:04 np0005597378 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 09:05:04 np0005597378 NetworkManager[48904]: <info>  [1769522704.8210] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Jan 27 09:05:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:04Z|00982|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 09:05:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:04Z|00983|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:04Z|00984|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.842 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:04Z|00985|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a up in Southbound
Jan 27 09:05:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.850 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:05:04 np0005597378 nova_compute[238941]: 2026-01-27 14:05:04.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.852 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis#033[00m
Jan 27 09:05:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.853 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151#033[00m
Jan 27 09:05:04 np0005597378 systemd-udevd[328107]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:05:04 np0005597378 systemd-machined[207425]: New machine qemu-127-instance-00000063.
Jan 27 09:05:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.863 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dca4b5e6-8c86-4a54-b0c5-442f92520d14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.864 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5dcf6e0-71 in ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:05:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.865 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5dcf6e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:05:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.865 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[226bc843-b709-411d-bbe4-e00031e30e4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.867 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f2ed27-cdf3-4ad5-9f0c-d0c74dfc6b92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:04 np0005597378 NetworkManager[48904]: <info>  [1769522704.8750] device (tap058b32ea-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:05:04 np0005597378 NetworkManager[48904]: <info>  [1769522704.8760] device (tap058b32ea-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:05:04 np0005597378 systemd[1]: Started Virtual Machine qemu-127-instance-00000063.
Jan 27 09:05:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.882 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9f69416d-1a5d-470d-a009-c630371e66c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.908 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1f06f132-d43d-411f-a3b5-782c1b7a442a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.945 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[613b55f1-a2d9-459b-98df-6702d8773035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:04 np0005597378 systemd-udevd[328111]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:05:04 np0005597378 NetworkManager[48904]: <info>  [1769522704.9584] manager: (tapb5dcf6e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/407)
Jan 27 09:05:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.957 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7c59cf37-9171-40f9-ae23-46287bf73bb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:04.998 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb00d58-ec35-4f16-9edd-c5acab162663]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.001 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb23606-85e5-4b87-878f-c721bdccce84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:05 np0005597378 NetworkManager[48904]: <info>  [1769522705.0218] device (tapb5dcf6e0-70): carrier: link connected
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.025 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8522340d-e73e-43e5-8448-ada1f02f7eda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.044 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[deadbb6b-23da-48d7-934f-f661a98919de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534464, 'reachable_time': 25243, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328140, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.059 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d5102aef-fdc4-4aaa-bdce-a93f7de7eba3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:8cf1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534464, 'tstamp': 534464}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328141, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.077 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ef6efb-a021-4358-8465-13eb4f4fef82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534464, 'reachable_time': 25243, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328142, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.109 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b6458241-4832-4f18-9ce4-daad15e041c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.156 238945 DEBUG nova.compute.manager [req-43b40a10-de72-4bff-9c20-d50458958b05 req-61392bc3-1966-446e-9aa5-b0b9d8cf486c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.156 238945 DEBUG oslo_concurrency.lockutils [req-43b40a10-de72-4bff-9c20-d50458958b05 req-61392bc3-1966-446e-9aa5-b0b9d8cf486c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.157 238945 DEBUG oslo_concurrency.lockutils [req-43b40a10-de72-4bff-9c20-d50458958b05 req-61392bc3-1966-446e-9aa5-b0b9d8cf486c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.157 238945 DEBUG oslo_concurrency.lockutils [req-43b40a10-de72-4bff-9c20-d50458958b05 req-61392bc3-1966-446e-9aa5-b0b9d8cf486c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.157 238945 DEBUG nova.compute.manager [req-43b40a10-de72-4bff-9c20-d50458958b05 req-61392bc3-1966-446e-9aa5-b0b9d8cf486c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.157 238945 WARNING nova.compute.manager [req-43b40a10-de72-4bff-9c20-d50458958b05 req-61392bc3-1966-446e-9aa5-b0b9d8cf486c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.176 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f231569b-b2f2-49b0-910f-ec93ac39ad2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.180 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.180 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.181 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:05 np0005597378 NetworkManager[48904]: <info>  [1769522705.1842] manager: (tapb5dcf6e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:05 np0005597378 kernel: tapb5dcf6e0-70: entered promiscuous mode
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.186 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:05Z|00986|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.187 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.206 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.206 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.207 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44edc529-1969-4664-97a2-72e13c966d90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.208 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:05:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:05.211 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'env', 'PROCESS_TAG=haproxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.558 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 2b352ec7-34b6-47bb-af67-779b4d1f27cd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.558 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522705.5575008, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.558 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.561 238945 DEBUG nova.compute.manager [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.564 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance rebooted successfully.#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.564 238945 DEBUG nova.compute.manager [None req-330c54ff-48ee-4d24-b987-4aed69c8a93d d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.684 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.688 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:05:05 np0005597378 podman[328214]: 2026-01-27 14:05:05.59688171 +0000 UTC m=+0.033223355 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.826 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.827 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522705.5577264, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.827 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Started (Lifecycle Event)#033[00m
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.923 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:05:05 np0005597378 podman[328214]: 2026-01-27 14:05:05.925383679 +0000 UTC m=+0.361725304 container create bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 09:05:05 np0005597378 nova_compute[238941]: 2026-01-27 14:05:05.928 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:05:06 np0005597378 systemd[1]: Started libpod-conmon-bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc.scope.
Jan 27 09:05:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:05:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbc1b60de7246453d88073a0b3030f9ae09a96eaffc50de5c357499cbdaa1e6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:06 np0005597378 podman[328214]: 2026-01-27 14:05:06.134021963 +0000 UTC m=+0.570363608 container init bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:05:06 np0005597378 podman[328214]: 2026-01-27 14:05:06.140159938 +0000 UTC m=+0.576501563 container start bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:05:06 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [NOTICE]   (328234) : New worker (328236) forked
Jan 27 09:05:06 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [NOTICE]   (328234) : Loading success.
Jan 27 09:05:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 169 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 83 op/s
Jan 27 09:05:07 np0005597378 nova_compute[238941]: 2026-01-27 14:05:07.256 238945 DEBUG nova.compute.manager [req-a7694543-890b-423d-bd29-c5960eb3c8ed req-37302436-434f-434d-8295-303abc5a077c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:05:07 np0005597378 nova_compute[238941]: 2026-01-27 14:05:07.256 238945 DEBUG oslo_concurrency.lockutils [req-a7694543-890b-423d-bd29-c5960eb3c8ed req-37302436-434f-434d-8295-303abc5a077c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:07 np0005597378 nova_compute[238941]: 2026-01-27 14:05:07.256 238945 DEBUG oslo_concurrency.lockutils [req-a7694543-890b-423d-bd29-c5960eb3c8ed req-37302436-434f-434d-8295-303abc5a077c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:07 np0005597378 nova_compute[238941]: 2026-01-27 14:05:07.257 238945 DEBUG oslo_concurrency.lockutils [req-a7694543-890b-423d-bd29-c5960eb3c8ed req-37302436-434f-434d-8295-303abc5a077c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:07 np0005597378 nova_compute[238941]: 2026-01-27 14:05:07.257 238945 DEBUG nova.compute.manager [req-a7694543-890b-423d-bd29-c5960eb3c8ed req-37302436-434f-434d-8295-303abc5a077c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:05:07 np0005597378 nova_compute[238941]: 2026-01-27 14:05:07.257 238945 WARNING nova.compute.manager [req-a7694543-890b-423d-bd29-c5960eb3c8ed req-37302436-434f-434d-8295-303abc5a077c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.#033[00m
Jan 27 09:05:07 np0005597378 nova_compute[238941]: 2026-01-27 14:05:07.287 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:07 np0005597378 nova_compute[238941]: 2026-01-27 14:05:07.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:05:07 np0005597378 podman[328245]: 2026-01-27 14:05:07.721059256 +0000 UTC m=+0.054779975 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 27 09:05:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 176 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 754 KiB/s wr, 165 op/s
Jan 27 09:05:08 np0005597378 podman[328264]: 2026-01-27 14:05:08.769475346 +0000 UTC m=+0.091063811 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:05:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:09 np0005597378 nova_compute[238941]: 2026-01-27 14:05:09.715 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:09 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 27 09:05:09 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:09.808249) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:05:09 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 27 09:05:09 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522709808294, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 593, "num_deletes": 259, "total_data_size": 582547, "memory_usage": 594520, "flush_reason": "Manual Compaction"}
Jan 27 09:05:09 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 27 09:05:09 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522709871097, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 576815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37788, "largest_seqno": 38380, "table_properties": {"data_size": 573684, "index_size": 1039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7457, "raw_average_key_size": 18, "raw_value_size": 567224, "raw_average_value_size": 1425, "num_data_blocks": 47, "num_entries": 398, "num_filter_entries": 398, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522670, "oldest_key_time": 1769522670, "file_creation_time": 1769522709, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:05:09 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 62920 microseconds, and 3041 cpu microseconds.
Jan 27 09:05:09 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:09.871169) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 576815 bytes OK
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:09.871213) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.198722) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.198765) EVENT_LOG_v1 {"time_micros": 1769522710198757, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.198790) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 579253, prev total WAL file size 607332, number of live WAL files 2.
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.199400) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323537' seq:72057594037927935, type:22 .. '6C6F676D0031353131' seq:0, type:0; will stop at (end)
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(563KB)], [83(8006KB)]
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522710199427, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8775034, "oldest_snapshot_seqno": -1}
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6181 keys, 8653910 bytes, temperature: kUnknown
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522710671771, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8653910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8612683, "index_size": 24669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 157895, "raw_average_key_size": 25, "raw_value_size": 8502062, "raw_average_value_size": 1375, "num_data_blocks": 990, "num_entries": 6181, "num_filter_entries": 6181, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:05:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 178 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 134 op/s
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.672059) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8653910 bytes
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.736184) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 18.6 rd, 18.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 7.8 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(30.2) write-amplify(15.0) OK, records in: 6712, records dropped: 531 output_compression: NoCompression
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.736222) EVENT_LOG_v1 {"time_micros": 1769522710736209, "job": 48, "event": "compaction_finished", "compaction_time_micros": 472439, "compaction_time_cpu_micros": 20042, "output_level": 6, "num_output_files": 1, "total_output_size": 8653910, "num_input_records": 6712, "num_output_records": 6181, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522710736493, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522710737821, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.199259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.737848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.737853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.737854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.737856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:05:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:05:10.737859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:05:11 np0005597378 nova_compute[238941]: 2026-01-27 14:05:11.601 238945 DEBUG nova.objects.instance [None req-e401c997-4023-4f1e-9712-0915d5c52f86 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:05:11 np0005597378 nova_compute[238941]: 2026-01-27 14:05:11.629 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522711.6293006, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:05:11 np0005597378 nova_compute[238941]: 2026-01-27 14:05:11.629 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:05:11 np0005597378 nova_compute[238941]: 2026-01-27 14:05:11.691 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:05:11 np0005597378 nova_compute[238941]: 2026-01-27 14:05:11.694 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:05:11 np0005597378 nova_compute[238941]: 2026-01-27 14:05:11.720 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 27 09:05:11 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:11Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:c4:6a 10.100.0.10
Jan 27 09:05:11 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:11Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:c4:6a 10.100.0.10
Jan 27 09:05:12 np0005597378 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 09:05:12 np0005597378 NetworkManager[48904]: <info>  [1769522712.1125] device (tap058b32ea-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:05:12 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:12Z|00987|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 09:05:12 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:12Z|00988|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a down in Southbound
Jan 27 09:05:12 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:12Z|00989|binding|INFO|Removing iface tap058b32ea-79 ovn-installed in OVS
Jan 27 09:05:12 np0005597378 nova_compute[238941]: 2026-01-27 14:05:12.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:12 np0005597378 nova_compute[238941]: 2026-01-27 14:05:12.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:12 np0005597378 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 27 09:05:12 np0005597378 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000063.scope: Consumed 7.055s CPU time.
Jan 27 09:05:12 np0005597378 systemd-machined[207425]: Machine qemu-127-instance-00000063 terminated.
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.194 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.196 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis#033[00m
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.198 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.200 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3eaa53fd-d0fb-428c-b7b8-8958b3f9a655]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.201 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace which is not needed anymore#033[00m
Jan 27 09:05:12 np0005597378 nova_compute[238941]: 2026-01-27 14:05:12.264 238945 DEBUG nova.compute.manager [None req-e401c997-4023-4f1e-9712-0915d5c52f86 d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:05:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:05:12 np0005597378 nova_compute[238941]: 2026-01-27 14:05:12.290 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:05:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:05:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:05:12 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [NOTICE]   (328234) : haproxy version is 2.8.14-c23fe91
Jan 27 09:05:12 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [NOTICE]   (328234) : path to executable is /usr/sbin/haproxy
Jan 27 09:05:12 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [WARNING]  (328234) : Exiting Master process...
Jan 27 09:05:12 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [ALERT]    (328234) : Current worker (328236) exited with code 143 (Terminated)
Jan 27 09:05:12 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[328230]: [WARNING]  (328234) : All workers exited. Exiting... (0)
Jan 27 09:05:12 np0005597378 systemd[1]: libpod-bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc.scope: Deactivated successfully.
Jan 27 09:05:12 np0005597378 podman[328400]: 2026-01-27 14:05:12.396275424 +0000 UTC m=+0.076102899 container died bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 09:05:12 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc-userdata-shm.mount: Deactivated successfully.
Jan 27 09:05:12 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cbc1b60de7246453d88073a0b3030f9ae09a96eaffc50de5c357499cbdaa1e6a-merged.mount: Deactivated successfully.
Jan 27 09:05:12 np0005597378 podman[328400]: 2026-01-27 14:05:12.454627634 +0000 UTC m=+0.134455109 container cleanup bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:05:12 np0005597378 systemd[1]: libpod-conmon-bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc.scope: Deactivated successfully.
Jan 27 09:05:12 np0005597378 podman[328478]: 2026-01-27 14:05:12.517002022 +0000 UTC m=+0.043391199 container remove bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.523 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0c283647-d869-4f63-af99-cafee6d28973]: (4, ('Tue Jan 27 02:05:12 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc)\nbccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc\nTue Jan 27 02:05:12 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (bccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc)\nbccfb504f303f6bbc1db3e1ecc0726c5af14a1320832080d558150f7c9a9a7fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.524 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a598a9e-5b4d-4345-bc00-48ec522c593a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.525 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:12 np0005597378 kernel: tapb5dcf6e0-70: left promiscuous mode
Jan 27 09:05:12 np0005597378 nova_compute[238941]: 2026-01-27 14:05:12.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:12 np0005597378 nova_compute[238941]: 2026-01-27 14:05:12.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.549 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[35c566b7-7e4a-49f8-944b-0d61f66191ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.568 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d9571969-318a-4f63-80a9-c1e730a70cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.570 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[17adce3d-7a73-46ac-9aea-0d5dbe7887ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.589 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ab93f7af-2a53-4132-8673-a4a61ea7648d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534455, 'reachable_time': 26031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328499, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:12 np0005597378 systemd[1]: run-netns-ovnmeta\x2db5dcf6e0\x2d7c15\x2d42fb\x2d8f7e\x2d747d7fd9f151.mount: Deactivated successfully.
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.595 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:05:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:12.596 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe1cccf-6fa2-48c4-9be6-c1f79e68b66f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 178 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 126 op/s
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:05:13 np0005597378 nova_compute[238941]: 2026-01-27 14:05:13.327 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:05:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:05:13 np0005597378 nova_compute[238941]: 2026-01-27 14:05:13.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:05:13 np0005597378 podman[328593]: 2026-01-27 14:05:13.491213446 +0000 UTC m=+0.044721115 container create 489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chaplygin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 09:05:13 np0005597378 systemd[1]: Started libpod-conmon-489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116.scope.
Jan 27 09:05:13 np0005597378 podman[328593]: 2026-01-27 14:05:13.473606852 +0000 UTC m=+0.027114541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:05:13 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:05:13 np0005597378 podman[328593]: 2026-01-27 14:05:13.59320579 +0000 UTC m=+0.146713479 container init 489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:05:13 np0005597378 podman[328593]: 2026-01-27 14:05:13.601608066 +0000 UTC m=+0.155115735 container start 489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:05:13 np0005597378 silly_chaplygin[328610]: 167 167
Jan 27 09:05:13 np0005597378 systemd[1]: libpod-489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116.scope: Deactivated successfully.
Jan 27 09:05:13 np0005597378 podman[328593]: 2026-01-27 14:05:13.611299757 +0000 UTC m=+0.164807516 container attach 489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chaplygin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:05:13 np0005597378 podman[328593]: 2026-01-27 14:05:13.612793407 +0000 UTC m=+0.166301106 container died 489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chaplygin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 09:05:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1e22a1ec7066b6f588aea1832ff796ae4a1faedd5f5ff17bc1a76b670e451877-merged.mount: Deactivated successfully.
Jan 27 09:05:13 np0005597378 podman[328593]: 2026-01-27 14:05:13.661477517 +0000 UTC m=+0.214985186 container remove 489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 09:05:13 np0005597378 systemd[1]: libpod-conmon-489b10b2a5de83683627e40990367ee82c7e9a5114dc86d6c410d5c7de18f116.scope: Deactivated successfully.
Jan 27 09:05:13 np0005597378 podman[328633]: 2026-01-27 14:05:13.841908502 +0000 UTC m=+0.048836225 container create 58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hamilton, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 09:05:13 np0005597378 systemd[1]: Started libpod-conmon-58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6.scope.
Jan 27 09:05:13 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:05:13 np0005597378 podman[328633]: 2026-01-27 14:05:13.823746184 +0000 UTC m=+0.030673927 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:05:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d2808e04173fe3f7b8712f326ec24ab1ed81cf75ef7722c631449c3815811e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d2808e04173fe3f7b8712f326ec24ab1ed81cf75ef7722c631449c3815811e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d2808e04173fe3f7b8712f326ec24ab1ed81cf75ef7722c631449c3815811e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d2808e04173fe3f7b8712f326ec24ab1ed81cf75ef7722c631449c3815811e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d2808e04173fe3f7b8712f326ec24ab1ed81cf75ef7722c631449c3815811e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:13 np0005597378 podman[328633]: 2026-01-27 14:05:13.933507747 +0000 UTC m=+0.140435490 container init 58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:05:13 np0005597378 podman[328633]: 2026-01-27 14:05:13.943192487 +0000 UTC m=+0.150120210 container start 58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:05:13 np0005597378 podman[328633]: 2026-01-27 14:05:13.947218036 +0000 UTC m=+0.154145759 container attach 58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hamilton, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:05:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:14 np0005597378 nifty_hamilton[328649]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:05:14 np0005597378 nifty_hamilton[328649]: --> All data devices are unavailable
Jan 27 09:05:14 np0005597378 systemd[1]: libpod-58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6.scope: Deactivated successfully.
Jan 27 09:05:14 np0005597378 conmon[328649]: conmon 58dc9f55b2bdbefa77a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6.scope/container/memory.events
Jan 27 09:05:14 np0005597378 podman[328633]: 2026-01-27 14:05:14.436377547 +0000 UTC m=+0.643305280 container died 58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hamilton, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 09:05:14 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c3d2808e04173fe3f7b8712f326ec24ab1ed81cf75ef7722c631449c3815811e-merged.mount: Deactivated successfully.
Jan 27 09:05:14 np0005597378 podman[328633]: 2026-01-27 14:05:14.484818541 +0000 UTC m=+0.691746254 container remove 58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:05:14 np0005597378 systemd[1]: libpod-conmon-58dc9f55b2bdbefa77a81b46609f508608b26f360be7836c1f74f33f8ec46ae6.scope: Deactivated successfully.
Jan 27 09:05:14 np0005597378 nova_compute[238941]: 2026-01-27 14:05:14.643 238945 INFO nova.compute.manager [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Resuming#033[00m
Jan 27 09:05:14 np0005597378 nova_compute[238941]: 2026-01-27 14:05:14.646 238945 DEBUG nova.objects.instance [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'flavor' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:05:14 np0005597378 nova_compute[238941]: 2026-01-27 14:05:14.681 238945 DEBUG oslo_concurrency.lockutils [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:05:14 np0005597378 nova_compute[238941]: 2026-01-27 14:05:14.681 238945 DEBUG oslo_concurrency.lockutils [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquired lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:05:14 np0005597378 nova_compute[238941]: 2026-01-27 14:05:14.682 238945 DEBUG nova.network.neutron [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:05:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 199 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 172 op/s
Jan 27 09:05:14 np0005597378 nova_compute[238941]: 2026-01-27 14:05:14.718 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:14 np0005597378 podman[328741]: 2026-01-27 14:05:14.938878628 +0000 UTC m=+0.039577345 container create 34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_moser, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:05:14 np0005597378 systemd[1]: Started libpod-conmon-34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908.scope.
Jan 27 09:05:15 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:05:15 np0005597378 podman[328741]: 2026-01-27 14:05:14.922799846 +0000 UTC m=+0.023498583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:05:15 np0005597378 podman[328741]: 2026-01-27 14:05:15.026194268 +0000 UTC m=+0.126893005 container init 34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_moser, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:05:15 np0005597378 podman[328741]: 2026-01-27 14:05:15.031936752 +0000 UTC m=+0.132635469 container start 34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 27 09:05:15 np0005597378 podman[328741]: 2026-01-27 14:05:15.035473248 +0000 UTC m=+0.136171995 container attach 34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 09:05:15 np0005597378 nervous_moser[328757]: 167 167
Jan 27 09:05:15 np0005597378 systemd[1]: libpod-34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908.scope: Deactivated successfully.
Jan 27 09:05:15 np0005597378 conmon[328757]: conmon 34e6789d5489a7092d97 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908.scope/container/memory.events
Jan 27 09:05:15 np0005597378 podman[328741]: 2026-01-27 14:05:15.03965168 +0000 UTC m=+0.140350397 container died 34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_moser, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:05:15 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cf329309f865fed7baaea5987e15e2b9811a0ebb94af192e39268de160f955cc-merged.mount: Deactivated successfully.
Jan 27 09:05:15 np0005597378 podman[328741]: 2026-01-27 14:05:15.082529444 +0000 UTC m=+0.183228161 container remove 34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 09:05:15 np0005597378 systemd[1]: libpod-conmon-34e6789d5489a7092d9778b36395e7f3fd14b5370942e7d3f2871e9aa4412908.scope: Deactivated successfully.
Jan 27 09:05:15 np0005597378 podman[328781]: 2026-01-27 14:05:15.257496282 +0000 UTC m=+0.045594668 container create baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_visvesvaraya, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 09:05:15 np0005597378 systemd[1]: Started libpod-conmon-baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81.scope.
Jan 27 09:05:15 np0005597378 podman[328781]: 2026-01-27 14:05:15.236124716 +0000 UTC m=+0.024223122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:05:15 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:05:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f211ec32e2880366ea56a6826318efb9415fcb98476c948077c6c1ab643406c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f211ec32e2880366ea56a6826318efb9415fcb98476c948077c6c1ab643406c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f211ec32e2880366ea56a6826318efb9415fcb98476c948077c6c1ab643406c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f211ec32e2880366ea56a6826318efb9415fcb98476c948077c6c1ab643406c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:15 np0005597378 podman[328781]: 2026-01-27 14:05:15.355970601 +0000 UTC m=+0.144068977 container init baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_visvesvaraya, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:05:15 np0005597378 podman[328781]: 2026-01-27 14:05:15.364659255 +0000 UTC m=+0.152757631 container start baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:05:15 np0005597378 podman[328781]: 2026-01-27 14:05:15.370017759 +0000 UTC m=+0.158116135 container attach baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_visvesvaraya, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]: {
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:    "0": [
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:        {
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "devices": [
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "/dev/loop3"
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            ],
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_name": "ceph_lv0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_size": "21470642176",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "name": "ceph_lv0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "tags": {
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.cluster_name": "ceph",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.crush_device_class": "",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.encrypted": "0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.objectstore": "bluestore",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.osd_id": "0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.type": "block",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.vdo": "0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.with_tpm": "0"
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            },
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "type": "block",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "vg_name": "ceph_vg0"
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:        }
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:    ],
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:    "1": [
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:        {
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "devices": [
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "/dev/loop4"
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            ],
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_name": "ceph_lv1",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_size": "21470642176",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "name": "ceph_lv1",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "tags": {
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.cluster_name": "ceph",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.crush_device_class": "",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.encrypted": "0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.objectstore": "bluestore",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.osd_id": "1",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.type": "block",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.vdo": "0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.with_tpm": "0"
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            },
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "type": "block",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "vg_name": "ceph_vg1"
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:        }
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:    ],
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:    "2": [
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:        {
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "devices": [
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "/dev/loop5"
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            ],
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_name": "ceph_lv2",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_size": "21470642176",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "name": "ceph_lv2",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "tags": {
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.cluster_name": "ceph",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.crush_device_class": "",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.encrypted": "0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.objectstore": "bluestore",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.osd_id": "2",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.type": "block",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.vdo": "0",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:                "ceph.with_tpm": "0"
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            },
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "type": "block",
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:            "vg_name": "ceph_vg2"
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:        }
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]:    ]
Jan 27 09:05:15 np0005597378 reverent_visvesvaraya[328798]: }
Jan 27 09:05:15 np0005597378 systemd[1]: libpod-baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81.scope: Deactivated successfully.
Jan 27 09:05:15 np0005597378 podman[328781]: 2026-01-27 14:05:15.667917555 +0000 UTC m=+0.456015931 container died baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_visvesvaraya, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:05:15 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f211ec32e2880366ea56a6826318efb9415fcb98476c948077c6c1ab643406c4-merged.mount: Deactivated successfully.
Jan 27 09:05:15 np0005597378 podman[328781]: 2026-01-27 14:05:15.759209032 +0000 UTC m=+0.547307408 container remove baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:05:15 np0005597378 systemd[1]: libpod-conmon-baf302a67f120ede3cd8ff5ecadf4505b7749e87ec8d19b9af0afb6cb1634f81.scope: Deactivated successfully.
Jan 27 09:05:16 np0005597378 podman[328883]: 2026-01-27 14:05:16.260454899 +0000 UTC m=+0.058434124 container create fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_williams, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 09:05:16 np0005597378 systemd[1]: Started libpod-conmon-fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75.scope.
Jan 27 09:05:16 np0005597378 podman[328883]: 2026-01-27 14:05:16.227639185 +0000 UTC m=+0.025618440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:05:16 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:05:16 np0005597378 podman[328883]: 2026-01-27 14:05:16.378268899 +0000 UTC m=+0.176248154 container init fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_williams, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 09:05:16 np0005597378 podman[328883]: 2026-01-27 14:05:16.387451756 +0000 UTC m=+0.185430991 container start fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_williams, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 09:05:16 np0005597378 frosty_williams[328899]: 167 167
Jan 27 09:05:16 np0005597378 systemd[1]: libpod-fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75.scope: Deactivated successfully.
Jan 27 09:05:16 np0005597378 podman[328883]: 2026-01-27 14:05:16.40547024 +0000 UTC m=+0.203449485 container attach fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:05:16 np0005597378 podman[328883]: 2026-01-27 14:05:16.407145526 +0000 UTC m=+0.205124761 container died fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_williams, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 09:05:16 np0005597378 systemd[1]: var-lib-containers-storage-overlay-56e5529d10fade2e03ce65b13f6b9d43be44ef729e14f0aa00ec200f9c37cf7e-merged.mount: Deactivated successfully.
Jan 27 09:05:16 np0005597378 podman[328883]: 2026-01-27 14:05:16.512573812 +0000 UTC m=+0.310553037 container remove fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_williams, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 09:05:16 np0005597378 systemd[1]: libpod-conmon-fedeaee3f93b3c38a2895eaf8b888f577380bd77f0671231b01c3ad7f8463b75.scope: Deactivated successfully.
Jan 27 09:05:16 np0005597378 podman[328925]: 2026-01-27 14:05:16.709553423 +0000 UTC m=+0.074115125 container create ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 09:05:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 202 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Jan 27 09:05:16 np0005597378 podman[328925]: 2026-01-27 14:05:16.658281183 +0000 UTC m=+0.022842905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:05:16 np0005597378 systemd[1]: Started libpod-conmon-ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8.scope.
Jan 27 09:05:16 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:05:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0846f44d6540a76c904323b48cf25f7d2c0e61f29a60b37815b46e43c607a60f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0846f44d6540a76c904323b48cf25f7d2c0e61f29a60b37815b46e43c607a60f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0846f44d6540a76c904323b48cf25f7d2c0e61f29a60b37815b46e43c607a60f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0846f44d6540a76c904323b48cf25f7d2c0e61f29a60b37815b46e43c607a60f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:16 np0005597378 podman[328925]: 2026-01-27 14:05:16.834728901 +0000 UTC m=+0.199290633 container init ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wu, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:05:16 np0005597378 podman[328925]: 2026-01-27 14:05:16.842317274 +0000 UTC m=+0.206878986 container start ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wu, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:05:16 np0005597378 podman[328925]: 2026-01-27 14:05:16.861306846 +0000 UTC m=+0.225868548 container attach ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wu, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.051 238945 DEBUG nova.network.neutron [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [{"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:05:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:05:17
Jan 27 09:05:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:05:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:05:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', '.mgr', 'images', 'vms', 'volumes', 'default.rgw.log']
Jan 27 09:05:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.095 238945 DEBUG oslo_concurrency.lockutils [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Releasing lock "refresh_cache-2b352ec7-34b6-47bb-af67-779b4d1f27cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.101 238945 DEBUG nova.virt.libvirt.vif [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:05:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.101 238945 DEBUG nova.network.os_vif_util [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.102 238945 DEBUG nova.network.os_vif_util [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.102 238945 DEBUG os_vif [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.103 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.103 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.106 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap058b32ea-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.106 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap058b32ea-79, col_values=(('external_ids', {'iface-id': '058b32ea-7973-4220-91fa-58dc678da20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:b6:89', 'vm-uuid': '2b352ec7-34b6-47bb-af67-779b4d1f27cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.106 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.106 238945 INFO os_vif [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.156 238945 DEBUG nova.objects.instance [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:05:17 np0005597378 NetworkManager[48904]: <info>  [1769522717.2351] manager: (tap058b32ea-79): new Tun device (/org/freedesktop/NetworkManager/Devices/409)
Jan 27 09:05:17 np0005597378 kernel: tap058b32ea-79: entered promiscuous mode
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.251 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:17Z|00990|binding|INFO|Claiming lport 058b32ea-7973-4220-91fa-58dc678da20a for this chassis.
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.253 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:17Z|00991|binding|INFO|058b32ea-7973-4220-91fa-58dc678da20a: Claiming fa:16:3e:76:b6:89 10.100.0.5
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.269 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:05:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:17Z|00992|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a up in Southbound
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.270 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 bound to our chassis#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.272 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151#033[00m
Jan 27 09:05:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:17Z|00993|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a ovn-installed in OVS
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:17 np0005597378 systemd-udevd[328979]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:05:17 np0005597378 systemd-machined[207425]: New machine qemu-128-instance-00000063.
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.288 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8980c451-7f9d-4135-a715-e08d8ec42a1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 NetworkManager[48904]: <info>  [1769522717.2902] device (tap058b32ea-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:05:17 np0005597378 NetworkManager[48904]: <info>  [1769522717.2911] device (tap058b32ea-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.291 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:17 np0005597378 systemd[1]: Started Virtual Machine qemu-128-instance-00000063.
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.291 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5dcf6e0-71 in ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.294 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5dcf6e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.294 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7654031b-fb34-4ea9-b363-e859e569b690]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.295 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dec83924-1daf-4597-87a9-27151f6c6aab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.307 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec83b54-7ad2-4852-86b1-a39b89305242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.334 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[876811ca-08a8-4467-a391-52ecdf1f8a75]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.377 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f3368851-523a-4285-87db-1eb7f6948bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 NetworkManager[48904]: <info>  [1769522717.3869] manager: (tapb5dcf6e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/410)
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bd45a038-3465-4d4c-9b9d-95aeb36a6905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.431 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f892872f-e6fe-4a9b-8745-e79281397362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.436 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e95cabd2-2d8e-44f5-aff5-6766fad4c0ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 NetworkManager[48904]: <info>  [1769522717.4657] device (tapb5dcf6e0-70): carrier: link connected
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.472 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a50acb7c-a6f0-4234-8cb2-fd9dc3caee46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.493 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61165d0e-11fd-4d05-9e4d-e30ea9f45bf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535708, 'reachable_time': 15436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329049, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.512 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b04e9df4-003a-4773-a275-cf0da08aa7dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:8cf1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535708, 'tstamp': 535708}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329053, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.531 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f8b779-b9c4-4ae9-ad79-61663f29a58a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5dcf6e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:8c:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535708, 'reachable_time': 15436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329055, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.573 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2d3494-e229-4c9a-b66a-132f8d6dd1e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.662 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[599bc9d1-401e-4195-94bf-fc595065548b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.665 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.667 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.668 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5dcf6e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.670 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:17 np0005597378 NetworkManager[48904]: <info>  [1769522717.6709] manager: (tapb5dcf6e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Jan 27 09:05:17 np0005597378 kernel: tapb5dcf6e0-70: entered promiscuous mode
Jan 27 09:05:17 np0005597378 lvm[329090]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:05:17 np0005597378 lvm[329090]: VG ceph_vg0 finished
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.676 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5dcf6e0-70, col_values=(('external_ids', {'iface-id': '9fc164ea-bedd-4f63-8b81-7b9d3c502aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:17Z|00994|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.683 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.685 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6260c05b-7008-427b-984d-3bbec03893d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.686 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.pid.haproxy
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:05:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:17.688 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'env', 'PROCESS_TAG=haproxy-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.695 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:17 np0005597378 lvm[329109]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:05:17 np0005597378 lvm[329109]: VG ceph_vg2 finished
Jan 27 09:05:17 np0005597378 lvm[329107]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:05:17 np0005597378 lvm[329107]: VG ceph_vg1 finished
Jan 27 09:05:17 np0005597378 youthful_wu[328941]: {}
Jan 27 09:05:17 np0005597378 systemd[1]: libpod-ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8.scope: Deactivated successfully.
Jan 27 09:05:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:05:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:05:17 np0005597378 systemd[1]: libpod-ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8.scope: Consumed 1.504s CPU time.
Jan 27 09:05:17 np0005597378 podman[328925]: 2026-01-27 14:05:17.830980128 +0000 UTC m=+1.195541830 container died ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:05:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:05:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:05:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:05:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:05:17 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0846f44d6540a76c904323b48cf25f7d2c0e61f29a60b37815b46e43c607a60f-merged.mount: Deactivated successfully.
Jan 27 09:05:17 np0005597378 podman[328925]: 2026-01-27 14:05:17.89686404 +0000 UTC m=+1.261425742 container remove ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 09:05:17 np0005597378 systemd[1]: libpod-conmon-ceec65358c47f070d66c194693a474c791f5c50836f86507656437b46946beb8.scope: Deactivated successfully.
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.921 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 2b352ec7-34b6-47bb-af67-779b4d1f27cd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.923 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522717.9205537, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.923 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Started (Lifecycle Event)#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.947 238945 DEBUG nova.compute.manager [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.947 238945 DEBUG nova.objects.instance [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:05:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:05:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:05:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.971 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:05:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.975 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.981 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance running successfully.#033[00m
Jan 27 09:05:17 np0005597378 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.986 238945 DEBUG nova.virt.libvirt.guest [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 27 09:05:17 np0005597378 nova_compute[238941]: 2026-01-27 14:05:17.987 238945 DEBUG nova.compute.manager [None req-394fbaa3-568e-4e93-8d00-57a301b2c07f d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:05:18 np0005597378 nova_compute[238941]: 2026-01-27 14:05:18.034 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 27 09:05:18 np0005597378 nova_compute[238941]: 2026-01-27 14:05:18.034 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522717.933564, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:05:18 np0005597378 nova_compute[238941]: 2026-01-27 14:05:18.035 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:05:18 np0005597378 nova_compute[238941]: 2026-01-27 14:05:18.059 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:05:18 np0005597378 nova_compute[238941]: 2026-01-27 14:05:18.064 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:05:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:05:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:05:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:05:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:05:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:05:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:05:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:05:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:05:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:05:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:05:18 np0005597378 podman[329179]: 2026-01-27 14:05:18.117658871 +0000 UTC m=+0.053126051 container create 8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 09:05:18 np0005597378 systemd[1]: Started libpod-conmon-8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73.scope.
Jan 27 09:05:18 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:05:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2799d2b970b2b7a9687c31fea57f1671d0f79b14a9e46baf71b785dc56e51d5e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:05:18 np0005597378 podman[329179]: 2026-01-27 14:05:18.087980172 +0000 UTC m=+0.023447382 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:05:18 np0005597378 podman[329179]: 2026-01-27 14:05:18.19344933 +0000 UTC m=+0.128916510 container init 8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:05:18 np0005597378 podman[329179]: 2026-01-27 14:05:18.199148344 +0000 UTC m=+0.134615524 container start 8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:05:18 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [NOTICE]   (329199) : New worker (329201) forked
Jan 27 09:05:18 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [NOTICE]   (329199) : Loading success.
Jan 27 09:05:18 np0005597378 nova_compute[238941]: 2026-01-27 14:05:18.559 238945 DEBUG nova.compute.manager [req-aebd23f6-f613-47fa-b9eb-aa90226d4396 req-209180fd-25b7-4b79-ad0f-810b514dbc92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:05:18 np0005597378 nova_compute[238941]: 2026-01-27 14:05:18.560 238945 DEBUG oslo_concurrency.lockutils [req-aebd23f6-f613-47fa-b9eb-aa90226d4396 req-209180fd-25b7-4b79-ad0f-810b514dbc92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:18 np0005597378 nova_compute[238941]: 2026-01-27 14:05:18.560 238945 DEBUG oslo_concurrency.lockutils [req-aebd23f6-f613-47fa-b9eb-aa90226d4396 req-209180fd-25b7-4b79-ad0f-810b514dbc92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:18 np0005597378 nova_compute[238941]: 2026-01-27 14:05:18.560 238945 DEBUG oslo_concurrency.lockutils [req-aebd23f6-f613-47fa-b9eb-aa90226d4396 req-209180fd-25b7-4b79-ad0f-810b514dbc92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:18 np0005597378 nova_compute[238941]: 2026-01-27 14:05:18.560 238945 DEBUG nova.compute.manager [req-aebd23f6-f613-47fa-b9eb-aa90226d4396 req-209180fd-25b7-4b79-ad0f-810b514dbc92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:05:18 np0005597378 nova_compute[238941]: 2026-01-27 14:05:18.560 238945 WARNING nova.compute.manager [req-aebd23f6-f613-47fa-b9eb-aa90226d4396 req-209180fd-25b7-4b79-ad0f-810b514dbc92 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state None.#033[00m
Jan 27 09:05:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 202 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Jan 27 09:05:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:18.810 155189 DEBUG eventlet.wsgi.server [-] (155189) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Jan 27 09:05:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:18.812 155189 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Jan 27 09:05:18 np0005597378 ovn_metadata_agent[154797]: Accept: */*#015
Jan 27 09:05:18 np0005597378 ovn_metadata_agent[154797]: Connection: close#015
Jan 27 09:05:18 np0005597378 ovn_metadata_agent[154797]: Content-Type: text/plain#015
Jan 27 09:05:18 np0005597378 ovn_metadata_agent[154797]: Host: 169.254.169.254#015
Jan 27 09:05:18 np0005597378 ovn_metadata_agent[154797]: User-Agent: curl/7.84.0#015
Jan 27 09:05:18 np0005597378 ovn_metadata_agent[154797]: X-Forwarded-For: 10.100.0.10#015
Jan 27 09:05:18 np0005597378 ovn_metadata_agent[154797]: X-Ovn-Network-Id: b8227184-a0b2-457f-9458-e3d8638d23a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Jan 27 09:05:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:05:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:05:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.382 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.383 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.384 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.384 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.384 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.386 238945 INFO nova.compute.manager [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Terminating instance
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.387 238945 DEBUG nova.compute.manager [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 09:05:19 np0005597378 kernel: tap058b32ea-79 (unregistering): left promiscuous mode
Jan 27 09:05:19 np0005597378 NetworkManager[48904]: <info>  [1769522719.4222] device (tap058b32ea-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:05:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:19Z|00995|binding|INFO|Releasing lport 058b32ea-7973-4220-91fa-58dc678da20a from this chassis (sb_readonly=0)
Jan 27 09:05:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:19Z|00996|binding|INFO|Setting lport 058b32ea-7973-4220-91fa-58dc678da20a down in Southbound
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.432 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:19Z|00997|binding|INFO|Removing iface tap058b32ea-79 ovn-installed in OVS
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.435 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.457 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b6:89 10.100.0.5'], port_security=['fa:16:3e:76:b6:89 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2b352ec7-34b6-47bb-af67-779b4d1f27cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d73908bf91048dd99fbe4b9a8bcce9a', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'e4c08184-5f6a-4291-a454-a073d7b6f1e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08ed6370-7525-43f2-b4c7-1f7780c8bee8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=058b32ea-7973-4220-91fa-58dc678da20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.459 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 058b32ea-7973-4220-91fa-58dc678da20a in datapath b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 unbound from our chassis
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.460 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.461 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f60ff5d6-70f7-47e8-85a3-938a46994cb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.462 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 namespace which is not needed anymore
Jan 27 09:05:19 np0005597378 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 27 09:05:19 np0005597378 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000063.scope: Consumed 1.996s CPU time.
Jan 27 09:05:19 np0005597378 systemd-machined[207425]: Machine qemu-128-instance-00000063 terminated.
Jan 27 09:05:19 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [NOTICE]   (329199) : haproxy version is 2.8.14-c23fe91
Jan 27 09:05:19 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [NOTICE]   (329199) : path to executable is /usr/sbin/haproxy
Jan 27 09:05:19 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [WARNING]  (329199) : Exiting Master process...
Jan 27 09:05:19 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [WARNING]  (329199) : Exiting Master process...
Jan 27 09:05:19 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [ALERT]    (329199) : Current worker (329201) exited with code 143 (Terminated)
Jan 27 09:05:19 np0005597378 neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151[329195]: [WARNING]  (329199) : All workers exited. Exiting... (0)
Jan 27 09:05:19 np0005597378 systemd[1]: libpod-8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73.scope: Deactivated successfully.
Jan 27 09:05:19 np0005597378 podman[329231]: 2026-01-27 14:05:19.594253572 +0000 UTC m=+0.044315984 container died 8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.626 238945 INFO nova.virt.libvirt.driver [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Instance destroyed successfully.
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.626 238945 DEBUG nova.objects.instance [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lazy-loading 'resources' on Instance uuid 2b352ec7-34b6-47bb-af67-779b4d1f27cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:05:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73-userdata-shm.mount: Deactivated successfully.
Jan 27 09:05:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2799d2b970b2b7a9687c31fea57f1671d0f79b14a9e46baf71b785dc56e51d5e-merged.mount: Deactivated successfully.
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.638 238945 DEBUG nova.virt.libvirt.vif [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-281114499',display_name='tempest-ServerActionsTestJSON-server-281114499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-281114499',id=99,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEvRd+IY7J2aC6E9RgWawikZonprZm9Q8AqxJ3+oPoKigVSs4XDlHsdJq7NrUsO73nspoXvZYMqqtcMHiKq3YCjSpdNYsRNABYnL0LaMxEWFr9mis93NV9bvZrqj1hVUVw==',key_name='tempest-keypair-1750182752',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d73908bf91048dd99fbe4b9a8bcce9a',ramdisk_id='',reservation_id='r-y6ov8a5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1485108214',owner_user_name='tempest-ServerActionsTestJSON-1485108214-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:05:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d8b6fd848f3a4701b63086a5fb386473',uuid=2b352ec7-34b6-47bb-af67-779b4d1f27cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.639 238945 DEBUG nova.network.os_vif_util [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converting VIF {"id": "058b32ea-7973-4220-91fa-58dc678da20a", "address": "fa:16:3e:76:b6:89", "network": {"id": "b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-354423393-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d73908bf91048dd99fbe4b9a8bcce9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap058b32ea-79", "ovs_interfaceid": "058b32ea-7973-4220-91fa-58dc678da20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.639 238945 DEBUG nova.network.os_vif_util [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.640 238945 DEBUG os_vif [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.641 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.641 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap058b32ea-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.643 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.644 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:19 np0005597378 podman[329231]: 2026-01-27 14:05:19.648500492 +0000 UTC m=+0.098562894 container cleanup 8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.647 238945 INFO os_vif [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:b6:89,bridge_name='br-int',has_traffic_filtering=True,id=058b32ea-7973-4220-91fa-58dc678da20a,network=Network(b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap058b32ea-79')
Jan 27 09:05:19 np0005597378 systemd[1]: libpod-conmon-8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73.scope: Deactivated successfully.
Jan 27 09:05:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:19Z|00998|binding|INFO|Releasing lport a5a4d358-d6d7-4d0a-b8fd-21631aca1cd4 from this chassis (sb_readonly=0)
Jan 27 09:05:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:19Z|00999|binding|INFO|Releasing lport 9fc164ea-bedd-4f63-8b81-7b9d3c502aeb from this chassis (sb_readonly=0)
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:19 np0005597378 podman[329276]: 2026-01-27 14:05:19.758161372 +0000 UTC m=+0.080789265 container remove 8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.764 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4e3732-7a85-494d-bd4d-0869c1cfaa93]: (4, ('Tue Jan 27 02:05:19 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73)\n8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73\nTue Jan 27 02:05:19 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 (8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73)\n8fdec0e9db2a83d109caca22b83e7f21a33e25a5bf98da92bc699e9c8f7bcd73\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.765 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ffcebf7d-263f-4042-b07b-979650208ea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.766 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5dcf6e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.768 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:19 np0005597378 kernel: tapb5dcf6e0-70: left promiscuous mode
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.786 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[43f93984-d846-4873-8429-4d511bf844a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.802 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[116ed602-6ec6-44a1-8105-9aa554b08d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.804 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f9188a37-e516-48a0-9f44-8835a1cfe257]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.820 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05db8111-b9f6-4bb4-ba8c-af49b16d8547]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535699, 'reachable_time': 31445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329299, 'error': None, 'target': 'ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:05:19 np0005597378 systemd[1]: run-netns-ovnmeta\x2db5dcf6e0\x2d7c15\x2d42fb\x2d8f7e\x2d747d7fd9f151.mount: Deactivated successfully.
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.824 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5dcf6e0-7c15-42fb-8f7e-747d7fd9f151 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 09:05:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:19.824 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[65e89634-3967-41b0-b307-ef83caef5c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.946 238945 INFO nova.virt.libvirt.driver [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Deleting instance files /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd_del
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.947 238945 INFO nova.virt.libvirt.driver [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Deletion of /var/lib/nova/instances/2b352ec7-34b6-47bb-af67-779b4d1f27cd_del complete
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.993 238945 INFO nova.compute.manager [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Took 0.61 seconds to destroy the instance on the hypervisor.
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.993 238945 DEBUG oslo.service.loopingcall [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.994 238945 DEBUG nova.compute.manager [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 09:05:19 np0005597378 nova_compute[238941]: 2026-01-27 14:05:19.994 238945 DEBUG nova.network.neutron [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.341 155189 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.341 155189 INFO eventlet.wsgi.server [-] 10.100.0.10,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.5295093
Jan 27 09:05:20 np0005597378 haproxy-metadata-proxy-b8227184-a0b2-457f-9458-e3d8638d23a8[327902]: 10.100.0.10:59572 [27/Jan/2026:14:05:18.809] listener listener/metadata 0/0/0/1532/1532 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.467 155189 DEBUG eventlet.wsgi.server [-] (155189) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.469 155189 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: Accept: */*
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: Connection: close
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: Content-Length: 100
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: Content-Type: application/x-www-form-urlencoded
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: Host: 169.254.169.254
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: User-Agent: curl/7.84.0
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: X-Forwarded-For: 10.100.0.10
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: X-Ovn-Network-Id: b8227184-a0b2-457f-9458-e3d8638d23a8
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]:
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 27 09:05:20 np0005597378 nova_compute[238941]: 2026-01-27 14:05:20.673 238945 DEBUG nova.compute.manager [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:05:20 np0005597378 nova_compute[238941]: 2026-01-27 14:05:20.673 238945 DEBUG oslo_concurrency.lockutils [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:05:20 np0005597378 nova_compute[238941]: 2026-01-27 14:05:20.674 238945 DEBUG oslo_concurrency.lockutils [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:05:20 np0005597378 nova_compute[238941]: 2026-01-27 14:05:20.674 238945 DEBUG oslo_concurrency.lockutils [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:05:20 np0005597378 nova_compute[238941]: 2026-01-27 14:05:20.674 238945 DEBUG nova.compute.manager [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:05:20 np0005597378 nova_compute[238941]: 2026-01-27 14:05:20.674 238945 WARNING nova.compute.manager [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state deleting.
Jan 27 09:05:20 np0005597378 nova_compute[238941]: 2026-01-27 14:05:20.674 238945 DEBUG nova.compute.manager [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:05:20 np0005597378 nova_compute[238941]: 2026-01-27 14:05:20.675 238945 DEBUG oslo_concurrency.lockutils [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:05:20 np0005597378 nova_compute[238941]: 2026-01-27 14:05:20.675 238945 DEBUG oslo_concurrency.lockutils [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:05:20 np0005597378 nova_compute[238941]: 2026-01-27 14:05:20.675 238945 DEBUG oslo_concurrency.lockutils [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:05:20 np0005597378 nova_compute[238941]: 2026-01-27 14:05:20.675 238945 DEBUG nova.compute.manager [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:05:20 np0005597378 nova_compute[238941]: 2026-01-27 14:05:20.676 238945 WARNING nova.compute.manager [req-e7904417-1bed-4d69-918f-1eb210db234e req-021dd060-6518-4a5f-a970-a58f6889b8a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state active and task_state deleting.
Jan 27 09:05:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 183 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 655 KiB/s rd, 1.4 MiB/s wr, 104 op/s
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.750 155189 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.750 155189 INFO eventlet.wsgi.server [-] 10.100.0.10,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2815685
Jan 27 09:05:20 np0005597378 haproxy-metadata-proxy-b8227184-a0b2-457f-9458-e3d8638d23a8[327902]: 10.100.0.10:51704 [27/Jan/2026:14:05:20.466] listener listener/metadata 0/0/0/284/284 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.947 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:05:20 np0005597378 nova_compute[238941]: 2026-01-27 14:05:20.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:20.948 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:05:21 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:21Z|01000|binding|INFO|Releasing lport a5a4d358-d6d7-4d0a-b8fd-21631aca1cd4 from this chassis (sb_readonly=0)
Jan 27 09:05:21 np0005597378 nova_compute[238941]: 2026-01-27 14:05:21.332 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:21 np0005597378 nova_compute[238941]: 2026-01-27 14:05:21.737 238945 DEBUG nova.network.neutron [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:05:21 np0005597378 nova_compute[238941]: 2026-01-27 14:05:21.762 238945 INFO nova.compute.manager [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Took 1.77 seconds to deallocate network for instance.
Jan 27 09:05:21 np0005597378 nova_compute[238941]: 2026-01-27 14:05:21.822 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:05:21 np0005597378 nova_compute[238941]: 2026-01-27 14:05:21.822 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:05:21 np0005597378 nova_compute[238941]: 2026-01-27 14:05:21.854 238945 DEBUG nova.compute.manager [req-167d0b05-a768-4602-a673-73bb915e3f23 req-27bd4169-9622-42b0-983e-34977ac0f86b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-deleted-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:05:21 np0005597378 nova_compute[238941]: 2026-01-27 14:05:21.908 238945 DEBUG oslo_concurrency.processutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.294 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:05:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3091620028' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.512 238945 DEBUG oslo_concurrency.processutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.520 238945 DEBUG nova.compute.provider_tree [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.620 238945 DEBUG nova.scheduler.client.report [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:05:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 183 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 316 KiB/s rd, 1.1 MiB/s wr, 77 op/s
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.870 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.871 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.873 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.873 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.874 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.876 238945 INFO nova.compute.manager [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Terminating instance
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.878 238945 DEBUG nova.compute.manager [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.882 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.890 238945 DEBUG nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.891 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.892 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.892 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.892 238945 DEBUG nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.893 238945 WARNING nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state deleted and task_state None.
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.893 238945 DEBUG nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.893 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.894 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.894 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.894 238945 DEBUG nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.895 238945 WARNING nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-unplugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state deleted and task_state None.
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.895 238945 DEBUG nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.896 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.896 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.896 238945 DEBUG oslo_concurrency.lockutils [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.897 238945 DEBUG nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] No waiting events found dispatching network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.897 238945 WARNING nova.compute.manager [req-ab706ed6-4c82-42c1-bb3c-dd659783a581 req-e45cda05-b26c-4ad0-8e2c-7a90a6c2164f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Received unexpected event network-vif-plugged-058b32ea-7973-4220-91fa-58dc678da20a for instance with vm_state deleted and task_state None.
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.946 238945 INFO nova.scheduler.client.report [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Deleted allocations for instance 2b352ec7-34b6-47bb-af67-779b4d1f27cd
Jan 27 09:05:22 np0005597378 kernel: tapddc57d7c-5a (unregistering): left promiscuous mode
Jan 27 09:05:22 np0005597378 NetworkManager[48904]: <info>  [1769522722.9586] device (tapddc57d7c-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:05:22 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:22Z|01001|binding|INFO|Releasing lport ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 from this chassis (sb_readonly=0)
Jan 27 09:05:22 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:22Z|01002|binding|INFO|Setting lport ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 down in Southbound
Jan 27 09:05:22 np0005597378 ovn_controller[144812]: 2026-01-27T14:05:22Z|01003|binding|INFO|Removing iface tapddc57d7c-5a ovn-installed in OVS
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.962 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.966 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:22 np0005597378 nova_compute[238941]: 2026-01-27 14:05:22.983 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:05:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:22.991 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:c4:6a 10.100.0.10'], port_security=['fa:16:3e:4b:c4:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bf112e8f-c8b9-4e70-a0ee-3024945722aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8227184-a0b2-457f-9458-e3d8638d23a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '126cdd69cb3d443c8ce2da310e0d0ba7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2866f282-f823-4d38-9ed0-28ed718ea4d3 b347b5f2-7cf0-4389-9ef4-8349a580e7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bd8e682-d603-4ddb-8447-eea4c78d8c2e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:05:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:22.993 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 in datapath b8227184-a0b2-457f-9458-e3d8638d23a8 unbound from our chassis
Jan 27 09:05:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:22.994 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8227184-a0b2-457f-9458-e3d8638d23a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 09:05:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:22.995 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa0ef82-cfe8-4327-8d46-5521aa9f43b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:05:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:22.996 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8 namespace which is not needed anymore
Jan 27 09:05:23 np0005597378 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000067.scope: Deactivated successfully.
Jan 27 09:05:23 np0005597378 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000067.scope: Consumed 13.828s CPU time.
Jan 27 09:05:23 np0005597378 systemd-machined[207425]: Machine qemu-126-instance-00000067 terminated.
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.066 238945 DEBUG oslo_concurrency.lockutils [None req-f40a6172-1b6d-4962-8b0f-ff920ba769da d8b6fd848f3a4701b63086a5fb386473 6d73908bf91048dd99fbe4b9a8bcce9a - - default default] Lock "2b352ec7-34b6-47bb-af67-779b4d1f27cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:23 np0005597378 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [NOTICE]   (327900) : haproxy version is 2.8.14-c23fe91
Jan 27 09:05:23 np0005597378 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [NOTICE]   (327900) : path to executable is /usr/sbin/haproxy
Jan 27 09:05:23 np0005597378 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [WARNING]  (327900) : Exiting Master process...
Jan 27 09:05:23 np0005597378 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [WARNING]  (327900) : Exiting Master process...
Jan 27 09:05:23 np0005597378 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [ALERT]    (327900) : Current worker (327902) exited with code 143 (Terminated)
Jan 27 09:05:23 np0005597378 neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8[327896]: [WARNING]  (327900) : All workers exited. Exiting... (0)
Jan 27 09:05:23 np0005597378 systemd[1]: libpod-25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85.scope: Deactivated successfully.
Jan 27 09:05:23 np0005597378 podman[329346]: 2026-01-27 14:05:23.131717866 +0000 UTC m=+0.043301105 container died 25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.137 238945 INFO nova.virt.libvirt.driver [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Instance destroyed successfully.#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.137 238945 DEBUG nova.objects.instance [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lazy-loading 'resources' on Instance uuid bf112e8f-c8b9-4e70-a0ee-3024945722aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.150 238945 DEBUG nova.virt.libvirt.vif [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1731374456',display_name='tempest-TestServerBasicOps-server-1731374456',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1731374456',id=103,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJYCiWTKcHUlPm+XAxp5EGDuYWOZbR/Sgh/L3oWq5tGrIoiHD+N+kLQ55ZP7QxRv/5HMcwgKFb3+Sd+ixC35turrRyVFex50LDNIdV9vs6C6I+w6n/gReHuAdGrtc7shg==',key_name='tempest-TestServerBasicOps-819815299',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='126cdd69cb3d443c8ce2da310e0d0ba7',ramdisk_id='',reservation_id='r-roae2lh3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-574754001',owner_user_name='tempest-TestServerBasicOps-574754001-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:05:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce6bdc56696940428e2cdd474d4d48de',uuid=bf112e8f-c8b9-4e70-a0ee-3024945722aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.151 238945 DEBUG nova.network.os_vif_util [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Converting VIF {"id": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "address": "fa:16:3e:4b:c4:6a", "network": {"id": "b8227184-a0b2-457f-9458-e3d8638d23a8", "bridge": "br-int", "label": "tempest-TestServerBasicOps-487619298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "126cdd69cb3d443c8ce2da310e0d0ba7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddc57d7c-5a", "ovs_interfaceid": "ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.152 238945 DEBUG nova.network.os_vif_util [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.152 238945 DEBUG os_vif [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.154 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddc57d7c-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.155 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.160 238945 INFO os_vif [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:c4:6a,bridge_name='br-int',has_traffic_filtering=True,id=ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2,network=Network(b8227184-a0b2-457f-9458-e3d8638d23a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddc57d7c-5a')#033[00m
Jan 27 09:05:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85-userdata-shm.mount: Deactivated successfully.
Jan 27 09:05:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-bd9d59e33be80c52a289ab36ae73467414f6cf2208d0fcfd15c0a3f7dcb92608-merged.mount: Deactivated successfully.
Jan 27 09:05:23 np0005597378 podman[329346]: 2026-01-27 14:05:23.18463583 +0000 UTC m=+0.096219049 container cleanup 25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 27 09:05:23 np0005597378 systemd[1]: libpod-conmon-25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85.scope: Deactivated successfully.
Jan 27 09:05:23 np0005597378 podman[329397]: 2026-01-27 14:05:23.302357208 +0000 UTC m=+0.093324111 container remove 25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:05:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.311 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c73257-f3d3-4fe7-8708-22b2842bd961]: (4, ('Tue Jan 27 02:05:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8 (25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85)\n25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85\nTue Jan 27 02:05:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8 (25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85)\n25c6f6fbac3fd3a6340446b8e24c20aa39a92dd1010fd183f9cf78124aa9fc85\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.313 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bb599f92-2318-4854-a751-ecfd4c95769c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.314 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8227184-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:23 np0005597378 kernel: tapb8227184-a0: left promiscuous mode
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.332 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.336 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ff44cf18-1006-4ab6-a4a6-41b825a1bddc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.351 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8e78b39e-5ed8-4276-8915-4668cdd13dd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.352 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[12437baa-90ed-4486-aca6-2c8a8b04e173]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.367 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dae7ba3d-ab09-450e-8e43-5d499c6478aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533258, 'reachable_time': 28005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329416, 'error': None, 'target': 'ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.370 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8227184-a0b2-457f-9458-e3d8638d23a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:05:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:23.370 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[026813e2-5037-41a5-9f54-fd10f752782a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:05:23 np0005597378 systemd[1]: run-netns-ovnmeta\x2db8227184\x2da0b2\x2d457f\x2d9458\x2de3d8638d23a8.mount: Deactivated successfully.
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.505 238945 INFO nova.virt.libvirt.driver [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Deleting instance files /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa_del#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.506 238945 INFO nova.virt.libvirt.driver [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Deletion of /var/lib/nova/instances/bf112e8f-c8b9-4e70-a0ee-3024945722aa_del complete#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.556 238945 INFO nova.compute.manager [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.557 238945 DEBUG oslo.service.loopingcall [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.557 238945 DEBUG nova.compute.manager [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:05:23 np0005597378 nova_compute[238941]: 2026-01-27 14:05:23.557 238945 DEBUG nova.network.neutron [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:05:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 85 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 1.1 MiB/s wr, 101 op/s
Jan 27 09:05:24 np0005597378 nova_compute[238941]: 2026-01-27 14:05:24.975 238945 DEBUG nova.compute.manager [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-vif-unplugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:05:24 np0005597378 nova_compute[238941]: 2026-01-27 14:05:24.975 238945 DEBUG oslo_concurrency.lockutils [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:24 np0005597378 nova_compute[238941]: 2026-01-27 14:05:24.977 238945 DEBUG oslo_concurrency.lockutils [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:24 np0005597378 nova_compute[238941]: 2026-01-27 14:05:24.977 238945 DEBUG oslo_concurrency.lockutils [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:24 np0005597378 nova_compute[238941]: 2026-01-27 14:05:24.978 238945 DEBUG nova.compute.manager [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] No waiting events found dispatching network-vif-unplugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:05:24 np0005597378 nova_compute[238941]: 2026-01-27 14:05:24.978 238945 DEBUG nova.compute.manager [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-vif-unplugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:05:24 np0005597378 nova_compute[238941]: 2026-01-27 14:05:24.978 238945 DEBUG nova.compute.manager [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:05:24 np0005597378 nova_compute[238941]: 2026-01-27 14:05:24.979 238945 DEBUG oslo_concurrency.lockutils [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:24 np0005597378 nova_compute[238941]: 2026-01-27 14:05:24.979 238945 DEBUG oslo_concurrency.lockutils [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:24 np0005597378 nova_compute[238941]: 2026-01-27 14:05:24.979 238945 DEBUG oslo_concurrency.lockutils [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:24 np0005597378 nova_compute[238941]: 2026-01-27 14:05:24.980 238945 DEBUG nova.compute.manager [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] No waiting events found dispatching network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:05:24 np0005597378 nova_compute[238941]: 2026-01-27 14:05:24.981 238945 WARNING nova.compute.manager [req-9889929f-8d6b-41b7-bfbe-32207c768df9 req-4ff8d369-f57d-4ebd-b6e6-26abdd02ccd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received unexpected event network-vif-plugged-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:05:25 np0005597378 nova_compute[238941]: 2026-01-27 14:05:25.575 238945 DEBUG nova.network.neutron [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:05:25 np0005597378 nova_compute[238941]: 2026-01-27 14:05:25.598 238945 INFO nova.compute.manager [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Took 2.04 seconds to deallocate network for instance.#033[00m
Jan 27 09:05:25 np0005597378 nova_compute[238941]: 2026-01-27 14:05:25.675 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:25 np0005597378 nova_compute[238941]: 2026-01-27 14:05:25.676 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:25 np0005597378 nova_compute[238941]: 2026-01-27 14:05:25.693 238945 DEBUG nova.compute.manager [req-9fa43908-d288-45cd-914f-f5f85c27b595 req-d5c513b2-ea88-4a6a-91a9-64f69032c5a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Received event network-vif-deleted-ddc57d7c-5a93-4b67-8d9a-5d1b8d8798b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:05:25 np0005597378 nova_compute[238941]: 2026-01-27 14:05:25.747 238945 DEBUG oslo_concurrency.processutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:05:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:05:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2286648017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:05:26 np0005597378 nova_compute[238941]: 2026-01-27 14:05:26.310 238945 DEBUG oslo_concurrency.processutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:05:26 np0005597378 nova_compute[238941]: 2026-01-27 14:05:26.317 238945 DEBUG nova.compute.provider_tree [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:05:26 np0005597378 nova_compute[238941]: 2026-01-27 14:05:26.348 238945 DEBUG nova.scheduler.client.report [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:05:26 np0005597378 nova_compute[238941]: 2026-01-27 14:05:26.380 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:26 np0005597378 nova_compute[238941]: 2026-01-27 14:05:26.417 238945 INFO nova.scheduler.client.report [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Deleted allocations for instance bf112e8f-c8b9-4e70-a0ee-3024945722aa#033[00m
Jan 27 09:05:26 np0005597378 nova_compute[238941]: 2026-01-27 14:05:26.548 238945 DEBUG oslo_concurrency.lockutils [None req-e7efd92e-7487-4dbf-964a-ea651a35a41c ce6bdc56696940428e2cdd474d4d48de 126cdd69cb3d443c8ce2da310e0d0ba7 - - default default] Lock "bf112e8f-c8b9-4e70-a0ee-3024945722aa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 69 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 53 KiB/s wr, 83 op/s
Jan 27 09:05:27 np0005597378 nova_compute[238941]: 2026-01-27 14:05:27.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003154855023807805 of space, bias 1.0, pg target 0.09464565071423414 quantized to 32 (current 32)
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667741415579353 of space, bias 1.0, pg target 0.2003224246738059 quantized to 32 (current 32)
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0710749347018817e-06 of space, bias 4.0, pg target 0.001285289921642258 quantized to 16 (current 16)
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:05:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:05:28 np0005597378 nova_compute[238941]: 2026-01-27 14:05:28.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 15 KiB/s wr, 63 op/s
Jan 27 09:05:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 KiB/s wr, 57 op/s
Jan 27 09:05:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:30.950 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:05:32 np0005597378 nova_compute[238941]: 2026-01-27 14:05:32.299 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Jan 27 09:05:33 np0005597378 nova_compute[238941]: 2026-01-27 14:05:33.160 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:33 np0005597378 nova_compute[238941]: 2026-01-27 14:05:33.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:33 np0005597378 nova_compute[238941]: 2026-01-27 14:05:33.815 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:34 np0005597378 nova_compute[238941]: 2026-01-27 14:05:34.625 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522719.6244423, 2b352ec7-34b6-47bb-af67-779b4d1f27cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:05:34 np0005597378 nova_compute[238941]: 2026-01-27 14:05:34.626 238945 INFO nova.compute.manager [-] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:05:34 np0005597378 nova_compute[238941]: 2026-01-27 14:05:34.661 238945 DEBUG nova.compute.manager [None req-8d184741-bb39-4dc5-82ba-a0baca837d3b - - - - - -] [instance: 2b352ec7-34b6-47bb-af67-779b4d1f27cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:05:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Jan 27 09:05:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 27 09:05:37 np0005597378 nova_compute[238941]: 2026-01-27 14:05:37.302 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:38 np0005597378 nova_compute[238941]: 2026-01-27 14:05:38.136 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522723.1353464, bf112e8f-c8b9-4e70-a0ee-3024945722aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:05:38 np0005597378 nova_compute[238941]: 2026-01-27 14:05:38.136 238945 INFO nova.compute.manager [-] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:05:38 np0005597378 nova_compute[238941]: 2026-01-27 14:05:38.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:38 np0005597378 nova_compute[238941]: 2026-01-27 14:05:38.164 238945 DEBUG nova.compute.manager [None req-772f3224-6244-4b42-806a-be9fb90c27dc - - - - - -] [instance: bf112e8f-c8b9-4e70-a0ee-3024945722aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:05:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
Jan 27 09:05:38 np0005597378 podman[329442]: 2026-01-27 14:05:38.755116822 +0000 UTC m=+0.086093538 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 09:05:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:39 np0005597378 podman[329461]: 2026-01-27 14:05:39.763899625 +0000 UTC m=+0.101115352 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:05:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:05:42 np0005597378 nova_compute[238941]: 2026-01-27 14:05:42.303 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:05:43 np0005597378 nova_compute[238941]: 2026-01-27 14:05:43.164 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:05:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:46.312 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:46.313 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:05:46.313 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:05:47 np0005597378 nova_compute[238941]: 2026-01-27 14:05:47.305 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:05:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:05:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:05:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:05:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:05:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:05:48 np0005597378 nova_compute[238941]: 2026-01-27 14:05:48.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:05:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:05:52 np0005597378 nova_compute[238941]: 2026-01-27 14:05:52.308 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:05:53 np0005597378 nova_compute[238941]: 2026-01-27 14:05:53.168 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:54 np0005597378 nova_compute[238941]: 2026-01-27 14:05:54.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:05:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:05:54 np0005597378 nova_compute[238941]: 2026-01-27 14:05:54.825 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:54 np0005597378 nova_compute[238941]: 2026-01-27 14:05:54.825 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:54 np0005597378 nova_compute[238941]: 2026-01-27 14:05:54.851 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.068 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.069 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.076 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.076 238945 INFO nova.compute.claims [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.262 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:05:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4020036755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.808 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.815 238945 DEBUG nova.compute.provider_tree [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.839 238945 DEBUG nova.scheduler.client.report [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.872 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.873 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.875 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.875 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.875 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.876 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.967 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.968 238945 DEBUG nova.network.neutron [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:05:55 np0005597378 nova_compute[238941]: 2026-01-27 14:05:55.994 238945 INFO nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.012 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.112 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.113 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.113 238945 INFO nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Creating image(s)#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.134 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.157 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.179 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.182 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.254 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.256 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.257 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.257 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.280 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.283 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 88b90bc8-8452-4809-8183-f11595e37b63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:05:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:05:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2014703910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.458 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.473 238945 DEBUG nova.policy [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7d75dbdade7c48688752f59fa51f8544', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea2ff4d6e3214ca0b4fb320f18286af4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.642 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.643 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3866MB free_disk=59.98769739829004GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.644 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.644 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.725 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 88b90bc8-8452-4809-8183-f11595e37b63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.725 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.725 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:05:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:05:56 np0005597378 nova_compute[238941]: 2026-01-27 14:05:56.790 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.046 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 88b90bc8-8452-4809-8183-f11595e37b63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.763s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.179 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] resizing rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.314 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:05:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2410597376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.361 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.368 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.456 238945 DEBUG nova.objects.instance [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lazy-loading 'migration_context' on Instance uuid 88b90bc8-8452-4809-8183-f11595e37b63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.506 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.571 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.572 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Ensure instance console log exists: /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.573 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.573 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.574 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.578 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.578 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.579 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.580 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 27 09:05:57 np0005597378 nova_compute[238941]: 2026-01-27 14:05:57.599 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 27 09:05:58 np0005597378 nova_compute[238941]: 2026-01-27 14:05:58.021 238945 DEBUG nova.network.neutron [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Successfully created port: bc3782af-4abd-4966-b05f-ae577558ed48 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:05:58 np0005597378 nova_compute[238941]: 2026-01-27 14:05:58.171 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:05:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 78 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 1.4 MiB/s wr, 4 op/s
Jan 27 09:05:59 np0005597378 nova_compute[238941]: 2026-01-27 14:05:59.263 238945 DEBUG nova.network.neutron [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Successfully updated port: bc3782af-4abd-4966-b05f-ae577558ed48 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:05:59 np0005597378 nova_compute[238941]: 2026-01-27 14:05:59.284 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "refresh_cache-88b90bc8-8452-4809-8183-f11595e37b63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:05:59 np0005597378 nova_compute[238941]: 2026-01-27 14:05:59.285 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquired lock "refresh_cache-88b90bc8-8452-4809-8183-f11595e37b63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:05:59 np0005597378 nova_compute[238941]: 2026-01-27 14:05:59.285 238945 DEBUG nova.network.neutron [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:05:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:05:59 np0005597378 nova_compute[238941]: 2026-01-27 14:05:59.368 238945 DEBUG nova.compute.manager [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-changed-bc3782af-4abd-4966-b05f-ae577558ed48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:05:59 np0005597378 nova_compute[238941]: 2026-01-27 14:05:59.368 238945 DEBUG nova.compute.manager [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Refreshing instance network info cache due to event network-changed-bc3782af-4abd-4966-b05f-ae577558ed48. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:05:59 np0005597378 nova_compute[238941]: 2026-01-27 14:05:59.369 238945 DEBUG oslo_concurrency.lockutils [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-88b90bc8-8452-4809-8183-f11595e37b63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:05:59 np0005597378 nova_compute[238941]: 2026-01-27 14:05:59.431 238945 DEBUG nova.network.neutron [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:05:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:05:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/213750184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:05:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:05:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/213750184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:05:59 np0005597378 nova_compute[238941]: 2026-01-27 14:05:59.600 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.225 238945 DEBUG nova.network.neutron [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Updating instance_info_cache with network_info: [{"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.247 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Releasing lock "refresh_cache-88b90bc8-8452-4809-8183-f11595e37b63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.248 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Instance network_info: |[{"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.248 238945 DEBUG oslo_concurrency.lockutils [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-88b90bc8-8452-4809-8183-f11595e37b63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.248 238945 DEBUG nova.network.neutron [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Refreshing network info cache for port bc3782af-4abd-4966-b05f-ae577558ed48 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.251 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Start _get_guest_xml network_info=[{"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.262 238945 WARNING nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.271 238945 DEBUG nova.virt.libvirt.host [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.272 238945 DEBUG nova.virt.libvirt.host [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.279 238945 DEBUG nova.virt.libvirt.host [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.280 238945 DEBUG nova.virt.libvirt.host [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.280 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.281 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.281 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.281 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.282 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.282 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.282 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.282 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.283 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.283 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.283 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.283 238945 DEBUG nova.virt.hardware [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.286 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.396 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.396 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:06:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:06:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:06:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3973712092' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.860 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.883 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:06:00 np0005597378 nova_compute[238941]: 2026-01-27 14:06:00.887 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:06:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4291857985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.436 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.438 238945 DEBUG nova.virt.libvirt.vif [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:05:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-920130386',display_name='tempest-ServerAddressesNegativeTestJSON-server-920130386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-920130386',id=104,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea2ff4d6e3214ca0b4fb320f18286af4',ramdisk_id='',reservation_id='r-y1zl0bdo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-989610197',owner_user_name='tempest-ServerAddressesNegativeTestJSON-989610197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:05:56Z,user_data=None,user_id='7d75dbdade7c48688752f59fa51f8544',uuid=88b90bc8-8452-4809-8183-f11595e37b63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.438 238945 DEBUG nova.network.os_vif_util [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Converting VIF {"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.439 238945 DEBUG nova.network.os_vif_util [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.440 238945 DEBUG nova.objects.instance [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 88b90bc8-8452-4809-8183-f11595e37b63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.458 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  <uuid>88b90bc8-8452-4809-8183-f11595e37b63</uuid>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  <name>instance-00000068</name>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerAddressesNegativeTestJSON-server-920130386</nova:name>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:06:00</nova:creationTime>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:        <nova:user uuid="7d75dbdade7c48688752f59fa51f8544">tempest-ServerAddressesNegativeTestJSON-989610197-project-member</nova:user>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:        <nova:project uuid="ea2ff4d6e3214ca0b4fb320f18286af4">tempest-ServerAddressesNegativeTestJSON-989610197</nova:project>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:        <nova:port uuid="bc3782af-4abd-4966-b05f-ae577558ed48">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <entry name="serial">88b90bc8-8452-4809-8183-f11595e37b63</entry>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <entry name="uuid">88b90bc8-8452-4809-8183-f11595e37b63</entry>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/88b90bc8-8452-4809-8183-f11595e37b63_disk">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/88b90bc8-8452-4809-8183-f11595e37b63_disk.config">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:0b:36:85"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <target dev="tapbc3782af-4a"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/console.log" append="off"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:06:01 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:06:01 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:06:01 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:06:01 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.459 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Preparing to wait for external event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.460 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.460 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.460 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.461 238945 DEBUG nova.virt.libvirt.vif [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:05:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-920130386',display_name='tempest-ServerAddressesNegativeTestJSON-server-920130386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-920130386',id=104,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea2ff4d6e3214ca0b4fb320f18286af4',ramdisk_id='',reservation_id='r-y1zl0bdo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-98961019
7',owner_user_name='tempest-ServerAddressesNegativeTestJSON-989610197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:05:56Z,user_data=None,user_id='7d75dbdade7c48688752f59fa51f8544',uuid=88b90bc8-8452-4809-8183-f11595e37b63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.461 238945 DEBUG nova.network.os_vif_util [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Converting VIF {"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.462 238945 DEBUG nova.network.os_vif_util [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.462 238945 DEBUG os_vif [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.463 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.463 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.463 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.466 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc3782af-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.467 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc3782af-4a, col_values=(('external_ids', {'iface-id': 'bc3782af-4abd-4966-b05f-ae577558ed48', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:36:85', 'vm-uuid': '88b90bc8-8452-4809-8183-f11595e37b63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.468 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:01 np0005597378 NetworkManager[48904]: <info>  [1769522761.4690] manager: (tapbc3782af-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.471 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.474 238945 INFO os_vif [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a')#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.596 238945 DEBUG nova.network.neutron [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Updated VIF entry in instance network info cache for port bc3782af-4abd-4966-b05f-ae577558ed48. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.597 238945 DEBUG nova.network.neutron [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Updating instance_info_cache with network_info: [{"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.622 238945 DEBUG oslo_concurrency.lockutils [req-f359b38b-464a-4c30-8c1b-d2d184d96257 req-d018d16b-eef4-4fa5-b086-7fecbe185532 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-88b90bc8-8452-4809-8183-f11595e37b63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.664 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.664 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.664 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] No VIF found with MAC fa:16:3e:0b:36:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.665 238945 INFO nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Using config drive#033[00m
Jan 27 09:06:01 np0005597378 nova_compute[238941]: 2026-01-27 14:06:01.787 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:06:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.075 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:06:02 np0005597378 nova_compute[238941]: 2026-01-27 14:06:02.076 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.077 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:06:02 np0005597378 nova_compute[238941]: 2026-01-27 14:06:02.242 238945 INFO nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Creating config drive at /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/disk.config#033[00m
Jan 27 09:06:02 np0005597378 nova_compute[238941]: 2026-01-27 14:06:02.252 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdq6onesu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:02 np0005597378 nova_compute[238941]: 2026-01-27 14:06:02.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:02 np0005597378 nova_compute[238941]: 2026-01-27 14:06:02.402 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdq6onesu" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:02 np0005597378 nova_compute[238941]: 2026-01-27 14:06:02.434 238945 DEBUG nova.storage.rbd_utils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] rbd image 88b90bc8-8452-4809-8183-f11595e37b63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:06:02 np0005597378 nova_compute[238941]: 2026-01-27 14:06:02.440 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/disk.config 88b90bc8-8452-4809-8183-f11595e37b63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:06:02 np0005597378 nova_compute[238941]: 2026-01-27 14:06:02.872 238945 DEBUG oslo_concurrency.processutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/disk.config 88b90bc8-8452-4809-8183-f11595e37b63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:02 np0005597378 nova_compute[238941]: 2026-01-27 14:06:02.873 238945 INFO nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Deleting local config drive /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63/disk.config because it was imported into RBD.#033[00m
Jan 27 09:06:02 np0005597378 kernel: tapbc3782af-4a: entered promiscuous mode
Jan 27 09:06:02 np0005597378 NetworkManager[48904]: <info>  [1769522762.9277] manager: (tapbc3782af-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Jan 27 09:06:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:06:02Z|01004|binding|INFO|Claiming lport bc3782af-4abd-4966-b05f-ae577558ed48 for this chassis.
Jan 27 09:06:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:06:02Z|01005|binding|INFO|bc3782af-4abd-4966-b05f-ae577558ed48: Claiming fa:16:3e:0b:36:85 10.100.0.4
Jan 27 09:06:02 np0005597378 nova_compute[238941]: 2026-01-27 14:06:02.929 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:02 np0005597378 nova_compute[238941]: 2026-01-27 14:06:02.933 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.941 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:36:85 10.100.0.4'], port_security=['fa:16:3e:0b:36:85 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '88b90bc8-8452-4809-8183-f11595e37b63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea2ff4d6e3214ca0b4fb320f18286af4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '119c6585-5922-418b-ab3b-9aa64f75b233', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0132118e-2104-470f-94a1-3814c3ec6b99, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=bc3782af-4abd-4966-b05f-ae577558ed48) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:06:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.942 154802 INFO neutron.agent.ovn.metadata.agent [-] Port bc3782af-4abd-4966-b05f-ae577558ed48 in datapath a33fdafe-6ac5-4ae1-bf0e-52644ae18217 bound to our chassis#033[00m
Jan 27 09:06:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.944 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a33fdafe-6ac5-4ae1-bf0e-52644ae18217#033[00m
Jan 27 09:06:02 np0005597378 systemd-udevd[329855]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:06:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.959 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6168de45-b12a-4421-94dc-12af215f247c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.960 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa33fdafe-61 in ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:06:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.961 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa33fdafe-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:06:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.961 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[820a140d-278e-4c19-8eeb-b9846b16e05a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.962 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[34fa8a30-1db0-497d-9b1f-a203bc1bb817]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:02 np0005597378 systemd-machined[207425]: New machine qemu-129-instance-00000068.
Jan 27 09:06:02 np0005597378 NetworkManager[48904]: <info>  [1769522762.9731] device (tapbc3782af-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:06:02 np0005597378 NetworkManager[48904]: <info>  [1769522762.9738] device (tapbc3782af-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:06:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.980 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[34e0c7bb-63bb-4fef-83df-5622cff6f46c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:02 np0005597378 systemd[1]: Started Virtual Machine qemu-129-instance-00000068.
Jan 27 09:06:02 np0005597378 nova_compute[238941]: 2026-01-27 14:06:02.997 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:02.997 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd8d302-bd53-434e-b62c-8a072d1fd4e6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:03 np0005597378 ovn_controller[144812]: 2026-01-27T14:06:03Z|01006|binding|INFO|Setting lport bc3782af-4abd-4966-b05f-ae577558ed48 ovn-installed in OVS
Jan 27 09:06:03 np0005597378 ovn_controller[144812]: 2026-01-27T14:06:03Z|01007|binding|INFO|Setting lport bc3782af-4abd-4966-b05f-ae577558ed48 up in Southbound
Jan 27 09:06:03 np0005597378 nova_compute[238941]: 2026-01-27 14:06:03.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.031 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[71bf5a05-ad47-4f55-93b1-0c45eaedafdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:03 np0005597378 NetworkManager[48904]: <info>  [1769522763.0384] manager: (tapa33fdafe-60): new Veth device (/org/freedesktop/NetworkManager/Devices/414)
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.038 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[882a20f7-1b87-41dc-b43a-0c79bbcd7478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.073 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[97ad3ad3-5d11-4bc2-9387-c7c9b3f43e07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.076 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f638404b-ea34-4c63-a95d-eaf43e9a5eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:03 np0005597378 NetworkManager[48904]: <info>  [1769522763.1042] device (tapa33fdafe-60): carrier: link connected
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.111 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff0b615-9879-4b1f-840e-9d992309eee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.135 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5513976a-dff7-4527-baaf-9ee63f164153]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa33fdafe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:35:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540272, 'reachable_time': 18466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329888, 'error': None, 'target': 'ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.158 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e324241e-5202-4f27-9ed6-5a9e41ef0cc7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:3596'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540272, 'tstamp': 540272}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329889, 'error': None, 'target': 'ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.181 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[caa4437f-284f-41ad-a577-eb5b5bfca15c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa33fdafe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:35:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540272, 'reachable_time': 18466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329890, 'error': None, 'target': 'ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.224 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d62fbefc-68c2-49b0-b834-74d8a89bb6fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.300 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[80325a47-1427-4ef8-a66d-ab1e6aa2636d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.302 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa33fdafe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.303 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.304 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa33fdafe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:06:03 np0005597378 nova_compute[238941]: 2026-01-27 14:06:03.306 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:03 np0005597378 kernel: tapa33fdafe-60: entered promiscuous mode
Jan 27 09:06:03 np0005597378 NetworkManager[48904]: <info>  [1769522763.3108] manager: (tapa33fdafe-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.313 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa33fdafe-60, col_values=(('external_ids', {'iface-id': '80394024-f5f3-40c9-a1f0-4696e26bf4ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:06:03 np0005597378 nova_compute[238941]: 2026-01-27 14:06:03.314 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:03 np0005597378 ovn_controller[144812]: 2026-01-27T14:06:03Z|01008|binding|INFO|Releasing lport 80394024-f5f3-40c9-a1f0-4696e26bf4ca from this chassis (sb_readonly=0)
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.316 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a33fdafe-6ac5-4ae1-bf0e-52644ae18217.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a33fdafe-6ac5-4ae1-bf0e-52644ae18217.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.317 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90255aca-8b76-46fb-8ca0-6d1f84404494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.319 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-a33fdafe-6ac5-4ae1-bf0e-52644ae18217
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/a33fdafe-6ac5-4ae1-bf0e-52644ae18217.pid.haproxy
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID a33fdafe-6ac5-4ae1-bf0e-52644ae18217
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:06:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:03.320 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'env', 'PROCESS_TAG=haproxy-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a33fdafe-6ac5-4ae1-bf0e-52644ae18217.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:06:03 np0005597378 nova_compute[238941]: 2026-01-27 14:06:03.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:03 np0005597378 nova_compute[238941]: 2026-01-27 14:06:03.530 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522763.5294445, 88b90bc8-8452-4809-8183-f11595e37b63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:06:03 np0005597378 nova_compute[238941]: 2026-01-27 14:06:03.530 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] VM Started (Lifecycle Event)#033[00m
Jan 27 09:06:03 np0005597378 nova_compute[238941]: 2026-01-27 14:06:03.564 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:06:03 np0005597378 nova_compute[238941]: 2026-01-27 14:06:03.570 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522763.5295942, 88b90bc8-8452-4809-8183-f11595e37b63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:06:03 np0005597378 nova_compute[238941]: 2026-01-27 14:06:03.570 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:06:03 np0005597378 nova_compute[238941]: 2026-01-27 14:06:03.596 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:06:03 np0005597378 nova_compute[238941]: 2026-01-27 14:06:03.599 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:06:03 np0005597378 nova_compute[238941]: 2026-01-27 14:06:03.623 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:06:03 np0005597378 podman[329964]: 2026-01-27 14:06:03.746273312 +0000 UTC m=+0.057791275 container create eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 09:06:03 np0005597378 systemd[1]: Started libpod-conmon-eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5.scope.
Jan 27 09:06:03 np0005597378 podman[329964]: 2026-01-27 14:06:03.716227673 +0000 UTC m=+0.027745646 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:06:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:06:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4ab0c2cf9adb11841d98b5abb8dd0d7fcca98c2ac364c5baab25919d96da7d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:03 np0005597378 podman[329964]: 2026-01-27 14:06:03.839783176 +0000 UTC m=+0.151301159 container init eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 27 09:06:03 np0005597378 podman[329964]: 2026-01-27 14:06:03.845387227 +0000 UTC m=+0.156905190 container start eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 27 09:06:03 np0005597378 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [NOTICE]   (329983) : New worker (329985) forked
Jan 27 09:06:03 np0005597378 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [NOTICE]   (329983) : Loading success.
Jan 27 09:06:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:04 np0005597378 nova_compute[238941]: 2026-01-27 14:06:04.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:06:04 np0005597378 nova_compute[238941]: 2026-01-27 14:06:04.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:06:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 09:06:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:05.078 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:06:05 np0005597378 nova_compute[238941]: 2026-01-27 14:06:05.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:06:05 np0005597378 nova_compute[238941]: 2026-01-27 14:06:05.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:06:05 np0005597378 nova_compute[238941]: 2026-01-27 14:06:05.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.436 238945 DEBUG nova.compute.manager [req-c9f950b2-145b-4b31-8105-aaf4c6130099 req-c6698c68-522e-4567-b618-fd262be28db6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.436 238945 DEBUG oslo_concurrency.lockutils [req-c9f950b2-145b-4b31-8105-aaf4c6130099 req-c6698c68-522e-4567-b618-fd262be28db6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.437 238945 DEBUG oslo_concurrency.lockutils [req-c9f950b2-145b-4b31-8105-aaf4c6130099 req-c6698c68-522e-4567-b618-fd262be28db6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.437 238945 DEBUG oslo_concurrency.lockutils [req-c9f950b2-145b-4b31-8105-aaf4c6130099 req-c6698c68-522e-4567-b618-fd262be28db6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.437 238945 DEBUG nova.compute.manager [req-c9f950b2-145b-4b31-8105-aaf4c6130099 req-c6698c68-522e-4567-b618-fd262be28db6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Processing event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.438 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.442 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522766.4422681, 88b90bc8-8452-4809-8183-f11595e37b63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.442 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.444 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.447 238945 INFO nova.virt.libvirt.driver [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Instance spawned successfully.#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.448 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.467 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.470 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.472 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.487 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.487 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.488 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.488 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.489 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.489 238945 DEBUG nova.virt.libvirt.driver [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.532 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.598 238945 INFO nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Took 10.49 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.599 238945 DEBUG nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.679 238945 INFO nova.compute.manager [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Took 11.78 seconds to build instance.#033[00m
Jan 27 09:06:06 np0005597378 nova_compute[238941]: 2026-01-27 14:06:06.703 238945 DEBUG oslo_concurrency.lockutils [None req-9b73de4b-5066-4090-9989-96d920562004 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:06:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 27 09:06:07 np0005597378 nova_compute[238941]: 2026-01-27 14:06:07.313 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:07 np0005597378 nova_compute[238941]: 2026-01-27 14:06:07.402 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:06:07 np0005597378 nova_compute[238941]: 2026-01-27 14:06:07.841 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:07 np0005597378 nova_compute[238941]: 2026-01-27 14:06:07.841 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:07 np0005597378 nova_compute[238941]: 2026-01-27 14:06:07.841 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:07 np0005597378 nova_compute[238941]: 2026-01-27 14:06:07.843 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:07 np0005597378 nova_compute[238941]: 2026-01-27 14:06:07.843 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:06:07 np0005597378 nova_compute[238941]: 2026-01-27 14:06:07.844 238945 INFO nova.compute.manager [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Terminating instance#033[00m
Jan 27 09:06:07 np0005597378 nova_compute[238941]: 2026-01-27 14:06:07.845 238945 DEBUG nova.compute.manager [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:06:07 np0005597378 kernel: tapbc3782af-4a (unregistering): left promiscuous mode
Jan 27 09:06:07 np0005597378 NetworkManager[48904]: <info>  [1769522767.8828] device (tapbc3782af-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:06:07 np0005597378 ovn_controller[144812]: 2026-01-27T14:06:07Z|01009|binding|INFO|Releasing lport bc3782af-4abd-4966-b05f-ae577558ed48 from this chassis (sb_readonly=0)
Jan 27 09:06:07 np0005597378 ovn_controller[144812]: 2026-01-27T14:06:07Z|01010|binding|INFO|Setting lport bc3782af-4abd-4966-b05f-ae577558ed48 down in Southbound
Jan 27 09:06:07 np0005597378 nova_compute[238941]: 2026-01-27 14:06:07.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:07 np0005597378 ovn_controller[144812]: 2026-01-27T14:06:07Z|01011|binding|INFO|Removing iface tapbc3782af-4a ovn-installed in OVS
Jan 27 09:06:07 np0005597378 nova_compute[238941]: 2026-01-27 14:06:07.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:07.902 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:36:85 10.100.0.4'], port_security=['fa:16:3e:0b:36:85 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '88b90bc8-8452-4809-8183-f11595e37b63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea2ff4d6e3214ca0b4fb320f18286af4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '119c6585-5922-418b-ab3b-9aa64f75b233', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0132118e-2104-470f-94a1-3814c3ec6b99, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=bc3782af-4abd-4966-b05f-ae577558ed48) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:06:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:07.903 154802 INFO neutron.agent.ovn.metadata.agent [-] Port bc3782af-4abd-4966-b05f-ae577558ed48 in datapath a33fdafe-6ac5-4ae1-bf0e-52644ae18217 unbound from our chassis#033[00m
Jan 27 09:06:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:07.904 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a33fdafe-6ac5-4ae1-bf0e-52644ae18217, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:06:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:07.905 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5bd4d1-1358-428e-bbd5-8cb872cf3caf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:07.906 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217 namespace which is not needed anymore#033[00m
Jan 27 09:06:07 np0005597378 nova_compute[238941]: 2026-01-27 14:06:07.911 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:07 np0005597378 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000068.scope: Deactivated successfully.
Jan 27 09:06:07 np0005597378 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000068.scope: Consumed 1.997s CPU time.
Jan 27 09:06:07 np0005597378 systemd-machined[207425]: Machine qemu-129-instance-00000068 terminated.
Jan 27 09:06:08 np0005597378 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [NOTICE]   (329983) : haproxy version is 2.8.14-c23fe91
Jan 27 09:06:08 np0005597378 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [NOTICE]   (329983) : path to executable is /usr/sbin/haproxy
Jan 27 09:06:08 np0005597378 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [WARNING]  (329983) : Exiting Master process...
Jan 27 09:06:08 np0005597378 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [ALERT]    (329983) : Current worker (329985) exited with code 143 (Terminated)
Jan 27 09:06:08 np0005597378 neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217[329979]: [WARNING]  (329983) : All workers exited. Exiting... (0)
Jan 27 09:06:08 np0005597378 systemd[1]: libpod-eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5.scope: Deactivated successfully.
Jan 27 09:06:08 np0005597378 podman[330019]: 2026-01-27 14:06:08.042890783 +0000 UTC m=+0.042628067 container died eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:06:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9e4ab0c2cf9adb11841d98b5abb8dd0d7fcca98c2ac364c5baab25919d96da7d-merged.mount: Deactivated successfully.
Jan 27 09:06:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5-userdata-shm.mount: Deactivated successfully.
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.095 238945 INFO nova.virt.libvirt.driver [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Instance destroyed successfully.#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.096 238945 DEBUG nova.objects.instance [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lazy-loading 'resources' on Instance uuid 88b90bc8-8452-4809-8183-f11595e37b63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.112 238945 DEBUG nova.virt.libvirt.vif [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:05:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-920130386',display_name='tempest-ServerAddressesNegativeTestJSON-server-920130386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-920130386',id=104,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:06:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea2ff4d6e3214ca0b4fb320f18286af4',ramdisk_id='',reservation_id='r-y1zl0bdo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-989610197',owner_user_name='tempest-ServerAddressesNegativeTestJSON-989610197-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:06:06Z,user_data=None,user_id='7d75dbdade7c48688752f59fa51f8544',uuid=88b90bc8-8452-4809-8183-f11595e37b63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.112 238945 DEBUG nova.network.os_vif_util [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Converting VIF {"id": "bc3782af-4abd-4966-b05f-ae577558ed48", "address": "fa:16:3e:0b:36:85", "network": {"id": "a33fdafe-6ac5-4ae1-bf0e-52644ae18217", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-665042304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea2ff4d6e3214ca0b4fb320f18286af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc3782af-4a", "ovs_interfaceid": "bc3782af-4abd-4966-b05f-ae577558ed48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.113 238945 DEBUG nova.network.os_vif_util [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:06:08 np0005597378 podman[330019]: 2026-01-27 14:06:08.113883031 +0000 UTC m=+0.113620315 container cleanup eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.114 238945 DEBUG os_vif [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.116 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.116 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc3782af-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.119 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.121 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.123 238945 INFO os_vif [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:36:85,bridge_name='br-int',has_traffic_filtering=True,id=bc3782af-4abd-4966-b05f-ae577558ed48,network=Network(a33fdafe-6ac5-4ae1-bf0e-52644ae18217),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc3782af-4a')#033[00m
Jan 27 09:06:08 np0005597378 systemd[1]: libpod-conmon-eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5.scope: Deactivated successfully.
Jan 27 09:06:08 np0005597378 podman[330061]: 2026-01-27 14:06:08.186438432 +0000 UTC m=+0.051620249 container remove eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:06:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.193 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95151987-4443-43bc-8a13-64708ede9877]: (4, ('Tue Jan 27 02:06:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217 (eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5)\neaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5\nTue Jan 27 02:06:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217 (eaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5)\neaeb1ef8247b31019891daa74c584b57e761dbf2b6f790ee5b1aafbe996fbdb5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.195 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[81a80443-2453-4cad-b405-39c9b1c5fdad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.196 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa33fdafe-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:08 np0005597378 kernel: tapa33fdafe-60: left promiscuous mode
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.204 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2af0eaf4-1640-49b5-b722-2c1241b87570]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.218 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f4146428-75b3-4ccc-9646-a293623a7e51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.219 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[749f2144-95a7-48ef-a60b-2e2eabf4b393]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.234 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b8ae00-890b-49e0-8f5d-4eb78b555742]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540264, 'reachable_time': 22952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330094, 'error': None, 'target': 'ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.237 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a33fdafe-6ac5-4ae1-bf0e-52644ae18217 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:06:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:08.237 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[77aa75ba-0397-48f7-9024-c445c93ba5e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:06:08 np0005597378 systemd[1]: run-netns-ovnmeta\x2da33fdafe\x2d6ac5\x2d4ae1\x2dbf0e\x2d52644ae18217.mount: Deactivated successfully.
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.377 238945 INFO nova.virt.libvirt.driver [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Deleting instance files /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63_del#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.378 238945 INFO nova.virt.libvirt.driver [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Deletion of /var/lib/nova/instances/88b90bc8-8452-4809-8183-f11595e37b63_del complete#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.475 238945 INFO nova.compute.manager [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.475 238945 DEBUG oslo.service.loopingcall [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.475 238945 DEBUG nova.compute.manager [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.476 238945 DEBUG nova.network.neutron [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.533 238945 DEBUG nova.compute.manager [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.534 238945 DEBUG oslo_concurrency.lockutils [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.534 238945 DEBUG oslo_concurrency.lockutils [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.535 238945 DEBUG oslo_concurrency.lockutils [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.536 238945 DEBUG nova.compute.manager [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] No waiting events found dispatching network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.536 238945 WARNING nova.compute.manager [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received unexpected event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 for instance with vm_state active and task_state deleting.
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.536 238945 DEBUG nova.compute.manager [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-vif-unplugged-bc3782af-4abd-4966-b05f-ae577558ed48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.536 238945 DEBUG oslo_concurrency.lockutils [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.537 238945 DEBUG oslo_concurrency.lockutils [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.537 238945 DEBUG oslo_concurrency.lockutils [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.537 238945 DEBUG nova.compute.manager [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] No waiting events found dispatching network-vif-unplugged-bc3782af-4abd-4966-b05f-ae577558ed48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:06:08 np0005597378 nova_compute[238941]: 2026-01-27 14:06:08.537 238945 DEBUG nova.compute.manager [req-b5552ef8-bea8-490e-a8de-0da8c1bc0201 req-cfa5b276-e2b6-4608-a4b4-df5af9e2f741 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-vif-unplugged-bc3782af-4abd-4966-b05f-ae577558ed48 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 09:06:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 76 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 27 09:06:09 np0005597378 nova_compute[238941]: 2026-01-27 14:06:09.027 238945 DEBUG nova.network.neutron [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:06:09 np0005597378 nova_compute[238941]: 2026-01-27 14:06:09.067 238945 INFO nova.compute.manager [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Took 0.59 seconds to deallocate network for instance.
Jan 27 09:06:09 np0005597378 nova_compute[238941]: 2026-01-27 14:06:09.120 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:06:09 np0005597378 nova_compute[238941]: 2026-01-27 14:06:09.120 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:06:09 np0005597378 nova_compute[238941]: 2026-01-27 14:06:09.176 238945 DEBUG oslo_concurrency.processutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:06:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:06:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365828815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:06:09 np0005597378 podman[330116]: 2026-01-27 14:06:09.731214741 +0000 UTC m=+0.068368868 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 09:06:09 np0005597378 nova_compute[238941]: 2026-01-27 14:06:09.740 238945 DEBUG oslo_concurrency.processutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:06:09 np0005597378 nova_compute[238941]: 2026-01-27 14:06:09.748 238945 DEBUG nova.compute.provider_tree [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:06:09 np0005597378 nova_compute[238941]: 2026-01-27 14:06:09.764 238945 DEBUG nova.scheduler.client.report [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:06:09 np0005597378 nova_compute[238941]: 2026-01-27 14:06:09.785 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:06:09 np0005597378 nova_compute[238941]: 2026-01-27 14:06:09.807 238945 INFO nova.scheduler.client.report [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Deleted allocations for instance 88b90bc8-8452-4809-8183-f11595e37b63
Jan 27 09:06:09 np0005597378 nova_compute[238941]: 2026-01-27 14:06:09.879 238945 DEBUG oslo_concurrency.lockutils [None req-16ac4f06-d2a4-4e5b-a190-cdf277a01c1e 7d75dbdade7c48688752f59fa51f8544 ea2ff4d6e3214ca0b4fb320f18286af4 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:06:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 66 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 367 KiB/s wr, 117 op/s
Jan 27 09:06:10 np0005597378 podman[330137]: 2026-01-27 14:06:10.744208364 +0000 UTC m=+0.080522106 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:06:10 np0005597378 nova_compute[238941]: 2026-01-27 14:06:10.777 238945 DEBUG nova.compute.manager [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:06:10 np0005597378 nova_compute[238941]: 2026-01-27 14:06:10.777 238945 DEBUG oslo_concurrency.lockutils [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "88b90bc8-8452-4809-8183-f11595e37b63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:06:10 np0005597378 nova_compute[238941]: 2026-01-27 14:06:10.778 238945 DEBUG oslo_concurrency.lockutils [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:06:10 np0005597378 nova_compute[238941]: 2026-01-27 14:06:10.778 238945 DEBUG oslo_concurrency.lockutils [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "88b90bc8-8452-4809-8183-f11595e37b63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:06:10 np0005597378 nova_compute[238941]: 2026-01-27 14:06:10.778 238945 DEBUG nova.compute.manager [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] No waiting events found dispatching network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:06:10 np0005597378 nova_compute[238941]: 2026-01-27 14:06:10.778 238945 WARNING nova.compute.manager [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received unexpected event network-vif-plugged-bc3782af-4abd-4966-b05f-ae577558ed48 for instance with vm_state deleted and task_state None.
Jan 27 09:06:10 np0005597378 nova_compute[238941]: 2026-01-27 14:06:10.779 238945 DEBUG nova.compute.manager [req-2505622b-f0fd-471f-95ff-5757df12309e req-dbceea5b-8433-4bda-b494-adf501ea1b37 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Received event network-vif-deleted-bc3782af-4abd-4966-b05f-ae577558ed48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:06:12 np0005597378 nova_compute[238941]: 2026-01-27 14:06:12.315 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:06:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 66 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Jan 27 09:06:13 np0005597378 nova_compute[238941]: 2026-01-27 14:06:13.118 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:06:13 np0005597378 nova_compute[238941]: 2026-01-27 14:06:13.414 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:06:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 27 09:06:16 np0005597378 nova_compute[238941]: 2026-01-27 14:06:16.115 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:06:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 93 op/s
Jan 27 09:06:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:06:17
Jan 27 09:06:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:06:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:06:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'images', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data']
Jan 27 09:06:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:06:17 np0005597378 nova_compute[238941]: 2026-01-27 14:06:17.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:06:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:06:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:06:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:06:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:06:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:06:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:06:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:06:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:06:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:06:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:06:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:06:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:06:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:06:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:06:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:06:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:06:18 np0005597378 nova_compute[238941]: 2026-01-27 14:06:18.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:06:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:06:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:06:19 np0005597378 podman[330307]: 2026-01-27 14:06:19.161437994 +0000 UTC m=+0.046169963 container create a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 09:06:19 np0005597378 systemd[1]: Started libpod-conmon-a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b.scope.
Jan 27 09:06:19 np0005597378 podman[330307]: 2026-01-27 14:06:19.139958735 +0000 UTC m=+0.024690724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:06:19 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:06:19 np0005597378 podman[330307]: 2026-01-27 14:06:19.261524064 +0000 UTC m=+0.146256053 container init a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_euler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:06:19 np0005597378 podman[330307]: 2026-01-27 14:06:19.268779359 +0000 UTC m=+0.153511328 container start a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_euler, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:06:19 np0005597378 podman[330307]: 2026-01-27 14:06:19.27214299 +0000 UTC m=+0.156874959 container attach a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 09:06:19 np0005597378 infallible_euler[330322]: 167 167
Jan 27 09:06:19 np0005597378 systemd[1]: libpod-a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b.scope: Deactivated successfully.
Jan 27 09:06:19 np0005597378 podman[330307]: 2026-01-27 14:06:19.276282681 +0000 UTC m=+0.161014670 container died a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_euler, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:06:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e33b93da033a64aac00794f862ffc274a1704d4680bbd54515f88446627e4e88-merged.mount: Deactivated successfully.
Jan 27 09:06:19 np0005597378 podman[330307]: 2026-01-27 14:06:19.312279458 +0000 UTC m=+0.197011427 container remove a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_euler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:06:19 np0005597378 systemd[1]: libpod-conmon-a5f76eb6cde188cc52962776ba927aa3e40f8b42d8aa9e7d9847e5a678ae955b.scope: Deactivated successfully.
Jan 27 09:06:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:19 np0005597378 podman[330344]: 2026-01-27 14:06:19.464030088 +0000 UTC m=+0.037921990 container create 6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:06:19 np0005597378 systemd[1]: Started libpod-conmon-6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019.scope.
Jan 27 09:06:19 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:06:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9239bf6ee63f2eb69b56ae216ff39af0134c53bb25d2745699a20ae7a02ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9239bf6ee63f2eb69b56ae216ff39af0134c53bb25d2745699a20ae7a02ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9239bf6ee63f2eb69b56ae216ff39af0134c53bb25d2745699a20ae7a02ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9239bf6ee63f2eb69b56ae216ff39af0134c53bb25d2745699a20ae7a02ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9239bf6ee63f2eb69b56ae216ff39af0134c53bb25d2745699a20ae7a02ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:19 np0005597378 podman[330344]: 2026-01-27 14:06:19.537561655 +0000 UTC m=+0.111453607 container init 6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:06:19 np0005597378 podman[330344]: 2026-01-27 14:06:19.44774649 +0000 UTC m=+0.021638412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:06:19 np0005597378 podman[330344]: 2026-01-27 14:06:19.545344514 +0000 UTC m=+0.119236416 container start 6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:06:19 np0005597378 podman[330344]: 2026-01-27 14:06:19.551830628 +0000 UTC m=+0.125722530 container attach 6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:06:20 np0005597378 sweet_shannon[330360]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:06:20 np0005597378 sweet_shannon[330360]: --> All data devices are unavailable
Jan 27 09:06:20 np0005597378 systemd[1]: libpod-6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019.scope: Deactivated successfully.
Jan 27 09:06:20 np0005597378 podman[330344]: 2026-01-27 14:06:20.074256173 +0000 UTC m=+0.648148095 container died 6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 09:06:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d0f9239bf6ee63f2eb69b56ae216ff39af0134c53bb25d2745699a20ae7a02ed-merged.mount: Deactivated successfully.
Jan 27 09:06:20 np0005597378 podman[330344]: 2026-01-27 14:06:20.21323644 +0000 UTC m=+0.787128342 container remove 6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:06:20 np0005597378 systemd[1]: libpod-conmon-6f7a62629563b8669fe46ca7251c130b1ff03850ed9e2560b0860f0db77da019.scope: Deactivated successfully.
Jan 27 09:06:20 np0005597378 podman[330458]: 2026-01-27 14:06:20.698809714 +0000 UTC m=+0.048132945 container create 3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mendeleev, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:06:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 464 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 27 09:06:20 np0005597378 systemd[1]: Started libpod-conmon-3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c.scope.
Jan 27 09:06:20 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:06:20 np0005597378 podman[330458]: 2026-01-27 14:06:20.682560697 +0000 UTC m=+0.031883948 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:06:20 np0005597378 podman[330458]: 2026-01-27 14:06:20.875473164 +0000 UTC m=+0.224796485 container init 3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mendeleev, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:06:20 np0005597378 podman[330458]: 2026-01-27 14:06:20.882168974 +0000 UTC m=+0.231492245 container start 3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mendeleev, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 09:06:20 np0005597378 musing_mendeleev[330475]: 167 167
Jan 27 09:06:20 np0005597378 systemd[1]: libpod-3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c.scope: Deactivated successfully.
Jan 27 09:06:20 np0005597378 podman[330458]: 2026-01-27 14:06:20.886760887 +0000 UTC m=+0.236084168 container attach 3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:06:20 np0005597378 podman[330458]: 2026-01-27 14:06:20.889647555 +0000 UTC m=+0.238970786 container died 3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mendeleev, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:06:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-28f07a2bf5286977335a038aac5434b4e0028cc8d3417524b224d7b024996c98-merged.mount: Deactivated successfully.
Jan 27 09:06:21 np0005597378 podman[330458]: 2026-01-27 14:06:21.093626339 +0000 UTC m=+0.442949580 container remove 3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 09:06:21 np0005597378 systemd[1]: libpod-conmon-3b4182d2ff7d4a715e58560d9a557c4575da40d96b94d58e1e2789a45650633c.scope: Deactivated successfully.
Jan 27 09:06:21 np0005597378 podman[330498]: 2026-01-27 14:06:21.26181186 +0000 UTC m=+0.042589146 container create c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ptolemy, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 09:06:21 np0005597378 systemd[1]: Started libpod-conmon-c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6.scope.
Jan 27 09:06:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:06:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5427d45ee1a7df69b17cc344a082f061380595bb0c34601db307fbc59d0bd5cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5427d45ee1a7df69b17cc344a082f061380595bb0c34601db307fbc59d0bd5cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5427d45ee1a7df69b17cc344a082f061380595bb0c34601db307fbc59d0bd5cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5427d45ee1a7df69b17cc344a082f061380595bb0c34601db307fbc59d0bd5cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:21 np0005597378 podman[330498]: 2026-01-27 14:06:21.24285405 +0000 UTC m=+0.023631356 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:06:21 np0005597378 podman[330498]: 2026-01-27 14:06:21.345221892 +0000 UTC m=+0.125999198 container init c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:06:21 np0005597378 podman[330498]: 2026-01-27 14:06:21.352610671 +0000 UTC m=+0.133387967 container start c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 09:06:21 np0005597378 podman[330498]: 2026-01-27 14:06:21.356179037 +0000 UTC m=+0.136956333 container attach c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ptolemy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]: {
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:    "0": [
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:        {
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "devices": [
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "/dev/loop3"
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            ],
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_name": "ceph_lv0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_size": "21470642176",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "name": "ceph_lv0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "tags": {
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.cluster_name": "ceph",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.crush_device_class": "",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.encrypted": "0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.objectstore": "bluestore",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.osd_id": "0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.type": "block",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.vdo": "0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.with_tpm": "0"
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            },
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "type": "block",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "vg_name": "ceph_vg0"
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:        }
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:    ],
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:    "1": [
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:        {
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "devices": [
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "/dev/loop4"
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            ],
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_name": "ceph_lv1",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_size": "21470642176",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "name": "ceph_lv1",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "tags": {
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.cluster_name": "ceph",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.crush_device_class": "",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.encrypted": "0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.objectstore": "bluestore",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.osd_id": "1",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.type": "block",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.vdo": "0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.with_tpm": "0"
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            },
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "type": "block",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "vg_name": "ceph_vg1"
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:        }
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:    ],
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:    "2": [
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:        {
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "devices": [
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "/dev/loop5"
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            ],
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_name": "ceph_lv2",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_size": "21470642176",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "name": "ceph_lv2",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "tags": {
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.cluster_name": "ceph",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.crush_device_class": "",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.encrypted": "0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.objectstore": "bluestore",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.osd_id": "2",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.type": "block",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.vdo": "0",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:                "ceph.with_tpm": "0"
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            },
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "type": "block",
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:            "vg_name": "ceph_vg2"
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:        }
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]:    ]
Jan 27 09:06:21 np0005597378 quirky_ptolemy[330515]: }
Jan 27 09:06:21 np0005597378 systemd[1]: libpod-c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6.scope: Deactivated successfully.
Jan 27 09:06:21 np0005597378 podman[330498]: 2026-01-27 14:06:21.670081176 +0000 UTC m=+0.450858472 container died c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Jan 27 09:06:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5427d45ee1a7df69b17cc344a082f061380595bb0c34601db307fbc59d0bd5cd-merged.mount: Deactivated successfully.
Jan 27 09:06:21 np0005597378 podman[330498]: 2026-01-27 14:06:21.71708841 +0000 UTC m=+0.497865696 container remove c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ptolemy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 09:06:21 np0005597378 systemd[1]: libpod-conmon-c0d9cf5240dd81bffbf8dcf78e7fabafcaf84eb1c2c821ed9926d742d66459b6.scope: Deactivated successfully.
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.002 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.005 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.023 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:06:22 np0005597378 podman[330597]: 2026-01-27 14:06:22.164645941 +0000 UTC m=+0.040811028 container create ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bell, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.174 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.175 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.184 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.185 238945 INFO nova.compute.claims [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:06:22 np0005597378 systemd[1]: Started libpod-conmon-ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697.scope.
Jan 27 09:06:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:06:22 np0005597378 podman[330597]: 2026-01-27 14:06:22.146732301 +0000 UTC m=+0.022897388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:06:22 np0005597378 podman[330597]: 2026-01-27 14:06:22.257207911 +0000 UTC m=+0.133373008 container init ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:06:22 np0005597378 podman[330597]: 2026-01-27 14:06:22.264992029 +0000 UTC m=+0.141157116 container start ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:06:22 np0005597378 youthful_bell[330613]: 167 167
Jan 27 09:06:22 np0005597378 systemd[1]: libpod-ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697.scope: Deactivated successfully.
Jan 27 09:06:22 np0005597378 podman[330597]: 2026-01-27 14:06:22.287940517 +0000 UTC m=+0.164105634 container attach ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 09:06:22 np0005597378 podman[330597]: 2026-01-27 14:06:22.288612895 +0000 UTC m=+0.164777982 container died ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.329 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ced283a87d1b61278d73cb82f100991b25f46a2ce95c5c8f33a50862659d30ec-merged.mount: Deactivated successfully.
Jan 27 09:06:22 np0005597378 podman[330597]: 2026-01-27 14:06:22.355088331 +0000 UTC m=+0.231253408 container remove ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:06:22 np0005597378 systemd[1]: libpod-conmon-ac65f68a8fa69a0bae2e77b01c0bb2029d1ee1da2ff82a13645a5da0555bb697.scope: Deactivated successfully.
Jan 27 09:06:22 np0005597378 podman[330640]: 2026-01-27 14:06:22.519194334 +0000 UTC m=+0.049268316 container create 12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cerf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:06:22 np0005597378 systemd[1]: Started libpod-conmon-12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb.scope.
Jan 27 09:06:22 np0005597378 podman[330640]: 2026-01-27 14:06:22.496255216 +0000 UTC m=+0.026329198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:06:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:06:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399c69e21df9a75d5a9f749e8453af59c9805461151bf56ae2ed23d65bfebf8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399c69e21df9a75d5a9f749e8453af59c9805461151bf56ae2ed23d65bfebf8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399c69e21df9a75d5a9f749e8453af59c9805461151bf56ae2ed23d65bfebf8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399c69e21df9a75d5a9f749e8453af59c9805461151bf56ae2ed23d65bfebf8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:06:22 np0005597378 podman[330640]: 2026-01-27 14:06:22.614808204 +0000 UTC m=+0.144882216 container init 12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cerf, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:06:22 np0005597378 podman[330640]: 2026-01-27 14:06:22.621682259 +0000 UTC m=+0.151756241 container start 12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 09:06:22 np0005597378 podman[330640]: 2026-01-27 14:06:22.627618538 +0000 UTC m=+0.157692520 container attach 12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cerf, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:06:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 852 B/s wr, 5 op/s
Jan 27 09:06:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:06:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2581995684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.930 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.942 238945 DEBUG nova.compute.provider_tree [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.960 238945 DEBUG nova.scheduler.client.report [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.985 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:06:22 np0005597378 nova_compute[238941]: 2026-01-27 14:06:22.986 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.050 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.069 238945 INFO nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.091 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.094 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522768.0932057, 88b90bc8-8452-4809-8183-f11595e37b63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.094 238945 INFO nova.compute.manager [-] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.137 238945 DEBUG nova.compute.manager [None req-a6010bb4-0707-4f41-8776-7478061bca25 - - - - - -] [instance: 88b90bc8-8452-4809-8183-f11595e37b63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.227 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.230 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.230 238945 INFO nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Creating image(s)#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.253 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.277 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.302 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.307 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:23 np0005597378 lvm[330809]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:06:23 np0005597378 lvm[330809]: VG ceph_vg0 finished
Jan 27 09:06:23 np0005597378 lvm[330812]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:06:23 np0005597378 lvm[330812]: VG ceph_vg1 finished
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.397 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.399 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.400 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.400 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:06:23 np0005597378 lvm[330816]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:06:23 np0005597378 lvm[330816]: VG ceph_vg2 finished
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.424 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.428 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:23 np0005597378 sharp_cerf[330675]: {}
Jan 27 09:06:23 np0005597378 systemd[1]: libpod-12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb.scope: Deactivated successfully.
Jan 27 09:06:23 np0005597378 systemd[1]: libpod-12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb.scope: Consumed 1.475s CPU time.
Jan 27 09:06:23 np0005597378 podman[330640]: 2026-01-27 14:06:23.535554068 +0000 UTC m=+1.065628060 container died 12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cerf, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 09:06:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-399c69e21df9a75d5a9f749e8453af59c9805461151bf56ae2ed23d65bfebf8e-merged.mount: Deactivated successfully.
Jan 27 09:06:23 np0005597378 podman[330640]: 2026-01-27 14:06:23.580980739 +0000 UTC m=+1.111054721 container remove 12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cerf, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:06:23 np0005597378 systemd[1]: libpod-conmon-12237a1654aa93259c5bb11d1479b94d8b46ce49543c8468871f33b175463cbb.scope: Deactivated successfully.
Jan 27 09:06:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:06:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:06:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:06:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.712 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.780 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] resizing rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.913 238945 DEBUG nova.objects.instance [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'migration_context' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.935 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.936 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Ensure instance console log exists: /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.936 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.937 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.937 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.939 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.948 238945 WARNING nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.954 238945 DEBUG nova.virt.libvirt.host [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.956 238945 DEBUG nova.virt.libvirt.host [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.960 238945 DEBUG nova.virt.libvirt.host [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.961 238945 DEBUG nova.virt.libvirt.host [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.961 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.962 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.962 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.962 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.963 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.963 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.963 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.964 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.964 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.964 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.965 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.965 238945 DEBUG nova.virt.hardware [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:06:23 np0005597378 nova_compute[238941]: 2026-01-27 14:06:23.968 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:24 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:06:24 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:06:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:06:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3900817236' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:06:24 np0005597378 nova_compute[238941]: 2026-01-27 14:06:24.576 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:24 np0005597378 nova_compute[238941]: 2026-01-27 14:06:24.601 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:06:24 np0005597378 nova_compute[238941]: 2026-01-27 14:06:24.606 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 41 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 852 B/s wr, 5 op/s
Jan 27 09:06:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:06:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1655752066' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:06:25 np0005597378 nova_compute[238941]: 2026-01-27 14:06:25.177 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:25 np0005597378 nova_compute[238941]: 2026-01-27 14:06:25.180 238945 DEBUG nova.objects.instance [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:06:25 np0005597378 nova_compute[238941]: 2026-01-27 14:06:25.201 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  <uuid>e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2</uuid>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  <name>instance-00000069</name>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerShowV257Test-server-382999791</nova:name>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:06:23</nova:creationTime>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:        <nova:user uuid="8deaa70b09b7493e96f0be27ab928e59">tempest-ServerShowV257Test-957661861-project-member</nova:user>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:        <nova:project uuid="ea1cd8ee266245b1a19efda0f4357fa3">tempest-ServerShowV257Test-957661861</nova:project>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <entry name="serial">e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2</entry>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <entry name="uuid">e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2</entry>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/console.log" append="off"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:06:25 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:06:25 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:06:25 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:06:25 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:06:25 np0005597378 nova_compute[238941]: 2026-01-27 14:06:25.354 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:06:25 np0005597378 nova_compute[238941]: 2026-01-27 14:06:25.354 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:06:25 np0005597378 nova_compute[238941]: 2026-01-27 14:06:25.355 238945 INFO nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Using config drive#033[00m
Jan 27 09:06:25 np0005597378 nova_compute[238941]: 2026-01-27 14:06:25.373 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:06:26 np0005597378 nova_compute[238941]: 2026-01-27 14:06:26.288 238945 INFO nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Creating config drive at /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config#033[00m
Jan 27 09:06:26 np0005597378 nova_compute[238941]: 2026-01-27 14:06:26.293 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1dmzq61t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:26 np0005597378 nova_compute[238941]: 2026-01-27 14:06:26.434 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1dmzq61t" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:26 np0005597378 nova_compute[238941]: 2026-01-27 14:06:26.459 238945 DEBUG nova.storage.rbd_utils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:06:26 np0005597378 nova_compute[238941]: 2026-01-27 14:06:26.463 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 54 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 781 KiB/s wr, 12 op/s
Jan 27 09:06:26 np0005597378 nova_compute[238941]: 2026-01-27 14:06:26.980 238945 DEBUG oslo_concurrency.processutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:26 np0005597378 nova_compute[238941]: 2026-01-27 14:06:26.982 238945 INFO nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deleting local config drive /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config because it was imported into RBD.#033[00m
Jan 27 09:06:27 np0005597378 systemd-machined[207425]: New machine qemu-130-instance-00000069.
Jan 27 09:06:27 np0005597378 systemd[1]: Started Virtual Machine qemu-130-instance-00000069.
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.321 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.573 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522787.573374, e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.574 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] VM Resumed (Lifecycle Event)
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.577 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.578 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.582 238945 INFO nova.virt.libvirt.driver [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance spawned successfully.
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.582 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.605 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.609 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.629 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.630 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.630 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.630 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.631 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:06:27 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.631 238945 DEBUG nova.virt.libvirt.driver [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.637 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.637 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522787.5768197, e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.638 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] VM Started (Lifecycle Event)
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0001588806693361569 of space, bias 1.0, pg target 0.047664200800847066 quantized to 32 (current 32)
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677596730677312 of space, bias 1.0, pg target 0.20032790192031935 quantized to 32 (current 32)
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0702365806437009e-06 of space, bias 4.0, pg target 0.001284283896772441 quantized to 16 (current 16)
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:06:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.670 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.673 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.720 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.732 238945 INFO nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Took 4.50 seconds to spawn the instance on the hypervisor.
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.732 238945 DEBUG nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.796 238945 INFO nova.compute.manager [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Took 5.72 seconds to build instance.
Jan 27 09:06:27 np0005597378 nova_compute[238941]: 2026-01-27 14:06:27.871 238945 DEBUG oslo_concurrency.lockutils [None req-0f40f9da-5425-480e-bdd1-2f3722a9115f 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:06:28 np0005597378 nova_compute[238941]: 2026-01-27 14:06:28.125 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:06:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 27 09:06:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:29 np0005597378 nova_compute[238941]: 2026-01-27 14:06:29.694 238945 INFO nova.compute.manager [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Rebuilding instance
Jan 27 09:06:29 np0005597378 nova_compute[238941]: 2026-01-27 14:06:29.910 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:06:29 np0005597378 nova_compute[238941]: 2026-01-27 14:06:29.926 238945 DEBUG nova.compute.manager [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:06:29 np0005597378 nova_compute[238941]: 2026-01-27 14:06:29.967 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'pci_requests' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:06:29 np0005597378 nova_compute[238941]: 2026-01-27 14:06:29.983 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:06:29 np0005597378 nova_compute[238941]: 2026-01-27 14:06:29.998 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'resources' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:06:30 np0005597378 nova_compute[238941]: 2026-01-27 14:06:30.011 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'migration_context' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:06:30 np0005597378 nova_compute[238941]: 2026-01-27 14:06:30.035 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 27 09:06:30 np0005597378 nova_compute[238941]: 2026-01-27 14:06:30.038 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 27 09:06:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 27 09:06:32 np0005597378 nova_compute[238941]: 2026-01-27 14:06:32.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:06:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 27 09:06:33 np0005597378 nova_compute[238941]: 2026-01-27 14:06:33.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:06:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 09:06:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 09:06:37 np0005597378 nova_compute[238941]: 2026-01-27 14:06:37.324 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:06:38 np0005597378 nova_compute[238941]: 2026-01-27 14:06:38.130 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:06:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 88 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 89 op/s
Jan 27 09:06:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:40 np0005597378 nova_compute[238941]: 2026-01-27 14:06:40.076 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 27 09:06:40 np0005597378 podman[331145]: 2026-01-27 14:06:40.720939613 +0000 UTC m=+0.062391079 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:06:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 102 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 899 KiB/s wr, 70 op/s
Jan 27 09:06:41 np0005597378 podman[331165]: 2026-01-27 14:06:41.7363053 +0000 UTC m=+0.081157173 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Jan 27 09:06:42 np0005597378 nova_compute[238941]: 2026-01-27 14:06:42.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:06:42 np0005597378 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000069.scope: Deactivated successfully.
Jan 27 09:06:42 np0005597378 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000069.scope: Consumed 12.734s CPU time.
Jan 27 09:06:42 np0005597378 systemd-machined[207425]: Machine qemu-130-instance-00000069 terminated.
Jan 27 09:06:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 102 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 921 KiB/s rd, 898 KiB/s wr, 41 op/s
Jan 27 09:06:43 np0005597378 nova_compute[238941]: 2026-01-27 14:06:43.089 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance shutdown successfully after 13 seconds.
Jan 27 09:06:43 np0005597378 nova_compute[238941]: 2026-01-27 14:06:43.094 238945 INFO nova.virt.libvirt.driver [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance destroyed successfully.
Jan 27 09:06:43 np0005597378 nova_compute[238941]: 2026-01-27 14:06:43.099 238945 INFO nova.virt.libvirt.driver [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance destroyed successfully.
Jan 27 09:06:43 np0005597378 nova_compute[238941]: 2026-01-27 14:06:43.131 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:06:43 np0005597378 nova_compute[238941]: 2026-01-27 14:06:43.659 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deleting instance files /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_del
Jan 27 09:06:43 np0005597378 nova_compute[238941]: 2026-01-27 14:06:43.660 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deletion of /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_del complete
Jan 27 09:06:43 np0005597378 nova_compute[238941]: 2026-01-27 14:06:43.837 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 09:06:43 np0005597378 nova_compute[238941]: 2026-01-27 14:06:43.838 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Creating image(s)
Jan 27 09:06:43 np0005597378 nova_compute[238941]: 2026-01-27 14:06:43.868 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:06:43 np0005597378 nova_compute[238941]: 2026-01-27 14:06:43.899 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:06:43 np0005597378 nova_compute[238941]: 2026-01-27 14:06:43.931 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:06:43 np0005597378 nova_compute[238941]: 2026-01-27 14:06:43.937 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.007 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.008 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.009 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.009 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.032 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.036 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.316 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:06:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.377 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] resizing rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.450 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.451 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Ensure instance console log exists: /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.451 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.452 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.452 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.453 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.456 238945 WARNING nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.466 238945 DEBUG nova.virt.libvirt.host [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.467 238945 DEBUG nova.virt.libvirt.host [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.470 238945 DEBUG nova.virt.libvirt.host [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.470 238945 DEBUG nova.virt.libvirt.host [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.471 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.471 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.472 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.472 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.472 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.473 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.473 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.474 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.474 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.474 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.475 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.475 238945 DEBUG nova.virt.hardware [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.475 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:06:44 np0005597378 nova_compute[238941]: 2026-01-27 14:06:44.494 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:06:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 121 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 91 op/s
Jan 27 09:06:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:06:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644709706' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:06:45 np0005597378 nova_compute[238941]: 2026-01-27 14:06:45.061 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:06:45 np0005597378 nova_compute[238941]: 2026-01-27 14:06:45.088 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:06:45 np0005597378 nova_compute[238941]: 2026-01-27 14:06:45.092 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:06:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:06:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1200824541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:06:45 np0005597378 nova_compute[238941]: 2026-01-27 14:06:45.679 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:06:45 np0005597378 nova_compute[238941]: 2026-01-27 14:06:45.683 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  <uuid>e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2</uuid>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  <name>instance-00000069</name>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServerShowV257Test-server-382999791</nova:name>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:06:44</nova:creationTime>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:        <nova:user uuid="8deaa70b09b7493e96f0be27ab928e59">tempest-ServerShowV257Test-957661861-project-member</nova:user>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:        <nova:project uuid="ea1cd8ee266245b1a19efda0f4357fa3">tempest-ServerShowV257Test-957661861</nova:project>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <entry name="serial">e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2</entry>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <entry name="uuid">e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2</entry>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/console.log" append="off"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:06:45 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:06:45 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:06:45 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:06:45 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 09:06:45 np0005597378 nova_compute[238941]: 2026-01-27 14:06:45.749 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 09:06:45 np0005597378 nova_compute[238941]: 2026-01-27 14:06:45.751 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 09:06:45 np0005597378 nova_compute[238941]: 2026-01-27 14:06:45.751 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Using config drive
Jan 27 09:06:45 np0005597378 nova_compute[238941]: 2026-01-27 14:06:45.774 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:06:45 np0005597378 nova_compute[238941]: 2026-01-27 14:06:45.804 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:06:45 np0005597378 nova_compute[238941]: 2026-01-27 14:06:45.836 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'keypairs' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:06:46 np0005597378 nova_compute[238941]: 2026-01-27 14:06:46.249 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Creating config drive at /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config
Jan 27 09:06:46 np0005597378 nova_compute[238941]: 2026-01-27 14:06:46.255 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4whnh_qc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:06:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:46.313 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:06:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:46.313 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:06:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:06:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:06:46 np0005597378 nova_compute[238941]: 2026-01-27 14:06:46.396 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4whnh_qc" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:06:46 np0005597378 nova_compute[238941]: 2026-01-27 14:06:46.426 238945 DEBUG nova.storage.rbd_utils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] rbd image e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:06:46 np0005597378 nova_compute[238941]: 2026-01-27 14:06:46.430 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:06:46 np0005597378 nova_compute[238941]: 2026-01-27 14:06:46.581 238945 DEBUG oslo_concurrency.processutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:06:46 np0005597378 nova_compute[238941]: 2026-01-27 14:06:46.582 238945 INFO nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deleting local config drive /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2/disk.config because it was imported into RBD.
Jan 27 09:06:46 np0005597378 systemd-machined[207425]: New machine qemu-131-instance-00000069.
Jan 27 09:06:46 np0005597378 systemd[1]: Started Virtual Machine qemu-131-instance-00000069.
Jan 27 09:06:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 113 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 3.1 MiB/s wr, 81 op/s
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.328 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.395 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.395 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522807.3946044, e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.395 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.398 238945 DEBUG nova.compute.manager [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.398 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.401 238945 INFO nova.virt.libvirt.driver [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance spawned successfully.#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.402 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.448 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.454 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.458 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.459 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.459 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.459 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.460 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.460 238945 DEBUG nova.virt.libvirt.driver [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.493 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.494 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522807.3948638, e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.494 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] VM Started (Lifecycle Event)#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.535 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.537 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.552 238945 DEBUG nova.compute.manager [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.571 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.639 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.640 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.640 238945 DEBUG nova.objects.instance [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 09:06:47 np0005597378 nova_compute[238941]: 2026-01-27 14:06:47.720 238945 DEBUG oslo_concurrency.lockutils [None req-5eeb1b07-304e-40e1-9fd4-aaa5da269f95 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:06:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:06:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:06:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:06:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:06:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:06:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:06:48 np0005597378 nova_compute[238941]: 2026-01-27 14:06:48.134 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 469 KiB/s rd, 3.9 MiB/s wr, 133 op/s
Jan 27 09:06:48 np0005597378 nova_compute[238941]: 2026-01-27 14:06:48.789 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:48 np0005597378 nova_compute[238941]: 2026-01-27 14:06:48.790 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:48 np0005597378 nova_compute[238941]: 2026-01-27 14:06:48.790 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:48 np0005597378 nova_compute[238941]: 2026-01-27 14:06:48.790 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:48 np0005597378 nova_compute[238941]: 2026-01-27 14:06:48.790 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:06:48 np0005597378 nova_compute[238941]: 2026-01-27 14:06:48.792 238945 INFO nova.compute.manager [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Terminating instance#033[00m
Jan 27 09:06:48 np0005597378 nova_compute[238941]: 2026-01-27 14:06:48.793 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "refresh_cache-e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:06:48 np0005597378 nova_compute[238941]: 2026-01-27 14:06:48.794 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquired lock "refresh_cache-e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:06:48 np0005597378 nova_compute[238941]: 2026-01-27 14:06:48.794 238945 DEBUG nova.network.neutron [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:06:48 np0005597378 nova_compute[238941]: 2026-01-27 14:06:48.960 238945 DEBUG nova.network.neutron [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:06:49 np0005597378 nova_compute[238941]: 2026-01-27 14:06:49.275 238945 DEBUG nova.network.neutron [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:06:49 np0005597378 nova_compute[238941]: 2026-01-27 14:06:49.289 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Releasing lock "refresh_cache-e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:06:49 np0005597378 nova_compute[238941]: 2026-01-27 14:06:49.289 238945 DEBUG nova.compute.manager [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:06:49 np0005597378 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000069.scope: Deactivated successfully.
Jan 27 09:06:49 np0005597378 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000069.scope: Consumed 2.557s CPU time.
Jan 27 09:06:49 np0005597378 systemd-machined[207425]: Machine qemu-131-instance-00000069 terminated.
Jan 27 09:06:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:49 np0005597378 nova_compute[238941]: 2026-01-27 14:06:49.511 238945 INFO nova.virt.libvirt.driver [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance destroyed successfully.#033[00m
Jan 27 09:06:49 np0005597378 nova_compute[238941]: 2026-01-27 14:06:49.512 238945 DEBUG nova.objects.instance [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lazy-loading 'resources' on Instance uuid e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:06:49 np0005597378 nova_compute[238941]: 2026-01-27 14:06:49.892 238945 INFO nova.virt.libvirt.driver [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deleting instance files /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_del#033[00m
Jan 27 09:06:49 np0005597378 nova_compute[238941]: 2026-01-27 14:06:49.894 238945 INFO nova.virt.libvirt.driver [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deletion of /var/lib/nova/instances/e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2_del complete#033[00m
Jan 27 09:06:49 np0005597378 nova_compute[238941]: 2026-01-27 14:06:49.958 238945 INFO nova.compute.manager [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:06:49 np0005597378 nova_compute[238941]: 2026-01-27 14:06:49.959 238945 DEBUG oslo.service.loopingcall [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:06:49 np0005597378 nova_compute[238941]: 2026-01-27 14:06:49.960 238945 DEBUG nova.compute.manager [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:06:49 np0005597378 nova_compute[238941]: 2026-01-27 14:06:49.960 238945 DEBUG nova.network.neutron [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:06:50 np0005597378 nova_compute[238941]: 2026-01-27 14:06:50.298 238945 DEBUG nova.network.neutron [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:06:50 np0005597378 nova_compute[238941]: 2026-01-27 14:06:50.313 238945 DEBUG nova.network.neutron [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:06:50 np0005597378 nova_compute[238941]: 2026-01-27 14:06:50.333 238945 INFO nova.compute.manager [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Took 0.37 seconds to deallocate network for instance.#033[00m
Jan 27 09:06:50 np0005597378 ovn_controller[144812]: 2026-01-27T14:06:50Z|01012|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 27 09:06:50 np0005597378 nova_compute[238941]: 2026-01-27 14:06:50.390 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:50 np0005597378 nova_compute[238941]: 2026-01-27 14:06:50.391 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:50 np0005597378 nova_compute[238941]: 2026-01-27 14:06:50.444 238945 DEBUG oslo_concurrency.processutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 68 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 170 op/s
Jan 27 09:06:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:06:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3202933263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:06:51 np0005597378 nova_compute[238941]: 2026-01-27 14:06:51.076 238945 DEBUG oslo_concurrency.processutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:51 np0005597378 nova_compute[238941]: 2026-01-27 14:06:51.083 238945 DEBUG nova.compute.provider_tree [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:06:51 np0005597378 nova_compute[238941]: 2026-01-27 14:06:51.097 238945 DEBUG nova.scheduler.client.report [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:06:51 np0005597378 nova_compute[238941]: 2026-01-27 14:06:51.163 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:06:51 np0005597378 nova_compute[238941]: 2026-01-27 14:06:51.191 238945 INFO nova.scheduler.client.report [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Deleted allocations for instance e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2#033[00m
Jan 27 09:06:51 np0005597378 nova_compute[238941]: 2026-01-27 14:06:51.271 238945 DEBUG oslo_concurrency.lockutils [None req-d7013932-202f-425e-b3ef-be74eaf3e94d 8deaa70b09b7493e96f0be27ab928e59 ea1cd8ee266245b1a19efda0f4357fa3 - - default default] Lock "e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:06:52 np0005597378 nova_compute[238941]: 2026-01-27 14:06:52.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 68 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 156 op/s
Jan 27 09:06:53 np0005597378 nova_compute[238941]: 2026-01-27 14:06:53.136 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:54 np0005597378 nova_compute[238941]: 2026-01-27 14:06:54.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:06:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.1 MiB/s wr, 205 op/s
Jan 27 09:06:55 np0005597378 nova_compute[238941]: 2026-01-27 14:06:55.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:06:55 np0005597378 nova_compute[238941]: 2026-01-27 14:06:55.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:55 np0005597378 nova_compute[238941]: 2026-01-27 14:06:55.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:55 np0005597378 nova_compute[238941]: 2026-01-27 14:06:55.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:06:55 np0005597378 nova_compute[238941]: 2026-01-27 14:06:55.411 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:06:55 np0005597378 nova_compute[238941]: 2026-01-27 14:06:55.411 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:06:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290913417' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:06:55 np0005597378 nova_compute[238941]: 2026-01-27 14:06:55.961 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:56 np0005597378 nova_compute[238941]: 2026-01-27 14:06:56.124 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:06:56 np0005597378 nova_compute[238941]: 2026-01-27 14:06:56.126 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3838MB free_disk=59.98759024403989GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:06:56 np0005597378 nova_compute[238941]: 2026-01-27 14:06:56.126 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:06:56 np0005597378 nova_compute[238941]: 2026-01-27 14:06:56.126 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:06:56 np0005597378 nova_compute[238941]: 2026-01-27 14:06:56.203 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:06:56 np0005597378 nova_compute[238941]: 2026-01-27 14:06:56.204 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:06:56 np0005597378 nova_compute[238941]: 2026-01-27 14:06:56.228 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:06:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 27 09:06:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:06:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/512382593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:06:56 np0005597378 nova_compute[238941]: 2026-01-27 14:06:56.811 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:06:56 np0005597378 nova_compute[238941]: 2026-01-27 14:06:56.817 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:06:56 np0005597378 nova_compute[238941]: 2026-01-27 14:06:56.833 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:06:56 np0005597378 nova_compute[238941]: 2026-01-27 14:06:56.857 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:06:56 np0005597378 nova_compute[238941]: 2026-01-27 14:06:56.857 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:06:57 np0005597378 nova_compute[238941]: 2026-01-27 14:06:57.333 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:58 np0005597378 nova_compute[238941]: 2026-01-27 14:06:58.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:06:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 913 KiB/s wr, 139 op/s
Jan 27 09:06:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:06:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:06:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3820546409' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:06:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:06:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3820546409' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:06:59 np0005597378 nova_compute[238941]: 2026-01-27 14:06:59.859 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:07:00 np0005597378 nova_compute[238941]: 2026-01-27 14:07:00.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:07:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 87 op/s
Jan 27 09:07:01 np0005597378 nova_compute[238941]: 2026-01-27 14:07:01.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:07:01 np0005597378 nova_compute[238941]: 2026-01-27 14:07:01.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:07:01 np0005597378 nova_compute[238941]: 2026-01-27 14:07:01.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:07:01 np0005597378 nova_compute[238941]: 2026-01-27 14:07:01.401 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:07:02 np0005597378 nova_compute[238941]: 2026-01-27 14:07:02.336 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 890 KiB/s rd, 852 B/s wr, 49 op/s
Jan 27 09:07:03 np0005597378 nova_compute[238941]: 2026-01-27 14:07:03.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:04 np0005597378 nova_compute[238941]: 2026-01-27 14:07:04.511 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522809.509349, e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:07:04 np0005597378 nova_compute[238941]: 2026-01-27 14:07:04.512 238945 INFO nova.compute.manager [-] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:07:04 np0005597378 nova_compute[238941]: 2026-01-27 14:07:04.531 238945 DEBUG nova.compute.manager [None req-69eff9f5-be66-4e44-8ded-13befeba5340 - - - - - -] [instance: e0e75b40-d27e-4f0a-a592-6b3fa11ad6c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 890 KiB/s rd, 852 B/s wr, 49 op/s
Jan 27 09:07:05 np0005597378 nova_compute[238941]: 2026-01-27 14:07:05.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:07:05 np0005597378 nova_compute[238941]: 2026-01-27 14:07:05.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:07:05 np0005597378 nova_compute[238941]: 2026-01-27 14:07:05.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:07:05 np0005597378 nova_compute[238941]: 2026-01-27 14:07:05.632 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:05 np0005597378 nova_compute[238941]: 2026-01-27 14:07:05.633 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:05 np0005597378 nova_compute[238941]: 2026-01-27 14:07:05.660 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:07:05 np0005597378 nova_compute[238941]: 2026-01-27 14:07:05.735 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:05 np0005597378 nova_compute[238941]: 2026-01-27 14:07:05.736 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:05 np0005597378 nova_compute[238941]: 2026-01-27 14:07:05.741 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:07:05 np0005597378 nova_compute[238941]: 2026-01-27 14:07:05.741 238945 INFO nova.compute.claims [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:07:05 np0005597378 nova_compute[238941]: 2026-01-27 14:07:05.843 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:07:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3847879567' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.397 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.405 238945 DEBUG nova.compute.provider_tree [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.430 238945 DEBUG nova.scheduler.client.report [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.456 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.457 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.532 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.533 238945 DEBUG nova.network.neutron [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.559 238945 INFO nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.581 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.681 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.683 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.683 238945 INFO nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Creating image(s)#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.709 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.738 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 41 MiB data, 676 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.767 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.772 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.850 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.851 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.852 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.852 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.871 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:06 np0005597378 nova_compute[238941]: 2026-01-27 14:07:06.874 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7514a588-c48b-45af-a889-ea57cc9f1730_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:07 np0005597378 nova_compute[238941]: 2026-01-27 14:07:07.103 238945 DEBUG nova.policy [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '945414e1b82946ccadab2e408cf6151c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96c668beb6b74661927ce283539bb68e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:07:07 np0005597378 nova_compute[238941]: 2026-01-27 14:07:07.248 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 7514a588-c48b-45af-a889-ea57cc9f1730_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:07 np0005597378 nova_compute[238941]: 2026-01-27 14:07:07.309 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] resizing rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:07:07 np0005597378 nova_compute[238941]: 2026-01-27 14:07:07.338 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:07 np0005597378 nova_compute[238941]: 2026-01-27 14:07:07.376 238945 DEBUG nova.objects.instance [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'migration_context' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:07:07 np0005597378 nova_compute[238941]: 2026-01-27 14:07:07.402 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:07:07 np0005597378 nova_compute[238941]: 2026-01-27 14:07:07.403 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Ensure instance console log exists: /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:07:07 np0005597378 nova_compute[238941]: 2026-01-27 14:07:07.404 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:07 np0005597378 nova_compute[238941]: 2026-01-27 14:07:07.404 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:07 np0005597378 nova_compute[238941]: 2026-01-27 14:07:07.404 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:08 np0005597378 nova_compute[238941]: 2026-01-27 14:07:08.144 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:08 np0005597378 nova_compute[238941]: 2026-01-27 14:07:08.191 238945 DEBUG nova.network.neutron [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Successfully created port: 05f217fa-372b-46d3-974f-de79101f0b2f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:07:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 69 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 680 KiB/s wr, 25 op/s
Jan 27 09:07:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:09 np0005597378 nova_compute[238941]: 2026-01-27 14:07:09.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:07:09 np0005597378 nova_compute[238941]: 2026-01-27 14:07:09.996 238945 DEBUG nova.network.neutron [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Successfully updated port: 05f217fa-372b-46d3-974f-de79101f0b2f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:07:10 np0005597378 nova_compute[238941]: 2026-01-27 14:07:10.017 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:07:10 np0005597378 nova_compute[238941]: 2026-01-27 14:07:10.017 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:07:10 np0005597378 nova_compute[238941]: 2026-01-27 14:07:10.017 238945 DEBUG nova.network.neutron [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:07:10 np0005597378 nova_compute[238941]: 2026-01-27 14:07:10.128 238945 DEBUG nova.compute.manager [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-changed-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:10 np0005597378 nova_compute[238941]: 2026-01-27 14:07:10.129 238945 DEBUG nova.compute.manager [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Refreshing instance network info cache due to event network-changed-05f217fa-372b-46d3-974f-de79101f0b2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:07:10 np0005597378 nova_compute[238941]: 2026-01-27 14:07:10.129 238945 DEBUG oslo_concurrency.lockutils [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:07:10 np0005597378 nova_compute[238941]: 2026-01-27 14:07:10.211 238945 DEBUG nova.network.neutron [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:07:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.594 238945 DEBUG nova.network.neutron [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.629 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.629 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance network_info: |[{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.630 238945 DEBUG oslo_concurrency.lockutils [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.630 238945 DEBUG nova.network.neutron [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Refreshing network info cache for port 05f217fa-372b-46d3-974f-de79101f0b2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.633 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Start _get_guest_xml network_info=[{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.638 238945 WARNING nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.643 238945 DEBUG nova.virt.libvirt.host [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.643 238945 DEBUG nova.virt.libvirt.host [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.646 238945 DEBUG nova.virt.libvirt.host [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.646 238945 DEBUG nova.virt.libvirt.host [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.647 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.647 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.648 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.648 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.648 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.648 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.649 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.649 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.649 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.649 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.649 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.650 238945 DEBUG nova.virt.hardware [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:07:11 np0005597378 nova_compute[238941]: 2026-01-27 14:07:11.652 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:11 np0005597378 podman[331833]: 2026-01-27 14:07:11.728547444 +0000 UTC m=+0.065685297 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:07:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:07:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1436880587' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.249 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.277 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.283 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.344 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:12 np0005597378 podman[331915]: 2026-01-27 14:07:12.741156037 +0000 UTC m=+0.075911562 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
container_name=ovn_controller, config_id=ovn_controller)
Jan 27 09:07:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:07:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:07:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1001427324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.814 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.816 238945 DEBUG nova.virt.libvirt.vif [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNeg
ativeTestJSON-1782469845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:07:06Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.816 238945 DEBUG nova.network.os_vif_util [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.817 238945 DEBUG nova.network.os_vif_util [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.818 238945 DEBUG nova.objects.instance [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.844 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  <uuid>7514a588-c48b-45af-a889-ea57cc9f1730</uuid>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  <name>instance-0000006a</name>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersNegativeTestJSON-server-1964192211</nova:name>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:07:11</nova:creationTime>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:        <nova:user uuid="945414e1b82946ccadab2e408cf6151c">tempest-ServersNegativeTestJSON-1782469845-project-member</nova:user>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:        <nova:project uuid="96c668beb6b74661927ce283539bb68e">tempest-ServersNegativeTestJSON-1782469845</nova:project>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:        <nova:port uuid="05f217fa-372b-46d3-974f-de79101f0b2f">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <entry name="serial">7514a588-c48b-45af-a889-ea57cc9f1730</entry>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <entry name="uuid">7514a588-c48b-45af-a889-ea57cc9f1730</entry>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk.config">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:e3:41:9e"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <target dev="tap05f217fa-37"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/console.log" append="off"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:07:12 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:07:12 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:07:12 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:07:12 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.846 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Preparing to wait for external event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.846 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.846 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.847 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.847 238945 DEBUG nova.virt.libvirt.vif [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-
ServersNegativeTestJSON-1782469845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:07:06Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.848 238945 DEBUG nova.network.os_vif_util [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.848 238945 DEBUG nova.network.os_vif_util [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.849 238945 DEBUG os_vif [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.849 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.849 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.850 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.854 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05f217fa-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.854 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05f217fa-37, col_values=(('external_ids', {'iface-id': '05f217fa-372b-46d3-974f-de79101f0b2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:41:9e', 'vm-uuid': '7514a588-c48b-45af-a889-ea57cc9f1730'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:12 np0005597378 NetworkManager[48904]: <info>  [1769522832.8574] manager: (tap05f217fa-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.858 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.864 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.865 238945 INFO os_vif [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37')#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.934 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.935 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.935 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No VIF found with MAC fa:16:3e:e3:41:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.935 238945 INFO nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Using config drive#033[00m
Jan 27 09:07:12 np0005597378 nova_compute[238941]: 2026-01-27 14:07:12.956 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.245 238945 DEBUG nova.network.neutron [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updated VIF entry in instance network info cache for port 05f217fa-372b-46d3-974f-de79101f0b2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.246 238945 DEBUG nova.network.neutron [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.265 238945 DEBUG oslo_concurrency.lockutils [req-d96d47dd-abaa-456b-ad31-472419228955 req-0faec130-3dc3-4bb2-8252-36a25952758b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.305 238945 INFO nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Creating config drive at /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config#033[00m
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.310 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9mvea551 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.452 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9mvea551" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.478 238945 DEBUG nova.storage.rbd_utils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.482 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.792 238945 DEBUG oslo_concurrency.processutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.794 238945 INFO nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deleting local config drive /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config because it was imported into RBD.#033[00m
Jan 27 09:07:13 np0005597378 kernel: tap05f217fa-37: entered promiscuous mode
Jan 27 09:07:13 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:13Z|01013|binding|INFO|Claiming lport 05f217fa-372b-46d3-974f-de79101f0b2f for this chassis.
Jan 27 09:07:13 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:13Z|01014|binding|INFO|05f217fa-372b-46d3-974f-de79101f0b2f: Claiming fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 09:07:13 np0005597378 NetworkManager[48904]: <info>  [1769522833.8374] manager: (tap05f217fa-37): new Tun device (/org/freedesktop/NetworkManager/Devices/417)
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.837 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.842 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.852 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.854 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b bound to our chassis#033[00m
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.855 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b#033[00m
Jan 27 09:07:13 np0005597378 systemd-udevd[332014]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.866 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[599eac2b-4efb-4af4-9fa5-fe6dd5b9060d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.867 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d5d79a0-31 in ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.870 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d5d79a0-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.870 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa75112-47d3-4e70-bf3f-de5629b9a785]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.871 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2f5c48-d4f1-4411-b992-1f4291adac05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:13 np0005597378 systemd-machined[207425]: New machine qemu-132-instance-0000006a.
Jan 27 09:07:13 np0005597378 NetworkManager[48904]: <info>  [1769522833.8762] device (tap05f217fa-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:07:13 np0005597378 NetworkManager[48904]: <info>  [1769522833.8776] device (tap05f217fa-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.883 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[88689631-15f1-49b1-b7bb-a6f317b48d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.902 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.907 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea8a661-b3d9-497a-a436-287b4910a217]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:13 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:13Z|01015|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f ovn-installed in OVS
Jan 27 09:07:13 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:13Z|01016|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f up in Southbound
Jan 27 09:07:13 np0005597378 nova_compute[238941]: 2026-01-27 14:07:13.910 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:13 np0005597378 systemd[1]: Started Virtual Machine qemu-132-instance-0000006a.
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.939 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[346d00ac-5d01-4ec3-b663-6bc822d21f54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:13 np0005597378 NetworkManager[48904]: <info>  [1769522833.9454] manager: (tap5d5d79a0-30): new Veth device (/org/freedesktop/NetworkManager/Devices/418)
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.944 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca821c9-cca3-4176-9155-e3d6d5ffea00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.972 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d66eb7-c19d-47bb-a318-38fa6135b5ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:13.975 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff9aa45-451f-4c44-9cc6-3fb93687df44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:13 np0005597378 NetworkManager[48904]: <info>  [1769522833.9980] device (tap5d5d79a0-30): carrier: link connected
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.003 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ec0458-f27c-4a5a-99aa-cf699d0f0b7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.021 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72a2a2a1-4325-42c9-b87a-ce0bb3f61c5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547362, 'reachable_time': 37253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332048, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.034 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[baec83ac-cc77-40ce-ba92-654210e47be4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:6ed9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547362, 'tstamp': 547362}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332049, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.047 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9481e881-9c98-4e2d-9871-40a36acf08a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547362, 'reachable_time': 37253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332050, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.079 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[36352a3a-b711-4bc2-9aa1-d9815c391e2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.134 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d9600e0e-036c-4b54-8465-5ef550467248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.135 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.135 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.136 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d5d79a0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:14 np0005597378 kernel: tap5d5d79a0-30: entered promiscuous mode
Jan 27 09:07:14 np0005597378 NetworkManager[48904]: <info>  [1769522834.1389] manager: (tap5d5d79a0-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Jan 27 09:07:14 np0005597378 nova_compute[238941]: 2026-01-27 14:07:14.142 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.142 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d5d79a0-30, col_values=(('external_ids', {'iface-id': '174db04a-6000-4d42-9793-445f0033fd57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:14 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:14Z|01017|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.157 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.158 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[48da34cc-2a37-4f5f-8ab0-51a26ff87ad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.159 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:07:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:14.221 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'env', 'PROCESS_TAG=haproxy-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:07:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:14 np0005597378 nova_compute[238941]: 2026-01-27 14:07:14.386 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522834.3865585, 7514a588-c48b-45af-a889-ea57cc9f1730 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:07:14 np0005597378 nova_compute[238941]: 2026-01-27 14:07:14.387 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Started (Lifecycle Event)#033[00m
Jan 27 09:07:14 np0005597378 nova_compute[238941]: 2026-01-27 14:07:14.411 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:14 np0005597378 nova_compute[238941]: 2026-01-27 14:07:14.415 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522834.3874726, 7514a588-c48b-45af-a889-ea57cc9f1730 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:07:14 np0005597378 nova_compute[238941]: 2026-01-27 14:07:14.415 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:07:14 np0005597378 nova_compute[238941]: 2026-01-27 14:07:14.437 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:14 np0005597378 nova_compute[238941]: 2026-01-27 14:07:14.440 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:07:14 np0005597378 nova_compute[238941]: 2026-01-27 14:07:14.463 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:07:14 np0005597378 podman[332124]: 2026-01-27 14:07:14.552700089 +0000 UTC m=+0.024748797 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:07:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:07:14 np0005597378 podman[332124]: 2026-01-27 14:07:14.871048877 +0000 UTC m=+0.343097555 container create 872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:07:14 np0005597378 systemd[1]: Started libpod-conmon-872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1.scope.
Jan 27 09:07:14 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:07:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924ca992457f6bdeab196a6ef3ca22c517d8ecce52c9ae93303d17b4c880e894/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:14 np0005597378 podman[332124]: 2026-01-27 14:07:14.986662015 +0000 UTC m=+0.458710723 container init 872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 09:07:14 np0005597378 podman[332124]: 2026-01-27 14:07:14.992348518 +0000 UTC m=+0.464397206 container start 872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 09:07:15 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [NOTICE]   (332144) : New worker (332146) forked
Jan 27 09:07:15 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [NOTICE]   (332144) : Loading success.
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.050 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.050 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.084 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.181 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.183 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.189 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.190 238945 INFO nova.compute.claims [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.300 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:07:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/519687671' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.845 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.852 238945 DEBUG nova.compute.provider_tree [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.872 238945 DEBUG nova.scheduler.client.report [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.897 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.898 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.965 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.965 238945 DEBUG nova.network.neutron [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:07:15 np0005597378 nova_compute[238941]: 2026-01-27 14:07:15.983 238945 INFO nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.005 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.092 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.093 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.094 238945 INFO nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Creating image(s)#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.123 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.143 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.167 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.171 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.205 238945 DEBUG nova.policy [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a87606137cd4440ab2ffebe68b325a85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.239 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.240 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.240 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.241 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.259 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.263 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4259b642-9030-422e-b18b-71be996845f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.523 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 4259b642-9030-422e-b18b-71be996845f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.581 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] resizing rbd image 4259b642-9030-422e-b18b-71be996845f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.762 238945 DEBUG nova.objects.instance [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'migration_context' on Instance uuid 4259b642-9030-422e-b18b-71be996845f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:07:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 88 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.778 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.778 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Ensure instance console log exists: /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.779 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.779 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:16 np0005597378 nova_compute[238941]: 2026-01-27 14:07:16.779 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:07:17
Jan 27 09:07:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:07:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:07:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.meta', 'default.rgw.log', 'images', 'backups', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 27 09:07:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.342 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:07:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:07:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:07:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:07:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:07:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.856 238945 DEBUG nova.compute.manager [req-673f5c2e-98cb-4cb8-b747-0bd4efb47c20 req-c66aada3-b2d2-46bc-9059-c85e1249b6c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.856 238945 DEBUG oslo_concurrency.lockutils [req-673f5c2e-98cb-4cb8-b747-0bd4efb47c20 req-c66aada3-b2d2-46bc-9059-c85e1249b6c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.856 238945 DEBUG oslo_concurrency.lockutils [req-673f5c2e-98cb-4cb8-b747-0bd4efb47c20 req-c66aada3-b2d2-46bc-9059-c85e1249b6c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.857 238945 DEBUG oslo_concurrency.lockutils [req-673f5c2e-98cb-4cb8-b747-0bd4efb47c20 req-c66aada3-b2d2-46bc-9059-c85e1249b6c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.857 238945 DEBUG nova.compute.manager [req-673f5c2e-98cb-4cb8-b747-0bd4efb47c20 req-c66aada3-b2d2-46bc-9059-c85e1249b6c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Processing event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.857 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.858 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.861 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522837.8616986, 7514a588-c48b-45af-a889-ea57cc9f1730 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.862 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.866 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.870 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance spawned successfully.#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.870 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.884 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.889 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.892 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.893 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.893 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.893 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.894 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.894 238945 DEBUG nova.virt.libvirt.driver [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.923 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.965 238945 INFO nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Took 11.28 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:07:17 np0005597378 nova_compute[238941]: 2026-01-27 14:07:17.965 238945 DEBUG nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:18 np0005597378 nova_compute[238941]: 2026-01-27 14:07:18.024 238945 INFO nova.compute.manager [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Took 12.32 seconds to build instance.#033[00m
Jan 27 09:07:18 np0005597378 nova_compute[238941]: 2026-01-27 14:07:18.042 238945 DEBUG oslo_concurrency.lockutils [None req-1b8aa3c5-e8b2-4e11-b5b0-443c78b28629 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:07:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:07:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:07:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:07:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:07:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:07:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:07:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:07:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:07:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:07:18 np0005597378 nova_compute[238941]: 2026-01-27 14:07:18.570 238945 DEBUG nova.network.neutron [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Successfully created port: f52f3fb0-e55e-48d6-b983-7e87ed6296d2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:07:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 117 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 3.2 MiB/s wr, 50 op/s
Jan 27 09:07:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.512 238945 DEBUG nova.network.neutron [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Successfully updated port: f52f3fb0-e55e-48d6-b983-7e87ed6296d2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.527 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.527 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.527 238945 DEBUG nova.network.neutron [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.630 238945 DEBUG nova.compute.manager [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-changed-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.630 238945 DEBUG nova.compute.manager [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Refreshing instance network info cache due to event network-changed-f52f3fb0-e55e-48d6-b983-7e87ed6296d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.631 238945 DEBUG oslo_concurrency.lockutils [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.715 238945 DEBUG nova.network.neutron [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.956 238945 DEBUG nova.compute.manager [req-2d9f79a0-e795-464b-a3ec-9a05edfaff62 req-9b13758f-3d51-4231-9509-70dd18792529 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.957 238945 DEBUG oslo_concurrency.lockutils [req-2d9f79a0-e795-464b-a3ec-9a05edfaff62 req-9b13758f-3d51-4231-9509-70dd18792529 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.957 238945 DEBUG oslo_concurrency.lockutils [req-2d9f79a0-e795-464b-a3ec-9a05edfaff62 req-9b13758f-3d51-4231-9509-70dd18792529 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.957 238945 DEBUG oslo_concurrency.lockutils [req-2d9f79a0-e795-464b-a3ec-9a05edfaff62 req-9b13758f-3d51-4231-9509-70dd18792529 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.957 238945 DEBUG nova.compute.manager [req-2d9f79a0-e795-464b-a3ec-9a05edfaff62 req-9b13758f-3d51-4231-9509-70dd18792529 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:07:19 np0005597378 nova_compute[238941]: 2026-01-27 14:07:19.958 238945 WARNING nova.compute.manager [req-2d9f79a0-e795-464b-a3ec-9a05edfaff62 req-9b13758f-3d51-4231-9509-70dd18792529 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state None.#033[00m
Jan 27 09:07:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 134 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.9 MiB/s wr, 74 op/s
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.673 238945 DEBUG nova.network.neutron [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updating instance_info_cache with network_info: [{"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.701 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.701 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Instance network_info: |[{"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.701 238945 DEBUG oslo_concurrency.lockutils [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.702 238945 DEBUG nova.network.neutron [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Refreshing network info cache for port f52f3fb0-e55e-48d6-b983-7e87ed6296d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.704 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Start _get_guest_xml network_info=[{"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.709 238945 WARNING nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.717 238945 DEBUG nova.virt.libvirt.host [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.717 238945 DEBUG nova.virt.libvirt.host [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.725 238945 DEBUG nova.virt.libvirt.host [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.726 238945 DEBUG nova.virt.libvirt.host [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.726 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.727 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.727 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.728 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.728 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.728 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.728 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.729 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.729 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.729 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.730 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.730 238945 DEBUG nova.virt.hardware [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:07:21 np0005597378 nova_compute[238941]: 2026-01-27 14:07:21.734 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:07:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1757266559' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.342 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.369 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.374 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.410 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 134 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.860 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:07:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3894055311' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.962 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.964 238945 DEBUG nova.virt.libvirt.vif [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1477433599',display_name='tempest-TestNetworkAdvancedServerOps-server-1477433599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1477433599',id=107,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOTlA/zzf8k5RE7YtGF0JstMbK3jsPM4B/qPl2KHxnpSuYQBvrk4VnlFqVZKbSWBalvc4W/8oi1h1woqWdU+1B67nCBWNnY6LMtFdr08A3euNBaTSW62NVvw7+zpwmvIZg==',key_name='tempest-TestNetworkAdvancedServerOps-587163644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-hwy9nsc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:07:16Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=4259b642-9030-422e-b18b-71be996845f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.965 238945 DEBUG nova.network.os_vif_util [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.966 238945 DEBUG nova.network.os_vif_util [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.967 238945 DEBUG nova.objects.instance [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid 4259b642-9030-422e-b18b-71be996845f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.985 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  <uuid>4259b642-9030-422e-b18b-71be996845f4</uuid>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  <name>instance-0000006b</name>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1477433599</nova:name>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:07:21</nova:creationTime>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:        <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:        <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:        <nova:port uuid="f52f3fb0-e55e-48d6-b983-7e87ed6296d2">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <entry name="serial">4259b642-9030-422e-b18b-71be996845f4</entry>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <entry name="uuid">4259b642-9030-422e-b18b-71be996845f4</entry>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/4259b642-9030-422e-b18b-71be996845f4_disk">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/4259b642-9030-422e-b18b-71be996845f4_disk.config">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:9d:be:34"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <target dev="tapf52f3fb0-e5"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/console.log" append="off"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:07:22 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:07:22 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:07:22 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:07:22 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.987 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Preparing to wait for external event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.988 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.988 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.989 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.989 238945 DEBUG nova.virt.libvirt.vif [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1477433599',display_name='tempest-TestNetworkAdvancedServerOps-server-1477433599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1477433599',id=107,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOTlA/zzf8k5RE7YtGF0JstMbK3jsPM4B/qPl2KHxnpSuYQBvrk4VnlFqVZKbSWBalvc4W/8oi1h1woqWdU+1B67nCBWNnY6LMtFdr08A3euNBaTSW62NVvw7+zpwmvIZg==',key_name='tempest-TestNetworkAdvancedServerOps-587163644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-hwy9nsc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:07:16Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=4259b642-9030-422e-b18b-71be996845f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.990 238945 DEBUG nova.network.os_vif_util [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.990 238945 DEBUG nova.network.os_vif_util [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.991 238945 DEBUG os_vif [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.992 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.992 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.995 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.996 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf52f3fb0-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:22 np0005597378 nova_compute[238941]: 2026-01-27 14:07:22.997 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf52f3fb0-e5, col_values=(('external_ids', {'iface-id': 'f52f3fb0-e55e-48d6-b983-7e87ed6296d2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:be:34', 'vm-uuid': '4259b642-9030-422e-b18b-71be996845f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:22 np0005597378 NetworkManager[48904]: <info>  [1769522842.9995] manager: (tapf52f3fb0-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.003 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.006 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.007 238945 INFO os_vif [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5')#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.083 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.083 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.083 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No VIF found with MAC fa:16:3e:9d:be:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.084 238945 INFO nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Using config drive#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.107 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.115 238945 DEBUG nova.network.neutron [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updated VIF entry in instance network info cache for port f52f3fb0-e55e-48d6-b983-7e87ed6296d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.115 238945 DEBUG nova.network.neutron [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updating instance_info_cache with network_info: [{"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.158 238945 DEBUG oslo_concurrency.lockutils [req-634b113b-da70-48f9-bf77-c1486b397072 req-72cb679a-9aa8-4cbb-901e-84e7a2d1d57b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.163 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.164 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.179 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.258 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.259 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.267 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.267 238945 INFO nova.compute.claims [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.412 238945 INFO nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Creating config drive at /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/disk.config#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.418 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9rqt4ir9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.465 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.571 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9rqt4ir9" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.602 238945 DEBUG nova.storage.rbd_utils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 4259b642-9030-422e-b18b-71be996845f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.607 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/disk.config 4259b642-9030-422e-b18b-71be996845f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.848 238945 DEBUG oslo_concurrency.processutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/disk.config 4259b642-9030-422e-b18b-71be996845f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.849 238945 INFO nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Deleting local config drive /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4/disk.config because it was imported into RBD.#033[00m
Jan 27 09:07:23 np0005597378 kernel: tapf52f3fb0-e5: entered promiscuous mode
Jan 27 09:07:23 np0005597378 NetworkManager[48904]: <info>  [1769522843.9023] manager: (tapf52f3fb0-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/421)
Jan 27 09:07:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:23Z|01018|binding|INFO|Claiming lport f52f3fb0-e55e-48d6-b983-7e87ed6296d2 for this chassis.
Jan 27 09:07:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:23Z|01019|binding|INFO|f52f3fb0-e55e-48d6-b983-7e87ed6296d2: Claiming fa:16:3e:9d:be:34 10.100.0.6
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.911 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.916 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:be:34 10.100.0.6'], port_security=['fa:16:3e:9d:be:34 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4259b642-9030-422e-b18b-71be996845f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6dcab28e-8e80-4909-a1ac-f9e4562ec577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=52e54f30-cc96-4c77-8cb6-812d376ca09a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=f52f3fb0-e55e-48d6-b983-7e87ed6296d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:07:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.917 154802 INFO neutron.agent.ovn.metadata.agent [-] Port f52f3fb0-e55e-48d6-b983-7e87ed6296d2 in datapath a225edd2-04b0-4782-bb92-d2dbbfa8bc5e bound to our chassis#033[00m
Jan 27 09:07:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.918 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a225edd2-04b0-4782-bb92-d2dbbfa8bc5e#033[00m
Jan 27 09:07:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.931 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1d022a13-5497-474b-b29d-c89899c40b67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.932 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa225edd2-01 in ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:07:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.943 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa225edd2-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:07:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.943 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b538c5c2-59e3-47ac-85b5-3ec0925e3506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:23 np0005597378 systemd-udevd[332549]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:07:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.944 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8928aa01-f8d6-4b97-9566-d78dbafc49e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:23 np0005597378 systemd-machined[207425]: New machine qemu-133-instance-0000006b.
Jan 27 09:07:23 np0005597378 NetworkManager[48904]: <info>  [1769522843.9575] device (tapf52f3fb0-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:07:23 np0005597378 NetworkManager[48904]: <info>  [1769522843.9584] device (tapf52f3fb0-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:07:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.961 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[dc410e16-50f7-4fde-9ec3-ac9f9d513366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:23 np0005597378 systemd[1]: Started Virtual Machine qemu-133-instance-0000006b.
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.976 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:23.979 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a5e300d-e8fa-48fe-909a-245706f3c292]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:23Z|01020|binding|INFO|Setting lport f52f3fb0-e55e-48d6-b983-7e87ed6296d2 ovn-installed in OVS
Jan 27 09:07:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:23Z|01021|binding|INFO|Setting lport f52f3fb0-e55e-48d6-b983-7e87ed6296d2 up in Southbound
Jan 27 09:07:23 np0005597378 nova_compute[238941]: 2026-01-27 14:07:23.982 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.012 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0975b410-006e-41a8-a484-aef149aa04f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:24 np0005597378 NetworkManager[48904]: <info>  [1769522844.0221] manager: (tapa225edd2-00): new Veth device (/org/freedesktop/NetworkManager/Devices/422)
Jan 27 09:07:24 np0005597378 systemd-udevd[332552]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.025 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[473a619b-743c-448e-a2a4-c3b231f7e542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.061 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[219ea783-5ae2-4370-8d98-2821d32c70c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.065 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2ecb86-a5e5-4a9a-a5b4-f67d517411b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1547813249' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:07:24 np0005597378 NetworkManager[48904]: <info>  [1769522844.0877] device (tapa225edd2-00): carrier: link connected
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.091 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8e582e-cdbe-47ca-ac44-2cb7f6df38ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.097 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.107 238945 DEBUG nova.compute.provider_tree [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.116 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f37b05-085f-4d4d-b03a-77509b5a5105]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa225edd2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:da:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548371, 'reachable_time': 26590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332583, 'error': None, 'target': 'ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.131 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2760a8f9-097b-47f4-9fd3-e8e167122974]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3d:da18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548371, 'tstamp': 548371}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332585, 'error': None, 'target': 'ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.131 238945 DEBUG nova.scheduler.client.report [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.159 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.160 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.163 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b470560d-3676-4bff-839c-ba62a38ca43f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa225edd2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:da:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548371, 'reachable_time': 26590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332592, 'error': None, 'target': 'ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.190 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58fe07f9-502a-4fa7-90c2-741a15e1c075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.202 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.202 238945 DEBUG nova.network.neutron [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.228 238945 INFO nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.248 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.283 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72ad4512-5342-4200-8fac-cd2d91c5549c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.285 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa225edd2-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.285 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.286 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa225edd2-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:24 np0005597378 kernel: tapa225edd2-00: entered promiscuous mode
Jan 27 09:07:24 np0005597378 NetworkManager[48904]: <info>  [1769522844.2895] manager: (tapa225edd2-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/423)
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.293 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa225edd2-00, col_values=(('external_ids', {'iface-id': '553ac0b9-81da-4f9f-8f6d-c743fffbe53a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.295 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a225edd2-04b0-4782-bb92-d2dbbfa8bc5e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a225edd2-04b0-4782-bb92-d2dbbfa8bc5e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:07:24 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:24Z|01022|binding|INFO|Releasing lport 553ac0b9-81da-4f9f-8f6d-c743fffbe53a from this chassis (sb_readonly=0)
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.296 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[616562ac-09ae-434b-be34-8f7fdac1670c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.298 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/a225edd2-04b0-4782-bb92-d2dbbfa8bc5e.pid.haproxy
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID a225edd2-04b0-4782-bb92-d2dbbfa8bc5e
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:24.300 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'env', 'PROCESS_TAG=haproxy-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a225edd2-04b0-4782-bb92-d2dbbfa8bc5e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.310 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.335 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.338 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.339 238945 INFO nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Creating image(s)#033[00m
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.369 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.393 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.415 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.419 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.462 238945 DEBUG nova.compute.manager [req-958667ac-4fdc-42c8-8033-ff36bb5f5344 req-4c030014-e7b3-4e61-a96f-b39662de444a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.463 238945 DEBUG oslo_concurrency.lockutils [req-958667ac-4fdc-42c8-8033-ff36bb5f5344 req-4c030014-e7b3-4e61-a96f-b39662de444a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.463 238945 DEBUG oslo_concurrency.lockutils [req-958667ac-4fdc-42c8-8033-ff36bb5f5344 req-4c030014-e7b3-4e61-a96f-b39662de444a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.463 238945 DEBUG oslo_concurrency.lockutils [req-958667ac-4fdc-42c8-8033-ff36bb5f5344 req-4c030014-e7b3-4e61-a96f-b39662de444a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.464 238945 DEBUG nova.compute.manager [req-958667ac-4fdc-42c8-8033-ff36bb5f5344 req-4c030014-e7b3-4e61-a96f-b39662de444a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Processing event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.507 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.508 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.509 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.509 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.541 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.553 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6eca106c-c3a5-4932-93f4-8208e54431e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.727 238945 DEBUG nova.policy [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '945414e1b82946ccadab2e408cf6151c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96c668beb6b74661927ce283539bb68e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:07:24 np0005597378 podman[332740]: 2026-01-27 14:07:24.666025576 +0000 UTC m=+0.033234655 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:07:24 np0005597378 podman[332740]: 2026-01-27 14:07:24.768144322 +0000 UTC m=+0.135353371 container create 89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 09:07:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 134 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:07:24 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:07:24 np0005597378 systemd[1]: Started libpod-conmon-89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679.scope.
Jan 27 09:07:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:07:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2495157624be490c860f0c22e86290ed614242e1ab449ab46285c8548caa66dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:24 np0005597378 nova_compute[238941]: 2026-01-27 14:07:24.966 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6eca106c-c3a5-4932-93f4-8208e54431e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:24 np0005597378 podman[332740]: 2026-01-27 14:07:24.977752656 +0000 UTC m=+0.344961725 container init 89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:07:24 np0005597378 podman[332740]: 2026-01-27 14:07:24.985074904 +0000 UTC m=+0.352283953 container start 89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:07:25 np0005597378 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [NOTICE]   (332828) : New worker (332853) forked
Jan 27 09:07:25 np0005597378 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [NOTICE]   (332828) : Loading success.
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.060 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] resizing rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:07:25 np0005597378 podman[332833]: 2026-01-27 14:07:25.046604628 +0000 UTC m=+0.033235285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:07:25 np0005597378 podman[332833]: 2026-01-27 14:07:25.232973158 +0000 UTC m=+0.219603775 container create 3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:07:25 np0005597378 systemd[1]: Started libpod-conmon-3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a.scope.
Jan 27 09:07:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:07:25 np0005597378 podman[332833]: 2026-01-27 14:07:25.42907097 +0000 UTC m=+0.415701607 container init 3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:07:25 np0005597378 podman[332833]: 2026-01-27 14:07:25.438277017 +0000 UTC m=+0.424907624 container start 3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:07:25 np0005597378 eager_matsumoto[332894]: 167 167
Jan 27 09:07:25 np0005597378 systemd[1]: libpod-3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a.scope: Deactivated successfully.
Jan 27 09:07:25 np0005597378 podman[332833]: 2026-01-27 14:07:25.457169865 +0000 UTC m=+0.443800472 container attach 3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:07:25 np0005597378 podman[332833]: 2026-01-27 14:07:25.457484604 +0000 UTC m=+0.444115211 container died 3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.469 238945 DEBUG nova.objects.instance [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'migration_context' on Instance uuid 6eca106c-c3a5-4932-93f4-8208e54431e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.488 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.488 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Ensure instance console log exists: /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.489 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.489 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.490 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5219fd57ef80495ed5f5101c04b342dbef49c0014806c02c7ac415a9b49c732e-merged.mount: Deactivated successfully.
Jan 27 09:07:25 np0005597378 podman[332833]: 2026-01-27 14:07:25.559973829 +0000 UTC m=+0.546604436 container remove 3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:07:25 np0005597378 systemd[1]: libpod-conmon-3c68ecf4683c13d9dda24d3066c1c0fdc2958a4eb745b82c515a3359f431f12a.scope: Deactivated successfully.
Jan 27 09:07:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:25.741 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.743 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:25.744 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.767 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.768 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522845.7671402, 4259b642-9030-422e-b18b-71be996845f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:07:25 np0005597378 podman[332975]: 2026-01-27 14:07:25.768724551 +0000 UTC m=+0.058108993 container create df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.768 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] VM Started (Lifecycle Event)#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.772 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.776 238945 INFO nova.virt.libvirt.driver [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] Instance spawned successfully.#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.776 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.796 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.802 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.807 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.808 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.809 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.809 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.810 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.810 238945 DEBUG nova.virt.libvirt.driver [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.822 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.822 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522845.7673848, 4259b642-9030-422e-b18b-71be996845f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.823 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:07:25 np0005597378 systemd[1]: Started libpod-conmon-df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256.scope.
Jan 27 09:07:25 np0005597378 podman[332975]: 2026-01-27 14:07:25.750157042 +0000 UTC m=+0.039541514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.852 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.857 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522845.7712455, 4259b642-9030-422e-b18b-71be996845f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.858 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:07:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c955675b294e16274d0283189a77d8bf183965afd42fb8d10702d789b9f7459/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c955675b294e16274d0283189a77d8bf183965afd42fb8d10702d789b9f7459/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c955675b294e16274d0283189a77d8bf183965afd42fb8d10702d789b9f7459/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c955675b294e16274d0283189a77d8bf183965afd42fb8d10702d789b9f7459/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c955675b294e16274d0283189a77d8bf183965afd42fb8d10702d789b9f7459/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.886 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.890 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:07:25 np0005597378 podman[332975]: 2026-01-27 14:07:25.900540574 +0000 UTC m=+0.189925016 container init df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.904 238945 INFO nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Took 9.81 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.904 238945 DEBUG nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:25 np0005597378 podman[332975]: 2026-01-27 14:07:25.908985951 +0000 UTC m=+0.198370393 container start df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_haibt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.913 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:07:25 np0005597378 podman[332975]: 2026-01-27 14:07:25.934782925 +0000 UTC m=+0.224167367 container attach df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_haibt, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.966 238945 INFO nova.compute.manager [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Took 10.82 seconds to build instance.#033[00m
Jan 27 09:07:25 np0005597378 nova_compute[238941]: 2026-01-27 14:07:25.987 238945 DEBUG oslo_concurrency.lockutils [None req-4926addd-f3ca-444f-9d8f-42542a4c3474 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:26 np0005597378 nova_compute[238941]: 2026-01-27 14:07:26.003 238945 DEBUG nova.network.neutron [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Successfully created port: 56cf48cb-2667-496b-8157-5edbbc1a6091 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:07:26 np0005597378 heuristic_haibt[332993]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:07:26 np0005597378 heuristic_haibt[332993]: --> All data devices are unavailable
Jan 27 09:07:26 np0005597378 systemd[1]: libpod-df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256.scope: Deactivated successfully.
Jan 27 09:07:26 np0005597378 podman[332975]: 2026-01-27 14:07:26.447195571 +0000 UTC m=+0.736580013 container died df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_haibt, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:07:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4c955675b294e16274d0283189a77d8bf183965afd42fb8d10702d789b9f7459-merged.mount: Deactivated successfully.
Jan 27 09:07:26 np0005597378 podman[332975]: 2026-01-27 14:07:26.513525354 +0000 UTC m=+0.802909796 container remove df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_haibt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 27 09:07:26 np0005597378 systemd[1]: libpod-conmon-df70441c3e3d793d85d9e5e922893aaff4ab68b637b06d141d3fa788eabf9256.scope: Deactivated successfully.
Jan 27 09:07:26 np0005597378 nova_compute[238941]: 2026-01-27 14:07:26.545 238945 DEBUG nova.compute.manager [req-03edbd47-b744-48d6-a0da-50eabaf8fbf9 req-e5ddee1a-3f1d-42c7-b53f-a8a09132da20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:26 np0005597378 nova_compute[238941]: 2026-01-27 14:07:26.547 238945 DEBUG oslo_concurrency.lockutils [req-03edbd47-b744-48d6-a0da-50eabaf8fbf9 req-e5ddee1a-3f1d-42c7-b53f-a8a09132da20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:26 np0005597378 nova_compute[238941]: 2026-01-27 14:07:26.547 238945 DEBUG oslo_concurrency.lockutils [req-03edbd47-b744-48d6-a0da-50eabaf8fbf9 req-e5ddee1a-3f1d-42c7-b53f-a8a09132da20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:26 np0005597378 nova_compute[238941]: 2026-01-27 14:07:26.549 238945 DEBUG oslo_concurrency.lockutils [req-03edbd47-b744-48d6-a0da-50eabaf8fbf9 req-e5ddee1a-3f1d-42c7-b53f-a8a09132da20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:26 np0005597378 nova_compute[238941]: 2026-01-27 14:07:26.549 238945 DEBUG nova.compute.manager [req-03edbd47-b744-48d6-a0da-50eabaf8fbf9 req-e5ddee1a-3f1d-42c7-b53f-a8a09132da20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] No waiting events found dispatching network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:07:26 np0005597378 nova_compute[238941]: 2026-01-27 14:07:26.549 238945 WARNING nova.compute.manager [req-03edbd47-b744-48d6-a0da-50eabaf8fbf9 req-e5ddee1a-3f1d-42c7-b53f-a8a09132da20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received unexpected event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:07:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 147 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 104 op/s
Jan 27 09:07:27 np0005597378 podman[333087]: 2026-01-27 14:07:27.017720199 +0000 UTC m=+0.023190464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:07:27 np0005597378 podman[333087]: 2026-01-27 14:07:27.134453947 +0000 UTC m=+0.139924192 container create 2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:07:27 np0005597378 systemd[1]: Started libpod-conmon-2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6.scope.
Jan 27 09:07:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:07:27 np0005597378 podman[333087]: 2026-01-27 14:07:27.214551931 +0000 UTC m=+0.220022196 container init 2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:07:27 np0005597378 podman[333087]: 2026-01-27 14:07:27.221456736 +0000 UTC m=+0.226926981 container start 2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 09:07:27 np0005597378 modest_torvalds[333102]: 167 167
Jan 27 09:07:27 np0005597378 podman[333087]: 2026-01-27 14:07:27.225493765 +0000 UTC m=+0.230964040 container attach 2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:07:27 np0005597378 systemd[1]: libpod-2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6.scope: Deactivated successfully.
Jan 27 09:07:27 np0005597378 podman[333087]: 2026-01-27 14:07:27.227532219 +0000 UTC m=+0.233002464 container died 2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:07:27 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d66f1ce8b24cee3361f0d8891e69fe6f906076ec5e21858ef986768888352b9c-merged.mount: Deactivated successfully.
Jan 27 09:07:27 np0005597378 podman[333087]: 2026-01-27 14:07:27.265048948 +0000 UTC m=+0.270519193 container remove 2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 09:07:27 np0005597378 systemd[1]: libpod-conmon-2767cef61054958b5e6fc7c48902e09bd2bee89a4ce755929075da967667f6a6.scope: Deactivated successfully.
Jan 27 09:07:27 np0005597378 nova_compute[238941]: 2026-01-27 14:07:27.355 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:27 np0005597378 podman[333124]: 2026-01-27 14:07:27.490017747 +0000 UTC m=+0.045614328 container create 7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_davinci, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:07:27 np0005597378 systemd[1]: Started libpod-conmon-7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd.scope.
Jan 27 09:07:27 np0005597378 podman[333124]: 2026-01-27 14:07:27.471956131 +0000 UTC m=+0.027552742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:07:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:07:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f579ce48dc84265abff501db938b4961e818aaf7175ef95344e22a542f59324c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f579ce48dc84265abff501db938b4961e818aaf7175ef95344e22a542f59324c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f579ce48dc84265abff501db938b4961e818aaf7175ef95344e22a542f59324c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f579ce48dc84265abff501db938b4961e818aaf7175ef95344e22a542f59324c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:27 np0005597378 podman[333124]: 2026-01-27 14:07:27.589501031 +0000 UTC m=+0.145097632 container init 7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:07:27 np0005597378 podman[333124]: 2026-01-27 14:07:27.596487929 +0000 UTC m=+0.152084520 container start 7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:07:27 np0005597378 podman[333124]: 2026-01-27 14:07:27.601099353 +0000 UTC m=+0.156695954 container attach 7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008243576285040627 of space, bias 1.0, pg target 0.2473072885512188 quantized to 32 (current 32)
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006685724107519121 of space, bias 1.0, pg target 0.20057172322557362 quantized to 32 (current 32)
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.068187270723703e-06 of space, bias 4.0, pg target 0.0012818247248684437 quantized to 16 (current 16)
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:07:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]: {
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:    "0": [
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:        {
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "devices": [
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "/dev/loop3"
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            ],
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_name": "ceph_lv0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_size": "21470642176",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "name": "ceph_lv0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "tags": {
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.cluster_name": "ceph",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.crush_device_class": "",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.encrypted": "0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.objectstore": "bluestore",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.osd_id": "0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.type": "block",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.vdo": "0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.with_tpm": "0"
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            },
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "type": "block",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "vg_name": "ceph_vg0"
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:        }
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:    ],
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:    "1": [
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:        {
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "devices": [
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "/dev/loop4"
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            ],
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_name": "ceph_lv1",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_size": "21470642176",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "name": "ceph_lv1",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "tags": {
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.cluster_name": "ceph",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.crush_device_class": "",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.encrypted": "0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.objectstore": "bluestore",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.osd_id": "1",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.type": "block",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.vdo": "0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.with_tpm": "0"
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            },
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "type": "block",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "vg_name": "ceph_vg1"
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:        }
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:    ],
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:    "2": [
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:        {
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "devices": [
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "/dev/loop5"
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            ],
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_name": "ceph_lv2",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_size": "21470642176",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "name": "ceph_lv2",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "tags": {
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.cluster_name": "ceph",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.crush_device_class": "",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.encrypted": "0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.objectstore": "bluestore",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.osd_id": "2",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.type": "block",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.vdo": "0",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:                "ceph.with_tpm": "0"
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            },
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "type": "block",
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:            "vg_name": "ceph_vg2"
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:        }
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]:    ]
Jan 27 09:07:27 np0005597378 relaxed_davinci[333142]: }
Jan 27 09:07:27 np0005597378 systemd[1]: libpod-7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd.scope: Deactivated successfully.
Jan 27 09:07:27 np0005597378 conmon[333142]: conmon 7625e3c8e82afce8af6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd.scope/container/memory.events
Jan 27 09:07:27 np0005597378 podman[333124]: 2026-01-27 14:07:27.899564106 +0000 UTC m=+0.455160697 container died 7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_davinci, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:07:28 np0005597378 nova_compute[238941]: 2026-01-27 14:07:27.999 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f579ce48dc84265abff501db938b4961e818aaf7175ef95344e22a542f59324c-merged.mount: Deactivated successfully.
Jan 27 09:07:28 np0005597378 podman[333124]: 2026-01-27 14:07:28.086379139 +0000 UTC m=+0.641975730 container remove 7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 09:07:28 np0005597378 systemd[1]: libpod-conmon-7625e3c8e82afce8af6b005294e4b48bf06be22afe1189aad2a216be4cc22efd.scope: Deactivated successfully.
Jan 27 09:07:28 np0005597378 nova_compute[238941]: 2026-01-27 14:07:28.378 238945 DEBUG nova.network.neutron [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Successfully updated port: 56cf48cb-2667-496b-8157-5edbbc1a6091 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:07:28 np0005597378 nova_compute[238941]: 2026-01-27 14:07:28.400 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "refresh_cache-6eca106c-c3a5-4932-93f4-8208e54431e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:07:28 np0005597378 nova_compute[238941]: 2026-01-27 14:07:28.403 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquired lock "refresh_cache-6eca106c-c3a5-4932-93f4-8208e54431e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:07:28 np0005597378 nova_compute[238941]: 2026-01-27 14:07:28.404 238945 DEBUG nova.network.neutron [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:07:28 np0005597378 nova_compute[238941]: 2026-01-27 14:07:28.504 238945 DEBUG nova.compute.manager [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-changed-56cf48cb-2667-496b-8157-5edbbc1a6091 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:28 np0005597378 nova_compute[238941]: 2026-01-27 14:07:28.504 238945 DEBUG nova.compute.manager [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Refreshing instance network info cache due to event network-changed-56cf48cb-2667-496b-8157-5edbbc1a6091. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:07:28 np0005597378 nova_compute[238941]: 2026-01-27 14:07:28.504 238945 DEBUG oslo_concurrency.lockutils [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6eca106c-c3a5-4932-93f4-8208e54431e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:07:28 np0005597378 nova_compute[238941]: 2026-01-27 14:07:28.573 238945 DEBUG nova.network.neutron [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:07:28 np0005597378 podman[333227]: 2026-01-27 14:07:28.555788159 +0000 UTC m=+0.032631258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:07:28 np0005597378 podman[333227]: 2026-01-27 14:07:28.693171082 +0000 UTC m=+0.170014151 container create f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 09:07:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:28.746 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 181 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 180 op/s
Jan 27 09:07:28 np0005597378 systemd[1]: Started libpod-conmon-f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a.scope.
Jan 27 09:07:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:07:28 np0005597378 podman[333227]: 2026-01-27 14:07:28.886192131 +0000 UTC m=+0.363035220 container init f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:07:28 np0005597378 podman[333227]: 2026-01-27 14:07:28.893724903 +0000 UTC m=+0.370567972 container start f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilbur, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:07:28 np0005597378 elated_wilbur[333243]: 167 167
Jan 27 09:07:28 np0005597378 systemd[1]: libpod-f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a.scope: Deactivated successfully.
Jan 27 09:07:28 np0005597378 podman[333227]: 2026-01-27 14:07:28.90255027 +0000 UTC m=+0.379393359 container attach f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilbur, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 09:07:28 np0005597378 podman[333227]: 2026-01-27 14:07:28.903157217 +0000 UTC m=+0.380000286 container died f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:07:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3d4886ffee917c7b83eb59f0b4abb7e716d89bbc542b55558a14cdd5b260be8b-merged.mount: Deactivated successfully.
Jan 27 09:07:28 np0005597378 podman[333227]: 2026-01-27 14:07:28.951644591 +0000 UTC m=+0.428487660 container remove f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilbur, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:07:28 np0005597378 systemd[1]: libpod-conmon-f81405220add34ec93d8704700f09cb0dac4f368069a140549d8c31ee2fd950a.scope: Deactivated successfully.
Jan 27 09:07:29 np0005597378 podman[333268]: 2026-01-27 14:07:29.134786964 +0000 UTC m=+0.044047505 container create 06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 09:07:29 np0005597378 systemd[1]: Started libpod-conmon-06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b.scope.
Jan 27 09:07:29 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:07:29 np0005597378 podman[333268]: 2026-01-27 14:07:29.117872489 +0000 UTC m=+0.027133060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:07:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f77634a7a339697e20dfb45b1851c7f44356955d453ef08f618f8287453f220/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f77634a7a339697e20dfb45b1851c7f44356955d453ef08f618f8287453f220/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f77634a7a339697e20dfb45b1851c7f44356955d453ef08f618f8287453f220/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f77634a7a339697e20dfb45b1851c7f44356955d453ef08f618f8287453f220/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:07:29 np0005597378 podman[333268]: 2026-01-27 14:07:29.231793913 +0000 UTC m=+0.141054474 container init 06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_engelbart, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:07:29 np0005597378 podman[333268]: 2026-01-27 14:07:29.239704635 +0000 UTC m=+0.148965176 container start 06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:07:29 np0005597378 podman[333268]: 2026-01-27 14:07:29.2428495 +0000 UTC m=+0.152110071 container attach 06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.314 238945 DEBUG nova.network.neutron [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Updating instance_info_cache with network_info: [{"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.337 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Releasing lock "refresh_cache-6eca106c-c3a5-4932-93f4-8208e54431e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.337 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Instance network_info: |[{"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.337 238945 DEBUG oslo_concurrency.lockutils [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6eca106c-c3a5-4932-93f4-8208e54431e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.338 238945 DEBUG nova.network.neutron [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Refreshing network info cache for port 56cf48cb-2667-496b-8157-5edbbc1a6091 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.340 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Start _get_guest_xml network_info=[{"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.344 238945 WARNING nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.350 238945 DEBUG nova.virt.libvirt.host [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.350 238945 DEBUG nova.virt.libvirt.host [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.356 238945 DEBUG nova.virt.libvirt.host [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.357 238945 DEBUG nova.virt.libvirt.host [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.357 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.357 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.358 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.358 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.358 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.358 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.358 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.358 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.359 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.359 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.359 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.359 238945 DEBUG nova.virt.hardware [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.361 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:29 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 27 09:07:29 np0005597378 lvm[333383]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:07:29 np0005597378 lvm[333383]: VG ceph_vg1 finished
Jan 27 09:07:29 np0005597378 lvm[333382]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:07:29 np0005597378 lvm[333382]: VG ceph_vg0 finished
Jan 27 09:07:29 np0005597378 lvm[333385]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:07:29 np0005597378 lvm[333385]: VG ceph_vg2 finished
Jan 27 09:07:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:07:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1654037007' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:07:29 np0005597378 nova_compute[238941]: 2026-01-27 14:07:29.991 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:30 np0005597378 objective_engelbart[333284]: {}
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.040 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.044 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:30 np0005597378 systemd[1]: libpod-06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b.scope: Deactivated successfully.
Jan 27 09:07:30 np0005597378 systemd[1]: libpod-06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b.scope: Consumed 1.292s CPU time.
Jan 27 09:07:30 np0005597378 podman[333268]: 2026-01-27 14:07:30.04895257 +0000 UTC m=+0.958213111 container died 06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_engelbart, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:07:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4f77634a7a339697e20dfb45b1851c7f44356955d453ef08f618f8287453f220-merged.mount: Deactivated successfully.
Jan 27 09:07:30 np0005597378 podman[333268]: 2026-01-27 14:07:30.142377152 +0000 UTC m=+1.051637693 container remove 06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_engelbart, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:07:30 np0005597378 systemd[1]: libpod-conmon-06a5eba63037ce62af7f5380527d79e3de4206d249bc0c7baceecdc9a54f0b9b.scope: Deactivated successfully.
Jan 27 09:07:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:07:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:07:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:07:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:07:30 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:30Z|01023|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 09:07:30 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:30Z|01024|binding|INFO|Releasing lport 553ac0b9-81da-4f9f-8f6d-c743fffbe53a from this chassis (sb_readonly=0)
Jan 27 09:07:30 np0005597378 NetworkManager[48904]: <info>  [1769522850.4417] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Jan 27 09:07:30 np0005597378 NetworkManager[48904]: <info>  [1769522850.4429] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.443 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:30 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:30Z|01025|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 09:07:30 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:30Z|01026|binding|INFO|Releasing lport 553ac0b9-81da-4f9f-8f6d-c743fffbe53a from this chassis (sb_readonly=0)
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.475 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.567 238945 DEBUG nova.network.neutron [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Updated VIF entry in instance network info cache for port 56cf48cb-2667-496b-8157-5edbbc1a6091. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.568 238945 DEBUG nova.network.neutron [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Updating instance_info_cache with network_info: [{"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.590 238945 DEBUG oslo_concurrency.lockutils [req-60ad3bba-011c-4e96-8761-58189015a503 req-b24921bd-f6b2-48c8-81af-bf8fee4932a5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6eca106c-c3a5-4932-93f4-8208e54431e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:07:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:07:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/332052533' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.654 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.657 238945 DEBUG nova.virt.libvirt.vif [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:07:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-205053496',display_name='tempest-ServersNegativeTestJSON-server-205053496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-205053496',id=108,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-opmrxpla',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:07:24Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=6eca106c-c3a5-4932-93f4-8208e54431e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.657 238945 DEBUG nova.network.os_vif_util [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.659 238945 DEBUG nova.network.os_vif_util [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.660 238945 DEBUG nova.objects.instance [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'pci_devices' on Instance uuid 6eca106c-c3a5-4932-93f4-8208e54431e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.676 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  <uuid>6eca106c-c3a5-4932-93f4-8208e54431e0</uuid>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  <name>instance-0000006c</name>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersNegativeTestJSON-server-205053496</nova:name>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:07:29</nova:creationTime>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:        <nova:user uuid="945414e1b82946ccadab2e408cf6151c">tempest-ServersNegativeTestJSON-1782469845-project-member</nova:user>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:        <nova:project uuid="96c668beb6b74661927ce283539bb68e">tempest-ServersNegativeTestJSON-1782469845</nova:project>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:        <nova:port uuid="56cf48cb-2667-496b-8157-5edbbc1a6091">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <entry name="serial">6eca106c-c3a5-4932-93f4-8208e54431e0</entry>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <entry name="uuid">6eca106c-c3a5-4932-93f4-8208e54431e0</entry>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6eca106c-c3a5-4932-93f4-8208e54431e0_disk">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6eca106c-c3a5-4932-93f4-8208e54431e0_disk.config">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:ef:80:80"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <target dev="tap56cf48cb-26"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/console.log" append="off"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:07:30 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:07:30 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:07:30 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:07:30 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.678 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Preparing to wait for external event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.678 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.678 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.679 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.679 238945 DEBUG nova.virt.libvirt.vif [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:07:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-205053496',display_name='tempest-ServersNegativeTestJSON-server-205053496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-205053496',id=108,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-opmrxpla',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:07:24Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=6eca106c-c3a5-4932-93f4-8208e54431e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.680 238945 DEBUG nova.network.os_vif_util [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.680 238945 DEBUG nova.network.os_vif_util [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.681 238945 DEBUG os_vif [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.682 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.682 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.683 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.686 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56cf48cb-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.686 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56cf48cb-26, col_values=(('external_ids', {'iface-id': '56cf48cb-2667-496b-8157-5edbbc1a6091', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:80:80', 'vm-uuid': '6eca106c-c3a5-4932-93f4-8208e54431e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.688 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:30 np0005597378 NetworkManager[48904]: <info>  [1769522850.6889] manager: (tap56cf48cb-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.690 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.695 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.696 238945 INFO os_vif [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26')#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.766 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.766 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.766 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No VIF found with MAC fa:16:3e:ef:80:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.767 238945 INFO nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Using config drive#033[00m
Jan 27 09:07:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 195 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.2 MiB/s wr, 187 op/s
Jan 27 09:07:30 np0005597378 nova_compute[238941]: 2026-01-27 14:07:30.784 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.091 238945 DEBUG nova.compute.manager [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-changed-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.091 238945 DEBUG nova.compute.manager [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Refreshing instance network info cache due to event network-changed-f52f3fb0-e55e-48d6-b983-7e87ed6296d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.092 238945 DEBUG oslo_concurrency.lockutils [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.092 238945 DEBUG oslo_concurrency.lockutils [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.092 238945 DEBUG nova.network.neutron [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Refreshing network info cache for port f52f3fb0-e55e-48d6-b983-7e87ed6296d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:07:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:07:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.295 238945 INFO nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Creating config drive at /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/disk.config#033[00m
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.304 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp98kvuup2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.450 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp98kvuup2" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.473 238945 DEBUG nova.storage.rbd_utils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 6eca106c-c3a5-4932-93f4-8208e54431e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.477 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/disk.config 6eca106c-c3a5-4932-93f4-8208e54431e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:31Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 09:07:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:31Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.727 238945 DEBUG oslo_concurrency.processutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/disk.config 6eca106c-c3a5-4932-93f4-8208e54431e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.728 238945 INFO nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Deleting local config drive /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0/disk.config because it was imported into RBD.#033[00m
Jan 27 09:07:31 np0005597378 kernel: tap56cf48cb-26: entered promiscuous mode
Jan 27 09:07:31 np0005597378 NetworkManager[48904]: <info>  [1769522851.7804] manager: (tap56cf48cb-26): new Tun device (/org/freedesktop/NetworkManager/Devices/427)
Jan 27 09:07:31 np0005597378 systemd-udevd[333381]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:07:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:31Z|01027|binding|INFO|Claiming lport 56cf48cb-2667-496b-8157-5edbbc1a6091 for this chassis.
Jan 27 09:07:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:31Z|01028|binding|INFO|56cf48cb-2667-496b-8157-5edbbc1a6091: Claiming fa:16:3e:ef:80:80 10.100.0.5
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.784 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.789 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:80:80 10.100.0.5'], port_security=['fa:16:3e:ef:80:80 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6eca106c-c3a5-4932-93f4-8208e54431e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=56cf48cb-2667-496b-8157-5edbbc1a6091) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.790 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 56cf48cb-2667-496b-8157-5edbbc1a6091 in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b bound to our chassis#033[00m
Jan 27 09:07:31 np0005597378 NetworkManager[48904]: <info>  [1769522851.7921] device (tap56cf48cb-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:07:31 np0005597378 NetworkManager[48904]: <info>  [1769522851.7931] device (tap56cf48cb-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.794 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b#033[00m
Jan 27 09:07:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:31Z|01029|binding|INFO|Setting lport 56cf48cb-2667-496b-8157-5edbbc1a6091 ovn-installed in OVS
Jan 27 09:07:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:31Z|01030|binding|INFO|Setting lport 56cf48cb-2667-496b-8157-5edbbc1a6091 up in Southbound
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.802 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.806 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.813 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1c21f1-b772-4f66-97fd-2111ed07aae2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:31 np0005597378 systemd-machined[207425]: New machine qemu-134-instance-0000006c.
Jan 27 09:07:31 np0005597378 systemd[1]: Started Virtual Machine qemu-134-instance-0000006c.
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.846 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[40801542-8de5-4839-9e16-dd5a29a5a5ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.853 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e92b30e1-4a1a-4e43-8d10-048b6e85938b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.891 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[362ff557-65f0-4b31-810d-f7a6ef17cb89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.916 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[34d5ff51-4348-4d36-9ffe-5331ea1f1f76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547362, 'reachable_time': 37253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333553, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.931 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7cfbe16a-dc39-4653-8b99-b4f68f00f774]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d5d79a0-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547372, 'tstamp': 547372}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333555, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5d5d79a0-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547375, 'tstamp': 547375}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333555, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.932 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:07:31 np0005597378 nova_compute[238941]: 2026-01-27 14:07:31.935 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.935 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d5d79a0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.935 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.936 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d5d79a0-30, col_values=(('external_ids', {'iface-id': '174db04a-6000-4d42-9793-445f0033fd57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:07:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:31.936 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.324 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522852.32384, 6eca106c-c3a5-4932-93f4-8208e54431e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.324 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] VM Started (Lifecycle Event)
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.346 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.350 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522852.323947, 6eca106c-c3a5-4932-93f4-8208e54431e0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.350 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] VM Paused (Lifecycle Event)
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.370 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.373 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.395 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.474 238945 DEBUG nova.compute.manager [req-2644d724-923c-4925-867d-69e4c53acd56 req-ac4a86a3-1bee-44cb-b439-88e09eb7651d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.475 238945 DEBUG oslo_concurrency.lockutils [req-2644d724-923c-4925-867d-69e4c53acd56 req-ac4a86a3-1bee-44cb-b439-88e09eb7651d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.475 238945 DEBUG oslo_concurrency.lockutils [req-2644d724-923c-4925-867d-69e4c53acd56 req-ac4a86a3-1bee-44cb-b439-88e09eb7651d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.476 238945 DEBUG oslo_concurrency.lockutils [req-2644d724-923c-4925-867d-69e4c53acd56 req-ac4a86a3-1bee-44cb-b439-88e09eb7651d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.476 238945 DEBUG nova.compute.manager [req-2644d724-923c-4925-867d-69e4c53acd56 req-ac4a86a3-1bee-44cb-b439-88e09eb7651d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Processing event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.477 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.479 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522852.479783, 6eca106c-c3a5-4932-93f4-8208e54431e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.480 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] VM Resumed (Lifecycle Event)
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.481 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.487 238945 INFO nova.virt.libvirt.driver [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Instance spawned successfully.
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.487 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.517 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.521 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.522 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.522 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.523 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.523 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.524 238945 DEBUG nova.virt.libvirt.driver [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.527 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.572 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.623 238945 INFO nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Took 8.29 seconds to spawn the instance on the hypervisor.
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.623 238945 DEBUG nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.692 238945 INFO nova.compute.manager [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Took 9.46 seconds to build instance.
Jan 27 09:07:32 np0005597378 nova_compute[238941]: 2026-01-27 14:07:32.727 238945 DEBUG oslo_concurrency.lockutils [None req-8724599e-2c7c-4a40-8f02-1a6bf4d5929b 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:07:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 195 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.9 MiB/s wr, 137 op/s
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.078 238945 DEBUG nova.network.neutron [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updated VIF entry in instance network info cache for port f52f3fb0-e55e-48d6-b983-7e87ed6296d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.079 238945 DEBUG nova.network.neutron [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updating instance_info_cache with network_info: [{"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.103 238945 DEBUG oslo_concurrency.lockutils [req-67b3fe33-9236-4286-bab5-6943b411b201 req-3c5ad3e6-8b84-4476-af35-457d9df530a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.486 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.486 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.487 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.487 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.487 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.488 238945 INFO nova.compute.manager [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Terminating instance
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.489 238945 DEBUG nova.compute.manager [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 09:07:33 np0005597378 kernel: tap56cf48cb-26 (unregistering): left promiscuous mode
Jan 27 09:07:33 np0005597378 NetworkManager[48904]: <info>  [1769522853.5311] device (tap56cf48cb-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:07:33 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:33Z|01031|binding|INFO|Releasing lport 56cf48cb-2667-496b-8157-5edbbc1a6091 from this chassis (sb_readonly=0)
Jan 27 09:07:33 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:33Z|01032|binding|INFO|Setting lport 56cf48cb-2667-496b-8157-5edbbc1a6091 down in Southbound
Jan 27 09:07:33 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:33Z|01033|binding|INFO|Removing iface tap56cf48cb-26 ovn-installed in OVS
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.549 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.554 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:80:80 10.100.0.5'], port_security=['fa:16:3e:ef:80:80 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6eca106c-c3a5-4932-93f4-8208e54431e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=56cf48cb-2667-496b-8157-5edbbc1a6091) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.555 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 56cf48cb-2667-496b-8157-5edbbc1a6091 in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b unbound from our chassis
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.557 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.575 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04d1ec8a-8bfb-4743-bb31-19e92846863a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:07:33 np0005597378 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Jan 27 09:07:33 np0005597378 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006c.scope: Consumed 1.461s CPU time.
Jan 27 09:07:33 np0005597378 systemd-machined[207425]: Machine qemu-134-instance-0000006c terminated.
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.605 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[300612fd-5b88-4218-98d9-2b7ac22312c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.608 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fefc108e-a0f4-4689-b160-09caa94b35a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.632 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6d092490-e3d9-47cb-9fb1-c667b392399c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.651 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f17a62-2fea-49d4-bd6e-0557f7aff005]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547362, 'reachable_time': 37253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333609, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.671 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d7340c-34ba-4d83-804b-1b4079cac513]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d5d79a0-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547372, 'tstamp': 547372}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333610, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5d5d79a0-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547375, 'tstamp': 547375}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333610, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.673 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.678 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d5d79a0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.678 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.679 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d5d79a0-30, col_values=(('external_ids', {'iface-id': '174db04a-6000-4d42-9793-445f0033fd57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:33.679 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.720 238945 INFO nova.virt.libvirt.driver [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Instance destroyed successfully.#033[00m
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.721 238945 DEBUG nova.objects.instance [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'resources' on Instance uuid 6eca106c-c3a5-4932-93f4-8208e54431e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.735 238945 DEBUG nova.virt.libvirt.vif [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:07:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-205053496',display_name='tempest-ServersNegativeTestJSON-server-205053496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-205053496',id=108,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:07:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-opmrxpla',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:07:32Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=6eca106c-c3a5-4932-93f4-8208e54431e0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.735 238945 DEBUG nova.network.os_vif_util [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "56cf48cb-2667-496b-8157-5edbbc1a6091", "address": "fa:16:3e:ef:80:80", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56cf48cb-26", "ovs_interfaceid": "56cf48cb-2667-496b-8157-5edbbc1a6091", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.736 238945 DEBUG nova.network.os_vif_util [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.737 238945 DEBUG os_vif [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.740 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.740 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56cf48cb-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.742 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:07:33 np0005597378 nova_compute[238941]: 2026-01-27 14:07:33.747 238945 INFO os_vif [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:80:80,bridge_name='br-int',has_traffic_filtering=True,id=56cf48cb-2667-496b-8157-5edbbc1a6091,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56cf48cb-26')#033[00m
Jan 27 09:07:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.563 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.564 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.565 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.565 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.565 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] No waiting events found dispatching network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.566 238945 WARNING nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received unexpected event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.566 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-vif-unplugged-56cf48cb-2667-496b-8157-5edbbc1a6091 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.566 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.566 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.567 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.567 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] No waiting events found dispatching network-vif-unplugged-56cf48cb-2667-496b-8157-5edbbc1a6091 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.567 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-vif-unplugged-56cf48cb-2667-496b-8157-5edbbc1a6091 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.567 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.568 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.568 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.568 238945 DEBUG oslo_concurrency.lockutils [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.569 238945 DEBUG nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] No waiting events found dispatching network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.569 238945 WARNING nova.compute.manager [req-64cc4c49-49eb-44bf-a488-95a866d3e773 req-efd75e53-013e-4440-9e31-ad3322496c20 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received unexpected event network-vif-plugged-56cf48cb-2667-496b-8157-5edbbc1a6091 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:07:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 211 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.914 238945 INFO nova.virt.libvirt.driver [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Deleting instance files /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0_del#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.915 238945 INFO nova.virt.libvirt.driver [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Deletion of /var/lib/nova/instances/6eca106c-c3a5-4932-93f4-8208e54431e0_del complete#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.960 238945 INFO nova.compute.manager [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Took 1.47 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.961 238945 DEBUG oslo.service.loopingcall [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.961 238945 DEBUG nova.compute.manager [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:07:34 np0005597378 nova_compute[238941]: 2026-01-27 14:07:34.962 238945 DEBUG nova.network.neutron [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:07:36 np0005597378 nova_compute[238941]: 2026-01-27 14:07:36.362 238945 DEBUG nova.network.neutron [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:07:36 np0005597378 nova_compute[238941]: 2026-01-27 14:07:36.382 238945 INFO nova.compute.manager [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Took 1.42 seconds to deallocate network for instance.#033[00m
Jan 27 09:07:36 np0005597378 nova_compute[238941]: 2026-01-27 14:07:36.424 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:36 np0005597378 nova_compute[238941]: 2026-01-27 14:07:36.425 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:36 np0005597378 nova_compute[238941]: 2026-01-27 14:07:36.435 238945 DEBUG nova.compute.manager [req-4b79ec55-4d1b-480f-a476-be10a3cb6a28 req-cb728c6e-8d37-4250-aaa5-087a483505d8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Received event network-vif-deleted-56cf48cb-2667-496b-8157-5edbbc1a6091 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:36 np0005597378 nova_compute[238941]: 2026-01-27 14:07:36.516 238945 DEBUG oslo_concurrency.processutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 201 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 205 op/s
Jan 27 09:07:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:07:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195373556' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:07:37 np0005597378 nova_compute[238941]: 2026-01-27 14:07:37.110 238945 DEBUG oslo_concurrency.processutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:37 np0005597378 nova_compute[238941]: 2026-01-27 14:07:37.117 238945 DEBUG nova.compute.provider_tree [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:07:37 np0005597378 nova_compute[238941]: 2026-01-27 14:07:37.147 238945 DEBUG nova.scheduler.client.report [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:07:37 np0005597378 nova_compute[238941]: 2026-01-27 14:07:37.171 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:37 np0005597378 nova_compute[238941]: 2026-01-27 14:07:37.203 238945 INFO nova.scheduler.client.report [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Deleted allocations for instance 6eca106c-c3a5-4932-93f4-8208e54431e0#033[00m
Jan 27 09:07:37 np0005597378 nova_compute[238941]: 2026-01-27 14:07:37.258 238945 DEBUG oslo_concurrency.lockutils [None req-b7289864-a349-4b0a-961c-92d0ebf8fc36 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "6eca106c-c3a5-4932-93f4-8208e54431e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:37 np0005597378 nova_compute[238941]: 2026-01-27 14:07:37.358 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:38 np0005597378 nova_compute[238941]: 2026-01-27 14:07:38.742 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 186 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.0 MiB/s wr, 256 op/s
Jan 27 09:07:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:40Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:be:34 10.100.0.6
Jan 27 09:07:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:40Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:be:34 10.100.0.6
Jan 27 09:07:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 189 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 210 op/s
Jan 27 09:07:42 np0005597378 nova_compute[238941]: 2026-01-27 14:07:42.360 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:42 np0005597378 podman[333663]: 2026-01-27 14:07:42.725294163 +0000 UTC m=+0.059729307 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 27 09:07:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 189 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 181 op/s
Jan 27 09:07:43 np0005597378 podman[333684]: 2026-01-27 14:07:43.742198871 +0000 UTC m=+0.082251983 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:07:43 np0005597378 nova_compute[238941]: 2026-01-27 14:07:43.744 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 199 op/s
Jan 27 09:07:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:46.313 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:46 np0005597378 nova_compute[238941]: 2026-01-27 14:07:46.379 238945 INFO nova.compute.manager [None req-96196109-eabc-4b55-9315-acdfe286084d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Get console output#033[00m
Jan 27 09:07:46 np0005597378 nova_compute[238941]: 2026-01-27 14:07:46.384 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:07:46 np0005597378 nova_compute[238941]: 2026-01-27 14:07:46.669 238945 INFO nova.compute.manager [None req-e09f6fa5-d363-4576-837e-5370dde607db a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Pausing#033[00m
Jan 27 09:07:46 np0005597378 nova_compute[238941]: 2026-01-27 14:07:46.670 238945 DEBUG nova.objects.instance [None req-e09f6fa5-d363-4576-837e-5370dde607db a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'flavor' on Instance uuid 4259b642-9030-422e-b18b-71be996845f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:07:46 np0005597378 nova_compute[238941]: 2026-01-27 14:07:46.713 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522866.7134645, 4259b642-9030-422e-b18b-71be996845f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:07:46 np0005597378 nova_compute[238941]: 2026-01-27 14:07:46.714 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:07:46 np0005597378 nova_compute[238941]: 2026-01-27 14:07:46.716 238945 DEBUG nova.compute.manager [None req-e09f6fa5-d363-4576-837e-5370dde607db a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:46 np0005597378 nova_compute[238941]: 2026-01-27 14:07:46.750 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:46 np0005597378 nova_compute[238941]: 2026-01-27 14:07:46.754 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:07:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 147 op/s
Jan 27 09:07:47 np0005597378 nova_compute[238941]: 2026-01-27 14:07:47.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:07:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:07:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:07:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:07:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:07:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:07:48 np0005597378 nova_compute[238941]: 2026-01-27 14:07:48.718 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522853.717136, 6eca106c-c3a5-4932-93f4-8208e54431e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:07:48 np0005597378 nova_compute[238941]: 2026-01-27 14:07:48.719 238945 INFO nova.compute.manager [-] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:07:48 np0005597378 nova_compute[238941]: 2026-01-27 14:07:48.738 238945 DEBUG nova.compute.manager [None req-1f6534ae-ef01-4e71-9e1a-870876ef0006 - - - - - -] [instance: 6eca106c-c3a5-4932-93f4-8208e54431e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:48 np0005597378 nova_compute[238941]: 2026-01-27 14:07:48.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 814 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Jan 27 09:07:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:50 np0005597378 nova_compute[238941]: 2026-01-27 14:07:50.038 238945 INFO nova.compute.manager [None req-1792ab75-7756-4840-8b1e-d40a149b5a9b a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Get console output#033[00m
Jan 27 09:07:50 np0005597378 nova_compute[238941]: 2026-01-27 14:07:50.043 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:07:50 np0005597378 nova_compute[238941]: 2026-01-27 14:07:50.201 238945 INFO nova.compute.manager [None req-80c79fc0-e7c3-4743-afcf-a42c1b092707 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Unpausing#033[00m
Jan 27 09:07:50 np0005597378 nova_compute[238941]: 2026-01-27 14:07:50.202 238945 DEBUG nova.objects.instance [None req-80c79fc0-e7c3-4743-afcf-a42c1b092707 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'flavor' on Instance uuid 4259b642-9030-422e-b18b-71be996845f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:07:50 np0005597378 nova_compute[238941]: 2026-01-27 14:07:50.227 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522870.2266195, 4259b642-9030-422e-b18b-71be996845f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:07:50 np0005597378 nova_compute[238941]: 2026-01-27 14:07:50.228 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:07:50 np0005597378 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 09:07:50 np0005597378 nova_compute[238941]: 2026-01-27 14:07:50.232 238945 DEBUG nova.virt.libvirt.guest [None req-80c79fc0-e7c3-4743-afcf-a42c1b092707 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 27 09:07:50 np0005597378 nova_compute[238941]: 2026-01-27 14:07:50.232 238945 DEBUG nova.compute.manager [None req-80c79fc0-e7c3-4743-afcf-a42c1b092707 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:50 np0005597378 nova_compute[238941]: 2026-01-27 14:07:50.259 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:07:50 np0005597378 nova_compute[238941]: 2026-01-27 14:07:50.264 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:07:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 444 KiB/s wr, 49 op/s
Jan 27 09:07:52 np0005597378 nova_compute[238941]: 2026-01-27 14:07:52.365 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:52 np0005597378 nova_compute[238941]: 2026-01-27 14:07:52.389 238945 INFO nova.compute.manager [None req-3e1037f3-3727-4bba-ae36-a954f6bdf977 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Get console output#033[00m
Jan 27 09:07:52 np0005597378 nova_compute[238941]: 2026-01-27 14:07:52.394 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:07:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 80 KiB/s wr, 18 op/s
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.434 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.434 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.435 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.435 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.435 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.436 238945 INFO nova.compute.manager [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Terminating instance#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.437 238945 DEBUG nova.compute.manager [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.503 238945 DEBUG nova.compute.manager [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-changed-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.503 238945 DEBUG nova.compute.manager [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Refreshing instance network info cache due to event network-changed-f52f3fb0-e55e-48d6-b983-7e87ed6296d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.504 238945 DEBUG oslo_concurrency.lockutils [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.504 238945 DEBUG oslo_concurrency.lockutils [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.504 238945 DEBUG nova.network.neutron [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Refreshing network info cache for port f52f3fb0-e55e-48d6-b983-7e87ed6296d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:07:53 np0005597378 kernel: tapf52f3fb0-e5 (unregistering): left promiscuous mode
Jan 27 09:07:53 np0005597378 NetworkManager[48904]: <info>  [1769522873.5175] device (tapf52f3fb0-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:07:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:53Z|01034|binding|INFO|Releasing lport f52f3fb0-e55e-48d6-b983-7e87ed6296d2 from this chassis (sb_readonly=0)
Jan 27 09:07:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:53Z|01035|binding|INFO|Setting lport f52f3fb0-e55e-48d6-b983-7e87ed6296d2 down in Southbound
Jan 27 09:07:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:07:53Z|01036|binding|INFO|Removing iface tapf52f3fb0-e5 ovn-installed in OVS
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.535 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:be:34 10.100.0.6'], port_security=['fa:16:3e:9d:be:34 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4259b642-9030-422e-b18b-71be996845f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6dcab28e-8e80-4909-a1ac-f9e4562ec577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=52e54f30-cc96-4c77-8cb6-812d376ca09a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=f52f3fb0-e55e-48d6-b983-7e87ed6296d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:07:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.537 154802 INFO neutron.agent.ovn.metadata.agent [-] Port f52f3fb0-e55e-48d6-b983-7e87ed6296d2 in datapath a225edd2-04b0-4782-bb92-d2dbbfa8bc5e unbound from our chassis#033[00m
Jan 27 09:07:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.538 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:07:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.540 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d6233feb-e196-412e-933a-40578f084d73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.541 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e namespace which is not needed anymore#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:53 np0005597378 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Jan 27 09:07:53 np0005597378 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Consumed 15.337s CPU time.
Jan 27 09:07:53 np0005597378 systemd-machined[207425]: Machine qemu-133-instance-0000006b terminated.
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.666 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.676 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.686 238945 INFO nova.virt.libvirt.driver [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] Instance destroyed successfully.#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.686 238945 DEBUG nova.objects.instance [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid 4259b642-9030-422e-b18b-71be996845f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:07:53 np0005597378 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [NOTICE]   (332828) : haproxy version is 2.8.14-c23fe91
Jan 27 09:07:53 np0005597378 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [NOTICE]   (332828) : path to executable is /usr/sbin/haproxy
Jan 27 09:07:53 np0005597378 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [WARNING]  (332828) : Exiting Master process...
Jan 27 09:07:53 np0005597378 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [WARNING]  (332828) : Exiting Master process...
Jan 27 09:07:53 np0005597378 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [ALERT]    (332828) : Current worker (332853) exited with code 143 (Terminated)
Jan 27 09:07:53 np0005597378 neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e[332798]: [WARNING]  (332828) : All workers exited. Exiting... (0)
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.701 238945 DEBUG nova.virt.libvirt.vif [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1477433599',display_name='tempest-TestNetworkAdvancedServerOps-server-1477433599',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1477433599',id=107,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOTlA/zzf8k5RE7YtGF0JstMbK3jsPM4B/qPl2KHxnpSuYQBvrk4VnlFqVZKbSWBalvc4W/8oi1h1woqWdU+1B67nCBWNnY6LMtFdr08A3euNBaTSW62NVvw7+zpwmvIZg==',key_name='tempest-TestNetworkAdvancedServerOps-587163644',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:07:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-hwy9nsc5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:07:50Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=4259b642-9030-422e-b18b-71be996845f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.701 238945 DEBUG nova.network.os_vif_util [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.702 238945 DEBUG nova.network.os_vif_util [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.703 238945 DEBUG os_vif [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:07:53 np0005597378 systemd[1]: libpod-89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679.scope: Deactivated successfully.
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.705 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.705 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf52f3fb0-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.707 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.708 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.710 238945 INFO os_vif [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:be:34,bridge_name='br-int',has_traffic_filtering=True,id=f52f3fb0-e55e-48d6-b983-7e87ed6296d2,network=Network(a225edd2-04b0-4782-bb92-d2dbbfa8bc5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf52f3fb0-e5')#033[00m
Jan 27 09:07:53 np0005597378 podman[333736]: 2026-01-27 14:07:53.713541774 +0000 UTC m=+0.075250724 container died 89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 27 09:07:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679-userdata-shm.mount: Deactivated successfully.
Jan 27 09:07:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2495157624be490c860f0c22e86290ed614242e1ab449ab46285c8548caa66dc-merged.mount: Deactivated successfully.
Jan 27 09:07:53 np0005597378 podman[333736]: 2026-01-27 14:07:53.81645492 +0000 UTC m=+0.178163850 container cleanup 89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:07:53 np0005597378 systemd[1]: libpod-conmon-89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679.scope: Deactivated successfully.
Jan 27 09:07:53 np0005597378 podman[333792]: 2026-01-27 14:07:53.920253891 +0000 UTC m=+0.076620100 container remove 89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:07:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.928 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a0269b5d-b583-47e9-9345-85882af36f9b]: (4, ('Tue Jan 27 02:07:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e (89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679)\n89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679\nTue Jan 27 02:07:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e (89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679)\n89a60121a05ab3009146370dc628be12d7ea237399c2e121ea3cfd1794518679\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[984afa46-962c-430c-8464-11ae0284731c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.931 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa225edd2-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:53 np0005597378 kernel: tapa225edd2-00: left promiscuous mode
Jan 27 09:07:53 np0005597378 nova_compute[238941]: 2026-01-27 14:07:53.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.952 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[86da65ea-5a8e-4314-a1d6-a5da902ca311]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.972 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9beebf-06fa-4060-8e4b-b8f69eaedc97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.974 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08685e10-4173-49ed-a1f9-14732a2c0682]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:53.998 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af34b7e6-1ab9-46e7-87d6-71db4fdfb957]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548363, 'reachable_time': 16894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333808, 'error': None, 'target': 'ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:54.001 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a225edd2-04b0-4782-bb92-d2dbbfa8bc5e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:07:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:07:54.001 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[21669a44-3129-4a72-b6be-155bba58cc5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:07:54 np0005597378 systemd[1]: run-netns-ovnmeta\x2da225edd2\x2d04b0\x2d4782\x2dbb92\x2dd2dbbfa8bc5e.mount: Deactivated successfully.
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.264 238945 INFO nova.virt.libvirt.driver [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Deleting instance files /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4_del#033[00m
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.265 238945 INFO nova.virt.libvirt.driver [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Deletion of /var/lib/nova/instances/4259b642-9030-422e-b18b-71be996845f4_del complete#033[00m
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.310 238945 INFO nova.compute.manager [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.311 238945 DEBUG oslo.service.loopingcall [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.311 238945 DEBUG nova.compute.manager [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.311 238945 DEBUG nova.network.neutron [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:07:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.453 238945 DEBUG nova.compute.manager [req-64e50160-ee17-4609-abbc-5acadb541d36 req-eeb0face-c458-4d00-8631-47b6ad8cac54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-vif-unplugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.454 238945 DEBUG oslo_concurrency.lockutils [req-64e50160-ee17-4609-abbc-5acadb541d36 req-eeb0face-c458-4d00-8631-47b6ad8cac54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.454 238945 DEBUG oslo_concurrency.lockutils [req-64e50160-ee17-4609-abbc-5acadb541d36 req-eeb0face-c458-4d00-8631-47b6ad8cac54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.454 238945 DEBUG oslo_concurrency.lockutils [req-64e50160-ee17-4609-abbc-5acadb541d36 req-eeb0face-c458-4d00-8631-47b6ad8cac54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.454 238945 DEBUG nova.compute.manager [req-64e50160-ee17-4609-abbc-5acadb541d36 req-eeb0face-c458-4d00-8631-47b6ad8cac54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] No waiting events found dispatching network-vif-unplugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.455 238945 DEBUG nova.compute.manager [req-64e50160-ee17-4609-abbc-5acadb541d36 req-eeb0face-c458-4d00-8631-47b6ad8cac54 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-vif-unplugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.742 238945 DEBUG nova.network.neutron [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updated VIF entry in instance network info cache for port f52f3fb0-e55e-48d6-b983-7e87ed6296d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.743 238945 DEBUG nova.network.neutron [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updating instance_info_cache with network_info: [{"id": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "address": "fa:16:3e:9d:be:34", "network": {"id": "a225edd2-04b0-4782-bb92-d2dbbfa8bc5e", "bridge": "br-int", "label": "tempest-network-smoke--953373426", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf52f3fb0-e5", "ovs_interfaceid": "f52f3fb0-e55e-48d6-b983-7e87ed6296d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:07:54 np0005597378 nova_compute[238941]: 2026-01-27 14:07:54.779 238945 DEBUG oslo_concurrency.lockutils [req-0a7f88cf-d336-46db-85cc-6e2007a17fc4 req-2e17cbbb-e757-4972-8fe7-18666f1f294c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-4259b642-9030-422e-b18b-71be996845f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:07:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 81 KiB/s wr, 18 op/s
Jan 27 09:07:55 np0005597378 nova_compute[238941]: 2026-01-27 14:07:55.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:07:55 np0005597378 nova_compute[238941]: 2026-01-27 14:07:55.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:07:55 np0005597378 nova_compute[238941]: 2026-01-27 14:07:55.391 238945 DEBUG nova.network.neutron [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:07:55 np0005597378 nova_compute[238941]: 2026-01-27 14:07:55.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:55 np0005597378 nova_compute[238941]: 2026-01-27 14:07:55.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:55 np0005597378 nova_compute[238941]: 2026-01-27 14:07:55.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:55 np0005597378 nova_compute[238941]: 2026-01-27 14:07:55.411 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:07:55 np0005597378 nova_compute[238941]: 2026-01-27 14:07:55.412 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:55 np0005597378 nova_compute[238941]: 2026-01-27 14:07:55.446 238945 INFO nova.compute.manager [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] Took 1.14 seconds to deallocate network for instance.#033[00m
Jan 27 09:07:55 np0005597378 nova_compute[238941]: 2026-01-27 14:07:55.504 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:55 np0005597378 nova_compute[238941]: 2026-01-27 14:07:55.504 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:55 np0005597378 nova_compute[238941]: 2026-01-27 14:07:55.577 238945 DEBUG oslo_concurrency.processutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:07:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2285961464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.030 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.100 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.101 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:07:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:07:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3549020239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.252 238945 DEBUG oslo_concurrency.processutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.260 238945 DEBUG nova.compute.provider_tree [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.276 238945 DEBUG nova.scheduler.client.report [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.299 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.316 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.318 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3637MB free_disk=59.89653507061303GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.318 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.318 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.333 238945 INFO nova.scheduler.client.report [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Deleted allocations for instance 4259b642-9030-422e-b18b-71be996845f4#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.427 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 7514a588-c48b-45af-a889-ea57cc9f1730 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.428 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.428 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.436 238945 DEBUG nova.compute.manager [req-fc12602b-30db-4a39-aae3-5e86c6b7256d req-c88ce134-0d89-4ef1-8381-c329faea7626 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-vif-deleted-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.440 238945 DEBUG oslo_concurrency.lockutils [None req-138d996c-cb37-452f-94af-a495ea97af6c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.472 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.565 238945 DEBUG nova.compute.manager [req-7e72fce3-bee1-43ae-9e0d-095fbada493f req-89818c35-27aa-4712-bb75-6b1a5b162e23 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.566 238945 DEBUG oslo_concurrency.lockutils [req-7e72fce3-bee1-43ae-9e0d-095fbada493f req-89818c35-27aa-4712-bb75-6b1a5b162e23 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "4259b642-9030-422e-b18b-71be996845f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.566 238945 DEBUG oslo_concurrency.lockutils [req-7e72fce3-bee1-43ae-9e0d-095fbada493f req-89818c35-27aa-4712-bb75-6b1a5b162e23 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.566 238945 DEBUG oslo_concurrency.lockutils [req-7e72fce3-bee1-43ae-9e0d-095fbada493f req-89818c35-27aa-4712-bb75-6b1a5b162e23 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "4259b642-9030-422e-b18b-71be996845f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.566 238945 DEBUG nova.compute.manager [req-7e72fce3-bee1-43ae-9e0d-095fbada493f req-89818c35-27aa-4712-bb75-6b1a5b162e23 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] No waiting events found dispatching network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:07:56 np0005597378 nova_compute[238941]: 2026-01-27 14:07:56.567 238945 WARNING nova.compute.manager [req-7e72fce3-bee1-43ae-9e0d-095fbada493f req-89818c35-27aa-4712-bb75-6b1a5b162e23 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 4259b642-9030-422e-b18b-71be996845f4] Received unexpected event network-vif-plugged-f52f3fb0-e55e-48d6-b983-7e87ed6296d2 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:07:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 179 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 14 KiB/s wr, 18 op/s
Jan 27 09:07:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:07:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584755235' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:07:57 np0005597378 nova_compute[238941]: 2026-01-27 14:07:57.083 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:07:57 np0005597378 nova_compute[238941]: 2026-01-27 14:07:57.090 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:07:57 np0005597378 nova_compute[238941]: 2026-01-27 14:07:57.109 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:07:57 np0005597378 nova_compute[238941]: 2026-01-27 14:07:57.140 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:07:57 np0005597378 nova_compute[238941]: 2026-01-27 14:07:57.140 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:07:57 np0005597378 nova_compute[238941]: 2026-01-27 14:07:57.366 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:58 np0005597378 nova_compute[238941]: 2026-01-27 14:07:58.708 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:07:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 121 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 27 09:07:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:07:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:07:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2951631030' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:07:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:07:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2951631030' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:08:00 np0005597378 nova_compute[238941]: 2026-01-27 14:08:00.141 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:08:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Jan 27 09:08:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:01Z|01037|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 09:08:01 np0005597378 nova_compute[238941]: 2026-01-27 14:08:01.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:01Z|01038|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 09:08:01 np0005597378 nova_compute[238941]: 2026-01-27 14:08:01.321 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:02 np0005597378 nova_compute[238941]: 2026-01-27 14:08:02.368 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:02 np0005597378 nova_compute[238941]: 2026-01-27 14:08:02.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:08:02 np0005597378 nova_compute[238941]: 2026-01-27 14:08:02.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:08:02 np0005597378 nova_compute[238941]: 2026-01-27 14:08:02.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:08:02 np0005597378 nova_compute[238941]: 2026-01-27 14:08:02.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:08:02 np0005597378 nova_compute[238941]: 2026-01-27 14:08:02.588 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:08:02 np0005597378 nova_compute[238941]: 2026-01-27 14:08:02.589 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:08:02 np0005597378 nova_compute[238941]: 2026-01-27 14:08:02.589 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:08:02 np0005597378 nova_compute[238941]: 2026-01-27 14:08:02.589 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:08:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 27 09:08:03 np0005597378 nova_compute[238941]: 2026-01-27 14:08:03.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:03 np0005597378 nova_compute[238941]: 2026-01-27 14:08:03.849 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:08:03 np0005597378 nova_compute[238941]: 2026-01-27 14:08:03.863 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:08:03 np0005597378 nova_compute[238941]: 2026-01-27 14:08:03.864 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:08:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Jan 27 09:08:05 np0005597378 nova_compute[238941]: 2026-01-27 14:08:05.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:08:05 np0005597378 nova_compute[238941]: 2026-01-27 14:08:05.963 238945 INFO nova.compute.manager [None req-22875a81-c4cf-43df-9ff6-e1c36c0bb9e8 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Pausing#033[00m
Jan 27 09:08:05 np0005597378 nova_compute[238941]: 2026-01-27 14:08:05.964 238945 DEBUG nova.objects.instance [None req-22875a81-c4cf-43df-9ff6-e1c36c0bb9e8 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'flavor' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:08:05 np0005597378 nova_compute[238941]: 2026-01-27 14:08:05.997 238945 DEBUG nova.compute.manager [None req-22875a81-c4cf-43df-9ff6-e1c36c0bb9e8 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:05 np0005597378 nova_compute[238941]: 2026-01-27 14:08:05.999 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522885.997804, 7514a588-c48b-45af-a889-ea57cc9f1730 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:08:05 np0005597378 nova_compute[238941]: 2026-01-27 14:08:05.999 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:08:06 np0005597378 nova_compute[238941]: 2026-01-27 14:08:06.049 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:06 np0005597378 nova_compute[238941]: 2026-01-27 14:08:06.053 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:08:06 np0005597378 nova_compute[238941]: 2026-01-27 14:08:06.086 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.334622) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522886334672, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1737, "num_deletes": 251, "total_data_size": 2786121, "memory_usage": 2831688, "flush_reason": "Manual Compaction"}
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522886600686, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2724592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38381, "largest_seqno": 40117, "table_properties": {"data_size": 2716603, "index_size": 4803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16617, "raw_average_key_size": 20, "raw_value_size": 2700644, "raw_average_value_size": 3269, "num_data_blocks": 214, "num_entries": 826, "num_filter_entries": 826, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522709, "oldest_key_time": 1769522709, "file_creation_time": 1769522886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 266147 microseconds, and 6811 cpu microseconds.
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.600761) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2724592 bytes OK
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.600791) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.653457) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.653529) EVENT_LOG_v1 {"time_micros": 1769522886653516, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.653571) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2778631, prev total WAL file size 2778631, number of live WAL files 2.
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.655110) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2660KB)], [86(8451KB)]
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522886655146, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11378502, "oldest_snapshot_seqno": -1}
Jan 27 09:08:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6493 keys, 9687486 bytes, temperature: kUnknown
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522886796774, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9687486, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9643170, "index_size": 26990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 165085, "raw_average_key_size": 25, "raw_value_size": 9526208, "raw_average_value_size": 1467, "num_data_blocks": 1084, "num_entries": 6493, "num_filter_entries": 6493, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769522886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.797037) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9687486 bytes
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.975499) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.3 rd, 68.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.3 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 7007, records dropped: 514 output_compression: NoCompression
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.975533) EVENT_LOG_v1 {"time_micros": 1769522886975520, "job": 50, "event": "compaction_finished", "compaction_time_micros": 141724, "compaction_time_cpu_micros": 23221, "output_level": 6, "num_output_files": 1, "total_output_size": 9687486, "num_input_records": 7007, "num_output_records": 6493, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522886976260, "job": 50, "event": "table_file_deletion", "file_number": 88}
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769522886978046, "job": 50, "event": "table_file_deletion", "file_number": 86}
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.655024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.978247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.978254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.978256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.978262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:08:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:08:06.978264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:08:07 np0005597378 nova_compute[238941]: 2026-01-27 14:08:07.370 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:07 np0005597378 nova_compute[238941]: 2026-01-27 14:08:07.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:08:07 np0005597378 nova_compute[238941]: 2026-01-27 14:08:07.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:08:08 np0005597378 nova_compute[238941]: 2026-01-27 14:08:08.338 238945 INFO nova.compute.manager [None req-e635b0e8-24ea-4953-a53e-a44bb1bbc230 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Unpausing#033[00m
Jan 27 09:08:08 np0005597378 nova_compute[238941]: 2026-01-27 14:08:08.340 238945 DEBUG nova.objects.instance [None req-e635b0e8-24ea-4953-a53e-a44bb1bbc230 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'flavor' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:08:08 np0005597378 nova_compute[238941]: 2026-01-27 14:08:08.368 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522888.3674927, 7514a588-c48b-45af-a889-ea57cc9f1730 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:08:08 np0005597378 nova_compute[238941]: 2026-01-27 14:08:08.369 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:08:08 np0005597378 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 09:08:08 np0005597378 nova_compute[238941]: 2026-01-27 14:08:08.375 238945 DEBUG nova.virt.libvirt.guest [None req-e635b0e8-24ea-4953-a53e-a44bb1bbc230 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 27 09:08:08 np0005597378 nova_compute[238941]: 2026-01-27 14:08:08.378 238945 DEBUG nova.compute.manager [None req-e635b0e8-24ea-4953-a53e-a44bb1bbc230 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:08 np0005597378 nova_compute[238941]: 2026-01-27 14:08:08.389 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:08 np0005597378 nova_compute[238941]: 2026-01-27 14:08:08.394 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:08:08 np0005597378 nova_compute[238941]: 2026-01-27 14:08:08.417 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Jan 27 09:08:08 np0005597378 nova_compute[238941]: 2026-01-27 14:08:08.684 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522873.6834962, 4259b642-9030-422e-b18b-71be996845f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:08:08 np0005597378 nova_compute[238941]: 2026-01-27 14:08:08.685 238945 INFO nova.compute.manager [-] [instance: 4259b642-9030-422e-b18b-71be996845f4] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:08:08 np0005597378 nova_compute[238941]: 2026-01-27 14:08:08.702 238945 DEBUG nova.compute.manager [None req-4bc2ae37-aa1d-4c23-8507-9ee5e808bb94 - - - - - -] [instance: 4259b642-9030-422e-b18b-71be996845f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:08 np0005597378 nova_compute[238941]: 2026-01-27 14:08:08.716 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 3.5 KiB/s wr, 10 op/s
Jan 27 09:08:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 27 09:08:11 np0005597378 nova_compute[238941]: 2026-01-27 14:08:11.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:08:12 np0005597378 nova_compute[238941]: 2026-01-27 14:08:12.371 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 27 09:08:13 np0005597378 nova_compute[238941]: 2026-01-27 14:08:13.717 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:13 np0005597378 podman[333878]: 2026-01-27 14:08:13.724301836 +0000 UTC m=+0.059227824 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:08:14 np0005597378 nova_compute[238941]: 2026-01-27 14:08:14.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:08:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 27 09:08:15 np0005597378 podman[333897]: 2026-01-27 14:08:15.198137488 +0000 UTC m=+0.537362906 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:08:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Jan 27 09:08:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:08:17
Jan 27 09:08:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:08:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:08:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'images', 'vms', '.mgr', 'backups']
Jan 27 09:08:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:08:17 np0005597378 nova_compute[238941]: 2026-01-27 14:08:17.374 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:17 np0005597378 nova_compute[238941]: 2026-01-27 14:08:17.531 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:17 np0005597378 nova_compute[238941]: 2026-01-27 14:08:17.531 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:17 np0005597378 nova_compute[238941]: 2026-01-27 14:08:17.546 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:08:17 np0005597378 nova_compute[238941]: 2026-01-27 14:08:17.615 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:17 np0005597378 nova_compute[238941]: 2026-01-27 14:08:17.616 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:17 np0005597378 nova_compute[238941]: 2026-01-27 14:08:17.623 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:08:17 np0005597378 nova_compute[238941]: 2026-01-27 14:08:17.623 238945 INFO nova.compute.claims [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:08:17 np0005597378 nova_compute[238941]: 2026-01-27 14:08:17.742 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:08:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:08:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:08:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:08:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:08:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:08:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:08:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:08:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:08:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:08:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:08:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:08:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:08:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:08:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:08:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:08:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:08:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:08:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/330996264' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.322 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.328 238945 DEBUG nova.compute.provider_tree [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.346 238945 DEBUG nova.scheduler.client.report [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.370 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.371 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.419 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.420 238945 DEBUG nova.network.neutron [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.440 238945 INFO nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.459 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.538 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.540 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.540 238945 INFO nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Creating image(s)#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.566 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.587 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.609 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.612 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.659 238945 DEBUG nova.policy [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a87606137cd4440ab2ffebe68b325a85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.704 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.705 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.705 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.706 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.731 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.736 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 2466272a-7218-432a-a223-43ade0ce6480_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:08:18 np0005597378 nova_compute[238941]: 2026-01-27 14:08:18.774 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:08:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:19 np0005597378 nova_compute[238941]: 2026-01-27 14:08:19.825 238945 DEBUG nova.network.neutron [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Successfully created port: 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:08:20 np0005597378 nova_compute[238941]: 2026-01-27 14:08:20.495 238945 DEBUG nova.network.neutron [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Successfully updated port: 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:08:20 np0005597378 nova_compute[238941]: 2026-01-27 14:08:20.511 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:08:20 np0005597378 nova_compute[238941]: 2026-01-27 14:08:20.511 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:08:20 np0005597378 nova_compute[238941]: 2026-01-27 14:08:20.512 238945 DEBUG nova.network.neutron [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:08:20 np0005597378 nova_compute[238941]: 2026-01-27 14:08:20.604 238945 DEBUG nova.compute.manager [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-changed-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:20 np0005597378 nova_compute[238941]: 2026-01-27 14:08:20.605 238945 DEBUG nova.compute.manager [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Refreshing instance network info cache due to event network-changed-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:08:20 np0005597378 nova_compute[238941]: 2026-01-27 14:08:20.605 238945 DEBUG oslo_concurrency.lockutils [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:08:20 np0005597378 nova_compute[238941]: 2026-01-27 14:08:20.646 238945 DEBUG nova.network.neutron [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:08:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 129 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 97 KiB/s wr, 1 op/s
Jan 27 09:08:20 np0005597378 nova_compute[238941]: 2026-01-27 14:08:20.852 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 2466272a-7218-432a-a223-43ade0ce6480_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:08:20 np0005597378 nova_compute[238941]: 2026-01-27 14:08:20.917 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] resizing rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:08:21 np0005597378 nova_compute[238941]: 2026-01-27 14:08:21.559 238945 DEBUG nova.network.neutron [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updating instance_info_cache with network_info: [{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:08:21 np0005597378 nova_compute[238941]: 2026-01-27 14:08:21.587 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:08:21 np0005597378 nova_compute[238941]: 2026-01-27 14:08:21.588 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance network_info: |[{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:08:21 np0005597378 nova_compute[238941]: 2026-01-27 14:08:21.588 238945 DEBUG oslo_concurrency.lockutils [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:08:21 np0005597378 nova_compute[238941]: 2026-01-27 14:08:21.589 238945 DEBUG nova.network.neutron [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Refreshing network info cache for port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.261 238945 DEBUG nova.objects.instance [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'migration_context' on Instance uuid 2466272a-7218-432a-a223-43ade0ce6480 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.273 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.274 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Ensure instance console log exists: /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.274 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.274 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.275 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.276 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Start _get_guest_xml network_info=[{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.280 238945 WARNING nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.284 238945 DEBUG nova.virt.libvirt.host [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.284 238945 DEBUG nova.virt.libvirt.host [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.287 238945 DEBUG nova.virt.libvirt.host [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.287 238945 DEBUG nova.virt.libvirt.host [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.287 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.288 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.288 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.288 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.289 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.289 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.289 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.289 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.290 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.290 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.290 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.290 238945 DEBUG nova.virt.hardware [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.293 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.376 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 129 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 97 KiB/s wr, 1 op/s
Jan 27 09:08:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:08:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3498171430' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.846 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.865 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:08:22 np0005597378 nova_compute[238941]: 2026-01-27 14:08:22.868 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.367 238945 DEBUG nova.network.neutron [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updated VIF entry in instance network info cache for port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.369 238945 DEBUG nova.network.neutron [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updating instance_info_cache with network_info: [{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:08:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:08:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/754932920' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.415 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.418 238945 DEBUG nova.virt.libvirt.vif [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:08:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-543476362',display_name='tempest-TestNetworkAdvancedServerOps-server-543476362',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-543476362',id=109,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/GNfo4rgH2jt9z1vILeWPgbvw2k851alu9Kp+NwI5lf80CNeN0I8Fy8fHycn/1SqZgv2Od2/qgDtUPrcIBt7klOfNWsUFqoF2kTS60AUSiiWxxXfFT80yb+FHTNgIRvA==',key_name='tempest-TestNetworkAdvancedServerOps-1613602353',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-0w9nwgs0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:08:18Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=2466272a-7218-432a-a223-43ade0ce6480,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.419 238945 DEBUG nova.network.os_vif_util [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.420 238945 DEBUG nova.network.os_vif_util [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.423 238945 DEBUG nova.objects.instance [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid 2466272a-7218-432a-a223-43ade0ce6480 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.428 238945 DEBUG oslo_concurrency.lockutils [req-cdf3d5c0-0ac3-45ab-a149-96a4a2177dc9 req-3b7aba3f-5acd-4a2e-8f4a-df7447300712 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.447 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  <uuid>2466272a-7218-432a-a223-43ade0ce6480</uuid>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  <name>instance-0000006d</name>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-543476362</nova:name>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:08:22</nova:creationTime>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:        <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:        <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:        <nova:port uuid="82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <entry name="serial">2466272a-7218-432a-a223-43ade0ce6480</entry>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <entry name="uuid">2466272a-7218-432a-a223-43ade0ce6480</entry>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/2466272a-7218-432a-a223-43ade0ce6480_disk">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/2466272a-7218-432a-a223-43ade0ce6480_disk.config">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:81:41:7b"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <target dev="tap82581e93-b7"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/console.log" append="off"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:08:23 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:08:23 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:08:23 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:08:23 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.450 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Preparing to wait for external event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.450 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.450 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.451 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.451 238945 DEBUG nova.virt.libvirt.vif [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:08:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-543476362',display_name='tempest-TestNetworkAdvancedServerOps-server-543476362',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-543476362',id=109,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/GNfo4rgH2jt9z1vILeWPgbvw2k851alu9Kp+NwI5lf80CNeN0I8Fy8fHycn/1SqZgv2Od2/qgDtUPrcIBt7klOfNWsUFqoF2kTS60AUSiiWxxXfFT80yb+FHTNgIRvA==',key_name='tempest-TestNetworkAdvancedServerOps-1613602353',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-0w9nwgs0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:08:18Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=2466272a-7218-432a-a223-43ade0ce6480,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.452 238945 DEBUG nova.network.os_vif_util [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.452 238945 DEBUG nova.network.os_vif_util [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.453 238945 DEBUG os_vif [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.454 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.454 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.457 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82581e93-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.457 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap82581e93-b7, col_values=(('external_ids', {'iface-id': '82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:41:7b', 'vm-uuid': '2466272a-7218-432a-a223-43ade0ce6480'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:08:23 np0005597378 NetworkManager[48904]: <info>  [1769522903.4602] manager: (tap82581e93-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.460 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.463 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.466 238945 INFO os_vif [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7')#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.722 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.723 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.723 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No VIF found with MAC fa:16:3e:81:41:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.723 238945 INFO nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Using config drive#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.965 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.998 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.998 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:23 np0005597378 nova_compute[238941]: 2026-01-27 14:08:23.998 238945 INFO nova.compute.manager [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Shelving#033[00m
Jan 27 09:08:24 np0005597378 nova_compute[238941]: 2026-01-27 14:08:24.019 238945 DEBUG nova.virt.libvirt.driver [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 09:08:24 np0005597378 nova_compute[238941]: 2026-01-27 14:08:24.378 238945 INFO nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Creating config drive at /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/disk.config#033[00m
Jan 27 09:08:24 np0005597378 nova_compute[238941]: 2026-01-27 14:08:24.382 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpouwg6yi4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:08:24 np0005597378 nova_compute[238941]: 2026-01-27 14:08:24.520 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpouwg6yi4" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:08:24 np0005597378 nova_compute[238941]: 2026-01-27 14:08:24.548 238945 DEBUG nova.storage.rbd_utils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 2466272a-7218-432a-a223-43ade0ce6480_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:08:24 np0005597378 nova_compute[238941]: 2026-01-27 14:08:24.552 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/disk.config 2466272a-7218-432a-a223-43ade0ce6480_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:08:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 157 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Jan 27 09:08:25 np0005597378 nova_compute[238941]: 2026-01-27 14:08:25.555 238945 DEBUG oslo_concurrency.processutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/disk.config 2466272a-7218-432a-a223-43ade0ce6480_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.003s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:08:25 np0005597378 nova_compute[238941]: 2026-01-27 14:08:25.556 238945 INFO nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Deleting local config drive /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480/disk.config because it was imported into RBD.#033[00m
Jan 27 09:08:25 np0005597378 kernel: tap82581e93-b7: entered promiscuous mode
Jan 27 09:08:25 np0005597378 NetworkManager[48904]: <info>  [1769522905.6048] manager: (tap82581e93-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/429)
Jan 27 09:08:25 np0005597378 nova_compute[238941]: 2026-01-27 14:08:25.605 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:25Z|01039|binding|INFO|Claiming lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for this chassis.
Jan 27 09:08:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:25Z|01040|binding|INFO|82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9: Claiming fa:16:3e:81:41:7b 10.100.0.9
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.617 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.618 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 bound to our chassis#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.619 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3ace283-87b2-4641-aad8-0cf005dc2525#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.633 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ccf6c4-f5c8-49d5-af7b-9dfa53c0caf6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.634 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape3ace283-81 in ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:08:25 np0005597378 systemd-udevd[334247]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.636 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape3ace283-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.636 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1cb2a7-a5de-47ab-8546-697b466e96c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.637 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[705550d2-1db5-40e5-b326-a34683ceb24e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 systemd-machined[207425]: New machine qemu-135-instance-0000006d.
Jan 27 09:08:25 np0005597378 NetworkManager[48904]: <info>  [1769522905.6497] device (tap82581e93-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:08:25 np0005597378 NetworkManager[48904]: <info>  [1769522905.6502] device (tap82581e93-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.649 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3870f9-33f8-4802-8c93-bf3f9fc16b71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 nova_compute[238941]: 2026-01-27 14:08:25.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:25 np0005597378 systemd[1]: Started Virtual Machine qemu-135-instance-0000006d.
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.679 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[49e36af6-d1d3-4118-832c-c3f28611fdb7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:25Z|01041|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 ovn-installed in OVS
Jan 27 09:08:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:25Z|01042|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 up in Southbound
Jan 27 09:08:25 np0005597378 nova_compute[238941]: 2026-01-27 14:08:25.685 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.713 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[73742e97-077e-465b-b2f4-4af8efa386b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.720 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eef926a2-73a9-466e-8858-2a91f0529cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 NetworkManager[48904]: <info>  [1769522905.7214] manager: (tape3ace283-80): new Veth device (/org/freedesktop/NetworkManager/Devices/430)
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.752 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4fe0a7-28d2-408f-886f-d289c820e199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.755 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[25e186ca-9656-49f1-a455-41e2b10734d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 NetworkManager[48904]: <info>  [1769522905.7808] device (tape3ace283-80): carrier: link connected
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.787 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[03db0e88-bf04-485c-b700-f02ec6f127da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.806 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6ab57e-ff0a-4ea2-8c9b-3bf453a20095]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3ace283-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:5a:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554540, 'reachable_time': 42508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334279, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.821 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b96088de-87d1-45be-b7c8-1eec90ec3901]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:5a07'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554540, 'tstamp': 554540}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334280, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.839 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52f3cd8a-4104-44e1-b540-35a304baf674]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3ace283-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:5a:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554540, 'reachable_time': 42508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334281, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.870 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[942ae0e9-9ab9-4063-a33f-468268f9db92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 nova_compute[238941]: 2026-01-27 14:08:25.918 238945 DEBUG nova.compute.manager [req-83c27736-978b-4ed7-8e2c-500b94992710 req-906c1e40-97a9-4097-a5c1-33e148a16264 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:25 np0005597378 nova_compute[238941]: 2026-01-27 14:08:25.919 238945 DEBUG oslo_concurrency.lockutils [req-83c27736-978b-4ed7-8e2c-500b94992710 req-906c1e40-97a9-4097-a5c1-33e148a16264 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:25 np0005597378 nova_compute[238941]: 2026-01-27 14:08:25.919 238945 DEBUG oslo_concurrency.lockutils [req-83c27736-978b-4ed7-8e2c-500b94992710 req-906c1e40-97a9-4097-a5c1-33e148a16264 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:25 np0005597378 nova_compute[238941]: 2026-01-27 14:08:25.919 238945 DEBUG oslo_concurrency.lockutils [req-83c27736-978b-4ed7-8e2c-500b94992710 req-906c1e40-97a9-4097-a5c1-33e148a16264 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:25 np0005597378 nova_compute[238941]: 2026-01-27 14:08:25.920 238945 DEBUG nova.compute.manager [req-83c27736-978b-4ed7-8e2c-500b94992710 req-906c1e40-97a9-4097-a5c1-33e148a16264 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Processing event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0f16b224-bad7-4155-b3c0-03de2f32f126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.932 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3ace283-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.932 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.932 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3ace283-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:08:25 np0005597378 kernel: tape3ace283-80: entered promiscuous mode
Jan 27 09:08:25 np0005597378 NetworkManager[48904]: <info>  [1769522905.9357] manager: (tape3ace283-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Jan 27 09:08:25 np0005597378 nova_compute[238941]: 2026-01-27 14:08:25.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.939 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3ace283-80, col_values=(('external_ids', {'iface-id': '8be2baf6-f073-4c01-989a-a8e6b98328a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:08:25 np0005597378 nova_compute[238941]: 2026-01-27 14:08:25.941 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:25Z|01043|binding|INFO|Releasing lport 8be2baf6-f073-4c01-989a-a8e6b98328a4 from this chassis (sb_readonly=0)
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.944 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e3ace283-87b2-4641-aad8-0cf005dc2525.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e3ace283-87b2-4641-aad8-0cf005dc2525.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.954 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0e740070-c5da-47e0-82be-04e5fa170668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.956 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-e3ace283-87b2-4641-aad8-0cf005dc2525
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/e3ace283-87b2-4641-aad8-0cf005dc2525.pid.haproxy
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID e3ace283-87b2-4641-aad8-0cf005dc2525
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:08:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:25.956 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'env', 'PROCESS_TAG=haproxy-e3ace283-87b2-4641-aad8-0cf005dc2525', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e3ace283-87b2-4641-aad8-0cf005dc2525.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:08:25 np0005597378 nova_compute[238941]: 2026-01-27 14:08:25.958 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.057 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:26.057 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:08:26 np0005597378 podman[334313]: 2026-01-27 14:08:26.308634125 +0000 UTC m=+0.025349162 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.644 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522906.6437752, 2466272a-7218-432a-a223-43ade0ce6480 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.645 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] VM Started (Lifecycle Event)#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.648 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.652 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.655 238945 INFO nova.virt.libvirt.driver [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance spawned successfully.#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.655 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.664 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.667 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.675 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.676 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.676 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.677 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.677 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.678 238945 DEBUG nova.virt.libvirt.driver [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.686 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.687 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522906.6439703, 2466272a-7218-432a-a223-43ade0ce6480 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.687 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.711 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.714 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522906.650505, 2466272a-7218-432a-a223-43ade0ce6480 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.715 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.734 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:26 np0005597378 podman[334313]: 2026-01-27 14:08:26.734490754 +0000 UTC m=+0.451205771 container create a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.749 238945 INFO nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Took 8.21 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.750 238945 DEBUG nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.754 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.778 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:08:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.817 238945 INFO nova.compute.manager [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Took 9.23 seconds to build instance.#033[00m
Jan 27 09:08:26 np0005597378 nova_compute[238941]: 2026-01-27 14:08:26.834 238945 DEBUG oslo_concurrency.lockutils [None req-53d93d1f-2415-408c-a184-744f0495fc45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:26 np0005597378 systemd[1]: Started libpod-conmon-a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa.scope.
Jan 27 09:08:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:08:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911f762cd174dcda19c44d773820676a6e214b57e1d5674182215417e4db2dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:27 np0005597378 podman[334313]: 2026-01-27 14:08:27.136233454 +0000 UTC m=+0.852948491 container init a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 09:08:27 np0005597378 podman[334313]: 2026-01-27 14:08:27.143749646 +0000 UTC m=+0.860464663 container start a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:08:27 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [NOTICE]   (334375) : New worker (334377) forked
Jan 27 09:08:27 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [NOTICE]   (334375) : Loading success.
Jan 27 09:08:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:27.372 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:08:27 np0005597378 nova_compute[238941]: 2026-01-27 14:08:27.377 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011166045152947734 of space, bias 1.0, pg target 0.33498135458843203 quantized to 32 (current 32)
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006685634372584745 of space, bias 1.0, pg target 0.20056903117754235 quantized to 32 (current 32)
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0656101082485542e-06 of space, bias 4.0, pg target 0.0012787321298982652 quantized to 16 (current 16)
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:08:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:08:27 np0005597378 kernel: tap05f217fa-37 (unregistering): left promiscuous mode
Jan 27 09:08:27 np0005597378 NetworkManager[48904]: <info>  [1769522907.7661] device (tap05f217fa-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:08:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:27Z|01044|binding|INFO|Releasing lport 05f217fa-372b-46d3-974f-de79101f0b2f from this chassis (sb_readonly=0)
Jan 27 09:08:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:27Z|01045|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f down in Southbound
Jan 27 09:08:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:27Z|01046|binding|INFO|Removing iface tap05f217fa-37 ovn-installed in OVS
Jan 27 09:08:27 np0005597378 nova_compute[238941]: 2026-01-27 14:08:27.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:27 np0005597378 nova_compute[238941]: 2026-01-27 14:08:27.776 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:27.782 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:08:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:27.784 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b unbound from our chassis#033[00m
Jan 27 09:08:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:27.785 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:08:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:27.786 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1338d1-415a-46ec-86ac-4e5010823fbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:27.787 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b namespace which is not needed anymore#033[00m
Jan 27 09:08:27 np0005597378 nova_compute[238941]: 2026-01-27 14:08:27.793 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:27 np0005597378 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Jan 27 09:08:27 np0005597378 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006a.scope: Consumed 15.439s CPU time.
Jan 27 09:08:27 np0005597378 systemd-machined[207425]: Machine qemu-132-instance-0000006a terminated.
Jan 27 09:08:27 np0005597378 nova_compute[238941]: 2026-01-27 14:08:27.982 238945 DEBUG nova.compute.manager [req-7f841b52-e9a5-4030-a21e-d73f05dd2328 req-da89a841-4b3a-4b01-b7a8-e39745cafdd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:27 np0005597378 nova_compute[238941]: 2026-01-27 14:08:27.983 238945 DEBUG oslo_concurrency.lockutils [req-7f841b52-e9a5-4030-a21e-d73f05dd2328 req-da89a841-4b3a-4b01-b7a8-e39745cafdd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:27 np0005597378 nova_compute[238941]: 2026-01-27 14:08:27.983 238945 DEBUG oslo_concurrency.lockutils [req-7f841b52-e9a5-4030-a21e-d73f05dd2328 req-da89a841-4b3a-4b01-b7a8-e39745cafdd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:27 np0005597378 nova_compute[238941]: 2026-01-27 14:08:27.983 238945 DEBUG oslo_concurrency.lockutils [req-7f841b52-e9a5-4030-a21e-d73f05dd2328 req-da89a841-4b3a-4b01-b7a8-e39745cafdd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:27 np0005597378 nova_compute[238941]: 2026-01-27 14:08:27.983 238945 DEBUG nova.compute.manager [req-7f841b52-e9a5-4030-a21e-d73f05dd2328 req-da89a841-4b3a-4b01-b7a8-e39745cafdd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:08:27 np0005597378 nova_compute[238941]: 2026-01-27 14:08:27.984 238945 WARNING nova.compute.manager [req-7f841b52-e9a5-4030-a21e-d73f05dd2328 req-da89a841-4b3a-4b01-b7a8-e39745cafdd8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state shelving.#033[00m
Jan 27 09:08:27 np0005597378 kernel: tap05f217fa-37: entered promiscuous mode
Jan 27 09:08:27 np0005597378 NetworkManager[48904]: <info>  [1769522907.9950] manager: (tap05f217fa-37): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Jan 27 09:08:27 np0005597378 systemd-udevd[334264]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:08:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:27Z|01047|binding|INFO|Claiming lport 05f217fa-372b-46d3-974f-de79101f0b2f for this chassis.
Jan 27 09:08:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:27Z|01048|binding|INFO|05f217fa-372b-46d3-974f-de79101f0b2f: Claiming fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 09:08:27 np0005597378 nova_compute[238941]: 2026-01-27 14:08:27.996 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:27 np0005597378 kernel: tap05f217fa-37 (unregistering): left promiscuous mode
Jan 27 09:08:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:28.005 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:08:28 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [NOTICE]   (332144) : haproxy version is 2.8.14-c23fe91
Jan 27 09:08:28 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [NOTICE]   (332144) : path to executable is /usr/sbin/haproxy
Jan 27 09:08:28 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [WARNING]  (332144) : Exiting Master process...
Jan 27 09:08:28 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [ALERT]    (332144) : Current worker (332146) exited with code 143 (Terminated)
Jan 27 09:08:28 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[332140]: [WARNING]  (332144) : All workers exited. Exiting... (0)
Jan 27 09:08:28 np0005597378 systemd[1]: libpod-872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1.scope: Deactivated successfully.
Jan 27 09:08:28 np0005597378 podman[334407]: 2026-01-27 14:08:28.021835572 +0000 UTC m=+0.152136680 container died 872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 09:08:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:28Z|01049|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f ovn-installed in OVS
Jan 27 09:08:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:28Z|01050|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f up in Southbound
Jan 27 09:08:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:28Z|01051|binding|INFO|Releasing lport 05f217fa-372b-46d3-974f-de79101f0b2f from this chassis (sb_readonly=1)
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.037 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:28Z|01052|if_status|INFO|Dropped 1 log messages in last 212 seconds (most recently, 212 seconds ago) due to excessive rate
Jan 27 09:08:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:28Z|01053|if_status|INFO|Not setting lport 05f217fa-372b-46d3-974f-de79101f0b2f down as sb is readonly
Jan 27 09:08:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:28Z|01054|binding|INFO|Removing iface tap05f217fa-37 ovn-installed in OVS
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:28Z|01055|binding|INFO|Releasing lport 05f217fa-372b-46d3-974f-de79101f0b2f from this chassis (sb_readonly=0)
Jan 27 09:08:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:28Z|01056|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f down in Southbound
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.043 238945 INFO nova.virt.libvirt.driver [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance shutdown successfully after 4 seconds.#033[00m
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.048 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance destroyed successfully.#033[00m
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.049 238945 DEBUG nova.objects.instance [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'numa_topology' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.052 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:28.054 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.069 238945 DEBUG nova.compute.manager [req-756ae776-1d93-4a88-bba3-7d568fcd5552 req-4256a03b-9d75-4bae-a5c9-f1bfcf306438 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.070 238945 DEBUG oslo_concurrency.lockutils [req-756ae776-1d93-4a88-bba3-7d568fcd5552 req-4256a03b-9d75-4bae-a5c9-f1bfcf306438 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.070 238945 DEBUG oslo_concurrency.lockutils [req-756ae776-1d93-4a88-bba3-7d568fcd5552 req-4256a03b-9d75-4bae-a5c9-f1bfcf306438 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.071 238945 DEBUG oslo_concurrency.lockutils [req-756ae776-1d93-4a88-bba3-7d568fcd5552 req-4256a03b-9d75-4bae-a5c9-f1bfcf306438 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.071 238945 DEBUG nova.compute.manager [req-756ae776-1d93-4a88-bba3-7d568fcd5552 req-4256a03b-9d75-4bae-a5c9-f1bfcf306438 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.071 238945 WARNING nova.compute.manager [req-756ae776-1d93-4a88-bba3-7d568fcd5552 req-4256a03b-9d75-4bae-a5c9-f1bfcf306438 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.283 238945 INFO nova.virt.libvirt.driver [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Beginning cold snapshot process#033[00m
Jan 27 09:08:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-924ca992457f6bdeab196a6ef3ca22c517d8ecce52c9ae93303d17b4c880e894-merged.mount: Deactivated successfully.
Jan 27 09:08:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1-userdata-shm.mount: Deactivated successfully.
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.436 238945 DEBUG nova.virt.libvirt.imagebackend [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:28 np0005597378 nova_compute[238941]: 2026-01-27 14:08:28.615 238945 DEBUG nova.storage.rbd_utils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] creating snapshot(ca0f503be1d54de58d3d2147b4ccf75e) on rbd image(7514a588-c48b-45af-a889-ea57cc9f1730_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 09:08:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Jan 27 09:08:29 np0005597378 podman[334407]: 2026-01-27 14:08:29.109603936 +0000 UTC m=+1.239905044 container cleanup 872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 09:08:29 np0005597378 systemd[1]: libpod-conmon-872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1.scope: Deactivated successfully.
Jan 27 09:08:29 np0005597378 nova_compute[238941]: 2026-01-27 14:08:29.364 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:29 np0005597378 NetworkManager[48904]: <info>  [1769522909.3656] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Jan 27 09:08:29 np0005597378 NetworkManager[48904]: <info>  [1769522909.3662] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Jan 27 09:08:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:29Z|01057|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 09:08:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:29Z|01058|binding|INFO|Releasing lport 8be2baf6-f073-4c01-989a-a8e6b98328a4 from this chassis (sb_readonly=0)
Jan 27 09:08:29 np0005597378 nova_compute[238941]: 2026-01-27 14:08:29.401 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:29Z|01059|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 09:08:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:29Z|01060|binding|INFO|Releasing lport 8be2baf6-f073-4c01-989a-a8e6b98328a4 from this chassis (sb_readonly=0)
Jan 27 09:08:29 np0005597378 nova_compute[238941]: 2026-01-27 14:08:29.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.090 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.090 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.091 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.091 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.091 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.091 238945 WARNING nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.091 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.092 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.092 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.092 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.092 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.093 238945 WARNING nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.093 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.093 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.093 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.094 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.094 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.094 238945 WARNING nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.094 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.094 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.095 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.095 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.095 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.096 238945 WARNING nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.096 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.096 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.096 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.096 238945 DEBUG oslo_concurrency.lockutils [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.097 238945 DEBUG nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.097 238945 WARNING nova.compute.manager [req-be58c8c6-f738-4b97-a539-0b55d801334b req-5b6c0477-ec37-413b-b1fc-bf9f402bcdf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 27 09:08:30 np0005597378 podman[334493]: 2026-01-27 14:08:30.158222307 +0000 UTC m=+1.024086442 container remove 872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.164 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95bc7c0c-a32c-4d31-9620-f26c3fd353b9]: (4, ('Tue Jan 27 02:08:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b (872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1)\n872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1\nTue Jan 27 02:08:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b (872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1)\n872d35d2f0a99dd6d3c87547cdaf7cc953a737c25f8f498fd53576433337cbf1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.165 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8db95c29-5d98-4399-9ff2-2bd3ee5b68c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.167 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.168 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:30 np0005597378 kernel: tap5d5d79a0-30: left promiscuous mode
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.185 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:30 np0005597378 nova_compute[238941]: 2026-01-27 14:08:30.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.188 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b58f9f62-3ae5-47aa-9ccb-9fe142a44645]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.203 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d628f1f7-fb85-4589-96f8-23f22409c9cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.205 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e80bb45b-55b9-418d-bbb8-8e9d38f81121]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.221 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4b1e2d-9358-414e-97f8-bedc3b6c7eea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547355, 'reachable_time': 28963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334511, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:30 np0005597378 systemd[1]: run-netns-ovnmeta\x2d5d5d79a0\x2d3ea3\x2d4f88\x2d81a4\x2d888d41f69a7b.mount: Deactivated successfully.
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.227 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.227 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[5e174651-a254-46f1-80ba-204a74f56362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.228 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b unbound from our chassis#033[00m
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.229 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.230 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[880a9cab-870a-4ced-b19c-51fcec016402]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.230 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b unbound from our chassis#033[00m
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.231 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:08:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:30.231 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[20e9754a-56df-4c50-a70f-a87f76ebd2ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Jan 27 09:08:30 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Jan 27 09:08:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 124 op/s
Jan 27 09:08:31 np0005597378 nova_compute[238941]: 2026-01-27 14:08:31.140 238945 DEBUG nova.storage.rbd_utils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] cloning vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk@ca0f503be1d54de58d3d2147b4ccf75e to images/f0a65138-a9e8-4fc9-a555-029413bf034c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 09:08:31 np0005597378 podman[334611]: 2026-01-27 14:08:31.173499332 +0000 UTC m=+0.359272919 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:08:31 np0005597378 podman[334665]: 2026-01-27 14:08:31.474459873 +0000 UTC m=+0.193487942 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:08:31 np0005597378 podman[334611]: 2026-01-27 14:08:31.699262407 +0000 UTC m=+0.885035984 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 09:08:32 np0005597378 nova_compute[238941]: 2026-01-27 14:08:32.320 238945 DEBUG nova.storage.rbd_utils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] flattening images/f0a65138-a9e8-4fc9-a555-029413bf034c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 09:08:32 np0005597378 nova_compute[238941]: 2026-01-27 14:08:32.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:32 np0005597378 nova_compute[238941]: 2026-01-27 14:08:32.470 238945 DEBUG nova.compute.manager [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-changed-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:32 np0005597378 nova_compute[238941]: 2026-01-27 14:08:32.470 238945 DEBUG nova.compute.manager [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Refreshing instance network info cache due to event network-changed-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:08:32 np0005597378 nova_compute[238941]: 2026-01-27 14:08:32.471 238945 DEBUG oslo_concurrency.lockutils [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:08:32 np0005597378 nova_compute[238941]: 2026-01-27 14:08:32.471 238945 DEBUG oslo_concurrency.lockutils [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:08:32 np0005597378 nova_compute[238941]: 2026-01-27 14:08:32.471 238945 DEBUG nova.network.neutron [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Refreshing network info cache for port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:08:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 124 op/s
Jan 27 09:08:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:08:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:08:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:08:33 np0005597378 nova_compute[238941]: 2026-01-27 14:08:33.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:08:33 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:08:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:08:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:08:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:08:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:08:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:08:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:08:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:08:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:08:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:08:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:08:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:08:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:08:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 192 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 118 op/s
Jan 27 09:08:34 np0005597378 podman[334998]: 2026-01-27 14:08:34.771739578 +0000 UTC m=+0.020563524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:08:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:35 np0005597378 podman[334998]: 2026-01-27 14:08:35.067446577 +0000 UTC m=+0.316270503 container create 8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 09:08:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:08:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:08:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:08:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:08:35 np0005597378 nova_compute[238941]: 2026-01-27 14:08:35.375 238945 DEBUG nova.network.neutron [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updated VIF entry in instance network info cache for port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:08:35 np0005597378 nova_compute[238941]: 2026-01-27 14:08:35.376 238945 DEBUG nova.network.neutron [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updating instance_info_cache with network_info: [{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:08:35 np0005597378 nova_compute[238941]: 2026-01-27 14:08:35.391 238945 DEBUG oslo_concurrency.lockutils [req-ca5d9e15-1b6e-48dc-9f77-e5fe7c7ce727 req-885c58a3-249c-4c6c-9516-82c425d69d98 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:08:35 np0005597378 systemd[1]: Started libpod-conmon-8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00.scope.
Jan 27 09:08:35 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:08:35 np0005597378 podman[334998]: 2026-01-27 14:08:35.922199387 +0000 UTC m=+1.171023343 container init 8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:08:35 np0005597378 podman[334998]: 2026-01-27 14:08:35.930254103 +0000 UTC m=+1.179078029 container start 8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:08:35 np0005597378 trusting_pike[335014]: 167 167
Jan 27 09:08:35 np0005597378 systemd[1]: libpod-8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00.scope: Deactivated successfully.
Jan 27 09:08:36 np0005597378 podman[334998]: 2026-01-27 14:08:36.152767845 +0000 UTC m=+1.401591771 container attach 8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 09:08:36 np0005597378 podman[334998]: 2026-01-27 14:08:36.15330672 +0000 UTC m=+1.402130636 container died 8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 09:08:36 np0005597378 nova_compute[238941]: 2026-01-27 14:08:36.255 238945 DEBUG nova.storage.rbd_utils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] removing snapshot(ca0f503be1d54de58d3d2147b4ccf75e) on rbd image(7514a588-c48b-45af-a889-ea57cc9f1730_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 09:08:36 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9294665be047e578046d21a335c1dbdc1e6603cdaf2d84c92b0b7e79753dfd6e-merged.mount: Deactivated successfully.
Jan 27 09:08:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 197 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 1.9 MiB/s wr, 148 op/s
Jan 27 09:08:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:37.374 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:08:37 np0005597378 nova_compute[238941]: 2026-01-27 14:08:37.381 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:37 np0005597378 podman[334998]: 2026-01-27 14:08:37.428154573 +0000 UTC m=+2.676978499 container remove 8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 09:08:37 np0005597378 systemd[1]: libpod-conmon-8ed75fa802824b6a686b0a0f1419c7d86780c82a2581622f6477a45bbde34d00.scope: Deactivated successfully.
Jan 27 09:08:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Jan 27 09:08:37 np0005597378 podman[335059]: 2026-01-27 14:08:37.595213855 +0000 UTC m=+0.025782745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:08:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Jan 27 09:08:38 np0005597378 podman[335059]: 2026-01-27 14:08:38.142051015 +0000 UTC m=+0.572619845 container create ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:08:38 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Jan 27 09:08:38 np0005597378 systemd[1]: Started libpod-conmon-ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c.scope.
Jan 27 09:08:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:08:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35135666c7daad4eac89802edb4dcb108a496173d732d7008a4eba4576ca01a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35135666c7daad4eac89802edb4dcb108a496173d732d7008a4eba4576ca01a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35135666c7daad4eac89802edb4dcb108a496173d732d7008a4eba4576ca01a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35135666c7daad4eac89802edb4dcb108a496173d732d7008a4eba4576ca01a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35135666c7daad4eac89802edb4dcb108a496173d732d7008a4eba4576ca01a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:38 np0005597378 nova_compute[238941]: 2026-01-27 14:08:38.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:38 np0005597378 podman[335059]: 2026-01-27 14:08:38.629048298 +0000 UTC m=+1.059617158 container init ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 09:08:38 np0005597378 podman[335059]: 2026-01-27 14:08:38.637005262 +0000 UTC m=+1.067574092 container start ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 09:08:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 246 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 5.6 MiB/s wr, 104 op/s
Jan 27 09:08:38 np0005597378 podman[335059]: 2026-01-27 14:08:38.811166774 +0000 UTC m=+1.241735604 container attach ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:08:39 np0005597378 zen_saha[335076]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:08:39 np0005597378 zen_saha[335076]: --> All data devices are unavailable
Jan 27 09:08:39 np0005597378 systemd[1]: libpod-ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c.scope: Deactivated successfully.
Jan 27 09:08:39 np0005597378 conmon[335076]: conmon ebba7566fbe1a7210462 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c.scope/container/memory.events
Jan 27 09:08:39 np0005597378 podman[335059]: 2026-01-27 14:08:39.123057629 +0000 UTC m=+1.553626459 container died ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:08:39 np0005597378 nova_compute[238941]: 2026-01-27 14:08:39.866 238945 DEBUG nova.storage.rbd_utils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] creating snapshot(snap) on rbd image(f0a65138-a9e8-4fc9-a555-029413bf034c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 27 09:08:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-35135666c7daad4eac89802edb4dcb108a496173d732d7008a4eba4576ca01a6-merged.mount: Deactivated successfully.
Jan 27 09:08:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 249 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.1 MiB/s wr, 97 op/s
Jan 27 09:08:41 np0005597378 podman[335059]: 2026-01-27 14:08:41.284868437 +0000 UTC m=+3.715437267 container remove ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:08:41 np0005597378 systemd[1]: libpod-conmon-ebba7566fbe1a7210462cfc3694b4baeb935070db029ff61b35294dc0de50b6c.scope: Deactivated successfully.
Jan 27 09:08:41 np0005597378 podman[335189]: 2026-01-27 14:08:41.822944492 +0000 UTC m=+0.116647997 container create 374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_roentgen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 09:08:41 np0005597378 podman[335189]: 2026-01-27 14:08:41.728841962 +0000 UTC m=+0.022545457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:08:41 np0005597378 systemd[1]: Started libpod-conmon-374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58.scope.
Jan 27 09:08:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Jan 27 09:08:41 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:08:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Jan 27 09:08:42 np0005597378 podman[335189]: 2026-01-27 14:08:42.109944158 +0000 UTC m=+0.403647643 container init 374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 09:08:42 np0005597378 podman[335189]: 2026-01-27 14:08:42.117236584 +0000 UTC m=+0.410940049 container start 374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:08:42 np0005597378 festive_roentgen[335205]: 167 167
Jan 27 09:08:42 np0005597378 systemd[1]: libpod-374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58.scope: Deactivated successfully.
Jan 27 09:08:42 np0005597378 podman[335189]: 2026-01-27 14:08:42.209559386 +0000 UTC m=+0.503262851 container attach 374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_roentgen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:08:42 np0005597378 podman[335189]: 2026-01-27 14:08:42.211377935 +0000 UTC m=+0.505081420 container died 374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_roentgen, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 09:08:42 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Jan 27 09:08:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay-326137adea0c215bd9d8f5ab44465e10da0611b7dc2176268b86228f1151df10-merged.mount: Deactivated successfully.
Jan 27 09:08:42 np0005597378 nova_compute[238941]: 2026-01-27 14:08:42.382 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:08:42 np0005597378 podman[335189]: 2026-01-27 14:08:42.43216132 +0000 UTC m=+0.725864785 container remove 374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_roentgen, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:08:42 np0005597378 systemd[1]: libpod-conmon-374f5350367d68b56ab21f61627def7461762432e2374b71c4cf252e4fa64b58.scope: Deactivated successfully.
Jan 27 09:08:42 np0005597378 podman[335229]: 2026-01-27 14:08:42.67947833 +0000 UTC m=+0.117028258 container create b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:08:42 np0005597378 podman[335229]: 2026-01-27 14:08:42.5857767 +0000 UTC m=+0.023326648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:08:42 np0005597378 systemd[1]: Started libpod-conmon-b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128.scope.
Jan 27 09:08:42 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:08:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c28f9b264347a7477227afd05096603ff9c1ccc4818455704f67697f970a33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c28f9b264347a7477227afd05096603ff9c1ccc4818455704f67697f970a33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c28f9b264347a7477227afd05096603ff9c1ccc4818455704f67697f970a33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c28f9b264347a7477227afd05096603ff9c1ccc4818455704f67697f970a33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 249 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.4 MiB/s wr, 90 op/s
Jan 27 09:08:42 np0005597378 podman[335229]: 2026-01-27 14:08:42.82012079 +0000 UTC m=+0.257670738 container init b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 09:08:42 np0005597378 podman[335229]: 2026-01-27 14:08:42.828204688 +0000 UTC m=+0.265754606 container start b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:08:42 np0005597378 podman[335229]: 2026-01-27 14:08:42.846733066 +0000 UTC m=+0.284283044 container attach b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 09:08:43 np0005597378 nova_compute[238941]: 2026-01-27 14:08:43.021 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522908.0205774, 7514a588-c48b-45af-a889-ea57cc9f1730 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:08:43 np0005597378 nova_compute[238941]: 2026-01-27 14:08:43.024 238945 INFO nova.compute.manager [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Stopped (Lifecycle Event)
Jan 27 09:08:43 np0005597378 nova_compute[238941]: 2026-01-27 14:08:43.069 238945 DEBUG nova.compute.manager [None req-bb8a118e-aebf-452c-89db-09d7a0464efb - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:08:43 np0005597378 nova_compute[238941]: 2026-01-27 14:08:43.075 238945 DEBUG nova.compute.manager [None req-bb8a118e-aebf-452c-89db-09d7a0464efb - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: shelving_image_uploading, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:08:43 np0005597378 nova_compute[238941]: 2026-01-27 14:08:43.093 238945 INFO nova.compute.manager [None req-bb8a118e-aebf-452c-89db-09d7a0464efb - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (shelving_image_uploading). Skip.
Jan 27 09:08:43 np0005597378 great_bardeen[335246]: {
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:    "0": [
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:        {
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "devices": [
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "/dev/loop3"
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            ],
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_name": "ceph_lv0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_size": "21470642176",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "name": "ceph_lv0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "tags": {
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.cluster_name": "ceph",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.crush_device_class": "",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.encrypted": "0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.objectstore": "bluestore",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.osd_id": "0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.type": "block",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.vdo": "0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.with_tpm": "0"
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            },
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "type": "block",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "vg_name": "ceph_vg0"
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:        }
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:    ],
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:    "1": [
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:        {
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "devices": [
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "/dev/loop4"
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            ],
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_name": "ceph_lv1",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_size": "21470642176",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "name": "ceph_lv1",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "tags": {
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.cluster_name": "ceph",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.crush_device_class": "",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.encrypted": "0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.objectstore": "bluestore",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.osd_id": "1",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.type": "block",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.vdo": "0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.with_tpm": "0"
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            },
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "type": "block",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "vg_name": "ceph_vg1"
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:        }
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:    ],
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:    "2": [
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:        {
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "devices": [
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "/dev/loop5"
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            ],
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_name": "ceph_lv2",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_size": "21470642176",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "name": "ceph_lv2",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "tags": {
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.cluster_name": "ceph",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.crush_device_class": "",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.encrypted": "0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.objectstore": "bluestore",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.osd_id": "2",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.type": "block",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.vdo": "0",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:                "ceph.with_tpm": "0"
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            },
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "type": "block",
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:            "vg_name": "ceph_vg2"
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:        }
Jan 27 09:08:43 np0005597378 great_bardeen[335246]:    ]
Jan 27 09:08:43 np0005597378 great_bardeen[335246]: }
Jan 27 09:08:43 np0005597378 systemd[1]: libpod-b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128.scope: Deactivated successfully.
Jan 27 09:08:43 np0005597378 podman[335229]: 2026-01-27 14:08:43.123964259 +0000 UTC m=+0.561514187 container died b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:08:43 np0005597378 nova_compute[238941]: 2026-01-27 14:08:43.467 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:08:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:43Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:41:7b 10.100.0.9
Jan 27 09:08:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:43Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:41:7b 10.100.0.9
Jan 27 09:08:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-72c28f9b264347a7477227afd05096603ff9c1ccc4818455704f67697f970a33-merged.mount: Deactivated successfully.
Jan 27 09:08:44 np0005597378 podman[335229]: 2026-01-27 14:08:44.540260627 +0000 UTC m=+1.977810555 container remove b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 09:08:44 np0005597378 systemd[1]: libpod-conmon-b82e8f54891f3466cace61a1de787389908b5d146ed21d949dadf079824a5128.scope: Deactivated successfully.
Jan 27 09:08:44 np0005597378 podman[335267]: 2026-01-27 14:08:44.643687047 +0000 UTC m=+0.714131000 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 27 09:08:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 270 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 6.3 MiB/s wr, 89 op/s
Jan 27 09:08:44 np0005597378 nova_compute[238941]: 2026-01-27 14:08:44.884 238945 INFO nova.virt.libvirt.driver [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Snapshot image upload complete#033[00m
Jan 27 09:08:44 np0005597378 nova_compute[238941]: 2026-01-27 14:08:44.885 238945 DEBUG nova.compute.manager [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:44 np0005597378 nova_compute[238941]: 2026-01-27 14:08:44.935 238945 INFO nova.compute.manager [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Shelve offloading#033[00m
Jan 27 09:08:44 np0005597378 nova_compute[238941]: 2026-01-27 14:08:44.942 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance destroyed successfully.#033[00m
Jan 27 09:08:44 np0005597378 nova_compute[238941]: 2026-01-27 14:08:44.942 238945 DEBUG nova.compute.manager [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:44 np0005597378 nova_compute[238941]: 2026-01-27 14:08:44.945 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:08:44 np0005597378 nova_compute[238941]: 2026-01-27 14:08:44.945 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:08:44 np0005597378 nova_compute[238941]: 2026-01-27 14:08:44.945 238945 DEBUG nova.network.neutron [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:08:45 np0005597378 podman[335349]: 2026-01-27 14:08:45.197442225 +0000 UTC m=+0.021649293 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:08:45 np0005597378 podman[335349]: 2026-01-27 14:08:45.605843584 +0000 UTC m=+0.430050622 container create 77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jang, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 09:08:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:45 np0005597378 systemd[1]: Started libpod-conmon-77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea.scope.
Jan 27 09:08:45 np0005597378 podman[335363]: 2026-01-27 14:08:45.948192627 +0000 UTC m=+0.307450655 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:08:45 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:08:46 np0005597378 podman[335349]: 2026-01-27 14:08:46.293237865 +0000 UTC m=+1.117444953 container init 77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jang, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 27 09:08:46 np0005597378 podman[335349]: 2026-01-27 14:08:46.300379776 +0000 UTC m=+1.124586824 container start 77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 09:08:46 np0005597378 magical_jang[335386]: 167 167
Jan 27 09:08:46 np0005597378 systemd[1]: libpod-77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea.scope: Deactivated successfully.
Jan 27 09:08:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:46.315 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:46 np0005597378 podman[335349]: 2026-01-27 14:08:46.736053018 +0000 UTC m=+1.560260056 container attach 77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jang, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:08:46 np0005597378 podman[335349]: 2026-01-27 14:08:46.736441468 +0000 UTC m=+1.560648496 container died 77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jang, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 09:08:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 271 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 920 KiB/s rd, 6.0 MiB/s wr, 123 op/s
Jan 27 09:08:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3fdc8f1e7c5fd81439960d8880c8b77b052d22d6f2b8e99951b1b5b247293509-merged.mount: Deactivated successfully.
Jan 27 09:08:47 np0005597378 nova_compute[238941]: 2026-01-27 14:08:47.354 238945 DEBUG nova.network.neutron [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:08:47 np0005597378 nova_compute[238941]: 2026-01-27 14:08:47.374 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:08:47 np0005597378 nova_compute[238941]: 2026-01-27 14:08:47.384 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:47 np0005597378 podman[335349]: 2026-01-27 14:08:47.460901926 +0000 UTC m=+2.285108964 container remove 77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:08:47 np0005597378 systemd[1]: libpod-conmon-77cbead4ed4d3c34730ba3939575d4c85623b65484d896dd227f8cfbdc7265ea.scope: Deactivated successfully.
Jan 27 09:08:47 np0005597378 podman[335420]: 2026-01-27 14:08:47.616177821 +0000 UTC m=+0.024567052 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:08:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:08:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:08:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:08:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:08:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:08:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:08:47 np0005597378 podman[335420]: 2026-01-27 14:08:47.917463361 +0000 UTC m=+0.325852562 container create 95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_golick, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 09:08:48 np0005597378 systemd[1]: Started libpod-conmon-95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1.scope.
Jan 27 09:08:48 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:08:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfc4903b5ea76fa658d472256b3884c89bd9cba94dc74dfc36abd0870251ada/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfc4903b5ea76fa658d472256b3884c89bd9cba94dc74dfc36abd0870251ada/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfc4903b5ea76fa658d472256b3884c89bd9cba94dc74dfc36abd0870251ada/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfc4903b5ea76fa658d472256b3884c89bd9cba94dc74dfc36abd0870251ada/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:48 np0005597378 podman[335420]: 2026-01-27 14:08:48.22690537 +0000 UTC m=+0.635294581 container init 95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:08:48 np0005597378 podman[335420]: 2026-01-27 14:08:48.234490064 +0000 UTC m=+0.642879275 container start 95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_golick, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 09:08:48 np0005597378 podman[335420]: 2026-01-27 14:08:48.417765651 +0000 UTC m=+0.826155302 container attach 95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_golick, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.616 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance destroyed successfully.#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.616 238945 DEBUG nova.objects.instance [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'resources' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.657 238945 DEBUG nova.virt.libvirt.vif [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:07:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member',shelved_at='2026-01-27T14:08:44.885147',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f0a65138-a9e8-4fc9-a555-029413bf034c'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:08:28Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.658 238945 DEBUG nova.network.os_vif_util [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.658 238945 DEBUG nova.network.os_vif_util [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.659 238945 DEBUG os_vif [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.661 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.661 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05f217fa-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.663 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.667 238945 INFO os_vif [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37')#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.711 238945 DEBUG nova.compute.manager [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-changed-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.711 238945 DEBUG nova.compute.manager [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Refreshing instance network info cache due to event network-changed-05f217fa-372b-46d3-974f-de79101f0b2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.711 238945 DEBUG oslo_concurrency.lockutils [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.711 238945 DEBUG oslo_concurrency.lockutils [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:08:48 np0005597378 nova_compute[238941]: 2026-01-27 14:08:48.712 238945 DEBUG nova.network.neutron [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Refreshing network info cache for port 05f217fa-372b-46d3-974f-de79101f0b2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:08:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 277 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 2.5 MiB/s wr, 90 op/s
Jan 27 09:08:48 np0005597378 lvm[335533]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:08:48 np0005597378 lvm[335533]: VG ceph_vg0 finished
Jan 27 09:08:48 np0005597378 lvm[335534]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:08:48 np0005597378 lvm[335534]: VG ceph_vg1 finished
Jan 27 09:08:48 np0005597378 lvm[335536]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:08:48 np0005597378 lvm[335536]: VG ceph_vg2 finished
Jan 27 09:08:49 np0005597378 relaxed_golick[335437]: {}
Jan 27 09:08:49 np0005597378 systemd[1]: libpod-95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1.scope: Deactivated successfully.
Jan 27 09:08:49 np0005597378 podman[335420]: 2026-01-27 14:08:49.265840741 +0000 UTC m=+1.674229992 container died 95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_golick, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:08:49 np0005597378 systemd[1]: libpod-95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1.scope: Consumed 1.455s CPU time.
Jan 27 09:08:49 np0005597378 nova_compute[238941]: 2026-01-27 14:08:49.848 238945 INFO nova.compute.manager [None req-6efd1108-216f-43f3-bbe7-97841f5406f9 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Get console output
Jan 27 09:08:49 np0005597378 nova_compute[238941]: 2026-01-27 14:08:49.864 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 09:08:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5dfc4903b5ea76fa658d472256b3884c89bd9cba94dc74dfc36abd0870251ada-merged.mount: Deactivated successfully.
Jan 27 09:08:50 np0005597378 nova_compute[238941]: 2026-01-27 14:08:50.078 238945 DEBUG nova.network.neutron [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updated VIF entry in instance network info cache for port 05f217fa-372b-46d3-974f-de79101f0b2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 09:08:50 np0005597378 nova_compute[238941]: 2026-01-27 14:08:50.079 238945 DEBUG nova.network.neutron [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": null, "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap05f217fa-37", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:08:50 np0005597378 nova_compute[238941]: 2026-01-27 14:08:50.096 238945 DEBUG oslo_concurrency.lockutils [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:08:50 np0005597378 nova_compute[238941]: 2026-01-27 14:08:50.097 238945 DEBUG oslo_concurrency.lockutils [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:08:50 np0005597378 nova_compute[238941]: 2026-01-27 14:08:50.097 238945 INFO nova.compute.manager [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Rebooting instance
Jan 27 09:08:50 np0005597378 nova_compute[238941]: 2026-01-27 14:08:50.099 238945 DEBUG oslo_concurrency.lockutils [req-1bcb8f33-6222-4852-9e89-6fc63994503e req-a16a0ad2-0654-4e91-9d50-aa02d1bd9a7b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:08:50 np0005597378 nova_compute[238941]: 2026-01-27 14:08:50.112 238945 DEBUG oslo_concurrency.lockutils [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:08:50 np0005597378 nova_compute[238941]: 2026-01-27 14:08:50.113 238945 DEBUG oslo_concurrency.lockutils [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:08:50 np0005597378 nova_compute[238941]: 2026-01-27 14:08:50.113 238945 DEBUG nova.network.neutron [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 09:08:50 np0005597378 podman[335420]: 2026-01-27 14:08:50.600181234 +0000 UTC m=+3.008570445 container remove 95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 09:08:50 np0005597378 systemd[1]: libpod-conmon-95725970c0c6183e33a95f6ab02f97bc63114dd5445f62d6cf98f700da743df1.scope: Deactivated successfully.
Jan 27 09:08:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:08:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:08:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:08:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 279 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Jan 27 09:08:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:08:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Jan 27 09:08:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Jan 27 09:08:50 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Jan 27 09:08:51 np0005597378 nova_compute[238941]: 2026-01-27 14:08:51.509 238945 INFO nova.virt.libvirt.driver [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deleting instance files /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730_del
Jan 27 09:08:51 np0005597378 nova_compute[238941]: 2026-01-27 14:08:51.510 238945 INFO nova.virt.libvirt.driver [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deletion of /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730_del complete
Jan 27 09:08:51 np0005597378 nova_compute[238941]: 2026-01-27 14:08:51.627 238945 INFO nova.scheduler.client.report [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Deleted allocations for instance 7514a588-c48b-45af-a889-ea57cc9f1730
Jan 27 09:08:51 np0005597378 nova_compute[238941]: 2026-01-27 14:08:51.670 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:08:51 np0005597378 nova_compute[238941]: 2026-01-27 14:08:51.670 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:08:51 np0005597378 nova_compute[238941]: 2026-01-27 14:08:51.730 238945 DEBUG oslo_concurrency.processutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:08:51 np0005597378 nova_compute[238941]: 2026-01-27 14:08:51.765 238945 DEBUG nova.network.neutron [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updating instance_info_cache with network_info: [{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:08:51 np0005597378 nova_compute[238941]: 2026-01-27 14:08:51.786 238945 DEBUG oslo_concurrency.lockutils [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:08:51 np0005597378 nova_compute[238941]: 2026-01-27 14:08:51.788 238945 DEBUG nova.compute.manager [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:08:51 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:08:51 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:08:52 np0005597378 nova_compute[238941]: 2026-01-27 14:08:52.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:08:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:08:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16507983' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:08:52 np0005597378 nova_compute[238941]: 2026-01-27 14:08:52.419 238945 DEBUG oslo_concurrency.processutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:08:52 np0005597378 nova_compute[238941]: 2026-01-27 14:08:52.424 238945 DEBUG nova.compute.provider_tree [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:08:52 np0005597378 nova_compute[238941]: 2026-01-27 14:08:52.438 238945 DEBUG nova.scheduler.client.report [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:08:52 np0005597378 nova_compute[238941]: 2026-01-27 14:08:52.463 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:08:52 np0005597378 nova_compute[238941]: 2026-01-27 14:08:52.513 238945 DEBUG oslo_concurrency.lockutils [None req-71b12191-5168-4090-84de-ffe67c780a20 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 28.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:08:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 279 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Jan 27 09:08:53 np0005597378 nova_compute[238941]: 2026-01-27 14:08:53.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:08:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 235 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 289 KiB/s wr, 84 op/s
Jan 27 09:08:54 np0005597378 kernel: tap82581e93-b7 (unregistering): left promiscuous mode
Jan 27 09:08:54 np0005597378 NetworkManager[48904]: <info>  [1769522934.8416] device (tap82581e93-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:08:54 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:54Z|01061|binding|INFO|Releasing lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 from this chassis (sb_readonly=0)
Jan 27 09:08:54 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:54Z|01062|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 down in Southbound
Jan 27 09:08:54 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:54Z|01063|binding|INFO|Removing iface tap82581e93-b7 ovn-installed in OVS
Jan 27 09:08:54 np0005597378 nova_compute[238941]: 2026-01-27 14:08:54.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:08:54 np0005597378 nova_compute[238941]: 2026-01-27 14:08:54.875 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:08:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:54.877 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:08:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:54.878 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 unbound from our chassis
Jan 27 09:08:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:54.879 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3ace283-87b2-4641-aad8-0cf005dc2525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 09:08:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:54.880 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40f3c4a0-ebd1-4191-888b-8ff130d93987]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:08:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:54.881 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 namespace which is not needed anymore
Jan 27 09:08:54 np0005597378 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Jan 27 09:08:54 np0005597378 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Consumed 14.020s CPU time.
Jan 27 09:08:54 np0005597378 systemd-machined[207425]: Machine qemu-135-instance-0000006d terminated.
Jan 27 09:08:55 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [NOTICE]   (334375) : haproxy version is 2.8.14-c23fe91
Jan 27 09:08:55 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [NOTICE]   (334375) : path to executable is /usr/sbin/haproxy
Jan 27 09:08:55 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [WARNING]  (334375) : Exiting Master process...
Jan 27 09:08:55 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [ALERT]    (334375) : Current worker (334377) exited with code 143 (Terminated)
Jan 27 09:08:55 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[334371]: [WARNING]  (334375) : All workers exited. Exiting... (0)
Jan 27 09:08:55 np0005597378 systemd[1]: libpod-a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa.scope: Deactivated successfully.
Jan 27 09:08:55 np0005597378 podman[335622]: 2026-01-27 14:08:55.021824486 +0000 UTC m=+0.056712176 container died a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 09:08:55 np0005597378 kernel: tap82581e93-b7: entered promiscuous mode
Jan 27 09:08:55 np0005597378 kernel: tap82581e93-b7 (unregistering): left promiscuous mode
Jan 27 09:08:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:55Z|01064|binding|INFO|Claiming lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for this chassis.
Jan 27 09:08:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:55Z|01065|binding|INFO|82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9: Claiming fa:16:3e:81:41:7b 10.100.0.9
Jan 27 09:08:55 np0005597378 nova_compute[238941]: 2026-01-27 14:08:55.075 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:08:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:55Z|01066|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 ovn-installed in OVS
Jan 27 09:08:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:55Z|01067|if_status|INFO|Dropped 2 log messages in last 27 seconds (most recently, 27 seconds ago) due to excessive rate
Jan 27 09:08:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:55Z|01068|if_status|INFO|Not setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 down as sb is readonly
Jan 27 09:08:55 np0005597378 nova_compute[238941]: 2026-01-27 14:08:55.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:08:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:55Z|01069|binding|INFO|Releasing lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 from this chassis (sb_readonly=0)
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.140 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.146 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:08:55 np0005597378 nova_compute[238941]: 2026-01-27 14:08:55.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:08:55 np0005597378 nova_compute[238941]: 2026-01-27 14:08:55.244 238945 DEBUG nova.compute.manager [req-0b2b2c71-21e9-45aa-b1a3-e71d5eb3985f req-11f0c6e7-7259-4a8b-a605-1df32fb4380d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:08:55 np0005597378 nova_compute[238941]: 2026-01-27 14:08:55.245 238945 DEBUG oslo_concurrency.lockutils [req-0b2b2c71-21e9-45aa-b1a3-e71d5eb3985f req-11f0c6e7-7259-4a8b-a605-1df32fb4380d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:08:55 np0005597378 nova_compute[238941]: 2026-01-27 14:08:55.245 238945 DEBUG oslo_concurrency.lockutils [req-0b2b2c71-21e9-45aa-b1a3-e71d5eb3985f req-11f0c6e7-7259-4a8b-a605-1df32fb4380d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:08:55 np0005597378 nova_compute[238941]: 2026-01-27 14:08:55.245 238945 DEBUG oslo_concurrency.lockutils [req-0b2b2c71-21e9-45aa-b1a3-e71d5eb3985f req-11f0c6e7-7259-4a8b-a605-1df32fb4380d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:08:55 np0005597378 nova_compute[238941]: 2026-01-27 14:08:55.246 238945 DEBUG nova.compute.manager [req-0b2b2c71-21e9-45aa-b1a3-e71d5eb3985f req-11f0c6e7-7259-4a8b-a605-1df32fb4380d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:08:55 np0005597378 nova_compute[238941]: 2026-01-27 14:08:55.246 238945 WARNING nova.compute.manager [req-0b2b2c71-21e9-45aa-b1a3-e71d5eb3985f req-11f0c6e7-7259-4a8b-a605-1df32fb4380d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state reboot_started.
Jan 27 09:08:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa-userdata-shm.mount: Deactivated successfully.
Jan 27 09:08:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b911f762cd174dcda19c44d773820676a6e214b57e1d5674182215417e4db2dd-merged.mount: Deactivated successfully.
Jan 27 09:08:55 np0005597378 podman[335622]: 2026-01-27 14:08:55.592014875 +0000 UTC m=+0.626902555 container cleanup a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 09:08:55 np0005597378 systemd[1]: libpod-conmon-a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa.scope: Deactivated successfully.
Jan 27 09:08:55 np0005597378 podman[335651]: 2026-01-27 14:08:55.797095759 +0000 UTC m=+0.183400462 container remove a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.803 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f7816075-dc43-4a24-8653-4798bdcaf2fd]: (4, ('Tue Jan 27 02:08:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 (a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa)\na8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa\nTue Jan 27 02:08:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 (a8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa)\na8f479b849c569b4a82b1bc39e956bfcfe31b3c1492494a7fac3c2e8943a7ffa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.805 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[952dff00-9b3f-45bf-b648-da0015f64ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.806 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3ace283-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:08:55 np0005597378 kernel: tape3ace283-80: left promiscuous mode
Jan 27 09:08:55 np0005597378 nova_compute[238941]: 2026-01-27 14:08:55.808 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:55 np0005597378 nova_compute[238941]: 2026-01-27 14:08:55.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.826 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af127164-797c-4c39-a0ab-1d32cb302788]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.842 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[93e4b949-9413-423d-aec4-c87d9ddb8f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.844 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[088ce2cd-43a6-4aa7-99c6-12288bf1000d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.859 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9525da-dc7b-4457-9849-fca21e1d481a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554533, 'reachable_time': 36970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335671, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.862 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.862 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd13292-e8dc-4e7c-9a96-9cca1855577e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.863 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 unbound from our chassis#033[00m
Jan 27 09:08:55 np0005597378 systemd[1]: run-netns-ovnmeta\x2de3ace283\x2d87b2\x2d4641\x2daad8\x2d0cf005dc2525.mount: Deactivated successfully.
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.864 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3ace283-87b2-4641-aad8-0cf005dc2525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.865 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2b9583-2204-4675-a924-a3535ad6c198]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.865 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 unbound from our chassis#033[00m
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.866 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3ace283-87b2-4641-aad8-0cf005dc2525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:08:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:55.866 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[38dedb6a-2889-4d0d-b64a-1ca03bfa70a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.108 238945 INFO nova.virt.libvirt.driver [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance shutdown successfully.#033[00m
Jan 27 09:08:56 np0005597378 kernel: tap82581e93-b7: entered promiscuous mode
Jan 27 09:08:56 np0005597378 systemd-udevd[335601]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:08:56 np0005597378 NetworkManager[48904]: <info>  [1769522936.1708] manager: (tap82581e93-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/435)
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.169 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:56Z|01070|binding|INFO|Claiming lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for this chassis.
Jan 27 09:08:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:56Z|01071|binding|INFO|82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9: Claiming fa:16:3e:81:41:7b 10.100.0.9
Jan 27 09:08:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:56Z|01072|binding|INFO|Removing lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 ovn-installed in OVS
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.171 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:56 np0005597378 NetworkManager[48904]: <info>  [1769522936.1873] device (tap82581e93-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:08:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:56Z|01073|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 ovn-installed in OVS
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.188 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:56 np0005597378 NetworkManager[48904]: <info>  [1769522936.1895] device (tap82581e93-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.191 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:56 np0005597378 systemd-machined[207425]: New machine qemu-136-instance-0000006d.
Jan 27 09:08:56 np0005597378 systemd[1]: Started Virtual Machine qemu-136-instance-0000006d.
Jan 27 09:08:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:56Z|01074|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 up in Southbound
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.319 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.320 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 bound to our chassis#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.321 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3ace283-87b2-4641-aad8-0cf005dc2525#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.334 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce38dd1-81bb-45b6-b6fe-40519671fbae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.334 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape3ace283-81 in ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.336 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape3ace283-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.336 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b62d9814-f943-4111-8ca6-114fe96538a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.337 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[65cb5a5a-f861-42ff-bb68-6d3abe35dcda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.353 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4949c544-600d-4abe-9126-420cc2211579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.370 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[45e4e27e-38a2-46de-a8b3-9d48e0309a20]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.401 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9e14f7b2-941c-446a-9e8a-ac456fde8bad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.408 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db349717-15a9-45f3-a570-19a0eb9a5fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 NetworkManager[48904]: <info>  [1769522936.4115] manager: (tape3ace283-80): new Veth device (/org/freedesktop/NetworkManager/Devices/436)
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.445 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d0cdffc2-813e-45be-aaf5-cc76d77242e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.448 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef0ac20-cb25-4aa9-b6a5-0117efdabc88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.462 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.462 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.462 238945 INFO nova.compute.manager [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Unshelving#033[00m
Jan 27 09:08:56 np0005597378 NetworkManager[48904]: <info>  [1769522936.4754] device (tape3ace283-80): carrier: link connected
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.488 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5f445ff2-36d0-46de-9619-66d6cc4d23c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.509 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[27af687b-b3e3-46d0-9f69-1ac32180f2d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3ace283-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:5a:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557609, 'reachable_time': 16936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335716, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.527 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6b74bda8-786f-40f3-ae7c-b0b3982a1b1b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:5a07'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 557609, 'tstamp': 557609}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335717, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.550 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b48f3c72-4861-4743-bf09-ef1ed16d23a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3ace283-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:5a:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557609, 'reachable_time': 16936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335718, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.558 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.559 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.563 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'pci_requests' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.580 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d13ede-07bf-47b9-b42e-d2941e0605f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.601 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'numa_topology' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.657 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.658 238945 INFO nova.compute.claims [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.658 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5f5971c5-ff70-4e1d-aab2-d038fbe4886f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.660 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3ace283-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.660 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.661 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3ace283-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:08:56 np0005597378 kernel: tape3ace283-80: entered promiscuous mode
Jan 27 09:08:56 np0005597378 NetworkManager[48904]: <info>  [1769522936.6657] manager: (tape3ace283-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.667 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3ace283-80, col_values=(('external_ids', {'iface-id': '8be2baf6-f073-4c01-989a-a8e6b98328a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:08:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:08:56Z|01075|binding|INFO|Releasing lport 8be2baf6-f073-4c01-989a-a8e6b98328a4 from this chassis (sb_readonly=0)
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.685 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e3ace283-87b2-4641-aad8-0cf005dc2525.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e3ace283-87b2-4641-aad8-0cf005dc2525.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.686 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d950ba0a-1ae2-4877-9a53-c7d05105864c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.687 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-e3ace283-87b2-4641-aad8-0cf005dc2525
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/e3ace283-87b2-4641-aad8-0cf005dc2525.pid.haproxy
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID e3ace283-87b2-4641-aad8-0cf005dc2525
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:08:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:08:56.688 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'env', 'PROCESS_TAG=haproxy-e3ace283-87b2-4641-aad8-0cf005dc2525', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e3ace283-87b2-4641-aad8-0cf005dc2525.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:08:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 200 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 145 KiB/s wr, 54 op/s
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.942 238945 DEBUG nova.scheduler.client.report [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.948 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 2466272a-7218-432a-a223-43ade0ce6480 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.948 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522936.9478164, 2466272a-7218-432a-a223-43ade0ce6480 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.949 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.954 238945 INFO nova.virt.libvirt.driver [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance running successfully.#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.954 238945 INFO nova.virt.libvirt.driver [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance soft rebooted successfully.#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.954 238945 DEBUG nova.compute.manager [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.969 238945 DEBUG nova.scheduler.client.report [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.969 238945 DEBUG nova.compute.provider_tree [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.991 238945 DEBUG nova.scheduler.client.report [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.995 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:56 np0005597378 nova_compute[238941]: 2026-01-27 14:08:56.998 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.022 238945 DEBUG nova.scheduler.client.report [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.041 238945 DEBUG oslo_concurrency.lockutils [None req-5caa4b29-dd23-48af-9e5f-b3614f61399d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.043 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] During sync_power_state the instance has a pending task (reboot_started). Skip.#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.044 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522936.9480975, 2466272a-7218-432a-a223-43ade0ce6480 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.044 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] VM Started (Lifecycle Event)#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.071 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.074 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.084 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:08:57 np0005597378 podman[335792]: 2026-01-27 14:08:57.04660846 +0000 UTC m=+0.023524033 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:08:57 np0005597378 podman[335792]: 2026-01-27 14:08:57.320549595 +0000 UTC m=+0.297465148 container create 8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.331 238945 DEBUG nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.332 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.334 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.334 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.334 238945 DEBUG nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.335 238945 WARNING nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.335 238945 DEBUG nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.335 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.336 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.336 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.337 238945 DEBUG nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.337 238945 WARNING nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.337 238945 DEBUG nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.338 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.338 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.338 238945 DEBUG oslo_concurrency.lockutils [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.339 238945 DEBUG nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.339 238945 WARNING nova.compute.manager [req-7e1bb49a-e36e-4701-aaa6-892049c34ab9 req-05b45943-3616-44b4-985a-a050d7c41458 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.414 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:57 np0005597378 systemd[1]: Started libpod-conmon-8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2.scope.
Jan 27 09:08:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:08:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1afb596eb91856c943b7796af9b84888a906467625c9e9bc006594a3aa8f2c25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:08:57 np0005597378 podman[335792]: 2026-01-27 14:08:57.486602809 +0000 UTC m=+0.463518412 container init 8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:08:57 np0005597378 podman[335792]: 2026-01-27 14:08:57.492731204 +0000 UTC m=+0.469646767 container start 8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:08:57 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [NOTICE]   (335831) : New worker (335833) forked
Jan 27 09:08:57 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [NOTICE]   (335831) : Loading success.
Jan 27 09:08:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:08:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681673969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.677 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.682 238945 DEBUG nova.compute.provider_tree [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.811 238945 DEBUG nova.scheduler.client.report [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.842 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.848 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.848 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.848 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:08:57 np0005597378 nova_compute[238941]: 2026-01-27 14:08:57.849 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.011 238945 INFO nova.network.neutron [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating port 05f217fa-372b-46d3-974f-de79101f0b2f with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 27 09:08:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:08:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2242631407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.418 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.505 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.505 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.667 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.669 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3635MB free_disk=59.94201959762722GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.669 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.670 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.671 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.785 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 2466272a-7218-432a-a223-43ade0ce6480 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.785 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 7514a588-c48b-45af-a889-ea57cc9f1730 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.785 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.786 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:08:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 119 KiB/s wr, 83 op/s
Jan 27 09:08:58 np0005597378 nova_compute[238941]: 2026-01-27 14:08:58.857 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:08:59 np0005597378 nova_compute[238941]: 2026-01-27 14:08:59.332 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:08:59 np0005597378 nova_compute[238941]: 2026-01-27 14:08:59.332 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:08:59 np0005597378 nova_compute[238941]: 2026-01-27 14:08:59.333 238945 DEBUG nova.network.neutron [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:08:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:08:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3097135878' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:08:59 np0005597378 nova_compute[238941]: 2026-01-27 14:08:59.421 238945 DEBUG nova.compute.manager [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-changed-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:08:59 np0005597378 nova_compute[238941]: 2026-01-27 14:08:59.421 238945 DEBUG nova.compute.manager [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Refreshing instance network info cache due to event network-changed-05f217fa-372b-46d3-974f-de79101f0b2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:08:59 np0005597378 nova_compute[238941]: 2026-01-27 14:08:59.421 238945 DEBUG oslo_concurrency.lockutils [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:08:59 np0005597378 nova_compute[238941]: 2026-01-27 14:08:59.422 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:08:59 np0005597378 nova_compute[238941]: 2026-01-27 14:08:59.427 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:08:59 np0005597378 nova_compute[238941]: 2026-01-27 14:08:59.442 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:08:59 np0005597378 nova_compute[238941]: 2026-01-27 14:08:59.465 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:08:59 np0005597378 nova_compute[238941]: 2026-01-27 14:08:59.465 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:08:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:08:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3376316469' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:08:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:08:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3376316469' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.376 238945 DEBUG nova.network.neutron [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.442 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.443 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.444 238945 INFO nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Creating image(s)#033[00m
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.466 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.469 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.471 238945 DEBUG oslo_concurrency.lockutils [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.472 238945 DEBUG nova.network.neutron [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Refreshing network info cache for port 05f217fa-372b-46d3-974f-de79101f0b2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.522 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.560 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.565 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "c8def0e76a36e63ef463dc99da589230b727636a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.566 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "c8def0e76a36e63ef463dc99da589230b727636a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 33 KiB/s wr, 118 op/s
Jan 27 09:09:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.849 238945 DEBUG nova.virt.libvirt.imagebackend [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/f0a65138-a9e8-4fc9-a555-029413bf034c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/f0a65138-a9e8-4fc9-a555-029413bf034c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.893 238945 DEBUG nova.virt.libvirt.imagebackend [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Selected location: {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/f0a65138-a9e8-4fc9-a555-029413bf034c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 27 09:09:00 np0005597378 nova_compute[238941]: 2026-01-27 14:09:00.893 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] cloning images/f0a65138-a9e8-4fc9-a555-029413bf034c@snap to None/7514a588-c48b-45af-a889-ea57cc9f1730_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 09:09:01 np0005597378 nova_compute[238941]: 2026-01-27 14:09:01.011 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "c8def0e76a36e63ef463dc99da589230b727636a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:01 np0005597378 nova_compute[238941]: 2026-01-27 14:09:01.149 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'migration_context' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:01 np0005597378 nova_compute[238941]: 2026-01-27 14:09:01.255 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] flattening vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 09:09:01 np0005597378 nova_compute[238941]: 2026-01-27 14:09:01.797 238945 DEBUG nova.network.neutron [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updated VIF entry in instance network info cache for port 05f217fa-372b-46d3-974f-de79101f0b2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:09:01 np0005597378 nova_compute[238941]: 2026-01-27 14:09:01.798 238945 DEBUG nova.network.neutron [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:09:01 np0005597378 nova_compute[238941]: 2026-01-27 14:09:01.813 238945 DEBUG oslo_concurrency.lockutils [req-610a97ca-dc1d-46b8-9a59-a6e877f7987b req-4246f380-03aa-4eb4-8235-611c7a4711cf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.389 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.465 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.542 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Image rbd:vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.543 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.544 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Ensure instance console log exists: /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.544 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.544 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.545 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.547 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Start _get_guest_xml network_info=[{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T14:08:23Z,direct_url=<?>,disk_format='raw',id=f0a65138-a9e8-4fc9-a555-029413bf034c,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1964192211-shelved',owner='96c668beb6b74661927ce283539bb68e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T14:08:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.551 238945 WARNING nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.555 238945 DEBUG nova.virt.libvirt.host [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.556 238945 DEBUG nova.virt.libvirt.host [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.558 238945 DEBUG nova.virt.libvirt.host [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.559 238945 DEBUG nova.virt.libvirt.host [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.559 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.560 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T14:08:23Z,direct_url=<?>,disk_format='raw',id=f0a65138-a9e8-4fc9-a555-029413bf034c,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1964192211-shelved',owner='96c668beb6b74661927ce283539bb68e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T14:08:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.560 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.560 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.561 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.561 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.561 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.562 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.562 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.562 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.562 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.563 238945 DEBUG nova.virt.hardware [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.563 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:02 np0005597378 nova_compute[238941]: 2026-01-27 14:09:02.579 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:09:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 99 op/s
Jan 27 09:09:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:09:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/802840587' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.120 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.141 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.144 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:09:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3096136962' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.695 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.697 238945 DEBUG nova.virt.libvirt.vif [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='f0a65138-a9e8-4fc9-a555-029413bf034c',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:07:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='
virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member',shelved_at='2026-01-27T14:08:44.885147',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f0a65138-a9e8-4fc9-a555-029413bf034c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:08:56Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.698 238945 DEBUG nova.network.os_vif_util [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.699 238945 DEBUG nova.network.os_vif_util [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.700 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.719 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  <uuid>7514a588-c48b-45af-a889-ea57cc9f1730</uuid>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  <name>instance-0000006a</name>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <nova:name>tempest-ServersNegativeTestJSON-server-1964192211</nova:name>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:09:02</nova:creationTime>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:        <nova:user uuid="945414e1b82946ccadab2e408cf6151c">tempest-ServersNegativeTestJSON-1782469845-project-member</nova:user>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:        <nova:project uuid="96c668beb6b74661927ce283539bb68e">tempest-ServersNegativeTestJSON-1782469845</nova:project>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="f0a65138-a9e8-4fc9-a555-029413bf034c"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:        <nova:port uuid="05f217fa-372b-46d3-974f-de79101f0b2f">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <entry name="serial">7514a588-c48b-45af-a889-ea57cc9f1730</entry>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <entry name="uuid">7514a588-c48b-45af-a889-ea57cc9f1730</entry>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/7514a588-c48b-45af-a889-ea57cc9f1730_disk.config">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:e3:41:9e"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <target dev="tap05f217fa-37"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/console.log" append="off"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <input type="keyboard" bus="usb"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:09:03 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:09:03 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:09:03 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:09:03 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.721 238945 DEBUG nova.compute.manager [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Preparing to wait for external event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.721 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.722 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.722 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.723 238945 DEBUG nova.virt.libvirt.vif [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='f0a65138-a9e8-4fc9-a555-029413bf034c',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:07:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member',shelved_at='2026-01-27T14:08:44.885147',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f0a65138-a9e8-4fc9-a555-029413bf034c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:08:56Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.723 238945 DEBUG nova.network.os_vif_util [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.724 238945 DEBUG nova.network.os_vif_util [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.724 238945 DEBUG os_vif [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.725 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.725 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.726 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.728 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.728 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05f217fa-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.729 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05f217fa-37, col_values=(('external_ids', {'iface-id': '05f217fa-372b-46d3-974f-de79101f0b2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:41:9e', 'vm-uuid': '7514a588-c48b-45af-a889-ea57cc9f1730'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:03 np0005597378 NetworkManager[48904]: <info>  [1769522943.7319] manager: (tap05f217fa-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.733 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.738 238945 INFO os_vif [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37')#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.812 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.813 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.813 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] No VIF found with MAC fa:16:3e:e3:41:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.814 238945 INFO nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Using config drive#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.834 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.858 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:03 np0005597378 nova_compute[238941]: 2026-01-27 14:09:03.896 238945 DEBUG nova.objects.instance [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'keypairs' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:04 np0005597378 nova_compute[238941]: 2026-01-27 14:09:04.180 238945 INFO nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Creating config drive at /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config#033[00m
Jan 27 09:09:04 np0005597378 nova_compute[238941]: 2026-01-27 14:09:04.185 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgh0o9lg8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:09:04 np0005597378 nova_compute[238941]: 2026-01-27 14:09:04.332 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgh0o9lg8" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:09:04 np0005597378 nova_compute[238941]: 2026-01-27 14:09:04.358 238945 DEBUG nova.storage.rbd_utils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] rbd image 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:09:04 np0005597378 nova_compute[238941]: 2026-01-27 14:09:04.362 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:09:04 np0005597378 nova_compute[238941]: 2026-01-27 14:09:04.393 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:09:04 np0005597378 nova_compute[238941]: 2026-01-27 14:09:04.394 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:09:04 np0005597378 nova_compute[238941]: 2026-01-27 14:09:04.394 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:09:04 np0005597378 nova_compute[238941]: 2026-01-27 14:09:04.425 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:09:04 np0005597378 nova_compute[238941]: 2026-01-27 14:09:04.426 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:09:04 np0005597378 nova_compute[238941]: 2026-01-27 14:09:04.426 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:09:04 np0005597378 nova_compute[238941]: 2026-01-27 14:09:04.426 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 251 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 132 op/s
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.223 238945 DEBUG oslo_concurrency.processutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config 7514a588-c48b-45af-a889-ea57cc9f1730_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.861s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.224 238945 INFO nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deleting local config drive /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730/disk.config because it was imported into RBD.#033[00m
Jan 27 09:09:05 np0005597378 kernel: tap05f217fa-37: entered promiscuous mode
Jan 27 09:09:05 np0005597378 NetworkManager[48904]: <info>  [1769522945.2890] manager: (tap05f217fa-37): new Tun device (/org/freedesktop/NetworkManager/Devices/439)
Jan 27 09:09:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:05Z|01076|binding|INFO|Claiming lport 05f217fa-372b-46d3-974f-de79101f0b2f for this chassis.
Jan 27 09:09:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:05Z|01077|binding|INFO|05f217fa-372b-46d3-974f-de79101f0b2f: Claiming fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.293 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.298 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '9', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.301 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b bound to our chassis#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.303 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b#033[00m
Jan 27 09:09:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:05Z|01078|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f ovn-installed in OVS
Jan 27 09:09:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:05Z|01079|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f up in Southbound
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[188cde7d-f91e-47ef-9988-1749280b3cdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.319 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d5d79a0-31 in ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.322 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d5d79a0-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.323 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[711431f3-19f9-4edf-9406-df1c47d060e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.323 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f06593a0-2201-4a83-9742-5d77f49517c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 systemd-udevd[336238]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.338 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b7bf90d0-4c1b-41ee-a3bb-e1cb79b2b83d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 systemd-machined[207425]: New machine qemu-137-instance-0000006a.
Jan 27 09:09:05 np0005597378 NetworkManager[48904]: <info>  [1769522945.3509] device (tap05f217fa-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:09:05 np0005597378 NetworkManager[48904]: <info>  [1769522945.3519] device (tap05f217fa-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.357 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[697c6485-bccc-4447-a244-7609bb455c51]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 systemd[1]: Started Virtual Machine qemu-137-instance-0000006a.
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.391 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8c70cc-448b-4faf-b48f-696bd28fbfb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 NetworkManager[48904]: <info>  [1769522945.3975] manager: (tap5d5d79a0-30): new Veth device (/org/freedesktop/NetworkManager/Devices/440)
Jan 27 09:09:05 np0005597378 systemd-udevd[336242]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.398 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a52db76e-24e5-44dc-b239-3c71f7176206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.432 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[953aca13-2c59-41f0-b85c-850d034c1ac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.435 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f8bb431e-4ec9-487d-9d7a-f845b27ed545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 NetworkManager[48904]: <info>  [1769522945.4622] device (tap5d5d79a0-30): carrier: link connected
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.464 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b14a8fa6-0f71-4fad-a386-061152a882bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.482 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[281f3a06-4b13-4797-b2b4-f98a605a0fc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558508, 'reachable_time': 18240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336270, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.499 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d82397e7-c1b6-4940-97b2-b7fb1b36078f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:6ed9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558508, 'tstamp': 558508}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336271, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.517 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[22c34817-9548-4311-aa21-de183c2e5461]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558508, 'reachable_time': 18240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336272, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.540 238945 DEBUG nova.compute.manager [req-0efc439c-6b53-4332-b1eb-afea327d906f req-8ce11b07-4c74-429b-99b6-9922a5718dcd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.541 238945 DEBUG oslo_concurrency.lockutils [req-0efc439c-6b53-4332-b1eb-afea327d906f req-8ce11b07-4c74-429b-99b6-9922a5718dcd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.541 238945 DEBUG oslo_concurrency.lockutils [req-0efc439c-6b53-4332-b1eb-afea327d906f req-8ce11b07-4c74-429b-99b6-9922a5718dcd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.541 238945 DEBUG oslo_concurrency.lockutils [req-0efc439c-6b53-4332-b1eb-afea327d906f req-8ce11b07-4c74-429b-99b6-9922a5718dcd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.542 238945 DEBUG nova.compute.manager [req-0efc439c-6b53-4332-b1eb-afea327d906f req-8ce11b07-4c74-429b-99b6-9922a5718dcd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Processing event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.557 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5fce4b43-a9f5-4132-be20-3f3ebc2e1c61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.619 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dc82efbd-0539-4e6c-8506-d2ec48dcb84d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.621 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.621 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.622 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d5d79a0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:05 np0005597378 NetworkManager[48904]: <info>  [1769522945.6246] manager: (tap5d5d79a0-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Jan 27 09:09:05 np0005597378 kernel: tap5d5d79a0-30: entered promiscuous mode
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.627 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.628 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d5d79a0-30, col_values=(('external_ids', {'iface-id': '174db04a-6000-4d42-9793-445f0033fd57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:05Z|01080|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.645 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.646 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.647 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.648 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9872953a-e09b-4be2-998a-003e4a65e389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.649 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:09:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:05.651 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'env', 'PROCESS_TAG=haproxy-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.658 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:09:05 np0005597378 nova_compute[238941]: 2026-01-27 14:09:05.658 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:09:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:06 np0005597378 podman[336303]: 2026-01-27 14:09:06.009019247 +0000 UTC m=+0.030921112 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:09:06 np0005597378 podman[336303]: 2026-01-27 14:09:06.412236877 +0000 UTC m=+0.434138712 container create a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:09:06 np0005597378 systemd[1]: Started libpod-conmon-a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0.scope.
Jan 27 09:09:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:09:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9d0031cd95503e6641019da20612ac34713ee9b75e8b88473b5a3704695fe0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.513 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522946.5134947, 7514a588-c48b-45af-a889-ea57cc9f1730 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.514 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Started (Lifecycle Event)#033[00m
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.516 238945 DEBUG nova.compute.manager [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.520 238945 DEBUG nova.virt.libvirt.driver [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.524 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance spawned successfully.#033[00m
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.536 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.541 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.560 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.561 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522946.514443, 7514a588-c48b-45af-a889-ea57cc9f1730 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.561 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.582 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.586 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522946.51839, 7514a588-c48b-45af-a889-ea57cc9f1730 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.586 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:09:06 np0005597378 podman[336303]: 2026-01-27 14:09:06.594708602 +0000 UTC m=+0.616610447 container init a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:09:06 np0005597378 podman[336303]: 2026-01-27 14:09:06.602675797 +0000 UTC m=+0.624577642 container start a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.606 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.621 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:09:06 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [NOTICE]   (336365) : New worker (336367) forked
Jan 27 09:09:06 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [NOTICE]   (336365) : Loading success.
Jan 27 09:09:06 np0005597378 nova_compute[238941]: 2026-01-27 14:09:06.643 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:09:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 279 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Jan 27 09:09:07 np0005597378 nova_compute[238941]: 2026-01-27 14:09:07.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:09:07 np0005597378 nova_compute[238941]: 2026-01-27 14:09:07.391 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:07 np0005597378 nova_compute[238941]: 2026-01-27 14:09:07.649 238945 DEBUG nova.compute.manager [req-44cb6627-e0a2-4412-855f-bd3d6f683760 req-3c9e905c-234a-4de7-b5de-6cd9885a2e8a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:07 np0005597378 nova_compute[238941]: 2026-01-27 14:09:07.650 238945 DEBUG oslo_concurrency.lockutils [req-44cb6627-e0a2-4412-855f-bd3d6f683760 req-3c9e905c-234a-4de7-b5de-6cd9885a2e8a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:07 np0005597378 nova_compute[238941]: 2026-01-27 14:09:07.651 238945 DEBUG oslo_concurrency.lockutils [req-44cb6627-e0a2-4412-855f-bd3d6f683760 req-3c9e905c-234a-4de7-b5de-6cd9885a2e8a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:07 np0005597378 nova_compute[238941]: 2026-01-27 14:09:07.651 238945 DEBUG oslo_concurrency.lockutils [req-44cb6627-e0a2-4412-855f-bd3d6f683760 req-3c9e905c-234a-4de7-b5de-6cd9885a2e8a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:07 np0005597378 nova_compute[238941]: 2026-01-27 14:09:07.651 238945 DEBUG nova.compute.manager [req-44cb6627-e0a2-4412-855f-bd3d6f683760 req-3c9e905c-234a-4de7-b5de-6cd9885a2e8a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:09:07 np0005597378 nova_compute[238941]: 2026-01-27 14:09:07.652 238945 WARNING nova.compute.manager [req-44cb6627-e0a2-4412-855f-bd3d6f683760 req-3c9e905c-234a-4de7-b5de-6cd9885a2e8a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Jan 27 09:09:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Jan 27 09:09:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Jan 27 09:09:08 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Jan 27 09:09:08 np0005597378 nova_compute[238941]: 2026-01-27 14:09:08.732 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 279 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 4.7 MiB/s wr, 191 op/s
Jan 27 09:09:09 np0005597378 nova_compute[238941]: 2026-01-27 14:09:09.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:09:09 np0005597378 nova_compute[238941]: 2026-01-27 14:09:09.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:09:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:10Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:41:7b 10.100.0.9
Jan 27 09:09:10 np0005597378 nova_compute[238941]: 2026-01-27 14:09:10.204 238945 DEBUG nova.compute.manager [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:10 np0005597378 nova_compute[238941]: 2026-01-27 14:09:10.270 238945 DEBUG oslo_concurrency.lockutils [None req-7e4100c6-7335-49c5-9e1b-fafac97ef6f5 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 13.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 274 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 4.7 MiB/s wr, 228 op/s
Jan 27 09:09:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:12 np0005597378 nova_compute[238941]: 2026-01-27 14:09:12.393 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 274 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 4.7 MiB/s wr, 229 op/s
Jan 27 09:09:13 np0005597378 nova_compute[238941]: 2026-01-27 14:09:13.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:09:13 np0005597378 nova_compute[238941]: 2026-01-27 14:09:13.734 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 200 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.4 MiB/s wr, 220 op/s
Jan 27 09:09:15 np0005597378 podman[336376]: 2026-01-27 14:09:15.719250918 +0000 UTC m=+0.057387304 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:09:15 np0005597378 nova_compute[238941]: 2026-01-27 14:09:15.785 238945 INFO nova.compute.manager [None req-b41822a6-cb89-4309-83cb-58dc26f37439 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Get console output#033[00m
Jan 27 09:09:15 np0005597378 nova_compute[238941]: 2026-01-27 14:09:15.790 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:09:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Jan 27 09:09:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Jan 27 09:09:16 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Jan 27 09:09:16 np0005597378 nova_compute[238941]: 2026-01-27 14:09:16.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:09:16 np0005597378 podman[336396]: 2026-01-27 14:09:16.788915435 +0000 UTC m=+0.125164306 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 09:09:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 202 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 21 KiB/s wr, 192 op/s
Jan 27 09:09:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:09:17
Jan 27 09:09:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:09:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:09:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'vms', 'images', 'default.rgw.control', 'default.rgw.meta']
Jan 27 09:09:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.162 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.163 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.163 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.163 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.163 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.164 238945 INFO nova.compute.manager [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Terminating instance#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.165 238945 DEBUG nova.compute.manager [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.216 238945 DEBUG nova.compute.manager [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-changed-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.217 238945 DEBUG nova.compute.manager [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Refreshing instance network info cache due to event network-changed-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.217 238945 DEBUG oslo_concurrency.lockutils [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.218 238945 DEBUG oslo_concurrency.lockutils [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.218 238945 DEBUG nova.network.neutron [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Refreshing network info cache for port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:09:17 np0005597378 kernel: tap82581e93-b7 (unregistering): left promiscuous mode
Jan 27 09:09:17 np0005597378 NetworkManager[48904]: <info>  [1769522957.3517] device (tap82581e93-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:09:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:17Z|01081|binding|INFO|Releasing lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 from this chassis (sb_readonly=0)
Jan 27 09:09:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:17Z|01082|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 down in Southbound
Jan 27 09:09:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:17Z|01083|binding|INFO|Removing iface tap82581e93-b7 ovn-installed in OVS
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.378 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:09:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.379 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 unbound from our chassis#033[00m
Jan 27 09:09:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.380 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3ace283-87b2-4641-aad8-0cf005dc2525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:09:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.381 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6c819d-dd31-4cfe-abd1-41c0e9a4194a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.382 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 namespace which is not needed anymore#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.393 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:17 np0005597378 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Jan 27 09:09:17 np0005597378 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Consumed 13.071s CPU time.
Jan 27 09:09:17 np0005597378 systemd-machined[207425]: Machine qemu-136-instance-0000006d terminated.
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.521 238945 DEBUG nova.objects.instance [None req-f2d1b49b-cd8e-4fb9-9315-ffa67e9dc783 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.548 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522957.548099, 7514a588-c48b-45af-a889-ea57cc9f1730 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.549 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:09:17 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [NOTICE]   (335831) : haproxy version is 2.8.14-c23fe91
Jan 27 09:09:17 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [NOTICE]   (335831) : path to executable is /usr/sbin/haproxy
Jan 27 09:09:17 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [WARNING]  (335831) : Exiting Master process...
Jan 27 09:09:17 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [WARNING]  (335831) : Exiting Master process...
Jan 27 09:09:17 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [ALERT]    (335831) : Current worker (335833) exited with code 143 (Terminated)
Jan 27 09:09:17 np0005597378 neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525[335827]: [WARNING]  (335831) : All workers exited. Exiting... (0)
Jan 27 09:09:17 np0005597378 systemd[1]: libpod-8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2.scope: Deactivated successfully.
Jan 27 09:09:17 np0005597378 podman[336445]: 2026-01-27 14:09:17.566045627 +0000 UTC m=+0.096281909 container died 8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.576 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:17 np0005597378 kernel: tap82581e93-b7: entered promiscuous mode
Jan 27 09:09:17 np0005597378 systemd-udevd[336429]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:09:17 np0005597378 NetworkManager[48904]: <info>  [1769522957.5850] manager: (tap82581e93-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/442)
Jan 27 09:09:17 np0005597378 kernel: tap82581e93-b7 (unregistering): left promiscuous mode
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.585 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:17Z|01084|binding|INFO|Claiming lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for this chassis.
Jan 27 09:09:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:17Z|01085|binding|INFO|82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9: Claiming fa:16:3e:81:41:7b 10.100.0.9
Jan 27 09:09:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.606 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:09:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:17Z|01086|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 ovn-installed in OVS
Jan 27 09:09:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:17Z|01087|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 up in Southbound
Jan 27 09:09:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:17Z|01088|binding|INFO|Releasing lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 from this chassis (sb_readonly=1)
Jan 27 09:09:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:17Z|01089|if_status|INFO|Dropped 2 log messages in last 23 seconds (most recently, 23 seconds ago) due to excessive rate
Jan 27 09:09:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:17Z|01090|if_status|INFO|Not setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 down as sb is readonly
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.613 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:17Z|01091|binding|INFO|Removing iface tap82581e93-b7 ovn-installed in OVS
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.616 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.626 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.633 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.634 238945 INFO nova.virt.libvirt.driver [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Instance destroyed successfully.#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.634 238945 DEBUG nova.objects.instance [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid 2466272a-7218-432a-a223-43ade0ce6480 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:17Z|01092|binding|INFO|Releasing lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 from this chassis (sb_readonly=0)
Jan 27 09:09:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:17Z|01093|binding|INFO|Setting lport 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 down in Southbound
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.653 238945 DEBUG nova.virt.libvirt.vif [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:08:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-543476362',display_name='tempest-TestNetworkAdvancedServerOps-server-543476362',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-543476362',id=109,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/GNfo4rgH2jt9z1vILeWPgbvw2k851alu9Kp+NwI5lf80CNeN0I8Fy8fHycn/1SqZgv2Od2/qgDtUPrcIBt7klOfNWsUFqoF2kTS60AUSiiWxxXfFT80yb+FHTNgIRvA==',key_name='tempest-TestNetworkAdvancedServerOps-1613602353',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:08:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-0w9nwgs0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:08:57Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=2466272a-7218-432a-a223-43ade0ce6480,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.653 238945 DEBUG nova.network.os_vif_util [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.654 238945 DEBUG nova.network.os_vif_util [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.654 238945 DEBUG os_vif [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.656 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82581e93-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.659 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:09:17 np0005597378 nova_compute[238941]: 2026-01-27 14:09:17.661 238945 INFO os_vif [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:41:7b,bridge_name='br-int',has_traffic_filtering=True,id=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9,network=Network(e3ace283-87b2-4641-aad8-0cf005dc2525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82581e93-b7')#033[00m
Jan 27 09:09:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:17.662 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:41:7b 10.100.0.9'], port_security=['fa:16:3e:81:41:7b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2466272a-7218-432a-a223-43ade0ce6480', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3ace283-87b2-4641-aad8-0cf005dc2525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2d4301c7-49a9-41b4-b0b8-8f360c555be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef144390-9d5b-4d16-a28f-1d92c6206087, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:09:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:09:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:09:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:09:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:09:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:09:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:09:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:09:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:09:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:09:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:09:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:09:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:09:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:09:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:09:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:09:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:09:18 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2-userdata-shm.mount: Deactivated successfully.
Jan 27 09:09:18 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1afb596eb91856c943b7796af9b84888a906467625c9e9bc006594a3aa8f2c25-merged.mount: Deactivated successfully.
Jan 27 09:09:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 202 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 29 KiB/s wr, 115 op/s
Jan 27 09:09:18 np0005597378 kernel: tap05f217fa-37 (unregistering): left promiscuous mode
Jan 27 09:09:18 np0005597378 NetworkManager[48904]: <info>  [1769522958.9971] device (tap05f217fa-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:09:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:19Z|01094|binding|INFO|Releasing lport 05f217fa-372b-46d3-974f-de79101f0b2f from this chassis (sb_readonly=0)
Jan 27 09:09:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:19Z|01095|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f down in Southbound
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.000 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:19Z|01096|binding|INFO|Removing iface tap05f217fa-37 ovn-installed in OVS
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.002 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.021 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.023 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '11', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:09:19 np0005597378 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Jan 27 09:09:19 np0005597378 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006a.scope: Consumed 12.376s CPU time.
Jan 27 09:09:19 np0005597378 systemd-machined[207425]: Machine qemu-137-instance-0000006a terminated.
Jan 27 09:09:19 np0005597378 NetworkManager[48904]: <info>  [1769522959.1355] manager: (tap05f217fa-37): new Tun device (/org/freedesktop/NetworkManager/Devices/443)
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.152 238945 DEBUG nova.compute.manager [None req-f2d1b49b-cd8e-4fb9-9315-ffa67e9dc783 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.297 238945 DEBUG nova.compute.manager [req-d9e1fc6f-9961-43ff-b4ab-64d56e71c1c2 req-4b6caa7d-4ef6-42f1-aaa1-4374f9ee37f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.298 238945 DEBUG oslo_concurrency.lockutils [req-d9e1fc6f-9961-43ff-b4ab-64d56e71c1c2 req-4b6caa7d-4ef6-42f1-aaa1-4374f9ee37f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.298 238945 DEBUG oslo_concurrency.lockutils [req-d9e1fc6f-9961-43ff-b4ab-64d56e71c1c2 req-4b6caa7d-4ef6-42f1-aaa1-4374f9ee37f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.298 238945 DEBUG oslo_concurrency.lockutils [req-d9e1fc6f-9961-43ff-b4ab-64d56e71c1c2 req-4b6caa7d-4ef6-42f1-aaa1-4374f9ee37f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.299 238945 DEBUG nova.compute.manager [req-d9e1fc6f-9961-43ff-b4ab-64d56e71c1c2 req-4b6caa7d-4ef6-42f1-aaa1-4374f9ee37f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.299 238945 WARNING nova.compute.manager [req-d9e1fc6f-9961-43ff-b4ab-64d56e71c1c2 req-4b6caa7d-4ef6-42f1-aaa1-4374f9ee37f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state suspended and task_state None.#033[00m
Jan 27 09:09:19 np0005597378 podman[336445]: 2026-01-27 14:09:19.315869249 +0000 UTC m=+1.846105531 container cleanup 8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:09:19 np0005597378 systemd[1]: libpod-conmon-8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2.scope: Deactivated successfully.
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.331 238945 DEBUG nova.compute.manager [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.331 238945 DEBUG oslo_concurrency.lockutils [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.332 238945 DEBUG oslo_concurrency.lockutils [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.332 238945 DEBUG oslo_concurrency.lockutils [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.332 238945 DEBUG nova.compute.manager [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.332 238945 DEBUG nova.compute.manager [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.332 238945 DEBUG nova.compute.manager [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.332 238945 DEBUG oslo_concurrency.lockutils [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.333 238945 DEBUG oslo_concurrency.lockutils [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.333 238945 DEBUG oslo_concurrency.lockutils [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.333 238945 DEBUG nova.compute.manager [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.333 238945 WARNING nova.compute.manager [req-1d1c2c00-809e-48b5-95b0-adb55de493e2 req-af34aead-6050-455c-9df5-758adff7b312 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:09:19 np0005597378 podman[336514]: 2026-01-27 14:09:19.579928289 +0000 UTC m=+0.242035178 container remove 8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.586 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f29bb19f-ce5e-4d4e-9882-1c12af8c4eef]: (4, ('Tue Jan 27 02:09:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 (8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2)\n8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2\nTue Jan 27 02:09:19 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 (8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2)\n8a35f74c64c6bfc3c778c6b6e798326e09f11087561845431dcd29ec4379c8c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.588 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d518e072-0d8d-4de3-b012-df5c12e7af90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.589 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3ace283-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.590 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:19 np0005597378 kernel: tape3ace283-80: left promiscuous mode
Jan 27 09:09:19 np0005597378 nova_compute[238941]: 2026-01-27 14:09:19.608 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.611 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[540d6886-5a4f-43cf-bd22-9eebef21c73b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.633 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ef78c0fd-d9d5-409a-a8e8-677c10e9da0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.634 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dd43dc3d-a552-4a71-9953-007ed108f427]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.649 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ab00eeda-bcd4-4bf0-8d89-5b6b43350647]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557602, 'reachable_time': 21684, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336532, 'error': None, 'target': 'ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.651 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e3ace283-87b2-4641-aad8-0cf005dc2525 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.651 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b75bfa43-3767-43ce-8105-a3e9c23d2feb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.652 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 unbound from our chassis#033[00m
Jan 27 09:09:19 np0005597378 systemd[1]: run-netns-ovnmeta\x2de3ace283\x2d87b2\x2d4641\x2daad8\x2d0cf005dc2525.mount: Deactivated successfully.
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.653 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3ace283-87b2-4641-aad8-0cf005dc2525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.654 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c690a396-da96-434b-85e4-8adfac0bb574]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.655 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 in datapath e3ace283-87b2-4641-aad8-0cf005dc2525 unbound from our chassis#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.656 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3ace283-87b2-4641-aad8-0cf005dc2525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.656 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0a54d9d0-a3ee-4d2e-ae6b-537c998ced0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.657 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b unbound from our chassis#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.658 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.658 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8469c8-ee37-40c2-b70d-147ba7539d59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:19.659 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b namespace which is not needed anymore#033[00m
Jan 27 09:09:20 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [NOTICE]   (336365) : haproxy version is 2.8.14-c23fe91
Jan 27 09:09:20 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [NOTICE]   (336365) : path to executable is /usr/sbin/haproxy
Jan 27 09:09:20 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [WARNING]  (336365) : Exiting Master process...
Jan 27 09:09:20 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [ALERT]    (336365) : Current worker (336367) exited with code 143 (Terminated)
Jan 27 09:09:20 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336360]: [WARNING]  (336365) : All workers exited. Exiting... (0)
Jan 27 09:09:20 np0005597378 systemd[1]: libpod-a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0.scope: Deactivated successfully.
Jan 27 09:09:20 np0005597378 podman[336550]: 2026-01-27 14:09:20.033385429 +0000 UTC m=+0.291516208 container died a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:20 np0005597378 nova_compute[238941]: 2026-01-27 14:09:20.252 238945 DEBUG nova.network.neutron [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updated VIF entry in instance network info cache for port 82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:09:20 np0005597378 nova_compute[238941]: 2026-01-27 14:09:20.253 238945 DEBUG nova.network.neutron [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updating instance_info_cache with network_info: [{"id": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "address": "fa:16:3e:81:41:7b", "network": {"id": "e3ace283-87b2-4641-aad8-0cf005dc2525", "bridge": "br-int", "label": "tempest-network-smoke--242553674", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82581e93-b7", "ovs_interfaceid": "82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:09:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0-userdata-shm.mount: Deactivated successfully.
Jan 27 09:09:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay-eb9d0031cd95503e6641019da20612ac34713ee9b75e8b88473b5a3704695fe0-merged.mount: Deactivated successfully.
Jan 27 09:09:20 np0005597378 nova_compute[238941]: 2026-01-27 14:09:20.281 238945 DEBUG oslo_concurrency.lockutils [req-4639c808-14fa-4bd7-941f-6097bd96fe1b req-f29f8285-24fd-45b2-800a-b6ebf31a00f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-2466272a-7218-432a-a223-43ade0ce6480" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:09:20 np0005597378 podman[336550]: 2026-01-27 14:09:20.593848827 +0000 UTC m=+0.851979616 container cleanup a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 09:09:20 np0005597378 systemd[1]: libpod-conmon-a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0.scope: Deactivated successfully.
Jan 27 09:09:20 np0005597378 nova_compute[238941]: 2026-01-27 14:09:20.797 238945 INFO nova.compute.manager [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Resuming#033[00m
Jan 27 09:09:20 np0005597378 nova_compute[238941]: 2026-01-27 14:09:20.797 238945 DEBUG nova.objects.instance [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'flavor' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 175 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 14 KiB/s wr, 39 op/s
Jan 27 09:09:20 np0005597378 nova_compute[238941]: 2026-01-27 14:09:20.841 238945 DEBUG oslo_concurrency.lockutils [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:09:20 np0005597378 nova_compute[238941]: 2026-01-27 14:09:20.841 238945 DEBUG oslo_concurrency.lockutils [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquired lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:09:20 np0005597378 nova_compute[238941]: 2026-01-27 14:09:20.841 238945 DEBUG nova.network.neutron [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:09:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:20 np0005597378 podman[336582]: 2026-01-27 14:09:20.917508798 +0000 UTC m=+0.301552528 container remove a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:09:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.925 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cbea0192-ac49-474a-9ff6-3724104999ed]: (4, ('Tue Jan 27 02:09:19 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b (a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0)\na5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0\nTue Jan 27 02:09:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b (a5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0)\na5f996740f689fffc7830cd9a279828f01124b13f0f15ad8671e59cc6b3794a0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.927 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6cebaa2-dfe8-4de7-a272-ba114389bba5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.928 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:20 np0005597378 nova_compute[238941]: 2026-01-27 14:09:20.930 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:20 np0005597378 kernel: tap5d5d79a0-30: left promiscuous mode
Jan 27 09:09:20 np0005597378 nova_compute[238941]: 2026-01-27 14:09:20.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.951 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eff966c1-9dd9-4844-b4cb-fa42ea2bcd2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.963 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0126b872-c181-41fa-be7a-16ac01abdb58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.964 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3354dd4e-f4fc-44e8-a765-6901ccad5b37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.986 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3f99c440-acb6-4c44-a5da-ab88ed522ce0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558500, 'reachable_time': 17752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336600, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.988 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:09:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:20.988 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[89ccc8ba-b20f-49f9-9273-b57670dc7b09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:20 np0005597378 systemd[1]: run-netns-ovnmeta\x2d5d5d79a0\x2d3ea3\x2d4f88\x2d81a4\x2d888d41f69a7b.mount: Deactivated successfully.
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.069 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.599 238945 DEBUG nova.compute.manager [req-ba5b6dbd-b039-409d-82cb-98c6e81550ee req-7aa108d2-d728-4c08-acc6-acd16f7976fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.599 238945 DEBUG oslo_concurrency.lockutils [req-ba5b6dbd-b039-409d-82cb-98c6e81550ee req-7aa108d2-d728-4c08-acc6-acd16f7976fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.599 238945 DEBUG oslo_concurrency.lockutils [req-ba5b6dbd-b039-409d-82cb-98c6e81550ee req-7aa108d2-d728-4c08-acc6-acd16f7976fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.599 238945 DEBUG oslo_concurrency.lockutils [req-ba5b6dbd-b039-409d-82cb-98c6e81550ee req-7aa108d2-d728-4c08-acc6-acd16f7976fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.599 238945 DEBUG nova.compute.manager [req-ba5b6dbd-b039-409d-82cb-98c6e81550ee req-7aa108d2-d728-4c08-acc6-acd16f7976fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.600 238945 WARNING nova.compute.manager [req-ba5b6dbd-b039-409d-82cb-98c6e81550ee req-7aa108d2-d728-4c08-acc6-acd16f7976fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state suspended and task_state resuming.#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.657 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.657 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.657 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.658 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.658 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.658 238945 WARNING nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.658 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.658 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 WARNING nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.659 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-unplugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "2466272a-7218-432a-a223-43ade0ce6480-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.660 238945 DEBUG oslo_concurrency.lockutils [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.661 238945 DEBUG nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] No waiting events found dispatching network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.661 238945 WARNING nova.compute.manager [req-d38db0d0-1ef9-406d-9d53-9c866e69514d req-6b6554b2-c482-4242-8152-f0c2d6b8555b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received unexpected event network-vif-plugged-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.752 238945 INFO nova.virt.libvirt.driver [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Deleting instance files /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480_del#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.753 238945 INFO nova.virt.libvirt.driver [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Deletion of /var/lib/nova/instances/2466272a-7218-432a-a223-43ade0ce6480_del complete#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.821 238945 INFO nova.compute.manager [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Took 4.66 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.821 238945 DEBUG oslo.service.loopingcall [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.821 238945 DEBUG nova.compute.manager [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:09:21 np0005597378 nova_compute[238941]: 2026-01-27 14:09:21.822 238945 DEBUG nova.network.neutron [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:09:22 np0005597378 nova_compute[238941]: 2026-01-27 14:09:22.396 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:22 np0005597378 nova_compute[238941]: 2026-01-27 14:09:22.658 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 175 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 14 KiB/s wr, 39 op/s
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.356 238945 DEBUG nova.network.neutron [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [{"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.387 238945 DEBUG oslo_concurrency.lockutils [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Releasing lock "refresh_cache-7514a588-c48b-45af-a889-ea57cc9f1730" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.392 238945 DEBUG nova.virt.libvirt.vif [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:09:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:09:19Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.393 238945 DEBUG nova.network.os_vif_util [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.393 238945 DEBUG nova.network.os_vif_util [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.394 238945 DEBUG os_vif [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.395 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.396 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.400 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.400 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05f217fa-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.400 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05f217fa-37, col_values=(('external_ids', {'iface-id': '05f217fa-372b-46d3-974f-de79101f0b2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:41:9e', 'vm-uuid': '7514a588-c48b-45af-a889-ea57cc9f1730'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.401 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.401 238945 INFO os_vif [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37')#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.520 238945 DEBUG nova.network.neutron [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.538 238945 INFO nova.compute.manager [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Took 1.72 seconds to deallocate network for instance.#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.614 238945 DEBUG nova.objects.instance [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'numa_topology' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.644 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.645 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:23 np0005597378 kernel: tap05f217fa-37: entered promiscuous mode
Jan 27 09:09:23 np0005597378 systemd-udevd[336602]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:09:23 np0005597378 NetworkManager[48904]: <info>  [1769522963.6911] manager: (tap05f217fa-37): new Tun device (/org/freedesktop/NetworkManager/Devices/444)
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.692 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:23Z|01097|binding|INFO|Claiming lport 05f217fa-372b-46d3-974f-de79101f0b2f for this chassis.
Jan 27 09:09:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:23Z|01098|binding|INFO|05f217fa-372b-46d3-974f-de79101f0b2f: Claiming fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 09:09:23 np0005597378 NetworkManager[48904]: <info>  [1769522963.7036] device (tap05f217fa-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:09:23 np0005597378 NetworkManager[48904]: <info>  [1769522963.7046] device (tap05f217fa-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.705 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '12', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.707 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b bound to our chassis#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.709 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b#033[00m
Jan 27 09:09:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:23Z|01099|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f ovn-installed in OVS
Jan 27 09:09:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:23Z|01100|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f up in Southbound
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:23 np0005597378 systemd-machined[207425]: New machine qemu-138-instance-0000006a.
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.722 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6415da9c-5c79-407d-a1a8-8b80efaea789]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.723 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d5d79a0-31 in ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.725 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d5d79a0-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.725 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1ef3c7-2ed9-437a-a1f6-e2b293a0d460]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.726 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7be793-d95e-4d15-b885-e6368054343f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:23 np0005597378 systemd[1]: Started Virtual Machine qemu-138-instance-0000006a.
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.738 238945 DEBUG oslo_concurrency.processutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.741 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[41155bfa-eadf-4449-b7df-6adfeabb81f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.766 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c09605e4-d45e-4fc2-87fb-5aa1645a000d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.788 238945 DEBUG nova.compute.manager [req-aca4e208-7093-4d9a-8ee5-d74a0f13dd09 req-e491d7a5-29bd-4a46-a345-94b89549187e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Received event network-vif-deleted-82581e93-b7f2-4ec4-bf6c-3cacb51f4ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.805 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[08a55189-a89e-42da-95ef-8ca81984d3ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.810 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6b7f46-3b70-45d7-886b-b0818da93fd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:23 np0005597378 NetworkManager[48904]: <info>  [1769522963.8123] manager: (tap5d5d79a0-30): new Veth device (/org/freedesktop/NetworkManager/Devices/445)
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.856 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[01dacb6d-d3cf-404c-aabf-a674b7450b56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.859 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[500e93a1-e2ca-4f9d-b927-bf85161d5ce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:23 np0005597378 NetworkManager[48904]: <info>  [1769522963.8821] device (tap5d5d79a0-30): carrier: link connected
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.888 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9220afce-c38e-4129-8d6e-9468ca6adcae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.906 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[322cc960-7077-48f8-b34c-7f34335d9321]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560350, 'reachable_time': 42011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336670, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2ccd4d59-b1ec-446e-b07e-0fe700c0c4fc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:6ed9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 560350, 'tstamp': 560350}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336671, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.950 238945 DEBUG nova.compute.manager [req-b54c0616-292f-4d75-954a-1e93d301cc91 req-68dd5dbd-cce3-4e07-a6c9-34795ac8ed13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.950 238945 DEBUG oslo_concurrency.lockutils [req-b54c0616-292f-4d75-954a-1e93d301cc91 req-68dd5dbd-cce3-4e07-a6c9-34795ac8ed13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.951 238945 DEBUG oslo_concurrency.lockutils [req-b54c0616-292f-4d75-954a-1e93d301cc91 req-68dd5dbd-cce3-4e07-a6c9-34795ac8ed13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.950 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fdce6290-29d5-470d-8a7c-71159bd67197]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d5d79a0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:6e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560350, 'reachable_time': 42011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336672, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.951 238945 DEBUG oslo_concurrency.lockutils [req-b54c0616-292f-4d75-954a-1e93d301cc91 req-68dd5dbd-cce3-4e07-a6c9-34795ac8ed13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.952 238945 DEBUG nova.compute.manager [req-b54c0616-292f-4d75-954a-1e93d301cc91 req-68dd5dbd-cce3-4e07-a6c9-34795ac8ed13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:09:23 np0005597378 nova_compute[238941]: 2026-01-27 14:09:23.952 238945 WARNING nova.compute.manager [req-b54c0616-292f-4d75-954a-1e93d301cc91 req-68dd5dbd-cce3-4e07-a6c9-34795ac8ed13 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state suspended and task_state resuming.#033[00m
Jan 27 09:09:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:23.988 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[849a75d8-2954-417e-9aac-283c4d1670ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.082 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[742c6815-c161-49a3-89aa-7b4646b3a180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.084 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.084 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.085 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d5d79a0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:24 np0005597378 kernel: tap5d5d79a0-30: entered promiscuous mode
Jan 27 09:09:24 np0005597378 NetworkManager[48904]: <info>  [1769522964.0891] manager: (tap5d5d79a0-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/446)
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.091 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d5d79a0-30, col_values=(('external_ids', {'iface-id': '174db04a-6000-4d42-9793-445f0033fd57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:24 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:24Z|01101|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.109 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.111 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cad907f3-57b9-43c0-a9bd-7208b7c1e8c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.111 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.pid.haproxy
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:09:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:24.114 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'env', 'PROCESS_TAG=haproxy-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d5d79a0-3ea3-4f88-81a4-888d41f69a7b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:09:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:09:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/386014551' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.372 238945 DEBUG oslo_concurrency.processutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.379 238945 DEBUG nova.compute.provider_tree [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.403 238945 DEBUG nova.scheduler.client.report [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.412 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 7514a588-c48b-45af-a889-ea57cc9f1730 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.412 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522964.4120507, 7514a588-c48b-45af-a889-ea57cc9f1730 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.413 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Started (Lifecycle Event)#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.447 238945 DEBUG nova.compute.manager [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.448 238945 DEBUG nova.objects.instance [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.531 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.534 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance running successfully.#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.536 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:09:24 np0005597378 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.537 238945 DEBUG nova.virt.libvirt.guest [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.538 238945 DEBUG nova.compute.manager [None req-e3b57f16-506c-49d0-9f27-32149fb933b9 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:24 np0005597378 podman[336749]: 2026-01-27 14:09:24.450981094 +0000 UTC m=+0.024606923 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:09:24 np0005597378 podman[336749]: 2026-01-27 14:09:24.567562248 +0000 UTC m=+0.141188067 container create 407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.582 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:24 np0005597378 systemd[1]: Started libpod-conmon-407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e.scope.
Jan 27 09:09:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:09:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d291edb2adef28cdade67c13c58b9f8c166a0e165196abe2f5c7f3e55eb08f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:24 np0005597378 podman[336749]: 2026-01-27 14:09:24.729756728 +0000 UTC m=+0.303382567 container init 407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:24 np0005597378 podman[336749]: 2026-01-27 14:09:24.73579897 +0000 UTC m=+0.309424779 container start 407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.746 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.747 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522964.41594, 7514a588-c48b-45af-a889-ea57cc9f1730 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.748 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:09:24 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [NOTICE]   (336768) : New worker (336770) forked
Jan 27 09:09:24 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [NOTICE]   (336768) : Loading success.
Jan 27 09:09:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 121 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 25 op/s
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.905 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:24 np0005597378 nova_compute[238941]: 2026-01-27 14:09:24.909 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:09:25 np0005597378 nova_compute[238941]: 2026-01-27 14:09:25.512 238945 INFO nova.scheduler.client.report [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Deleted allocations for instance 2466272a-7218-432a-a223-43ade0ce6480#033[00m
Jan 27 09:09:25 np0005597378 nova_compute[238941]: 2026-01-27 14:09:25.696 238945 DEBUG oslo_concurrency.lockutils [None req-818a9ad5-0e7e-483b-b7bc-6b61a897956e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "2466272a-7218-432a-a223-43ade0ce6480" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:26 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:26Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:41:9e 10.100.0.6
Jan 27 09:09:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 116 KiB/s rd, 21 KiB/s wr, 44 op/s
Jan 27 09:09:27 np0005597378 nova_compute[238941]: 2026-01-27 14:09:27.103 238945 DEBUG nova.compute.manager [req-9298d1d6-21f9-4894-94f8-e677604515a1 req-40afac1a-fbcb-41e9-9f00-69f5f5952061 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:27 np0005597378 nova_compute[238941]: 2026-01-27 14:09:27.103 238945 DEBUG oslo_concurrency.lockutils [req-9298d1d6-21f9-4894-94f8-e677604515a1 req-40afac1a-fbcb-41e9-9f00-69f5f5952061 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:27 np0005597378 nova_compute[238941]: 2026-01-27 14:09:27.103 238945 DEBUG oslo_concurrency.lockutils [req-9298d1d6-21f9-4894-94f8-e677604515a1 req-40afac1a-fbcb-41e9-9f00-69f5f5952061 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:27 np0005597378 nova_compute[238941]: 2026-01-27 14:09:27.104 238945 DEBUG oslo_concurrency.lockutils [req-9298d1d6-21f9-4894-94f8-e677604515a1 req-40afac1a-fbcb-41e9-9f00-69f5f5952061 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:27 np0005597378 nova_compute[238941]: 2026-01-27 14:09:27.104 238945 DEBUG nova.compute.manager [req-9298d1d6-21f9-4894-94f8-e677604515a1 req-40afac1a-fbcb-41e9-9f00-69f5f5952061 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:09:27 np0005597378 nova_compute[238941]: 2026-01-27 14:09:27.104 238945 WARNING nova.compute.manager [req-9298d1d6-21f9-4894-94f8-e677604515a1 req-40afac1a-fbcb-41e9-9f00-69f5f5952061 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state active and task_state None.#033[00m
Jan 27 09:09:27 np0005597378 nova_compute[238941]: 2026-01-27 14:09:27.398 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:27 np0005597378 nova_compute[238941]: 2026-01-27 14:09:27.660 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007717652875728002 of space, bias 1.0, pg target 0.23152958627184006 quantized to 32 (current 32)
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006685757020678442 of space, bias 1.0, pg target 0.20057271062035326 quantized to 32 (current 32)
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0612630872061347e-06 of space, bias 4.0, pg target 0.0012735157046473617 quantized to 16 (current 16)
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:09:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:09:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:27.766 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:09:27 np0005597378 nova_compute[238941]: 2026-01-27 14:09:27.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:27.767 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:09:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 21 KiB/s wr, 60 op/s
Jan 27 09:09:29 np0005597378 nova_compute[238941]: 2026-01-27 14:09:29.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:29.769 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 13 KiB/s wr, 73 op/s
Jan 27 09:09:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:32Z|01102|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 09:09:32 np0005597378 nova_compute[238941]: 2026-01-27 14:09:32.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:32Z|01103|binding|INFO|Releasing lport 174db04a-6000-4d42-9793-445f0033fd57 from this chassis (sb_readonly=0)
Jan 27 09:09:32 np0005597378 nova_compute[238941]: 2026-01-27 14:09:32.140 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:32 np0005597378 nova_compute[238941]: 2026-01-27 14:09:32.401 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:32 np0005597378 nova_compute[238941]: 2026-01-27 14:09:32.633 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522957.6121304, 2466272a-7218-432a-a223-43ade0ce6480 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:09:32 np0005597378 nova_compute[238941]: 2026-01-27 14:09:32.634 238945 INFO nova.compute.manager [-] [instance: 2466272a-7218-432a-a223-43ade0ce6480] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:09:32 np0005597378 nova_compute[238941]: 2026-01-27 14:09:32.664 238945 DEBUG nova.compute.manager [None req-ca53c112-4010-4d1a-8517-86f6c13b4e41 - - - - - -] [instance: 2466272a-7218-432a-a223-43ade0ce6480] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:32 np0005597378 nova_compute[238941]: 2026-01-27 14:09:32.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 546 KiB/s rd, 13 KiB/s wr, 70 op/s
Jan 27 09:09:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 546 KiB/s rd, 21 KiB/s wr, 70 op/s
Jan 27 09:09:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 122 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 536 KiB/s rd, 21 KiB/s wr, 55 op/s
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.184 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.185 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.185 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.186 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.186 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.188 238945 INFO nova.compute.manager [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Terminating instance#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.191 238945 DEBUG nova.compute.manager [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:09:37 np0005597378 kernel: tap05f217fa-37 (unregistering): left promiscuous mode
Jan 27 09:09:37 np0005597378 NetworkManager[48904]: <info>  [1769522977.3962] device (tap05f217fa-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.406 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:37Z|01104|binding|INFO|Releasing lport 05f217fa-372b-46d3-974f-de79101f0b2f from this chassis (sb_readonly=0)
Jan 27 09:09:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:37Z|01105|binding|INFO|Setting lport 05f217fa-372b-46d3-974f-de79101f0b2f down in Southbound
Jan 27 09:09:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:37Z|01106|binding|INFO|Removing iface tap05f217fa-37 ovn-installed in OVS
Jan 27 09:09:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:37.414 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:41:9e 10.100.0.6'], port_security=['fa:16:3e:e3:41:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7514a588-c48b-45af-a889-ea57cc9f1730', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96c668beb6b74661927ce283539bb68e', 'neutron:revision_number': '13', 'neutron:security_group_ids': '30628006-70d9-49c9-a68f-99c77c1bc946', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa878a03-6528-40d1-820c-a9a0442d321a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=05f217fa-372b-46d3-974f-de79101f0b2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:09:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:37.415 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 05f217fa-372b-46d3-974f-de79101f0b2f in datapath 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b unbound from our chassis#033[00m
Jan 27 09:09:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:37.416 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:09:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:37.417 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2438a87a-72df-41ee-b759-bdce1c70b6d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:37.418 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b namespace which is not needed anymore#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.425 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:37 np0005597378 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Jan 27 09:09:37 np0005597378 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006a.scope: Consumed 2.616s CPU time.
Jan 27 09:09:37 np0005597378 systemd-machined[207425]: Machine qemu-138-instance-0000006a terminated.
Jan 27 09:09:37 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [NOTICE]   (336768) : haproxy version is 2.8.14-c23fe91
Jan 27 09:09:37 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [NOTICE]   (336768) : path to executable is /usr/sbin/haproxy
Jan 27 09:09:37 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [WARNING]  (336768) : Exiting Master process...
Jan 27 09:09:37 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [ALERT]    (336768) : Current worker (336770) exited with code 143 (Terminated)
Jan 27 09:09:37 np0005597378 neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b[336764]: [WARNING]  (336768) : All workers exited. Exiting... (0)
Jan 27 09:09:37 np0005597378 systemd[1]: libpod-407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e.scope: Deactivated successfully.
Jan 27 09:09:37 np0005597378 podman[336805]: 2026-01-27 14:09:37.62071411 +0000 UTC m=+0.117003527 container died 407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.624 238945 INFO nova.virt.libvirt.driver [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Instance destroyed successfully.#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.625 238945 DEBUG nova.objects.instance [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lazy-loading 'resources' on Instance uuid 7514a588-c48b-45af-a889-ea57cc9f1730 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.648 238945 DEBUG nova.virt.libvirt.vif [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1964192211',display_name='tempest-ServersNegativeTestJSON-server-1964192211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1964192211',id=106,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:09:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96c668beb6b74661927ce283539bb68e',ramdisk_id='',reservation_id='r-bdwaej8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1782469845',owner_user_name='tempest-ServersNegativeTestJSON-1782469845-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:09:24Z,user_data=None,user_id='945414e1b82946ccadab2e408cf6151c',uuid=7514a588-c48b-45af-a889-ea57cc9f1730,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.649 238945 DEBUG nova.network.os_vif_util [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converting VIF {"id": "05f217fa-372b-46d3-974f-de79101f0b2f", "address": "fa:16:3e:e3:41:9e", "network": {"id": "5d5d79a0-3ea3-4f88-81a4-888d41f69a7b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-75669567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96c668beb6b74661927ce283539bb68e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05f217fa-37", "ovs_interfaceid": "05f217fa-372b-46d3-974f-de79101f0b2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.650 238945 DEBUG nova.network.os_vif_util [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.650 238945 DEBUG os_vif [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.651 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.652 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05f217fa-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.653 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:09:37 np0005597378 nova_compute[238941]: 2026-01-27 14:09:37.657 238945 INFO os_vif [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:41:9e,bridge_name='br-int',has_traffic_filtering=True,id=05f217fa-372b-46d3-974f-de79101f0b2f,network=Network(5d5d79a0-3ea3-4f88-81a4-888d41f69a7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05f217fa-37')#033[00m
Jan 27 09:09:37 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e-userdata-shm.mount: Deactivated successfully.
Jan 27 09:09:37 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1d291edb2adef28cdade67c13c58b9f8c166a0e165196abe2f5c7f3e55eb08f0-merged.mount: Deactivated successfully.
Jan 27 09:09:37 np0005597378 podman[336805]: 2026-01-27 14:09:37.91047582 +0000 UTC m=+0.406765247 container cleanup 407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:09:37 np0005597378 systemd[1]: libpod-conmon-407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e.scope: Deactivated successfully.
Jan 27 09:09:38 np0005597378 podman[336865]: 2026-01-27 14:09:38.263341817 +0000 UTC m=+0.318637138 container remove 407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 09:09:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.270 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9d74202c-d9f0-4fe7-9865-3c5252c6d1b2]: (4, ('Tue Jan 27 02:09:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b (407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e)\n407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e\nTue Jan 27 02:09:37 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b (407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e)\n407129e1b2809f5e4a10e540058b495e57db7ee780a690756334c382aa3d143e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.272 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f8eb39a7-81c9-4b76-b431-8011ea3cd726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.273 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d5d79a0-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:38 np0005597378 kernel: tap5d5d79a0-30: left promiscuous mode
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.288 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.293 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72d1c29a-3118-4659-be5b-a372129833da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.306 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e51f6d-226f-4e72-8a90-42087e04f417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.308 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c6de284e-a408-4e00-97ea-338246f9e2c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.324 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3aae3e59-908e-48e8-b091-be051956c93d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560342, 'reachable_time': 24393, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336881, 'error': None, 'target': 'ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:38 np0005597378 systemd[1]: run-netns-ovnmeta\x2d5d5d79a0\x2d3ea3\x2d4f88\x2d81a4\x2d888d41f69a7b.mount: Deactivated successfully.
Jan 27 09:09:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.328 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d5d79a0-3ea3-4f88-81a4-888d41f69a7b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:09:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:38.328 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[e24de49c-2272-4659-8021-782ddae8b2cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.458 238945 INFO nova.virt.libvirt.driver [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deleting instance files /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730_del#033[00m
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.459 238945 INFO nova.virt.libvirt.driver [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deletion of /var/lib/nova/instances/7514a588-c48b-45af-a889-ea57cc9f1730_del complete#033[00m
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.507 238945 INFO nova.compute.manager [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Took 1.32 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.508 238945 DEBUG oslo.service.loopingcall [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.508 238945 DEBUG nova.compute.manager [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.508 238945 DEBUG nova.network.neutron [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.805 238945 DEBUG nova.compute.manager [req-1e33a130-657b-4339-ba25-8ab10f15efd6 req-a1ae9a08-f54d-4c81-a3bb-9a67f4c80c8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.805 238945 DEBUG oslo_concurrency.lockutils [req-1e33a130-657b-4339-ba25-8ab10f15efd6 req-a1ae9a08-f54d-4c81-a3bb-9a67f4c80c8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.806 238945 DEBUG oslo_concurrency.lockutils [req-1e33a130-657b-4339-ba25-8ab10f15efd6 req-a1ae9a08-f54d-4c81-a3bb-9a67f4c80c8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.806 238945 DEBUG oslo_concurrency.lockutils [req-1e33a130-657b-4339-ba25-8ab10f15efd6 req-a1ae9a08-f54d-4c81-a3bb-9a67f4c80c8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.806 238945 DEBUG nova.compute.manager [req-1e33a130-657b-4339-ba25-8ab10f15efd6 req-a1ae9a08-f54d-4c81-a3bb-9a67f4c80c8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:09:38 np0005597378 nova_compute[238941]: 2026-01-27 14:09:38.806 238945 DEBUG nova.compute.manager [req-1e33a130-657b-4339-ba25-8ab10f15efd6 req-a1ae9a08-f54d-4c81-a3bb-9a67f4c80c8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-unplugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:09:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 93 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 12 KiB/s wr, 41 op/s
Jan 27 09:09:39 np0005597378 nova_compute[238941]: 2026-01-27 14:09:39.418 238945 DEBUG nova.network.neutron [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:09:39 np0005597378 nova_compute[238941]: 2026-01-27 14:09:39.453 238945 INFO nova.compute.manager [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Took 0.94 seconds to deallocate network for instance.#033[00m
Jan 27 09:09:39 np0005597378 nova_compute[238941]: 2026-01-27 14:09:39.515 238945 DEBUG nova.compute.manager [req-9e9f2f02-8ebb-410b-bb0d-24bbf97c35ee req-00d92013-c7ba-47dd-a355-f17082192ef2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-deleted-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:39 np0005597378 nova_compute[238941]: 2026-01-27 14:09:39.516 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:39 np0005597378 nova_compute[238941]: 2026-01-27 14:09:39.517 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:39 np0005597378 nova_compute[238941]: 2026-01-27 14:09:39.569 238945 DEBUG oslo_concurrency.processutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:09:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:09:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472860898' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:09:40 np0005597378 nova_compute[238941]: 2026-01-27 14:09:40.160 238945 DEBUG oslo_concurrency.processutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:09:40 np0005597378 nova_compute[238941]: 2026-01-27 14:09:40.166 238945 DEBUG nova.compute.provider_tree [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:09:40 np0005597378 nova_compute[238941]: 2026-01-27 14:09:40.183 238945 DEBUG nova.scheduler.client.report [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:09:40 np0005597378 nova_compute[238941]: 2026-01-27 14:09:40.202 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:40 np0005597378 nova_compute[238941]: 2026-01-27 14:09:40.236 238945 INFO nova.scheduler.client.report [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Deleted allocations for instance 7514a588-c48b-45af-a889-ea57cc9f1730#033[00m
Jan 27 09:09:40 np0005597378 nova_compute[238941]: 2026-01-27 14:09:40.317 238945 DEBUG oslo_concurrency.lockutils [None req-023fbf52-a55e-4eba-98e4-c41a5b468409 945414e1b82946ccadab2e408cf6151c 96c668beb6b74661927ce283539bb68e - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 64 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 10 KiB/s wr, 33 op/s
Jan 27 09:09:40 np0005597378 nova_compute[238941]: 2026-01-27 14:09:40.889 238945 DEBUG nova.compute.manager [req-62f11ad9-2c91-4f3a-907a-6588136f78f8 req-72af1cb5-fb34-41d6-b487-a2f9fac7366c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:40 np0005597378 nova_compute[238941]: 2026-01-27 14:09:40.889 238945 DEBUG oslo_concurrency.lockutils [req-62f11ad9-2c91-4f3a-907a-6588136f78f8 req-72af1cb5-fb34-41d6-b487-a2f9fac7366c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:40 np0005597378 nova_compute[238941]: 2026-01-27 14:09:40.890 238945 DEBUG oslo_concurrency.lockutils [req-62f11ad9-2c91-4f3a-907a-6588136f78f8 req-72af1cb5-fb34-41d6-b487-a2f9fac7366c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:40 np0005597378 nova_compute[238941]: 2026-01-27 14:09:40.890 238945 DEBUG oslo_concurrency.lockutils [req-62f11ad9-2c91-4f3a-907a-6588136f78f8 req-72af1cb5-fb34-41d6-b487-a2f9fac7366c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "7514a588-c48b-45af-a889-ea57cc9f1730-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:40 np0005597378 nova_compute[238941]: 2026-01-27 14:09:40.890 238945 DEBUG nova.compute.manager [req-62f11ad9-2c91-4f3a-907a-6588136f78f8 req-72af1cb5-fb34-41d6-b487-a2f9fac7366c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] No waiting events found dispatching network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:09:40 np0005597378 nova_compute[238941]: 2026-01-27 14:09:40.890 238945 WARNING nova.compute.manager [req-62f11ad9-2c91-4f3a-907a-6588136f78f8 req-72af1cb5-fb34-41d6-b487-a2f9fac7366c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Received unexpected event network-vif-plugged-05f217fa-372b-46d3-974f-de79101f0b2f for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:09:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:42 np0005597378 nova_compute[238941]: 2026-01-27 14:09:42.425 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:42 np0005597378 nova_compute[238941]: 2026-01-27 14:09:42.654 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 64 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 10 KiB/s wr, 18 op/s
Jan 27 09:09:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 28 op/s
Jan 27 09:09:45 np0005597378 nova_compute[238941]: 2026-01-27 14:09:45.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:46.314 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:46.315 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:46.315 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:46 np0005597378 podman[336905]: 2026-01-27 14:09:46.745161263 +0000 UTC m=+0.087040011 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:09:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.5 KiB/s wr, 28 op/s
Jan 27 09:09:47 np0005597378 nova_compute[238941]: 2026-01-27 14:09:47.426 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:47 np0005597378 nova_compute[238941]: 2026-01-27 14:09:47.655 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:47 np0005597378 podman[336924]: 2026-01-27 14:09:47.780513718 +0000 UTC m=+0.110979205 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:09:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:09:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:09:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:09:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:09:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:09:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:09:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 09:09:49 np0005597378 nova_compute[238941]: 2026-01-27 14:09:49.514 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:49 np0005597378 nova_compute[238941]: 2026-01-27 14:09:49.515 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:49 np0005597378 nova_compute[238941]: 2026-01-27 14:09:49.540 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:09:49 np0005597378 nova_compute[238941]: 2026-01-27 14:09:49.616 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:49 np0005597378 nova_compute[238941]: 2026-01-27 14:09:49.617 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:49 np0005597378 nova_compute[238941]: 2026-01-27 14:09:49.626 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:09:49 np0005597378 nova_compute[238941]: 2026-01-27 14:09:49.626 238945 INFO nova.compute.claims [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:09:49 np0005597378 nova_compute[238941]: 2026-01-27 14:09:49.716 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:09:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:09:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/539257978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.255 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.261 238945 DEBUG nova.compute.provider_tree [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.279 238945 DEBUG nova.scheduler.client.report [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.310 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.311 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.357 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.357 238945 DEBUG nova.network.neutron [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.375 238945 INFO nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.391 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.515 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.516 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.517 238945 INFO nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Creating image(s)
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.538 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.557 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.579 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.582 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.655 238945 DEBUG nova.policy [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a87606137cd4440ab2ffebe68b325a85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.665 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.666 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.666 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.667 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.686 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:09:50 np0005597378 nova_compute[238941]: 2026-01-27 14:09:50.689 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5a3eb35a-b675-4084-a737-24918aecfd12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:09:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 23 op/s
Jan 27 09:09:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:51 np0005597378 nova_compute[238941]: 2026-01-27 14:09:51.055 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5a3eb35a-b675-4084-a737-24918aecfd12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:09:51 np0005597378 nova_compute[238941]: 2026-01-27 14:09:51.114 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] resizing rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 09:09:51 np0005597378 nova_compute[238941]: 2026-01-27 14:09:51.199 238945 DEBUG nova.objects.instance [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'migration_context' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:09:51 np0005597378 nova_compute[238941]: 2026-01-27 14:09:51.217 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 09:09:51 np0005597378 nova_compute[238941]: 2026-01-27 14:09:51.218 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Ensure instance console log exists: /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 09:09:51 np0005597378 nova_compute[238941]: 2026-01-27 14:09:51.219 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:09:51 np0005597378 nova_compute[238941]: 2026-01-27 14:09:51.219 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:09:51 np0005597378 nova_compute[238941]: 2026-01-27 14:09:51.219 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:09:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:09:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:09:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:09:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:09:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:09:51 np0005597378 nova_compute[238941]: 2026-01-27 14:09:51.663 238945 DEBUG nova.network.neutron [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Successfully created port: 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 09:09:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:09:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:09:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:09:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:09:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:09:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:09:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:09:52 np0005597378 podman[337281]: 2026-01-27 14:09:52.102181191 +0000 UTC m=+0.024657583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:09:52 np0005597378 podman[337281]: 2026-01-27 14:09:52.373629139 +0000 UTC m=+0.296105511 container create cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_yalow, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:09:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:09:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:09:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:09:52 np0005597378 nova_compute[238941]: 2026-01-27 14:09:52.452 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:09:52 np0005597378 systemd[1]: Started libpod-conmon-cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a.scope.
Jan 27 09:09:52 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:09:52 np0005597378 podman[337281]: 2026-01-27 14:09:52.582974798 +0000 UTC m=+0.505451190 container init cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:09:52 np0005597378 podman[337281]: 2026-01-27 14:09:52.590219243 +0000 UTC m=+0.512695615 container start cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_yalow, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 27 09:09:52 np0005597378 elastic_yalow[337297]: 167 167
Jan 27 09:09:52 np0005597378 systemd[1]: libpod-cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a.scope: Deactivated successfully.
Jan 27 09:09:52 np0005597378 nova_compute[238941]: 2026-01-27 14:09:52.622 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769522977.6202095, 7514a588-c48b-45af-a889-ea57cc9f1730 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:09:52 np0005597378 nova_compute[238941]: 2026-01-27 14:09:52.624 238945 INFO nova.compute.manager [-] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] VM Stopped (Lifecycle Event)
Jan 27 09:09:52 np0005597378 podman[337281]: 2026-01-27 14:09:52.636105396 +0000 UTC m=+0.558582038 container attach cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 09:09:52 np0005597378 podman[337281]: 2026-01-27 14:09:52.636616 +0000 UTC m=+0.559092372 container died cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 09:09:52 np0005597378 nova_compute[238941]: 2026-01-27 14:09:52.657 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:09:52 np0005597378 nova_compute[238941]: 2026-01-27 14:09:52.709 238945 DEBUG nova.compute.manager [None req-4ee376f8-7d2c-4c99-a9f3-3a7bbe6dea1f - - - - - -] [instance: 7514a588-c48b-45af-a889-ea57cc9f1730] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:09:52 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2924dba017c7ce9ec6493f2108104966a792596aeeb32d5ef5d7a574dda9e8b7-merged.mount: Deactivated successfully.
Jan 27 09:09:52 np0005597378 podman[337281]: 2026-01-27 14:09:52.788700919 +0000 UTC m=+0.711177301 container remove cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 27 09:09:52 np0005597378 systemd[1]: libpod-conmon-cd1b1384fa91569cbff7c36be57a3daa3371fdb0f5a7a530f27a39d9d5a2122a.scope: Deactivated successfully.
Jan 27 09:09:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 511 B/s wr, 9 op/s
Jan 27 09:09:52 np0005597378 podman[337321]: 2026-01-27 14:09:52.977587527 +0000 UTC m=+0.039691179 container create bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:09:53 np0005597378 systemd[1]: Started libpod-conmon-bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864.scope.
Jan 27 09:09:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:09:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a84f21033ac096ba99b101ee80cadb7d832686f3cbd917b4c9606e649fdea29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a84f21033ac096ba99b101ee80cadb7d832686f3cbd917b4c9606e649fdea29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a84f21033ac096ba99b101ee80cadb7d832686f3cbd917b4c9606e649fdea29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a84f21033ac096ba99b101ee80cadb7d832686f3cbd917b4c9606e649fdea29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a84f21033ac096ba99b101ee80cadb7d832686f3cbd917b4c9606e649fdea29/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:53 np0005597378 podman[337321]: 2026-01-27 14:09:52.95985236 +0000 UTC m=+0.021956032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:09:53 np0005597378 podman[337321]: 2026-01-27 14:09:53.067776171 +0000 UTC m=+0.129879883 container init bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 09:09:53 np0005597378 podman[337321]: 2026-01-27 14:09:53.075490469 +0000 UTC m=+0.137594121 container start bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:09:53 np0005597378 podman[337321]: 2026-01-27 14:09:53.079931498 +0000 UTC m=+0.142035200 container attach bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 09:09:53 np0005597378 priceless_spence[337338]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:09:53 np0005597378 priceless_spence[337338]: --> All data devices are unavailable
Jan 27 09:09:53 np0005597378 systemd[1]: libpod-bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864.scope: Deactivated successfully.
Jan 27 09:09:53 np0005597378 podman[337321]: 2026-01-27 14:09:53.619720429 +0000 UTC m=+0.681824141 container died bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:09:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4a84f21033ac096ba99b101ee80cadb7d832686f3cbd917b4c9606e649fdea29-merged.mount: Deactivated successfully.
Jan 27 09:09:53 np0005597378 podman[337321]: 2026-01-27 14:09:53.663997 +0000 UTC m=+0.726100662 container remove bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 09:09:53 np0005597378 systemd[1]: libpod-conmon-bc6a7282a1eb1793af71cd572d823d0e6497db42ad80f097fa9057c129993864.scope: Deactivated successfully.
Jan 27 09:09:53 np0005597378 nova_compute[238941]: 2026-01-27 14:09:53.735 238945 DEBUG nova.network.neutron [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Successfully updated port: 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 09:09:53 np0005597378 nova_compute[238941]: 2026-01-27 14:09:53.824 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:09:53 np0005597378 nova_compute[238941]: 2026-01-27 14:09:53.824 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:09:53 np0005597378 nova_compute[238941]: 2026-01-27 14:09:53.824 238945 DEBUG nova.network.neutron [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 09:09:53 np0005597378 nova_compute[238941]: 2026-01-27 14:09:53.907 238945 DEBUG nova.compute.manager [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-changed-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:09:53 np0005597378 nova_compute[238941]: 2026-01-27 14:09:53.909 238945 DEBUG nova.compute.manager [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Refreshing instance network info cache due to event network-changed-2cb1f123-4012-46d4-bbe9-914b25f6f6a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:09:53 np0005597378 nova_compute[238941]: 2026-01-27 14:09:53.909 238945 DEBUG oslo_concurrency.lockutils [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:09:54 np0005597378 nova_compute[238941]: 2026-01-27 14:09:54.061 238945 DEBUG nova.network.neutron [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 09:09:54 np0005597378 podman[337431]: 2026-01-27 14:09:54.117625535 +0000 UTC m=+0.042214905 container create b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bardeen, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:54 np0005597378 systemd[1]: Started libpod-conmon-b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069.scope.
Jan 27 09:09:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:09:54 np0005597378 podman[337431]: 2026-01-27 14:09:54.192144659 +0000 UTC m=+0.116734059 container init b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bardeen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:09:54 np0005597378 podman[337431]: 2026-01-27 14:09:54.100906006 +0000 UTC m=+0.025495406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:09:54 np0005597378 podman[337431]: 2026-01-27 14:09:54.201238704 +0000 UTC m=+0.125828074 container start b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bardeen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:54 np0005597378 podman[337431]: 2026-01-27 14:09:54.205292042 +0000 UTC m=+0.129881412 container attach b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:09:54 np0005597378 strange_bardeen[337448]: 167 167
Jan 27 09:09:54 np0005597378 systemd[1]: libpod-b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069.scope: Deactivated successfully.
Jan 27 09:09:54 np0005597378 podman[337431]: 2026-01-27 14:09:54.207125992 +0000 UTC m=+0.131715362 container died b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bardeen, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:09:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5d8c4d33a8ed4c20d9262035a8d66eb21556aa2c7870b36f078d8fd42b8eed75-merged.mount: Deactivated successfully.
Jan 27 09:09:54 np0005597378 podman[337431]: 2026-01-27 14:09:54.25057205 +0000 UTC m=+0.175161420 container remove b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:09:54 np0005597378 systemd[1]: libpod-conmon-b635f430f894a64e8e6f06d7832ddcf5a3724c411e5e85041f8251f132e59069.scope: Deactivated successfully.
Jan 27 09:09:54 np0005597378 podman[337471]: 2026-01-27 14:09:54.403731377 +0000 UTC m=+0.039662887 container create 552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 09:09:54 np0005597378 systemd[1]: Started libpod-conmon-552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6.scope.
Jan 27 09:09:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:09:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c28c66dfe0f58473b02ca48c74bf07c62e2065b13d5cea2d28466382d339eaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c28c66dfe0f58473b02ca48c74bf07c62e2065b13d5cea2d28466382d339eaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c28c66dfe0f58473b02ca48c74bf07c62e2065b13d5cea2d28466382d339eaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c28c66dfe0f58473b02ca48c74bf07c62e2065b13d5cea2d28466382d339eaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:54 np0005597378 podman[337471]: 2026-01-27 14:09:54.386322389 +0000 UTC m=+0.022253919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:09:54 np0005597378 podman[337471]: 2026-01-27 14:09:54.491418265 +0000 UTC m=+0.127349795 container init 552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:09:54 np0005597378 podman[337471]: 2026-01-27 14:09:54.498680329 +0000 UTC m=+0.134611840 container start 552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:09:54 np0005597378 podman[337471]: 2026-01-27 14:09:54.501887205 +0000 UTC m=+0.137818745 container attach 552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:09:54 np0005597378 youthful_euler[337487]: {
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:    "0": [
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:        {
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "devices": [
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "/dev/loop3"
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            ],
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_name": "ceph_lv0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_size": "21470642176",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "name": "ceph_lv0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "tags": {
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.cluster_name": "ceph",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.crush_device_class": "",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.encrypted": "0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.objectstore": "bluestore",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.osd_id": "0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.type": "block",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.vdo": "0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.with_tpm": "0"
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            },
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "type": "block",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "vg_name": "ceph_vg0"
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:        }
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:    ],
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:    "1": [
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:        {
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "devices": [
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "/dev/loop4"
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            ],
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_name": "ceph_lv1",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_size": "21470642176",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "name": "ceph_lv1",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "tags": {
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.cluster_name": "ceph",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.crush_device_class": "",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.encrypted": "0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.objectstore": "bluestore",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.osd_id": "1",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.type": "block",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.vdo": "0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.with_tpm": "0"
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            },
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "type": "block",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "vg_name": "ceph_vg1"
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:        }
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:    ],
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:    "2": [
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:        {
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "devices": [
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "/dev/loop5"
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            ],
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_name": "ceph_lv2",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_size": "21470642176",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "name": "ceph_lv2",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "tags": {
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.cluster_name": "ceph",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.crush_device_class": "",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.encrypted": "0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.objectstore": "bluestore",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.osd_id": "2",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.type": "block",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.vdo": "0",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:                "ceph.with_tpm": "0"
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            },
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "type": "block",
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:            "vg_name": "ceph_vg2"
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:        }
Jan 27 09:09:54 np0005597378 youthful_euler[337487]:    ]
Jan 27 09:09:54 np0005597378 youthful_euler[337487]: }
Jan 27 09:09:54 np0005597378 systemd[1]: libpod-552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6.scope: Deactivated successfully.
Jan 27 09:09:54 np0005597378 podman[337471]: 2026-01-27 14:09:54.823994505 +0000 UTC m=+0.459926015 container died 552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 09:09:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 74 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 1.6 MiB/s wr, 13 op/s
Jan 27 09:09:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8c28c66dfe0f58473b02ca48c74bf07c62e2065b13d5cea2d28466382d339eaf-merged.mount: Deactivated successfully.
Jan 27 09:09:54 np0005597378 podman[337471]: 2026-01-27 14:09:54.865976314 +0000 UTC m=+0.501907814 container remove 552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 09:09:54 np0005597378 systemd[1]: libpod-conmon-552c219689c0f950fffbd2d855c9a6be7d48079b475ac6105abe63aad64808d6.scope: Deactivated successfully.
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.325 238945 DEBUG nova.network.neutron [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updating instance_info_cache with network_info: [{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:09:55 np0005597378 podman[337570]: 2026-01-27 14:09:55.334693625 +0000 UTC m=+0.053925480 container create b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.351 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.352 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance network_info: |[{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.353 238945 DEBUG oslo_concurrency.lockutils [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.353 238945 DEBUG nova.network.neutron [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Refreshing network info cache for port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.356 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Start _get_guest_xml network_info=[{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.363 238945 WARNING nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.370 238945 DEBUG nova.virt.libvirt.host [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.372 238945 DEBUG nova.virt.libvirt.host [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.376 238945 DEBUG nova.virt.libvirt.host [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.377 238945 DEBUG nova.virt.libvirt.host [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:09:55 np0005597378 systemd[1]: Started libpod-conmon-b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6.scope.
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.377 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.377 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.378 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.378 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.379 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.379 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.379 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.379 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.379 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.380 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.380 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.380 238945 DEBUG nova.virt.hardware [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.383 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:09:55 np0005597378 podman[337570]: 2026-01-27 14:09:55.307230516 +0000 UTC m=+0.026462451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:09:55 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:09:55 np0005597378 podman[337570]: 2026-01-27 14:09:55.441431984 +0000 UTC m=+0.160663869 container init b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hodgkin, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:09:55 np0005597378 podman[337570]: 2026-01-27 14:09:55.448461263 +0000 UTC m=+0.167693118 container start b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 09:09:55 np0005597378 brave_hodgkin[337586]: 167 167
Jan 27 09:09:55 np0005597378 podman[337570]: 2026-01-27 14:09:55.454992509 +0000 UTC m=+0.174224364 container attach b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hodgkin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:55 np0005597378 systemd[1]: libpod-b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6.scope: Deactivated successfully.
Jan 27 09:09:55 np0005597378 conmon[337586]: conmon b328efb575c68513e1ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6.scope/container/memory.events
Jan 27 09:09:55 np0005597378 podman[337570]: 2026-01-27 14:09:55.458077412 +0000 UTC m=+0.177309267 container died b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hodgkin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 09:09:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e5ca5d5756cb6849108f5a8028401d41098c3048859090c5e9e30303087ed49a-merged.mount: Deactivated successfully.
Jan 27 09:09:55 np0005597378 podman[337570]: 2026-01-27 14:09:55.494195633 +0000 UTC m=+0.213427478 container remove b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:09:55 np0005597378 systemd[1]: libpod-conmon-b328efb575c68513e1ce658734bd9dfee9e62627602c01bb3dee15e8e04f2eb6.scope: Deactivated successfully.
Jan 27 09:09:55 np0005597378 podman[337630]: 2026-01-27 14:09:55.647465973 +0000 UTC m=+0.038749222 container create e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:09:55 np0005597378 systemd[1]: Started libpod-conmon-e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f.scope.
Jan 27 09:09:55 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:09:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08d3b375a3d3db31b2c23e3fb66dac29859792e98e3621e31cb3910ff4a4f08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08d3b375a3d3db31b2c23e3fb66dac29859792e98e3621e31cb3910ff4a4f08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08d3b375a3d3db31b2c23e3fb66dac29859792e98e3621e31cb3910ff4a4f08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:55 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08d3b375a3d3db31b2c23e3fb66dac29859792e98e3621e31cb3910ff4a4f08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:55 np0005597378 podman[337630]: 2026-01-27 14:09:55.63131425 +0000 UTC m=+0.022597519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:09:55 np0005597378 podman[337630]: 2026-01-27 14:09:55.734900834 +0000 UTC m=+0.126184103 container init e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:55 np0005597378 podman[337630]: 2026-01-27 14:09:55.741493862 +0000 UTC m=+0.132777141 container start e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:09:55 np0005597378 podman[337630]: 2026-01-27 14:09:55.745760507 +0000 UTC m=+0.137043786 container attach e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:09:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:09:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:09:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1517153717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.966 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.986 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:09:55 np0005597378 nova_compute[238941]: 2026-01-27 14:09:55.989 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:09:56 np0005597378 lvm[337765]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:09:56 np0005597378 lvm[337764]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:09:56 np0005597378 lvm[337764]: VG ceph_vg0 finished
Jan 27 09:09:56 np0005597378 lvm[337765]: VG ceph_vg1 finished
Jan 27 09:09:56 np0005597378 lvm[337767]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:09:56 np0005597378 lvm[337767]: VG ceph_vg2 finished
Jan 27 09:09:56 np0005597378 elastic_goldberg[337646]: {}
Jan 27 09:09:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:09:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1554970511' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:09:56 np0005597378 systemd[1]: libpod-e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f.scope: Deactivated successfully.
Jan 27 09:09:56 np0005597378 systemd[1]: libpod-e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f.scope: Consumed 1.277s CPU time.
Jan 27 09:09:56 np0005597378 podman[337630]: 2026-01-27 14:09:56.554694943 +0000 UTC m=+0.945978192 container died e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.555 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.557 238945 DEBUG nova.virt.libvirt.vif [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1195479189',display_name='tempest-TestNetworkAdvancedServerOps-server-1195479189',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1195479189',id=110,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+v6IIeXMH2K9HXQKoJGfMEJBIPmlwPbH4KjiTbOA+pygoRKT84WN1ACxViMYGsiCqvcSzyJoye+rKEd+WJzaq8rjcwrNm7VY4SUOaosw3oosbY5c4vMWnh4H/Xg8Gbpw==',key_name='tempest-TestNetworkAdvancedServerOps-1513209013',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-ve6140ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:09:50Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=5a3eb35a-b675-4084-a737-24918aecfd12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.558 238945 DEBUG nova.network.os_vif_util [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.559 238945 DEBUG nova.network.os_vif_util [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.560 238945 DEBUG nova.objects.instance [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:09:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b08d3b375a3d3db31b2c23e3fb66dac29859792e98e3621e31cb3910ff4a4f08-merged.mount: Deactivated successfully.
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.582 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  <uuid>5a3eb35a-b675-4084-a737-24918aecfd12</uuid>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  <name>instance-0000006e</name>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1195479189</nova:name>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:09:55</nova:creationTime>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:        <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:        <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:        <nova:port uuid="2cb1f123-4012-46d4-bbe9-914b25f6f6a3">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <entry name="serial">5a3eb35a-b675-4084-a737-24918aecfd12</entry>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <entry name="uuid">5a3eb35a-b675-4084-a737-24918aecfd12</entry>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5a3eb35a-b675-4084-a737-24918aecfd12_disk">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5a3eb35a-b675-4084-a737-24918aecfd12_disk.config">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:9a:53:2b"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <target dev="tap2cb1f123-40"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/console.log" append="off"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:09:56 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:09:56 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:09:56 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:09:56 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.583 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Preparing to wait for external event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.583 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.583 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.583 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.584 238945 DEBUG nova.virt.libvirt.vif [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1195479189',display_name='tempest-TestNetworkAdvancedServerOps-server-1195479189',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1195479189',id=110,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+v6IIeXMH2K9HXQKoJGfMEJBIPmlwPbH4KjiTbOA+pygoRKT84WN1ACxViMYGsiCqvcSzyJoye+rKEd+WJzaq8rjcwrNm7VY4SUOaosw3oosbY5c4vMWnh4H/Xg8Gbpw==',key_name='tempest-TestNetworkAdvancedServerOps-1513209013',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-ve6140ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:09:50Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=5a3eb35a-b675-4084-a737-24918aecfd12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.584 238945 DEBUG nova.network.os_vif_util [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.585 238945 DEBUG nova.network.os_vif_util [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.585 238945 DEBUG os_vif [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.586 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.587 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.587 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.589 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.590 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cb1f123-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.590 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2cb1f123-40, col_values=(('external_ids', {'iface-id': '2cb1f123-4012-46d4-bbe9-914b25f6f6a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:53:2b', 'vm-uuid': '5a3eb35a-b675-4084-a737-24918aecfd12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:56 np0005597378 NetworkManager[48904]: <info>  [1769522996.5928] manager: (tap2cb1f123-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.596 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.601 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.605 238945 INFO os_vif [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40')#033[00m
Jan 27 09:09:56 np0005597378 podman[337630]: 2026-01-27 14:09:56.610278848 +0000 UTC m=+1.001562097 container remove e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 09:09:56 np0005597378 systemd[1]: libpod-conmon-e9a38048fd5b7ec49d7c988f0d7ccc77e06af2971ddb5c69e4f12435155e293f.scope: Deactivated successfully.
Jan 27 09:09:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:09:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:09:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.669 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:09:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.670 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.670 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No VIF found with MAC fa:16:3e:9a:53:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.670 238945 INFO nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Using config drive#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.699 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:09:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:09:56 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:09:56 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.930 238945 DEBUG nova.network.neutron [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updated VIF entry in instance network info cache for port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.930 238945 DEBUG nova.network.neutron [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updating instance_info_cache with network_info: [{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:09:56 np0005597378 nova_compute[238941]: 2026-01-27 14:09:56.945 238945 DEBUG oslo_concurrency.lockutils [req-aa48972f-5b18-4134-9a80-8d51fd7a9891 req-7d47a31b-2d03-4fcf-b8aa-038f3cddc7de 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.133 238945 INFO nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Creating config drive at /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.141 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_66s_qw3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.287 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_66s_qw3" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.313 238945 DEBUG nova.storage.rbd_utils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.316 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.471 238945 DEBUG oslo_concurrency.processutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.471 238945 INFO nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deleting local config drive /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config because it was imported into RBD.#033[00m
Jan 27 09:09:57 np0005597378 kernel: tap2cb1f123-40: entered promiscuous mode
Jan 27 09:09:57 np0005597378 NetworkManager[48904]: <info>  [1769522997.5409] manager: (tap2cb1f123-40): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Jan 27 09:09:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:57Z|01107|binding|INFO|Claiming lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for this chassis.
Jan 27 09:09:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:57Z|01108|binding|INFO|2cb1f123-4012-46d4-bbe9-914b25f6f6a3: Claiming fa:16:3e:9a:53:2b 10.100.0.3
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.541 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:57 np0005597378 systemd-udevd[337766]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.549 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.559 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:53:2b 10.100.0.3'], port_security=['fa:16:3e:9a:53:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a3eb35a-b675-4084-a737-24918aecfd12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75da836f-929f-4646-940e-3cd4153d5aef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ce1c71d8-1f6c-4191-8aaf-3bb4bc201711', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4385daa-faf2-4073-8fdd-d03d3d8a6a22, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2cb1f123-4012-46d4-bbe9-914b25f6f6a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:09:57 np0005597378 NetworkManager[48904]: <info>  [1769522997.5627] device (tap2cb1f123-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.561 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 in datapath 75da836f-929f-4646-940e-3cd4153d5aef bound to our chassis#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.562 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 75da836f-929f-4646-940e-3cd4153d5aef#033[00m
Jan 27 09:09:57 np0005597378 NetworkManager[48904]: <info>  [1769522997.5637] device (tap2cb1f123-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:09:57 np0005597378 systemd-machined[207425]: New machine qemu-139-instance-0000006e.
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.581 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dc075104-7afb-4710-94f0-a53c0f78b984]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.583 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap75da836f-91 in ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.586 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap75da836f-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.586 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a0065933-1d16-4a90-98e6-a3463e9aeabc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.587 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[164ba9e2-33dd-44f2-8f82-c46e1e5e9198]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.600 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7a0c1b-4d57-4a8f-a9c8-5d7eeaea18bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 systemd[1]: Started Virtual Machine qemu-139-instance-0000006e.
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.630 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8cae20-c9a0-4afd-a484-abfe90b26d0f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:57Z|01109|binding|INFO|Setting lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 ovn-installed in OVS
Jan 27 09:09:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:57Z|01110|binding|INFO|Setting lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 up in Southbound
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.642 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.675 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0af5efc8-de8b-4799-b53a-eaf4b141f13d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.681 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[407c6401-cf3c-4618-a2ad-b010e3395c2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 NetworkManager[48904]: <info>  [1769522997.6839] manager: (tap75da836f-90): new Veth device (/org/freedesktop/NetworkManager/Devices/449)
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.728 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c74af771-366d-464d-8c4c-7600d112456f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.732 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d9151d-7e72-44be-9e44-962dd46c3dbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 NetworkManager[48904]: <info>  [1769522997.7612] device (tap75da836f-90): carrier: link connected
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.768 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e96212a0-b9cb-470f-b3be-2419a1c096dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.797 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1748a7ff-524d-4490-8fe2-0cca31332971]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75da836f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:8d:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563738, 'reachable_time': 26879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337911, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.818 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a641f825-6a10-4240-a077-cc53d89d1f50]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:8d07'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563738, 'tstamp': 563738}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337912, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.844 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc012b7-6d5e-4537-98fc-61eb7820bd19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75da836f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:8d:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563738, 'reachable_time': 26879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 337913, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.879 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce11bc9-3874-4a2a-81a6-77b57bdab971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.901 238945 DEBUG nova.compute.manager [req-85e24856-9aae-4f65-bdde-4d3838a1e1d8 req-c18e9b68-5c97-429f-9b96-d09528575241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.901 238945 DEBUG oslo_concurrency.lockutils [req-85e24856-9aae-4f65-bdde-4d3838a1e1d8 req-c18e9b68-5c97-429f-9b96-d09528575241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.902 238945 DEBUG oslo_concurrency.lockutils [req-85e24856-9aae-4f65-bdde-4d3838a1e1d8 req-c18e9b68-5c97-429f-9b96-d09528575241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.902 238945 DEBUG oslo_concurrency.lockutils [req-85e24856-9aae-4f65-bdde-4d3838a1e1d8 req-c18e9b68-5c97-429f-9b96-d09528575241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.902 238945 DEBUG nova.compute.manager [req-85e24856-9aae-4f65-bdde-4d3838a1e1d8 req-c18e9b68-5c97-429f-9b96-d09528575241 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Processing event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.946 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7804a3fe-1a5b-4d09-bd2a-26f5cd2ec89a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.947 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75da836f-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.948 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.948 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75da836f-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:57 np0005597378 NetworkManager[48904]: <info>  [1769522997.9506] manager: (tap75da836f-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/450)
Jan 27 09:09:57 np0005597378 kernel: tap75da836f-90: entered promiscuous mode
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.953 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap75da836f-90, col_values=(('external_ids', {'iface-id': '4d4b2aab-f70a-4144-8265-087681a0ee38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:09:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:09:57Z|01111|binding|INFO|Releasing lport 4d4b2aab-f70a-4144-8265-087681a0ee38 from this chassis (sb_readonly=0)
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.955 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.956 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/75da836f-929f-4646-940e-3cd4153d5aef.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/75da836f-929f-4646-940e-3cd4153d5aef.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.957 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b372217c-a154-41ba-a1df-a50bc9c410d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.958 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-75da836f-929f-4646-940e-3cd4153d5aef
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/75da836f-929f-4646-940e-3cd4153d5aef.pid.haproxy
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 75da836f-929f-4646-940e-3cd4153d5aef
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:09:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:09:57.960 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'env', 'PROCESS_TAG=haproxy-75da836f-929f-4646-940e-3cd4153d5aef', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/75da836f-929f-4646-940e-3cd4153d5aef.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:09:57 np0005597378 nova_compute[238941]: 2026-01-27 14:09:57.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.051 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522998.0503528, 5a3eb35a-b675-4084-a737-24918aecfd12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.051 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] VM Started (Lifecycle Event)#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.053 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.061 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.065 238945 INFO nova.virt.libvirt.driver [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance spawned successfully.#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.065 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.071 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.075 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.097 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.097 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.098 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.098 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.098 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.098 238945 DEBUG nova.virt.libvirt.driver [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.111 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.111 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522998.0506175, 5a3eb35a-b675-4084-a737-24918aecfd12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.111 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.137 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.140 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769522998.0604541, 5a3eb35a-b675-4084-a737-24918aecfd12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.141 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.166 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.170 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.180 238945 INFO nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Took 7.66 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.180 238945 DEBUG nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.192 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.241 238945 INFO nova.compute.manager [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Took 8.65 seconds to build instance.#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.258 238945 DEBUG oslo_concurrency.lockutils [None req-0360279f-15ae-4c41-a2e8-05c5c480523e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:58 np0005597378 podman[337987]: 2026-01-27 14:09:58.375033212 +0000 UTC m=+0.048564337 container create 41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.414 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:09:58 np0005597378 systemd[1]: Started libpod-conmon-41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92.scope.
Jan 27 09:09:58 np0005597378 nova_compute[238941]: 2026-01-27 14:09:58.414 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:09:58 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:09:58 np0005597378 podman[337987]: 2026-01-27 14:09:58.352967349 +0000 UTC m=+0.026498494 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:09:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b543109219a18c6436ec5cd74514df89c924e904cff182235f32d79336a8aeb9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:09:58 np0005597378 podman[337987]: 2026-01-27 14:09:58.463964193 +0000 UTC m=+0.137495338 container init 41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 09:09:58 np0005597378 podman[337987]: 2026-01-27 14:09:58.471916097 +0000 UTC m=+0.145447232 container start 41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 09:09:58 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [NOTICE]   (338007) : New worker (338009) forked
Jan 27 09:09:58 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [NOTICE]   (338007) : Loading success.
Jan 27 09:09:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 27 09:09:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:09:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1812621683' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:09:59 np0005597378 nova_compute[238941]: 2026-01-27 14:09:59.048 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:09:59 np0005597378 nova_compute[238941]: 2026-01-27 14:09:59.132 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:09:59 np0005597378 nova_compute[238941]: 2026-01-27 14:09:59.132 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:09:59 np0005597378 nova_compute[238941]: 2026-01-27 14:09:59.288 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:09:59 np0005597378 nova_compute[238941]: 2026-01-27 14:09:59.289 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3690MB free_disk=59.96667531412095GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:09:59 np0005597378 nova_compute[238941]: 2026-01-27 14:09:59.290 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:09:59 np0005597378 nova_compute[238941]: 2026-01-27 14:09:59.290 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:09:59 np0005597378 nova_compute[238941]: 2026-01-27 14:09:59.365 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 5a3eb35a-b675-4084-a737-24918aecfd12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:09:59 np0005597378 nova_compute[238941]: 2026-01-27 14:09:59.366 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:09:59 np0005597378 nova_compute[238941]: 2026-01-27 14:09:59.366 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:09:59 np0005597378 nova_compute[238941]: 2026-01-27 14:09:59.398 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:09:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:09:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3708899762' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:09:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:09:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3708899762' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:09:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:09:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/652495819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:09:59 np0005597378 nova_compute[238941]: 2026-01-27 14:09:59.968 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:09:59 np0005597378 nova_compute[238941]: 2026-01-27 14:09:59.975 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:10:00 np0005597378 nova_compute[238941]: 2026-01-27 14:10:00.022 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:10:00 np0005597378 nova_compute[238941]: 2026-01-27 14:10:00.059 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:10:00 np0005597378 nova_compute[238941]: 2026-01-27 14:10:00.060 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:00 np0005597378 nova_compute[238941]: 2026-01-27 14:10:00.104 238945 DEBUG nova.compute.manager [req-aad295d7-4633-4abf-96af-249a941d242f req-9d251f5b-2deb-42c8-b311-8a616c8f48b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:00 np0005597378 nova_compute[238941]: 2026-01-27 14:10:00.105 238945 DEBUG oslo_concurrency.lockutils [req-aad295d7-4633-4abf-96af-249a941d242f req-9d251f5b-2deb-42c8-b311-8a616c8f48b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:00 np0005597378 nova_compute[238941]: 2026-01-27 14:10:00.105 238945 DEBUG oslo_concurrency.lockutils [req-aad295d7-4633-4abf-96af-249a941d242f req-9d251f5b-2deb-42c8-b311-8a616c8f48b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:00 np0005597378 nova_compute[238941]: 2026-01-27 14:10:00.105 238945 DEBUG oslo_concurrency.lockutils [req-aad295d7-4633-4abf-96af-249a941d242f req-9d251f5b-2deb-42c8-b311-8a616c8f48b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:00 np0005597378 nova_compute[238941]: 2026-01-27 14:10:00.105 238945 DEBUG nova.compute.manager [req-aad295d7-4633-4abf-96af-249a941d242f req-9d251f5b-2deb-42c8-b311-8a616c8f48b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:10:00 np0005597378 nova_compute[238941]: 2026-01-27 14:10:00.106 238945 WARNING nova.compute.manager [req-aad295d7-4633-4abf-96af-249a941d242f req-9d251f5b-2deb-42c8-b311-8a616c8f48b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received unexpected event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:10:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.901850) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523000901891, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1281, "num_deletes": 253, "total_data_size": 1900824, "memory_usage": 1933584, "flush_reason": "Manual Compaction"}
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523000917286, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 1859898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40118, "largest_seqno": 41398, "table_properties": {"data_size": 1853775, "index_size": 3390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12169, "raw_average_key_size": 18, "raw_value_size": 1841430, "raw_average_value_size": 2798, "num_data_blocks": 151, "num_entries": 658, "num_filter_entries": 658, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769522887, "oldest_key_time": 1769522887, "file_creation_time": 1769523000, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 15544 microseconds, and 5555 cpu microseconds.
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.917392) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 1859898 bytes OK
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.917415) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.920694) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.920721) EVENT_LOG_v1 {"time_micros": 1769523000920714, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.920742) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 1894988, prev total WAL file size 1922555, number of live WAL files 2.
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.921779) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1816KB)], [89(9460KB)]
Jan 27 09:10:00 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523000921889, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11547384, "oldest_snapshot_seqno": -1}
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6629 keys, 10803561 bytes, temperature: kUnknown
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523001031087, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 10803561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10756749, "index_size": 29164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 169758, "raw_average_key_size": 25, "raw_value_size": 10635699, "raw_average_value_size": 1604, "num_data_blocks": 1159, "num_entries": 6629, "num_filter_entries": 6629, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523000, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.031500) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10803561 bytes
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.032940) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.6 rd, 98.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(12.0) write-amplify(5.8) OK, records in: 7151, records dropped: 522 output_compression: NoCompression
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.032965) EVENT_LOG_v1 {"time_micros": 1769523001032953, "job": 52, "event": "compaction_finished", "compaction_time_micros": 109305, "compaction_time_cpu_micros": 36842, "output_level": 6, "num_output_files": 1, "total_output_size": 10803561, "num_input_records": 7151, "num_output_records": 6629, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523001033583, "job": 52, "event": "table_file_deletion", "file_number": 91}
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523001035355, "job": 52, "event": "table_file_deletion", "file_number": 89}
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:00.921612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.035529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.035536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.035538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.035540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:10:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:10:01.035542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:10:01 np0005597378 nova_compute[238941]: 2026-01-27 14:10:01.058 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:10:01 np0005597378 nova_compute[238941]: 2026-01-27 14:10:01.059 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:10:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:01Z|01112|binding|INFO|Releasing lport 4d4b2aab-f70a-4144-8265-087681a0ee38 from this chassis (sb_readonly=0)
Jan 27 09:10:01 np0005597378 NetworkManager[48904]: <info>  [1769523001.3094] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Jan 27 09:10:01 np0005597378 nova_compute[238941]: 2026-01-27 14:10:01.309 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:01 np0005597378 NetworkManager[48904]: <info>  [1769523001.3107] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Jan 27 09:10:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:01Z|01113|binding|INFO|Releasing lport 4d4b2aab-f70a-4144-8265-087681a0ee38 from this chassis (sb_readonly=0)
Jan 27 09:10:01 np0005597378 nova_compute[238941]: 2026-01-27 14:10:01.343 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:01 np0005597378 nova_compute[238941]: 2026-01-27 14:10:01.347 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:01 np0005597378 nova_compute[238941]: 2026-01-27 14:10:01.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:01 np0005597378 nova_compute[238941]: 2026-01-27 14:10:01.905 238945 DEBUG nova.compute.manager [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-changed-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:01 np0005597378 nova_compute[238941]: 2026-01-27 14:10:01.905 238945 DEBUG nova.compute.manager [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Refreshing instance network info cache due to event network-changed-2cb1f123-4012-46d4-bbe9-914b25f6f6a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:10:01 np0005597378 nova_compute[238941]: 2026-01-27 14:10:01.906 238945 DEBUG oslo_concurrency.lockutils [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:10:01 np0005597378 nova_compute[238941]: 2026-01-27 14:10:01.906 238945 DEBUG oslo_concurrency.lockutils [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:10:01 np0005597378 nova_compute[238941]: 2026-01-27 14:10:01.906 238945 DEBUG nova.network.neutron [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Refreshing network info cache for port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:10:02 np0005597378 nova_compute[238941]: 2026-01-27 14:10:02.458 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 27 09:10:03 np0005597378 nova_compute[238941]: 2026-01-27 14:10:03.143 238945 DEBUG nova.network.neutron [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updated VIF entry in instance network info cache for port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:10:03 np0005597378 nova_compute[238941]: 2026-01-27 14:10:03.144 238945 DEBUG nova.network.neutron [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updating instance_info_cache with network_info: [{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:10:03 np0005597378 nova_compute[238941]: 2026-01-27 14:10:03.202 238945 DEBUG oslo_concurrency.lockutils [req-59373f93-2f76-4887-8ca9-b90217b6ecd5 req-461da253-0857-47b1-935b-fbe5c15886f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:10:03 np0005597378 nova_compute[238941]: 2026-01-27 14:10:03.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:10:03 np0005597378 nova_compute[238941]: 2026-01-27 14:10:03.479 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 09:10:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:06 np0005597378 nova_compute[238941]: 2026-01-27 14:10:06.263 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:06 np0005597378 nova_compute[238941]: 2026-01-27 14:10:06.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:10:06 np0005597378 nova_compute[238941]: 2026-01-27 14:10:06.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:10:06 np0005597378 nova_compute[238941]: 2026-01-27 14:10:06.399 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:10:06 np0005597378 nova_compute[238941]: 2026-01-27 14:10:06.595 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 204 KiB/s wr, 96 op/s
Jan 27 09:10:07 np0005597378 nova_compute[238941]: 2026-01-27 14:10:07.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:07.828 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:dc:92 10.100.0.2 2001:db8::f816:3eff:fe17:dc92'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe17:dc92/64', 'neutron:device_id': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ef67578-3f69-4f39-a5a2-466e94654d58, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bcf23ed4-8bec-4985-bf23-8dec9fe6105c) old=Port_Binding(mac=['fa:16:3e:17:dc:92 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:10:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:07.829 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bcf23ed4-8bec-4985-bf23-8dec9fe6105c in datapath 03273c18-2cc1-455f-8ffc-28f9813c664b updated#033[00m
Jan 27 09:10:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:07.830 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03273c18-2cc1-455f-8ffc-28f9813c664b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:10:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:07.831 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e2445a17-0e53-4041-9255-c715034357d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 09:10:09 np0005597378 nova_compute[238941]: 2026-01-27 14:10:09.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:10:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:10Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:53:2b 10.100.0.3
Jan 27 09:10:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:10Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:53:2b 10.100.0.3
Jan 27 09:10:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 100 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 87 op/s
Jan 27 09:10:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:11 np0005597378 nova_compute[238941]: 2026-01-27 14:10:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:10:11 np0005597378 nova_compute[238941]: 2026-01-27 14:10:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:10:11 np0005597378 nova_compute[238941]: 2026-01-27 14:10:11.598 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:12 np0005597378 nova_compute[238941]: 2026-01-27 14:10:12.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 100 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.0 MiB/s wr, 74 op/s
Jan 27 09:10:14 np0005597378 nova_compute[238941]: 2026-01-27 14:10:14.354 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:14 np0005597378 nova_compute[238941]: 2026-01-27 14:10:14.355 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:14 np0005597378 nova_compute[238941]: 2026-01-27 14:10:14.465 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:10:14 np0005597378 nova_compute[238941]: 2026-01-27 14:10:14.706 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:14 np0005597378 nova_compute[238941]: 2026-01-27 14:10:14.707 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:14 np0005597378 nova_compute[238941]: 2026-01-27 14:10:14.714 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:10:14 np0005597378 nova_compute[238941]: 2026-01-27 14:10:14.715 238945 INFO nova.compute.claims [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:10:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 119 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Jan 27 09:10:15 np0005597378 nova_compute[238941]: 2026-01-27 14:10:15.039 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:15 np0005597378 nova_compute[238941]: 2026-01-27 14:10:15.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:10:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:10:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801126203' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:10:15 np0005597378 nova_compute[238941]: 2026-01-27 14:10:15.638 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:15 np0005597378 nova_compute[238941]: 2026-01-27 14:10:15.644 238945 DEBUG nova.compute.provider_tree [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:10:15 np0005597378 nova_compute[238941]: 2026-01-27 14:10:15.672 238945 INFO nova.compute.manager [None req-203188f8-05ec-4a9d-812e-38932de9bb45 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Get console output#033[00m
Jan 27 09:10:15 np0005597378 nova_compute[238941]: 2026-01-27 14:10:15.680 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:10:15 np0005597378 nova_compute[238941]: 2026-01-27 14:10:15.684 238945 DEBUG nova.scheduler.client.report [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:10:15 np0005597378 nova_compute[238941]: 2026-01-27 14:10:15.872 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:15 np0005597378 nova_compute[238941]: 2026-01-27 14:10:15.872 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:10:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.064 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.065 238945 DEBUG nova.network.neutron [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.148 238945 INFO nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.168 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.281 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.282 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.282 238945 INFO nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Creating image(s)#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.301 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.323 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.348 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.351 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.422 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.423 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.424 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.424 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.445 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.448 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d9fff719-3828-4c36-8698-604421b7382d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.543 238945 DEBUG nova.policy [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.705 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d9fff719-3828-4c36-8698-604421b7382d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.760 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image d9fff719-3828-4c36-8698-604421b7382d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.831 238945 DEBUG nova.objects.instance [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid d9fff719-3828-4c36-8698-604421b7382d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:10:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.854 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.854 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Ensure instance console log exists: /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.855 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.855 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:16 np0005597378 nova_compute[238941]: 2026-01-27 14:10:16.855 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:10:17
Jan 27 09:10:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:10:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:10:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'volumes', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 27 09:10:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:10:17 np0005597378 nova_compute[238941]: 2026-01-27 14:10:17.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:17 np0005597378 nova_compute[238941]: 2026-01-27 14:10:17.562 238945 INFO nova.compute.manager [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Rebuilding instance#033[00m
Jan 27 09:10:17 np0005597378 nova_compute[238941]: 2026-01-27 14:10:17.676 238945 DEBUG nova.network.neutron [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Successfully created port: 779af42d-d593-45a0-a42d-cf6aa2d34f31 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:10:17 np0005597378 podman[338253]: 2026-01-27 14:10:17.722254182 +0000 UTC m=+0.056431269 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 09:10:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:10:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:10:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:10:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:10:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:10:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:10:17 np0005597378 nova_compute[238941]: 2026-01-27 14:10:17.896 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:10:17 np0005597378 nova_compute[238941]: 2026-01-27 14:10:17.915 238945 DEBUG nova.compute.manager [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:10:17 np0005597378 nova_compute[238941]: 2026-01-27 14:10:17.965 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_requests' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:10:18 np0005597378 nova_compute[238941]: 2026-01-27 14:10:18.002 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:10:18 np0005597378 nova_compute[238941]: 2026-01-27 14:10:18.023 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:10:18 np0005597378 nova_compute[238941]: 2026-01-27 14:10:18.044 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'migration_context' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:10:18 np0005597378 nova_compute[238941]: 2026-01-27 14:10:18.060 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 09:10:18 np0005597378 nova_compute[238941]: 2026-01-27 14:10:18.063 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 09:10:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:10:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:10:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:10:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:10:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:10:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:10:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:10:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:10:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:10:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:10:18 np0005597378 podman[338273]: 2026-01-27 14:10:18.760559515 +0000 UTC m=+0.096961328 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 09:10:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 151 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.2 MiB/s wr, 91 op/s
Jan 27 09:10:18 np0005597378 nova_compute[238941]: 2026-01-27 14:10:18.975 238945 DEBUG nova.network.neutron [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Successfully updated port: 779af42d-d593-45a0-a42d-cf6aa2d34f31 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:10:19 np0005597378 nova_compute[238941]: 2026-01-27 14:10:19.085 238945 DEBUG nova.compute.manager [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-changed-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:19 np0005597378 nova_compute[238941]: 2026-01-27 14:10:19.086 238945 DEBUG nova.compute.manager [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Refreshing instance network info cache due to event network-changed-779af42d-d593-45a0-a42d-cf6aa2d34f31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:10:19 np0005597378 nova_compute[238941]: 2026-01-27 14:10:19.086 238945 DEBUG oslo_concurrency.lockutils [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:10:19 np0005597378 nova_compute[238941]: 2026-01-27 14:10:19.086 238945 DEBUG oslo_concurrency.lockutils [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:10:19 np0005597378 nova_compute[238941]: 2026-01-27 14:10:19.086 238945 DEBUG nova.network.neutron [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Refreshing network info cache for port 779af42d-d593-45a0-a42d-cf6aa2d34f31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:10:19 np0005597378 nova_compute[238941]: 2026-01-27 14:10:19.092 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:10:19 np0005597378 nova_compute[238941]: 2026-01-27 14:10:19.453 238945 DEBUG nova.network.neutron [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:10:19 np0005597378 nova_compute[238941]: 2026-01-27 14:10:19.751 238945 DEBUG nova.network.neutron [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:10:19 np0005597378 nova_compute[238941]: 2026-01-27 14:10:19.771 238945 DEBUG oslo_concurrency.lockutils [req-0a011a76-c9b0-45b2-ac53-18b510473e94 req-4ee4e7e6-4ff6-4408-8fd6-8d82336b1925 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:10:19 np0005597378 nova_compute[238941]: 2026-01-27 14:10:19.772 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:10:19 np0005597378 nova_compute[238941]: 2026-01-27 14:10:19.772 238945 DEBUG nova.network.neutron [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:10:19 np0005597378 nova_compute[238941]: 2026-01-27 14:10:19.931 238945 DEBUG nova.network.neutron [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:10:20 np0005597378 kernel: tap2cb1f123-40 (unregistering): left promiscuous mode
Jan 27 09:10:20 np0005597378 NetworkManager[48904]: <info>  [1769523020.2843] device (tap2cb1f123-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:10:20 np0005597378 nova_compute[238941]: 2026-01-27 14:10:20.293 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:20 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:20Z|01114|binding|INFO|Releasing lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 from this chassis (sb_readonly=0)
Jan 27 09:10:20 np0005597378 nova_compute[238941]: 2026-01-27 14:10:20.295 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:20 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:20Z|01115|binding|INFO|Setting lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 down in Southbound
Jan 27 09:10:20 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:20Z|01116|binding|INFO|Removing iface tap2cb1f123-40 ovn-installed in OVS
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.303 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:53:2b 10.100.0.3'], port_security=['fa:16:3e:9a:53:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a3eb35a-b675-4084-a737-24918aecfd12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75da836f-929f-4646-940e-3cd4153d5aef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ce1c71d8-1f6c-4191-8aaf-3bb4bc201711', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4385daa-faf2-4073-8fdd-d03d3d8a6a22, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2cb1f123-4012-46d4-bbe9-914b25f6f6a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.305 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 in datapath 75da836f-929f-4646-940e-3cd4153d5aef unbound from our chassis#033[00m
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.307 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 75da836f-929f-4646-940e-3cd4153d5aef, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.308 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2b60ac-227b-449d-ab2f-0c2b6aac9336]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.309 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef namespace which is not needed anymore#033[00m
Jan 27 09:10:20 np0005597378 nova_compute[238941]: 2026-01-27 14:10:20.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:20 np0005597378 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Jan 27 09:10:20 np0005597378 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006e.scope: Consumed 13.652s CPU time.
Jan 27 09:10:20 np0005597378 systemd-machined[207425]: Machine qemu-139-instance-0000006e terminated.
Jan 27 09:10:20 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [NOTICE]   (338007) : haproxy version is 2.8.14-c23fe91
Jan 27 09:10:20 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [NOTICE]   (338007) : path to executable is /usr/sbin/haproxy
Jan 27 09:10:20 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [WARNING]  (338007) : Exiting Master process...
Jan 27 09:10:20 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [WARNING]  (338007) : Exiting Master process...
Jan 27 09:10:20 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [ALERT]    (338007) : Current worker (338009) exited with code 143 (Terminated)
Jan 27 09:10:20 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[338002]: [WARNING]  (338007) : All workers exited. Exiting... (0)
Jan 27 09:10:20 np0005597378 systemd[1]: libpod-41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92.scope: Deactivated successfully.
Jan 27 09:10:20 np0005597378 podman[338324]: 2026-01-27 14:10:20.448230175 +0000 UTC m=+0.049172223 container died 41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 09:10:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92-userdata-shm.mount: Deactivated successfully.
Jan 27 09:10:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b543109219a18c6436ec5cd74514df89c924e904cff182235f32d79336a8aeb9-merged.mount: Deactivated successfully.
Jan 27 09:10:20 np0005597378 podman[338324]: 2026-01-27 14:10:20.496571255 +0000 UTC m=+0.097513313 container cleanup 41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:10:20 np0005597378 systemd[1]: libpod-conmon-41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92.scope: Deactivated successfully.
Jan 27 09:10:20 np0005597378 podman[338354]: 2026-01-27 14:10:20.56482125 +0000 UTC m=+0.047029336 container remove 41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.571 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a46ca9c-6551-4495-bc12-0e568f4c573d]: (4, ('Tue Jan 27 02:10:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef (41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92)\n41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92\nTue Jan 27 02:10:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef (41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92)\n41fa4333da1209a757154dc29027c2a21a316efa3c08ffbc38bc7e56c9771f92\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.572 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e9824a-ea78-4acc-b769-6435051c8e5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.574 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75da836f-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:20 np0005597378 nova_compute[238941]: 2026-01-27 14:10:20.576 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:20 np0005597378 kernel: tap75da836f-90: left promiscuous mode
Jan 27 09:10:20 np0005597378 nova_compute[238941]: 2026-01-27 14:10:20.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.596 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b4541fa0-4242-4170-8421-3011880fd928]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.618 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[825bc2af-725e-4071-9eb5-e30fab0c3705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.620 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bc4cc8-74ac-4742-bde3-2563d16e381b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.642 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[27c6a116-87c2-4d5a-8581-5c7d119b148c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563729, 'reachable_time': 30835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338383, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.645 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:10:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:20.645 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[361f167f-cfc8-4840-8b73-b57c49596dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:20 np0005597378 systemd[1]: run-netns-ovnmeta\x2d75da836f\x2d929f\x2d4646\x2d940e\x2d3cd4153d5aef.mount: Deactivated successfully.
Jan 27 09:10:20 np0005597378 nova_compute[238941]: 2026-01-27 14:10:20.723 238945 DEBUG nova.compute.manager [req-867bcfda-8584-4dad-9ac4-0ef41ee8bd06 req-5ce29202-6937-4690-b4a3-28d699ada1c3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-unplugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:20 np0005597378 nova_compute[238941]: 2026-01-27 14:10:20.724 238945 DEBUG oslo_concurrency.lockutils [req-867bcfda-8584-4dad-9ac4-0ef41ee8bd06 req-5ce29202-6937-4690-b4a3-28d699ada1c3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:20 np0005597378 nova_compute[238941]: 2026-01-27 14:10:20.725 238945 DEBUG oslo_concurrency.lockutils [req-867bcfda-8584-4dad-9ac4-0ef41ee8bd06 req-5ce29202-6937-4690-b4a3-28d699ada1c3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:20 np0005597378 nova_compute[238941]: 2026-01-27 14:10:20.725 238945 DEBUG oslo_concurrency.lockutils [req-867bcfda-8584-4dad-9ac4-0ef41ee8bd06 req-5ce29202-6937-4690-b4a3-28d699ada1c3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:20 np0005597378 nova_compute[238941]: 2026-01-27 14:10:20.725 238945 DEBUG nova.compute.manager [req-867bcfda-8584-4dad-9ac4-0ef41ee8bd06 req-5ce29202-6937-4690-b4a3-28d699ada1c3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-unplugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:10:20 np0005597378 nova_compute[238941]: 2026-01-27 14:10:20.726 238945 WARNING nova.compute.manager [req-867bcfda-8584-4dad-9ac4-0ef41ee8bd06 req-5ce29202-6937-4690-b4a3-28d699ada1c3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received unexpected event network-vif-unplugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with vm_state active and task_state rebuilding.#033[00m
Jan 27 09:10:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Jan 27 09:10:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.078 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance shutdown successfully after 3 seconds.#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.085 238945 INFO nova.virt.libvirt.driver [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance destroyed successfully.#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.089 238945 INFO nova.virt.libvirt.driver [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance destroyed successfully.#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.090 238945 DEBUG nova.virt.libvirt.vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1195479189',display_name='tempest-TestNetworkAdvancedServerOps-server-1195479189',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1195479189',id=110,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+v6IIeXMH2K9HXQKoJGfMEJBIPmlwPbH4KjiTbOA+pygoRKT84WN1ACxViMYGsiCqvcSzyJoye+rKEd+WJzaq8rjcwrNm7VY4SUOaosw3oosbY5c4vMWnh4H/Xg8Gbpw==',key_name='tempest-TestNetworkAdvancedServerOps-1513209013',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:09:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-ve6140ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:16Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=5a3eb35a-b675-4084-a737-24918aecfd12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.091 238945 DEBUG nova.network.os_vif_util [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.092 238945 DEBUG nova.network.os_vif_util [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.092 238945 DEBUG os_vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.095 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.096 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb1f123-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.097 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.102 238945 INFO os_vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40')#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.241 238945 DEBUG nova.network.neutron [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updating instance_info_cache with network_info: [{"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.264 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.265 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Instance network_info: |[{"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.268 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Start _get_guest_xml network_info=[{"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.273 238945 WARNING nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.279 238945 DEBUG nova.virt.libvirt.host [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.280 238945 DEBUG nova.virt.libvirt.host [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.284 238945 DEBUG nova.virt.libvirt.host [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.285 238945 DEBUG nova.virt.libvirt.host [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.285 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.285 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.286 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.286 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.286 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.286 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.287 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.287 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.287 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.287 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.287 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.288 238945 DEBUG nova.virt.hardware [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.291 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.394 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deleting instance files /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12_del#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.395 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deletion of /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12_del complete#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.567 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.568 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Creating image(s)#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.588 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.609 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.631 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.635 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.702 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.704 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.705 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.705 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "3912a4d8b71ba799f3af029b116f734f2c6341ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.726 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.730 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5a3eb35a-b675-4084-a737-24918aecfd12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:10:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2410653673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.859 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.888 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:21 np0005597378 nova_compute[238941]: 2026-01-27 14:10:21.899 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.014 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea 5a3eb35a-b675-4084-a737-24918aecfd12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.098 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] resizing rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.183 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.183 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Ensure instance console log exists: /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.184 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.184 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.184 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.187 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Start _get_guest_xml network_info=[{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.190 238945 WARNING nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.228 238945 DEBUG nova.virt.libvirt.host [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.228 238945 DEBUG nova.virt.libvirt.host [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.276 238945 DEBUG nova.virt.libvirt.host [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.277 238945 DEBUG nova.virt.libvirt.host [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.277 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.277 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:09Z,direct_url=<?>,disk_format='qcow2',id=0ee8954b-88fb-4f95-ac2f-0ee07bab09cc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.278 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.278 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.279 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.279 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.279 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.279 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.279 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.279 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.280 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.280 238945 DEBUG nova.virt.hardware [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.280 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.378 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:10:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3767653675' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.470 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.473 238945 DEBUG nova.virt.libvirt.vif [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2105858304',display_name='tempest-TestGettingAddress-server-2105858304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2105858304',id=111,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrnZawttJ9rxf4d7VbQ4gL91LgaB1z1r/xEUpx5mFvSKT0Aa62PdqFndwxjTEee/H4izKipNuAxh3gARhoihK2NWIJ6A2c4emnHhXuH9NTMplWR1hzf4srQVnSQwLB3CQ==',key_name='tempest-TestGettingAddress-1512698164',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-lhof0svg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:16Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=d9fff719-3828-4c36-8698-604421b7382d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.473 238945 DEBUG nova.network.os_vif_util [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.475 238945 DEBUG nova.network.os_vif_util [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.477 238945 DEBUG nova.objects.instance [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid d9fff719-3828-4c36-8698-604421b7382d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.494 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  <uuid>d9fff719-3828-4c36-8698-604421b7382d</uuid>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  <name>instance-0000006f</name>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-2105858304</nova:name>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:10:21</nova:creationTime>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:        <nova:port uuid="779af42d-d593-45a0-a42d-cf6aa2d34f31">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe03:2fa4" ipVersion="6"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <entry name="serial">d9fff719-3828-4c36-8698-604421b7382d</entry>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <entry name="uuid">d9fff719-3828-4c36-8698-604421b7382d</entry>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d9fff719-3828-4c36-8698-604421b7382d_disk">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d9fff719-3828-4c36-8698-604421b7382d_disk.config">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:03:2f:a4"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <target dev="tap779af42d-d5"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/console.log" append="off"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:10:22 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:10:22 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:10:22 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:10:22 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.495 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Preparing to wait for external event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.495 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.496 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.496 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.497 238945 DEBUG nova.virt.libvirt.vif [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2105858304',display_name='tempest-TestGettingAddress-server-2105858304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2105858304',id=111,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrnZawttJ9rxf4d7VbQ4gL91LgaB1z1r/xEUpx5mFvSKT0Aa62PdqFndwxjTEee/H4izKipNuAxh3gARhoihK2NWIJ6A2c4emnHhXuH9NTMplWR1hzf4srQVnSQwLB3CQ==',key_name='tempest-TestGettingAddress-1512698164',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-lhof0svg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:16Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=d9fff719-3828-4c36-8698-604421b7382d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.498 238945 DEBUG nova.network.os_vif_util [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.499 238945 DEBUG nova.network.os_vif_util [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.500 238945 DEBUG os_vif [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.501 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.501 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.502 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.505 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.506 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap779af42d-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.506 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap779af42d-d5, col_values=(('external_ids', {'iface-id': '779af42d-d593-45a0-a42d-cf6aa2d34f31', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:2f:a4', 'vm-uuid': 'd9fff719-3828-4c36-8698-604421b7382d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:22 np0005597378 NetworkManager[48904]: <info>  [1769523022.5090] manager: (tap779af42d-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.510 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.514 238945 INFO os_vif [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5')#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.594 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.595 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.595 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:03:2f:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.596 238945 INFO nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Using config drive#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.634 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 2.9 MiB/s wr, 79 op/s
Jan 27 09:10:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:10:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/298325162' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.932 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.967 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:22 np0005597378 nova_compute[238941]: 2026-01-27 14:10:22.972 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.027 238945 DEBUG nova.compute.manager [req-db738060-1135-4aa6-952e-7b3cdeab6172 req-99f3f619-efae-458d-aeba-ccd2194f4209 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.028 238945 DEBUG oslo_concurrency.lockutils [req-db738060-1135-4aa6-952e-7b3cdeab6172 req-99f3f619-efae-458d-aeba-ccd2194f4209 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.028 238945 DEBUG oslo_concurrency.lockutils [req-db738060-1135-4aa6-952e-7b3cdeab6172 req-99f3f619-efae-458d-aeba-ccd2194f4209 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.028 238945 DEBUG oslo_concurrency.lockutils [req-db738060-1135-4aa6-952e-7b3cdeab6172 req-99f3f619-efae-458d-aeba-ccd2194f4209 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.028 238945 DEBUG nova.compute.manager [req-db738060-1135-4aa6-952e-7b3cdeab6172 req-99f3f619-efae-458d-aeba-ccd2194f4209 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.028 238945 WARNING nova.compute.manager [req-db738060-1135-4aa6-952e-7b3cdeab6172 req-99f3f619-efae-458d-aeba-ccd2194f4209 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received unexpected event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.032 238945 INFO nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Creating config drive at /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/disk.config#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.036 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmiq7ctmo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.193 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmiq7ctmo" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.220 238945 DEBUG nova.storage.rbd_utils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image d9fff719-3828-4c36-8698-604421b7382d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.225 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/disk.config d9fff719-3828-4c36-8698-604421b7382d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.346 238945 DEBUG oslo_concurrency.processutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/disk.config d9fff719-3828-4c36-8698-604421b7382d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.347 238945 INFO nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Deleting local config drive /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d/disk.config because it was imported into RBD.#033[00m
Jan 27 09:10:23 np0005597378 kernel: tap779af42d-d5: entered promiscuous mode
Jan 27 09:10:23 np0005597378 NetworkManager[48904]: <info>  [1769523023.3883] manager: (tap779af42d-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/454)
Jan 27 09:10:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:23Z|01117|binding|INFO|Claiming lport 779af42d-d593-45a0-a42d-cf6aa2d34f31 for this chassis.
Jan 27 09:10:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:23Z|01118|binding|INFO|779af42d-d593-45a0-a42d-cf6aa2d34f31: Claiming fa:16:3e:03:2f:a4 10.100.0.3 2001:db8::f816:3eff:fe03:2fa4
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.399 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:2f:a4 10.100.0.3 2001:db8::f816:3eff:fe03:2fa4'], port_security=['fa:16:3e:03:2f:a4 10.100.0.3 2001:db8::f816:3eff:fe03:2fa4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe03:2fa4/64', 'neutron:device_id': 'd9fff719-3828-4c36-8698-604421b7382d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6ab7306f-f0ac-4893-b4ef-6c4f73785c72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ef67578-3f69-4f39-a5a2-466e94654d58, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=779af42d-d593-45a0-a42d-cf6aa2d34f31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.400 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 779af42d-d593-45a0-a42d-cf6aa2d34f31 in datapath 03273c18-2cc1-455f-8ffc-28f9813c664b bound to our chassis#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.402 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03273c18-2cc1-455f-8ffc-28f9813c664b#033[00m
Jan 27 09:10:23 np0005597378 systemd-udevd[338765]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:10:23 np0005597378 systemd-machined[207425]: New machine qemu-140-instance-0000006f.
Jan 27 09:10:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:23Z|01119|binding|INFO|Setting lport 779af42d-d593-45a0-a42d-cf6aa2d34f31 ovn-installed in OVS
Jan 27 09:10:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:23Z|01120|binding|INFO|Setting lport 779af42d-d593-45a0-a42d-cf6aa2d34f31 up in Southbound
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.421 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.423 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[277775a5-2d3e-4e3d-a0e8-21a5fc6ca713]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.424 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap03273c18-21 in ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.427 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap03273c18-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.427 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2fde21-b203-4e36-8085-d13268a39d05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 systemd[1]: Started Virtual Machine qemu-140-instance-0000006f.
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.429 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0569cf6d-e90a-4d44-b9aa-b6ad3c001f36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 NetworkManager[48904]: <info>  [1769523023.4366] device (tap779af42d-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:10:23 np0005597378 NetworkManager[48904]: <info>  [1769523023.4377] device (tap779af42d-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.450 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca8c3f4-20ec-4435-89da-5298be1e3f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.478 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40a6efa5-8a94-461a-afd1-728b79db92a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.521 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf9ef4d-983a-4c98-9b72-2902d1a04602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 systemd-udevd[338768]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.527 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fb96f6f5-dc88-4389-a8a1-f9f38782e57e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 NetworkManager[48904]: <info>  [1769523023.5286] manager: (tap03273c18-20): new Veth device (/org/freedesktop/NetworkManager/Devices/455)
Jan 27 09:10:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:10:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1285644882' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.574 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8949e9d6-0386-41cc-acd4-2bbe2a28f3bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.578 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad4d0fe-b7cd-400b-8f55-a2e5a2e984ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.579 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.581 238945 DEBUG nova.virt.libvirt.vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1195479189',display_name='tempest-TestNetworkAdvancedServerOps-server-1195479189',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1195479189',id=110,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+v6IIeXMH2K9HXQKoJGfMEJBIPmlwPbH4KjiTbOA+pygoRKT84WN1ACxViMYGsiCqvcSzyJoye+rKEd+WJzaq8rjcwrNm7VY4SUOaosw3oosbY5c4vMWnh4H/Xg8Gbpw==',key_name='tempest-TestNetworkAdvancedServerOps-1513209013',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:09:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-ve6140ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:21Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=5a3eb35a-b675-4084-a737-24918aecfd12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.581 238945 DEBUG nova.network.os_vif_util [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.582 238945 DEBUG nova.network.os_vif_util [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.584 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  <uuid>5a3eb35a-b675-4084-a737-24918aecfd12</uuid>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  <name>instance-0000006e</name>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1195479189</nova:name>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:10:22</nova:creationTime>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:        <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:        <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="0ee8954b-88fb-4f95-ac2f-0ee07bab09cc"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:        <nova:port uuid="2cb1f123-4012-46d4-bbe9-914b25f6f6a3">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <entry name="serial">5a3eb35a-b675-4084-a737-24918aecfd12</entry>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <entry name="uuid">5a3eb35a-b675-4084-a737-24918aecfd12</entry>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5a3eb35a-b675-4084-a737-24918aecfd12_disk">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5a3eb35a-b675-4084-a737-24918aecfd12_disk.config">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:9a:53:2b"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <target dev="tap2cb1f123-40"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/console.log" append="off"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:10:23 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:10:23 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:10:23 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:10:23 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.585 238945 DEBUG nova.virt.libvirt.vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1195479189',display_name='tempest-TestNetworkAdvancedServerOps-server-1195479189',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1195479189',id=110,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+v6IIeXMH2K9HXQKoJGfMEJBIPmlwPbH4KjiTbOA+pygoRKT84WN1ACxViMYGsiCqvcSzyJoye+rKEd+WJzaq8rjcwrNm7VY4SUOaosw3oosbY5c4vMWnh4H/Xg8Gbpw==',key_name='tempest-TestNetworkAdvancedServerOps-1513209013',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:09:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-ve6140ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:21Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=5a3eb35a-b675-4084-a737-24918aecfd12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.585 238945 DEBUG nova.network.os_vif_util [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:10:23 np0005597378 NetworkManager[48904]: <info>  [1769523023.5916] manager: (tap2cb1f123-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.586 238945 DEBUG nova.network.os_vif_util [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.586 238945 DEBUG os_vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.586 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.587 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.587 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.588 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.589 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cb1f123-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.589 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2cb1f123-40, col_values=(('external_ids', {'iface-id': '2cb1f123-4012-46d4-bbe9-914b25f6f6a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:53:2b', 'vm-uuid': '5a3eb35a-b675-4084-a737-24918aecfd12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.590 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.599 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.600 238945 INFO os_vif [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40')#033[00m
Jan 27 09:10:23 np0005597378 NetworkManager[48904]: <info>  [1769523023.6209] device (tap03273c18-20): carrier: link connected
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.627 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c604fc6a-d2f5-41c7-87cb-67a0e5899cbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.653 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.653 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.653 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No VIF found with MAC fa:16:3e:9a:53:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.654 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Using config drive#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.654 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3c366495-141a-45b5-b289-290a5e916e0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03273c18-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566324, 'reachable_time': 34458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338801, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.681 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db74a2ce-5e32-43ee-8d02-5af8af2099ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:dc92'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566324, 'tstamp': 566324}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338816, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.680 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.701 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.704 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f97fd082-f7ae-4de1-b278-c99a912ee822]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03273c18-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566324, 'reachable_time': 34458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338821, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.727 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'keypairs' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.741 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d248a988-1cf0-42cc-8cfe-e554caed6c6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.794 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[afb1eb39-2ba6-454b-a52c-ed6d320428c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.796 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03273c18-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.796 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.796 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03273c18-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.797 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:23 np0005597378 NetworkManager[48904]: <info>  [1769523023.7992] manager: (tap03273c18-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.801 238945 DEBUG nova.compute.manager [req-f97bcb60-3746-4c77-b40b-53bad8f7b34e req-3e70f571-4344-42b9-8b3a-6fef04eacf39 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:23 np0005597378 kernel: tap03273c18-20: entered promiscuous mode
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.802 238945 DEBUG oslo_concurrency.lockutils [req-f97bcb60-3746-4c77-b40b-53bad8f7b34e req-3e70f571-4344-42b9-8b3a-6fef04eacf39 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.802 238945 DEBUG oslo_concurrency.lockutils [req-f97bcb60-3746-4c77-b40b-53bad8f7b34e req-3e70f571-4344-42b9-8b3a-6fef04eacf39 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.802 238945 DEBUG oslo_concurrency.lockutils [req-f97bcb60-3746-4c77-b40b-53bad8f7b34e req-3e70f571-4344-42b9-8b3a-6fef04eacf39 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.802 238945 DEBUG nova.compute.manager [req-f97bcb60-3746-4c77-b40b-53bad8f7b34e req-3e70f571-4344-42b9-8b3a-6fef04eacf39 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Processing event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.803 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.804 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03273c18-20, col_values=(('external_ids', {'iface-id': 'bcf23ed4-8bec-4985-bf23-8dec9fe6105c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:23Z|01121|binding|INFO|Releasing lport bcf23ed4-8bec-4985-bf23-8dec9fe6105c from this chassis (sb_readonly=0)
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.819 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.821 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/03273c18-2cc1-455f-8ffc-28f9813c664b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/03273c18-2cc1-455f-8ffc-28f9813c664b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.822 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f15a49c9-44cd-411d-b335-a065d414ea53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.822 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-03273c18-2cc1-455f-8ffc-28f9813c664b
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/03273c18-2cc1-455f-8ffc-28f9813c664b.pid.haproxy
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 03273c18-2cc1-455f-8ffc-28f9813c664b
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:10:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:23.823 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'env', 'PROCESS_TAG=haproxy-03273c18-2cc1-455f-8ffc-28f9813c664b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/03273c18-2cc1-455f-8ffc-28f9813c664b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.856 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523023.855472, d9fff719-3828-4c36-8698-604421b7382d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.856 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] VM Started (Lifecycle Event)#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.857 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.860 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.863 238945 INFO nova.virt.libvirt.driver [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] Instance spawned successfully.#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.863 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.881 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.886 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.889 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.890 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.890 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.890 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.891 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.891 238945 DEBUG nova.virt.libvirt.driver [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.922 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.922 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523023.8557222, d9fff719-3828-4c36-8698-604421b7382d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.922 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.941 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.944 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523023.8596072, d9fff719-3828-4c36-8698-604421b7382d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.944 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.954 238945 INFO nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Took 7.67 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.955 238945 DEBUG nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.985 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:10:23 np0005597378 nova_compute[238941]: 2026-01-27 14:10:23.993 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:10:24 np0005597378 nova_compute[238941]: 2026-01-27 14:10:24.026 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:10:24 np0005597378 nova_compute[238941]: 2026-01-27 14:10:24.043 238945 INFO nova.compute.manager [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Took 9.37 seconds to build instance.#033[00m
Jan 27 09:10:24 np0005597378 nova_compute[238941]: 2026-01-27 14:10:24.065 238945 DEBUG oslo_concurrency.lockutils [None req-17ef0cc2-431c-444c-a6b4-b4a9cc6a4379 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:24 np0005597378 podman[338900]: 2026-01-27 14:10:24.182257371 +0000 UTC m=+0.046702386 container create ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 09:10:24 np0005597378 systemd[1]: Started libpod-conmon-ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97.scope.
Jan 27 09:10:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:10:24 np0005597378 podman[338900]: 2026-01-27 14:10:24.159556691 +0000 UTC m=+0.024001706 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:10:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755fc44f600491e92dad7ef6678a16baca6db4e3ed1c446f070a15f9116340f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:24 np0005597378 podman[338900]: 2026-01-27 14:10:24.277618125 +0000 UTC m=+0.142063150 container init ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:10:24 np0005597378 podman[338900]: 2026-01-27 14:10:24.283108122 +0000 UTC m=+0.147553117 container start ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 09:10:24 np0005597378 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [NOTICE]   (338920) : New worker (338922) forked
Jan 27 09:10:24 np0005597378 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [NOTICE]   (338920) : Loading success.
Jan 27 09:10:24 np0005597378 nova_compute[238941]: 2026-01-27 14:10:24.711 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Creating config drive at /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config#033[00m
Jan 27 09:10:24 np0005597378 nova_compute[238941]: 2026-01-27 14:10:24.717 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9j0trtrh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 153 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 4.6 MiB/s wr, 100 op/s
Jan 27 09:10:24 np0005597378 nova_compute[238941]: 2026-01-27 14:10:24.860 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9j0trtrh" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:24 np0005597378 nova_compute[238941]: 2026-01-27 14:10:24.883 238945 DEBUG nova.storage.rbd_utils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:24 np0005597378 nova_compute[238941]: 2026-01-27 14:10:24.887 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.038 238945 DEBUG oslo_concurrency.processutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config 5a3eb35a-b675-4084-a737-24918aecfd12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.039 238945 INFO nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deleting local config drive /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12/disk.config because it was imported into RBD.#033[00m
Jan 27 09:10:25 np0005597378 kernel: tap2cb1f123-40: entered promiscuous mode
Jan 27 09:10:25 np0005597378 NetworkManager[48904]: <info>  [1769523025.0805] manager: (tap2cb1f123-40): new Tun device (/org/freedesktop/NetworkManager/Devices/458)
Jan 27 09:10:25 np0005597378 systemd-udevd[338793]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:10:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:25Z|01122|binding|INFO|Claiming lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for this chassis.
Jan 27 09:10:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:25Z|01123|binding|INFO|2cb1f123-4012-46d4-bbe9-914b25f6f6a3: Claiming fa:16:3e:9a:53:2b 10.100.0.3
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.081 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:25 np0005597378 NetworkManager[48904]: <info>  [1769523025.0942] device (tap2cb1f123-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:10:25 np0005597378 NetworkManager[48904]: <info>  [1769523025.0952] device (tap2cb1f123-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.096 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:53:2b 10.100.0.3'], port_security=['fa:16:3e:9a:53:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a3eb35a-b675-4084-a737-24918aecfd12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75da836f-929f-4646-940e-3cd4153d5aef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ce1c71d8-1f6c-4191-8aaf-3bb4bc201711', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4385daa-faf2-4073-8fdd-d03d3d8a6a22, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2cb1f123-4012-46d4-bbe9-914b25f6f6a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.097 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 in datapath 75da836f-929f-4646-940e-3cd4153d5aef bound to our chassis#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.098 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 75da836f-929f-4646-940e-3cd4153d5aef#033[00m
Jan 27 09:10:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:25Z|01124|binding|INFO|Setting lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 ovn-installed in OVS
Jan 27 09:10:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:25Z|01125|binding|INFO|Setting lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 up in Southbound
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.100 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:25 np0005597378 systemd-machined[207425]: New machine qemu-141-instance-0000006e.
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.112 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44cad314-53d6-4ffa-a078-bf2e8f56d754]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.113 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap75da836f-91 in ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.114 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap75da836f-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af6561b1-c3f1-4916-a319-3a6195c5def7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6911c680-3cdf-4aa2-943b-7a19b7146367]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 systemd[1]: Started Virtual Machine qemu-141-instance-0000006e.
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.129 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[73f8924f-0304-4470-9287-eb2dce2efb6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.143 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[62146cf4-6aae-4d13-ba66-3d2e18806e97]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.167 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[139298b0-ab13-4168-9296-4382a67d9685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 NetworkManager[48904]: <info>  [1769523025.1742] manager: (tap75da836f-90): new Veth device (/org/freedesktop/NetworkManager/Devices/459)
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.173 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3137cf8b-c328-4460-8225-17931078f015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.211 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[212cf405-520f-4a87-81e6-9f973df76d97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.214 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d20d7ab3-60ef-4da1-a39d-8ba02c7b02db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 NetworkManager[48904]: <info>  [1769523025.2360] device (tap75da836f-90): carrier: link connected
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.240 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[935dffad-5a2e-47be-92b7-1f1595f045c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.254 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ef45a6d9-6c95-4d60-9f7b-7e552295e882]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75da836f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:8d:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566486, 'reachable_time': 39920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339002, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.268 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d061be29-ff7e-42c0-bbc0-72ea7c98509b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:8d07'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566486, 'tstamp': 566486}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339003, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.281 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[82fbc48e-f0cb-4506-b551-5d9829d37343]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75da836f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:8d:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566486, 'reachable_time': 39920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339004, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.304 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04d9e9b6-3f23-4d74-af6c-c7a526ab3cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.372 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[661b918d-e036-41d2-b55d-8ddd6c1504fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.375 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75da836f-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.378 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.380 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75da836f-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:25 np0005597378 NetworkManager[48904]: <info>  [1769523025.3868] manager: (tap75da836f-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Jan 27 09:10:25 np0005597378 kernel: tap75da836f-90: entered promiscuous mode
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.388 238945 DEBUG nova.compute.manager [req-2ce38e35-5dc1-4c0d-bf70-6aa7014d5ade req-4b671a4f-3f1e-434d-a374-8853d6ed97b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.389 238945 DEBUG oslo_concurrency.lockutils [req-2ce38e35-5dc1-4c0d-bf70-6aa7014d5ade req-4b671a4f-3f1e-434d-a374-8853d6ed97b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.389 238945 DEBUG oslo_concurrency.lockutils [req-2ce38e35-5dc1-4c0d-bf70-6aa7014d5ade req-4b671a4f-3f1e-434d-a374-8853d6ed97b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.389 238945 DEBUG oslo_concurrency.lockutils [req-2ce38e35-5dc1-4c0d-bf70-6aa7014d5ade req-4b671a4f-3f1e-434d-a374-8853d6ed97b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.389 238945 DEBUG nova.compute.manager [req-2ce38e35-5dc1-4c0d-bf70-6aa7014d5ade req-4b671a4f-3f1e-434d-a374-8853d6ed97b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.389 238945 WARNING nova.compute.manager [req-2ce38e35-5dc1-4c0d-bf70-6aa7014d5ade req-4b671a4f-3f1e-434d-a374-8853d6ed97b2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received unexpected event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.390 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.392 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap75da836f-90, col_values=(('external_ids', {'iface-id': '4d4b2aab-f70a-4144-8265-087681a0ee38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.394 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:25Z|01126|binding|INFO|Releasing lport 4d4b2aab-f70a-4144-8265-087681a0ee38 from this chassis (sb_readonly=0)
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.395 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/75da836f-929f-4646-940e-3cd4153d5aef.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/75da836f-929f-4646-940e-3cd4153d5aef.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.408 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6cbbcf4e-3753-4648-8ee4-bba958d689c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.409 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-75da836f-929f-4646-940e-3cd4153d5aef
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/75da836f-929f-4646-940e-3cd4153d5aef.pid.haproxy
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 75da836f-929f-4646-940e-3cd4153d5aef
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:10:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:25.410 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'env', 'PROCESS_TAG=haproxy-75da836f-929f-4646-940e-3cd4153d5aef', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/75da836f-929f-4646-940e-3cd4153d5aef.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:10:25 np0005597378 podman[339036]: 2026-01-27 14:10:25.807889415 +0000 UTC m=+0.086232109 container create 49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:10:25 np0005597378 podman[339036]: 2026-01-27 14:10:25.754377606 +0000 UTC m=+0.032720320 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:10:25 np0005597378 systemd[1]: Started libpod-conmon-49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8.scope.
Jan 27 09:10:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:10:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf37ff90630e079e77dcab00b067b2ebbab260d68bf804b82e874bcd474df34/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:25 np0005597378 podman[339036]: 2026-01-27 14:10:25.890755823 +0000 UTC m=+0.169098537 container init 49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:10:25 np0005597378 podman[339036]: 2026-01-27 14:10:25.896244231 +0000 UTC m=+0.174586925 container start 49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:10:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.913 238945 DEBUG nova.compute.manager [req-c3e870a8-c5f6-4d59-9762-eb560da8875b req-df57bd09-ee3d-442b-9aac-fa5479bcb552 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.913 238945 DEBUG oslo_concurrency.lockutils [req-c3e870a8-c5f6-4d59-9762-eb560da8875b req-df57bd09-ee3d-442b-9aac-fa5479bcb552 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.914 238945 DEBUG oslo_concurrency.lockutils [req-c3e870a8-c5f6-4d59-9762-eb560da8875b req-df57bd09-ee3d-442b-9aac-fa5479bcb552 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.914 238945 DEBUG oslo_concurrency.lockutils [req-c3e870a8-c5f6-4d59-9762-eb560da8875b req-df57bd09-ee3d-442b-9aac-fa5479bcb552 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.914 238945 DEBUG nova.compute.manager [req-c3e870a8-c5f6-4d59-9762-eb560da8875b req-df57bd09-ee3d-442b-9aac-fa5479bcb552 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] No waiting events found dispatching network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.914 238945 WARNING nova.compute.manager [req-c3e870a8-c5f6-4d59-9762-eb560da8875b req-df57bd09-ee3d-442b-9aac-fa5479bcb552 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received unexpected event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:10:25 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [NOTICE]   (339096) : New worker (339098) forked
Jan 27 09:10:25 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [NOTICE]   (339096) : Loading success.
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.965 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 5a3eb35a-b675-4084-a737-24918aecfd12 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.966 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523025.9654796, 5a3eb35a-b675-4084-a737-24918aecfd12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.966 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.968 238945 DEBUG nova.compute.manager [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.969 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.971 238945 INFO nova.virt.libvirt.driver [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance spawned successfully.#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.971 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.984 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.987 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.994 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.995 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.995 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.995 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.996 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:25 np0005597378 nova_compute[238941]: 2026-01-27 14:10:25.996 238945 DEBUG nova.virt.libvirt.driver [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:26 np0005597378 nova_compute[238941]: 2026-01-27 14:10:26.030 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 09:10:26 np0005597378 nova_compute[238941]: 2026-01-27 14:10:26.030 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523025.9691405, 5a3eb35a-b675-4084-a737-24918aecfd12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:10:26 np0005597378 nova_compute[238941]: 2026-01-27 14:10:26.030 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] VM Started (Lifecycle Event)#033[00m
Jan 27 09:10:26 np0005597378 nova_compute[238941]: 2026-01-27 14:10:26.076 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:10:26 np0005597378 nova_compute[238941]: 2026-01-27 14:10:26.083 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:10:26 np0005597378 nova_compute[238941]: 2026-01-27 14:10:26.091 238945 DEBUG nova.compute.manager [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:10:26 np0005597378 nova_compute[238941]: 2026-01-27 14:10:26.123 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 27 09:10:26 np0005597378 nova_compute[238941]: 2026-01-27 14:10:26.164 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:26 np0005597378 nova_compute[238941]: 2026-01-27 14:10:26.164 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:26 np0005597378 nova_compute[238941]: 2026-01-27 14:10:26.164 238945 DEBUG nova.objects.instance [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 27 09:10:26 np0005597378 nova_compute[238941]: 2026-01-27 14:10:26.219 238945 DEBUG oslo_concurrency.lockutils [None req-e99cd37c-62ad-421a-a474-d047a529c196 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 840 KiB/s rd, 3.6 MiB/s wr, 122 op/s
Jan 27 09:10:27 np0005597378 nova_compute[238941]: 2026-01-27 14:10:27.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:27 np0005597378 nova_compute[238941]: 2026-01-27 14:10:27.538 238945 DEBUG nova.compute.manager [req-1bb510cd-4b0b-4e16-8855-660cee4c26de req-2142d2c1-11fe-4190-bdd2-bee444a1d387 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:27 np0005597378 nova_compute[238941]: 2026-01-27 14:10:27.539 238945 DEBUG oslo_concurrency.lockutils [req-1bb510cd-4b0b-4e16-8855-660cee4c26de req-2142d2c1-11fe-4190-bdd2-bee444a1d387 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:27 np0005597378 nova_compute[238941]: 2026-01-27 14:10:27.539 238945 DEBUG oslo_concurrency.lockutils [req-1bb510cd-4b0b-4e16-8855-660cee4c26de req-2142d2c1-11fe-4190-bdd2-bee444a1d387 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:27 np0005597378 nova_compute[238941]: 2026-01-27 14:10:27.539 238945 DEBUG oslo_concurrency.lockutils [req-1bb510cd-4b0b-4e16-8855-660cee4c26de req-2142d2c1-11fe-4190-bdd2-bee444a1d387 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:27 np0005597378 nova_compute[238941]: 2026-01-27 14:10:27.539 238945 DEBUG nova.compute.manager [req-1bb510cd-4b0b-4e16-8855-660cee4c26de req-2142d2c1-11fe-4190-bdd2-bee444a1d387 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:10:27 np0005597378 nova_compute[238941]: 2026-01-27 14:10:27.540 238945 WARNING nova.compute.manager [req-1bb510cd-4b0b-4e16-8855-660cee4c26de req-2142d2c1-11fe-4190-bdd2-bee444a1d387 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received unexpected event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007039357063833481 of space, bias 1.0, pg target 0.21118071191500443 quantized to 32 (current 32)
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006685822536495582 of space, bias 1.0, pg target 0.20057467609486745 quantized to 32 (current 32)
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0604868334485597e-06 of space, bias 4.0, pg target 0.0012725842001382716 quantized to 16 (current 16)
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:10:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.064 238945 DEBUG nova.compute.manager [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-changed-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.064 238945 DEBUG nova.compute.manager [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Refreshing instance network info cache due to event network-changed-779af42d-d593-45a0-a42d-cf6aa2d34f31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.064 238945 DEBUG oslo_concurrency.lockutils [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.064 238945 DEBUG oslo_concurrency.lockutils [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.065 238945 DEBUG nova.network.neutron [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Refreshing network info cache for port 779af42d-d593-45a0-a42d-cf6aa2d34f31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.384 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.385 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.386 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.386 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.387 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.387 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.411 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.425 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.425 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Image id deec719f-9679-4d33-adfe-db01148e4a56 yields fingerprint 285e7430fe92ea66e9eadd94d86f83f43a584b0f _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.426 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] image deec719f-9679-4d33-adfe-db01148e4a56 at (/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f): checking#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.426 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] image deec719f-9679-4d33-adfe-db01148e4a56 at (/var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.429 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.429 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Image id 0ee8954b-88fb-4f95-ac2f-0ee07bab09cc yields fingerprint 3912a4d8b71ba799f3af029b116f734f2c6341ea _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.429 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] image 0ee8954b-88fb-4f95-ac2f-0ee07bab09cc at (/var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea): checking#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.430 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] image 0ee8954b-88fb-4f95-ac2f-0ee07bab09cc at (/var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.431 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] d9fff719-3828-4c36-8698-604421b7382d is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.431 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] 5a3eb35a-b675-4084-a737-24918aecfd12 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.432 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Active base files: /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.432 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.432 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.432 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.433 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Jan 27 09:10:28 np0005597378 nova_compute[238941]: 2026-01-27 14:10:28.591 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 230 op/s
Jan 27 09:10:29 np0005597378 nova_compute[238941]: 2026-01-27 14:10:29.804 238945 DEBUG nova.network.neutron [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updated VIF entry in instance network info cache for port 779af42d-d593-45a0-a42d-cf6aa2d34f31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:10:29 np0005597378 nova_compute[238941]: 2026-01-27 14:10:29.804 238945 DEBUG nova.network.neutron [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updating instance_info_cache with network_info: [{"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:10:29 np0005597378 nova_compute[238941]: 2026-01-27 14:10:29.828 238945 DEBUG oslo_concurrency.lockutils [req-d7f3c8af-7c03-4be7-8f89-c35b3c1a25ea req-c7c8e621-e26e-4a09-8777-ea8d6340fa11 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:10:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 205 op/s
Jan 27 09:10:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:32 np0005597378 nova_compute[238941]: 2026-01-27 14:10:32.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Jan 27 09:10:33 np0005597378 nova_compute[238941]: 2026-01-27 14:10:33.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 134 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Jan 27 09:10:35 np0005597378 nova_compute[238941]: 2026-01-27 14:10:35.428 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:35.432 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:10:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:35.435 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:10:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 139 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 640 KiB/s wr, 185 op/s
Jan 27 09:10:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:37Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:2f:a4 10.100.0.3
Jan 27 09:10:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:37Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:2f:a4 10.100.0.3
Jan 27 09:10:37 np0005597378 nova_compute[238941]: 2026-01-27 14:10:37.477 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:38 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:38Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:53:2b 10.100.0.3
Jan 27 09:10:38 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:38Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:53:2b 10.100.0.3
Jan 27 09:10:38 np0005597378 nova_compute[238941]: 2026-01-27 14:10:38.597 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 179 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 211 op/s
Jan 27 09:10:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 194 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 721 KiB/s rd, 4.2 MiB/s wr, 120 op/s
Jan 27 09:10:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:42.437 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:42 np0005597378 nova_compute[238941]: 2026-01-27 14:10:42.478 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 194 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 4.2 MiB/s wr, 117 op/s
Jan 27 09:10:43 np0005597378 nova_compute[238941]: 2026-01-27 14:10:43.050 238945 INFO nova.compute.manager [None req-907539fc-be5d-47ed-acd5-ac8da75b71ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Get console output#033[00m
Jan 27 09:10:43 np0005597378 nova_compute[238941]: 2026-01-27 14:10:43.057 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:10:43 np0005597378 nova_compute[238941]: 2026-01-27 14:10:43.598 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.309 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.310 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.311 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.311 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.311 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.312 238945 INFO nova.compute.manager [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Terminating instance#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.313 238945 DEBUG nova.compute.manager [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.401 238945 DEBUG nova.compute.manager [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-changed-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.401 238945 DEBUG nova.compute.manager [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Refreshing instance network info cache due to event network-changed-2cb1f123-4012-46d4-bbe9-914b25f6f6a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.402 238945 DEBUG oslo_concurrency.lockutils [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.402 238945 DEBUG oslo_concurrency.lockutils [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.402 238945 DEBUG nova.network.neutron [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Refreshing network info cache for port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:10:44 np0005597378 kernel: tap2cb1f123-40 (unregistering): left promiscuous mode
Jan 27 09:10:44 np0005597378 NetworkManager[48904]: <info>  [1769523044.4115] device (tap2cb1f123-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:10:44 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:44Z|01127|binding|INFO|Releasing lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 from this chassis (sb_readonly=0)
Jan 27 09:10:44 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:44Z|01128|binding|INFO|Setting lport 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 down in Southbound
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.422 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:44 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:44Z|01129|binding|INFO|Removing iface tap2cb1f123-40 ovn-installed in OVS
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.425 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.432 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:53:2b 10.100.0.3'], port_security=['fa:16:3e:9a:53:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a3eb35a-b675-4084-a737-24918aecfd12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75da836f-929f-4646-940e-3cd4153d5aef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ce1c71d8-1f6c-4191-8aaf-3bb4bc201711', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4385daa-faf2-4073-8fdd-d03d3d8a6a22, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2cb1f123-4012-46d4-bbe9-914b25f6f6a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.434 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3 in datapath 75da836f-929f-4646-940e-3cd4153d5aef unbound from our chassis#033[00m
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.435 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 75da836f-929f-4646-940e-3cd4153d5aef, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.436 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b72e2bd2-09e8-4ceb-af27-b26112e7d1a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.437 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef namespace which is not needed anymore#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.438 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:44 np0005597378 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Jan 27 09:10:44 np0005597378 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006e.scope: Consumed 13.277s CPU time.
Jan 27 09:10:44 np0005597378 systemd-machined[207425]: Machine qemu-141-instance-0000006e terminated.
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.554 238945 INFO nova.virt.libvirt.driver [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Instance destroyed successfully.#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.555 238945 DEBUG nova.objects.instance [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid 5a3eb35a-b675-4084-a737-24918aecfd12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:10:44 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [NOTICE]   (339096) : haproxy version is 2.8.14-c23fe91
Jan 27 09:10:44 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [NOTICE]   (339096) : path to executable is /usr/sbin/haproxy
Jan 27 09:10:44 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [WARNING]  (339096) : Exiting Master process...
Jan 27 09:10:44 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [WARNING]  (339096) : Exiting Master process...
Jan 27 09:10:44 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [ALERT]    (339096) : Current worker (339098) exited with code 143 (Terminated)
Jan 27 09:10:44 np0005597378 neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef[339073]: [WARNING]  (339096) : All workers exited. Exiting... (0)
Jan 27 09:10:44 np0005597378 systemd[1]: libpod-49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8.scope: Deactivated successfully.
Jan 27 09:10:44 np0005597378 podman[339134]: 2026-01-27 14:10:44.602467661 +0000 UTC m=+0.057962700 container died 49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:10:44 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8-userdata-shm.mount: Deactivated successfully.
Jan 27 09:10:44 np0005597378 systemd[1]: var-lib-containers-storage-overlay-6bf37ff90630e079e77dcab00b067b2ebbab260d68bf804b82e874bcd474df34-merged.mount: Deactivated successfully.
Jan 27 09:10:44 np0005597378 podman[339134]: 2026-01-27 14:10:44.660007668 +0000 UTC m=+0.115502717 container cleanup 49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 09:10:44 np0005597378 systemd[1]: libpod-conmon-49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8.scope: Deactivated successfully.
Jan 27 09:10:44 np0005597378 podman[339169]: 2026-01-27 14:10:44.745556547 +0000 UTC m=+0.054410524 container remove 49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.753 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b11ecd72-339c-4504-a505-e6b8784b47d6]: (4, ('Tue Jan 27 02:10:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef (49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8)\n49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8\nTue Jan 27 02:10:44 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef (49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8)\n49cec2d17a5c7fc0376b5a2f8a8296651707deb6f691ffccb2c57816d84d2ad8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.756 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[29104e1c-9722-4dcc-b1e9-e66a0f9dee3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.758 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75da836f-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.760 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:44 np0005597378 kernel: tap75da836f-90: left promiscuous mode
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.781 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.786 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[74bd9fd9-3b5c-44a1-b1dd-1d5c7d4c0e9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.803 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d1aa93-63c8-4987-ae4b-2a9cd5c77bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.805 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8c40aa-f0c6-46f6-92da-59dc3ef4307e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.811 238945 DEBUG nova.virt.libvirt.vif [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1195479189',display_name='tempest-TestNetworkAdvancedServerOps-server-1195479189',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1195479189',id=110,image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+v6IIeXMH2K9HXQKoJGfMEJBIPmlwPbH4KjiTbOA+pygoRKT84WN1ACxViMYGsiCqvcSzyJoye+rKEd+WJzaq8rjcwrNm7VY4SUOaosw3oosbY5c4vMWnh4H/Xg8Gbpw==',key_name='tempest-TestNetworkAdvancedServerOps-1513209013',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:10:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-ve6140ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ee8954b-88fb-4f95-ac2f-0ee07bab09cc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:10:26Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=5a3eb35a-b675-4084-a737-24918aecfd12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.812 238945 DEBUG nova.network.os_vif_util [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.812 238945 DEBUG nova.network.os_vif_util [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.813 238945 DEBUG os_vif [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.815 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.816 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb1f123-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.820 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:44 np0005597378 nova_compute[238941]: 2026-01-27 14:10:44.824 238945 INFO os_vif [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:53:2b,bridge_name='br-int',has_traffic_filtering=True,id=2cb1f123-4012-46d4-bbe9-914b25f6f6a3,network=Network(75da836f-929f-4646-940e-3cd4153d5aef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cb1f123-40')#033[00m
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.830 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[addb2101-9553-45b6-bc5a-a4d471b77ac1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566478, 'reachable_time': 17146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339188, 'error': None, 'target': 'ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.832 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-75da836f-929f-4646-940e-3cd4153d5aef deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:10:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:44.832 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4e2b4a-fbba-434b-bf2e-3d16cd287140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:44 np0005597378 systemd[1]: run-netns-ovnmeta\x2d75da836f\x2d929f\x2d4646\x2d940e\x2d3cd4153d5aef.mount: Deactivated successfully.
Jan 27 09:10:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 660 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Jan 27 09:10:45 np0005597378 nova_compute[238941]: 2026-01-27 14:10:45.487 238945 INFO nova.virt.libvirt.driver [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deleting instance files /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12_del#033[00m
Jan 27 09:10:45 np0005597378 nova_compute[238941]: 2026-01-27 14:10:45.489 238945 INFO nova.virt.libvirt.driver [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deletion of /var/lib/nova/instances/5a3eb35a-b675-4084-a737-24918aecfd12_del complete#033[00m
Jan 27 09:10:45 np0005597378 nova_compute[238941]: 2026-01-27 14:10:45.652 238945 INFO nova.compute.manager [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Took 1.34 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:10:45 np0005597378 nova_compute[238941]: 2026-01-27 14:10:45.654 238945 DEBUG oslo.service.loopingcall [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:10:45 np0005597378 nova_compute[238941]: 2026-01-27 14:10:45.654 238945 DEBUG nova.compute.manager [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:10:45 np0005597378 nova_compute[238941]: 2026-01-27 14:10:45.654 238945 DEBUG nova.network.neutron [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:10:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:46.315 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:46.316 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:46.316 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.499 238945 DEBUG nova.compute.manager [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-unplugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.500 238945 DEBUG oslo_concurrency.lockutils [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.500 238945 DEBUG oslo_concurrency.lockutils [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.500 238945 DEBUG oslo_concurrency.lockutils [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.500 238945 DEBUG nova.compute.manager [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-unplugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.500 238945 DEBUG nova.compute.manager [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-unplugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.501 238945 DEBUG nova.compute.manager [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.501 238945 DEBUG oslo_concurrency.lockutils [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.501 238945 DEBUG oslo_concurrency.lockutils [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.501 238945 DEBUG oslo_concurrency.lockutils [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.501 238945 DEBUG nova.compute.manager [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] No waiting events found dispatching network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.501 238945 WARNING nova.compute.manager [req-9c34274f-478f-4acf-b5ce-d316d04bc309 req-47019b71-80a0-46ac-b83d-07301c6b1d5c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received unexpected event network-vif-plugged-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:10:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 191 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 664 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.969 238945 DEBUG nova.network.neutron [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updated VIF entry in instance network info cache for port 2cb1f123-4012-46d4-bbe9-914b25f6f6a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.969 238945 DEBUG nova.network.neutron [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updating instance_info_cache with network_info: [{"id": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "address": "fa:16:3e:9a:53:2b", "network": {"id": "75da836f-929f-4646-940e-3cd4153d5aef", "bridge": "br-int", "label": "tempest-network-smoke--1910218711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cb1f123-40", "ovs_interfaceid": "2cb1f123-4012-46d4-bbe9-914b25f6f6a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.990 238945 DEBUG nova.network.neutron [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:10:46 np0005597378 nova_compute[238941]: 2026-01-27 14:10:46.992 238945 DEBUG oslo_concurrency.lockutils [req-40645ecb-8a06-431f-8994-028494eb194f req-1debbadb-462c-4e2b-ad98-d8ee55f8f607 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5a3eb35a-b675-4084-a737-24918aecfd12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.010 238945 INFO nova.compute.manager [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Took 1.36 seconds to deallocate network for instance.#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.062 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.063 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.126 238945 DEBUG nova.compute.manager [req-1e2636e7-fa08-40c1-9fa2-224e4567d04d req-a50bcfd6-8a89-473e-a23a-d9b5e15254fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Received event network-vif-deleted-2cb1f123-4012-46d4-bbe9-914b25f6f6a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.153 238945 DEBUG oslo_concurrency.processutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.480 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:10:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/346787681' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.711 238945 DEBUG oslo_concurrency.processutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.717 238945 DEBUG nova.compute.provider_tree [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.730 238945 DEBUG nova.scheduler.client.report [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.746 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.771 238945 INFO nova.scheduler.client.report [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Deleted allocations for instance 5a3eb35a-b675-4084-a737-24918aecfd12#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.835 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.836 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:10:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:10:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:10:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:10:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:10:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.859 238945 DEBUG oslo_concurrency.lockutils [None req-6cde8491-93b8-4535-b8cd-4895bc74359d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "5a3eb35a-b675-4084-a737-24918aecfd12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.860 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.917 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.918 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.923 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:10:47 np0005597378 nova_compute[238941]: 2026-01-27 14:10:47.924 238945 INFO nova.compute.claims [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.024 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:10:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1101819634' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.641 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.647 238945 DEBUG nova.compute.provider_tree [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.668 238945 DEBUG nova.scheduler.client.report [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.695 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.696 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:10:48 np0005597378 podman[339252]: 2026-01-27 14:10:48.713541563 +0000 UTC m=+0.050878469 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.765 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.765 238945 DEBUG nova.network.neutron [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.792 238945 INFO nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.822 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:10:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 676 KiB/s rd, 3.8 MiB/s wr, 149 op/s
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.937 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.938 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.939 238945 INFO nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Creating image(s)#033[00m
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.966 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:48 np0005597378 nova_compute[238941]: 2026-01-27 14:10:48.989 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.012 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.016 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.055 238945 DEBUG nova.policy [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.096 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.096 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.097 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.097 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.118 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.122 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.589 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.652 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:10:49 np0005597378 podman[339399]: 2026-01-27 14:10:49.735078296 +0000 UTC m=+0.076411156 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.880 238945 DEBUG nova.objects.instance [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.897 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.897 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Ensure instance console log exists: /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.897 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.898 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:49 np0005597378 nova_compute[238941]: 2026-01-27 14:10:49.898 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:50 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:50Z|01130|binding|INFO|Releasing lport bcf23ed4-8bec-4985-bf23-8dec9fe6105c from this chassis (sb_readonly=0)
Jan 27 09:10:50 np0005597378 nova_compute[238941]: 2026-01-27 14:10:50.159 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:50 np0005597378 nova_compute[238941]: 2026-01-27 14:10:50.193 238945 DEBUG nova.network.neutron [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Successfully created port: ffadabd9-ed10-48aa-9297-8b6cf0c692a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:10:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 131 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 1.0 MiB/s wr, 87 op/s
Jan 27 09:10:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:51 np0005597378 nova_compute[238941]: 2026-01-27 14:10:51.628 238945 DEBUG nova.network.neutron [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Successfully updated port: ffadabd9-ed10-48aa-9297-8b6cf0c692a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:10:51 np0005597378 nova_compute[238941]: 2026-01-27 14:10:51.664 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:10:51 np0005597378 nova_compute[238941]: 2026-01-27 14:10:51.664 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:10:51 np0005597378 nova_compute[238941]: 2026-01-27 14:10:51.665 238945 DEBUG nova.network.neutron [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:10:51 np0005597378 nova_compute[238941]: 2026-01-27 14:10:51.767 238945 DEBUG nova.compute.manager [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-changed-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:51 np0005597378 nova_compute[238941]: 2026-01-27 14:10:51.768 238945 DEBUG nova.compute.manager [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Refreshing instance network info cache due to event network-changed-ffadabd9-ed10-48aa-9297-8b6cf0c692a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:10:51 np0005597378 nova_compute[238941]: 2026-01-27 14:10:51.768 238945 DEBUG oslo_concurrency.lockutils [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:10:51 np0005597378 nova_compute[238941]: 2026-01-27 14:10:51.817 238945 DEBUG nova.network.neutron [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:10:52 np0005597378 nova_compute[238941]: 2026-01-27 14:10:52.481 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 131 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 312 KiB/s wr, 60 op/s
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.184 238945 DEBUG nova.network.neutron [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updating instance_info_cache with network_info: [{"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.201 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.202 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Instance network_info: |[{"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.203 238945 DEBUG oslo_concurrency.lockutils [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.203 238945 DEBUG nova.network.neutron [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Refreshing network info cache for port ffadabd9-ed10-48aa-9297-8b6cf0c692a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.206 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Start _get_guest_xml network_info=[{"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.210 238945 WARNING nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.219 238945 DEBUG nova.virt.libvirt.host [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.220 238945 DEBUG nova.virt.libvirt.host [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.223 238945 DEBUG nova.virt.libvirt.host [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.224 238945 DEBUG nova.virt.libvirt.host [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.224 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.225 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.226 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.226 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.226 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.227 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.227 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.227 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.227 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.227 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.228 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.228 238945 DEBUG nova.virt.hardware [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.230 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:10:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2946843385' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.805 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.828 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:53 np0005597378 nova_compute[238941]: 2026-01-27 14:10:53.832 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:10:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043105639' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.393 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.395 238945 DEBUG nova.virt.libvirt.vif [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:10:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1563603684',display_name='tempest-TestGettingAddress-server-1563603684',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1563603684',id=112,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrnZawttJ9rxf4d7VbQ4gL91LgaB1z1r/xEUpx5mFvSKT0Aa62PdqFndwxjTEee/H4izKipNuAxh3gARhoihK2NWIJ6A2c4emnHhXuH9NTMplWR1hzf4srQVnSQwLB3CQ==',key_name='tempest-TestGettingAddress-1512698164',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ks3qcltj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:48Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.395 238945 DEBUG nova.network.os_vif_util [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.396 238945 DEBUG nova.network.os_vif_util [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.397 238945 DEBUG nova.objects.instance [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.416 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  <uuid>0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9</uuid>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  <name>instance-00000070</name>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-1563603684</nova:name>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:10:53</nova:creationTime>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:        <nova:port uuid="ffadabd9-ed10-48aa-9297-8b6cf0c692a0">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe1b:6aa5" ipVersion="6"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <entry name="serial">0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9</entry>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <entry name="uuid">0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9</entry>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk.config">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:1b:6a:a5"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <target dev="tapffadabd9-ed"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/console.log" append="off"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:10:54 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:10:54 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:10:54 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:10:54 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.416 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Preparing to wait for external event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.416 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.417 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.417 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.417 238945 DEBUG nova.virt.libvirt.vif [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:10:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1563603684',display_name='tempest-TestGettingAddress-server-1563603684',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1563603684',id=112,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrnZawttJ9rxf4d7VbQ4gL91LgaB1z1r/xEUpx5mFvSKT0Aa62PdqFndwxjTEee/H4izKipNuAxh3gARhoihK2NWIJ6A2c4emnHhXuH9NTMplWR1hzf4srQVnSQwLB3CQ==',key_name='tempest-TestGettingAddress-1512698164',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ks3qcltj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:10:48Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.418 238945 DEBUG nova.network.os_vif_util [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.418 238945 DEBUG nova.network.os_vif_util [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.418 238945 DEBUG os_vif [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.419 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.419 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.420 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.422 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.422 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapffadabd9-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.422 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapffadabd9-ed, col_values=(('external_ids', {'iface-id': 'ffadabd9-ed10-48aa-9297-8b6cf0c692a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:6a:a5', 'vm-uuid': '0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.423 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:54 np0005597378 NetworkManager[48904]: <info>  [1769523054.4247] manager: (tapffadabd9-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.426 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.429 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.431 238945 INFO os_vif [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed')#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.485 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.485 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.486 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:1b:6a:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.486 238945 INFO nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Using config drive#033[00m
Jan 27 09:10:54 np0005597378 nova_compute[238941]: 2026-01-27 14:10:54.508 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.9 MiB/s wr, 64 op/s
Jan 27 09:10:55 np0005597378 nova_compute[238941]: 2026-01-27 14:10:55.639 238945 INFO nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Creating config drive at /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/disk.config#033[00m
Jan 27 09:10:55 np0005597378 nova_compute[238941]: 2026-01-27 14:10:55.645 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9bczabj7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:55 np0005597378 nova_compute[238941]: 2026-01-27 14:10:55.793 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9bczabj7" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:55 np0005597378 nova_compute[238941]: 2026-01-27 14:10:55.824 238945 DEBUG nova.storage.rbd_utils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:10:55 np0005597378 nova_compute[238941]: 2026-01-27 14:10:55.828 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/disk.config 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:10:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:10:55 np0005597378 nova_compute[238941]: 2026-01-27 14:10:55.976 238945 DEBUG oslo_concurrency.processutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/disk.config 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:10:55 np0005597378 nova_compute[238941]: 2026-01-27 14:10:55.978 238945 INFO nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Deleting local config drive /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9/disk.config because it was imported into RBD.#033[00m
Jan 27 09:10:56 np0005597378 kernel: tapffadabd9-ed: entered promiscuous mode
Jan 27 09:10:56 np0005597378 NetworkManager[48904]: <info>  [1769523056.0447] manager: (tapffadabd9-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/462)
Jan 27 09:10:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:56Z|01131|binding|INFO|Claiming lport ffadabd9-ed10-48aa-9297-8b6cf0c692a0 for this chassis.
Jan 27 09:10:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:56Z|01132|binding|INFO|ffadabd9-ed10-48aa-9297-8b6cf0c692a0: Claiming fa:16:3e:1b:6a:a5 10.100.0.4 2001:db8::f816:3eff:fe1b:6aa5
Jan 27 09:10:56 np0005597378 nova_compute[238941]: 2026-01-27 14:10:56.046 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.063 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:6a:a5 10.100.0.4 2001:db8::f816:3eff:fe1b:6aa5'], port_security=['fa:16:3e:1b:6a:a5 10.100.0.4 2001:db8::f816:3eff:fe1b:6aa5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fe1b:6aa5/64', 'neutron:device_id': '0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6ab7306f-f0ac-4893-b4ef-6c4f73785c72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ef67578-3f69-4f39-a5a2-466e94654d58, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ffadabd9-ed10-48aa-9297-8b6cf0c692a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.064 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ffadabd9-ed10-48aa-9297-8b6cf0c692a0 in datapath 03273c18-2cc1-455f-8ffc-28f9813c664b bound to our chassis#033[00m
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.065 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03273c18-2cc1-455f-8ffc-28f9813c664b#033[00m
Jan 27 09:10:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:56Z|01133|binding|INFO|Setting lport ffadabd9-ed10-48aa-9297-8b6cf0c692a0 ovn-installed in OVS
Jan 27 09:10:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:10:56Z|01134|binding|INFO|Setting lport ffadabd9-ed10-48aa-9297-8b6cf0c692a0 up in Southbound
Jan 27 09:10:56 np0005597378 nova_compute[238941]: 2026-01-27 14:10:56.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:56 np0005597378 nova_compute[238941]: 2026-01-27 14:10:56.074 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:56 np0005597378 systemd-udevd[339603]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:10:56 np0005597378 systemd-machined[207425]: New machine qemu-142-instance-00000070.
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.088 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2a4e86-0787-40b3-9b61-0cacfd9e0c45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:56 np0005597378 systemd[1]: Started Virtual Machine qemu-142-instance-00000070.
Jan 27 09:10:56 np0005597378 NetworkManager[48904]: <info>  [1769523056.1049] device (tapffadabd9-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:10:56 np0005597378 NetworkManager[48904]: <info>  [1769523056.1056] device (tapffadabd9-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.124 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e736cf-8ce8-474b-93ef-f57a6187f7a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.129 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1e4218f5-cfba-428c-b201-cd7a4616b6f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.158 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d1721b9e-2a51-4809-8fb9-0b20270535d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.177 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[12b0af87-c6ea-4dee-860b-45db9f17638d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03273c18-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566324, 'reachable_time': 34458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339615, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.193 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41789b01-7361-4353-8512-a8b527b2ed74]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03273c18-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566338, 'tstamp': 566338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339617, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap03273c18-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566341, 'tstamp': 566341}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339617, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.194 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03273c18-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:56 np0005597378 nova_compute[238941]: 2026-01-27 14:10:56.196 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:56 np0005597378 nova_compute[238941]: 2026-01-27 14:10:56.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.199 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03273c18-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.199 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.199 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03273c18-20, col_values=(('external_ids', {'iface-id': 'bcf23ed4-8bec-4985-bf23-8dec9fe6105c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:10:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:10:56.200 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:10:56 np0005597378 nova_compute[238941]: 2026-01-27 14:10:56.757 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523056.7569926, 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:10:56 np0005597378 nova_compute[238941]: 2026-01-27 14:10:56.758 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] VM Started (Lifecycle Event)#033[00m
Jan 27 09:10:56 np0005597378 nova_compute[238941]: 2026-01-27 14:10:56.782 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:10:56 np0005597378 nova_compute[238941]: 2026-01-27 14:10:56.787 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523056.75741, 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:10:56 np0005597378 nova_compute[238941]: 2026-01-27 14:10:56.787 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:10:56 np0005597378 nova_compute[238941]: 2026-01-27 14:10:56.809 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:10:56 np0005597378 nova_compute[238941]: 2026-01-27 14:10:56.813 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:10:56 np0005597378 nova_compute[238941]: 2026-01-27 14:10:56.836 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:10:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.645 238945 DEBUG nova.compute.manager [req-0ef93d9a-1b82-4bba-80f8-91a3f6bde7ab req-df414bea-c9d0-4170-bde5-291f769195d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.646 238945 DEBUG oslo_concurrency.lockutils [req-0ef93d9a-1b82-4bba-80f8-91a3f6bde7ab req-df414bea-c9d0-4170-bde5-291f769195d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.646 238945 DEBUG oslo_concurrency.lockutils [req-0ef93d9a-1b82-4bba-80f8-91a3f6bde7ab req-df414bea-c9d0-4170-bde5-291f769195d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.646 238945 DEBUG oslo_concurrency.lockutils [req-0ef93d9a-1b82-4bba-80f8-91a3f6bde7ab req-df414bea-c9d0-4170-bde5-291f769195d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.646 238945 DEBUG nova.compute.manager [req-0ef93d9a-1b82-4bba-80f8-91a3f6bde7ab req-df414bea-c9d0-4170-bde5-291f769195d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Processing event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.648 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.653 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523057.6530962, 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.653 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.657 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.661 238945 INFO nova.virt.libvirt.driver [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Instance spawned successfully.#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.662 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.680 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.688 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.695 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.696 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.697 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.697 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.697 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.698 238945 DEBUG nova.virt.libvirt.driver [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.707 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.757 238945 INFO nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Took 8.82 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.758 238945 DEBUG nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.852 238945 INFO nova.compute.manager [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Took 9.95 seconds to build instance.#033[00m
Jan 27 09:10:57 np0005597378 nova_compute[238941]: 2026-01-27 14:10:57.874 238945 DEBUG oslo_concurrency.lockutils [None req-8dde4950-d454-49f5-93e3-daa7fd4484d3 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:10:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:10:57 np0005597378 podman[339799]: 2026-01-27 14:10:57.934949772 +0000 UTC m=+0.046986414 container create 41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 27 09:10:57 np0005597378 systemd[1]: Started libpod-conmon-41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d.scope.
Jan 27 09:10:58 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:10:58 np0005597378 podman[339799]: 2026-01-27 14:10:57.913365522 +0000 UTC m=+0.025402194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:10:58 np0005597378 podman[339799]: 2026-01-27 14:10:58.018016225 +0000 UTC m=+0.130052887 container init 41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:10:58 np0005597378 podman[339799]: 2026-01-27 14:10:58.026711119 +0000 UTC m=+0.138747761 container start 41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:10:58 np0005597378 quizzical_hawking[339815]: 167 167
Jan 27 09:10:58 np0005597378 systemd[1]: libpod-41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d.scope: Deactivated successfully.
Jan 27 09:10:58 np0005597378 podman[339799]: 2026-01-27 14:10:58.039511244 +0000 UTC m=+0.151547906 container attach 41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 09:10:58 np0005597378 podman[339799]: 2026-01-27 14:10:58.040821459 +0000 UTC m=+0.152858111 container died 41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:10:58 np0005597378 systemd[1]: var-lib-containers-storage-overlay-eb51b606d446900262f256eac90b0a6b832b3db9563189d63f12069b0edf567f-merged.mount: Deactivated successfully.
Jan 27 09:10:58 np0005597378 podman[339799]: 2026-01-27 14:10:58.111317354 +0000 UTC m=+0.223353996 container remove 41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:10:58 np0005597378 systemd[1]: libpod-conmon-41ddfdaff1c80a381d7ad6ca6e6cd7b82cb837aea9ed96a2784bd2313b00559d.scope: Deactivated successfully.
Jan 27 09:10:58 np0005597378 podman[339838]: 2026-01-27 14:10:58.323674673 +0000 UTC m=+0.067020143 container create c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:10:58 np0005597378 systemd[1]: Started libpod-conmon-c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c.scope.
Jan 27 09:10:58 np0005597378 podman[339838]: 2026-01-27 14:10:58.297120469 +0000 UTC m=+0.040465969 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:10:58 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:10:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708f4f3da6a8ffa951e7b56415e9d772a00d7cb7741c324c2b4f0c700d5111d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708f4f3da6a8ffa951e7b56415e9d772a00d7cb7741c324c2b4f0c700d5111d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708f4f3da6a8ffa951e7b56415e9d772a00d7cb7741c324c2b4f0c700d5111d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708f4f3da6a8ffa951e7b56415e9d772a00d7cb7741c324c2b4f0c700d5111d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708f4f3da6a8ffa951e7b56415e9d772a00d7cb7741c324c2b4f0c700d5111d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:58 np0005597378 podman[339838]: 2026-01-27 14:10:58.433085134 +0000 UTC m=+0.176430625 container init c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:10:58 np0005597378 podman[339838]: 2026-01-27 14:10:58.441051308 +0000 UTC m=+0.184396778 container start c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_volhard, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 09:10:58 np0005597378 podman[339838]: 2026-01-27 14:10:58.449778363 +0000 UTC m=+0.193124083 container attach c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_volhard, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:10:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Jan 27 09:10:58 np0005597378 elated_volhard[339855]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:10:58 np0005597378 elated_volhard[339855]: --> All data devices are unavailable
Jan 27 09:10:58 np0005597378 systemd[1]: libpod-c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c.scope: Deactivated successfully.
Jan 27 09:10:58 np0005597378 podman[339838]: 2026-01-27 14:10:58.950152035 +0000 UTC m=+0.693497505 container died c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_volhard, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:10:58 np0005597378 nova_compute[238941]: 2026-01-27 14:10:58.980 238945 DEBUG nova.network.neutron [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updated VIF entry in instance network info cache for port ffadabd9-ed10-48aa-9297-8b6cf0c692a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:10:58 np0005597378 nova_compute[238941]: 2026-01-27 14:10:58.982 238945 DEBUG nova.network.neutron [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updating instance_info_cache with network_info: [{"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:10:58 np0005597378 systemd[1]: var-lib-containers-storage-overlay-708f4f3da6a8ffa951e7b56415e9d772a00d7cb7741c324c2b4f0c700d5111d6-merged.mount: Deactivated successfully.
Jan 27 09:10:59 np0005597378 nova_compute[238941]: 2026-01-27 14:10:59.002 238945 DEBUG oslo_concurrency.lockutils [req-3ce2b586-6055-4555-81f9-d2cde070bc8e req-7eebe503-f6ae-4660-9797-4969c98d7173 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:10:59 np0005597378 podman[339838]: 2026-01-27 14:10:59.024808082 +0000 UTC m=+0.768153542 container remove c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:10:59 np0005597378 systemd[1]: libpod-conmon-c388602ef001b90a3c35ec9021f07836a8e3449a00c3148bb1e158d7828ab16c.scope: Deactivated successfully.
Jan 27 09:10:59 np0005597378 nova_compute[238941]: 2026-01-27 14:10:59.426 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:10:59 np0005597378 nova_compute[238941]: 2026-01-27 14:10:59.432 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:10:59 np0005597378 podman[339947]: 2026-01-27 14:10:59.461396559 +0000 UTC m=+0.042181825 container create 7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dirac, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:10:59 np0005597378 systemd[1]: Started libpod-conmon-7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b.scope.
Jan 27 09:10:59 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:10:59 np0005597378 podman[339947]: 2026-01-27 14:10:59.53841592 +0000 UTC m=+0.119201206 container init 7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:10:59 np0005597378 podman[339947]: 2026-01-27 14:10:59.444823254 +0000 UTC m=+0.025608540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:10:59 np0005597378 podman[339947]: 2026-01-27 14:10:59.546298332 +0000 UTC m=+0.127083598 container start 7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 27 09:10:59 np0005597378 podman[339947]: 2026-01-27 14:10:59.550452374 +0000 UTC m=+0.131237640 container attach 7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dirac, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 27 09:10:59 np0005597378 crazy_dirac[339964]: 167 167
Jan 27 09:10:59 np0005597378 systemd[1]: libpod-7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b.scope: Deactivated successfully.
Jan 27 09:10:59 np0005597378 nova_compute[238941]: 2026-01-27 14:10:59.553 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523044.5521054, 5a3eb35a-b675-4084-a737-24918aecfd12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:10:59 np0005597378 nova_compute[238941]: 2026-01-27 14:10:59.553 238945 INFO nova.compute.manager [-] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:10:59 np0005597378 podman[339947]: 2026-01-27 14:10:59.554735939 +0000 UTC m=+0.135521225 container died 7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dirac, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:10:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:10:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1896626236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:10:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:10:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1896626236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:10:59 np0005597378 nova_compute[238941]: 2026-01-27 14:10:59.580 238945 DEBUG nova.compute.manager [None req-b0c2c7c2-9c29-4dc2-b444-2a49c72d3210 - - - - - -] [instance: 5a3eb35a-b675-4084-a737-24918aecfd12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:10:59 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c07e47d1c7bc0ebceff578aa95d61e8bc311bd94a40fcde25a827e53fff2a223-merged.mount: Deactivated successfully.
Jan 27 09:10:59 np0005597378 podman[339947]: 2026-01-27 14:10:59.605559345 +0000 UTC m=+0.186344611 container remove 7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:10:59 np0005597378 systemd[1]: libpod-conmon-7f04c764b7053992ff544077d53c93e10d30afde2f36d8fa482d427c4939ec6b.scope: Deactivated successfully.
Jan 27 09:10:59 np0005597378 podman[339988]: 2026-01-27 14:10:59.798665717 +0000 UTC m=+0.049662616 container create ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:10:59 np0005597378 nova_compute[238941]: 2026-01-27 14:10:59.844 238945 DEBUG nova.compute.manager [req-3565f941-1432-40b8-b478-463f300410f3 req-a97ab183-a439-496c-9018-fb42a128df06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:10:59 np0005597378 nova_compute[238941]: 2026-01-27 14:10:59.846 238945 DEBUG oslo_concurrency.lockutils [req-3565f941-1432-40b8-b478-463f300410f3 req-a97ab183-a439-496c-9018-fb42a128df06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:10:59 np0005597378 nova_compute[238941]: 2026-01-27 14:10:59.846 238945 DEBUG oslo_concurrency.lockutils [req-3565f941-1432-40b8-b478-463f300410f3 req-a97ab183-a439-496c-9018-fb42a128df06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:10:59 np0005597378 nova_compute[238941]: 2026-01-27 14:10:59.846 238945 DEBUG oslo_concurrency.lockutils [req-3565f941-1432-40b8-b478-463f300410f3 req-a97ab183-a439-496c-9018-fb42a128df06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:10:59 np0005597378 nova_compute[238941]: 2026-01-27 14:10:59.846 238945 DEBUG nova.compute.manager [req-3565f941-1432-40b8-b478-463f300410f3 req-a97ab183-a439-496c-9018-fb42a128df06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] No waiting events found dispatching network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:10:59 np0005597378 nova_compute[238941]: 2026-01-27 14:10:59.847 238945 WARNING nova.compute.manager [req-3565f941-1432-40b8-b478-463f300410f3 req-a97ab183-a439-496c-9018-fb42a128df06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received unexpected event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:10:59 np0005597378 systemd[1]: Started libpod-conmon-ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7.scope.
Jan 27 09:10:59 np0005597378 podman[339988]: 2026-01-27 14:10:59.776218603 +0000 UTC m=+0.027215532 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:10:59 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:10:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6787563c197654aa54630d8d8befc5816fbae237cd616de1909c5ef18611dc92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6787563c197654aa54630d8d8befc5816fbae237cd616de1909c5ef18611dc92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6787563c197654aa54630d8d8befc5816fbae237cd616de1909c5ef18611dc92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6787563c197654aa54630d8d8befc5816fbae237cd616de1909c5ef18611dc92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:10:59 np0005597378 podman[339988]: 2026-01-27 14:10:59.901197643 +0000 UTC m=+0.152194572 container init ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 09:10:59 np0005597378 podman[339988]: 2026-01-27 14:10:59.910539544 +0000 UTC m=+0.161536443 container start ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 27 09:10:59 np0005597378 podman[339988]: 2026-01-27 14:10:59.914685716 +0000 UTC m=+0.165682635 container attach ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]: {
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:    "0": [
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:        {
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "devices": [
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "/dev/loop3"
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            ],
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_name": "ceph_lv0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_size": "21470642176",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "name": "ceph_lv0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "tags": {
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.cluster_name": "ceph",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.crush_device_class": "",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.encrypted": "0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.objectstore": "bluestore",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.osd_id": "0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.type": "block",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.vdo": "0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.with_tpm": "0"
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            },
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "type": "block",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "vg_name": "ceph_vg0"
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:        }
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:    ],
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:    "1": [
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:        {
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "devices": [
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "/dev/loop4"
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            ],
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_name": "ceph_lv1",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_size": "21470642176",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "name": "ceph_lv1",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "tags": {
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.cluster_name": "ceph",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.crush_device_class": "",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.encrypted": "0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.objectstore": "bluestore",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.osd_id": "1",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.type": "block",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.vdo": "0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.with_tpm": "0"
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            },
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "type": "block",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "vg_name": "ceph_vg1"
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:        }
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:    ],
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:    "2": [
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:        {
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "devices": [
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "/dev/loop5"
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            ],
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_name": "ceph_lv2",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_size": "21470642176",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "name": "ceph_lv2",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "tags": {
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.cluster_name": "ceph",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.crush_device_class": "",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.encrypted": "0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.objectstore": "bluestore",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.osd_id": "2",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.type": "block",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.vdo": "0",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:                "ceph.with_tpm": "0"
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            },
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "type": "block",
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:            "vg_name": "ceph_vg2"
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:        }
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]:    ]
Jan 27 09:11:00 np0005597378 hungry_mestorf[340005]: }
Jan 27 09:11:00 np0005597378 systemd[1]: libpod-ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7.scope: Deactivated successfully.
Jan 27 09:11:00 np0005597378 podman[340014]: 2026-01-27 14:11:00.252665432 +0000 UTC m=+0.023864973 container died ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:11:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay-6787563c197654aa54630d8d8befc5816fbae237cd616de1909c5ef18611dc92-merged.mount: Deactivated successfully.
Jan 27 09:11:00 np0005597378 podman[340014]: 2026-01-27 14:11:00.382544344 +0000 UTC m=+0.153743865 container remove ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:11:00 np0005597378 nova_compute[238941]: 2026-01-27 14:11:00.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:00 np0005597378 systemd[1]: libpod-conmon-ecedd857d5ccc983f6c279d95c6d9195b2e44e210333616ecc932ca5406034e7.scope: Deactivated successfully.
Jan 27 09:11:00 np0005597378 nova_compute[238941]: 2026-01-27 14:11:00.403 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:00 np0005597378 nova_compute[238941]: 2026-01-27 14:11:00.404 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:00 np0005597378 nova_compute[238941]: 2026-01-27 14:11:00.404 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:00 np0005597378 nova_compute[238941]: 2026-01-27 14:11:00.404 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:11:00 np0005597378 nova_compute[238941]: 2026-01-27 14:11:00.404 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 661 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 27 09:11:00 np0005597378 podman[340109]: 2026-01-27 14:11:00.88830426 +0000 UTC m=+0.069095178 container create c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:11:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:00 np0005597378 podman[340109]: 2026-01-27 14:11:00.84141752 +0000 UTC m=+0.022208468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:11:00 np0005597378 systemd[1]: Started libpod-conmon-c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2.scope.
Jan 27 09:11:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:11:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:11:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019429260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:11:00 np0005597378 nova_compute[238941]: 2026-01-27 14:11:00.986 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:00 np0005597378 podman[340109]: 2026-01-27 14:11:00.989054749 +0000 UTC m=+0.169845677 container init c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:11:00 np0005597378 podman[340109]: 2026-01-27 14:11:00.995879423 +0000 UTC m=+0.176670341 container start c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:11:01 np0005597378 relaxed_shamir[340126]: 167 167
Jan 27 09:11:01 np0005597378 systemd[1]: libpod-c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2.scope: Deactivated successfully.
Jan 27 09:11:01 np0005597378 podman[340109]: 2026-01-27 14:11:01.000913338 +0000 UTC m=+0.181704266 container attach c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:11:01 np0005597378 podman[340109]: 2026-01-27 14:11:01.001516554 +0000 UTC m=+0.182307472 container died c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.059 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.060 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.065 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.065 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:11:01 np0005597378 systemd[1]: var-lib-containers-storage-overlay-16a0f42ade7c8eb24d0d12c0ed612f0a1dfc3e8b3adc9ef708299c9b5dde4315-merged.mount: Deactivated successfully.
Jan 27 09:11:01 np0005597378 podman[340109]: 2026-01-27 14:11:01.206819644 +0000 UTC m=+0.387610562 container remove c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:11:01 np0005597378 systemd[1]: libpod-conmon-c21f2bcf365291939997f4af22e050c587530c0e0505810e1d9f66187ff72be2.scope: Deactivated successfully.
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.267 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.269 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3503MB free_disk=59.921105328947306GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.269 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.270 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.342 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:01 np0005597378 podman[340153]: 2026-01-27 14:11:01.398506967 +0000 UTC m=+0.051750192 container create 8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_zhukovsky, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 09:11:01 np0005597378 systemd[1]: Started libpod-conmon-8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950.scope.
Jan 27 09:11:01 np0005597378 podman[340153]: 2026-01-27 14:11:01.373020311 +0000 UTC m=+0.026263556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance d9fff719-3828-4c36-8698-604421b7382d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.471 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.471 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.471 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:11:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:11:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090616e6adf8ae8f31e628e4f6aa1b4b9fa756a541bf2f2f8c8d5ee50b407f02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090616e6adf8ae8f31e628e4f6aa1b4b9fa756a541bf2f2f8c8d5ee50b407f02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090616e6adf8ae8f31e628e4f6aa1b4b9fa756a541bf2f2f8c8d5ee50b407f02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090616e6adf8ae8f31e628e4f6aa1b4b9fa756a541bf2f2f8c8d5ee50b407f02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:01 np0005597378 podman[340153]: 2026-01-27 14:11:01.523450126 +0000 UTC m=+0.176693371 container init 8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 09:11:01 np0005597378 podman[340153]: 2026-01-27 14:11:01.533683931 +0000 UTC m=+0.186927156 container start 8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_zhukovsky, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 27 09:11:01 np0005597378 podman[340153]: 2026-01-27 14:11:01.54220618 +0000 UTC m=+0.195449405 container attach 8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 09:11:01 np0005597378 nova_compute[238941]: 2026-01-27 14:11:01.686 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:02 np0005597378 lvm[340269]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:11:02 np0005597378 lvm[340268]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:11:02 np0005597378 lvm[340269]: VG ceph_vg1 finished
Jan 27 09:11:02 np0005597378 lvm[340268]: VG ceph_vg0 finished
Jan 27 09:11:02 np0005597378 lvm[340271]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:11:02 np0005597378 lvm[340271]: VG ceph_vg2 finished
Jan 27 09:11:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:11:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1191485924' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:11:02 np0005597378 nova_compute[238941]: 2026-01-27 14:11:02.310 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:02 np0005597378 nova_compute[238941]: 2026-01-27 14:11:02.317 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:11:02 np0005597378 flamboyant_zhukovsky[340170]: {}
Jan 27 09:11:02 np0005597378 nova_compute[238941]: 2026-01-27 14:11:02.333 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:11:02 np0005597378 systemd[1]: libpod-8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950.scope: Deactivated successfully.
Jan 27 09:11:02 np0005597378 podman[340153]: 2026-01-27 14:11:02.354274412 +0000 UTC m=+1.007517627 container died 8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_zhukovsky, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:11:02 np0005597378 systemd[1]: libpod-8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950.scope: Consumed 1.326s CPU time.
Jan 27 09:11:02 np0005597378 nova_compute[238941]: 2026-01-27 14:11:02.353 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:11:02 np0005597378 nova_compute[238941]: 2026-01-27 14:11:02.355 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-090616e6adf8ae8f31e628e4f6aa1b4b9fa756a541bf2f2f8c8d5ee50b407f02-merged.mount: Deactivated successfully.
Jan 27 09:11:02 np0005597378 podman[340153]: 2026-01-27 14:11:02.41408918 +0000 UTC m=+1.067332405 container remove 8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 09:11:02 np0005597378 systemd[1]: libpod-conmon-8eb8d566169ca66d1e88abd7d0fb85e98d309e322374d433256c9e126c507950.scope: Deactivated successfully.
Jan 27 09:11:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:11:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:11:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:11:02 np0005597378 nova_compute[238941]: 2026-01-27 14:11:02.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:11:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 644 KiB/s rd, 1.6 MiB/s wr, 36 op/s
Jan 27 09:11:02 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:11:02 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:11:03 np0005597378 nova_compute[238941]: 2026-01-27 14:11:03.815 238945 DEBUG nova.compute.manager [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-changed-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:03 np0005597378 nova_compute[238941]: 2026-01-27 14:11:03.816 238945 DEBUG nova.compute.manager [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Refreshing instance network info cache due to event network-changed-ffadabd9-ed10-48aa-9297-8b6cf0c692a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:11:03 np0005597378 nova_compute[238941]: 2026-01-27 14:11:03.816 238945 DEBUG oslo_concurrency.lockutils [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:11:03 np0005597378 nova_compute[238941]: 2026-01-27 14:11:03.817 238945 DEBUG oslo_concurrency.lockutils [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:11:03 np0005597378 nova_compute[238941]: 2026-01-27 14:11:03.817 238945 DEBUG nova.network.neutron [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Refreshing network info cache for port ffadabd9-ed10-48aa-9297-8b6cf0c692a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:11:04 np0005597378 nova_compute[238941]: 2026-01-27 14:11:04.355 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:04 np0005597378 nova_compute[238941]: 2026-01-27 14:11:04.429 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 77 op/s
Jan 27 09:11:05 np0005597378 nova_compute[238941]: 2026-01-27 14:11:05.378 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:05 np0005597378 nova_compute[238941]: 2026-01-27 14:11:05.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:05 np0005597378 nova_compute[238941]: 2026-01-27 14:11:05.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 09:11:05 np0005597378 nova_compute[238941]: 2026-01-27 14:11:05.864 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:06 np0005597378 nova_compute[238941]: 2026-01-27 14:11:06.405 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:06 np0005597378 nova_compute[238941]: 2026-01-27 14:11:06.406 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:11:06 np0005597378 nova_compute[238941]: 2026-01-27 14:11:06.406 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:11:06 np0005597378 nova_compute[238941]: 2026-01-27 14:11:06.524 238945 DEBUG nova.network.neutron [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updated VIF entry in instance network info cache for port ffadabd9-ed10-48aa-9297-8b6cf0c692a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:11:06 np0005597378 nova_compute[238941]: 2026-01-27 14:11:06.525 238945 DEBUG nova.network.neutron [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updating instance_info_cache with network_info: [{"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:11:06 np0005597378 nova_compute[238941]: 2026-01-27 14:11:06.552 238945 DEBUG oslo_concurrency.lockutils [req-638dcb01-c9dd-482b-92eb-7e6a43811c95 req-58ae414f-514e-4981-b068-7d581efcaaa6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:11:06 np0005597378 nova_compute[238941]: 2026-01-27 14:11:06.614 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:11:06 np0005597378 nova_compute[238941]: 2026-01-27 14:11:06.615 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:11:06 np0005597378 nova_compute[238941]: 2026-01-27 14:11:06.615 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:11:06 np0005597378 nova_compute[238941]: 2026-01-27 14:11:06.615 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d9fff719-3828-4c36-8698-604421b7382d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 09:11:07 np0005597378 nova_compute[238941]: 2026-01-27 14:11:07.487 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.215 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.216 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.228 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.243 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updating instance_info_cache with network_info: [{"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.263 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.263 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.263 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.263 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.273 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.289 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.289 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.296 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.296 238945 INFO nova.compute.claims [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.413 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:09 np0005597378 nova_compute[238941]: 2026-01-27 14:11:09.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:11:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388707627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.050 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.057 238945 DEBUG nova.compute.provider_tree [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.074 238945 DEBUG nova.scheduler.client.report [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.095 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.096 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.158 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.159 238945 DEBUG nova.network.neutron [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.185 238945 INFO nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.206 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.392 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.439 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.440 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.441 238945 INFO nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Creating image(s)#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.470 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.498 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.528 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.535 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.633 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.634 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.635 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.635 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.660 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.664 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.791 238945 DEBUG nova.policy [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a87606137cd4440ab2ffebe68b325a85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:11:10 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 27 09:11:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 175 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 586 KiB/s wr, 93 op/s
Jan 27 09:11:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:10 np0005597378 nova_compute[238941]: 2026-01-27 14:11:10.955 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:11 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:11Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:6a:a5 10.100.0.4
Jan 27 09:11:11 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:11Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:6a:a5 10.100.0.4
Jan 27 09:11:11 np0005597378 nova_compute[238941]: 2026-01-27 14:11:11.040 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] resizing rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:11:11 np0005597378 nova_compute[238941]: 2026-01-27 14:11:11.122 238945 DEBUG nova.objects.instance [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'migration_context' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:11 np0005597378 nova_compute[238941]: 2026-01-27 14:11:11.135 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:11:11 np0005597378 nova_compute[238941]: 2026-01-27 14:11:11.136 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Ensure instance console log exists: /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:11:11 np0005597378 nova_compute[238941]: 2026-01-27 14:11:11.136 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:11 np0005597378 nova_compute[238941]: 2026-01-27 14:11:11.137 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:11 np0005597378 nova_compute[238941]: 2026-01-27 14:11:11.137 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:12 np0005597378 nova_compute[238941]: 2026-01-27 14:11:12.371 238945 DEBUG nova.network.neutron [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Successfully created port: 78c393a3-5ecf-49c2-9d5a-dab369d909b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:11:12 np0005597378 nova_compute[238941]: 2026-01-27 14:11:12.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:12 np0005597378 nova_compute[238941]: 2026-01-27 14:11:12.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:11:12 np0005597378 nova_compute[238941]: 2026-01-27 14:11:12.490 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 175 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 573 KiB/s wr, 71 op/s
Jan 27 09:11:13 np0005597378 nova_compute[238941]: 2026-01-27 14:11:13.408 238945 DEBUG nova.network.neutron [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Successfully updated port: 78c393a3-5ecf-49c2-9d5a-dab369d909b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:11:13 np0005597378 nova_compute[238941]: 2026-01-27 14:11:13.419 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:11:13 np0005597378 nova_compute[238941]: 2026-01-27 14:11:13.420 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:11:13 np0005597378 nova_compute[238941]: 2026-01-27 14:11:13.420 238945 DEBUG nova.network.neutron [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:11:13 np0005597378 nova_compute[238941]: 2026-01-27 14:11:13.573 238945 DEBUG nova.compute.manager [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-changed-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:13 np0005597378 nova_compute[238941]: 2026-01-27 14:11:13.573 238945 DEBUG nova.compute.manager [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Refreshing instance network info cache due to event network-changed-78c393a3-5ecf-49c2-9d5a-dab369d909b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:11:13 np0005597378 nova_compute[238941]: 2026-01-27 14:11:13.574 238945 DEBUG oslo_concurrency.lockutils [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:11:13 np0005597378 nova_compute[238941]: 2026-01-27 14:11:13.690 238945 DEBUG nova.network.neutron [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:11:14 np0005597378 nova_compute[238941]: 2026-01-27 14:11:14.460 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 227 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 109 op/s
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.423 238945 DEBUG nova.network.neutron [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updating instance_info_cache with network_info: [{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.445 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.445 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance network_info: |[{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.445 238945 DEBUG oslo_concurrency.lockutils [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.446 238945 DEBUG nova.network.neutron [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Refreshing network info cache for port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.449 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Start _get_guest_xml network_info=[{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.453 238945 WARNING nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.458 238945 DEBUG nova.virt.libvirt.host [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.459 238945 DEBUG nova.virt.libvirt.host [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.464 238945 DEBUG nova.virt.libvirt.host [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.464 238945 DEBUG nova.virt.libvirt.host [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.464 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.465 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.465 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.465 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.465 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.466 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.466 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.466 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.466 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.466 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.467 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.467 238945 DEBUG nova.virt.hardware [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:11:15 np0005597378 nova_compute[238941]: 2026-01-27 14:11:15.469 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:11:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936736147' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.026 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.055 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.061 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:11:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1552017623' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.639 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.641 238945 DEBUG nova.virt.libvirt.vif [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-338504836',display_name='tempest-TestNetworkAdvancedServerOps-server-338504836',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-338504836',id=113,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCZoEIVPZt+W5PTyiefF6CmQjyOcCka1+ETsRUdkkq8w28gqXVMa7/Mt6QcDsLQbaY22k6G/fXcPVKU22vIQ/xZ0qJM4npfGflv72d5x3TfjpLqEY8C7F6Om5C96GUAogQ==',key_name='tempest-TestNetworkAdvancedServerOps-721889522',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-8h702f02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:11:10Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=46ce04c1-b6c3-42cb-99b4-546ad865b2f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.641 238945 DEBUG nova.network.os_vif_util [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.642 238945 DEBUG nova.network.os_vif_util [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.643 238945 DEBUG nova.objects.instance [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.667 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  <uuid>46ce04c1-b6c3-42cb-99b4-546ad865b2f5</uuid>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  <name>instance-00000071</name>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-338504836</nova:name>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:11:15</nova:creationTime>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:        <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:        <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:        <nova:port uuid="78c393a3-5ecf-49c2-9d5a-dab369d909b4">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <entry name="serial">46ce04c1-b6c3-42cb-99b4-546ad865b2f5</entry>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <entry name="uuid">46ce04c1-b6c3-42cb-99b4-546ad865b2f5</entry>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:00:e9:b9"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <target dev="tap78c393a3-5e"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/console.log" append="off"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:11:16 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:11:16 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:11:16 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:11:16 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.669 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Preparing to wait for external event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.670 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.670 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.670 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.671 238945 DEBUG nova.virt.libvirt.vif [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-338504836',display_name='tempest-TestNetworkAdvancedServerOps-server-338504836',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-338504836',id=113,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCZoEIVPZt+W5PTyiefF6CmQjyOcCka1+ETsRUdkkq8w28gqXVMa7/Mt6QcDsLQbaY22k6G/fXcPVKU22vIQ/xZ0qJM4npfGflv72d5x3TfjpLqEY8C7F6Om5C96GUAogQ==',key_name='tempest-TestNetworkAdvancedServerOps-721889522',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-8h702f02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:11:10Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=46ce04c1-b6c3-42cb-99b4-546ad865b2f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.671 238945 DEBUG nova.network.os_vif_util [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.672 238945 DEBUG nova.network.os_vif_util [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.672 238945 DEBUG os_vif [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.673 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.673 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.674 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.677 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78c393a3-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.677 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78c393a3-5e, col_values=(('external_ids', {'iface-id': '78c393a3-5ecf-49c2-9d5a-dab369d909b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:e9:b9', 'vm-uuid': '46ce04c1-b6c3-42cb-99b4-546ad865b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:16 np0005597378 NetworkManager[48904]: <info>  [1769523076.6796] manager: (tap78c393a3-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.680 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.685 238945 INFO os_vif [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e')#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.827 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.828 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.828 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No VIF found with MAC fa:16:3e:00:e9:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.829 238945 INFO nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Using config drive#033[00m
Jan 27 09:11:16 np0005597378 nova_compute[238941]: 2026-01-27 14:11:16.847 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:11:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 246 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 27 09:11:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:11:17
Jan 27 09:11:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:11:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:11:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'images', 'volumes', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'vms', 'backups', 'cephfs.cephfs.meta']
Jan 27 09:11:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.190 238945 DEBUG nova.network.neutron [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updated VIF entry in instance network info cache for port 78c393a3-5ecf-49c2-9d5a-dab369d909b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.191 238945 DEBUG nova.network.neutron [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updating instance_info_cache with network_info: [{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.208 238945 DEBUG oslo_concurrency.lockutils [req-8a746076-0120-441a-92e9-d881e4879db9 req-d7839a53-7cc0-4779-b5b6-25ad58159f93 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.457 238945 INFO nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Creating config drive at /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/disk.config#033[00m
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.463 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3uhn36re execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.608 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3uhn36re" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.636 238945 DEBUG nova.storage.rbd_utils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.640 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/disk.config 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.776 238945 DEBUG oslo_concurrency.processutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/disk.config 46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.777 238945 INFO nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Deleting local config drive /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/disk.config because it was imported into RBD.#033[00m
Jan 27 09:11:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:11:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:11:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:11:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:11:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:11:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:11:17 np0005597378 NetworkManager[48904]: <info>  [1769523077.8420] manager: (tap78c393a3-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/464)
Jan 27 09:11:17 np0005597378 kernel: tap78c393a3-5e: entered promiscuous mode
Jan 27 09:11:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:17Z|01135|binding|INFO|Claiming lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 for this chassis.
Jan 27 09:11:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:17Z|01136|binding|INFO|78c393a3-5ecf-49c2-9d5a-dab369d909b4: Claiming fa:16:3e:00:e9:b9 10.100.0.11
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.844 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.853 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:e9:b9 10.100.0.11'], port_security=['fa:16:3e:00:e9:b9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '46ce04c1-b6c3-42cb-99b4-546ad865b2f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08f92c92-fca4-41b9-a9a8-67625119a840', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9e07fcf9-373e-4573-bc84-da8b1454208f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f69261e-cc94-4cab-96cc-931010359962, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=78c393a3-5ecf-49c2-9d5a-dab369d909b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.854 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 in datapath 08f92c92-fca4-41b9-a9a8-67625119a840 bound to our chassis#033[00m
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.856 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08f92c92-fca4-41b9-a9a8-67625119a840#033[00m
Jan 27 09:11:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:17Z|01137|binding|INFO|Setting lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 ovn-installed in OVS
Jan 27 09:11:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:17Z|01138|binding|INFO|Setting lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 up in Southbound
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:17 np0005597378 nova_compute[238941]: 2026-01-27 14:11:17.866 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:17 np0005597378 systemd-udevd[340636]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.874 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8689546-c5c1-4ae1-aef1-7ce65c2ac31f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.875 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08f92c92-f1 in ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.878 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08f92c92-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.879 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4adb08b2-49c5-48d2-bf21-2c5d8026f8df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.880 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be53cb55-fbe8-4fdb-a8d3-f69e0af88880]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.891 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[8275d35f-c546-4bb9-9c28-11c1177f4a97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:17 np0005597378 systemd-machined[207425]: New machine qemu-143-instance-00000071.
Jan 27 09:11:17 np0005597378 NetworkManager[48904]: <info>  [1769523077.8940] device (tap78c393a3-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:11:17 np0005597378 NetworkManager[48904]: <info>  [1769523077.8948] device (tap78c393a3-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:11:17 np0005597378 systemd[1]: Started Virtual Machine qemu-143-instance-00000071.
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ba8b14-4f75-41db-a7f1-0547cb181e0d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.953 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[877f7ea9-0822-4a7c-b0cc-3af98ceedc60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.959 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cab2f56f-5ab8-4793-9891-d2b556db4237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:17 np0005597378 NetworkManager[48904]: <info>  [1769523077.9599] manager: (tap08f92c92-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/465)
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.991 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[60987309-ac17-4a7a-9a53-ffda3041bef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:17.994 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[73832d37-c6d6-47ae-b750-38a150e9d89b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:18 np0005597378 NetworkManager[48904]: <info>  [1769523078.0184] device (tap08f92c92-f0): carrier: link connected
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.022 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfbfb9f-f4e0-425e-9fad-67fa517aa6b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.040 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[373b8d56-4ba7-48b4-8d8c-b6161d4104d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08f92c92-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:c8:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571764, 'reachable_time': 17068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340670, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.056 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7de22343-427f-4739-b1ec-7b793876cb8d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:c88e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571764, 'tstamp': 571764}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340671, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.077 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4b41fe-0520-4621-a0aa-3122962be496]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08f92c92-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:c8:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571764, 'reachable_time': 17068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340687, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.091 238945 DEBUG nova.compute.manager [req-8b7e8b0a-269c-4070-876a-79d77f68e3a9 req-c8724091-ba0e-4ce5-9585-8bf0e5f41ea6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.092 238945 DEBUG oslo_concurrency.lockutils [req-8b7e8b0a-269c-4070-876a-79d77f68e3a9 req-c8724091-ba0e-4ce5-9585-8bf0e5f41ea6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.092 238945 DEBUG oslo_concurrency.lockutils [req-8b7e8b0a-269c-4070-876a-79d77f68e3a9 req-c8724091-ba0e-4ce5-9585-8bf0e5f41ea6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.092 238945 DEBUG oslo_concurrency.lockutils [req-8b7e8b0a-269c-4070-876a-79d77f68e3a9 req-c8724091-ba0e-4ce5-9585-8bf0e5f41ea6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.093 238945 DEBUG nova.compute.manager [req-8b7e8b0a-269c-4070-876a-79d77f68e3a9 req-c8724091-ba0e-4ce5-9585-8bf0e5f41ea6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Processing event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5c84cf10-ba44-4592-9891-c3a8026683b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:11:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:11:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:11:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:11:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:11:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:11:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:11:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:11:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:11:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.186 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[134796bd-b99d-48a9-ba92-5d3e0205d730]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.187 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08f92c92-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.188 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.188 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08f92c92-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:18 np0005597378 NetworkManager[48904]: <info>  [1769523078.1909] manager: (tap08f92c92-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Jan 27 09:11:18 np0005597378 kernel: tap08f92c92-f0: entered promiscuous mode
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.192 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08f92c92-f0, col_values=(('external_ids', {'iface-id': '91bf8ed9-ddad-43fe-a33c-84a91be62de5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.193 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:18Z|01139|binding|INFO|Releasing lport 91bf8ed9-ddad-43fe-a33c-84a91be62de5 from this chassis (sb_readonly=0)
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.208 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08f92c92-fca4-41b9-a9a8-67625119a840.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08f92c92-fca4-41b9-a9a8-67625119a840.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.209 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[89ac4780-efe4-49bb-96c8-a37a10119b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.209 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-08f92c92-fca4-41b9-a9a8-67625119a840
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/08f92c92-fca4-41b9-a9a8-67625119a840.pid.haproxy
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 08f92c92-fca4-41b9-a9a8-67625119a840
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:11:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:18.210 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'env', 'PROCESS_TAG=haproxy-08f92c92-fca4-41b9-a9a8-67625119a840', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08f92c92-fca4-41b9-a9a8-67625119a840.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.219 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523078.219399, 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.220 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] VM Started (Lifecycle Event)#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.222 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.224 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.227 238945 INFO nova.virt.libvirt.driver [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance spawned successfully.#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.228 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.241 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.246 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.251 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.252 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.252 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.253 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.253 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.254 238945 DEBUG nova.virt.libvirt.driver [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.279 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.280 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523078.221792, 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.280 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.310 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.315 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523078.224253, 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.316 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.327 238945 INFO nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Took 7.89 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.328 238945 DEBUG nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.338 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.341 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.374 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.405 238945 INFO nova.compute.manager [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Took 9.14 seconds to build instance.#033[00m
Jan 27 09:11:18 np0005597378 nova_compute[238941]: 2026-01-27 14:11:18.422 238945 DEBUG oslo_concurrency.lockutils [None req-21f7f2aa-0c40-4a6a-a2da-96842ae54485 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:18 np0005597378 podman[340746]: 2026-01-27 14:11:18.576824041 +0000 UTC m=+0.056394578 container create a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 09:11:18 np0005597378 systemd[1]: Started libpod-conmon-a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5.scope.
Jan 27 09:11:18 np0005597378 podman[340746]: 2026-01-27 14:11:18.545595421 +0000 UTC m=+0.025165978 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:11:18 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:11:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e2108cafd931f790f376735466b08468a1964a5a8c17bc3840bb912f768818/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:18 np0005597378 podman[340746]: 2026-01-27 14:11:18.66644197 +0000 UTC m=+0.146012527 container init a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 09:11:18 np0005597378 podman[340746]: 2026-01-27 14:11:18.672412061 +0000 UTC m=+0.151982608 container start a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:11:18 np0005597378 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [NOTICE]   (340765) : New worker (340767) forked
Jan 27 09:11:18 np0005597378 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [NOTICE]   (340765) : Loading success.
Jan 27 09:11:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 246 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Jan 27 09:11:19 np0005597378 podman[340776]: 2026-01-27 14:11:19.724107725 +0000 UTC m=+0.058898394 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.273 238945 DEBUG nova.compute.manager [req-4f39a57d-7443-406f-a8e5-7599513eb61d req-59419b50-14fe-49cf-8e46-2cf68e86bc02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.274 238945 DEBUG oslo_concurrency.lockutils [req-4f39a57d-7443-406f-a8e5-7599513eb61d req-59419b50-14fe-49cf-8e46-2cf68e86bc02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.274 238945 DEBUG oslo_concurrency.lockutils [req-4f39a57d-7443-406f-a8e5-7599513eb61d req-59419b50-14fe-49cf-8e46-2cf68e86bc02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.274 238945 DEBUG oslo_concurrency.lockutils [req-4f39a57d-7443-406f-a8e5-7599513eb61d req-59419b50-14fe-49cf-8e46-2cf68e86bc02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.274 238945 DEBUG nova.compute.manager [req-4f39a57d-7443-406f-a8e5-7599513eb61d req-59419b50-14fe-49cf-8e46-2cf68e86bc02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.274 238945 WARNING nova.compute.manager [req-4f39a57d-7443-406f-a8e5-7599513eb61d req-59419b50-14fe-49cf-8e46-2cf68e86bc02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received unexpected event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.700 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.701 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.701 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.702 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.702 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.705 238945 INFO nova.compute.manager [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Terminating instance#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.706 238945 DEBUG nova.compute.manager [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:11:20 np0005597378 kernel: tapffadabd9-ed (unregistering): left promiscuous mode
Jan 27 09:11:20 np0005597378 NetworkManager[48904]: <info>  [1769523080.7576] device (tapffadabd9-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:11:20 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:20Z|01140|binding|INFO|Releasing lport ffadabd9-ed10-48aa-9297-8b6cf0c692a0 from this chassis (sb_readonly=0)
Jan 27 09:11:20 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:20Z|01141|binding|INFO|Setting lport ffadabd9-ed10-48aa-9297-8b6cf0c692a0 down in Southbound
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:20 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:20Z|01142|binding|INFO|Removing iface tapffadabd9-ed ovn-installed in OVS
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.775 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:6a:a5 10.100.0.4 2001:db8::f816:3eff:fe1b:6aa5'], port_security=['fa:16:3e:1b:6a:a5 10.100.0.4 2001:db8::f816:3eff:fe1b:6aa5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fe1b:6aa5/64', 'neutron:device_id': '0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6ab7306f-f0ac-4893-b4ef-6c4f73785c72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ef67578-3f69-4f39-a5a2-466e94654d58, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ffadabd9-ed10-48aa-9297-8b6cf0c692a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.777 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ffadabd9-ed10-48aa-9297-8b6cf0c692a0 in datapath 03273c18-2cc1-455f-8ffc-28f9813c664b unbound from our chassis#033[00m
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.779 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03273c18-2cc1-455f-8ffc-28f9813c664b#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.794 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.796 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c061ec24-7715-4176-bdbc-414ab2c7fe44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:20 np0005597378 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000070.scope: Deactivated successfully.
Jan 27 09:11:20 np0005597378 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000070.scope: Consumed 13.693s CPU time.
Jan 27 09:11:20 np0005597378 podman[340795]: 2026-01-27 14:11:20.824713674 +0000 UTC m=+0.154867306 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 27 09:11:20 np0005597378 systemd-machined[207425]: Machine qemu-142-instance-00000070 terminated.
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.835 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[21cbc23e-f6f5-4225-89dd-9ea2496a5019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.838 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a537a211-b91f-43f8-a3d1-047f6b54f20a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 246 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 735 KiB/s rd, 3.9 MiB/s wr, 113 op/s
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.870 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f59842-7322-4e94-9bc3-9d4d8dc831c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.887 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a5a186-31b7-4ce1-a64b-159a44bd8886]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03273c18-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566324, 'reachable_time': 34458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340830, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.906 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[19ba9ed7-9460-45ed-85e9-eee9a96a2cd1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03273c18-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566338, 'tstamp': 566338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340831, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap03273c18-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566341, 'tstamp': 566341}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340831, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.908 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03273c18-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.909 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.914 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03273c18-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.914 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.915 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03273c18-20, col_values=(('external_ids', {'iface-id': 'bcf23ed4-8bec-4985-bf23-8dec9fe6105c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:20.915 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.945 238945 INFO nova.virt.libvirt.driver [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Instance destroyed successfully.#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.946 238945 DEBUG nova.objects.instance [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.959 238945 DEBUG nova.virt.libvirt.vif [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:10:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1563603684',display_name='tempest-TestGettingAddress-server-1563603684',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1563603684',id=112,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrnZawttJ9rxf4d7VbQ4gL91LgaB1z1r/xEUpx5mFvSKT0Aa62PdqFndwxjTEee/H4izKipNuAxh3gARhoihK2NWIJ6A2c4emnHhXuH9NTMplWR1hzf4srQVnSQwLB3CQ==',key_name='tempest-TestGettingAddress-1512698164',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:10:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ks3qcltj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:10:57Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.960 238945 DEBUG nova.network.os_vif_util [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "address": "fa:16:3e:1b:6a:a5", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:6aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffadabd9-ed", "ovs_interfaceid": "ffadabd9-ed10-48aa-9297-8b6cf0c692a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.962 238945 DEBUG nova.network.os_vif_util [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.963 238945 DEBUG os_vif [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.965 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.966 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapffadabd9-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.967 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:20 np0005597378 nova_compute[238941]: 2026-01-27 14:11:20.971 238945 INFO os_vif [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:6a:a5,bridge_name='br-int',has_traffic_filtering=True,id=ffadabd9-ed10-48aa-9297-8b6cf0c692a0,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffadabd9-ed')#033[00m
Jan 27 09:11:21 np0005597378 nova_compute[238941]: 2026-01-27 14:11:21.210 238945 INFO nova.virt.libvirt.driver [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Deleting instance files /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_del#033[00m
Jan 27 09:11:21 np0005597378 nova_compute[238941]: 2026-01-27 14:11:21.211 238945 INFO nova.virt.libvirt.driver [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Deletion of /var/lib/nova/instances/0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9_del complete#033[00m
Jan 27 09:11:21 np0005597378 nova_compute[238941]: 2026-01-27 14:11:21.259 238945 INFO nova.compute.manager [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Took 0.55 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:11:21 np0005597378 nova_compute[238941]: 2026-01-27 14:11:21.260 238945 DEBUG oslo.service.loopingcall [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:11:21 np0005597378 nova_compute[238941]: 2026-01-27 14:11:21.260 238945 DEBUG nova.compute.manager [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:11:21 np0005597378 nova_compute[238941]: 2026-01-27 14:11:21.260 238945 DEBUG nova.network.neutron [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:11:21 np0005597378 nova_compute[238941]: 2026-01-27 14:11:21.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:21 np0005597378 nova_compute[238941]: 2026-01-27 14:11:21.918 238945 DEBUG nova.network.neutron [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:11:21 np0005597378 nova_compute[238941]: 2026-01-27 14:11:21.939 238945 INFO nova.compute.manager [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Took 0.68 seconds to deallocate network for instance.#033[00m
Jan 27 09:11:21 np0005597378 nova_compute[238941]: 2026-01-27 14:11:21.999 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:21 np0005597378 nova_compute[238941]: 2026-01-27 14:11:21.999 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.076 238945 DEBUG nova.compute.manager [req-2be405d9-47fd-4b9d-81be-efe6e36a3ed6 req-c71822df-0ee9-48d5-8f02-0550cb0e41af 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-vif-deleted-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.105 238945 DEBUG oslo_concurrency.processutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.436 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-changed-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.437 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Refreshing instance network info cache due to event network-changed-ffadabd9-ed10-48aa-9297-8b6cf0c692a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.437 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.437 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.437 238945 DEBUG nova.network.neutron [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Refreshing network info cache for port ffadabd9-ed10-48aa-9297-8b6cf0c692a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.494 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:11:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2516374749' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.643 238945 DEBUG oslo_concurrency.processutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.648 238945 DEBUG nova.compute.provider_tree [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.663 238945 DEBUG nova.scheduler.client.report [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.682 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.709 238945 INFO nova.scheduler.client.report [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9#033[00m
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.776 238945 DEBUG oslo_concurrency.lockutils [None req-b10975a2-9ae0-4d39-b887-a6bb1571bc55 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 246 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 595 KiB/s rd, 3.4 MiB/s wr, 83 op/s
Jan 27 09:11:22 np0005597378 nova_compute[238941]: 2026-01-27 14:11:22.896 238945 DEBUG nova.network.neutron [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.587 238945 DEBUG nova.network.neutron [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.612 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.613 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-vif-unplugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.613 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.614 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.614 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.614 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] No waiting events found dispatching network-vif-unplugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.614 238945 WARNING nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received unexpected event network-vif-unplugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.615 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.615 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.615 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.615 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.616 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] No waiting events found dispatching network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.616 238945 WARNING nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Received unexpected event network-vif-plugged-ffadabd9-ed10-48aa-9297-8b6cf0c692a0 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.616 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-changed-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.616 238945 DEBUG nova.compute.manager [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Refreshing instance network info cache due to event network-changed-78c393a3-5ecf-49c2-9d5a-dab369d909b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.616 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.617 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:11:23 np0005597378 nova_compute[238941]: 2026-01-27 14:11:23.617 238945 DEBUG nova.network.neutron [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Refreshing network info cache for port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:11:24 np0005597378 nova_compute[238941]: 2026-01-27 14:11:24.862 238945 DEBUG nova.compute.manager [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-changed-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:24 np0005597378 nova_compute[238941]: 2026-01-27 14:11:24.862 238945 DEBUG nova.compute.manager [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Refreshing instance network info cache due to event network-changed-779af42d-d593-45a0-a42d-cf6aa2d34f31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:11:24 np0005597378 nova_compute[238941]: 2026-01-27 14:11:24.862 238945 DEBUG oslo_concurrency.lockutils [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:11:24 np0005597378 nova_compute[238941]: 2026-01-27 14:11:24.863 238945 DEBUG oslo_concurrency.lockutils [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:11:24 np0005597378 nova_compute[238941]: 2026-01-27 14:11:24.863 238945 DEBUG nova.network.neutron [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Refreshing network info cache for port 779af42d-d593-45a0-a42d-cf6aa2d34f31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:11:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 190 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 139 op/s
Jan 27 09:11:24 np0005597378 nova_compute[238941]: 2026-01-27 14:11:24.974 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:24 np0005597378 nova_compute[238941]: 2026-01-27 14:11:24.974 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:24 np0005597378 nova_compute[238941]: 2026-01-27 14:11:24.975 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:24 np0005597378 nova_compute[238941]: 2026-01-27 14:11:24.975 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:24 np0005597378 nova_compute[238941]: 2026-01-27 14:11:24.975 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:24 np0005597378 nova_compute[238941]: 2026-01-27 14:11:24.976 238945 INFO nova.compute.manager [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Terminating instance#033[00m
Jan 27 09:11:24 np0005597378 nova_compute[238941]: 2026-01-27 14:11:24.977 238945 DEBUG nova.compute.manager [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:11:25 np0005597378 kernel: tap779af42d-d5 (unregistering): left promiscuous mode
Jan 27 09:11:25 np0005597378 NetworkManager[48904]: <info>  [1769523085.0219] device (tap779af42d-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:11:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:25Z|01143|binding|INFO|Releasing lport 779af42d-d593-45a0-a42d-cf6aa2d34f31 from this chassis (sb_readonly=0)
Jan 27 09:11:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:25Z|01144|binding|INFO|Setting lport 779af42d-d593-45a0-a42d-cf6aa2d34f31 down in Southbound
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.046 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:25Z|01145|binding|INFO|Removing iface tap779af42d-d5 ovn-installed in OVS
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.050 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.056 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:2f:a4 10.100.0.3 2001:db8::f816:3eff:fe03:2fa4'], port_security=['fa:16:3e:03:2f:a4 10.100.0.3 2001:db8::f816:3eff:fe03:2fa4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe03:2fa4/64', 'neutron:device_id': 'd9fff719-3828-4c36-8698-604421b7382d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03273c18-2cc1-455f-8ffc-28f9813c664b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6ab7306f-f0ac-4893-b4ef-6c4f73785c72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ef67578-3f69-4f39-a5a2-466e94654d58, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=779af42d-d593-45a0-a42d-cf6aa2d34f31) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.057 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 779af42d-d593-45a0-a42d-cf6aa2d34f31 in datapath 03273c18-2cc1-455f-8ffc-28f9813c664b unbound from our chassis#033[00m
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.058 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03273c18-2cc1-455f-8ffc-28f9813c664b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.059 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3815bf-ce83-4aa8-ab60-a07caf1e887c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.060 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b namespace which is not needed anymore#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.076 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:25 np0005597378 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Jan 27 09:11:25 np0005597378 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006f.scope: Consumed 14.971s CPU time.
Jan 27 09:11:25 np0005597378 systemd-machined[207425]: Machine qemu-140-instance-0000006f terminated.
Jan 27 09:11:25 np0005597378 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [NOTICE]   (338920) : haproxy version is 2.8.14-c23fe91
Jan 27 09:11:25 np0005597378 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [NOTICE]   (338920) : path to executable is /usr/sbin/haproxy
Jan 27 09:11:25 np0005597378 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [WARNING]  (338920) : Exiting Master process...
Jan 27 09:11:25 np0005597378 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [ALERT]    (338920) : Current worker (338922) exited with code 143 (Terminated)
Jan 27 09:11:25 np0005597378 neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b[338916]: [WARNING]  (338920) : All workers exited. Exiting... (0)
Jan 27 09:11:25 np0005597378 systemd[1]: libpod-ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97.scope: Deactivated successfully.
Jan 27 09:11:25 np0005597378 podman[340909]: 2026-01-27 14:11:25.20114372 +0000 UTC m=+0.047288722 container died ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.212 238945 INFO nova.virt.libvirt.driver [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] Instance destroyed successfully.#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.212 238945 DEBUG nova.objects.instance [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid d9fff719-3828-4c36-8698-604421b7382d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97-userdata-shm.mount: Deactivated successfully.
Jan 27 09:11:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-755fc44f600491e92dad7ef6678a16baca6db4e3ed1c446f070a15f9116340f0-merged.mount: Deactivated successfully.
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.231 238945 DEBUG nova.virt.libvirt.vif [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2105858304',display_name='tempest-TestGettingAddress-server-2105858304',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2105858304',id=111,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrnZawttJ9rxf4d7VbQ4gL91LgaB1z1r/xEUpx5mFvSKT0Aa62PdqFndwxjTEee/H4izKipNuAxh3gARhoihK2NWIJ6A2c4emnHhXuH9NTMplWR1hzf4srQVnSQwLB3CQ==',key_name='tempest-TestGettingAddress-1512698164',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:10:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-lhof0svg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:10:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=d9fff719-3828-4c36-8698-604421b7382d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.232 238945 DEBUG nova.network.os_vif_util [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.232 238945 DEBUG nova.network.os_vif_util [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.233 238945 DEBUG os_vif [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.234 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap779af42d-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.244 238945 INFO os_vif [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:2f:a4,bridge_name='br-int',has_traffic_filtering=True,id=779af42d-d593-45a0-a42d-cf6aa2d34f31,network=Network(03273c18-2cc1-455f-8ffc-28f9813c664b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap779af42d-d5')#033[00m
Jan 27 09:11:25 np0005597378 podman[340909]: 2026-01-27 14:11:25.245816491 +0000 UTC m=+0.091961473 container cleanup ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:11:25 np0005597378 systemd[1]: libpod-conmon-ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97.scope: Deactivated successfully.
Jan 27 09:11:25 np0005597378 podman[340955]: 2026-01-27 14:11:25.315438122 +0000 UTC m=+0.040296603 container remove ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.321 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f65685ba-a263-48d4-a06c-e9d6fcb7b4de]: (4, ('Tue Jan 27 02:11:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b (ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97)\nddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97\nTue Jan 27 02:11:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b (ddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97)\nddec02d199573424671e5ce5853769e848399d6121aef7758a24c3509aabaf97\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.322 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6336aa73-8eab-4736-94df-f26d43c1bea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.323 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03273c18-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:25 np0005597378 kernel: tap03273c18-20: left promiscuous mode
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.326 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.339 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.342 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed7118e-1471-42ec-83a7-54aee904a140]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.357 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9bab8c0a-5701-4de0-8b4a-9191e9efa139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.358 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3d2e22e4-7185-4201-bf2f-880f3c911e11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.377 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b83249d-6dd2-4b46-acab-4e6fcd49187a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566314, 'reachable_time': 15599, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340981, 'error': None, 'target': 'ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:25 np0005597378 systemd[1]: run-netns-ovnmeta\x2d03273c18\x2d2cc1\x2d455f\x2d8ffc\x2d28f9813c664b.mount: Deactivated successfully.
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.382 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-03273c18-2cc1-455f-8ffc-28f9813c664b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:11:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:25.382 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[7a91480e-5dcc-4fc0-b237-57a742c0928c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.386 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.520 238945 INFO nova.virt.libvirt.driver [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Deleting instance files /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d_del#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.521 238945 INFO nova.virt.libvirt.driver [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Deletion of /var/lib/nova/instances/d9fff719-3828-4c36-8698-604421b7382d_del complete#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.574 238945 INFO nova.compute.manager [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.575 238945 DEBUG oslo.service.loopingcall [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.576 238945 DEBUG nova.compute.manager [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.576 238945 DEBUG nova.network.neutron [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.625 238945 DEBUG nova.network.neutron [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updated VIF entry in instance network info cache for port 78c393a3-5ecf-49c2-9d5a-dab369d909b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.626 238945 DEBUG nova.network.neutron [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updating instance_info_cache with network_info: [{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:11:25 np0005597378 nova_compute[238941]: 2026-01-27 14:11:25.650 238945 DEBUG oslo_concurrency.lockutils [req-c8006bea-f982-47bb-89de-856fbc930c16 req-2fa5dba5-47e5-4fcb-9376-0e6376fba97f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:11:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.607 238945 DEBUG nova.network.neutron [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.627 238945 INFO nova.compute.manager [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] Took 1.05 seconds to deallocate network for instance.#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.666 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.667 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.743 238945 DEBUG oslo_concurrency.processutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 449 KiB/s wr, 126 op/s
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.957 238945 DEBUG nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-vif-unplugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.958 238945 DEBUG oslo_concurrency.lockutils [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.958 238945 DEBUG oslo_concurrency.lockutils [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.958 238945 DEBUG oslo_concurrency.lockutils [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.959 238945 DEBUG nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] No waiting events found dispatching network-vif-unplugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.959 238945 WARNING nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received unexpected event network-vif-unplugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.959 238945 DEBUG nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.960 238945 DEBUG oslo_concurrency.lockutils [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d9fff719-3828-4c36-8698-604421b7382d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.960 238945 DEBUG oslo_concurrency.lockutils [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.960 238945 DEBUG oslo_concurrency.lockutils [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.960 238945 DEBUG nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] No waiting events found dispatching network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.961 238945 WARNING nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received unexpected event network-vif-plugged-779af42d-d593-45a0-a42d-cf6aa2d34f31 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:11:26 np0005597378 nova_compute[238941]: 2026-01-27 14:11:26.961 238945 DEBUG nova.compute.manager [req-a43e892d-6a74-4829-a8c5-be513696c862 req-e2ed9c98-0117-4376-9c35-470fa80dd21a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Received event network-vif-deleted-779af42d-d593-45a0-a42d-cf6aa2d34f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:11:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120598773' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:11:27 np0005597378 nova_compute[238941]: 2026-01-27 14:11:27.367 238945 DEBUG oslo_concurrency.processutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:27 np0005597378 nova_compute[238941]: 2026-01-27 14:11:27.372 238945 DEBUG nova.compute.provider_tree [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:11:27 np0005597378 nova_compute[238941]: 2026-01-27 14:11:27.387 238945 DEBUG nova.scheduler.client.report [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:11:27 np0005597378 nova_compute[238941]: 2026-01-27 14:11:27.406 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:27 np0005597378 nova_compute[238941]: 2026-01-27 14:11:27.431 238945 INFO nova.scheduler.client.report [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance d9fff719-3828-4c36-8698-604421b7382d#033[00m
Jan 27 09:11:27 np0005597378 nova_compute[238941]: 2026-01-27 14:11:27.492 238945 DEBUG oslo_concurrency.lockutils [None req-5a3f453c-7b31-4a78-ae35-85e3f9b2f26b 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "d9fff719-3828-4c36-8698-604421b7382d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:27 np0005597378 nova_compute[238941]: 2026-01-27 14:11:27.494 238945 DEBUG nova.network.neutron [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updated VIF entry in instance network info cache for port 779af42d-d593-45a0-a42d-cf6aa2d34f31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:11:27 np0005597378 nova_compute[238941]: 2026-01-27 14:11:27.494 238945 DEBUG nova.network.neutron [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d9fff719-3828-4c36-8698-604421b7382d] Updating instance_info_cache with network_info: [{"id": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "address": "fa:16:3e:03:2f:a4", "network": {"id": "03273c18-2cc1-455f-8ffc-28f9813c664b", "bridge": "br-int", "label": "tempest-network-smoke--1962379635", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe03:2fa4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap779af42d-d5", "ovs_interfaceid": "779af42d-d593-45a0-a42d-cf6aa2d34f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:11:27 np0005597378 nova_compute[238941]: 2026-01-27 14:11:27.497 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:27 np0005597378 nova_compute[238941]: 2026-01-27 14:11:27.518 238945 DEBUG oslo_concurrency.lockutils [req-7552132a-9dd6-4b7e-8c85-78e03f7be2a3 req-57930cc6-04d1-4259-a969-3544bc389c45 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d9fff719-3828-4c36-8698-604421b7382d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011198653089291434 of space, bias 1.0, pg target 0.33595959267874304 quantized to 32 (current 32)
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006685903422137121 of space, bias 1.0, pg target 0.2005771026641136 quantized to 32 (current 32)
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0566366148109881e-06 of space, bias 4.0, pg target 0.0012679639377731857 quantized to 16 (current 16)
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:11:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:11:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 122 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 29 KiB/s wr, 129 op/s
Jan 27 09:11:30 np0005597378 nova_compute[238941]: 2026-01-27 14:11:30.236 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 88 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 124 op/s
Jan 27 09:11:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:32Z|01146|binding|INFO|Releasing lport 91bf8ed9-ddad-43fe-a33c-84a91be62de5 from this chassis (sb_readonly=0)
Jan 27 09:11:32 np0005597378 nova_compute[238941]: 2026-01-27 14:11:32.368 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:32 np0005597378 nova_compute[238941]: 2026-01-27 14:11:32.497 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 88 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.2 KiB/s wr, 108 op/s
Jan 27 09:11:33 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:33Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:e9:b9 10.100.0.11
Jan 27 09:11:33 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:33Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:e9:b9 10.100.0.11
Jan 27 09:11:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 110 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 157 op/s
Jan 27 09:11:35 np0005597378 nova_compute[238941]: 2026-01-27 14:11:35.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:35.556 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:11:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:35.557 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:11:35 np0005597378 nova_compute[238941]: 2026-01-27 14:11:35.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:35 np0005597378 nova_compute[238941]: 2026-01-27 14:11:35.944 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523080.9427621, 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:11:35 np0005597378 nova_compute[238941]: 2026-01-27 14:11:35.945 238945 INFO nova.compute.manager [-] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:11:36 np0005597378 nova_compute[238941]: 2026-01-27 14:11:36.018 238945 DEBUG nova.compute.manager [None req-67450093-7ca5-45ef-b73c-1608e77ce475 - - - - - -] [instance: 0b6377c9-9e9e-4fed-991c-16c0a4ea4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:11:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 27 09:11:37 np0005597378 nova_compute[238941]: 2026-01-27 14:11:37.499 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Jan 27 09:11:40 np0005597378 nova_compute[238941]: 2026-01-27 14:11:40.210 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523085.209496, d9fff719-3828-4c36-8698-604421b7382d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:11:40 np0005597378 nova_compute[238941]: 2026-01-27 14:11:40.211 238945 INFO nova.compute.manager [-] [instance: d9fff719-3828-4c36-8698-604421b7382d] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:11:40 np0005597378 nova_compute[238941]: 2026-01-27 14:11:40.238 238945 DEBUG nova.compute.manager [None req-d86a2b0c-a0ec-46e8-8aea-ee7429c3a869 - - - - - -] [instance: d9fff719-3828-4c36-8698-604421b7382d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:11:40 np0005597378 nova_compute[238941]: 2026-01-27 14:11:40.240 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:40 np0005597378 nova_compute[238941]: 2026-01-27 14:11:40.259 238945 INFO nova.compute.manager [None req-30634ba7-1a35-40e9-ae68-5eb59253b497 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Get console output#033[00m
Jan 27 09:11:40 np0005597378 nova_compute[238941]: 2026-01-27 14:11:40.265 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:11:40 np0005597378 nova_compute[238941]: 2026-01-27 14:11:40.623 238945 DEBUG oslo_concurrency.lockutils [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:40 np0005597378 nova_compute[238941]: 2026-01-27 14:11:40.624 238945 DEBUG oslo_concurrency.lockutils [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:40 np0005597378 nova_compute[238941]: 2026-01-27 14:11:40.624 238945 DEBUG nova.compute.manager [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:11:40 np0005597378 nova_compute[238941]: 2026-01-27 14:11:40.629 238945 DEBUG nova.compute.manager [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 27 09:11:40 np0005597378 nova_compute[238941]: 2026-01-27 14:11:40.630 238945 DEBUG nova.objects.instance [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'flavor' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:40 np0005597378 nova_compute[238941]: 2026-01-27 14:11:40.715 238945 DEBUG nova.virt.libvirt.driver [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 09:11:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 27 09:11:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:40 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Jan 27 09:11:40 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:40.990526) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:11:40 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Jan 27 09:11:40 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523100990603, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 1061, "num_deletes": 251, "total_data_size": 1531578, "memory_usage": 1556744, "flush_reason": "Manual Compaction"}
Jan 27 09:11:40 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523101002578, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 1516852, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41399, "largest_seqno": 42459, "table_properties": {"data_size": 1511648, "index_size": 2664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11192, "raw_average_key_size": 19, "raw_value_size": 1501310, "raw_average_value_size": 2652, "num_data_blocks": 119, "num_entries": 566, "num_filter_entries": 566, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523000, "oldest_key_time": 1769523000, "file_creation_time": 1769523100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 12106 microseconds, and 6614 cpu microseconds.
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.002623) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 1516852 bytes OK
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.002654) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.004403) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.004416) EVENT_LOG_v1 {"time_micros": 1769523101004412, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.004434) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 1526612, prev total WAL file size 1526612, number of live WAL files 2.
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.005110) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(1481KB)], [92(10MB)]
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523101005159, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 12320413, "oldest_snapshot_seqno": -1}
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6681 keys, 10556479 bytes, temperature: kUnknown
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523101060356, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 10556479, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10509665, "index_size": 29038, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 171523, "raw_average_key_size": 25, "raw_value_size": 10388054, "raw_average_value_size": 1554, "num_data_blocks": 1147, "num_entries": 6681, "num_filter_entries": 6681, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.060598) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 10556479 bytes
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.062935) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.9 rd, 191.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.3 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(15.1) write-amplify(7.0) OK, records in: 7195, records dropped: 514 output_compression: NoCompression
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.062955) EVENT_LOG_v1 {"time_micros": 1769523101062946, "job": 54, "event": "compaction_finished", "compaction_time_micros": 55270, "compaction_time_cpu_micros": 24488, "output_level": 6, "num_output_files": 1, "total_output_size": 10556479, "num_input_records": 7195, "num_output_records": 6681, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523101063362, "job": 54, "event": "table_file_deletion", "file_number": 94}
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523101065815, "job": 54, "event": "table_file_deletion", "file_number": 92}
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.005009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.065885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.065889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.065891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.065892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:11:41 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:11:41.065894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:11:42 np0005597378 nova_compute[238941]: 2026-01-27 14:11:42.502 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:42 np0005597378 nova_compute[238941]: 2026-01-27 14:11:42.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 243 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 27 09:11:43 np0005597378 kernel: tap78c393a3-5e (unregistering): left promiscuous mode
Jan 27 09:11:43 np0005597378 NetworkManager[48904]: <info>  [1769523103.0289] device (tap78c393a3-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:11:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:43Z|01147|binding|INFO|Releasing lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 from this chassis (sb_readonly=0)
Jan 27 09:11:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:43Z|01148|binding|INFO|Setting lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 down in Southbound
Jan 27 09:11:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:43Z|01149|binding|INFO|Removing iface tap78c393a3-5e ovn-installed in OVS
Jan 27 09:11:43 np0005597378 nova_compute[238941]: 2026-01-27 14:11:43.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:43 np0005597378 nova_compute[238941]: 2026-01-27 14:11:43.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.049 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:e9:b9 10.100.0.11'], port_security=['fa:16:3e:00:e9:b9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '46ce04c1-b6c3-42cb-99b4-546ad865b2f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08f92c92-fca4-41b9-a9a8-67625119a840', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9e07fcf9-373e-4573-bc84-da8b1454208f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f69261e-cc94-4cab-96cc-931010359962, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=78c393a3-5ecf-49c2-9d5a-dab369d909b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.050 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 in datapath 08f92c92-fca4-41b9-a9a8-67625119a840 unbound from our chassis#033[00m
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.051 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08f92c92-fca4-41b9-a9a8-67625119a840, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.052 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1981c69e-212d-4feb-84cc-0bf645dbdc76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.053 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 namespace which is not needed anymore#033[00m
Jan 27 09:11:43 np0005597378 nova_compute[238941]: 2026-01-27 14:11:43.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:43 np0005597378 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000071.scope: Deactivated successfully.
Jan 27 09:11:43 np0005597378 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000071.scope: Consumed 15.610s CPU time.
Jan 27 09:11:43 np0005597378 systemd-machined[207425]: Machine qemu-143-instance-00000071 terminated.
Jan 27 09:11:43 np0005597378 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [NOTICE]   (340765) : haproxy version is 2.8.14-c23fe91
Jan 27 09:11:43 np0005597378 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [NOTICE]   (340765) : path to executable is /usr/sbin/haproxy
Jan 27 09:11:43 np0005597378 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [WARNING]  (340765) : Exiting Master process...
Jan 27 09:11:43 np0005597378 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [ALERT]    (340765) : Current worker (340767) exited with code 143 (Terminated)
Jan 27 09:11:43 np0005597378 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[340761]: [WARNING]  (340765) : All workers exited. Exiting... (0)
Jan 27 09:11:43 np0005597378 systemd[1]: libpod-a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5.scope: Deactivated successfully.
Jan 27 09:11:43 np0005597378 podman[341031]: 2026-01-27 14:11:43.194305479 +0000 UTC m=+0.047242091 container died a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 09:11:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5-userdata-shm.mount: Deactivated successfully.
Jan 27 09:11:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f3e2108cafd931f790f376735466b08468a1964a5a8c17bc3840bb912f768818-merged.mount: Deactivated successfully.
Jan 27 09:11:43 np0005597378 podman[341031]: 2026-01-27 14:11:43.229317031 +0000 UTC m=+0.082253643 container cleanup a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:11:43 np0005597378 systemd[1]: libpod-conmon-a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5.scope: Deactivated successfully.
Jan 27 09:11:43 np0005597378 podman[341062]: 2026-01-27 14:11:43.290835595 +0000 UTC m=+0.043102630 container remove a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.298 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[71f16ce3-c649-48b6-8fc6-08cf176ff066]: (4, ('Tue Jan 27 02:11:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 (a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5)\na66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5\nTue Jan 27 02:11:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 (a66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5)\na66b5f8dbab443b58f14c3aca63c204fe693722fedba3fd84c30741d51e141a5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.300 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4e6509-c7d5-4149-9611-ef0aa53f72f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.301 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08f92c92-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:43 np0005597378 nova_compute[238941]: 2026-01-27 14:11:43.303 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:43 np0005597378 kernel: tap08f92c92-f0: left promiscuous mode
Jan 27 09:11:43 np0005597378 nova_compute[238941]: 2026-01-27 14:11:43.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.322 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e5441258-9efa-4c9e-9002-4b760af2e62f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.338 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[32e5a6e0-4c36-4f3c-b32f-26806d143d5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.339 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72966f81-b301-48c8-bbb6-2494eb82ff87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.354 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fdc150-39bd-4274-ade9-147929568ee6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571757, 'reachable_time': 19448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341089, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:43 np0005597378 systemd[1]: run-netns-ovnmeta\x2d08f92c92\x2dfca4\x2d41b9\x2da9a8\x2d67625119a840.mount: Deactivated successfully.
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.358 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:11:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:43.358 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6c6f26-9821-4546-a86d-8f42d5f04d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:43 np0005597378 nova_compute[238941]: 2026-01-27 14:11:43.730 238945 INFO nova.virt.libvirt.driver [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance shutdown successfully after 3 seconds.#033[00m
Jan 27 09:11:43 np0005597378 nova_compute[238941]: 2026-01-27 14:11:43.734 238945 INFO nova.virt.libvirt.driver [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance destroyed successfully.#033[00m
Jan 27 09:11:43 np0005597378 nova_compute[238941]: 2026-01-27 14:11:43.734 238945 DEBUG nova.objects.instance [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'numa_topology' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:43 np0005597378 nova_compute[238941]: 2026-01-27 14:11:43.751 238945 DEBUG nova.compute.manager [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:11:43 np0005597378 nova_compute[238941]: 2026-01-27 14:11:43.790 238945 DEBUG oslo_concurrency.lockutils [None req-430e811b-6fbe-496e-beb9-ba6a469df45e a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 2.2 MiB/s wr, 63 op/s
Jan 27 09:11:45 np0005597378 nova_compute[238941]: 2026-01-27 14:11:45.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:45.559 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:46 np0005597378 nova_compute[238941]: 2026-01-27 14:11:46.251 238945 INFO nova.compute.manager [None req-7a0d4ed6-b0bb-4609-a354-37453babf208 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Get console output#033[00m
Jan 27 09:11:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:46.317 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:46.317 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:46.317 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:46 np0005597378 nova_compute[238941]: 2026-01-27 14:11:46.439 238945 DEBUG nova.objects.instance [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'flavor' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:46 np0005597378 nova_compute[238941]: 2026-01-27 14:11:46.465 238945 DEBUG oslo_concurrency.lockutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:11:46 np0005597378 nova_compute[238941]: 2026-01-27 14:11:46.465 238945 DEBUG oslo_concurrency.lockutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:11:46 np0005597378 nova_compute[238941]: 2026-01-27 14:11:46.466 238945 DEBUG nova.network.neutron [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:11:46 np0005597378 nova_compute[238941]: 2026-01-27 14:11:46.466 238945 DEBUG nova.objects.instance [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'info_cache' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:46 np0005597378 nova_compute[238941]: 2026-01-27 14:11:46.823 238945 DEBUG nova.compute.manager [req-ce16443b-ca85-4ec3-b42c-25e928d8f802 req-5821fd9e-06f9-443e-ba3a-795bb9a12463 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-unplugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:46 np0005597378 nova_compute[238941]: 2026-01-27 14:11:46.824 238945 DEBUG oslo_concurrency.lockutils [req-ce16443b-ca85-4ec3-b42c-25e928d8f802 req-5821fd9e-06f9-443e-ba3a-795bb9a12463 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:46 np0005597378 nova_compute[238941]: 2026-01-27 14:11:46.824 238945 DEBUG oslo_concurrency.lockutils [req-ce16443b-ca85-4ec3-b42c-25e928d8f802 req-5821fd9e-06f9-443e-ba3a-795bb9a12463 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:46 np0005597378 nova_compute[238941]: 2026-01-27 14:11:46.824 238945 DEBUG oslo_concurrency.lockutils [req-ce16443b-ca85-4ec3-b42c-25e928d8f802 req-5821fd9e-06f9-443e-ba3a-795bb9a12463 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:46 np0005597378 nova_compute[238941]: 2026-01-27 14:11:46.824 238945 DEBUG nova.compute.manager [req-ce16443b-ca85-4ec3-b42c-25e928d8f802 req-5821fd9e-06f9-443e-ba3a-795bb9a12463 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-unplugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:11:46 np0005597378 nova_compute[238941]: 2026-01-27 14:11:46.825 238945 WARNING nova.compute.manager [req-ce16443b-ca85-4ec3-b42c-25e928d8f802 req-5821fd9e-06f9-443e-ba3a-795bb9a12463 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received unexpected event network-vif-unplugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 27 09:11:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 999 KiB/s wr, 14 op/s
Jan 27 09:11:46 np0005597378 nova_compute[238941]: 2026-01-27 14:11:46.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:11:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:11:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:11:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:11:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:11:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.878 238945 DEBUG nova.network.neutron [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updating instance_info_cache with network_info: [{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.899 238945 DEBUG oslo_concurrency.lockutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.922 238945 INFO nova.virt.libvirt.driver [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance destroyed successfully.#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.923 238945 DEBUG nova.objects.instance [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'numa_topology' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.936 238945 DEBUG nova.objects.instance [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.948 238945 DEBUG nova.virt.libvirt.vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-338504836',display_name='tempest-TestNetworkAdvancedServerOps-server-338504836',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-338504836',id=113,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCZoEIVPZt+W5PTyiefF6CmQjyOcCka1+ETsRUdkkq8w28gqXVMa7/Mt6QcDsLQbaY22k6G/fXcPVKU22vIQ/xZ0qJM4npfGflv72d5x3TfjpLqEY8C7F6Om5C96GUAogQ==',key_name='tempest-TestNetworkAdvancedServerOps-721889522',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:11:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-8h702f02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:11:43Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=46ce04c1-b6c3-42cb-99b4-546ad865b2f5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.949 238945 DEBUG nova.network.os_vif_util [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.950 238945 DEBUG nova.network.os_vif_util [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.950 238945 DEBUG os_vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.952 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78c393a3-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.957 238945 INFO os_vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e')#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.963 238945 DEBUG nova.virt.libvirt.driver [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Start _get_guest_xml network_info=[{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.968 238945 WARNING nova.virt.libvirt.driver [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.975 238945 DEBUG nova.virt.libvirt.host [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.975 238945 DEBUG nova.virt.libvirt.host [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.978 238945 DEBUG nova.virt.libvirt.host [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.978 238945 DEBUG nova.virt.libvirt.host [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.978 238945 DEBUG nova.virt.libvirt.driver [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.978 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.979 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.979 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.979 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.979 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.980 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.980 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.980 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.980 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.981 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.981 238945 DEBUG nova.virt.hardware [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.981 238945 DEBUG nova.objects.instance [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:47 np0005597378 nova_compute[238941]: 2026-01-27 14:11:47.997 238945 DEBUG oslo_concurrency.processutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:11:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078668747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:11:48 np0005597378 nova_compute[238941]: 2026-01-27 14:11:48.565 238945 DEBUG oslo_concurrency.processutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:48 np0005597378 nova_compute[238941]: 2026-01-27 14:11:48.598 238945 DEBUG oslo_concurrency.processutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 31 KiB/s wr, 3 op/s
Jan 27 09:11:48 np0005597378 nova_compute[238941]: 2026-01-27 14:11:48.956 238945 DEBUG nova.compute.manager [req-d9323593-3132-4577-84d5-e9579fd7f0d7 req-acbb66f3-264e-4861-8007-a184d684cc86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:48 np0005597378 nova_compute[238941]: 2026-01-27 14:11:48.957 238945 DEBUG oslo_concurrency.lockutils [req-d9323593-3132-4577-84d5-e9579fd7f0d7 req-acbb66f3-264e-4861-8007-a184d684cc86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:48 np0005597378 nova_compute[238941]: 2026-01-27 14:11:48.958 238945 DEBUG oslo_concurrency.lockutils [req-d9323593-3132-4577-84d5-e9579fd7f0d7 req-acbb66f3-264e-4861-8007-a184d684cc86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:48 np0005597378 nova_compute[238941]: 2026-01-27 14:11:48.959 238945 DEBUG oslo_concurrency.lockutils [req-d9323593-3132-4577-84d5-e9579fd7f0d7 req-acbb66f3-264e-4861-8007-a184d684cc86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:48 np0005597378 nova_compute[238941]: 2026-01-27 14:11:48.959 238945 DEBUG nova.compute.manager [req-d9323593-3132-4577-84d5-e9579fd7f0d7 req-acbb66f3-264e-4861-8007-a184d684cc86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:11:48 np0005597378 nova_compute[238941]: 2026-01-27 14:11:48.960 238945 WARNING nova.compute.manager [req-d9323593-3132-4577-84d5-e9579fd7f0d7 req-acbb66f3-264e-4861-8007-a184d684cc86 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received unexpected event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 27 09:11:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:11:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1143875684' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.203 238945 DEBUG oslo_concurrency.processutils [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.205 238945 DEBUG nova.virt.libvirt.vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-338504836',display_name='tempest-TestNetworkAdvancedServerOps-server-338504836',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-338504836',id=113,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCZoEIVPZt+W5PTyiefF6CmQjyOcCka1+ETsRUdkkq8w28gqXVMa7/Mt6QcDsLQbaY22k6G/fXcPVKU22vIQ/xZ0qJM4npfGflv72d5x3TfjpLqEY8C7F6Om5C96GUAogQ==',key_name='tempest-TestNetworkAdvancedServerOps-721889522',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:11:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-8h702f02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:11:43Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=46ce04c1-b6c3-42cb-99b4-546ad865b2f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.206 238945 DEBUG nova.network.os_vif_util [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.207 238945 DEBUG nova.network.os_vif_util [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.208 238945 DEBUG nova.objects.instance [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.225 238945 DEBUG nova.virt.libvirt.driver [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  <uuid>46ce04c1-b6c3-42cb-99b4-546ad865b2f5</uuid>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  <name>instance-00000071</name>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-338504836</nova:name>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:11:47</nova:creationTime>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:        <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:        <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:        <nova:port uuid="78c393a3-5ecf-49c2-9d5a-dab369d909b4">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <entry name="serial">46ce04c1-b6c3-42cb-99b4-546ad865b2f5</entry>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <entry name="uuid">46ce04c1-b6c3-42cb-99b4-546ad865b2f5</entry>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/46ce04c1-b6c3-42cb-99b4-546ad865b2f5_disk.config">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:00:e9:b9"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <target dev="tap78c393a3-5e"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5/console.log" append="off"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <input type="keyboard" bus="usb"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:11:49 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:11:49 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:11:49 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:11:49 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.227 238945 DEBUG nova.virt.libvirt.driver [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.228 238945 DEBUG nova.virt.libvirt.driver [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.229 238945 DEBUG nova.virt.libvirt.vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-338504836',display_name='tempest-TestNetworkAdvancedServerOps-server-338504836',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-338504836',id=113,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCZoEIVPZt+W5PTyiefF6CmQjyOcCka1+ETsRUdkkq8w28gqXVMa7/Mt6QcDsLQbaY22k6G/fXcPVKU22vIQ/xZ0qJM4npfGflv72d5x3TfjpLqEY8C7F6Om5C96GUAogQ==',key_name='tempest-TestNetworkAdvancedServerOps-721889522',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:11:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-8h702f02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:11:43Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=46ce04c1-b6c3-42cb-99b4-546ad865b2f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.229 238945 DEBUG nova.network.os_vif_util [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.230 238945 DEBUG nova.network.os_vif_util [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.230 238945 DEBUG os_vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.231 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.231 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.235 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.235 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78c393a3-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.235 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78c393a3-5e, col_values=(('external_ids', {'iface-id': '78c393a3-5ecf-49c2-9d5a-dab369d909b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:e9:b9', 'vm-uuid': '46ce04c1-b6c3-42cb-99b4-546ad865b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:49 np0005597378 NetworkManager[48904]: <info>  [1769523109.2379] manager: (tap78c393a3-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.243 238945 INFO os_vif [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e')#033[00m
Jan 27 09:11:49 np0005597378 kernel: tap78c393a3-5e: entered promiscuous mode
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.324 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:49Z|01150|binding|INFO|Claiming lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 for this chassis.
Jan 27 09:11:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:49Z|01151|binding|INFO|78c393a3-5ecf-49c2-9d5a-dab369d909b4: Claiming fa:16:3e:00:e9:b9 10.100.0.11
Jan 27 09:11:49 np0005597378 NetworkManager[48904]: <info>  [1769523109.3274] manager: (tap78c393a3-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/468)
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.331 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:e9:b9 10.100.0.11'], port_security=['fa:16:3e:00:e9:b9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '46ce04c1-b6c3-42cb-99b4-546ad865b2f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08f92c92-fca4-41b9-a9a8-67625119a840', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9e07fcf9-373e-4573-bc84-da8b1454208f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f69261e-cc94-4cab-96cc-931010359962, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=78c393a3-5ecf-49c2-9d5a-dab369d909b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.332 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 in datapath 08f92c92-fca4-41b9-a9a8-67625119a840 bound to our chassis#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.334 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08f92c92-fca4-41b9-a9a8-67625119a840#033[00m
Jan 27 09:11:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:49Z|01152|binding|INFO|Setting lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 ovn-installed in OVS
Jan 27 09:11:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:49Z|01153|binding|INFO|Setting lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 up in Southbound
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.344 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.349 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9c17e9-1703-4359-99d9-a97923ef5c90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.350 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08f92c92-f1 in ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.351 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08f92c92-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.351 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5faa2d3c-e566-4637-977e-3a8988677d33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.352 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f7eee34-6e2c-4779-9b36-b3081eaf1a9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 systemd-udevd[341167]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:11:49 np0005597378 systemd-machined[207425]: New machine qemu-144-instance-00000071.
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.365 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[30109d87-1699-4008-8b4a-f7351354e158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 NetworkManager[48904]: <info>  [1769523109.3697] device (tap78c393a3-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:11:49 np0005597378 systemd[1]: Started Virtual Machine qemu-144-instance-00000071.
Jan 27 09:11:49 np0005597378 NetworkManager[48904]: <info>  [1769523109.3706] device (tap78c393a3-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.388 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[85f01238-8213-4a70-9d09-cb045603b363]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.420 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a279e4c5-9618-41eb-b26b-561684c11d49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 NetworkManager[48904]: <info>  [1769523109.4273] manager: (tap08f92c92-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/469)
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.426 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1edcfc-d2a0-42f9-9dd4-1e91f31c3794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.464 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b33a6a82-8bb1-4429-aa03-05bed293f5b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.467 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2c4655-a938-473e-b99d-3e4bde1fb1c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 NetworkManager[48904]: <info>  [1769523109.4899] device (tap08f92c92-f0): carrier: link connected
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.496 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4d4e5f-c6d5-4cc3-a6ec-267ad5bbfd70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.511 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d193f07e-0766-473a-a13e-be8577a2df1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08f92c92-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:c8:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 337], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574911, 'reachable_time': 17023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341200, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.524 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c05f1dd9-719f-411b-b1c7-0bd9dd55c935]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:c88e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 574911, 'tstamp': 574911}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341201, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.540 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7987b88-3011-4784-8d25-b2bfa1a4d629]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08f92c92-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:c8:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 337], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574911, 'reachable_time': 17023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341202, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.566 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea8fb45-fa23-4174-bff8-bc4048f8b946]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.627 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f15ba60c-de8c-494c-ac5d-4cf28aea9189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.628 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08f92c92-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.628 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.629 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08f92c92-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.631 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:49 np0005597378 kernel: tap08f92c92-f0: entered promiscuous mode
Jan 27 09:11:49 np0005597378 NetworkManager[48904]: <info>  [1769523109.6322] manager: (tap08f92c92-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/470)
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.635 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08f92c92-f0, col_values=(('external_ids', {'iface-id': '91bf8ed9-ddad-43fe-a33c-84a91be62de5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:11:49Z|01154|binding|INFO|Releasing lport 91bf8ed9-ddad-43fe-a33c-84a91be62de5 from this chassis (sb_readonly=0)
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.649 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.651 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08f92c92-fca4-41b9-a9a8-67625119a840.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08f92c92-fca4-41b9-a9a8-67625119a840.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.652 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[596ed2ad-911d-493a-9634-29524f4cfeab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.653 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-08f92c92-fca4-41b9-a9a8-67625119a840
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/08f92c92-fca4-41b9-a9a8-67625119a840.pid.haproxy
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 08f92c92-fca4-41b9-a9a8-67625119a840
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:11:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:11:49.654 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'env', 'PROCESS_TAG=haproxy-08f92c92-fca4-41b9-a9a8-67625119a840', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08f92c92-fca4-41b9-a9a8-67625119a840.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.946 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.947 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523109.9457715, 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.947 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.953 238945 DEBUG nova.compute.manager [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.962 238945 INFO nova.virt.libvirt.driver [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance rebooted successfully.#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.963 238945 DEBUG nova.compute.manager [None req-1a79ac13-4f80-45c2-be5a-a5e117a1ca51 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.991 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:11:49 np0005597378 nova_compute[238941]: 2026-01-27 14:11:49.994 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:11:50 np0005597378 nova_compute[238941]: 2026-01-27 14:11:50.017 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Jan 27 09:11:50 np0005597378 nova_compute[238941]: 2026-01-27 14:11:50.018 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523109.9458983, 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:11:50 np0005597378 nova_compute[238941]: 2026-01-27 14:11:50.018 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] VM Started (Lifecycle Event)#033[00m
Jan 27 09:11:50 np0005597378 nova_compute[238941]: 2026-01-27 14:11:50.036 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:11:50 np0005597378 nova_compute[238941]: 2026-01-27 14:11:50.039 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:11:50 np0005597378 podman[341275]: 2026-01-27 14:11:50.070163411 +0000 UTC m=+0.073559799 container create ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:11:50 np0005597378 systemd[1]: Started libpod-conmon-ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b.scope.
Jan 27 09:11:50 np0005597378 podman[341275]: 2026-01-27 14:11:50.016526008 +0000 UTC m=+0.019922426 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:11:50 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:11:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0a5d618344a8029a59e31204634dd0ca8cfd361d7acccc0259ced11b0a665a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:11:50 np0005597378 podman[341287]: 2026-01-27 14:11:50.144105778 +0000 UTC m=+0.047738394 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 09:11:50 np0005597378 podman[341275]: 2026-01-27 14:11:50.144712895 +0000 UTC m=+0.148109303 container init ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:11:50 np0005597378 podman[341275]: 2026-01-27 14:11:50.152088663 +0000 UTC m=+0.155485051 container start ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 09:11:50 np0005597378 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[341299]: [NOTICE]   (341312) : New worker (341314) forked
Jan 27 09:11:50 np0005597378 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[341299]: [NOTICE]   (341312) : Loading success.
Jan 27 09:11:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 20 KiB/s wr, 5 op/s
Jan 27 09:11:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:51 np0005597378 nova_compute[238941]: 2026-01-27 14:11:51.057 238945 DEBUG nova.compute.manager [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:51 np0005597378 nova_compute[238941]: 2026-01-27 14:11:51.058 238945 DEBUG oslo_concurrency.lockutils [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:51 np0005597378 nova_compute[238941]: 2026-01-27 14:11:51.058 238945 DEBUG oslo_concurrency.lockutils [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:51 np0005597378 nova_compute[238941]: 2026-01-27 14:11:51.058 238945 DEBUG oslo_concurrency.lockutils [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:51 np0005597378 nova_compute[238941]: 2026-01-27 14:11:51.059 238945 DEBUG nova.compute.manager [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:11:51 np0005597378 nova_compute[238941]: 2026-01-27 14:11:51.059 238945 WARNING nova.compute.manager [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received unexpected event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:11:51 np0005597378 nova_compute[238941]: 2026-01-27 14:11:51.059 238945 DEBUG nova.compute.manager [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:51 np0005597378 nova_compute[238941]: 2026-01-27 14:11:51.060 238945 DEBUG oslo_concurrency.lockutils [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:51 np0005597378 nova_compute[238941]: 2026-01-27 14:11:51.060 238945 DEBUG oslo_concurrency.lockutils [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:51 np0005597378 nova_compute[238941]: 2026-01-27 14:11:51.060 238945 DEBUG oslo_concurrency.lockutils [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:51 np0005597378 nova_compute[238941]: 2026-01-27 14:11:51.061 238945 DEBUG nova.compute.manager [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:11:51 np0005597378 nova_compute[238941]: 2026-01-27 14:11:51.061 238945 WARNING nova.compute.manager [req-09ca7809-77b6-4406-828f-3bd4ad2caf05 req-6ef62f9c-9c39-4359-8635-d7b10ec0bf29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received unexpected event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:11:51 np0005597378 nova_compute[238941]: 2026-01-27 14:11:51.652 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:51 np0005597378 podman[341323]: 2026-01-27 14:11:51.743534008 +0000 UTC m=+0.084033680 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 09:11:52 np0005597378 nova_compute[238941]: 2026-01-27 14:11:52.505 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 19 KiB/s wr, 5 op/s
Jan 27 09:11:53 np0005597378 nova_compute[238941]: 2026-01-27 14:11:53.802 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:53 np0005597378 nova_compute[238941]: 2026-01-27 14:11:53.825 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 09:11:53 np0005597378 nova_compute[238941]: 2026-01-27 14:11:53.826 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:53 np0005597378 nova_compute[238941]: 2026-01-27 14:11:53.827 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:53 np0005597378 nova_compute[238941]: 2026-01-27 14:11:53.848 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:54 np0005597378 nova_compute[238941]: 2026-01-27 14:11:54.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 19 KiB/s wr, 62 op/s
Jan 27 09:11:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Jan 27 09:11:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Jan 27 09:11:55 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Jan 27 09:11:55 np0005597378 nova_compute[238941]: 2026-01-27 14:11:55.737 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:55 np0005597378 nova_compute[238941]: 2026-01-27 14:11:55.738 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:55 np0005597378 nova_compute[238941]: 2026-01-27 14:11:55.762 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:11:55 np0005597378 nova_compute[238941]: 2026-01-27 14:11:55.850 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:55 np0005597378 nova_compute[238941]: 2026-01-27 14:11:55.851 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:55 np0005597378 nova_compute[238941]: 2026-01-27 14:11:55.857 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:11:55 np0005597378 nova_compute[238941]: 2026-01-27 14:11:55.858 238945 INFO nova.compute.claims [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:11:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:11:55 np0005597378 nova_compute[238941]: 2026-01-27 14:11:55.969 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Jan 27 09:11:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Jan 27 09:11:56 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Jan 27 09:11:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:11:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791451008' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.571 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.578 238945 DEBUG nova.compute.provider_tree [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.595 238945 DEBUG nova.scheduler.client.report [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.617 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.618 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.665 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.666 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.685 238945 INFO nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.707 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.789 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.791 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.792 238945 INFO nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Creating image(s)#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.816 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.840 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.860 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.865 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 121 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 767 B/s wr, 119 op/s
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.905 238945 DEBUG nova.policy [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.953 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.954 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.955 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.956 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.979 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:11:56 np0005597378 nova_compute[238941]: 2026-01-27 14:11:56.982 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:11:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Jan 27 09:11:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Jan 27 09:11:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Jan 27 09:11:57 np0005597378 nova_compute[238941]: 2026-01-27 14:11:57.274 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:11:57 np0005597378 nova_compute[238941]: 2026-01-27 14:11:57.326 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:11:57 np0005597378 nova_compute[238941]: 2026-01-27 14:11:57.387 238945 DEBUG nova.objects.instance [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 6bf91edb-b66a-458b-b8bd-e8520cdc6349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:11:57 np0005597378 nova_compute[238941]: 2026-01-27 14:11:57.402 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:11:57 np0005597378 nova_compute[238941]: 2026-01-27 14:11:57.402 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Ensure instance console log exists: /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:11:57 np0005597378 nova_compute[238941]: 2026-01-27 14:11:57.402 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:11:57 np0005597378 nova_compute[238941]: 2026-01-27 14:11:57.403 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:11:57 np0005597378 nova_compute[238941]: 2026-01-27 14:11:57.403 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:11:57 np0005597378 nova_compute[238941]: 2026-01-27 14:11:57.509 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:58 np0005597378 nova_compute[238941]: 2026-01-27 14:11:58.044 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Successfully created port: d02567c1-b424-4fc8-bf9d-3d0c7279063b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:11:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Jan 27 09:11:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Jan 27 09:11:58 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Jan 27 09:11:58 np0005597378 nova_compute[238941]: 2026-01-27 14:11:58.544 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Successfully created port: 32a4e0d7-f322-4557-8734-4d3be1786b85 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:11:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 157 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 243 op/s
Jan 27 09:11:59 np0005597378 nova_compute[238941]: 2026-01-27 14:11:59.136 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Successfully updated port: d02567c1-b424-4fc8-bf9d-3d0c7279063b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:11:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Jan 27 09:11:59 np0005597378 nova_compute[238941]: 2026-01-27 14:11:59.235 238945 DEBUG nova.compute.manager [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-changed-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:11:59 np0005597378 nova_compute[238941]: 2026-01-27 14:11:59.236 238945 DEBUG nova.compute.manager [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing instance network info cache due to event network-changed-d02567c1-b424-4fc8-bf9d-3d0c7279063b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:11:59 np0005597378 nova_compute[238941]: 2026-01-27 14:11:59.237 238945 DEBUG oslo_concurrency.lockutils [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:11:59 np0005597378 nova_compute[238941]: 2026-01-27 14:11:59.237 238945 DEBUG oslo_concurrency.lockutils [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:11:59 np0005597378 nova_compute[238941]: 2026-01-27 14:11:59.238 238945 DEBUG nova.network.neutron [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing network info cache for port d02567c1-b424-4fc8-bf9d-3d0c7279063b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:11:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Jan 27 09:11:59 np0005597378 nova_compute[238941]: 2026-01-27 14:11:59.243 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:11:59 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Jan 27 09:11:59 np0005597378 nova_compute[238941]: 2026-01-27 14:11:59.406 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:11:59 np0005597378 nova_compute[238941]: 2026-01-27 14:11:59.476 238945 DEBUG nova.network.neutron [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:11:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:11:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2613406434' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:11:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:11:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2613406434' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:11:59 np0005597378 nova_compute[238941]: 2026-01-27 14:11:59.812 238945 DEBUG nova.network.neutron [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:11:59 np0005597378 nova_compute[238941]: 2026-01-27 14:11:59.925 238945 DEBUG oslo_concurrency.lockutils [req-8e319809-9acd-4000-9012-1e2da95a00f7 req-3a794f75-d6b4-4a8b-85e5-55e0a2fa4417 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:12:00 np0005597378 nova_compute[238941]: 2026-01-27 14:12:00.157 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Successfully updated port: 32a4e0d7-f322-4557-8734-4d3be1786b85 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:12:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Jan 27 09:12:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Jan 27 09:12:00 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Jan 27 09:12:00 np0005597378 nova_compute[238941]: 2026-01-27 14:12:00.303 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:12:00 np0005597378 nova_compute[238941]: 2026-01-27 14:12:00.303 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:12:00 np0005597378 nova_compute[238941]: 2026-01-27 14:12:00.303 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:12:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 155 KiB/s rd, 5.3 MiB/s wr, 230 op/s
Jan 27 09:12:00 np0005597378 nova_compute[238941]: 2026-01-27 14:12:00.910 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:12:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:01 np0005597378 nova_compute[238941]: 2026-01-27 14:12:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:12:01 np0005597378 nova_compute[238941]: 2026-01-27 14:12:01.427 238945 DEBUG nova.compute.manager [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-changed-32a4e0d7-f322-4557-8734-4d3be1786b85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:01 np0005597378 nova_compute[238941]: 2026-01-27 14:12:01.428 238945 DEBUG nova.compute.manager [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing instance network info cache due to event network-changed-32a4e0d7-f322-4557-8734-4d3be1786b85. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:12:01 np0005597378 nova_compute[238941]: 2026-01-27 14:12:01.429 238945 DEBUG oslo_concurrency.lockutils [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:12:01 np0005597378 nova_compute[238941]: 2026-01-27 14:12:01.442 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:01 np0005597378 nova_compute[238941]: 2026-01-27 14:12:01.443 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:01 np0005597378 nova_compute[238941]: 2026-01-27 14:12:01.443 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:01 np0005597378 nova_compute[238941]: 2026-01-27 14:12:01.443 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:12:01 np0005597378 nova_compute[238941]: 2026-01-27 14:12:01.444 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:12:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/89886756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.045 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.127 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.128 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.275 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.276 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3605MB free_disk=59.92120357044041GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.276 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.277 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.349 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.350 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 6bf91edb-b66a-458b-b8bd-e8520cdc6349 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.350 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.351 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.415 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.512 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.734 238945 DEBUG nova.network.neutron [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.754 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.755 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Instance network_info: |[{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.755 238945 DEBUG oslo_concurrency.lockutils [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.756 238945 DEBUG nova.network.neutron [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing network info cache for port 32a4e0d7-f322-4557-8734-4d3be1786b85 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.762 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Start _get_guest_xml network_info=[{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.768 238945 WARNING nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.776 238945 DEBUG nova.virt.libvirt.host [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.777 238945 DEBUG nova.virt.libvirt.host [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.780 238945 DEBUG nova.virt.libvirt.host [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.781 238945 DEBUG nova.virt.libvirt.host [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.781 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.782 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.782 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.783 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.783 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.783 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.784 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.784 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.784 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.785 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.785 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.786 238945 DEBUG nova.virt.hardware [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:12:02 np0005597378 nova_compute[238941]: 2026-01-27 14:12:02.790 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 3.8 MiB/s wr, 163 op/s
Jan 27 09:12:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:12:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514293375' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.017 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.024 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.039 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.066 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.067 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:03 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:03Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:e9:b9 10.100.0.11
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/549753461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.372 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.395 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.401 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:03 np0005597378 podman[341785]: 2026-01-27 14:12:03.736657272 +0000 UTC m=+0.042425112 container create 67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 09:12:03 np0005597378 systemd[1]: Started libpod-conmon-67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900.scope.
Jan 27 09:12:03 np0005597378 podman[341785]: 2026-01-27 14:12:03.715855002 +0000 UTC m=+0.021622852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:12:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:12:03 np0005597378 podman[341785]: 2026-01-27 14:12:03.830901496 +0000 UTC m=+0.136669366 container init 67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_ride, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:12:03 np0005597378 podman[341785]: 2026-01-27 14:12:03.838763257 +0000 UTC m=+0.144531097 container start 67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 09:12:03 np0005597378 podman[341785]: 2026-01-27 14:12:03.842661702 +0000 UTC m=+0.148429622 container attach 67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:12:03 np0005597378 vigilant_ride[341801]: 167 167
Jan 27 09:12:03 np0005597378 systemd[1]: libpod-67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900.scope: Deactivated successfully.
Jan 27 09:12:03 np0005597378 podman[341785]: 2026-01-27 14:12:03.845371925 +0000 UTC m=+0.151139785 container died 67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_ride, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:12:03 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7ac411f724c5f879457439d59c888cd282a7a5f18c3e858dcfc6f3d5060d1174-merged.mount: Deactivated successfully.
Jan 27 09:12:03 np0005597378 podman[341785]: 2026-01-27 14:12:03.890820506 +0000 UTC m=+0.196588346 container remove 67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_ride, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 09:12:03 np0005597378 systemd[1]: libpod-conmon-67f548da16fe21f8fb157fd2d1ae56d0e8694994b880241d454b1363d070e900.scope: Deactivated successfully.
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:12:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2864589897' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.969 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.971 238945 DEBUG nova.virt.libvirt.vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-465956047',display_name='tempest-TestGettingAddress-server-465956047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-465956047',id=114,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-c43xdp6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:11:56Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6bf91edb-b66a-458b-b8bd-e8520cdc6349,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.972 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.973 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.973 238945 DEBUG nova.virt.libvirt.vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-465956047',display_name='tempest-TestGettingAddress-server-465956047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-465956047',id=114,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-c43xdp6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:11:56Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6bf91edb-b66a-458b-b8bd-e8520cdc6349,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.974 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.974 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:12:03 np0005597378 nova_compute[238941]: 2026-01-27 14:12:03.975 238945 DEBUG nova.objects.instance [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6bf91edb-b66a-458b-b8bd-e8520cdc6349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.000 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  <uuid>6bf91edb-b66a-458b-b8bd-e8520cdc6349</uuid>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  <name>instance-00000072</name>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-465956047</nova:name>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:12:02</nova:creationTime>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        <nova:port uuid="d02567c1-b424-4fc8-bf9d-3d0c7279063b">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        <nova:port uuid="32a4e0d7-f322-4557-8734-4d3be1786b85">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe7b:df4b" ipVersion="6"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <entry name="serial">6bf91edb-b66a-458b-b8bd-e8520cdc6349</entry>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <entry name="uuid">6bf91edb-b66a-458b-b8bd-e8520cdc6349</entry>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk.config">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:18:aa:48"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <target dev="tapd02567c1-b4"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:7b:df:4b"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <target dev="tap32a4e0d7-f3"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/console.log" append="off"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:12:04 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:12:04 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:12:04 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:12:04 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.001 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Preparing to wait for external event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.001 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.001 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.001 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.002 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Preparing to wait for external event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.004 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.004 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.004 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.005 238945 DEBUG nova.virt.libvirt.vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-465956047',display_name='tempest-TestGettingAddress-server-465956047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-465956047',id=114,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-c43xdp6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:11:56Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6bf91edb-b66a-458b-b8bd-e8520cdc6349,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.005 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.006 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.006 238945 DEBUG os_vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.007 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.007 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.008 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.012 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd02567c1-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.012 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd02567c1-b4, col_values=(('external_ids', {'iface-id': 'd02567c1-b424-4fc8-bf9d-3d0c7279063b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:aa:48', 'vm-uuid': '6bf91edb-b66a-458b-b8bd-e8520cdc6349'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:04 np0005597378 NetworkManager[48904]: <info>  [1769523124.0154] manager: (tapd02567c1-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.025 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.026 238945 INFO os_vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4')#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.027 238945 DEBUG nova.virt.libvirt.vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-465956047',display_name='tempest-TestGettingAddress-server-465956047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-465956047',id=114,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-c43xdp6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:11:56Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6bf91edb-b66a-458b-b8bd-e8520cdc6349,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.028 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.028 238945 DEBUG nova.network.os_vif_util [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.029 238945 DEBUG os_vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.030 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.030 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.031 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.033 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.033 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32a4e0d7-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.034 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap32a4e0d7-f3, col_values=(('external_ids', {'iface-id': '32a4e0d7-f322-4557-8734-4d3be1786b85', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:df:4b', 'vm-uuid': '6bf91edb-b66a-458b-b8bd-e8520cdc6349'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.035 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:04 np0005597378 NetworkManager[48904]: <info>  [1769523124.0366] manager: (tap32a4e0d7-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.037 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.044 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.045 238945 INFO os_vif [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3')#033[00m
Jan 27 09:12:04 np0005597378 podman[341827]: 2026-01-27 14:12:04.107554823 +0000 UTC m=+0.059286455 container create cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.155 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.156 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.156 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:18:aa:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.156 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:7b:df:4b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.157 238945 INFO nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Using config drive#033[00m
Jan 27 09:12:04 np0005597378 systemd[1]: Started libpod-conmon-cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d.scope.
Jan 27 09:12:04 np0005597378 podman[341827]: 2026-01-27 14:12:04.080053624 +0000 UTC m=+0.031785366 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.182 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:12:04 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:12:04 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c61d2fbc807b08904dadbddc6641a53005a7473a7a2ab67c9cbf070540140a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:04 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c61d2fbc807b08904dadbddc6641a53005a7473a7a2ab67c9cbf070540140a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:04 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c61d2fbc807b08904dadbddc6641a53005a7473a7a2ab67c9cbf070540140a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:04 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c61d2fbc807b08904dadbddc6641a53005a7473a7a2ab67c9cbf070540140a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:04 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c61d2fbc807b08904dadbddc6641a53005a7473a7a2ab67c9cbf070540140a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:04 np0005597378 podman[341827]: 2026-01-27 14:12:04.220925591 +0000 UTC m=+0.172657263 container init cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 27 09:12:04 np0005597378 podman[341827]: 2026-01-27 14:12:04.229755569 +0000 UTC m=+0.181487191 container start cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_sinoussi, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 09:12:04 np0005597378 podman[341827]: 2026-01-27 14:12:04.233510389 +0000 UTC m=+0.185242021 container attach cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 09:12:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Jan 27 09:12:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Jan 27 09:12:04 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Jan 27 09:12:04 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:12:04 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.612 238945 INFO nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Creating config drive at /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/disk.config#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.618 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnwhyh0iu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.661 238945 DEBUG nova.network.neutron [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updated VIF entry in instance network info cache for port 32a4e0d7-f322-4557-8734-4d3be1786b85. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.663 238945 DEBUG nova.network.neutron [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.743 238945 DEBUG oslo_concurrency.lockutils [req-1ee1835d-0d4a-4d31-b098-8f25cfd85c10 req-afc69f6b-1279-430a-92ed-4291960118aa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:12:04 np0005597378 suspicious_sinoussi[341845]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:12:04 np0005597378 suspicious_sinoussi[341845]: --> All data devices are unavailable
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.764 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnwhyh0iu" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:04 np0005597378 systemd[1]: libpod-cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d.scope: Deactivated successfully.
Jan 27 09:12:04 np0005597378 podman[341827]: 2026-01-27 14:12:04.771908724 +0000 UTC m=+0.723640386 container died cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_sinoussi, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.801 238945 DEBUG nova.storage.rbd_utils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:12:04 np0005597378 nova_compute[238941]: 2026-01-27 14:12:04.805 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/disk.config 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 872 KiB/s rd, 1.3 MiB/s wr, 174 op/s
Jan 27 09:12:04 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4c61d2fbc807b08904dadbddc6641a53005a7473a7a2ab67c9cbf070540140a7-merged.mount: Deactivated successfully.
Jan 27 09:12:04 np0005597378 podman[341827]: 2026-01-27 14:12:04.982053964 +0000 UTC m=+0.933785586 container remove cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:12:04 np0005597378 systemd[1]: libpod-conmon-cf2f9b82c12fb05db11a4bb758a29e0c755f881a5f57252d0b926e38196d569d.scope: Deactivated successfully.
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.013 238945 DEBUG oslo_concurrency.processutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/disk.config 6bf91edb-b66a-458b-b8bd-e8520cdc6349_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.014 238945 INFO nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Deleting local config drive /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349/disk.config because it was imported into RBD.#033[00m
Jan 27 09:12:05 np0005597378 NetworkManager[48904]: <info>  [1769523125.0689] manager: (tapd02567c1-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/473)
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.068 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:12:05 np0005597378 kernel: tapd02567c1-b4: entered promiscuous mode
Jan 27 09:12:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:05Z|01155|binding|INFO|Claiming lport d02567c1-b424-4fc8-bf9d-3d0c7279063b for this chassis.
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.073 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:05Z|01156|binding|INFO|d02567c1-b424-4fc8-bf9d-3d0c7279063b: Claiming fa:16:3e:18:aa:48 10.100.0.13
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.085 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:aa:48 10.100.0.13'], port_security=['fa:16:3e:18:aa:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6bf91edb-b66a-458b-b8bd-e8520cdc6349', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9964511f-1456-4111-a888-96329ab42c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4fd51-ea76-4523-91d0-373d6d53e00e, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d02567c1-b424-4fc8-bf9d-3d0c7279063b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.086 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d02567c1-b424-4fc8-bf9d-3d0c7279063b in datapath 9964511f-1456-4111-a888-96329ab42c59 bound to our chassis#033[00m
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.087 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9964511f-1456-4111-a888-96329ab42c59#033[00m
Jan 27 09:12:05 np0005597378 NetworkManager[48904]: <info>  [1769523125.0936] manager: (tap32a4e0d7-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/474)
Jan 27 09:12:05 np0005597378 systemd-udevd[341979]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:12:05 np0005597378 systemd-udevd[341978]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.100 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:05Z|01157|binding|INFO|Setting lport d02567c1-b424-4fc8-bf9d-3d0c7279063b ovn-installed in OVS
Jan 27 09:12:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:05Z|01158|binding|INFO|Setting lport d02567c1-b424-4fc8-bf9d-3d0c7279063b up in Southbound
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:05 np0005597378 kernel: tap32a4e0d7-f3: entered promiscuous mode
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.104 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.104 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c09680-32da-4ec6-a3f3-27dedcf4576e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.105 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9964511f-11 in ovnmeta-9964511f-1456-4111-a888-96329ab42c59 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:12:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:05Z|01159|if_status|INFO|Dropped 2 log messages in last 1625 seconds (most recently, 1625 seconds ago) due to excessive rate
Jan 27 09:12:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:05Z|01160|if_status|INFO|Not updating pb chassis for 32a4e0d7-f322-4557-8734-4d3be1786b85 now as sb is readonly
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.106 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.108 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9964511f-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.108 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[152e7ddb-2a8f-429a-ac27-d68b852933e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.109 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[02579ceb-35f0-48a2-aedd-3017b366ce38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:05 np0005597378 NetworkManager[48904]: <info>  [1769523125.1131] device (tap32a4e0d7-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:12:05 np0005597378 NetworkManager[48904]: <info>  [1769523125.1137] device (tap32a4e0d7-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:12:05 np0005597378 NetworkManager[48904]: <info>  [1769523125.1152] device (tapd02567c1-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:12:05 np0005597378 NetworkManager[48904]: <info>  [1769523125.1159] device (tapd02567c1-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.121 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6d5c77-e1dd-4da8-b548-47446e77c731]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:05Z|01161|binding|INFO|Claiming lport 32a4e0d7-f322-4557-8734-4d3be1786b85 for this chassis.
Jan 27 09:12:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:05Z|01162|binding|INFO|32a4e0d7-f322-4557-8734-4d3be1786b85: Claiming fa:16:3e:7b:df:4b 2001:db8::f816:3eff:fe7b:df4b
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.127 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:df:4b 2001:db8::f816:3eff:fe7b:df4b'], port_security=['fa:16:3e:7b:df:4b 2001:db8::f816:3eff:fe7b:df4b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe7b:df4b/64', 'neutron:device_id': '6bf91edb-b66a-458b-b8bd-e8520cdc6349', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=461a5a8c-725e-4fde-b0f2-146218d7a416, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=32a4e0d7-f322-4557-8734-4d3be1786b85) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:12:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:05Z|01163|binding|INFO|Setting lport 32a4e0d7-f322-4557-8734-4d3be1786b85 ovn-installed in OVS
Jan 27 09:12:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:05Z|01164|binding|INFO|Setting lport 32a4e0d7-f322-4557-8734-4d3be1786b85 up in Southbound
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.131 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.133 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:05 np0005597378 systemd-machined[207425]: New machine qemu-145-instance-00000072.
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.145 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7bf3c24-b8bc-4b35-b249-918063f1ae2d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:12:05 np0005597378 systemd[1]: Started Virtual Machine qemu-145-instance-00000072.
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.182 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3bc898-7058-4611-8f40-04d962b9ec83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:12:05 np0005597378 NetworkManager[48904]: <info>  [1769523125.1894] manager: (tap9964511f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/475)
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.188 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4639b229-fe4d-4480-82e1-2a50729df4ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.226 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f9dce774-709f-45ef-9e2b-155dd9d8e5b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.231 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8fc08e-10d1-4b71-a3ff-ab793c1dc117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:12:05 np0005597378 NetworkManager[48904]: <info>  [1769523125.2576] device (tap9964511f-10): carrier: link connected
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.265 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[321e0bea-bbfa-4ce5-8a6f-71359b44039b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.282 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b727826c-3d56-4522-b738-4ae046494f8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9964511f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:ae:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576488, 'reachable_time': 35427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342039, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.299 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5026ee0e-7cf3-453f-9f70-295779b4489f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:ae0a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576488, 'tstamp': 576488}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342040, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[18a2d395-1d5b-4e78-8f75-43d1759ccedb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9964511f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:ae:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576488, 'reachable_time': 35427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342041, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.359 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e02d6bac-c9b7-477b-921e-a07a156c1eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.433 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9091792a-d1c2-48b2-8ae3-3a567372d825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.440 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9964511f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.440 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.441 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9964511f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:12:05 np0005597378 NetworkManager[48904]: <info>  [1769523125.4448] manager: (tap9964511f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/476)
Jan 27 09:12:05 np0005597378 kernel: tap9964511f-10: entered promiscuous mode
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.449 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.449 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9964511f-10, col_values=(('external_ids', {'iface-id': '139ea0ba-f559-4c32-9b23-bc114f6fe7b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:12:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:05Z|01165|binding|INFO|Releasing lport 139ea0ba-f559-4c32-9b23-bc114f6fe7b6 from this chassis (sb_readonly=0)
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.454 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9964511f-1456-4111-a888-96329ab42c59.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9964511f-1456-4111-a888-96329ab42c59.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.456 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1e4199f5-41b4-4666-95f9-ae5a87035df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.457 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-9964511f-1456-4111-a888-96329ab42c59
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/9964511f-1456-4111-a888-96329ab42c59.pid.haproxy
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 9964511f-1456-4111-a888-96329ab42c59
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 09:12:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:05.458 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'env', 'PROCESS_TAG=haproxy-9964511f-1456-4111-a888-96329ab42c59', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9964511f-1456-4111-a888-96329ab42c59.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:05 np0005597378 podman[342056]: 2026-01-27 14:12:05.471934494 +0000 UTC m=+0.064331521 container create 11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:12:05 np0005597378 systemd[1]: Started libpod-conmon-11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538.scope.
Jan 27 09:12:05 np0005597378 podman[342056]: 2026-01-27 14:12:05.436735678 +0000 UTC m=+0.029132735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:12:05 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.563 238945 DEBUG nova.compute.manager [req-6ec9f825-addc-4969-8de4-17bf747236b2 req-68cd38d8-b9db-487c-834b-d0c85181c893 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.564 238945 DEBUG oslo_concurrency.lockutils [req-6ec9f825-addc-4969-8de4-17bf747236b2 req-68cd38d8-b9db-487c-834b-d0c85181c893 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.564 238945 DEBUG oslo_concurrency.lockutils [req-6ec9f825-addc-4969-8de4-17bf747236b2 req-68cd38d8-b9db-487c-834b-d0c85181c893 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.564 238945 DEBUG oslo_concurrency.lockutils [req-6ec9f825-addc-4969-8de4-17bf747236b2 req-68cd38d8-b9db-487c-834b-d0c85181c893 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.565 238945 DEBUG nova.compute.manager [req-6ec9f825-addc-4969-8de4-17bf747236b2 req-68cd38d8-b9db-487c-834b-d0c85181c893 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Processing event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 09:12:05 np0005597378 podman[342056]: 2026-01-27 14:12:05.568595112 +0000 UTC m=+0.160992169 container init 11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:12:05 np0005597378 podman[342056]: 2026-01-27 14:12:05.577116711 +0000 UTC m=+0.169513738 container start 11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 09:12:05 np0005597378 systemd[1]: libpod-11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538.scope: Deactivated successfully.
Jan 27 09:12:05 np0005597378 podman[342056]: 2026-01-27 14:12:05.583324528 +0000 UTC m=+0.175721595 container attach 11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:12:05 np0005597378 quirky_solomon[342118]: 167 167
Jan 27 09:12:05 np0005597378 conmon[342118]: conmon 11b4344f175453bf4543 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538.scope/container/memory.events
Jan 27 09:12:05 np0005597378 podman[342056]: 2026-01-27 14:12:05.585396544 +0000 UTC m=+0.177793571 container died 11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:12:05 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9fd37b4a8d2e0aed875e4e83a7c06e5ad93c735ab568d7ff9d70b92f69e2155a-merged.mount: Deactivated successfully.
Jan 27 09:12:05 np0005597378 podman[342056]: 2026-01-27 14:12:05.624602638 +0000 UTC m=+0.216999665 container remove 11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.639 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523125.6385322, 6bf91edb-b66a-458b-b8bd-e8520cdc6349 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:12:05 np0005597378 systemd[1]: libpod-conmon-11b4344f175453bf4543f7d0d79942f1fc37894c855761e0579d0d9f110f6538.scope: Deactivated successfully.
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.639 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] VM Started (Lifecycle Event)#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.664 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.668 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523125.6386728, 6bf91edb-b66a-458b-b8bd-e8520cdc6349 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.668 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.689 238945 DEBUG nova.compute.manager [req-bbda111f-91a1-413b-9f74-8e21971c7df9 req-96391dca-b27c-4696-9111-33e078567cd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.689 238945 DEBUG oslo_concurrency.lockutils [req-bbda111f-91a1-413b-9f74-8e21971c7df9 req-96391dca-b27c-4696-9111-33e078567cd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.689 238945 DEBUG oslo_concurrency.lockutils [req-bbda111f-91a1-413b-9f74-8e21971c7df9 req-96391dca-b27c-4696-9111-33e078567cd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.689 238945 DEBUG oslo_concurrency.lockutils [req-bbda111f-91a1-413b-9f74-8e21971c7df9 req-96391dca-b27c-4696-9111-33e078567cd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.689 238945 DEBUG nova.compute.manager [req-bbda111f-91a1-413b-9f74-8e21971c7df9 req-96391dca-b27c-4696-9111-33e078567cd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Processing event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.690 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.691 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.695 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.697 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523125.6945367, 6bf91edb-b66a-458b-b8bd-e8520cdc6349 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.697 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.701 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Instance spawned successfully.#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.701 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.721 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.741 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.749 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.750 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.751 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.751 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.752 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.752 238945 DEBUG nova.virt.libvirt.driver [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.779 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:12:05 np0005597378 podman[342146]: 2026-01-27 14:12:05.819698873 +0000 UTC m=+0.050009826 container create 8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.835 238945 INFO nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Took 9.04 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.835 238945 DEBUG nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:12:05 np0005597378 systemd[1]: Started libpod-conmon-8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4.scope.
Jan 27 09:12:05 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:12:05 np0005597378 podman[342146]: 2026-01-27 14:12:05.791797703 +0000 UTC m=+0.022108676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:12:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cef987827fe3c41093c6885d5d7cdd1be519e552a2b07417b0e3070d25acd6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cef987827fe3c41093c6885d5d7cdd1be519e552a2b07417b0e3070d25acd6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cef987827fe3c41093c6885d5d7cdd1be519e552a2b07417b0e3070d25acd6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cef987827fe3c41093c6885d5d7cdd1be519e552a2b07417b0e3070d25acd6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:05 np0005597378 podman[342146]: 2026-01-27 14:12:05.907900014 +0000 UTC m=+0.138211017 container init 8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.910 238945 INFO nova.compute.manager [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Took 10.09 seconds to build instance.#033[00m
Jan 27 09:12:05 np0005597378 podman[342179]: 2026-01-27 14:12:05.916825994 +0000 UTC m=+0.071631027 container create 382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:05 np0005597378 podman[342146]: 2026-01-27 14:12:05.92338315 +0000 UTC m=+0.153694143 container start 8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.930635) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523125930662, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 567, "num_deletes": 255, "total_data_size": 514482, "memory_usage": 525192, "flush_reason": "Manual Compaction"}
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Jan 27 09:12:05 np0005597378 podman[342146]: 2026-01-27 14:12:05.934478308 +0000 UTC m=+0.164789291 container attach 8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523125935162, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 508709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42460, "largest_seqno": 43026, "table_properties": {"data_size": 505528, "index_size": 1089, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7363, "raw_average_key_size": 18, "raw_value_size": 499079, "raw_average_value_size": 1276, "num_data_blocks": 48, "num_entries": 391, "num_filter_entries": 391, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523101, "oldest_key_time": 1769523101, "file_creation_time": 1769523125, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 4561 microseconds, and 2051 cpu microseconds.
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.935198) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 508709 bytes OK
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.935214) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.936882) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.936896) EVENT_LOG_v1 {"time_micros": 1769523125936891, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.936913) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 511230, prev total WAL file size 511230, number of live WAL files 2.
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.937416) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353130' seq:72057594037927935, type:22 .. '6C6F676D0031373631' seq:0, type:0; will stop at (end)
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(496KB)], [95(10MB)]
Jan 27 09:12:05 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523125937453, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 11065188, "oldest_snapshot_seqno": -1}
Jan 27 09:12:05 np0005597378 nova_compute[238941]: 2026-01-27 14:12:05.937 238945 DEBUG oslo_concurrency.lockutils [None req-3d14f530-3f82-440d-bf06-683e0ef1f67f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:05 np0005597378 systemd[1]: Started libpod-conmon-382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795.scope.
Jan 27 09:12:05 np0005597378 podman[342179]: 2026-01-27 14:12:05.883357755 +0000 UTC m=+0.038162878 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:12:05 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:12:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567bd9b41b6e82e882facce6f36e0d400f760c906e7d2a72376c1afd3c1edeca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6548 keys, 10946339 bytes, temperature: kUnknown
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523126009978, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 10946339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10899280, "index_size": 29603, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 169776, "raw_average_key_size": 25, "raw_value_size": 10778876, "raw_average_value_size": 1646, "num_data_blocks": 1167, "num_entries": 6548, "num_filter_entries": 6548, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523125, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.010490) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10946339 bytes
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.036228) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.9 rd, 150.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.1 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(43.3) write-amplify(21.5) OK, records in: 7072, records dropped: 524 output_compression: NoCompression
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.036261) EVENT_LOG_v1 {"time_micros": 1769523126036248, "job": 56, "event": "compaction_finished", "compaction_time_micros": 72857, "compaction_time_cpu_micros": 27662, "output_level": 6, "num_output_files": 1, "total_output_size": 10946339, "num_input_records": 7072, "num_output_records": 6548, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523126036535, "job": 56, "event": "table_file_deletion", "file_number": 97}
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523126038089, "job": 56, "event": "table_file_deletion", "file_number": 95}
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:05.937283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.038265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.038271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.038273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.038276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:06.038278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:12:06 np0005597378 podman[342179]: 2026-01-27 14:12:06.038736732 +0000 UTC m=+0.193541785 container init 382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:12:06 np0005597378 podman[342179]: 2026-01-27 14:12:06.046424508 +0000 UTC m=+0.201229541 container start 382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:12:06 np0005597378 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [NOTICE]   (342203) : New worker (342205) forked
Jan 27 09:12:06 np0005597378 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [NOTICE]   (342203) : Loading success.
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.102 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 32a4e0d7-f322-4557-8734-4d3be1786b85 in datapath b8e1b054-5200-4e22-9702-c3f6d1f1a12e unbound from our chassis#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.104 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8e1b054-5200-4e22-9702-c3f6d1f1a12e#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[897f3867-df96-43eb-84d0-c9475871def1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.116 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8e1b054-51 in ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.119 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8e1b054-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.119 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01aef606-fe37-49ca-a15d-cac3e9ca35f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.120 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3808dea0-46c0-4748-882f-7efb343bcef7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.137 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[49288f39-d793-45b7-8d0f-d66340f29242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.160 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[91b62cf0-91f1-4465-a6cb-171ce2e3bd5d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.190 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2be26dad-65a9-472f-ae7c-131467ed61e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 NetworkManager[48904]: <info>  [1769523126.2016] manager: (tapb8e1b054-50): new Veth device (/org/freedesktop/NetworkManager/Devices/477)
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.200 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[add43948-2fb6-456d-89ff-48e68a4f08a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.239 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[165d7adb-17c6-413c-9746-bf1b2c7840ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.242 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[575c742f-eccf-45e9-b7a1-bae7fd87d516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 NetworkManager[48904]: <info>  [1769523126.2652] device (tapb8e1b054-50): carrier: link connected
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]: {
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:    "0": [
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:        {
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "devices": [
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "/dev/loop3"
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            ],
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_name": "ceph_lv0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_size": "21470642176",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "name": "ceph_lv0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "tags": {
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.cluster_name": "ceph",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.crush_device_class": "",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.encrypted": "0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.objectstore": "bluestore",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.osd_id": "0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.type": "block",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.vdo": "0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.with_tpm": "0"
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            },
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "type": "block",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "vg_name": "ceph_vg0"
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:        }
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:    ],
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:    "1": [
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:        {
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "devices": [
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "/dev/loop4"
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            ],
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_name": "ceph_lv1",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_size": "21470642176",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "name": "ceph_lv1",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "tags": {
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.cluster_name": "ceph",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.crush_device_class": "",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.encrypted": "0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.objectstore": "bluestore",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.osd_id": "1",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.type": "block",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.vdo": "0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.with_tpm": "0"
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            },
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "type": "block",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "vg_name": "ceph_vg1"
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:        }
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:    ],
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:    "2": [
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:        {
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "devices": [
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "/dev/loop5"
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            ],
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_name": "ceph_lv2",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_size": "21470642176",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "name": "ceph_lv2",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "tags": {
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.cluster_name": "ceph",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.crush_device_class": "",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.encrypted": "0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.objectstore": "bluestore",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.osd_id": "2",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.type": "block",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.vdo": "0",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:                "ceph.with_tpm": "0"
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            },
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "type": "block",
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:            "vg_name": "ceph_vg2"
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:        }
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]:    ]
Jan 27 09:12:06 np0005597378 eloquent_hellman[342192]: }
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.269 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[71f0ce1d-c211-4f12-82a4-b11351d78e28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.290 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4edfd05e-cdac-449f-9781-3b556435c25b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8e1b054-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:96:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576588, 'reachable_time': 39720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342230, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 systemd[1]: libpod-8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4.scope: Deactivated successfully.
Jan 27 09:12:06 np0005597378 podman[342146]: 2026-01-27 14:12:06.298793873 +0000 UTC m=+0.529104846 container died 8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.307 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7e838e76-ff46-456f-9c78-5cf7ea1ab9ff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:96ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576588, 'tstamp': 576588}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342231, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0cef987827fe3c41093c6885d5d7cdd1be519e552a2b07417b0e3070d25acd6b-merged.mount: Deactivated successfully.
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.328 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a65ed08e-f815-4234-a9a1-d6255ea8bd0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8e1b054-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:96:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576588, 'reachable_time': 39720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342234, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Jan 27 09:12:06 np0005597378 podman[342146]: 2026-01-27 14:12:06.348801877 +0000 UTC m=+0.579112820 container remove 8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Jan 27 09:12:06 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Jan 27 09:12:06 np0005597378 systemd[1]: libpod-conmon-8a06340b7f88e468f4f7c68fd062a634c21656554bfef5d6a52d83b064fffca4.scope: Deactivated successfully.
Jan 27 09:12:06 np0005597378 nova_compute[238941]: 2026-01-27 14:12:06.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.381 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f8002ae8-322f-4e64-8fb5-ae16c5ef33a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.424 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6d5ddf71-7281-4fd7-8875-018823dd9414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.426 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8e1b054-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.426 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.426 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8e1b054-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:06 np0005597378 NetworkManager[48904]: <info>  [1769523126.4290] manager: (tapb8e1b054-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Jan 27 09:12:06 np0005597378 kernel: tapb8e1b054-50: entered promiscuous mode
Jan 27 09:12:06 np0005597378 nova_compute[238941]: 2026-01-27 14:12:06.431 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.439 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8e1b054-50, col_values=(('external_ids', {'iface-id': 'cec58910-221b-4aa5-9532-67a30f83e8bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:06 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:06Z|01166|binding|INFO|Releasing lport cec58910-221b-4aa5-9532-67a30f83e8bb from this chassis (sb_readonly=0)
Jan 27 09:12:06 np0005597378 nova_compute[238941]: 2026-01-27 14:12:06.440 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.446 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8e1b054-5200-4e22-9702-c3f6d1f1a12e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8e1b054-5200-4e22-9702-c3f6d1f1a12e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.447 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[393a6287-8c58-48fd-98b9-095a9986349d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.448 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-b8e1b054-5200-4e22-9702-c3f6d1f1a12e
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/b8e1b054-5200-4e22-9702-c3f6d1f1a12e.pid.haproxy
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID b8e1b054-5200-4e22-9702-c3f6d1f1a12e
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:12:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:06.449 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'env', 'PROCESS_TAG=haproxy-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8e1b054-5200-4e22-9702-c3f6d1f1a12e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:12:06 np0005597378 nova_compute[238941]: 2026-01-27 14:12:06.460 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:06 np0005597378 podman[342333]: 2026-01-27 14:12:06.883321288 +0000 UTC m=+0.097630976 container create b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:12:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 50 KiB/s wr, 270 op/s
Jan 27 09:12:06 np0005597378 podman[342333]: 2026-01-27 14:12:06.811309521 +0000 UTC m=+0.025619249 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:12:06 np0005597378 podman[342344]: 2026-01-27 14:12:06.91579676 +0000 UTC m=+0.108009564 container create fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_taussig, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 09:12:06 np0005597378 systemd[1]: Started libpod-conmon-b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27.scope.
Jan 27 09:12:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:12:06 np0005597378 systemd[1]: Started libpod-conmon-fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f.scope.
Jan 27 09:12:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ae59e8e76c0f59d980100fdf776d345c70ceb1140791aaf6acb297929ec17c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:06 np0005597378 podman[342333]: 2026-01-27 14:12:06.959192167 +0000 UTC m=+0.173501875 container init b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 09:12:06 np0005597378 podman[342344]: 2026-01-27 14:12:06.868795867 +0000 UTC m=+0.061008691 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:12:06 np0005597378 podman[342333]: 2026-01-27 14:12:06.969417862 +0000 UTC m=+0.183727550 container start b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:12:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:12:06 np0005597378 podman[342344]: 2026-01-27 14:12:06.990289213 +0000 UTC m=+0.182502037 container init fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 09:12:06 np0005597378 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [NOTICE]   (342376) : New worker (342379) forked
Jan 27 09:12:06 np0005597378 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [NOTICE]   (342376) : Loading success.
Jan 27 09:12:06 np0005597378 podman[342344]: 2026-01-27 14:12:06.999404198 +0000 UTC m=+0.191617002 container start fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_taussig, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:12:07 np0005597378 podman[342344]: 2026-01-27 14:12:07.002951553 +0000 UTC m=+0.195164377 container attach fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_taussig, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:12:07 np0005597378 ecstatic_taussig[342372]: 167 167
Jan 27 09:12:07 np0005597378 systemd[1]: libpod-fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f.scope: Deactivated successfully.
Jan 27 09:12:07 np0005597378 conmon[342372]: conmon fd931848ef976e42d989 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f.scope/container/memory.events
Jan 27 09:12:07 np0005597378 podman[342344]: 2026-01-27 14:12:07.008946344 +0000 UTC m=+0.201159148 container died fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_taussig, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:12:07 np0005597378 systemd[1]: var-lib-containers-storage-overlay-79f47425da38e8e671f7bfad555b052462ea6230b8efdab48fcd8ea1588343d0-merged.mount: Deactivated successfully.
Jan 27 09:12:07 np0005597378 podman[342344]: 2026-01-27 14:12:07.171469324 +0000 UTC m=+0.363682138 container remove fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:12:07 np0005597378 systemd[1]: libpod-conmon-fd931848ef976e42d98981ba4c5739992b1ebdaf950b8078e00590f57a67416f.scope: Deactivated successfully.
Jan 27 09:12:07 np0005597378 podman[342410]: 2026-01-27 14:12:07.35656797 +0000 UTC m=+0.047032905 container create eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 09:12:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Jan 27 09:12:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Jan 27 09:12:07 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Jan 27 09:12:07 np0005597378 systemd[1]: Started libpod-conmon-eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9.scope.
Jan 27 09:12:07 np0005597378 podman[342410]: 2026-01-27 14:12:07.332649787 +0000 UTC m=+0.023114752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:12:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:12:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520ef2547917bfeef77ab684ff7657355440787568d0c1c4c55713e2b332758d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520ef2547917bfeef77ab684ff7657355440787568d0c1c4c55713e2b332758d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520ef2547917bfeef77ab684ff7657355440787568d0c1c4c55713e2b332758d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520ef2547917bfeef77ab684ff7657355440787568d0c1c4c55713e2b332758d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:07 np0005597378 podman[342410]: 2026-01-27 14:12:07.451074011 +0000 UTC m=+0.141538996 container init eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 09:12:07 np0005597378 podman[342410]: 2026-01-27 14:12:07.459655671 +0000 UTC m=+0.150120626 container start eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:12:07 np0005597378 podman[342410]: 2026-01-27 14:12:07.463384462 +0000 UTC m=+0.153849437 container attach eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 09:12:07 np0005597378 nova_compute[238941]: 2026-01-27 14:12:07.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:07 np0005597378 nova_compute[238941]: 2026-01-27 14:12:07.663 238945 DEBUG nova.compute.manager [req-4025fcd8-c1b9-4cac-9bcb-6909a05f335c req-f8fcac36-0af0-4da7-8c6e-7de41f188a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:07 np0005597378 nova_compute[238941]: 2026-01-27 14:12:07.664 238945 DEBUG oslo_concurrency.lockutils [req-4025fcd8-c1b9-4cac-9bcb-6909a05f335c req-f8fcac36-0af0-4da7-8c6e-7de41f188a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:07 np0005597378 nova_compute[238941]: 2026-01-27 14:12:07.665 238945 DEBUG oslo_concurrency.lockutils [req-4025fcd8-c1b9-4cac-9bcb-6909a05f335c req-f8fcac36-0af0-4da7-8c6e-7de41f188a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:07 np0005597378 nova_compute[238941]: 2026-01-27 14:12:07.665 238945 DEBUG oslo_concurrency.lockutils [req-4025fcd8-c1b9-4cac-9bcb-6909a05f335c req-f8fcac36-0af0-4da7-8c6e-7de41f188a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:07 np0005597378 nova_compute[238941]: 2026-01-27 14:12:07.665 238945 DEBUG nova.compute.manager [req-4025fcd8-c1b9-4cac-9bcb-6909a05f335c req-f8fcac36-0af0-4da7-8c6e-7de41f188a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] No waiting events found dispatching network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:12:07 np0005597378 nova_compute[238941]: 2026-01-27 14:12:07.666 238945 WARNING nova.compute.manager [req-4025fcd8-c1b9-4cac-9bcb-6909a05f335c req-f8fcac36-0af0-4da7-8c6e-7de41f188a1d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received unexpected event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b for instance with vm_state active and task_state None.#033[00m
Jan 27 09:12:07 np0005597378 nova_compute[238941]: 2026-01-27 14:12:07.792 238945 DEBUG nova.compute.manager [req-2f024651-6df9-43bb-8631-526ff1586a41 req-3c1931d5-4228-4fb2-bdc5-642d1d467c17 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:07 np0005597378 nova_compute[238941]: 2026-01-27 14:12:07.793 238945 DEBUG oslo_concurrency.lockutils [req-2f024651-6df9-43bb-8631-526ff1586a41 req-3c1931d5-4228-4fb2-bdc5-642d1d467c17 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:07 np0005597378 nova_compute[238941]: 2026-01-27 14:12:07.793 238945 DEBUG oslo_concurrency.lockutils [req-2f024651-6df9-43bb-8631-526ff1586a41 req-3c1931d5-4228-4fb2-bdc5-642d1d467c17 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:07 np0005597378 nova_compute[238941]: 2026-01-27 14:12:07.794 238945 DEBUG oslo_concurrency.lockutils [req-2f024651-6df9-43bb-8631-526ff1586a41 req-3c1931d5-4228-4fb2-bdc5-642d1d467c17 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:07 np0005597378 nova_compute[238941]: 2026-01-27 14:12:07.794 238945 DEBUG nova.compute.manager [req-2f024651-6df9-43bb-8631-526ff1586a41 req-3c1931d5-4228-4fb2-bdc5-642d1d467c17 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] No waiting events found dispatching network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:12:07 np0005597378 nova_compute[238941]: 2026-01-27 14:12:07.794 238945 WARNING nova.compute.manager [req-2f024651-6df9-43bb-8631-526ff1586a41 req-3c1931d5-4228-4fb2-bdc5-642d1d467c17 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received unexpected event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:12:08 np0005597378 lvm[342507]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:12:08 np0005597378 lvm[342505]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:12:08 np0005597378 lvm[342507]: VG ceph_vg1 finished
Jan 27 09:12:08 np0005597378 lvm[342505]: VG ceph_vg0 finished
Jan 27 09:12:08 np0005597378 lvm[342508]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:12:08 np0005597378 lvm[342508]: VG ceph_vg2 finished
Jan 27 09:12:08 np0005597378 elated_ramanujan[342427]: {}
Jan 27 09:12:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Jan 27 09:12:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Jan 27 09:12:08 np0005597378 nova_compute[238941]: 2026-01-27 14:12:08.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:12:08 np0005597378 nova_compute[238941]: 2026-01-27 14:12:08.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:12:08 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Jan 27 09:12:08 np0005597378 systemd[1]: libpod-eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9.scope: Deactivated successfully.
Jan 27 09:12:08 np0005597378 systemd[1]: libpod-eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9.scope: Consumed 1.429s CPU time.
Jan 27 09:12:08 np0005597378 podman[342410]: 2026-01-27 14:12:08.399419567 +0000 UTC m=+1.089884502 container died eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:12:08 np0005597378 nova_compute[238941]: 2026-01-27 14:12:08.421 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:12:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-520ef2547917bfeef77ab684ff7657355440787568d0c1c4c55713e2b332758d-merged.mount: Deactivated successfully.
Jan 27 09:12:08 np0005597378 podman[342410]: 2026-01-27 14:12:08.496312341 +0000 UTC m=+1.186777276 container remove eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 09:12:08 np0005597378 systemd[1]: libpod-conmon-eb44b8d9a7a7fb978c583b03856b23dd7877d2f42571a7a4a06dfeb67cdbe9a9.scope: Deactivated successfully.
Jan 27 09:12:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:12:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:12:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:12:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:12:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 169 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 55 KiB/s wr, 443 op/s
Jan 27 09:12:09 np0005597378 nova_compute[238941]: 2026-01-27 14:12:09.036 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:09 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:12:09 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:12:09 np0005597378 nova_compute[238941]: 2026-01-27 14:12:09.753 238945 DEBUG nova.compute.manager [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-changed-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:09 np0005597378 nova_compute[238941]: 2026-01-27 14:12:09.753 238945 DEBUG nova.compute.manager [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing instance network info cache due to event network-changed-d02567c1-b424-4fc8-bf9d-3d0c7279063b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:12:09 np0005597378 nova_compute[238941]: 2026-01-27 14:12:09.754 238945 DEBUG oslo_concurrency.lockutils [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:12:09 np0005597378 nova_compute[238941]: 2026-01-27 14:12:09.754 238945 DEBUG oslo_concurrency.lockutils [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:12:09 np0005597378 nova_compute[238941]: 2026-01-27 14:12:09.755 238945 DEBUG nova.network.neutron [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing network info cache for port d02567c1-b424-4fc8-bf9d-3d0c7279063b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:12:09 np0005597378 nova_compute[238941]: 2026-01-27 14:12:09.785 238945 INFO nova.compute.manager [None req-9432ab0d-4418-4641-9594-0f8bfde4572c a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Get console output#033[00m
Jan 27 09:12:09 np0005597378 nova_compute[238941]: 2026-01-27 14:12:09.794 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:12:10 np0005597378 nova_compute[238941]: 2026-01-27 14:12:10.862 238945 DEBUG nova.compute.manager [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-changed-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:10 np0005597378 nova_compute[238941]: 2026-01-27 14:12:10.863 238945 DEBUG nova.compute.manager [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Refreshing instance network info cache due to event network-changed-78c393a3-5ecf-49c2-9d5a-dab369d909b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:12:10 np0005597378 nova_compute[238941]: 2026-01-27 14:12:10.863 238945 DEBUG oslo_concurrency.lockutils [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:12:10 np0005597378 nova_compute[238941]: 2026-01-27 14:12:10.863 238945 DEBUG oslo_concurrency.lockutils [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:12:10 np0005597378 nova_compute[238941]: 2026-01-27 14:12:10.863 238945 DEBUG nova.network.neutron [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Refreshing network info cache for port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:12:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 169 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 64 KiB/s wr, 414 op/s
Jan 27 09:12:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Jan 27 09:12:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Jan 27 09:12:10 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.012 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.012 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.012 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.013 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.013 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.014 238945 INFO nova.compute.manager [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Terminating instance#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.014 238945 DEBUG nova.compute.manager [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:12:11 np0005597378 kernel: tap78c393a3-5e (unregistering): left promiscuous mode
Jan 27 09:12:11 np0005597378 NetworkManager[48904]: <info>  [1769523131.1788] device (tap78c393a3-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.185 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:11 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:11Z|01167|binding|INFO|Releasing lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 from this chassis (sb_readonly=0)
Jan 27 09:12:11 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:11Z|01168|binding|INFO|Setting lport 78c393a3-5ecf-49c2-9d5a-dab369d909b4 down in Southbound
Jan 27 09:12:11 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:11Z|01169|binding|INFO|Removing iface tap78c393a3-5e ovn-installed in OVS
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.219 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:11 np0005597378 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000071.scope: Deactivated successfully.
Jan 27 09:12:11 np0005597378 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000071.scope: Consumed 13.277s CPU time.
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.234 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:e9:b9 10.100.0.11'], port_security=['fa:16:3e:00:e9:b9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '46ce04c1-b6c3-42cb-99b4-546ad865b2f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08f92c92-fca4-41b9-a9a8-67625119a840', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9e07fcf9-373e-4573-bc84-da8b1454208f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f69261e-cc94-4cab-96cc-931010359962, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=78c393a3-5ecf-49c2-9d5a-dab369d909b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.237 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 78c393a3-5ecf-49c2-9d5a-dab369d909b4 in datapath 08f92c92-fca4-41b9-a9a8-67625119a840 unbound from our chassis#033[00m
Jan 27 09:12:11 np0005597378 systemd-machined[207425]: Machine qemu-144-instance-00000071 terminated.
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.239 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08f92c92-fca4-41b9-a9a8-67625119a840, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.240 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b4af3bf1-b5ef-4e6b-9a1a-a168d354668f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.241 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 namespace which is not needed anymore#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.278 238945 DEBUG nova.network.neutron [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updated VIF entry in instance network info cache for port d02567c1-b424-4fc8-bf9d-3d0c7279063b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.278 238945 DEBUG nova.network.neutron [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.349 238945 DEBUG oslo_concurrency.lockutils [req-43d0c6a1-4faa-43fa-9c7c-d52e3d7e79a2 req-8a65b8a8-8690-4ee1-acd7-0a3249d5c04c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:12:11 np0005597378 NetworkManager[48904]: <info>  [1769523131.4373] manager: (tap78c393a3-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/479)
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.441 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:11 np0005597378 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[341299]: [NOTICE]   (341312) : haproxy version is 2.8.14-c23fe91
Jan 27 09:12:11 np0005597378 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[341299]: [NOTICE]   (341312) : path to executable is /usr/sbin/haproxy
Jan 27 09:12:11 np0005597378 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[341299]: [ALERT]    (341312) : Current worker (341314) exited with code 143 (Terminated)
Jan 27 09:12:11 np0005597378 neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840[341299]: [WARNING]  (341312) : All workers exited. Exiting... (0)
Jan 27 09:12:11 np0005597378 systemd[1]: libpod-ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b.scope: Deactivated successfully.
Jan 27 09:12:11 np0005597378 podman[342570]: 2026-01-27 14:12:11.450217164 +0000 UTC m=+0.096754482 container died ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.457 238945 INFO nova.virt.libvirt.driver [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Instance destroyed successfully.#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.458 238945 DEBUG nova.objects.instance [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.473 238945 DEBUG nova.virt.libvirt.vif [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-338504836',display_name='tempest-TestNetworkAdvancedServerOps-server-338504836',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-338504836',id=113,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCZoEIVPZt+W5PTyiefF6CmQjyOcCka1+ETsRUdkkq8w28gqXVMa7/Mt6QcDsLQbaY22k6G/fXcPVKU22vIQ/xZ0qJM4npfGflv72d5x3TfjpLqEY8C7F6Om5C96GUAogQ==',key_name='tempest-TestNetworkAdvancedServerOps-721889522',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:11:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-8h702f02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:11:49Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=46ce04c1-b6c3-42cb-99b4-546ad865b2f5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.473 238945 DEBUG nova.network.os_vif_util [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.474 238945 DEBUG nova.network.os_vif_util [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.475 238945 DEBUG os_vif [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.481 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.482 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78c393a3-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.487 238945 INFO os_vif [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:e9:b9,bridge_name='br-int',has_traffic_filtering=True,id=78c393a3-5ecf-49c2-9d5a-dab369d909b4,network=Network(08f92c92-fca4-41b9-a9a8-67625119a840),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78c393a3-5e')#033[00m
Jan 27 09:12:11 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b-userdata-shm.mount: Deactivated successfully.
Jan 27 09:12:11 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5a0a5d618344a8029a59e31204634dd0ca8cfd361d7acccc0259ced11b0a665a-merged.mount: Deactivated successfully.
Jan 27 09:12:11 np0005597378 podman[342570]: 2026-01-27 14:12:11.505439969 +0000 UTC m=+0.151977297 container cleanup ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:12:11 np0005597378 systemd[1]: libpod-conmon-ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b.scope: Deactivated successfully.
Jan 27 09:12:11 np0005597378 podman[342623]: 2026-01-27 14:12:11.619721022 +0000 UTC m=+0.088536172 container remove ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.625 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[82093f3a-95f4-4bd0-966a-28be27331116]: (4, ('Tue Jan 27 02:12:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 (ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b)\nab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b\nTue Jan 27 02:12:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 (ab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b)\nab51d546e97c6b2574bd4b92be0759f9a13791e9010cc7c8273f362e1648891b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.626 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2f03ed-7370-4d13-8e26-fcf5a15f5073]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.627 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08f92c92-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.644 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:11 np0005597378 kernel: tap08f92c92-f0: left promiscuous mode
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.646 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.648 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58e60ca1-33c7-4460-a564-7db4849571c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.665 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3f07e2ea-4238-4fa6-8ab0-bc1e443c6916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.667 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2415ea1a-4c92-40e1-8597-e0804140a0ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.682 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[06f664af-2467-46c6-b1ab-6fdeab761c3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574903, 'reachable_time': 29297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342641, 'error': None, 'target': 'ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:11 np0005597378 systemd[1]: run-netns-ovnmeta\x2d08f92c92\x2dfca4\x2d41b9\x2da9a8\x2d67625119a840.mount: Deactivated successfully.
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.686 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08f92c92-fca4-41b9-a9a8-67625119a840 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:12:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:11.686 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[03d0061c-140b-42c8-8be2-1ad2f7089a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.827 238945 DEBUG nova.compute.manager [req-a8596991-bb03-4dff-aeb7-f8bb4fb413fc req-a32d9132-693b-44e8-90e8-6cb57b457706 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-unplugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.827 238945 DEBUG oslo_concurrency.lockutils [req-a8596991-bb03-4dff-aeb7-f8bb4fb413fc req-a32d9132-693b-44e8-90e8-6cb57b457706 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.827 238945 DEBUG oslo_concurrency.lockutils [req-a8596991-bb03-4dff-aeb7-f8bb4fb413fc req-a32d9132-693b-44e8-90e8-6cb57b457706 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.827 238945 DEBUG oslo_concurrency.lockutils [req-a8596991-bb03-4dff-aeb7-f8bb4fb413fc req-a32d9132-693b-44e8-90e8-6cb57b457706 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.828 238945 DEBUG nova.compute.manager [req-a8596991-bb03-4dff-aeb7-f8bb4fb413fc req-a32d9132-693b-44e8-90e8-6cb57b457706 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-unplugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:12:11 np0005597378 nova_compute[238941]: 2026-01-27 14:12:11.828 238945 DEBUG nova.compute.manager [req-a8596991-bb03-4dff-aeb7-f8bb4fb413fc req-a32d9132-693b-44e8-90e8-6cb57b457706 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-unplugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:12:12 np0005597378 nova_compute[238941]: 2026-01-27 14:12:12.278 238945 DEBUG nova.network.neutron [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updated VIF entry in instance network info cache for port 78c393a3-5ecf-49c2-9d5a-dab369d909b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:12:12 np0005597378 nova_compute[238941]: 2026-01-27 14:12:12.278 238945 DEBUG nova.network.neutron [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updating instance_info_cache with network_info: [{"id": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "address": "fa:16:3e:00:e9:b9", "network": {"id": "08f92c92-fca4-41b9-a9a8-67625119a840", "bridge": "br-int", "label": "tempest-network-smoke--1449968365", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78c393a3-5e", "ovs_interfaceid": "78c393a3-5ecf-49c2-9d5a-dab369d909b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:12:12 np0005597378 nova_compute[238941]: 2026-01-27 14:12:12.302 238945 DEBUG oslo_concurrency.lockutils [req-92517c53-a126-497c-975a-e8d5b3473d20 req-8a1bb51d-5653-424a-8262-ec7e789d99df 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-46ce04c1-b6c3-42cb-99b4-546ad865b2f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:12:12 np0005597378 nova_compute[238941]: 2026-01-27 14:12:12.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:12:12 np0005597378 nova_compute[238941]: 2026-01-27 14:12:12.517 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:12 np0005597378 nova_compute[238941]: 2026-01-27 14:12:12.542 238945 INFO nova.virt.libvirt.driver [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Deleting instance files /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5_del#033[00m
Jan 27 09:12:12 np0005597378 nova_compute[238941]: 2026-01-27 14:12:12.543 238945 INFO nova.virt.libvirt.driver [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Deletion of /var/lib/nova/instances/46ce04c1-b6c3-42cb-99b4-546ad865b2f5_del complete#033[00m
Jan 27 09:12:12 np0005597378 nova_compute[238941]: 2026-01-27 14:12:12.589 238945 INFO nova.compute.manager [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Took 1.57 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:12:12 np0005597378 nova_compute[238941]: 2026-01-27 14:12:12.590 238945 DEBUG oslo.service.loopingcall [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:12:12 np0005597378 nova_compute[238941]: 2026-01-27 14:12:12.591 238945 DEBUG nova.compute.manager [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:12:12 np0005597378 nova_compute[238941]: 2026-01-27 14:12:12.591 238945 DEBUG nova.network.neutron [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:12:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 169 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 55 KiB/s wr, 333 op/s
Jan 27 09:12:13 np0005597378 nova_compute[238941]: 2026-01-27 14:12:13.204 238945 DEBUG nova.network.neutron [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:12:13 np0005597378 nova_compute[238941]: 2026-01-27 14:12:13.221 238945 INFO nova.compute.manager [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Took 0.63 seconds to deallocate network for instance.#033[00m
Jan 27 09:12:13 np0005597378 nova_compute[238941]: 2026-01-27 14:12:13.270 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:13 np0005597378 nova_compute[238941]: 2026-01-27 14:12:13.271 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:13 np0005597378 nova_compute[238941]: 2026-01-27 14:12:13.346 238945 DEBUG oslo_concurrency.processutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:13 np0005597378 nova_compute[238941]: 2026-01-27 14:12:13.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:12:13 np0005597378 nova_compute[238941]: 2026-01-27 14:12:13.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:12:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:12:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3794183755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:12:13 np0005597378 nova_compute[238941]: 2026-01-27 14:12:13.890 238945 DEBUG oslo_concurrency.processutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:13 np0005597378 nova_compute[238941]: 2026-01-27 14:12:13.896 238945 DEBUG nova.compute.provider_tree [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:12:13 np0005597378 nova_compute[238941]: 2026-01-27 14:12:13.928 238945 DEBUG nova.scheduler.client.report [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:12:13 np0005597378 nova_compute[238941]: 2026-01-27 14:12:13.956 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:13 np0005597378 nova_compute[238941]: 2026-01-27 14:12:13.988 238945 INFO nova.scheduler.client.report [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Deleted allocations for instance 46ce04c1-b6c3-42cb-99b4-546ad865b2f5#033[00m
Jan 27 09:12:14 np0005597378 nova_compute[238941]: 2026-01-27 14:12:14.095 238945 DEBUG nova.compute.manager [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:14 np0005597378 nova_compute[238941]: 2026-01-27 14:12:14.096 238945 DEBUG oslo_concurrency.lockutils [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:14 np0005597378 nova_compute[238941]: 2026-01-27 14:12:14.096 238945 DEBUG oslo_concurrency.lockutils [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:14 np0005597378 nova_compute[238941]: 2026-01-27 14:12:14.096 238945 DEBUG oslo_concurrency.lockutils [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:14 np0005597378 nova_compute[238941]: 2026-01-27 14:12:14.096 238945 DEBUG nova.compute.manager [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] No waiting events found dispatching network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:12:14 np0005597378 nova_compute[238941]: 2026-01-27 14:12:14.097 238945 WARNING nova.compute.manager [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received unexpected event network-vif-plugged-78c393a3-5ecf-49c2-9d5a-dab369d909b4 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:12:14 np0005597378 nova_compute[238941]: 2026-01-27 14:12:14.097 238945 DEBUG nova.compute.manager [req-f0467027-e6da-4f89-9c73-c446fd22cd2e req-d06843ba-09a5-4c60-948a-00434367b8d6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Received event network-vif-deleted-78c393a3-5ecf-49c2-9d5a-dab369d909b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:14 np0005597378 nova_compute[238941]: 2026-01-27 14:12:14.268 238945 DEBUG oslo_concurrency.lockutils [None req-9196060c-5a40-42a1-94d1-7a66e1cf7890 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "46ce04c1-b6c3-42cb-99b4-546ad865b2f5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 113 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 46 KiB/s wr, 296 op/s
Jan 27 09:12:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Jan 27 09:12:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Jan 27 09:12:15 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Jan 27 09:12:16 np0005597378 nova_compute[238941]: 2026-01-27 14:12:16.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 88 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 878 KiB/s rd, 19 KiB/s wr, 112 op/s
Jan 27 09:12:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:12:17
Jan 27 09:12:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:12:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:12:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.meta', '.mgr']
Jan 27 09:12:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:12:17 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 27 09:12:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:17Z|01170|binding|INFO|Releasing lport 139ea0ba-f559-4c32-9b23-bc114f6fe7b6 from this chassis (sb_readonly=0)
Jan 27 09:12:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:17Z|01171|binding|INFO|Releasing lport cec58910-221b-4aa5-9532-67a30f83e8bb from this chassis (sb_readonly=0)
Jan 27 09:12:17 np0005597378 nova_compute[238941]: 2026-01-27 14:12:17.378 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:17 np0005597378 nova_compute[238941]: 2026-01-27 14:12:17.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:12:17 np0005597378 nova_compute[238941]: 2026-01-27 14:12:17.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:12:17 np0005597378 nova_compute[238941]: 2026-01-27 14:12:17.518 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:12:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:12:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:12:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:12:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:12:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:12:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:12:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:12:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:12:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:12:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:12:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:12:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:12:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:12:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:12:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:12:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 111 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 433 KiB/s rd, 2.6 MiB/s wr, 114 op/s
Jan 27 09:12:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:18Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:18:aa:48 10.100.0.13
Jan 27 09:12:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:18Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:18:aa:48 10.100.0.13
Jan 27 09:12:20 np0005597378 podman[342665]: 2026-01-27 14:12:20.719638654 +0000 UTC m=+0.060937990 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 09:12:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 117 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 2.5 MiB/s wr, 104 op/s
Jan 27 09:12:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:21 np0005597378 nova_compute[238941]: 2026-01-27 14:12:21.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:22 np0005597378 nova_compute[238941]: 2026-01-27 14:12:22.520 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:22 np0005597378 podman[342685]: 2026-01-27 14:12:22.749158467 +0000 UTC m=+0.085551101 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 27 09:12:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 117 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 2.5 MiB/s wr, 104 op/s
Jan 27 09:12:24 np0005597378 nova_compute[238941]: 2026-01-27 14:12:24.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 478 KiB/s rd, 2.6 MiB/s wr, 93 op/s
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.940921) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523145940958, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 475, "num_deletes": 252, "total_data_size": 424312, "memory_usage": 433760, "flush_reason": "Manual Compaction"}
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523145945572, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 366611, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43027, "largest_seqno": 43501, "table_properties": {"data_size": 363842, "index_size": 805, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7176, "raw_average_key_size": 20, "raw_value_size": 358227, "raw_average_value_size": 1041, "num_data_blocks": 35, "num_entries": 344, "num_filter_entries": 344, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523126, "oldest_key_time": 1769523126, "file_creation_time": 1769523145, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 4703 microseconds, and 2210 cpu microseconds.
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.945625) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 366611 bytes OK
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.945646) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.947218) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.947235) EVENT_LOG_v1 {"time_micros": 1769523145947229, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.947251) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 421467, prev total WAL file size 421467, number of live WAL files 2.
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.947780) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373537' seq:0, type:0; will stop at (end)
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(358KB)], [98(10MB)]
Jan 27 09:12:25 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523145947837, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 11312950, "oldest_snapshot_seqno": -1}
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6378 keys, 8009371 bytes, temperature: kUnknown
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523146001938, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 8009371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7967855, "index_size": 24446, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16005, "raw_key_size": 166472, "raw_average_key_size": 26, "raw_value_size": 7854794, "raw_average_value_size": 1231, "num_data_blocks": 952, "num_entries": 6378, "num_filter_entries": 6378, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523145, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.002238) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8009371 bytes
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.003876) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.6 rd, 147.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.4 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(52.7) write-amplify(21.8) OK, records in: 6892, records dropped: 514 output_compression: NoCompression
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.003895) EVENT_LOG_v1 {"time_micros": 1769523146003886, "job": 58, "event": "compaction_finished", "compaction_time_micros": 54220, "compaction_time_cpu_micros": 29404, "output_level": 6, "num_output_files": 1, "total_output_size": 8009371, "num_input_records": 6892, "num_output_records": 6378, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523146004106, "job": 58, "event": "table_file_deletion", "file_number": 100}
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523146005735, "job": 58, "event": "table_file_deletion", "file_number": 98}
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:25.947689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.005832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.005838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.005841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.005843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:12:26 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:12:26.005845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:12:26 np0005597378 nova_compute[238941]: 2026-01-27 14:12:26.456 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523131.454989, 46ce04c1-b6c3-42cb-99b4-546ad865b2f5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:12:26 np0005597378 nova_compute[238941]: 2026-01-27 14:12:26.456 238945 INFO nova.compute.manager [-] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:12:26 np0005597378 nova_compute[238941]: 2026-01-27 14:12:26.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:26 np0005597378 nova_compute[238941]: 2026-01-27 14:12:26.537 238945 DEBUG nova.compute.manager [None req-73cf836a-fc2f-457b-b167-a1753b7cc0f1 - - - - - -] [instance: 46ce04c1-b6c3-42cb-99b4-546ad865b2f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:12:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 426 KiB/s rd, 2.3 MiB/s wr, 71 op/s
Jan 27 09:12:27 np0005597378 nova_compute[238941]: 2026-01-27 14:12:27.523 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007688618035430421 of space, bias 1.0, pg target 0.23065854106291264 quantized to 32 (current 32)
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695975625143159 of space, bias 1.0, pg target 0.20087926875429477 quantized to 32 (current 32)
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0648493795661308e-06 of space, bias 4.0, pg target 0.001277819255479357 quantized to 16 (current 16)
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:12:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:12:28 np0005597378 nova_compute[238941]: 2026-01-27 14:12:28.776 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:28 np0005597378 nova_compute[238941]: 2026-01-27 14:12:28.776 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:28 np0005597378 nova_compute[238941]: 2026-01-27 14:12:28.796 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:12:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 09:12:28 np0005597378 nova_compute[238941]: 2026-01-27 14:12:28.938 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:28 np0005597378 nova_compute[238941]: 2026-01-27 14:12:28.939 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:28 np0005597378 nova_compute[238941]: 2026-01-27 14:12:28.969 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:12:28 np0005597378 nova_compute[238941]: 2026-01-27 14:12:28.969 238945 INFO nova.compute.claims [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:12:29 np0005597378 nova_compute[238941]: 2026-01-27 14:12:29.361 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:12:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635216444' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:12:29 np0005597378 nova_compute[238941]: 2026-01-27 14:12:29.930 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:29 np0005597378 nova_compute[238941]: 2026-01-27 14:12:29.939 238945 DEBUG nova.compute.provider_tree [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:12:29 np0005597378 nova_compute[238941]: 2026-01-27 14:12:29.960 238945 DEBUG nova.scheduler.client.report [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:12:29 np0005597378 nova_compute[238941]: 2026-01-27 14:12:29.988 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:29 np0005597378 nova_compute[238941]: 2026-01-27 14:12:29.989 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.032 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.032 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.055 238945 INFO nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.074 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.169 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.170 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.171 238945 INFO nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Creating image(s)#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.193 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.217 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.240 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.244 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.313 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.314 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.314 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.315 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.334 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.337 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.575 238945 DEBUG nova.policy [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.599 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.657 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.723 238945 DEBUG nova.objects.instance [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 8834b9bd-0324-4f5b-9b83-be852e0b96d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.740 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.740 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Ensure instance console log exists: /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.741 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.741 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:12:30 np0005597378 nova_compute[238941]: 2026-01-27 14:12:30.741 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:12:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 433 KiB/s wr, 21 op/s
Jan 27 09:12:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:31 np0005597378 nova_compute[238941]: 2026-01-27 14:12:31.491 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:31 np0005597378 nova_compute[238941]: 2026-01-27 14:12:31.621 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:32 np0005597378 nova_compute[238941]: 2026-01-27 14:12:32.526 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 42 KiB/s wr, 10 op/s
Jan 27 09:12:33 np0005597378 nova_compute[238941]: 2026-01-27 14:12:33.025 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Successfully created port: 50c43789-df58-4796-81f2-c398dee6dabe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 09:12:34 np0005597378 nova_compute[238941]: 2026-01-27 14:12:34.089 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Successfully created port: 8c19198d-9ee1-4b83-9bd2-71b418462578 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 09:12:34 np0005597378 nova_compute[238941]: 2026-01-27 14:12:34.345 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:12:34 np0005597378 nova_compute[238941]: 2026-01-27 14:12:34.346 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:12:34 np0005597378 nova_compute[238941]: 2026-01-27 14:12:34.362 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 09:12:34 np0005597378 nova_compute[238941]: 2026-01-27 14:12:34.446 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:12:34 np0005597378 nova_compute[238941]: 2026-01-27 14:12:34.446 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:12:34 np0005597378 nova_compute[238941]: 2026-01-27 14:12:34.453 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 09:12:34 np0005597378 nova_compute[238941]: 2026-01-27 14:12:34.453 238945 INFO nova.compute.claims [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Claim successful on node compute-0.ctlplane.example.com
Jan 27 09:12:34 np0005597378 nova_compute[238941]: 2026-01-27 14:12:34.587 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:12:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 144 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.130 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Successfully updated port: 50c43789-df58-4796-81f2-c398dee6dabe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 09:12:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:12:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2801979918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.183 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.190 238945 DEBUG nova.compute.provider_tree [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.213 238945 DEBUG nova.scheduler.client.report [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.234 238945 DEBUG nova.compute.manager [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-changed-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.235 238945 DEBUG nova.compute.manager [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing instance network info cache due to event network-changed-50c43789-df58-4796-81f2-c398dee6dabe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.235 238945 DEBUG oslo_concurrency.lockutils [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.235 238945 DEBUG oslo_concurrency.lockutils [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.235 238945 DEBUG nova.network.neutron [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing network info cache for port 50c43789-df58-4796-81f2-c398dee6dabe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.254 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.255 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.321 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.321 238945 DEBUG nova.network.neutron [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.360 238945 INFO nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.392 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.440 238945 DEBUG nova.network.neutron [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.513 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.515 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.515 238945 INFO nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Creating image(s)
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.539 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.558 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.580 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.583 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.626 238945 DEBUG nova.policy [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a87606137cd4440ab2ffebe68b325a85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.662 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.663 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.663 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.663 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.688 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.691 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e47cd4e5-669d-4001-af0c-57b561828b60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.911 238945 DEBUG nova.network.neutron [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:12:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.947 238945 DEBUG oslo_concurrency.lockutils [req-903fc2a1-7ffa-4110-b50a-b30782b8f99c req-9398d564-ae90-44ff-9196-b523c842354c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:12:35 np0005597378 nova_compute[238941]: 2026-01-27 14:12:35.952 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e47cd4e5-669d-4001-af0c-57b561828b60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.007 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] resizing rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.087 238945 DEBUG nova.objects.instance [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'migration_context' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.213 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.214 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Ensure instance console log exists: /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.214 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.214 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.220 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.365 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Successfully updated port: 8c19198d-9ee1-4b83-9bd2-71b418462578 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.393 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.393 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.393 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.593 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 09:12:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:36.608 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.608 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:36.609 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:12:36 np0005597378 nova_compute[238941]: 2026-01-27 14:12:36.726 238945 DEBUG nova.network.neutron [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Successfully created port: 45efb061-6afd-4021-a345-4aa248d4409b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 09:12:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:12:37 np0005597378 nova_compute[238941]: 2026-01-27 14:12:37.385 238945 DEBUG nova.compute.manager [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-changed-8c19198d-9ee1-4b83-9bd2-71b418462578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:12:37 np0005597378 nova_compute[238941]: 2026-01-27 14:12:37.385 238945 DEBUG nova.compute.manager [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing instance network info cache due to event network-changed-8c19198d-9ee1-4b83-9bd2-71b418462578. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:12:37 np0005597378 nova_compute[238941]: 2026-01-27 14:12:37.386 238945 DEBUG oslo_concurrency.lockutils [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:12:37 np0005597378 nova_compute[238941]: 2026-01-27 14:12:37.527 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:37 np0005597378 nova_compute[238941]: 2026-01-27 14:12:37.551 238945 DEBUG nova.network.neutron [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Successfully updated port: 45efb061-6afd-4021-a345-4aa248d4409b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 09:12:37 np0005597378 nova_compute[238941]: 2026-01-27 14:12:37.566 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:12:37 np0005597378 nova_compute[238941]: 2026-01-27 14:12:37.567 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:12:37 np0005597378 nova_compute[238941]: 2026-01-27 14:12:37.567 238945 DEBUG nova.network.neutron [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:12:37 np0005597378 nova_compute[238941]: 2026-01-27 14:12:37.656 238945 DEBUG nova.compute.manager [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-changed-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:37 np0005597378 nova_compute[238941]: 2026-01-27 14:12:37.656 238945 DEBUG nova.compute.manager [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Refreshing instance network info cache due to event network-changed-45efb061-6afd-4021-a345-4aa248d4409b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:12:37 np0005597378 nova_compute[238941]: 2026-01-27 14:12:37.656 238945 DEBUG oslo_concurrency.lockutils [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:12:37 np0005597378 nova_compute[238941]: 2026-01-27 14:12:37.716 238945 DEBUG nova.network.neutron [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:12:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 189 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 MiB/s wr, 41 op/s
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.913 238945 DEBUG nova.network.neutron [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [{"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.942 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.942 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Instance network_info: |[{"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.943 238945 DEBUG oslo_concurrency.lockutils [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.943 238945 DEBUG nova.network.neutron [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing network info cache for port 8c19198d-9ee1-4b83-9bd2-71b418462578 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.947 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Start _get_guest_xml network_info=[{"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.952 238945 WARNING nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.958 238945 DEBUG nova.virt.libvirt.host [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.959 238945 DEBUG nova.virt.libvirt.host [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.961 238945 DEBUG nova.virt.libvirt.host [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.962 238945 DEBUG nova.virt.libvirt.host [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.962 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.962 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.963 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.963 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.963 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.964 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.964 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.964 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.964 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.965 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.965 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.965 238945 DEBUG nova.virt.hardware [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:12:38 np0005597378 nova_compute[238941]: 2026-01-27 14:12:38.968 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:12:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964528850' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.529 238945 DEBUG nova.network.neutron [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updating instance_info_cache with network_info: [{"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.538 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.558 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.561 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.595 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.596 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance network_info: |[{"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.597 238945 DEBUG oslo_concurrency.lockutils [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.597 238945 DEBUG nova.network.neutron [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Refreshing network info cache for port 45efb061-6afd-4021-a345-4aa248d4409b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.600 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Start _get_guest_xml network_info=[{"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.603 238945 WARNING nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.608 238945 DEBUG nova.virt.libvirt.host [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.609 238945 DEBUG nova.virt.libvirt.host [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.611 238945 DEBUG nova.virt.libvirt.host [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.612 238945 DEBUG nova.virt.libvirt.host [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.612 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.612 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.613 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.613 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.613 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.613 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.614 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.614 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.614 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.614 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.614 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.615 238945 DEBUG nova.virt.hardware [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:12:39 np0005597378 nova_compute[238941]: 2026-01-27 14:12:39.618 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:12:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3803415535' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.141 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:12:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2275985298' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.175 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.179 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.209 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.211 238945 DEBUG nova.virt.libvirt.vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-445190996',display_name='tempest-TestGettingAddress-server-445190996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-445190996',id=115,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-77eccgfg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:12:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=8834b9bd-0324-4f5b-9b83-be852e0b96d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.211 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.212 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.213 238945 DEBUG nova.virt.libvirt.vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-445190996',display_name='tempest-TestGettingAddress-server-445190996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-445190996',id=115,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-77eccgfg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:12:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=8834b9bd-0324-4f5b-9b83-be852e0b96d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.213 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.214 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.215 238945 DEBUG nova.objects.instance [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8834b9bd-0324-4f5b-9b83-be852e0b96d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.246 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <uuid>8834b9bd-0324-4f5b-9b83-be852e0b96d2</uuid>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <name>instance-00000073</name>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-445190996</nova:name>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:12:38</nova:creationTime>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:port uuid="50c43789-df58-4796-81f2-c398dee6dabe">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:port uuid="8c19198d-9ee1-4b83-9bd2-71b418462578">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feea:ac6b" ipVersion="6"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <entry name="serial">8834b9bd-0324-4f5b-9b83-be852e0b96d2</entry>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <entry name="uuid">8834b9bd-0324-4f5b-9b83-be852e0b96d2</entry>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk.config">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:19:3a:57"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <target dev="tap50c43789-df"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:ea:ac:6b"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <target dev="tap8c19198d-9e"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/console.log" append="off"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:12:40 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:12:40 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.247 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Preparing to wait for external event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.248 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.248 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.248 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.248 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Preparing to wait for external event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.249 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.249 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.249 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.250 238945 DEBUG nova.virt.libvirt.vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-445190996',display_name='tempest-TestGettingAddress-server-445190996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-445190996',id=115,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-77eccgfg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:12:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=8834b9bd-0324-4f5b-9b83-be852e0b96d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.250 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.251 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.251 238945 DEBUG os_vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.252 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.252 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.252 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.255 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.255 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50c43789-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.256 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50c43789-df, col_values=(('external_ids', {'iface-id': '50c43789-df58-4796-81f2-c398dee6dabe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:3a:57', 'vm-uuid': '8834b9bd-0324-4f5b-9b83-be852e0b96d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.258 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:40 np0005597378 NetworkManager[48904]: <info>  [1769523160.2588] manager: (tap50c43789-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.264 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.265 238945 INFO os_vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df')#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.266 238945 DEBUG nova.virt.libvirt.vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-445190996',display_name='tempest-TestGettingAddress-server-445190996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-445190996',id=115,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-77eccgfg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:12:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=8834b9bd-0324-4f5b-9b83-be852e0b96d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.266 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.268 238945 DEBUG nova.network.os_vif_util [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.268 238945 DEBUG os_vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.269 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.269 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.270 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.273 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c19198d-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.273 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8c19198d-9e, col_values=(('external_ids', {'iface-id': '8c19198d-9ee1-4b83-9bd2-71b418462578', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:ac:6b', 'vm-uuid': '8834b9bd-0324-4f5b-9b83-be852e0b96d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.274 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:40 np0005597378 NetworkManager[48904]: <info>  [1769523160.2755] manager: (tap8c19198d-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.278 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.281 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.282 238945 INFO os_vif [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e')#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.334 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.334 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.334 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:19:3a:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.334 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:ea:ac:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.335 238945 INFO nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Using config drive#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.355 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:12:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:40.611 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:12:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/114339780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.744 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.746 238945 DEBUG nova.virt.libvirt.vif [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1067640702',display_name='tempest-TestNetworkAdvancedServerOps-server-1067640702',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1067640702',id=116,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHygz0yaDdzbysAeYS7JztNxK6hVipFq4Kx9NOon417gnw4IniJ7HoWUOi5nhMcIK/LlFV+VtvUatsc1HeZ7yTwzwpB9Pr+/56SphW+/bTc95CMfRypGzHoU07GmyMqBtA==',key_name='tempest-TestNetworkAdvancedServerOps-516807630',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-l2p1q6l3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:12:35Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=e47cd4e5-669d-4001-af0c-57b561828b60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.746 238945 DEBUG nova.network.os_vif_util [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.747 238945 DEBUG nova.network.os_vif_util [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.749 238945 DEBUG nova.objects.instance [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.766 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <uuid>e47cd4e5-669d-4001-af0c-57b561828b60</uuid>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <name>instance-00000074</name>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1067640702</nova:name>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:12:39</nova:creationTime>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:user uuid="a87606137cd4440ab2ffebe68b325a85">tempest-TestNetworkAdvancedServerOps-507048735-project-member</nova:user>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:project uuid="f1cac40132a44f0a978ac33f26f0875d">tempest-TestNetworkAdvancedServerOps-507048735</nova:project>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <nova:port uuid="45efb061-6afd-4021-a345-4aa248d4409b">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <entry name="serial">e47cd4e5-669d-4001-af0c-57b561828b60</entry>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <entry name="uuid">e47cd4e5-669d-4001-af0c-57b561828b60</entry>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e47cd4e5-669d-4001-af0c-57b561828b60_disk">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e47cd4e5-669d-4001-af0c-57b561828b60_disk.config">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:c9:17:d7"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <target dev="tap45efb061-6a"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/console.log" append="off"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:12:40 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:12:40 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:12:40 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:12:40 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.767 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Preparing to wait for external event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.767 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.767 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.767 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.768 238945 DEBUG nova.virt.libvirt.vif [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1067640702',display_name='tempest-TestNetworkAdvancedServerOps-server-1067640702',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1067640702',id=116,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHygz0yaDdzbysAeYS7JztNxK6hVipFq4Kx9NOon417gnw4IniJ7HoWUOi5nhMcIK/LlFV+VtvUatsc1HeZ7yTwzwpB9Pr+/56SphW+/bTc95CMfRypGzHoU07GmyMqBtA==',key_name='tempest-TestNetworkAdvancedServerOps-516807630',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-l2p1q6l3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:12:35Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=e47cd4e5-669d-4001-af0c-57b561828b60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.768 238945 DEBUG nova.network.os_vif_util [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.769 238945 DEBUG nova.network.os_vif_util [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.770 238945 DEBUG os_vif [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.770 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.770 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.771 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.773 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.773 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45efb061-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.774 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap45efb061-6a, col_values=(('external_ids', {'iface-id': '45efb061-6afd-4021-a345-4aa248d4409b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:17:d7', 'vm-uuid': 'e47cd4e5-669d-4001-af0c-57b561828b60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:40 np0005597378 NetworkManager[48904]: <info>  [1769523160.7764] manager: (tap45efb061-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.780 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.785 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.787 238945 INFO os_vif [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a')#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.832 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.834 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.834 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] No VIF found with MAC fa:16:3e:c9:17:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.835 238945 INFO nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Using config drive#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.855 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.861 238945 INFO nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Creating config drive at /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/disk.config#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.866 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdje8xd0x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.899 238945 DEBUG nova.network.neutron [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updated VIF entry in instance network info cache for port 8c19198d-9ee1-4b83-9bd2-71b418462578. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:12:40 np0005597378 nova_compute[238941]: 2026-01-27 14:12:40.899 238945 DEBUG nova.network.neutron [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [{"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:12:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.005 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdje8xd0x" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.032 238945 DEBUG nova.storage.rbd_utils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.036 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/disk.config 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.076 238945 DEBUG oslo_concurrency.lockutils [req-85711994-2ce7-4eeb-b2f7-a3f2c2c7b651 req-a9b0eac1-3bef-4348-99f5-16470fa6faf6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.164 238945 DEBUG oslo_concurrency.processutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/disk.config 8834b9bd-0324-4f5b-9b83-be852e0b96d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.165 238945 INFO nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Deleting local config drive /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2/disk.config because it was imported into RBD.#033[00m
Jan 27 09:12:41 np0005597378 kernel: tap50c43789-df: entered promiscuous mode
Jan 27 09:12:41 np0005597378 NetworkManager[48904]: <info>  [1769523161.2125] manager: (tap50c43789-df): new Tun device (/org/freedesktop/NetworkManager/Devices/483)
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01172|binding|INFO|Claiming lport 50c43789-df58-4796-81f2-c398dee6dabe for this chassis.
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01173|binding|INFO|50c43789-df58-4796-81f2-c398dee6dabe: Claiming fa:16:3e:19:3a:57 10.100.0.6
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.226 238945 INFO nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Creating config drive at /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/disk.config#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.231 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnjiksnd_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:41 np0005597378 kernel: tap8c19198d-9e: entered promiscuous mode
Jan 27 09:12:41 np0005597378 NetworkManager[48904]: <info>  [1769523161.2425] manager: (tap8c19198d-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/484)
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01174|binding|INFO|Setting lport 50c43789-df58-4796-81f2-c398dee6dabe ovn-installed in OVS
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01175|if_status|INFO|Dropped 1 log messages in last 36 seconds (most recently, 36 seconds ago) due to excessive rate
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01176|if_status|INFO|Not updating pb chassis for 8c19198d-9ee1-4b83-9bd2-71b418462578 now as sb is readonly
Jan 27 09:12:41 np0005597378 systemd-udevd[343313]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:12:41 np0005597378 systemd-udevd[343314]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01177|binding|INFO|Claiming lport 8c19198d-9ee1-4b83-9bd2-71b418462578 for this chassis.
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01178|binding|INFO|8c19198d-9ee1-4b83-9bd2-71b418462578: Claiming fa:16:3e:ea:ac:6b 2001:db8::f816:3eff:feea:ac6b
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01179|binding|INFO|Setting lport 50c43789-df58-4796-81f2-c398dee6dabe up in Southbound
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01180|binding|INFO|Setting lport 8c19198d-9ee1-4b83-9bd2-71b418462578 ovn-installed in OVS
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.261 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:3a:57 10.100.0.6'], port_security=['fa:16:3e:19:3a:57 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8834b9bd-0324-4f5b-9b83-be852e0b96d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9964511f-1456-4111-a888-96329ab42c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4fd51-ea76-4523-91d0-373d6d53e00e, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=50c43789-df58-4796-81f2-c398dee6dabe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.262 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 50c43789-df58-4796-81f2-c398dee6dabe in datapath 9964511f-1456-4111-a888-96329ab42c59 bound to our chassis#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.263 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9964511f-1456-4111-a888-96329ab42c59#033[00m
Jan 27 09:12:41 np0005597378 NetworkManager[48904]: <info>  [1769523161.2694] device (tap8c19198d-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:12:41 np0005597378 NetworkManager[48904]: <info>  [1769523161.2704] device (tap8c19198d-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:12:41 np0005597378 NetworkManager[48904]: <info>  [1769523161.2743] device (tap50c43789-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:41 np0005597378 NetworkManager[48904]: <info>  [1769523161.2756] device (tap50c43789-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:12:41 np0005597378 systemd-machined[207425]: New machine qemu-146-instance-00000073.
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.283 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[adfc5916-7f92-4892-8b26-e9bcd36749be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 systemd[1]: Started Virtual Machine qemu-146-instance-00000073.
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.297 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:ac:6b 2001:db8::f816:3eff:feea:ac6b'], port_security=['fa:16:3e:ea:ac:6b 2001:db8::f816:3eff:feea:ac6b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:ac6b/64', 'neutron:device_id': '8834b9bd-0324-4f5b-9b83-be852e0b96d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=461a5a8c-725e-4fde-b0f2-146218d7a416, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8c19198d-9ee1-4b83-9bd2-71b418462578) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01181|binding|INFO|Setting lport 8c19198d-9ee1-4b83-9bd2-71b418462578 up in Southbound
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.320 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[27674906-9ad9-459a-b78c-0f024898ee13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.323 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fd101cfe-a017-4360-a6aa-d4f65a074146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.350 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9b160b-492d-40b9-b5ab-54197639f32b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.367 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6f5bbc03-7152-4382-a023-bad913ff8b6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9964511f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:ae:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576488, 'reachable_time': 35427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343335, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.385 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnjiksnd_" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.386 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90854035-32b9-4df4-8687-be1d04eb6cb2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9964511f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576501, 'tstamp': 576501}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343336, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9964511f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576504, 'tstamp': 576504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343336, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.387 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9964511f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.395 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9964511f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.395 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.396 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9964511f-10, col_values=(('external_ids', {'iface-id': '139ea0ba-f559-4c32-9b23-bc114f6fe7b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.397 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.398 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8c19198d-9ee1-4b83-9bd2-71b418462578 in datapath b8e1b054-5200-4e22-9702-c3f6d1f1a12e unbound from our chassis#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.400 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8e1b054-5200-4e22-9702-c3f6d1f1a12e#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.409 238945 DEBUG nova.storage.rbd_utils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] rbd image e47cd4e5-669d-4001-af0c-57b561828b60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.413 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/disk.config e47cd4e5-669d-4001-af0c-57b561828b60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.416 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[068412e1-5b85-4a59-b9b0-0586191d589b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.445 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.449 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f5561284-9548-4eef-a636-3b19b8e1e560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.453 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d00f1c13-3e58-4529-abcd-5b11158ec467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.481 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[55e24bd0-45f2-45c3-a2cc-3fb659e7949c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.508 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[55bc1323-426d-4314-8406-5b2081350c65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8e1b054-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:96:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576588, 'reachable_time': 39720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343377, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.530 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[715cec81-9088-4043-839b-900e7bbfd958]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8e1b054-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576604, 'tstamp': 576604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343378, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.531 238945 DEBUG nova.compute.manager [req-c4f48264-ee18-49c2-9008-ac16384ffaec req-aed2cde9-f83e-4152-b6b1-e0337daf4e29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.532 238945 DEBUG oslo_concurrency.lockutils [req-c4f48264-ee18-49c2-9008-ac16384ffaec req-aed2cde9-f83e-4152-b6b1-e0337daf4e29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.532 238945 DEBUG oslo_concurrency.lockutils [req-c4f48264-ee18-49c2-9008-ac16384ffaec req-aed2cde9-f83e-4152-b6b1-e0337daf4e29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.532 238945 DEBUG oslo_concurrency.lockutils [req-c4f48264-ee18-49c2-9008-ac16384ffaec req-aed2cde9-f83e-4152-b6b1-e0337daf4e29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.532 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8e1b054-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.533 238945 DEBUG nova.compute.manager [req-c4f48264-ee18-49c2-9008-ac16384ffaec req-aed2cde9-f83e-4152-b6b1-e0337daf4e29 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Processing event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.534 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.540 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.540 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8e1b054-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.540 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.541 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8e1b054-50, col_values=(('external_ids', {'iface-id': 'cec58910-221b-4aa5-9532-67a30f83e8bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.541 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.570 238945 DEBUG nova.network.neutron [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updated VIF entry in instance network info cache for port 45efb061-6afd-4021-a345-4aa248d4409b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.570 238945 DEBUG nova.network.neutron [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updating instance_info_cache with network_info: [{"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.589 238945 DEBUG oslo_concurrency.lockutils [req-3d9e6047-d917-4f1b-aa47-41128edf5c11 req-a4bd241e-45d7-4e14-a438-c2d95a2f1946 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.607 238945 DEBUG oslo_concurrency.processutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/disk.config e47cd4e5-669d-4001-af0c-57b561828b60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.607 238945 INFO nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Deleting local config drive /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60/disk.config because it was imported into RBD.#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.633 238945 DEBUG nova.compute.manager [req-baed5c2f-3ed6-4a5e-bdda-04a5cbb9f01b req-0932d4dc-9671-4745-acd2-d857c7b5c97c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.634 238945 DEBUG oslo_concurrency.lockutils [req-baed5c2f-3ed6-4a5e-bdda-04a5cbb9f01b req-0932d4dc-9671-4745-acd2-d857c7b5c97c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.634 238945 DEBUG oslo_concurrency.lockutils [req-baed5c2f-3ed6-4a5e-bdda-04a5cbb9f01b req-0932d4dc-9671-4745-acd2-d857c7b5c97c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.634 238945 DEBUG oslo_concurrency.lockutils [req-baed5c2f-3ed6-4a5e-bdda-04a5cbb9f01b req-0932d4dc-9671-4745-acd2-d857c7b5c97c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.634 238945 DEBUG nova.compute.manager [req-baed5c2f-3ed6-4a5e-bdda-04a5cbb9f01b req-0932d4dc-9671-4745-acd2-d857c7b5c97c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Processing event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:12:41 np0005597378 NetworkManager[48904]: <info>  [1769523161.6479] manager: (tap45efb061-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/485)
Jan 27 09:12:41 np0005597378 kernel: tap45efb061-6a: entered promiscuous mode
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01182|binding|INFO|Claiming lport 45efb061-6afd-4021-a345-4aa248d4409b for this chassis.
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.653 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01183|binding|INFO|45efb061-6afd-4021-a345-4aa248d4409b: Claiming fa:16:3e:c9:17:d7 10.100.0.13
Jan 27 09:12:41 np0005597378 NetworkManager[48904]: <info>  [1769523161.6617] device (tap45efb061-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:12:41 np0005597378 NetworkManager[48904]: <info>  [1769523161.6626] device (tap45efb061-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01184|binding|INFO|Setting lport 45efb061-6afd-4021-a345-4aa248d4409b ovn-installed in OVS
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.672 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.673 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:41 np0005597378 systemd-machined[207425]: New machine qemu-147-instance-00000074.
Jan 27 09:12:41 np0005597378 systemd[1]: Started Virtual Machine qemu-147-instance-00000074.
Jan 27 09:12:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:41Z|01185|binding|INFO|Setting lport 45efb061-6afd-4021-a345-4aa248d4409b up in Southbound
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.694 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:17:d7 10.100.0.13'], port_security=['fa:16:3e:c9:17:d7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e47cd4e5-669d-4001-af0c-57b561828b60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92a627e1-dd59-429c-82b4-8340ea69cf88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b600bd45-38b0-42c1-b979-e43c4f4b41d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b87a4992-80b4-4b64-a6a2-e3189f2c4ab6, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=45efb061-6afd-4021-a345-4aa248d4409b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.695 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 45efb061-6afd-4021-a345-4aa248d4409b in datapath 92a627e1-dd59-429c-82b4-8340ea69cf88 bound to our chassis#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.697 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92a627e1-dd59-429c-82b4-8340ea69cf88#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.711 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1b2b73-e658-437a-aed3-e957ec5ea082]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.712 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap92a627e1-d1 in ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.713 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap92a627e1-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.713 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44b6672a-64d0-446f-8b61-fc8b9a44d1d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.714 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa2b60a-37e9-4b7a-82d2-570a77805147]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.727 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d81deb05-2661-4177-b53a-0b53a168d1a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.741 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa1c7d0-7a24-43f4-b6f9-ba8a93246964]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.772 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[97568da4-3227-4f37-b783-b23bd7bf3708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 NetworkManager[48904]: <info>  [1769523161.7781] manager: (tap92a627e1-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/486)
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.777 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c8adf76e-348d-48c6-a8cb-9d820444d501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.807 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c5174dfe-c8cb-4776-b97d-1ee22983e23c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.810 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a2cd9fb6-6b50-47b8-a0aa-c1d16f2840ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 NetworkManager[48904]: <info>  [1769523161.8342] device (tap92a627e1-d0): carrier: link connected
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.843 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb19cfd-3bff-4ef2-8dff-553fa1294f98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.862 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f1225d4c-c1ef-4c32-b6f4-e48aef337bfc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92a627e1-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e8:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 346], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580145, 'reachable_time': 42111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343469, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.873 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523161.873372, 8834b9bd-0324-4f5b-9b83-be852e0b96d2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.874 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] VM Started (Lifecycle Event)#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.876 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.881 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[240df470-a1d4-4595-9c1d-e846e4b2a010]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:e887'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580145, 'tstamp': 580145}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343470, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.884 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.890 238945 INFO nova.virt.libvirt.driver [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Instance spawned successfully.#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.890 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.898 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ba00c8ee-da89-4eba-983a-de557c49e58c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92a627e1-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e8:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 346], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580145, 'reachable_time': 42111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343471, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed3da1a-34a6-49bd-a455-c381956fbe67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.941 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.944 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.984 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.985 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.985 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.986 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.986 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:12:41 np0005597378 nova_compute[238941]: 2026-01-27 14:12:41.986 238945 DEBUG nova.virt.libvirt.driver [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.997 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[17ed061d-bc6a-44fb-92ec-f73eb1b2291c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.998 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92a627e1-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.999 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:12:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:41.999 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92a627e1-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.001 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:42 np0005597378 NetworkManager[48904]: <info>  [1769523162.0018] manager: (tap92a627e1-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/487)
Jan 27 09:12:42 np0005597378 kernel: tap92a627e1-d0: entered promiscuous mode
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:42.006 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92a627e1-d0, col_values=(('external_ids', {'iface-id': '1dcbe1a7-ed46-453b-aa3a-c8481b1903de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:12:42 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:42Z|01186|binding|INFO|Releasing lport 1dcbe1a7-ed46-453b-aa3a-c8481b1903de from this chassis (sb_readonly=0)
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.007 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:42.011 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/92a627e1-dd59-429c-82b4-8340ea69cf88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/92a627e1-dd59-429c-82b4-8340ea69cf88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:42.014 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[35056287-023f-4dde-9bd0-109ba5380aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:42.015 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-92a627e1-dd59-429c-82b4-8340ea69cf88
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/92a627e1-dd59-429c-82b4-8340ea69cf88.pid.haproxy
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 92a627e1-dd59-429c-82b4-8340ea69cf88
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 09:12:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:42.016 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'env', 'PROCESS_TAG=haproxy-92a627e1-dd59-429c-82b4-8340ea69cf88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/92a627e1-dd59-429c-82b4-8340ea69cf88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.023 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.058 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.059 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523161.873558, 8834b9bd-0324-4f5b-9b83-be852e0b96d2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.059 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] VM Paused (Lifecycle Event)
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.092 238945 INFO nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Took 11.92 seconds to spawn the instance on the hypervisor.
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.092 238945 DEBUG nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.106 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.109 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523161.879088, 8834b9bd-0324-4f5b-9b83-be852e0b96d2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.110 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] VM Resumed (Lifecycle Event)
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.163 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.167 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.215 238945 INFO nova.compute.manager [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Took 13.37 seconds to build instance.
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.302 238945 DEBUG oslo_concurrency.lockutils [None req-0d9f2cb2-de28-47f9-ba52-f7b5c93ef3e8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:12:42 np0005597378 podman[343540]: 2026-01-27 14:12:42.393384452 +0000 UTC m=+0.064348342 container create 2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.415 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523162.4148781, e47cd4e5-669d-4001-af0c-57b561828b60 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.415 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Started (Lifecycle Event)
Jan 27 09:12:42 np0005597378 systemd[1]: Started libpod-conmon-2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f.scope.
Jan 27 09:12:42 np0005597378 podman[343540]: 2026-01-27 14:12:42.354842867 +0000 UTC m=+0.025806776 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:12:42 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:12:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7daa7d9ddae27fe200edda2492f6aef61e04a34a33b2a40595cc0bad27c6ffd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:12:42 np0005597378 podman[343540]: 2026-01-27 14:12:42.498757336 +0000 UTC m=+0.169721255 container init 2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:12:42 np0005597378 podman[343540]: 2026-01-27 14:12:42.503975736 +0000 UTC m=+0.174939625 container start 2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.519 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.523 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523162.4157445, e47cd4e5-669d-4001-af0c-57b561828b60 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:12:42 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [NOTICE]   (343565) : New worker (343567) forked
Jan 27 09:12:42 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [NOTICE]   (343565) : Loading success.
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.524 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Paused (Lifecycle Event)
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.595 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.599 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:12:42 np0005597378 nova_compute[238941]: 2026-01-27 14:12:42.646 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:12:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.642 238945 DEBUG nova.compute.manager [req-c2a0e955-d7e4-498e-a62e-d6bc9e63394a req-340b2b92-61a6-4e33-b910-01c922fdcb57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.643 238945 DEBUG oslo_concurrency.lockutils [req-c2a0e955-d7e4-498e-a62e-d6bc9e63394a req-340b2b92-61a6-4e33-b910-01c922fdcb57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.644 238945 DEBUG oslo_concurrency.lockutils [req-c2a0e955-d7e4-498e-a62e-d6bc9e63394a req-340b2b92-61a6-4e33-b910-01c922fdcb57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.644 238945 DEBUG oslo_concurrency.lockutils [req-c2a0e955-d7e4-498e-a62e-d6bc9e63394a req-340b2b92-61a6-4e33-b910-01c922fdcb57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.644 238945 DEBUG nova.compute.manager [req-c2a0e955-d7e4-498e-a62e-d6bc9e63394a req-340b2b92-61a6-4e33-b910-01c922fdcb57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] No waiting events found dispatching network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.645 238945 WARNING nova.compute.manager [req-c2a0e955-d7e4-498e-a62e-d6bc9e63394a req-340b2b92-61a6-4e33-b910-01c922fdcb57 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received unexpected event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 for instance with vm_state active and task_state None.
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.744 238945 DEBUG nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.745 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.745 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.746 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.746 238945 DEBUG nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] No waiting events found dispatching network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.746 238945 WARNING nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received unexpected event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe for instance with vm_state active and task_state None.
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.747 238945 DEBUG nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.747 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.747 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.748 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.748 238945 DEBUG nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Processing event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.748 238945 DEBUG nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.749 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.749 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.749 238945 DEBUG oslo_concurrency.lockutils [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.749 238945 DEBUG nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.750 238945 WARNING nova.compute.manager [req-0675e498-1c7a-4799-9cbe-b49a9375a2ed req-d6b0d5ad-691e-45d6-a268-6887fe55c33c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received unexpected event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with vm_state building and task_state spawning.
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.751 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.754 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523163.754433, e47cd4e5-669d-4001-af0c-57b561828b60 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.755 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Resumed (Lifecycle Event)
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.756 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.759 238945 INFO nova.virt.libvirt.driver [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance spawned successfully.
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.759 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.775 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.780 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.783 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.784 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.784 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.785 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.785 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.786 238945 DEBUG nova.virt.libvirt.driver [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.827 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.849 238945 INFO nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Took 8.34 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.850 238945 DEBUG nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.905 238945 INFO nova.compute.manager [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Took 9.48 seconds to build instance.#033[00m
Jan 27 09:12:43 np0005597378 nova_compute[238941]: 2026-01-27 14:12:43.920 238945 DEBUG oslo_concurrency.lockutils [None req-90caaa60-fb31-4778-af04-62fc9fd788ff a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 110 op/s
Jan 27 09:12:45 np0005597378 nova_compute[238941]: 2026-01-27 14:12:45.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:45 np0005597378 nova_compute[238941]: 2026-01-27 14:12:45.983 238945 DEBUG nova.compute.manager [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-changed-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:45 np0005597378 nova_compute[238941]: 2026-01-27 14:12:45.984 238945 DEBUG nova.compute.manager [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing instance network info cache due to event network-changed-50c43789-df58-4796-81f2-c398dee6dabe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:12:45 np0005597378 nova_compute[238941]: 2026-01-27 14:12:45.984 238945 DEBUG oslo_concurrency.lockutils [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:12:45 np0005597378 nova_compute[238941]: 2026-01-27 14:12:45.984 238945 DEBUG oslo_concurrency.lockutils [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:12:45 np0005597378 nova_compute[238941]: 2026-01-27 14:12:45.985 238945 DEBUG nova.network.neutron [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing network info cache for port 50c43789-df58-4796-81f2-c398dee6dabe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:12:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:46.318 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:12:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:46.319 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:12:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:12:46.320 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:12:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.3 MiB/s wr, 164 op/s
Jan 27 09:12:47 np0005597378 nova_compute[238941]: 2026-01-27 14:12:47.532 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:12:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:12:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:12:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:12:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:12:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:12:47 np0005597378 nova_compute[238941]: 2026-01-27 14:12:47.917 238945 DEBUG nova.network.neutron [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updated VIF entry in instance network info cache for port 50c43789-df58-4796-81f2-c398dee6dabe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:12:47 np0005597378 nova_compute[238941]: 2026-01-27 14:12:47.918 238945 DEBUG nova.network.neutron [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [{"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:12:47 np0005597378 nova_compute[238941]: 2026-01-27 14:12:47.939 238945 DEBUG oslo_concurrency.lockutils [req-3eccac7e-93f3-43dd-a637-943f81c19d0b req-765d20e5-3182-4310-a93c-5027fe2b7ec6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:12:48 np0005597378 nova_compute[238941]: 2026-01-27 14:12:48.056 238945 DEBUG nova.compute.manager [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-changed-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:12:48 np0005597378 nova_compute[238941]: 2026-01-27 14:12:48.056 238945 DEBUG nova.compute.manager [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Refreshing instance network info cache due to event network-changed-45efb061-6afd-4021-a345-4aa248d4409b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:12:48 np0005597378 nova_compute[238941]: 2026-01-27 14:12:48.056 238945 DEBUG oslo_concurrency.lockutils [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:12:48 np0005597378 nova_compute[238941]: 2026-01-27 14:12:48.057 238945 DEBUG oslo_concurrency.lockutils [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:12:48 np0005597378 nova_compute[238941]: 2026-01-27 14:12:48.057 238945 DEBUG nova.network.neutron [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Refreshing network info cache for port 45efb061-6afd-4021-a345-4aa248d4409b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:12:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 27 09:12:49 np0005597378 nova_compute[238941]: 2026-01-27 14:12:49.130 238945 DEBUG nova.network.neutron [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updated VIF entry in instance network info cache for port 45efb061-6afd-4021-a345-4aa248d4409b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:12:49 np0005597378 nova_compute[238941]: 2026-01-27 14:12:49.130 238945 DEBUG nova.network.neutron [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updating instance_info_cache with network_info: [{"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:12:49 np0005597378 nova_compute[238941]: 2026-01-27 14:12:49.163 238945 DEBUG oslo_concurrency.lockutils [req-6075d29a-c4b5-48ba-8a3b-2dd142803bdd req-51ed7fe5-b65a-406e-a807-aadab7b87aa4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:12:50 np0005597378 nova_compute[238941]: 2026-01-27 14:12:50.779 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 161 op/s
Jan 27 09:12:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:51 np0005597378 podman[343576]: 2026-01-27 14:12:51.724171654 +0000 UTC m=+0.058090633 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:12:52 np0005597378 nova_compute[238941]: 2026-01-27 14:12:52.534 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Jan 27 09:12:53 np0005597378 podman[343595]: 2026-01-27 14:12:53.757090087 +0000 UTC m=+0.093275869 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:12:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 214 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Jan 27 09:12:55 np0005597378 nova_compute[238941]: 2026-01-27 14:12:55.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:12:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 215 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 229 KiB/s wr, 94 op/s
Jan 27 09:12:57 np0005597378 nova_compute[238941]: 2026-01-27 14:12:57.536 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:12:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:12:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 9499 writes, 43K keys, 9499 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 9499 writes, 9499 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1473 writes, 7393 keys, 1473 commit groups, 1.0 writes per commit group, ingest: 9.39 MB, 0.02 MB/s#012Interval WAL: 1473 writes, 1473 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     35.1      1.47              0.15        29    0.051       0      0       0.0       0.0#012  L6      1/0    7.64 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.4     81.6     68.4      3.33              0.60        28    0.119    159K    15K       0.0       0.0#012 Sum      1/0    7.64 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.4     56.6     58.1      4.80              0.75        57    0.084    159K    15K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7     47.6     46.6      1.76              0.24        16    0.110     55K   4073       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     81.6     68.4      3.33              0.60        28    0.119    159K    15K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     35.3      1.46              0.15        28    0.052       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.1 total, 600.0 interval#012Flush(GB): cumulative 0.050, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.27 GB write, 0.08 MB/s write, 0.27 GB read, 0.08 MB/s read, 4.8 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 30.87 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.0002 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1946,29.68 MB,9.76397%) FilterBlock(58,449.11 KB,0.144271%) IndexBlock(58,770.98 KB,0.247669%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 27 09:12:58 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:58Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:3a:57 10.100.0.6
Jan 27 09:12:58 np0005597378 ovn_controller[144812]: 2026-01-27T14:12:58Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:3a:57 10.100.0.6
Jan 27 09:12:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 232 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 921 KiB/s rd, 1002 KiB/s wr, 71 op/s
Jan 27 09:12:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:12:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1225852627' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:12:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:12:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1225852627' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:13:00 np0005597378 nova_compute[238941]: 2026-01-27 14:13:00.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:13:00 np0005597378 nova_compute[238941]: 2026-01-27 14:13:00.784 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 255 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 3.0 MiB/s wr, 84 op/s
Jan 27 09:13:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:01Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:17:d7 10.100.0.13
Jan 27 09:13:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:01Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:17:d7 10.100.0.13
Jan 27 09:13:01 np0005597378 nova_compute[238941]: 2026-01-27 14:13:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:13:01 np0005597378 nova_compute[238941]: 2026-01-27 14:13:01.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:01 np0005597378 nova_compute[238941]: 2026-01-27 14:13:01.415 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:01 np0005597378 nova_compute[238941]: 2026-01-27 14:13:01.415 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:01 np0005597378 nova_compute[238941]: 2026-01-27 14:13:01.415 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:13:01 np0005597378 nova_compute[238941]: 2026-01-27 14:13:01.416 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:13:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:13:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606924559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:13:01 np0005597378 nova_compute[238941]: 2026-01-27 14:13:01.962 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.056 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.057 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.059 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.060 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.063 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.063 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.264 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.265 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3202MB free_disk=59.865023078396916GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.265 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.265 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.340 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 6bf91edb-b66a-458b-b8bd-e8520cdc6349 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.340 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8834b9bd-0324-4f5b-9b83-be852e0b96d2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.341 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e47cd4e5-669d-4001-af0c-57b561828b60 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.341 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.341 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.388 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.538 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 255 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 3.0 MiB/s wr, 84 op/s
Jan 27 09:13:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:13:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2541131171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.936 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.945 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.965 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.992 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:13:02 np0005597378 nova_compute[238941]: 2026-01-27 14:13:02.993 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 277 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 450 KiB/s rd, 4.2 MiB/s wr, 109 op/s
Jan 27 09:13:04 np0005597378 nova_compute[238941]: 2026-01-27 14:13:04.995 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:13:05 np0005597378 nova_compute[238941]: 2026-01-27 14:13:05.786 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:06 np0005597378 nova_compute[238941]: 2026-01-27 14:13:06.665 238945 INFO nova.compute.manager [None req-ab82f1f4-b977-437b-bcbe-c27f6cd96ddf a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Get console output#033[00m
Jan 27 09:13:06 np0005597378 nova_compute[238941]: 2026-01-27 14:13:06.671 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:13:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 548 KiB/s rd, 4.3 MiB/s wr, 122 op/s
Jan 27 09:13:06 np0005597378 nova_compute[238941]: 2026-01-27 14:13:06.958 238945 DEBUG nova.objects.instance [None req-5515387e-2dcf-497f-a321-5253fef8a12f a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:13:06 np0005597378 nova_compute[238941]: 2026-01-27 14:13:06.992 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523186.9916627, e47cd4e5-669d-4001-af0c-57b561828b60 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:13:06 np0005597378 nova_compute[238941]: 2026-01-27 14:13:06.992 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:13:07 np0005597378 nova_compute[238941]: 2026-01-27 14:13:07.021 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:13:07 np0005597378 nova_compute[238941]: 2026-01-27 14:13:07.026 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:13:07 np0005597378 nova_compute[238941]: 2026-01-27 14:13:07.066 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 27 09:13:07 np0005597378 nova_compute[238941]: 2026-01-27 14:13:07.540 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:07 np0005597378 kernel: tap45efb061-6a (unregistering): left promiscuous mode
Jan 27 09:13:07 np0005597378 NetworkManager[48904]: <info>  [1769523187.7155] device (tap45efb061-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:13:07 np0005597378 nova_compute[238941]: 2026-01-27 14:13:07.727 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:07 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:07Z|01187|binding|INFO|Releasing lport 45efb061-6afd-4021-a345-4aa248d4409b from this chassis (sb_readonly=0)
Jan 27 09:13:07 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:07Z|01188|binding|INFO|Setting lport 45efb061-6afd-4021-a345-4aa248d4409b down in Southbound
Jan 27 09:13:07 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:07Z|01189|binding|INFO|Removing iface tap45efb061-6a ovn-installed in OVS
Jan 27 09:13:07 np0005597378 nova_compute[238941]: 2026-01-27 14:13:07.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:07 np0005597378 nova_compute[238941]: 2026-01-27 14:13:07.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:07.785 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:17:d7 10.100.0.13'], port_security=['fa:16:3e:c9:17:d7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e47cd4e5-669d-4001-af0c-57b561828b60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92a627e1-dd59-429c-82b4-8340ea69cf88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b600bd45-38b0-42c1-b979-e43c4f4b41d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b87a4992-80b4-4b64-a6a2-e3189f2c4ab6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=45efb061-6afd-4021-a345-4aa248d4409b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:13:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:07.786 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 45efb061-6afd-4021-a345-4aa248d4409b in datapath 92a627e1-dd59-429c-82b4-8340ea69cf88 unbound from our chassis#033[00m
Jan 27 09:13:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:07.787 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92a627e1-dd59-429c-82b4-8340ea69cf88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:13:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:07.789 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[460c2b4f-4f80-4272-857e-666bd74d2fe9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:07.789 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 namespace which is not needed anymore#033[00m
Jan 27 09:13:07 np0005597378 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000074.scope: Deactivated successfully.
Jan 27 09:13:07 np0005597378 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000074.scope: Consumed 18.432s CPU time.
Jan 27 09:13:07 np0005597378 systemd-machined[207425]: Machine qemu-147-instance-00000074 terminated.
Jan 27 09:13:07 np0005597378 nova_compute[238941]: 2026-01-27 14:13:07.897 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:07 np0005597378 nova_compute[238941]: 2026-01-27 14:13:07.902 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:07 np0005597378 nova_compute[238941]: 2026-01-27 14:13:07.909 238945 DEBUG nova.compute.manager [None req-5515387e-2dcf-497f-a321-5253fef8a12f a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:13:07 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [NOTICE]   (343565) : haproxy version is 2.8.14-c23fe91
Jan 27 09:13:07 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [NOTICE]   (343565) : path to executable is /usr/sbin/haproxy
Jan 27 09:13:07 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [WARNING]  (343565) : Exiting Master process...
Jan 27 09:13:07 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [ALERT]    (343565) : Current worker (343567) exited with code 143 (Terminated)
Jan 27 09:13:07 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[343561]: [WARNING]  (343565) : All workers exited. Exiting... (0)
Jan 27 09:13:07 np0005597378 systemd[1]: libpod-2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f.scope: Deactivated successfully.
Jan 27 09:13:07 np0005597378 podman[343698]: 2026-01-27 14:13:07.955796945 +0000 UTC m=+0.054107135 container died 2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 09:13:07 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f-userdata-shm.mount: Deactivated successfully.
Jan 27 09:13:07 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f7daa7d9ddae27fe200edda2492f6aef61e04a34a33b2a40595cc0bad27c6ffd-merged.mount: Deactivated successfully.
Jan 27 09:13:07 np0005597378 podman[343698]: 2026-01-27 14:13:07.997912747 +0000 UTC m=+0.096222937 container cleanup 2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:13:08 np0005597378 systemd[1]: libpod-conmon-2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f.scope: Deactivated successfully.
Jan 27 09:13:08 np0005597378 podman[343730]: 2026-01-27 14:13:08.070234272 +0000 UTC m=+0.050931120 container remove 2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:13:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.076 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[640464b2-48e6-4bb2-ac75-1432f89d0bbe]: (4, ('Tue Jan 27 02:13:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 (2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f)\n2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f\nTue Jan 27 02:13:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 (2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f)\n2a0b653e2125cce0ba912090663d894c1336a32d1c1f02294d5d06172a582a8f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.078 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[380766a7-657d-49c0-a592-a691813f696a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.079 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92a627e1-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:08 np0005597378 nova_compute[238941]: 2026-01-27 14:13:08.081 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:08 np0005597378 kernel: tap92a627e1-d0: left promiscuous mode
Jan 27 09:13:08 np0005597378 nova_compute[238941]: 2026-01-27 14:13:08.102 238945 DEBUG nova.compute.manager [req-fb6db669-3e30-46ef-a416-246fcb4ba34a req-519995c7-95c8-4526-94df-84ba7ffb8e73 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-unplugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:08 np0005597378 nova_compute[238941]: 2026-01-27 14:13:08.102 238945 DEBUG oslo_concurrency.lockutils [req-fb6db669-3e30-46ef-a416-246fcb4ba34a req-519995c7-95c8-4526-94df-84ba7ffb8e73 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:08 np0005597378 nova_compute[238941]: 2026-01-27 14:13:08.103 238945 DEBUG oslo_concurrency.lockutils [req-fb6db669-3e30-46ef-a416-246fcb4ba34a req-519995c7-95c8-4526-94df-84ba7ffb8e73 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:08 np0005597378 nova_compute[238941]: 2026-01-27 14:13:08.103 238945 DEBUG oslo_concurrency.lockutils [req-fb6db669-3e30-46ef-a416-246fcb4ba34a req-519995c7-95c8-4526-94df-84ba7ffb8e73 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:08 np0005597378 nova_compute[238941]: 2026-01-27 14:13:08.103 238945 DEBUG nova.compute.manager [req-fb6db669-3e30-46ef-a416-246fcb4ba34a req-519995c7-95c8-4526-94df-84ba7ffb8e73 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-unplugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:13:08 np0005597378 nova_compute[238941]: 2026-01-27 14:13:08.103 238945 WARNING nova.compute.manager [req-fb6db669-3e30-46ef-a416-246fcb4ba34a req-519995c7-95c8-4526-94df-84ba7ffb8e73 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received unexpected event network-vif-unplugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with vm_state suspended and task_state None.#033[00m
Jan 27 09:13:08 np0005597378 nova_compute[238941]: 2026-01-27 14:13:08.103 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.104 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf56f1f-ae48-423e-aaf8-aade945ed890]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.118 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f78e84-e18f-4775-a405-d3b906b47f06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.119 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a84ccfa-765d-47d2-b662-5bd64943f60f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.136 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e21f1d9b-09a5-412e-a22c-971649530a03]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580139, 'reachable_time': 19510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343749, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.139 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:13:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:08.139 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fb05ebd9-f8f0-47ae-8941-bcd72c6bd2f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:08 np0005597378 systemd[1]: run-netns-ovnmeta\x2d92a627e1\x2ddd59\x2d429c\x2d82b4\x2d8340ea69cf88.mount: Deactivated successfully.
Jan 27 09:13:08 np0005597378 nova_compute[238941]: 2026-01-27 14:13:08.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:13:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 4.1 MiB/s wr, 120 op/s
Jan 27 09:13:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:13:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:13:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:13:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:13:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:13:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:13:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:13:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:13:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:13:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:13:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:13:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:13:09 np0005597378 podman[343892]: 2026-01-27 14:13:09.829813837 +0000 UTC m=+0.039871794 container create 0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_margulis, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 09:13:09 np0005597378 systemd[1]: Started libpod-conmon-0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e.scope.
Jan 27 09:13:09 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:13:09 np0005597378 podman[343892]: 2026-01-27 14:13:09.906196239 +0000 UTC m=+0.116254216 container init 0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_margulis, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:13:09 np0005597378 podman[343892]: 2026-01-27 14:13:09.814637008 +0000 UTC m=+0.024694995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:13:09 np0005597378 podman[343892]: 2026-01-27 14:13:09.912873299 +0000 UTC m=+0.122931256 container start 0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 09:13:09 np0005597378 podman[343892]: 2026-01-27 14:13:09.917940286 +0000 UTC m=+0.127998273 container attach 0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 09:13:09 np0005597378 systemd[1]: libpod-0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e.scope: Deactivated successfully.
Jan 27 09:13:09 np0005597378 vigilant_margulis[343908]: 167 167
Jan 27 09:13:09 np0005597378 podman[343892]: 2026-01-27 14:13:09.919096427 +0000 UTC m=+0.129154384 container died 0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_margulis, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:13:09 np0005597378 conmon[343908]: conmon 0c13fd961a1ec4b27ede <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e.scope/container/memory.events
Jan 27 09:13:09 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5653e32d81123bd669be68ef8ba0378b2018c04d0829a0b2b1f1b2c6553eca1c-merged.mount: Deactivated successfully.
Jan 27 09:13:09 np0005597378 podman[343892]: 2026-01-27 14:13:09.954370565 +0000 UTC m=+0.164428522 container remove 0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_margulis, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 09:13:09 np0005597378 systemd[1]: libpod-conmon-0c13fd961a1ec4b27ede8e05f955ed8df90688bedb3846dce7832bb656bb5b4e.scope: Deactivated successfully.
Jan 27 09:13:10 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:13:10 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:13:10 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:13:10 np0005597378 podman[343932]: 2026-01-27 14:13:10.133629214 +0000 UTC m=+0.039372959 container create 3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 09:13:10 np0005597378 systemd[1]: Started libpod-conmon-3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d.scope.
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.177 238945 DEBUG nova.compute.manager [req-8c0f3506-c61a-4e4b-a006-96a5471ffc0c req-fc399553-25b8-4273-93a6-0ecdafccbfc8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.179 238945 DEBUG oslo_concurrency.lockutils [req-8c0f3506-c61a-4e4b-a006-96a5471ffc0c req-fc399553-25b8-4273-93a6-0ecdafccbfc8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.180 238945 DEBUG oslo_concurrency.lockutils [req-8c0f3506-c61a-4e4b-a006-96a5471ffc0c req-fc399553-25b8-4273-93a6-0ecdafccbfc8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.180 238945 DEBUG oslo_concurrency.lockutils [req-8c0f3506-c61a-4e4b-a006-96a5471ffc0c req-fc399553-25b8-4273-93a6-0ecdafccbfc8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.180 238945 DEBUG nova.compute.manager [req-8c0f3506-c61a-4e4b-a006-96a5471ffc0c req-fc399553-25b8-4273-93a6-0ecdafccbfc8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.180 238945 WARNING nova.compute.manager [req-8c0f3506-c61a-4e4b-a006-96a5471ffc0c req-fc399553-25b8-4273-93a6-0ecdafccbfc8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received unexpected event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with vm_state suspended and task_state None.#033[00m
Jan 27 09:13:10 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:13:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbdb0a07f25235d42d31d1a798f958f11aac518eca1d13f050b09f781b6e886/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbdb0a07f25235d42d31d1a798f958f11aac518eca1d13f050b09f781b6e886/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbdb0a07f25235d42d31d1a798f958f11aac518eca1d13f050b09f781b6e886/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbdb0a07f25235d42d31d1a798f958f11aac518eca1d13f050b09f781b6e886/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbdb0a07f25235d42d31d1a798f958f11aac518eca1d13f050b09f781b6e886/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:10 np0005597378 podman[343932]: 2026-01-27 14:13:10.118366884 +0000 UTC m=+0.024110659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:13:10 np0005597378 podman[343932]: 2026-01-27 14:13:10.215491145 +0000 UTC m=+0.121234910 container init 3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 09:13:10 np0005597378 podman[343932]: 2026-01-27 14:13:10.224322953 +0000 UTC m=+0.130066698 container start 3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:13:10 np0005597378 podman[343932]: 2026-01-27 14:13:10.227742034 +0000 UTC m=+0.133485779 container attach 3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_hermann, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.566 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.566 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.566 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.567 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6bf91edb-b66a-458b-b8bd-e8520cdc6349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:13:10 np0005597378 zen_hermann[343949]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:13:10 np0005597378 zen_hermann[343949]: --> All data devices are unavailable
Jan 27 09:13:10 np0005597378 systemd[1]: libpod-3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d.scope: Deactivated successfully.
Jan 27 09:13:10 np0005597378 podman[343932]: 2026-01-27 14:13:10.729484923 +0000 UTC m=+0.635228688 container died 3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 09:13:10 np0005597378 systemd[1]: var-lib-containers-storage-overlay-afbdb0a07f25235d42d31d1a798f958f11aac518eca1d13f050b09f781b6e886-merged.mount: Deactivated successfully.
Jan 27 09:13:10 np0005597378 podman[343932]: 2026-01-27 14:13:10.775056559 +0000 UTC m=+0.680800304 container remove 3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 09:13:10 np0005597378 nova_compute[238941]: 2026-01-27 14:13:10.787 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:10 np0005597378 systemd[1]: libpod-conmon-3d24a714521b24f8abcabe692a45bc33d8e86d36f3f9011a84dab0c78bf2123d.scope: Deactivated successfully.
Jan 27 09:13:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 3.3 MiB/s wr, 75 op/s
Jan 27 09:13:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:11 np0005597378 podman[344043]: 2026-01-27 14:13:11.205050249 +0000 UTC m=+0.041410215 container create c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 09:13:11 np0005597378 systemd[1]: Started libpod-conmon-c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d.scope.
Jan 27 09:13:11 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:13:11 np0005597378 podman[344043]: 2026-01-27 14:13:11.188057552 +0000 UTC m=+0.024417538 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:13:11 np0005597378 podman[344043]: 2026-01-27 14:13:11.288292898 +0000 UTC m=+0.124652874 container init c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:13:11 np0005597378 podman[344043]: 2026-01-27 14:13:11.294957696 +0000 UTC m=+0.131317662 container start c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_noyce, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:13:11 np0005597378 podman[344043]: 2026-01-27 14:13:11.298618405 +0000 UTC m=+0.134978391 container attach c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:13:11 np0005597378 naughty_noyce[344059]: 167 167
Jan 27 09:13:11 np0005597378 systemd[1]: libpod-c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d.scope: Deactivated successfully.
Jan 27 09:13:11 np0005597378 podman[344043]: 2026-01-27 14:13:11.300646829 +0000 UTC m=+0.137006825 container died c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:13:11 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b3d8c831a1346312c9e8ddcf050c14ed6f1ed7fd9b94f874d7721b12e4e34636-merged.mount: Deactivated successfully.
Jan 27 09:13:11 np0005597378 podman[344043]: 2026-01-27 14:13:11.34714909 +0000 UTC m=+0.183509056 container remove c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_noyce, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:13:11 np0005597378 systemd[1]: libpod-conmon-c847b183c7041c097dd9bab62ceba8dbbd57fec7e952623d87ed120a0758fa7d.scope: Deactivated successfully.
Jan 27 09:13:11 np0005597378 podman[344084]: 2026-01-27 14:13:11.519713129 +0000 UTC m=+0.035798074 container create bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:13:11 np0005597378 systemd[1]: Started libpod-conmon-bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b.scope.
Jan 27 09:13:11 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:13:11 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f273629200db4fdea30d4e07898055d5e5681b6e5e6c72b21377d20cdb2915/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:11 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f273629200db4fdea30d4e07898055d5e5681b6e5e6c72b21377d20cdb2915/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:11 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f273629200db4fdea30d4e07898055d5e5681b6e5e6c72b21377d20cdb2915/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:11 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f273629200db4fdea30d4e07898055d5e5681b6e5e6c72b21377d20cdb2915/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:11 np0005597378 podman[344084]: 2026-01-27 14:13:11.503915394 +0000 UTC m=+0.020000369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:13:11 np0005597378 podman[344084]: 2026-01-27 14:13:11.610586602 +0000 UTC m=+0.126671577 container init bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_meitner, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:13:11 np0005597378 podman[344084]: 2026-01-27 14:13:11.617655642 +0000 UTC m=+0.133740597 container start bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_meitner, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:13:11 np0005597378 podman[344084]: 2026-01-27 14:13:11.621025892 +0000 UTC m=+0.137110847 container attach bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_meitner, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 09:13:11 np0005597378 nova_compute[238941]: 2026-01-27 14:13:11.761 238945 INFO nova.compute.manager [None req-03603b45-fb81-436d-a6c5-f66236f4837b a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Get console output
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]: {
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:    "0": [
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:        {
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "devices": [
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "/dev/loop3"
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            ],
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_name": "ceph_lv0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_size": "21470642176",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "name": "ceph_lv0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "tags": {
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.cluster_name": "ceph",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.crush_device_class": "",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.encrypted": "0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.objectstore": "bluestore",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.osd_id": "0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.type": "block",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.vdo": "0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.with_tpm": "0"
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            },
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "type": "block",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "vg_name": "ceph_vg0"
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:        }
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:    ],
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:    "1": [
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:        {
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "devices": [
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "/dev/loop4"
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            ],
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_name": "ceph_lv1",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_size": "21470642176",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "name": "ceph_lv1",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "tags": {
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.cluster_name": "ceph",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.crush_device_class": "",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.encrypted": "0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.objectstore": "bluestore",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.osd_id": "1",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.type": "block",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.vdo": "0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.with_tpm": "0"
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            },
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "type": "block",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "vg_name": "ceph_vg1"
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:        }
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:    ],
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:    "2": [
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:        {
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "devices": [
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "/dev/loop5"
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            ],
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_name": "ceph_lv2",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_size": "21470642176",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "name": "ceph_lv2",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "tags": {
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.cluster_name": "ceph",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.crush_device_class": "",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.encrypted": "0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.objectstore": "bluestore",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.osd_id": "2",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.type": "block",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.vdo": "0",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:                "ceph.with_tpm": "0"
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            },
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "type": "block",
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:            "vg_name": "ceph_vg2"
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:        }
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]:    ]
Jan 27 09:13:11 np0005597378 stoic_meitner[344100]: }
Jan 27 09:13:11 np0005597378 systemd[1]: libpod-bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b.scope: Deactivated successfully.
Jan 27 09:13:11 np0005597378 podman[344084]: 2026-01-27 14:13:11.942905396 +0000 UTC m=+0.458990361 container died bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_meitner, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:13:12 np0005597378 systemd[1]: var-lib-containers-storage-overlay-75f273629200db4fdea30d4e07898055d5e5681b6e5e6c72b21377d20cdb2915-merged.mount: Deactivated successfully.
Jan 27 09:13:12 np0005597378 podman[344084]: 2026-01-27 14:13:12.043955902 +0000 UTC m=+0.560040867 container remove bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:13:12 np0005597378 systemd[1]: libpod-conmon-bf872bfa37a92fc317ef60afefb987b4748bad9fba7fade4131ebea4571f401b.scope: Deactivated successfully.
Jan 27 09:13:12 np0005597378 nova_compute[238941]: 2026-01-27 14:13:12.128 238945 INFO nova.compute.manager [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Resuming
Jan 27 09:13:12 np0005597378 nova_compute[238941]: 2026-01-27 14:13:12.130 238945 DEBUG nova.objects.instance [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'flavor' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:13:12 np0005597378 nova_compute[238941]: 2026-01-27 14:13:12.164 238945 DEBUG oslo_concurrency.lockutils [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:13:12 np0005597378 nova_compute[238941]: 2026-01-27 14:13:12.165 238945 DEBUG oslo_concurrency.lockutils [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquired lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:13:12 np0005597378 nova_compute[238941]: 2026-01-27 14:13:12.165 238945 DEBUG nova.network.neutron [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 09:13:12 np0005597378 podman[344185]: 2026-01-27 14:13:12.499917601 +0000 UTC m=+0.039634757 container create 27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_fermi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 09:13:12 np0005597378 systemd[1]: Started libpod-conmon-27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89.scope.
Jan 27 09:13:12 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:13:12 np0005597378 nova_compute[238941]: 2026-01-27 14:13:12.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:13:12 np0005597378 podman[344185]: 2026-01-27 14:13:12.555765772 +0000 UTC m=+0.095482918 container init 27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:13:12 np0005597378 podman[344185]: 2026-01-27 14:13:12.564180549 +0000 UTC m=+0.103897695 container start 27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_fermi, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:13:12 np0005597378 podman[344185]: 2026-01-27 14:13:12.567502347 +0000 UTC m=+0.107219493 container attach 27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_fermi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:13:12 np0005597378 competent_fermi[344200]: 167 167
Jan 27 09:13:12 np0005597378 systemd[1]: libpod-27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89.scope: Deactivated successfully.
Jan 27 09:13:12 np0005597378 podman[344185]: 2026-01-27 14:13:12.569689137 +0000 UTC m=+0.109406373 container died 27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_fermi, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:13:12 np0005597378 podman[344185]: 2026-01-27 14:13:12.485132763 +0000 UTC m=+0.024849929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:13:12 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fb60dae73249bbeba274f61669ad2a54003b0ae34978ec6a90d0bb5141da9a7a-merged.mount: Deactivated successfully.
Jan 27 09:13:12 np0005597378 podman[344185]: 2026-01-27 14:13:12.612420135 +0000 UTC m=+0.152137281 container remove 27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_fermi, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:13:12 np0005597378 systemd[1]: libpod-conmon-27661db0a4795e8c539fb7e7b1b84a1e122b729269520c80d40879e4ed5e8a89.scope: Deactivated successfully.
Jan 27 09:13:12 np0005597378 podman[344225]: 2026-01-27 14:13:12.832527482 +0000 UTC m=+0.047601731 container create 286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noyce, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:13:12 np0005597378 systemd[1]: Started libpod-conmon-286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918.scope.
Jan 27 09:13:12 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:13:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e7ae3deb725f61629a4d8a5f3ce4e3bc7d9a4e55604d8ddc2cd96d37edd419/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e7ae3deb725f61629a4d8a5f3ce4e3bc7d9a4e55604d8ddc2cd96d37edd419/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e7ae3deb725f61629a4d8a5f3ce4e3bc7d9a4e55604d8ddc2cd96d37edd419/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e7ae3deb725f61629a4d8a5f3ce4e3bc7d9a4e55604d8ddc2cd96d37edd419/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:12 np0005597378 podman[344225]: 2026-01-27 14:13:12.810940972 +0000 UTC m=+0.026015231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:13:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 1.2 MiB/s wr, 38 op/s
Jan 27 09:13:12 np0005597378 podman[344225]: 2026-01-27 14:13:12.915517724 +0000 UTC m=+0.130591993 container init 286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:13:12 np0005597378 podman[344225]: 2026-01-27 14:13:12.924872345 +0000 UTC m=+0.139946584 container start 286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noyce, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:13:12 np0005597378 podman[344225]: 2026-01-27 14:13:12.92951546 +0000 UTC m=+0.144589749 container attach 286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.376 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.415 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.415 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.415 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:13:13 np0005597378 lvm[344317]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:13:13 np0005597378 lvm[344320]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:13:13 np0005597378 lvm[344320]: VG ceph_vg1 finished
Jan 27 09:13:13 np0005597378 lvm[344317]: VG ceph_vg0 finished
Jan 27 09:13:13 np0005597378 lvm[344322]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:13:13 np0005597378 lvm[344322]: VG ceph_vg2 finished
Jan 27 09:13:13 np0005597378 angry_noyce[344241]: {}
Jan 27 09:13:13 np0005597378 systemd[1]: libpod-286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918.scope: Deactivated successfully.
Jan 27 09:13:13 np0005597378 systemd[1]: libpod-286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918.scope: Consumed 1.341s CPU time.
Jan 27 09:13:13 np0005597378 podman[344225]: 2026-01-27 14:13:13.819451965 +0000 UTC m=+1.034526224 container died 286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noyce, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.824 238945 DEBUG nova.network.neutron [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updating instance_info_cache with network_info: [{"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.846 238945 DEBUG oslo_concurrency.lockutils [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Releasing lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.853 238945 DEBUG nova.virt.libvirt.vif [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1067640702',display_name='tempest-TestNetworkAdvancedServerOps-server-1067640702',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1067640702',id=116,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHygz0yaDdzbysAeYS7JztNxK6hVipFq4Kx9NOon417gnw4IniJ7HoWUOi5nhMcIK/LlFV+VtvUatsc1HeZ7yTwzwpB9Pr+/56SphW+/bTc95CMfRypGzHoU07GmyMqBtA==',key_name='tempest-TestNetworkAdvancedServerOps-516807630',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:12:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-l2p1q6l3',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:13:07Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=e47cd4e5-669d-4001-af0c-57b561828b60,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.854 238945 DEBUG nova.network.os_vif_util [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.855 238945 DEBUG nova.network.os_vif_util [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.856 238945 DEBUG os_vif [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.857 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.857 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.858 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.861 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.861 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45efb061-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.862 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap45efb061-6a, col_values=(('external_ids', {'iface-id': '45efb061-6afd-4021-a345-4aa248d4409b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:17:d7', 'vm-uuid': 'e47cd4e5-669d-4001-af0c-57b561828b60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.863 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.863 238945 INFO os_vif [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a')#033[00m
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.921 238945 DEBUG nova.objects.instance [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'numa_topology' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:13:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-68e7ae3deb725f61629a4d8a5f3ce4e3bc7d9a4e55604d8ddc2cd96d37edd419-merged.mount: Deactivated successfully.
Jan 27 09:13:13 np0005597378 kernel: tap45efb061-6a: entered promiscuous mode
Jan 27 09:13:13 np0005597378 systemd-udevd[344316]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:13:13 np0005597378 NetworkManager[48904]: <info>  [1769523193.9923] manager: (tap45efb061-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/488)
Jan 27 09:13:13 np0005597378 nova_compute[238941]: 2026-01-27 14:13:13.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:13 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:13Z|01190|binding|INFO|Claiming lport 45efb061-6afd-4021-a345-4aa248d4409b for this chassis.
Jan 27 09:13:13 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:13Z|01191|binding|INFO|45efb061-6afd-4021-a345-4aa248d4409b: Claiming fa:16:3e:c9:17:d7 10.100.0.13
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.001 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:17:d7 10.100.0.13'], port_security=['fa:16:3e:c9:17:d7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e47cd4e5-669d-4001-af0c-57b561828b60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92a627e1-dd59-429c-82b4-8340ea69cf88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b600bd45-38b0-42c1-b979-e43c4f4b41d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b87a4992-80b4-4b64-a6a2-e3189f2c4ab6, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=45efb061-6afd-4021-a345-4aa248d4409b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.002 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 45efb061-6afd-4021-a345-4aa248d4409b in datapath 92a627e1-dd59-429c-82b4-8340ea69cf88 bound to our chassis#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.004 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92a627e1-dd59-429c-82b4-8340ea69cf88#033[00m
Jan 27 09:13:14 np0005597378 NetworkManager[48904]: <info>  [1769523194.0134] device (tap45efb061-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:13:14 np0005597378 NetworkManager[48904]: <info>  [1769523194.0142] device (tap45efb061-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:13:14 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:14Z|01192|binding|INFO|Setting lport 45efb061-6afd-4021-a345-4aa248d4409b ovn-installed in OVS
Jan 27 09:13:14 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:14Z|01193|binding|INFO|Setting lport 45efb061-6afd-4021-a345-4aa248d4409b up in Southbound
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.021 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2dba0e78-4bd0-4e0f-aedb-211941cf3912]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.023 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap92a627e1-d1 in ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.024 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.026 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap92a627e1-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.026 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1907976b-f210-4ffe-ad7d-fb256f3ad523]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.028 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ed9756fd-1d3f-4604-9c77-248e055c8861]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 podman[344225]: 2026-01-27 14:13:14.027237122 +0000 UTC m=+1.242311371 container remove 286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noyce, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:13:14 np0005597378 systemd-machined[207425]: New machine qemu-148-instance-00000074.
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.049 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d7724b34-9f2e-4ae7-9d35-c70e6268320b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 systemd[1]: Started Virtual Machine qemu-148-instance-00000074.
Jan 27 09:13:14 np0005597378 systemd[1]: libpod-conmon-286e24f532b860dbc1e689228377caacb4a555f070cb8175511f40c61624d918.scope: Deactivated successfully.
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.068 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6207ce6b-7a04-458b-9d29-cec40d90467d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:13:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:13:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:13:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.102 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5315546a-36d0-45b5-9921-64fa0bc6699b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.107 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1048e881-0035-4f08-9e72-bfb50548db30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 NetworkManager[48904]: <info>  [1769523194.1088] manager: (tap92a627e1-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/489)
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.142 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d73e854b-057f-4c62-877e-7a8ac33afb6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.145 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5cbe9ec6-3700-4ec3-84ab-e9f41f3fcfe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 NetworkManager[48904]: <info>  [1769523194.1676] device (tap92a627e1-d0): carrier: link connected
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.174 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c441d2-86aa-45a4-8380-02bbb4322423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.194 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e69f68c2-b5cb-4472-a595-a52f344b7fa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92a627e1-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e8:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 349], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583379, 'reachable_time': 36188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344406, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.209 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[84871720-66a3-4c34-8b8e-35acc932d5d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:e887'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 583379, 'tstamp': 583379}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344407, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.226 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[236b7885-f0c5-4c68-ab04-f3c7bd1cbacb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92a627e1-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:e8:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 349], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583379, 'reachable_time': 36188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344408, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.254 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30f98096-2da0-4061-915f-2d130ffd5ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.307 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51eef303-338f-41f5-b831-c62057922d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.309 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92a627e1-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.309 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.310 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92a627e1-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:14 np0005597378 kernel: tap92a627e1-d0: entered promiscuous mode
Jan 27 09:13:14 np0005597378 NetworkManager[48904]: <info>  [1769523194.3123] manager: (tap92a627e1-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/490)
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.315 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92a627e1-d0, col_values=(('external_ids', {'iface-id': '1dcbe1a7-ed46-453b-aa3a-c8481b1903de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.316 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:14 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:14Z|01194|binding|INFO|Releasing lport 1dcbe1a7-ed46-453b-aa3a-c8481b1903de from this chassis (sb_readonly=0)
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.318 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/92a627e1-dd59-429c-82b4-8340ea69cf88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/92a627e1-dd59-429c-82b4-8340ea69cf88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d02891-f201-4357-927c-64e2002c5d1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.320 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-92a627e1-dd59-429c-82b4-8340ea69cf88
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/92a627e1-dd59-429c-82b4-8340ea69cf88.pid.haproxy
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 92a627e1-dd59-429c-82b4-8340ea69cf88
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:13:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:14.321 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'env', 'PROCESS_TAG=haproxy-92a627e1-dd59-429c-82b4-8340ea69cf88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/92a627e1-dd59-429c-82b4-8340ea69cf88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.331 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.560 238945 DEBUG nova.compute.manager [req-e72f76c1-1794-4ba6-ba2a-e74b2fde159a req-7f4b4c43-2d45-4ce8-bd95-6a88fd7894b4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.561 238945 DEBUG oslo_concurrency.lockutils [req-e72f76c1-1794-4ba6-ba2a-e74b2fde159a req-7f4b4c43-2d45-4ce8-bd95-6a88fd7894b4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.567 238945 DEBUG oslo_concurrency.lockutils [req-e72f76c1-1794-4ba6-ba2a-e74b2fde159a req-7f4b4c43-2d45-4ce8-bd95-6a88fd7894b4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.567 238945 DEBUG oslo_concurrency.lockutils [req-e72f76c1-1794-4ba6-ba2a-e74b2fde159a req-7f4b4c43-2d45-4ce8-bd95-6a88fd7894b4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.567 238945 DEBUG nova.compute.manager [req-e72f76c1-1794-4ba6-ba2a-e74b2fde159a req-7f4b4c43-2d45-4ce8-bd95-6a88fd7894b4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.567 238945 WARNING nova.compute.manager [req-e72f76c1-1794-4ba6-ba2a-e74b2fde159a req-7f4b4c43-2d45-4ce8-bd95-6a88fd7894b4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received unexpected event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with vm_state suspended and task_state resuming.#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.668 238945 DEBUG nova.virt.libvirt.host [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Removed pending event for e47cd4e5-669d-4001-af0c-57b561828b60 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.669 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523194.6679697, e47cd4e5-669d-4001-af0c-57b561828b60 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.669 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Started (Lifecycle Event)#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.691 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.709 238945 DEBUG nova.compute.manager [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.710 238945 DEBUG nova.objects.instance [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'pci_devices' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.712 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.729 238945 INFO nova.virt.libvirt.driver [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance running successfully.#033[00m
Jan 27 09:13:14 np0005597378 virtqemud[238711]: argument unsupported: QEMU guest agent is not configured
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.732 238945 DEBUG nova.virt.libvirt.guest [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.732 238945 DEBUG nova.compute.manager [None req-26594940-4150-4662-b34b-3bb4e052b991 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:13:14 np0005597378 podman[344479]: 2026-01-27 14:13:14.733675703 +0000 UTC m=+0.056064228 container create 532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.738 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.738 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523194.6720684, e47cd4e5-669d-4001-af0c-57b561828b60 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.739 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.764 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.766 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:13:14 np0005597378 systemd[1]: Started libpod-conmon-532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a.scope.
Jan 27 09:13:14 np0005597378 nova_compute[238941]: 2026-01-27 14:13:14.792 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 27 09:13:14 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:13:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30516727ba9ce956901412408e429cc1bf3974118da4296edf5b055e2456582/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:13:14 np0005597378 podman[344479]: 2026-01-27 14:13:14.706901993 +0000 UTC m=+0.029290548 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:13:14 np0005597378 podman[344479]: 2026-01-27 14:13:14.810696273 +0000 UTC m=+0.133084828 container init 532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:13:14 np0005597378 podman[344479]: 2026-01-27 14:13:14.817139047 +0000 UTC m=+0.139527572 container start 532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:13:14 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [NOTICE]   (344498) : New worker (344500) forked
Jan 27 09:13:14 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [NOTICE]   (344498) : Loading success.
Jan 27 09:13:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 1.2 MiB/s wr, 38 op/s
Jan 27 09:13:15 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:13:15 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:13:15 np0005597378 nova_compute[238941]: 2026-01-27 14:13:15.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:16 np0005597378 nova_compute[238941]: 2026-01-27 14:13:16.731 238945 DEBUG nova.compute.manager [req-4f3f05e0-8dba-4ddf-ba0a-fb909cf7d29d req-5970c0d1-8585-4872-a69f-3f1cf899e01e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:16 np0005597378 nova_compute[238941]: 2026-01-27 14:13:16.731 238945 DEBUG oslo_concurrency.lockutils [req-4f3f05e0-8dba-4ddf-ba0a-fb909cf7d29d req-5970c0d1-8585-4872-a69f-3f1cf899e01e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:16 np0005597378 nova_compute[238941]: 2026-01-27 14:13:16.731 238945 DEBUG oslo_concurrency.lockutils [req-4f3f05e0-8dba-4ddf-ba0a-fb909cf7d29d req-5970c0d1-8585-4872-a69f-3f1cf899e01e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:13:16 np0005597378 nova_compute[238941]: 2026-01-27 14:13:16.732 238945 DEBUG oslo_concurrency.lockutils [req-4f3f05e0-8dba-4ddf-ba0a-fb909cf7d29d req-5970c0d1-8585-4872-a69f-3f1cf899e01e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:13:16 np0005597378 nova_compute[238941]: 2026-01-27 14:13:16.732 238945 DEBUG nova.compute.manager [req-4f3f05e0-8dba-4ddf-ba0a-fb909cf7d29d req-5970c0d1-8585-4872-a69f-3f1cf899e01e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:13:16 np0005597378 nova_compute[238941]: 2026-01-27 14:13:16.732 238945 WARNING nova.compute.manager [req-4f3f05e0-8dba-4ddf-ba0a-fb909cf7d29d req-5970c0d1-8585-4872-a69f-3f1cf899e01e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received unexpected event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with vm_state active and task_state None.
Jan 27 09:13:16 np0005597378 nova_compute[238941]: 2026-01-27 14:13:16.829 238945 INFO nova.compute.manager [None req-e5f17bba-cf3a-4dca-8860-bf41acc9a54d a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Get console output
Jan 27 09:13:16 np0005597378 nova_compute[238941]: 2026-01-27 14:13:16.834 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 09:13:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 279 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 53 KiB/s wr, 16 op/s
Jan 27 09:13:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:13:17
Jan 27 09:13:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:13:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:13:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'images']
Jan 27 09:13:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:13:17 np0005597378 nova_compute[238941]: 2026-01-27 14:13:17.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:13:17 np0005597378 nova_compute[238941]: 2026-01-27 14:13:17.775 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:13:17 np0005597378 nova_compute[238941]: 2026-01-27 14:13:17.776 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:13:17 np0005597378 nova_compute[238941]: 2026-01-27 14:13:17.776 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:13:17 np0005597378 nova_compute[238941]: 2026-01-27 14:13:17.776 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:13:17 np0005597378 nova_compute[238941]: 2026-01-27 14:13:17.776 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:13:17 np0005597378 nova_compute[238941]: 2026-01-27 14:13:17.777 238945 INFO nova.compute.manager [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Terminating instance
Jan 27 09:13:17 np0005597378 nova_compute[238941]: 2026-01-27 14:13:17.778 238945 DEBUG nova.compute.manager [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 09:13:17 np0005597378 kernel: tap45efb061-6a (unregistering): left promiscuous mode
Jan 27 09:13:17 np0005597378 NetworkManager[48904]: <info>  [1769523197.8297] device (tap45efb061-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:13:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:13:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:13:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:13:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:13:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:13:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:13:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:17Z|01195|binding|INFO|Releasing lport 45efb061-6afd-4021-a345-4aa248d4409b from this chassis (sb_readonly=0)
Jan 27 09:13:17 np0005597378 nova_compute[238941]: 2026-01-27 14:13:17.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:13:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:17Z|01196|binding|INFO|Setting lport 45efb061-6afd-4021-a345-4aa248d4409b down in Southbound
Jan 27 09:13:17 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:17Z|01197|binding|INFO|Removing iface tap45efb061-6a ovn-installed in OVS
Jan 27 09:13:17 np0005597378 nova_compute[238941]: 2026-01-27 14:13:17.848 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:13:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:17.853 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:17:d7 10.100.0.13'], port_security=['fa:16:3e:c9:17:d7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e47cd4e5-669d-4001-af0c-57b561828b60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92a627e1-dd59-429c-82b4-8340ea69cf88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1cac40132a44f0a978ac33f26f0875d', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b600bd45-38b0-42c1-b979-e43c4f4b41d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b87a4992-80b4-4b64-a6a2-e3189f2c4ab6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=45efb061-6afd-4021-a345-4aa248d4409b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:13:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:17.855 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 45efb061-6afd-4021-a345-4aa248d4409b in datapath 92a627e1-dd59-429c-82b4-8340ea69cf88 unbound from our chassis
Jan 27 09:13:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:17.856 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92a627e1-dd59-429c-82b4-8340ea69cf88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 09:13:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:17.857 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8990cdc-e181-4e7e-9df4-57de9c09eab0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:13:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:17.858 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 namespace which is not needed anymore
Jan 27 09:13:17 np0005597378 nova_compute[238941]: 2026-01-27 14:13:17.867 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:13:17 np0005597378 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000074.scope: Deactivated successfully.
Jan 27 09:13:17 np0005597378 systemd-machined[207425]: Machine qemu-148-instance-00000074 terminated.
Jan 27 09:13:17 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [NOTICE]   (344498) : haproxy version is 2.8.14-c23fe91
Jan 27 09:13:17 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [NOTICE]   (344498) : path to executable is /usr/sbin/haproxy
Jan 27 09:13:17 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [WARNING]  (344498) : Exiting Master process...
Jan 27 09:13:17 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [WARNING]  (344498) : Exiting Master process...
Jan 27 09:13:17 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [ALERT]    (344498) : Current worker (344500) exited with code 143 (Terminated)
Jan 27 09:13:17 np0005597378 neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88[344494]: [WARNING]  (344498) : All workers exited. Exiting... (0)
Jan 27 09:13:17 np0005597378 systemd[1]: libpod-532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a.scope: Deactivated successfully.
Jan 27 09:13:17 np0005597378 podman[344532]: 2026-01-27 14:13:17.985391254 +0000 UTC m=+0.042985297 container died 532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:13:18 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a-userdata-shm.mount: Deactivated successfully.
Jan 27 09:13:18 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a30516727ba9ce956901412408e429cc1bf3974118da4296edf5b055e2456582-merged.mount: Deactivated successfully.
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.018 238945 INFO nova.virt.libvirt.driver [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Instance destroyed successfully.
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.019 238945 DEBUG nova.objects.instance [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lazy-loading 'resources' on Instance uuid e47cd4e5-669d-4001-af0c-57b561828b60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:13:18 np0005597378 podman[344532]: 2026-01-27 14:13:18.027528066 +0000 UTC m=+0.085122099 container cleanup 532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:13:18 np0005597378 systemd[1]: libpod-conmon-532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a.scope: Deactivated successfully.
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.036 238945 DEBUG nova.virt.libvirt.vif [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1067640702',display_name='tempest-TestNetworkAdvancedServerOps-server-1067640702',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1067640702',id=116,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHygz0yaDdzbysAeYS7JztNxK6hVipFq4Kx9NOon417gnw4IniJ7HoWUOi5nhMcIK/LlFV+VtvUatsc1HeZ7yTwzwpB9Pr+/56SphW+/bTc95CMfRypGzHoU07GmyMqBtA==',key_name='tempest-TestNetworkAdvancedServerOps-516807630',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:12:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f1cac40132a44f0a978ac33f26f0875d',ramdisk_id='',reservation_id='r-l2p1q6l3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-507048735',owner_user_name='tempest-TestNetworkAdvancedServerOps-507048735-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:13:14Z,user_data=None,user_id='a87606137cd4440ab2ffebe68b325a85',uuid=e47cd4e5-669d-4001-af0c-57b561828b60,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.036 238945 DEBUG nova.network.os_vif_util [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converting VIF {"id": "45efb061-6afd-4021-a345-4aa248d4409b", "address": "fa:16:3e:c9:17:d7", "network": {"id": "92a627e1-dd59-429c-82b4-8340ea69cf88", "bridge": "br-int", "label": "tempest-network-smoke--1550301192", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1cac40132a44f0a978ac33f26f0875d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45efb061-6a", "ovs_interfaceid": "45efb061-6afd-4021-a345-4aa248d4409b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.037 238945 DEBUG nova.network.os_vif_util [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.037 238945 DEBUG os_vif [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.040 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45efb061-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.044 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.046 238945 INFO os_vif [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:17:d7,bridge_name='br-int',has_traffic_filtering=True,id=45efb061-6afd-4021-a345-4aa248d4409b,network=Network(92a627e1-dd59-429c-82b4-8340ea69cf88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45efb061-6a')
Jan 27 09:13:18 np0005597378 podman[344574]: 2026-01-27 14:13:18.092136163 +0000 UTC m=+0.041545068 container remove 532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:13:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.100 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d22cdd2a-854e-4e5e-a385-cbff53d4af0a]: (4, ('Tue Jan 27 02:13:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 (532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a)\n532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a\nTue Jan 27 02:13:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 (532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a)\n532fe4d6d0ac0b3ac2aa52a7648344e2bee8179e510ee87760b2f51759c19e8a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:13:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.101 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7e492e0e-ac4d-4c2c-9a61-82d4dc1e00d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:13:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.102 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92a627e1-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:13:18 np0005597378 kernel: tap92a627e1-d0: left promiscuous mode
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:13:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.120 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7a58a3d1-222d-40cd-a1f6-99b7c6a86039]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:13:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.141 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[19d0f578-6bb9-444a-9706-16b076480a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:13:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.143 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b98793e-3552-423c-a7dd-f61e6a4633d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:13:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:13:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:13:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:13:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:13:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:13:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:13:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:13:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:13:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:13:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:13:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.162 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[689a83ca-4307-4ad0-81f1-23bc077449c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583372, 'reachable_time': 32394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344607, 'error': None, 'target': 'ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:13:18 np0005597378 systemd[1]: run-netns-ovnmeta\x2d92a627e1\x2ddd59\x2d429c\x2d82b4\x2d8340ea69cf88.mount: Deactivated successfully.
Jan 27 09:13:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.166 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-92a627e1-dd59-429c-82b4-8340ea69cf88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 09:13:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:18.166 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf8735b-78be-4570-b820-01727f27cb52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.308 238945 INFO nova.virt.libvirt.driver [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Deleting instance files /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60_del
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.309 238945 INFO nova.virt.libvirt.driver [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Deletion of /var/lib/nova/instances/e47cd4e5-669d-4001-af0c-57b561828b60_del complete
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.374 238945 INFO nova.compute.manager [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Took 0.60 seconds to destroy the instance on the hypervisor.
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.375 238945 DEBUG oslo.service.loopingcall [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.375 238945 DEBUG nova.compute.manager [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.376 238945 DEBUG nova.network.neutron [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.851 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-changed-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.851 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Refreshing instance network info cache due to event network-changed-45efb061-6afd-4021-a345-4aa248d4409b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.852 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.852 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:13:18 np0005597378 nova_compute[238941]: 2026-01-27 14:13:18.852 238945 DEBUG nova.network.neutron [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Refreshing network info cache for port 45efb061-6afd-4021-a345-4aa248d4409b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:13:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 235 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 14 KiB/s wr, 19 op/s
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.094 238945 INFO nova.network.neutron [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Port 45efb061-6afd-4021-a345-4aa248d4409b from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.095 238945 DEBUG nova.network.neutron [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.105 238945 DEBUG nova.network.neutron [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.246 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e47cd4e5-669d-4001-af0c-57b561828b60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.246 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-unplugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.247 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.247 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.247 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.247 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-unplugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.247 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-unplugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.248 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.248 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.248 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.248 238945 DEBUG oslo_concurrency.lockutils [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.249 238945 DEBUG nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] No waiting events found dispatching network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.249 238945 WARNING nova.compute.manager [req-9524071a-851f-41cb-b2e9-8fe4c94db400 req-ffeed1fd-4fcc-4826-9317-8cea4def0bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received unexpected event network-vif-plugged-45efb061-6afd-4021-a345-4aa248d4409b for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.250 238945 INFO nova.compute.manager [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Took 0.87 seconds to deallocate network for instance.#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.302 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.302 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.395 238945 DEBUG oslo_concurrency.processutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.903 238945 DEBUG nova.compute.manager [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-changed-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.904 238945 DEBUG nova.compute.manager [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing instance network info cache due to event network-changed-50c43789-df58-4796-81f2-c398dee6dabe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.904 238945 DEBUG oslo_concurrency.lockutils [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.905 238945 DEBUG oslo_concurrency.lockutils [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.905 238945 DEBUG nova.network.neutron [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Refreshing network info cache for port 50c43789-df58-4796-81f2-c398dee6dabe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:13:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:13:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3439396382' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.938 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.938 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.939 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.939 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.939 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.940 238945 INFO nova.compute.manager [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Terminating instance#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.941 238945 DEBUG nova.compute.manager [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.963 238945 DEBUG oslo_concurrency.processutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.969 238945 DEBUG nova.compute.provider_tree [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.984 238945 DEBUG nova.scheduler.client.report [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:13:19 np0005597378 kernel: tap50c43789-df (unregistering): left promiscuous mode
Jan 27 09:13:19 np0005597378 NetworkManager[48904]: <info>  [1769523199.9892] device (tap50c43789-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:13:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:19Z|01198|binding|INFO|Releasing lport 50c43789-df58-4796-81f2-c398dee6dabe from this chassis (sb_readonly=0)
Jan 27 09:13:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:19Z|01199|binding|INFO|Setting lport 50c43789-df58-4796-81f2-c398dee6dabe down in Southbound
Jan 27 09:13:19 np0005597378 nova_compute[238941]: 2026-01-27 14:13:19.997 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:19Z|01200|binding|INFO|Removing iface tap50c43789-df ovn-installed in OVS
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.000 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.004 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:3a:57 10.100.0.6'], port_security=['fa:16:3e:19:3a:57 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8834b9bd-0324-4f5b-9b83-be852e0b96d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9964511f-1456-4111-a888-96329ab42c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4fd51-ea76-4523-91d0-373d6d53e00e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=50c43789-df58-4796-81f2-c398dee6dabe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.005 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 50c43789-df58-4796-81f2-c398dee6dabe in datapath 9964511f-1456-4111-a888-96329ab42c59 unbound from our chassis#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.007 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9964511f-1456-4111-a888-96329ab42c59#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.008 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:20 np0005597378 kernel: tap8c19198d-9e (unregistering): left promiscuous mode
Jan 27 09:13:20 np0005597378 NetworkManager[48904]: <info>  [1769523200.0175] device (tap8c19198d-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.019 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:20Z|01201|binding|INFO|Releasing lport 8c19198d-9ee1-4b83-9bd2-71b418462578 from this chassis (sb_readonly=0)
Jan 27 09:13:20 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:20Z|01202|binding|INFO|Setting lport 8c19198d-9ee1-4b83-9bd2-71b418462578 down in Southbound
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.026 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.026 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[55ee000c-a8d1-4379-9f65-a92c84bd8862]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:20 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:20Z|01203|binding|INFO|Removing iface tap8c19198d-9e ovn-installed in OVS
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.029 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.033 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:ac:6b 2001:db8::f816:3eff:feea:ac6b'], port_security=['fa:16:3e:ea:ac:6b 2001:db8::f816:3eff:feea:ac6b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:ac6b/64', 'neutron:device_id': '8834b9bd-0324-4f5b-9b83-be852e0b96d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=461a5a8c-725e-4fde-b0f2-146218d7a416, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8c19198d-9ee1-4b83-9bd2-71b418462578) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.034 238945 INFO nova.scheduler.client.report [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Deleted allocations for instance e47cd4e5-669d-4001-af0c-57b561828b60#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.041 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.061 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a7438b91-51df-4ab4-ac21-6971a7d3cb97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.065 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[10e8eece-ba10-480d-9e8f-27e0498759c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:20 np0005597378 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000073.scope: Deactivated successfully.
Jan 27 09:13:20 np0005597378 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000073.scope: Consumed 16.336s CPU time.
Jan 27 09:13:20 np0005597378 systemd-machined[207425]: Machine qemu-146-instance-00000073 terminated.
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.096 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8da064e1-8a0a-4260-9f41-930cafb0d534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.103 238945 DEBUG oslo_concurrency.lockutils [None req-b4f9d8dd-5ce3-464c-bd39-1a2f641c0e21 a87606137cd4440ab2ffebe68b325a85 f1cac40132a44f0a978ac33f26f0875d - - default default] Lock "e47cd4e5-669d-4001-af0c-57b561828b60" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.115 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1fcb785c-a13d-47c5-b85b-e2028465a2a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9964511f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:ae:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576488, 'reachable_time': 35427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344644, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.134 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[22463a4b-30c8-4401-a016-7490762f0efa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9964511f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576501, 'tstamp': 576501}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344645, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9964511f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576504, 'tstamp': 576504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344645, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.136 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9964511f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.138 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.146 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9964511f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.146 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.147 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9964511f-10, col_values=(('external_ids', {'iface-id': '139ea0ba-f559-4c32-9b23-bc114f6fe7b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.147 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.149 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8c19198d-9ee1-4b83-9bd2-71b418462578 in datapath b8e1b054-5200-4e22-9702-c3f6d1f1a12e unbound from our chassis#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.150 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8e1b054-5200-4e22-9702-c3f6d1f1a12e#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.168 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1d7fe5-7e04-4ff2-a1d5-45efe97c2316]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:20 np0005597378 NetworkManager[48904]: <info>  [1769523200.1774] manager: (tap8c19198d-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/491)
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.192 238945 INFO nova.virt.libvirt.driver [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Instance destroyed successfully.#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.192 238945 DEBUG nova.objects.instance [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 8834b9bd-0324-4f5b-9b83-be852e0b96d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.206 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8c3b06a9-ad97-4e6d-a4b9-9ee0560c6d47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.209 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dd08187c-4250-4e52-82ba-3193fc0d310b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.212 238945 DEBUG nova.virt.libvirt.vif [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-445190996',display_name='tempest-TestGettingAddress-server-445190996',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-445190996',id=115,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:12:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-77eccgfg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:12:42Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=8834b9bd-0324-4f5b-9b83-be852e0b96d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.212 238945 DEBUG nova.network.os_vif_util [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.213 238945 DEBUG nova.network.os_vif_util [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.213 238945 DEBUG os_vif [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.215 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50c43789-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.219 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.222 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.224 238945 INFO os_vif [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:3a:57,bridge_name='br-int',has_traffic_filtering=True,id=50c43789-df58-4796-81f2-c398dee6dabe,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c43789-df')#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.225 238945 DEBUG nova.virt.libvirt.vif [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-445190996',display_name='tempest-TestGettingAddress-server-445190996',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-445190996',id=115,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:12:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-77eccgfg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:12:42Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=8834b9bd-0324-4f5b-9b83-be852e0b96d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.225 238945 DEBUG nova.network.os_vif_util [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.226 238945 DEBUG nova.network.os_vif_util [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.226 238945 DEBUG os_vif [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.228 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c19198d-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.229 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.234 238945 INFO os_vif [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ac:6b,bridge_name='br-int',has_traffic_filtering=True,id=8c19198d-9ee1-4b83-9bd2-71b418462578,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c19198d-9e')#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.246 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7577bf48-7e4c-41a7-930e-cafb7a466ef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.267 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d1198c42-32ea-428b-baca-b19fb52518ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8e1b054-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:96:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576588, 'reachable_time': 39720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344684, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.283 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61a4966e-eab9-4adb-b06e-029b2660a53b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8e1b054-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576604, 'tstamp': 576604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344685, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.285 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8e1b054-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.288 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8e1b054-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.290 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.291 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.291 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8e1b054-50, col_values=(('external_ids', {'iface-id': 'cec58910-221b-4aa5-9532-67a30f83e8bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:20.292 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:13:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 200 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.9 KiB/s wr, 34 op/s
Jan 27 09:13:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.966 238945 DEBUG nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Received event network-vif-deleted-45efb061-6afd-4021-a345-4aa248d4409b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.966 238945 DEBUG nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-unplugged-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.967 238945 DEBUG oslo_concurrency.lockutils [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.967 238945 DEBUG oslo_concurrency.lockutils [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.968 238945 DEBUG oslo_concurrency.lockutils [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.968 238945 DEBUG nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] No waiting events found dispatching network-vif-unplugged-50c43789-df58-4796-81f2-c398dee6dabe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.969 238945 DEBUG nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-unplugged-50c43789-df58-4796-81f2-c398dee6dabe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.969 238945 DEBUG nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.969 238945 DEBUG oslo_concurrency.lockutils [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.970 238945 DEBUG oslo_concurrency.lockutils [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.970 238945 DEBUG oslo_concurrency.lockutils [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.970 238945 DEBUG nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] No waiting events found dispatching network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.970 238945 WARNING nova.compute.manager [req-17f61091-0f75-4b67-9228-2358d81509e8 req-6297a179-281e-481e-af7a-1509d84b2a05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received unexpected event network-vif-plugged-50c43789-df58-4796-81f2-c398dee6dabe for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.993 238945 INFO nova.virt.libvirt.driver [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Deleting instance files /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2_del#033[00m
Jan 27 09:13:20 np0005597378 nova_compute[238941]: 2026-01-27 14:13:20.994 238945 INFO nova.virt.libvirt.driver [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Deletion of /var/lib/nova/instances/8834b9bd-0324-4f5b-9b83-be852e0b96d2_del complete#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.061 238945 INFO nova.compute.manager [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.062 238945 DEBUG oslo.service.loopingcall [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.062 238945 DEBUG nova.compute.manager [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.062 238945 DEBUG nova.network.neutron [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.142 238945 DEBUG nova.network.neutron [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updated VIF entry in instance network info cache for port 50c43789-df58-4796-81f2-c398dee6dabe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.143 238945 DEBUG nova.network.neutron [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [{"id": "50c43789-df58-4796-81f2-c398dee6dabe", "address": "fa:16:3e:19:3a:57", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c43789-df", "ovs_interfaceid": "50c43789-df58-4796-81f2-c398dee6dabe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.260 238945 DEBUG oslo_concurrency.lockutils [req-e38bfa76-2bc7-442c-bdbd-8556307183b9 req-993a7264-bfb7-468a-837a-9bc48dfc9182 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8834b9bd-0324-4f5b-9b83-be852e0b96d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.983 238945 DEBUG nova.compute.manager [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-unplugged-8c19198d-9ee1-4b83-9bd2-71b418462578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.983 238945 DEBUG oslo_concurrency.lockutils [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.984 238945 DEBUG oslo_concurrency.lockutils [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.984 238945 DEBUG oslo_concurrency.lockutils [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.984 238945 DEBUG nova.compute.manager [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] No waiting events found dispatching network-vif-unplugged-8c19198d-9ee1-4b83-9bd2-71b418462578 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.985 238945 DEBUG nova.compute.manager [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-unplugged-8c19198d-9ee1-4b83-9bd2-71b418462578 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.985 238945 DEBUG nova.compute.manager [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.985 238945 DEBUG oslo_concurrency.lockutils [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.985 238945 DEBUG oslo_concurrency.lockutils [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.985 238945 DEBUG oslo_concurrency.lockutils [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.986 238945 DEBUG nova.compute.manager [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] No waiting events found dispatching network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:13:21 np0005597378 nova_compute[238941]: 2026-01-27 14:13:21.986 238945 WARNING nova.compute.manager [req-423a1f39-bfe2-4fb5-95c4-5eedbb51d616 req-f880a862-92dd-48af-9c02-fcdc100ae6e9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received unexpected event network-vif-plugged-8c19198d-9ee1-4b83-9bd2-71b418462578 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:13:22 np0005597378 nova_compute[238941]: 2026-01-27 14:13:22.547 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:22 np0005597378 podman[344698]: 2026-01-27 14:13:22.713254997 +0000 UTC m=+0.051796784 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 09:13:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 200 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.9 KiB/s wr, 34 op/s
Jan 27 09:13:23 np0005597378 nova_compute[238941]: 2026-01-27 14:13:23.727 238945 DEBUG nova.compute.manager [req-46e1d110-4a8d-40e4-9b16-ba090157618c req-f6a92649-9297-462f-8fdd-eda814e46dd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-deleted-50c43789-df58-4796-81f2-c398dee6dabe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:23 np0005597378 nova_compute[238941]: 2026-01-27 14:13:23.727 238945 INFO nova.compute.manager [req-46e1d110-4a8d-40e4-9b16-ba090157618c req-f6a92649-9297-462f-8fdd-eda814e46dd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Neutron deleted interface 50c43789-df58-4796-81f2-c398dee6dabe; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:13:23 np0005597378 nova_compute[238941]: 2026-01-27 14:13:23.727 238945 DEBUG nova.network.neutron [req-46e1d110-4a8d-40e4-9b16-ba090157618c req-f6a92649-9297-462f-8fdd-eda814e46dd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [{"id": "8c19198d-9ee1-4b83-9bd2-71b418462578", "address": "fa:16:3e:ea:ac:6b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feea:ac6b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c19198d-9e", "ovs_interfaceid": "8c19198d-9ee1-4b83-9bd2-71b418462578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:13:24 np0005597378 nova_compute[238941]: 2026-01-27 14:13:24.503 238945 DEBUG nova.compute.manager [req-46e1d110-4a8d-40e4-9b16-ba090157618c req-f6a92649-9297-462f-8fdd-eda814e46dd6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Detach interface failed, port_id=50c43789-df58-4796-81f2-c398dee6dabe, reason: Instance 8834b9bd-0324-4f5b-9b83-be852e0b96d2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 09:13:24 np0005597378 nova_compute[238941]: 2026-01-27 14:13:24.718 238945 DEBUG nova.network.neutron [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:13:24 np0005597378 podman[344719]: 2026-01-27 14:13:24.763816475 +0000 UTC m=+0.096334471 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 27 09:13:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 159 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 14 KiB/s wr, 55 op/s
Jan 27 09:13:25 np0005597378 nova_compute[238941]: 2026-01-27 14:13:25.024 238945 INFO nova.compute.manager [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Took 3.96 seconds to deallocate network for instance.#033[00m
Jan 27 09:13:25 np0005597378 nova_compute[238941]: 2026-01-27 14:13:25.128 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:25 np0005597378 nova_compute[238941]: 2026-01-27 14:13:25.129 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:25 np0005597378 nova_compute[238941]: 2026-01-27 14:13:25.201 238945 DEBUG oslo_concurrency.processutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:13:25 np0005597378 nova_compute[238941]: 2026-01-27 14:13:25.235 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:13:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3204934705' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:13:25 np0005597378 nova_compute[238941]: 2026-01-27 14:13:25.761 238945 DEBUG oslo_concurrency.processutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:13:25 np0005597378 nova_compute[238941]: 2026-01-27 14:13:25.767 238945 DEBUG nova.compute.provider_tree [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:13:25 np0005597378 nova_compute[238941]: 2026-01-27 14:13:25.820 238945 DEBUG nova.scheduler.client.report [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:13:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:26 np0005597378 nova_compute[238941]: 2026-01-27 14:13:26.017 238945 DEBUG nova.compute.manager [req-7c09364b-8265-41d2-9fc9-310822e27e78 req-84bb04ef-c93a-4a62-a5bc-87a4d9584846 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Received event network-vif-deleted-8c19198d-9ee1-4b83-9bd2-71b418462578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:26 np0005597378 nova_compute[238941]: 2026-01-27 14:13:26.042 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:26 np0005597378 nova_compute[238941]: 2026-01-27 14:13:26.109 238945 INFO nova.scheduler.client.report [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 8834b9bd-0324-4f5b-9b83-be852e0b96d2#033[00m
Jan 27 09:13:26 np0005597378 nova_compute[238941]: 2026-01-27 14:13:26.247 238945 DEBUG oslo_concurrency.lockutils [None req-18e9c6a4-8df5-4451-988a-730a2dc890d1 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "8834b9bd-0324-4f5b-9b83-be852e0b96d2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 14 KiB/s wr, 63 op/s
Jan 27 09:13:27 np0005597378 nova_compute[238941]: 2026-01-27 14:13:27.550 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:27Z|01204|binding|INFO|Releasing lport 139ea0ba-f559-4c32-9b23-bc114f6fe7b6 from this chassis (sb_readonly=0)
Jan 27 09:13:27 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:27Z|01205|binding|INFO|Releasing lport cec58910-221b-4aa5-9532-67a30f83e8bb from this chassis (sb_readonly=0)
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007729516982908026 of space, bias 1.0, pg target 0.23188550948724077 quantized to 32 (current 32)
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695785598223304 of space, bias 1.0, pg target 0.2008735679466991 quantized to 32 (current 32)
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0609991609285592e-06 of space, bias 4.0, pg target 0.001273198993114271 quantized to 16 (current 16)
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:13:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:13:27 np0005597378 nova_compute[238941]: 2026-01-27 14:13:27.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.418 238945 DEBUG nova.compute.manager [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-changed-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.419 238945 DEBUG nova.compute.manager [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing instance network info cache due to event network-changed-d02567c1-b424-4fc8-bf9d-3d0c7279063b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.419 238945 DEBUG oslo_concurrency.lockutils [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.419 238945 DEBUG oslo_concurrency.lockutils [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.419 238945 DEBUG nova.network.neutron [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Refreshing network info cache for port d02567c1-b424-4fc8-bf9d-3d0c7279063b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.627 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.627 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.628 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.628 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.628 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.630 238945 INFO nova.compute.manager [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Terminating instance#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.631 238945 DEBUG nova.compute.manager [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:13:28 np0005597378 kernel: tapd02567c1-b4 (unregistering): left promiscuous mode
Jan 27 09:13:28 np0005597378 NetworkManager[48904]: <info>  [1769523208.6829] device (tapd02567c1-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:28Z|01206|binding|INFO|Releasing lport d02567c1-b424-4fc8-bf9d-3d0c7279063b from this chassis (sb_readonly=0)
Jan 27 09:13:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:28Z|01207|binding|INFO|Setting lport d02567c1-b424-4fc8-bf9d-3d0c7279063b down in Southbound
Jan 27 09:13:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:28Z|01208|binding|INFO|Removing iface tapd02567c1-b4 ovn-installed in OVS
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.723 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:aa:48 10.100.0.13'], port_security=['fa:16:3e:18:aa:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6bf91edb-b66a-458b-b8bd-e8520cdc6349', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9964511f-1456-4111-a888-96329ab42c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec4fd51-ea76-4523-91d0-373d6d53e00e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d02567c1-b424-4fc8-bf9d-3d0c7279063b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:13:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.724 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d02567c1-b424-4fc8-bf9d-3d0c7279063b in datapath 9964511f-1456-4111-a888-96329ab42c59 unbound from our chassis#033[00m
Jan 27 09:13:28 np0005597378 kernel: tap32a4e0d7-f3 (unregistering): left promiscuous mode
Jan 27 09:13:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.725 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9964511f-1456-4111-a888-96329ab42c59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:13:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.726 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[73b780f6-248f-4d5e-8385-8cf31e2dc552]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.726 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9964511f-1456-4111-a888-96329ab42c59 namespace which is not needed anymore#033[00m
Jan 27 09:13:28 np0005597378 NetworkManager[48904]: <info>  [1769523208.7291] device (tap32a4e0d7-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:28Z|01209|binding|INFO|Releasing lport 32a4e0d7-f322-4557-8734-4d3be1786b85 from this chassis (sb_readonly=0)
Jan 27 09:13:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:28Z|01210|binding|INFO|Setting lport 32a4e0d7-f322-4557-8734-4d3be1786b85 down in Southbound
Jan 27 09:13:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:13:28Z|01211|binding|INFO|Removing iface tap32a4e0d7-f3 ovn-installed in OVS
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.740 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.746 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.751 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:df:4b 2001:db8::f816:3eff:fe7b:df4b'], port_security=['fa:16:3e:7b:df:4b 2001:db8::f816:3eff:fe7b:df4b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe7b:df4b/64', 'neutron:device_id': '6bf91edb-b66a-458b-b8bd-e8520cdc6349', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7baf54ed-4a71-4383-b238-91badee6051c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=461a5a8c-725e-4fde-b0f2-146218d7a416, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=32a4e0d7-f322-4557-8734-4d3be1786b85) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.760 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:28 np0005597378 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000072.scope: Deactivated successfully.
Jan 27 09:13:28 np0005597378 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000072.scope: Consumed 16.044s CPU time.
Jan 27 09:13:28 np0005597378 systemd-machined[207425]: Machine qemu-145-instance-00000072 terminated.
Jan 27 09:13:28 np0005597378 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [NOTICE]   (342203) : haproxy version is 2.8.14-c23fe91
Jan 27 09:13:28 np0005597378 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [NOTICE]   (342203) : path to executable is /usr/sbin/haproxy
Jan 27 09:13:28 np0005597378 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [WARNING]  (342203) : Exiting Master process...
Jan 27 09:13:28 np0005597378 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [ALERT]    (342203) : Current worker (342205) exited with code 143 (Terminated)
Jan 27 09:13:28 np0005597378 neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59[342199]: [WARNING]  (342203) : All workers exited. Exiting... (0)
Jan 27 09:13:28 np0005597378 systemd[1]: libpod-382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795.scope: Deactivated successfully.
Jan 27 09:13:28 np0005597378 NetworkManager[48904]: <info>  [1769523208.8599] manager: (tap32a4e0d7-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/492)
Jan 27 09:13:28 np0005597378 podman[344799]: 2026-01-27 14:13:28.862862113 +0000 UTC m=+0.044285821 container died 382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.876 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Instance destroyed successfully.#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.877 238945 DEBUG nova.objects.instance [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 6bf91edb-b66a-458b-b8bd-e8520cdc6349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.896 238945 DEBUG nova.virt.libvirt.vif [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-465956047',display_name='tempest-TestGettingAddress-server-465956047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-465956047',id=114,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:12:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-c43xdp6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:12:05Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6bf91edb-b66a-458b-b8bd-e8520cdc6349,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.896 238945 DEBUG nova.network.os_vif_util [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.897 238945 DEBUG nova.network.os_vif_util [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.897 238945 DEBUG os_vif [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.899 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795-userdata-shm.mount: Deactivated successfully.
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.900 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd02567c1-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.901 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-567bd9b41b6e82e882facce6f36e0d400f760c906e7d2a72376c1afd3c1edeca-merged.mount: Deactivated successfully.
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.903 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.906 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.908 238945 INFO os_vif [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:aa:48,bridge_name='br-int',has_traffic_filtering=True,id=d02567c1-b424-4fc8-bf9d-3d0c7279063b,network=Network(9964511f-1456-4111-a888-96329ab42c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd02567c1-b4')#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.909 238945 DEBUG nova.virt.libvirt.vif [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-465956047',display_name='tempest-TestGettingAddress-server-465956047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-465956047',id=114,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCL5GzdVTw3g9pYkusdF2jHdfrZeoOZ/QJqs6JRaOq321DxaA1qhc/zuC6HJ5kiWIddQs1XVieopB7xJCI68KlSqzoM9jLooQd9VHY8heADLc8jRl0pWVTt1wMXL/6Amgw==',key_name='tempest-TestGettingAddress-1590092660',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:12:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-c43xdp6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:12:05Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6bf91edb-b66a-458b-b8bd-e8520cdc6349,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.909 238945 DEBUG nova.network.os_vif_util [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.910 238945 DEBUG nova.network.os_vif_util [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.910 238945 DEBUG os_vif [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:13:28 np0005597378 podman[344799]: 2026-01-27 14:13:28.912069117 +0000 UTC m=+0.093492825 container cleanup 382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.912 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.912 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32a4e0d7-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.916 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 121 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 13 KiB/s wr, 60 op/s
Jan 27 09:13:28 np0005597378 nova_compute[238941]: 2026-01-27 14:13:28.917 238945 INFO os_vif [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:df:4b,bridge_name='br-int',has_traffic_filtering=True,id=32a4e0d7-f322-4557-8734-4d3be1786b85,network=Network(b8e1b054-5200-4e22-9702-c3f6d1f1a12e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32a4e0d7-f3')#033[00m
Jan 27 09:13:28 np0005597378 systemd[1]: libpod-conmon-382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795.scope: Deactivated successfully.
Jan 27 09:13:28 np0005597378 podman[344850]: 2026-01-27 14:13:28.989057807 +0000 UTC m=+0.049090641 container remove 382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:28.999 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[849d03e0-c46e-4e06-9fb2-a207444ffc26]: (4, ('Tue Jan 27 02:13:28 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59 (382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795)\n382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795\nTue Jan 27 02:13:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9964511f-1456-4111-a888-96329ab42c59 (382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795)\n382004103f53f4679cd0c4ad52e6a7760b5e8debe80d1bc9f9a83213999e1795\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.002 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0314ce-5326-48eb-9e27-d0171c95bd8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.003 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9964511f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.005 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:29 np0005597378 kernel: tap9964511f-10: left promiscuous mode
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.019 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.023 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8eaf4621-8ef6-4fac-9fe7-13174f0e8cde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.038 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d39b4f2-3efa-4481-bc92-238c8bb06398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.039 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0592c21-7fdb-4214-82d2-b573dd2022a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.042 238945 DEBUG nova.compute.manager [req-0db1e953-3391-4e3b-a20b-32e6f98b5814 req-c3aaf774-4f46-4c14-a06a-c17a2825c92d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-unplugged-32a4e0d7-f322-4557-8734-4d3be1786b85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.043 238945 DEBUG oslo_concurrency.lockutils [req-0db1e953-3391-4e3b-a20b-32e6f98b5814 req-c3aaf774-4f46-4c14-a06a-c17a2825c92d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.043 238945 DEBUG oslo_concurrency.lockutils [req-0db1e953-3391-4e3b-a20b-32e6f98b5814 req-c3aaf774-4f46-4c14-a06a-c17a2825c92d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.043 238945 DEBUG oslo_concurrency.lockutils [req-0db1e953-3391-4e3b-a20b-32e6f98b5814 req-c3aaf774-4f46-4c14-a06a-c17a2825c92d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.044 238945 DEBUG nova.compute.manager [req-0db1e953-3391-4e3b-a20b-32e6f98b5814 req-c3aaf774-4f46-4c14-a06a-c17a2825c92d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] No waiting events found dispatching network-vif-unplugged-32a4e0d7-f322-4557-8734-4d3be1786b85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.044 238945 DEBUG nova.compute.manager [req-0db1e953-3391-4e3b-a20b-32e6f98b5814 req-c3aaf774-4f46-4c14-a06a-c17a2825c92d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-unplugged-32a4e0d7-f322-4557-8734-4d3be1786b85 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.060 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[168b1a86-4bda-4d32-949c-19cc208f13de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576480, 'reachable_time': 34931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344883, 'error': None, 'target': 'ovnmeta-9964511f-1456-4111-a888-96329ab42c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.063 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9964511f-1456-4111-a888-96329ab42c59 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.063 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2bd1a5-b7e7-441f-8204-16c322852912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.064 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 32a4e0d7-f322-4557-8734-4d3be1786b85 in datapath b8e1b054-5200-4e22-9702-c3f6d1f1a12e unbound from our chassis#033[00m
Jan 27 09:13:29 np0005597378 systemd[1]: run-netns-ovnmeta\x2d9964511f\x2d1456\x2d4111\x2da888\x2d96329ab42c59.mount: Deactivated successfully.
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.065 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8e1b054-5200-4e22-9702-c3f6d1f1a12e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.065 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[23803fd6-ccd7-4f9f-b8f9-0bcad215bd80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.066 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e namespace which is not needed anymore#033[00m
Jan 27 09:13:29 np0005597378 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [NOTICE]   (342376) : haproxy version is 2.8.14-c23fe91
Jan 27 09:13:29 np0005597378 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [NOTICE]   (342376) : path to executable is /usr/sbin/haproxy
Jan 27 09:13:29 np0005597378 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [WARNING]  (342376) : Exiting Master process...
Jan 27 09:13:29 np0005597378 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [WARNING]  (342376) : Exiting Master process...
Jan 27 09:13:29 np0005597378 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [ALERT]    (342376) : Current worker (342379) exited with code 143 (Terminated)
Jan 27 09:13:29 np0005597378 neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e[342367]: [WARNING]  (342376) : All workers exited. Exiting... (0)
Jan 27 09:13:29 np0005597378 systemd[1]: libpod-b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27.scope: Deactivated successfully.
Jan 27 09:13:29 np0005597378 conmon[342367]: conmon b5a1e700a1a2e092733e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27.scope/container/memory.events
Jan 27 09:13:29 np0005597378 podman[344902]: 2026-01-27 14:13:29.228645588 +0000 UTC m=+0.057577219 container died b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.238 238945 INFO nova.virt.libvirt.driver [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Deleting instance files /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349_del#033[00m
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.240 238945 INFO nova.virt.libvirt.driver [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Deletion of /var/lib/nova/instances/6bf91edb-b66a-458b-b8bd-e8520cdc6349_del complete#033[00m
Jan 27 09:13:29 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27-userdata-shm.mount: Deactivated successfully.
Jan 27 09:13:29 np0005597378 systemd[1]: var-lib-containers-storage-overlay-64ae59e8e76c0f59d980100fdf776d345c70ceb1140791aaf6acb297929ec17c-merged.mount: Deactivated successfully.
Jan 27 09:13:29 np0005597378 podman[344902]: 2026-01-27 14:13:29.262428776 +0000 UTC m=+0.091360407 container cleanup b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:13:29 np0005597378 systemd[1]: libpod-conmon-b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27.scope: Deactivated successfully.
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.297 238945 INFO nova.compute.manager [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.298 238945 DEBUG oslo.service.loopingcall [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.298 238945 DEBUG nova.compute.manager [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.298 238945 DEBUG nova.network.neutron [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:13:29 np0005597378 podman[344933]: 2026-01-27 14:13:29.326673263 +0000 UTC m=+0.042887094 container remove b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.332 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b2cbbe-5d23-4ab6-a5e8-0acbc8e50902]: (4, ('Tue Jan 27 02:13:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e (b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27)\nb5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27\nTue Jan 27 02:13:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e (b5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27)\nb5a1e700a1a2e092733ee327acd2f2c07b392ebcc27a3d7c1b8b9c1e8be25f27\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.333 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[45474d6f-8ccf-4146-bbd2-6680a85d2928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.334 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8e1b054-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.336 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:29 np0005597378 kernel: tapb8e1b054-50: left promiscuous mode
Jan 27 09:13:29 np0005597378 nova_compute[238941]: 2026-01-27 14:13:29.349 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.352 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ddbae38c-40ba-430b-ba19-0021c4ce7270]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.370 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0f1d5a-e9e2-4d9e-b93a-157ba855fa59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.371 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[65a34a62-2dba-4846-897f-bdc42cd73eb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.390 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0595812c-8d6b-4d36-84e0-f2922e38fcaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576580, 'reachable_time': 38590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344948, 'error': None, 'target': 'ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.392 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8e1b054-5200-4e22-9702-c3f6d1f1a12e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:13:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:29.392 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[284b9bca-b6a6-4c55-853d-f9b20dbd02a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:13:29 np0005597378 systemd[1]: run-netns-ovnmeta\x2db8e1b054\x2d5200\x2d4e22\x2d9702\x2dc3f6d1f1a12e.mount: Deactivated successfully.
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.215 238945 DEBUG nova.network.neutron [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updated VIF entry in instance network info cache for port d02567c1-b424-4fc8-bf9d-3d0c7279063b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.216 238945 DEBUG nova.network.neutron [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [{"id": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "address": "fa:16:3e:18:aa:48", "network": {"id": "9964511f-1456-4111-a888-96329ab42c59", "bridge": "br-int", "label": "tempest-network-smoke--1281024359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd02567c1-b4", "ovs_interfaceid": "d02567c1-b424-4fc8-bf9d-3d0c7279063b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32a4e0d7-f322-4557-8734-4d3be1786b85", "address": "fa:16:3e:7b:df:4b", "network": {"id": "b8e1b054-5200-4e22-9702-c3f6d1f1a12e", "bridge": "br-int", "label": "tempest-network-smoke--1054260128", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:df4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32a4e0d7-f3", "ovs_interfaceid": "32a4e0d7-f322-4557-8734-4d3be1786b85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.260 238945 DEBUG oslo_concurrency.lockutils [req-556a7cc8-a47c-4998-8be3-671759f1a91b req-5768f681-aa0b-4cd6-bcef-e38667930585 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bf91edb-b66a-458b-b8bd-e8520cdc6349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.573 238945 DEBUG nova.compute.manager [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-unplugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.574 238945 DEBUG oslo_concurrency.lockutils [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.574 238945 DEBUG oslo_concurrency.lockutils [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.575 238945 DEBUG oslo_concurrency.lockutils [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.575 238945 DEBUG nova.compute.manager [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] No waiting events found dispatching network-vif-unplugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.575 238945 DEBUG nova.compute.manager [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-unplugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.576 238945 DEBUG nova.compute.manager [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.576 238945 DEBUG oslo_concurrency.lockutils [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.577 238945 DEBUG oslo_concurrency.lockutils [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.577 238945 DEBUG oslo_concurrency.lockutils [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.577 238945 DEBUG nova.compute.manager [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] No waiting events found dispatching network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:13:30 np0005597378 nova_compute[238941]: 2026-01-27 14:13:30.578 238945 WARNING nova.compute.manager [req-e4c2e0a1-8111-4779-bd95-b2b45dd0c6fd req-70407199-dec0-4b26-b6ec-4e8641a25f8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received unexpected event network-vif-plugged-d02567c1-b424-4fc8-bf9d-3d0c7279063b for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:13:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 105 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 12 KiB/s wr, 47 op/s
Jan 27 09:13:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.062 238945 DEBUG nova.network.neutron [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.082 238945 INFO nova.compute.manager [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Took 1.78 seconds to deallocate network for instance.#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.130 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.132 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.192 238945 DEBUG oslo_concurrency.processutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.253 238945 DEBUG nova.compute.manager [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.255 238945 DEBUG oslo_concurrency.lockutils [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.256 238945 DEBUG oslo_concurrency.lockutils [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.256 238945 DEBUG oslo_concurrency.lockutils [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.256 238945 DEBUG nova.compute.manager [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] No waiting events found dispatching network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.257 238945 WARNING nova.compute.manager [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received unexpected event network-vif-plugged-32a4e0d7-f322-4557-8734-4d3be1786b85 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.257 238945 DEBUG nova.compute.manager [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-deleted-d02567c1-b424-4fc8-bf9d-3d0c7279063b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.258 238945 DEBUG nova.compute.manager [req-11842a13-5785-4063-ae21-7f2e588f141a req-fc5ec507-b8d4-4179-8105-a0c47d3f9ae0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Received event network-vif-deleted-32a4e0d7-f322-4557-8734-4d3be1786b85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:13:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:13:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1048457322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.798 238945 DEBUG oslo_concurrency.processutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.808 238945 DEBUG nova.compute.provider_tree [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.827 238945 DEBUG nova.scheduler.client.report [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.852 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.879 238945 INFO nova.scheduler.client.report [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 6bf91edb-b66a-458b-b8bd-e8520cdc6349#033[00m
Jan 27 09:13:31 np0005597378 nova_compute[238941]: 2026-01-27 14:13:31.976 238945 DEBUG oslo_concurrency.lockutils [None req-8318700f-8498-4f1b-b171-a810a447cbc4 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6bf91edb-b66a-458b-b8bd-e8520cdc6349" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:32 np0005597378 nova_compute[238941]: 2026-01-27 14:13:32.552 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 105 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 31 op/s
Jan 27 09:13:33 np0005597378 nova_compute[238941]: 2026-01-27 14:13:33.015 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523198.0144117, e47cd4e5-669d-4001-af0c-57b561828b60 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:13:33 np0005597378 nova_compute[238941]: 2026-01-27 14:13:33.016 238945 INFO nova.compute.manager [-] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:13:33 np0005597378 nova_compute[238941]: 2026-01-27 14:13:33.047 238945 DEBUG nova.compute.manager [None req-f832e9fe-faf4-49e8-83ac-dd540e8daa6c - - - - - -] [instance: e47cd4e5-669d-4001-af0c-57b561828b60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:13:33 np0005597378 nova_compute[238941]: 2026-01-27 14:13:33.916 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 13 KiB/s wr, 57 op/s
Jan 27 09:13:35 np0005597378 nova_compute[238941]: 2026-01-27 14:13:35.189 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523200.188351, 8834b9bd-0324-4f5b-9b83-be852e0b96d2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:13:35 np0005597378 nova_compute[238941]: 2026-01-27 14:13:35.190 238945 INFO nova.compute.manager [-] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:13:35 np0005597378 nova_compute[238941]: 2026-01-27 14:13:35.212 238945 DEBUG nova.compute.manager [None req-08935666-492f-4416-b5f7-87acc6ec7334 - - - - - -] [instance: 8834b9bd-0324-4f5b-9b83-be852e0b96d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:13:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:36 np0005597378 nova_compute[238941]: 2026-01-27 14:13:36.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 35 op/s
Jan 27 09:13:37 np0005597378 nova_compute[238941]: 2026-01-27 14:13:37.010 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:37.226 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:13:37 np0005597378 nova_compute[238941]: 2026-01-27 14:13:37.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:37.228 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:13:37 np0005597378 nova_compute[238941]: 2026-01-27 14:13:37.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:38 np0005597378 nova_compute[238941]: 2026-01-27 14:13:38.919 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 09:13:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:40.233 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:13:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 09:13:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:42 np0005597378 nova_compute[238941]: 2026-01-27 14:13:42.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 27 09:13:43 np0005597378 nova_compute[238941]: 2026-01-27 14:13:43.874 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523208.8736942, 6bf91edb-b66a-458b-b8bd-e8520cdc6349 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:13:43 np0005597378 nova_compute[238941]: 2026-01-27 14:13:43.875 238945 INFO nova.compute.manager [-] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:13:43 np0005597378 nova_compute[238941]: 2026-01-27 14:13:43.899 238945 DEBUG nova.compute.manager [None req-a2a46f6a-fe18-4f2d-a8b5-27ab55202828 - - - - - -] [instance: 6bf91edb-b66a-458b-b8bd-e8520cdc6349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:13:43 np0005597378 nova_compute[238941]: 2026-01-27 14:13:43.924 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 27 09:13:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:46.318 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:46.319 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:13:46.319 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:13:47 np0005597378 nova_compute[238941]: 2026-01-27 14:13:47.558 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:13:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:13:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:13:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:13:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:13:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:13:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:13:48 np0005597378 nova_compute[238941]: 2026-01-27 14:13:48.928 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:13:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:52 np0005597378 nova_compute[238941]: 2026-01-27 14:13:52.561 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:13:53 np0005597378 podman[344972]: 2026-01-27 14:13:53.720133989 +0000 UTC m=+0.054951758 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:13:53 np0005597378 nova_compute[238941]: 2026-01-27 14:13:53.932 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:13:55 np0005597378 podman[344992]: 2026-01-27 14:13:55.73062974 +0000 UTC m=+0.072364567 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:13:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:13:56 np0005597378 nova_compute[238941]: 2026-01-27 14:13:56.180 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:56 np0005597378 nova_compute[238941]: 2026-01-27 14:13:56.181 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:56 np0005597378 nova_compute[238941]: 2026-01-27 14:13:56.200 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:13:56 np0005597378 nova_compute[238941]: 2026-01-27 14:13:56.283 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:56 np0005597378 nova_compute[238941]: 2026-01-27 14:13:56.283 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:56 np0005597378 nova_compute[238941]: 2026-01-27 14:13:56.290 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:13:56 np0005597378 nova_compute[238941]: 2026-01-27 14:13:56.290 238945 INFO nova.compute.claims [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:13:56 np0005597378 nova_compute[238941]: 2026-01-27 14:13:56.387 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:13:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:13:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2079770442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:13:56 np0005597378 nova_compute[238941]: 2026-01-27 14:13:56.916 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:13:56 np0005597378 nova_compute[238941]: 2026-01-27 14:13:56.924 238945 DEBUG nova.compute.provider_tree [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:13:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 41 MiB data, 765 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:13:56 np0005597378 nova_compute[238941]: 2026-01-27 14:13:56.946 238945 DEBUG nova.scheduler.client.report [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:13:56 np0005597378 nova_compute[238941]: 2026-01-27 14:13:56.985 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:56 np0005597378 nova_compute[238941]: 2026-01-27 14:13:56.987 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.095 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.096 238945 DEBUG nova.network.neutron [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.122 238945 INFO nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.151 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.228 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.230 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.230 238945 INFO nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Creating image(s)#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.254 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.276 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.302 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.311 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.381 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.382 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.382 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.383 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.403 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.406 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.522 238945 DEBUG nova.policy [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.563 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.695 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.777 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.868 238945 DEBUG nova.objects.instance [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 8df0cb66-9678-4f50-87e0-066cbafcb26b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.911 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.912 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Ensure instance console log exists: /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.912 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.912 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:13:57 np0005597378 nova_compute[238941]: 2026-01-27 14:13:57.912 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:13:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 65 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Jan 27 09:13:58 np0005597378 nova_compute[238941]: 2026-01-27 14:13:58.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:13:59 np0005597378 nova_compute[238941]: 2026-01-27 14:13:59.407 238945 DEBUG nova.network.neutron [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Successfully created port: 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:13:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:13:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/359657946' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:13:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:13:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/359657946' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:14:00 np0005597378 nova_compute[238941]: 2026-01-27 14:14:00.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:14:00 np0005597378 nova_compute[238941]: 2026-01-27 14:14:00.408 238945 DEBUG nova.network.neutron [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Successfully updated port: 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:14:00 np0005597378 nova_compute[238941]: 2026-01-27 14:14:00.452 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:14:00 np0005597378 nova_compute[238941]: 2026-01-27 14:14:00.452 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:14:00 np0005597378 nova_compute[238941]: 2026-01-27 14:14:00.453 238945 DEBUG nova.network.neutron [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:14:00 np0005597378 nova_compute[238941]: 2026-01-27 14:14:00.540 238945 DEBUG nova.compute.manager [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-changed-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:00 np0005597378 nova_compute[238941]: 2026-01-27 14:14:00.541 238945 DEBUG nova.compute.manager [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Refreshing instance network info cache due to event network-changed-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:14:00 np0005597378 nova_compute[238941]: 2026-01-27 14:14:00.541 238945 DEBUG oslo_concurrency.lockutils [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:14:00 np0005597378 nova_compute[238941]: 2026-01-27 14:14:00.634 238945 DEBUG nova.network.neutron [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:14:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:14:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:01 np0005597378 nova_compute[238941]: 2026-01-27 14:14:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:14:01 np0005597378 nova_compute[238941]: 2026-01-27 14:14:01.409 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:01 np0005597378 nova_compute[238941]: 2026-01-27 14:14:01.409 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:01 np0005597378 nova_compute[238941]: 2026-01-27 14:14:01.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:01 np0005597378 nova_compute[238941]: 2026-01-27 14:14:01.410 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:14:01 np0005597378 nova_compute[238941]: 2026-01-27 14:14:01.410 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:14:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170972737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.004 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.189 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.190 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3768MB free_disk=59.966786862351GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.191 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.191 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.313 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8df0cb66-9678-4f50-87e0-066cbafcb26b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.313 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.314 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.334 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.354 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.354 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.377 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.401 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.455 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.495 238945 DEBUG nova.network.neutron [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updating instance_info_cache with network_info: [{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.525 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.526 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Instance network_info: |[{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.526 238945 DEBUG oslo_concurrency.lockutils [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.527 238945 DEBUG nova.network.neutron [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Refreshing network info cache for port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.530 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Start _get_guest_xml network_info=[{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.538 238945 WARNING nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.553 238945 DEBUG nova.virt.libvirt.host [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.554 238945 DEBUG nova.virt.libvirt.host [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.561 238945 DEBUG nova.virt.libvirt.host [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.561 238945 DEBUG nova.virt.libvirt.host [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.562 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.562 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.562 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.563 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.563 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.563 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.563 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.563 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.564 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.564 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.564 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.565 238945 DEBUG nova.virt.hardware [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.568 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:02 np0005597378 nova_compute[238941]: 2026-01-27 14:14:02.607 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:14:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:14:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4255789616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.024 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.030 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.055 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.079 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.080 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:14:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2098986239' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.129 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:14:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 34K writes, 137K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s#012Cumulative WAL: 34K writes, 12K syncs, 2.85 writes per sync, written: 0.13 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5362 writes, 19K keys, 5362 commit groups, 1.0 writes per commit group, ingest: 21.12 MB, 0.04 MB/s#012Interval WAL: 5362 writes, 2150 syncs, 2.49 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.153 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.160 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:14:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2896762468' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.768 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.770 238945 DEBUG nova.virt.libvirt.vif [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:13:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-144581117',display_name='tempest-TestNetworkBasicOps-server-144581117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-144581117',id=117,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClIsOMYBDW1mTHBPhBNzMnSebAst2LQIoqp5ISoghGMCqgK5cCtP8boVvXqJnI/aVkYSOd21OzhpfBfG/mCpRxC0QfzpZQ+ccWYmJrMDrV2A/8x5zjAOXMRJmK9HClK6w==',key_name='tempest-TestNetworkBasicOps-932126384',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-r0ixdvtt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:13:57Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8df0cb66-9678-4f50-87e0-066cbafcb26b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.770 238945 DEBUG nova.network.os_vif_util [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.771 238945 DEBUG nova.network.os_vif_util [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.772 238945 DEBUG nova.objects.instance [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8df0cb66-9678-4f50-87e0-066cbafcb26b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.874 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  <uuid>8df0cb66-9678-4f50-87e0-066cbafcb26b</uuid>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  <name>instance-00000075</name>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkBasicOps-server-144581117</nova:name>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:14:02</nova:creationTime>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:        <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:        <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:        <nova:port uuid="21c0e79c-9d05-4b8c-89f6-b7f7e93c871d">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <entry name="serial">8df0cb66-9678-4f50-87e0-066cbafcb26b</entry>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <entry name="uuid">8df0cb66-9678-4f50-87e0-066cbafcb26b</entry>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/8df0cb66-9678-4f50-87e0-066cbafcb26b_disk">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/8df0cb66-9678-4f50-87e0-066cbafcb26b_disk.config">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:27:df:73"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <target dev="tap21c0e79c-9d"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/console.log" append="off"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:14:03 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:14:03 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:14:03 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:14:03 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.876 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Preparing to wait for external event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.876 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.876 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.876 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.877 238945 DEBUG nova.virt.libvirt.vif [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:13:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-144581117',display_name='tempest-TestNetworkBasicOps-server-144581117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-144581117',id=117,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClIsOMYBDW1mTHBPhBNzMnSebAst2LQIoqp5ISoghGMCqgK5cCtP8boVvXqJnI/aVkYSOd21OzhpfBfG/mCpRxC0QfzpZQ+ccWYmJrMDrV2A/8x5zjAOXMRJmK9HClK6w==',key_name='tempest-TestNetworkBasicOps-932126384',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-r0ixdvtt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:13:57Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8df0cb66-9678-4f50-87e0-066cbafcb26b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.877 238945 DEBUG nova.network.os_vif_util [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.878 238945 DEBUG nova.network.os_vif_util [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.878 238945 DEBUG os_vif [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.879 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.879 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.880 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.882 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.882 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21c0e79c-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.882 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21c0e79c-9d, col_values=(('external_ids', {'iface-id': '21c0e79c-9d05-4b8c-89f6-b7f7e93c871d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:df:73', 'vm-uuid': '8df0cb66-9678-4f50-87e0-066cbafcb26b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:03 np0005597378 NetworkManager[48904]: <info>  [1769523243.8849] manager: (tap21c0e79c-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/493)
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.893 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:03 np0005597378 nova_compute[238941]: 2026-01-27 14:14:03.894 238945 INFO os_vif [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d')#033[00m
Jan 27 09:14:04 np0005597378 nova_compute[238941]: 2026-01-27 14:14:04.002 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:14:04 np0005597378 nova_compute[238941]: 2026-01-27 14:14:04.002 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:14:04 np0005597378 nova_compute[238941]: 2026-01-27 14:14:04.003 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:27:df:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:14:04 np0005597378 nova_compute[238941]: 2026-01-27 14:14:04.003 238945 INFO nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Using config drive#033[00m
Jan 27 09:14:04 np0005597378 nova_compute[238941]: 2026-01-27 14:14:04.029 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:04 np0005597378 nova_compute[238941]: 2026-01-27 14:14:04.805 238945 INFO nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Creating config drive at /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/disk.config#033[00m
Jan 27 09:14:04 np0005597378 nova_compute[238941]: 2026-01-27 14:14:04.811 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqa0li7ez execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:14:04 np0005597378 nova_compute[238941]: 2026-01-27 14:14:04.955 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqa0li7ez" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:04 np0005597378 nova_compute[238941]: 2026-01-27 14:14:04.985 238945 DEBUG nova.storage.rbd_utils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:04 np0005597378 nova_compute[238941]: 2026-01-27 14:14:04.988 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/disk.config 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.023 238945 DEBUG nova.network.neutron [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updated VIF entry in instance network info cache for port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.024 238945 DEBUG nova.network.neutron [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updating instance_info_cache with network_info: [{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.058 238945 DEBUG oslo_concurrency.lockutils [req-83030ee2-0b5e-4831-8459-02a282719faa req-1ab7dc7f-a42a-41cf-8bf8-b5966f3fdb21 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.080 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.145 238945 DEBUG oslo_concurrency.processutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/disk.config 8df0cb66-9678-4f50-87e0-066cbafcb26b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.147 238945 INFO nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Deleting local config drive /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b/disk.config because it was imported into RBD.#033[00m
Jan 27 09:14:05 np0005597378 kernel: tap21c0e79c-9d: entered promiscuous mode
Jan 27 09:14:05 np0005597378 NetworkManager[48904]: <info>  [1769523245.1999] manager: (tap21c0e79c-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/494)
Jan 27 09:14:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:05Z|01212|binding|INFO|Claiming lport 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d for this chassis.
Jan 27 09:14:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:05Z|01213|binding|INFO|21c0e79c-9d05-4b8c-89f6-b7f7e93c871d: Claiming fa:16:3e:27:df:73 10.100.0.14
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.205 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:05 np0005597378 systemd-udevd[345385]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:14:05 np0005597378 systemd-machined[207425]: New machine qemu-149-instance-00000075.
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.241 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:df:73 10.100.0.14'], port_security=['fa:16:3e:27:df:73 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8df0cb66-9678-4f50-87e0-066cbafcb26b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'accd4075-5a55-4bff-827f-ddb1794ed7d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a15460d-6ccd-40d2-9737-7ae06bf168e7, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.242 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d in datapath 12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c bound to our chassis#033[00m
Jan 27 09:14:05 np0005597378 NetworkManager[48904]: <info>  [1769523245.2446] device (tap21c0e79c-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.244 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c#033[00m
Jan 27 09:14:05 np0005597378 NetworkManager[48904]: <info>  [1769523245.2451] device (tap21c0e79c-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:14:05 np0005597378 systemd[1]: Started Virtual Machine qemu-149-instance-00000075.
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.257 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7b145a-b131-4362-ac91-e79dfa6b880f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.258 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap12f77fa9-61 in ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.260 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap12f77fa9-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.260 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[278e7b3f-a21a-4b74-a959-5054b3b412d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.260 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[271f4aef-262e-4e9d-b721-8c9e1b39dfcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.269 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.273 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.276 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[0aadc00d-3d5a-47ca-975d-339efd28a1a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:05Z|01214|binding|INFO|Setting lport 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d ovn-installed in OVS
Jan 27 09:14:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:05Z|01215|binding|INFO|Setting lport 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d up in Southbound
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.279 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.291 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec391bd3-479b-4df2-a231-f1c5bd5e4c68]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.318 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbe1c41-62c7-4d52-9f9a-004244337b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 NetworkManager[48904]: <info>  [1769523245.3253] manager: (tap12f77fa9-60): new Veth device (/org/freedesktop/NetworkManager/Devices/495)
Jan 27 09:14:05 np0005597378 systemd-udevd[345387]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.324 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41a191ce-c23c-459a-87a4-ad6e7e352532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.355 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[99f363ca-b546-40ba-bcc8-3bf8331eed69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.358 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5b21c7-91e4-4c87-aaaa-5da8336300bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 NetworkManager[48904]: <info>  [1769523245.3822] device (tap12f77fa9-60): carrier: link connected
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.387 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6c962f7f-a522-4c3e-90bc-ce281e775895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.404 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9e01b1-e1da-4a07-b401-1614a8eff653]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12f77fa9-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:9e:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 356], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588500, 'reachable_time': 33310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345418, 'error': None, 'target': 'ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.420 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[be88e562-3468-4ded-9606-646eea7cc3d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:9e5c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588500, 'tstamp': 588500}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345419, 'error': None, 'target': 'ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.439 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec957810-5619-4f48-b396-793e16c55195]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12f77fa9-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:9e:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 356], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588500, 'reachable_time': 33310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345420, 'error': None, 'target': 'ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.473 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0bdc802-015f-4393-9327-08d80eaa1dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.532 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[98bd7e56-e351-4ad9-9201-ff8134fd6ab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.534 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12f77fa9-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.534 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.535 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12f77fa9-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.536 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:05 np0005597378 NetworkManager[48904]: <info>  [1769523245.5378] manager: (tap12f77fa9-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/496)
Jan 27 09:14:05 np0005597378 kernel: tap12f77fa9-60: entered promiscuous mode
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.539 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.542 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12f77fa9-60, col_values=(('external_ids', {'iface-id': 'd783a246-d28e-44e1-a0e9-783e23a95051'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:05Z|01216|binding|INFO|Releasing lport d783a246-d28e-44e1-a0e9-783e23a95051 from this chassis (sb_readonly=0)
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.547 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.548 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[947f5b6a-3ed0-4934-8812-9a73c5420f9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.549 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c.pid.haproxy
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:14:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:05.550 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'env', 'PROCESS_TAG=haproxy-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.558 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.598 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523245.5977142, 8df0cb66-9678-4f50-87e0-066cbafcb26b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.598 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] VM Started (Lifecycle Event)#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.722 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.726 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523245.597901, 8df0cb66-9678-4f50-87e0-066cbafcb26b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.726 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.801 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.805 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.811 238945 DEBUG nova.compute.manager [req-d46c6e29-10ce-46b1-9c83-2c56e670d736 req-24149684-2b74-447f-9270-a92a2d707ae3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.812 238945 DEBUG oslo_concurrency.lockutils [req-d46c6e29-10ce-46b1-9c83-2c56e670d736 req-24149684-2b74-447f-9270-a92a2d707ae3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.812 238945 DEBUG oslo_concurrency.lockutils [req-d46c6e29-10ce-46b1-9c83-2c56e670d736 req-24149684-2b74-447f-9270-a92a2d707ae3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.812 238945 DEBUG oslo_concurrency.lockutils [req-d46c6e29-10ce-46b1-9c83-2c56e670d736 req-24149684-2b74-447f-9270-a92a2d707ae3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.813 238945 DEBUG nova.compute.manager [req-d46c6e29-10ce-46b1-9c83-2c56e670d736 req-24149684-2b74-447f-9270-a92a2d707ae3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Processing event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.813 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.818 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.821 238945 INFO nova.virt.libvirt.driver [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Instance spawned successfully.#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.821 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.855 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.855 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523245.817121, 8df0cb66-9678-4f50-87e0-066cbafcb26b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.855 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:14:05 np0005597378 podman[345494]: 2026-01-27 14:14:05.933772962 +0000 UTC m=+0.054339102 container create 40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:14:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:05 np0005597378 systemd[1]: Started libpod-conmon-40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6.scope.
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.995 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.995 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.996 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.996 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.996 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:05 np0005597378 nova_compute[238941]: 2026-01-27 14:14:05.997 238945 DEBUG nova.virt.libvirt.driver [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:06 np0005597378 podman[345494]: 2026-01-27 14:14:05.905227185 +0000 UTC m=+0.025793345 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:14:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:14:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9303fadf09d11970d4b5c7b6edc25e1039ed739f467345412508e92fff02dcff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:06 np0005597378 podman[345494]: 2026-01-27 14:14:06.032999969 +0000 UTC m=+0.153566139 container init 40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 09:14:06 np0005597378 podman[345494]: 2026-01-27 14:14:06.038730173 +0000 UTC m=+0.159296313 container start 40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:14:06 np0005597378 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [NOTICE]   (345514) : New worker (345516) forked
Jan 27 09:14:06 np0005597378 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [NOTICE]   (345514) : Loading success.
Jan 27 09:14:06 np0005597378 nova_compute[238941]: 2026-01-27 14:14:06.085 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:14:06 np0005597378 nova_compute[238941]: 2026-01-27 14:14:06.090 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:14:06 np0005597378 nova_compute[238941]: 2026-01-27 14:14:06.406 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:14:06 np0005597378 nova_compute[238941]: 2026-01-27 14:14:06.603 238945 INFO nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Took 9.37 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:14:06 np0005597378 nova_compute[238941]: 2026-01-27 14:14:06.604 238945 DEBUG nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:14:06 np0005597378 nova_compute[238941]: 2026-01-27 14:14:06.790 238945 INFO nova.compute.manager [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Took 10.54 seconds to build instance.#033[00m
Jan 27 09:14:06 np0005597378 nova_compute[238941]: 2026-01-27 14:14:06.876 238945 DEBUG oslo_concurrency.lockutils [None req-3789032f-0342-4a8d-9db8-7b9457dd8b51 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 27 09:14:07 np0005597378 nova_compute[238941]: 2026-01-27 14:14:07.566 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:14:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.3 total, 600.0 interval#012Cumulative writes: 37K writes, 140K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s#012Cumulative WAL: 37K writes, 13K syncs, 2.78 writes per sync, written: 0.13 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5177 writes, 19K keys, 5177 commit groups, 1.0 writes per commit group, ingest: 19.60 MB, 0.03 MB/s#012Interval WAL: 5177 writes, 2084 syncs, 2.48 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:14:08 np0005597378 nova_compute[238941]: 2026-01-27 14:14:08.378 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:14:08 np0005597378 nova_compute[238941]: 2026-01-27 14:14:08.380 238945 DEBUG nova.compute.manager [req-e2424daa-21a6-4313-9ea7-7a52c638a31b req-874ecd0b-09c2-449a-9552-31b729bcced6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:08 np0005597378 nova_compute[238941]: 2026-01-27 14:14:08.381 238945 DEBUG oslo_concurrency.lockutils [req-e2424daa-21a6-4313-9ea7-7a52c638a31b req-874ecd0b-09c2-449a-9552-31b729bcced6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:08 np0005597378 nova_compute[238941]: 2026-01-27 14:14:08.381 238945 DEBUG oslo_concurrency.lockutils [req-e2424daa-21a6-4313-9ea7-7a52c638a31b req-874ecd0b-09c2-449a-9552-31b729bcced6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:08 np0005597378 nova_compute[238941]: 2026-01-27 14:14:08.381 238945 DEBUG oslo_concurrency.lockutils [req-e2424daa-21a6-4313-9ea7-7a52c638a31b req-874ecd0b-09c2-449a-9552-31b729bcced6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:08 np0005597378 nova_compute[238941]: 2026-01-27 14:14:08.382 238945 DEBUG nova.compute.manager [req-e2424daa-21a6-4313-9ea7-7a52c638a31b req-874ecd0b-09c2-449a-9552-31b729bcced6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] No waiting events found dispatching network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:14:08 np0005597378 nova_compute[238941]: 2026-01-27 14:14:08.382 238945 WARNING nova.compute.manager [req-e2424daa-21a6-4313-9ea7-7a52c638a31b req-874ecd0b-09c2-449a-9552-31b729bcced6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received unexpected event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d for instance with vm_state active and task_state None.#033[00m
Jan 27 09:14:08 np0005597378 nova_compute[238941]: 2026-01-27 14:14:08.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Jan 27 09:14:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:09.909 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:26:f2 2001:db8:0:1:f816:3eff:fe08:26f2 2001:db8::f816:3eff:fe08:26f2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe08:26f2/64 2001:db8::f816:3eff:fe08:26f2/64', 'neutron:device_id': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a89e419-fc51-4e3e-9f6e-6978eb8bc060, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0a181f74-30e7-4bcc-b817-e247dda31c08) old=Port_Binding(mac=['fa:16:3e:08:26:f2 2001:db8::f816:3eff:fe08:26f2'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe08:26f2/64', 'neutron:device_id': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:14:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:09.910 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0a181f74-30e7-4bcc-b817-e247dda31c08 in datapath f2539952-bab4-4694-909b-dbdd2d64b450 updated#033[00m
Jan 27 09:14:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:09.912 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f2539952-bab4-4694-909b-dbdd2d64b450, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:14:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:09.913 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6f5337-8ccd-48cc-b390-c032dd208af8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 673 KiB/s wr, 75 op/s
Jan 27 09:14:11 np0005597378 nova_compute[238941]: 2026-01-27 14:14:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:14:11 np0005597378 nova_compute[238941]: 2026-01-27 14:14:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:14:11 np0005597378 nova_compute[238941]: 2026-01-27 14:14:11.649 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:11 np0005597378 NetworkManager[48904]: <info>  [1769523251.6503] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Jan 27 09:14:11 np0005597378 NetworkManager[48904]: <info>  [1769523251.6515] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Jan 27 09:14:11 np0005597378 nova_compute[238941]: 2026-01-27 14:14:11.673 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:14:11 np0005597378 nova_compute[238941]: 2026-01-27 14:14:11.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:11 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:11Z|01217|binding|INFO|Releasing lport d783a246-d28e-44e1-a0e9-783e23a95051 from this chassis (sb_readonly=0)
Jan 27 09:14:11 np0005597378 nova_compute[238941]: 2026-01-27 14:14:11.763 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:12 np0005597378 nova_compute[238941]: 2026-01-27 14:14:12.568 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 09:14:13 np0005597378 nova_compute[238941]: 2026-01-27 14:14:13.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:14:13 np0005597378 nova_compute[238941]: 2026-01-27 14:14:13.820 238945 DEBUG nova.compute.manager [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-changed-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:13 np0005597378 nova_compute[238941]: 2026-01-27 14:14:13.821 238945 DEBUG nova.compute.manager [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Refreshing instance network info cache due to event network-changed-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:14:13 np0005597378 nova_compute[238941]: 2026-01-27 14:14:13.821 238945 DEBUG oslo_concurrency.lockutils [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:14:13 np0005597378 nova_compute[238941]: 2026-01-27 14:14:13.821 238945 DEBUG oslo_concurrency.lockutils [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:14:13 np0005597378 nova_compute[238941]: 2026-01-27 14:14:13.822 238945 DEBUG nova.network.neutron [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Refreshing network info cache for port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:14:13 np0005597378 nova_compute[238941]: 2026-01-27 14:14:13.887 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:14:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3601.0 total, 600.0 interval#012Cumulative writes: 30K writes, 121K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 30K writes, 10K syncs, 2.91 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4328 writes, 18K keys, 4328 commit groups, 1.0 writes per commit group, ingest: 21.21 MB, 0.04 MB/s#012Interval WAL: 4329 writes, 1619 syncs, 2.67 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:14:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 09:14:16 np0005597378 nova_compute[238941]: 2026-01-27 14:14:16.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:14:16 np0005597378 nova_compute[238941]: 2026-01-27 14:14:16.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:14:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 09:14:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:14:17
Jan 27 09:14:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:14:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:14:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'images', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms']
Jan 27 09:14:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:14:17 np0005597378 nova_compute[238941]: 2026-01-27 14:14:17.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:17 np0005597378 ceph-mds[95200]: mds.beacon.cephfs.compute-0.ukpmyo missed beacon ack from the monitors
Jan 27 09:14:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:14:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:14:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:14:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:14:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:14:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:14:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:14:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:14:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:14:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:14:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:14:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:14:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:14:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:14:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:14:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:14:18 np0005597378 nova_compute[238941]: 2026-01-27 14:14:18.385 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:14:18 np0005597378 nova_compute[238941]: 2026-01-27 14:14:18.592 238945 DEBUG nova.network.neutron [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updated VIF entry in instance network info cache for port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:14:18 np0005597378 nova_compute[238941]: 2026-01-27 14:14:18.593 238945 DEBUG nova.network.neutron [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updating instance_info_cache with network_info: [{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:14:18 np0005597378 nova_compute[238941]: 2026-01-27 14:14:18.765 238945 DEBUG oslo_concurrency.lockutils [req-d07f3f99-5a97-4a5d-a230-813ff7e12e7e req-d58af4ca-d69a-4992-a20f-71d76b334ad2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:14:18 np0005597378 nova_compute[238941]: 2026-01-27 14:14:18.889 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 77 op/s
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).mds e4 check_health: resetting beacon timeouts due to mon delay (slow election?) of 1e+01 seconds
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:14:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:14:20 np0005597378 nova_compute[238941]: 2026-01-27 14:14:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:14:20 np0005597378 podman[345737]: 2026-01-27 14:14:20.32315797 +0000 UTC m=+0.021987938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:14:20 np0005597378 podman[345737]: 2026-01-27 14:14:20.586438098 +0000 UTC m=+0.285268076 container create 04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:14:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:14:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:14:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:14:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:14:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:14:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 93 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 774 KiB/s rd, 816 KiB/s wr, 40 op/s
Jan 27 09:14:20 np0005597378 systemd[1]: Started libpod-conmon-04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3.scope.
Jan 27 09:14:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:14:21 np0005597378 podman[345737]: 2026-01-27 14:14:21.124081788 +0000 UTC m=+0.822911756 container init 04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rubin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:14:21 np0005597378 podman[345737]: 2026-01-27 14:14:21.131037004 +0000 UTC m=+0.829866942 container start 04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rubin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:14:21 np0005597378 interesting_rubin[345753]: 167 167
Jan 27 09:14:21 np0005597378 systemd[1]: libpod-04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3.scope: Deactivated successfully.
Jan 27 09:14:21 np0005597378 podman[345737]: 2026-01-27 14:14:21.280854223 +0000 UTC m=+0.979684181 container attach 04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 09:14:21 np0005597378 podman[345737]: 2026-01-27 14:14:21.28149103 +0000 UTC m=+0.980320978 container died 04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:14:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fce389c0c177cd25ad079dc29fe61b74683f4e0e9be33449b6d3a7b3b431282e-merged.mount: Deactivated successfully.
Jan 27 09:14:21 np0005597378 podman[345737]: 2026-01-27 14:14:21.780885029 +0000 UTC m=+1.479714967 container remove 04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 09:14:21 np0005597378 systemd[1]: libpod-conmon-04cfd1b3e3dcb53a426d5ed29a40f5f83098d65472596dccb4dee51daf51d3d3.scope: Deactivated successfully.
Jan 27 09:14:21 np0005597378 podman[345778]: 2026-01-27 14:14:21.954061932 +0000 UTC m=+0.040916933 container create c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 09:14:21 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:21Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:df:73 10.100.0.14
Jan 27 09:14:21 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:21Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:df:73 10.100.0.14
Jan 27 09:14:21 np0005597378 systemd[1]: Started libpod-conmon-c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130.scope.
Jan 27 09:14:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:14:22 np0005597378 podman[345778]: 2026-01-27 14:14:21.938007453 +0000 UTC m=+0.024862474 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:14:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a5cbeca92cfb22a7aba920ddda192c8f5b28686224099b391473ca12242389/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a5cbeca92cfb22a7aba920ddda192c8f5b28686224099b391473ca12242389/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a5cbeca92cfb22a7aba920ddda192c8f5b28686224099b391473ca12242389/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a5cbeca92cfb22a7aba920ddda192c8f5b28686224099b391473ca12242389/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a5cbeca92cfb22a7aba920ddda192c8f5b28686224099b391473ca12242389/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:22 np0005597378 podman[345778]: 2026-01-27 14:14:22.046822697 +0000 UTC m=+0.133677728 container init c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shirley, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 09:14:22 np0005597378 podman[345778]: 2026-01-27 14:14:22.055396696 +0000 UTC m=+0.142251697 container start c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shirley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:14:22 np0005597378 podman[345778]: 2026-01-27 14:14:22.062481505 +0000 UTC m=+0.149336506 container attach c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:14:22 np0005597378 wizardly_shirley[345794]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:14:22 np0005597378 wizardly_shirley[345794]: --> All data devices are unavailable
Jan 27 09:14:22 np0005597378 systemd[1]: libpod-c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130.scope: Deactivated successfully.
Jan 27 09:14:22 np0005597378 podman[345778]: 2026-01-27 14:14:22.533297463 +0000 UTC m=+0.620152464 container died c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:14:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-15a5cbeca92cfb22a7aba920ddda192c8f5b28686224099b391473ca12242389-merged.mount: Deactivated successfully.
Jan 27 09:14:22 np0005597378 nova_compute[238941]: 2026-01-27 14:14:22.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:22 np0005597378 podman[345778]: 2026-01-27 14:14:22.575571141 +0000 UTC m=+0.662426142 container remove c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:14:22 np0005597378 systemd[1]: libpod-conmon-c9d4ba559dcb62b2f8163a057a8e254a35e58340c8b25108f3f03d2dde568130.scope: Deactivated successfully.
Jan 27 09:14:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 93 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 816 KiB/s wr, 13 op/s
Jan 27 09:14:23 np0005597378 podman[345889]: 2026-01-27 14:14:23.030263987 +0000 UTC m=+0.048655550 container create b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 09:14:23 np0005597378 systemd[1]: Started libpod-conmon-b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e.scope.
Jan 27 09:14:23 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:14:23 np0005597378 podman[345889]: 2026-01-27 14:14:23.011506276 +0000 UTC m=+0.029897839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:14:23 np0005597378 podman[345889]: 2026-01-27 14:14:23.124560114 +0000 UTC m=+0.142951697 container init b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:14:23 np0005597378 podman[345889]: 2026-01-27 14:14:23.131080358 +0000 UTC m=+0.149471921 container start b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:14:23 np0005597378 podman[345889]: 2026-01-27 14:14:23.134491969 +0000 UTC m=+0.152883562 container attach b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 09:14:23 np0005597378 modest_neumann[345905]: 167 167
Jan 27 09:14:23 np0005597378 systemd[1]: libpod-b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e.scope: Deactivated successfully.
Jan 27 09:14:23 np0005597378 podman[345889]: 2026-01-27 14:14:23.139190264 +0000 UTC m=+0.157581847 container died b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:14:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9d3db06684efcfb39048fe5affe825f119d220c9e5308e4e9eec12624bb03684-merged.mount: Deactivated successfully.
Jan 27 09:14:23 np0005597378 podman[345889]: 2026-01-27 14:14:23.186677381 +0000 UTC m=+0.205068934 container remove b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:14:23 np0005597378 systemd[1]: libpod-conmon-b180382d0450a2d230555d40795100d943d5c93573e9b22e6665f3d353f7722e.scope: Deactivated successfully.
Jan 27 09:14:23 np0005597378 podman[345928]: 2026-01-27 14:14:23.399959935 +0000 UTC m=+0.054522896 container create 7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noyce, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 09:14:23 np0005597378 systemd[1]: Started libpod-conmon-7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab.scope.
Jan 27 09:14:23 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:14:23 np0005597378 podman[345928]: 2026-01-27 14:14:23.379245572 +0000 UTC m=+0.033808563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:14:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3482c149fbd8910e93ac67959956fa15fa6b965fbf8181585952a86684763d54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3482c149fbd8910e93ac67959956fa15fa6b965fbf8181585952a86684763d54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3482c149fbd8910e93ac67959956fa15fa6b965fbf8181585952a86684763d54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3482c149fbd8910e93ac67959956fa15fa6b965fbf8181585952a86684763d54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:23 np0005597378 podman[345928]: 2026-01-27 14:14:23.488930629 +0000 UTC m=+0.143493610 container init 7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:14:23 np0005597378 podman[345928]: 2026-01-27 14:14:23.498563036 +0000 UTC m=+0.153126017 container start 7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noyce, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:14:23 np0005597378 podman[345928]: 2026-01-27 14:14:23.50620603 +0000 UTC m=+0.160769011 container attach 7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noyce, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:14:23 np0005597378 nova_compute[238941]: 2026-01-27 14:14:23.539 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:23 np0005597378 nova_compute[238941]: 2026-01-27 14:14:23.541 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:23 np0005597378 nova_compute[238941]: 2026-01-27 14:14:23.564 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:14:23 np0005597378 nova_compute[238941]: 2026-01-27 14:14:23.643 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:23 np0005597378 nova_compute[238941]: 2026-01-27 14:14:23.643 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:23 np0005597378 nova_compute[238941]: 2026-01-27 14:14:23.651 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:14:23 np0005597378 nova_compute[238941]: 2026-01-27 14:14:23.651 238945 INFO nova.compute.claims [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:14:23 np0005597378 nova_compute[238941]: 2026-01-27 14:14:23.774 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]: {
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:    "0": [
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:        {
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "devices": [
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "/dev/loop3"
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            ],
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_name": "ceph_lv0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_size": "21470642176",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "name": "ceph_lv0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "tags": {
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.cluster_name": "ceph",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.crush_device_class": "",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.encrypted": "0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.objectstore": "bluestore",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.osd_id": "0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.type": "block",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.vdo": "0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.with_tpm": "0"
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            },
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "type": "block",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "vg_name": "ceph_vg0"
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:        }
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:    ],
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:    "1": [
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:        {
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "devices": [
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "/dev/loop4"
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            ],
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_name": "ceph_lv1",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_size": "21470642176",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "name": "ceph_lv1",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "tags": {
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.cluster_name": "ceph",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.crush_device_class": "",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.encrypted": "0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.objectstore": "bluestore",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.osd_id": "1",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.type": "block",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.vdo": "0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.with_tpm": "0"
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            },
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "type": "block",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "vg_name": "ceph_vg1"
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:        }
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:    ],
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:    "2": [
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:        {
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "devices": [
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "/dev/loop5"
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            ],
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_name": "ceph_lv2",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_size": "21470642176",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "name": "ceph_lv2",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "tags": {
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.cluster_name": "ceph",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.crush_device_class": "",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.encrypted": "0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.objectstore": "bluestore",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.osd_id": "2",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.type": "block",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.vdo": "0",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:                "ceph.with_tpm": "0"
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            },
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "type": "block",
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:            "vg_name": "ceph_vg2"
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:        }
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]:    ]
Jan 27 09:14:23 np0005597378 blissful_noyce[345944]: }
Jan 27 09:14:23 np0005597378 systemd[1]: libpod-7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab.scope: Deactivated successfully.
Jan 27 09:14:23 np0005597378 podman[345928]: 2026-01-27 14:14:23.850721766 +0000 UTC m=+0.505284727 container died 7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:14:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3482c149fbd8910e93ac67959956fa15fa6b965fbf8181585952a86684763d54-merged.mount: Deactivated successfully.
Jan 27 09:14:23 np0005597378 nova_compute[238941]: 2026-01-27 14:14:23.892 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:23 np0005597378 podman[345928]: 2026-01-27 14:14:23.901265425 +0000 UTC m=+0.555828386 container remove 7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 09:14:23 np0005597378 systemd[1]: libpod-conmon-7824093d36a86c087b0b1f2bc8bd7215c9ecd19a63072a93e3664e55d809beab.scope: Deactivated successfully.
Jan 27 09:14:23 np0005597378 podman[345955]: 2026-01-27 14:14:23.961844472 +0000 UTC m=+0.075611849 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 27 09:14:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:14:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346327051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.366 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.374 238945 DEBUG nova.compute.provider_tree [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:14:24 np0005597378 podman[346063]: 2026-01-27 14:14:24.403643764 +0000 UTC m=+0.047958651 container create bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.420 238945 DEBUG nova.scheduler.client.report [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:14:24 np0005597378 systemd[1]: Started libpod-conmon-bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa.scope.
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.448 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.449 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:14:24 np0005597378 podman[346063]: 2026-01-27 14:14:24.382962893 +0000 UTC m=+0.027277790 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:14:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.499 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.500 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:14:24 np0005597378 podman[346063]: 2026-01-27 14:14:24.502940444 +0000 UTC m=+0.147255351 container init bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:14:24 np0005597378 podman[346063]: 2026-01-27 14:14:24.513159778 +0000 UTC m=+0.157474665 container start bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:14:24 np0005597378 podman[346063]: 2026-01-27 14:14:24.517077401 +0000 UTC m=+0.161392298 container attach bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:14:24 np0005597378 adoring_chatterjee[346081]: 167 167
Jan 27 09:14:24 np0005597378 systemd[1]: libpod-bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa.scope: Deactivated successfully.
Jan 27 09:14:24 np0005597378 podman[346063]: 2026-01-27 14:14:24.52150056 +0000 UTC m=+0.165815447 container died bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:14:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.534 238945 INFO nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:14:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9db9c15819082bb56e8ef508a53f3150527a131dcbbc6e79ee6b7740f2260496-merged.mount: Deactivated successfully.
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.554 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:14:24 np0005597378 podman[346063]: 2026-01-27 14:14:24.564717673 +0000 UTC m=+0.209032550 container remove bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 09:14:24 np0005597378 systemd[1]: libpod-conmon-bd5f146c5b4e9922f17d2bde666cdd54747270def1b30c95266464ca2b1557aa.scope: Deactivated successfully.
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.644 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.646 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.646 238945 INFO nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Creating image(s)#033[00m
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.668 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.695 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.717 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.720 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:24 np0005597378 podman[346147]: 2026-01-27 14:14:24.75304101 +0000 UTC m=+0.034860641 container create aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_engelbart, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:14:24 np0005597378 systemd[1]: Started libpod-conmon-aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05.scope.
Jan 27 09:14:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:14:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c4eac2bb6c0f7992e9c381619c71ab7bd6b2b14db06c822c90e218fd1a1e4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c4eac2bb6c0f7992e9c381619c71ab7bd6b2b14db06c822c90e218fd1a1e4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c4eac2bb6c0f7992e9c381619c71ab7bd6b2b14db06c822c90e218fd1a1e4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c4eac2bb6c0f7992e9c381619c71ab7bd6b2b14db06c822c90e218fd1a1e4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.808 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.808 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.809 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.809 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:24 np0005597378 podman[346147]: 2026-01-27 14:14:24.814317746 +0000 UTC m=+0.096137397 container init aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:14:24 np0005597378 podman[346147]: 2026-01-27 14:14:24.821666261 +0000 UTC m=+0.103485892 container start aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:14:24 np0005597378 podman[346147]: 2026-01-27 14:14:24.826225523 +0000 UTC m=+0.108045154 container attach aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_engelbart, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.832 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:24 np0005597378 podman[346147]: 2026-01-27 14:14:24.739157559 +0000 UTC m=+0.020977210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.837 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:24 np0005597378 nova_compute[238941]: 2026-01-27 14:14:24.873 238945 DEBUG nova.policy [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:14:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 120 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 207 KiB/s rd, 2.0 MiB/s wr, 58 op/s
Jan 27 09:14:25 np0005597378 nova_compute[238941]: 2026-01-27 14:14:25.122 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:25 np0005597378 nova_compute[238941]: 2026-01-27 14:14:25.182 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:14:25 np0005597378 nova_compute[238941]: 2026-01-27 14:14:25.286 238945 DEBUG nova.objects.instance [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid ffbbdbe0-9dc8-46b2-9492-e5d63351a47f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:14:25 np0005597378 nova_compute[238941]: 2026-01-27 14:14:25.301 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:14:25 np0005597378 nova_compute[238941]: 2026-01-27 14:14:25.301 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Ensure instance console log exists: /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:14:25 np0005597378 nova_compute[238941]: 2026-01-27 14:14:25.301 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:25 np0005597378 nova_compute[238941]: 2026-01-27 14:14:25.302 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:25 np0005597378 nova_compute[238941]: 2026-01-27 14:14:25.302 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:25 np0005597378 lvm[346365]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:14:25 np0005597378 lvm[346365]: VG ceph_vg1 finished
Jan 27 09:14:25 np0005597378 lvm[346364]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:14:25 np0005597378 lvm[346364]: VG ceph_vg0 finished
Jan 27 09:14:25 np0005597378 lvm[346367]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:14:25 np0005597378 lvm[346367]: VG ceph_vg2 finished
Jan 27 09:14:25 np0005597378 optimistic_engelbart[346175]: {}
Jan 27 09:14:25 np0005597378 systemd[1]: libpod-aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05.scope: Deactivated successfully.
Jan 27 09:14:25 np0005597378 systemd[1]: libpod-aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05.scope: Consumed 1.262s CPU time.
Jan 27 09:14:25 np0005597378 podman[346147]: 2026-01-27 14:14:25.637050055 +0000 UTC m=+0.918869716 container died aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:14:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-98c4eac2bb6c0f7992e9c381619c71ab7bd6b2b14db06c822c90e218fd1a1e4b-merged.mount: Deactivated successfully.
Jan 27 09:14:25 np0005597378 podman[346147]: 2026-01-27 14:14:25.783189756 +0000 UTC m=+1.065009427 container remove aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 09:14:25 np0005597378 systemd[1]: libpod-conmon-aff875fd6a16e8015024d9805c390594fc0ee5bcd8feee7de63db5d4c6d66a05.scope: Deactivated successfully.
Jan 27 09:14:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:14:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:14:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:14:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:14:25 np0005597378 podman[346384]: 2026-01-27 14:14:25.922082913 +0000 UTC m=+0.092256683 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 27 09:14:26 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:14:26 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:14:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 147 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 3.0 MiB/s wr, 69 op/s
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 09:14:27 np0005597378 nova_compute[238941]: 2026-01-27 14:14:27.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000932665148183161 of space, bias 1.0, pg target 0.2797995444549483 quantized to 32 (current 32)
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695856858318249 of space, bias 1.0, pg target 0.20087570574954747 quantized to 32 (current 32)
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0609991609285592e-06 of space, bias 4.0, pg target 0.001273198993114271 quantized to 16 (current 16)
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:14:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:14:27 np0005597378 nova_compute[238941]: 2026-01-27 14:14:27.887 238945 INFO nova.compute.manager [None req-fc1fd90f-3ad7-4aa6-9085-d61293c57683 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Get console output#033[00m
Jan 27 09:14:27 np0005597378 nova_compute[238941]: 2026-01-27 14:14:27.893 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:14:28 np0005597378 nova_compute[238941]: 2026-01-27 14:14:28.147 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Successfully created port: d98527e5-8812-43b6-957e-7529c80c2873 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:14:28 np0005597378 nova_compute[238941]: 2026-01-27 14:14:28.896 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Jan 27 09:14:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:30 np0005597378 nova_compute[238941]: 2026-01-27 14:14:30.461 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Successfully created port: 76b015b5-672a-451a-8d3a-e6c7459987af _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:14:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Jan 27 09:14:32 np0005597378 nova_compute[238941]: 2026-01-27 14:14:32.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 3.1 MiB/s wr, 81 op/s
Jan 27 09:14:33 np0005597378 nova_compute[238941]: 2026-01-27 14:14:33.191 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Successfully updated port: d98527e5-8812-43b6-957e-7529c80c2873 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:14:33 np0005597378 nova_compute[238941]: 2026-01-27 14:14:33.315 238945 DEBUG nova.compute.manager [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-changed-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:33 np0005597378 nova_compute[238941]: 2026-01-27 14:14:33.315 238945 DEBUG nova.compute.manager [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing instance network info cache due to event network-changed-d98527e5-8812-43b6-957e-7529c80c2873. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:14:33 np0005597378 nova_compute[238941]: 2026-01-27 14:14:33.316 238945 DEBUG oslo_concurrency.lockutils [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:14:33 np0005597378 nova_compute[238941]: 2026-01-27 14:14:33.316 238945 DEBUG oslo_concurrency.lockutils [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:14:33 np0005597378 nova_compute[238941]: 2026-01-27 14:14:33.316 238945 DEBUG nova.network.neutron [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing network info cache for port d98527e5-8812-43b6-957e-7529c80c2873 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:14:33 np0005597378 nova_compute[238941]: 2026-01-27 14:14:33.633 238945 DEBUG nova.network.neutron [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:14:33 np0005597378 nova_compute[238941]: 2026-01-27 14:14:33.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:34 np0005597378 nova_compute[238941]: 2026-01-27 14:14:34.234 238945 DEBUG nova.network.neutron [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:14:34 np0005597378 nova_compute[238941]: 2026-01-27 14:14:34.286 238945 DEBUG oslo_concurrency.lockutils [req-6bc40277-53a8-4b2a-a2c4-5cc5980240c4 req-50e6bafe-a43f-4e0d-8979-9be3df6438c9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:14:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:34 np0005597378 nova_compute[238941]: 2026-01-27 14:14:34.683 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Successfully updated port: 76b015b5-672a-451a-8d3a-e6c7459987af _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:14:34 np0005597378 nova_compute[238941]: 2026-01-27 14:14:34.737 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:14:34 np0005597378 nova_compute[238941]: 2026-01-27 14:14:34.737 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:14:34 np0005597378 nova_compute[238941]: 2026-01-27 14:14:34.738 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:14:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 3.1 MiB/s wr, 82 op/s
Jan 27 09:14:34 np0005597378 nova_compute[238941]: 2026-01-27 14:14:34.987 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:14:35 np0005597378 nova_compute[238941]: 2026-01-27 14:14:35.482 238945 DEBUG nova.compute.manager [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-changed-76b015b5-672a-451a-8d3a-e6c7459987af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:35 np0005597378 nova_compute[238941]: 2026-01-27 14:14:35.482 238945 DEBUG nova.compute.manager [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing instance network info cache due to event network-changed-76b015b5-672a-451a-8d3a-e6c7459987af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:14:35 np0005597378 nova_compute[238941]: 2026-01-27 14:14:35.483 238945 DEBUG oslo_concurrency.lockutils [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:14:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 141 KiB/s rd, 1.9 MiB/s wr, 36 op/s
Jan 27 09:14:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:37.469 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.470 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:37.471 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.535 238945 DEBUG nova.network.neutron [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": 
"2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.576 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.651 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.652 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Instance network_info: |[{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.652 238945 DEBUG oslo_concurrency.lockutils [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.652 238945 DEBUG nova.network.neutron [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing network info cache for port 76b015b5-672a-451a-8d3a-e6c7459987af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.657 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Start _get_guest_xml network_info=[{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.662 238945 WARNING nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.669 238945 DEBUG nova.virt.libvirt.host [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.669 238945 DEBUG nova.virt.libvirt.host [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.672 238945 DEBUG nova.virt.libvirt.host [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.673 238945 DEBUG nova.virt.libvirt.host [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.673 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.673 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.674 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.674 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.674 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.674 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.674 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.675 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.675 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.675 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.675 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.676 238945 DEBUG nova.virt.hardware [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:14:37 np0005597378 nova_compute[238941]: 2026-01-27 14:14:37.678 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:14:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867912657' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.263 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.289 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.293 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.902 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:14:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1416697755' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:14:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 973 KiB/s wr, 26 op/s
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.948 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.949 238945 DEBUG nova.virt.libvirt.vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-342597412',display_name='tempest-TestGettingAddress-server-342597412',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-342597412',id=118,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5swdz93a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:14:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ffbbdbe0-9dc8-46b2-9492-e5d63351a47f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.950 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.950 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.951 238945 DEBUG nova.virt.libvirt.vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-342597412',display_name='tempest-TestGettingAddress-server-342597412',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-342597412',id=118,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5swdz93a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:14:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ffbbdbe0-9dc8-46b2-9492-e5d63351a47f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.951 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.952 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.953 238945 DEBUG nova.objects.instance [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid ffbbdbe0-9dc8-46b2-9492-e5d63351a47f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.979 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  <uuid>ffbbdbe0-9dc8-46b2-9492-e5d63351a47f</uuid>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  <name>instance-00000076</name>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-342597412</nova:name>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:14:37</nova:creationTime>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        <nova:port uuid="d98527e5-8812-43b6-957e-7529c80c2873">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        <nova:port uuid="76b015b5-672a-451a-8d3a-e6c7459987af">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe1d:e05c" ipVersion="6"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe1d:e05c" ipVersion="6"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <entry name="serial">ffbbdbe0-9dc8-46b2-9492-e5d63351a47f</entry>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <entry name="uuid">ffbbdbe0-9dc8-46b2-9492-e5d63351a47f</entry>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk.config">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:78:dc:ff"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <target dev="tapd98527e5-88"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:1d:e0:5c"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <target dev="tap76b015b5-67"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/console.log" append="off"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:14:38 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:14:38 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:14:38 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:14:38 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.980 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Preparing to wait for external event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.980 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.981 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.981 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.981 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Preparing to wait for external event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.981 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.981 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.982 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.982 238945 DEBUG nova.virt.libvirt.vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-342597412',display_name='tempest-TestGettingAddress-server-342597412',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-342597412',id=118,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5swdz93a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:14:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ffbbdbe0-9dc8-46b2-9492-e5d63351a47f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.982 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.983 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.983 238945 DEBUG os_vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.984 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.984 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.984 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.987 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd98527e5-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.988 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd98527e5-88, col_values=(('external_ids', {'iface-id': 'd98527e5-8812-43b6-957e-7529c80c2873', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:dc:ff', 'vm-uuid': 'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:38 np0005597378 NetworkManager[48904]: <info>  [1769523278.9902] manager: (tapd98527e5-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.991 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.995 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.995 238945 INFO os_vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88')#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.996 238945 DEBUG nova.virt.libvirt.vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-342597412',display_name='tempest-TestGettingAddress-server-342597412',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-342597412',id=118,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5swdz93a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:14:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ffbbdbe0-9dc8-46b2-9492-e5d63351a47f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.996 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.997 238945 DEBUG nova.network.os_vif_util [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.997 238945 DEBUG os_vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.997 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.997 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:38 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.998 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.999 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:38.999 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76b015b5-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.000 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76b015b5-67, col_values=(('external_ids', {'iface-id': '76b015b5-672a-451a-8d3a-e6c7459987af', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:e0:5c', 'vm-uuid': 'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.000 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:39 np0005597378 NetworkManager[48904]: <info>  [1769523279.0017] manager: (tap76b015b5-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/500)
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.003 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.006 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.007 238945 INFO os_vif [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67')#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.091 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.091 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.092 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:78:dc:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.092 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:1d:e0:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.093 238945 INFO nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Using config drive#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.116 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:39.472 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.687 238945 INFO nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Creating config drive at /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/disk.config#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.694 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps_quhgvs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.841 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps_quhgvs" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.864 238945 DEBUG nova.storage.rbd_utils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.868 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/disk.config ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.990 238945 DEBUG oslo_concurrency.processutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/disk.config ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:39 np0005597378 nova_compute[238941]: 2026-01-27 14:14:39.991 238945 INFO nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Deleting local config drive /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f/disk.config because it was imported into RBD.#033[00m
Jan 27 09:14:40 np0005597378 NetworkManager[48904]: <info>  [1769523280.0367] manager: (tapd98527e5-88): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Jan 27 09:14:40 np0005597378 kernel: tapd98527e5-88: entered promiscuous mode
Jan 27 09:14:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:40Z|01218|binding|INFO|Claiming lport d98527e5-8812-43b6-957e-7529c80c2873 for this chassis.
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:40Z|01219|binding|INFO|d98527e5-8812-43b6-957e-7529c80c2873: Claiming fa:16:3e:78:dc:ff 10.100.0.8
Jan 27 09:14:40 np0005597378 NetworkManager[48904]: <info>  [1769523280.0538] manager: (tap76b015b5-67): new Tun device (/org/freedesktop/NetworkManager/Devices/502)
Jan 27 09:14:40 np0005597378 kernel: tap76b015b5-67: entered promiscuous mode
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.056 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:40Z|01220|binding|INFO|Setting lport d98527e5-8812-43b6-957e-7529c80c2873 ovn-installed in OVS
Jan 27 09:14:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:40Z|01221|if_status|INFO|Dropped 2 log messages in last 119 seconds (most recently, 119 seconds ago) due to excessive rate
Jan 27 09:14:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:40Z|01222|if_status|INFO|Not updating pb chassis for 76b015b5-672a-451a-8d3a-e6c7459987af now as sb is readonly
Jan 27 09:14:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:40Z|01223|binding|INFO|Claiming lport 76b015b5-672a-451a-8d3a-e6c7459987af for this chassis.
Jan 27 09:14:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:40Z|01224|binding|INFO|76b015b5-672a-451a-8d3a-e6c7459987af: Claiming fa:16:3e:1d:e0:5c 2001:db8:0:1:f816:3eff:fe1d:e05c 2001:db8::f816:3eff:fe1d:e05c
Jan 27 09:14:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:40Z|01225|binding|INFO|Setting lport d98527e5-8812-43b6-957e-7529c80c2873 up in Southbound
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.059 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:dc:ff 10.100.0.8'], port_security=['fa:16:3e:78:dc:ff 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-107e1e32-614b-4ab8-bbad-b8ada050804e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c2bb1b0-9703-440f-a697-1b5346ed2fe2, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d98527e5-8812-43b6-957e-7529c80c2873) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.060 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d98527e5-8812-43b6-957e-7529c80c2873 in datapath 107e1e32-614b-4ab8-bbad-b8ada050804e bound to our chassis#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.062 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 107e1e32-614b-4ab8-bbad-b8ada050804e#033[00m
Jan 27 09:14:40 np0005597378 systemd-udevd[346578]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:14:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:40Z|01226|binding|INFO|Setting lport 76b015b5-672a-451a-8d3a-e6c7459987af ovn-installed in OVS
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.072 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:40Z|01227|binding|INFO|Setting lport 76b015b5-672a-451a-8d3a-e6c7459987af up in Southbound
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.073 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.072 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:e0:5c 2001:db8:0:1:f816:3eff:fe1d:e05c 2001:db8::f816:3eff:fe1d:e05c'], port_security=['fa:16:3e:1d:e0:5c 2001:db8:0:1:f816:3eff:fe1d:e05c 2001:db8::f816:3eff:fe1d:e05c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe1d:e05c/64 2001:db8::f816:3eff:fe1d:e05c/64', 'neutron:device_id': 'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a89e419-fc51-4e3e-9f6e-6978eb8bc060, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=76b015b5-672a-451a-8d3a-e6c7459987af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:14:40 np0005597378 systemd-udevd[346579]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.079 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f57776d-d79b-4308-b9dc-4ea46aef68a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.080 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap107e1e32-61 in ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.083 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap107e1e32-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.083 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0b526f04-580d-4300-ae4b-e434754de355]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.085 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b5853c03-8a05-4cb6-b760-3754e006148f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 NetworkManager[48904]: <info>  [1769523280.0911] device (tap76b015b5-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:14:40 np0005597378 NetworkManager[48904]: <info>  [1769523280.0916] device (tap76b015b5-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:14:40 np0005597378 NetworkManager[48904]: <info>  [1769523280.0933] device (tapd98527e5-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:14:40 np0005597378 NetworkManager[48904]: <info>  [1769523280.0938] device (tapd98527e5-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.100 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[f641fdca-feb3-4547-ac4f-72b39a3baecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 systemd-machined[207425]: New machine qemu-150-instance-00000076.
Jan 27 09:14:40 np0005597378 systemd[1]: Started Virtual Machine qemu-150-instance-00000076.
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.117 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90188e3e-240b-45fd-9b28-699d9a046570]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.145 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[15ad35ae-d721-42b4-9f9b-c50eac29746b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 NetworkManager[48904]: <info>  [1769523280.1513] manager: (tap107e1e32-60): new Veth device (/org/freedesktop/NetworkManager/Devices/503)
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.151 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb44072-6158-4895-b637-64114ecfd2b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.180 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a9273de5-7193-42eb-9c0b-699362e4e9fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.183 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[66ee43f1-a62e-434c-9b2f-246ea81bfac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 NetworkManager[48904]: <info>  [1769523280.2019] device (tap107e1e32-60): carrier: link connected
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.209 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[10cf6f3c-cb6f-4569-b7f2-cc67a4ec2c02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.224 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f6743a-aaac-4a1e-b134-ed98a70ceef2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap107e1e32-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:23:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591982, 'reachable_time': 37220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346614, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.238 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff24d95-475c-448a-a506-a3e432b1695b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:2367'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591982, 'tstamp': 591982}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346615, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.255 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ba39d3-ff62-4de7-99c4-54e60ff2b271]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap107e1e32-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:23:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591982, 'reachable_time': 37220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346616, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.292 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa8fc5d-2df9-409d-aafb-3385e142623e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.355 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[530451db-26f3-4e7a-a94f-c52000770d4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.357 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap107e1e32-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.357 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.357 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap107e1e32-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:40 np0005597378 NetworkManager[48904]: <info>  [1769523280.3602] manager: (tap107e1e32-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/504)
Jan 27 09:14:40 np0005597378 kernel: tap107e1e32-60: entered promiscuous mode
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.368 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap107e1e32-60, col_values=(('external_ids', {'iface-id': '4892ac35-2643-4e0c-8a95-5275bc7e88da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.368 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:40Z|01228|binding|INFO|Releasing lport 4892ac35-2643-4e0c-8a95-5275bc7e88da from this chassis (sb_readonly=0)
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.369 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.372 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/107e1e32-614b-4ab8-bbad-b8ada050804e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/107e1e32-614b-4ab8-bbad-b8ada050804e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.373 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5b0263-d287-4004-8cc8-0cd3f887d860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.374 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-107e1e32-614b-4ab8-bbad-b8ada050804e
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/107e1e32-614b-4ab8-bbad-b8ada050804e.pid.haproxy
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 107e1e32-614b-4ab8-bbad-b8ada050804e
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.375 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'env', 'PROCESS_TAG=haproxy-107e1e32-614b-4ab8-bbad-b8ada050804e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/107e1e32-614b-4ab8-bbad-b8ada050804e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.382 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.733 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523280.7325273, ffbbdbe0-9dc8-46b2-9492-e5d63351a47f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.733 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] VM Started (Lifecycle Event)#033[00m
Jan 27 09:14:40 np0005597378 podman[346690]: 2026-01-27 14:14:40.751852978 +0000 UTC m=+0.061123461 container create df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.777 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.783 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523280.7327385, ffbbdbe0-9dc8-46b2-9492-e5d63351a47f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.784 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:14:40 np0005597378 systemd[1]: Started libpod-conmon-df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37.scope.
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.806 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.810 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:14:40 np0005597378 podman[346690]: 2026-01-27 14:14:40.714576494 +0000 UTC m=+0.023846997 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:14:40 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:14:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdffd4ed78cd708ae200ae801f9f3926a40989c5dc91892312c32c1cbf33b62c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:40 np0005597378 podman[346690]: 2026-01-27 14:14:40.852870935 +0000 UTC m=+0.162141418 container init df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 09:14:40 np0005597378 podman[346690]: 2026-01-27 14:14:40.858870355 +0000 UTC m=+0.168140838 container start df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 27 09:14:40 np0005597378 nova_compute[238941]: 2026-01-27 14:14:40.864 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:14:40 np0005597378 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [NOTICE]   (346710) : New worker (346712) forked
Jan 27 09:14:40 np0005597378 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [NOTICE]   (346710) : Loading success.
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.911 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 76b015b5-672a-451a-8d3a-e6c7459987af in datapath f2539952-bab4-4694-909b-dbdd2d64b450 unbound from our chassis#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.913 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f2539952-bab4-4694-909b-dbdd2d64b450#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.924 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05c34c59-c35d-4651-b772-d7382ce8b1db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.924 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf2539952-b1 in ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.926 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf2539952-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.926 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c66c85b5-2554-4131-bb6d-6d5a07b73ecb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.927 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[722f799c-19f3-4820-ab80-4b7daaab4699]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.938 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[16a439d1-02a9-40e5-8052-965a435d5590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 28 KiB/s wr, 5 op/s
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.963 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9de82c3e-caf8-47dd-ae14-bfca237158ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.992 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[092b8bfe-f558-4ba8-8159-becef1663a5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:40.999 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1828556c-5f18-4aee-bda2-dfc9fa1c41ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:41 np0005597378 systemd-udevd[346602]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:14:41 np0005597378 NetworkManager[48904]: <info>  [1769523281.0005] manager: (tapf2539952-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/505)
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.037 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[182e66d3-b1c0-4bb9-b898-877daa259f48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.040 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d58335bf-5653-4dc0-84e0-a91580ec2f85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:41 np0005597378 NetworkManager[48904]: <info>  [1769523281.0702] device (tapf2539952-b0): carrier: link connected
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.077 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b93997-336c-44e7-9024-daef3deb960f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.097 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40d00149-f2d1-4464-a664-a94e1497ee75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2539952-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592069, 'reachable_time': 22870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346731, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.118 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c05af446-175c-4412-b7e8-d0076fa2c5fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:26f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592069, 'tstamp': 592069}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346732, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.135 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9df62a-0fc0-4fa0-bb41-42ee518dca65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2539952-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592069, 'reachable_time': 22870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346733, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.168 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ca5f91ad-10b2-4dd8-af0d-c5588d5feed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.199 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7942d0fc-41cf-418c-820f-935c461ae51d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.200 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2539952-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.200 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.200 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2539952-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.202 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:41 np0005597378 NetworkManager[48904]: <info>  [1769523281.2030] manager: (tapf2539952-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Jan 27 09:14:41 np0005597378 kernel: tapf2539952-b0: entered promiscuous mode
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.205 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.206 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf2539952-b0, col_values=(('external_ids', {'iface-id': '0a181f74-30e7-4bcc-b817-e247dda31c08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:41Z|01229|binding|INFO|Releasing lport 0a181f74-30e7-4bcc-b817-e247dda31c08 from this chassis (sb_readonly=0)
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.221 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f2539952-bab4-4694-909b-dbdd2d64b450.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f2539952-bab4-4694-909b-dbdd2d64b450.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.222 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[577c238f-63d6-402c-b16f-b8fec8a289dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.223 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-f2539952-bab4-4694-909b-dbdd2d64b450
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/f2539952-bab4-4694-909b-dbdd2d64b450.pid.haproxy
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID f2539952-bab4-4694-909b-dbdd2d64b450
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:14:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:41.224 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'env', 'PROCESS_TAG=haproxy-f2539952-bab4-4694-909b-dbdd2d64b450', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f2539952-bab4-4694-909b-dbdd2d64b450.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.445 238945 DEBUG nova.compute.manager [req-e9765976-9cdd-4753-85fa-5d30880d7718 req-3086ee97-c0dd-4e50-b76e-f9f9498f6be6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.445 238945 DEBUG oslo_concurrency.lockutils [req-e9765976-9cdd-4753-85fa-5d30880d7718 req-3086ee97-c0dd-4e50-b76e-f9f9498f6be6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.446 238945 DEBUG oslo_concurrency.lockutils [req-e9765976-9cdd-4753-85fa-5d30880d7718 req-3086ee97-c0dd-4e50-b76e-f9f9498f6be6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.446 238945 DEBUG oslo_concurrency.lockutils [req-e9765976-9cdd-4753-85fa-5d30880d7718 req-3086ee97-c0dd-4e50-b76e-f9f9498f6be6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.446 238945 DEBUG nova.compute.manager [req-e9765976-9cdd-4753-85fa-5d30880d7718 req-3086ee97-c0dd-4e50-b76e-f9f9498f6be6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Processing event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.499 238945 DEBUG nova.network.neutron [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updated VIF entry in instance network info cache for port 76b015b5-672a-451a-8d3a-e6c7459987af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.500 238945 DEBUG nova.network.neutron [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": 
[], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.531 238945 DEBUG nova.compute.manager [req-a47aec91-5cf0-446e-ad13-db08508c6183 req-d886ed35-6253-48f2-8dd3-d4e9568d2e87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.532 238945 DEBUG oslo_concurrency.lockutils [req-a47aec91-5cf0-446e-ad13-db08508c6183 req-d886ed35-6253-48f2-8dd3-d4e9568d2e87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.532 238945 DEBUG oslo_concurrency.lockutils [req-a47aec91-5cf0-446e-ad13-db08508c6183 req-d886ed35-6253-48f2-8dd3-d4e9568d2e87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.532 238945 DEBUG oslo_concurrency.lockutils [req-a47aec91-5cf0-446e-ad13-db08508c6183 req-d886ed35-6253-48f2-8dd3-d4e9568d2e87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.532 238945 DEBUG nova.compute.manager [req-a47aec91-5cf0-446e-ad13-db08508c6183 req-d886ed35-6253-48f2-8dd3-d4e9568d2e87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Processing event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.533 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.536 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523281.5361135, ffbbdbe0-9dc8-46b2-9492-e5d63351a47f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.536 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.538 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.541 238945 DEBUG oslo_concurrency.lockutils [req-f18c85ba-8636-449a-941a-349db6da2cad req-6cec474d-5cdd-4ff1-bcb6-cb34cf3c4a06 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.542 238945 INFO nova.virt.libvirt.driver [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Instance spawned successfully.#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.542 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.572 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:14:41 np0005597378 podman[346762]: 2026-01-27 14:14:41.576192701 +0000 UTC m=+0.046720557 container create cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.578 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.581 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.582 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.583 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.583 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.584 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.584 238945 DEBUG nova.virt.libvirt.driver [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:41 np0005597378 systemd[1]: Started libpod-conmon-cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3.scope.
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.634 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:14:41 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:14:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e50b76deb35f09fe4e76fd932ccfbf2ba9078b715270b172138bed3ca5fc986/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:41 np0005597378 podman[346762]: 2026-01-27 14:14:41.553480615 +0000 UTC m=+0.024008491 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:14:41 np0005597378 podman[346762]: 2026-01-27 14:14:41.663750189 +0000 UTC m=+0.134278045 container init cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:14:41 np0005597378 podman[346762]: 2026-01-27 14:14:41.671178457 +0000 UTC m=+0.141706303 container start cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.684 238945 INFO nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Took 17.04 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.684 238945 DEBUG nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:14:41 np0005597378 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [NOTICE]   (346781) : New worker (346783) forked
Jan 27 09:14:41 np0005597378 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [NOTICE]   (346781) : Loading success.
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.749 238945 INFO nova.compute.manager [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Took 18.14 seconds to build instance.#033[00m
Jan 27 09:14:41 np0005597378 nova_compute[238941]: 2026-01-27 14:14:41.769 238945 DEBUG oslo_concurrency.lockutils [None req-1fc28e36-1906-4b48-98df-e71e5db33460 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:42 np0005597378 nova_compute[238941]: 2026-01-27 14:14:42.578 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 17 KiB/s wr, 5 op/s
Jan 27 09:14:43 np0005597378 nova_compute[238941]: 2026-01-27 14:14:43.661 238945 DEBUG nova.compute.manager [req-e8cb9156-d377-461d-940f-cf647e2cf662 req-b9453d56-773e-47ef-82a1-b53186d7d674 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:43 np0005597378 nova_compute[238941]: 2026-01-27 14:14:43.662 238945 DEBUG oslo_concurrency.lockutils [req-e8cb9156-d377-461d-940f-cf647e2cf662 req-b9453d56-773e-47ef-82a1-b53186d7d674 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:43 np0005597378 nova_compute[238941]: 2026-01-27 14:14:43.662 238945 DEBUG oslo_concurrency.lockutils [req-e8cb9156-d377-461d-940f-cf647e2cf662 req-b9453d56-773e-47ef-82a1-b53186d7d674 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:43 np0005597378 nova_compute[238941]: 2026-01-27 14:14:43.662 238945 DEBUG oslo_concurrency.lockutils [req-e8cb9156-d377-461d-940f-cf647e2cf662 req-b9453d56-773e-47ef-82a1-b53186d7d674 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:43 np0005597378 nova_compute[238941]: 2026-01-27 14:14:43.662 238945 DEBUG nova.compute.manager [req-e8cb9156-d377-461d-940f-cf647e2cf662 req-b9453d56-773e-47ef-82a1-b53186d7d674 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] No waiting events found dispatching network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:14:43 np0005597378 nova_compute[238941]: 2026-01-27 14:14:43.663 238945 WARNING nova.compute.manager [req-e8cb9156-d377-461d-940f-cf647e2cf662 req-b9453d56-773e-47ef-82a1-b53186d7d674 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received unexpected event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af for instance with vm_state active and task_state None.#033[00m
Jan 27 09:14:43 np0005597378 nova_compute[238941]: 2026-01-27 14:14:43.675 238945 DEBUG nova.compute.manager [req-a82aab51-f3a1-40e8-9fef-7d5849edc673 req-5c2da54d-96dc-4fc0-b01f-4faeaeb22556 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:43 np0005597378 nova_compute[238941]: 2026-01-27 14:14:43.675 238945 DEBUG oslo_concurrency.lockutils [req-a82aab51-f3a1-40e8-9fef-7d5849edc673 req-5c2da54d-96dc-4fc0-b01f-4faeaeb22556 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:43 np0005597378 nova_compute[238941]: 2026-01-27 14:14:43.675 238945 DEBUG oslo_concurrency.lockutils [req-a82aab51-f3a1-40e8-9fef-7d5849edc673 req-5c2da54d-96dc-4fc0-b01f-4faeaeb22556 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:43 np0005597378 nova_compute[238941]: 2026-01-27 14:14:43.675 238945 DEBUG oslo_concurrency.lockutils [req-a82aab51-f3a1-40e8-9fef-7d5849edc673 req-5c2da54d-96dc-4fc0-b01f-4faeaeb22556 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:43 np0005597378 nova_compute[238941]: 2026-01-27 14:14:43.675 238945 DEBUG nova.compute.manager [req-a82aab51-f3a1-40e8-9fef-7d5849edc673 req-5c2da54d-96dc-4fc0-b01f-4faeaeb22556 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] No waiting events found dispatching network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:14:43 np0005597378 nova_compute[238941]: 2026-01-27 14:14:43.676 238945 WARNING nova.compute.manager [req-a82aab51-f3a1-40e8-9fef-7d5849edc673 req-5c2da54d-96dc-4fc0-b01f-4faeaeb22556 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received unexpected event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:14:44 np0005597378 nova_compute[238941]: 2026-01-27 14:14:44.001 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:44 np0005597378 nova_compute[238941]: 2026-01-27 14:14:44.277 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:44 np0005597378 nova_compute[238941]: 2026-01-27 14:14:44.278 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:44 np0005597378 nova_compute[238941]: 2026-01-27 14:14:44.302 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:14:44 np0005597378 nova_compute[238941]: 2026-01-27 14:14:44.405 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:44 np0005597378 nova_compute[238941]: 2026-01-27 14:14:44.406 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:44 np0005597378 nova_compute[238941]: 2026-01-27 14:14:44.434 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:14:44 np0005597378 nova_compute[238941]: 2026-01-27 14:14:44.434 238945 INFO nova.compute.claims [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:14:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:44 np0005597378 nova_compute[238941]: 2026-01-27 14:14:44.625 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 167 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 972 KiB/s rd, 17 KiB/s wr, 40 op/s
Jan 27 09:14:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:14:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232190919' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:14:45 np0005597378 nova_compute[238941]: 2026-01-27 14:14:45.229 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:45 np0005597378 nova_compute[238941]: 2026-01-27 14:14:45.236 238945 DEBUG nova.compute.provider_tree [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:14:45 np0005597378 nova_compute[238941]: 2026-01-27 14:14:45.252 238945 DEBUG nova.scheduler.client.report [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:14:45 np0005597378 nova_compute[238941]: 2026-01-27 14:14:45.273 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:45 np0005597378 nova_compute[238941]: 2026-01-27 14:14:45.274 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:14:45 np0005597378 nova_compute[238941]: 2026-01-27 14:14:45.331 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:14:45 np0005597378 nova_compute[238941]: 2026-01-27 14:14:45.332 238945 DEBUG nova.network.neutron [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:14:45 np0005597378 nova_compute[238941]: 2026-01-27 14:14:45.361 238945 INFO nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:14:45 np0005597378 nova_compute[238941]: 2026-01-27 14:14:45.379 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.292 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.295 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.296 238945 INFO nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Creating image(s)#033[00m
Jan 27 09:14:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:46.319 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:46.320 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:46.321 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.331 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.360 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.383 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.388 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.467 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.468 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.469 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.469 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.491 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.497 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.621 238945 DEBUG nova.policy [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.747 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.810 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.896 238945 DEBUG nova.objects.instance [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid d27de200-a446-4d4f-a0dd-c3be9edf0f73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.915 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.915 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Ensure instance console log exists: /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.916 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.916 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:46 np0005597378 nova_compute[238941]: 2026-01-27 14:14:46.916 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 167 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Jan 27 09:14:47 np0005597378 nova_compute[238941]: 2026-01-27 14:14:47.433 238945 DEBUG nova.compute.manager [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-changed-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:47 np0005597378 nova_compute[238941]: 2026-01-27 14:14:47.434 238945 DEBUG nova.compute.manager [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing instance network info cache due to event network-changed-d98527e5-8812-43b6-957e-7529c80c2873. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:14:47 np0005597378 nova_compute[238941]: 2026-01-27 14:14:47.434 238945 DEBUG oslo_concurrency.lockutils [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:14:47 np0005597378 nova_compute[238941]: 2026-01-27 14:14:47.435 238945 DEBUG oslo_concurrency.lockutils [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:14:47 np0005597378 nova_compute[238941]: 2026-01-27 14:14:47.435 238945 DEBUG nova.network.neutron [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing network info cache for port d98527e5-8812-43b6-957e-7529c80c2873 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:14:47 np0005597378 nova_compute[238941]: 2026-01-27 14:14:47.582 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:14:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:14:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:14:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:14:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:14:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:14:48 np0005597378 nova_compute[238941]: 2026-01-27 14:14:48.573 238945 DEBUG nova.network.neutron [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Successfully created port: cad52f25-e715-4de1-a04c-d1f0ff0b8e07 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:14:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 204 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 100 op/s
Jan 27 09:14:49 np0005597378 nova_compute[238941]: 2026-01-27 14:14:49.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:49 np0005597378 nova_compute[238941]: 2026-01-27 14:14:49.306 238945 DEBUG nova.network.neutron [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Successfully updated port: cad52f25-e715-4de1-a04c-d1f0ff0b8e07 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:14:49 np0005597378 nova_compute[238941]: 2026-01-27 14:14:49.335 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-d27de200-a446-4d4f-a0dd-c3be9edf0f73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:14:49 np0005597378 nova_compute[238941]: 2026-01-27 14:14:49.335 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-d27de200-a446-4d4f-a0dd-c3be9edf0f73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:14:49 np0005597378 nova_compute[238941]: 2026-01-27 14:14:49.336 238945 DEBUG nova.network.neutron [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:14:49 np0005597378 nova_compute[238941]: 2026-01-27 14:14:49.432 238945 DEBUG nova.network.neutron [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updated VIF entry in instance network info cache for port d98527e5-8812-43b6-957e-7529c80c2873. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:14:49 np0005597378 nova_compute[238941]: 2026-01-27 14:14:49.433 238945 DEBUG nova.network.neutron [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:14:49 np0005597378 nova_compute[238941]: 2026-01-27 14:14:49.473 238945 DEBUG oslo_concurrency.lockutils [req-3dea35f4-6a55-4e97-a362-efa109b3bee6 req-18704a35-675b-4bd2-9536-0f1ce86b637e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:14:49 np0005597378 nova_compute[238941]: 2026-01-27 14:14:49.518 238945 DEBUG nova.network.neutron [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:14:49 np0005597378 nova_compute[238941]: 2026-01-27 14:14:49.527 238945 DEBUG nova.compute.manager [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-changed-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:49 np0005597378 nova_compute[238941]: 2026-01-27 14:14:49.527 238945 DEBUG nova.compute.manager [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Refreshing instance network info cache due to event network-changed-cad52f25-e715-4de1-a04c-d1f0ff0b8e07. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:14:49 np0005597378 nova_compute[238941]: 2026-01-27 14:14:49.528 238945 DEBUG oslo_concurrency.lockutils [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d27de200-a446-4d4f-a0dd-c3be9edf0f73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:14:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.390 238945 DEBUG nova.network.neutron [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Updating instance_info_cache with network_info: [{"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.410 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-d27de200-a446-4d4f-a0dd-c3be9edf0f73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.411 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Instance network_info: |[{"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.411 238945 DEBUG oslo_concurrency.lockutils [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d27de200-a446-4d4f-a0dd-c3be9edf0f73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.411 238945 DEBUG nova.network.neutron [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Refreshing network info cache for port cad52f25-e715-4de1-a04c-d1f0ff0b8e07 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.414 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Start _get_guest_xml network_info=[{"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.419 238945 WARNING nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.423 238945 DEBUG nova.virt.libvirt.host [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.424 238945 DEBUG nova.virt.libvirt.host [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.431 238945 DEBUG nova.virt.libvirt.host [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.431 238945 DEBUG nova.virt.libvirt.host [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.432 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.432 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.433 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.433 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.434 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.434 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.434 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.435 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.435 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.435 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.436 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.436 238945 DEBUG nova.virt.hardware [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.438 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:50.500 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:14:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:50.501 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated#033[00m
Jan 27 09:14:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:50.502 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:14:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:50.503 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0888777c-b45a-42e5-a0b5-5bfb5b79ac4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 09:14:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:14:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150065959' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:14:50 np0005597378 nova_compute[238941]: 2026-01-27 14:14:50.981 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.004 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.010 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:14:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/946618463' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.563 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.565 238945 DEBUG nova.virt.libvirt.vif [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-420507594',display_name='tempest-TestNetworkBasicOps-server-420507594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-420507594',id=119,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAXrhsWv7cYesLBLIUM/uRdxKvrOUW2+EAUZ2KPePP152JhfjVghGmWviGYZMjnuf8zP5zI+mKUsZz0VRmI3E31J4pwEULVaZMClZaffF0xhmqMW9QtGLlrSUDjoNDBm5Q==',key_name='tempest-TestNetworkBasicOps-2146781436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-1tijlfwj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:14:46Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=d27de200-a446-4d4f-a0dd-c3be9edf0f73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.565 238945 DEBUG nova.network.os_vif_util [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.566 238945 DEBUG nova.network.os_vif_util [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.567 238945 DEBUG nova.objects.instance [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid d27de200-a446-4d4f-a0dd-c3be9edf0f73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.582 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  <uuid>d27de200-a446-4d4f-a0dd-c3be9edf0f73</uuid>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  <name>instance-00000077</name>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkBasicOps-server-420507594</nova:name>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:14:50</nova:creationTime>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:        <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:        <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:        <nova:port uuid="cad52f25-e715-4de1-a04c-d1f0ff0b8e07">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <entry name="serial">d27de200-a446-4d4f-a0dd-c3be9edf0f73</entry>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <entry name="uuid">d27de200-a446-4d4f-a0dd-c3be9edf0f73</entry>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk.config">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:a7:19:42"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <target dev="tapcad52f25-e7"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/console.log" append="off"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:14:51 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:14:51 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:14:51 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:14:51 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.584 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Preparing to wait for external event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.584 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.584 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.585 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.586 238945 DEBUG nova.virt.libvirt.vif [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-420507594',display_name='tempest-TestNetworkBasicOps-server-420507594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-420507594',id=119,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAXrhsWv7cYesLBLIUM/uRdxKvrOUW2+EAUZ2KPePP152JhfjVghGmWviGYZMjnuf8zP5zI+mKUsZz0VRmI3E31J4pwEULVaZMClZaffF0xhmqMW9QtGLlrSUDjoNDBm5Q==',key_name='tempest-TestNetworkBasicOps-2146781436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-1tijlfwj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:14:46Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=d27de200-a446-4d4f-a0dd-c3be9edf0f73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.586 238945 DEBUG nova.network.os_vif_util [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.587 238945 DEBUG nova.network.os_vif_util [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.587 238945 DEBUG os_vif [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.588 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.588 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.589 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.592 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcad52f25-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.593 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcad52f25-e7, col_values=(('external_ids', {'iface-id': 'cad52f25-e715-4de1-a04c-d1f0ff0b8e07', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:19:42', 'vm-uuid': 'd27de200-a446-4d4f-a0dd-c3be9edf0f73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:51 np0005597378 NetworkManager[48904]: <info>  [1769523291.5952] manager: (tapcad52f25-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/507)
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.598 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.604 238945 INFO os_vif [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7')#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.670 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.670 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.670 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:a7:19:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.671 238945 INFO nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Using config drive#033[00m
Jan 27 09:14:51 np0005597378 nova_compute[238941]: 2026-01-27 14:14:51.690 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.047 238945 DEBUG nova.network.neutron [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Updated VIF entry in instance network info cache for port cad52f25-e715-4de1-a04c-d1f0ff0b8e07. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.047 238945 DEBUG nova.network.neutron [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Updating instance_info_cache with network_info: [{"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.066 238945 DEBUG oslo_concurrency.lockutils [req-da98b779-3b93-469a-938a-49ee33c0434b req-2bf360a8-4597-463c-a484-db049e37b6f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d27de200-a446-4d4f-a0dd-c3be9edf0f73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.117 238945 INFO nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Creating config drive at /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/disk.config#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.125 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwdt_xc_x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.274 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwdt_xc_x" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.301 238945 DEBUG nova.storage.rbd_utils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.305 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/disk.config d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.582 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.593 238945 DEBUG oslo_concurrency.processutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/disk.config d27de200-a446-4d4f-a0dd-c3be9edf0f73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.594 238945 INFO nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Deleting local config drive /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73/disk.config because it was imported into RBD.#033[00m
Jan 27 09:14:52 np0005597378 kernel: tapcad52f25-e7: entered promiscuous mode
Jan 27 09:14:52 np0005597378 NetworkManager[48904]: <info>  [1769523292.6388] manager: (tapcad52f25-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/508)
Jan 27 09:14:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:52Z|01230|binding|INFO|Claiming lport cad52f25-e715-4de1-a04c-d1f0ff0b8e07 for this chassis.
Jan 27 09:14:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:52Z|01231|binding|INFO|cad52f25-e715-4de1-a04c-d1f0ff0b8e07: Claiming fa:16:3e:a7:19:42 10.100.0.26
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.640 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:52 np0005597378 systemd-udevd[347117]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:14:52 np0005597378 systemd-machined[207425]: New machine qemu-151-instance-00000077.
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.682 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:52Z|01232|binding|INFO|Setting lport cad52f25-e715-4de1-a04c-d1f0ff0b8e07 ovn-installed in OVS
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:52 np0005597378 NetworkManager[48904]: <info>  [1769523292.6888] device (tapcad52f25-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:14:52 np0005597378 NetworkManager[48904]: <info>  [1769523292.6894] device (tapcad52f25-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:14:52 np0005597378 systemd[1]: Started Virtual Machine qemu-151-instance-00000077.
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.695 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:19:42 10.100.0.26'], port_security=['fa:16:3e:a7:19:42 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': 'd27de200-a446-4d4f-a0dd-c3be9edf0f73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84042259-5d43-4b00-ad8b-0831283b7f54', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5abd853e-4417-412e-a289-48ca97e3eaf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d9d5fe8-5e3c-4436-822b-ed7a88618dfd, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cad52f25-e715-4de1-a04c-d1f0ff0b8e07) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.696 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cad52f25-e715-4de1-a04c-d1f0ff0b8e07 in datapath 84042259-5d43-4b00-ad8b-0831283b7f54 bound to our chassis#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.697 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84042259-5d43-4b00-ad8b-0831283b7f54#033[00m
Jan 27 09:14:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:52Z|01233|binding|INFO|Setting lport cad52f25-e715-4de1-a04c-d1f0ff0b8e07 up in Southbound
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.708 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[24c8a39f-17a2-4de3-bbcb-081ddd5cf46e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.710 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84042259-51 in ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.711 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84042259-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.711 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ae899a80-1743-438b-ab6c-0dddd60062fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.712 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e9542f-3862-4064-a72a-0c91c92f01d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.724 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fe1b8fa3-6fc3-4c6b-89cc-b069f1ccb5d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.741 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5e12a74a-4be6-4a9e-b219-cc51acbc6194]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.773 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2bd2c8-47a2-4bbe-a7b8-e0ed71378728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.781 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8c8ba4-22e2-4dba-8b35-f758ed56d933]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 NetworkManager[48904]: <info>  [1769523292.7823] manager: (tap84042259-50): new Veth device (/org/freedesktop/NetworkManager/Devices/509)
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.817 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5be1aeca-b244-4c29-97f2-f3467959244b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.824 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[83bd6657-46f3-47c1-a3bd-7f3f4a2d9518]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 NetworkManager[48904]: <info>  [1769523292.8594] device (tap84042259-50): carrier: link connected
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.868 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0141a2-218a-47d0-b0a7-30e6b906a998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.889 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d208f4aa-8630-4257-8687-972c9b52878b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84042259-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:dd:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 362], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593248, 'reachable_time': 18043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347150, 'error': None, 'target': 'ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.908 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c784d36a-865a-4c1c-9da5-d0365afaa6a8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:dd9a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 593248, 'tstamp': 593248}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347151, 'error': None, 'target': 'ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.929 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9dc47ae4-3f71-4964-8cdd-8b7d52770ed2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84042259-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:dd:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 362], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593248, 'reachable_time': 18043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347152, 'error': None, 'target': 'ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 213 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 27 09:14:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:52.963 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc80439-eb5a-4a50-b650-69fea06c7cc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.991 238945 DEBUG nova.compute.manager [req-0b7569cd-5da6-44ae-a61d-d7833367a0c0 req-f69a7fde-b836-43bb-9289-30effd4d57f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.992 238945 DEBUG oslo_concurrency.lockutils [req-0b7569cd-5da6-44ae-a61d-d7833367a0c0 req-f69a7fde-b836-43bb-9289-30effd4d57f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.992 238945 DEBUG oslo_concurrency.lockutils [req-0b7569cd-5da6-44ae-a61d-d7833367a0c0 req-f69a7fde-b836-43bb-9289-30effd4d57f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.992 238945 DEBUG oslo_concurrency.lockutils [req-0b7569cd-5da6-44ae-a61d-d7833367a0c0 req-f69a7fde-b836-43bb-9289-30effd4d57f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:52 np0005597378 nova_compute[238941]: 2026-01-27 14:14:52.992 238945 DEBUG nova.compute.manager [req-0b7569cd-5da6-44ae-a61d-d7833367a0c0 req-f69a7fde-b836-43bb-9289-30effd4d57f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Processing event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.041 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e784f1fc-0e70-423d-9761-b2a0864ed5df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.043 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84042259-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.043 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.044 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84042259-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:53 np0005597378 kernel: tap84042259-50: entered promiscuous mode
Jan 27 09:14:53 np0005597378 NetworkManager[48904]: <info>  [1769523293.0475] manager: (tap84042259-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/510)
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.048 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84042259-50, col_values=(('external_ids', {'iface-id': 'd64bfb4e-ab95-4e41-a25f-b23325a54f74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:14:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:53Z|01234|binding|INFO|Releasing lport d64bfb4e-ab95-4e41-a25f-b23325a54f74 from this chassis (sb_readonly=0)
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.058 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.064 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.065 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84042259-5d43-4b00-ad8b-0831283b7f54.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84042259-5d43-4b00-ad8b-0831283b7f54.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.066 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f8cdeff1-76ac-437d-8110-ef52dec934ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.067 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-84042259-5d43-4b00-ad8b-0831283b7f54
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/84042259-5d43-4b00-ad8b-0831283b7f54.pid.haproxy
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 84042259-5d43-4b00-ad8b-0831283b7f54
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:14:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:53.067 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54', 'env', 'PROCESS_TAG=haproxy-84042259-5d43-4b00-ad8b-0831283b7f54', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84042259-5d43-4b00-ad8b-0831283b7f54.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.318 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523293.3181236, d27de200-a446-4d4f-a0dd-c3be9edf0f73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.319 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] VM Started (Lifecycle Event)#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.321 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.324 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.327 238945 INFO nova.virt.libvirt.driver [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Instance spawned successfully.#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.328 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.356 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.361 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.368 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.369 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.369 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.370 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.370 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.371 238945 DEBUG nova.virt.libvirt.driver [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.401 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.402 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523293.3182282, d27de200-a446-4d4f-a0dd-c3be9edf0f73 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.402 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.448 238945 INFO nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Took 7.15 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.449 238945 DEBUG nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.457 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.461 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523293.3233669, d27de200-a446-4d4f-a0dd-c3be9edf0f73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.461 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:14:53 np0005597378 podman[347225]: 2026-01-27 14:14:53.471857553 +0000 UTC m=+0.055906574 container create 207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:14:53 np0005597378 systemd[1]: Started libpod-conmon-207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80.scope.
Jan 27 09:14:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:14:53 np0005597378 podman[347225]: 2026-01-27 14:14:53.440317741 +0000 UTC m=+0.024366772 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:14:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a5a731055046fcd368123580ea002c66bccc69f7df0b55b7994e1cde6f675e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:14:53 np0005597378 podman[347225]: 2026-01-27 14:14:53.549592678 +0000 UTC m=+0.133641719 container init 207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 09:14:53 np0005597378 podman[347225]: 2026-01-27 14:14:53.554746235 +0000 UTC m=+0.138795246 container start 207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:14:53 np0005597378 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [NOTICE]   (347244) : New worker (347246) forked
Jan 27 09:14:53 np0005597378 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [NOTICE]   (347244) : Loading success.
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.688 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.689 238945 INFO nova.compute.manager [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Took 9.33 seconds to build instance.#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.693 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:14:53 np0005597378 nova_compute[238941]: 2026-01-27 14:14:53.718 238945 DEBUG oslo_concurrency.lockutils [None req-303ad12a-a95c-4d41-b189-0a67e2a18bc3 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:53Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:dc:ff 10.100.0.8
Jan 27 09:14:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:14:53Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:dc:ff 10.100.0.8
Jan 27 09:14:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:54 np0005597378 podman[347255]: 2026-01-27 14:14:54.741628044 +0000 UTC m=+0.073432870 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:14:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:54.844 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8::f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:14:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:54.845 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated#033[00m
Jan 27 09:14:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:54.847 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:14:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:54.848 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d3eab0b7-a024-43bc-a7fc-a158e281f569]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 227 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 120 op/s
Jan 27 09:14:55 np0005597378 nova_compute[238941]: 2026-01-27 14:14:55.370 238945 DEBUG nova.compute.manager [req-68192994-44fa-40f4-9a7e-ba338eea04eb req-606ca303-79cf-4b6c-b9d7-1202e816dbea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:14:55 np0005597378 nova_compute[238941]: 2026-01-27 14:14:55.371 238945 DEBUG oslo_concurrency.lockutils [req-68192994-44fa-40f4-9a7e-ba338eea04eb req-606ca303-79cf-4b6c-b9d7-1202e816dbea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:14:55 np0005597378 nova_compute[238941]: 2026-01-27 14:14:55.371 238945 DEBUG oslo_concurrency.lockutils [req-68192994-44fa-40f4-9a7e-ba338eea04eb req-606ca303-79cf-4b6c-b9d7-1202e816dbea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:14:55 np0005597378 nova_compute[238941]: 2026-01-27 14:14:55.372 238945 DEBUG oslo_concurrency.lockutils [req-68192994-44fa-40f4-9a7e-ba338eea04eb req-606ca303-79cf-4b6c-b9d7-1202e816dbea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:14:55 np0005597378 nova_compute[238941]: 2026-01-27 14:14:55.372 238945 DEBUG nova.compute.manager [req-68192994-44fa-40f4-9a7e-ba338eea04eb req-606ca303-79cf-4b6c-b9d7-1202e816dbea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] No waiting events found dispatching network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:14:55 np0005597378 nova_compute[238941]: 2026-01-27 14:14:55.372 238945 WARNING nova.compute.manager [req-68192994-44fa-40f4-9a7e-ba338eea04eb req-606ca303-79cf-4b6c-b9d7-1202e816dbea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received unexpected event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:14:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:55.713 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:14:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:55.715 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated#033[00m
Jan 27 09:14:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:55.716 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:14:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:14:55.717 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f58236-60e9-43b5-911b-b5a1fa7f982a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:14:56 np0005597378 nova_compute[238941]: 2026-01-27 14:14:56.595 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:56 np0005597378 podman[347275]: 2026-01-27 14:14:56.726528805 +0000 UTC m=+0.068993183 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:14:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 240 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Jan 27 09:14:57 np0005597378 nova_compute[238941]: 2026-01-27 14:14:57.585 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:14:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Jan 27 09:14:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:14:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:14:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/412418963' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:14:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:14:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/412418963' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:15:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:00.173 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:00.174 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated#033[00m
Jan 27 09:15:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:00.175 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:15:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:00.176 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e87bce-9aeb-4ac2-8d48-b6429565c704]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 139 op/s
Jan 27 09:15:00 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 09:15:00 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 09:15:01 np0005597378 nova_compute[238941]: 2026-01-27 14:15:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:15:01 np0005597378 nova_compute[238941]: 2026-01-27 14:15:01.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:01 np0005597378 nova_compute[238941]: 2026-01-27 14:15:01.419 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:01 np0005597378 nova_compute[238941]: 2026-01-27 14:15:01.419 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:01 np0005597378 nova_compute[238941]: 2026-01-27 14:15:01.419 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:15:01 np0005597378 nova_compute[238941]: 2026-01-27 14:15:01.420 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:01 np0005597378 nova_compute[238941]: 2026-01-27 14:15:01.598 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:15:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578680784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:15:01 np0005597378 nova_compute[238941]: 2026-01-27 14:15:01.983 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.118 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.118 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.125 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.125 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.133 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.133 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.380 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.383 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3248MB free_disk=59.875672115944326GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.384 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.385 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.556 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8df0cb66-9678-4f50-87e0-066cbafcb26b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.557 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance ffbbdbe0-9dc8-46b2-9492-e5d63351a47f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.557 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance d27de200-a446-4d4f-a0dd-c3be9edf0f73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.557 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.557 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.588 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:02 np0005597378 nova_compute[238941]: 2026-01-27 14:15:02.647 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:02.665 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:02.667 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated#033[00m
Jan 27 09:15:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:02.669 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:15:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:02.670 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8729269b-78b3-4955-b32d-4498e7b93caa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Jan 27 09:15:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:15:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3360002805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:15:03 np0005597378 nova_compute[238941]: 2026-01-27 14:15:03.234 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:03 np0005597378 nova_compute[238941]: 2026-01-27 14:15:03.239 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:15:03 np0005597378 nova_compute[238941]: 2026-01-27 14:15:03.266 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:15:03 np0005597378 nova_compute[238941]: 2026-01-27 14:15:03.301 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:15:03 np0005597378 nova_compute[238941]: 2026-01-27 14:15:03.301 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:04 np0005597378 nova_compute[238941]: 2026-01-27 14:15:04.302 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:15:04 np0005597378 nova_compute[238941]: 2026-01-27 14:15:04.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:15:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Jan 27 09:15:06 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 27 09:15:06 np0005597378 nova_compute[238941]: 2026-01-27 14:15:06.346 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:06 np0005597378 nova_compute[238941]: 2026-01-27 14:15:06.346 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:06 np0005597378 nova_compute[238941]: 2026-01-27 14:15:06.365 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:15:06 np0005597378 nova_compute[238941]: 2026-01-27 14:15:06.513 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:06 np0005597378 nova_compute[238941]: 2026-01-27 14:15:06.514 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:06 np0005597378 nova_compute[238941]: 2026-01-27 14:15:06.521 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:15:06 np0005597378 nova_compute[238941]: 2026-01-27 14:15:06.521 238945 INFO nova.compute.claims [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:15:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:06.543 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:06.545 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated#033[00m
Jan 27 09:15:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:06.546 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:15:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:06.547 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[405d753d-e531-49f7-b985-c326bb346241]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:06 np0005597378 nova_compute[238941]: 2026-01-27 14:15:06.603 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:06 np0005597378 nova_compute[238941]: 2026-01-27 14:15:06.710 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 171 op/s
Jan 27 09:15:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:15:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1232883653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.268 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.275 238945 DEBUG nova.compute.provider_tree [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.293 238945 DEBUG nova.scheduler.client.report [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.317 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.318 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.362 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.362 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.387 238945 INFO nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.484 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.566 238945 DEBUG nova.policy [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.595 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:07.628 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:07.630 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated#033[00m
Jan 27 09:15:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:07.631 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:15:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:07.632 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[348ce798-97b9-4eca-b196-f79e3e3c9378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.667 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.668 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.669 238945 INFO nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Creating image(s)#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.695 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.731 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.763 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.766 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.863 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.864 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.864 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.865 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.886 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:15:07 np0005597378 nova_compute[238941]: 2026-01-27 14:15:07.890 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c9010b63-5eae-497c-ace9-dc8788805086_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:08 np0005597378 nova_compute[238941]: 2026-01-27 14:15:08.188 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c9010b63-5eae-497c-ace9-dc8788805086_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:08 np0005597378 nova_compute[238941]: 2026-01-27 14:15:08.234 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:15:08 np0005597378 nova_compute[238941]: 2026-01-27 14:15:08.308 238945 DEBUG nova.objects.instance [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid c9010b63-5eae-497c-ace9-dc8788805086 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:15:08 np0005597378 nova_compute[238941]: 2026-01-27 14:15:08.342 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:15:08 np0005597378 nova_compute[238941]: 2026-01-27 14:15:08.343 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Ensure instance console log exists: /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:15:08 np0005597378 nova_compute[238941]: 2026-01-27 14:15:08.343 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:08 np0005597378 nova_compute[238941]: 2026-01-27 14:15:08.343 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:08 np0005597378 nova_compute[238941]: 2026-01-27 14:15:08.344 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:08Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:19:42 10.100.0.26
Jan 27 09:15:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:08Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:19:42 10.100.0.26
Jan 27 09:15:08 np0005597378 nova_compute[238941]: 2026-01-27 14:15:08.558 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Successfully created port: e0d38998-b28f-4059-8b31-d26feeb41c76 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:15:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 296 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.8 MiB/s wr, 218 op/s
Jan 27 09:15:09 np0005597378 nova_compute[238941]: 2026-01-27 14:15:09.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:15:09 np0005597378 nova_compute[238941]: 2026-01-27 14:15:09.529 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Successfully created port: b18543f0-85cc-4cd0-913c-5759062e76c0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:15:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 3.9 MiB/s wr, 182 op/s
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.402 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.562 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.563 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.563 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.563 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8df0cb66-9678-4f50-87e0-066cbafcb26b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.606 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.855 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Successfully updated port: e0d38998-b28f-4059-8b31-d26feeb41c76 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.940 238945 DEBUG nova.compute.manager [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-changed-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.940 238945 DEBUG nova.compute.manager [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing instance network info cache due to event network-changed-e0d38998-b28f-4059-8b31-d26feeb41c76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.941 238945 DEBUG oslo_concurrency.lockutils [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.941 238945 DEBUG oslo_concurrency.lockutils [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:15:11 np0005597378 nova_compute[238941]: 2026-01-27 14:15:11.941 238945 DEBUG nova.network.neutron [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing network info cache for port e0d38998-b28f-4059-8b31-d26feeb41c76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:15:12 np0005597378 nova_compute[238941]: 2026-01-27 14:15:12.097 238945 DEBUG nova.network.neutron [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:15:12 np0005597378 nova_compute[238941]: 2026-01-27 14:15:12.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:12 np0005597378 nova_compute[238941]: 2026-01-27 14:15:12.807 238945 DEBUG nova.network.neutron [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:12 np0005597378 nova_compute[238941]: 2026-01-27 14:15:12.826 238945 DEBUG oslo_concurrency.lockutils [req-3cc4253d-f05b-446e-8233-d5ff4fb63d26 req-43204da1-9d88-466e-aba4-bea005cca18d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:15:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 3.9 MiB/s wr, 182 op/s
Jan 27 09:15:13 np0005597378 nova_compute[238941]: 2026-01-27 14:15:13.090 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updating instance_info_cache with network_info: [{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:13 np0005597378 nova_compute[238941]: 2026-01-27 14:15:13.105 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Successfully updated port: b18543f0-85cc-4cd0-913c-5759062e76c0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:15:13 np0005597378 nova_compute[238941]: 2026-01-27 14:15:13.107 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:15:13 np0005597378 nova_compute[238941]: 2026-01-27 14:15:13.107 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:15:13 np0005597378 nova_compute[238941]: 2026-01-27 14:15:13.120 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:15:13 np0005597378 nova_compute[238941]: 2026-01-27 14:15:13.121 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:15:13 np0005597378 nova_compute[238941]: 2026-01-27 14:15:13.121 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:15:13 np0005597378 nova_compute[238941]: 2026-01-27 14:15:13.272 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:15:14 np0005597378 nova_compute[238941]: 2026-01-27 14:15:14.030 238945 DEBUG nova.compute.manager [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-changed-b18543f0-85cc-4cd0-913c-5759062e76c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:14 np0005597378 nova_compute[238941]: 2026-01-27 14:15:14.031 238945 DEBUG nova.compute.manager [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing instance network info cache due to event network-changed-b18543f0-85cc-4cd0-913c-5759062e76c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:15:14 np0005597378 nova_compute[238941]: 2026-01-27 14:15:14.031 238945 DEBUG oslo_concurrency.lockutils [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:15:14 np0005597378 nova_compute[238941]: 2026-01-27 14:15:14.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:15:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:14.630 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8::f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 10.100.0.2 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:14.631 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated#033[00m
Jan 27 09:15:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:14.632 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:15:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:14.633 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[26d26506-e0dd-4eda-b51c-f1339ec62b05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 3.9 MiB/s wr, 183 op/s
Jan 27 09:15:16 np0005597378 nova_compute[238941]: 2026-01-27 14:15:16.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 3.9 MiB/s wr, 158 op/s
Jan 27 09:15:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:15:17
Jan 27 09:15:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:15:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:15:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'backups', '.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.control']
Jan 27 09:15:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:15:17 np0005597378 nova_compute[238941]: 2026-01-27 14:15:17.596 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:15:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:15:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:15:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:15:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:15:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:15:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:15:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:15:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:15:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:15:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:15:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:15:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:15:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:15:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:15:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:15:18 np0005597378 nova_compute[238941]: 2026-01-27 14:15:18.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:15:18 np0005597378 nova_compute[238941]: 2026-01-27 14:15:18.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:15:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 126 op/s
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.196 238945 DEBUG nova.network.neutron [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.221 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.221 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Instance network_info: |[{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.222 238945 DEBUG oslo_concurrency.lockutils [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.222 238945 DEBUG nova.network.neutron [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing network info cache for port b18543f0-85cc-4cd0-913c-5759062e76c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.225 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Start _get_guest_xml network_info=[{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.229 238945 WARNING nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.234 238945 DEBUG nova.virt.libvirt.host [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.235 238945 DEBUG nova.virt.libvirt.host [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.240 238945 DEBUG nova.virt.libvirt.host [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.240 238945 DEBUG nova.virt.libvirt.host [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.241 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.241 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.241 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.242 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.242 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.242 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.242 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.242 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.243 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.243 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.243 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.243 238945 DEBUG nova.virt.hardware [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.246 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.370 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.371 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.371 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.371 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.371 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.373 238945 INFO nova.compute.manager [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Terminating instance#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.374 238945 DEBUG nova.compute.manager [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:15:19 np0005597378 kernel: tapcad52f25-e7 (unregistering): left promiscuous mode
Jan 27 09:15:19 np0005597378 NetworkManager[48904]: <info>  [1769523319.4239] device (tapcad52f25-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.434 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:19Z|01235|binding|INFO|Releasing lport cad52f25-e715-4de1-a04c-d1f0ff0b8e07 from this chassis (sb_readonly=0)
Jan 27 09:15:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:19Z|01236|binding|INFO|Setting lport cad52f25-e715-4de1-a04c-d1f0ff0b8e07 down in Southbound
Jan 27 09:15:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:19Z|01237|binding|INFO|Removing iface tapcad52f25-e7 ovn-installed in OVS
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.438 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.446 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:19:42 10.100.0.26'], port_security=['fa:16:3e:a7:19:42 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': 'd27de200-a446-4d4f-a0dd-c3be9edf0f73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84042259-5d43-4b00-ad8b-0831283b7f54', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5abd853e-4417-412e-a289-48ca97e3eaf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d9d5fe8-5e3c-4436-822b-ed7a88618dfd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cad52f25-e715-4de1-a04c-d1f0ff0b8e07) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.447 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cad52f25-e715-4de1-a04c-d1f0ff0b8e07 in datapath 84042259-5d43-4b00-ad8b-0831283b7f54 unbound from our chassis#033[00m
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.448 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84042259-5d43-4b00-ad8b-0831283b7f54, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.449 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4d1055-069f-4872-aa6f-ef47d5669cd3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.449 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54 namespace which is not needed anymore#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:19 np0005597378 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Deactivated successfully.
Jan 27 09:15:19 np0005597378 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Consumed 14.969s CPU time.
Jan 27 09:15:19 np0005597378 systemd-machined[207425]: Machine qemu-151-instance-00000077 terminated.
Jan 27 09:15:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:19 np0005597378 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [NOTICE]   (347244) : haproxy version is 2.8.14-c23fe91
Jan 27 09:15:19 np0005597378 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [NOTICE]   (347244) : path to executable is /usr/sbin/haproxy
Jan 27 09:15:19 np0005597378 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [WARNING]  (347244) : Exiting Master process...
Jan 27 09:15:19 np0005597378 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [WARNING]  (347244) : Exiting Master process...
Jan 27 09:15:19 np0005597378 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [ALERT]    (347244) : Current worker (347246) exited with code 143 (Terminated)
Jan 27 09:15:19 np0005597378 neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54[347240]: [WARNING]  (347244) : All workers exited. Exiting... (0)
Jan 27 09:15:19 np0005597378 systemd[1]: libpod-207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80.scope: Deactivated successfully.
Jan 27 09:15:19 np0005597378 podman[347577]: 2026-01-27 14:15:19.569662598 +0000 UTC m=+0.040011919 container died 207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 09:15:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80-userdata-shm.mount: Deactivated successfully.
Jan 27 09:15:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay-10a5a731055046fcd368123580ea002c66bccc69f7df0b55b7994e1cde6f675e-merged.mount: Deactivated successfully.
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.613 238945 INFO nova.virt.libvirt.driver [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Instance destroyed successfully.#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.614 238945 DEBUG nova.objects.instance [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid d27de200-a446-4d4f-a0dd-c3be9edf0f73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:15:19 np0005597378 podman[347577]: 2026-01-27 14:15:19.623283419 +0000 UTC m=+0.093632740 container cleanup 207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.629 238945 DEBUG nova.virt.libvirt.vif [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-420507594',display_name='tempest-TestNetworkBasicOps-server-420507594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-420507594',id=119,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAXrhsWv7cYesLBLIUM/uRdxKvrOUW2+EAUZ2KPePP152JhfjVghGmWviGYZMjnuf8zP5zI+mKUsZz0VRmI3E31J4pwEULVaZMClZaffF0xhmqMW9QtGLlrSUDjoNDBm5Q==',key_name='tempest-TestNetworkBasicOps-2146781436',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:14:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-1tijlfwj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:14:53Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=d27de200-a446-4d4f-a0dd-c3be9edf0f73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.629 238945 DEBUG nova.network.os_vif_util [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "address": "fa:16:3e:a7:19:42", "network": {"id": "84042259-5d43-4b00-ad8b-0831283b7f54", "bridge": "br-int", "label": "tempest-network-smoke--207854085", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcad52f25-e7", "ovs_interfaceid": "cad52f25-e715-4de1-a04c-d1f0ff0b8e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.630 238945 DEBUG nova.network.os_vif_util [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.630 238945 DEBUG os_vif [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:15:19 np0005597378 systemd[1]: libpod-conmon-207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80.scope: Deactivated successfully.
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.636 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcad52f25-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.638 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.641 238945 INFO os_vif [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:19:42,bridge_name='br-int',has_traffic_filtering=True,id=cad52f25-e715-4de1-a04c-d1f0ff0b8e07,network=Network(84042259-5d43-4b00-ad8b-0831283b7f54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcad52f25-e7')#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.688 238945 DEBUG nova.compute.manager [req-8ca3520e-5214-43e5-ab7c-d041e5e1c702 req-444108fe-7b18-4991-9a86-80b08bf28afb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-vif-unplugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.688 238945 DEBUG oslo_concurrency.lockutils [req-8ca3520e-5214-43e5-ab7c-d041e5e1c702 req-444108fe-7b18-4991-9a86-80b08bf28afb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.689 238945 DEBUG oslo_concurrency.lockutils [req-8ca3520e-5214-43e5-ab7c-d041e5e1c702 req-444108fe-7b18-4991-9a86-80b08bf28afb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.689 238945 DEBUG oslo_concurrency.lockutils [req-8ca3520e-5214-43e5-ab7c-d041e5e1c702 req-444108fe-7b18-4991-9a86-80b08bf28afb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.689 238945 DEBUG nova.compute.manager [req-8ca3520e-5214-43e5-ab7c-d041e5e1c702 req-444108fe-7b18-4991-9a86-80b08bf28afb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] No waiting events found dispatching network-vif-unplugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.689 238945 DEBUG nova.compute.manager [req-8ca3520e-5214-43e5-ab7c-d041e5e1c702 req-444108fe-7b18-4991-9a86-80b08bf28afb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-vif-unplugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:15:19 np0005597378 podman[347616]: 2026-01-27 14:15:19.694607053 +0000 UTC m=+0.048412724 container remove 207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.701 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ead048-cd86-4d8d-91bc-20a5118f8821]: (4, ('Tue Jan 27 02:15:19 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54 (207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80)\n207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80\nTue Jan 27 02:15:19 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54 (207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80)\n207adddc51a55fce97a4d7fd97f07fc649dad390d98a115f194586252b4a2d80\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.702 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f52bbb-240a-4a5a-8714-a376695b2c19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.703 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84042259-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.705 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:19 np0005597378 kernel: tap84042259-50: left promiscuous mode
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.707 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.711 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[102f29de-5736-47b5-8d23-547b6b4da15f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.728 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea5efa8-b92f-419f-b23d-fd84d2b0c6ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.730 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08342933-13c0-4a87-b388-1e1ffb5dfc3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.748 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b6216912-284c-4597-b022-cf413694d288]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593239, 'reachable_time': 34099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347650, 'error': None, 'target': 'ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:19 np0005597378 systemd[1]: run-netns-ovnmeta\x2d84042259\x2d5d43\x2d4b00\x2dad8b\x2d0831283b7f54.mount: Deactivated successfully.
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.754 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84042259-5d43-4b00-ad8b-0831283b7f54 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:15:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:19.754 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[e4388e4a-8c17-45db-a0a6-75a81f937f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:15:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187316128' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.860 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.881 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:15:19 np0005597378 nova_compute[238941]: 2026-01-27 14:15:19.885 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.097 238945 INFO nova.virt.libvirt.driver [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Deleting instance files /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73_del#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.098 238945 INFO nova.virt.libvirt.driver [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Deletion of /var/lib/nova/instances/d27de200-a446-4d4f-a0dd-c3be9edf0f73_del complete#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.161 238945 INFO nova.compute.manager [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.162 238945 DEBUG oslo.service.loopingcall [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.162 238945 DEBUG nova.compute.manager [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.163 238945 DEBUG nova.network.neutron [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:15:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:15:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2534768603' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.441 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.442 238945 DEBUG nova.virt.libvirt.vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1944593165',display_name='tempest-TestGettingAddress-server-1944593165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1944593165',id=120,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ixu7d8tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:15:07Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=c9010b63-5eae-497c-ace9-dc8788805086,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.443 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.443 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.444 238945 DEBUG nova.virt.libvirt.vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1944593165',display_name='tempest-TestGettingAddress-server-1944593165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1944593165',id=120,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ixu7d8tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:15:07Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=c9010b63-5eae-497c-ace9-dc8788805086,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.444 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.445 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.446 238945 DEBUG nova.objects.instance [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid c9010b63-5eae-497c-ace9-dc8788805086 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.493 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  <uuid>c9010b63-5eae-497c-ace9-dc8788805086</uuid>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  <name>instance-00000078</name>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-1944593165</nova:name>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:15:19</nova:creationTime>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        <nova:port uuid="e0d38998-b28f-4059-8b31-d26feeb41c76">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        <nova:port uuid="b18543f0-85cc-4cd0-913c-5759062e76c0">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe3b:8214" ipVersion="6"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe3b:8214" ipVersion="6"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <entry name="serial">c9010b63-5eae-497c-ace9-dc8788805086</entry>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <entry name="uuid">c9010b63-5eae-497c-ace9-dc8788805086</entry>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/c9010b63-5eae-497c-ace9-dc8788805086_disk">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/c9010b63-5eae-497c-ace9-dc8788805086_disk.config">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:7a:e3:79"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <target dev="tape0d38998-b2"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:3b:82:14"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <target dev="tapb18543f0-85"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/console.log" append="off"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:15:20 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:15:20 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:15:20 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:15:20 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.495 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Preparing to wait for external event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.495 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.495 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.496 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.496 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Preparing to wait for external event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.496 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.496 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.496 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.497 238945 DEBUG nova.virt.libvirt.vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1944593165',display_name='tempest-TestGettingAddress-server-1944593165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1944593165',id=120,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ixu7d8tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:15:07Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=c9010b63-5eae-497c-ace9-dc8788805086,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.497 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.498 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.498 238945 DEBUG os_vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.498 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.499 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.499 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.502 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.502 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0d38998-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.502 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0d38998-b2, col_values=(('external_ids', {'iface-id': 'e0d38998-b28f-4059-8b31-d26feeb41c76', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:e3:79', 'vm-uuid': 'c9010b63-5eae-497c-ace9-dc8788805086'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:20 np0005597378 NetworkManager[48904]: <info>  [1769523320.5055] manager: (tape0d38998-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.507 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.510 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.511 238945 INFO os_vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2')#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.512 238945 DEBUG nova.virt.libvirt.vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1944593165',display_name='tempest-TestGettingAddress-server-1944593165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1944593165',id=120,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ixu7d8tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:15:07Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=c9010b63-5eae-497c-ace9-dc8788805086,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.512 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.513 238945 DEBUG nova.network.os_vif_util [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.513 238945 DEBUG os_vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.514 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.514 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.517 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.517 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb18543f0-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.518 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb18543f0-85, col_values=(('external_ids', {'iface-id': 'b18543f0-85cc-4cd0-913c-5759062e76c0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:82:14', 'vm-uuid': 'c9010b63-5eae-497c-ace9-dc8788805086'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:20 np0005597378 NetworkManager[48904]: <info>  [1769523320.5200] manager: (tapb18543f0-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.522 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.527 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.528 238945 INFO os_vif [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85')#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.934 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.934 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.935 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:7a:e3:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.935 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:3b:82:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.936 238945 INFO nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Using config drive#033[00m
Jan 27 09:15:20 np0005597378 nova_compute[238941]: 2026-01-27 14:15:20.963 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:15:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 305 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.2 MiB/s wr, 24 op/s
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.352048) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523321352116, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1609, "num_deletes": 251, "total_data_size": 2588930, "memory_usage": 2624320, "flush_reason": "Manual Compaction"}
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.411 238945 INFO nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Creating config drive at /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/disk.config#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.416 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvukifp3o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523321439633, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2541291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43502, "largest_seqno": 45110, "table_properties": {"data_size": 2533828, "index_size": 4406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15655, "raw_average_key_size": 20, "raw_value_size": 2518900, "raw_average_value_size": 3237, "num_data_blocks": 197, "num_entries": 778, "num_filter_entries": 778, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523146, "oldest_key_time": 1769523146, "file_creation_time": 1769523321, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 87645 microseconds, and 8825 cpu microseconds.
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.439705) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2541291 bytes OK
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.439724) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.532717) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.532756) EVENT_LOG_v1 {"time_micros": 1769523321532748, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.532780) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2581904, prev total WAL file size 2581904, number of live WAL files 2.
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.533685) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2481KB)], [101(7821KB)]
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523321533719, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10550662, "oldest_snapshot_seqno": -1}
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.556 238945 DEBUG nova.network.neutron [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.564 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvukifp3o" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.589 238945 DEBUG nova.storage.rbd_utils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image c9010b63-5eae-497c-ace9-dc8788805086_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.593 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/disk.config c9010b63-5eae-497c-ace9-dc8788805086_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.637 238945 INFO nova.compute.manager [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Took 1.47 seconds to deallocate network for instance.#033[00m
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6642 keys, 8845139 bytes, temperature: kUnknown
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523321658820, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8845139, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8801133, "index_size": 26294, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 172700, "raw_average_key_size": 26, "raw_value_size": 8682725, "raw_average_value_size": 1307, "num_data_blocks": 1027, "num_entries": 6642, "num_filter_entries": 6642, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523321, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.659214) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8845139 bytes
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.662885) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.2 rd, 70.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 7.6 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 7156, records dropped: 514 output_compression: NoCompression
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.662907) EVENT_LOG_v1 {"time_micros": 1769523321662896, "job": 60, "event": "compaction_finished", "compaction_time_micros": 125335, "compaction_time_cpu_micros": 20549, "output_level": 6, "num_output_files": 1, "total_output_size": 8845139, "num_input_records": 7156, "num_output_records": 6642, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523321663873, "job": 60, "event": "table_file_deletion", "file_number": 103}
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523321665503, "job": 60, "event": "table_file_deletion", "file_number": 101}
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.533608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.665618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.665623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.665625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.665626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:15:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:15:21.665628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.777 238945 DEBUG nova.compute.manager [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.778 238945 DEBUG oslo_concurrency.lockutils [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.778 238945 DEBUG oslo_concurrency.lockutils [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.778 238945 DEBUG oslo_concurrency.lockutils [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.778 238945 DEBUG nova.compute.manager [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] No waiting events found dispatching network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.778 238945 WARNING nova.compute.manager [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received unexpected event network-vif-plugged-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.778 238945 DEBUG nova.compute.manager [req-c856e816-9f2a-4299-8f90-1df1837b141b req-ea2b5ac0-7c57-47ee-827f-e2fd76b7e6dd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Received event network-vif-deleted-cad52f25-e715-4de1-a04c-d1f0ff0b8e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.817 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.818 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.843 238945 DEBUG nova.network.neutron [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updated VIF entry in instance network info cache for port b18543f0-85cc-4cd0-913c-5759062e76c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.843 238945 DEBUG nova.network.neutron [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.862 238945 DEBUG oslo_concurrency.lockutils [req-f2a9d937-445a-43a6-a22d-e418b985239c req-7043c04e-3e55-4f6a-b267-372df98fd354 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.866 238945 DEBUG oslo_concurrency.processutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/disk.config c9010b63-5eae-497c-ace9-dc8788805086_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.867 238945 INFO nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Deleting local config drive /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086/disk.config because it was imported into RBD.#033[00m
Jan 27 09:15:21 np0005597378 NetworkManager[48904]: <info>  [1769523321.9170] manager: (tape0d38998-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Jan 27 09:15:21 np0005597378 systemd-udevd[347559]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.919 238945 DEBUG oslo_concurrency.processutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:21 np0005597378 kernel: tape0d38998-b2: entered promiscuous mode
Jan 27 09:15:21 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:21Z|01238|binding|INFO|Claiming lport e0d38998-b28f-4059-8b31-d26feeb41c76 for this chassis.
Jan 27 09:15:21 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:21Z|01239|binding|INFO|e0d38998-b28f-4059-8b31-d26feeb41c76: Claiming fa:16:3e:7a:e3:79 10.100.0.5
Jan 27 09:15:21 np0005597378 NetworkManager[48904]: <info>  [1769523321.9337] device (tape0d38998-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:15:21 np0005597378 NetworkManager[48904]: <info>  [1769523321.9346] device (tape0d38998-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:15:21 np0005597378 NetworkManager[48904]: <info>  [1769523321.9444] manager: (tapb18543f0-85): new Tun device (/org/freedesktop/NetworkManager/Devices/514)
Jan 27 09:15:21 np0005597378 kernel: tapb18543f0-85: entered promiscuous mode
Jan 27 09:15:21 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:21Z|01240|binding|INFO|Claiming lport b18543f0-85cc-4cd0-913c-5759062e76c0 for this chassis.
Jan 27 09:15:21 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:21Z|01241|binding|INFO|b18543f0-85cc-4cd0-913c-5759062e76c0: Claiming fa:16:3e:3b:82:14 2001:db8:0:1:f816:3eff:fe3b:8214 2001:db8::f816:3eff:fe3b:8214
Jan 27 09:15:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:21.952 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:e3:79 10.100.0.5'], port_security=['fa:16:3e:7a:e3:79 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c9010b63-5eae-497c-ace9-dc8788805086', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-107e1e32-614b-4ab8-bbad-b8ada050804e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c2bb1b0-9703-440f-a697-1b5346ed2fe2, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e0d38998-b28f-4059-8b31-d26feeb41c76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:21.953 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e0d38998-b28f-4059-8b31-d26feeb41c76 in datapath 107e1e32-614b-4ab8-bbad-b8ada050804e bound to our chassis#033[00m
Jan 27 09:15:21 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:21Z|01242|binding|INFO|Setting lport e0d38998-b28f-4059-8b31-d26feeb41c76 ovn-installed in OVS
Jan 27 09:15:21 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:21Z|01243|binding|INFO|Setting lport e0d38998-b28f-4059-8b31-d26feeb41c76 up in Southbound
Jan 27 09:15:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:21.955 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 107e1e32-614b-4ab8-bbad-b8ada050804e#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.959 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:21 np0005597378 NetworkManager[48904]: <info>  [1769523321.9616] device (tapb18543f0-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:15:21 np0005597378 NetworkManager[48904]: <info>  [1769523321.9638] device (tapb18543f0-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:15:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:21.967 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:82:14 2001:db8:0:1:f816:3eff:fe3b:8214 2001:db8::f816:3eff:fe3b:8214'], port_security=['fa:16:3e:3b:82:14 2001:db8:0:1:f816:3eff:fe3b:8214 2001:db8::f816:3eff:fe3b:8214'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3b:8214/64 2001:db8::f816:3eff:fe3b:8214/64', 'neutron:device_id': 'c9010b63-5eae-497c-ace9-dc8788805086', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a89e419-fc51-4e3e-9f6e-6978eb8bc060, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b18543f0-85cc-4cd0-913c-5759062e76c0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.970 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:21 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:21Z|01244|binding|INFO|Setting lport b18543f0-85cc-4cd0-913c-5759062e76c0 ovn-installed in OVS
Jan 27 09:15:21 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:21Z|01245|binding|INFO|Setting lport b18543f0-85cc-4cd0-913c-5759062e76c0 up in Southbound
Jan 27 09:15:21 np0005597378 nova_compute[238941]: 2026-01-27 14:15:21.976 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:21.979 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1b09ee-0581-4d4e-905e-48f15df2ea54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:21 np0005597378 systemd-machined[207425]: New machine qemu-152-instance-00000078.
Jan 27 09:15:22 np0005597378 systemd[1]: Started Virtual Machine qemu-152-instance-00000078.
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.035 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2e524eb0-088c-480f-8a48-ee71c5fbfa4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.040 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ab1fd3-03fc-48ff-808a-c90951d047d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.069 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[eebaec09-6de7-43b0-b341-56dddabd4e84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.097 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e7096d29-fd86-4ec9-903c-396d3ee8b80f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap107e1e32-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:23:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591982, 'reachable_time': 37220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347805, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.120 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e8af9d6e-e15b-4cb8-bee7-2ef37c06f1d0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap107e1e32-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591994, 'tstamp': 591994}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347807, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap107e1e32-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591997, 'tstamp': 591997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347807, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.121 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap107e1e32-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.124 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.124 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap107e1e32-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.125 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.125 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap107e1e32-60, col_values=(('external_ids', {'iface-id': '4892ac35-2643-4e0c-8a95-5275bc7e88da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.125 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.126 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b18543f0-85cc-4cd0-913c-5759062e76c0 in datapath f2539952-bab4-4694-909b-dbdd2d64b450 unbound from our chassis#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.128 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f2539952-bab4-4694-909b-dbdd2d64b450#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.142 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[45e0c828-0f8a-4b06-a926-3d89644f067c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.169 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4b44333d-b4f8-477d-8578-f9b417523b84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.173 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c50bbb73-e403-4ff3-a92c-373446ee1ae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.201 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dd36dcef-29e6-459d-b183-82c9f00c8442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.219 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8c61b4-a502-4a12-b6d1-f0dbd96f1c75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2539952-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592069, 'reachable_time': 22870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347813, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.239 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[78cb872b-1ed4-4243-bcb9-e53d8fc28b6f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf2539952-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592081, 'tstamp': 592081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347814, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.241 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2539952-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.243 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.245 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2539952-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.245 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.245 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf2539952-b0, col_values=(('external_ids', {'iface-id': '0a181f74-30e7-4bcc-b817-e247dda31c08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:22.246 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:15:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:15:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1050238617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.453 238945 DEBUG oslo_concurrency.processutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.459 238945 DEBUG nova.compute.provider_tree [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.486 238945 DEBUG nova.scheduler.client.report [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.509 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.559 238945 INFO nova.scheduler.client.report [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance d27de200-a446-4d4f-a0dd-c3be9edf0f73#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.598 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.633 238945 DEBUG oslo_concurrency.lockutils [None req-8388cd81-53f2-4f98-842d-ce8dd30418f0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d27de200-a446-4d4f-a0dd-c3be9edf0f73" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.883 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523322.8827486, c9010b63-5eae-497c-ace9-dc8788805086 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.883 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] VM Started (Lifecycle Event)#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.903 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.907 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523322.8828528, c9010b63-5eae-497c-ace9-dc8788805086 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.907 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.928 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.931 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:15:22 np0005597378 nova_compute[238941]: 2026-01-27 14:15:22.948 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:15:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 305 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 16 KiB/s wr, 5 op/s
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.919 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.920 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.920 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.920 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.921 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Processing event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.921 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.921 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.921 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.922 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.922 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] No event matching network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 in dict_keys([('network-vif-plugged', 'b18543f0-85cc-4cd0-913c-5759062e76c0')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.922 238945 WARNING nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received unexpected event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.922 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.923 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.923 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.923 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.923 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Processing event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.924 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.924 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.924 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.924 238945 DEBUG oslo_concurrency.lockutils [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.925 238945 DEBUG nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] No waiting events found dispatching network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.925 238945 WARNING nova.compute.manager [req-217355c0-24e6-4c94-91a3-60b18a62475d req-d7a3a312-51c5-4ff6-b1da-fe15d66e6ea9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received unexpected event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.926 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.929 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523323.929125, c9010b63-5eae-497c-ace9-dc8788805086 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.929 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.931 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.933 238945 INFO nova.virt.libvirt.driver [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Instance spawned successfully.#033[00m
Jan 27 09:15:23 np0005597378 nova_compute[238941]: 2026-01-27 14:15:23.933 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:15:24 np0005597378 nova_compute[238941]: 2026-01-27 14:15:24.085 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:15:24 np0005597378 nova_compute[238941]: 2026-01-27 14:15:24.091 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:15:24 np0005597378 nova_compute[238941]: 2026-01-27 14:15:24.095 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:15:24 np0005597378 nova_compute[238941]: 2026-01-27 14:15:24.096 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:15:24 np0005597378 nova_compute[238941]: 2026-01-27 14:15:24.096 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:15:24 np0005597378 nova_compute[238941]: 2026-01-27 14:15:24.097 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:15:24 np0005597378 nova_compute[238941]: 2026-01-27 14:15:24.097 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:15:24 np0005597378 nova_compute[238941]: 2026-01-27 14:15:24.098 238945 DEBUG nova.virt.libvirt.driver [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:15:24 np0005597378 nova_compute[238941]: 2026-01-27 14:15:24.142 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:15:24 np0005597378 nova_compute[238941]: 2026-01-27 14:15:24.209 238945 INFO nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Took 16.54 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:15:24 np0005597378 nova_compute[238941]: 2026-01-27 14:15:24.209 238945 DEBUG nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:15:24 np0005597378 nova_compute[238941]: 2026-01-27 14:15:24.280 238945 INFO nova.compute.manager [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Took 17.79 seconds to build instance.#033[00m
Jan 27 09:15:24 np0005597378 nova_compute[238941]: 2026-01-27 14:15:24.298 238945 DEBUG oslo_concurrency.lockutils [None req-1c030193-4eef-4af9-a927-67182cafbc1f 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 246 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 32 KiB/s wr, 36 op/s
Jan 27 09:15:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:25Z|01246|binding|INFO|Releasing lport 0a181f74-30e7-4bcc-b817-e247dda31c08 from this chassis (sb_readonly=0)
Jan 27 09:15:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:25Z|01247|binding|INFO|Releasing lport 4892ac35-2643-4e0c-8a95-5275bc7e88da from this chassis (sb_readonly=0)
Jan 27 09:15:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:25Z|01248|binding|INFO|Releasing lport d783a246-d28e-44e1-a0e9-783e23a95051 from this chassis (sb_readonly=0)
Jan 27 09:15:25 np0005597378 nova_compute[238941]: 2026-01-27 14:15:25.142 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:25 np0005597378 nova_compute[238941]: 2026-01-27 14:15:25.519 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:25 np0005597378 podman[347861]: 2026-01-27 14:15:25.721105548 +0000 UTC m=+0.057695090 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.382 238945 DEBUG nova.compute.manager [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-changed-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.382 238945 DEBUG nova.compute.manager [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Refreshing instance network info cache due to event network-changed-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.382 238945 DEBUG oslo_concurrency.lockutils [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.383 238945 DEBUG oslo_concurrency.lockutils [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.383 238945 DEBUG nova.network.neutron [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Refreshing network info cache for port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.409 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.410 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.411 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.412 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.412 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.413 238945 INFO nova.compute.manager [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Terminating instance#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.414 238945 DEBUG nova.compute.manager [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:15:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:15:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:15:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:26 np0005597378 kernel: tap21c0e79c-9d (unregistering): left promiscuous mode
Jan 27 09:15:26 np0005597378 NetworkManager[48904]: <info>  [1769523326.5624] device (tap21c0e79c-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:15:26 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:26Z|01249|binding|INFO|Releasing lport 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d from this chassis (sb_readonly=0)
Jan 27 09:15:26 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:26Z|01250|binding|INFO|Setting lport 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d down in Southbound
Jan 27 09:15:26 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:26Z|01251|binding|INFO|Removing iface tap21c0e79c-9d ovn-installed in OVS
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:26.585 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:df:73 10.100.0.14'], port_security=['fa:16:3e:27:df:73 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8df0cb66-9678-4f50-87e0-066cbafcb26b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'accd4075-5a55-4bff-827f-ddb1794ed7d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a15460d-6ccd-40d2-9737-7ae06bf168e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:26.587 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d in datapath 12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c unbound from our chassis#033[00m
Jan 27 09:15:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:26.589 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:15:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:26.590 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d740db91-df4f-4ee3-9c4e-6f06e9918faa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:26.590 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c namespace which is not needed anymore#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.591 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:26 np0005597378 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000075.scope: Deactivated successfully.
Jan 27 09:15:26 np0005597378 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000075.scope: Consumed 15.585s CPU time.
Jan 27 09:15:26 np0005597378 systemd-machined[207425]: Machine qemu-149-instance-00000075 terminated.
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.657 238945 INFO nova.virt.libvirt.driver [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Instance destroyed successfully.#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.658 238945 DEBUG nova.objects.instance [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 8df0cb66-9678-4f50-87e0-066cbafcb26b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.675 238945 DEBUG nova.virt.libvirt.vif [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:13:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-144581117',display_name='tempest-TestNetworkBasicOps-server-144581117',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-144581117',id=117,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClIsOMYBDW1mTHBPhBNzMnSebAst2LQIoqp5ISoghGMCqgK5cCtP8boVvXqJnI/aVkYSOd21OzhpfBfG/mCpRxC0QfzpZQ+ccWYmJrMDrV2A/8x5zjAOXMRJmK9HClK6w==',key_name='tempest-TestNetworkBasicOps-932126384',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:14:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-r0ixdvtt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:14:06Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8df0cb66-9678-4f50-87e0-066cbafcb26b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.675 238945 DEBUG nova.network.os_vif_util [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.676 238945 DEBUG nova.network.os_vif_util [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.676 238945 DEBUG os_vif [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.678 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.678 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21c0e79c-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.683 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:15:26 np0005597378 nova_compute[238941]: 2026-01-27 14:15:26.685 238945 INFO os_vif [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:df:73,bridge_name='br-int',has_traffic_filtering=True,id=21c0e79c-9d05-4b8c-89f6-b7f7e93c871d,network=Network(12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21c0e79c-9d')#033[00m
Jan 27 09:15:26 np0005597378 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [NOTICE]   (345514) : haproxy version is 2.8.14-c23fe91
Jan 27 09:15:26 np0005597378 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [NOTICE]   (345514) : path to executable is /usr/sbin/haproxy
Jan 27 09:15:26 np0005597378 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [WARNING]  (345514) : Exiting Master process...
Jan 27 09:15:26 np0005597378 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [WARNING]  (345514) : Exiting Master process...
Jan 27 09:15:26 np0005597378 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [ALERT]    (345514) : Current worker (345516) exited with code 143 (Terminated)
Jan 27 09:15:26 np0005597378 neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c[345510]: [WARNING]  (345514) : All workers exited. Exiting... (0)
Jan 27 09:15:26 np0005597378 systemd[1]: libpod-40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6.scope: Deactivated successfully.
Jan 27 09:15:26 np0005597378 podman[348037]: 2026-01-27 14:15:26.757114151 +0000 UTC m=+0.056360805 container died 40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:15:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6-userdata-shm.mount: Deactivated successfully.
Jan 27 09:15:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9303fadf09d11970d4b5c7b6edc25e1039ed739f467345412508e92fff02dcff-merged.mount: Deactivated successfully.
Jan 27 09:15:26 np0005597378 podman[348037]: 2026-01-27 14:15:26.91891782 +0000 UTC m=+0.218164474 container cleanup 40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:15:26 np0005597378 systemd[1]: libpod-conmon-40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6.scope: Deactivated successfully.
Jan 27 09:15:26 np0005597378 podman[348068]: 2026-01-27 14:15:26.966260143 +0000 UTC m=+0.185232805 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 27 09:15:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 246 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 748 KiB/s rd, 21 KiB/s wr, 62 op/s
Jan 27 09:15:27 np0005597378 podman[348116]: 2026-01-27 14:15:27.001849523 +0000 UTC m=+0.055761729 container remove 40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 09:15:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.008 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f5243e39-346b-4a8d-88cb-37c2d2b8b0e7]: (4, ('Tue Jan 27 02:15:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c (40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6)\n40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6\nTue Jan 27 02:15:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c (40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6)\n40e878a7f18f6aea78a5f1a239b883b6d4e6af8dc06340a04f81f8804f002ef6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.010 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41c22113-4e9b-4d85-9a62-407d3efaf84c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.011 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12f77fa9-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:27 np0005597378 kernel: tap12f77fa9-60: left promiscuous mode
Jan 27 09:15:27 np0005597378 nova_compute[238941]: 2026-01-27 14:15:27.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:27 np0005597378 nova_compute[238941]: 2026-01-27 14:15:27.028 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.034 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[453f31e5-02ee-4981-8c3b-2d556d24c111]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.061 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b042415f-7083-4ccc-b723-3640efc95164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.067 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6c11daa3-3b12-4745-af02-f65f557ab940]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.088 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3d42c32b-7a1e-4741-828f-48702cc80c7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588493, 'reachable_time': 26193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348137, 'error': None, 'target': 'ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:27 np0005597378 systemd[1]: run-netns-ovnmeta\x2d12f77fa9\x2d6387\x2d4edc\x2da4eb\x2da8f1b9ccdb0c.mount: Deactivated successfully.
Jan 27 09:15:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.092 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:15:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:27.092 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ceb334-7e9b-4822-9a51-05298d8090c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:27 np0005597378 nova_compute[238941]: 2026-01-27 14:15:27.179 238945 INFO nova.virt.libvirt.driver [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Deleting instance files /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b_del#033[00m
Jan 27 09:15:27 np0005597378 nova_compute[238941]: 2026-01-27 14:15:27.180 238945 INFO nova.virt.libvirt.driver [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Deletion of /var/lib/nova/instances/8df0cb66-9678-4f50-87e0-066cbafcb26b_del complete#033[00m
Jan 27 09:15:27 np0005597378 nova_compute[238941]: 2026-01-27 14:15:27.349 238945 INFO nova.compute.manager [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:15:27 np0005597378 nova_compute[238941]: 2026-01-27 14:15:27.349 238945 DEBUG oslo.service.loopingcall [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:15:27 np0005597378 nova_compute[238941]: 2026-01-27 14:15:27.350 238945 DEBUG nova.compute.manager [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:15:27 np0005597378 nova_compute[238941]: 2026-01-27 14:15:27.350 238945 DEBUG nova.network.neutron [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:15:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:27 np0005597378 nova_compute[238941]: 2026-01-27 14:15:27.601 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:27 np0005597378 podman[348216]: 2026-01-27 14:15:27.74729207 +0000 UTC m=+0.045051544 container create c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018808328912090012 of space, bias 1.0, pg target 0.5642498673627003 quantized to 32 (current 32)
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695846922270153 of space, bias 1.0, pg target 0.20087540766810458 quantized to 32 (current 32)
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0601763319455298e-06 of space, bias 4.0, pg target 0.0012722115983346358 quantized to 16 (current 16)
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:15:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:15:27 np0005597378 systemd[1]: Started libpod-conmon-c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196.scope.
Jan 27 09:15:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:15:27 np0005597378 podman[348216]: 2026-01-27 14:15:27.725911119 +0000 UTC m=+0.023670613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:15:27 np0005597378 podman[348216]: 2026-01-27 14:15:27.842190873 +0000 UTC m=+0.139950367 container init c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:15:27 np0005597378 podman[348216]: 2026-01-27 14:15:27.851021558 +0000 UTC m=+0.148781032 container start c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:15:27 np0005597378 podman[348216]: 2026-01-27 14:15:27.854220305 +0000 UTC m=+0.151979799 container attach c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:15:27 np0005597378 optimistic_bhabha[348232]: 167 167
Jan 27 09:15:27 np0005597378 systemd[1]: libpod-c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196.scope: Deactivated successfully.
Jan 27 09:15:27 np0005597378 conmon[348232]: conmon c8f4167b4d8401931a68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196.scope/container/memory.events
Jan 27 09:15:27 np0005597378 podman[348216]: 2026-01-27 14:15:27.858582911 +0000 UTC m=+0.156342385 container died c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:15:27 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1812990cec9545388f1052786c9c9004143348d5980e37c4c9e0913590d75d4b-merged.mount: Deactivated successfully.
Jan 27 09:15:27 np0005597378 podman[348216]: 2026-01-27 14:15:27.895945757 +0000 UTC m=+0.193705221 container remove c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 27 09:15:27 np0005597378 systemd[1]: libpod-conmon-c8f4167b4d8401931a6871135dafac8d5a789c3ac9a268c2019a49a9a166d196.scope: Deactivated successfully.
Jan 27 09:15:28 np0005597378 podman[348256]: 2026-01-27 14:15:28.095264218 +0000 UTC m=+0.060267989 container create b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:15:28 np0005597378 systemd[1]: Started libpod-conmon-b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6.scope.
Jan 27 09:15:28 np0005597378 podman[348256]: 2026-01-27 14:15:28.0649886 +0000 UTC m=+0.029992401 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:15:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:15:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c17a73f904f192a9178078416d39d25769961986fbf1da7e16073acd29fc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c17a73f904f192a9178078416d39d25769961986fbf1da7e16073acd29fc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c17a73f904f192a9178078416d39d25769961986fbf1da7e16073acd29fc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c17a73f904f192a9178078416d39d25769961986fbf1da7e16073acd29fc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:28 np0005597378 podman[348256]: 2026-01-27 14:15:28.188742232 +0000 UTC m=+0.153746013 container init b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:15:28 np0005597378 podman[348256]: 2026-01-27 14:15:28.195022451 +0000 UTC m=+0.160026212 container start b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:15:28 np0005597378 podman[348256]: 2026-01-27 14:15:28.198759111 +0000 UTC m=+0.163762882 container attach b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.312 238945 DEBUG nova.network.neutron [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.345 238945 INFO nova.compute.manager [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Took 0.99 seconds to deallocate network for instance.#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.402 238945 DEBUG nova.compute.manager [req-049f7ed2-3d04-412d-a7ac-1354f604c18d req-7481ccdb-7850-4c78-b086-5fc0df968b4c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-vif-deleted-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.404 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.404 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.479 238945 DEBUG nova.compute.manager [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-vif-unplugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.480 238945 DEBUG oslo_concurrency.lockutils [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.480 238945 DEBUG oslo_concurrency.lockutils [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.481 238945 DEBUG oslo_concurrency.lockutils [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.481 238945 DEBUG nova.compute.manager [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] No waiting events found dispatching network-vif-unplugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.481 238945 WARNING nova.compute.manager [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received unexpected event network-vif-unplugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.482 238945 DEBUG nova.compute.manager [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.482 238945 DEBUG oslo_concurrency.lockutils [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.482 238945 DEBUG oslo_concurrency.lockutils [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.483 238945 DEBUG oslo_concurrency.lockutils [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.483 238945 DEBUG nova.compute.manager [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] No waiting events found dispatching network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.483 238945 WARNING nova.compute.manager [req-97fce49d-d9d5-4ae5-a057-7b564b9a6902 req-5f0fa286-7099-4a71-a50d-bbf4da8b86ef 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Received unexpected event network-vif-plugged-21c0e79c-9d05-4b8c-89f6-b7f7e93c871d for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.500 238945 DEBUG oslo_concurrency.processutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.544 238945 DEBUG nova.network.neutron [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updated VIF entry in instance network info cache for port 21c0e79c-9d05-4b8c-89f6-b7f7e93c871d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.545 238945 DEBUG nova.network.neutron [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Updating instance_info_cache with network_info: [{"id": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "address": "fa:16:3e:27:df:73", "network": {"id": "12f77fa9-6387-4edc-a4eb-a8f1b9ccdb0c", "bridge": "br-int", "label": "tempest-network-smoke--897101286", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21c0e79c-9d", "ovs_interfaceid": "21c0e79c-9d05-4b8c-89f6-b7f7e93c871d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:28 np0005597378 nova_compute[238941]: 2026-01-27 14:15:28.650 238945 DEBUG oslo_concurrency.lockutils [req-5198fcf6-fe73-41de-93d9-55227126c1bc req-44aaaf0f-e02a-44d6-8dda-fa18441864b0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8df0cb66-9678-4f50-87e0-066cbafcb26b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:15:28 np0005597378 elated_wilson[348271]: [
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:    {
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:        "available": false,
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:        "being_replaced": false,
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:        "ceph_device_lvm": false,
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:        "lsm_data": {},
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:        "lvs": [],
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:        "path": "/dev/sr0",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:        "rejected_reasons": [
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "Insufficient space (<5GB)",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "Has a FileSystem"
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:        ],
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:        "sys_api": {
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "actuators": null,
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "device_nodes": [
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:                "sr0"
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            ],
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "devname": "sr0",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "human_readable_size": "482.00 KB",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "id_bus": "ata",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "model": "QEMU DVD-ROM",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "nr_requests": "2",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "parent": "/dev/sr0",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "partitions": {},
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "path": "/dev/sr0",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "removable": "1",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "rev": "2.5+",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "ro": "0",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "rotational": "1",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "sas_address": "",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "sas_device_handle": "",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "scheduler_mode": "mq-deadline",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "sectors": 0,
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "sectorsize": "2048",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "size": 493568.0,
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "support_discard": "2048",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "type": "disk",
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:            "vendor": "QEMU"
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:        }
Jan 27 09:15:28 np0005597378 elated_wilson[348271]:    }
Jan 27 09:15:28 np0005597378 elated_wilson[348271]: ]
Jan 27 09:15:28 np0005597378 systemd[1]: libpod-b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6.scope: Deactivated successfully.
Jan 27 09:15:28 np0005597378 podman[348256]: 2026-01-27 14:15:28.746288275 +0000 UTC m=+0.711292036 container died b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 27 09:15:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e99c17a73f904f192a9178078416d39d25769961986fbf1da7e16073acd29fc5-merged.mount: Deactivated successfully.
Jan 27 09:15:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 196 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 120 op/s
Jan 27 09:15:28 np0005597378 podman[348256]: 2026-01-27 14:15:28.988663214 +0000 UTC m=+0.953666975 container remove b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:15:29 np0005597378 systemd[1]: libpod-conmon-b2ba0f5995c107e619b2658267c85bc85ebd98b0f37897f2055b870733f143a6.scope: Deactivated successfully.
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734658799' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:15:29 np0005597378 nova_compute[238941]: 2026-01-27 14:15:29.092 238945 DEBUG oslo_concurrency.processutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:29 np0005597378 nova_compute[238941]: 2026-01-27 14:15:29.101 238945 DEBUG nova.compute.provider_tree [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:15:29 np0005597378 nova_compute[238941]: 2026-01-27 14:15:29.162 238945 DEBUG nova.scheduler.client.report [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:15:29 np0005597378 nova_compute[238941]: 2026-01-27 14:15:29.191 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:29 np0005597378 nova_compute[238941]: 2026-01-27 14:15:29.247 238945 INFO nova.scheduler.client.report [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 8df0cb66-9678-4f50-87e0-066cbafcb26b#033[00m
Jan 27 09:15:29 np0005597378 nova_compute[238941]: 2026-01-27 14:15:29.349 238945 DEBUG oslo_concurrency.lockutils [None req-405f86c2-a4c2-4308-b82c-e434eb09c4ce 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8df0cb66-9678-4f50-87e0-066cbafcb26b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:29 np0005597378 podman[349150]: 2026-01-27 14:15:29.424721523 +0000 UTC m=+0.035787256 container create ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jennings, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 09:15:29 np0005597378 systemd[1]: Started libpod-conmon-ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423.scope.
Jan 27 09:15:29 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:15:29 np0005597378 podman[349150]: 2026-01-27 14:15:29.497139166 +0000 UTC m=+0.108204929 container init ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:15:29 np0005597378 podman[349150]: 2026-01-27 14:15:29.504265146 +0000 UTC m=+0.115330889 container start ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jennings, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 27 09:15:29 np0005597378 podman[349150]: 2026-01-27 14:15:29.408937702 +0000 UTC m=+0.020003465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:15:29 np0005597378 podman[349150]: 2026-01-27 14:15:29.507653666 +0000 UTC m=+0.118719429 container attach ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:15:29 np0005597378 nifty_jennings[349166]: 167 167
Jan 27 09:15:29 np0005597378 systemd[1]: libpod-ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423.scope: Deactivated successfully.
Jan 27 09:15:29 np0005597378 podman[349150]: 2026-01-27 14:15:29.509147797 +0000 UTC m=+0.120213540 container died ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 27 09:15:29 np0005597378 systemd[1]: var-lib-containers-storage-overlay-532faf712984806757dad8c48b97b61da28007db126530ab53511e77b19e1b1e-merged.mount: Deactivated successfully.
Jan 27 09:15:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:29 np0005597378 podman[349150]: 2026-01-27 14:15:29.549960866 +0000 UTC m=+0.161026609 container remove ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jennings, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 09:15:29 np0005597378 systemd[1]: libpod-conmon-ebde4e26c2cc6f795c2fa78f01578c27e256a3df69b4feb318d3c1f26bec2423.scope: Deactivated successfully.
Jan 27 09:15:29 np0005597378 podman[349188]: 2026-01-27 14:15:29.734398208 +0000 UTC m=+0.043741868 container create 432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_galois, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:15:29 np0005597378 systemd[1]: Started libpod-conmon-432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09.scope.
Jan 27 09:15:29 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:15:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e42a945adf9ac817b27668c5622ba4645164bd116733a378f72e79f20ea1ee5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e42a945adf9ac817b27668c5622ba4645164bd116733a378f72e79f20ea1ee5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e42a945adf9ac817b27668c5622ba4645164bd116733a378f72e79f20ea1ee5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e42a945adf9ac817b27668c5622ba4645164bd116733a378f72e79f20ea1ee5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:29 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e42a945adf9ac817b27668c5622ba4645164bd116733a378f72e79f20ea1ee5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:29 np0005597378 podman[349188]: 2026-01-27 14:15:29.717181719 +0000 UTC m=+0.026525399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:15:29 np0005597378 podman[349188]: 2026-01-27 14:15:29.824067041 +0000 UTC m=+0.133410721 container init 432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_galois, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:15:29 np0005597378 podman[349188]: 2026-01-27 14:15:29.830521464 +0000 UTC m=+0.139865134 container start 432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:15:29 np0005597378 podman[349188]: 2026-01-27 14:15:29.834946572 +0000 UTC m=+0.144290232 container attach 432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_galois, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 09:15:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:15:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:15:30 np0005597378 epic_galois[349205]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:15:30 np0005597378 epic_galois[349205]: --> All data devices are unavailable
Jan 27 09:15:30 np0005597378 systemd[1]: libpod-432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09.scope: Deactivated successfully.
Jan 27 09:15:30 np0005597378 podman[349188]: 2026-01-27 14:15:30.310708001 +0000 UTC m=+0.620051671 container died 432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_galois, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:15:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2e42a945adf9ac817b27668c5622ba4645164bd116733a378f72e79f20ea1ee5-merged.mount: Deactivated successfully.
Jan 27 09:15:30 np0005597378 podman[349188]: 2026-01-27 14:15:30.356628077 +0000 UTC m=+0.665971737 container remove 432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 09:15:30 np0005597378 systemd[1]: libpod-conmon-432b994db40da19e829dfe6d10f7229eac938f32ccebcb639428b78c8693ea09.scope: Deactivated successfully.
Jan 27 09:15:30 np0005597378 nova_compute[238941]: 2026-01-27 14:15:30.568 238945 DEBUG nova.compute.manager [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-changed-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:30 np0005597378 nova_compute[238941]: 2026-01-27 14:15:30.571 238945 DEBUG nova.compute.manager [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing instance network info cache due to event network-changed-e0d38998-b28f-4059-8b31-d26feeb41c76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:15:30 np0005597378 nova_compute[238941]: 2026-01-27 14:15:30.572 238945 DEBUG oslo_concurrency.lockutils [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:15:30 np0005597378 nova_compute[238941]: 2026-01-27 14:15:30.572 238945 DEBUG oslo_concurrency.lockutils [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:15:30 np0005597378 nova_compute[238941]: 2026-01-27 14:15:30.572 238945 DEBUG nova.network.neutron [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing network info cache for port e0d38998-b28f-4059-8b31-d26feeb41c76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:15:30 np0005597378 podman[349301]: 2026-01-27 14:15:30.788017031 +0000 UTC m=+0.041828257 container create 8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_swartz, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:15:30 np0005597378 systemd[1]: Started libpod-conmon-8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b.scope.
Jan 27 09:15:30 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:15:30 np0005597378 podman[349301]: 2026-01-27 14:15:30.865618732 +0000 UTC m=+0.119429938 container init 8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_swartz, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 09:15:30 np0005597378 podman[349301]: 2026-01-27 14:15:30.77112525 +0000 UTC m=+0.024936466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:15:30 np0005597378 podman[349301]: 2026-01-27 14:15:30.872175537 +0000 UTC m=+0.125986743 container start 8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_swartz, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 09:15:30 np0005597378 podman[349301]: 2026-01-27 14:15:30.876706468 +0000 UTC m=+0.130517674 container attach 8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_swartz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:15:30 np0005597378 condescending_swartz[349317]: 167 167
Jan 27 09:15:30 np0005597378 systemd[1]: libpod-8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b.scope: Deactivated successfully.
Jan 27 09:15:30 np0005597378 podman[349301]: 2026-01-27 14:15:30.879553154 +0000 UTC m=+0.133364370 container died 8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 09:15:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c4e00c361ff3a21fabe8f8e1a32bb5e7d524942be727ec59aa6fb1e2c8d85d3b-merged.mount: Deactivated successfully.
Jan 27 09:15:30 np0005597378 podman[349301]: 2026-01-27 14:15:30.92175466 +0000 UTC m=+0.175565866 container remove 8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_swartz, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:15:30 np0005597378 systemd[1]: libpod-conmon-8d70280b23f2d7ebca02a207de7dbc004431632278d572466714dda22913523b.scope: Deactivated successfully.
Jan 27 09:15:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 167 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 22 KiB/s wr, 130 op/s
Jan 27 09:15:31 np0005597378 podman[349339]: 2026-01-27 14:15:31.112160332 +0000 UTC m=+0.049745448 container create 584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:15:31 np0005597378 systemd[1]: Started libpod-conmon-584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3.scope.
Jan 27 09:15:31 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:15:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298e1638a21a3eae97eca0706b04420127a3243aa02ca2d9924fee1e363da86e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298e1638a21a3eae97eca0706b04420127a3243aa02ca2d9924fee1e363da86e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298e1638a21a3eae97eca0706b04420127a3243aa02ca2d9924fee1e363da86e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298e1638a21a3eae97eca0706b04420127a3243aa02ca2d9924fee1e363da86e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:31 np0005597378 podman[349339]: 2026-01-27 14:15:31.094928973 +0000 UTC m=+0.032514129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:15:31 np0005597378 podman[349339]: 2026-01-27 14:15:31.199805002 +0000 UTC m=+0.137390158 container init 584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bardeen, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 27 09:15:31 np0005597378 podman[349339]: 2026-01-27 14:15:31.206745237 +0000 UTC m=+0.144330363 container start 584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bardeen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 09:15:31 np0005597378 podman[349339]: 2026-01-27 14:15:31.210445737 +0000 UTC m=+0.148030893 container attach 584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bardeen, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]: {
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:    "0": [
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:        {
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "devices": [
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "/dev/loop3"
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            ],
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_name": "ceph_lv0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_size": "21470642176",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "name": "ceph_lv0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "tags": {
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.cluster_name": "ceph",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.crush_device_class": "",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.encrypted": "0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.objectstore": "bluestore",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.osd_id": "0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.type": "block",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.vdo": "0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.with_tpm": "0"
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            },
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "type": "block",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "vg_name": "ceph_vg0"
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:        }
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:    ],
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:    "1": [
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:        {
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "devices": [
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "/dev/loop4"
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            ],
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_name": "ceph_lv1",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_size": "21470642176",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "name": "ceph_lv1",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "tags": {
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.cluster_name": "ceph",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.crush_device_class": "",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.encrypted": "0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.objectstore": "bluestore",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.osd_id": "1",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.type": "block",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.vdo": "0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.with_tpm": "0"
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            },
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "type": "block",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "vg_name": "ceph_vg1"
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:        }
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:    ],
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:    "2": [
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:        {
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "devices": [
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "/dev/loop5"
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            ],
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_name": "ceph_lv2",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_size": "21470642176",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "name": "ceph_lv2",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "tags": {
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.cluster_name": "ceph",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.crush_device_class": "",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.encrypted": "0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.objectstore": "bluestore",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.osd_id": "2",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.type": "block",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.vdo": "0",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:                "ceph.with_tpm": "0"
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            },
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "type": "block",
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:            "vg_name": "ceph_vg2"
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:        }
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]:    ]
Jan 27 09:15:31 np0005597378 frosty_bardeen[349356]: }
Jan 27 09:15:31 np0005597378 systemd[1]: libpod-584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3.scope: Deactivated successfully.
Jan 27 09:15:31 np0005597378 podman[349339]: 2026-01-27 14:15:31.493188193 +0000 UTC m=+0.430773319 container died 584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 09:15:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-298e1638a21a3eae97eca0706b04420127a3243aa02ca2d9924fee1e363da86e-merged.mount: Deactivated successfully.
Jan 27 09:15:31 np0005597378 podman[349339]: 2026-01-27 14:15:31.533219581 +0000 UTC m=+0.470804697 container remove 584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bardeen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:15:31 np0005597378 systemd[1]: libpod-conmon-584120f558dfa931e1a69c22696e4c6124b90d4b597ce74d7b53ccaaa4456cd3.scope: Deactivated successfully.
Jan 27 09:15:31 np0005597378 nova_compute[238941]: 2026-01-27 14:15:31.682 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:31 np0005597378 podman[349440]: 2026-01-27 14:15:31.988694928 +0000 UTC m=+0.071879679 container create 0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 09:15:32 np0005597378 podman[349440]: 2026-01-27 14:15:31.94192975 +0000 UTC m=+0.025114531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:15:32 np0005597378 systemd[1]: Started libpod-conmon-0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a.scope.
Jan 27 09:15:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:15:32 np0005597378 podman[349440]: 2026-01-27 14:15:32.128466849 +0000 UTC m=+0.211651610 container init 0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_thompson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 09:15:32 np0005597378 podman[349440]: 2026-01-27 14:15:32.136137574 +0000 UTC m=+0.219322335 container start 0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_thompson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:15:32 np0005597378 pedantic_thompson[349457]: 167 167
Jan 27 09:15:32 np0005597378 systemd[1]: libpod-0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a.scope: Deactivated successfully.
Jan 27 09:15:32 np0005597378 podman[349440]: 2026-01-27 14:15:32.149016828 +0000 UTC m=+0.232201569 container attach 0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_thompson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:15:32 np0005597378 podman[349440]: 2026-01-27 14:15:32.149689225 +0000 UTC m=+0.232873976 container died 0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:15:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2ee910eaf6d914b2870b302580ce6c2a10f533d0959b5701a63a94825afad413-merged.mount: Deactivated successfully.
Jan 27 09:15:32 np0005597378 podman[349440]: 2026-01-27 14:15:32.266104684 +0000 UTC m=+0.349289435 container remove 0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_thompson, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:15:32 np0005597378 systemd[1]: libpod-conmon-0578af83bc753f00c712839e0b05e0525eabf971ead49ea6f5bf995d785d653a.scope: Deactivated successfully.
Jan 27 09:15:32 np0005597378 podman[349480]: 2026-01-27 14:15:32.419665362 +0000 UTC m=+0.021497245 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:15:32 np0005597378 podman[349480]: 2026-01-27 14:15:32.516579198 +0000 UTC m=+0.118411061 container create eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brattain, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 09:15:32 np0005597378 nova_compute[238941]: 2026-01-27 14:15:32.595 238945 DEBUG nova.network.neutron [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updated VIF entry in instance network info cache for port e0d38998-b28f-4059-8b31-d26feeb41c76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:15:32 np0005597378 nova_compute[238941]: 2026-01-27 14:15:32.596 238945 DEBUG nova.network.neutron [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:32 np0005597378 nova_compute[238941]: 2026-01-27 14:15:32.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:32 np0005597378 systemd[1]: Started libpod-conmon-eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4.scope.
Jan 27 09:15:32 np0005597378 nova_compute[238941]: 2026-01-27 14:15:32.622 238945 DEBUG oslo_concurrency.lockutils [req-2d27ea84-ceb6-4f33-961d-a86d1d2afefa req-c2bf300b-cda1-46bd-ade4-b899e20a33e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:15:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:15:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e16d762ff5321280bd4046870e6cefe7a2bdb5e23b9d6f2873e6102cba4df3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e16d762ff5321280bd4046870e6cefe7a2bdb5e23b9d6f2873e6102cba4df3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e16d762ff5321280bd4046870e6cefe7a2bdb5e23b9d6f2873e6102cba4df3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e16d762ff5321280bd4046870e6cefe7a2bdb5e23b9d6f2873e6102cba4df3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:15:32 np0005597378 podman[349480]: 2026-01-27 14:15:32.68861973 +0000 UTC m=+0.290451593 container init eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brattain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:15:32 np0005597378 podman[349480]: 2026-01-27 14:15:32.695672229 +0000 UTC m=+0.297504092 container start eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:15:32 np0005597378 podman[349480]: 2026-01-27 14:15:32.700351114 +0000 UTC m=+0.302182977 container attach eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:15:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 167 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 125 op/s
Jan 27 09:15:33 np0005597378 lvm[349572]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:15:33 np0005597378 lvm[349572]: VG ceph_vg0 finished
Jan 27 09:15:33 np0005597378 lvm[349575]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:15:33 np0005597378 lvm[349575]: VG ceph_vg1 finished
Jan 27 09:15:33 np0005597378 lvm[349577]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:15:33 np0005597378 lvm[349577]: VG ceph_vg2 finished
Jan 27 09:15:33 np0005597378 lvm[349578]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:15:33 np0005597378 lvm[349578]: VG ceph_vg1 finished
Jan 27 09:15:33 np0005597378 lvm[349580]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:15:33 np0005597378 lvm[349580]: VG ceph_vg1 finished
Jan 27 09:15:33 np0005597378 confident_brattain[349496]: {}
Jan 27 09:15:33 np0005597378 systemd[1]: libpod-eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4.scope: Deactivated successfully.
Jan 27 09:15:33 np0005597378 podman[349480]: 2026-01-27 14:15:33.560979035 +0000 UTC m=+1.162810898 container died eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brattain, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:15:33 np0005597378 systemd[1]: libpod-eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4.scope: Consumed 1.349s CPU time.
Jan 27 09:15:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-02e16d762ff5321280bd4046870e6cefe7a2bdb5e23b9d6f2873e6102cba4df3-merged.mount: Deactivated successfully.
Jan 27 09:15:33 np0005597378 podman[349480]: 2026-01-27 14:15:33.605154474 +0000 UTC m=+1.206986357 container remove eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:15:33 np0005597378 systemd[1]: libpod-conmon-eacc8e5cb0ed1d18f0dc7008475b57ca058d56e559500ee3b80cfcf4500f50a4.scope: Deactivated successfully.
Jan 27 09:15:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:15:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:15:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:34 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:34 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:15:34 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:34Z|01252|binding|INFO|Releasing lport 0a181f74-30e7-4bcc-b817-e247dda31c08 from this chassis (sb_readonly=0)
Jan 27 09:15:34 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:34Z|01253|binding|INFO|Releasing lport 4892ac35-2643-4e0c-8a95-5275bc7e88da from this chassis (sb_readonly=0)
Jan 27 09:15:34 np0005597378 nova_compute[238941]: 2026-01-27 14:15:34.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:34 np0005597378 nova_compute[238941]: 2026-01-27 14:15:34.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:15:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:34 np0005597378 nova_compute[238941]: 2026-01-27 14:15:34.612 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523319.608535, d27de200-a446-4d4f-a0dd-c3be9edf0f73 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:15:34 np0005597378 nova_compute[238941]: 2026-01-27 14:15:34.612 238945 INFO nova.compute.manager [-] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:15:34 np0005597378 nova_compute[238941]: 2026-01-27 14:15:34.662 238945 DEBUG nova.compute.manager [None req-336c6785-e893-40cd-8f7b-4c980faf83e0 - - - - - -] [instance: d27de200-a446-4d4f-a0dd-c3be9edf0f73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:15:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 167 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 125 op/s
Jan 27 09:15:36 np0005597378 nova_compute[238941]: 2026-01-27 14:15:36.687 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 172 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 416 KiB/s wr, 99 op/s
Jan 27 09:15:37 np0005597378 nova_compute[238941]: 2026-01-27 14:15:37.398 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:37Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:e3:79 10.100.0.5
Jan 27 09:15:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:37Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:e3:79 10.100.0.5
Jan 27 09:15:37 np0005597378 nova_compute[238941]: 2026-01-27 14:15:37.605 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:37.876 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:37.877 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:15:37 np0005597378 nova_compute[238941]: 2026-01-27 14:15:37.876 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 186 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.1 MiB/s wr, 112 op/s
Jan 27 09:15:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:40 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:40.879 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 27 09:15:41 np0005597378 nova_compute[238941]: 2026-01-27 14:15:41.656 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523326.6553862, 8df0cb66-9678-4f50-87e0-066cbafcb26b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:15:41 np0005597378 nova_compute[238941]: 2026-01-27 14:15:41.657 238945 INFO nova.compute.manager [-] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:15:41 np0005597378 nova_compute[238941]: 2026-01-27 14:15:41.686 238945 DEBUG nova.compute.manager [None req-8263fbab-bc47-4f98-a35f-f3ad799a2ca7 - - - - - -] [instance: 8df0cb66-9678-4f50-87e0-066cbafcb26b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:15:41 np0005597378 nova_compute[238941]: 2026-01-27 14:15:41.690 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:42 np0005597378 nova_compute[238941]: 2026-01-27 14:15:42.607 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:42 np0005597378 nova_compute[238941]: 2026-01-27 14:15:42.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 09:15:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 09:15:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:46.320 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:46.321 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:46.322 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:46 np0005597378 nova_compute[238941]: 2026-01-27 14:15:46.692 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 09:15:47 np0005597378 nova_compute[238941]: 2026-01-27 14:15:47.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:15:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:15:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:15:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:15:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:15:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:15:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 1.7 MiB/s wr, 61 op/s
Jan 27 09:15:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.663 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.664 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.664 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.665 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.665 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.668 238945 INFO nova.compute.manager [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Terminating instance#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.670 238945 DEBUG nova.compute.manager [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:15:49 np0005597378 kernel: tape0d38998-b2 (unregistering): left promiscuous mode
Jan 27 09:15:49 np0005597378 NetworkManager[48904]: <info>  [1769523349.7215] device (tape0d38998-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:15:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:49Z|01254|binding|INFO|Releasing lport e0d38998-b28f-4059-8b31-d26feeb41c76 from this chassis (sb_readonly=0)
Jan 27 09:15:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:49Z|01255|binding|INFO|Setting lport e0d38998-b28f-4059-8b31-d26feeb41c76 down in Southbound
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.739 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:49Z|01256|binding|INFO|Removing iface tape0d38998-b2 ovn-installed in OVS
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.746 238945 DEBUG nova.compute.manager [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-changed-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.746 238945 DEBUG nova.compute.manager [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing instance network info cache due to event network-changed-e0d38998-b28f-4059-8b31-d26feeb41c76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.746 238945 DEBUG oslo_concurrency.lockutils [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.747 238945 DEBUG oslo_concurrency.lockutils [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.747 238945 DEBUG nova.network.neutron [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Refreshing network info cache for port e0d38998-b28f-4059-8b31-d26feeb41c76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.749 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.749 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:e3:79 10.100.0.5'], port_security=['fa:16:3e:7a:e3:79 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c9010b63-5eae-497c-ace9-dc8788805086', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-107e1e32-614b-4ab8-bbad-b8ada050804e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c2bb1b0-9703-440f-a697-1b5346ed2fe2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=e0d38998-b28f-4059-8b31-d26feeb41c76) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.750 154802 INFO neutron.agent.ovn.metadata.agent [-] Port e0d38998-b28f-4059-8b31-d26feeb41c76 in datapath 107e1e32-614b-4ab8-bbad-b8ada050804e unbound from our chassis#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.752 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 107e1e32-614b-4ab8-bbad-b8ada050804e#033[00m
Jan 27 09:15:49 np0005597378 kernel: tapb18543f0-85 (unregistering): left promiscuous mode
Jan 27 09:15:49 np0005597378 NetworkManager[48904]: <info>  [1769523349.7617] device (tapb18543f0-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.761 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:49Z|01257|binding|INFO|Releasing lport b18543f0-85cc-4cd0-913c-5759062e76c0 from this chassis (sb_readonly=0)
Jan 27 09:15:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:49Z|01258|binding|INFO|Setting lport b18543f0-85cc-4cd0-913c-5759062e76c0 down in Southbound
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:49Z|01259|binding|INFO|Removing iface tapb18543f0-85 ovn-installed in OVS
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.773 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:82:14 2001:db8:0:1:f816:3eff:fe3b:8214 2001:db8::f816:3eff:fe3b:8214'], port_security=['fa:16:3e:3b:82:14 2001:db8:0:1:f816:3eff:fe3b:8214 2001:db8::f816:3eff:fe3b:8214'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3b:8214/64 2001:db8::f816:3eff:fe3b:8214/64', 'neutron:device_id': 'c9010b63-5eae-497c-ace9-dc8788805086', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a89e419-fc51-4e3e-9f6e-6978eb8bc060, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b18543f0-85cc-4cd0-913c-5759062e76c0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.773 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[83224ea9-1973-403d-b820-ca620c9d3f01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.783 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.809 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[85bacbe6-3e37-44c9-8124-1a88b62a3e73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.812 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b79b39c7-ce10-46d9-a870-9d115b2c651c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:49 np0005597378 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000078.scope: Deactivated successfully.
Jan 27 09:15:49 np0005597378 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000078.scope: Consumed 14.259s CPU time.
Jan 27 09:15:49 np0005597378 systemd-machined[207425]: Machine qemu-152-instance-00000078 terminated.
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.847 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f31e8600-ae38-429a-bb41-4b70f48af94f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.865 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aba62c9a-4b3b-43c3-b324-40aa35a7b727]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap107e1e32-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:23:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591982, 'reachable_time': 37220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349634, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.885 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c7cacb95-436e-4228-86e1-f5e0e9c807a3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap107e1e32-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591994, 'tstamp': 591994}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349635, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap107e1e32-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591997, 'tstamp': 591997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349635, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.887 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap107e1e32-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.888 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.899 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.899 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap107e1e32-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.899 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.900 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap107e1e32-60, col_values=(('external_ids', {'iface-id': '4892ac35-2643-4e0c-8a95-5275bc7e88da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.900 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.901 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b18543f0-85cc-4cd0-913c-5759062e76c0 in datapath f2539952-bab4-4694-909b-dbdd2d64b450 unbound from our chassis#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.902 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f2539952-bab4-4694-909b-dbdd2d64b450#033[00m
Jan 27 09:15:49 np0005597378 NetworkManager[48904]: <info>  [1769523349.9046] manager: (tapb18543f0-85): new Tun device (/org/freedesktop/NetworkManager/Devices/515)
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[596b1cff-1064-431c-a186-90718b99b8db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.924 238945 INFO nova.virt.libvirt.driver [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Instance destroyed successfully.#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.926 238945 DEBUG nova.objects.instance [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid c9010b63-5eae-497c-ace9-dc8788805086 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.940 238945 DEBUG nova.virt.libvirt.vif [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1944593165',display_name='tempest-TestGettingAddress-server-1944593165',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1944593165',id=120,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:15:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ixu7d8tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:15:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=c9010b63-5eae-497c-ace9-dc8788805086,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.941 238945 DEBUG nova.network.os_vif_util [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.942 238945 DEBUG nova.network.os_vif_util [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.942 238945 DEBUG os_vif [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.945 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.945 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0d38998-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.952 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.954 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a7de5ea1-38e9-462b-addd-a8d0d2fda99c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.955 238945 INFO os_vif [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:e3:79,bridge_name='br-int',has_traffic_filtering=True,id=e0d38998-b28f-4059-8b31-d26feeb41c76,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0d38998-b2')#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.956 238945 DEBUG nova.virt.libvirt.vif [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1944593165',display_name='tempest-TestGettingAddress-server-1944593165',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1944593165',id=120,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:15:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ixu7d8tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:15:24Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=c9010b63-5eae-497c-ace9-dc8788805086,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.956 238945 DEBUG nova.network.os_vif_util [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.957 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bed7ffca-fd5c-48a8-af27-d3b5eea63f8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.957 238945 DEBUG nova.network.os_vif_util [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.957 238945 DEBUG os_vif [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.959 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.959 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb18543f0-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.960 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.964 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:15:49 np0005597378 nova_compute[238941]: 2026-01-27 14:15:49.967 238945 INFO os_vif [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:82:14,bridge_name='br-int',has_traffic_filtering=True,id=b18543f0-85cc-4cd0-913c-5759062e76c0,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb18543f0-85')#033[00m
Jan 27 09:15:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:49.985 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b2ce8d-e167-4c1e-84c5-20d1c4eb871c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.005 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[083f04a3-fecb-402a-8c3c-b43bd3383295]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2539952-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592069, 'reachable_time': 22870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349680, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.023 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[11924e8b-03a8-4466-a313-4385e600dd0e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf2539952-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592081, 'tstamp': 592081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349684, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.025 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2539952-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:50 np0005597378 nova_compute[238941]: 2026-01-27 14:15:50.026 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.028 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2539952-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.029 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:15:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.030 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf2539952-b0, col_values=(('external_ids', {'iface-id': '0a181f74-30e7-4bcc-b817-e247dda31c08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:50.030 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:15:50 np0005597378 nova_compute[238941]: 2026-01-27 14:15:50.630 238945 INFO nova.virt.libvirt.driver [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Deleting instance files /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086_del#033[00m
Jan 27 09:15:50 np0005597378 nova_compute[238941]: 2026-01-27 14:15:50.631 238945 INFO nova.virt.libvirt.driver [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Deletion of /var/lib/nova/instances/c9010b63-5eae-497c-ace9-dc8788805086_del complete#033[00m
Jan 27 09:15:50 np0005597378 nova_compute[238941]: 2026-01-27 14:15:50.702 238945 INFO nova.compute.manager [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:15:50 np0005597378 nova_compute[238941]: 2026-01-27 14:15:50.703 238945 DEBUG oslo.service.loopingcall [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:15:50 np0005597378 nova_compute[238941]: 2026-01-27 14:15:50.703 238945 DEBUG nova.compute.manager [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:15:50 np0005597378 nova_compute[238941]: 2026-01-27 14:15:50.704 238945 DEBUG nova.network.neutron [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:15:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 1.0 MiB/s wr, 22 op/s
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.495 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.496 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.544 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.633 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.633 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.642 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.643 238945 INFO nova.compute.claims [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.812 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.874 238945 DEBUG nova.compute.manager [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-unplugged-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.875 238945 DEBUG oslo_concurrency.lockutils [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.876 238945 DEBUG oslo_concurrency.lockutils [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.876 238945 DEBUG oslo_concurrency.lockutils [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.877 238945 DEBUG nova.compute.manager [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] No waiting events found dispatching network-vif-unplugged-e0d38998-b28f-4059-8b31-d26feeb41c76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.877 238945 DEBUG nova.compute.manager [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-unplugged-e0d38998-b28f-4059-8b31-d26feeb41c76 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.878 238945 DEBUG nova.compute.manager [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.878 238945 DEBUG oslo_concurrency.lockutils [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.879 238945 DEBUG oslo_concurrency.lockutils [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.879 238945 DEBUG oslo_concurrency.lockutils [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.879 238945 DEBUG nova.compute.manager [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] No waiting events found dispatching network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:15:51 np0005597378 nova_compute[238941]: 2026-01-27 14:15:51.880 238945 WARNING nova.compute.manager [req-8e04eefe-0688-45cc-b172-a4da9f130cfb req-3880248b-4f64-4c9a-9340-b251d9cff775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received unexpected event network-vif-plugged-e0d38998-b28f-4059-8b31-d26feeb41c76 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.006 238945 DEBUG nova.compute.manager [req-1d3fbfc9-ff2a-4b9d-a9df-ded60dcd3206 req-403999b8-7e24-457b-9036-d2a5c9d7cdd5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-deleted-b18543f0-85cc-4cd0-913c-5759062e76c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.007 238945 INFO nova.compute.manager [req-1d3fbfc9-ff2a-4b9d-a9df-ded60dcd3206 req-403999b8-7e24-457b-9036-d2a5c9d7cdd5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Neutron deleted interface b18543f0-85cc-4cd0-913c-5759062e76c0; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.008 238945 DEBUG nova.network.neutron [req-1d3fbfc9-ff2a-4b9d-a9df-ded60dcd3206 req-403999b8-7e24-457b-9036-d2a5c9d7cdd5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.043 238945 DEBUG nova.compute.manager [req-1d3fbfc9-ff2a-4b9d-a9df-ded60dcd3206 req-403999b8-7e24-457b-9036-d2a5c9d7cdd5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Detach interface failed, port_id=b18543f0-85cc-4cd0-913c-5759062e76c0, reason: Instance c9010b63-5eae-497c-ace9-dc8788805086 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.207 238945 DEBUG nova.network.neutron [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updated VIF entry in instance network info cache for port e0d38998-b28f-4059-8b31-d26feeb41c76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.207 238945 DEBUG nova.network.neutron [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [{"id": "e0d38998-b28f-4059-8b31-d26feeb41c76", "address": "fa:16:3e:7a:e3:79", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0d38998-b2", "ovs_interfaceid": "e0d38998-b28f-4059-8b31-d26feeb41c76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b18543f0-85cc-4cd0-913c-5759062e76c0", "address": "fa:16:3e:3b:82:14", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:8214", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb18543f0-85", "ovs_interfaceid": "b18543f0-85cc-4cd0-913c-5759062e76c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.228 238945 DEBUG oslo_concurrency.lockutils [req-e5e3acdc-8233-48ba-a3d6-4e3eff860892 req-ae975ce0-bc96-4528-865e-b500fd89a1ba 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c9010b63-5eae-497c-ace9-dc8788805086" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.258 238945 DEBUG nova.network.neutron [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.273 238945 INFO nova.compute.manager [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Took 1.57 seconds to deallocate network for instance.#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.330 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:15:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3801404889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.410 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.417 238945 DEBUG nova.compute.provider_tree [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.430 238945 DEBUG nova.scheduler.client.report [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.462 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.463 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.465 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.536 238945 DEBUG oslo_concurrency.processutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.613 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:52.653 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8:0:1:f816:3eff:fe83:f593'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8::f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:52.654 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated#033[00m
Jan 27 09:15:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:52.655 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b89b739d-0901-41ef-b947-5f41e390c219, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:15:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:52.656 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[995f4fdc-a5a1-464a-be67-a1cdc80455e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.657 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.658 238945 DEBUG nova.network.neutron [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.731 238945 INFO nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.809 238945 DEBUG nova.policy [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:15:52 np0005597378 nova_compute[238941]: 2026-01-27 14:15:52.822 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:15:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 12 KiB/s wr, 2 op/s
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.051 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.053 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.054 238945 INFO nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Creating image(s)#033[00m
Jan 27 09:15:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:15:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/335155750' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.082 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.109 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.130 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.134 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.166 238945 DEBUG oslo_concurrency.processutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.173 238945 DEBUG nova.compute.provider_tree [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.206 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.206 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.207 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.207 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.230 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.234 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.271 238945 DEBUG nova.scheduler.client.report [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.445 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.485 238945 INFO nova.scheduler.client.report [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance c9010b63-5eae-497c-ace9-dc8788805086#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.502 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.552 238945 DEBUG oslo_concurrency.lockutils [None req-3b6d4138-ce22-40f9-b97e-198c861a7b07 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.558 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.630 238945 DEBUG nova.objects.instance [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.655 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.656 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Ensure instance console log exists: /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.656 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.658 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.658 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.967 238945 DEBUG nova.compute.manager [req-901ad6cc-befd-4021-ba99-d4282446e80f req-b8f2fdd6-0510-4199-a903-191c014d7bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.968 238945 DEBUG oslo_concurrency.lockutils [req-901ad6cc-befd-4021-ba99-d4282446e80f req-b8f2fdd6-0510-4199-a903-191c014d7bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c9010b63-5eae-497c-ace9-dc8788805086-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.968 238945 DEBUG oslo_concurrency.lockutils [req-901ad6cc-befd-4021-ba99-d4282446e80f req-b8f2fdd6-0510-4199-a903-191c014d7bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.968 238945 DEBUG oslo_concurrency.lockutils [req-901ad6cc-befd-4021-ba99-d4282446e80f req-b8f2fdd6-0510-4199-a903-191c014d7bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c9010b63-5eae-497c-ace9-dc8788805086-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.969 238945 DEBUG nova.compute.manager [req-901ad6cc-befd-4021-ba99-d4282446e80f req-b8f2fdd6-0510-4199-a903-191c014d7bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] No waiting events found dispatching network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:15:53 np0005597378 nova_compute[238941]: 2026-01-27 14:15:53.970 238945 WARNING nova.compute.manager [req-901ad6cc-befd-4021-ba99-d4282446e80f req-b8f2fdd6-0510-4199-a903-191c014d7bc2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received unexpected event network-vif-plugged-b18543f0-85cc-4cd0-913c-5759062e76c0 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:15:54 np0005597378 nova_compute[238941]: 2026-01-27 14:15:54.097 238945 DEBUG nova.compute.manager [req-f8f944b7-f5d8-44b6-9158-be5690faf041 req-3fa43336-24fa-4d10-a9b6-4eeebbbea610 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Received event network-vif-deleted-e0d38998-b28f-4059-8b31-d26feeb41c76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:54 np0005597378 nova_compute[238941]: 2026-01-27 14:15:54.625 238945 DEBUG nova.network.neutron [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Successfully created port: 1d99d15e-516c-4957-8a1e-0e818b4990cc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:15:54 np0005597378 nova_compute[238941]: 2026-01-27 14:15:54.963 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 166 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 75 KiB/s wr, 42 op/s
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.573 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.573 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.574 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.574 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.574 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.575 238945 INFO nova.compute.manager [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Terminating instance#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.576 238945 DEBUG nova.compute.manager [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:15:55 np0005597378 kernel: tapd98527e5-88 (unregistering): left promiscuous mode
Jan 27 09:15:55 np0005597378 NetworkManager[48904]: <info>  [1769523355.6263] device (tapd98527e5-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.637 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:55Z|01260|binding|INFO|Releasing lport d98527e5-8812-43b6-957e-7529c80c2873 from this chassis (sb_readonly=0)
Jan 27 09:15:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:55Z|01261|binding|INFO|Setting lport d98527e5-8812-43b6-957e-7529c80c2873 down in Southbound
Jan 27 09:15:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:55Z|01262|binding|INFO|Removing iface tapd98527e5-88 ovn-installed in OVS
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.641 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.647 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:dc:ff 10.100.0.8'], port_security=['fa:16:3e:78:dc:ff 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-107e1e32-614b-4ab8-bbad-b8ada050804e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c2bb1b0-9703-440f-a697-1b5346ed2fe2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d98527e5-8812-43b6-957e-7529c80c2873) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.649 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d98527e5-8812-43b6-957e-7529c80c2873 in datapath 107e1e32-614b-4ab8-bbad-b8ada050804e unbound from our chassis#033[00m
Jan 27 09:15:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.650 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 107e1e32-614b-4ab8-bbad-b8ada050804e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:15:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.651 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e881bf3d-96c2-4656-9f4f-73025dfaf67e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.651 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e namespace which is not needed anymore#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.653 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:55 np0005597378 kernel: tap76b015b5-67 (unregistering): left promiscuous mode
Jan 27 09:15:55 np0005597378 NetworkManager[48904]: <info>  [1769523355.6698] device (tap76b015b5-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.671 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:55Z|01263|binding|INFO|Releasing lport 76b015b5-672a-451a-8d3a-e6c7459987af from this chassis (sb_readonly=0)
Jan 27 09:15:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:55Z|01264|binding|INFO|Setting lport 76b015b5-672a-451a-8d3a-e6c7459987af down in Southbound
Jan 27 09:15:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:15:55Z|01265|binding|INFO|Removing iface tap76b015b5-67 ovn-installed in OVS
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.682 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.694 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:55 np0005597378 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Deactivated successfully.
Jan 27 09:15:55 np0005597378 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Consumed 15.772s CPU time.
Jan 27 09:15:55 np0005597378 systemd-machined[207425]: Machine qemu-150-instance-00000076 terminated.
Jan 27 09:15:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.768 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:e0:5c 2001:db8:0:1:f816:3eff:fe1d:e05c 2001:db8::f816:3eff:fe1d:e05c'], port_security=['fa:16:3e:1d:e0:5c 2001:db8:0:1:f816:3eff:fe1d:e05c 2001:db8::f816:3eff:fe1d:e05c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe1d:e05c/64 2001:db8::f816:3eff:fe1d:e05c/64', 'neutron:device_id': 'ffbbdbe0-9dc8-46b2-9492-e5d63351a47f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2539952-bab4-4694-909b-dbdd2d64b450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '268332ab-f289-4ec1-a982-e71e7edae5df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a89e419-fc51-4e3e-9f6e-6978eb8bc060, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=76b015b5-672a-451a-8d3a-e6c7459987af) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:15:55 np0005597378 NetworkManager[48904]: <info>  [1769523355.7999] manager: (tapd98527e5-88): new Tun device (/org/freedesktop/NetworkManager/Devices/516)
Jan 27 09:15:55 np0005597378 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [NOTICE]   (346710) : haproxy version is 2.8.14-c23fe91
Jan 27 09:15:55 np0005597378 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [NOTICE]   (346710) : path to executable is /usr/sbin/haproxy
Jan 27 09:15:55 np0005597378 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [WARNING]  (346710) : Exiting Master process...
Jan 27 09:15:55 np0005597378 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [WARNING]  (346710) : Exiting Master process...
Jan 27 09:15:55 np0005597378 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [ALERT]    (346710) : Current worker (346712) exited with code 143 (Terminated)
Jan 27 09:15:55 np0005597378 neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e[346706]: [WARNING]  (346710) : All workers exited. Exiting... (0)
Jan 27 09:15:55 np0005597378 NetworkManager[48904]: <info>  [1769523355.8121] manager: (tap76b015b5-67): new Tun device (/org/freedesktop/NetworkManager/Devices/517)
Jan 27 09:15:55 np0005597378 systemd[1]: libpod-df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37.scope: Deactivated successfully.
Jan 27 09:15:55 np0005597378 podman[349924]: 2026-01-27 14:15:55.819684617 +0000 UTC m=+0.059489439 container died df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.828 238945 INFO nova.virt.libvirt.driver [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Instance destroyed successfully.#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.829 238945 DEBUG nova.objects.instance [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid ffbbdbe0-9dc8-46b2-9492-e5d63351a47f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.848 238945 DEBUG nova.virt.libvirt.vif [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-342597412',display_name='tempest-TestGettingAddress-server-342597412',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-342597412',id=118,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:14:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5swdz93a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:14:41Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ffbbdbe0-9dc8-46b2-9492-e5d63351a47f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.848 238945 DEBUG nova.network.os_vif_util [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.849 238945 DEBUG nova.network.os_vif_util [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.849 238945 DEBUG os_vif [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.851 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.851 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd98527e5-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37-userdata-shm.mount: Deactivated successfully.
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:15:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cdffd4ed78cd708ae200ae801f9f3926a40989c5dc91892312c32c1cbf33b62c-merged.mount: Deactivated successfully.
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.866 238945 INFO os_vif [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:dc:ff,bridge_name='br-int',has_traffic_filtering=True,id=d98527e5-8812-43b6-957e-7529c80c2873,network=Network(107e1e32-614b-4ab8-bbad-b8ada050804e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd98527e5-88')#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.867 238945 DEBUG nova.virt.libvirt.vif [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-342597412',display_name='tempest-TestGettingAddress-server-342597412',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-342597412',id=118,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCoj5qS4KPlO9x3dx2NnROfd7/x58gUa/DRDPJ5PRUKx3+OHCgRtrH31FZPoiNQUXaKOfD6/RlfUNgoLjVQ2B5LWAPtrzvDxIUfFwQBCuOfK+jnEDsN4WP8jSfcSF//B+w==',key_name='tempest-TestGettingAddress-31617790',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:14:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5swdz93a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:14:41Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ffbbdbe0-9dc8-46b2-9492-e5d63351a47f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.867 238945 DEBUG nova.network.os_vif_util [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.868 238945 DEBUG nova.network.os_vif_util [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.868 238945 DEBUG os_vif [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.869 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.869 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76b015b5-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:55 np0005597378 podman[349924]: 2026-01-27 14:15:55.872457076 +0000 UTC m=+0.112261898 container cleanup df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.873 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:15:55 np0005597378 podman[349923]: 2026-01-27 14:15:55.874999433 +0000 UTC m=+0.110496520 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.875 238945 INFO os_vif [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e0:5c,bridge_name='br-int',has_traffic_filtering=True,id=76b015b5-672a-451a-8d3a-e6c7459987af,network=Network(f2539952-bab4-4694-909b-dbdd2d64b450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76b015b5-67')#033[00m
Jan 27 09:15:55 np0005597378 systemd[1]: libpod-conmon-df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37.scope: Deactivated successfully.
Jan 27 09:15:55 np0005597378 podman[349999]: 2026-01-27 14:15:55.96030892 +0000 UTC m=+0.065217761 container remove df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 09:15:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba66b52-b856-4c84-9517-8b5788501631]: (4, ('Tue Jan 27 02:15:55 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e (df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37)\ndf0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37\nTue Jan 27 02:15:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e (df0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37)\ndf0e9b61d174e5c5ed408bcad97e657772c3f5ecf80fc9f7f915cf20c2fb9b37\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.967 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[107a9448-7cf0-4be2-ae8d-87e13136b49f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.969 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap107e1e32-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:55 np0005597378 kernel: tap107e1e32-60: left promiscuous mode
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.972 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.983 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:55 np0005597378 nova_compute[238941]: 2026-01-27 14:15:55.986 238945 DEBUG nova.network.neutron [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Successfully updated port: 1d99d15e-516c-4957-8a1e-0e818b4990cc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:15:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:55.986 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[482cf679-ee06-4a9f-973b-4174367acd67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.000 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9627b566-f83f-4d03-9800-ce3389c3c0b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.001 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7c31afc2-ee60-49ab-887e-71941cfd3188]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.017 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e918927-a019-44b0-9e0b-282882582be7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591976, 'reachable_time': 35380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350022, 'error': None, 'target': 'ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.020 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-107e1e32-614b-4ab8-bbad-b8ada050804e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.020 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a810c048-648d-48be-9e86-93f05a66b4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:56 np0005597378 systemd[1]: run-netns-ovnmeta\x2d107e1e32\x2d614b\x2d4ab8\x2dbbad\x2db8ada050804e.mount: Deactivated successfully.
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.022 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 76b015b5-672a-451a-8d3a-e6c7459987af in datapath f2539952-bab4-4694-909b-dbdd2d64b450 unbound from our chassis#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.024 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f2539952-bab4-4694-909b-dbdd2d64b450, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.025 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4e555d7e-2a91-4636-938a-67b759945f6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.025 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450 namespace which is not needed anymore#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.151 238945 INFO nova.virt.libvirt.driver [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Deleting instance files /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_del#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.152 238945 INFO nova.virt.libvirt.driver [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Deletion of /var/lib/nova/instances/ffbbdbe0-9dc8-46b2-9492-e5d63351a47f_del complete#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.155 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.155 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.155 238945 DEBUG nova.network.neutron [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:15:56 np0005597378 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [NOTICE]   (346781) : haproxy version is 2.8.14-c23fe91
Jan 27 09:15:56 np0005597378 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [NOTICE]   (346781) : path to executable is /usr/sbin/haproxy
Jan 27 09:15:56 np0005597378 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [WARNING]  (346781) : Exiting Master process...
Jan 27 09:15:56 np0005597378 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [WARNING]  (346781) : Exiting Master process...
Jan 27 09:15:56 np0005597378 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [ALERT]    (346781) : Current worker (346783) exited with code 143 (Terminated)
Jan 27 09:15:56 np0005597378 neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450[346777]: [WARNING]  (346781) : All workers exited. Exiting... (0)
Jan 27 09:15:56 np0005597378 systemd[1]: libpod-cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3.scope: Deactivated successfully.
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.164 238945 DEBUG nova.compute.manager [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-changed-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.164 238945 DEBUG nova.compute.manager [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing instance network info cache due to event network-changed-d98527e5-8812-43b6-957e-7529c80c2873. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.164 238945 DEBUG oslo_concurrency.lockutils [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.164 238945 DEBUG oslo_concurrency.lockutils [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.165 238945 DEBUG nova.network.neutron [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Refreshing network info cache for port d98527e5-8812-43b6-957e-7529c80c2873 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:15:56 np0005597378 podman[350041]: 2026-01-27 14:15:56.170986124 +0000 UTC m=+0.047084328 container died cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 09:15:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3-userdata-shm.mount: Deactivated successfully.
Jan 27 09:15:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0e50b76deb35f09fe4e76fd932ccfbf2ba9078b715270b172138bed3ca5fc986-merged.mount: Deactivated successfully.
Jan 27 09:15:56 np0005597378 podman[350041]: 2026-01-27 14:15:56.205503015 +0000 UTC m=+0.081601209 container cleanup cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 09:15:56 np0005597378 systemd[1]: libpod-conmon-cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3.scope: Deactivated successfully.
Jan 27 09:15:56 np0005597378 podman[350071]: 2026-01-27 14:15:56.267622203 +0000 UTC m=+0.043768949 container remove cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.275 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7fca1d-9bfc-4984-b8be-3d1f5c3c7332]: (4, ('Tue Jan 27 02:15:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450 (cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3)\ncb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3\nTue Jan 27 02:15:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450 (cb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3)\ncb1c4e1ead65aff3a4282facba19c7b5a264d0a195cfc1bbe367adaf3b7750d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.277 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6df6c43f-c1b2-47ae-a9d6-8d55cccb6c69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.278 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2539952-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.279 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:56 np0005597378 kernel: tapf2539952-b0: left promiscuous mode
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.292 238945 DEBUG nova.compute.manager [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-unplugged-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.293 238945 DEBUG oslo_concurrency.lockutils [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.293 238945 DEBUG oslo_concurrency.lockutils [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.293 238945 DEBUG oslo_concurrency.lockutils [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.293 238945 DEBUG nova.compute.manager [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] No waiting events found dispatching network-vif-unplugged-d98527e5-8812-43b6-957e-7529c80c2873 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.294 238945 DEBUG nova.compute.manager [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-unplugged-d98527e5-8812-43b6-957e-7529c80c2873 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.294 238945 DEBUG nova.compute.manager [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-changed-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.294 238945 DEBUG nova.compute.manager [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing instance network info cache due to event network-changed-1d99d15e-516c-4957-8a1e-0e818b4990cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.294 238945 DEBUG oslo_concurrency.lockutils [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.297 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.300 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0ca32b-790a-4ee9-b17c-dd5bd55d6fdf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.318 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d770ad52-0de3-4091-8233-520aa2383586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa8cd5c-25c0-4fb1-8670-54f2f1ae75e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.319 238945 INFO nova.compute.manager [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.320 238945 DEBUG oslo.service.loopingcall [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.320 238945 DEBUG nova.compute.manager [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.320 238945 DEBUG nova.network.neutron [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.339 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4099e6-e308-4e7b-9e8b-f1b8a9d62836]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592061, 'reachable_time': 32112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350086, 'error': None, 'target': 'ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.341 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f2539952-bab4-4694-909b-dbdd2d64b450 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:15:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:15:56.342 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[042bb48d-0de9-4d77-bed7-dddbffa7867a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:15:56 np0005597378 nova_compute[238941]: 2026-01-27 14:15:56.620 238945 DEBUG nova.network.neutron [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:15:56 np0005597378 systemd[1]: run-netns-ovnmeta\x2df2539952\x2dbab4\x2d4694\x2d909b\x2ddbdd2d64b450.mount: Deactivated successfully.
Jan 27 09:15:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 152 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 967 KiB/s wr, 49 op/s
Jan 27 09:15:57 np0005597378 nova_compute[238941]: 2026-01-27 14:15:57.614 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:57 np0005597378 podman[350087]: 2026-01-27 14:15:57.749357423 +0000 UTC m=+0.082367450 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, managed_by=edpm_ansible)
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.163 238945 DEBUG nova.network.neutron [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.252 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.252 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Instance network_info: |[{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.253 238945 DEBUG oslo_concurrency.lockutils [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.253 238945 DEBUG nova.network.neutron [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing network info cache for port 1d99d15e-516c-4957-8a1e-0e818b4990cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.256 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Start _get_guest_xml network_info=[{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.259 238945 WARNING nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.266 238945 DEBUG nova.virt.libvirt.host [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.266 238945 DEBUG nova.virt.libvirt.host [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.269 238945 DEBUG nova.virt.libvirt.host [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.269 238945 DEBUG nova.virt.libvirt.host [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.269 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.270 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.270 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.270 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.270 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.270 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.271 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.271 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.271 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.271 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.271 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.272 238945 DEBUG nova.virt.hardware [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.274 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.437 238945 DEBUG nova.compute.manager [req-be6b6c92-1c7c-45da-b99d-636f9a45cd85 req-ebbc0f52-3421-4da7-b566-26045a85c99d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.437 238945 DEBUG oslo_concurrency.lockutils [req-be6b6c92-1c7c-45da-b99d-636f9a45cd85 req-ebbc0f52-3421-4da7-b566-26045a85c99d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.438 238945 DEBUG oslo_concurrency.lockutils [req-be6b6c92-1c7c-45da-b99d-636f9a45cd85 req-ebbc0f52-3421-4da7-b566-26045a85c99d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.438 238945 DEBUG oslo_concurrency.lockutils [req-be6b6c92-1c7c-45da-b99d-636f9a45cd85 req-ebbc0f52-3421-4da7-b566-26045a85c99d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.438 238945 DEBUG nova.compute.manager [req-be6b6c92-1c7c-45da-b99d-636f9a45cd85 req-ebbc0f52-3421-4da7-b566-26045a85c99d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] No waiting events found dispatching network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.438 238945 WARNING nova.compute.manager [req-be6b6c92-1c7c-45da-b99d-636f9a45cd85 req-ebbc0f52-3421-4da7-b566-26045a85c99d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received unexpected event network-vif-plugged-d98527e5-8812-43b6-957e-7529c80c2873 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.521 238945 DEBUG nova.compute.manager [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 DEBUG oslo_concurrency.lockutils [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 DEBUG oslo_concurrency.lockutils [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 DEBUG oslo_concurrency.lockutils [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 DEBUG nova.compute.manager [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] No waiting events found dispatching network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 WARNING nova.compute.manager [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received unexpected event network-vif-plugged-76b015b5-672a-451a-8d3a-e6c7459987af for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 DEBUG nova.compute.manager [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-deleted-76b015b5-672a-451a-8d3a-e6c7459987af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.522 238945 INFO nova.compute.manager [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Neutron deleted interface 76b015b5-672a-451a-8d3a-e6c7459987af; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.523 238945 DEBUG nova.network.neutron [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.584 238945 DEBUG nova.network.neutron [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updated VIF entry in instance network info cache for port d98527e5-8812-43b6-957e-7529c80c2873. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.585 238945 DEBUG nova.network.neutron [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [{"id": "d98527e5-8812-43b6-957e-7529c80c2873", "address": "fa:16:3e:78:dc:ff", "network": {"id": "107e1e32-614b-4ab8-bbad-b8ada050804e", "bridge": "br-int", "label": "tempest-network-smoke--849954697", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd98527e5-88", "ovs_interfaceid": "d98527e5-8812-43b6-957e-7529c80c2873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "76b015b5-672a-451a-8d3a-e6c7459987af", "address": "fa:16:3e:1d:e0:5c", "network": {"id": "f2539952-bab4-4694-909b-dbdd2d64b450", "bridge": "br-int", "label": "tempest-network-smoke--448892073", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": 
[], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1d:e05c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76b015b5-67", "ovs_interfaceid": "76b015b5-672a-451a-8d3a-e6c7459987af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.590 238945 DEBUG nova.compute.manager [req-810e1ef1-41dd-4b12-becd-cf4992e5e18a req-f873040e-b7f5-44eb-b9bc-cc8f2dd3cb5a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Detach interface failed, port_id=76b015b5-672a-451a-8d3a-e6c7459987af, reason: Instance ffbbdbe0-9dc8-46b2-9492-e5d63351a47f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.757 238945 DEBUG oslo_concurrency.lockutils [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.757 238945 DEBUG nova.compute.manager [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-unplugged-76b015b5-672a-451a-8d3a-e6c7459987af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.757 238945 DEBUG oslo_concurrency.lockutils [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.758 238945 DEBUG oslo_concurrency.lockutils [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.758 238945 DEBUG oslo_concurrency.lockutils [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.758 238945 DEBUG nova.compute.manager [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] No waiting events found dispatching network-vif-unplugged-76b015b5-672a-451a-8d3a-e6c7459987af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.758 238945 DEBUG nova.compute.manager [req-f237856f-ce1c-4a0a-88ac-f053b3649173 req-529f28df-7800-4a51-8e2d-15db4d406372 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-unplugged-76b015b5-672a-451a-8d3a-e6c7459987af for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.760 238945 DEBUG nova.network.neutron [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:15:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:15:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/659919202' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.828 238945 INFO nova.compute.manager [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Took 2.51 seconds to deallocate network for instance.#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.839 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.860 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.864 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.902 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.903 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:58 np0005597378 nova_compute[238941]: 2026-01-27 14:15:58.968 238945 DEBUG oslo_concurrency.processutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 127 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 27 09:15:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:15:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1975984702' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.421 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.422 238945 DEBUG nova.virt.libvirt.vif [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:15:52Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.422 238945 DEBUG nova.network.os_vif_util [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.423 238945 DEBUG nova.network.os_vif_util [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.424 238945 DEBUG nova.objects.instance [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.444 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  <uuid>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</uuid>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  <name>instance-00000079</name>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:15:58</nova:creationTime>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:        <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:        <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:        <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <entry name="serial">b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <entry name="uuid">b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:e4:9f:54"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <target dev="tap1d99d15e-51"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log" append="off"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:15:59 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:15:59 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:15:59 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:15:59 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.444 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Preparing to wait for external event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.445 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.445 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.445 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.446 238945 DEBUG nova.virt.libvirt.vif [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:15:52Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.446 238945 DEBUG nova.network.os_vif_util [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.447 238945 DEBUG nova.network.os_vif_util [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.447 238945 DEBUG os_vif [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.448 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.448 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.449 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.451 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.451 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d99d15e-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.452 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d99d15e-51, col_values=(('external_ids', {'iface-id': '1d99d15e-516c-4957-8a1e-0e818b4990cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:9f:54', 'vm-uuid': 'b6d22bc4-3a93-41d9-8495-ef8b33fa64db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:59 np0005597378 NetworkManager[48904]: <info>  [1769523359.4545] manager: (tap1d99d15e-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.456 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.460 238945 INFO os_vif [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51')#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.528 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.529 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.529 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:e4:9f:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.529 238945 INFO nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Using config drive#033[00m
Jan 27 09:15:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:15:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2470686623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:15:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.551 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.560 238945 DEBUG oslo_concurrency.processutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.566 238945 DEBUG nova.compute.provider_tree [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.589 238945 DEBUG nova.scheduler.client.report [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.618 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:15:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4024611782' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:15:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:15:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4024611782' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.654 238945 INFO nova.scheduler.client.report [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance ffbbdbe0-9dc8-46b2-9492-e5d63351a47f#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.728 238945 DEBUG oslo_concurrency.lockutils [None req-98cc8bcd-4815-484c-a234-babe215a97de 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ffbbdbe0-9dc8-46b2-9492-e5d63351a47f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.925 238945 INFO nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Creating config drive at /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/disk.config#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.930 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp56j7dg9x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.967 238945 DEBUG nova.network.neutron [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updated VIF entry in instance network info cache for port 1d99d15e-516c-4957-8a1e-0e818b4990cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:15:59 np0005597378 nova_compute[238941]: 2026-01-27 14:15:59.968 238945 DEBUG nova.network.neutron [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.023 238945 DEBUG oslo_concurrency.lockutils [req-0ddc97f1-62c3-4333-b3fd-e72c7c12439c req-23e498e5-0a46-4665-ac3e-6e94bd185e5b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.073 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:f5:93'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '31', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6313275b-59e8-417a-83af-9be84d69fb75, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60089f87-3962-4179-9e02-98a075c77729) old=Port_Binding(mac=['fa:16:3e:83:f5:93 2001:db8:0:1:f816:3eff:fe83:f593'], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe83:f593/64', 'neutron:device_id': 'ovnmeta-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b89b739d-0901-41ef-b947-5f41e390c219', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0df46629f5fc440f9b410d388f2dd5e8', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.074 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60089f87-3962-4179-9e02-98a075c77729 in datapath b89b739d-0901-41ef-b947-5f41e390c219 updated#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.074 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp56j7dg9x" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.075 154802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b89b739d-0901-41ef-b947-5f41e390c219 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.075 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[09b1cc4e-fc3e-4209-81ad-258fff6ea47b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.098 238945 DEBUG nova.storage.rbd_utils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.101 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/disk.config b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.226 238945 DEBUG oslo_concurrency.processutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/disk.config b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.226 238945 INFO nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Deleting local config drive /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/disk.config because it was imported into RBD.#033[00m
Jan 27 09:16:00 np0005597378 kernel: tap1d99d15e-51: entered promiscuous mode
Jan 27 09:16:00 np0005597378 NetworkManager[48904]: <info>  [1769523360.2734] manager: (tap1d99d15e-51): new Tun device (/org/freedesktop/NetworkManager/Devices/519)
Jan 27 09:16:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:00Z|01266|binding|INFO|Claiming lport 1d99d15e-516c-4957-8a1e-0e818b4990cc for this chassis.
Jan 27 09:16:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:00Z|01267|binding|INFO|1d99d15e-516c-4957-8a1e-0e818b4990cc: Claiming fa:16:3e:e4:9f:54 10.100.0.9
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:00Z|01268|binding|INFO|Setting lport 1d99d15e-516c-4957-8a1e-0e818b4990cc ovn-installed in OVS
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.290 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.292 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:00Z|01269|binding|INFO|Setting lport 1d99d15e-516c-4957-8a1e-0e818b4990cc up in Southbound
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.296 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:9f:54 10.100.0.9'], port_security=['fa:16:3e:e4:9f:54 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6d22bc4-3a93-41d9-8495-ef8b33fa64db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f38de00a-acde-421e-810a-9fcf0a3eab72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=427dbdf1-b775-43da-9d3b-699424d2ae63, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1d99d15e-516c-4957-8a1e-0e818b4990cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.297 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1d99d15e-516c-4957-8a1e-0e818b4990cc in datapath 7cd5e693-867f-488c-9ed9-c443b3e1e05e bound to our chassis#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.298 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7cd5e693-867f-488c-9ed9-c443b3e1e05e#033[00m
Jan 27 09:16:00 np0005597378 systemd-machined[207425]: New machine qemu-153-instance-00000079.
Jan 27 09:16:00 np0005597378 systemd-udevd[350270]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.310 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[28b15183-c533-4778-8ba6-fd2fa2952d58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.311 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7cd5e693-81 in ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.312 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7cd5e693-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.312 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[654c1477-2123-434d-bb3e-3bc2b07bd371]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.313 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[47dc4586-5860-42ea-be98-8edff3c15f34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 NetworkManager[48904]: <info>  [1769523360.3161] device (tap1d99d15e-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:16:00 np0005597378 NetworkManager[48904]: <info>  [1769523360.3168] device (tap1d99d15e-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:16:00 np0005597378 systemd[1]: Started Virtual Machine qemu-153-instance-00000079.
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.324 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9c99d65b-698e-4d88-829a-8c5c1d528c2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.336 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c17014a4-0a5d-4cb9-b0a5-3facf77c30e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.363 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[174aab1d-946a-44fe-b1d3-8afdcabf0934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 systemd-udevd[350273]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.368 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6de11ea2-8a56-486b-9e57-39aa8adaf791]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 NetworkManager[48904]: <info>  [1769523360.3702] manager: (tap7cd5e693-80): new Veth device (/org/freedesktop/NetworkManager/Devices/520)
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.396 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[32ce7f09-60a8-4f0d-9010-d477fbea7035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.400 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[da0c520f-7455-4728-b3a4-6cd5fe26b9e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 NetworkManager[48904]: <info>  [1769523360.4315] device (tap7cd5e693-80): carrier: link connected
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.438 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fecf4802-1f70-4fba-b16b-6e925a01bef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.453 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08fcc1d3-8da3-4793-8096-ab26ee8cfedf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7cd5e693-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:a9:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600005, 'reachable_time': 21019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350303, 'error': None, 'target': 'ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.466 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d891a1-089f-40cc-8c64-543334693608]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:a9aa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600005, 'tstamp': 600005}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350304, 'error': None, 'target': 'ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.483 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9464227d-162d-408a-a3dc-4cdec1d551ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7cd5e693-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:a9:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600005, 'reachable_time': 21019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350305, 'error': None, 'target': 'ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.510 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff85c0d-644a-4747-b7b7-84a9427efc09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.569 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[83282227-847b-43e1-9cd6-3b172ed42bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.570 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cd5e693-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.571 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.571 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7cd5e693-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:00 np0005597378 kernel: tap7cd5e693-80: entered promiscuous mode
Jan 27 09:16:00 np0005597378 NetworkManager[48904]: <info>  [1769523360.5739] manager: (tap7cd5e693-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/521)
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.579 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7cd5e693-80, col_values=(('external_ids', {'iface-id': 'a90e8b56-9d67-44ef-93cd-35087fe7e207'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.580 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:00Z|01270|binding|INFO|Releasing lport a90e8b56-9d67-44ef-93cd-35087fe7e207 from this chassis (sb_readonly=0)
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.581 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.582 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7cd5e693-867f-488c-9ed9-c443b3e1e05e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7cd5e693-867f-488c-9ed9-c443b3e1e05e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.583 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[374da5fe-df4d-4b9e-997b-14e6ab2776b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.584 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-7cd5e693-867f-488c-9ed9-c443b3e1e05e
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/7cd5e693-867f-488c-9ed9-c443b3e1e05e.pid.haproxy
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 7cd5e693-867f-488c-9ed9-c443b3e1e05e
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:16:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:00.585 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'env', 'PROCESS_TAG=haproxy-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7cd5e693-867f-488c-9ed9-c443b3e1e05e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.702 238945 DEBUG nova.compute.manager [req-76b21fa0-05d9-4bf5-a73f-1b246b80feb8 req-c391539b-6339-4511-82ae-4792bc551102 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Received event network-vif-deleted-d98527e5-8812-43b6-957e-7529c80c2873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.741 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523360.7408214, b6d22bc4-3a93-41d9-8495-ef8b33fa64db => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.742 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] VM Started (Lifecycle Event)#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.816 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.819 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523360.741856, b6d22bc4-3a93-41d9-8495-ef8b33fa64db => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.820 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.906 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:16:00 np0005597378 nova_compute[238941]: 2026-01-27 14:16:00.910 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:16:00 np0005597378 podman[350379]: 2026-01-27 14:16:00.929764123 +0000 UTC m=+0.044819718 container create 740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 09:16:00 np0005597378 systemd[1]: Started libpod-conmon-740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678.scope.
Jan 27 09:16:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 27 09:16:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:16:01 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76d68d1644397fd720ae7e313fe1d458e3162880129039d681b3a14a20ca458/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:01 np0005597378 podman[350379]: 2026-01-27 14:16:00.906489641 +0000 UTC m=+0.021545266 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:16:01 np0005597378 podman[350379]: 2026-01-27 14:16:01.014823602 +0000 UTC m=+0.129879197 container init 740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 09:16:01 np0005597378 podman[350379]: 2026-01-27 14:16:01.020323629 +0000 UTC m=+0.135379214 container start 740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:16:01 np0005597378 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [NOTICE]   (350396) : New worker (350398) forked
Jan 27 09:16:01 np0005597378 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [NOTICE]   (350396) : Loading success.
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.173 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.438 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.438 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.438 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.439 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.439 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.868 238945 DEBUG nova.compute.manager [req-e971ea82-3829-47b3-ac40-103467919b62 req-276618d4-d07a-4b23-ab7f-11bff21f97a8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.869 238945 DEBUG oslo_concurrency.lockutils [req-e971ea82-3829-47b3-ac40-103467919b62 req-276618d4-d07a-4b23-ab7f-11bff21f97a8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.870 238945 DEBUG oslo_concurrency.lockutils [req-e971ea82-3829-47b3-ac40-103467919b62 req-276618d4-d07a-4b23-ab7f-11bff21f97a8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.870 238945 DEBUG oslo_concurrency.lockutils [req-e971ea82-3829-47b3-ac40-103467919b62 req-276618d4-d07a-4b23-ab7f-11bff21f97a8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.871 238945 DEBUG nova.compute.manager [req-e971ea82-3829-47b3-ac40-103467919b62 req-276618d4-d07a-4b23-ab7f-11bff21f97a8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Processing event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.872 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.883 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523361.8827348, b6d22bc4-3a93-41d9-8495-ef8b33fa64db => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.883 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.887 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.890 238945 INFO nova.virt.libvirt.driver [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Instance spawned successfully.#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.890 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.905 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.910 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.914 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.915 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.915 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.915 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.916 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.916 238945 DEBUG nova.virt.libvirt.driver [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.945 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.984 238945 INFO nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Took 8.93 seconds to spawn the instance on the hypervisor.
Jan 27 09:16:01 np0005597378 nova_compute[238941]: 2026-01-27 14:16:01.985 238945 DEBUG nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:16:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:16:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2457472977' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.021 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.077 238945 INFO nova.compute.manager [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Took 10.47 seconds to build instance.
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.141 238945 DEBUG oslo_concurrency.lockutils [None req-57686fc3-5b38-4eb9-aa8f-c49c857465e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.142 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.142 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.304 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.305 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3651MB free_disk=59.96677211020142GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.305 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.306 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.494 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.496 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.496 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.616 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:16:02 np0005597378 nova_compute[238941]: 2026-01-27 14:16:02.720 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:16:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 27 09:16:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:16:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2335454737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:16:03 np0005597378 nova_compute[238941]: 2026-01-27 14:16:03.333 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:16:03 np0005597378 nova_compute[238941]: 2026-01-27 14:16:03.338 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:16:03 np0005597378 nova_compute[238941]: 2026-01-27 14:16:03.352 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:16:03 np0005597378 nova_compute[238941]: 2026-01-27 14:16:03.376 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:16:03 np0005597378 nova_compute[238941]: 2026-01-27 14:16:03.377 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:16:03 np0005597378 nova_compute[238941]: 2026-01-27 14:16:03.944 238945 DEBUG nova.compute.manager [req-477d9f9b-123d-47f6-95f6-3371417e7998 req-4e08acc8-6c59-4151-94b4-31871e41e07f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:16:03 np0005597378 nova_compute[238941]: 2026-01-27 14:16:03.945 238945 DEBUG oslo_concurrency.lockutils [req-477d9f9b-123d-47f6-95f6-3371417e7998 req-4e08acc8-6c59-4151-94b4-31871e41e07f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:16:03 np0005597378 nova_compute[238941]: 2026-01-27 14:16:03.945 238945 DEBUG oslo_concurrency.lockutils [req-477d9f9b-123d-47f6-95f6-3371417e7998 req-4e08acc8-6c59-4151-94b4-31871e41e07f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:16:03 np0005597378 nova_compute[238941]: 2026-01-27 14:16:03.945 238945 DEBUG oslo_concurrency.lockutils [req-477d9f9b-123d-47f6-95f6-3371417e7998 req-4e08acc8-6c59-4151-94b4-31871e41e07f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:16:03 np0005597378 nova_compute[238941]: 2026-01-27 14:16:03.945 238945 DEBUG nova.compute.manager [req-477d9f9b-123d-47f6-95f6-3371417e7998 req-4e08acc8-6c59-4151-94b4-31871e41e07f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:16:03 np0005597378 nova_compute[238941]: 2026-01-27 14:16:03.946 238945 WARNING nova.compute.manager [req-477d9f9b-123d-47f6-95f6-3371417e7998 req-4e08acc8-6c59-4151-94b4-31871e41e07f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc for instance with vm_state active and task_state None.
Jan 27 09:16:04 np0005597378 nova_compute[238941]: 2026-01-27 14:16:04.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:16:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:04 np0005597378 nova_compute[238941]: 2026-01-27 14:16:04.922 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523349.921508, c9010b63-5eae-497c-ace9-dc8788805086 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:16:04 np0005597378 nova_compute[238941]: 2026-01-27 14:16:04.923 238945 INFO nova.compute.manager [-] [instance: c9010b63-5eae-497c-ace9-dc8788805086] VM Stopped (Lifecycle Event)
Jan 27 09:16:04 np0005597378 nova_compute[238941]: 2026-01-27 14:16:04.943 238945 DEBUG nova.compute.manager [None req-c7e47057-b4ef-4bdf-bc5f-70257598d1c0 - - - - - -] [instance: c9010b63-5eae-497c-ace9-dc8788805086] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:16:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 27 09:16:05 np0005597378 nova_compute[238941]: 2026-01-27 14:16:05.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:05 np0005597378 nova_compute[238941]: 2026-01-27 14:16:05.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:05 np0005597378 nova_compute[238941]: 2026-01-27 14:16:05.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 09:16:05 np0005597378 nova_compute[238941]: 2026-01-27 14:16:05.590 238945 DEBUG nova.compute.manager [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-changed-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:16:05 np0005597378 nova_compute[238941]: 2026-01-27 14:16:05.591 238945 DEBUG nova.compute.manager [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing instance network info cache due to event network-changed-1d99d15e-516c-4957-8a1e-0e818b4990cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:16:05 np0005597378 nova_compute[238941]: 2026-01-27 14:16:05.591 238945 DEBUG oslo_concurrency.lockutils [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:16:05 np0005597378 nova_compute[238941]: 2026-01-27 14:16:05.592 238945 DEBUG oslo_concurrency.lockutils [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:16:05 np0005597378 nova_compute[238941]: 2026-01-27 14:16:05.592 238945 DEBUG nova.network.neutron [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing network info cache for port 1d99d15e-516c-4957-8a1e-0e818b4990cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 09:16:06 np0005597378 nova_compute[238941]: 2026-01-27 14:16:06.400 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 116 op/s
Jan 27 09:16:07 np0005597378 nova_compute[238941]: 2026-01-27 14:16:07.619 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:16:08 np0005597378 nova_compute[238941]: 2026-01-27 14:16:08.027 238945 DEBUG nova.network.neutron [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updated VIF entry in instance network info cache for port 1d99d15e-516c-4957-8a1e-0e818b4990cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 09:16:08 np0005597378 nova_compute[238941]: 2026-01-27 14:16:08.028 238945 DEBUG nova.network.neutron [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:16:08 np0005597378 nova_compute[238941]: 2026-01-27 14:16:08.048 238945 DEBUG oslo_concurrency.lockutils [req-02e7e52d-3bdf-468d-bbb0-a91ab594f626 req-d4e87d78-24d5-4f44-8881-7a740bbf5a67 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:16:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:08Z|01271|binding|INFO|Releasing lport a90e8b56-9d67-44ef-93cd-35087fe7e207 from this chassis (sb_readonly=0)
Jan 27 09:16:08 np0005597378 nova_compute[238941]: 2026-01-27 14:16:08.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:16:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 878 KiB/s wr, 109 op/s
Jan 27 09:16:09 np0005597378 nova_compute[238941]: 2026-01-27 14:16:09.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:09 np0005597378 nova_compute[238941]: 2026-01-27 14:16:09.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:16:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:10 np0005597378 nova_compute[238941]: 2026-01-27 14:16:10.825 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523355.8244243, ffbbdbe0-9dc8-46b2-9492-e5d63351a47f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:16:10 np0005597378 nova_compute[238941]: 2026-01-27 14:16:10.826 238945 INFO nova.compute.manager [-] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] VM Stopped (Lifecycle Event)
Jan 27 09:16:10 np0005597378 nova_compute[238941]: 2026-01-27 14:16:10.844 238945 DEBUG nova.compute.manager [None req-3979c9dd-2e17-4ff7-ac22-b9735b7e6113 - - - - - -] [instance: ffbbdbe0-9dc8-46b2-9492-e5d63351a47f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:16:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Jan 27 09:16:11 np0005597378 nova_compute[238941]: 2026-01-27 14:16:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:11 np0005597378 nova_compute[238941]: 2026-01-27 14:16:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:16:11 np0005597378 nova_compute[238941]: 2026-01-27 14:16:11.414 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:16:11 np0005597378 nova_compute[238941]: 2026-01-27 14:16:11.415 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:11 np0005597378 nova_compute[238941]: 2026-01-27 14:16:11.415 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 09:16:11 np0005597378 nova_compute[238941]: 2026-01-27 14:16:11.427 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 09:16:12 np0005597378 nova_compute[238941]: 2026-01-27 14:16:12.620 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:16:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Jan 27 09:16:14 np0005597378 nova_compute[238941]: 2026-01-27 14:16:14.394 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:14 np0005597378 nova_compute[238941]: 2026-01-27 14:16:14.461 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:16:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:14 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:14Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:9f:54 10.100.0.9
Jan 27 09:16:14 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:14Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:9f:54 10.100.0.9
Jan 27 09:16:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 102 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 85 op/s
Jan 27 09:16:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 113 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 69 op/s
Jan 27 09:16:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:16:17
Jan 27 09:16:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:16:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:16:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root', 'volumes', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 27 09:16:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:16:17 np0005597378 nova_compute[238941]: 2026-01-27 14:16:17.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:16:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:16:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:16:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:16:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:16:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:16:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:16:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:16:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:16:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:16:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:16:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:16:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:16:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:16:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:16:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:16:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:16:18 np0005597378 nova_compute[238941]: 2026-01-27 14:16:18.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:16:18 np0005597378 nova_compute[238941]: 2026-01-27 14:16:18.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:16:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 118 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 27 09:16:19 np0005597378 nova_compute[238941]: 2026-01-27 14:16:19.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:20 np0005597378 nova_compute[238941]: 2026-01-27 14:16:20.361 238945 INFO nova.compute.manager [None req-4c7c7a6e-7234-4096-9e11-be36f8b3396c 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Get console output#033[00m
Jan 27 09:16:20 np0005597378 nova_compute[238941]: 2026-01-27 14:16:20.367 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:16:20 np0005597378 nova_compute[238941]: 2026-01-27 14:16:20.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 09:16:21 np0005597378 nova_compute[238941]: 2026-01-27 14:16:21.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:16:22 np0005597378 nova_compute[238941]: 2026-01-27 14:16:22.626 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 09:16:23 np0005597378 nova_compute[238941]: 2026-01-27 14:16:23.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:16:23 np0005597378 nova_compute[238941]: 2026-01-27 14:16:23.996 238945 DEBUG oslo_concurrency.lockutils [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "interface-b6d22bc4-3a93-41d9-8495-ef8b33fa64db-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:23 np0005597378 nova_compute[238941]: 2026-01-27 14:16:23.996 238945 DEBUG oslo_concurrency.lockutils [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-b6d22bc4-3a93-41d9-8495-ef8b33fa64db-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:23 np0005597378 nova_compute[238941]: 2026-01-27 14:16:23.997 238945 DEBUG nova.objects.instance [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'flavor' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:16:24 np0005597378 nova_compute[238941]: 2026-01-27 14:16:24.468 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:24 np0005597378 nova_compute[238941]: 2026-01-27 14:16:24.851 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:24 np0005597378 nova_compute[238941]: 2026-01-27 14:16:24.861 238945 DEBUG nova.objects.instance [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_requests' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:16:24 np0005597378 nova_compute[238941]: 2026-01-27 14:16:24.945 238945 DEBUG nova.network.neutron [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:16:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 09:16:25 np0005597378 nova_compute[238941]: 2026-01-27 14:16:25.143 238945 DEBUG nova.policy [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:16:25 np0005597378 nova_compute[238941]: 2026-01-27 14:16:25.666 238945 DEBUG nova.network.neutron [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Successfully created port: 9339637c-8c37-4436-80bb-f794ba43dd1b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:16:26 np0005597378 nova_compute[238941]: 2026-01-27 14:16:26.668 238945 DEBUG nova.network.neutron [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Successfully updated port: 9339637c-8c37-4436-80bb-f794ba43dd1b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:16:26 np0005597378 nova_compute[238941]: 2026-01-27 14:16:26.682 238945 DEBUG oslo_concurrency.lockutils [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:16:26 np0005597378 nova_compute[238941]: 2026-01-27 14:16:26.682 238945 DEBUG oslo_concurrency.lockutils [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:16:26 np0005597378 nova_compute[238941]: 2026-01-27 14:16:26.683 238945 DEBUG nova.network.neutron [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:16:26 np0005597378 podman[350453]: 2026-01-27 14:16:26.720175183 +0000 UTC m=+0.053961601 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 27 09:16:26 np0005597378 nova_compute[238941]: 2026-01-27 14:16:26.897 238945 DEBUG nova.compute.manager [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-changed-9339637c-8c37-4436-80bb-f794ba43dd1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:26 np0005597378 nova_compute[238941]: 2026-01-27 14:16:26.897 238945 DEBUG nova.compute.manager [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing instance network info cache due to event network-changed-9339637c-8c37-4436-80bb-f794ba43dd1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:16:26 np0005597378 nova_compute[238941]: 2026-01-27 14:16:26.898 238945 DEBUG oslo_concurrency.lockutils [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:16:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 284 KiB/s rd, 1004 KiB/s wr, 47 op/s
Jan 27 09:16:27 np0005597378 nova_compute[238941]: 2026-01-27 14:16:27.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:16:27 np0005597378 nova_compute[238941]: 2026-01-27 14:16:27.627 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007714501595973752 of space, bias 1.0, pg target 0.23143504787921254 quantized to 32 (current 32)
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695982456176226 of space, bias 1.0, pg target 0.20087947368528677 quantized to 32 (current 32)
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0594000781879547e-06 of space, bias 4.0, pg target 0.0012712800938255457 quantized to 16 (current 16)
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:16:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:16:28 np0005597378 nova_compute[238941]: 2026-01-27 14:16:28.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:28 np0005597378 podman[350473]: 2026-01-27 14:16:28.741798802 +0000 UTC m=+0.087184578 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 27 09:16:28 np0005597378 nova_compute[238941]: 2026-01-27 14:16:28.974 238945 DEBUG nova.network.neutron [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:16:28 np0005597378 nova_compute[238941]: 2026-01-27 14:16:28.994 238945 DEBUG oslo_concurrency.lockutils [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:16:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 166 KiB/s wr, 25 op/s
Jan 27 09:16:28 np0005597378 nova_compute[238941]: 2026-01-27 14:16:28.995 238945 DEBUG oslo_concurrency.lockutils [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:16:28 np0005597378 nova_compute[238941]: 2026-01-27 14:16:28.995 238945 DEBUG nova.network.neutron [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing network info cache for port 9339637c-8c37-4436-80bb-f794ba43dd1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:16:28 np0005597378 nova_compute[238941]: 2026-01-27 14:16:28.998 238945 DEBUG nova.virt.libvirt.vif [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:16:28 np0005597378 nova_compute[238941]: 2026-01-27 14:16:28.998 238945 DEBUG nova.network.os_vif_util [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:16:28 np0005597378 nova_compute[238941]: 2026-01-27 14:16:28.999 238945 DEBUG nova.network.os_vif_util [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:28.999 238945 DEBUG os_vif [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.000 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.001 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.001 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.003 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.003 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9339637c-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.004 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9339637c-8c, col_values=(('external_ids', {'iface-id': '9339637c-8c37-4436-80bb-f794ba43dd1b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:9e:c6', 'vm-uuid': 'b6d22bc4-3a93-41d9-8495-ef8b33fa64db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.005 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.006 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:16:29 np0005597378 NetworkManager[48904]: <info>  [1769523389.0066] manager: (tap9339637c-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.015 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.016 238945 INFO os_vif [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c')#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.017 238945 DEBUG nova.virt.libvirt.vif [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.017 238945 DEBUG nova.network.os_vif_util [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.018 238945 DEBUG nova.network.os_vif_util [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.021 238945 DEBUG nova.virt.libvirt.guest [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] attach device xml: <interface type="ethernet">
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:e8:9e:c6"/>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  <target dev="tap9339637c-8c"/>
Jan 27 09:16:29 np0005597378 nova_compute[238941]: </interface>
Jan 27 09:16:29 np0005597378 nova_compute[238941]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 27 09:16:29 np0005597378 kernel: tap9339637c-8c: entered promiscuous mode
Jan 27 09:16:29 np0005597378 NetworkManager[48904]: <info>  [1769523389.0355] manager: (tap9339637c-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/523)
Jan 27 09:16:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:29Z|01272|binding|INFO|Claiming lport 9339637c-8c37-4436-80bb-f794ba43dd1b for this chassis.
Jan 27 09:16:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:29Z|01273|binding|INFO|9339637c-8c37-4436-80bb-f794ba43dd1b: Claiming fa:16:3e:e8:9e:c6 10.100.0.28
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.050 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:9e:c6 10.100.0.28'], port_security=['fa:16:3e:e8:9e:c6 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'b6d22bc4-3a93-41d9-8495-ef8b33fa64db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f360e54-0cca-4ea0-b647-63f71337e04a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9339637c-8c37-4436-80bb-f794ba43dd1b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.052 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9339637c-8c37-4436-80bb-f794ba43dd1b in datapath 04141c0a-d533-4efe-bd72-e2f93f7d8ae4 bound to our chassis#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.053 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 04141c0a-d533-4efe-bd72-e2f93f7d8ae4#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.065 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa300bf-a296-4433-940d-0e6419530f76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.066 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap04141c0a-d1 in ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.068 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap04141c0a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.069 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b19a66be-d490-46ae-9467-0f520b8f0812]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.069 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c24c0a-34c4-4d66-94ac-92f66432f55c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 systemd-udevd[350506]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.082 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[43a983bc-d325-4213-bee2-604bf89e5972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 NetworkManager[48904]: <info>  [1769523389.0876] device (tap9339637c-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:16:29 np0005597378 NetworkManager[48904]: <info>  [1769523389.0881] device (tap9339637c-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.094 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:29Z|01274|binding|INFO|Setting lport 9339637c-8c37-4436-80bb-f794ba43dd1b ovn-installed in OVS
Jan 27 09:16:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:29Z|01275|binding|INFO|Setting lport 9339637c-8c37-4436-80bb-f794ba43dd1b up in Southbound
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.098 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.111 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2a85c6c6-b1f3-4e34-8f3c-5cb67170e2e7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.145 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a704d102-564d-40bf-9987-64b475166d47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 NetworkManager[48904]: <info>  [1769523389.1526] manager: (tap04141c0a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/524)
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.151 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ed31372e-880a-4cc9-ab57-cc9317e545f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.158 238945 DEBUG nova.virt.libvirt.driver [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.160 238945 DEBUG nova.virt.libvirt.driver [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.160 238945 DEBUG nova.virt.libvirt.driver [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:e4:9f:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.160 238945 DEBUG nova.virt.libvirt.driver [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:e8:9e:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:16:29 np0005597378 systemd-udevd[350509]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.185 238945 DEBUG nova.virt.libvirt.guest [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:16:29</nova:creationTime>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:16:29 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:    <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 09:16:29 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:    <nova:port uuid="9339637c-8c37-4436-80bb-f794ba43dd1b">
Jan 27 09:16:29 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:16:29 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:16:29 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:16:29 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.187 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[88ff8cb5-e6ab-46c1-9cd6-fd47d1c7bc20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.190 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[92e313f8-1b3c-49b7-ac09-7f2af5d02274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.209 238945 DEBUG oslo_concurrency.lockutils [None req-bacb6dba-2a95-4088-bcec-d9d09c9d8039 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-b6d22bc4-3a93-41d9-8495-ef8b33fa64db-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:29 np0005597378 NetworkManager[48904]: <info>  [1769523389.2212] device (tap04141c0a-d0): carrier: link connected
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.227 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[bacafba3-25cb-478a-8792-264b47a742f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.245 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa6e74d-56d8-4947-a349-39b8a49e5218]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap04141c0a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:87:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602884, 'reachable_time': 33048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350534, 'error': None, 'target': 'ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.260 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[49058dbc-9588-49f5-abee-a0c6636186e5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe39:87f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602884, 'tstamp': 602884}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350535, 'error': None, 'target': 'ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.275 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d52e4b39-d75b-4c12-8a80-597e33cc2519]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap04141c0a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:87:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602884, 'reachable_time': 33048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350536, 'error': None, 'target': 'ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.307 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a107eec-f78d-4f5d-a1a5-43187f79a26e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.363 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0966be5f-79a7-4936-b20e-aa738fcf5fc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.365 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap04141c0a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.365 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.365 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap04141c0a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.367 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:29 np0005597378 NetworkManager[48904]: <info>  [1769523389.3680] manager: (tap04141c0a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/525)
Jan 27 09:16:29 np0005597378 kernel: tap04141c0a-d0: entered promiscuous mode
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.371 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.371 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap04141c0a-d0, col_values=(('external_ids', {'iface-id': 'ef522e9e-cf8f-452a-be30-2ec3260bc0cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:29Z|01276|binding|INFO|Releasing lport ef522e9e-cf8f-452a-be30-2ec3260bc0cd from this chassis (sb_readonly=0)
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.374 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.374 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/04141c0a-d533-4efe-bd72-e2f93f7d8ae4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/04141c0a-d533-4efe-bd72-e2f93f7d8ae4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.375 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fba484dd-2e98-4ccd-80a0-9a7500fdd828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.377 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-04141c0a-d533-4efe-bd72-e2f93f7d8ae4
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/04141c0a-d533-4efe-bd72-e2f93f7d8ae4.pid.haproxy
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 04141c0a-d533-4efe-bd72-e2f93f7d8ae4
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:16:29 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:29.377 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'env', 'PROCESS_TAG=haproxy-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/04141c0a-d533-4efe-bd72-e2f93f7d8ae4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.476 238945 DEBUG nova.compute.manager [req-d3f148a7-44c4-40ab-a55d-11f1576982db req-947485cf-64e3-4700-bd23-d435b2e9cbbf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.477 238945 DEBUG oslo_concurrency.lockutils [req-d3f148a7-44c4-40ab-a55d-11f1576982db req-947485cf-64e3-4700-bd23-d435b2e9cbbf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.478 238945 DEBUG oslo_concurrency.lockutils [req-d3f148a7-44c4-40ab-a55d-11f1576982db req-947485cf-64e3-4700-bd23-d435b2e9cbbf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.479 238945 DEBUG oslo_concurrency.lockutils [req-d3f148a7-44c4-40ab-a55d-11f1576982db req-947485cf-64e3-4700-bd23-d435b2e9cbbf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.479 238945 DEBUG nova.compute.manager [req-d3f148a7-44c4-40ab-a55d-11f1576982db req-947485cf-64e3-4700-bd23-d435b2e9cbbf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:16:29 np0005597378 nova_compute[238941]: 2026-01-27 14:16:29.480 238945 WARNING nova.compute.manager [req-d3f148a7-44c4-40ab-a55d-11f1576982db req-947485cf-64e3-4700-bd23-d435b2e9cbbf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b for instance with vm_state active and task_state None.#033[00m
Jan 27 09:16:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:29 np0005597378 podman[350568]: 2026-01-27 14:16:29.748666257 +0000 UTC m=+0.026269472 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:16:30 np0005597378 podman[350568]: 2026-01-27 14:16:30.085756665 +0000 UTC m=+0.363359830 container create 7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:16:30 np0005597378 systemd[1]: Started libpod-conmon-7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5.scope.
Jan 27 09:16:30 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:16:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9be2b46cf2742465da24c23fbeaf4ac3762d6f52004496c3cc06d6e856f5f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:30 np0005597378 podman[350568]: 2026-01-27 14:16:30.214225544 +0000 UTC m=+0.491828689 container init 7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:16:30 np0005597378 podman[350568]: 2026-01-27 14:16:30.219439133 +0000 UTC m=+0.497042278 container start 7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:16:30 np0005597378 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [NOTICE]   (350587) : New worker (350589) forked
Jan 27 09:16:30 np0005597378 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [NOTICE]   (350587) : Loading success.
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.826 238945 DEBUG nova.network.neutron [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updated VIF entry in instance network info cache for port 9339637c-8c37-4436-80bb-f794ba43dd1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.828 238945 DEBUG nova.network.neutron [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.831 238945 DEBUG oslo_concurrency.lockutils [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "interface-b6d22bc4-3a93-41d9-8495-ef8b33fa64db-9339637c-8c37-4436-80bb-f794ba43dd1b" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.832 238945 DEBUG oslo_concurrency.lockutils [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-b6d22bc4-3a93-41d9-8495-ef8b33fa64db-9339637c-8c37-4436-80bb-f794ba43dd1b" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.849 238945 DEBUG oslo_concurrency.lockutils [req-09418c0f-848a-42f9-8012-e153c4fd5e53 req-e6fb65d6-08df-49f7-b351-17b4ed54eb83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.859 238945 DEBUG nova.objects.instance [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'flavor' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.891 238945 DEBUG nova.virt.libvirt.vif [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.892 238945 DEBUG nova.network.os_vif_util [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.893 238945 DEBUG nova.network.os_vif_util [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.898 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.901 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.905 238945 DEBUG nova.virt.libvirt.driver [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Attempting to detach device tap9339637c-8c from instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.906 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] detach device xml: <interface type="ethernet">
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:e8:9e:c6"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <target dev="tap9339637c-8c"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]: </interface>
Jan 27 09:16:30 np0005597378 nova_compute[238941]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.920 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.925 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface>not found in domain: <domain type='kvm' id='153'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <name>instance-00000079</name>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <uuid>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</uuid>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:16:29</nova:creationTime>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <nova:port uuid="9339637c-8c37-4436-80bb-f794ba43dd1b">
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:16:30 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <resource>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  </resource>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <entry name='serial'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <entry name='uuid'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk' index='2'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config' index='1'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:e4:9f:54'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target dev='tap1d99d15e-51'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:e8:9e:c6'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target dev='tap9339637c-8c'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='net1'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <source path='/dev/pts/0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      </target>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/0'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <source path='/dev/pts/0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </console>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c611,c820</label>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c611,c820</imagelabel>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:16:30 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:16:30 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.927 238945 INFO nova.virt.libvirt.driver [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully detached device tap9339637c-8c from instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db from the persistent domain config.
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.928 238945 DEBUG nova.virt.libvirt.driver [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] (1/8): Attempting to detach device tap9339637c-8c with device alias net1 from instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.928 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] detach device xml: <interface type="ethernet">
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:e8:9e:c6"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]:  <target dev="tap9339637c-8c"/>
Jan 27 09:16:30 np0005597378 nova_compute[238941]: </interface>
Jan 27 09:16:30 np0005597378 nova_compute[238941]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 27 09:16:30 np0005597378 kernel: tap9339637c-8c (unregistering): left promiscuous mode
Jan 27 09:16:30 np0005597378 NetworkManager[48904]: <info>  [1769523390.9888] device (tap9339637c-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:16:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 16 KiB/s wr, 4 op/s
Jan 27 09:16:30 np0005597378 nova_compute[238941]: 2026-01-27 14:16:30.998 238945 DEBUG nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Received event <DeviceRemovedEvent: 1769523390.9971106, b6d22bc4-3a93-41d9-8495-ef8b33fa64db => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.001 238945 DEBUG nova.virt.libvirt.driver [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Start waiting for the detach event from libvirt for device tap9339637c-8c with device alias net1 for instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.002 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.007 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:16:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:31Z|01277|binding|INFO|Releasing lport 9339637c-8c37-4436-80bb-f794ba43dd1b from this chassis (sb_readonly=0)
Jan 27 09:16:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:31Z|01278|binding|INFO|Setting lport 9339637c-8c37-4436-80bb-f794ba43dd1b down in Southbound
Jan 27 09:16:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:31Z|01279|binding|INFO|Removing iface tap9339637c-8c ovn-installed in OVS
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.009 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface>not found in domain: <domain type='kvm' id='153'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <name>instance-00000079</name>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <uuid>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</uuid>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:16:29</nova:creationTime>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:port uuid="9339637c-8c37-4436-80bb-f794ba43dd1b">
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:16:31 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <resource>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </resource>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <entry name='serial'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <entry name='uuid'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk' index='2'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config' index='1'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:e4:9f:54'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target dev='tap1d99d15e-51'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <source path='/dev/pts/0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      </target>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/0'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <source path='/dev/pts/0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </console>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c611,c820</label>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c611,c820</imagelabel>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:16:31 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:16:31 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.009 238945 INFO nova.virt.libvirt.driver [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully detached device tap9339637c-8c from instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db from the live domain config.#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.010 238945 DEBUG nova.virt.libvirt.vif [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.010 238945 DEBUG nova.network.os_vif_util [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.011 238945 DEBUG nova.network.os_vif_util [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.011 238945 DEBUG os_vif [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.013 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.014 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9339637c-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.015 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.015 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.016 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:9e:c6 10.100.0.28'], port_security=['fa:16:3e:e8:9e:c6 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'b6d22bc4-3a93-41d9-8495-ef8b33fa64db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f360e54-0cca-4ea0-b647-63f71337e04a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9339637c-8c37-4436-80bb-f794ba43dd1b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.018 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9339637c-8c37-4436-80bb-f794ba43dd1b in datapath 04141c0a-d533-4efe-bd72-e2f93f7d8ae4 unbound from our chassis#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.020 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 04141c0a-d533-4efe-bd72-e2f93f7d8ae4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.022 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.021 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52a105bb-7d1b-486c-a2c7-fc37b3f99bfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.023 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4 namespace which is not needed anymore#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.026 238945 INFO os_vif [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c')#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.026 238945 DEBUG nova.virt.libvirt.guest [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:16:31</nova:creationTime>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 09:16:31 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:16:31 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:16:31 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:16:31 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.061 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:e4:46 2001:db8:0:1:f816:3eff:fe9e:e446 2001:db8::f816:3eff:fe9e:e446'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe9e:e446/64 2001:db8::f816:3eff:fe9e:e446/64', 'neutron:device_id': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab7056f0-caf7-445d-b0fc-9f3ae613d37b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7641ffb0-ddda-4391-aadd-cbcdb9365edb) old=Port_Binding(mac=['fa:16:3e:9e:e4:46 2001:db8::f816:3eff:fe9e:e446'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9e:e446/64', 'neutron:device_id': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:16:31 np0005597378 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [NOTICE]   (350587) : haproxy version is 2.8.14-c23fe91
Jan 27 09:16:31 np0005597378 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [NOTICE]   (350587) : path to executable is /usr/sbin/haproxy
Jan 27 09:16:31 np0005597378 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [WARNING]  (350587) : Exiting Master process...
Jan 27 09:16:31 np0005597378 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [ALERT]    (350587) : Current worker (350589) exited with code 143 (Terminated)
Jan 27 09:16:31 np0005597378 neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4[350583]: [WARNING]  (350587) : All workers exited. Exiting... (0)
Jan 27 09:16:31 np0005597378 systemd[1]: libpod-7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5.scope: Deactivated successfully.
Jan 27 09:16:31 np0005597378 podman[350617]: 2026-01-27 14:16:31.200981172 +0000 UTC m=+0.064731220 container died 7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:16:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5-userdata-shm.mount: Deactivated successfully.
Jan 27 09:16:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ce9be2b46cf2742465da24c23fbeaf4ac3762d6f52004496c3cc06d6e856f5f7-merged.mount: Deactivated successfully.
Jan 27 09:16:31 np0005597378 podman[350617]: 2026-01-27 14:16:31.431513384 +0000 UTC m=+0.295263432 container cleanup 7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:16:31 np0005597378 systemd[1]: libpod-conmon-7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5.scope: Deactivated successfully.
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.590 238945 DEBUG nova.compute.manager [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.591 238945 DEBUG oslo_concurrency.lockutils [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.591 238945 DEBUG oslo_concurrency.lockutils [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.592 238945 DEBUG oslo_concurrency.lockutils [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.592 238945 DEBUG nova.compute.manager [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.592 238945 WARNING nova.compute.manager [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b for instance with vm_state active and task_state None.#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.592 238945 DEBUG nova.compute.manager [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-unplugged-9339637c-8c37-4436-80bb-f794ba43dd1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.593 238945 DEBUG oslo_concurrency.lockutils [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.593 238945 DEBUG oslo_concurrency.lockutils [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.593 238945 DEBUG oslo_concurrency.lockutils [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.594 238945 DEBUG nova.compute.manager [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-unplugged-9339637c-8c37-4436-80bb-f794ba43dd1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.594 238945 WARNING nova.compute.manager [req-80fc41c5-b8a4-4a33-a779-de7317c2e727 req-5e64ee8d-9843-46fb-8381-09c5e4e42148 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-unplugged-9339637c-8c37-4436-80bb-f794ba43dd1b for instance with vm_state active and task_state None.#033[00m
Jan 27 09:16:31 np0005597378 podman[350646]: 2026-01-27 14:16:31.618843804 +0000 UTC m=+0.152736927 container remove 7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.627 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[28d5e4c4-7014-4d95-9099-35546c70c826]: (4, ('Tue Jan 27 02:16:31 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4 (7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5)\n7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5\nTue Jan 27 02:16:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4 (7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5)\n7d10641690444642c3a95bb7696f4930b31a79e80dce1737c68140b887d933e5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.629 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad76d24-ef7e-49ba-949a-79341fa5423e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.630 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap04141c0a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:31 np0005597378 kernel: tap04141c0a-d0: left promiscuous mode
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.648 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.651 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3828cc-5632-43a6-a1cd-30e38e7f04d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.667 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a27320c-bc97-4ddc-96c4-a2aa1a57c751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.668 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[debc0503-5ecc-46f1-89d6-627e9b91dd45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.688 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb22960-8d90-4634-9953-78ed572c2588]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602876, 'reachable_time': 19873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350661, 'error': None, 'target': 'ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.691 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-04141c0a-d533-4efe-bd72-e2f93f7d8ae4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.691 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a7603289-0bc7-496a-a542-d3e8107d6eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.692 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 7641ffb0-ddda-4391-aadd-cbcdb9365edb in datapath 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 unbound from our chassis#033[00m
Jan 27 09:16:31 np0005597378 systemd[1]: run-netns-ovnmeta\x2d04141c0a\x2dd533\x2d4efe\x2dbd72\x2de2f93f7d8ae4.mount: Deactivated successfully.
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.693 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:16:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:31.694 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b66227-f67b-46b1-a4a4-7775613f9f99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.963 238945 DEBUG oslo_concurrency.lockutils [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.964 238945 DEBUG oslo_concurrency.lockutils [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:16:31 np0005597378 nova_compute[238941]: 2026-01-27 14:16:31.965 238945 DEBUG nova.network.neutron [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:16:32 np0005597378 nova_compute[238941]: 2026-01-27 14:16:32.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:32Z|01280|binding|INFO|Releasing lport a90e8b56-9d67-44ef-93cd-35087fe7e207 from this chassis (sb_readonly=0)
Jan 27 09:16:32 np0005597378 nova_compute[238941]: 2026-01-27 14:16:32.956 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 13 KiB/s wr, 0 op/s
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.692 238945 DEBUG nova.compute.manager [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.693 238945 DEBUG oslo_concurrency.lockutils [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.693 238945 DEBUG oslo_concurrency.lockutils [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.693 238945 DEBUG oslo_concurrency.lockutils [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.693 238945 DEBUG nova.compute.manager [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.693 238945 WARNING nova.compute.manager [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-plugged-9339637c-8c37-4436-80bb-f794ba43dd1b for instance with vm_state active and task_state None.#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.694 238945 DEBUG nova.compute.manager [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-deleted-9339637c-8c37-4436-80bb-f794ba43dd1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.694 238945 INFO nova.compute.manager [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Neutron deleted interface 9339637c-8c37-4436-80bb-f794ba43dd1b; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.694 238945 DEBUG nova.network.neutron [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.723 238945 DEBUG nova.objects.instance [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'system_metadata' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.755 238945 DEBUG nova.objects.instance [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'flavor' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.779 238945 DEBUG nova.virt.libvirt.vif [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.779 238945 DEBUG nova.network.os_vif_util [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.779 238945 DEBUG nova.network.os_vif_util [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.783 238945 DEBUG nova.virt.libvirt.guest [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.786 238945 DEBUG nova.virt.libvirt.guest [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface>not found in domain: <domain type='kvm' id='153'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <name>instance-00000079</name>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <uuid>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</uuid>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:16:31</nova:creationTime>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:16:33 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <resource>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </resource>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <entry name='serial'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <entry name='uuid'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk' index='2'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config' index='1'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:e4:9f:54'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target dev='tap1d99d15e-51'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <source path='/dev/pts/0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      </target>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/0'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <source path='/dev/pts/0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </console>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c611,c820</label>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c611,c820</imagelabel>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:16:33 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:16:33 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.787 238945 DEBUG nova.virt.libvirt.guest [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.791 238945 DEBUG nova.virt.libvirt.guest [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:e8:9e:c6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9339637c-8c"/></interface>not found in domain: <domain type='kvm' id='153'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <name>instance-00000079</name>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <uuid>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</uuid>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:16:31</nova:creationTime>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:16:33 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <resource>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </resource>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <entry name='serial'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <entry name='uuid'>b6d22bc4-3a93-41d9-8495-ef8b33fa64db</entry>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk' index='2'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_disk.config' index='1'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:e4:9f:54'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target dev='tap1d99d15e-51'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <source path='/dev/pts/0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      </target>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/0'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <source path='/dev/pts/0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db/console.log' append='off'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </console>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c611,c820</label>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c611,c820</imagelabel>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:16:33 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:16:33 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.791 238945 WARNING nova.virt.libvirt.driver [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Detaching interface fa:16:3e:e8:9e:c6 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap9339637c-8c' not found.
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.792 238945 DEBUG nova.virt.libvirt.vif [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.793 238945 DEBUG nova.network.os_vif_util [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "9339637c-8c37-4436-80bb-f794ba43dd1b", "address": "fa:16:3e:e8:9e:c6", "network": {"id": "04141c0a-d533-4efe-bd72-e2f93f7d8ae4", "bridge": "br-int", "label": "tempest-network-smoke--1907219504", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9339637c-8c", "ovs_interfaceid": "9339637c-8c37-4436-80bb-f794ba43dd1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.794 238945 DEBUG nova.network.os_vif_util [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.794 238945 DEBUG os_vif [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.796 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.796 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9339637c-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.797 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.799 238945 INFO os_vif [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:9e:c6,bridge_name='br-int',has_traffic_filtering=True,id=9339637c-8c37-4436-80bb-f794ba43dd1b,network=Network(04141c0a-d533-4efe-bd72-e2f93f7d8ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9339637c-8c')
Jan 27 09:16:33 np0005597378 nova_compute[238941]: 2026-01-27 14:16:33.800 238945 DEBUG nova.virt.libvirt.guest [req-0e8652ad-4b6e-44ef-a267-a282c45ea5ce req-25e408bc-8b66-4d0d-bdaf-cfcd7abd7ad8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-1466908137</nova:name>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:16:33</nova:creationTime>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    <nova:port uuid="1d99d15e-516c-4957-8a1e-0e818b4990cc">
Jan 27 09:16:33 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:16:33 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:16:33 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:16:33 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.048 238945 INFO nova.network.neutron [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Port 9339637c-8c37-4436-80bb-f794ba43dd1b from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.048 238945 DEBUG nova.network.neutron [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [{"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.075 238945 DEBUG oslo_concurrency.lockutils [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.102 238945 DEBUG oslo_concurrency.lockutils [None req-ba0cf75f-5971-4b77-9859-5c386e901293 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-b6d22bc4-3a93-41d9-8495-ef8b33fa64db-9339637c-8c37-4436-80bb-f794ba43dd1b" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.368 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.368 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.369 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.369 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.369 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.370 238945 INFO nova.compute.manager [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Terminating instance#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.371 238945 DEBUG nova.compute.manager [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:16:34 np0005597378 kernel: tap1d99d15e-51 (unregistering): left promiscuous mode
Jan 27 09:16:34 np0005597378 NetworkManager[48904]: <info>  [1769523394.4200] device (tap1d99d15e-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:16:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.430 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:16:34 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:34Z|01281|binding|INFO|Releasing lport 1d99d15e-516c-4957-8a1e-0e818b4990cc from this chassis (sb_readonly=0)
Jan 27 09:16:34 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:34Z|01282|binding|INFO|Setting lport 1d99d15e-516c-4957-8a1e-0e818b4990cc down in Southbound
Jan 27 09:16:34 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:34Z|01283|binding|INFO|Removing iface tap1d99d15e-51 ovn-installed in OVS
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.434 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:16:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:16:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.440 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:9f:54 10.100.0.9'], port_security=['fa:16:3e:e4:9f:54 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6d22bc4-3a93-41d9-8495-ef8b33fa64db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f38de00a-acde-421e-810a-9fcf0a3eab72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=427dbdf1-b775-43da-9d3b-699424d2ae63, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1d99d15e-516c-4957-8a1e-0e818b4990cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.442 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1d99d15e-516c-4957-8a1e-0e818b4990cc in datapath 7cd5e693-867f-488c-9ed9-c443b3e1e05e unbound from our chassis#033[00m
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.443 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7cd5e693-867f-488c-9ed9-c443b3e1e05e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:16:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.444 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05373639-69e0-4ac3-985c-4210aa45d616]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.445 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e namespace which is not needed anymore#033[00m
Jan 27 09:16:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:16:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.449 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:16:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:16:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:16:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:16:34 np0005597378 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000079.scope: Deactivated successfully.
Jan 27 09:16:34 np0005597378 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000079.scope: Consumed 14.557s CPU time.
Jan 27 09:16:34 np0005597378 systemd-machined[207425]: Machine qemu-153-instance-00000079 terminated.
Jan 27 09:16:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:34 np0005597378 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [NOTICE]   (350396) : haproxy version is 2.8.14-c23fe91
Jan 27 09:16:34 np0005597378 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [NOTICE]   (350396) : path to executable is /usr/sbin/haproxy
Jan 27 09:16:34 np0005597378 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [WARNING]  (350396) : Exiting Master process...
Jan 27 09:16:34 np0005597378 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [ALERT]    (350396) : Current worker (350398) exited with code 143 (Terminated)
Jan 27 09:16:34 np0005597378 neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e[350392]: [WARNING]  (350396) : All workers exited. Exiting... (0)
Jan 27 09:16:34 np0005597378 systemd[1]: libpod-740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678.scope: Deactivated successfully.
Jan 27 09:16:34 np0005597378 conmon[350392]: conmon 740ea9c999eaacd6b361 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678.scope/container/memory.events
Jan 27 09:16:34 np0005597378 podman[350791]: 2026-01-27 14:16:34.58706156 +0000 UTC m=+0.049040200 container died 740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.620 238945 INFO nova.virt.libvirt.driver [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Instance destroyed successfully.#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.622 238945 DEBUG nova.objects.instance [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid b6d22bc4-3a93-41d9-8495-ef8b33fa64db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:16:34 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678-userdata-shm.mount: Deactivated successfully.
Jan 27 09:16:34 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a76d68d1644397fd720ae7e313fe1d458e3162880129039d681b3a14a20ca458-merged.mount: Deactivated successfully.
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.639 238945 DEBUG nova.virt.libvirt.vif [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1466908137',display_name='tempest-TestNetworkBasicOps-server-1466908137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1466908137',id=121,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDo+3E7sUJGDrLhiZ4zYws5Pnk1J6gbL19mTH7Ge2hNgajVpaHWH4xm2fwF7DNQ98qylpn1V80k+3AHJsgp9QLZGsa5kvYG1uqWwp2ssQG6pAHwds+TtjtyMeGIKqIT7eg==',key_name='tempest-TestNetworkBasicOps-1043557889',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-b7rxdzgf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=b6d22bc4-3a93-41d9-8495-ef8b33fa64db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.639 238945 DEBUG nova.network.os_vif_util [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "address": "fa:16:3e:e4:9f:54", "network": {"id": "7cd5e693-867f-488c-9ed9-c443b3e1e05e", "bridge": "br-int", "label": "tempest-network-smoke--1459832865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d99d15e-51", "ovs_interfaceid": "1d99d15e-516c-4957-8a1e-0e818b4990cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.640 238945 DEBUG nova.network.os_vif_util [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.640 238945 DEBUG os_vif [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:16:34 np0005597378 podman[350791]: 2026-01-27 14:16:34.641088732 +0000 UTC m=+0.103067372 container cleanup 740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.642 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.643 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d99d15e-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.644 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.645 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.648 238945 INFO os_vif [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:9f:54,bridge_name='br-int',has_traffic_filtering=True,id=1d99d15e-516c-4957-8a1e-0e818b4990cc,network=Network(7cd5e693-867f-488c-9ed9-c443b3e1e05e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d99d15e-51')#033[00m
Jan 27 09:16:34 np0005597378 systemd[1]: libpod-conmon-740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678.scope: Deactivated successfully.
Jan 27 09:16:34 np0005597378 podman[350855]: 2026-01-27 14:16:34.721150649 +0000 UTC m=+0.053910910 container remove 740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.734 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[52f7392a-9642-4f35-b09d-fe11064a47dc]: (4, ('Tue Jan 27 02:16:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e (740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678)\n740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678\nTue Jan 27 02:16:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e (740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678)\n740ea9c999eaacd6b36164507cbc187f8aa08217952f95453ca8024d9a0ae678\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.737 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[86b9199a-947d-446d-83a7-e94891acc1c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.738 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cd5e693-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.740 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:34 np0005597378 kernel: tap7cd5e693-80: left promiscuous mode
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.758 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c7463095-2d2a-41d5-82aa-5585aa182cba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.772 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[faec07e8-a231-4fc7-bfbf-674a6c21f8b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.777 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e1dca3f9-4252-4bf7-b7c2-c7b4e2c03dfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.798 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[86010471-ee63-46d0-b5c9-7610b95727a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599998, 'reachable_time': 29320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350889, 'error': None, 'target': 'ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.801 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7cd5e693-867f-488c-9ed9-c443b3e1e05e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:16:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:34.801 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d64e98-a182-48fa-a7b0-b7c2a2915846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:34 np0005597378 systemd[1]: run-netns-ovnmeta\x2d7cd5e693\x2d867f\x2d488c\x2d9ed9\x2dc443b3e1e05e.mount: Deactivated successfully.
Jan 27 09:16:34 np0005597378 podman[350901]: 2026-01-27 14:16:34.913427461 +0000 UTC m=+0.054167677 container create 6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.963 238945 INFO nova.virt.libvirt.driver [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Deleting instance files /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_del#033[00m
Jan 27 09:16:34 np0005597378 nova_compute[238941]: 2026-01-27 14:16:34.965 238945 INFO nova.virt.libvirt.driver [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Deletion of /var/lib/nova/instances/b6d22bc4-3a93-41d9-8495-ef8b33fa64db_del complete#033[00m
Jan 27 09:16:34 np0005597378 systemd[1]: Started libpod-conmon-6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10.scope.
Jan 27 09:16:34 np0005597378 podman[350901]: 2026-01-27 14:16:34.89016866 +0000 UTC m=+0.030908896 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:16:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 121 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 14 KiB/s wr, 1 op/s
Jan 27 09:16:34 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:16:35 np0005597378 podman[350901]: 2026-01-27 14:16:35.01678284 +0000 UTC m=+0.157523086 container init 6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 09:16:35 np0005597378 podman[350901]: 2026-01-27 14:16:35.024828765 +0000 UTC m=+0.165569001 container start 6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:16:35 np0005597378 podman[350901]: 2026-01-27 14:16:35.028840532 +0000 UTC m=+0.169580768 container attach 6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.030 238945 INFO nova.compute.manager [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.031 238945 DEBUG oslo.service.loopingcall [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.031 238945 DEBUG nova.compute.manager [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:16:35 np0005597378 clever_hofstadter[350917]: 167 167
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.031 238945 DEBUG nova.network.neutron [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:16:35 np0005597378 systemd[1]: libpod-6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10.scope: Deactivated successfully.
Jan 27 09:16:35 np0005597378 podman[350901]: 2026-01-27 14:16:35.032985752 +0000 UTC m=+0.173725988 container died 6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:16:35 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9d86db4c575d45950d92887a33032dfa39a43d81f789cac65888639bf30e36b9-merged.mount: Deactivated successfully.
Jan 27 09:16:35 np0005597378 podman[350901]: 2026-01-27 14:16:35.076567886 +0000 UTC m=+0.217308102 container remove 6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 09:16:35 np0005597378 systemd[1]: libpod-conmon-6e4c821a13a9f9bd98c821ec49c6c79aa1cebf3ad7011133dd8120356764be10.scope: Deactivated successfully.
Jan 27 09:16:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:16:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:16:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:16:35 np0005597378 podman[350941]: 2026-01-27 14:16:35.237878871 +0000 UTC m=+0.040701807 container create b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 09:16:35 np0005597378 systemd[1]: Started libpod-conmon-b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319.scope.
Jan 27 09:16:35 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:16:35 np0005597378 podman[350941]: 2026-01-27 14:16:35.223094936 +0000 UTC m=+0.025917892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:16:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23652a610afb3e992d145cd1b93d019a8b18d2978e5cba228e2abf63e8a76eb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23652a610afb3e992d145cd1b93d019a8b18d2978e5cba228e2abf63e8a76eb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23652a610afb3e992d145cd1b93d019a8b18d2978e5cba228e2abf63e8a76eb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23652a610afb3e992d145cd1b93d019a8b18d2978e5cba228e2abf63e8a76eb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23652a610afb3e992d145cd1b93d019a8b18d2978e5cba228e2abf63e8a76eb9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:35 np0005597378 podman[350941]: 2026-01-27 14:16:35.335162977 +0000 UTC m=+0.137985943 container init b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:16:35 np0005597378 podman[350941]: 2026-01-27 14:16:35.342696989 +0000 UTC m=+0.145519925 container start b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:16:35 np0005597378 podman[350941]: 2026-01-27 14:16:35.346089439 +0000 UTC m=+0.148912375 container attach b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.724 238945 DEBUG nova.network.neutron [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.742 238945 INFO nova.compute.manager [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Took 0.71 seconds to deallocate network for instance.#033[00m
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.793 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.794 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.800 238945 DEBUG nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-changed-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.800 238945 DEBUG nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing instance network info cache due to event network-changed-1d99d15e-516c-4957-8a1e-0e818b4990cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.801 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.801 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.802 238945 DEBUG nova.network.neutron [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Refreshing network info cache for port 1d99d15e-516c-4957-8a1e-0e818b4990cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:16:35 np0005597378 determined_meninsky[350958]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:16:35 np0005597378 determined_meninsky[350958]: --> All data devices are unavailable
Jan 27 09:16:35 np0005597378 systemd[1]: libpod-b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319.scope: Deactivated successfully.
Jan 27 09:16:35 np0005597378 nova_compute[238941]: 2026-01-27 14:16:35.867 238945 DEBUG oslo_concurrency.processutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:16:35 np0005597378 podman[350941]: 2026-01-27 14:16:35.869547801 +0000 UTC m=+0.672370747 container died b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:16:35 np0005597378 systemd[1]: var-lib-containers-storage-overlay-23652a610afb3e992d145cd1b93d019a8b18d2978e5cba228e2abf63e8a76eb9-merged.mount: Deactivated successfully.
Jan 27 09:16:35 np0005597378 podman[350941]: 2026-01-27 14:16:35.921469197 +0000 UTC m=+0.724292143 container remove b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:16:35 np0005597378 systemd[1]: libpod-conmon-b34b9d5cb072eb8d1b152aafcd05cdf778e04300dd0a3c01f0a066bf30188319.scope: Deactivated successfully.
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.003 238945 DEBUG nova.network.neutron [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.316 238945 DEBUG nova.network.neutron [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.329 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b6d22bc4-3a93-41d9-8495-ef8b33fa64db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.329 238945 DEBUG nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-unplugged-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.330 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.330 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.330 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.331 238945 DEBUG nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-unplugged-1d99d15e-516c-4957-8a1e-0e818b4990cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.331 238945 WARNING nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-unplugged-1d99d15e-516c-4957-8a1e-0e818b4990cc for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.331 238945 DEBUG nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.331 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.332 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.332 238945 DEBUG oslo_concurrency.lockutils [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.332 238945 DEBUG nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] No waiting events found dispatching network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.333 238945 WARNING nova.compute.manager [req-b07275ec-c9a6-4c6d-a273-1c3992328c14 req-0c94ae97-2f60-48ad-8590-5708ca8e1b34 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received unexpected event network-vif-plugged-1d99d15e-516c-4957-8a1e-0e818b4990cc for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:16:36 np0005597378 podman[351071]: 2026-01-27 14:16:36.399593229 +0000 UTC m=+0.058621876 container create 148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bohr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 09:16:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:16:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3481324424' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:16:36 np0005597378 systemd[1]: Started libpod-conmon-148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39.scope.
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.444 238945 DEBUG oslo_concurrency.processutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.450 238945 DEBUG nova.compute.provider_tree [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:16:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.470 238945 DEBUG nova.scheduler.client.report [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:16:36 np0005597378 podman[351071]: 2026-01-27 14:16:36.382576924 +0000 UTC m=+0.041605591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:16:36 np0005597378 podman[351071]: 2026-01-27 14:16:36.479688647 +0000 UTC m=+0.138717314 container init 148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 09:16:36 np0005597378 podman[351071]: 2026-01-27 14:16:36.486176129 +0000 UTC m=+0.145204776 container start 148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bohr, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:16:36 np0005597378 podman[351071]: 2026-01-27 14:16:36.489675513 +0000 UTC m=+0.148704190 container attach 148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bohr, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:16:36 np0005597378 ecstatic_bohr[351090]: 167 167
Jan 27 09:16:36 np0005597378 systemd[1]: libpod-148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39.scope: Deactivated successfully.
Jan 27 09:16:36 np0005597378 podman[351071]: 2026-01-27 14:16:36.492276263 +0000 UTC m=+0.151304900 container died 148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.502 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:16:36 np0005597378 systemd[1]: var-lib-containers-storage-overlay-08289c47972c6e87e2a3aedebc68cd1fdca57f3c63efaa2446672650d29a7f32-merged.mount: Deactivated successfully.
Jan 27 09:16:36 np0005597378 podman[351071]: 2026-01-27 14:16:36.528997013 +0000 UTC m=+0.188025660 container remove 148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:16:36 np0005597378 systemd[1]: libpod-conmon-148367c9a49a84350e9f8e9a48c7b9c8790f116391e1b244521979933da02a39.scope: Deactivated successfully.
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.546 238945 INFO nova.scheduler.client.report [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance b6d22bc4-3a93-41d9-8495-ef8b33fa64db
Jan 27 09:16:36 np0005597378 nova_compute[238941]: 2026-01-27 14:16:36.614 238945 DEBUG oslo_concurrency.lockutils [None req-109363a9-b4a2-4de9-a14e-5428564e1e31 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "b6d22bc4-3a93-41d9-8495-ef8b33fa64db" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:16:36 np0005597378 podman[351112]: 2026-01-27 14:16:36.690632076 +0000 UTC m=+0.045351141 container create c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:16:36 np0005597378 systemd[1]: Started libpod-conmon-c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf.scope.
Jan 27 09:16:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:16:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f3e8a701ef6143627e49671e4a6b7e7145fbf47676b2c8fb250d16aba53a66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f3e8a701ef6143627e49671e4a6b7e7145fbf47676b2c8fb250d16aba53a66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f3e8a701ef6143627e49671e4a6b7e7145fbf47676b2c8fb250d16aba53a66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:36 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f3e8a701ef6143627e49671e4a6b7e7145fbf47676b2c8fb250d16aba53a66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:36 np0005597378 podman[351112]: 2026-01-27 14:16:36.762782493 +0000 UTC m=+0.117501558 container init c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:16:36 np0005597378 podman[351112]: 2026-01-27 14:16:36.673303224 +0000 UTC m=+0.028022319 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:16:36 np0005597378 podman[351112]: 2026-01-27 14:16:36.774491715 +0000 UTC m=+0.129210780 container start c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:16:36 np0005597378 podman[351112]: 2026-01-27 14:16:36.777170327 +0000 UTC m=+0.131889412 container attach c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 09:16:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 84 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 14 op/s
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]: {
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:    "0": [
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:        {
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "devices": [
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "/dev/loop3"
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            ],
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_name": "ceph_lv0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_size": "21470642176",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "name": "ceph_lv0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "tags": {
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.cluster_name": "ceph",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.crush_device_class": "",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.encrypted": "0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.objectstore": "bluestore",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.osd_id": "0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.type": "block",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.vdo": "0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.with_tpm": "0"
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            },
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "type": "block",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "vg_name": "ceph_vg0"
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:        }
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:    ],
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:    "1": [
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:        {
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "devices": [
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "/dev/loop4"
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            ],
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_name": "ceph_lv1",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_size": "21470642176",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "name": "ceph_lv1",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "tags": {
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.cluster_name": "ceph",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.crush_device_class": "",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.encrypted": "0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.objectstore": "bluestore",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.osd_id": "1",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.type": "block",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.vdo": "0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.with_tpm": "0"
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            },
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "type": "block",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "vg_name": "ceph_vg1"
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:        }
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:    ],
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:    "2": [
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:        {
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "devices": [
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "/dev/loop5"
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            ],
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_name": "ceph_lv2",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_size": "21470642176",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "name": "ceph_lv2",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "tags": {
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.cluster_name": "ceph",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.crush_device_class": "",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.encrypted": "0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.objectstore": "bluestore",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.osd_id": "2",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.type": "block",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.vdo": "0",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:                "ceph.with_tpm": "0"
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            },
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "type": "block",
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:            "vg_name": "ceph_vg2"
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:        }
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]:    ]
Jan 27 09:16:37 np0005597378 confident_goldberg[351128]: }
Jan 27 09:16:37 np0005597378 systemd[1]: libpod-c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf.scope: Deactivated successfully.
Jan 27 09:16:37 np0005597378 conmon[351128]: conmon c8238af70b99f5aa527d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf.scope/container/memory.events
Jan 27 09:16:37 np0005597378 podman[351112]: 2026-01-27 14:16:37.101223106 +0000 UTC m=+0.455942171 container died c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 09:16:37 np0005597378 nova_compute[238941]: 2026-01-27 14:16:37.121 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:16:37 np0005597378 nova_compute[238941]: 2026-01-27 14:16:37.124 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:16:37 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a3f3e8a701ef6143627e49671e4a6b7e7145fbf47676b2c8fb250d16aba53a66-merged.mount: Deactivated successfully.
Jan 27 09:16:37 np0005597378 podman[351112]: 2026-01-27 14:16:37.153771729 +0000 UTC m=+0.508490794 container remove c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goldberg, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:16:37 np0005597378 nova_compute[238941]: 2026-01-27 14:16:37.159 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 09:16:37 np0005597378 systemd[1]: libpod-conmon-c8238af70b99f5aa527d5bbeedf27278fb9edaf057b2808808b49013440d4cdf.scope: Deactivated successfully.
Jan 27 09:16:37 np0005597378 nova_compute[238941]: 2026-01-27 14:16:37.270 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:16:37 np0005597378 nova_compute[238941]: 2026-01-27 14:16:37.270 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:16:37 np0005597378 nova_compute[238941]: 2026-01-27 14:16:37.278 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 09:16:37 np0005597378 nova_compute[238941]: 2026-01-27 14:16:37.279 238945 INFO nova.compute.claims [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Claim successful on node compute-0.ctlplane.example.com
Jan 27 09:16:37 np0005597378 nova_compute[238941]: 2026-01-27 14:16:37.387 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:16:37 np0005597378 nova_compute[238941]: 2026-01-27 14:16:37.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:16:37 np0005597378 podman[351233]: 2026-01-27 14:16:37.668512948 +0000 UTC m=+0.050357425 container create 4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:16:37 np0005597378 systemd[1]: Started libpod-conmon-4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f.scope.
Jan 27 09:16:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:16:37 np0005597378 podman[351233]: 2026-01-27 14:16:37.651861853 +0000 UTC m=+0.033706360 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:16:37 np0005597378 podman[351233]: 2026-01-27 14:16:37.758709855 +0000 UTC m=+0.140554352 container init 4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 09:16:37 np0005597378 podman[351233]: 2026-01-27 14:16:37.769152194 +0000 UTC m=+0.150996671 container start 4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:16:37 np0005597378 podman[351233]: 2026-01-27 14:16:37.772872603 +0000 UTC m=+0.154717080 container attach 4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:16:37 np0005597378 eloquent_diffie[351249]: 167 167
Jan 27 09:16:37 np0005597378 systemd[1]: libpod-4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f.scope: Deactivated successfully.
Jan 27 09:16:37 np0005597378 podman[351233]: 2026-01-27 14:16:37.779474559 +0000 UTC m=+0.161319036 container died 4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:16:37 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0e5fcd5453f8b0e9e9932bceae51967f34c594e49c7800af2d71bda675ff4c43-merged.mount: Deactivated successfully.
Jan 27 09:16:37 np0005597378 podman[351233]: 2026-01-27 14:16:37.819472378 +0000 UTC m=+0.201316855 container remove 4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:16:37 np0005597378 systemd[1]: libpod-conmon-4baf23a99c394d9d56998a9030b308eb7820cbb3b1c884da0198ce2968ef069f.scope: Deactivated successfully.
Jan 27 09:16:37 np0005597378 nova_compute[238941]: 2026-01-27 14:16:37.889 238945 DEBUG nova.compute.manager [req-20f1092b-d2f6-4511-b677-d33b54845e30 req-30537dec-385e-4684-ba36-f29e5b9bb753 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Received event network-vif-deleted-1d99d15e-516c-4957-8a1e-0e818b4990cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:16:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1060711170' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:16:37 np0005597378 nova_compute[238941]: 2026-01-27 14:16:37.978 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:16:37 np0005597378 nova_compute[238941]: 2026-01-27 14:16:37.986 238945 DEBUG nova.compute.provider_tree [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:16:38 np0005597378 podman[351271]: 2026-01-27 14:16:38.009371456 +0000 UTC m=+0.056835278 container create 6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.040 238945 DEBUG nova.scheduler.client.report [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:16:38 np0005597378 systemd[1]: Started libpod-conmon-6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783.scope.
Jan 27 09:16:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:16:38 np0005597378 podman[351271]: 2026-01-27 14:16:37.990640886 +0000 UTC m=+0.038104748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:16:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda148415fdc6a58dd1fa3a39e0ce5250f13d31932cf20bf63194898d4167d93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda148415fdc6a58dd1fa3a39e0ce5250f13d31932cf20bf63194898d4167d93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda148415fdc6a58dd1fa3a39e0ce5250f13d31932cf20bf63194898d4167d93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda148415fdc6a58dd1fa3a39e0ce5250f13d31932cf20bf63194898d4167d93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.086 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.087 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:16:38 np0005597378 podman[351271]: 2026-01-27 14:16:38.098932326 +0000 UTC m=+0.146396158 container init 6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 09:16:38 np0005597378 podman[351271]: 2026-01-27 14:16:38.107139515 +0000 UTC m=+0.154603347 container start 6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:16:38 np0005597378 podman[351271]: 2026-01-27 14:16:38.110495214 +0000 UTC m=+0.157959076 container attach 6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.141 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.142 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.162 238945 INFO nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.177 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.276 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.277 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.277 238945 INFO nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Creating image(s)#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.308 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.333 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.355 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.358 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.435 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.436 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.436 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.437 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.461 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.468 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.620 238945 DEBUG nova.policy [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.779 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.854 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:16:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:38.887 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:16:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:38.888 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.895 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:38 np0005597378 lvm[351516]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:16:38 np0005597378 lvm[351516]: VG ceph_vg1 finished
Jan 27 09:16:38 np0005597378 lvm[351515]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:16:38 np0005597378 lvm[351515]: VG ceph_vg0 finished
Jan 27 09:16:38 np0005597378 lvm[351518]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:16:38 np0005597378 lvm[351518]: VG ceph_vg2 finished
Jan 27 09:16:38 np0005597378 nova_compute[238941]: 2026-01-27 14:16:38.967 238945 DEBUG nova.objects.instance [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 851e0dd0-2021-44f6-8c51-af4bc91e02f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:16:38 np0005597378 festive_mendel[351289]: {}
Jan 27 09:16:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 41 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 27 09:16:39 np0005597378 nova_compute[238941]: 2026-01-27 14:16:39.003 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:16:39 np0005597378 nova_compute[238941]: 2026-01-27 14:16:39.003 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Ensure instance console log exists: /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:16:39 np0005597378 nova_compute[238941]: 2026-01-27 14:16:39.004 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:39 np0005597378 nova_compute[238941]: 2026-01-27 14:16:39.004 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:39 np0005597378 nova_compute[238941]: 2026-01-27 14:16:39.005 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:39 np0005597378 systemd[1]: libpod-6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783.scope: Deactivated successfully.
Jan 27 09:16:39 np0005597378 systemd[1]: libpod-6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783.scope: Consumed 1.456s CPU time.
Jan 27 09:16:39 np0005597378 podman[351271]: 2026-01-27 14:16:39.02560676 +0000 UTC m=+1.073070592 container died 6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:16:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay-dda148415fdc6a58dd1fa3a39e0ce5250f13d31932cf20bf63194898d4167d93-merged.mount: Deactivated successfully.
Jan 27 09:16:39 np0005597378 podman[351271]: 2026-01-27 14:16:39.123593636 +0000 UTC m=+1.171057468 container remove 6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:16:39 np0005597378 systemd[1]: libpod-conmon-6c77eb20c808db85a98c941e8e9d635d96ce2b394b9edea4fb928ed78e44c783.scope: Deactivated successfully.
Jan 27 09:16:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:16:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:16:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:16:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:16:39 np0005597378 nova_compute[238941]: 2026-01-27 14:16:39.436 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Successfully created port: 27f90ae7-2cc0-4208-a70d-88c06320e5a3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:16:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:39 np0005597378 nova_compute[238941]: 2026-01-27 14:16:39.644 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:39 np0005597378 nova_compute[238941]: 2026-01-27 14:16:39.946 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Successfully created port: 32def379-eb10-485a-99b5-d69fe5f3b228 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:16:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:16:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:16:40 np0005597378 nova_compute[238941]: 2026-01-27 14:16:40.683 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Successfully updated port: 27f90ae7-2cc0-4208-a70d-88c06320e5a3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:16:40 np0005597378 nova_compute[238941]: 2026-01-27 14:16:40.771 238945 DEBUG nova.compute.manager [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-changed-27f90ae7-2cc0-4208-a70d-88c06320e5a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:40 np0005597378 nova_compute[238941]: 2026-01-27 14:16:40.772 238945 DEBUG nova.compute.manager [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing instance network info cache due to event network-changed-27f90ae7-2cc0-4208-a70d-88c06320e5a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:16:40 np0005597378 nova_compute[238941]: 2026-01-27 14:16:40.772 238945 DEBUG oslo_concurrency.lockutils [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:16:40 np0005597378 nova_compute[238941]: 2026-01-27 14:16:40.772 238945 DEBUG oslo_concurrency.lockutils [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:16:40 np0005597378 nova_compute[238941]: 2026-01-27 14:16:40.772 238945 DEBUG nova.network.neutron [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing network info cache for port 27f90ae7-2cc0-4208-a70d-88c06320e5a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:16:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 53 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 328 KiB/s wr, 40 op/s
Jan 27 09:16:41 np0005597378 nova_compute[238941]: 2026-01-27 14:16:41.060 238945 DEBUG nova.network.neutron [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:16:41 np0005597378 nova_compute[238941]: 2026-01-27 14:16:41.944 238945 DEBUG nova.network.neutron [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:16:41 np0005597378 nova_compute[238941]: 2026-01-27 14:16:41.961 238945 DEBUG oslo_concurrency.lockutils [req-33342f35-6cff-4303-8222-13d10f05b8dd req-79c6441d-bc76-4b68-86d8-163b36455171 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:16:42 np0005597378 nova_compute[238941]: 2026-01-27 14:16:42.569 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Successfully updated port: 32def379-eb10-485a-99b5-d69fe5f3b228 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:16:42 np0005597378 nova_compute[238941]: 2026-01-27 14:16:42.594 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:16:42 np0005597378 nova_compute[238941]: 2026-01-27 14:16:42.595 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:16:42 np0005597378 nova_compute[238941]: 2026-01-27 14:16:42.595 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:16:42 np0005597378 nova_compute[238941]: 2026-01-27 14:16:42.635 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:42 np0005597378 nova_compute[238941]: 2026-01-27 14:16:42.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:42 np0005597378 nova_compute[238941]: 2026-01-27 14:16:42.683 238945 DEBUG nova.compute.manager [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-changed-32def379-eb10-485a-99b5-d69fe5f3b228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:42 np0005597378 nova_compute[238941]: 2026-01-27 14:16:42.684 238945 DEBUG nova.compute.manager [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing instance network info cache due to event network-changed-32def379-eb10-485a-99b5-d69fe5f3b228. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:16:42 np0005597378 nova_compute[238941]: 2026-01-27 14:16:42.684 238945 DEBUG oslo_concurrency.lockutils [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:16:42 np0005597378 nova_compute[238941]: 2026-01-27 14:16:42.799 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:16:42 np0005597378 nova_compute[238941]: 2026-01-27 14:16:42.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 53 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 328 KiB/s wr, 40 op/s
Jan 27 09:16:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:44 np0005597378 nova_compute[238941]: 2026-01-27 14:16:44.648 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.392 238945 DEBUG nova.network.neutron [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": 
"2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.421 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.421 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance network_info: |[{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.421 238945 DEBUG oslo_concurrency.lockutils [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.422 238945 DEBUG nova.network.neutron [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing network info cache for port 32def379-eb10-485a-99b5-d69fe5f3b228 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.425 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Start _get_guest_xml network_info=[{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.429 238945 WARNING nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.436 238945 DEBUG nova.virt.libvirt.host [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.437 238945 DEBUG nova.virt.libvirt.host [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.447 238945 DEBUG nova.virt.libvirt.host [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.448 238945 DEBUG nova.virt.libvirt.host [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.449 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.449 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.450 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.450 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.450 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.450 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.451 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.451 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.451 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.452 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.452 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.452 238945 DEBUG nova.virt.hardware [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:16:45 np0005597378 nova_compute[238941]: 2026-01-27 14:16:45.457 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:16:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:16:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/839424247' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:16:46 np0005597378 nova_compute[238941]: 2026-01-27 14:16:46.063 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:16:46 np0005597378 nova_compute[238941]: 2026-01-27 14:16:46.085 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:16:46 np0005597378 nova_compute[238941]: 2026-01-27 14:16:46.089 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:16:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:46.321 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:46.321 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:46.321 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:16:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817252725' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:16:46 np0005597378 nova_compute[238941]: 2026-01-27 14:16:46.755 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:16:46 np0005597378 nova_compute[238941]: 2026-01-27 14:16:46.757 238945 DEBUG nova.virt.libvirt.vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-857988838',display_name='tempest-TestGettingAddress-server-857988838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-857988838',id=122,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-hsxjzmrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:16:38Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=851e0dd0-2021-44f6-8c51-af4bc91e02f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:16:46 np0005597378 nova_compute[238941]: 2026-01-27 14:16:46.758 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:16:46 np0005597378 nova_compute[238941]: 2026-01-27 14:16:46.758 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:16:46 np0005597378 nova_compute[238941]: 2026-01-27 14:16:46.759 238945 DEBUG nova.virt.libvirt.vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-857988838',display_name='tempest-TestGettingAddress-server-857988838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-857988838',id=122,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-hsxjzmrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:16:38Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=851e0dd0-2021-44f6-8c51-af4bc91e02f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:16:46 np0005597378 nova_compute[238941]: 2026-01-27 14:16:46.760 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:16:46 np0005597378 nova_compute[238941]: 2026-01-27 14:16:46.761 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:16:46 np0005597378 nova_compute[238941]: 2026-01-27 14:16:46.762 238945 DEBUG nova.objects.instance [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 851e0dd0-2021-44f6-8c51-af4bc91e02f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:16:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:46.890 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.078 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  <uuid>851e0dd0-2021-44f6-8c51-af4bc91e02f2</uuid>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  <name>instance-0000007a</name>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-857988838</nova:name>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:16:45</nova:creationTime>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        <nova:port uuid="27f90ae7-2cc0-4208-a70d-88c06320e5a3">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        <nova:port uuid="32def379-eb10-485a-99b5-d69fe5f3b228">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe7c:d49f" ipVersion="6"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe7c:d49f" ipVersion="6"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <entry name="serial">851e0dd0-2021-44f6-8c51-af4bc91e02f2</entry>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <entry name="uuid">851e0dd0-2021-44f6-8c51-af4bc91e02f2</entry>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk.config">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:e2:d7:02"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <target dev="tap27f90ae7-2c"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:7c:d4:9f"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <target dev="tap32def379-eb"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/console.log" append="off"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:16:47 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:16:47 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:16:47 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:16:47 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.080 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Preparing to wait for external event network-vif-plugged-27f90ae7-2cc0-4208-a70d-88c06320e5a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.081 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.081 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.081 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.082 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Preparing to wait for external event network-vif-plugged-32def379-eb10-485a-99b5-d69fe5f3b228 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.082 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.082 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.082 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.083 238945 DEBUG nova.virt.libvirt.vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-857988838',display_name='tempest-TestGettingAddress-server-857988838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-857988838',id=122,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-hsxjzmrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:16:38Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=851e0dd0-2021-44f6-8c51-af4bc91e02f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.083 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.084 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.084 238945 DEBUG os_vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.085 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.085 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.085 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.088 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27f90ae7-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.089 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27f90ae7-2c, col_values=(('external_ids', {'iface-id': '27f90ae7-2cc0-4208-a70d-88c06320e5a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:d7:02', 'vm-uuid': '851e0dd0-2021-44f6-8c51-af4bc91e02f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:47 np0005597378 NetworkManager[48904]: <info>  [1769523407.0913] manager: (tap27f90ae7-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/526)
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.093 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.102 238945 INFO os_vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c')#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.103 238945 DEBUG nova.virt.libvirt.vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-857988838',display_name='tempest-TestGettingAddress-server-857988838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-857988838',id=122,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-hsxjzmrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:16:38Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=851e0dd0-2021-44f6-8c51-af4bc91e02f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.103 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.104 238945 DEBUG nova.network.os_vif_util [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.105 238945 DEBUG os_vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.106 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.106 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.110 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32def379-eb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.111 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap32def379-eb, col_values=(('external_ids', {'iface-id': '32def379-eb10-485a-99b5-d69fe5f3b228', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:d4:9f', 'vm-uuid': '851e0dd0-2021-44f6-8c51-af4bc91e02f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:47 np0005597378 NetworkManager[48904]: <info>  [1769523407.1136] manager: (tap32def379-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.115 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.119 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.120 238945 INFO os_vif [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb')#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.175 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.176 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.176 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:e2:d7:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.177 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:7c:d4:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.177 238945 INFO nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Using config drive#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.196 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.617 238945 INFO nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Creating config drive at /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.623 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuchl2vfm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.660 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.771 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuchl2vfm" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.797 238945 DEBUG nova.storage.rbd_utils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.801 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:16:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:16:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:16:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:16:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:16:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:16:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.929 238945 DEBUG nova.network.neutron [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updated VIF entry in instance network info cache for port 32def379-eb10-485a-99b5-d69fe5f3b228. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.931 238945 DEBUG nova.network.neutron [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.934 238945 DEBUG oslo_concurrency.processutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config 851e0dd0-2021-44f6-8c51-af4bc91e02f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.934 238945 INFO nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Deleting local config drive /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2/disk.config because it was imported into RBD.#033[00m
Jan 27 09:16:47 np0005597378 kernel: tap27f90ae7-2c: entered promiscuous mode
Jan 27 09:16:47 np0005597378 NetworkManager[48904]: <info>  [1769523407.9822] manager: (tap27f90ae7-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/528)
Jan 27 09:16:47 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:47Z|01284|binding|INFO|Claiming lport 27f90ae7-2cc0-4208-a70d-88c06320e5a3 for this chassis.
Jan 27 09:16:47 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:47Z|01285|binding|INFO|27f90ae7-2cc0-4208-a70d-88c06320e5a3: Claiming fa:16:3e:e2:d7:02 10.100.0.4
Jan 27 09:16:47 np0005597378 nova_compute[238941]: 2026-01-27 14:16:47.985 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:47 np0005597378 NetworkManager[48904]: <info>  [1769523407.9971] manager: (tap32def379-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/529)
Jan 27 09:16:48 np0005597378 systemd-udevd[351716]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:16:48 np0005597378 systemd-udevd[351717]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:16:48 np0005597378 NetworkManager[48904]: <info>  [1769523408.0212] device (tap27f90ae7-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:16:48 np0005597378 NetworkManager[48904]: <info>  [1769523408.0221] device (tap27f90ae7-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:16:48 np0005597378 systemd-machined[207425]: New machine qemu-154-instance-0000007a.
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.044 238945 DEBUG oslo_concurrency.lockutils [req-3835c119-8ff5-4939-bf0c-08b5f3aa40e6 req-3d74dd74-487a-49aa-b243-b1f81917d32b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:16:48 np0005597378 systemd[1]: Started Virtual Machine qemu-154-instance-0000007a.
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.059 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:d7:02 10.100.0.4'], port_security=['fa:16:3e:e2:d7:02 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '851e0dd0-2021-44f6-8c51-af4bc91e02f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bbba920-bccc-4cba-962d-9f41f23c2c63, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=27f90ae7-2cc0-4208-a70d-88c06320e5a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.060 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 27f90ae7-2cc0-4208-a70d-88c06320e5a3 in datapath a1ef49dc-ad74-4940-a002-d8d212c0abde bound to our chassis#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.061 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1ef49dc-ad74-4940-a002-d8d212c0abde#033[00m
Jan 27 09:16:48 np0005597378 kernel: tap32def379-eb: entered promiscuous mode
Jan 27 09:16:48 np0005597378 NetworkManager[48904]: <info>  [1769523408.0722] device (tap32def379-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:16:48 np0005597378 NetworkManager[48904]: <info>  [1769523408.0732] device (tap32def379-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.073 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3f604481-530c-49c4-b422-469b9b73a694]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.075 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa1ef49dc-a1 in ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:16:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:48Z|01286|binding|INFO|Claiming lport 32def379-eb10-485a-99b5-d69fe5f3b228 for this chassis.
Jan 27 09:16:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:48Z|01287|binding|INFO|32def379-eb10-485a-99b5-d69fe5f3b228: Claiming fa:16:3e:7c:d4:9f 2001:db8:0:1:f816:3eff:fe7c:d49f 2001:db8::f816:3eff:fe7c:d49f
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.074 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.076 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa1ef49dc-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.076 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4dc715-65e6-436e-a4f2-28313eadf199]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.077 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[97636a2f-561c-4f25-85b6-2314e8c8dfbf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.080 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:48Z|01288|binding|INFO|Setting lport 27f90ae7-2cc0-4208-a70d-88c06320e5a3 ovn-installed in OVS
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.090 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c86f6c28-f22c-4f8e-850e-4f5d0add7fb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:48Z|01289|binding|INFO|Setting lport 32def379-eb10-485a-99b5-d69fe5f3b228 ovn-installed in OVS
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.096 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.112 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:d4:9f 2001:db8:0:1:f816:3eff:fe7c:d49f 2001:db8::f816:3eff:fe7c:d49f'], port_security=['fa:16:3e:7c:d4:9f 2001:db8:0:1:f816:3eff:fe7c:d49f 2001:db8::f816:3eff:fe7c:d49f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe7c:d49f/64 2001:db8::f816:3eff:fe7c:d49f/64', 'neutron:device_id': '851e0dd0-2021-44f6-8c51-af4bc91e02f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab7056f0-caf7-445d-b0fc-9f3ae613d37b, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=32def379-eb10-485a-99b5-d69fe5f3b228) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:16:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:48Z|01290|binding|INFO|Setting lport 27f90ae7-2cc0-4208-a70d-88c06320e5a3 up in Southbound
Jan 27 09:16:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:48Z|01291|binding|INFO|Setting lport 32def379-eb10-485a-99b5-d69fe5f3b228 up in Southbound
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.116 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2888e94e-d06e-4fdb-bced-e641b3a11a49]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.145 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fe04ceb2-4480-4afa-9f1a-54cb5979cf79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.150 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[88c45e69-d42a-4715-a791-c16e9366a7dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 NetworkManager[48904]: <info>  [1769523408.1546] manager: (tapa1ef49dc-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/530)
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.184 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f61b120e-5ae8-4d01-a519-5dd764084880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.188 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[491bc506-112a-40b6-9465-b54a2b8f63a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 NetworkManager[48904]: <info>  [1769523408.2078] device (tapa1ef49dc-a0): carrier: link connected
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.215 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[728c2fd8-5c3f-4ed1-84ee-c82e06fb7dd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.234 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1b105cbd-3b88-4e54-9061-1cd9849c1ca1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1ef49dc-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:57:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604783, 'reachable_time': 29713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351753, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.251 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dd025da5-afe1-4509-8568-0299bdc4288e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:5774'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604783, 'tstamp': 604783}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351754, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.266 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d20bb4f-0bed-4426-af22-6c0a69e246e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1ef49dc-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:57:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604783, 'reachable_time': 29713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351755, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.297 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd34304-51fd-42e5-bd5e-e480e377df0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.354 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d16dd207-0218-4169-8b25-5909b7f1aa77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.355 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1ef49dc-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.355 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.355 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1ef49dc-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.357 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:48 np0005597378 NetworkManager[48904]: <info>  [1769523408.3577] manager: (tapa1ef49dc-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/531)
Jan 27 09:16:48 np0005597378 kernel: tapa1ef49dc-a0: entered promiscuous mode
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.360 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1ef49dc-a0, col_values=(('external_ids', {'iface-id': '73d5d48d-2488-422a-a3b3-324997b57d60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:48Z|01292|binding|INFO|Releasing lport 73d5d48d-2488-422a-a3b3-324997b57d60 from this chassis (sb_readonly=0)
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.360 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.377 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.378 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a1ef49dc-ad74-4940-a002-d8d212c0abde.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a1ef49dc-ad74-4940-a002-d8d212c0abde.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.379 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b9adb54c-bf5a-4dc8-b8a8-a81fa56cf895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.379 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-a1ef49dc-ad74-4940-a002-d8d212c0abde
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/a1ef49dc-ad74-4940-a002-d8d212c0abde.pid.haproxy
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID a1ef49dc-ad74-4940-a002-d8d212c0abde
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.380 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'env', 'PROCESS_TAG=haproxy-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a1ef49dc-ad74-4940-a002-d8d212c0abde.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.451 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523408.4510882, 851e0dd0-2021-44f6-8c51-af4bc91e02f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.452 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] VM Started (Lifecycle Event)#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.478 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.481 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523408.4513533, 851e0dd0-2021-44f6-8c51-af4bc91e02f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.481 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.530 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.533 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.608 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:16:48 np0005597378 podman[351828]: 2026-01-27 14:16:48.718010583 +0000 UTC m=+0.049491352 container create bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:16:48 np0005597378 systemd[1]: Started libpod-conmon-bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83.scope.
Jan 27 09:16:48 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:16:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e168c9dcead1ae33c6d74ff3d85146d54e18497146f1a6c1d1086209c9eb098/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:48 np0005597378 podman[351828]: 2026-01-27 14:16:48.780197363 +0000 UTC m=+0.111678132 container init bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:16:48 np0005597378 podman[351828]: 2026-01-27 14:16:48.78496316 +0000 UTC m=+0.116443929 container start bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:16:48 np0005597378 podman[351828]: 2026-01-27 14:16:48.689057221 +0000 UTC m=+0.020538010 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:16:48 np0005597378 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [NOTICE]   (351847) : New worker (351849) forked
Jan 27 09:16:48 np0005597378 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [NOTICE]   (351847) : Loading success.
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.836 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 32def379-eb10-485a-99b5-d69fe5f3b228 in datapath 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 unbound from our chassis#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.838 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.848 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[248fecf0-d653-42a9-b9ed-71e25731b84d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.848 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ff1fc04-01 in ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.850 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ff1fc04-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.850 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3270e5e6-c3af-42ef-a17f-ff9a0c835148]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.851 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c73fdccf-cd0f-48c1-94d8-b13d3b87f95b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.856 238945 DEBUG nova.compute.manager [req-a501259d-9aff-48eb-992a-81b4adeb1894 req-a983ce2c-64f8-4cb6-a596-cc955841cb9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-vif-plugged-32def379-eb10-485a-99b5-d69fe5f3b228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.856 238945 DEBUG oslo_concurrency.lockutils [req-a501259d-9aff-48eb-992a-81b4adeb1894 req-a983ce2c-64f8-4cb6-a596-cc955841cb9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.857 238945 DEBUG oslo_concurrency.lockutils [req-a501259d-9aff-48eb-992a-81b4adeb1894 req-a983ce2c-64f8-4cb6-a596-cc955841cb9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.857 238945 DEBUG oslo_concurrency.lockutils [req-a501259d-9aff-48eb-992a-81b4adeb1894 req-a983ce2c-64f8-4cb6-a596-cc955841cb9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:48 np0005597378 nova_compute[238941]: 2026-01-27 14:16:48.857 238945 DEBUG nova.compute.manager [req-a501259d-9aff-48eb-992a-81b4adeb1894 req-a983ce2c-64f8-4cb6-a596-cc955841cb9b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Processing event network-vif-plugged-32def379-eb10-485a-99b5-d69fe5f3b228 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.865 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[1e266cd0-1b85-4533-a405-318f2a9cae42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.879 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51845a8a-1359-47c7-aa4d-279f70a86f85]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.904 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6cef4b66-a36a-4219-bfdc-6f6bab5d4aba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.910 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[31fc6c5b-be87-4dfa-ad47-76f5d2de8378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 NetworkManager[48904]: <info>  [1769523408.9115] manager: (tap4ff1fc04-00): new Veth device (/org/freedesktop/NetworkManager/Devices/532)
Jan 27 09:16:48 np0005597378 systemd-udevd[351745]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.947 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8e309177-0cd9-4ef4-8243-585ac9567992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.949 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1c16a050-3083-4e11-a585-19f2949489ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 NetworkManager[48904]: <info>  [1769523408.9677] device (tap4ff1fc04-00): carrier: link connected
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.971 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[35a12dc1-dd45-4b3f-bcc7-1969bf83e41b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:48.990 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a791d2-f922-402e-9126-12a286dff7f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ff1fc04-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:e4:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604859, 'reachable_time': 21612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351868, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.005 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[edb7a46c-674e-4c5b-b113-7673f94a7df3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:e446'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604859, 'tstamp': 604859}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351869, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.028 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[34bcb3bf-ed6e-43fd-90bb-2d9420197b41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ff1fc04-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:e4:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604859, 'reachable_time': 21612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351870, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.061 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f71b659f-c986-4c22-bc97-9fa67f325e09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.091 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2a13ea98-f11a-4791-a7a5-7f083dbb93f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.092 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ff1fc04-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.092 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.093 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ff1fc04-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:49 np0005597378 nova_compute[238941]: 2026-01-27 14:16:49.095 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:49 np0005597378 NetworkManager[48904]: <info>  [1769523409.0957] manager: (tap4ff1fc04-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/533)
Jan 27 09:16:49 np0005597378 kernel: tap4ff1fc04-00: entered promiscuous mode
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.099 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ff1fc04-00, col_values=(('external_ids', {'iface-id': '7641ffb0-ddda-4391-aadd-cbcdb9365edb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:16:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:49Z|01293|binding|INFO|Releasing lport 7641ffb0-ddda-4391-aadd-cbcdb9365edb from this chassis (sb_readonly=0)
Jan 27 09:16:49 np0005597378 nova_compute[238941]: 2026-01-27 14:16:49.100 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:49 np0005597378 nova_compute[238941]: 2026-01-27 14:16:49.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.101 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.102 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d40839eb-2c9e-4741-a2bb-c787eeabc8b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.103 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148.pid.haproxy
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:16:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:16:49.103 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'env', 'PROCESS_TAG=haproxy-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:16:49 np0005597378 nova_compute[238941]: 2026-01-27 14:16:49.115 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:49 np0005597378 podman[351900]: 2026-01-27 14:16:49.447127774 +0000 UTC m=+0.022627345 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:16:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:49 np0005597378 nova_compute[238941]: 2026-01-27 14:16:49.616 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523394.6144521, b6d22bc4-3a93-41d9-8495-ef8b33fa64db => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:16:49 np0005597378 nova_compute[238941]: 2026-01-27 14:16:49.616 238945 INFO nova.compute.manager [-] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:16:49 np0005597378 podman[351900]: 2026-01-27 14:16:49.645604472 +0000 UTC m=+0.221104023 container create ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 27 09:16:49 np0005597378 systemd[1]: Started libpod-conmon-ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f.scope.
Jan 27 09:16:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:16:49 np0005597378 nova_compute[238941]: 2026-01-27 14:16:49.715 238945 DEBUG nova.compute.manager [None req-95d04fe4-4ec6-423c-8bbd-48c20319c149 - - - - - -] [instance: b6d22bc4-3a93-41d9-8495-ef8b33fa64db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:16:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6492a2ecf9735b6ad702ce6ce8a5eac75de24fbc6fda3d2c7395609022e5c69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:16:49 np0005597378 podman[351900]: 2026-01-27 14:16:49.764934487 +0000 UTC m=+0.340434038 container init ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 09:16:49 np0005597378 podman[351900]: 2026-01-27 14:16:49.776096655 +0000 UTC m=+0.351596246 container start ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 09:16:49 np0005597378 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [NOTICE]   (351920) : New worker (351922) forked
Jan 27 09:16:49 np0005597378 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [NOTICE]   (351920) : Loading success.
Jan 27 09:16:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.070 238945 DEBUG nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-vif-plugged-32def379-eb10-485a-99b5-d69fe5f3b228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.071 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.071 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.072 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.072 238945 DEBUG nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] No event matching network-vif-plugged-32def379-eb10-485a-99b5-d69fe5f3b228 in dict_keys([('network-vif-plugged', '27f90ae7-2cc0-4208-a70d-88c06320e5a3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.072 238945 WARNING nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received unexpected event network-vif-plugged-32def379-eb10-485a-99b5-d69fe5f3b228 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.073 238945 DEBUG nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-vif-plugged-27f90ae7-2cc0-4208-a70d-88c06320e5a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.073 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.073 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.074 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.074 238945 DEBUG nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Processing event network-vif-plugged-27f90ae7-2cc0-4208-a70d-88c06320e5a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.074 238945 DEBUG nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-vif-plugged-27f90ae7-2cc0-4208-a70d-88c06320e5a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.075 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.075 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.075 238945 DEBUG oslo_concurrency.lockutils [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.075 238945 DEBUG nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] No waiting events found dispatching network-vif-plugged-27f90ae7-2cc0-4208-a70d-88c06320e5a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.076 238945 WARNING nova.compute.manager [req-ba7e3f78-31fa-426b-9236-66df9ae16340 req-4ab4f466-fdba-454a-859a-621251da3d89 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received unexpected event network-vif-plugged-27f90ae7-2cc0-4208-a70d-88c06320e5a3 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.076 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.081 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523411.0810359, 851e0dd0-2021-44f6-8c51-af4bc91e02f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.081 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.084 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.087 238945 INFO nova.virt.libvirt.driver [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance spawned successfully.#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.088 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.144 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.148 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.412 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.413 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.413 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.414 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.414 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.415 238945 DEBUG nova.virt.libvirt.driver [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.428 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.573 238945 INFO nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Took 13.30 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.574 238945 DEBUG nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.702 238945 INFO nova.compute.manager [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Took 14.45 seconds to build instance.#033[00m
Jan 27 09:16:51 np0005597378 nova_compute[238941]: 2026-01-27 14:16:51.766 238945 DEBUG oslo_concurrency.lockutils [None req-ba72860f-0d04-4d5f-8ba0-15a8381bb475 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:16:52 np0005597378 nova_compute[238941]: 2026-01-27 14:16:52.114 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:52 np0005597378 nova_compute[238941]: 2026-01-27 14:16:52.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.5 MiB/s wr, 20 op/s
Jan 27 09:16:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 75 op/s
Jan 27 09:16:55 np0005597378 NetworkManager[48904]: <info>  [1769523415.9312] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/534)
Jan 27 09:16:55 np0005597378 NetworkManager[48904]: <info>  [1769523415.9319] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Jan 27 09:16:55 np0005597378 nova_compute[238941]: 2026-01-27 14:16:55.930 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:56Z|01294|binding|INFO|Releasing lport 73d5d48d-2488-422a-a3b3-324997b57d60 from this chassis (sb_readonly=0)
Jan 27 09:16:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:16:56Z|01295|binding|INFO|Releasing lport 7641ffb0-ddda-4391-aadd-cbcdb9365edb from this chassis (sb_readonly=0)
Jan 27 09:16:56 np0005597378 nova_compute[238941]: 2026-01-27 14:16:56.002 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:56 np0005597378 nova_compute[238941]: 2026-01-27 14:16:56.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:56 np0005597378 nova_compute[238941]: 2026-01-27 14:16:56.361 238945 DEBUG nova.compute.manager [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-changed-27f90ae7-2cc0-4208-a70d-88c06320e5a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:16:56 np0005597378 nova_compute[238941]: 2026-01-27 14:16:56.361 238945 DEBUG nova.compute.manager [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing instance network info cache due to event network-changed-27f90ae7-2cc0-4208-a70d-88c06320e5a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:16:56 np0005597378 nova_compute[238941]: 2026-01-27 14:16:56.361 238945 DEBUG oslo_concurrency.lockutils [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:16:56 np0005597378 nova_compute[238941]: 2026-01-27 14:16:56.362 238945 DEBUG oslo_concurrency.lockutils [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:16:56 np0005597378 nova_compute[238941]: 2026-01-27 14:16:56.362 238945 DEBUG nova.network.neutron [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing network info cache for port 27f90ae7-2cc0-4208-a70d-88c06320e5a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:16:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 09:16:57 np0005597378 nova_compute[238941]: 2026-01-27 14:16:57.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:57 np0005597378 nova_compute[238941]: 2026-01-27 14:16:57.507 238945 DEBUG nova.network.neutron [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updated VIF entry in instance network info cache for port 27f90ae7-2cc0-4208-a70d-88c06320e5a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:16:57 np0005597378 nova_compute[238941]: 2026-01-27 14:16:57.508 238945 DEBUG nova.network.neutron [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:16:57 np0005597378 nova_compute[238941]: 2026-01-27 14:16:57.529 238945 DEBUG oslo_concurrency.lockutils [req-255f3876-6914-434f-89a2-191380201b4a req-7c6be1b4-e334-4b73-88f6-08e25952e218 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:16:57 np0005597378 nova_compute[238941]: 2026-01-27 14:16:57.640 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:16:57 np0005597378 podman[351932]: 2026-01-27 14:16:57.714021959 +0000 UTC m=+0.049077272 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 27 09:16:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 09:16:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:16:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:16:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1526259920' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:16:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:16:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1526259920' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:16:59 np0005597378 podman[351951]: 2026-01-27 14:16:59.749652652 +0000 UTC m=+0.083879099 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 09:17:00 np0005597378 nova_compute[238941]: 2026-01-27 14:17:00.988 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "5b3093c8-a99f-45d8-b612-447b6a5412c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:00 np0005597378 nova_compute[238941]: 2026-01-27 14:17:00.988 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.018 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.099 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.099 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.108 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.109 238945 INFO nova.compute.claims [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.245 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.406 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.430 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:17:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1652483258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.835 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.845 238945 DEBUG nova.compute.provider_tree [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.878 238945 DEBUG nova.scheduler.client.report [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.921 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.922 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.925 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.925 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.925 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:17:01 np0005597378 nova_compute[238941]: 2026-01-27 14:17:01.926 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.032 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.033 238945 DEBUG nova.network.neutron [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.069 238945 INFO nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.090 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.119 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.197 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.198 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.199 238945 INFO nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Creating image(s)#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.218 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.242 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.264 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.268 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.345 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.346 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.346 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.347 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.370 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.374 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:17:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/710746645' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.522 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.562 238945 DEBUG nova.policy [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.608 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.609 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.641 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.779 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.782 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3489MB free_disk=59.966621374711394GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.782 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.782 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.798 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.855 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.885 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 851e0dd0-2021-44f6-8c51-af4bc91e02f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.885 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 5b3093c8-a99f-45d8-b612-447b6a5412c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.885 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.886 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.949 238945 DEBUG nova.objects.instance [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 5b3093c8-a99f-45d8-b612-447b6a5412c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:17:02 np0005597378 nova_compute[238941]: 2026-01-27 14:17:02.962 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:03 np0005597378 nova_compute[238941]: 2026-01-27 14:17:03.007 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:17:03 np0005597378 nova_compute[238941]: 2026-01-27 14:17:03.008 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Ensure instance console log exists: /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:17:03 np0005597378 nova_compute[238941]: 2026-01-27 14:17:03.008 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Jan 27 09:17:03 np0005597378 nova_compute[238941]: 2026-01-27 14:17:03.009 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:03 np0005597378 nova_compute[238941]: 2026-01-27 14:17:03.009 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:17:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2541340737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:17:03 np0005597378 nova_compute[238941]: 2026-01-27 14:17:03.573 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:03 np0005597378 nova_compute[238941]: 2026-01-27 14:17:03.578 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:17:03 np0005597378 nova_compute[238941]: 2026-01-27 14:17:03.593 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:17:03 np0005597378 nova_compute[238941]: 2026-01-27 14:17:03.613 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:17:03 np0005597378 nova_compute[238941]: 2026-01-27 14:17:03.614 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:04Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:d7:02 10.100.0.4
Jan 27 09:17:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:04Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:d7:02 10.100.0.4
Jan 27 09:17:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:04 np0005597378 nova_compute[238941]: 2026-01-27 14:17:04.821 238945 DEBUG nova.network.neutron [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Successfully created port: 674bbd90-8ebb-485f-bccf-39535e0b1f3e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:17:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 144 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 111 op/s
Jan 27 09:17:06 np0005597378 nova_compute[238941]: 2026-01-27 14:17:06.085 238945 DEBUG nova.network.neutron [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Successfully updated port: 674bbd90-8ebb-485f-bccf-39535e0b1f3e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:17:06 np0005597378 nova_compute[238941]: 2026-01-27 14:17:06.099 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:17:06 np0005597378 nova_compute[238941]: 2026-01-27 14:17:06.099 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:17:06 np0005597378 nova_compute[238941]: 2026-01-27 14:17:06.099 238945 DEBUG nova.network.neutron [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:17:06 np0005597378 nova_compute[238941]: 2026-01-27 14:17:06.185 238945 DEBUG nova.compute.manager [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received event network-changed-674bbd90-8ebb-485f-bccf-39535e0b1f3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:06 np0005597378 nova_compute[238941]: 2026-01-27 14:17:06.185 238945 DEBUG nova.compute.manager [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Refreshing instance network info cache due to event network-changed-674bbd90-8ebb-485f-bccf-39535e0b1f3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:17:06 np0005597378 nova_compute[238941]: 2026-01-27 14:17:06.185 238945 DEBUG oslo_concurrency.lockutils [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:17:06 np0005597378 nova_compute[238941]: 2026-01-27 14:17:06.244 238945 DEBUG nova.network.neutron [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:17:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 161 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 647 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.122 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.211 238945 DEBUG nova.network.neutron [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updating instance_info_cache with network_info: [{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.320 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.321 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Instance network_info: |[{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.321 238945 DEBUG oslo_concurrency.lockutils [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.322 238945 DEBUG nova.network.neutron [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Refreshing network info cache for port 674bbd90-8ebb-485f-bccf-39535e0b1f3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.326 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Start _get_guest_xml network_info=[{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.332 238945 WARNING nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.345 238945 DEBUG nova.virt.libvirt.host [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.346 238945 DEBUG nova.virt.libvirt.host [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.350 238945 DEBUG nova.virt.libvirt.host [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.351 238945 DEBUG nova.virt.libvirt.host [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.351 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.351 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.352 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.352 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.353 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.353 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.353 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.353 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.354 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.354 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.354 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.354 238945 DEBUG nova.virt.hardware [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.358 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.590 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.594 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.644 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:17:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1301289775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.901 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.921 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:07 np0005597378 nova_compute[238941]: 2026-01-27 14:17:07.925 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:17:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1722750330' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.482 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.485 238945 DEBUG nova.virt.libvirt.vif [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:16:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1403334171',display_name='tempest-TestNetworkBasicOps-server-1403334171',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1403334171',id=123,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFC1cqJrt5q8dhbfCopKr7DkgHsb6klvfm+5aRYlinfR2B21lI9vu1jBorPfSj3isXMHvTevkrNKtSRdTzcTp9q4s/p/QJbfs+zmsxfAbbF0PDMabDhurjSdiaAYWO65Q==',key_name='tempest-TestNetworkBasicOps-940230937',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-9ajp9grx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=5b3093c8-a99f-45d8-b612-447b6a5412c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.487 238945 DEBUG nova.network.os_vif_util [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.488 238945 DEBUG nova.network.os_vif_util [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.490 238945 DEBUG nova.objects.instance [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b3093c8-a99f-45d8-b612-447b6a5412c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.578 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  <uuid>5b3093c8-a99f-45d8-b612-447b6a5412c5</uuid>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  <name>instance-0000007b</name>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkBasicOps-server-1403334171</nova:name>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:17:07</nova:creationTime>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:        <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:        <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:        <nova:port uuid="674bbd90-8ebb-485f-bccf-39535e0b1f3e">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <entry name="serial">5b3093c8-a99f-45d8-b612-447b6a5412c5</entry>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <entry name="uuid">5b3093c8-a99f-45d8-b612-447b6a5412c5</entry>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5b3093c8-a99f-45d8-b612-447b6a5412c5_disk">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/5b3093c8-a99f-45d8-b612-447b6a5412c5_disk.config">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:ac:47:86"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <target dev="tap674bbd90-8e"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/console.log" append="off"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:17:08 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:17:08 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:17:08 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:17:08 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.579 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Preparing to wait for external event network-vif-plugged-674bbd90-8ebb-485f-bccf-39535e0b1f3e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.580 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.580 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.580 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.581 238945 DEBUG nova.virt.libvirt.vif [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:16:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1403334171',display_name='tempest-TestNetworkBasicOps-server-1403334171',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1403334171',id=123,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFC1cqJrt5q8dhbfCopKr7DkgHsb6klvfm+5aRYlinfR2B21lI9vu1jBorPfSj3isXMHvTevkrNKtSRdTzcTp9q4s/p/QJbfs+zmsxfAbbF0PDMabDhurjSdiaAYWO65Q==',key_name='tempest-TestNetworkBasicOps-940230937',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-9ajp9grx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=5b3093c8-a99f-45d8-b612-447b6a5412c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.581 238945 DEBUG nova.network.os_vif_util [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.582 238945 DEBUG nova.network.os_vif_util [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.582 238945 DEBUG os_vif [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.583 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.583 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.583 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.587 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap674bbd90-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.587 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap674bbd90-8e, col_values=(('external_ids', {'iface-id': '674bbd90-8ebb-485f-bccf-39535e0b1f3e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:47:86', 'vm-uuid': '5b3093c8-a99f-45d8-b612-447b6a5412c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.589 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:08 np0005597378 NetworkManager[48904]: <info>  [1769523428.5900] manager: (tap674bbd90-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/536)
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.596 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:08 np0005597378 nova_compute[238941]: 2026-01-27 14:17:08.597 238945 INFO os_vif [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e')#033[00m
Jan 27 09:17:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 167 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Jan 27 09:17:09 np0005597378 nova_compute[238941]: 2026-01-27 14:17:09.227 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:17:09 np0005597378 nova_compute[238941]: 2026-01-27 14:17:09.227 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:17:09 np0005597378 nova_compute[238941]: 2026-01-27 14:17:09.227 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:ac:47:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:17:09 np0005597378 nova_compute[238941]: 2026-01-27 14:17:09.228 238945 INFO nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Using config drive#033[00m
Jan 27 09:17:09 np0005597378 nova_compute[238941]: 2026-01-27 14:17:09.251 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:09 np0005597378 nova_compute[238941]: 2026-01-27 14:17:09.318 238945 DEBUG nova.network.neutron [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updated VIF entry in instance network info cache for port 674bbd90-8ebb-485f-bccf-39535e0b1f3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:17:09 np0005597378 nova_compute[238941]: 2026-01-27 14:17:09.319 238945 DEBUG nova.network.neutron [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updating instance_info_cache with network_info: [{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:09 np0005597378 nova_compute[238941]: 2026-01-27 14:17:09.473 238945 DEBUG oslo_concurrency.lockutils [req-51bd4fe1-eb3f-498c-95e5-cd5974bb8400 req-67716914-cb60-43b7-9240-c4f86216f9b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:17:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:09 np0005597378 nova_compute[238941]: 2026-01-27 14:17:09.811 238945 INFO nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Creating config drive at /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/disk.config#033[00m
Jan 27 09:17:09 np0005597378 nova_compute[238941]: 2026-01-27 14:17:09.816 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpodrwjpob execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:09 np0005597378 nova_compute[238941]: 2026-01-27 14:17:09.960 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpodrwjpob" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:09 np0005597378 nova_compute[238941]: 2026-01-27 14:17:09.994 238945 DEBUG nova.storage.rbd_utils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:09 np0005597378 nova_compute[238941]: 2026-01-27 14:17:09.998 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/disk.config 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.127 238945 DEBUG oslo_concurrency.processutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/disk.config 5b3093c8-a99f-45d8-b612-447b6a5412c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.128 238945 INFO nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Deleting local config drive /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5/disk.config because it was imported into RBD.#033[00m
Jan 27 09:17:10 np0005597378 NetworkManager[48904]: <info>  [1769523430.1819] manager: (tap674bbd90-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/537)
Jan 27 09:17:10 np0005597378 kernel: tap674bbd90-8e: entered promiscuous mode
Jan 27 09:17:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:10Z|01296|binding|INFO|Claiming lport 674bbd90-8ebb-485f-bccf-39535e0b1f3e for this chassis.
Jan 27 09:17:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:10Z|01297|binding|INFO|674bbd90-8ebb-485f-bccf-39535e0b1f3e: Claiming fa:16:3e:ac:47:86 10.100.0.6
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.185 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:10Z|01298|binding|INFO|Setting lport 674bbd90-8ebb-485f-bccf-39535e0b1f3e ovn-installed in OVS
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.201 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.204 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:10Z|01299|binding|INFO|Setting lport 674bbd90-8ebb-485f-bccf-39535e0b1f3e up in Southbound
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.211 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:47:86 10.100.0.6'], port_security=['fa:16:3e:ac:47:86 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5b3093c8-a99f-45d8-b612-447b6a5412c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-998c1efd-c4d9-4646-b881-3c79abc13336', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5556729e-c674-4885-93a2-19d3e66349dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=00e7300d-3433-4606-9131-5e74b7c09c27, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=674bbd90-8ebb-485f-bccf-39535e0b1f3e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.212 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 674bbd90-8ebb-485f-bccf-39535e0b1f3e in datapath 998c1efd-c4d9-4646-b881-3c79abc13336 bound to our chassis#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.214 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 998c1efd-c4d9-4646-b881-3c79abc13336#033[00m
Jan 27 09:17:10 np0005597378 systemd-udevd[352347]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:17:10 np0005597378 systemd-machined[207425]: New machine qemu-155-instance-0000007b.
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.227 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[18406f80-a889-47d8-93ad-29556be064d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.228 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap998c1efd-c1 in ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.229 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap998c1efd-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.229 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8866245d-e7b0-4b63-8d63-94c6a7b32655]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 NetworkManager[48904]: <info>  [1769523430.2307] device (tap674bbd90-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:17:10 np0005597378 NetworkManager[48904]: <info>  [1769523430.2316] device (tap674bbd90-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.230 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ea8470-055d-453a-b5af-10c735c1d65e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 systemd[1]: Started Virtual Machine qemu-155-instance-0000007b.
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.243 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[e419a1ff-dfff-4b0a-b93c-63fa73bf0427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.269 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cb52171b-0f0d-496f-9a0d-4a67431cac0a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.295 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b25234-6b7c-432f-ba21-9f5c07175dc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 NetworkManager[48904]: <info>  [1769523430.3026] manager: (tap998c1efd-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/538)
Jan 27 09:17:10 np0005597378 systemd-udevd[352351]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.302 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6542ce54-f7ef-4c51-ae0e-3ec68eac65a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.333 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[753b87de-aeda-46a9-b6a8-1a840d251ecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.336 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[56722deb-965d-4145-b03c-8e9bad617912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 NetworkManager[48904]: <info>  [1769523430.3605] device (tap998c1efd-c0): carrier: link connected
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.369 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e93c3264-e21e-416d-a4ff-e528ba142a59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.378 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.388 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[948f9724-ffce-4adc-9cb6-c14034375c46]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap998c1efd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:9d:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606998, 'reachable_time': 21184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352381, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.407 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ddb3d38-9fc0-4c58-9100-b69b5c40c932]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:9d1b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606998, 'tstamp': 606998}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352382, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.425 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9642399a-9097-4c1d-9684-c9394d765a14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap998c1efd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:9d:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606998, 'reachable_time': 21184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352383, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.462 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[37c183b9-445d-412f-b4bd-599ca9a41751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.527 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba80ab5-6fd9-4b08-8d4e-7799a1d1f727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.530 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap998c1efd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.530 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.531 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap998c1efd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.533 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:10 np0005597378 NetworkManager[48904]: <info>  [1769523430.5340] manager: (tap998c1efd-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/539)
Jan 27 09:17:10 np0005597378 kernel: tap998c1efd-c0: entered promiscuous mode
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.536 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.538 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap998c1efd-c0, col_values=(('external_ids', {'iface-id': 'b0077a9b-7ee7-4f6c-9114-22d1ebb8d9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.539 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:10Z|01300|binding|INFO|Releasing lport b0077a9b-7ee7-4f6c-9114-22d1ebb8d9b8 from this chassis (sb_readonly=0)
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.567 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.568 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.568 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/998c1efd-c4d9-4646-b881-3c79abc13336.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/998c1efd-c4d9-4646-b881-3c79abc13336.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.569 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ca104336-f549-41da-af22-2a73575f70c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.570 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-998c1efd-c4d9-4646-b881-3c79abc13336
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/998c1efd-c4d9-4646-b881-3c79abc13336.pid.haproxy
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 998c1efd-c4d9-4646-b881-3c79abc13336
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:17:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:10.572 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'env', 'PROCESS_TAG=haproxy-998c1efd-c4d9-4646-b881-3c79abc13336', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/998c1efd-c4d9-4646-b881-3c79abc13336.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.749 238945 DEBUG nova.compute.manager [req-fc00e98a-a75c-4ae7-8542-737654635b15 req-39ad5163-3192-481e-9871-78ef25b61f6c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received event network-vif-plugged-674bbd90-8ebb-485f-bccf-39535e0b1f3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.750 238945 DEBUG oslo_concurrency.lockutils [req-fc00e98a-a75c-4ae7-8542-737654635b15 req-39ad5163-3192-481e-9871-78ef25b61f6c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.750 238945 DEBUG oslo_concurrency.lockutils [req-fc00e98a-a75c-4ae7-8542-737654635b15 req-39ad5163-3192-481e-9871-78ef25b61f6c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.751 238945 DEBUG oslo_concurrency.lockutils [req-fc00e98a-a75c-4ae7-8542-737654635b15 req-39ad5163-3192-481e-9871-78ef25b61f6c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.751 238945 DEBUG nova.compute.manager [req-fc00e98a-a75c-4ae7-8542-737654635b15 req-39ad5163-3192-481e-9871-78ef25b61f6c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Processing event network-vif-plugged-674bbd90-8ebb-485f-bccf-39535e0b1f3e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:17:10 np0005597378 podman[352452]: 2026-01-27 14:17:10.962648201 +0000 UTC m=+0.052094641 container create e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:17:10 np0005597378 systemd[1]: Started libpod-conmon-e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6.scope.
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.990 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.991 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523430.9902322, 5b3093c8-a99f-45d8-b612-447b6a5412c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.991 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] VM Started (Lifecycle Event)#033[00m
Jan 27 09:17:10 np0005597378 nova_compute[238941]: 2026-01-27 14:17:10.997 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.000 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Instance spawned successfully.#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.000 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:17:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Jan 27 09:17:11 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:17:11 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95177fbfd83e72f9cf40a6761ac0ab496e7395a1fd358fb4e054c5fc63f4474c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:11 np0005597378 podman[352452]: 2026-01-27 14:17:10.936704888 +0000 UTC m=+0.026151338 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:17:11 np0005597378 podman[352452]: 2026-01-27 14:17:11.036706488 +0000 UTC m=+0.126152928 container init e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 09:17:11 np0005597378 podman[352452]: 2026-01-27 14:17:11.043205881 +0000 UTC m=+0.132652301 container start e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.052 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.055 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:17:11 np0005597378 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [NOTICE]   (352473) : New worker (352475) forked
Jan 27 09:17:11 np0005597378 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [NOTICE]   (352473) : Loading success.
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.090 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.092 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.093 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.093 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.093 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.094 238945 DEBUG nova.virt.libvirt.driver [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.106 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.107 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523430.993998, 5b3093c8-a99f-45d8-b612-447b6a5412c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.107 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.147 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.149 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523430.9961832, 5b3093c8-a99f-45d8-b612-447b6a5412c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.150 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.285 238945 INFO nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Took 9.09 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.285 238945 DEBUG nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.307 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.311 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.353 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.366 238945 INFO nova.compute.manager [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Took 10.30 seconds to build instance.#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.452 238945 DEBUG oslo_concurrency.lockutils [None req-65bcc85d-7f71-4345-912c-7618ecf37ab9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.597 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.597 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.597 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:17:11 np0005597378 nova_compute[238941]: 2026-01-27 14:17:11.597 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 851e0dd0-2021-44f6-8c51-af4bc91e02f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:17:12 np0005597378 nova_compute[238941]: 2026-01-27 14:17:12.647 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Jan 27 09:17:13 np0005597378 nova_compute[238941]: 2026-01-27 14:17:13.213 238945 DEBUG nova.compute.manager [req-93cd29ec-3c00-4111-a4c4-a883b5c02d40 req-ceae096c-b07d-4640-8b4a-c13e89202e0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received event network-vif-plugged-674bbd90-8ebb-485f-bccf-39535e0b1f3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:13 np0005597378 nova_compute[238941]: 2026-01-27 14:17:13.213 238945 DEBUG oslo_concurrency.lockutils [req-93cd29ec-3c00-4111-a4c4-a883b5c02d40 req-ceae096c-b07d-4640-8b4a-c13e89202e0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:13 np0005597378 nova_compute[238941]: 2026-01-27 14:17:13.213 238945 DEBUG oslo_concurrency.lockutils [req-93cd29ec-3c00-4111-a4c4-a883b5c02d40 req-ceae096c-b07d-4640-8b4a-c13e89202e0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:13 np0005597378 nova_compute[238941]: 2026-01-27 14:17:13.213 238945 DEBUG oslo_concurrency.lockutils [req-93cd29ec-3c00-4111-a4c4-a883b5c02d40 req-ceae096c-b07d-4640-8b4a-c13e89202e0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:13 np0005597378 nova_compute[238941]: 2026-01-27 14:17:13.214 238945 DEBUG nova.compute.manager [req-93cd29ec-3c00-4111-a4c4-a883b5c02d40 req-ceae096c-b07d-4640-8b4a-c13e89202e0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] No waiting events found dispatching network-vif-plugged-674bbd90-8ebb-485f-bccf-39535e0b1f3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:17:13 np0005597378 nova_compute[238941]: 2026-01-27 14:17:13.214 238945 WARNING nova.compute.manager [req-93cd29ec-3c00-4111-a4c4-a883b5c02d40 req-ceae096c-b07d-4640-8b4a-c13e89202e0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received unexpected event network-vif-plugged-674bbd90-8ebb-485f-bccf-39535e0b1f3e for instance with vm_state active and task_state None.#033[00m
Jan 27 09:17:13 np0005597378 nova_compute[238941]: 2026-01-27 14:17:13.590 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:14 np0005597378 nova_compute[238941]: 2026-01-27 14:17:14.652 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:14 np0005597378 nova_compute[238941]: 2026-01-27 14:17:14.676 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:17:14 np0005597378 nova_compute[238941]: 2026-01-27 14:17:14.677 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:17:14 np0005597378 nova_compute[238941]: 2026-01-27 14:17:14.913 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:14 np0005597378 nova_compute[238941]: 2026-01-27 14:17:14.914 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:14 np0005597378 nova_compute[238941]: 2026-01-27 14:17:14.946 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:17:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 149 op/s
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.031 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.031 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.038 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.038 238945 INFO nova.compute.claims [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.162 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:17:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:17:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666984476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.712 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.720 238945 DEBUG nova.compute.provider_tree [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.738 238945 DEBUG nova.scheduler.client.report [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.763 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.764 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.815 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.816 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.838 238945 INFO nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.864 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.936 238945 DEBUG nova.compute.manager [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received event network-changed-674bbd90-8ebb-485f-bccf-39535e0b1f3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.937 238945 DEBUG nova.compute.manager [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Refreshing instance network info cache due to event network-changed-674bbd90-8ebb-485f-bccf-39535e0b1f3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.938 238945 DEBUG oslo_concurrency.lockutils [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.939 238945 DEBUG oslo_concurrency.lockutils [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.939 238945 DEBUG nova.network.neutron [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Refreshing network info cache for port 674bbd90-8ebb-485f-bccf-39535e0b1f3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.957 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.960 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.962 238945 INFO nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Creating image(s)#033[00m
Jan 27 09:17:15 np0005597378 nova_compute[238941]: 2026-01-27 14:17:15.998 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.032 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.059 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.064 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.104 238945 DEBUG nova.policy [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.140 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.142 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.142 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.143 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.166 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.170 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.461 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.531 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.601 238945 DEBUG nova.objects.instance [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 903e2fd2-4c0a-486c-9be2-3e2844ea09aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.614 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.614 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Ensure instance console log exists: /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.615 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.615 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:16 np0005597378 nova_compute[238941]: 2026-01-27 14:17:16.616 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1001 KiB/s wr, 121 op/s
Jan 27 09:17:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:17:17
Jan 27 09:17:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:17:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:17:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.control', 'vms', 'backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root']
Jan 27 09:17:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:17:17 np0005597378 nova_compute[238941]: 2026-01-27 14:17:17.255 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Successfully created port: f21ed56b-30c8-4f36-ac00-096f72413945 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:17:17 np0005597378 nova_compute[238941]: 2026-01-27 14:17:17.582 238945 DEBUG nova.network.neutron [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updated VIF entry in instance network info cache for port 674bbd90-8ebb-485f-bccf-39535e0b1f3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:17:17 np0005597378 nova_compute[238941]: 2026-01-27 14:17:17.583 238945 DEBUG nova.network.neutron [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updating instance_info_cache with network_info: [{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:17 np0005597378 nova_compute[238941]: 2026-01-27 14:17:17.648 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:17 np0005597378 nova_compute[238941]: 2026-01-27 14:17:17.809 238945 DEBUG oslo_concurrency.lockutils [req-4e96a688-34ec-4329-ac0f-946113a2bf12 req-93a3b69d-0766-4b56-951a-979aa4456bf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:17:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:17:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:17:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:17:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:17:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:17:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:17:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:17:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:17:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:17:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:17:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:17:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:17:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:17:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:17:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:17:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:17:18 np0005597378 nova_compute[238941]: 2026-01-27 14:17:18.327 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Successfully created port: cc6a70ca-7bf0-4028-84a7-a57071c090b8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:17:18 np0005597378 nova_compute[238941]: 2026-01-27 14:17:18.591 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 192 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 624 KiB/s wr, 98 op/s
Jan 27 09:17:19 np0005597378 nova_compute[238941]: 2026-01-27 14:17:19.389 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Successfully updated port: f21ed56b-30c8-4f36-ac00-096f72413945 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:17:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:19 np0005597378 nova_compute[238941]: 2026-01-27 14:17:19.561 238945 DEBUG nova.compute.manager [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-changed-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:19 np0005597378 nova_compute[238941]: 2026-01-27 14:17:19.562 238945 DEBUG nova.compute.manager [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing instance network info cache due to event network-changed-f21ed56b-30c8-4f36-ac00-096f72413945. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:17:19 np0005597378 nova_compute[238941]: 2026-01-27 14:17:19.562 238945 DEBUG oslo_concurrency.lockutils [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:17:19 np0005597378 nova_compute[238941]: 2026-01-27 14:17:19.563 238945 DEBUG oslo_concurrency.lockutils [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:17:19 np0005597378 nova_compute[238941]: 2026-01-27 14:17:19.563 238945 DEBUG nova.network.neutron [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing network info cache for port f21ed56b-30c8-4f36-ac00-096f72413945 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:17:19 np0005597378 nova_compute[238941]: 2026-01-27 14:17:19.804 238945 DEBUG nova.network.neutron [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:17:20 np0005597378 nova_compute[238941]: 2026-01-27 14:17:20.069 238945 DEBUG nova.network.neutron [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:20 np0005597378 nova_compute[238941]: 2026-01-27 14:17:20.102 238945 DEBUG oslo_concurrency.lockutils [req-95c86d33-5a90-424e-930b-3c2af29056ca req-59231390-72f5-4d22-bf08-38fa76a03bb2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:17:20 np0005597378 nova_compute[238941]: 2026-01-27 14:17:20.322 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Successfully updated port: cc6a70ca-7bf0-4028-84a7-a57071c090b8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:17:20 np0005597378 nova_compute[238941]: 2026-01-27 14:17:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:17:20 np0005597378 nova_compute[238941]: 2026-01-27 14:17:20.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:17:20 np0005597378 nova_compute[238941]: 2026-01-27 14:17:20.409 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:17:20 np0005597378 nova_compute[238941]: 2026-01-27 14:17:20.409 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:17:20 np0005597378 nova_compute[238941]: 2026-01-27 14:17:20.410 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:17:20 np0005597378 nova_compute[238941]: 2026-01-27 14:17:20.601 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:17:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 213 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 09:17:21 np0005597378 nova_compute[238941]: 2026-01-27 14:17:21.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:17:21 np0005597378 nova_compute[238941]: 2026-01-27 14:17:21.650 238945 DEBUG nova.compute.manager [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-changed-cc6a70ca-7bf0-4028-84a7-a57071c090b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:21 np0005597378 nova_compute[238941]: 2026-01-27 14:17:21.650 238945 DEBUG nova.compute.manager [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing instance network info cache due to event network-changed-cc6a70ca-7bf0-4028-84a7-a57071c090b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:17:21 np0005597378 nova_compute[238941]: 2026-01-27 14:17:21.650 238945 DEBUG oslo_concurrency.lockutils [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:17:22 np0005597378 nova_compute[238941]: 2026-01-27 14:17:22.649 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 213 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 27 09:17:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:23Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:47:86 10.100.0.6
Jan 27 09:17:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:23Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:47:86 10.100.0.6
Jan 27 09:17:23 np0005597378 nova_compute[238941]: 2026-01-27 14:17:23.594 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 229 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 131 op/s
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.514 238945 DEBUG nova.network.neutron [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [{"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": 
"2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.540 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.540 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Instance network_info: |[{"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.540 238945 DEBUG oslo_concurrency.lockutils [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.541 238945 DEBUG nova.network.neutron [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing network info cache for port cc6a70ca-7bf0-4028-84a7-a57071c090b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.544 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Start _get_guest_xml network_info=[{"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.547 238945 WARNING nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.555 238945 DEBUG nova.virt.libvirt.host [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.556 238945 DEBUG nova.virt.libvirt.host [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.564 238945 DEBUG nova.virt.libvirt.host [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.564 238945 DEBUG nova.virt.libvirt.host [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.565 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.565 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.565 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.565 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.566 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.566 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.566 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.566 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.567 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.567 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.567 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.568 238945 DEBUG nova.virt.hardware [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:17:25 np0005597378 nova_compute[238941]: 2026-01-27 14:17:25.570 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:17:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3542687252' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.136 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.157 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.160 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:17:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2464307343' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.755 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.756 238945 DEBUG nova.virt.libvirt.vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-283462546',display_name='tempest-TestGettingAddress-server-283462546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-283462546',id=124,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-fbug5qgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:15Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=903e2fd2-4c0a-486c-9be2-3e2844ea09aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.757 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.758 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.758 238945 DEBUG nova.virt.libvirt.vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-283462546',display_name='tempest-TestGettingAddress-server-283462546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-283462546',id=124,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-fbug5qgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:15Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=903e2fd2-4c0a-486c-9be2-3e2844ea09aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.759 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.759 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.760 238945 DEBUG nova.objects.instance [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 903e2fd2-4c0a-486c-9be2-3e2844ea09aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.777 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  <uuid>903e2fd2-4c0a-486c-9be2-3e2844ea09aa</uuid>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  <name>instance-0000007c</name>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-283462546</nova:name>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:17:25</nova:creationTime>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        <nova:port uuid="f21ed56b-30c8-4f36-ac00-096f72413945">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        <nova:port uuid="cc6a70ca-7bf0-4028-84a7-a57071c090b8">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe01:1c99" ipVersion="6"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe01:1c99" ipVersion="6"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <entry name="serial">903e2fd2-4c0a-486c-9be2-3e2844ea09aa</entry>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <entry name="uuid">903e2fd2-4c0a-486c-9be2-3e2844ea09aa</entry>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk.config">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:c4:13:bc"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <target dev="tapf21ed56b-30"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:01:1c:99"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <target dev="tapcc6a70ca-7b"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/console.log" append="off"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:17:26 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:17:26 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:17:26 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:17:26 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.778 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Preparing to wait for external event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.779 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.779 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.780 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.780 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Preparing to wait for external event network-vif-plugged-cc6a70ca-7bf0-4028-84a7-a57071c090b8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.780 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.780 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.781 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.782 238945 DEBUG nova.virt.libvirt.vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-283462546',display_name='tempest-TestGettingAddress-server-283462546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-283462546',id=124,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-fbug5qgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:15Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=903e2fd2-4c0a-486c-9be2-3e2844ea09aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.782 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.783 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.784 238945 DEBUG os_vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.785 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.785 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.786 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.791 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf21ed56b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.791 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf21ed56b-30, col_values=(('external_ids', {'iface-id': 'f21ed56b-30c8-4f36-ac00-096f72413945', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:13:bc', 'vm-uuid': '903e2fd2-4c0a-486c-9be2-3e2844ea09aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.793 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:26 np0005597378 NetworkManager[48904]: <info>  [1769523446.7943] manager: (tapf21ed56b-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/540)
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.795 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.799 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.800 238945 INFO os_vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30')#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.801 238945 DEBUG nova.virt.libvirt.vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-283462546',display_name='tempest-TestGettingAddress-server-283462546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-283462546',id=124,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-fbug5qgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:15Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=903e2fd2-4c0a-486c-9be2-3e2844ea09aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.801 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.802 238945 DEBUG nova.network.os_vif_util [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.802 238945 DEBUG os_vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.803 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.803 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.804 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.806 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.806 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc6a70ca-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.806 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc6a70ca-7b, col_values=(('external_ids', {'iface-id': 'cc6a70ca-7bf0-4028-84a7-a57071c090b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:1c:99', 'vm-uuid': '903e2fd2-4c0a-486c-9be2-3e2844ea09aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.808 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:26 np0005597378 NetworkManager[48904]: <info>  [1769523446.8089] manager: (tapcc6a70ca-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.814 238945 INFO os_vif [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b')#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.869 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.870 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.870 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:c4:13:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.870 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:01:1c:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.870 238945 INFO nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Using config drive#033[00m
Jan 27 09:17:26 np0005597378 nova_compute[238941]: 2026-01-27 14:17:26.892 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 236 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 618 KiB/s rd, 3.8 MiB/s wr, 91 op/s
Jan 27 09:17:27 np0005597378 nova_compute[238941]: 2026-01-27 14:17:27.652 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:27 np0005597378 nova_compute[238941]: 2026-01-27 14:17:27.774 238945 INFO nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Creating config drive at /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/disk.config#033[00m
Jan 27 09:17:27 np0005597378 nova_compute[238941]: 2026-01-27 14:17:27.780 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ivntulq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018652107532876555 of space, bias 1.0, pg target 0.5595632259862967 quantized to 32 (current 32)
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669394370330733 of space, bias 1.0, pg target 0.2008183110992199 quantized to 32 (current 32)
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.057537069169775e-06 of space, bias 4.0, pg target 0.0012690444830037299 quantized to 16 (current 16)
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:17:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:17:27 np0005597378 nova_compute[238941]: 2026-01-27 14:17:27.918 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ivntulq" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:27 np0005597378 nova_compute[238941]: 2026-01-27 14:17:27.945 238945 DEBUG nova.storage.rbd_utils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:27 np0005597378 nova_compute[238941]: 2026-01-27 14:17:27.949 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/disk.config 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.087 238945 DEBUG oslo_concurrency.processutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/disk.config 903e2fd2-4c0a-486c-9be2-3e2844ea09aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.088 238945 INFO nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Deleting local config drive /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa/disk.config because it was imported into RBD.#033[00m
Jan 27 09:17:28 np0005597378 NetworkManager[48904]: <info>  [1769523448.1331] manager: (tapf21ed56b-30): new Tun device (/org/freedesktop/NetworkManager/Devices/542)
Jan 27 09:17:28 np0005597378 kernel: tapf21ed56b-30: entered promiscuous mode
Jan 27 09:17:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:28Z|01301|binding|INFO|Claiming lport f21ed56b-30c8-4f36-ac00-096f72413945 for this chassis.
Jan 27 09:17:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:28Z|01302|binding|INFO|f21ed56b-30c8-4f36-ac00-096f72413945: Claiming fa:16:3e:c4:13:bc 10.100.0.11
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.149 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:13:bc 10.100.0.11'], port_security=['fa:16:3e:c4:13:bc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '903e2fd2-4c0a-486c-9be2-3e2844ea09aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bbba920-bccc-4cba-962d-9f41f23c2c63, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=f21ed56b-30c8-4f36-ac00-096f72413945) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.151 154802 INFO neutron.agent.ovn.metadata.agent [-] Port f21ed56b-30c8-4f36-ac00-096f72413945 in datapath a1ef49dc-ad74-4940-a002-d8d212c0abde bound to our chassis#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.152 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1ef49dc-ad74-4940-a002-d8d212c0abde#033[00m
Jan 27 09:17:28 np0005597378 NetworkManager[48904]: <info>  [1769523448.1546] manager: (tapcc6a70ca-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/543)
Jan 27 09:17:28 np0005597378 kernel: tapcc6a70ca-7b: entered promiscuous mode
Jan 27 09:17:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:28Z|01303|binding|INFO|Setting lport f21ed56b-30c8-4f36-ac00-096f72413945 ovn-installed in OVS
Jan 27 09:17:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:28Z|01304|binding|INFO|Setting lport f21ed56b-30c8-4f36-ac00-096f72413945 up in Southbound
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:28Z|01305|if_status|INFO|Not updating pb chassis for cc6a70ca-7bf0-4028-84a7-a57071c090b8 now as sb is readonly
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.169 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72177c69-8556-45b9-a5fb-e0de23b1894e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.177 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:28 np0005597378 systemd-udevd[352820]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:17:28 np0005597378 systemd-udevd[352822]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:17:28 np0005597378 NetworkManager[48904]: <info>  [1769523448.1934] device (tapf21ed56b-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:17:28 np0005597378 NetworkManager[48904]: <info>  [1769523448.1939] device (tapcc6a70ca-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:17:28 np0005597378 NetworkManager[48904]: <info>  [1769523448.1943] device (tapf21ed56b-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:17:28 np0005597378 NetworkManager[48904]: <info>  [1769523448.1946] device (tapcc6a70ca-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:17:28 np0005597378 systemd-machined[207425]: New machine qemu-156-instance-0000007c.
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.209 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a83a5e56-deaa-411e-ab47-456b14396b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.212 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f61f1cbd-f806-49b7-937b-415b682d7b3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:28 np0005597378 systemd[1]: Started Virtual Machine qemu-156-instance-0000007c.
Jan 27 09:17:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:28Z|01306|binding|INFO|Claiming lport cc6a70ca-7bf0-4028-84a7-a57071c090b8 for this chassis.
Jan 27 09:17:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:28Z|01307|binding|INFO|cc6a70ca-7bf0-4028-84a7-a57071c090b8: Claiming fa:16:3e:01:1c:99 2001:db8:0:1:f816:3eff:fe01:1c99 2001:db8::f816:3eff:fe01:1c99
Jan 27 09:17:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:28Z|01308|binding|INFO|Setting lport cc6a70ca-7bf0-4028-84a7-a57071c090b8 ovn-installed in OVS
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.220 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:1c:99 2001:db8:0:1:f816:3eff:fe01:1c99 2001:db8::f816:3eff:fe01:1c99'], port_security=['fa:16:3e:01:1c:99 2001:db8:0:1:f816:3eff:fe01:1c99 2001:db8::f816:3eff:fe01:1c99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe01:1c99/64 2001:db8::f816:3eff:fe01:1c99/64', 'neutron:device_id': '903e2fd2-4c0a-486c-9be2-3e2844ea09aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab7056f0-caf7-445d-b0fc-9f3ae613d37b, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc6a70ca-7bf0-4028-84a7-a57071c090b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:17:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:28Z|01309|binding|INFO|Setting lport cc6a70ca-7bf0-4028-84a7-a57071c090b8 up in Southbound
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.241 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f908feb2-f2dd-4580-8b1b-2e8f3e40fa98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:28 np0005597378 podman[352808]: 2026-01-27 14:17:28.25069067 +0000 UTC m=+0.080681593 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.262 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d3972488-8f05-4dc3-aa01-7e7da65b80a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1ef49dc-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:57:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604783, 'reachable_time': 29713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352843, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.276 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f0cd69-ee3e-4bf9-8f0f-9b64509297b4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa1ef49dc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604794, 'tstamp': 604794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352848, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa1ef49dc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604797, 'tstamp': 604797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352848, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.278 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1ef49dc-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.281 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1ef49dc-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.281 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.281 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1ef49dc-a0, col_values=(('external_ids', {'iface-id': '73d5d48d-2488-422a-a3b3-324997b57d60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.282 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.283 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc6a70ca-7bf0-4028-84a7-a57071c090b8 in datapath 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 unbound from our chassis#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.285 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.302 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d8adf88c-ff23-4dd4-b6fd-1af52ba228f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.330 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fa46a790-8952-4139-a959-ba727be52d5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.333 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ec35792f-55d7-46b2-b197-f7080dcc19b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.364 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ac6aad-ee4e-474e-939e-62d2661e78be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.380 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf77730-367e-49bb-b67c-87e2e358f3fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ff1fc04-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:e4:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1916, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1916, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604859, 'reachable_time': 21612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352856, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.397 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ecde5eef-4d1e-4189-b408-3f3dc5b1d020]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4ff1fc04-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604870, 'tstamp': 604870}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352857, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.398 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ff1fc04-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.399 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.400 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.403 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ff1fc04-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.403 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.403 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ff1fc04-00, col_values=(('external_ids', {'iface-id': '7641ffb0-ddda-4391-aadd-cbcdb9365edb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:28.404 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.483 238945 DEBUG nova.compute.manager [req-652c1d22-e464-4382-b5a0-eb17bd2bf1bc req-642e2847-73ca-4231-8134-28f7cdf8a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-plugged-cc6a70ca-7bf0-4028-84a7-a57071c090b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.483 238945 DEBUG oslo_concurrency.lockutils [req-652c1d22-e464-4382-b5a0-eb17bd2bf1bc req-642e2847-73ca-4231-8134-28f7cdf8a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.483 238945 DEBUG oslo_concurrency.lockutils [req-652c1d22-e464-4382-b5a0-eb17bd2bf1bc req-642e2847-73ca-4231-8134-28f7cdf8a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.483 238945 DEBUG oslo_concurrency.lockutils [req-652c1d22-e464-4382-b5a0-eb17bd2bf1bc req-642e2847-73ca-4231-8134-28f7cdf8a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.484 238945 DEBUG nova.compute.manager [req-652c1d22-e464-4382-b5a0-eb17bd2bf1bc req-642e2847-73ca-4231-8134-28f7cdf8a80f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Processing event network-vif-plugged-cc6a70ca-7bf0-4028-84a7-a57071c090b8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.492 238945 DEBUG nova.compute.manager [req-bb85845c-88c1-4538-b965-335f3871322c req-32c771d1-2f93-4376-9af8-f46eaec72b83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.492 238945 DEBUG oslo_concurrency.lockutils [req-bb85845c-88c1-4538-b965-335f3871322c req-32c771d1-2f93-4376-9af8-f46eaec72b83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.493 238945 DEBUG oslo_concurrency.lockutils [req-bb85845c-88c1-4538-b965-335f3871322c req-32c771d1-2f93-4376-9af8-f46eaec72b83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.493 238945 DEBUG oslo_concurrency.lockutils [req-bb85845c-88c1-4538-b965-335f3871322c req-32c771d1-2f93-4376-9af8-f46eaec72b83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.493 238945 DEBUG nova.compute.manager [req-bb85845c-88c1-4538-b965-335f3871322c req-32c771d1-2f93-4376-9af8-f46eaec72b83 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Processing event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.622 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523448.6213558, 903e2fd2-4c0a-486c-9be2-3e2844ea09aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.623 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] VM Started (Lifecycle Event)#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.625 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.634 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.638 238945 INFO nova.virt.libvirt.driver [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Instance spawned successfully.#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.638 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.647 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.650 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.659 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.660 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.660 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.661 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.661 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.661 238945 DEBUG nova.virt.libvirt.driver [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.674 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.675 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523448.6220562, 903e2fd2-4c0a-486c-9be2-3e2844ea09aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.675 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.720 238945 DEBUG nova.network.neutron [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updated VIF entry in instance network info cache for port cc6a70ca-7bf0-4028-84a7-a57071c090b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.720 238945 DEBUG nova.network.neutron [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [{"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.742 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.747 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523448.63365, 903e2fd2-4c0a-486c-9be2-3e2844ea09aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.747 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.752 238945 DEBUG oslo_concurrency.lockutils [req-40580d26-84e2-46bd-acbf-ea74a09bdfd5 req-6d992b09-55d7-4b97-8565-22780ee7dc65 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.771 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.774 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.779 238945 INFO nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Took 12.82 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.780 238945 DEBUG nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.792 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.831 238945 INFO nova.compute.manager [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Took 13.83 seconds to build instance.#033[00m
Jan 27 09:17:28 np0005597378 nova_compute[238941]: 2026-01-27 14:17:28.852 238945 DEBUG oslo_concurrency.lockutils [None req-45200acc-ce42-471f-aef9-1cd85aa4998c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 27 09:17:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:30 np0005597378 podman[352901]: 2026-01-27 14:17:30.760930022 +0000 UTC m=+0.102148337 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:17:30 np0005597378 nova_compute[238941]: 2026-01-27 14:17:30.788 238945 DEBUG nova.compute.manager [req-634b5037-0dbd-48d1-8d0c-bf615302a048 req-181ca567-c43e-4e51-b16d-f6180a0d85a6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-plugged-cc6a70ca-7bf0-4028-84a7-a57071c090b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:30 np0005597378 nova_compute[238941]: 2026-01-27 14:17:30.789 238945 DEBUG oslo_concurrency.lockutils [req-634b5037-0dbd-48d1-8d0c-bf615302a048 req-181ca567-c43e-4e51-b16d-f6180a0d85a6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:30 np0005597378 nova_compute[238941]: 2026-01-27 14:17:30.789 238945 DEBUG oslo_concurrency.lockutils [req-634b5037-0dbd-48d1-8d0c-bf615302a048 req-181ca567-c43e-4e51-b16d-f6180a0d85a6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:30 np0005597378 nova_compute[238941]: 2026-01-27 14:17:30.789 238945 DEBUG oslo_concurrency.lockutils [req-634b5037-0dbd-48d1-8d0c-bf615302a048 req-181ca567-c43e-4e51-b16d-f6180a0d85a6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:30 np0005597378 nova_compute[238941]: 2026-01-27 14:17:30.789 238945 DEBUG nova.compute.manager [req-634b5037-0dbd-48d1-8d0c-bf615302a048 req-181ca567-c43e-4e51-b16d-f6180a0d85a6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] No waiting events found dispatching network-vif-plugged-cc6a70ca-7bf0-4028-84a7-a57071c090b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:17:30 np0005597378 nova_compute[238941]: 2026-01-27 14:17:30.790 238945 WARNING nova.compute.manager [req-634b5037-0dbd-48d1-8d0c-bf615302a048 req-181ca567-c43e-4e51-b16d-f6180a0d85a6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received unexpected event network-vif-plugged-cc6a70ca-7bf0-4028-84a7-a57071c090b8 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:17:30 np0005597378 nova_compute[238941]: 2026-01-27 14:17:30.875 238945 DEBUG nova.compute.manager [req-48ed4c9f-d3c2-47b8-ba66-5e3789ca24f7 req-2a983244-ff08-4e1f-8506-f54d16d8b4b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:30 np0005597378 nova_compute[238941]: 2026-01-27 14:17:30.876 238945 DEBUG oslo_concurrency.lockutils [req-48ed4c9f-d3c2-47b8-ba66-5e3789ca24f7 req-2a983244-ff08-4e1f-8506-f54d16d8b4b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:30 np0005597378 nova_compute[238941]: 2026-01-27 14:17:30.876 238945 DEBUG oslo_concurrency.lockutils [req-48ed4c9f-d3c2-47b8-ba66-5e3789ca24f7 req-2a983244-ff08-4e1f-8506-f54d16d8b4b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:30 np0005597378 nova_compute[238941]: 2026-01-27 14:17:30.877 238945 DEBUG oslo_concurrency.lockutils [req-48ed4c9f-d3c2-47b8-ba66-5e3789ca24f7 req-2a983244-ff08-4e1f-8506-f54d16d8b4b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:30 np0005597378 nova_compute[238941]: 2026-01-27 14:17:30.877 238945 DEBUG nova.compute.manager [req-48ed4c9f-d3c2-47b8-ba66-5e3789ca24f7 req-2a983244-ff08-4e1f-8506-f54d16d8b4b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] No waiting events found dispatching network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:17:30 np0005597378 nova_compute[238941]: 2026-01-27 14:17:30.878 238945 WARNING nova.compute.manager [req-48ed4c9f-d3c2-47b8-ba66-5e3789ca24f7 req-2a983244-ff08-4e1f-8506-f54d16d8b4b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received unexpected event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:17:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 134 op/s
Jan 27 09:17:31 np0005597378 nova_compute[238941]: 2026-01-27 14:17:31.053 238945 INFO nova.compute.manager [None req-cce14e31-1b32-4378-8d21-a592911f3790 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Get console output#033[00m
Jan 27 09:17:31 np0005597378 nova_compute[238941]: 2026-01-27 14:17:31.057 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:17:31 np0005597378 nova_compute[238941]: 2026-01-27 14:17:31.809 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:32 np0005597378 nova_compute[238941]: 2026-01-27 14:17:32.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 27 09:17:33 np0005597378 nova_compute[238941]: 2026-01-27 14:17:33.496 238945 DEBUG nova.compute.manager [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received event network-changed-674bbd90-8ebb-485f-bccf-39535e0b1f3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:33 np0005597378 nova_compute[238941]: 2026-01-27 14:17:33.497 238945 DEBUG nova.compute.manager [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Refreshing instance network info cache due to event network-changed-674bbd90-8ebb-485f-bccf-39535e0b1f3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:17:33 np0005597378 nova_compute[238941]: 2026-01-27 14:17:33.497 238945 DEBUG oslo_concurrency.lockutils [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:17:33 np0005597378 nova_compute[238941]: 2026-01-27 14:17:33.497 238945 DEBUG oslo_concurrency.lockutils [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:17:33 np0005597378 nova_compute[238941]: 2026-01-27 14:17:33.497 238945 DEBUG nova.network.neutron [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Refreshing network info cache for port 674bbd90-8ebb-485f-bccf-39535e0b1f3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:17:34 np0005597378 nova_compute[238941]: 2026-01-27 14:17:34.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:17:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 27 09:17:35 np0005597378 nova_compute[238941]: 2026-01-27 14:17:35.197 238945 DEBUG nova.network.neutron [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updated VIF entry in instance network info cache for port 674bbd90-8ebb-485f-bccf-39535e0b1f3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:17:35 np0005597378 nova_compute[238941]: 2026-01-27 14:17:35.198 238945 DEBUG nova.network.neutron [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updating instance_info_cache with network_info: [{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:35 np0005597378 nova_compute[238941]: 2026-01-27 14:17:35.228 238945 DEBUG oslo_concurrency.lockutils [req-98b8692e-a44a-4d1d-961b-2a9259cfbf86 req-eee5e02c-4821-4b0f-8768-64d78e17448f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:17:35 np0005597378 nova_compute[238941]: 2026-01-27 14:17:35.599 238945 DEBUG nova.compute.manager [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-changed-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:35 np0005597378 nova_compute[238941]: 2026-01-27 14:17:35.599 238945 DEBUG nova.compute.manager [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing instance network info cache due to event network-changed-f21ed56b-30c8-4f36-ac00-096f72413945. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:17:35 np0005597378 nova_compute[238941]: 2026-01-27 14:17:35.600 238945 DEBUG oslo_concurrency.lockutils [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:17:35 np0005597378 nova_compute[238941]: 2026-01-27 14:17:35.600 238945 DEBUG oslo_concurrency.lockutils [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:17:35 np0005597378 nova_compute[238941]: 2026-01-27 14:17:35.600 238945 DEBUG nova.network.neutron [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing network info cache for port f21ed56b-30c8-4f36-ac00-096f72413945 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:17:36 np0005597378 nova_compute[238941]: 2026-01-27 14:17:36.553 238945 DEBUG nova.network.neutron [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updated VIF entry in instance network info cache for port f21ed56b-30c8-4f36-ac00-096f72413945. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:17:36 np0005597378 nova_compute[238941]: 2026-01-27 14:17:36.554 238945 DEBUG nova.network.neutron [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [{"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:36 np0005597378 nova_compute[238941]: 2026-01-27 14:17:36.578 238945 DEBUG oslo_concurrency.lockutils [req-08ac480f-b1bd-4898-a312-317ac49140cd req-bfd8ccae-daf5-416a-b403-aada406035fe 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:17:36 np0005597378 nova_compute[238941]: 2026-01-27 14:17:36.812 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 638 KiB/s wr, 103 op/s
Jan 27 09:17:37 np0005597378 nova_compute[238941]: 2026-01-27 14:17:37.658 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 246 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 88 KiB/s wr, 89 op/s
Jan 27 09:17:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:39.463 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:17:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:39.464 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:17:39 np0005597378 nova_compute[238941]: 2026-01-27 14:17:39.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:17:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:17:40 np0005597378 podman[353073]: 2026-01-27 14:17:40.510896222 +0000 UTC m=+0.059286664 container create 9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:17:40 np0005597378 systemd[1]: Started libpod-conmon-9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb.scope.
Jan 27 09:17:40 np0005597378 podman[353073]: 2026-01-27 14:17:40.478376493 +0000 UTC m=+0.026766935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:17:40 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:17:40 np0005597378 podman[353073]: 2026-01-27 14:17:40.600531414 +0000 UTC m=+0.148921956 container init 9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 09:17:40 np0005597378 podman[353073]: 2026-01-27 14:17:40.608877817 +0000 UTC m=+0.157268259 container start 9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:17:40 np0005597378 podman[353073]: 2026-01-27 14:17:40.612345269 +0000 UTC m=+0.160735811 container attach 9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_roentgen, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 27 09:17:40 np0005597378 great_roentgen[353089]: 167 167
Jan 27 09:17:40 np0005597378 systemd[1]: libpod-9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb.scope: Deactivated successfully.
Jan 27 09:17:40 np0005597378 conmon[353089]: conmon 9c6cbea25c5afda3d3fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb.scope/container/memory.events
Jan 27 09:17:40 np0005597378 podman[353094]: 2026-01-27 14:17:40.660286129 +0000 UTC m=+0.030741082 container died 9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_roentgen, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:17:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-69568851dd5689a25ef7c16769bc512fea1c3fbb531439e3a5a04c60ed9dc0b6-merged.mount: Deactivated successfully.
Jan 27 09:17:40 np0005597378 podman[353094]: 2026-01-27 14:17:40.702610899 +0000 UTC m=+0.073065832 container remove 9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_roentgen, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 09:17:40 np0005597378 systemd[1]: libpod-conmon-9c6cbea25c5afda3d3fe88af700c71d94090ad9ae79eb409e46d953698fd3ebb.scope: Deactivated successfully.
Jan 27 09:17:40 np0005597378 podman[353116]: 2026-01-27 14:17:40.935582347 +0000 UTC m=+0.077197181 container create 05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:17:40 np0005597378 podman[353116]: 2026-01-27 14:17:40.886602 +0000 UTC m=+0.028216834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:17:41 np0005597378 systemd[1]: Started libpod-conmon-05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d.scope.
Jan 27 09:17:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 254 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 483 KiB/s wr, 79 op/s
Jan 27 09:17:41 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:17:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db4d74b1a4435a2a97a5b99c9996be6ee1abfc33fb09b6aded9e2dc15d491715/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db4d74b1a4435a2a97a5b99c9996be6ee1abfc33fb09b6aded9e2dc15d491715/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db4d74b1a4435a2a97a5b99c9996be6ee1abfc33fb09b6aded9e2dc15d491715/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db4d74b1a4435a2a97a5b99c9996be6ee1abfc33fb09b6aded9e2dc15d491715/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db4d74b1a4435a2a97a5b99c9996be6ee1abfc33fb09b6aded9e2dc15d491715/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:41 np0005597378 podman[353116]: 2026-01-27 14:17:41.059294609 +0000 UTC m=+0.200909513 container init 05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 09:17:41 np0005597378 nova_compute[238941]: 2026-01-27 14:17:41.059 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:41 np0005597378 nova_compute[238941]: 2026-01-27 14:17:41.061 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:41 np0005597378 podman[353116]: 2026-01-27 14:17:41.068725531 +0000 UTC m=+0.210340345 container start 05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:17:41 np0005597378 podman[353116]: 2026-01-27 14:17:41.07242828 +0000 UTC m=+0.214043134 container attach 05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 09:17:41 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:17:41 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:17:41 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:17:41 np0005597378 nova_compute[238941]: 2026-01-27 14:17:41.302 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:17:41 np0005597378 nova_compute[238941]: 2026-01-27 14:17:41.514 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:41 np0005597378 nova_compute[238941]: 2026-01-27 14:17:41.515 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:41 np0005597378 nova_compute[238941]: 2026-01-27 14:17:41.524 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:17:41 np0005597378 nova_compute[238941]: 2026-01-27 14:17:41.525 238945 INFO nova.compute.claims [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:17:41 np0005597378 xenodochial_raman[353133]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:17:41 np0005597378 xenodochial_raman[353133]: --> All data devices are unavailable
Jan 27 09:17:41 np0005597378 systemd[1]: libpod-05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d.scope: Deactivated successfully.
Jan 27 09:17:41 np0005597378 podman[353153]: 2026-01-27 14:17:41.655127373 +0000 UTC m=+0.025051660 container died 05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:17:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-db4d74b1a4435a2a97a5b99c9996be6ee1abfc33fb09b6aded9e2dc15d491715-merged.mount: Deactivated successfully.
Jan 27 09:17:41 np0005597378 podman[353153]: 2026-01-27 14:17:41.69587053 +0000 UTC m=+0.065794817 container remove 05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:17:41 np0005597378 systemd[1]: libpod-conmon-05895502a47365dbae3ecaa206ae756c4192c7f39263a0f69f74b35594c38f4d.scope: Deactivated successfully.
Jan 27 09:17:41 np0005597378 nova_compute[238941]: 2026-01-27 14:17:41.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:41 np0005597378 nova_compute[238941]: 2026-01-27 14:17:41.842 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:42 np0005597378 podman[353250]: 2026-01-27 14:17:42.275113931 +0000 UTC m=+0.097162155 container create af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Jan 27 09:17:42 np0005597378 podman[353250]: 2026-01-27 14:17:42.211362859 +0000 UTC m=+0.033411103 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:17:42 np0005597378 systemd[1]: Started libpod-conmon-af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90.scope.
Jan 27 09:17:42 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:17:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:17:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19376752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.432 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.438 238945 DEBUG nova.compute.provider_tree [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:17:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:42.466 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.488 238945 DEBUG nova.scheduler.client.report [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:17:42 np0005597378 podman[353250]: 2026-01-27 14:17:42.489223126 +0000 UTC m=+0.311271370 container init af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:17:42 np0005597378 podman[353250]: 2026-01-27 14:17:42.496840129 +0000 UTC m=+0.318888393 container start af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:17:42 np0005597378 sweet_goldwasser[353266]: 167 167
Jan 27 09:17:42 np0005597378 systemd[1]: libpod-af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90.scope: Deactivated successfully.
Jan 27 09:17:42 np0005597378 conmon[353266]: conmon af624487a757303481d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90.scope/container/memory.events
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.511 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.511 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.570 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.570 238945 DEBUG nova.network.neutron [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.602 238945 INFO nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.617 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.663 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:42 np0005597378 podman[353250]: 2026-01-27 14:17:42.668011128 +0000 UTC m=+0.490059372 container attach af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:17:42 np0005597378 podman[353250]: 2026-01-27 14:17:42.669266901 +0000 UTC m=+0.491315125 container died af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:17:42 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:42Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:13:bc 10.100.0.11
Jan 27 09:17:42 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:42Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:13:bc 10.100.0.11
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.704 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.705 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.706 238945 INFO nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Creating image(s)#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.725 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.744 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.767 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.770 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.837 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.838 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.839 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.839 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.861 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.865 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:42 np0005597378 nova_compute[238941]: 2026-01-27 14:17:42.927 238945 DEBUG nova.policy [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:17:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 254 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 482 KiB/s wr, 24 op/s
Jan 27 09:17:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a259dddc06f6bef12d92d6f9accfc64171e14d2df308026a6ad40ce41bc5b35c-merged.mount: Deactivated successfully.
Jan 27 09:17:43 np0005597378 podman[353250]: 2026-01-27 14:17:43.719714279 +0000 UTC m=+1.541762503 container remove af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:17:43 np0005597378 systemd[1]: libpod-conmon-af624487a757303481d5d9c4d901976ea3cd1de8fb00728c6cdc12d299852c90.scope: Deactivated successfully.
Jan 27 09:17:43 np0005597378 nova_compute[238941]: 2026-01-27 14:17:43.901 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:43 np0005597378 podman[353387]: 2026-01-27 14:17:43.928149713 +0000 UTC m=+0.044436628 container create d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:17:43 np0005597378 systemd[1]: Started libpod-conmon-d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404.scope.
Jan 27 09:17:43 np0005597378 nova_compute[238941]: 2026-01-27 14:17:43.969 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:17:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:17:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377a68ccebd19843c50dc45c807d377d519a037fc2ab41e7dfd911cbc9a18eec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377a68ccebd19843c50dc45c807d377d519a037fc2ab41e7dfd911cbc9a18eec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377a68ccebd19843c50dc45c807d377d519a037fc2ab41e7dfd911cbc9a18eec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377a68ccebd19843c50dc45c807d377d519a037fc2ab41e7dfd911cbc9a18eec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:44 np0005597378 podman[353387]: 2026-01-27 14:17:43.91005833 +0000 UTC m=+0.026345255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:17:44 np0005597378 podman[353387]: 2026-01-27 14:17:44.110729367 +0000 UTC m=+0.227016302 container init d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:17:44 np0005597378 podman[353387]: 2026-01-27 14:17:44.117652671 +0000 UTC m=+0.233939586 container start d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:17:44 np0005597378 podman[353387]: 2026-01-27 14:17:44.208731872 +0000 UTC m=+0.325018897 container attach d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:17:44 np0005597378 nova_compute[238941]: 2026-01-27 14:17:44.285 238945 DEBUG nova.objects.instance [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid dcd76996-d627-4d51-8860-9ce1dcefbb7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:17:44 np0005597378 nova_compute[238941]: 2026-01-27 14:17:44.303 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:17:44 np0005597378 nova_compute[238941]: 2026-01-27 14:17:44.303 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Ensure instance console log exists: /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:17:44 np0005597378 nova_compute[238941]: 2026-01-27 14:17:44.303 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:44 np0005597378 nova_compute[238941]: 2026-01-27 14:17:44.304 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:44 np0005597378 nova_compute[238941]: 2026-01-27 14:17:44.304 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]: {
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:    "0": [
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:        {
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "devices": [
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "/dev/loop3"
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            ],
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_name": "ceph_lv0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_size": "21470642176",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "name": "ceph_lv0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "tags": {
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.cluster_name": "ceph",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.crush_device_class": "",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.encrypted": "0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.objectstore": "bluestore",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.osd_id": "0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.type": "block",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.vdo": "0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.with_tpm": "0"
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            },
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "type": "block",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "vg_name": "ceph_vg0"
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:        }
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:    ],
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:    "1": [
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:        {
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "devices": [
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "/dev/loop4"
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            ],
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_name": "ceph_lv1",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_size": "21470642176",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "name": "ceph_lv1",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "tags": {
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.cluster_name": "ceph",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.crush_device_class": "",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.encrypted": "0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.objectstore": "bluestore",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.osd_id": "1",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.type": "block",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.vdo": "0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.with_tpm": "0"
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            },
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "type": "block",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "vg_name": "ceph_vg1"
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:        }
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:    ],
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:    "2": [
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:        {
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "devices": [
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "/dev/loop5"
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            ],
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_name": "ceph_lv2",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_size": "21470642176",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "name": "ceph_lv2",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "tags": {
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.cluster_name": "ceph",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.crush_device_class": "",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.encrypted": "0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.objectstore": "bluestore",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.osd_id": "2",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.type": "block",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.vdo": "0",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:                "ceph.with_tpm": "0"
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            },
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "type": "block",
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:            "vg_name": "ceph_vg2"
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:        }
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]:    ]
Jan 27 09:17:44 np0005597378 elastic_einstein[353439]: }
Jan 27 09:17:44 np0005597378 systemd[1]: libpod-d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404.scope: Deactivated successfully.
Jan 27 09:17:44 np0005597378 podman[353387]: 2026-01-27 14:17:44.452758455 +0000 UTC m=+0.569045370 container died d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:17:44 np0005597378 systemd[1]: var-lib-containers-storage-overlay-377a68ccebd19843c50dc45c807d377d519a037fc2ab41e7dfd911cbc9a18eec-merged.mount: Deactivated successfully.
Jan 27 09:17:44 np0005597378 podman[353387]: 2026-01-27 14:17:44.496478032 +0000 UTC m=+0.612764947 container remove d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:17:44 np0005597378 systemd[1]: libpod-conmon-d2fa712b698e6b8b6ec8a08b5b4567f0fe02c581b5e361db298250cb27602404.scope: Deactivated successfully.
Jan 27 09:17:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:44 np0005597378 nova_compute[238941]: 2026-01-27 14:17:44.927 238945 DEBUG nova.network.neutron [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Successfully created port: c7e0af25-155a-4e70-a998-b6873c027009 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:17:44 np0005597378 podman[353558]: 2026-01-27 14:17:44.950456879 +0000 UTC m=+0.045459714 container create 1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:17:44 np0005597378 systemd[1]: Started libpod-conmon-1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4.scope.
Jan 27 09:17:45 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:17:45 np0005597378 podman[353558]: 2026-01-27 14:17:44.930161778 +0000 UTC m=+0.025164653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:17:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 267 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 742 KiB/s rd, 1.4 MiB/s wr, 65 op/s
Jan 27 09:17:45 np0005597378 podman[353558]: 2026-01-27 14:17:45.039084076 +0000 UTC m=+0.134086941 container init 1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 09:17:45 np0005597378 podman[353558]: 2026-01-27 14:17:45.045589478 +0000 UTC m=+0.140592323 container start 1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:17:45 np0005597378 podman[353558]: 2026-01-27 14:17:45.049132484 +0000 UTC m=+0.144135469 container attach 1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 09:17:45 np0005597378 unruffled_kapitsa[353575]: 167 167
Jan 27 09:17:45 np0005597378 systemd[1]: libpod-1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4.scope: Deactivated successfully.
Jan 27 09:17:45 np0005597378 podman[353558]: 2026-01-27 14:17:45.050463959 +0000 UTC m=+0.145466824 container died 1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 09:17:45 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5b43c3b6da1644ccf25a4a0ab494caf6e2ffcaaf3b74a1dab5bc7feaf9a0c8f8-merged.mount: Deactivated successfully.
Jan 27 09:17:45 np0005597378 podman[353558]: 2026-01-27 14:17:45.084298822 +0000 UTC m=+0.179301667 container remove 1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:17:45 np0005597378 systemd[1]: libpod-conmon-1901f14ae7a7cc18b2406737b71b4e695234a814aa659c224bf54036e6e1c5d4.scope: Deactivated successfully.
Jan 27 09:17:45 np0005597378 podman[353600]: 2026-01-27 14:17:45.269428593 +0000 UTC m=+0.045635988 container create 178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:17:45 np0005597378 systemd[1]: Started libpod-conmon-178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351.scope.
Jan 27 09:17:45 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:17:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76775716c88686e193331a9dc9e39b329822b76152e65f644b9f8e91c317b55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76775716c88686e193331a9dc9e39b329822b76152e65f644b9f8e91c317b55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76775716c88686e193331a9dc9e39b329822b76152e65f644b9f8e91c317b55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76775716c88686e193331a9dc9e39b329822b76152e65f644b9f8e91c317b55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:17:45 np0005597378 podman[353600]: 2026-01-27 14:17:45.25244748 +0000 UTC m=+0.028654875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:17:45 np0005597378 podman[353600]: 2026-01-27 14:17:45.348366821 +0000 UTC m=+0.124574216 container init 178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 09:17:45 np0005597378 podman[353600]: 2026-01-27 14:17:45.356482017 +0000 UTC m=+0.132689392 container start 178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:17:45 np0005597378 podman[353600]: 2026-01-27 14:17:45.360159305 +0000 UTC m=+0.136366700 container attach 178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ride, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:17:46 np0005597378 lvm[353697]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:17:46 np0005597378 lvm[353697]: VG ceph_vg1 finished
Jan 27 09:17:46 np0005597378 lvm[353696]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:17:46 np0005597378 lvm[353696]: VG ceph_vg0 finished
Jan 27 09:17:46 np0005597378 lvm[353699]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:17:46 np0005597378 lvm[353699]: VG ceph_vg2 finished
Jan 27 09:17:46 np0005597378 kind_ride[353617]: {}
Jan 27 09:17:46 np0005597378 systemd[1]: libpod-178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351.scope: Deactivated successfully.
Jan 27 09:17:46 np0005597378 systemd[1]: libpod-178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351.scope: Consumed 1.346s CPU time.
Jan 27 09:17:46 np0005597378 podman[353600]: 2026-01-27 14:17:46.172047226 +0000 UTC m=+0.948254611 container died 178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ride, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:17:46 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c76775716c88686e193331a9dc9e39b329822b76152e65f644b9f8e91c317b55-merged.mount: Deactivated successfully.
Jan 27 09:17:46 np0005597378 podman[353600]: 2026-01-27 14:17:46.213226494 +0000 UTC m=+0.989433869 container remove 178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:17:46 np0005597378 systemd[1]: libpod-conmon-178329577bd574008f1fd2355ae86afd54bf65bbbf7f8bffa068c77b8de8b351.scope: Deactivated successfully.
Jan 27 09:17:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:17:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:17:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:17:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:17:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:46.322 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:46.323 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:46.324 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:46 np0005597378 nova_compute[238941]: 2026-01-27 14:17:46.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 300 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 406 KiB/s rd, 2.5 MiB/s wr, 90 op/s
Jan 27 09:17:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:17:47 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:17:47 np0005597378 nova_compute[238941]: 2026-01-27 14:17:47.320 238945 DEBUG nova.network.neutron [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Successfully updated port: c7e0af25-155a-4e70-a998-b6873c027009 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:17:47 np0005597378 nova_compute[238941]: 2026-01-27 14:17:47.336 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:17:47 np0005597378 nova_compute[238941]: 2026-01-27 14:17:47.336 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:17:47 np0005597378 nova_compute[238941]: 2026-01-27 14:17:47.337 238945 DEBUG nova.network.neutron [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:17:47 np0005597378 nova_compute[238941]: 2026-01-27 14:17:47.403 238945 DEBUG nova.compute.manager [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Received event network-changed-c7e0af25-155a-4e70-a998-b6873c027009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:47 np0005597378 nova_compute[238941]: 2026-01-27 14:17:47.403 238945 DEBUG nova.compute.manager [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Refreshing instance network info cache due to event network-changed-c7e0af25-155a-4e70-a998-b6873c027009. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:17:47 np0005597378 nova_compute[238941]: 2026-01-27 14:17:47.403 238945 DEBUG oslo_concurrency.lockutils [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:17:47 np0005597378 nova_compute[238941]: 2026-01-27 14:17:47.472 238945 DEBUG nova.network.neutron [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:17:47 np0005597378 nova_compute[238941]: 2026-01-27 14:17:47.662 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:17:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:17:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:17:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:17:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:17:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:17:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.173 238945 DEBUG nova.network.neutron [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Updating instance_info_cache with network_info: [{"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.191 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.192 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Instance network_info: |[{"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.192 238945 DEBUG oslo_concurrency.lockutils [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.192 238945 DEBUG nova.network.neutron [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Refreshing network info cache for port c7e0af25-155a-4e70-a998-b6873c027009 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.196 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Start _get_guest_xml network_info=[{"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.201 238945 WARNING nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.209 238945 DEBUG nova.virt.libvirt.host [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.210 238945 DEBUG nova.virt.libvirt.host [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.213 238945 DEBUG nova.virt.libvirt.host [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.214 238945 DEBUG nova.virt.libvirt.host [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.215 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.215 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.215 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.216 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.216 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.216 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.216 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.217 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.217 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.217 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.217 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.217 238945 DEBUG nova.virt.hardware [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.221 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:17:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769223014' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.778 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.803 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:49 np0005597378 nova_compute[238941]: 2026-01-27 14:17:49.807 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:17:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/496492632' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.361 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.363 238945 DEBUG nova.virt.libvirt.vif [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1292571685',display_name='tempest-TestNetworkBasicOps-server-1292571685',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1292571685',id=125,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPcQ6iPHxjx5eMagbDrGG9rMyCE4ElFPNFC7P8D70WeVLMUO6egwKUZDgsveGWRPjOVFr8hQ/AJ2QI5jXuhGCtAaGFdbvz7JRIiI1o4raZ1PCI5I5iQCSrMRLtfefQbhGw==',key_name='tempest-TestNetworkBasicOps-476831281',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-k00c9ci8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:42Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=dcd76996-d627-4d51-8860-9ce1dcefbb7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.364 238945 DEBUG nova.network.os_vif_util [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.365 238945 DEBUG nova.network.os_vif_util [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.366 238945 DEBUG nova.objects.instance [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid dcd76996-d627-4d51-8860-9ce1dcefbb7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.399 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  <uuid>dcd76996-d627-4d51-8860-9ce1dcefbb7f</uuid>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  <name>instance-0000007d</name>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkBasicOps-server-1292571685</nova:name>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:17:49</nova:creationTime>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:        <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:        <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:        <nova:port uuid="c7e0af25-155a-4e70-a998-b6873c027009">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <entry name="serial">dcd76996-d627-4d51-8860-9ce1dcefbb7f</entry>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <entry name="uuid">dcd76996-d627-4d51-8860-9ce1dcefbb7f</entry>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk.config">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:19:0a:41"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <target dev="tapc7e0af25-15"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/console.log" append="off"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:17:50 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:17:50 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:17:50 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:17:50 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.399 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Preparing to wait for external event network-vif-plugged-c7e0af25-155a-4e70-a998-b6873c027009 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.400 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.400 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.400 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.401 238945 DEBUG nova.virt.libvirt.vif [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1292571685',display_name='tempest-TestNetworkBasicOps-server-1292571685',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1292571685',id=125,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPcQ6iPHxjx5eMagbDrGG9rMyCE4ElFPNFC7P8D70WeVLMUO6egwKUZDgsveGWRPjOVFr8hQ/AJ2QI5jXuhGCtAaGFdbvz7JRIiI1o4raZ1PCI5I5iQCSrMRLtfefQbhGw==',key_name='tempest-TestNetworkBasicOps-476831281',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-k00c9ci8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:17:42Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=dcd76996-d627-4d51-8860-9ce1dcefbb7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.401 238945 DEBUG nova.network.os_vif_util [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.402 238945 DEBUG nova.network.os_vif_util [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.402 238945 DEBUG os_vif [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.403 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.403 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.403 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.407 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7e0af25-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.408 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc7e0af25-15, col_values=(('external_ids', {'iface-id': 'c7e0af25-155a-4e70-a998-b6873c027009', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:0a:41', 'vm-uuid': 'dcd76996-d627-4d51-8860-9ce1dcefbb7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:50 np0005597378 NetworkManager[48904]: <info>  [1769523470.4108] manager: (tapc7e0af25-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/544)
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.410 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.412 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.417 238945 INFO os_vif [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15')#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.467 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.468 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.468 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:19:0a:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.469 238945 INFO nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Using config drive#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.492 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.521 238945 DEBUG nova.network.neutron [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Updated VIF entry in instance network info cache for port c7e0af25-155a-4e70-a998-b6873c027009. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.521 238945 DEBUG nova.network.neutron [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Updating instance_info_cache with network_info: [{"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:50 np0005597378 nova_compute[238941]: 2026-01-27 14:17:50.537 238945 DEBUG oslo_concurrency.lockutils [req-03dc089f-cac8-4802-b571-173e77fd8558 req-c92c836a-6a6b-4f65-9e3b-f2ddc2e124ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:17:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Jan 27 09:17:51 np0005597378 nova_compute[238941]: 2026-01-27 14:17:51.590 238945 INFO nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Creating config drive at /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/disk.config#033[00m
Jan 27 09:17:51 np0005597378 nova_compute[238941]: 2026-01-27 14:17:51.596 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwrt3v74 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:51 np0005597378 nova_compute[238941]: 2026-01-27 14:17:51.754 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwrt3v74" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:51 np0005597378 nova_compute[238941]: 2026-01-27 14:17:51.790 238945 DEBUG nova.storage.rbd_utils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:17:51 np0005597378 nova_compute[238941]: 2026-01-27 14:17:51.794 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/disk.config dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:51 np0005597378 nova_compute[238941]: 2026-01-27 14:17:51.928 238945 DEBUG oslo_concurrency.processutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/disk.config dcd76996-d627-4d51-8860-9ce1dcefbb7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:51 np0005597378 nova_compute[238941]: 2026-01-27 14:17:51.929 238945 INFO nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Deleting local config drive /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f/disk.config because it was imported into RBD.#033[00m
Jan 27 09:17:51 np0005597378 kernel: tapc7e0af25-15: entered promiscuous mode
Jan 27 09:17:51 np0005597378 NetworkManager[48904]: <info>  [1769523471.9747] manager: (tapc7e0af25-15): new Tun device (/org/freedesktop/NetworkManager/Devices/545)
Jan 27 09:17:51 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:51Z|01310|binding|INFO|Claiming lport c7e0af25-155a-4e70-a998-b6873c027009 for this chassis.
Jan 27 09:17:51 np0005597378 nova_compute[238941]: 2026-01-27 14:17:51.978 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:51 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:51Z|01311|binding|INFO|c7e0af25-155a-4e70-a998-b6873c027009: Claiming fa:16:3e:19:0a:41 10.100.0.14
Jan 27 09:17:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:51.987 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:0a:41 10.100.0.14'], port_security=['fa:16:3e:19:0a:41 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dcd76996-d627-4d51-8860-9ce1dcefbb7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-998c1efd-c4d9-4646-b881-3c79abc13336', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4bc9c7a-795c-4d0d-88e5-37500aeca13a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=00e7300d-3433-4606-9131-5e74b7c09c27, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c7e0af25-155a-4e70-a998-b6873c027009) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:17:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:51.988 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c7e0af25-155a-4e70-a998-b6873c027009 in datapath 998c1efd-c4d9-4646-b881-3c79abc13336 bound to our chassis#033[00m
Jan 27 09:17:51 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:51.990 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 998c1efd-c4d9-4646-b881-3c79abc13336#033[00m
Jan 27 09:17:51 np0005597378 nova_compute[238941]: 2026-01-27 14:17:51.996 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:51 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:51Z|01312|binding|INFO|Setting lport c7e0af25-155a-4e70-a998-b6873c027009 ovn-installed in OVS
Jan 27 09:17:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:51Z|01313|binding|INFO|Setting lport c7e0af25-155a-4e70-a998-b6873c027009 up in Southbound
Jan 27 09:17:52 np0005597378 nova_compute[238941]: 2026-01-27 14:17:51.999 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:52 np0005597378 systemd-udevd[353874]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:17:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.012 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fc51a72a-6586-4688-aba0-345b69b21198]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:52 np0005597378 systemd-machined[207425]: New machine qemu-157-instance-0000007d.
Jan 27 09:17:52 np0005597378 NetworkManager[48904]: <info>  [1769523472.0265] device (tapc7e0af25-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:17:52 np0005597378 NetworkManager[48904]: <info>  [1769523472.0279] device (tapc7e0af25-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:17:52 np0005597378 systemd[1]: Started Virtual Machine qemu-157-instance-0000007d.
Jan 27 09:17:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.048 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6ccfa75f-b633-4b33-b7a1-56864298a201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.053 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f54a25-a07b-49c2-8a27-2ed6fc8bae52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.090 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[18980852-b4fc-405a-a8c6-bb47217e3a6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.117 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9c66a782-437b-4b66-bfe6-da20d477f16f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap998c1efd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:9d:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606998, 'reachable_time': 21184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353888, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.139 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[32475a60-6fa5-4d08-ba52-100d6d583e9b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap998c1efd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607011, 'tstamp': 607011}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353889, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap998c1efd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607014, 'tstamp': 607014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353889, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.143 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap998c1efd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:52 np0005597378 nova_compute[238941]: 2026-01-27 14:17:52.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:52 np0005597378 nova_compute[238941]: 2026-01-27 14:17:52.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.147 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap998c1efd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.147 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.147 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap998c1efd-c0, col_values=(('external_ids', {'iface-id': 'b0077a9b-7ee7-4f6c-9114-22d1ebb8d9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:52.148 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:52 np0005597378 nova_compute[238941]: 2026-01-27 14:17:52.521 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523472.5207715, dcd76996-d627-4d51-8860-9ce1dcefbb7f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:17:52 np0005597378 nova_compute[238941]: 2026-01-27 14:17:52.522 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] VM Started (Lifecycle Event)#033[00m
Jan 27 09:17:52 np0005597378 nova_compute[238941]: 2026-01-27 14:17:52.549 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:17:52 np0005597378 nova_compute[238941]: 2026-01-27 14:17:52.554 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523472.5210223, dcd76996-d627-4d51-8860-9ce1dcefbb7f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:17:52 np0005597378 nova_compute[238941]: 2026-01-27 14:17:52.555 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:17:52 np0005597378 nova_compute[238941]: 2026-01-27 14:17:52.581 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:17:52 np0005597378 nova_compute[238941]: 2026-01-27 14:17:52.585 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:17:52 np0005597378 nova_compute[238941]: 2026-01-27 14:17:52.611 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:17:52 np0005597378 nova_compute[238941]: 2026-01-27 14:17:52.666 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 3.5 MiB/s wr, 88 op/s
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.675 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.675 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.675 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.675 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.676 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.677 238945 INFO nova.compute.manager [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Terminating instance#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.678 238945 DEBUG nova.compute.manager [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:17:53 np0005597378 kernel: tapf21ed56b-30 (unregistering): left promiscuous mode
Jan 27 09:17:53 np0005597378 NetworkManager[48904]: <info>  [1769523473.7165] device (tapf21ed56b-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:17:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:53Z|01314|binding|INFO|Releasing lport f21ed56b-30c8-4f36-ac00-096f72413945 from this chassis (sb_readonly=0)
Jan 27 09:17:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:53Z|01315|binding|INFO|Setting lport f21ed56b-30c8-4f36-ac00-096f72413945 down in Southbound
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.725 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:53Z|01316|binding|INFO|Removing iface tapf21ed56b-30 ovn-installed in OVS
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.729 238945 DEBUG nova.compute.manager [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-changed-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.729 238945 DEBUG nova.compute.manager [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing instance network info cache due to event network-changed-f21ed56b-30c8-4f36-ac00-096f72413945. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.729 238945 DEBUG oslo_concurrency.lockutils [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.730 238945 DEBUG oslo_concurrency.lockutils [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.730 238945 DEBUG nova.network.neutron [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Refreshing network info cache for port f21ed56b-30c8-4f36-ac00-096f72413945 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.736 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:13:bc 10.100.0.11'], port_security=['fa:16:3e:c4:13:bc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '903e2fd2-4c0a-486c-9be2-3e2844ea09aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bbba920-bccc-4cba-962d-9f41f23c2c63, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=f21ed56b-30c8-4f36-ac00-096f72413945) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.737 154802 INFO neutron.agent.ovn.metadata.agent [-] Port f21ed56b-30c8-4f36-ac00-096f72413945 in datapath a1ef49dc-ad74-4940-a002-d8d212c0abde unbound from our chassis#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.739 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1ef49dc-ad74-4940-a002-d8d212c0abde#033[00m
Jan 27 09:17:53 np0005597378 kernel: tapcc6a70ca-7b (unregistering): left promiscuous mode
Jan 27 09:17:53 np0005597378 NetworkManager[48904]: <info>  [1769523473.7451] device (tapcc6a70ca-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.749 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:53Z|01317|binding|INFO|Releasing lport cc6a70ca-7bf0-4028-84a7-a57071c090b8 from this chassis (sb_readonly=0)
Jan 27 09:17:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:53Z|01318|binding|INFO|Setting lport cc6a70ca-7bf0-4028-84a7-a57071c090b8 down in Southbound
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:17:53Z|01319|binding|INFO|Removing iface tapcc6a70ca-7b ovn-installed in OVS
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.758 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.759 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1a665448-2555-4705-aae8-153bde37e2e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.763 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:1c:99 2001:db8:0:1:f816:3eff:fe01:1c99 2001:db8::f816:3eff:fe01:1c99'], port_security=['fa:16:3e:01:1c:99 2001:db8:0:1:f816:3eff:fe01:1c99 2001:db8::f816:3eff:fe01:1c99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe01:1c99/64 2001:db8::f816:3eff:fe01:1c99/64', 'neutron:device_id': '903e2fd2-4c0a-486c-9be2-3e2844ea09aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab7056f0-caf7-445d-b0fc-9f3ae613d37b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc6a70ca-7bf0-4028-84a7-a57071c090b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.771 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.789 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd7b95e-6e52-44d6-bbf7-485a1cafaa59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.791 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[27d63947-c0f9-43c4-ab7d-a22c16f65536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:53 np0005597378 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Jan 27 09:17:53 np0005597378 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007c.scope: Consumed 13.738s CPU time.
Jan 27 09:17:53 np0005597378 systemd-machined[207425]: Machine qemu-156-instance-0000007c terminated.
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.824 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[02c769b9-4213-45da-8016-900641c9ddc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.848 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[81e80971-3050-4667-b9fc-711480db250b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1ef49dc-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:57:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604783, 'reachable_time': 29713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353945, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.863 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4d0fd1-5248-48da-9388-d350eb8e3c61]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa1ef49dc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604794, 'tstamp': 604794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353946, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa1ef49dc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604797, 'tstamp': 604797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353946, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.864 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1ef49dc-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.866 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.873 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.874 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1ef49dc-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.874 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.874 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1ef49dc-a0, col_values=(('external_ids', {'iface-id': '73d5d48d-2488-422a-a3b3-324997b57d60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.875 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.876 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc6a70ca-7bf0-4028-84a7-a57071c090b8 in datapath 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 unbound from our chassis#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.878 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.900 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05fbd16f-7a91-49d7-89a0-fd69db17f600]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:53 np0005597378 NetworkManager[48904]: <info>  [1769523473.9110] manager: (tapcc6a70ca-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/546)
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.920 238945 INFO nova.virt.libvirt.driver [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Instance destroyed successfully.#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.921 238945 DEBUG nova.objects.instance [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 903e2fd2-4c0a-486c-9be2-3e2844ea09aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.933 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a0fb9ad0-9a48-4f99-b15b-05d366213e95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.935 238945 DEBUG nova.virt.libvirt.vif [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-283462546',display_name='tempest-TestGettingAddress-server-283462546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-283462546',id=124,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:17:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-fbug5qgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:17:28Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=903e2fd2-4c0a-486c-9be2-3e2844ea09aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.936 238945 DEBUG nova.network.os_vif_util [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.937 238945 DEBUG nova.network.os_vif_util [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.937 238945 DEBUG os_vif [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.937 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9b7f2a-152a-4d6a-b869-762956a125d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.939 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf21ed56b-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.940 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.945 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.948 238945 INFO os_vif [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:13:bc,bridge_name='br-int',has_traffic_filtering=True,id=f21ed56b-30c8-4f36-ac00-096f72413945,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf21ed56b-30')#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.949 238945 DEBUG nova.virt.libvirt.vif [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-283462546',display_name='tempest-TestGettingAddress-server-283462546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-283462546',id=124,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:17:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-fbug5qgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:17:28Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=903e2fd2-4c0a-486c-9be2-3e2844ea09aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.950 238945 DEBUG nova.network.os_vif_util [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.951 238945 DEBUG nova.network.os_vif_util [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.952 238945 DEBUG os_vif [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.955 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc6a70ca-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.960 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:17:53 np0005597378 nova_compute[238941]: 2026-01-27 14:17:53.963 238945 INFO os_vif [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:1c:99,bridge_name='br-int',has_traffic_filtering=True,id=cc6a70ca-7bf0-4028-84a7-a57071c090b8,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc6a70ca-7b')#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.969 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8f763c89-c2b1-46d2-9624-b6cf51420f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:53.991 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[262db23f-abfa-4c75-a8bc-fd70fe8060da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ff1fc04-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:e4:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 6, 'rx_bytes': 3300, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 6, 'rx_bytes': 3300, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604859, 'reachable_time': 21612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353993, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:54.012 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[78863fc9-bef6-40cf-8a19-0dc6e1125d91]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4ff1fc04-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604870, 'tstamp': 604870}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353999, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:17:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:54.014 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ff1fc04-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:54.016 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ff1fc04-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:54.017 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:54 np0005597378 nova_compute[238941]: 2026-01-27 14:17:54.016 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:54.017 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ff1fc04-00, col_values=(('external_ids', {'iface-id': '7641ffb0-ddda-4391-aadd-cbcdb9365edb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:17:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:17:54.017 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:17:54 np0005597378 nova_compute[238941]: 2026-01-27 14:17:54.213 238945 INFO nova.virt.libvirt.driver [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Deleting instance files /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa_del#033[00m
Jan 27 09:17:54 np0005597378 nova_compute[238941]: 2026-01-27 14:17:54.215 238945 INFO nova.virt.libvirt.driver [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Deletion of /var/lib/nova/instances/903e2fd2-4c0a-486c-9be2-3e2844ea09aa_del complete#033[00m
Jan 27 09:17:54 np0005597378 nova_compute[238941]: 2026-01-27 14:17:54.269 238945 INFO nova.compute.manager [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:17:54 np0005597378 nova_compute[238941]: 2026-01-27 14:17:54.270 238945 DEBUG oslo.service.loopingcall [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:17:54 np0005597378 nova_compute[238941]: 2026-01-27 14:17:54.271 238945 DEBUG nova.compute.manager [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:17:54 np0005597378 nova_compute[238941]: 2026-01-27 14:17:54.271 238945 DEBUG nova.network.neutron [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:17:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 326 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 3.5 MiB/s wr, 97 op/s
Jan 27 09:17:55 np0005597378 nova_compute[238941]: 2026-01-27 14:17:55.661 238945 DEBUG nova.compute.manager [req-12e2b230-7f71-4027-98ff-e5d1e639766d req-ae92e022-4bdc-413f-9182-080af1ebda97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-unplugged-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:55 np0005597378 nova_compute[238941]: 2026-01-27 14:17:55.662 238945 DEBUG oslo_concurrency.lockutils [req-12e2b230-7f71-4027-98ff-e5d1e639766d req-ae92e022-4bdc-413f-9182-080af1ebda97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:55 np0005597378 nova_compute[238941]: 2026-01-27 14:17:55.662 238945 DEBUG oslo_concurrency.lockutils [req-12e2b230-7f71-4027-98ff-e5d1e639766d req-ae92e022-4bdc-413f-9182-080af1ebda97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:55 np0005597378 nova_compute[238941]: 2026-01-27 14:17:55.662 238945 DEBUG oslo_concurrency.lockutils [req-12e2b230-7f71-4027-98ff-e5d1e639766d req-ae92e022-4bdc-413f-9182-080af1ebda97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:55 np0005597378 nova_compute[238941]: 2026-01-27 14:17:55.662 238945 DEBUG nova.compute.manager [req-12e2b230-7f71-4027-98ff-e5d1e639766d req-ae92e022-4bdc-413f-9182-080af1ebda97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] No waiting events found dispatching network-vif-unplugged-f21ed56b-30c8-4f36-ac00-096f72413945 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:17:55 np0005597378 nova_compute[238941]: 2026-01-27 14:17:55.663 238945 DEBUG nova.compute.manager [req-12e2b230-7f71-4027-98ff-e5d1e639766d req-ae92e022-4bdc-413f-9182-080af1ebda97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-unplugged-f21ed56b-30c8-4f36-ac00-096f72413945 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:17:56 np0005597378 nova_compute[238941]: 2026-01-27 14:17:56.766 238945 DEBUG nova.compute.manager [req-acd8cb6e-6499-458f-be84-455df676cfb9 req-53f29e84-c27f-450f-8c23-4c59ab72c677 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-deleted-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:56 np0005597378 nova_compute[238941]: 2026-01-27 14:17:56.766 238945 INFO nova.compute.manager [req-acd8cb6e-6499-458f-be84-455df676cfb9 req-53f29e84-c27f-450f-8c23-4c59ab72c677 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Neutron deleted interface f21ed56b-30c8-4f36-ac00-096f72413945; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:17:56 np0005597378 nova_compute[238941]: 2026-01-27 14:17:56.767 238945 DEBUG nova.network.neutron [req-acd8cb6e-6499-458f-be84-455df676cfb9 req-53f29e84-c27f-450f-8c23-4c59ab72c677 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [{"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:56 np0005597378 nova_compute[238941]: 2026-01-27 14:17:56.798 238945 DEBUG nova.compute.manager [req-acd8cb6e-6499-458f-be84-455df676cfb9 req-53f29e84-c27f-450f-8c23-4c59ab72c677 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Detach interface failed, port_id=f21ed56b-30c8-4f36-ac00-096f72413945, reason: Instance 903e2fd2-4c0a-486c-9be2-3e2844ea09aa could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 09:17:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 285 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Jan 27 09:17:57 np0005597378 nova_compute[238941]: 2026-01-27 14:17:57.667 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.708 238945 DEBUG nova.network.neutron [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updated VIF entry in instance network info cache for port f21ed56b-30c8-4f36-ac00-096f72413945. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.709 238945 DEBUG nova.network.neutron [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [{"id": "f21ed56b-30c8-4f36-ac00-096f72413945", "address": "fa:16:3e:c4:13:bc", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf21ed56b-30", "ovs_interfaceid": "f21ed56b-30c8-4f36-ac00-096f72413945", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "address": "fa:16:3e:01:1c:99", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe01:1c99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc6a70ca-7b", "ovs_interfaceid": "cc6a70ca-7bf0-4028-84a7-a57071c090b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:58 np0005597378 podman[354001]: 2026-01-27 14:17:58.712194048 +0000 UTC m=+0.051259148 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.728 238945 DEBUG nova.compute.manager [req-11e71461-b9f4-4a59-87df-8f7b3d4584b5 req-2b76df24-0c2c-4b6c-824c-64b5ec17f901 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.729 238945 DEBUG oslo_concurrency.lockutils [req-11e71461-b9f4-4a59-87df-8f7b3d4584b5 req-2b76df24-0c2c-4b6c-824c-64b5ec17f901 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.729 238945 DEBUG oslo_concurrency.lockutils [req-11e71461-b9f4-4a59-87df-8f7b3d4584b5 req-2b76df24-0c2c-4b6c-824c-64b5ec17f901 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.729 238945 DEBUG oslo_concurrency.lockutils [req-11e71461-b9f4-4a59-87df-8f7b3d4584b5 req-2b76df24-0c2c-4b6c-824c-64b5ec17f901 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.729 238945 DEBUG nova.compute.manager [req-11e71461-b9f4-4a59-87df-8f7b3d4584b5 req-2b76df24-0c2c-4b6c-824c-64b5ec17f901 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] No waiting events found dispatching network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.729 238945 WARNING nova.compute.manager [req-11e71461-b9f4-4a59-87df-8f7b3d4584b5 req-2b76df24-0c2c-4b6c-824c-64b5ec17f901 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received unexpected event network-vif-plugged-f21ed56b-30c8-4f36-ac00-096f72413945 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.732 238945 DEBUG oslo_concurrency.lockutils [req-25d8f37b-af69-4177-a205-8fd48562534f req-f944b076-8996-4a2b-8bd2-5efe0190bf9c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-903e2fd2-4c0a-486c-9be2-3e2844ea09aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.851 238945 DEBUG nova.compute.manager [req-adff9769-b8e2-4e12-8508-7fc8a345cad5 req-3da43550-5643-4ce4-965a-827a5f89bee1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Received event network-vif-plugged-c7e0af25-155a-4e70-a998-b6873c027009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.852 238945 DEBUG oslo_concurrency.lockutils [req-adff9769-b8e2-4e12-8508-7fc8a345cad5 req-3da43550-5643-4ce4-965a-827a5f89bee1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.852 238945 DEBUG oslo_concurrency.lockutils [req-adff9769-b8e2-4e12-8508-7fc8a345cad5 req-3da43550-5643-4ce4-965a-827a5f89bee1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.852 238945 DEBUG oslo_concurrency.lockutils [req-adff9769-b8e2-4e12-8508-7fc8a345cad5 req-3da43550-5643-4ce4-965a-827a5f89bee1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.852 238945 DEBUG nova.compute.manager [req-adff9769-b8e2-4e12-8508-7fc8a345cad5 req-3da43550-5643-4ce4-965a-827a5f89bee1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Processing event network-vif-plugged-c7e0af25-155a-4e70-a998-b6873c027009 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.853 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.857 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523478.8576372, dcd76996-d627-4d51-8860-9ce1dcefbb7f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.858 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.859 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.863 238945 INFO nova.virt.libvirt.driver [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Instance spawned successfully.#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.863 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.878 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.881 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.895 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.895 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.896 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.897 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.897 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.898 238945 DEBUG nova.virt.libvirt.driver [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.902 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.948 238945 INFO nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Took 16.24 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.948 238945 DEBUG nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:17:58 np0005597378 nova_compute[238941]: 2026-01-27 14:17:58.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:17:59 np0005597378 nova_compute[238941]: 2026-01-27 14:17:59.011 238945 INFO nova.compute.manager [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Took 17.54 seconds to build instance.#033[00m
Jan 27 09:17:59 np0005597378 nova_compute[238941]: 2026-01-27 14:17:59.027 238945 DEBUG oslo_concurrency.lockutils [None req-ee097286-6ff0-4f03-b9b7-f60ddcd56ce0 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:17:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 246 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Jan 27 09:17:59 np0005597378 nova_compute[238941]: 2026-01-27 14:17:59.046 238945 DEBUG nova.network.neutron [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:17:59 np0005597378 nova_compute[238941]: 2026-01-27 14:17:59.061 238945 INFO nova.compute.manager [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Took 4.79 seconds to deallocate network for instance.#033[00m
Jan 27 09:17:59 np0005597378 nova_compute[238941]: 2026-01-27 14:17:59.101 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:17:59 np0005597378 nova_compute[238941]: 2026-01-27 14:17:59.101 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:17:59 np0005597378 nova_compute[238941]: 2026-01-27 14:17:59.212 238945 DEBUG oslo_concurrency.processutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:17:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:17:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:17:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/945841579' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:17:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:17:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/945841579' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:17:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:17:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2220739632' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:17:59 np0005597378 nova_compute[238941]: 2026-01-27 14:17:59.890 238945 DEBUG oslo_concurrency.processutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:17:59 np0005597378 nova_compute[238941]: 2026-01-27 14:17:59.897 238945 DEBUG nova.compute.provider_tree [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:18:00 np0005597378 nova_compute[238941]: 2026-01-27 14:18:00.012 238945 DEBUG nova.scheduler.client.report [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:18:00 np0005597378 nova_compute[238941]: 2026-01-27 14:18:00.094 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:00 np0005597378 nova_compute[238941]: 2026-01-27 14:18:00.141 238945 INFO nova.scheduler.client.report [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 903e2fd2-4c0a-486c-9be2-3e2844ea09aa#033[00m
Jan 27 09:18:00 np0005597378 nova_compute[238941]: 2026-01-27 14:18:00.223 238945 DEBUG oslo_concurrency.lockutils [None req-48a71656-6c02-49da-99a0-9d19891ad826 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "903e2fd2-4c0a-486c-9be2-3e2844ea09aa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:00 np0005597378 nova_compute[238941]: 2026-01-27 14:18:00.944 238945 DEBUG nova.compute.manager [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Received event network-vif-plugged-c7e0af25-155a-4e70-a998-b6873c027009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:18:00 np0005597378 nova_compute[238941]: 2026-01-27 14:18:00.945 238945 DEBUG oslo_concurrency.lockutils [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:00 np0005597378 nova_compute[238941]: 2026-01-27 14:18:00.945 238945 DEBUG oslo_concurrency.lockutils [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:00 np0005597378 nova_compute[238941]: 2026-01-27 14:18:00.946 238945 DEBUG oslo_concurrency.lockutils [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:00 np0005597378 nova_compute[238941]: 2026-01-27 14:18:00.946 238945 DEBUG nova.compute.manager [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] No waiting events found dispatching network-vif-plugged-c7e0af25-155a-4e70-a998-b6873c027009 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:18:00 np0005597378 nova_compute[238941]: 2026-01-27 14:18:00.947 238945 WARNING nova.compute.manager [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Received unexpected event network-vif-plugged-c7e0af25-155a-4e70-a998-b6873c027009 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:18:00 np0005597378 nova_compute[238941]: 2026-01-27 14:18:00.947 238945 DEBUG nova.compute.manager [req-bae6cfaa-29ee-4805-be22-ec4b3beb591c req-bc21d3f2-eeae-4f2f-ba6a-571d47c51991 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Received event network-vif-deleted-cc6a70ca-7bf0-4028-84a7-a57071c090b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:18:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 246 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 45 KiB/s wr, 54 op/s
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.405 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.405 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.406 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.406 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.406 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:18:01 np0005597378 podman[354062]: 2026-01-27 14:18:01.741232098 +0000 UTC m=+0.078455975 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.921 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.921 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.922 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.922 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.922 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.924 238945 INFO nova.compute.manager [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Terminating instance#033[00m
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.924 238945 DEBUG nova.compute.manager [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:18:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:18:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2837883524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:18:01 np0005597378 nova_compute[238941]: 2026-01-27 14:18:01.997 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:18:02 np0005597378 kernel: tap27f90ae7-2c (unregistering): left promiscuous mode
Jan 27 09:18:02 np0005597378 NetworkManager[48904]: <info>  [1769523482.0409] device (tap27f90ae7-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:18:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:02Z|01320|binding|INFO|Releasing lport 27f90ae7-2cc0-4208-a70d-88c06320e5a3 from this chassis (sb_readonly=0)
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.049 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:02Z|01321|binding|INFO|Setting lport 27f90ae7-2cc0-4208-a70d-88c06320e5a3 down in Southbound
Jan 27 09:18:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:02Z|01322|binding|INFO|Removing iface tap27f90ae7-2c ovn-installed in OVS
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.052 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.056 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:d7:02 10.100.0.4'], port_security=['fa:16:3e:e2:d7:02 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '851e0dd0-2021-44f6-8c51-af4bc91e02f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bbba920-bccc-4cba-962d-9f41f23c2c63, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=27f90ae7-2cc0-4208-a70d-88c06320e5a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.057 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 27f90ae7-2cc0-4208-a70d-88c06320e5a3 in datapath a1ef49dc-ad74-4940-a002-d8d212c0abde unbound from our chassis#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.058 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1ef49dc-ad74-4940-a002-d8d212c0abde, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.059 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[62a70350-6fe8-425d-9db2-955ee68dd85b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.060 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde namespace which is not needed anymore#033[00m
Jan 27 09:18:02 np0005597378 kernel: tap32def379-eb (unregistering): left promiscuous mode
Jan 27 09:18:02 np0005597378 NetworkManager[48904]: <info>  [1769523482.0693] device (tap32def379-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.073 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.083 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.087 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:02Z|01323|binding|INFO|Releasing lport 32def379-eb10-485a-99b5-d69fe5f3b228 from this chassis (sb_readonly=0)
Jan 27 09:18:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:02Z|01324|binding|INFO|Setting lport 32def379-eb10-485a-99b5-d69fe5f3b228 down in Southbound
Jan 27 09:18:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:02Z|01325|binding|INFO|Removing iface tap32def379-eb ovn-installed in OVS
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.097 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:d4:9f 2001:db8:0:1:f816:3eff:fe7c:d49f 2001:db8::f816:3eff:fe7c:d49f'], port_security=['fa:16:3e:7c:d4:9f 2001:db8:0:1:f816:3eff:fe7c:d49f 2001:db8::f816:3eff:fe7c:d49f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe7c:d49f/64 2001:db8::f816:3eff:fe7c:d49f/64', 'neutron:device_id': '851e0dd0-2021-44f6-8c51-af4bc91e02f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f2f9e7-99e2-4963-9c7f-3adb4ea137b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab7056f0-caf7-445d-b0fc-9f3ae613d37b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=32def379-eb10-485a-99b5-d69fe5f3b228) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Jan 27 09:18:02 np0005597378 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Consumed 15.436s CPU time.
Jan 27 09:18:02 np0005597378 systemd-machined[207425]: Machine qemu-154-instance-0000007a terminated.
Jan 27 09:18:02 np0005597378 NetworkManager[48904]: <info>  [1769523482.1539] manager: (tap32def379-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/547)
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.168 238945 INFO nova.virt.libvirt.driver [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance destroyed successfully.#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.168 238945 DEBUG nova.objects.instance [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 851e0dd0-2021-44f6-8c51-af4bc91e02f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.180 238945 DEBUG nova.virt.libvirt.vif [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-857988838',display_name='tempest-TestGettingAddress-server-857988838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-857988838',id=122,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-hsxjzmrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:51Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=851e0dd0-2021-44f6-8c51-af4bc91e02f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.181 238945 DEBUG nova.network.os_vif_util [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.181 238945 DEBUG nova.network.os_vif_util [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.182 238945 DEBUG os_vif [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.184 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27f90ae7-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.187 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.192 238945 INFO os_vif [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:d7:02,bridge_name='br-int',has_traffic_filtering=True,id=27f90ae7-2cc0-4208-a70d-88c06320e5a3,network=Network(a1ef49dc-ad74-4940-a002-d8d212c0abde),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27f90ae7-2c')#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.193 238945 DEBUG nova.virt.libvirt.vif [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-857988838',display_name='tempest-TestGettingAddress-server-857988838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-857988838',id=122,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHrJpK+e6PfpIDJ/keT8zK80wl+7eLiC9CMyttQpZhoKrYgFTdynFV87Mm3O3NXiCTqH83Wvv4jrTgTHE0XOLGtLjAuREg6vQYW8nUg6bG0CBT3fS6oyig0cHBOxh2NRQg==',key_name='tempest-TestGettingAddress-365943767',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:16:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-hsxjzmrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:16:51Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=851e0dd0-2021-44f6-8c51-af4bc91e02f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.193 238945 DEBUG nova.network.os_vif_util [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.194 238945 DEBUG nova.network.os_vif_util [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.194 238945 DEBUG os_vif [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.196 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.196 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32def379-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.202 238945 INFO os_vif [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:d4:9f,bridge_name='br-int',has_traffic_filtering=True,id=32def379-eb10-485a-99b5-d69fe5f3b228,network=Network(4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32def379-eb')#033[00m
Jan 27 09:18:02 np0005597378 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [NOTICE]   (351847) : haproxy version is 2.8.14-c23fe91
Jan 27 09:18:02 np0005597378 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [NOTICE]   (351847) : path to executable is /usr/sbin/haproxy
Jan 27 09:18:02 np0005597378 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [WARNING]  (351847) : Exiting Master process...
Jan 27 09:18:02 np0005597378 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [WARNING]  (351847) : Exiting Master process...
Jan 27 09:18:02 np0005597378 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [ALERT]    (351847) : Current worker (351849) exited with code 143 (Terminated)
Jan 27 09:18:02 np0005597378 neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde[351843]: [WARNING]  (351847) : All workers exited. Exiting... (0)
Jan 27 09:18:02 np0005597378 systemd[1]: libpod-bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83.scope: Deactivated successfully.
Jan 27 09:18:02 np0005597378 podman[354119]: 2026-01-27 14:18:02.220984403 +0000 UTC m=+0.069517876 container died bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.251 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.252 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:18:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83-userdata-shm.mount: Deactivated successfully.
Jan 27 09:18:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8e168c9dcead1ae33c6d74ff3d85146d54e18497146f1a6c1d1086209c9eb098-merged.mount: Deactivated successfully.
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.262 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.262 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.267 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.267 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:18:02 np0005597378 podman[354119]: 2026-01-27 14:18:02.269290993 +0000 UTC m=+0.117824456 container cleanup bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:18:02 np0005597378 systemd[1]: libpod-conmon-bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83.scope: Deactivated successfully.
Jan 27 09:18:02 np0005597378 podman[354187]: 2026-01-27 14:18:02.356367467 +0000 UTC m=+0.061234046 container remove bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.366 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5161c0f1-d065-4257-a798-282134652755]: (4, ('Tue Jan 27 02:18:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde (bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83)\nbc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83\nTue Jan 27 02:18:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde (bc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83)\nbc6a0a726e0841db5978f12150816f454a472c2a38598ca517c1d98b7d3e0c83\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.368 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[06df2a6d-ef8d-4ac8-bb68-0509ed4606fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.369 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1ef49dc-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.371 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 kernel: tapa1ef49dc-a0: left promiscuous mode
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d23bad-53c0-459e-b77b-4378ba25f1f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.401 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4bfda44b-8cef-4d86-9f2e-96921439bbd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.402 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7c04851c-8df7-43aa-8a30-f200bde65acd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.421 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60da8578-a644-40fd-b5c8-1a5b4d319f14]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604776, 'reachable_time': 37702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354203, 'error': None, 'target': 'ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 systemd[1]: run-netns-ovnmeta\x2da1ef49dc\x2dad74\x2d4940\x2da002\x2dd8d212c0abde.mount: Deactivated successfully.
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.432 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a1ef49dc-ad74-4940-a002-d8d212c0abde deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.432 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4bf11d-2973-458b-8f9a-60e518bc926a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.436 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 32def379-eb10-485a-99b5-d69fe5f3b228 in datapath 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 unbound from our chassis#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.438 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.439 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3a0974bc-c296-4497-a4bd-84d4280d1625]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.440 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 namespace which is not needed anymore#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.512 238945 INFO nova.virt.libvirt.driver [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Deleting instance files /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2_del#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.513 238945 INFO nova.virt.libvirt.driver [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Deletion of /var/lib/nova/instances/851e0dd0-2021-44f6-8c51-af4bc91e02f2_del complete#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.535 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.536 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3341MB free_disk=59.875420562922955GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.536 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.536 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.578 238945 INFO nova.compute.manager [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.578 238945 DEBUG oslo.service.loopingcall [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.579 238945 DEBUG nova.compute.manager [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.579 238945 DEBUG nova.network.neutron [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:18:02 np0005597378 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [NOTICE]   (351920) : haproxy version is 2.8.14-c23fe91
Jan 27 09:18:02 np0005597378 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [NOTICE]   (351920) : path to executable is /usr/sbin/haproxy
Jan 27 09:18:02 np0005597378 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [WARNING]  (351920) : Exiting Master process...
Jan 27 09:18:02 np0005597378 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [WARNING]  (351920) : Exiting Master process...
Jan 27 09:18:02 np0005597378 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [ALERT]    (351920) : Current worker (351922) exited with code 143 (Terminated)
Jan 27 09:18:02 np0005597378 neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148[351916]: [WARNING]  (351920) : All workers exited. Exiting... (0)
Jan 27 09:18:02 np0005597378 systemd[1]: libpod-ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f.scope: Deactivated successfully.
Jan 27 09:18:02 np0005597378 podman[354221]: 2026-01-27 14:18:02.607180451 +0000 UTC m=+0.047059238 container died ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.622 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 851e0dd0-2021-44f6-8c51-af4bc91e02f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.622 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 5b3093c8-a99f-45d8-b612-447b6a5412c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.622 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance dcd76996-d627-4d51-8860-9ce1dcefbb7f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.623 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.623 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:18:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f-userdata-shm.mount: Deactivated successfully.
Jan 27 09:18:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c6492a2ecf9735b6ad702ce6ce8a5eac75de24fbc6fda3d2c7395609022e5c69-merged.mount: Deactivated successfully.
Jan 27 09:18:02 np0005597378 podman[354221]: 2026-01-27 14:18:02.647558839 +0000 UTC m=+0.087437626 container cleanup ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:18:02 np0005597378 systemd[1]: libpod-conmon-ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f.scope: Deactivated successfully.
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.692 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:18:02 np0005597378 podman[354249]: 2026-01-27 14:18:02.703610095 +0000 UTC m=+0.036159676 container remove ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.719 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1be81199-6c3d-4b27-bafd-9ff2d93ff7f3]: (4, ('Tue Jan 27 02:18:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 (ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f)\nca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f\nTue Jan 27 02:18:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 (ca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f)\nca7c4f4073ba6fd60b88c2335160ad45e286422a42f460080fce063a0e9e596f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.720 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1f9610-daef-435c-bf64-966d32689ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.721 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ff1fc04-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:18:02 np0005597378 kernel: tap4ff1fc04-00: left promiscuous mode
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.728 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 nova_compute[238941]: 2026-01-27 14:18:02.736 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.739 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c42507-1c74-4d8e-a6fd-c55f28b93e1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.757 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e56c1b-91de-4ff3-8c6b-d58e6865cc08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.758 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[945955b5-959b-45fd-a0bb-0e830d857b64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.777 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4956ffac-4e82-4b2a-8809-0d55f507ad72]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604852, 'reachable_time': 29533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354265, 'error': None, 'target': 'ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.778 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:18:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:02.779 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c6dc56b1-ca31-44b0-9b0e-9202d003e0a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.033 238945 DEBUG nova.compute.manager [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-changed-27f90ae7-2cc0-4208-a70d-88c06320e5a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.033 238945 DEBUG nova.compute.manager [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing instance network info cache due to event network-changed-27f90ae7-2cc0-4208-a70d-88c06320e5a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.035 238945 DEBUG oslo_concurrency.lockutils [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.035 238945 DEBUG oslo_concurrency.lockutils [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.035 238945 DEBUG nova.network.neutron [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Refreshing network info cache for port 27f90ae7-2cc0-4208-a70d-88c06320e5a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:18:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:18:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/106744337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:18:03 np0005597378 systemd[1]: run-netns-ovnmeta\x2d4ff1fc04\x2d0fb7\x2d4fd3\x2d8eb6\x2d08eae05f7148.mount: Deactivated successfully.
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.268 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.746 238945 DEBUG nova.network.neutron [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.761 238945 INFO nova.compute.manager [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Took 1.18 seconds to deallocate network for instance.#033[00m
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.807 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 246 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 34 KiB/s wr, 53 op/s
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.820 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.837 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.875 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.876 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.876 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:03 np0005597378 nova_compute[238941]: 2026-01-27 14:18:03.968 238945 DEBUG oslo_concurrency.processutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:18:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:18:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422866648' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:18:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.568 238945 DEBUG oslo_concurrency.processutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.574 238945 DEBUG nova.compute.provider_tree [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.588 238945 DEBUG nova.scheduler.client.report [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.606 238945 DEBUG nova.network.neutron [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updated VIF entry in instance network info cache for port 27f90ae7-2cc0-4208-a70d-88c06320e5a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.607 238945 DEBUG nova.network.neutron [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Updating instance_info_cache with network_info: [{"id": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "address": "fa:16:3e:e2:d7:02", "network": {"id": "a1ef49dc-ad74-4940-a002-d8d212c0abde", "bridge": "br-int", "label": "tempest-network-smoke--1062681754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27f90ae7-2c", "ovs_interfaceid": "27f90ae7-2cc0-4208-a70d-88c06320e5a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "32def379-eb10-485a-99b5-d69fe5f3b228", "address": "fa:16:3e:7c:d4:9f", "network": {"id": "4ff1fc04-0fb7-4fd3-8eb6-08eae05f7148", "bridge": "br-int", "label": "tempest-network-smoke--2112090247", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7c:d49f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32def379-eb", "ovs_interfaceid": "32def379-eb10-485a-99b5-d69fe5f3b228", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.610 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.627 238945 DEBUG oslo_concurrency.lockutils [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-851e0dd0-2021-44f6-8c51-af4bc91e02f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.627 238945 DEBUG nova.compute.manager [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Received event network-changed-c7e0af25-155a-4e70-a998-b6873c027009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.628 238945 DEBUG nova.compute.manager [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Refreshing instance network info cache due to event network-changed-c7e0af25-155a-4e70-a998-b6873c027009. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.628 238945 DEBUG oslo_concurrency.lockutils [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.628 238945 DEBUG oslo_concurrency.lockutils [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.628 238945 DEBUG nova.network.neutron [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Refreshing network info cache for port c7e0af25-155a-4e70-a998-b6873c027009 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.640 238945 INFO nova.scheduler.client.report [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 851e0dd0-2021-44f6-8c51-af4bc91e02f2#033[00m
Jan 27 09:18:04 np0005597378 nova_compute[238941]: 2026-01-27 14:18:04.708 238945 DEBUG oslo_concurrency.lockutils [None req-d32ea264-5af5-4638-9d5f-560c2ab62a49 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "851e0dd0-2021-44f6-8c51-af4bc91e02f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 188 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 34 KiB/s wr, 124 op/s
Jan 27 09:18:05 np0005597378 nova_compute[238941]: 2026-01-27 14:18:05.112 238945 DEBUG nova.compute.manager [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-vif-deleted-32def379-eb10-485a-99b5-d69fe5f3b228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:18:05 np0005597378 nova_compute[238941]: 2026-01-27 14:18:05.112 238945 INFO nova.compute.manager [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Neutron deleted interface 32def379-eb10-485a-99b5-d69fe5f3b228; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:18:05 np0005597378 nova_compute[238941]: 2026-01-27 14:18:05.112 238945 DEBUG nova.network.neutron [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Jan 27 09:18:05 np0005597378 nova_compute[238941]: 2026-01-27 14:18:05.114 238945 DEBUG nova.compute.manager [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Detach interface failed, port_id=32def379-eb10-485a-99b5-d69fe5f3b228, reason: Instance 851e0dd0-2021-44f6-8c51-af4bc91e02f2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 09:18:05 np0005597378 nova_compute[238941]: 2026-01-27 14:18:05.115 238945 DEBUG nova.compute.manager [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Received event network-vif-deleted-27f90ae7-2cc0-4208-a70d-88c06320e5a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:18:05 np0005597378 nova_compute[238941]: 2026-01-27 14:18:05.115 238945 INFO nova.compute.manager [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Neutron deleted interface 27f90ae7-2cc0-4208-a70d-88c06320e5a3; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:18:05 np0005597378 nova_compute[238941]: 2026-01-27 14:18:05.115 238945 DEBUG nova.network.neutron [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Jan 27 09:18:05 np0005597378 nova_compute[238941]: 2026-01-27 14:18:05.117 238945 DEBUG nova.compute.manager [req-d592b95c-f820-4589-9a79-b72bfc4490ee req-eb8290da-bbda-4157-8fa2-2709fe1733ea 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Detach interface failed, port_id=27f90ae7-2cc0-4208-a70d-88c06320e5a3, reason: Instance 851e0dd0-2021-44f6-8c51-af4bc91e02f2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 09:18:06 np0005597378 nova_compute[238941]: 2026-01-27 14:18:06.687 238945 DEBUG nova.network.neutron [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Updated VIF entry in instance network info cache for port c7e0af25-155a-4e70-a998-b6873c027009. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:18:06 np0005597378 nova_compute[238941]: 2026-01-27 14:18:06.688 238945 DEBUG nova.network.neutron [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Updating instance_info_cache with network_info: [{"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:18:06 np0005597378 nova_compute[238941]: 2026-01-27 14:18:06.711 238945 DEBUG oslo_concurrency.lockutils [req-dc8c10af-3a26-4001-9b2a-74c3ba059873 req-d8dcca1c-a84b-416d-a522-a4d6b1f2f7da 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dcd76996-d627-4d51-8860-9ce1dcefbb7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:18:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 123 op/s
Jan 27 09:18:07 np0005597378 nova_compute[238941]: 2026-01-27 14:18:07.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:07 np0005597378 nova_compute[238941]: 2026-01-27 14:18:07.672 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:08 np0005597378 nova_compute[238941]: 2026-01-27 14:18:08.877 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:18:08 np0005597378 nova_compute[238941]: 2026-01-27 14:18:08.877 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:18:08 np0005597378 nova_compute[238941]: 2026-01-27 14:18:08.919 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523473.9189448, 903e2fd2-4c0a-486c-9be2-3e2844ea09aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:18:08 np0005597378 nova_compute[238941]: 2026-01-27 14:18:08.919 238945 INFO nova.compute.manager [-] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:18:08 np0005597378 nova_compute[238941]: 2026-01-27 14:18:08.940 238945 DEBUG nova.compute.manager [None req-a525dc5b-5c5c-44b0-99dd-20f3b5ff9840 - - - - - -] [instance: 903e2fd2-4c0a-486c-9be2-3e2844ea09aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:18:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 KiB/s wr, 101 op/s
Jan 27 09:18:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.6 KiB/s wr, 96 op/s
Jan 27 09:18:11 np0005597378 nova_compute[238941]: 2026-01-27 14:18:11.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:18:11 np0005597378 nova_compute[238941]: 2026-01-27 14:18:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:18:11 np0005597378 nova_compute[238941]: 2026-01-27 14:18:11.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:18:11 np0005597378 nova_compute[238941]: 2026-01-27 14:18:11.602 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:18:11 np0005597378 nova_compute[238941]: 2026-01-27 14:18:11.602 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:18:11 np0005597378 nova_compute[238941]: 2026-01-27 14:18:11.602 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:18:12 np0005597378 nova_compute[238941]: 2026-01-27 14:18:12.202 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:12 np0005597378 nova_compute[238941]: 2026-01-27 14:18:12.673 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 9.6 KiB/s wr, 83 op/s
Jan 27 09:18:13 np0005597378 nova_compute[238941]: 2026-01-27 14:18:13.208 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updating instance_info_cache with network_info: [{"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:18:13 np0005597378 nova_compute[238941]: 2026-01-27 14:18:13.225 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-5b3093c8-a99f-45d8-b612-447b6a5412c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:18:13 np0005597378 nova_compute[238941]: 2026-01-27 14:18:13.225 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:18:13 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:13Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:0a:41 10.100.0.14
Jan 27 09:18:13 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:13Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:0a:41 10.100.0.14
Jan 27 09:18:13 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:13Z|01326|binding|INFO|Releasing lport b0077a9b-7ee7-4f6c-9114-22d1ebb8d9b8 from this chassis (sb_readonly=0)
Jan 27 09:18:13 np0005597378 nova_compute[238941]: 2026-01-27 14:18:13.501 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 190 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Jan 27 09:18:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 200 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 27 09:18:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:18:17
Jan 27 09:18:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:18:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:18:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'images', '.rgw.root', 'backups', '.mgr']
Jan 27 09:18:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:18:17 np0005597378 nova_compute[238941]: 2026-01-27 14:18:17.167 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523482.1656342, 851e0dd0-2021-44f6-8c51-af4bc91e02f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:18:17 np0005597378 nova_compute[238941]: 2026-01-27 14:18:17.167 238945 INFO nova.compute.manager [-] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:18:17 np0005597378 nova_compute[238941]: 2026-01-27 14:18:17.191 238945 DEBUG nova.compute.manager [None req-b6c8cad0-5ef1-4fc8-8415-a5f3e91b0094 - - - - - -] [instance: 851e0dd0-2021-44f6-8c51-af4bc91e02f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:18:17 np0005597378 nova_compute[238941]: 2026-01-27 14:18:17.205 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:17 np0005597378 nova_compute[238941]: 2026-01-27 14:18:17.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:18:17 np0005597378 nova_compute[238941]: 2026-01-27 14:18:17.675 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:18:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:18:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:18:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:18:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:18:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:18:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:18:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:18:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:18:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:18:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:18:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:18:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:18:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:18:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:18:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:18:18 np0005597378 nova_compute[238941]: 2026-01-27 14:18:18.621 238945 INFO nova.compute.manager [None req-bc109405-6dbe-437a-ad70-3ffd13aca3e1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Get console output#033[00m
Jan 27 09:18:18 np0005597378 nova_compute[238941]: 2026-01-27 14:18:18.627 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:18:18 np0005597378 nova_compute[238941]: 2026-01-27 14:18:18.935 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:18 np0005597378 nova_compute[238941]: 2026-01-27 14:18:18.935 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:18 np0005597378 nova_compute[238941]: 2026-01-27 14:18:18.936 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:18 np0005597378 nova_compute[238941]: 2026-01-27 14:18:18.936 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:18 np0005597378 nova_compute[238941]: 2026-01-27 14:18:18.936 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:18 np0005597378 nova_compute[238941]: 2026-01-27 14:18:18.937 238945 INFO nova.compute.manager [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Terminating instance#033[00m
Jan 27 09:18:18 np0005597378 nova_compute[238941]: 2026-01-27 14:18:18.938 238945 DEBUG nova.compute.manager [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:18:18 np0005597378 kernel: tapc7e0af25-15 (unregistering): left promiscuous mode
Jan 27 09:18:18 np0005597378 NetworkManager[48904]: <info>  [1769523498.9854] device (tapc7e0af25-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:18:18 np0005597378 nova_compute[238941]: 2026-01-27 14:18:18.996 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:18Z|01327|binding|INFO|Releasing lport c7e0af25-155a-4e70-a998-b6873c027009 from this chassis (sb_readonly=0)
Jan 27 09:18:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:18Z|01328|binding|INFO|Setting lport c7e0af25-155a-4e70-a998-b6873c027009 down in Southbound
Jan 27 09:18:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:18Z|01329|binding|INFO|Removing iface tapc7e0af25-15 ovn-installed in OVS
Jan 27 09:18:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:19Z|01330|binding|INFO|Releasing lport b0077a9b-7ee7-4f6c-9114-22d1ebb8d9b8 from this chassis (sb_readonly=0)
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.010 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:0a:41 10.100.0.14'], port_security=['fa:16:3e:19:0a:41 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dcd76996-d627-4d51-8860-9ce1dcefbb7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-998c1efd-c4d9-4646-b881-3c79abc13336', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4bc9c7a-795c-4d0d-88e5-37500aeca13a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=00e7300d-3433-4606-9131-5e74b7c09c27, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c7e0af25-155a-4e70-a998-b6873c027009) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.011 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c7e0af25-155a-4e70-a998-b6873c027009 in datapath 998c1efd-c4d9-4646-b881-3c79abc13336 unbound from our chassis#033[00m
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.013 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 998c1efd-c4d9-4646-b881-3c79abc13336#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.013 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.031 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e93a00-8223-4807-98c1-2561f919f475]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 200 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 27 09:18:19 np0005597378 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Jan 27 09:18:19 np0005597378 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007d.scope: Consumed 12.981s CPU time.
Jan 27 09:18:19 np0005597378 systemd-machined[207425]: Machine qemu-157-instance-0000007d terminated.
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.066 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2c04a1ee-7959-4294-88d5-ce33db02773b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.069 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f53003bf-0a3e-4a00-8c96-797dfa9c5862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.096 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce4f708-c62c-4eab-b3ce-03b0a3ec0047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.110 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6f09ae03-3d3b-4b5d-8579-83e6f02a3816]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap998c1efd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:9d:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606998, 'reachable_time': 21184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354320, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.126 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0df83e78-d7b8-4cc2-8c1e-978d2cf70f95]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap998c1efd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607011, 'tstamp': 607011}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354321, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap998c1efd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607014, 'tstamp': 607014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354321, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.127 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap998c1efd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.129 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.132 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.133 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap998c1efd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.133 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.133 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap998c1efd-c0, col_values=(('external_ids', {'iface-id': 'b0077a9b-7ee7-4f6c-9114-22d1ebb8d9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:18:19 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:19.133 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.155 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.160 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.164 238945 INFO nova.virt.libvirt.driver [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Instance destroyed successfully.#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.164 238945 DEBUG nova.objects.instance [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid dcd76996-d627-4d51-8860-9ce1dcefbb7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.177 238945 DEBUG nova.virt.libvirt.vif [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1292571685',display_name='tempest-TestNetworkBasicOps-server-1292571685',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1292571685',id=125,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPcQ6iPHxjx5eMagbDrGG9rMyCE4ElFPNFC7P8D70WeVLMUO6egwKUZDgsveGWRPjOVFr8hQ/AJ2QI5jXuhGCtAaGFdbvz7JRIiI1o4raZ1PCI5I5iQCSrMRLtfefQbhGw==',key_name='tempest-TestNetworkBasicOps-476831281',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:17:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-k00c9ci8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:17:58Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=dcd76996-d627-4d51-8860-9ce1dcefbb7f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.179 238945 DEBUG nova.network.os_vif_util [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "c7e0af25-155a-4e70-a998-b6873c027009", "address": "fa:16:3e:19:0a:41", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7e0af25-15", "ovs_interfaceid": "c7e0af25-155a-4e70-a998-b6873c027009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.180 238945 DEBUG nova.network.os_vif_util [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.180 238945 DEBUG os_vif [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.182 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.182 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7e0af25-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.185 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.187 238945 INFO os_vif [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:0a:41,bridge_name='br-int',has_traffic_filtering=True,id=c7e0af25-155a-4e70-a998-b6873c027009,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7e0af25-15')#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.418 238945 INFO nova.virt.libvirt.driver [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Deleting instance files /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f_del#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.418 238945 INFO nova.virt.libvirt.driver [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Deletion of /var/lib/nova/instances/dcd76996-d627-4d51-8860-9ce1dcefbb7f_del complete#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.476 238945 INFO nova.compute.manager [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Took 0.54 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.477 238945 DEBUG oslo.service.loopingcall [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.477 238945 DEBUG nova.compute.manager [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:18:19 np0005597378 nova_compute[238941]: 2026-01-27 14:18:19.477 238945 DEBUG nova.network.neutron [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:18:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:20 np0005597378 nova_compute[238941]: 2026-01-27 14:18:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:18:20 np0005597378 nova_compute[238941]: 2026-01-27 14:18:20.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:18:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 169 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 27 09:18:21 np0005597378 nova_compute[238941]: 2026-01-27 14:18:21.287 238945 DEBUG nova.network.neutron [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:18:21 np0005597378 nova_compute[238941]: 2026-01-27 14:18:21.340 238945 INFO nova.compute.manager [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Took 1.86 seconds to deallocate network for instance.#033[00m
Jan 27 09:18:21 np0005597378 nova_compute[238941]: 2026-01-27 14:18:21.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:18:21 np0005597378 nova_compute[238941]: 2026-01-27 14:18:21.403 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:21 np0005597378 nova_compute[238941]: 2026-01-27 14:18:21.403 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:21 np0005597378 nova_compute[238941]: 2026-01-27 14:18:21.436 238945 DEBUG nova.compute.manager [req-660da04d-a548-4297-8c82-f15ca2cace7d req-e1a752d2-0a7c-4f09-81be-ecb14ed6ec9a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Received event network-vif-deleted-c7e0af25-155a-4e70-a998-b6873c027009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:18:21 np0005597378 nova_compute[238941]: 2026-01-27 14:18:21.488 238945 DEBUG oslo_concurrency.processutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:18:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:18:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/783670946' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:18:22 np0005597378 nova_compute[238941]: 2026-01-27 14:18:22.044 238945 DEBUG oslo_concurrency.processutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:18:22 np0005597378 nova_compute[238941]: 2026-01-27 14:18:22.049 238945 DEBUG nova.compute.provider_tree [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:18:22 np0005597378 nova_compute[238941]: 2026-01-27 14:18:22.134 238945 DEBUG nova.scheduler.client.report [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:18:22 np0005597378 nova_compute[238941]: 2026-01-27 14:18:22.224 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:22 np0005597378 nova_compute[238941]: 2026-01-27 14:18:22.412 238945 INFO nova.scheduler.client.report [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance dcd76996-d627-4d51-8860-9ce1dcefbb7f#033[00m
Jan 27 09:18:22 np0005597378 nova_compute[238941]: 2026-01-27 14:18:22.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:22 np0005597378 nova_compute[238941]: 2026-01-27 14:18:22.739 238945 DEBUG oslo_concurrency.lockutils [None req-4f6dea81-8aa2-45e5-a367-6b521c8ac98f 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "dcd76996-d627-4d51-8860-9ce1dcefbb7f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 169 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 27 09:18:24 np0005597378 nova_compute[238941]: 2026-01-27 14:18:24.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 121 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.166 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "5b3093c8-a99f-45d8-b612-447b6a5412c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.167 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.167 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.168 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.168 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.169 238945 INFO nova.compute.manager [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Terminating instance#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.170 238945 DEBUG nova.compute.manager [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:18:25 np0005597378 kernel: tap674bbd90-8e (unregistering): left promiscuous mode
Jan 27 09:18:25 np0005597378 NetworkManager[48904]: <info>  [1769523505.2207] device (tap674bbd90-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.225 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:25Z|01331|binding|INFO|Releasing lport 674bbd90-8ebb-485f-bccf-39535e0b1f3e from this chassis (sb_readonly=0)
Jan 27 09:18:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:25Z|01332|binding|INFO|Setting lport 674bbd90-8ebb-485f-bccf-39535e0b1f3e down in Southbound
Jan 27 09:18:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:18:25Z|01333|binding|INFO|Removing iface tap674bbd90-8e ovn-installed in OVS
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.234 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:47:86 10.100.0.6'], port_security=['fa:16:3e:ac:47:86 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5b3093c8-a99f-45d8-b612-447b6a5412c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-998c1efd-c4d9-4646-b881-3c79abc13336', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5556729e-c674-4885-93a2-19d3e66349dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=00e7300d-3433-4606-9131-5e74b7c09c27, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=674bbd90-8ebb-485f-bccf-39535e0b1f3e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.235 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 674bbd90-8ebb-485f-bccf-39535e0b1f3e in datapath 998c1efd-c4d9-4646-b881-3c79abc13336 unbound from our chassis#033[00m
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.236 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 998c1efd-c4d9-4646-b881-3c79abc13336, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.237 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a870f794-82d7-48ea-87c8-c81741cd3ca6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.238 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336 namespace which is not needed anymore#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.245 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:25 np0005597378 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Jan 27 09:18:25 np0005597378 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Consumed 15.724s CPU time.
Jan 27 09:18:25 np0005597378 systemd-machined[207425]: Machine qemu-155-instance-0000007b terminated.
Jan 27 09:18:25 np0005597378 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [NOTICE]   (352473) : haproxy version is 2.8.14-c23fe91
Jan 27 09:18:25 np0005597378 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [NOTICE]   (352473) : path to executable is /usr/sbin/haproxy
Jan 27 09:18:25 np0005597378 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [WARNING]  (352473) : Exiting Master process...
Jan 27 09:18:25 np0005597378 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [WARNING]  (352473) : Exiting Master process...
Jan 27 09:18:25 np0005597378 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [ALERT]    (352473) : Current worker (352475) exited with code 143 (Terminated)
Jan 27 09:18:25 np0005597378 neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336[352469]: [WARNING]  (352473) : All workers exited. Exiting... (0)
Jan 27 09:18:25 np0005597378 systemd[1]: libpod-e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6.scope: Deactivated successfully.
Jan 27 09:18:25 np0005597378 podman[354401]: 2026-01-27 14:18:25.374623363 +0000 UTC m=+0.048524456 container died e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:18:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6-userdata-shm.mount: Deactivated successfully.
Jan 27 09:18:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-95177fbfd83e72f9cf40a6761ac0ab496e7395a1fd358fb4e054c5fc63f4474c-merged.mount: Deactivated successfully.
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.403 238945 INFO nova.virt.libvirt.driver [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Instance destroyed successfully.#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.404 238945 DEBUG nova.objects.instance [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 5b3093c8-a99f-45d8-b612-447b6a5412c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:18:25 np0005597378 podman[354401]: 2026-01-27 14:18:25.41647648 +0000 UTC m=+0.090377573 container cleanup e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:18:25 np0005597378 systemd[1]: libpod-conmon-e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6.scope: Deactivated successfully.
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.430 238945 DEBUG nova.virt.libvirt.vif [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:16:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1403334171',display_name='tempest-TestNetworkBasicOps-server-1403334171',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1403334171',id=123,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFC1cqJrt5q8dhbfCopKr7DkgHsb6klvfm+5aRYlinfR2B21lI9vu1jBorPfSj3isXMHvTevkrNKtSRdTzcTp9q4s/p/QJbfs+zmsxfAbbF0PDMabDhurjSdiaAYWO65Q==',key_name='tempest-TestNetworkBasicOps-940230937',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:17:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-9ajp9grx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:17:11Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=5b3093c8-a99f-45d8-b612-447b6a5412c5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.431 238945 DEBUG nova.network.os_vif_util [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "address": "fa:16:3e:ac:47:86", "network": {"id": "998c1efd-c4d9-4646-b881-3c79abc13336", "bridge": "br-int", "label": "tempest-network-smoke--1562643028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap674bbd90-8e", "ovs_interfaceid": "674bbd90-8ebb-485f-bccf-39535e0b1f3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.432 238945 DEBUG nova.network.os_vif_util [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.432 238945 DEBUG os_vif [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.435 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.435 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap674bbd90-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.437 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.439 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.441 238945 INFO os_vif [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:47:86,bridge_name='br-int',has_traffic_filtering=True,id=674bbd90-8ebb-485f-bccf-39535e0b1f3e,network=Network(998c1efd-c4d9-4646-b881-3c79abc13336),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap674bbd90-8e')#033[00m
Jan 27 09:18:25 np0005597378 podman[354439]: 2026-01-27 14:18:25.483786397 +0000 UTC m=+0.046925794 container remove e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.489 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1eae981a-a25a-4984-9181-7cb6433c831a]: (4, ('Tue Jan 27 02:18:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336 (e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6)\ne30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6\nTue Jan 27 02:18:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336 (e30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6)\ne30f0e30ffca256737449c2bf1004349deeb9e6e352825a2e99591bd97f68be6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.490 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b65b224d-38be-4942-8452-a484a041aa18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.491 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap998c1efd-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:18:25 np0005597378 kernel: tap998c1efd-c0: left promiscuous mode
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.509 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9a2402-24ad-45cc-a61e-7f31cbb44be4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.520 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[670ebc58-d549-4b6e-a4a9-6f3bc0f77959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.522 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2debbe-eb26-4606-ba3f-9a3eb9be691c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.538 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac574269-ffdc-485d-bfc3-1a0743b99476]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606991, 'reachable_time': 17961, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354471, 'error': None, 'target': 'ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.541 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-998c1efd-c4d9-4646-b881-3c79abc13336 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:18:25 np0005597378 systemd[1]: run-netns-ovnmeta\x2d998c1efd\x2dc4d9\x2d4646\x2db881\x2d3c79abc13336.mount: Deactivated successfully.
Jan 27 09:18:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:25.542 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[6018555d-b7c4-487a-8aea-15fb1c34addb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.716 238945 INFO nova.virt.libvirt.driver [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Deleting instance files /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5_del#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.717 238945 INFO nova.virt.libvirt.driver [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Deletion of /var/lib/nova/instances/5b3093c8-a99f-45d8-b612-447b6a5412c5_del complete#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.781 238945 INFO nova.compute.manager [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.782 238945 DEBUG oslo.service.loopingcall [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.783 238945 DEBUG nova.compute.manager [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:18:25 np0005597378 nova_compute[238941]: 2026-01-27 14:18:25.783 238945 DEBUG nova.network.neutron [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:18:26 np0005597378 nova_compute[238941]: 2026-01-27 14:18:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:18:26 np0005597378 nova_compute[238941]: 2026-01-27 14:18:26.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:26 np0005597378 nova_compute[238941]: 2026-01-27 14:18:26.497 238945 DEBUG nova.network.neutron [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:18:26 np0005597378 nova_compute[238941]: 2026-01-27 14:18:26.567 238945 INFO nova.compute.manager [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Took 0.78 seconds to deallocate network for instance.#033[00m
Jan 27 09:18:26 np0005597378 nova_compute[238941]: 2026-01-27 14:18:26.631 238945 DEBUG nova.compute.manager [req-8b1d7d1f-3c9d-4a88-8874-e64ecded085c req-95cf4fc8-2704-49f2-ada6-547d4b05888f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Received event network-vif-deleted-674bbd90-8ebb-485f-bccf-39535e0b1f3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:18:26 np0005597378 nova_compute[238941]: 2026-01-27 14:18:26.647 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:26 np0005597378 nova_compute[238941]: 2026-01-27 14:18:26.647 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:26 np0005597378 nova_compute[238941]: 2026-01-27 14:18:26.694 238945 DEBUG oslo_concurrency.processutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 121 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 981 KiB/s wr, 60 op/s
Jan 27 09:18:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:18:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880390883' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:18:27 np0005597378 nova_compute[238941]: 2026-01-27 14:18:27.241 238945 DEBUG oslo_concurrency.processutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:18:27 np0005597378 nova_compute[238941]: 2026-01-27 14:18:27.247 238945 DEBUG nova.compute.provider_tree [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:18:27 np0005597378 nova_compute[238941]: 2026-01-27 14:18:27.405 238945 DEBUG nova.scheduler.client.report [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:18:27 np0005597378 nova_compute[238941]: 2026-01-27 14:18:27.530 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:27 np0005597378 nova_compute[238941]: 2026-01-27 14:18:27.678 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007727044614690149 of space, bias 1.0, pg target 0.23181133844070448 quantized to 32 (current 32)
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006694095072790057 of space, bias 1.0, pg target 0.20082285218370172 quantized to 32 (current 32)
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:18:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:18:27 np0005597378 nova_compute[238941]: 2026-01-27 14:18:27.833 238945 INFO nova.scheduler.client.report [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 5b3093c8-a99f-45d8-b612-447b6a5412c5#033[00m
Jan 27 09:18:28 np0005597378 nova_compute[238941]: 2026-01-27 14:18:28.183 238945 DEBUG oslo_concurrency.lockutils [None req-a14e6dd2-3687-4a74-b333-805efc026e87 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "5b3093c8-a99f-45d8-b612-447b6a5412c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 57 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 14 KiB/s wr, 35 op/s
Jan 27 09:18:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:29 np0005597378 podman[354496]: 2026-01-27 14:18:29.713188494 +0000 UTC m=+0.054257629 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 09:18:30 np0005597378 nova_compute[238941]: 2026-01-27 14:18:30.437 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 55 op/s
Jan 27 09:18:32 np0005597378 nova_compute[238941]: 2026-01-27 14:18:32.233 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:32 np0005597378 nova_compute[238941]: 2026-01-27 14:18:32.680 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:32 np0005597378 podman[354515]: 2026-01-27 14:18:32.733025078 +0000 UTC m=+0.070450502 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 27 09:18:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 42 op/s
Jan 27 09:18:33 np0005597378 nova_compute[238941]: 2026-01-27 14:18:33.287 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:33 np0005597378 nova_compute[238941]: 2026-01-27 14:18:33.413 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:34 np0005597378 nova_compute[238941]: 2026-01-27 14:18:34.163 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523499.1618607, dcd76996-d627-4d51-8860-9ce1dcefbb7f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:18:34 np0005597378 nova_compute[238941]: 2026-01-27 14:18:34.163 238945 INFO nova.compute.manager [-] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:18:34 np0005597378 nova_compute[238941]: 2026-01-27 14:18:34.184 238945 DEBUG nova.compute.manager [None req-6d633fbf-4a8e-4567-b000-29cb2b218845 - - - - - -] [instance: dcd76996-d627-4d51-8860-9ce1dcefbb7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:18:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 42 op/s
Jan 27 09:18:35 np0005597378 nova_compute[238941]: 2026-01-27 14:18:35.439 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 09:18:37 np0005597378 nova_compute[238941]: 2026-01-27 14:18:37.683 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 09:18:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:39.671 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:18:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:39.672 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:18:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:39.672 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:18:39 np0005597378 nova_compute[238941]: 2026-01-27 14:18:39.672 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:40 np0005597378 nova_compute[238941]: 2026-01-27 14:18:40.403 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523505.3983688, 5b3093c8-a99f-45d8-b612-447b6a5412c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:18:40 np0005597378 nova_compute[238941]: 2026-01-27 14:18:40.403 238945 INFO nova.compute.manager [-] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:18:40 np0005597378 nova_compute[238941]: 2026-01-27 14:18:40.429 238945 DEBUG nova.compute.manager [None req-188390a0-f1ea-4a56-8ace-d2e6e490691c - - - - - -] [instance: 5b3093c8-a99f-45d8-b612-447b6a5412c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:18:40 np0005597378 nova_compute[238941]: 2026-01-27 14:18:40.440 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 341 B/s wr, 20 op/s
Jan 27 09:18:42 np0005597378 nova_compute[238941]: 2026-01-27 14:18:42.684 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:42 np0005597378 nova_compute[238941]: 2026-01-27 14:18:42.700 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:42 np0005597378 nova_compute[238941]: 2026-01-27 14:18:42.701 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:42 np0005597378 nova_compute[238941]: 2026-01-27 14:18:42.714 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:18:42 np0005597378 nova_compute[238941]: 2026-01-27 14:18:42.810 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:42 np0005597378 nova_compute[238941]: 2026-01-27 14:18:42.811 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:42 np0005597378 nova_compute[238941]: 2026-01-27 14:18:42.822 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:18:42 np0005597378 nova_compute[238941]: 2026-01-27 14:18:42.822 238945 INFO nova.compute.claims [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:18:42 np0005597378 nova_compute[238941]: 2026-01-27 14:18:42.995 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:18:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:18:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:18:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23747561' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.602 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.609 238945 DEBUG nova.compute.provider_tree [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.625 238945 DEBUG nova.scheduler.client.report [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.649 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.650 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.696 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.697 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.714 238945 INFO nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.732 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.816 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.817 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.818 238945 INFO nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Creating image(s)
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.837 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.858 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.880 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.884 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.934 238945 DEBUG nova.policy [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.961 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.962 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.962 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.963 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.982 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:18:43 np0005597378 nova_compute[238941]: 2026-01-27 14:18:43.984 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 98452226-e32f-475f-814f-d0eba538b8ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:18:44 np0005597378 nova_compute[238941]: 2026-01-27 14:18:44.240 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 98452226-e32f-475f-814f-d0eba538b8ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:18:44 np0005597378 nova_compute[238941]: 2026-01-27 14:18:44.313 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 09:18:44 np0005597378 nova_compute[238941]: 2026-01-27 14:18:44.490 238945 DEBUG nova.objects.instance [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 98452226-e32f-475f-814f-d0eba538b8ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:18:44 np0005597378 nova_compute[238941]: 2026-01-27 14:18:44.507 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 09:18:44 np0005597378 nova_compute[238941]: 2026-01-27 14:18:44.507 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Ensure instance console log exists: /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 09:18:44 np0005597378 nova_compute[238941]: 2026-01-27 14:18:44.508 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:18:44 np0005597378 nova_compute[238941]: 2026-01-27 14:18:44.508 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:18:44 np0005597378 nova_compute[238941]: 2026-01-27 14:18:44.508 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:18:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 41 MiB data, 803 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:18:45 np0005597378 nova_compute[238941]: 2026-01-27 14:18:45.441 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:18:45 np0005597378 nova_compute[238941]: 2026-01-27 14:18:45.851 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Successfully created port: 4e0bfd53-3592-45ef-aef8-c273dbee749b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 09:18:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:46.323 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:18:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:46.323 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:18:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:18:46.324 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:18:47 np0005597378 podman[354824]: 2026-01-27 14:18:47.039101445 +0000 UTC m=+0.186251862 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:18:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 53 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 609 KiB/s wr, 24 op/s
Jan 27 09:18:47 np0005597378 podman[354824]: 2026-01-27 14:18:47.133183196 +0000 UTC m=+0.280333593 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 27 09:18:47 np0005597378 nova_compute[238941]: 2026-01-27 14:18:47.173 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Successfully created port: 0bd6bb45-6845-4dd7-abd7-26549236c21b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 09:18:47 np0005597378 nova_compute[238941]: 2026-01-27 14:18:47.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:18:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:18:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:18:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:18:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:18:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:18:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:18:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:18:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:18:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:18:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:18:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:18:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:18:49 np0005597378 podman[355154]: 2026-01-27 14:18:49.296783305 +0000 UTC m=+0.028756158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:18:49 np0005597378 nova_compute[238941]: 2026-01-27 14:18:49.477 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Successfully updated port: 4e0bfd53-3592-45ef-aef8-c273dbee749b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 09:18:49 np0005597378 podman[355154]: 2026-01-27 14:18:49.481533487 +0000 UTC m=+0.213506350 container create 4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:18:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:18:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:18:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:18:49 np0005597378 systemd[1]: Started libpod-conmon-4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642.scope.
Jan 27 09:18:49 np0005597378 nova_compute[238941]: 2026-01-27 14:18:49.605 238945 DEBUG nova.compute.manager [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-changed-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:18:49 np0005597378 nova_compute[238941]: 2026-01-27 14:18:49.605 238945 DEBUG nova.compute.manager [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing instance network info cache due to event network-changed-4e0bfd53-3592-45ef-aef8-c273dbee749b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:18:49 np0005597378 nova_compute[238941]: 2026-01-27 14:18:49.606 238945 DEBUG oslo_concurrency.lockutils [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:18:49 np0005597378 nova_compute[238941]: 2026-01-27 14:18:49.606 238945 DEBUG oslo_concurrency.lockutils [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:18:49 np0005597378 nova_compute[238941]: 2026-01-27 14:18:49.606 238945 DEBUG nova.network.neutron [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing network info cache for port 4e0bfd53-3592-45ef-aef8-c273dbee749b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 09:18:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:18:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:49 np0005597378 podman[355154]: 2026-01-27 14:18:49.836987734 +0000 UTC m=+0.568960587 container init 4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chaplygin, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:18:49 np0005597378 podman[355154]: 2026-01-27 14:18:49.849086578 +0000 UTC m=+0.581059491 container start 4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chaplygin, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 09:18:49 np0005597378 admiring_chaplygin[355170]: 167 167
Jan 27 09:18:49 np0005597378 systemd[1]: libpod-4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642.scope: Deactivated successfully.
Jan 27 09:18:50 np0005597378 podman[355154]: 2026-01-27 14:18:50.086261358 +0000 UTC m=+0.818234211 container attach 4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:18:50 np0005597378 podman[355154]: 2026-01-27 14:18:50.087568753 +0000 UTC m=+0.819541586 container died 4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chaplygin, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:18:50 np0005597378 nova_compute[238941]: 2026-01-27 14:18:50.442 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:18:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay-de29951c174fc91a9404a5c1984bd19d649f160f3742768a63d43f947d3563c4-merged.mount: Deactivated successfully.
Jan 27 09:18:50 np0005597378 podman[355154]: 2026-01-27 14:18:50.599251531 +0000 UTC m=+1.331224364 container remove 4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 27 09:18:50 np0005597378 systemd[1]: libpod-conmon-4f70204fa0ba67416ccbe2738b8a05090d330f22fa0b2b046db515186092d642.scope: Deactivated successfully.
Jan 27 09:18:50 np0005597378 podman[355193]: 2026-01-27 14:18:50.790172227 +0000 UTC m=+0.069851686 container create 2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_ardinghelli, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:18:50 np0005597378 podman[355193]: 2026-01-27 14:18:50.744202379 +0000 UTC m=+0.023881858 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:18:50 np0005597378 systemd[1]: Started libpod-conmon-2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d.scope.
Jan 27 09:18:50 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:18:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c5e57d5ec75ccf93330e3aeb240b40b0a3b1419872feadd4ab010a9d47b66a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c5e57d5ec75ccf93330e3aeb240b40b0a3b1419872feadd4ab010a9d47b66a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c5e57d5ec75ccf93330e3aeb240b40b0a3b1419872feadd4ab010a9d47b66a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c5e57d5ec75ccf93330e3aeb240b40b0a3b1419872feadd4ab010a9d47b66a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c5e57d5ec75ccf93330e3aeb240b40b0a3b1419872feadd4ab010a9d47b66a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:51 np0005597378 podman[355193]: 2026-01-27 14:18:51.010306882 +0000 UTC m=+0.289986431 container init 2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:18:51 np0005597378 podman[355193]: 2026-01-27 14:18:51.024204233 +0000 UTC m=+0.303883692 container start 2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:18:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:18:51 np0005597378 podman[355193]: 2026-01-27 14:18:51.053658239 +0000 UTC m=+0.333337738 container attach 2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 09:18:51 np0005597378 frosty_ardinghelli[355210]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:18:51 np0005597378 frosty_ardinghelli[355210]: --> All data devices are unavailable
Jan 27 09:18:51 np0005597378 systemd[1]: libpod-2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d.scope: Deactivated successfully.
Jan 27 09:18:51 np0005597378 podman[355193]: 2026-01-27 14:18:51.525279647 +0000 UTC m=+0.804959136 container died 2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_ardinghelli, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 09:18:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay-62c5e57d5ec75ccf93330e3aeb240b40b0a3b1419872feadd4ab010a9d47b66a-merged.mount: Deactivated successfully.
Jan 27 09:18:51 np0005597378 nova_compute[238941]: 2026-01-27 14:18:51.656 238945 DEBUG nova.network.neutron [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:18:51 np0005597378 podman[355193]: 2026-01-27 14:18:51.713113061 +0000 UTC m=+0.992792520 container remove 2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_ardinghelli, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:18:51 np0005597378 systemd[1]: libpod-conmon-2fa0507943e8c164b6aa118bc11c2be492ac6581db8c55dfa1079a8723a9ae3d.scope: Deactivated successfully.
Jan 27 09:18:52 np0005597378 podman[355305]: 2026-01-27 14:18:52.160561084 +0000 UTC m=+0.051905596 container create 1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:18:52 np0005597378 systemd[1]: Started libpod-conmon-1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0.scope.
Jan 27 09:18:52 np0005597378 podman[355305]: 2026-01-27 14:18:52.132073273 +0000 UTC m=+0.023417795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:18:52 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:18:52 np0005597378 podman[355305]: 2026-01-27 14:18:52.265742861 +0000 UTC m=+0.157087383 container init 1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:18:52 np0005597378 podman[355305]: 2026-01-27 14:18:52.272172822 +0000 UTC m=+0.163517324 container start 1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:18:52 np0005597378 competent_tesla[355321]: 167 167
Jan 27 09:18:52 np0005597378 systemd[1]: libpod-1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0.scope: Deactivated successfully.
Jan 27 09:18:52 np0005597378 podman[355305]: 2026-01-27 14:18:52.284576504 +0000 UTC m=+0.175921026 container attach 1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:18:52 np0005597378 podman[355305]: 2026-01-27 14:18:52.284907642 +0000 UTC m=+0.176252144 container died 1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_tesla, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 09:18:52 np0005597378 systemd[1]: var-lib-containers-storage-overlay-954ae55d57c0aca3f6c6bb6f2ef7bef683460298ca5c4f59f1ddae21d584e33f-merged.mount: Deactivated successfully.
Jan 27 09:18:52 np0005597378 podman[355305]: 2026-01-27 14:18:52.402260945 +0000 UTC m=+0.293605447 container remove 1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_tesla, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:18:52 np0005597378 systemd[1]: libpod-conmon-1cb328229c353a58b92f1b1fbec22e35048a6b4eea58c837c80e92a2c5380fb0.scope: Deactivated successfully.
Jan 27 09:18:52 np0005597378 podman[355347]: 2026-01-27 14:18:52.567122815 +0000 UTC m=+0.049572963 container create bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:18:52 np0005597378 systemd[1]: Started libpod-conmon-bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95.scope.
Jan 27 09:18:52 np0005597378 podman[355347]: 2026-01-27 14:18:52.537386672 +0000 UTC m=+0.019836830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:18:52 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:18:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492f1423ad557854a5c3a6281f6da74fb7205631dd7057df59be996cf99e0ca8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492f1423ad557854a5c3a6281f6da74fb7205631dd7057df59be996cf99e0ca8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492f1423ad557854a5c3a6281f6da74fb7205631dd7057df59be996cf99e0ca8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492f1423ad557854a5c3a6281f6da74fb7205631dd7057df59be996cf99e0ca8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:52 np0005597378 podman[355347]: 2026-01-27 14:18:52.653763798 +0000 UTC m=+0.136213966 container init bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 09:18:52 np0005597378 podman[355347]: 2026-01-27 14:18:52.661774842 +0000 UTC m=+0.144224990 container start bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:18:52 np0005597378 podman[355347]: 2026-01-27 14:18:52.675036256 +0000 UTC m=+0.157486424 container attach bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 09:18:52 np0005597378 nova_compute[238941]: 2026-01-27 14:18:52.688 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:52 np0005597378 nova_compute[238941]: 2026-01-27 14:18:52.821 238945 DEBUG nova.network.neutron [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:18:52 np0005597378 nova_compute[238941]: 2026-01-27 14:18:52.845 238945 DEBUG oslo_concurrency.lockutils [req-528411f9-9631-4bfd-9000-71d491d4b1a6 req-0093a14a-3f79-4a7d-8c26-78b4bb207980 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]: {
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:    "0": [
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:        {
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "devices": [
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "/dev/loop3"
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            ],
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_name": "ceph_lv0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_size": "21470642176",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "name": "ceph_lv0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "tags": {
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.cluster_name": "ceph",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.crush_device_class": "",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.encrypted": "0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.objectstore": "bluestore",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.osd_id": "0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.type": "block",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.vdo": "0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.with_tpm": "0"
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            },
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "type": "block",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "vg_name": "ceph_vg0"
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:        }
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:    ],
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:    "1": [
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:        {
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "devices": [
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "/dev/loop4"
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            ],
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_name": "ceph_lv1",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_size": "21470642176",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "name": "ceph_lv1",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "tags": {
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.cluster_name": "ceph",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.crush_device_class": "",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.encrypted": "0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.objectstore": "bluestore",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.osd_id": "1",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.type": "block",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.vdo": "0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.with_tpm": "0"
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            },
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "type": "block",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "vg_name": "ceph_vg1"
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:        }
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:    ],
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:    "2": [
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:        {
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "devices": [
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "/dev/loop5"
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            ],
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_name": "ceph_lv2",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_size": "21470642176",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "name": "ceph_lv2",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "tags": {
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.cluster_name": "ceph",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.crush_device_class": "",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.encrypted": "0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.objectstore": "bluestore",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.osd_id": "2",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.type": "block",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.vdo": "0",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:                "ceph.with_tpm": "0"
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            },
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "type": "block",
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:            "vg_name": "ceph_vg2"
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:        }
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]:    ]
Jan 27 09:18:52 np0005597378 amazing_hofstadter[355364]: }
Jan 27 09:18:52 np0005597378 systemd[1]: libpod-bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95.scope: Deactivated successfully.
Jan 27 09:18:52 np0005597378 podman[355347]: 2026-01-27 14:18:52.981953157 +0000 UTC m=+0.464403305 container died bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:18:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay-492f1423ad557854a5c3a6281f6da74fb7205631dd7057df59be996cf99e0ca8-merged.mount: Deactivated successfully.
Jan 27 09:18:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:18:53 np0005597378 podman[355347]: 2026-01-27 14:18:53.130835442 +0000 UTC m=+0.613285590 container remove bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:18:53 np0005597378 systemd[1]: libpod-conmon-bac72d0818c9be0280048753a660bda960f88f460012e38aba7efc34b294bf95.scope: Deactivated successfully.
Jan 27 09:18:53 np0005597378 podman[355446]: 2026-01-27 14:18:53.623170702 +0000 UTC m=+0.086690435 container create ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_easley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:18:53 np0005597378 podman[355446]: 2026-01-27 14:18:53.562803091 +0000 UTC m=+0.026322824 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:18:53 np0005597378 systemd[1]: Started libpod-conmon-ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7.scope.
Jan 27 09:18:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:18:53 np0005597378 podman[355446]: 2026-01-27 14:18:53.7249598 +0000 UTC m=+0.188479553 container init ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 09:18:53 np0005597378 podman[355446]: 2026-01-27 14:18:53.732458969 +0000 UTC m=+0.195978702 container start ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_easley, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 27 09:18:53 np0005597378 trusting_easley[355464]: 167 167
Jan 27 09:18:53 np0005597378 systemd[1]: libpod-ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7.scope: Deactivated successfully.
Jan 27 09:18:53 np0005597378 podman[355446]: 2026-01-27 14:18:53.74968102 +0000 UTC m=+0.213200753 container attach ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 09:18:53 np0005597378 podman[355446]: 2026-01-27 14:18:53.750098931 +0000 UTC m=+0.213618664 container died ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_easley, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:18:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ab415b4fcaab14b722667db54e6f46874b8f5d0c31833e3d2483a5eb3a379995-merged.mount: Deactivated successfully.
Jan 27 09:18:53 np0005597378 podman[355446]: 2026-01-27 14:18:53.823453048 +0000 UTC m=+0.286972781 container remove ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:18:53 np0005597378 systemd[1]: libpod-conmon-ca2fcffd5dec93a0da6c3bba657118f17d0a95dab3b51ecb5fbea03fad5d72d7.scope: Deactivated successfully.
Jan 27 09:18:53 np0005597378 podman[355489]: 2026-01-27 14:18:53.989501411 +0000 UTC m=+0.039619329 container create fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 09:18:54 np0005597378 systemd[1]: Started libpod-conmon-fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd.scope.
Jan 27 09:18:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:18:54 np0005597378 podman[355489]: 2026-01-27 14:18:53.972207189 +0000 UTC m=+0.022325127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:18:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023c4c4f5abfa18f1d1d2c4466920f7a1d98705dd57947395ffe5e6537ad2387/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023c4c4f5abfa18f1d1d2c4466920f7a1d98705dd57947395ffe5e6537ad2387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023c4c4f5abfa18f1d1d2c4466920f7a1d98705dd57947395ffe5e6537ad2387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023c4c4f5abfa18f1d1d2c4466920f7a1d98705dd57947395ffe5e6537ad2387/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:18:54 np0005597378 podman[355489]: 2026-01-27 14:18:54.098657804 +0000 UTC m=+0.148775732 container init fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 09:18:54 np0005597378 podman[355489]: 2026-01-27 14:18:54.107425688 +0000 UTC m=+0.157543606 container start fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 09:18:54 np0005597378 podman[355489]: 2026-01-27 14:18:54.163368421 +0000 UTC m=+0.213486349 container attach fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:18:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:54 np0005597378 nova_compute[238941]: 2026-01-27 14:18:54.835 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Successfully updated port: 0bd6bb45-6845-4dd7-abd7-26549236c21b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:18:54 np0005597378 lvm[355584]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:18:54 np0005597378 lvm[355585]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:18:54 np0005597378 lvm[355584]: VG ceph_vg0 finished
Jan 27 09:18:54 np0005597378 lvm[355585]: VG ceph_vg1 finished
Jan 27 09:18:54 np0005597378 nova_compute[238941]: 2026-01-27 14:18:54.864 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:18:54 np0005597378 nova_compute[238941]: 2026-01-27 14:18:54.864 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:18:54 np0005597378 nova_compute[238941]: 2026-01-27 14:18:54.864 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:18:54 np0005597378 lvm[355587]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:18:54 np0005597378 lvm[355587]: VG ceph_vg2 finished
Jan 27 09:18:54 np0005597378 exciting_curie[355506]: {}
Jan 27 09:18:55 np0005597378 systemd[1]: libpod-fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd.scope: Deactivated successfully.
Jan 27 09:18:55 np0005597378 podman[355489]: 2026-01-27 14:18:55.00366249 +0000 UTC m=+1.053780398 container died fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 09:18:55 np0005597378 systemd[1]: libpod-fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd.scope: Consumed 1.356s CPU time.
Jan 27 09:18:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:18:55 np0005597378 nova_compute[238941]: 2026-01-27 14:18:55.174 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:18:55 np0005597378 nova_compute[238941]: 2026-01-27 14:18:55.262 238945 DEBUG nova.compute.manager [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-changed-0bd6bb45-6845-4dd7-abd7-26549236c21b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:18:55 np0005597378 nova_compute[238941]: 2026-01-27 14:18:55.263 238945 DEBUG nova.compute.manager [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing instance network info cache due to event network-changed-0bd6bb45-6845-4dd7-abd7-26549236c21b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:18:55 np0005597378 nova_compute[238941]: 2026-01-27 14:18:55.263 238945 DEBUG oslo_concurrency.lockutils [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:18:55 np0005597378 nova_compute[238941]: 2026-01-27 14:18:55.443 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-023c4c4f5abfa18f1d1d2c4466920f7a1d98705dd57947395ffe5e6537ad2387-merged.mount: Deactivated successfully.
Jan 27 09:18:55 np0005597378 nova_compute[238941]: 2026-01-27 14:18:55.662 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:55 np0005597378 nova_compute[238941]: 2026-01-27 14:18:55.662 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:55 np0005597378 nova_compute[238941]: 2026-01-27 14:18:55.679 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:18:55 np0005597378 nova_compute[238941]: 2026-01-27 14:18:55.757 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:55 np0005597378 nova_compute[238941]: 2026-01-27 14:18:55.758 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:55 np0005597378 nova_compute[238941]: 2026-01-27 14:18:55.768 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:18:55 np0005597378 nova_compute[238941]: 2026-01-27 14:18:55.768 238945 INFO nova.compute.claims [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:18:55 np0005597378 nova_compute[238941]: 2026-01-27 14:18:55.908 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:18:55 np0005597378 podman[355489]: 2026-01-27 14:18:55.927500658 +0000 UTC m=+1.977618576 container remove fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:18:55 np0005597378 systemd[1]: libpod-conmon-fca34d5cbc93298a134d1b40037dd59c417b34dd7020b43bd56384aac63acfbd.scope: Deactivated successfully.
Jan 27 09:18:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:18:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:18:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:18:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:18:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2544708214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.602 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:18:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.609 238945 DEBUG nova.compute.provider_tree [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.627 238945 DEBUG nova.scheduler.client.report [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.657 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.658 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.707 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.708 238945 DEBUG nova.network.neutron [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.730 238945 INFO nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.750 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.841 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.842 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.843 238945 INFO nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Creating image(s)#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.867 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.894 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.915 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.919 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.997 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.998 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.998 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:56 np0005597378 nova_compute[238941]: 2026-01-27 14:18:56.999 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:18:57 np0005597378 nova_compute[238941]: 2026-01-27 14:18:57.019 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:18:57 np0005597378 nova_compute[238941]: 2026-01-27 14:18:57.022 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:18:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 88 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:18:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:18:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:18:57 np0005597378 nova_compute[238941]: 2026-01-27 14:18:57.690 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:18:57 np0005597378 nova_compute[238941]: 2026-01-27 14:18:57.701 238945 DEBUG nova.policy [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.179636) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523538179666, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2072, "num_deletes": 251, "total_data_size": 3476977, "memory_usage": 3541488, "flush_reason": "Manual Compaction"}
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523538475414, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3375545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45111, "largest_seqno": 47182, "table_properties": {"data_size": 3366146, "index_size": 5893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19231, "raw_average_key_size": 20, "raw_value_size": 3347473, "raw_average_value_size": 3516, "num_data_blocks": 262, "num_entries": 952, "num_filter_entries": 952, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523322, "oldest_key_time": 1769523322, "file_creation_time": 1769523538, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 295841 microseconds, and 7867 cpu microseconds.
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.475467) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3375545 bytes OK
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.475493) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.821435) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.821475) EVENT_LOG_v1 {"time_micros": 1769523538821467, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.821504) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3468246, prev total WAL file size 3468246, number of live WAL files 2.
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.822989) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3296KB)], [104(8637KB)]
Jan 27 09:18:58 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523538823020, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12220684, "oldest_snapshot_seqno": -1}
Jan 27 09:18:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 94 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.6 MiB/s wr, 5 op/s
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7080 keys, 10483365 bytes, temperature: kUnknown
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523539110255, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10483365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10434976, "index_size": 29589, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 182504, "raw_average_key_size": 25, "raw_value_size": 10307512, "raw_average_value_size": 1455, "num_data_blocks": 1164, "num_entries": 7080, "num_filter_entries": 7080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523538, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.110517) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10483365 bytes
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.151856) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 42.5 rd, 36.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7594, records dropped: 514 output_compression: NoCompression
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.151887) EVENT_LOG_v1 {"time_micros": 1769523539151876, "job": 62, "event": "compaction_finished", "compaction_time_micros": 287321, "compaction_time_cpu_micros": 26028, "output_level": 6, "num_output_files": 1, "total_output_size": 10483365, "num_input_records": 7594, "num_output_records": 7080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523539152890, "job": 62, "event": "table_file_deletion", "file_number": 106}
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523539154615, "job": 62, "event": "table_file_deletion", "file_number": 104}
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:58.822879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.154741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.154746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.154747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.154749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:18:59.154750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:18:59 np0005597378 nova_compute[238941]: 2026-01-27 14:18:59.293 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:18:59 np0005597378 nova_compute[238941]: 2026-01-27 14:18:59.353 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:18:59 np0005597378 nova_compute[238941]: 2026-01-27 14:18:59.433 238945 DEBUG nova.network.neutron [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Successfully created port: 4be63359-1372-48ba-b3a8-f60edc16d879 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2952001739' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2952001739' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:18:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:18:59 np0005597378 nova_compute[238941]: 2026-01-27 14:18:59.721 238945 DEBUG nova.objects.instance [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:18:59 np0005597378 nova_compute[238941]: 2026-01-27 14:18:59.763 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:18:59 np0005597378 nova_compute[238941]: 2026-01-27 14:18:59.763 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Ensure instance console log exists: /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:18:59 np0005597378 nova_compute[238941]: 2026-01-27 14:18:59.764 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:18:59 np0005597378 nova_compute[238941]: 2026-01-27 14:18:59.764 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:18:59 np0005597378 nova_compute[238941]: 2026-01-27 14:18:59.765 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.179 238945 DEBUG nova.network.neutron [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.205 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.205 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Instance network_info: |[{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.206 238945 DEBUG oslo_concurrency.lockutils [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.206 238945 DEBUG nova.network.neutron [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing network info cache for port 0bd6bb45-6845-4dd7-abd7-26549236c21b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.209 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Start _get_guest_xml network_info=[{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.216 238945 WARNING nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.224 238945 DEBUG nova.virt.libvirt.host [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.224 238945 DEBUG nova.virt.libvirt.host [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.229 238945 DEBUG nova.virt.libvirt.host [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.230 238945 DEBUG nova.virt.libvirt.host [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.230 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.231 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.231 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.231 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.231 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.232 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.232 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.232 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.233 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.233 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.233 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.233 238945 DEBUG nova.virt.hardware [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.237 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.445 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:00 np0005597378 podman[355836]: 2026-01-27 14:19:00.747938561 +0000 UTC m=+0.088753710 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:19:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:19:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422583667' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.838 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.860 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:00 np0005597378 nova_compute[238941]: 2026-01-27 14:19:00.866 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 119 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.427 238945 DEBUG nova.network.neutron [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Successfully updated port: 4be63359-1372-48ba-b3a8-f60edc16d879 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.443 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.443 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.443 238945 DEBUG nova.network.neutron [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:19:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:19:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170815019' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.487 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.488 238945 DEBUG nova.virt.libvirt.vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1495948444',display_name='tempest-TestGettingAddress-server-1495948444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1495948444',id=126,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5t8ktlk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:18:43Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=98452226-e32f-475f-814f-d0eba538b8ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.489 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.489 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.490 238945 DEBUG nova.virt.libvirt.vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1495948444',display_name='tempest-TestGettingAddress-server-1495948444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1495948444',id=126,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5t8ktlk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:18:43Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=98452226-e32f-475f-814f-d0eba538b8ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.491 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.491 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.492 238945 DEBUG nova.objects.instance [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98452226-e32f-475f-814f-d0eba538b8ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.516 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  <uuid>98452226-e32f-475f-814f-d0eba538b8ca</uuid>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  <name>instance-0000007e</name>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-1495948444</nova:name>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:19:00</nova:creationTime>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        <nova:port uuid="4e0bfd53-3592-45ef-aef8-c273dbee749b">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        <nova:port uuid="0bd6bb45-6845-4dd7-abd7-26549236c21b">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe9d:4fef" ipVersion="6"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <entry name="serial">98452226-e32f-475f-814f-d0eba538b8ca</entry>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <entry name="uuid">98452226-e32f-475f-814f-d0eba538b8ca</entry>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/98452226-e32f-475f-814f-d0eba538b8ca_disk">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/98452226-e32f-475f-814f-d0eba538b8ca_disk.config">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:e7:f3:c2"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <target dev="tap4e0bfd53-35"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:9d:4f:ef"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <target dev="tap0bd6bb45-68"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/console.log" append="off"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:19:01 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:19:01 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:19:01 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:19:01 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.518 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Preparing to wait for external event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.518 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.518 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.518 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.519 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Preparing to wait for external event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.519 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.519 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.519 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.520 238945 DEBUG nova.virt.libvirt.vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1495948444',display_name='tempest-TestGettingAddress-server-1495948444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1495948444',id=126,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5t8ktlk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:18:43Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=98452226-e32f-475f-814f-d0eba538b8ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.520 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.521 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.521 238945 DEBUG os_vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.522 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.522 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.523 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.526 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.526 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e0bfd53-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.527 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4e0bfd53-35, col_values=(('external_ids', {'iface-id': '4e0bfd53-3592-45ef-aef8-c273dbee749b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:f3:c2', 'vm-uuid': '98452226-e32f-475f-814f-d0eba538b8ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:01 np0005597378 NetworkManager[48904]: <info>  [1769523541.5291] manager: (tap4e0bfd53-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/548)
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.535 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.536 238945 INFO os_vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35')#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.537 238945 DEBUG nova.virt.libvirt.vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1495948444',display_name='tempest-TestGettingAddress-server-1495948444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1495948444',id=126,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5t8ktlk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:18:43Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=98452226-e32f-475f-814f-d0eba538b8ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.538 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.538 238945 DEBUG nova.network.os_vif_util [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.539 238945 DEBUG os_vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.540 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.540 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.540 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.543 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0bd6bb45-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.544 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0bd6bb45-68, col_values=(('external_ids', {'iface-id': '0bd6bb45-6845-4dd7-abd7-26549236c21b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:4f:ef', 'vm-uuid': '98452226-e32f-475f-814f-d0eba538b8ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.545 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:01 np0005597378 NetworkManager[48904]: <info>  [1769523541.5462] manager: (tap0bd6bb45-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/549)
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.547 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.554 238945 INFO os_vif [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68')#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.584 238945 DEBUG nova.compute.manager [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-changed-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.585 238945 DEBUG nova.compute.manager [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing instance network info cache due to event network-changed-4be63359-1372-48ba-b3a8-f60edc16d879. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.585 238945 DEBUG oslo_concurrency.lockutils [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.609 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.610 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.610 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:e7:f3:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.611 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:9d:4f:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.611 238945 INFO nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Using config drive#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.634 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.727 238945 DEBUG nova.network.neutron [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.983 238945 DEBUG nova.network.neutron [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updated VIF entry in instance network info cache for port 0bd6bb45-6845-4dd7-abd7-26549236c21b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:19:01 np0005597378 nova_compute[238941]: 2026-01-27 14:19:01.983 238945 DEBUG nova.network.neutron [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.026 238945 DEBUG oslo_concurrency.lockutils [req-96b9ac63-c967-43ee-9398-36574269c366 req-b4798e7c-d278-4734-94ff-620ca4cc161b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.178 238945 INFO nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Creating config drive at /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/disk.config#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.182 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpycfzby3x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.321 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpycfzby3x" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.345 238945 DEBUG nova.storage.rbd_utils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 98452226-e32f-475f-814f-d0eba538b8ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.348 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/disk.config 98452226-e32f-475f-814f-d0eba538b8ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.385 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.412 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.412 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.412 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.413 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.413 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.505 238945 DEBUG oslo_concurrency.processutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/disk.config 98452226-e32f-475f-814f-d0eba538b8ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.506 238945 INFO nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Deleting local config drive /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca/disk.config because it was imported into RBD.#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.536 238945 DEBUG nova.network.neutron [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.556 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.558 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Instance network_info: |[{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.559 238945 DEBUG oslo_concurrency.lockutils [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.559 238945 DEBUG nova.network.neutron [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing network info cache for port 4be63359-1372-48ba-b3a8-f60edc16d879 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.562 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Start _get_guest_xml network_info=[{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:19:02 np0005597378 kernel: tap4e0bfd53-35: entered promiscuous mode
Jan 27 09:19:02 np0005597378 NetworkManager[48904]: <info>  [1769523542.5698] manager: (tap4e0bfd53-35): new Tun device (/org/freedesktop/NetworkManager/Devices/550)
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.570 238945 WARNING nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:02Z|01334|binding|INFO|Claiming lport 4e0bfd53-3592-45ef-aef8-c273dbee749b for this chassis.
Jan 27 09:19:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:02Z|01335|binding|INFO|4e0bfd53-3592-45ef-aef8-c273dbee749b: Claiming fa:16:3e:e7:f3:c2 10.100.0.8
Jan 27 09:19:02 np0005597378 NetworkManager[48904]: <info>  [1769523542.6076] manager: (tap0bd6bb45-68): new Tun device (/org/freedesktop/NetworkManager/Devices/551)
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.611 238945 DEBUG nova.virt.libvirt.host [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.613 238945 DEBUG nova.virt.libvirt.host [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.613 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:f3:c2 10.100.0.8'], port_security=['fa:16:3e:e7:f3:c2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '98452226-e32f-475f-814f-d0eba538b8ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21712e78-75dc-4510-801a-6748e9e4e02c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4e0bfd53-3592-45ef-aef8-c273dbee749b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.614 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4e0bfd53-3592-45ef-aef8-c273dbee749b in datapath bd37f3d1-36c6-44a7-9f3e-1ef294aba42f bound to our chassis#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.616 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd37f3d1-36c6-44a7-9f3e-1ef294aba42f#033[00m
Jan 27 09:19:02 np0005597378 systemd-udevd[355998]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:19:02 np0005597378 systemd-udevd[355997]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.629 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3079f8f7-6301-4502-a3fa-1894d9f66695]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.630 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbd37f3d1-31 in ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.631 238945 DEBUG nova.virt.libvirt.host [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.632 238945 DEBUG nova.virt.libvirt.host [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.632 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.632 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbd37f3d1-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.632 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[86a8d710-792c-4cdb-8174-2b5b7736e0a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.633 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.633 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.633 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.634 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e24d714-aff2-4c2e-9abc-09ba0e0467c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.634 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.634 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.634 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.635 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.638 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.639 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.639 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.639 238945 DEBUG nova.virt.hardware [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.644 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.647 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[dc1ac4ca-ccab-4561-9d73-09d61bef9a8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 NetworkManager[48904]: <info>  [1769523542.6498] device (tap4e0bfd53-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:19:02 np0005597378 NetworkManager[48904]: <info>  [1769523542.6508] device (tap4e0bfd53-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:19:02 np0005597378 systemd-machined[207425]: New machine qemu-158-instance-0000007e.
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.667 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b70e32c-d148-412f-93e6-b8741eb9c403]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 kernel: tap0bd6bb45-68: entered promiscuous mode
Jan 27 09:19:02 np0005597378 NetworkManager[48904]: <info>  [1769523542.6900] device (tap0bd6bb45-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:19:02 np0005597378 NetworkManager[48904]: <info>  [1769523542.6909] device (tap0bd6bb45-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:19:02 np0005597378 systemd[1]: Started Virtual Machine qemu-158-instance-0000007e.
Jan 27 09:19:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:02Z|01336|binding|INFO|Claiming lport 0bd6bb45-6845-4dd7-abd7-26549236c21b for this chassis.
Jan 27 09:19:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:02Z|01337|binding|INFO|0bd6bb45-6845-4dd7-abd7-26549236c21b: Claiming fa:16:3e:9d:4f:ef 2001:db8::f816:3eff:fe9d:4fef
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.692 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.699 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:4f:ef 2001:db8::f816:3eff:fe9d:4fef'], port_security=['fa:16:3e:9d:4f:ef 2001:db8::f816:3eff:fe9d:4fef'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9d:4fef/64', 'neutron:device_id': '98452226-e32f-475f-814f-d0eba538b8ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77fecdf9-05ce-491c-ab82-8473333acf08, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0bd6bb45-6845-4dd7-abd7-26549236c21b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.701 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5b27950e-0294-4173-9920-2a9a272efc78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:02Z|01338|binding|INFO|Setting lport 4e0bfd53-3592-45ef-aef8-c273dbee749b ovn-installed in OVS
Jan 27 09:19:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:02Z|01339|binding|INFO|Setting lport 4e0bfd53-3592-45ef-aef8-c273dbee749b up in Southbound
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.705 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:02 np0005597378 NetworkManager[48904]: <info>  [1769523542.7094] manager: (tapbd37f3d1-30): new Veth device (/org/freedesktop/NetworkManager/Devices/552)
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.709 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce505f41-2845-4fdd-8bf6-5881abd3b48a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:02Z|01340|binding|INFO|Setting lport 0bd6bb45-6845-4dd7-abd7-26549236c21b ovn-installed in OVS
Jan 27 09:19:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:02Z|01341|binding|INFO|Setting lport 0bd6bb45-6845-4dd7-abd7-26549236c21b up in Southbound
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.717 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.741 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b126283a-874e-41cd-9151-f197b4d1315f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.744 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[586844ab-0dff-4dbf-9471-5165594e6c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 NetworkManager[48904]: <info>  [1769523542.7712] device (tapbd37f3d1-30): carrier: link connected
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.777 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c8704a5b-80ba-438e-96d8-753ed4ff4787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.800 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1765dc-5594-430f-98f9-227e24100405]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd37f3d1-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:2c:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 393], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618239, 'reachable_time': 26528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356034, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.817 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4d55b4ce-25ef-47ff-bc5f-63c61ad302f5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:2c11'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618239, 'tstamp': 618239}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356045, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.835 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b838c2d2-7df2-4ead-b185-d306b7ac214d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd37f3d1-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:2c:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 393], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618239, 'reachable_time': 26528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356055, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.864 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[80573122-884f-474c-bc9b-9f25fccc3648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.913 238945 DEBUG nova.compute.manager [req-6f52112a-15a7-427e-957a-a49e7fd38628 req-90e3c7ba-1b9f-4088-9de5-2b33b1906007 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.913 238945 DEBUG oslo_concurrency.lockutils [req-6f52112a-15a7-427e-957a-a49e7fd38628 req-90e3c7ba-1b9f-4088-9de5-2b33b1906007 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.914 238945 DEBUG oslo_concurrency.lockutils [req-6f52112a-15a7-427e-957a-a49e7fd38628 req-90e3c7ba-1b9f-4088-9de5-2b33b1906007 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.914 238945 DEBUG oslo_concurrency.lockutils [req-6f52112a-15a7-427e-957a-a49e7fd38628 req-90e3c7ba-1b9f-4088-9de5-2b33b1906007 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.914 238945 DEBUG nova.compute.manager [req-6f52112a-15a7-427e-957a-a49e7fd38628 req-90e3c7ba-1b9f-4088-9de5-2b33b1906007 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Processing event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[379203ce-4751-471f-8d33-ef557db5a33c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.924 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd37f3d1-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.924 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.925 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd37f3d1-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:02 np0005597378 NetworkManager[48904]: <info>  [1769523542.9271] manager: (tapbd37f3d1-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/553)
Jan 27 09:19:02 np0005597378 kernel: tapbd37f3d1-30: entered promiscuous mode
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.935 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd37f3d1-30, col_values=(('external_ids', {'iface-id': '40babe7c-93a1-447f-a7bf-393e56c7e18c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:02Z|01342|binding|INFO|Releasing lport 40babe7c-93a1-447f-a7bf-393e56c7e18c from this chassis (sb_readonly=0)
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:02 np0005597378 nova_compute[238941]: 2026-01-27 14:19:02.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.952 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bd37f3d1-36c6-44a7-9f3e-1ef294aba42f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bd37f3d1-36c6-44a7-9f3e-1ef294aba42f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.953 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b4856e-c17a-46c5-8cb9-6b1fa5a4ed71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.954 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/bd37f3d1-36c6-44a7-9f3e-1ef294aba42f.pid.haproxy
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID bd37f3d1-36c6-44a7-9f3e-1ef294aba42f
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:19:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:02.954 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'env', 'PROCESS_TAG=haproxy-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bd37f3d1-36c6-44a7-9f3e-1ef294aba42f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:19:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:19:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/397691604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.054 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 119 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.122 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.123 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:19:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:19:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/442749539' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.287 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.339 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.343 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.374 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523543.357024, 98452226-e32f-475f-814f-d0eba538b8ca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.375 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] VM Started (Lifecycle Event)#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.400 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.403 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523543.3575044, 98452226-e32f-475f-814f-d0eba538b8ca => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.404 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:19:03 np0005597378 podman[356129]: 2026-01-27 14:19:03.324457912 +0000 UTC m=+0.025180183 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.421 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.422 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3598MB free_disk=59.94598943088204GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.422 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.423 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.448 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.452 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.541 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.596 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 98452226-e32f-475f-814f-d0eba538b8ca actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.596 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.597 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.597 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.621 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.639 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.640 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.657 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.682 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.742 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:03 np0005597378 podman[356129]: 2026-01-27 14:19:03.762479394 +0000 UTC m=+0.463201635 container create 25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.784 238945 DEBUG nova.compute.manager [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.785 238945 DEBUG oslo_concurrency.lockutils [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.785 238945 DEBUG oslo_concurrency.lockutils [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.785 238945 DEBUG oslo_concurrency.lockutils [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.786 238945 DEBUG nova.compute.manager [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Processing event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.786 238945 DEBUG nova.compute.manager [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.786 238945 DEBUG oslo_concurrency.lockutils [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.787 238945 DEBUG oslo_concurrency.lockutils [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.787 238945 DEBUG oslo_concurrency.lockutils [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.787 238945 DEBUG nova.compute.manager [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] No waiting events found dispatching network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.787 238945 WARNING nova.compute.manager [req-4443080e-ef96-4ae0-9444-52e4fc5cb3ab req-62c0599c-8735-428c-9c6a-106af605da7d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received unexpected event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.788 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.793 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.795 238945 INFO nova.virt.libvirt.driver [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Instance spawned successfully.#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.796 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.800 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523543.799145, 98452226-e32f-475f-814f-d0eba538b8ca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.801 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.816 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.816 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.817 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.817 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.818 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.818 238945 DEBUG nova.virt.libvirt.driver [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.822 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.825 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.860 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:19:03 np0005597378 systemd[1]: Started libpod-conmon-25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef.scope.
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.883 238945 INFO nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Took 20.07 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.884 238945 DEBUG nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:19:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:19:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6a54763952998e1a1b93e3b8227d10c26571ca7886da7a77820100ec3d732f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:19:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/17608802' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.949 238945 INFO nova.compute.manager [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Took 21.20 seconds to build instance.#033[00m
Jan 27 09:19:03 np0005597378 nova_compute[238941]: 2026-01-27 14:19:03.967 238945 DEBUG oslo_concurrency.lockutils [None req-7da242d5-fc62-4268-850b-fa69ad0a0663 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:19:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4062331404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.406 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.408 238945 DEBUG nova.virt.libvirt.vif [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:18:56Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.408 238945 DEBUG nova.network.os_vif_util [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.409 238945 DEBUG nova.network.os_vif_util [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.410 238945 DEBUG nova.objects.instance [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.411 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.421 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.431 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  <uuid>8112a700-f12a-43be-a5c6-f0536e53b2c4</uuid>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  <name>instance-0000007f</name>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:19:02</nova:creationTime>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:        <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:        <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:        <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <entry name="serial">8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <entry name="uuid">8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:20:a8:49"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <target dev="tap4be63359-13"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log" append="off"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:19:04 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:19:04 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:19:04 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:19:04 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.434 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Preparing to wait for external event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.435 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.435 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.436 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.437 238945 DEBUG nova.virt.libvirt.vif [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:18:56Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.438 238945 DEBUG nova.network.os_vif_util [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.439 238945 DEBUG nova.network.os_vif_util [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.440 238945 DEBUG os_vif [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.441 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.442 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.443 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.447 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.453 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4be63359-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.454 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4be63359-13, col_values=(('external_ids', {'iface-id': '4be63359-1372-48ba-b3a8-f60edc16d879', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:a8:49', 'vm-uuid': '8112a700-f12a-43be-a5c6-f0536e53b2c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:04 np0005597378 NetworkManager[48904]: <info>  [1769523544.4564] manager: (tap4be63359-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/554)
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.457 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.463 238945 INFO os_vif [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13')#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.470 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:04 np0005597378 podman[356129]: 2026-01-27 14:19:04.489145629 +0000 UTC m=+1.189867890 container init 25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:19:04 np0005597378 podman[356181]: 2026-01-27 14:19:04.495066347 +0000 UTC m=+0.834317720 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:19:04 np0005597378 podman[356129]: 2026-01-27 14:19:04.496766172 +0000 UTC m=+1.197488413 container start 25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:19:04 np0005597378 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [NOTICE]   (356239) : New worker (356241) forked
Jan 27 09:19:04 np0005597378 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [NOTICE]   (356239) : Loading success.
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.549 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.550 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.550 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:20:a8:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.550 238945 INFO nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Using config drive#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.582 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0bd6bb45-6845-4dd7-abd7-26549236c21b in datapath ec30aef5-5eb6-4cbb-86f9-bf221c914a9f unbound from our chassis#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.584 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec30aef5-5eb6-4cbb-86f9-bf221c914a9f#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.596 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2793c30a-995a-4996-954d-63cf3fd0ddfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.596 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapec30aef5-51 in ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.598 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapec30aef5-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.598 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebdde6a-8f87-4755-ad0a-127bd39fbdc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.599 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[39379435-a969-4c0f-b3e2-fed78860f51e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.601 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.613 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[7eda3784-13a8-485b-a26e-918917c7b33c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.638 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[188fbdc9-4841-4fd8-a020-f3fb996fc17b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.668 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e1322311-fb5e-4d5d-832a-909e1b113133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 NetworkManager[48904]: <info>  [1769523544.6769] manager: (tapec30aef5-50): new Veth device (/org/freedesktop/NetworkManager/Devices/555)
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.676 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ff16e0fd-b238-48fc-87c7-0725f4c27755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 systemd-udevd[356026]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.690 238945 DEBUG nova.network.neutron [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updated VIF entry in instance network info cache for port 4be63359-1372-48ba-b3a8-f60edc16d879. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.691 238945 DEBUG nova.network.neutron [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.709 238945 DEBUG oslo_concurrency.lockutils [req-69cda656-632b-411e-99f4-873715fe9fe2 req-2db1cfe0-55b8-4b06-99fb-761a703712e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.718 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f3cbeb9b-016b-43b0-9597-cb18565e6764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.721 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4e50916b-8cf9-4c9f-b487-7afed4a3de82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 NetworkManager[48904]: <info>  [1769523544.7458] device (tapec30aef5-50): carrier: link connected
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.751 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4f87a458-7df5-4133-9813-bbafec199072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.770 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[928b5540-d6ad-4e15-b747-28ee4413bdd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec30aef5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:b0:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618437, 'reachable_time': 16493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356282, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.785 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0cb5e32-1633-4867-8bc1-ee244252750d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6c:b03f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618437, 'tstamp': 618437}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356283, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.807 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[baa68bc7-a97f-4022-80bb-bcebe2b27e6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec30aef5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:b0:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618437, 'reachable_time': 16493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356284, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.838 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9325fb-83c4-4fc2-a0db-412d1f4e89e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.871 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[14972a4f-3bc9-4c52-b1c8-f1c3f7e47707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.872 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec30aef5-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.872 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.873 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec30aef5-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:04 np0005597378 NetworkManager[48904]: <info>  [1769523544.8755] manager: (tapec30aef5-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Jan 27 09:19:04 np0005597378 kernel: tapec30aef5-50: entered promiscuous mode
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.875 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.879 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.883 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec30aef5-50, col_values=(('external_ids', {'iface-id': 'efb5ae4a-27c5-4322-b9a7-2ceba053c0fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:04Z|01343|binding|INFO|Releasing lport efb5ae4a-27c5-4322-b9a7-2ceba053c0fa from this chassis (sb_readonly=0)
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:04 np0005597378 nova_compute[238941]: 2026-01-27 14:19:04.901 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.901 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ec30aef5-5eb6-4cbb-86f9-bf221c914a9f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ec30aef5-5eb6-4cbb-86f9-bf221c914a9f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.902 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b1535b-776c-4b82-8a89-bdb7e836b5d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.903 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/ec30aef5-5eb6-4cbb-86f9-bf221c914a9f.pid.haproxy
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID ec30aef5-5eb6-4cbb-86f9-bf221c914a9f
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:19:04 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:04.903 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'env', 'PROCESS_TAG=haproxy-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ec30aef5-5eb6-4cbb-86f9-bf221c914a9f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:19:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 134 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.100 238945 INFO nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Creating config drive at /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/disk.config#033[00m
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.104 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejrir8sa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.253 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejrir8sa" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.285 238945 DEBUG nova.storage.rbd_utils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.288 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/disk.config 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.330 238945 DEBUG nova.compute.manager [req-99ad2e0e-8934-4969-8f7e-303779e4320d req-d729590c-201f-4384-a50c-62b5eb61a363 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.331 238945 DEBUG oslo_concurrency.lockutils [req-99ad2e0e-8934-4969-8f7e-303779e4320d req-d729590c-201f-4384-a50c-62b5eb61a363 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.331 238945 DEBUG oslo_concurrency.lockutils [req-99ad2e0e-8934-4969-8f7e-303779e4320d req-d729590c-201f-4384-a50c-62b5eb61a363 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.332 238945 DEBUG oslo_concurrency.lockutils [req-99ad2e0e-8934-4969-8f7e-303779e4320d req-d729590c-201f-4384-a50c-62b5eb61a363 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.333 238945 DEBUG nova.compute.manager [req-99ad2e0e-8934-4969-8f7e-303779e4320d req-d729590c-201f-4384-a50c-62b5eb61a363 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] No waiting events found dispatching network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.333 238945 WARNING nova.compute.manager [req-99ad2e0e-8934-4969-8f7e-303779e4320d req-d729590c-201f-4384-a50c-62b5eb61a363 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received unexpected event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b for instance with vm_state active and task_state None.#033[00m
Jan 27 09:19:05 np0005597378 podman[356320]: 2026-01-27 14:19:05.338177401 +0000 UTC m=+0.100500434 container create 921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 09:19:05 np0005597378 podman[356320]: 2026-01-27 14:19:05.265897862 +0000 UTC m=+0.028220915 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:19:05 np0005597378 systemd[1]: Started libpod-conmon-921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b.scope.
Jan 27 09:19:05 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:19:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a95b99ac101cf03a0ab1a9c7d2ef5193ede85c9434f82059e52762aafe836921/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:05 np0005597378 podman[356320]: 2026-01-27 14:19:05.444267163 +0000 UTC m=+0.206590196 container init 921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:19:05 np0005597378 podman[356320]: 2026-01-27 14:19:05.452126763 +0000 UTC m=+0.214449796 container start 921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:19:05 np0005597378 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [NOTICE]   (356376) : New worker (356378) forked
Jan 27 09:19:05 np0005597378 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [NOTICE]   (356376) : Loading success.
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.598 238945 DEBUG oslo_concurrency.processutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/disk.config 8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.599 238945 INFO nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Deleting local config drive /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/disk.config because it was imported into RBD.#033[00m
Jan 27 09:19:05 np0005597378 kernel: tap4be63359-13: entered promiscuous mode
Jan 27 09:19:05 np0005597378 NetworkManager[48904]: <info>  [1769523545.6534] manager: (tap4be63359-13): new Tun device (/org/freedesktop/NetworkManager/Devices/557)
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.660 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:05Z|01344|binding|INFO|Claiming lport 4be63359-1372-48ba-b3a8-f60edc16d879 for this chassis.
Jan 27 09:19:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:05Z|01345|binding|INFO|4be63359-1372-48ba-b3a8-f60edc16d879: Claiming fa:16:3e:20:a8:49 10.100.0.3
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.675 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:a8:49 10.100.0.3'], port_security=['fa:16:3e:20:a8:49 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8112a700-f12a-43be-a5c6-f0536e53b2c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24a46af1-cafa-42b2-ad53-4a62558369c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d00cb397-90a2-41fb-b94f-8a302bfb5bea, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4be63359-1372-48ba-b3a8-f60edc16d879) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.676 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4be63359-1372-48ba-b3a8-f60edc16d879 in datapath 8832cfc6-32b7-455a-a552-de53a2f1fc74 bound to our chassis#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.678 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8832cfc6-32b7-455a-a552-de53a2f1fc74#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.691 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[291f8588-e510-4585-9d7f-85e61c20ddcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.692 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8832cfc6-31 in ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.694 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8832cfc6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.695 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51b00cbf-5e35-4963-bd28-4e2b4909e518]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.696 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[70318bc7-966d-4f61-96ea-23f49e4096f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.711 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b08380-94ba-4418-99a6-18e4429f79af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 systemd-machined[207425]: New machine qemu-159-instance-0000007f.
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.734 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:05Z|01346|binding|INFO|Releasing lport efb5ae4a-27c5-4322-b9a7-2ceba053c0fa from this chassis (sb_readonly=0)
Jan 27 09:19:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:05Z|01347|binding|INFO|Releasing lport 40babe7c-93a1-447f-a7bf-393e56c7e18c from this chassis (sb_readonly=0)
Jan 27 09:19:05 np0005597378 systemd[1]: Started Virtual Machine qemu-159-instance-0000007f.
Jan 27 09:19:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:05Z|01348|binding|INFO|Setting lport 4be63359-1372-48ba-b3a8-f60edc16d879 ovn-installed in OVS
Jan 27 09:19:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:05Z|01349|binding|INFO|Setting lport 4be63359-1372-48ba-b3a8-f60edc16d879 up in Southbound
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.746 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.748 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[90e344ca-c320-452c-a761-6b6cad1283f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 systemd-udevd[356402]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:19:05 np0005597378 NetworkManager[48904]: <info>  [1769523545.7689] device (tap4be63359-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:19:05 np0005597378 NetworkManager[48904]: <info>  [1769523545.7695] device (tap4be63359-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.785 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[308cc6bb-483a-4884-bd47-f08db7fb3cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.790 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a36ccfb-4fe6-4bae-80f3-4336ffaf70d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 NetworkManager[48904]: <info>  [1769523545.7918] manager: (tap8832cfc6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/558)
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.824 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[aa871a72-373d-47e6-9df5-52d40f757496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.827 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed9ed68-ac14-427a-86c9-6ea7dbf9df46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 NetworkManager[48904]: <info>  [1769523545.8480] device (tap8832cfc6-30): carrier: link connected
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.855 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a88c5b61-be20-4f84-baa9-deef36587df2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.872 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c5afb6e5-111e-424c-9c8b-4e12a330dcc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8832cfc6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:a3:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618547, 'reachable_time': 18436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356432, 'error': None, 'target': 'ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.892 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f600eeca-bf28-40ea-a8b2-64b183949407]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:a3cd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618547, 'tstamp': 618547}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356433, 'error': None, 'target': 'ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.908 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3796ec17-54d0-49d4-89d5-85643bc7aa1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8832cfc6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:a3:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618547, 'reachable_time': 18436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356434, 'error': None, 'target': 'ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.937 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c27be6db-ab90-40dc-a44e-c4704fe09fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.992 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c31f4d5d-ec03-4939-8b6e-f8d7a76a1c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.994 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8832cfc6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.994 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.995 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8832cfc6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:05 np0005597378 kernel: tap8832cfc6-30: entered promiscuous mode
Jan 27 09:19:05 np0005597378 NetworkManager[48904]: <info>  [1769523545.9975] manager: (tap8832cfc6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/559)
Jan 27 09:19:05 np0005597378 nova_compute[238941]: 2026-01-27 14:19:05.998 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:05.999 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8832cfc6-30, col_values=(('external_ids', {'iface-id': '60c58b14-38c7-4b18-a86f-4ef52a16b872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:06 np0005597378 nova_compute[238941]: 2026-01-27 14:19:06.001 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:06 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:06Z|01350|binding|INFO|Releasing lport 60c58b14-38c7-4b18-a86f-4ef52a16b872 from this chassis (sb_readonly=0)
Jan 27 09:19:06 np0005597378 nova_compute[238941]: 2026-01-27 14:19:06.015 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:06.016 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8832cfc6-32b7-455a-a552-de53a2f1fc74.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8832cfc6-32b7-455a-a552-de53a2f1fc74.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:06.017 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6c613ebe-69a2-4064-a6de-e89428cd760e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:06.018 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-8832cfc6-32b7-455a-a552-de53a2f1fc74
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/8832cfc6-32b7-455a-a552-de53a2f1fc74.pid.haproxy
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 8832cfc6-32b7-455a-a552-de53a2f1fc74
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:19:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:06.021 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'env', 'PROCESS_TAG=haproxy-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8832cfc6-32b7-455a-a552-de53a2f1fc74.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:19:06 np0005597378 nova_compute[238941]: 2026-01-27 14:19:06.231 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523546.231, 8112a700-f12a-43be-a5c6-f0536e53b2c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:19:06 np0005597378 nova_compute[238941]: 2026-01-27 14:19:06.232 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] VM Started (Lifecycle Event)#033[00m
Jan 27 09:19:06 np0005597378 nova_compute[238941]: 2026-01-27 14:19:06.257 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:19:06 np0005597378 nova_compute[238941]: 2026-01-27 14:19:06.261 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523546.2311842, 8112a700-f12a-43be-a5c6-f0536e53b2c4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:19:06 np0005597378 nova_compute[238941]: 2026-01-27 14:19:06.261 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:19:06 np0005597378 nova_compute[238941]: 2026-01-27 14:19:06.283 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:19:06 np0005597378 nova_compute[238941]: 2026-01-27 14:19:06.287 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:19:06 np0005597378 nova_compute[238941]: 2026-01-27 14:19:06.305 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:19:06 np0005597378 podman[356508]: 2026-01-27 14:19:06.37143532 +0000 UTC m=+0.025497042 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:19:06 np0005597378 podman[356508]: 2026-01-27 14:19:06.743291296 +0000 UTC m=+0.397352988 container create 2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 09:19:06 np0005597378 systemd[1]: Started libpod-conmon-2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd.scope.
Jan 27 09:19:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:19:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630e8eb25e7e6532b6eba282b7443efcb2ce88b3f255e03466027626d15b9600/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:06 np0005597378 podman[356508]: 2026-01-27 14:19:06.894656955 +0000 UTC m=+0.548718667 container init 2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 09:19:06 np0005597378 podman[356508]: 2026-01-27 14:19:06.899615007 +0000 UTC m=+0.553676699 container start 2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 09:19:06 np0005597378 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [NOTICE]   (356527) : New worker (356529) forked
Jan 27 09:19:06 np0005597378 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [NOTICE]   (356527) : Loading success.
Jan 27 09:19:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 134 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 656 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.363 238945 DEBUG nova.compute.manager [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.365 238945 DEBUG oslo_concurrency.lockutils [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.366 238945 DEBUG oslo_concurrency.lockutils [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.366 238945 DEBUG oslo_concurrency.lockutils [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.366 238945 DEBUG nova.compute.manager [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Processing event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.366 238945 DEBUG nova.compute.manager [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.367 238945 DEBUG oslo_concurrency.lockutils [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.367 238945 DEBUG oslo_concurrency.lockutils [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.367 238945 DEBUG oslo_concurrency.lockutils [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.367 238945 DEBUG nova.compute.manager [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.368 238945 WARNING nova.compute.manager [req-7b2526e4-5373-44c3-8389-264654c87eed req-b16c2f95-aa5d-4fc0-9bec-ef58c16fb778 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received unexpected event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.368 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.379 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523547.374158, 8112a700-f12a-43be-a5c6-f0536e53b2c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.379 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.381 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.393 238945 INFO nova.virt.libvirt.driver [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Instance spawned successfully.#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.394 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.403 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.406 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.415 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.416 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.416 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.417 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.417 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.418 238945 DEBUG nova.virt.libvirt.driver [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.428 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.662 238945 INFO nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Took 10.82 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.663 238945 DEBUG nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.699 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.798 238945 INFO nova.compute.manager [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Took 12.07 seconds to build instance.#033[00m
Jan 27 09:19:07 np0005597378 nova_compute[238941]: 2026-01-27 14:19:07.820 238945 DEBUG oslo_concurrency.lockutils [None req-2a98ad0b-f02f-4100-ac86-a311220f083d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:08Z|01351|binding|INFO|Releasing lport efb5ae4a-27c5-4322-b9a7-2ceba053c0fa from this chassis (sb_readonly=0)
Jan 27 09:19:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:08Z|01352|binding|INFO|Releasing lport 40babe7c-93a1-447f-a7bf-393e56c7e18c from this chassis (sb_readonly=0)
Jan 27 09:19:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:08Z|01353|binding|INFO|Releasing lport 60c58b14-38c7-4b18-a86f-4ef52a16b872 from this chassis (sb_readonly=0)
Jan 27 09:19:08 np0005597378 NetworkManager[48904]: <info>  [1769523548.8923] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Jan 27 09:19:08 np0005597378 NetworkManager[48904]: <info>  [1769523548.8929] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/561)
Jan 27 09:19:08 np0005597378 nova_compute[238941]: 2026-01-27 14:19:08.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:08 np0005597378 nova_compute[238941]: 2026-01-27 14:19:08.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:08Z|01354|binding|INFO|Releasing lport efb5ae4a-27c5-4322-b9a7-2ceba053c0fa from this chassis (sb_readonly=0)
Jan 27 09:19:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:08Z|01355|binding|INFO|Releasing lport 40babe7c-93a1-447f-a7bf-393e56c7e18c from this chassis (sb_readonly=0)
Jan 27 09:19:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:08Z|01356|binding|INFO|Releasing lport 60c58b14-38c7-4b18-a86f-4ef52a16b872 from this chassis (sb_readonly=0)
Jan 27 09:19:08 np0005597378 nova_compute[238941]: 2026-01-27 14:19:08.935 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Jan 27 09:19:09 np0005597378 nova_compute[238941]: 2026-01-27 14:19:09.456 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:09 np0005597378 nova_compute[238941]: 2026-01-27 14:19:09.466 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:19:09 np0005597378 nova_compute[238941]: 2026-01-27 14:19:09.466 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:19:09 np0005597378 nova_compute[238941]: 2026-01-27 14:19:09.617 238945 DEBUG nova.compute.manager [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-changed-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:09 np0005597378 nova_compute[238941]: 2026-01-27 14:19:09.618 238945 DEBUG nova.compute.manager [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing instance network info cache due to event network-changed-4e0bfd53-3592-45ef-aef8-c273dbee749b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:19:09 np0005597378 nova_compute[238941]: 2026-01-27 14:19:09.619 238945 DEBUG oslo_concurrency.lockutils [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:19:09 np0005597378 nova_compute[238941]: 2026-01-27 14:19:09.619 238945 DEBUG oslo_concurrency.lockutils [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:19:09 np0005597378 nova_compute[238941]: 2026-01-27 14:19:09.619 238945 DEBUG nova.network.neutron [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing network info cache for port 4e0bfd53-3592-45ef-aef8-c273dbee749b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:19:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 171 op/s
Jan 27 09:19:11 np0005597378 nova_compute[238941]: 2026-01-27 14:19:11.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:19:11 np0005597378 nova_compute[238941]: 2026-01-27 14:19:11.471 238945 DEBUG nova.network.neutron [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updated VIF entry in instance network info cache for port 4e0bfd53-3592-45ef-aef8-c273dbee749b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:19:11 np0005597378 nova_compute[238941]: 2026-01-27 14:19:11.472 238945 DEBUG nova.network.neutron [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:19:11 np0005597378 nova_compute[238941]: 2026-01-27 14:19:11.492 238945 DEBUG oslo_concurrency.lockutils [req-4b2c1f8c-12c7-4223-87e8-99d1d179466d req-cf4583df-fe21-4a29-b54b-526c05c3122b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:19:12 np0005597378 nova_compute[238941]: 2026-01-27 14:19:12.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:19:12 np0005597378 nova_compute[238941]: 2026-01-27 14:19:12.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:19:12 np0005597378 nova_compute[238941]: 2026-01-27 14:19:12.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:19:12 np0005597378 nova_compute[238941]: 2026-01-27 14:19:12.456 238945 DEBUG nova.compute.manager [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-changed-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:12 np0005597378 nova_compute[238941]: 2026-01-27 14:19:12.456 238945 DEBUG nova.compute.manager [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing instance network info cache due to event network-changed-4be63359-1372-48ba-b3a8-f60edc16d879. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:19:12 np0005597378 nova_compute[238941]: 2026-01-27 14:19:12.457 238945 DEBUG oslo_concurrency.lockutils [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:19:12 np0005597378 nova_compute[238941]: 2026-01-27 14:19:12.457 238945 DEBUG oslo_concurrency.lockutils [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:19:12 np0005597378 nova_compute[238941]: 2026-01-27 14:19:12.457 238945 DEBUG nova.network.neutron [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing network info cache for port 4be63359-1372-48ba-b3a8-f60edc16d879 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:19:12 np0005597378 nova_compute[238941]: 2026-01-27 14:19:12.634 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:19:12 np0005597378 nova_compute[238941]: 2026-01-27 14:19:12.642 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:19:12 np0005597378 nova_compute[238941]: 2026-01-27 14:19:12.642 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:19:12 np0005597378 nova_compute[238941]: 2026-01-27 14:19:12.642 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 98452226-e32f-475f-814f-d0eba538b8ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:19:12 np0005597378 nova_compute[238941]: 2026-01-27 14:19:12.702 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 628 KiB/s wr, 157 op/s
Jan 27 09:19:13 np0005597378 nova_compute[238941]: 2026-01-27 14:19:13.708 238945 DEBUG nova.network.neutron [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updated VIF entry in instance network info cache for port 4be63359-1372-48ba-b3a8-f60edc16d879. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:19:13 np0005597378 nova_compute[238941]: 2026-01-27 14:19:13.709 238945 DEBUG nova.network.neutron [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:19:13 np0005597378 nova_compute[238941]: 2026-01-27 14:19:13.730 238945 DEBUG oslo_concurrency.lockutils [req-573dd8d6-e345-4cfc-8c01-c6a26fa50876 req-ceabdee2-8315-48dd-bda4-fe92cc8b7f32 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:19:14 np0005597378 nova_compute[238941]: 2026-01-27 14:19:14.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 628 KiB/s wr, 157 op/s
Jan 27 09:19:15 np0005597378 nova_compute[238941]: 2026-01-27 14:19:15.724 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:19:15 np0005597378 nova_compute[238941]: 2026-01-27 14:19:15.745 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:19:15 np0005597378 nova_compute[238941]: 2026-01-27 14:19:15.745 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:19:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 145 op/s
Jan 27 09:19:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:19:17
Jan 27 09:19:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:19:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:19:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes']
Jan 27 09:19:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:19:17 np0005597378 nova_compute[238941]: 2026-01-27 14:19:17.704 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:19:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:19:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:19:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:19:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:19:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:19:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:19:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:19:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:19:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:19:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:19:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:19:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:19:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:19:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:19:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:19:18 np0005597378 nova_compute[238941]: 2026-01-27 14:19:18.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:19:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 147 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.4 MiB/s wr, 155 op/s
Jan 27 09:19:19 np0005597378 nova_compute[238941]: 2026-01-27 14:19:19.461 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:19Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:f3:c2 10.100.0.8
Jan 27 09:19:19 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:19Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:f3:c2 10.100.0.8
Jan 27 09:19:20 np0005597378 nova_compute[238941]: 2026-01-27 14:19:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:19:20 np0005597378 nova_compute[238941]: 2026-01-27 14:19:20.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:19:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 163 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 102 op/s
Jan 27 09:19:21 np0005597378 nova_compute[238941]: 2026-01-27 14:19:21.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:19:22 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:22Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:a8:49 10.100.0.3
Jan 27 09:19:22 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:22Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:a8:49 10.100.0.3
Jan 27 09:19:22 np0005597378 nova_compute[238941]: 2026-01-27 14:19:22.707 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 163 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 2.9 MiB/s wr, 68 op/s
Jan 27 09:19:24 np0005597378 nova_compute[238941]: 2026-01-27 14:19:24.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 189 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 4.0 MiB/s wr, 91 op/s
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 195 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 666 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Jan 27 09:19:27 np0005597378 nova_compute[238941]: 2026-01-27 14:19:27.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:19:27 np0005597378 nova_compute[238941]: 2026-01-27 14:19:27.708 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015253341003127836 of space, bias 1.0, pg target 0.4576002300938351 quantized to 32 (current 32)
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006694006114109439 of space, bias 1.0, pg target 0.20082018342328317 quantized to 32 (current 32)
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0529105967746284e-06 of space, bias 4.0, pg target 0.001263492716129554 quantized to 16 (current 16)
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:19:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.036 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.037 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.052 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:19:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 200 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.139 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.140 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.152 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.152 238945 INFO nova.compute.claims [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.238 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.340 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.456 238945 INFO nova.compute.manager [None req-bfe62468-f7da-48fd-a010-bb3b95ff138a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Get console output#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.461 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:19:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/460417326' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.889 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.895 238945 DEBUG nova.compute.provider_tree [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.920 238945 DEBUG nova.scheduler.client.report [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.953 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:29 np0005597378 nova_compute[238941]: 2026-01-27 14:19:29.955 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.004 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.005 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.021 238945 INFO nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.055 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.208 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.209 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.210 238945 INFO nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Creating image(s)#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.239 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.272 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.302 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.308 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.345 238945 DEBUG nova.policy [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.386 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.387 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.388 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.388 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.410 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:30 np0005597378 nova_compute[238941]: 2026-01-27 14:19:30.413 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 200 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 450 KiB/s rd, 2.9 MiB/s wr, 91 op/s
Jan 27 09:19:31 np0005597378 nova_compute[238941]: 2026-01-27 14:19:31.295 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.882s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:31 np0005597378 nova_compute[238941]: 2026-01-27 14:19:31.357 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:19:31 np0005597378 nova_compute[238941]: 2026-01-27 14:19:31.602 238945 DEBUG nova.objects.instance [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 14ad708e-9b73-4e8e-822e-036be4f62cdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:19:31 np0005597378 nova_compute[238941]: 2026-01-27 14:19:31.674 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:19:31 np0005597378 nova_compute[238941]: 2026-01-27 14:19:31.674 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Ensure instance console log exists: /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:19:31 np0005597378 nova_compute[238941]: 2026-01-27 14:19:31.675 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:31 np0005597378 nova_compute[238941]: 2026-01-27 14:19:31.675 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:31 np0005597378 nova_compute[238941]: 2026-01-27 14:19:31.675 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:31 np0005597378 podman[356728]: 2026-01-27 14:19:31.708253802 +0000 UTC m=+0.052253035 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:19:32 np0005597378 nova_compute[238941]: 2026-01-27 14:19:32.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 200 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 1.3 MiB/s wr, 61 op/s
Jan 27 09:19:33 np0005597378 nova_compute[238941]: 2026-01-27 14:19:33.188 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Successfully created port: 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:19:34 np0005597378 nova_compute[238941]: 2026-01-27 14:19:34.126 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Successfully created port: 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:19:34 np0005597378 nova_compute[238941]: 2026-01-27 14:19:34.225 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:34 np0005597378 nova_compute[238941]: 2026-01-27 14:19:34.440 238945 DEBUG oslo_concurrency.lockutils [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "interface-8112a700-f12a-43be-a5c6-f0536e53b2c4-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:34 np0005597378 nova_compute[238941]: 2026-01-27 14:19:34.441 238945 DEBUG oslo_concurrency.lockutils [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-8112a700-f12a-43be-a5c6-f0536e53b2c4-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:34 np0005597378 nova_compute[238941]: 2026-01-27 14:19:34.442 238945 DEBUG nova.objects.instance [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'flavor' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:19:34 np0005597378 nova_compute[238941]: 2026-01-27 14:19:34.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:34 np0005597378 podman[356747]: 2026-01-27 14:19:34.762403851 +0000 UTC m=+0.094120563 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.884898) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523574884973, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 535, "num_deletes": 255, "total_data_size": 534942, "memory_usage": 546336, "flush_reason": "Manual Compaction"}
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523574904214, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 530204, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47183, "largest_seqno": 47717, "table_properties": {"data_size": 527243, "index_size": 933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6755, "raw_average_key_size": 18, "raw_value_size": 521395, "raw_average_value_size": 1416, "num_data_blocks": 42, "num_entries": 368, "num_filter_entries": 368, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523539, "oldest_key_time": 1769523539, "file_creation_time": 1769523574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 19387 microseconds, and 4655 cpu microseconds.
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.904282) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 530204 bytes OK
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.904313) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.913614) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.913636) EVENT_LOG_v1 {"time_micros": 1769523574913630, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.913669) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 531878, prev total WAL file size 531878, number of live WAL files 2.
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.914678) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373630' seq:72057594037927935, type:22 .. '6C6F676D0032303131' seq:0, type:0; will stop at (end)
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(517KB)], [107(10237KB)]
Jan 27 09:19:34 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523574914784, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 11013569, "oldest_snapshot_seqno": -1}
Jan 27 09:19:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 227 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6930 keys, 10900383 bytes, temperature: kUnknown
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523575290115, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10900383, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10851978, "index_size": 29989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 180333, "raw_average_key_size": 26, "raw_value_size": 10726157, "raw_average_value_size": 1547, "num_data_blocks": 1178, "num_entries": 6930, "num_filter_entries": 6930, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.290382) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10900383 bytes
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.296171) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 29.3 rd, 29.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(41.3) write-amplify(20.6) OK, records in: 7448, records dropped: 518 output_compression: NoCompression
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.296216) EVENT_LOG_v1 {"time_micros": 1769523575296194, "job": 64, "event": "compaction_finished", "compaction_time_micros": 375392, "compaction_time_cpu_micros": 29214, "output_level": 6, "num_output_files": 1, "total_output_size": 10900383, "num_input_records": 7448, "num_output_records": 6930, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523575296537, "job": 64, "event": "table_file_deletion", "file_number": 109}
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523575298473, "job": 64, "event": "table_file_deletion", "file_number": 107}
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:34.914288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.298502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.298505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.298507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.298508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:19:35 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:19:35.298510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:19:35 np0005597378 nova_compute[238941]: 2026-01-27 14:19:35.460 238945 DEBUG nova.objects.instance [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:19:35 np0005597378 nova_compute[238941]: 2026-01-27 14:19:35.480 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Successfully updated port: 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:19:35 np0005597378 nova_compute[238941]: 2026-01-27 14:19:35.495 238945 DEBUG nova.network.neutron [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:19:35 np0005597378 nova_compute[238941]: 2026-01-27 14:19:35.752 238945 DEBUG nova.policy [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:19:35 np0005597378 nova_compute[238941]: 2026-01-27 14:19:35.896 238945 DEBUG nova.compute.manager [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-changed-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:35 np0005597378 nova_compute[238941]: 2026-01-27 14:19:35.897 238945 DEBUG nova.compute.manager [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing instance network info cache due to event network-changed-1bcec80f-dc59-4ec8-95f1-fb7555b8b889. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:19:35 np0005597378 nova_compute[238941]: 2026-01-27 14:19:35.897 238945 DEBUG oslo_concurrency.lockutils [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:19:35 np0005597378 nova_compute[238941]: 2026-01-27 14:19:35.897 238945 DEBUG oslo_concurrency.lockutils [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:19:35 np0005597378 nova_compute[238941]: 2026-01-27 14:19:35.898 238945 DEBUG nova.network.neutron [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing network info cache for port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:19:36 np0005597378 nova_compute[238941]: 2026-01-27 14:19:36.144 238945 DEBUG nova.network.neutron [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:19:36 np0005597378 nova_compute[238941]: 2026-01-27 14:19:36.887 238945 DEBUG nova.network.neutron [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:19:36 np0005597378 nova_compute[238941]: 2026-01-27 14:19:36.903 238945 DEBUG oslo_concurrency.lockutils [req-b59beba5-7861-4fdc-9ac8-136e6a89e185 req-9ce2690b-591a-456b-9f09-86012041d383 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:19:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 2.0 MiB/s wr, 66 op/s
Jan 27 09:19:37 np0005597378 nova_compute[238941]: 2026-01-27 14:19:37.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:19:37 np0005597378 nova_compute[238941]: 2026-01-27 14:19:37.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:38 np0005597378 nova_compute[238941]: 2026-01-27 14:19:38.033 238945 DEBUG nova.network.neutron [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Successfully created port: a6c25c1f-7e72-447c-98b1-66fc3fd447e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:19:38 np0005597378 nova_compute[238941]: 2026-01-27 14:19:38.037 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Successfully updated port: 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:19:38 np0005597378 nova_compute[238941]: 2026-01-27 14:19:38.062 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:19:38 np0005597378 nova_compute[238941]: 2026-01-27 14:19:38.063 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:19:38 np0005597378 nova_compute[238941]: 2026-01-27 14:19:38.063 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:19:38 np0005597378 nova_compute[238941]: 2026-01-27 14:19:38.145 238945 DEBUG nova.compute.manager [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-changed-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:38 np0005597378 nova_compute[238941]: 2026-01-27 14:19:38.146 238945 DEBUG nova.compute.manager [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing instance network info cache due to event network-changed-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:19:38 np0005597378 nova_compute[238941]: 2026-01-27 14:19:38.146 238945 DEBUG oslo_concurrency.lockutils [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:19:38 np0005597378 nova_compute[238941]: 2026-01-27 14:19:38.657 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:19:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 27 09:19:39 np0005597378 nova_compute[238941]: 2026-01-27 14:19:39.469 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:39 np0005597378 nova_compute[238941]: 2026-01-27 14:19:39.629 238945 DEBUG nova.network.neutron [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Successfully updated port: a6c25c1f-7e72-447c-98b1-66fc3fd447e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:19:39 np0005597378 nova_compute[238941]: 2026-01-27 14:19:39.644 238945 DEBUG oslo_concurrency.lockutils [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:19:39 np0005597378 nova_compute[238941]: 2026-01-27 14:19:39.644 238945 DEBUG oslo_concurrency.lockutils [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:19:39 np0005597378 nova_compute[238941]: 2026-01-27 14:19:39.645 238945 DEBUG nova.network.neutron [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:19:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:40 np0005597378 nova_compute[238941]: 2026-01-27 14:19:40.247 238945 DEBUG nova.compute.manager [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-changed-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:40 np0005597378 nova_compute[238941]: 2026-01-27 14:19:40.248 238945 DEBUG nova.compute.manager [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing instance network info cache due to event network-changed-a6c25c1f-7e72-447c-98b1-66fc3fd447e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:19:40 np0005597378 nova_compute[238941]: 2026-01-27 14:19:40.248 238945 DEBUG oslo_concurrency.lockutils [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:19:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 27 09:19:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:41.863 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:41.864 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.921 238945 DEBUG nova.network.neutron [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [{"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.969 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.969 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Instance network_info: |[{"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.970 238945 DEBUG oslo_concurrency.lockutils [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.970 238945 DEBUG nova.network.neutron [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing network info cache for port 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.973 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Start _get_guest_xml network_info=[{"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.977 238945 WARNING nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.985 238945 DEBUG nova.virt.libvirt.host [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.985 238945 DEBUG nova.virt.libvirt.host [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.993 238945 DEBUG nova.virt.libvirt.host [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.994 238945 DEBUG nova.virt.libvirt.host [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.994 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.994 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.995 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.995 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.995 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.995 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.995 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.996 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.996 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.996 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.996 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.996 238945 DEBUG nova.virt.hardware [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:19:41 np0005597378 nova_compute[238941]: 2026-01-27 14:19:41.999 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:19:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1592778182' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:19:42 np0005597378 nova_compute[238941]: 2026-01-27 14:19:42.533 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:42 np0005597378 nova_compute[238941]: 2026-01-27 14:19:42.558 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:42 np0005597378 nova_compute[238941]: 2026-01-27 14:19:42.562 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:42 np0005597378 nova_compute[238941]: 2026-01-27 14:19:42.715 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:19:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:19:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565539619' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.156 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.159 238945 DEBUG nova.virt.libvirt.vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-375219371',display_name='tempest-TestGettingAddress-server-375219371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-375219371',id=128,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-migbk3bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:19:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=14ad708e-9b73-4e8e-822e-036be4f62cdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.160 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.161 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.163 238945 DEBUG nova.virt.libvirt.vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-375219371',display_name='tempest-TestGettingAddress-server-375219371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-375219371',id=128,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-migbk3bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:19:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=14ad708e-9b73-4e8e-822e-036be4f62cdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.164 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.166 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.167 238945 DEBUG nova.objects.instance [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 14ad708e-9b73-4e8e-822e-036be4f62cdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.193 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <uuid>14ad708e-9b73-4e8e-822e-036be4f62cdd</uuid>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <name>instance-00000080</name>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-375219371</nova:name>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:19:41</nova:creationTime>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        <nova:port uuid="1bcec80f-dc59-4ec8-95f1-fb7555b8b889">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        <nova:port uuid="0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fea6:83f4" ipVersion="6"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <entry name="serial">14ad708e-9b73-4e8e-822e-036be4f62cdd</entry>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <entry name="uuid">14ad708e-9b73-4e8e-822e-036be4f62cdd</entry>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/14ad708e-9b73-4e8e-822e-036be4f62cdd_disk">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/14ad708e-9b73-4e8e-822e-036be4f62cdd_disk.config">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:1a:73:7f"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <target dev="tap1bcec80f-dc"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:a6:83:f4"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <target dev="tap0b21a97c-a7"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/console.log" append="off"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:19:43 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:19:43 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.195 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Preparing to wait for external event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.195 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.196 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.196 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.196 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Preparing to wait for external event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.196 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.197 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.197 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.198 238945 DEBUG nova.virt.libvirt.vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-375219371',display_name='tempest-TestGettingAddress-server-375219371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-375219371',id=128,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-migbk3bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:19:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=14ad708e-9b73-4e8e-822e-036be4f62cdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.198 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.199 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.199 238945 DEBUG os_vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.200 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.201 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.203 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.204 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bcec80f-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.204 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1bcec80f-dc, col_values=(('external_ids', {'iface-id': '1bcec80f-dc59-4ec8-95f1-fb7555b8b889', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:73:7f', 'vm-uuid': '14ad708e-9b73-4e8e-822e-036be4f62cdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.206 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 NetworkManager[48904]: <info>  [1769523583.2073] manager: (tap1bcec80f-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/562)
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.208 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.213 238945 INFO os_vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc')#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.214 238945 DEBUG nova.virt.libvirt.vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-375219371',display_name='tempest-TestGettingAddress-server-375219371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-375219371',id=128,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-migbk3bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:19:30Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=14ad708e-9b73-4e8e-822e-036be4f62cdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.214 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.215 238945 DEBUG nova.network.os_vif_util [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.216 238945 DEBUG os_vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.216 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.216 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.217 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.219 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.219 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b21a97c-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.220 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b21a97c-a7, col_values=(('external_ids', {'iface-id': '0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:83:f4', 'vm-uuid': '14ad708e-9b73-4e8e-822e-036be4f62cdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.221 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 NetworkManager[48904]: <info>  [1769523583.2223] manager: (tap0b21a97c-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.225 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.228 238945 INFO os_vif [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7')#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.447 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.447 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.447 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:1a:73:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.447 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:a6:83:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.448 238945 INFO nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Using config drive#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.477 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.547 238945 DEBUG nova.network.neutron [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.578 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.596 238945 DEBUG oslo_concurrency.lockutils [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.597 238945 DEBUG oslo_concurrency.lockutils [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.597 238945 DEBUG nova.network.neutron [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing network info cache for port a6c25c1f-7e72-447c-98b1-66fc3fd447e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.600 238945 DEBUG nova.virt.libvirt.vif [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.601 238945 DEBUG nova.network.os_vif_util [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.601 238945 DEBUG nova.network.os_vif_util [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.602 238945 DEBUG os_vif [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.602 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.603 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.605 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.605 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6c25c1f-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.605 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6c25c1f-7e, col_values=(('external_ids', {'iface-id': 'a6c25c1f-7e72-447c-98b1-66fc3fd447e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:b5:64', 'vm-uuid': '8112a700-f12a-43be-a5c6-f0536e53b2c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.606 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 NetworkManager[48904]: <info>  [1769523583.6080] manager: (tapa6c25c1f-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/564)
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.608 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.616 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.617 238945 INFO os_vif [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e')#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.618 238945 DEBUG nova.virt.libvirt.vif [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.618 238945 DEBUG nova.network.os_vif_util [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.619 238945 DEBUG nova.network.os_vif_util [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.621 238945 DEBUG nova.virt.libvirt.guest [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] attach device xml: <interface type="ethernet">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:33:b5:64"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <target dev="tapa6c25c1f-7e"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]: </interface>
Jan 27 09:19:43 np0005597378 nova_compute[238941]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 27 09:19:43 np0005597378 kernel: tapa6c25c1f-7e: entered promiscuous mode
Jan 27 09:19:43 np0005597378 NetworkManager[48904]: <info>  [1769523583.6376] manager: (tapa6c25c1f-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/565)
Jan 27 09:19:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:43Z|01357|binding|INFO|Claiming lport a6c25c1f-7e72-447c-98b1-66fc3fd447e1 for this chassis.
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.647 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:43Z|01358|binding|INFO|a6c25c1f-7e72-447c-98b1-66fc3fd447e1: Claiming fa:16:3e:33:b5:64 10.100.0.20
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.661 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:b5:64 10.100.0.20'], port_security=['fa:16:3e:33:b5:64 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': '8112a700-f12a-43be-a5c6-f0536e53b2c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4e7fe83-1b26-4cab-bd58-13d7a6a5e2cb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a6c25c1f-7e72-447c-98b1-66fc3fd447e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.662 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a6c25c1f-7e72-447c-98b1-66fc3fd447e1 in datapath aa696622-36f6-4e49-a5aa-336a8636b3ee bound to our chassis#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.664 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa696622-36f6-4e49-a5aa-336a8636b3ee#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.677 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[97a4606b-097f-4cef-8a96-c11dd16a0d05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.678 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa696622-31 in ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.680 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa696622-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.680 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d7148683-7cde-40fd-a8dc-8cc21bb8cb7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.681 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[809e9d00-d910-4a06-944f-f3fdcf925742]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 systemd-udevd[356869]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.696 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[e6998caa-8d6c-48ae-b848-0d7b43c78d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 NetworkManager[48904]: <info>  [1769523583.6993] device (tapa6c25c1f-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:19:43 np0005597378 NetworkManager[48904]: <info>  [1769523583.7003] device (tapa6c25c1f-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.707 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:43Z|01359|binding|INFO|Setting lport a6c25c1f-7e72-447c-98b1-66fc3fd447e1 ovn-installed in OVS
Jan 27 09:19:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:43Z|01360|binding|INFO|Setting lport a6c25c1f-7e72-447c-98b1-66fc3fd447e1 up in Southbound
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.715 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ad84a35a-a3ca-4f6c-8fba-c04847c593bd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.745 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f8719e81-94da-4c03-9be8-0a447452fc55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 systemd-udevd[356875]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:19:43 np0005597378 NetworkManager[48904]: <info>  [1769523583.7552] manager: (tapaa696622-30): new Veth device (/org/freedesktop/NetworkManager/Devices/566)
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.754 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8445c417-c207-462a-8493-4a0497589c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.795 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[622382f1-b02e-4df2-9b47-6174cbc8d837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.798 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[125a6c96-4e60-4d9a-bec8-3ee5ecf08c9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.819 238945 DEBUG nova.virt.libvirt.driver [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.819 238945 DEBUG nova.virt.libvirt.driver [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.820 238945 DEBUG nova.virt.libvirt.driver [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:20:a8:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.820 238945 DEBUG nova.virt.libvirt.driver [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:33:b5:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:19:43 np0005597378 NetworkManager[48904]: <info>  [1769523583.8252] device (tapaa696622-30): carrier: link connected
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.831 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f8868fc9-d2db-4620-9490-ee5cf7b2a4fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.846 238945 DEBUG nova.virt.libvirt.guest [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:19:43</nova:creationTime>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    <nova:port uuid="a6c25c1f-7e72-447c-98b1-66fc3fd447e1">
Jan 27 09:19:43 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:19:43 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:19:43 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:19:43 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.851 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[28b58498-7a04-4bc3-8503-ddb3acde7078]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa696622-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622344, 'reachable_time': 18498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356905, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.869 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7f04043e-fc17-413b-b043-946bf7a90333]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febe:cc0d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622344, 'tstamp': 622344}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356906, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.876 238945 DEBUG oslo_concurrency.lockutils [None req-0c8c5213-eb1b-4448-bae4-9d4793b01e75 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-8112a700-f12a-43be-a5c6-f0536e53b2c4-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.885 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aeae1c3c-37b3-45b4-a00f-f56823d70c5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa696622-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622344, 'reachable_time': 18498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356907, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.917 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b550a368-7dd7-43b7-b5e9-3feb2a9bc0b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.977 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3daf4a-11a6-48ca-86ce-b00527bee5e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.978 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa696622-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.978 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.979 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa696622-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.980 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 NetworkManager[48904]: <info>  [1769523583.9814] manager: (tapaa696622-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/567)
Jan 27 09:19:43 np0005597378 kernel: tapaa696622-30: entered promiscuous mode
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:43.990 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa696622-30, col_values=(('external_ids', {'iface-id': 'a853d263-cad4-42f8-b0f8-2a1dfd60552f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:43 np0005597378 nova_compute[238941]: 2026-01-27 14:19:43.991 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:43Z|01361|binding|INFO|Releasing lport a853d263-cad4-42f8-b0f8-2a1dfd60552f from this chassis (sb_readonly=0)
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.009 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:44.018 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa696622-36f6-4e49-a5aa-336a8636b3ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa696622-36f6-4e49-a5aa-336a8636b3ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:44.019 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[36f21ae1-2e44-4aea-b31a-15fa001be31e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:44.019 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-aa696622-36f6-4e49-a5aa-336a8636b3ee
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/aa696622-36f6-4e49-a5aa-336a8636b3ee.pid.haproxy
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID aa696622-36f6-4e49-a5aa-336a8636b3ee
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:19:44 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:44.020 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'env', 'PROCESS_TAG=haproxy-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa696622-36f6-4e49-a5aa-336a8636b3ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.221 238945 INFO nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Creating config drive at /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/disk.config#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.227 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk_3ujbip execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.375 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk_3ujbip" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.399 238945 DEBUG nova.storage.rbd_utils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.402 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/disk.config 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:44 np0005597378 podman[356948]: 2026-01-27 14:19:44.326202711 +0000 UTC m=+0.021004321 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.662 238945 DEBUG nova.compute.manager [req-43d376b5-db3f-4491-bac7-6a69b8127c78 req-38cabcc8-98b1-4aef-85c1-aecd5ba0cacd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.663 238945 DEBUG oslo_concurrency.lockutils [req-43d376b5-db3f-4491-bac7-6a69b8127c78 req-38cabcc8-98b1-4aef-85c1-aecd5ba0cacd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.663 238945 DEBUG oslo_concurrency.lockutils [req-43d376b5-db3f-4491-bac7-6a69b8127c78 req-38cabcc8-98b1-4aef-85c1-aecd5ba0cacd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.663 238945 DEBUG oslo_concurrency.lockutils [req-43d376b5-db3f-4491-bac7-6a69b8127c78 req-38cabcc8-98b1-4aef-85c1-aecd5ba0cacd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.664 238945 DEBUG nova.compute.manager [req-43d376b5-db3f-4491-bac7-6a69b8127c78 req-38cabcc8-98b1-4aef-85c1-aecd5ba0cacd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.664 238945 WARNING nova.compute.manager [req-43d376b5-db3f-4491-bac7-6a69b8127c78 req-38cabcc8-98b1-4aef-85c1-aecd5ba0cacd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received unexpected event network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.683 238945 DEBUG nova.network.neutron [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updated VIF entry in instance network info cache for port 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.684 238945 DEBUG nova.network.neutron [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [{"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:19:44 np0005597378 nova_compute[238941]: 2026-01-27 14:19:44.700 238945 DEBUG oslo_concurrency.lockutils [req-604f0087-5db5-4acf-ad4b-db1ada65d8d8 req-8167f66b-e351-40f8-bb02-3c4db6b9ecb6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:19:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:19:45 np0005597378 podman[356948]: 2026-01-27 14:19:45.104552916 +0000 UTC m=+0.799354506 container create c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 09:19:45 np0005597378 systemd[1]: Started libpod-conmon-c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842.scope.
Jan 27 09:19:45 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:19:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29bccd3a09aa8b8ec0742474309f47136eb579eb986e6a8bca461955f693665d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:19:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:45Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:b5:64 10.100.0.20
Jan 27 09:19:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:45Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:b5:64 10.100.0.20
Jan 27 09:19:45 np0005597378 podman[356948]: 2026-01-27 14:19:45.625545092 +0000 UTC m=+1.320346712 container init c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 09:19:45 np0005597378 podman[356948]: 2026-01-27 14:19:45.637688346 +0000 UTC m=+1.332489976 container start c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 09:19:45 np0005597378 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [NOTICE]   (357004) : New worker (357006) forked
Jan 27 09:19:45 np0005597378 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [NOTICE]   (357004) : Loading success.
Jan 27 09:19:45 np0005597378 nova_compute[238941]: 2026-01-27 14:19:45.801 238945 DEBUG oslo_concurrency.processutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/disk.config 14ad708e-9b73-4e8e-822e-036be4f62cdd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:45 np0005597378 nova_compute[238941]: 2026-01-27 14:19:45.802 238945 INFO nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Deleting local config drive /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd/disk.config because it was imported into RBD.#033[00m
Jan 27 09:19:45 np0005597378 kernel: tap1bcec80f-dc: entered promiscuous mode
Jan 27 09:19:45 np0005597378 NetworkManager[48904]: <info>  [1769523585.8525] manager: (tap1bcec80f-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/568)
Jan 27 09:19:45 np0005597378 systemd-udevd[356893]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:19:45 np0005597378 nova_compute[238941]: 2026-01-27 14:19:45.857 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:45Z|01362|binding|INFO|Claiming lport 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 for this chassis.
Jan 27 09:19:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:45Z|01363|binding|INFO|1bcec80f-dc59-4ec8-95f1-fb7555b8b889: Claiming fa:16:3e:1a:73:7f 10.100.0.11
Jan 27 09:19:45 np0005597378 NetworkManager[48904]: <info>  [1769523585.8688] device (tap1bcec80f-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:19:45 np0005597378 NetworkManager[48904]: <info>  [1769523585.8695] device (tap1bcec80f-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:19:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.868 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:73:7f 10.100.0.11'], port_security=['fa:16:3e:1a:73:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '14ad708e-9b73-4e8e-822e-036be4f62cdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21712e78-75dc-4510-801a-6748e9e4e02c, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1bcec80f-dc59-4ec8-95f1-fb7555b8b889) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:19:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.869 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 in datapath bd37f3d1-36c6-44a7-9f3e-1ef294aba42f bound to our chassis#033[00m
Jan 27 09:19:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.872 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd37f3d1-36c6-44a7-9f3e-1ef294aba42f#033[00m
Jan 27 09:19:45 np0005597378 NetworkManager[48904]: <info>  [1769523585.8742] manager: (tap0b21a97c-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/569)
Jan 27 09:19:45 np0005597378 kernel: tap0b21a97c-a7: entered promiscuous mode
Jan 27 09:19:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:45Z|01364|binding|INFO|Setting lport 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 ovn-installed in OVS
Jan 27 09:19:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:45Z|01365|binding|INFO|Setting lport 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 up in Southbound
Jan 27 09:19:45 np0005597378 nova_compute[238941]: 2026-01-27 14:19:45.880 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:45Z|01366|if_status|INFO|Dropped 4 log messages in last 138 seconds (most recently, 138 seconds ago) due to excessive rate
Jan 27 09:19:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:45Z|01367|if_status|INFO|Not updating pb chassis for 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba now as sb is readonly
Jan 27 09:19:45 np0005597378 nova_compute[238941]: 2026-01-27 14:19:45.882 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:45Z|01368|binding|INFO|Claiming lport 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba for this chassis.
Jan 27 09:19:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:45Z|01369|binding|INFO|0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba: Claiming fa:16:3e:a6:83:f4 2001:db8::f816:3eff:fea6:83f4
Jan 27 09:19:45 np0005597378 NetworkManager[48904]: <info>  [1769523585.8905] device (tap0b21a97c-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:19:45 np0005597378 NetworkManager[48904]: <info>  [1769523585.8917] device (tap0b21a97c-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:19:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.896 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[103308df-464b-41c5-b653-40a8905b3821]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:45 np0005597378 nova_compute[238941]: 2026-01-27 14:19:45.904 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.904 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:83:f4 2001:db8::f816:3eff:fea6:83f4'], port_security=['fa:16:3e:a6:83:f4 2001:db8::f816:3eff:fea6:83f4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea6:83f4/64', 'neutron:device_id': '14ad708e-9b73-4e8e-822e-036be4f62cdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77fecdf9-05ce-491c-ab82-8473333acf08, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:19:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:45Z|01370|binding|INFO|Setting lport 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba ovn-installed in OVS
Jan 27 09:19:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:45Z|01371|binding|INFO|Setting lport 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba up in Southbound
Jan 27 09:19:45 np0005597378 nova_compute[238941]: 2026-01-27 14:19:45.907 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:45 np0005597378 systemd-machined[207425]: New machine qemu-160-instance-00000080.
Jan 27 09:19:45 np0005597378 systemd[1]: Started Virtual Machine qemu-160-instance-00000080.
Jan 27 09:19:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.932 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[14153f93-d76f-4a10-8eba-b68a3dd6510d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.935 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7297857f-6d16-431c-bddb-b361a76aecdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.962 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[184660ff-574b-4c41-be9b-562dae04bb61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:45 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:45.985 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[53a975da-beb1-4f5d-8224-8542670c681a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd37f3d1-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:2c:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 393], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618239, 'reachable_time': 42315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357043, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.006 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[92196e82-f234-4cc8-ba03-0db61220e55e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbd37f3d1-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618251, 'tstamp': 618251}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357046, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbd37f3d1-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618254, 'tstamp': 618254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357046, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.008 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd37f3d1-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.010 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.011 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd37f3d1-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd37f3d1-30, col_values=(('external_ids', {'iface-id': '40babe7c-93a1-447f-a7bf-393e56c7e18c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.012 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.014 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba in datapath ec30aef5-5eb6-4cbb-86f9-bf221c914a9f unbound from our chassis#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.016 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec30aef5-5eb6-4cbb-86f9-bf221c914a9f#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.030 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[70954405-f23a-4b69-8162-dcbc915fe2ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.062 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcfa452-fbb5-46aa-8718-a7223807d684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.065 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2eef9123-8a59-4830-bca3-0b1ca82c2eda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.095 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf726c5-89bb-4804-8e81-d21094391a1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.113 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66d85ec0-ccb8-4b74-ad1f-e49c2ddd857a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec30aef5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:b0:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618437, 'reachable_time': 33407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357053, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.132 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0383c765-9f7a-428c-ae07-f01afdc1b783]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec30aef5-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618449, 'tstamp': 618449}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357054, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.135 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec30aef5-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.137 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.138 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.138 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec30aef5-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.139 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.139 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec30aef5-50, col_values=(('external_ids', {'iface-id': 'efb5ae4a-27c5-4322-b9a7-2ceba053c0fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.139 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.324 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.324 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:46.325 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.434 238945 DEBUG nova.network.neutron [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updated VIF entry in instance network info cache for port a6c25c1f-7e72-447c-98b1-66fc3fd447e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.435 238945 DEBUG nova.network.neutron [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.452 238945 DEBUG oslo_concurrency.lockutils [req-cf0a4f4e-19b2-46b7-9c82-c0a0f22e8ead req-ea21684c-c289-4f61-920d-210b28b2162f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.784 238945 DEBUG nova.compute.manager [req-b4900c85-8fc9-4d50-a419-89db00efcf1d req-6e9d7b82-ca4a-4053-b968-5847df801780 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.785 238945 DEBUG oslo_concurrency.lockutils [req-b4900c85-8fc9-4d50-a419-89db00efcf1d req-6e9d7b82-ca4a-4053-b968-5847df801780 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.785 238945 DEBUG oslo_concurrency.lockutils [req-b4900c85-8fc9-4d50-a419-89db00efcf1d req-6e9d7b82-ca4a-4053-b968-5847df801780 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.785 238945 DEBUG oslo_concurrency.lockutils [req-b4900c85-8fc9-4d50-a419-89db00efcf1d req-6e9d7b82-ca4a-4053-b968-5847df801780 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.785 238945 DEBUG nova.compute.manager [req-b4900c85-8fc9-4d50-a419-89db00efcf1d req-6e9d7b82-ca4a-4053-b968-5847df801780 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.785 238945 WARNING nova.compute.manager [req-b4900c85-8fc9-4d50-a419-89db00efcf1d req-6e9d7b82-ca4a-4053-b968-5847df801780 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received unexpected event network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.812 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523586.8120117, 14ad708e-9b73-4e8e-822e-036be4f62cdd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.813 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] VM Started (Lifecycle Event)#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.856 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.860 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523586.8129096, 14ad708e-9b73-4e8e-822e-036be4f62cdd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.861 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.880 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.884 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:19:46 np0005597378 nova_compute[238941]: 2026-01-27 14:19:46.911 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:19:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.3 MiB/s wr, 16 op/s
Jan 27 09:19:47 np0005597378 nova_compute[238941]: 2026-01-27 14:19:47.466 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:47 np0005597378 nova_compute[238941]: 2026-01-27 14:19:47.717 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:19:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:19:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:19:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:19:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:19:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:19:48 np0005597378 nova_compute[238941]: 2026-01-27 14:19:48.607 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 13 KiB/s wr, 5 op/s
Jan 27 09:19:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:19:50.866 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:19:50 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:50Z|01372|binding|INFO|Releasing lport a853d263-cad4-42f8-b0f8-2a1dfd60552f from this chassis (sb_readonly=0)
Jan 27 09:19:50 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:50Z|01373|binding|INFO|Releasing lport efb5ae4a-27c5-4322-b9a7-2ceba053c0fa from this chassis (sb_readonly=0)
Jan 27 09:19:50 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:50Z|01374|binding|INFO|Releasing lport 40babe7c-93a1-447f-a7bf-393e56c7e18c from this chassis (sb_readonly=0)
Jan 27 09:19:50 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:50Z|01375|binding|INFO|Releasing lport 60c58b14-38c7-4b18-a86f-4ef52a16b872 from this chassis (sb_readonly=0)
Jan 27 09:19:50 np0005597378 nova_compute[238941]: 2026-01-27 14:19:50.977 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 16 KiB/s wr, 10 op/s
Jan 27 09:19:51 np0005597378 nova_compute[238941]: 2026-01-27 14:19:51.929 238945 DEBUG nova.compute.manager [req-160a17e2-0c0b-4f96-a497-a4c86397deab req-0398da4d-45ca-4652-a21b-33eb38a8adb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:51 np0005597378 nova_compute[238941]: 2026-01-27 14:19:51.929 238945 DEBUG oslo_concurrency.lockutils [req-160a17e2-0c0b-4f96-a497-a4c86397deab req-0398da4d-45ca-4652-a21b-33eb38a8adb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:51 np0005597378 nova_compute[238941]: 2026-01-27 14:19:51.930 238945 DEBUG oslo_concurrency.lockutils [req-160a17e2-0c0b-4f96-a497-a4c86397deab req-0398da4d-45ca-4652-a21b-33eb38a8adb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:51 np0005597378 nova_compute[238941]: 2026-01-27 14:19:51.930 238945 DEBUG oslo_concurrency.lockutils [req-160a17e2-0c0b-4f96-a497-a4c86397deab req-0398da4d-45ca-4652-a21b-33eb38a8adb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:51 np0005597378 nova_compute[238941]: 2026-01-27 14:19:51.930 238945 DEBUG nova.compute.manager [req-160a17e2-0c0b-4f96-a497-a4c86397deab req-0398da4d-45ca-4652-a21b-33eb38a8adb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Processing event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:19:52 np0005597378 nova_compute[238941]: 2026-01-27 14:19:52.549 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:52 np0005597378 nova_compute[238941]: 2026-01-27 14:19:52.550 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:52 np0005597378 nova_compute[238941]: 2026-01-27 14:19:52.567 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:19:52 np0005597378 nova_compute[238941]: 2026-01-27 14:19:52.636 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:52 np0005597378 nova_compute[238941]: 2026-01-27 14:19:52.637 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:52 np0005597378 nova_compute[238941]: 2026-01-27 14:19:52.643 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:19:52 np0005597378 nova_compute[238941]: 2026-01-27 14:19:52.644 238945 INFO nova.compute.claims [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:19:52 np0005597378 nova_compute[238941]: 2026-01-27 14:19:52.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:52 np0005597378 nova_compute[238941]: 2026-01-27 14:19:52.782 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 16 KiB/s wr, 10 op/s
Jan 27 09:19:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:19:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/782307138' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.356 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.361 238945 DEBUG nova.compute.provider_tree [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.424 238945 DEBUG nova.scheduler.client.report [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.458 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.458 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.546 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.546 238945 DEBUG nova.network.neutron [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.583 238945 INFO nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.603 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.609 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.689 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.690 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.691 238945 INFO nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Creating image(s)#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.710 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.730 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.750 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.753 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.787 238945 DEBUG nova.policy [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.824 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.824 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.825 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.825 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.845 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:19:53 np0005597378 nova_compute[238941]: 2026-01-27 14:19:53.848 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.023 238945 DEBUG nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.024 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.024 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.024 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.025 238945 DEBUG nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] No event matching network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 in dict_keys([('network-vif-plugged', '0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.025 238945 WARNING nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received unexpected event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 for instance with vm_state building and task_state spawning.
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.025 238945 DEBUG nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.025 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.026 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.026 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.026 238945 DEBUG nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Processing event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.026 238945 DEBUG nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.026 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.027 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.027 238945 DEBUG oslo_concurrency.lockutils [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.027 238945 DEBUG nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] No waiting events found dispatching network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.027 238945 WARNING nova.compute.manager [req-672055f8-b170-4cd9-bd76-241b05d4caa5 req-1e6855d2-e38e-4a0d-ad35-7b6cf067046a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received unexpected event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba for instance with vm_state building and task_state spawning.
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.028 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Instance event wait completed in 7 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.032 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523594.0323625, 14ad708e-9b73-4e8e-822e-036be4f62cdd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.033 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] VM Resumed (Lifecycle Event)
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.035 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.038 238945 INFO nova.virt.libvirt.driver [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Instance spawned successfully.
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.038 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.058 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.062 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.066 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.066 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.066 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.067 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.067 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.068 238945 DEBUG nova.virt.libvirt.driver [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.092 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.123 238945 INFO nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Took 23.91 seconds to spawn the instance on the hypervisor.
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.123 238945 DEBUG nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.187 238945 INFO nova.compute.manager [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Took 25.08 seconds to build instance.
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.206 238945 DEBUG oslo_concurrency.lockutils [None req-37ad38d9-21da-4c9c-b896-d004a85cba0c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.777 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.929s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:19:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:54 np0005597378 nova_compute[238941]: 2026-01-27 14:19:54.840 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 09:19:55 np0005597378 nova_compute[238941]: 2026-01-27 14:19:55.037 238945 DEBUG nova.objects.instance [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 524e15bb-2900-40c4-a30f-4b157bfe59e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:19:55 np0005597378 nova_compute[238941]: 2026-01-27 14:19:55.062 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 09:19:55 np0005597378 nova_compute[238941]: 2026-01-27 14:19:55.062 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Ensure instance console log exists: /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 09:19:55 np0005597378 nova_compute[238941]: 2026-01-27 14:19:55.063 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:19:55 np0005597378 nova_compute[238941]: 2026-01-27 14:19:55.063 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:19:55 np0005597378 nova_compute[238941]: 2026-01-27 14:19:55.063 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:19:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 246 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 18 KiB/s wr, 11 op/s
Jan 27 09:19:56 np0005597378 nova_compute[238941]: 2026-01-27 14:19:56.796 238945 DEBUG nova.network.neutron [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Successfully created port: ac0e0d3a-d130-4a9a-924f-77a87f787cc2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 09:19:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 268 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 710 KiB/s wr, 66 op/s
Jan 27 09:19:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:57Z|01376|binding|INFO|Releasing lport a853d263-cad4-42f8-b0f8-2a1dfd60552f from this chassis (sb_readonly=0)
Jan 27 09:19:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:57Z|01377|binding|INFO|Releasing lport efb5ae4a-27c5-4322-b9a7-2ceba053c0fa from this chassis (sb_readonly=0)
Jan 27 09:19:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:57Z|01378|binding|INFO|Releasing lport 40babe7c-93a1-447f-a7bf-393e56c7e18c from this chassis (sb_readonly=0)
Jan 27 09:19:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:19:57Z|01379|binding|INFO|Releasing lport 60c58b14-38c7-4b18-a86f-4ef52a16b872 from this chassis (sb_readonly=0)
Jan 27 09:19:57 np0005597378 nova_compute[238941]: 2026-01-27 14:19:57.287 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:19:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:19:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:19:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:19:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:19:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:19:57 np0005597378 nova_compute[238941]: 2026-01-27 14:19:57.708 238945 DEBUG nova.network.neutron [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Successfully updated port: ac0e0d3a-d130-4a9a-924f-77a87f787cc2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 09:19:57 np0005597378 nova_compute[238941]: 2026-01-27 14:19:57.722 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-524e15bb-2900-40c4-a30f-4b157bfe59e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:19:57 np0005597378 nova_compute[238941]: 2026-01-27 14:19:57.723 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-524e15bb-2900-40c4-a30f-4b157bfe59e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:19:57 np0005597378 nova_compute[238941]: 2026-01-27 14:19:57.723 238945 DEBUG nova.network.neutron [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 09:19:57 np0005597378 nova_compute[238941]: 2026-01-27 14:19:57.724 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:19:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:19:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:19:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:19:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:19:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:19:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:19:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:19:57 np0005597378 nova_compute[238941]: 2026-01-27 14:19:57.880 238945 DEBUG nova.network.neutron [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 09:19:58 np0005597378 nova_compute[238941]: 2026-01-27 14:19:58.027 238945 DEBUG nova.compute.manager [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-changed-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:19:58 np0005597378 nova_compute[238941]: 2026-01-27 14:19:58.027 238945 DEBUG nova.compute.manager [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Refreshing instance network info cache due to event network-changed-ac0e0d3a-d130-4a9a-924f-77a87f787cc2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:19:58 np0005597378 nova_compute[238941]: 2026-01-27 14:19:58.028 238945 DEBUG oslo_concurrency.lockutils [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-524e15bb-2900-40c4-a30f-4b157bfe59e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:19:58 np0005597378 podman[357433]: 2026-01-27 14:19:58.10748041 +0000 UTC m=+0.028236364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:19:58 np0005597378 podman[357433]: 2026-01-27 14:19:58.34987585 +0000 UTC m=+0.270631794 container create c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_darwin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:19:58 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:19:58 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:19:58 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:19:58 np0005597378 systemd[1]: Started libpod-conmon-c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9.scope.
Jan 27 09:19:58 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:19:58 np0005597378 nova_compute[238941]: 2026-01-27 14:19:58.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:19:58 np0005597378 podman[357433]: 2026-01-27 14:19:58.686164026 +0000 UTC m=+0.606919980 container init c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_darwin, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:19:58 np0005597378 podman[357433]: 2026-01-27 14:19:58.695854005 +0000 UTC m=+0.616609939 container start c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_darwin, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:19:58 np0005597378 stoic_darwin[357450]: 167 167
Jan 27 09:19:58 np0005597378 systemd[1]: libpod-c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9.scope: Deactivated successfully.
Jan 27 09:19:58 np0005597378 podman[357433]: 2026-01-27 14:19:58.75337238 +0000 UTC m=+0.674128324 container attach c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_darwin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:19:58 np0005597378 podman[357433]: 2026-01-27 14:19:58.753991556 +0000 UTC m=+0.674747500 container died c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 09:19:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 293 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 27 09:19:59 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cb1fe89f51ae4238da96875bead3cca85a722cab2b6bf3531b8424ea23094f45-merged.mount: Deactivated successfully.
Jan 27 09:19:59 np0005597378 podman[357433]: 2026-01-27 14:19:59.684703578 +0000 UTC m=+1.605459542 container remove c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:19:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:19:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3131532386' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:19:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:19:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3131532386' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:19:59 np0005597378 systemd[1]: libpod-conmon-c47a998f0c6db569b48b6087368666773130057d618926e10076db275f912ca9.scope: Deactivated successfully.
Jan 27 09:19:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:19:59 np0005597378 nova_compute[238941]: 2026-01-27 14:19:59.922 238945 DEBUG nova.network.neutron [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Updating instance_info_cache with network_info: [{"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:19:59 np0005597378 podman[357475]: 2026-01-27 14:19:59.929100542 +0000 UTC m=+0.055891393 container create 8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:19:59 np0005597378 podman[357475]: 2026-01-27 14:19:59.903050856 +0000 UTC m=+0.029841727 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.029 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-524e15bb-2900-40c4-a30f-4b157bfe59e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.029 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Instance network_info: |[{"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.029 238945 DEBUG oslo_concurrency.lockutils [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-524e15bb-2900-40c4-a30f-4b157bfe59e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.030 238945 DEBUG nova.network.neutron [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Refreshing network info cache for port ac0e0d3a-d130-4a9a-924f-77a87f787cc2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.032 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Start _get_guest_xml network_info=[{"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.037 238945 WARNING nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.045 238945 DEBUG nova.virt.libvirt.host [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.046 238945 DEBUG nova.virt.libvirt.host [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.049 238945 DEBUG nova.virt.libvirt.host [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.050 238945 DEBUG nova.virt.libvirt.host [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.050 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.050 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.051 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.052 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.052 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.052 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.053 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.053 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.053 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.054 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.054 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.054 238945 DEBUG nova.virt.hardware [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.060 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:00 np0005597378 systemd[1]: Started libpod-conmon-8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43.scope.
Jan 27 09:20:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:20:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d429a83ac15aae09d87c08d979c34ec9f42d5d5f345f4382dc92e8ef80ee93da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d429a83ac15aae09d87c08d979c34ec9f42d5d5f345f4382dc92e8ef80ee93da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d429a83ac15aae09d87c08d979c34ec9f42d5d5f345f4382dc92e8ef80ee93da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d429a83ac15aae09d87c08d979c34ec9f42d5d5f345f4382dc92e8ef80ee93da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d429a83ac15aae09d87c08d979c34ec9f42d5d5f345f4382dc92e8ef80ee93da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:00 np0005597378 podman[357475]: 2026-01-27 14:20:00.240647257 +0000 UTC m=+0.367438128 container init 8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:20:00 np0005597378 podman[357475]: 2026-01-27 14:20:00.248728133 +0000 UTC m=+0.375518984 container start 8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:20:00 np0005597378 podman[357475]: 2026-01-27 14:20:00.275742874 +0000 UTC m=+0.402533775 container attach 8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:20:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:20:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318364148' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.662 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.694 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:20:00 np0005597378 nova_compute[238941]: 2026-01-27 14:20:00.721 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:00 np0005597378 laughing_agnesi[357493]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:20:00 np0005597378 laughing_agnesi[357493]: --> All data devices are unavailable
Jan 27 09:20:00 np0005597378 systemd[1]: libpod-8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43.scope: Deactivated successfully.
Jan 27 09:20:00 np0005597378 podman[357553]: 2026-01-27 14:20:00.798546759 +0000 UTC m=+0.024327741 container died 8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 09:20:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 293 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Jan 27 09:20:01 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d429a83ac15aae09d87c08d979c34ec9f42d5d5f345f4382dc92e8ef80ee93da-merged.mount: Deactivated successfully.
Jan 27 09:20:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:20:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314808992' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.305 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.309 238945 DEBUG nova.virt.libvirt.vif [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416201237',display_name='tempest-TestNetworkBasicOps-server-416201237',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416201237',id=129,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHxqUrpjwSX2VKwWkqwrm5TmtR8eSNaUQORQqYjXZgJNbQzWzVjrR4Wkg7gf4skQSY2qbyqZQiFKC3/y2GJSL2I0x1IYCAsvCUtak2Fzh/j54u9+8rJjIpxoqLh6/2welg==',key_name='tempest-TestNetworkBasicOps-99858625',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-d3mjbikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:19:53Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=524e15bb-2900-40c4-a30f-4b157bfe59e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.309 238945 DEBUG nova.network.os_vif_util [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.310 238945 DEBUG nova.network.os_vif_util [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.312 238945 DEBUG nova.objects.instance [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 524e15bb-2900-40c4-a30f-4b157bfe59e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:20:01 np0005597378 podman[357553]: 2026-01-27 14:20:01.331485463 +0000 UTC m=+0.557266465 container remove 8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.334 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  <uuid>524e15bb-2900-40c4-a30f-4b157bfe59e1</uuid>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  <name>instance-00000081</name>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkBasicOps-server-416201237</nova:name>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:20:00</nova:creationTime>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:        <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:        <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:        <nova:port uuid="ac0e0d3a-d130-4a9a-924f-77a87f787cc2">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <entry name="serial">524e15bb-2900-40c4-a30f-4b157bfe59e1</entry>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <entry name="uuid">524e15bb-2900-40c4-a30f-4b157bfe59e1</entry>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/524e15bb-2900-40c4-a30f-4b157bfe59e1_disk">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/524e15bb-2900-40c4-a30f-4b157bfe59e1_disk.config">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:d5:d9:05"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <target dev="tapac0e0d3a-d1"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/console.log" append="off"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:20:01 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:20:01 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:20:01 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:20:01 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.336 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Preparing to wait for external event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.336 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.337 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:01 np0005597378 systemd[1]: libpod-conmon-8145a736f956a2a3b94728d927ecc199bd2b5e506038f44b44f56a5ca0d42d43.scope: Deactivated successfully.
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.337 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.339 238945 DEBUG nova.virt.libvirt.vif [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416201237',display_name='tempest-TestNetworkBasicOps-server-416201237',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416201237',id=129,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHxqUrpjwSX2VKwWkqwrm5TmtR8eSNaUQORQqYjXZgJNbQzWzVjrR4Wkg7gf4skQSY2qbyqZQiFKC3/y2GJSL2I0x1IYCAsvCUtak2Fzh/j54u9+8rJjIpxoqLh6/2welg==',key_name='tempest-TestNetworkBasicOps-99858625',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-d3mjbikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:19:53Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=524e15bb-2900-40c4-a30f-4b157bfe59e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.340 238945 DEBUG nova.network.os_vif_util [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.340 238945 DEBUG nova.network.os_vif_util [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.341 238945 DEBUG os_vif [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.342 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.343 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.343 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.347 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.347 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac0e0d3a-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.348 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac0e0d3a-d1, col_values=(('external_ids', {'iface-id': 'ac0e0d3a-d130-4a9a-924f-77a87f787cc2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:d9:05', 'vm-uuid': '524e15bb-2900-40c4-a30f-4b157bfe59e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.350 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:01 np0005597378 NetworkManager[48904]: <info>  [1769523601.3514] manager: (tapac0e0d3a-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/570)
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.353 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.358 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.359 238945 INFO os_vif [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1')#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.448 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.449 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.449 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:d5:d9:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.450 238945 INFO nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Using config drive#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.472 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.828 238945 DEBUG nova.compute.manager [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-changed-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.828 238945 DEBUG nova.compute.manager [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing instance network info cache due to event network-changed-1bcec80f-dc59-4ec8-95f1-fb7555b8b889. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.828 238945 DEBUG oslo_concurrency.lockutils [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.828 238945 DEBUG oslo_concurrency.lockutils [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.829 238945 DEBUG nova.network.neutron [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing network info cache for port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.852 238945 INFO nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Creating config drive at /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/disk.config#033[00m
Jan 27 09:20:01 np0005597378 nova_compute[238941]: 2026-01-27 14:20:01.857 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppjrwzn3o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:01 np0005597378 podman[357671]: 2026-01-27 14:20:01.867954703 +0000 UTC m=+0.106004921 container create fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williams, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:20:01 np0005597378 podman[357671]: 2026-01-27 14:20:01.784530126 +0000 UTC m=+0.022580374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:20:01 np0005597378 systemd[1]: Started libpod-conmon-fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae.scope.
Jan 27 09:20:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:20:01 np0005597378 podman[357686]: 2026-01-27 14:20:01.982357826 +0000 UTC m=+0.081912027 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.008 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppjrwzn3o" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.031 238945 DEBUG nova.storage.rbd_utils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.035 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/disk.config 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:02 np0005597378 podman[357671]: 2026-01-27 14:20:02.036370088 +0000 UTC m=+0.274420336 container init fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williams, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:20:02 np0005597378 podman[357671]: 2026-01-27 14:20:02.044520625 +0000 UTC m=+0.282570843 container start fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williams, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:20:02 np0005597378 epic_williams[357700]: 167 167
Jan 27 09:20:02 np0005597378 systemd[1]: libpod-fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae.scope: Deactivated successfully.
Jan 27 09:20:02 np0005597378 podman[357671]: 2026-01-27 14:20:02.064685013 +0000 UTC m=+0.302735241 container attach fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 09:20:02 np0005597378 podman[357671]: 2026-01-27 14:20:02.066023159 +0000 UTC m=+0.304073377 container died fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williams, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 09:20:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-61a25c3b34ca6c6e3aefb335b11b690cafc089790572204fc145555bf4fc2bcb-merged.mount: Deactivated successfully.
Jan 27 09:20:02 np0005597378 podman[357671]: 2026-01-27 14:20:02.205269385 +0000 UTC m=+0.443319603 container remove fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williams, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 09:20:02 np0005597378 systemd[1]: libpod-conmon-fed0ab4028596245212a0a42bdc4b6f954202ad17d37768dcccfdda2ff1967ae.scope: Deactivated successfully.
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.354 238945 DEBUG oslo_concurrency.processutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/disk.config 524e15bb-2900-40c4-a30f-4b157bfe59e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.355 238945 INFO nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Deleting local config drive /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1/disk.config because it was imported into RBD.#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.411 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.411 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:02 np0005597378 kernel: tapac0e0d3a-d1: entered promiscuous mode
Jan 27 09:20:02 np0005597378 NetworkManager[48904]: <info>  [1769523602.4217] manager: (tapac0e0d3a-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/571)
Jan 27 09:20:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:02Z|01380|binding|INFO|Claiming lport ac0e0d3a-d130-4a9a-924f-77a87f787cc2 for this chassis.
Jan 27 09:20:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:02Z|01381|binding|INFO|ac0e0d3a-d130-4a9a-924f-77a87f787cc2: Claiming fa:16:3e:d5:d9:05 10.100.0.27
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.434 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:d9:05 10.100.0.27'], port_security=['fa:16:3e:d5:d9:05 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '524e15bb-2900-40c4-a30f-4b157bfe59e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ff10884f-7985-426f-bfc3-0fc975d089ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4e7fe83-1b26-4cab-bd58-13d7a6a5e2cb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ac0e0d3a-d130-4a9a-924f-77a87f787cc2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.435 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ac0e0d3a-d130-4a9a-924f-77a87f787cc2 in datapath aa696622-36f6-4e49-a5aa-336a8636b3ee bound to our chassis#033[00m
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.437 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa696622-36f6-4e49-a5aa-336a8636b3ee#033[00m
Jan 27 09:20:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:02Z|01382|binding|INFO|Setting lport ac0e0d3a-d130-4a9a-924f-77a87f787cc2 ovn-installed in OVS
Jan 27 09:20:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:02Z|01383|binding|INFO|Setting lport ac0e0d3a-d130-4a9a-924f-77a87f787cc2 up in Southbound
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:02 np0005597378 podman[357766]: 2026-01-27 14:20:02.46356079 +0000 UTC m=+0.085721969 container create ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.458 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[12b04e44-65c0-4326-a6fb-592a55cab8f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:02 np0005597378 systemd-machined[207425]: New machine qemu-161-instance-00000081.
Jan 27 09:20:02 np0005597378 systemd[1]: Started Virtual Machine qemu-161-instance-00000081.
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.494 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ae63ca7e-0fd4-4cab-ae61-29ebb357e9b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.500 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[da63f5a1-5792-4b1f-a225-31084d89935e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:02 np0005597378 podman[357766]: 2026-01-27 14:20:02.411459749 +0000 UTC m=+0.033620948 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:20:02 np0005597378 systemd-udevd[357795]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:20:02 np0005597378 NetworkManager[48904]: <info>  [1769523602.5292] device (tapac0e0d3a-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:20:02 np0005597378 NetworkManager[48904]: <info>  [1769523602.5303] device (tapac0e0d3a-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.531 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ed345512-f451-4804-8b09-a524f7212d8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:02 np0005597378 systemd[1]: Started libpod-conmon-ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef.scope.
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.552 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c3ea9a-f553-476f-b054-fe18f5e225ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa696622-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622344, 'reachable_time': 18498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357805, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.580 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[46265592-7a9a-47be-8804-08920b1c0b73]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa696622-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622356, 'tstamp': 622356}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357821, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tapaa696622-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622359, 'tstamp': 622359}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357821, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:02 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.585 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa696622-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69870cc18deff7ff5d8254e51938124343824dcea84bf434be0bef167221cb8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.592 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa696622-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.593 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.593 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa696622-30, col_values=(('external_ids', {'iface-id': 'a853d263-cad4-42f8-b0f8-2a1dfd60552f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69870cc18deff7ff5d8254e51938124343824dcea84bf434be0bef167221cb8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69870cc18deff7ff5d8254e51938124343824dcea84bf434be0bef167221cb8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69870cc18deff7ff5d8254e51938124343824dcea84bf434be0bef167221cb8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:02.594 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:20:02 np0005597378 podman[357766]: 2026-01-27 14:20:02.6340259 +0000 UTC m=+0.256187079 container init ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:20:02 np0005597378 podman[357766]: 2026-01-27 14:20:02.644345385 +0000 UTC m=+0.266506564 container start ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:20:02 np0005597378 podman[357766]: 2026-01-27 14:20:02.6726471 +0000 UTC m=+0.294808299 container attach ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.693 238945 DEBUG nova.network.neutron [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Updated VIF entry in instance network info cache for port ac0e0d3a-d130-4a9a-924f-77a87f787cc2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.694 238945 DEBUG nova.network.neutron [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Updating instance_info_cache with network_info: [{"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.718 238945 DEBUG oslo_concurrency.lockutils [req-b659b507-4965-48b9-8c30-8786ab84b23e req-82db38b6-f5e1-4453-8f8e-fe60ca4bb0dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-524e15bb-2900-40c4-a30f-4b157bfe59e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.723 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.913 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523602.9126995, 524e15bb-2900-40c4-a30f-4b157bfe59e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.913 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] VM Started (Lifecycle Event)#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.932 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.936 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523602.915556, 524e15bb-2900-40c4-a30f-4b157bfe59e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.937 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]: {
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:    "0": [
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:        {
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "devices": [
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "/dev/loop3"
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            ],
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_name": "ceph_lv0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_size": "21470642176",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "name": "ceph_lv0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "tags": {
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.cluster_name": "ceph",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.crush_device_class": "",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.encrypted": "0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.objectstore": "bluestore",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.osd_id": "0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.type": "block",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.vdo": "0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.with_tpm": "0"
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            },
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "type": "block",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "vg_name": "ceph_vg0"
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:        }
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:    ],
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:    "1": [
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:        {
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "devices": [
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "/dev/loop4"
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            ],
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_name": "ceph_lv1",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_size": "21470642176",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "name": "ceph_lv1",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "tags": {
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.cluster_name": "ceph",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.crush_device_class": "",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.encrypted": "0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.objectstore": "bluestore",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.osd_id": "1",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.type": "block",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.vdo": "0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.with_tpm": "0"
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            },
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "type": "block",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "vg_name": "ceph_vg1"
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:        }
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:    ],
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:    "2": [
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:        {
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "devices": [
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "/dev/loop5"
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            ],
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_name": "ceph_lv2",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_size": "21470642176",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "name": "ceph_lv2",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "tags": {
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.cluster_name": "ceph",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.crush_device_class": "",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.encrypted": "0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.objectstore": "bluestore",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.osd_id": "2",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.type": "block",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.vdo": "0",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:                "ceph.with_tpm": "0"
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            },
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "type": "block",
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:            "vg_name": "ceph_vg2"
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:        }
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]:    ]
Jan 27 09:20:02 np0005597378 awesome_shannon[357809]: }
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.958 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.961 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:20:02 np0005597378 systemd[1]: libpod-ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef.scope: Deactivated successfully.
Jan 27 09:20:02 np0005597378 conmon[357809]: conmon ab125eed060260005b5c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef.scope/container/memory.events
Jan 27 09:20:02 np0005597378 nova_compute[238941]: 2026-01-27 14:20:02.982 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:20:03 np0005597378 podman[357880]: 2026-01-27 14:20:03.025110098 +0000 UTC m=+0.030902545 container died ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:20:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:20:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3533273019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:20:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 293 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.093 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.174 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.174 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.177 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.178 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.181 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.181 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.184 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.184 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:20:03 np0005597378 systemd[1]: var-lib-containers-storage-overlay-69870cc18deff7ff5d8254e51938124343824dcea84bf434be0bef167221cb8e-merged.mount: Deactivated successfully.
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.403 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.404 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3099MB free_disk=59.854668624699116GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.405 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.405 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.516 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 98452226-e32f-475f-814f-d0eba538b8ca actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.517 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.517 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 14ad708e-9b73-4e8e-822e-036be4f62cdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.517 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 524e15bb-2900-40c4-a30f-4b157bfe59e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.518 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.518 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.605 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:03 np0005597378 podman[357880]: 2026-01-27 14:20:03.646168176 +0000 UTC m=+0.651960603 container remove ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 09:20:03 np0005597378 systemd[1]: libpod-conmon-ab125eed060260005b5c4b2f1254d7f7a52a283c31a3834b70de09d790a005ef.scope: Deactivated successfully.
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.720 238945 DEBUG nova.network.neutron [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updated VIF entry in instance network info cache for port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.721 238945 DEBUG nova.network.neutron [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [{"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.804 238945 DEBUG oslo_concurrency.lockutils [req-c1aab6cb-91bb-4be7-a682-ccb7494db208 req-b7444b28-2fb5-46e7-aa94-5353bf4c9773 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.945 238945 DEBUG nova.compute.manager [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.946 238945 DEBUG oslo_concurrency.lockutils [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.946 238945 DEBUG oslo_concurrency.lockutils [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.947 238945 DEBUG oslo_concurrency.lockutils [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.947 238945 DEBUG nova.compute.manager [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Processing event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.947 238945 DEBUG nova.compute.manager [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.947 238945 DEBUG oslo_concurrency.lockutils [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.948 238945 DEBUG oslo_concurrency.lockutils [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.948 238945 DEBUG oslo_concurrency.lockutils [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.948 238945 DEBUG nova.compute.manager [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] No waiting events found dispatching network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.949 238945 WARNING nova.compute.manager [req-225f8220-805c-4701-b88b-7a3f40ded291 req-c388d6d1-9069-4c90-93b1-cb64d08ada7e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received unexpected event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.949 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.954 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523603.9541442, 524e15bb-2900-40c4-a30f-4b157bfe59e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.954 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.957 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.961 238945 INFO nova.virt.libvirt.driver [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Instance spawned successfully.#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.961 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.982 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.985 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.990 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.990 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.990 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.991 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.991 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:20:03 np0005597378 nova_compute[238941]: 2026-01-27 14:20:03.991 238945 DEBUG nova.virt.libvirt.driver [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:20:04 np0005597378 nova_compute[238941]: 2026-01-27 14:20:04.005 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:20:04 np0005597378 nova_compute[238941]: 2026-01-27 14:20:04.076 238945 INFO nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Took 10.39 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:20:04 np0005597378 nova_compute[238941]: 2026-01-27 14:20:04.077 238945 DEBUG nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:20:04 np0005597378 nova_compute[238941]: 2026-01-27 14:20:04.140 238945 INFO nova.compute.manager [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Took 11.53 seconds to build instance.#033[00m
Jan 27 09:20:04 np0005597378 nova_compute[238941]: 2026-01-27 14:20:04.159 238945 DEBUG oslo_concurrency.lockutils [None req-5ef74f44-04e2-40dd-9cef-549afe94c7cd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:04 np0005597378 podman[357980]: 2026-01-27 14:20:04.10455794 +0000 UTC m=+0.032266792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:20:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:20:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254591161' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:20:04 np0005597378 nova_compute[238941]: 2026-01-27 14:20:04.241 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:04 np0005597378 nova_compute[238941]: 2026-01-27 14:20:04.246 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:20:04 np0005597378 podman[357980]: 2026-01-27 14:20:04.250851135 +0000 UTC m=+0.178559957 container create 5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:20:04 np0005597378 nova_compute[238941]: 2026-01-27 14:20:04.268 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:20:04 np0005597378 systemd[1]: Started libpod-conmon-5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027.scope.
Jan 27 09:20:04 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:20:04 np0005597378 podman[357980]: 2026-01-27 14:20:04.475782669 +0000 UTC m=+0.403491531 container init 5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brahmagupta, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:20:04 np0005597378 podman[357980]: 2026-01-27 14:20:04.485073687 +0000 UTC m=+0.412782529 container start 5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brahmagupta, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 09:20:04 np0005597378 friendly_brahmagupta[357999]: 167 167
Jan 27 09:20:04 np0005597378 systemd[1]: libpod-5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027.scope: Deactivated successfully.
Jan 27 09:20:04 np0005597378 nova_compute[238941]: 2026-01-27 14:20:04.514 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:20:04 np0005597378 nova_compute[238941]: 2026-01-27 14:20:04.515 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:04 np0005597378 podman[357980]: 2026-01-27 14:20:04.573662351 +0000 UTC m=+0.501371173 container attach 5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:20:04 np0005597378 podman[357980]: 2026-01-27 14:20:04.574074152 +0000 UTC m=+0.501782994 container died 5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:20:04 np0005597378 systemd[1]: var-lib-containers-storage-overlay-74c821c5e5d2979c421a782cb3410776784141c64d56b943e06f93e1b9a0a270-merged.mount: Deactivated successfully.
Jan 27 09:20:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:04 np0005597378 podman[357980]: 2026-01-27 14:20:04.877108621 +0000 UTC m=+0.804817443 container remove 5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brahmagupta, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 09:20:04 np0005597378 systemd[1]: libpod-conmon-5d5f8bb3164b3885186b5b151861312ef8fedd441ea309fc3d676e031fc55027.scope: Deactivated successfully.
Jan 27 09:20:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 293 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Jan 27 09:20:05 np0005597378 podman[358019]: 2026-01-27 14:20:05.132944818 +0000 UTC m=+0.124306847 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 09:20:05 np0005597378 podman[358040]: 2026-01-27 14:20:05.109894424 +0000 UTC m=+0.034985835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:20:05 np0005597378 podman[358040]: 2026-01-27 14:20:05.216791597 +0000 UTC m=+0.141882978 container create cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 09:20:05 np0005597378 systemd[1]: Started libpod-conmon-cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4.scope.
Jan 27 09:20:05 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:20:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25541fdfb2bd15fc1eb64be9940943691fe09794d37cce090f575248db0f46f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25541fdfb2bd15fc1eb64be9940943691fe09794d37cce090f575248db0f46f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25541fdfb2bd15fc1eb64be9940943691fe09794d37cce090f575248db0f46f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25541fdfb2bd15fc1eb64be9940943691fe09794d37cce090f575248db0f46f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:05 np0005597378 podman[358040]: 2026-01-27 14:20:05.436287866 +0000 UTC m=+0.361379267 container init cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wiles, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:20:05 np0005597378 podman[358040]: 2026-01-27 14:20:05.444721261 +0000 UTC m=+0.369812642 container start cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:20:05 np0005597378 podman[358040]: 2026-01-27 14:20:05.485094149 +0000 UTC m=+0.410185560 container attach cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wiles, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:20:06 np0005597378 lvm[358148]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:20:06 np0005597378 lvm[358147]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:20:06 np0005597378 lvm[358147]: VG ceph_vg1 finished
Jan 27 09:20:06 np0005597378 lvm[358148]: VG ceph_vg2 finished
Jan 27 09:20:06 np0005597378 lvm[358146]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:20:06 np0005597378 lvm[358146]: VG ceph_vg0 finished
Jan 27 09:20:06 np0005597378 pensive_wiles[358067]: {}
Jan 27 09:20:06 np0005597378 nova_compute[238941]: 2026-01-27 14:20:06.352 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:06 np0005597378 systemd[1]: libpod-cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4.scope: Deactivated successfully.
Jan 27 09:20:06 np0005597378 systemd[1]: libpod-cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4.scope: Consumed 1.427s CPU time.
Jan 27 09:20:06 np0005597378 podman[358040]: 2026-01-27 14:20:06.372945577 +0000 UTC m=+1.298037008 container died cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:20:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b25541fdfb2bd15fc1eb64be9940943691fe09794d37cce090f575248db0f46f-merged.mount: Deactivated successfully.
Jan 27 09:20:06 np0005597378 podman[358040]: 2026-01-27 14:20:06.633610174 +0000 UTC m=+1.558701555 container remove cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wiles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:20:06 np0005597378 systemd[1]: libpod-conmon-cf2a128781942de1d00339406b98b0933ef79de8019b9890bce09e913e2b6ee4.scope: Deactivated successfully.
Jan 27 09:20:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:20:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:20:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:20:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:20:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 293 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 27 09:20:07 np0005597378 nova_compute[238941]: 2026-01-27 14:20:07.725 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:08 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:20:08 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:20:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 27 09:20:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 300 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Jan 27 09:20:09 np0005597378 nova_compute[238941]: 2026-01-27 14:20:09.514 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:20:09 np0005597378 nova_compute[238941]: 2026-01-27 14:20:09.515 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:20:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:10 np0005597378 nova_compute[238941]: 2026-01-27 14:20:10.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 308 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 104 op/s
Jan 27 09:20:11 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:11Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:73:7f 10.100.0.11
Jan 27 09:20:11 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:11Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:73:7f 10.100.0.11
Jan 27 09:20:11 np0005597378 nova_compute[238941]: 2026-01-27 14:20:11.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:12 np0005597378 nova_compute[238941]: 2026-01-27 14:20:12.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:20:12 np0005597378 nova_compute[238941]: 2026-01-27 14:20:12.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 308 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 104 op/s
Jan 27 09:20:14 np0005597378 nova_compute[238941]: 2026-01-27 14:20:14.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:20:14 np0005597378 nova_compute[238941]: 2026-01-27 14:20:14.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:20:14 np0005597378 nova_compute[238941]: 2026-01-27 14:20:14.726 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:20:14 np0005597378 nova_compute[238941]: 2026-01-27 14:20:14.728 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:20:14 np0005597378 nova_compute[238941]: 2026-01-27 14:20:14.728 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:20:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 324 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 09:20:15 np0005597378 nova_compute[238941]: 2026-01-27 14:20:15.116 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:16 np0005597378 nova_compute[238941]: 2026-01-27 14:20:16.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 326 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 27 09:20:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:20:17
Jan 27 09:20:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:20:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:20:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'images', '.mgr']
Jan 27 09:20:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:20:17 np0005597378 nova_compute[238941]: 2026-01-27 14:20:17.732 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:20:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:20:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:20:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:20:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:20:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:20:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:20:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:20:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:20:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:20:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:20:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:20:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:20:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:20:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:20:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:20:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 326 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Jan 27 09:20:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 327 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 654 KiB/s rd, 1.5 MiB/s wr, 68 op/s
Jan 27 09:20:21 np0005597378 nova_compute[238941]: 2026-01-27 14:20:21.362 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:21 np0005597378 nova_compute[238941]: 2026-01-27 14:20:21.745 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:21 np0005597378 nova_compute[238941]: 2026-01-27 14:20:21.762 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:20:21 np0005597378 nova_compute[238941]: 2026-01-27 14:20:21.762 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:20:21 np0005597378 nova_compute[238941]: 2026-01-27 14:20:21.763 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:20:22 np0005597378 nova_compute[238941]: 2026-01-27 14:20:22.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:20:22 np0005597378 nova_compute[238941]: 2026-01-27 14:20:22.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:20:22 np0005597378 nova_compute[238941]: 2026-01-27 14:20:22.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:20:22 np0005597378 nova_compute[238941]: 2026-01-27 14:20:22.735 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:22 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:22Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:d9:05 10.100.0.27
Jan 27 09:20:22 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:22Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:d9:05 10.100.0.27
Jan 27 09:20:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 327 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 284 KiB/s rd, 891 KiB/s wr, 45 op/s
Jan 27 09:20:23 np0005597378 nova_compute[238941]: 2026-01-27 14:20:23.258 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:23 np0005597378 nova_compute[238941]: 2026-01-27 14:20:23.259 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:23 np0005597378 nova_compute[238941]: 2026-01-27 14:20:23.278 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:20:23 np0005597378 nova_compute[238941]: 2026-01-27 14:20:23.370 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:23 np0005597378 nova_compute[238941]: 2026-01-27 14:20:23.371 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:23 np0005597378 nova_compute[238941]: 2026-01-27 14:20:23.381 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:20:23 np0005597378 nova_compute[238941]: 2026-01-27 14:20:23.381 238945 INFO nova.compute.claims [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:20:23 np0005597378 nova_compute[238941]: 2026-01-27 14:20:23.550 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:20:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3520860190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.130 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.137 238945 DEBUG nova.compute.provider_tree [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.155 238945 DEBUG nova.scheduler.client.report [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.175 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.175 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.215 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.216 238945 DEBUG nova.network.neutron [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.235 238945 INFO nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.258 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.334 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.336 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.337 238945 INFO nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Creating image(s)#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.370 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.397 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.426 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.431 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.506 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.507 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.508 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.509 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.686 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.689 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9588e56d-325a-44ac-b589-16da13fbcc3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:24 np0005597378 nova_compute[238941]: 2026-01-27 14:20:24.750 238945 DEBUG nova.policy [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:20:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 349 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 449 KiB/s rd, 2.5 MiB/s wr, 70 op/s
Jan 27 09:20:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:25.544 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:20:25 np0005597378 nova_compute[238941]: 2026-01-27 14:20:25.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:25.546 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:20:25 np0005597378 nova_compute[238941]: 2026-01-27 14:20:25.680 238945 DEBUG nova.network.neutron [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Successfully created port: 09c77aca-6ddf-4429-a493-6659c2468c83 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:20:26 np0005597378 nova_compute[238941]: 2026-01-27 14:20:26.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:26 np0005597378 nova_compute[238941]: 2026-01-27 14:20:26.852 238945 DEBUG nova.network.neutron [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Successfully updated port: 09c77aca-6ddf-4429-a493-6659c2468c83 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:20:26 np0005597378 nova_compute[238941]: 2026-01-27 14:20:26.866 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:20:26 np0005597378 nova_compute[238941]: 2026-01-27 14:20:26.867 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:20:26 np0005597378 nova_compute[238941]: 2026-01-27 14:20:26.867 238945 DEBUG nova.network.neutron [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:20:26 np0005597378 nova_compute[238941]: 2026-01-27 14:20:26.939 238945 DEBUG nova.compute.manager [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-changed-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:26 np0005597378 nova_compute[238941]: 2026-01-27 14:20:26.939 238945 DEBUG nova.compute.manager [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Refreshing instance network info cache due to event network-changed-09c77aca-6ddf-4429-a493-6659c2468c83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:20:26 np0005597378 nova_compute[238941]: 2026-01-27 14:20:26.940 238945 DEBUG oslo_concurrency.lockutils [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:20:27 np0005597378 nova_compute[238941]: 2026-01-27 14:20:27.046 238945 DEBUG nova.network.neutron [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 351 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 27 09:20:27 np0005597378 nova_compute[238941]: 2026-01-27 14:20:27.652 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9588e56d-325a-44ac-b589-16da13fbcc3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.963s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:27 np0005597378 nova_compute[238941]: 2026-01-27 14:20:27.715 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:20:27 np0005597378 nova_compute[238941]: 2026-01-27 14:20:27.791 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003040959315428225 of space, bias 1.0, pg target 0.9122877946284675 quantized to 32 (current 32)
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693932059500967 of space, bias 1.0, pg target 0.200817961785029 quantized to 32 (current 32)
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0498366318946316e-06 of space, bias 4.0, pg target 0.001259803958273558 quantized to 16 (current 16)
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:20:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.006 238945 DEBUG nova.network.neutron [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updating instance_info_cache with network_info: [{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.033 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.033 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Instance network_info: |[{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.034 238945 DEBUG oslo_concurrency.lockutils [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.034 238945 DEBUG nova.network.neutron [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Refreshing network info cache for port 09c77aca-6ddf-4429-a493-6659c2468c83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.410 238945 DEBUG nova.objects.instance [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid 9588e56d-325a-44ac-b589-16da13fbcc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.424 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.424 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Ensure instance console log exists: /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.425 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.425 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.426 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.428 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Start _get_guest_xml network_info=[{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.433 238945 WARNING nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.440 238945 DEBUG nova.virt.libvirt.host [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.441 238945 DEBUG nova.virt.libvirt.host [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.444 238945 DEBUG nova.virt.libvirt.host [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.445 238945 DEBUG nova.virt.libvirt.host [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.445 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.445 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.446 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.446 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.447 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.447 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.447 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.447 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.447 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.448 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.448 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.448 238945 DEBUG nova.virt.hardware [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:20:28 np0005597378 nova_compute[238941]: 2026-01-27 14:20:28.451 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:20:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/717998728' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.030 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.054 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.060 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 381 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 3.3 MiB/s wr, 76 op/s
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.140 238945 DEBUG nova.network.neutron [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updated VIF entry in instance network info cache for port 09c77aca-6ddf-4429-a493-6659c2468c83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.141 238945 DEBUG nova.network.neutron [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updating instance_info_cache with network_info: [{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.157 238945 DEBUG oslo_concurrency.lockutils [req-dcc0eb02-16ab-48a2-be14-6655b012d8dd req-8fd021bc-c327-4387-932b-f626e6dab7fc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:20:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:20:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195715627' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.599 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.601 238945 DEBUG nova.virt.libvirt.vif [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:20:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=130,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3+gqlVzG9h7jXhfyoskTs2NCm6wAB3wVDlwONrKb4mWpkwLIK+XxA+6h41JzRCoN6TybE0DPiUgsj35t6yTYW/Hd7vrF1apMuU/h4HUaTJzVzqD1e3yepTjEIwWfGCDQ==',key_name='tempest-TestSecurityGroupsBasicOps-931992880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-msmno0o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:20:24Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=9588e56d-325a-44ac-b589-16da13fbcc3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.602 238945 DEBUG nova.network.os_vif_util [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.603 238945 DEBUG nova.network.os_vif_util [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.606 238945 DEBUG nova.objects.instance [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid 9588e56d-325a-44ac-b589-16da13fbcc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.622 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  <uuid>9588e56d-325a-44ac-b589-16da13fbcc3d</uuid>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  <name>instance-00000082</name>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698</nova:name>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:20:28</nova:creationTime>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:        <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:        <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:        <nova:port uuid="09c77aca-6ddf-4429-a493-6659c2468c83">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <entry name="serial">9588e56d-325a-44ac-b589-16da13fbcc3d</entry>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <entry name="uuid">9588e56d-325a-44ac-b589-16da13fbcc3d</entry>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9588e56d-325a-44ac-b589-16da13fbcc3d_disk">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9588e56d-325a-44ac-b589-16da13fbcc3d_disk.config">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:dc:fd:e4"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <target dev="tap09c77aca-6d"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/console.log" append="off"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:20:29 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:20:29 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:20:29 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:20:29 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.623 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Preparing to wait for external event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.624 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.624 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.625 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.626 238945 DEBUG nova.virt.libvirt.vif [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:20:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=130,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3+gqlVzG9h7jXhfyoskTs2NCm6wAB3wVDlwONrKb4mWpkwLIK+XxA+6h41JzRCoN6TybE0DPiUgsj35t6yTYW/Hd7vrF1apMuU/h4HUaTJzVzqD1e3yepTjEIwWfGCDQ==',key_name='tempest-TestSecurityGroupsBasicOps-931992880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-msmno0o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:20:24Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=9588e56d-325a-44ac-b589-16da13fbcc3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.626 238945 DEBUG nova.network.os_vif_util [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.627 238945 DEBUG nova.network.os_vif_util [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.628 238945 DEBUG os_vif [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.630 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.630 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.635 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09c77aca-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.636 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09c77aca-6d, col_values=(('external_ids', {'iface-id': '09c77aca-6ddf-4429-a493-6659c2468c83', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:fd:e4', 'vm-uuid': '9588e56d-325a-44ac-b589-16da13fbcc3d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.638 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:29 np0005597378 NetworkManager[48904]: <info>  [1769523629.6393] manager: (tap09c77aca-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.641 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.645 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.646 238945 INFO os_vif [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d')#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.959 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.960 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.960 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:dc:fd:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:20:29 np0005597378 nova_compute[238941]: 2026-01-27 14:20:29.961 238945 INFO nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Using config drive#033[00m
Jan 27 09:20:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:30 np0005597378 nova_compute[238941]: 2026-01-27 14:20:30.109 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:20:30 np0005597378 nova_compute[238941]: 2026-01-27 14:20:30.299 238945 DEBUG nova.compute.manager [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-changed-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:30 np0005597378 nova_compute[238941]: 2026-01-27 14:20:30.299 238945 DEBUG nova.compute.manager [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing instance network info cache due to event network-changed-a6c25c1f-7e72-447c-98b1-66fc3fd447e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:20:30 np0005597378 nova_compute[238941]: 2026-01-27 14:20:30.299 238945 DEBUG oslo_concurrency.lockutils [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:20:30 np0005597378 nova_compute[238941]: 2026-01-27 14:20:30.300 238945 DEBUG oslo_concurrency.lockutils [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:20:30 np0005597378 nova_compute[238941]: 2026-01-27 14:20:30.300 238945 DEBUG nova.network.neutron [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing network info cache for port a6c25c1f-7e72-447c-98b1-66fc3fd447e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:20:30 np0005597378 nova_compute[238941]: 2026-01-27 14:20:30.373 238945 INFO nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Creating config drive at /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/disk.config#033[00m
Jan 27 09:20:30 np0005597378 nova_compute[238941]: 2026-01-27 14:20:30.378 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14oarmqk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:30 np0005597378 nova_compute[238941]: 2026-01-27 14:20:30.517 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14oarmqk" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:30 np0005597378 nova_compute[238941]: 2026-01-27 14:20:30.541 238945 DEBUG nova.storage.rbd_utils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9588e56d-325a-44ac-b589-16da13fbcc3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:20:30 np0005597378 nova_compute[238941]: 2026-01-27 14:20:30.544 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/disk.config 9588e56d-325a-44ac-b589-16da13fbcc3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 27 09:20:31 np0005597378 nova_compute[238941]: 2026-01-27 14:20:31.390 238945 DEBUG nova.network.neutron [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updated VIF entry in instance network info cache for port a6c25c1f-7e72-447c-98b1-66fc3fd447e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:20:31 np0005597378 nova_compute[238941]: 2026-01-27 14:20:31.391 238945 DEBUG nova.network.neutron [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:31 np0005597378 nova_compute[238941]: 2026-01-27 14:20:31.441 238945 DEBUG oslo_concurrency.lockutils [req-f7396a79-0477-438f-a335-c52bfa1efaaa req-17611c25-25eb-4d6a-8c4a-a4b0cdf61654 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:20:32 np0005597378 nova_compute[238941]: 2026-01-27 14:20:32.321 238945 DEBUG oslo_concurrency.processutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/disk.config 9588e56d-325a-44ac-b589-16da13fbcc3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.777s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:32 np0005597378 nova_compute[238941]: 2026-01-27 14:20:32.322 238945 INFO nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Deleting local config drive /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d/disk.config because it was imported into RBD.#033[00m
Jan 27 09:20:32 np0005597378 kernel: tap09c77aca-6d: entered promiscuous mode
Jan 27 09:20:32 np0005597378 NetworkManager[48904]: <info>  [1769523632.4029] manager: (tap09c77aca-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/573)
Jan 27 09:20:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:32Z|01384|binding|INFO|Claiming lport 09c77aca-6ddf-4429-a493-6659c2468c83 for this chassis.
Jan 27 09:20:32 np0005597378 nova_compute[238941]: 2026-01-27 14:20:32.405 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:32Z|01385|binding|INFO|09c77aca-6ddf-4429-a493-6659c2468c83: Claiming fa:16:3e:dc:fd:e4 10.100.0.14
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.414 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:fd:e4 10.100.0.14'], port_security=['fa:16:3e:dc:fd:e4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9588e56d-325a-44ac-b589-16da13fbcc3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-820c1fd4-2071-45df-974d-54892e70889b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '649dea99-5b61-4f66-9587-d172de12a07d c497b409-cdfa-4ad1-9b57-9f3c97ba8246', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ababd73-2b6f-4f89-98d3-56671274acc6, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=09c77aca-6ddf-4429-a493-6659c2468c83) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.415 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 09c77aca-6ddf-4429-a493-6659c2468c83 in datapath 820c1fd4-2071-45df-974d-54892e70889b bound to our chassis#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.417 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 820c1fd4-2071-45df-974d-54892e70889b#033[00m
Jan 27 09:20:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:32Z|01386|binding|INFO|Setting lport 09c77aca-6ddf-4429-a493-6659c2468c83 ovn-installed in OVS
Jan 27 09:20:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:32Z|01387|binding|INFO|Setting lport 09c77aca-6ddf-4429-a493-6659c2468c83 up in Southbound
Jan 27 09:20:32 np0005597378 nova_compute[238941]: 2026-01-27 14:20:32.430 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.432 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cd9b14ac-223f-481c-85d9-c53f12f191d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.433 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap820c1fd4-21 in ovnmeta-820c1fd4-2071-45df-974d-54892e70889b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.435 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap820c1fd4-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.435 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60265ffc-1d68-4c4c-9f40-47f4097645d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 nova_compute[238941]: 2026-01-27 14:20:32.437 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.437 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3af88143-7b4f-4fa0-9cb8-cbab81ecb061]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.452 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[cd5bebc1-f5f2-4b4f-b543-c48e400762e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 systemd-udevd[358526]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:20:32 np0005597378 systemd-machined[207425]: New machine qemu-162-instance-00000082.
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.467 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2f142846-ca03-48e4-9773-2a74678eb62e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 systemd[1]: Started Virtual Machine qemu-162-instance-00000082.
Jan 27 09:20:32 np0005597378 NetworkManager[48904]: <info>  [1769523632.4786] device (tap09c77aca-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:20:32 np0005597378 NetworkManager[48904]: <info>  [1769523632.4796] device (tap09c77aca-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.498 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[091d29d1-8d9f-470e-8024-6a6ea4671910]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.504 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3f852a-ba58-47fc-b603-38189db46289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 NetworkManager[48904]: <info>  [1769523632.5060] manager: (tap820c1fd4-20): new Veth device (/org/freedesktop/NetworkManager/Devices/574)
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.540 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[278b0508-2e47-4574-8ec6-096da3a8509f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 podman[358512]: 2026-01-27 14:20:32.543143961 +0000 UTC m=+0.104293924 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.543 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9881f192-98c3-412f-93bf-9b0ba96f178d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.548 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:32 np0005597378 NetworkManager[48904]: <info>  [1769523632.5678] device (tap820c1fd4-20): carrier: link connected
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.574 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a117d117-205b-4962-b937-ca93a5a0ba4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.594 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[523d003b-7a53-4090-99c4-bf97a137f2c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap820c1fd4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:5c:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627219, 'reachable_time': 24586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358565, 'error': None, 'target': 'ovnmeta-820c1fd4-2071-45df-974d-54892e70889b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.609 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[33788e2c-888b-4e25-8731-32312b222255]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:5c1f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 627219, 'tstamp': 627219}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358566, 'error': None, 'target': 'ovnmeta-820c1fd4-2071-45df-974d-54892e70889b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.626 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[40098e0b-334d-4f05-b1fa-3f0d9245b40d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap820c1fd4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:5c:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627219, 'reachable_time': 24586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358567, 'error': None, 'target': 'ovnmeta-820c1fd4-2071-45df-974d-54892e70889b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.660 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd326a4-e671-4ea6-ac67-59102c7ace57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.721 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7e64d6b5-d66c-4683-9a30-0c2138a15417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.722 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap820c1fd4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.722 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.723 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap820c1fd4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:32 np0005597378 nova_compute[238941]: 2026-01-27 14:20:32.724 238945 DEBUG nova.compute.manager [req-f4244a33-3eb6-42be-8840-5e290ba9a0c4 req-fc08f442-0c5e-4f3e-8b12-af2affc5f8f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:32 np0005597378 nova_compute[238941]: 2026-01-27 14:20:32.724 238945 DEBUG oslo_concurrency.lockutils [req-f4244a33-3eb6-42be-8840-5e290ba9a0c4 req-fc08f442-0c5e-4f3e-8b12-af2affc5f8f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:32 np0005597378 kernel: tap820c1fd4-20: entered promiscuous mode
Jan 27 09:20:32 np0005597378 NetworkManager[48904]: <info>  [1769523632.7258] manager: (tap820c1fd4-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/575)
Jan 27 09:20:32 np0005597378 nova_compute[238941]: 2026-01-27 14:20:32.725 238945 DEBUG oslo_concurrency.lockutils [req-f4244a33-3eb6-42be-8840-5e290ba9a0c4 req-fc08f442-0c5e-4f3e-8b12-af2affc5f8f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.728 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap820c1fd4-20, col_values=(('external_ids', {'iface-id': '506e7ffb-d74f-480e-9382-49f98d134f52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:32 np0005597378 nova_compute[238941]: 2026-01-27 14:20:32.728 238945 DEBUG oslo_concurrency.lockutils [req-f4244a33-3eb6-42be-8840-5e290ba9a0c4 req-fc08f442-0c5e-4f3e-8b12-af2affc5f8f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:32 np0005597378 nova_compute[238941]: 2026-01-27 14:20:32.728 238945 DEBUG nova.compute.manager [req-f4244a33-3eb6-42be-8840-5e290ba9a0c4 req-fc08f442-0c5e-4f3e-8b12-af2affc5f8f1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Processing event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:20:32 np0005597378 nova_compute[238941]: 2026-01-27 14:20:32.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:32Z|01388|binding|INFO|Releasing lport 506e7ffb-d74f-480e-9382-49f98d134f52 from this chassis (sb_readonly=0)
Jan 27 09:20:32 np0005597378 nova_compute[238941]: 2026-01-27 14:20:32.744 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.747 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/820c1fd4-2071-45df-974d-54892e70889b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/820c1fd4-2071-45df-974d-54892e70889b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.748 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f144a507-4c92-4b86-9357-37b5d2224919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.749 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-820c1fd4-2071-45df-974d-54892e70889b
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/820c1fd4-2071-45df-974d-54892e70889b.pid.haproxy
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 820c1fd4-2071-45df-974d-54892e70889b
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:20:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:32.750 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-820c1fd4-2071-45df-974d-54892e70889b', 'env', 'PROCESS_TAG=haproxy-820c1fd4-2071-45df-974d-54892e70889b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/820c1fd4-2071-45df-974d-54892e70889b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.055 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523633.055394, 9588e56d-325a-44ac-b589-16da13fbcc3d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.056 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] VM Started (Lifecycle Event)#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.057 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.060 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.063 238945 INFO nova.virt.libvirt.driver [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Instance spawned successfully.#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.063 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.085 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.090 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.093 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.093 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.094 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:20:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 3.6 MiB/s wr, 83 op/s
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.094 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.095 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.095 238945 DEBUG nova.virt.libvirt.driver [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.117 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.117 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523633.0555327, 9588e56d-325a-44ac-b589-16da13fbcc3d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.118 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.142 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.145 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523633.0599995, 9588e56d-325a-44ac-b589-16da13fbcc3d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.145 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.154 238945 INFO nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Took 8.82 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.154 238945 DEBUG nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.162 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.163 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.187 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:20:33 np0005597378 podman[358641]: 2026-01-27 14:20:33.108206834 +0000 UTC m=+0.022280276 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.235 238945 INFO nova.compute.manager [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Took 9.91 seconds to build instance.#033[00m
Jan 27 09:20:33 np0005597378 nova_compute[238941]: 2026-01-27 14:20:33.254 238945 DEBUG oslo_concurrency.lockutils [None req-0b5ab34e-2b0a-41ac-9976-9a62bfa4f620 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:33 np0005597378 podman[358641]: 2026-01-27 14:20:33.650139149 +0000 UTC m=+0.564212601 container create 48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:20:33 np0005597378 systemd[1]: Started libpod-conmon-48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557.scope.
Jan 27 09:20:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:20:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2fc1971e69b0afb800322f8c5fff8c16f85eb53b98fe3874cad2ffac52b1fb4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:20:34 np0005597378 podman[358641]: 2026-01-27 14:20:34.018716207 +0000 UTC m=+0.932789659 container init 48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:20:34 np0005597378 podman[358641]: 2026-01-27 14:20:34.030020888 +0000 UTC m=+0.944094320 container start 48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:20:34 np0005597378 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [NOTICE]   (358661) : New worker (358663) forked
Jan 27 09:20:34 np0005597378 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [NOTICE]   (358661) : Loading success.
Jan 27 09:20:34 np0005597378 nova_compute[238941]: 2026-01-27 14:20:34.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:34 np0005597378 nova_compute[238941]: 2026-01-27 14:20:34.821 238945 DEBUG nova.compute.manager [req-161288b3-5359-4e5f-bada-f8e512f835d8 req-96bcfda4-4c77-4126-ba87-6dbdcb1f4360 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:34 np0005597378 nova_compute[238941]: 2026-01-27 14:20:34.821 238945 DEBUG oslo_concurrency.lockutils [req-161288b3-5359-4e5f-bada-f8e512f835d8 req-96bcfda4-4c77-4126-ba87-6dbdcb1f4360 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:34 np0005597378 nova_compute[238941]: 2026-01-27 14:20:34.821 238945 DEBUG oslo_concurrency.lockutils [req-161288b3-5359-4e5f-bada-f8e512f835d8 req-96bcfda4-4c77-4126-ba87-6dbdcb1f4360 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:34 np0005597378 nova_compute[238941]: 2026-01-27 14:20:34.821 238945 DEBUG oslo_concurrency.lockutils [req-161288b3-5359-4e5f-bada-f8e512f835d8 req-96bcfda4-4c77-4126-ba87-6dbdcb1f4360 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:34 np0005597378 nova_compute[238941]: 2026-01-27 14:20:34.822 238945 DEBUG nova.compute.manager [req-161288b3-5359-4e5f-bada-f8e512f835d8 req-96bcfda4-4c77-4126-ba87-6dbdcb1f4360 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] No waiting events found dispatching network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:34 np0005597378 nova_compute[238941]: 2026-01-27 14:20:34.822 238945 WARNING nova.compute.manager [req-161288b3-5359-4e5f-bada-f8e512f835d8 req-96bcfda4-4c77-4126-ba87-6dbdcb1f4360 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received unexpected event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:20:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 3.6 MiB/s wr, 91 op/s
Jan 27 09:20:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:35 np0005597378 podman[358672]: 2026-01-27 14:20:35.765321165 +0000 UTC m=+0.104132230 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:20:36 np0005597378 nova_compute[238941]: 2026-01-27 14:20:36.862 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:36 np0005597378 nova_compute[238941]: 2026-01-27 14:20:36.863 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:36 np0005597378 nova_compute[238941]: 2026-01-27 14:20:36.863 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:36 np0005597378 nova_compute[238941]: 2026-01-27 14:20:36.863 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:36 np0005597378 nova_compute[238941]: 2026-01-27 14:20:36.863 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:36 np0005597378 nova_compute[238941]: 2026-01-27 14:20:36.864 238945 INFO nova.compute.manager [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Terminating instance#033[00m
Jan 27 09:20:36 np0005597378 nova_compute[238941]: 2026-01-27 14:20:36.865 238945 DEBUG nova.compute.manager [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:20:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 815 KiB/s rd, 2.0 MiB/s wr, 90 op/s
Jan 27 09:20:37 np0005597378 kernel: tapac0e0d3a-d1 (unregistering): left promiscuous mode
Jan 27 09:20:37 np0005597378 NetworkManager[48904]: <info>  [1769523637.1018] device (tapac0e0d3a-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:20:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:37Z|01389|binding|INFO|Releasing lport ac0e0d3a-d130-4a9a-924f-77a87f787cc2 from this chassis (sb_readonly=0)
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.118 238945 DEBUG nova.compute.manager [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-changed-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:37Z|01390|binding|INFO|Setting lport ac0e0d3a-d130-4a9a-924f-77a87f787cc2 down in Southbound
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.119 238945 DEBUG nova.compute.manager [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Refreshing instance network info cache due to event network-changed-09c77aca-6ddf-4429-a493-6659c2468c83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.119 238945 DEBUG oslo_concurrency.lockutils [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.119 238945 DEBUG oslo_concurrency.lockutils [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.119 238945 DEBUG nova.network.neutron [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Refreshing network info cache for port 09c77aca-6ddf-4429-a493-6659c2468c83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:20:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:37Z|01391|binding|INFO|Removing iface tapac0e0d3a-d1 ovn-installed in OVS
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.121 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.143 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.180 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:d9:05 10.100.0.27'], port_security=['fa:16:3e:d5:d9:05 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '524e15bb-2900-40c4-a30f-4b157bfe59e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ff10884f-7985-426f-bfc3-0fc975d089ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4e7fe83-1b26-4cab-bd58-13d7a6a5e2cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=ac0e0d3a-d130-4a9a-924f-77a87f787cc2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.181 154802 INFO neutron.agent.ovn.metadata.agent [-] Port ac0e0d3a-d130-4a9a-924f-77a87f787cc2 in datapath aa696622-36f6-4e49-a5aa-336a8636b3ee unbound from our chassis#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.183 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa696622-36f6-4e49-a5aa-336a8636b3ee#033[00m
Jan 27 09:20:37 np0005597378 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000081.scope: Deactivated successfully.
Jan 27 09:20:37 np0005597378 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000081.scope: Consumed 18.074s CPU time.
Jan 27 09:20:37 np0005597378 systemd-machined[207425]: Machine qemu-161-instance-00000081 terminated.
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.201 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4c06611a-aace-4f74-9c91-e3ac5e637f20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.239 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[13d77454-b4ff-4645-ad03-0a25d60e8655]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.242 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d047f5-4adb-4fad-8125-c742ab26030f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.265 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4b11a1f5-d0a8-4b08-87b4-0006f49f5903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.281 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b0a5b8-7454-446e-ba2f-b9220161a261]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa696622-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 790, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 790, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 398], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622344, 'reachable_time': 18498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 524, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 524, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358710, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.299 238945 INFO nova.virt.libvirt.driver [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Instance destroyed successfully.#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.300 238945 DEBUG nova.objects.instance [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 524e15bb-2900-40c4-a30f-4b157bfe59e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.303 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d50f48-17c2-4215-b18a-f67f83a8b69b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa696622-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622356, 'tstamp': 622356}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358718, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tapaa696622-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622359, 'tstamp': 622359}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358718, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.305 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa696622-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.306 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.313 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.313 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa696622-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.314 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.314 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa696622-30, col_values=(('external_ids', {'iface-id': 'a853d263-cad4-42f8-b0f8-2a1dfd60552f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.314 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.319 238945 DEBUG nova.virt.libvirt.vif [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416201237',display_name='tempest-TestNetworkBasicOps-server-416201237',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416201237',id=129,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHxqUrpjwSX2VKwWkqwrm5TmtR8eSNaUQORQqYjXZgJNbQzWzVjrR4Wkg7gf4skQSY2qbyqZQiFKC3/y2GJSL2I0x1IYCAsvCUtak2Fzh/j54u9+8rJjIpxoqLh6/2welg==',key_name='tempest-TestNetworkBasicOps-99858625',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:20:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-d3mjbikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:20:04Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=524e15bb-2900-40c4-a30f-4b157bfe59e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.319 238945 DEBUG nova.network.os_vif_util [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "address": "fa:16:3e:d5:d9:05", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac0e0d3a-d1", "ovs_interfaceid": "ac0e0d3a-d130-4a9a-924f-77a87f787cc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.320 238945 DEBUG nova.network.os_vif_util [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.320 238945 DEBUG os_vif [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.322 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac0e0d3a-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.327 238945 INFO os_vif [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:d9:05,bridge_name='br-int',has_traffic_filtering=True,id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac0e0d3a-d1')#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.678 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.679 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.679 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.679 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.680 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.681 238945 INFO nova.compute.manager [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Terminating instance#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.682 238945 DEBUG nova.compute.manager [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.748 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 kernel: tap1bcec80f-dc (unregistering): left promiscuous mode
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.831 238945 DEBUG nova.compute.manager [req-f892405c-07be-43ef-bbed-690c90e65619 req-e3f0c433-84aa-4320-a3f6-86f2f15f4000 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-vif-unplugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.832 238945 DEBUG oslo_concurrency.lockutils [req-f892405c-07be-43ef-bbed-690c90e65619 req-e3f0c433-84aa-4320-a3f6-86f2f15f4000 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.833 238945 DEBUG oslo_concurrency.lockutils [req-f892405c-07be-43ef-bbed-690c90e65619 req-e3f0c433-84aa-4320-a3f6-86f2f15f4000 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.833 238945 DEBUG oslo_concurrency.lockutils [req-f892405c-07be-43ef-bbed-690c90e65619 req-e3f0c433-84aa-4320-a3f6-86f2f15f4000 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.833 238945 DEBUG nova.compute.manager [req-f892405c-07be-43ef-bbed-690c90e65619 req-e3f0c433-84aa-4320-a3f6-86f2f15f4000 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] No waiting events found dispatching network-vif-unplugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.833 238945 DEBUG nova.compute.manager [req-f892405c-07be-43ef-bbed-690c90e65619 req-e3f0c433-84aa-4320-a3f6-86f2f15f4000 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-vif-unplugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:20:37 np0005597378 NetworkManager[48904]: <info>  [1769523637.8345] device (tap1bcec80f-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:20:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:37Z|01392|binding|INFO|Releasing lport 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 from this chassis (sb_readonly=0)
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:37Z|01393|binding|INFO|Setting lport 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 down in Southbound
Jan 27 09:20:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:37Z|01394|binding|INFO|Removing iface tap1bcec80f-dc ovn-installed in OVS
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.865 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 kernel: tap0b21a97c-a7 (unregistering): left promiscuous mode
Jan 27 09:20:37 np0005597378 NetworkManager[48904]: <info>  [1769523637.8727] device (tap0b21a97c-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:20:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:37Z|01395|binding|INFO|Releasing lport 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba from this chassis (sb_readonly=1)
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.888 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:37Z|01396|binding|INFO|Removing iface tap0b21a97c-a7 ovn-installed in OVS
Jan 27 09:20:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:37Z|01397|if_status|INFO|Dropped 5 log messages in last 680 seconds (most recently, 680 seconds ago) due to excessive rate
Jan 27 09:20:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:37Z|01398|if_status|INFO|Not setting lport 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba down as sb is readonly
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.894 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 nova_compute[238941]: 2026-01-27 14:20:37.907 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:37 np0005597378 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000080.scope: Deactivated successfully.
Jan 27 09:20:37 np0005597378 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000080.scope: Consumed 16.421s CPU time.
Jan 27 09:20:37 np0005597378 systemd-machined[207425]: Machine qemu-160-instance-00000080 terminated.
Jan 27 09:20:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:37Z|01399|binding|INFO|Setting lport 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba down in Southbound
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.985 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:73:7f 10.100.0.11'], port_security=['fa:16:3e:1a:73:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '14ad708e-9b73-4e8e-822e-036be4f62cdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21712e78-75dc-4510-801a-6748e9e4e02c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1bcec80f-dc59-4ec8-95f1-fb7555b8b889) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.986 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 in datapath bd37f3d1-36c6-44a7-9f3e-1ef294aba42f unbound from our chassis#033[00m
Jan 27 09:20:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:37.987 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd37f3d1-36c6-44a7-9f3e-1ef294aba42f#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.006 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[407428fd-1b2b-4172-bdeb-8739e5d9ab98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.011 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:83:f4 2001:db8::f816:3eff:fea6:83f4'], port_security=['fa:16:3e:a6:83:f4 2001:db8::f816:3eff:fea6:83f4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea6:83f4/64', 'neutron:device_id': '14ad708e-9b73-4e8e-822e-036be4f62cdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77fecdf9-05ce-491c-ab82-8473333acf08, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.036 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3252ae-9a06-4e8a-ab1f-14075e08ed11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.039 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c61280ab-90b8-40cc-9fdd-50a2f5c388ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.068 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[63a96a85-84ad-4c83-a9c0-fc7ea832cefc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.087 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f1660a4a-dafe-4b62-8de7-d274558f468b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd37f3d1-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:2c:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 393], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618239, 'reachable_time': 42315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358754, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.104 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[821eb145-05b9-4c29-bd21-09b71fbc88f8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbd37f3d1-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618251, 'tstamp': 618251}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358756, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbd37f3d1-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618254, 'tstamp': 618254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358756, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.104 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.107 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd37f3d1-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.114 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:38 np0005597378 NetworkManager[48904]: <info>  [1769523638.1200] manager: (tap0b21a97c-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/576)
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.128 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd37f3d1-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.129 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.129 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd37f3d1-30, col_values=(('external_ids', {'iface-id': '40babe7c-93a1-447f-a7bf-393e56c7e18c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.130 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.131 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba in datapath ec30aef5-5eb6-4cbb-86f9-bf221c914a9f unbound from our chassis#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.132 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec30aef5-5eb6-4cbb-86f9-bf221c914a9f#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.137 238945 INFO nova.virt.libvirt.driver [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Instance destroyed successfully.#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.137 238945 DEBUG nova.objects.instance [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 14ad708e-9b73-4e8e-822e-036be4f62cdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.150 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8423e6-f9b7-4a23-8943-6679bb7ae7d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.180 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[08e74a54-cc56-4ca3-8e4d-81b2e9cb641b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.183 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[733ddb83-8076-47d0-8c0e-0a4c602966e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.219 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4312d41a-3530-47b3-8437-6f15e22318de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.237 238945 DEBUG nova.virt.libvirt.vif [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-375219371',display_name='tempest-TestGettingAddress-server-375219371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-375219371',id=128,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-migbk3bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:54Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=14ad708e-9b73-4e8e-822e-036be4f62cdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.238 238945 DEBUG nova.network.os_vif_util [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.239 238945 DEBUG nova.network.os_vif_util [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.239 238945 DEBUG os_vif [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.240 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.241 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bcec80f-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.242 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[98567f3b-2814-4b3e-9792-8b905e276453]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec30aef5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:b0:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618437, 'reachable_time': 33407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358785, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.247 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.249 238945 INFO os_vif [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:73:7f,bridge_name='br-int',has_traffic_filtering=True,id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bcec80f-dc')#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.250 238945 DEBUG nova.virt.libvirt.vif [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-375219371',display_name='tempest-TestGettingAddress-server-375219371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-375219371',id=128,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-migbk3bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:54Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=14ad708e-9b73-4e8e-822e-036be4f62cdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.251 238945 DEBUG nova.network.os_vif_util [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.251 238945 DEBUG nova.network.os_vif_util [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.252 238945 DEBUG os_vif [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.253 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.253 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b21a97c-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.262 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.264 238945 INFO os_vif [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:83:f4,bridge_name='br-int',has_traffic_filtering=True,id=0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b21a97c-a7')#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d8579956-f8f9-42f0-aa04-7e96c40639c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec30aef5-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618449, 'tstamp': 618449}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358787, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.265 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec30aef5-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.268 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec30aef5-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.268 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.268 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec30aef5-50, col_values=(('external_ids', {'iface-id': 'efb5ae4a-27c5-4322-b9a7-2ceba053c0fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:38.269 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:20:38 np0005597378 nova_compute[238941]: 2026-01-27 14:20:38.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 354 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 123 op/s
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.235 238945 DEBUG nova.compute.manager [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-changed-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.236 238945 DEBUG nova.compute.manager [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing instance network info cache due to event network-changed-1bcec80f-dc59-4ec8-95f1-fb7555b8b889. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.236 238945 DEBUG oslo_concurrency.lockutils [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.237 238945 DEBUG oslo_concurrency.lockutils [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.237 238945 DEBUG nova.network.neutron [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Refreshing network info cache for port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.442 238945 DEBUG nova.network.neutron [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updated VIF entry in instance network info cache for port 09c77aca-6ddf-4429-a493-6659c2468c83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.442 238945 DEBUG nova.network.neutron [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updating instance_info_cache with network_info: [{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.564 238945 DEBUG oslo_concurrency.lockutils [req-c8ff69c1-ca91-4896-be58-7430286fbd80 req-82660648-32dd-4abf-9a81-1fe33722192c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.921 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.922 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.923 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.923 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.924 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] No waiting events found dispatching network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.924 238945 WARNING nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received unexpected event network-vif-plugged-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.925 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-unplugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.925 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.926 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.926 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.927 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] No waiting events found dispatching network-vif-unplugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.927 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-unplugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.928 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.928 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.928 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.929 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.929 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] No waiting events found dispatching network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.930 238945 WARNING nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received unexpected event network-vif-plugged-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.930 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-unplugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.930 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.931 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.931 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.931 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] No waiting events found dispatching network-vif-unplugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.932 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-unplugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.932 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.933 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.933 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.933 238945 DEBUG oslo_concurrency.lockutils [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.934 238945 DEBUG nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] No waiting events found dispatching network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:39 np0005597378 nova_compute[238941]: 2026-01-27 14:20:39.934 238945 WARNING nova.compute.manager [req-1fd7c4ee-2d17-47f9-b144-cf9dee891639 req-e1379a8d-4ff9-42b0-8481-4dae122b82c0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received unexpected event network-vif-plugged-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:20:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 296 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 728 KiB/s wr, 121 op/s
Jan 27 09:20:41 np0005597378 nova_compute[238941]: 2026-01-27 14:20:41.116 238945 INFO nova.virt.libvirt.driver [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Deleting instance files /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1_del#033[00m
Jan 27 09:20:41 np0005597378 nova_compute[238941]: 2026-01-27 14:20:41.117 238945 INFO nova.virt.libvirt.driver [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Deletion of /var/lib/nova/instances/524e15bb-2900-40c4-a30f-4b157bfe59e1_del complete#033[00m
Jan 27 09:20:41 np0005597378 nova_compute[238941]: 2026-01-27 14:20:41.389 238945 INFO nova.compute.manager [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Took 4.52 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:20:41 np0005597378 nova_compute[238941]: 2026-01-27 14:20:41.389 238945 DEBUG oslo.service.loopingcall [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:20:41 np0005597378 nova_compute[238941]: 2026-01-27 14:20:41.390 238945 DEBUG nova.compute.manager [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:20:41 np0005597378 nova_compute[238941]: 2026-01-27 14:20:41.390 238945 DEBUG nova.network.neutron [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:20:41 np0005597378 nova_compute[238941]: 2026-01-27 14:20:41.887 238945 INFO nova.virt.libvirt.driver [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Deleting instance files /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd_del#033[00m
Jan 27 09:20:41 np0005597378 nova_compute[238941]: 2026-01-27 14:20:41.888 238945 INFO nova.virt.libvirt.driver [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Deletion of /var/lib/nova/instances/14ad708e-9b73-4e8e-822e-036be4f62cdd_del complete#033[00m
Jan 27 09:20:42 np0005597378 nova_compute[238941]: 2026-01-27 14:20:42.486 238945 INFO nova.compute.manager [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Took 4.80 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:20:42 np0005597378 nova_compute[238941]: 2026-01-27 14:20:42.487 238945 DEBUG oslo.service.loopingcall [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:20:42 np0005597378 nova_compute[238941]: 2026-01-27 14:20:42.487 238945 DEBUG nova.compute.manager [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:20:42 np0005597378 nova_compute[238941]: 2026-01-27 14:20:42.487 238945 DEBUG nova.network.neutron [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:20:42 np0005597378 nova_compute[238941]: 2026-01-27 14:20:42.639 238945 DEBUG nova.network.neutron [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updated VIF entry in instance network info cache for port 1bcec80f-dc59-4ec8-95f1-fb7555b8b889. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:20:42 np0005597378 nova_compute[238941]: 2026-01-27 14:20:42.639 238945 DEBUG nova.network.neutron [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [{"id": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "address": "fa:16:3e:1a:73:7f", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bcec80f-dc", "ovs_interfaceid": "1bcec80f-dc59-4ec8-95f1-fb7555b8b889", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:42 np0005597378 nova_compute[238941]: 2026-01-27 14:20:42.728 238945 DEBUG oslo_concurrency.lockutils [req-2675e2e3-1447-4db1-9b82-c3a3fe610869 req-e53c25ad-2d3c-4b81-b663-3f07608d02d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-14ad708e-9b73-4e8e-822e-036be4f62cdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:20:42 np0005597378 nova_compute[238941]: 2026-01-27 14:20:42.753 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 296 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 104 op/s
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.249 238945 DEBUG nova.network.neutron [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.255 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.431 238945 DEBUG nova.compute.manager [req-05dc5bb5-f1b2-4a73-8951-bb65beba7d73 req-3094119e-30f7-4ca6-818b-84cb835d9ee6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Received event network-vif-deleted-ac0e0d3a-d130-4a9a-924f-77a87f787cc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.431 238945 INFO nova.compute.manager [req-05dc5bb5-f1b2-4a73-8951-bb65beba7d73 req-3094119e-30f7-4ca6-818b-84cb835d9ee6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Neutron deleted interface ac0e0d3a-d130-4a9a-924f-77a87f787cc2; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.431 238945 DEBUG nova.network.neutron [req-05dc5bb5-f1b2-4a73-8951-bb65beba7d73 req-3094119e-30f7-4ca6-818b-84cb835d9ee6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.443 238945 DEBUG nova.compute.manager [req-5aacd128-66dd-440c-908b-b34397ba86c9 req-b3fdb662-d0a3-453e-ac71-8080d64767a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-deleted-1bcec80f-dc59-4ec8-95f1-fb7555b8b889 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.443 238945 INFO nova.compute.manager [req-5aacd128-66dd-440c-908b-b34397ba86c9 req-b3fdb662-d0a3-453e-ac71-8080d64767a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Neutron deleted interface 1bcec80f-dc59-4ec8-95f1-fb7555b8b889; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.444 238945 DEBUG nova.network.neutron [req-5aacd128-66dd-440c-908b-b34397ba86c9 req-b3fdb662-d0a3-453e-ac71-8080d64767a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [{"id": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "address": "fa:16:3e:a6:83:f4", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea6:83f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b21a97c-a7", "ovs_interfaceid": "0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.446 238945 INFO nova.compute.manager [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Took 2.06 seconds to deallocate network for instance.#033[00m
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.452 238945 DEBUG nova.compute.manager [req-05dc5bb5-f1b2-4a73-8951-bb65beba7d73 req-3094119e-30f7-4ca6-818b-84cb835d9ee6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Detach interface failed, port_id=ac0e0d3a-d130-4a9a-924f-77a87f787cc2, reason: Instance 524e15bb-2900-40c4-a30f-4b157bfe59e1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.871 238945 DEBUG nova.compute.manager [req-5aacd128-66dd-440c-908b-b34397ba86c9 req-b3fdb662-d0a3-453e-ac71-8080d64767a9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Detach interface failed, port_id=1bcec80f-dc59-4ec8-95f1-fb7555b8b889, reason: Instance 14ad708e-9b73-4e8e-822e-036be4f62cdd could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.909 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.909 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:43 np0005597378 nova_compute[238941]: 2026-01-27 14:20:43.920 238945 DEBUG nova.network.neutron [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:44 np0005597378 nova_compute[238941]: 2026-01-27 14:20:44.029 238945 DEBUG oslo_concurrency.processutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:44 np0005597378 nova_compute[238941]: 2026-01-27 14:20:44.145 238945 INFO nova.compute.manager [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Took 1.66 seconds to deallocate network for instance.#033[00m
Jan 27 09:20:44 np0005597378 nova_compute[238941]: 2026-01-27 14:20:44.436 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:20:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454975699' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:20:44 np0005597378 nova_compute[238941]: 2026-01-27 14:20:44.591 238945 DEBUG oslo_concurrency.processutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:44 np0005597378 nova_compute[238941]: 2026-01-27 14:20:44.596 238945 DEBUG nova.compute.provider_tree [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:20:44 np0005597378 nova_compute[238941]: 2026-01-27 14:20:44.635 238945 DEBUG nova.scheduler.client.report [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:20:44 np0005597378 nova_compute[238941]: 2026-01-27 14:20:44.768 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:44 np0005597378 nova_compute[238941]: 2026-01-27 14:20:44.770 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:44 np0005597378 nova_compute[238941]: 2026-01-27 14:20:44.852 238945 DEBUG oslo_concurrency.processutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:44 np0005597378 nova_compute[238941]: 2026-01-27 14:20:44.976 238945 INFO nova.scheduler.client.report [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 524e15bb-2900-40c4-a30f-4b157bfe59e1#033[00m
Jan 27 09:20:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 247 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 37 KiB/s wr, 124 op/s
Jan 27 09:20:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:20:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3051963444' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:20:45 np0005597378 nova_compute[238941]: 2026-01-27 14:20:45.433 238945 DEBUG oslo_concurrency.processutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:45 np0005597378 nova_compute[238941]: 2026-01-27 14:20:45.440 238945 DEBUG nova.compute.provider_tree [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:20:45 np0005597378 nova_compute[238941]: 2026-01-27 14:20:45.469 238945 DEBUG nova.scheduler.client.report [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:20:45 np0005597378 nova_compute[238941]: 2026-01-27 14:20:45.508 238945 DEBUG oslo_concurrency.lockutils [None req-7d404789-ee64-4c67-a109-94ef5e27dafd 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "524e15bb-2900-40c4-a30f-4b157bfe59e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:45 np0005597378 nova_compute[238941]: 2026-01-27 14:20:45.511 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:45 np0005597378 nova_compute[238941]: 2026-01-27 14:20:45.654 238945 INFO nova.scheduler.client.report [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 14ad708e-9b73-4e8e-822e-036be4f62cdd#033[00m
Jan 27 09:20:45 np0005597378 nova_compute[238941]: 2026-01-27 14:20:45.723 238945 DEBUG nova.compute.manager [req-2a89bd09-496f-46fb-8976-83f69638c1bd req-33c91e84-934f-4a12-b848-395c5f4571b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Received event network-vif-deleted-0b21a97c-a701-4ed3-b0ac-dc7542f2b9ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:45 np0005597378 nova_compute[238941]: 2026-01-27 14:20:45.910 238945 DEBUG oslo_concurrency.lockutils [None req-0717d59a-bf42-49ef-856a-1fc609eaaeb9 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "14ad708e-9b73-4e8e-822e-036be4f62cdd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:46.325 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:46.326 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:46.329 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 246 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 125 op/s
Jan 27 09:20:47 np0005597378 nova_compute[238941]: 2026-01-27 14:20:47.753 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:20:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:20:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:20:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:20:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:20:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:20:48 np0005597378 nova_compute[238941]: 2026-01-27 14:20:48.256 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 258 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 886 KiB/s wr, 112 op/s
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.336 238945 DEBUG oslo_concurrency.lockutils [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "interface-8112a700-f12a-43be-a5c6-f0536e53b2c4-a6c25c1f-7e72-447c-98b1-66fc3fd447e1" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.336 238945 DEBUG oslo_concurrency.lockutils [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-8112a700-f12a-43be-a5c6-f0536e53b2c4-a6c25c1f-7e72-447c-98b1-66fc3fd447e1" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.353 238945 DEBUG nova.objects.instance [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'flavor' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.383 238945 DEBUG nova.virt.libvirt.vif [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.384 238945 DEBUG nova.network.os_vif_util [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.384 238945 DEBUG nova.network.os_vif_util [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.389 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.391 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.394 238945 DEBUG nova.virt.libvirt.driver [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Attempting to detach device tapa6c25c1f-7e from instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.394 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] detach device xml: <interface type="ethernet">
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:33:b5:64"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <target dev="tapa6c25c1f-7e"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]: </interface>
Jan 27 09:20:49 np0005597378 nova_compute[238941]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.417 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.420 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface>not found in domain: <domain type='kvm' id='159'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <name>instance-0000007f</name>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <uuid>8112a700-f12a-43be-a5c6-f0536e53b2c4</uuid>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:19:43</nova:creationTime>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:port uuid="a6c25c1f-7e72-447c-98b1-66fc3fd447e1">
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:20:49 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <resource>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </resource>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <entry name='serial'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <entry name='uuid'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk' index='2'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config' index='1'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:20:a8:49'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target dev='tap4be63359-13'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:33:b5:64'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target dev='tapa6c25c1f-7e'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='net1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <source path='/dev/pts/1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      </target>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/1'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <source path='/dev/pts/1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </console>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c559,c601</label>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c559,c601</imagelabel>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:20:49 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:20:49 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.420 238945 INFO nova.virt.libvirt.driver [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully detached device tapa6c25c1f-7e from instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 from the persistent domain config.
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.420 238945 DEBUG nova.virt.libvirt.driver [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] (1/8): Attempting to detach device tapa6c25c1f-7e with device alias net1 from instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.421 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] detach device xml: <interface type="ethernet">
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <mac address="fa:16:3e:33:b5:64"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <model type="virtio"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <mtu size="1442"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <target dev="tapa6c25c1f-7e"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]: </interface>
Jan 27 09:20:49 np0005597378 nova_compute[238941]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 27 09:20:49 np0005597378 kernel: tapa6c25c1f-7e (unregistering): left promiscuous mode
Jan 27 09:20:49 np0005597378 NetworkManager[48904]: <info>  [1769523649.4810] device (tapa6c25c1f-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.485 238945 DEBUG nova.compute.manager [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-changed-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.485 238945 DEBUG nova.compute.manager [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing instance network info cache due to event network-changed-4e0bfd53-3592-45ef-aef8-c273dbee749b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.485 238945 DEBUG oslo_concurrency.lockutils [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.485 238945 DEBUG oslo_concurrency.lockutils [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.486 238945 DEBUG nova.network.neutron [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Refreshing network info cache for port 4e0bfd53-3592-45ef-aef8-c273dbee749b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.494 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:49Z|01400|binding|INFO|Releasing lport a6c25c1f-7e72-447c-98b1-66fc3fd447e1 from this chassis (sb_readonly=0)
Jan 27 09:20:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:49Z|01401|binding|INFO|Setting lport a6c25c1f-7e72-447c-98b1-66fc3fd447e1 down in Southbound
Jan 27 09:20:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:49Z|01402|binding|INFO|Removing iface tapa6c25c1f-7e ovn-installed in OVS
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.496 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.506 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:b5:64 10.100.0.20', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': '8112a700-f12a-43be-a5c6-f0536e53b2c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4e7fe83-1b26-4cab-bd58-13d7a6a5e2cb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a6c25c1f-7e72-447c-98b1-66fc3fd447e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:20:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.507 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a6c25c1f-7e72-447c-98b1-66fc3fd447e1 in datapath aa696622-36f6-4e49-a5aa-336a8636b3ee unbound from our chassis#033[00m
Jan 27 09:20:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.509 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa696622-36f6-4e49-a5aa-336a8636b3ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:20:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.510 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c4931785-b95e-4ea6-8a67-b55861e83403]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.515 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee namespace which is not needed anymore#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.514 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.515 238945 DEBUG nova.virt.libvirt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Received event <DeviceRemovedEvent: 1769523649.5148034, 8112a700-f12a-43be-a5c6-f0536e53b2c4 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.518 238945 DEBUG nova.virt.libvirt.driver [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Start waiting for the detach event from libvirt for device tapa6c25c1f-7e with device alias net1 for instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.518 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.525 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface>not found in domain: <domain type='kvm' id='159'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <name>instance-0000007f</name>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <uuid>8112a700-f12a-43be-a5c6-f0536e53b2c4</uuid>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:19:43</nova:creationTime>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:port uuid="a6c25c1f-7e72-447c-98b1-66fc3fd447e1">
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:20:49 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <resource>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </resource>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <entry name='serial'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <entry name='uuid'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk' index='2'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config' index='1'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:20:a8:49'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target dev='tap4be63359-13'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <source path='/dev/pts/1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      </target>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/1'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <source path='/dev/pts/1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </console>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c559,c601</label>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c559,c601</imagelabel>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:20:49 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:20:49 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.525 238945 INFO nova.virt.libvirt.driver [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully detached device tapa6c25c1f-7e from instance 8112a700-f12a-43be-a5c6-f0536e53b2c4 from the live domain config.#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.525 238945 DEBUG nova.virt.libvirt.vif [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.526 238945 DEBUG nova.network.os_vif_util [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.526 238945 DEBUG nova.network.os_vif_util [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.526 238945 DEBUG os_vif [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.528 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6c25c1f-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.532 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.532 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.534 238945 INFO os_vif [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e')#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.535 238945 DEBUG nova.virt.libvirt.guest [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:20:49</nova:creationTime>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 09:20:49 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:20:49 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:20:49 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:20:49 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.620 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.621 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.621 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.621 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.621 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.622 238945 INFO nova.compute.manager [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Terminating instance#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.623 238945 DEBUG nova.compute.manager [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:20:49 np0005597378 kernel: tap4e0bfd53-35 (unregistering): left promiscuous mode
Jan 27 09:20:49 np0005597378 NetworkManager[48904]: <info>  [1769523649.7366] device (tap4e0bfd53-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:20:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:49Z|01403|binding|INFO|Releasing lport 4e0bfd53-3592-45ef-aef8-c273dbee749b from this chassis (sb_readonly=0)
Jan 27 09:20:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:49Z|01404|binding|INFO|Setting lport 4e0bfd53-3592-45ef-aef8-c273dbee749b down in Southbound
Jan 27 09:20:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:49Z|01405|binding|INFO|Removing iface tap4e0bfd53-35 ovn-installed in OVS
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.746 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.753 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:f3:c2 10.100.0.8'], port_security=['fa:16:3e:e7:f3:c2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '98452226-e32f-475f-814f-d0eba538b8ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21712e78-75dc-4510-801a-6748e9e4e02c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4e0bfd53-3592-45ef-aef8-c273dbee749b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:20:49 np0005597378 kernel: tap0bd6bb45-68 (unregistering): left promiscuous mode
Jan 27 09:20:49 np0005597378 NetworkManager[48904]: <info>  [1769523649.7632] device (tap0bd6bb45-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.765 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:49Z|01406|binding|INFO|Releasing lport 0bd6bb45-6845-4dd7-abd7-26549236c21b from this chassis (sb_readonly=0)
Jan 27 09:20:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:49Z|01407|binding|INFO|Setting lport 0bd6bb45-6845-4dd7-abd7-26549236c21b down in Southbound
Jan 27 09:20:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:49Z|01408|binding|INFO|Removing iface tap0bd6bb45-68 ovn-installed in OVS
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.774 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:49 np0005597378 nova_compute[238941]: 2026-01-27 14:20:49.787 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:49.787 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:4f:ef 2001:db8::f816:3eff:fe9d:4fef'], port_security=['fa:16:3e:9d:4f:ef 2001:db8::f816:3eff:fe9d:4fef'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9d:4fef/64', 'neutron:device_id': '98452226-e32f-475f-814f-d0eba538b8ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9b0a993-df9e-47a3-9c4f-f3af8e0533ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77fecdf9-05ce-491c-ab82-8473333acf08, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=0bd6bb45-6845-4dd7-abd7-26549236c21b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:20:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:49Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:fd:e4 10.100.0.14
Jan 27 09:20:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:49Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:fd:e4 10.100.0.14
Jan 27 09:20:50 np0005597378 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [NOTICE]   (357004) : haproxy version is 2.8.14-c23fe91
Jan 27 09:20:50 np0005597378 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [NOTICE]   (357004) : path to executable is /usr/sbin/haproxy
Jan 27 09:20:50 np0005597378 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [WARNING]  (357004) : Exiting Master process...
Jan 27 09:20:50 np0005597378 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [WARNING]  (357004) : Exiting Master process...
Jan 27 09:20:50 np0005597378 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [ALERT]    (357004) : Current worker (357006) exited with code 143 (Terminated)
Jan 27 09:20:50 np0005597378 neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee[357000]: [WARNING]  (357004) : All workers exited. Exiting... (0)
Jan 27 09:20:50 np0005597378 systemd[1]: libpod-c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842.scope: Deactivated successfully.
Jan 27 09:20:50 np0005597378 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Jan 27 09:20:50 np0005597378 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007e.scope: Consumed 17.726s CPU time.
Jan 27 09:20:50 np0005597378 podman[358875]: 2026-01-27 14:20:50.05287395 +0000 UTC m=+0.366378590 container died c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 09:20:50 np0005597378 systemd-machined[207425]: Machine qemu-158-instance-0000007e terminated.
Jan 27 09:20:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842-userdata-shm.mount: Deactivated successfully.
Jan 27 09:20:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay-29bccd3a09aa8b8ec0742474309f47136eb579eb986e6a8bca461955f693665d-merged.mount: Deactivated successfully.
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.250 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:50 np0005597378 NetworkManager[48904]: <info>  [1769523650.2568] manager: (tap0bd6bb45-68): new Tun device (/org/freedesktop/NetworkManager/Devices/577)
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.275 238945 INFO nova.virt.libvirt.driver [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Instance destroyed successfully.#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.275 238945 DEBUG nova.objects.instance [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 98452226-e32f-475f-814f-d0eba538b8ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.300 238945 DEBUG nova.virt.libvirt.vif [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1495948444',display_name='tempest-TestGettingAddress-server-1495948444',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1495948444',id=126,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5t8ktlk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:03Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=98452226-e32f-475f-814f-d0eba538b8ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.301 238945 DEBUG nova.network.os_vif_util [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.301 238945 DEBUG nova.network.os_vif_util [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.302 238945 DEBUG os_vif [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.303 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.304 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e0bfd53-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.306 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.309 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.314 238945 INFO os_vif [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:f3:c2,bridge_name='br-int',has_traffic_filtering=True,id=4e0bfd53-3592-45ef-aef8-c273dbee749b,network=Network(bd37f3d1-36c6-44a7-9f3e-1ef294aba42f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e0bfd53-35')#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.315 238945 DEBUG nova.virt.libvirt.vif [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1495948444',display_name='tempest-TestGettingAddress-server-1495948444',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1495948444',id=126,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLVR4kJ+oiqx33Qg180MmMVMpQx2Z4p65h2MZ/j4tqWaatH2RZqPsv9UCOROlA7h//Dh3EZfYQldXaQLa/CxP24lUA20+AEw0xkAPqU/s4HwenpmowX+fJHmQP3O1sqN6Q==',key_name='tempest-TestGettingAddress-1384189608',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-5t8ktlk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:03Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=98452226-e32f-475f-814f-d0eba538b8ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.316 238945 DEBUG nova.network.os_vif_util [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.316 238945 DEBUG nova.network.os_vif_util [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.317 238945 DEBUG os_vif [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.318 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.318 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bd6bb45-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.321 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.324 238945 INFO os_vif [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:4f:ef,bridge_name='br-int',has_traffic_filtering=True,id=0bd6bb45-6845-4dd7-abd7-26549236c21b,network=Network(ec30aef5-5eb6-4cbb-86f9-bf221c914a9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bd6bb45-68')#033[00m
Jan 27 09:20:50 np0005597378 podman[358875]: 2026-01-27 14:20:50.434633249 +0000 UTC m=+0.748137869 container cleanup c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:20:50 np0005597378 systemd[1]: libpod-conmon-c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842.scope: Deactivated successfully.
Jan 27 09:20:50 np0005597378 podman[358951]: 2026-01-27 14:20:50.697603938 +0000 UTC m=+0.232859726 container remove c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:20:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.707 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[602492ed-1001-459e-b976-b287a875a8d7]: (4, ('Tue Jan 27 02:20:49 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee (c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842)\nc6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842\nTue Jan 27 02:20:50 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee (c6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842)\nc6aa32cf47d4737a338f0bdf57a40e655af3824bedce15eda424d27b5b928842\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.710 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccc373b-3fe8-4dd5-8a27-b9cf87a743d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.711 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa696622-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:50 np0005597378 kernel: tapaa696622-30: left promiscuous mode
Jan 27 09:20:50 np0005597378 nova_compute[238941]: 2026-01-27 14:20:50.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.732 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[53319b5d-b0c6-43bf-ab56-e41bd369438b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.754 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1c66f54b-5f3d-4ac7-b6ac-c348158a442a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.756 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c24118b6-43fa-4c3a-afae-5e6d3334605c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.775 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[22fc012b-e8c0-4fbd-ac7f-b846f4fe9fc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622336, 'reachable_time': 43531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358967, 'error': None, 'target': 'ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:50 np0005597378 systemd[1]: run-netns-ovnmeta\x2daa696622\x2d36f6\x2d4e49\x2da5aa\x2d336a8636b3ee.mount: Deactivated successfully.
Jan 27 09:20:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.778 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa696622-36f6-4e49-a5aa-336a8636b3ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:20:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.779 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf7f7c4-5e63-4fcc-8350-f29b0a981468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.780 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4e0bfd53-3592-45ef-aef8-c273dbee749b in datapath bd37f3d1-36c6-44a7-9f3e-1ef294aba42f unbound from our chassis#033[00m
Jan 27 09:20:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.781 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:20:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.782 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e297e893-035b-4f22-b477-b09ec9750b72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:50.782 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f namespace which is not needed anymore#033[00m
Jan 27 09:20:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 268 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 2.0 MiB/s wr, 80 op/s
Jan 27 09:20:51 np0005597378 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [NOTICE]   (356239) : haproxy version is 2.8.14-c23fe91
Jan 27 09:20:51 np0005597378 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [NOTICE]   (356239) : path to executable is /usr/sbin/haproxy
Jan 27 09:20:51 np0005597378 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [WARNING]  (356239) : Exiting Master process...
Jan 27 09:20:51 np0005597378 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [ALERT]    (356239) : Current worker (356241) exited with code 143 (Terminated)
Jan 27 09:20:51 np0005597378 neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f[356202]: [WARNING]  (356239) : All workers exited. Exiting... (0)
Jan 27 09:20:51 np0005597378 systemd[1]: libpod-25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef.scope: Deactivated successfully.
Jan 27 09:20:51 np0005597378 podman[358985]: 2026-01-27 14:20:51.147021704 +0000 UTC m=+0.254741361 container died 25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:20:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef-userdata-shm.mount: Deactivated successfully.
Jan 27 09:20:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2a6a54763952998e1a1b93e3b8227d10c26571ca7886da7a77820100ec3d732f-merged.mount: Deactivated successfully.
Jan 27 09:20:51 np0005597378 podman[358985]: 2026-01-27 14:20:51.666272213 +0000 UTC m=+0.773991880 container cleanup 25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 09:20:51 np0005597378 systemd[1]: libpod-conmon-25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef.scope: Deactivated successfully.
Jan 27 09:20:51 np0005597378 nova_compute[238941]: 2026-01-27 14:20:51.855 238945 DEBUG nova.compute.manager [req-33e5c07c-a765-4754-a617-b238c61f2ba4 req-f25e1c79-a147-4088-a39a-28082a6dff46 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-unplugged-0bd6bb45-6845-4dd7-abd7-26549236c21b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:51 np0005597378 nova_compute[238941]: 2026-01-27 14:20:51.856 238945 DEBUG oslo_concurrency.lockutils [req-33e5c07c-a765-4754-a617-b238c61f2ba4 req-f25e1c79-a147-4088-a39a-28082a6dff46 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:51 np0005597378 nova_compute[238941]: 2026-01-27 14:20:51.857 238945 DEBUG oslo_concurrency.lockutils [req-33e5c07c-a765-4754-a617-b238c61f2ba4 req-f25e1c79-a147-4088-a39a-28082a6dff46 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:51 np0005597378 nova_compute[238941]: 2026-01-27 14:20:51.857 238945 DEBUG oslo_concurrency.lockutils [req-33e5c07c-a765-4754-a617-b238c61f2ba4 req-f25e1c79-a147-4088-a39a-28082a6dff46 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:51 np0005597378 nova_compute[238941]: 2026-01-27 14:20:51.857 238945 DEBUG nova.compute.manager [req-33e5c07c-a765-4754-a617-b238c61f2ba4 req-f25e1c79-a147-4088-a39a-28082a6dff46 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] No waiting events found dispatching network-vif-unplugged-0bd6bb45-6845-4dd7-abd7-26549236c21b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:51 np0005597378 nova_compute[238941]: 2026-01-27 14:20:51.858 238945 DEBUG nova.compute.manager [req-33e5c07c-a765-4754-a617-b238c61f2ba4 req-f25e1c79-a147-4088-a39a-28082a6dff46 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-unplugged-0bd6bb45-6845-4dd7-abd7-26549236c21b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:20:51 np0005597378 nova_compute[238941]: 2026-01-27 14:20:51.936 238945 DEBUG nova.compute.manager [req-df5b37b0-5dbe-45d1-b57e-a886efb72e4d req-c76d8baa-d77b-4c82-9d81-244880c93c71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-unplugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:51 np0005597378 nova_compute[238941]: 2026-01-27 14:20:51.937 238945 DEBUG oslo_concurrency.lockutils [req-df5b37b0-5dbe-45d1-b57e-a886efb72e4d req-c76d8baa-d77b-4c82-9d81-244880c93c71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:51 np0005597378 nova_compute[238941]: 2026-01-27 14:20:51.937 238945 DEBUG oslo_concurrency.lockutils [req-df5b37b0-5dbe-45d1-b57e-a886efb72e4d req-c76d8baa-d77b-4c82-9d81-244880c93c71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:51 np0005597378 nova_compute[238941]: 2026-01-27 14:20:51.937 238945 DEBUG oslo_concurrency.lockutils [req-df5b37b0-5dbe-45d1-b57e-a886efb72e4d req-c76d8baa-d77b-4c82-9d81-244880c93c71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:51 np0005597378 nova_compute[238941]: 2026-01-27 14:20:51.937 238945 DEBUG nova.compute.manager [req-df5b37b0-5dbe-45d1-b57e-a886efb72e4d req-c76d8baa-d77b-4c82-9d81-244880c93c71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-unplugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:51 np0005597378 nova_compute[238941]: 2026-01-27 14:20:51.938 238945 WARNING nova.compute.manager [req-df5b37b0-5dbe-45d1-b57e-a886efb72e4d req-c76d8baa-d77b-4c82-9d81-244880c93c71 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received unexpected event network-vif-unplugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:20:52 np0005597378 podman[359017]: 2026-01-27 14:20:52.010800269 +0000 UTC m=+0.319967491 container remove 25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.019 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f607a2b1-d91a-4599-8b1d-8d24c4b1c6d9]: (4, ('Tue Jan 27 02:20:50 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f (25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef)\n25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef\nTue Jan 27 02:20:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f (25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef)\n25481eb47c986d91d8cd8a26bbe900073a40bdd643ed0fa9d1677ffa112217ef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.021 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[57ffd302-285c-4606-95e9-da3910f4715f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.022 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd37f3d1-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:52 np0005597378 nova_compute[238941]: 2026-01-27 14:20:52.024 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:52 np0005597378 kernel: tapbd37f3d1-30: left promiscuous mode
Jan 27 09:20:52 np0005597378 nova_compute[238941]: 2026-01-27 14:20:52.044 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.049 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[04a7aa4d-f5e6-4d7c-8574-204d9f162425]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.066 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f2998983-8992-4c1b-b3fd-63bea9589808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.068 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fb35ca5c-ac05-4020-b611-3c186b5f19b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.086 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7d234a99-d448-4620-ac91-601cb2b6a6a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618231, 'reachable_time': 16387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359033, 'error': None, 'target': 'ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.088 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bd37f3d1-36c6-44a7-9f3e-1ef294aba42f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.088 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f48fe2-6abb-4377-9652-9ed1874d09b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.089 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 0bd6bb45-6845-4dd7-abd7-26549236c21b in datapath ec30aef5-5eb6-4cbb-86f9-bf221c914a9f unbound from our chassis#033[00m
Jan 27 09:20:52 np0005597378 systemd[1]: run-netns-ovnmeta\x2dbd37f3d1\x2d36c6\x2d44a7\x2d9f3e\x2d1ef294aba42f.mount: Deactivated successfully.
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.091 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.092 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc938aa-bd41-4e39-85a7-37ee1758fea6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.092 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f namespace which is not needed anymore#033[00m
Jan 27 09:20:52 np0005597378 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [NOTICE]   (356376) : haproxy version is 2.8.14-c23fe91
Jan 27 09:20:52 np0005597378 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [NOTICE]   (356376) : path to executable is /usr/sbin/haproxy
Jan 27 09:20:52 np0005597378 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [WARNING]  (356376) : Exiting Master process...
Jan 27 09:20:52 np0005597378 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [ALERT]    (356376) : Current worker (356378) exited with code 143 (Terminated)
Jan 27 09:20:52 np0005597378 neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f[356369]: [WARNING]  (356376) : All workers exited. Exiting... (0)
Jan 27 09:20:52 np0005597378 nova_compute[238941]: 2026-01-27 14:20:52.298 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523637.297295, 524e15bb-2900-40c4-a30f-4b157bfe59e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:20:52 np0005597378 nova_compute[238941]: 2026-01-27 14:20:52.299 238945 INFO nova.compute.manager [-] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:20:52 np0005597378 systemd[1]: libpod-921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b.scope: Deactivated successfully.
Jan 27 09:20:52 np0005597378 podman[359051]: 2026-01-27 14:20:52.305639759 +0000 UTC m=+0.123971580 container died 921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 09:20:52 np0005597378 nova_compute[238941]: 2026-01-27 14:20:52.321 238945 DEBUG nova.compute.manager [None req-b6bdb70f-325c-4ce5-ba7a-7bea4584c6d1 - - - - - -] [instance: 524e15bb-2900-40c4-a30f-4b157bfe59e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:20:52 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b-userdata-shm.mount: Deactivated successfully.
Jan 27 09:20:52 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a95b99ac101cf03a0ab1a9c7d2ef5193ede85c9434f82059e52762aafe836921-merged.mount: Deactivated successfully.
Jan 27 09:20:52 np0005597378 podman[359051]: 2026-01-27 14:20:52.664013065 +0000 UTC m=+0.482344896 container cleanup 921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 09:20:52 np0005597378 systemd[1]: libpod-conmon-921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b.scope: Deactivated successfully.
Jan 27 09:20:52 np0005597378 nova_compute[238941]: 2026-01-27 14:20:52.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:52 np0005597378 nova_compute[238941]: 2026-01-27 14:20:52.849 238945 DEBUG oslo_concurrency.lockutils [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:20:52 np0005597378 nova_compute[238941]: 2026-01-27 14:20:52.850 238945 DEBUG oslo_concurrency.lockutils [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:20:52 np0005597378 nova_compute[238941]: 2026-01-27 14:20:52.851 238945 DEBUG nova.network.neutron [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:20:52 np0005597378 podman[359081]: 2026-01-27 14:20:52.892589086 +0000 UTC m=+0.199120266 container remove 921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.902 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3a508d-dccc-4663-b965-a83f9542df9d]: (4, ('Tue Jan 27 02:20:52 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f (921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b)\n921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b\nTue Jan 27 02:20:52 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f (921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b)\n921aaa045f0dd826fd1bcec1f1daec620ad3813271f4cfa643377b60179ef82b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.904 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4adb2263-8008-4fed-851f-7f23c75086a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.905 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec30aef5-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:52 np0005597378 kernel: tapec30aef5-50: left promiscuous mode
Jan 27 09:20:52 np0005597378 nova_compute[238941]: 2026-01-27 14:20:52.909 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:52 np0005597378 nova_compute[238941]: 2026-01-27 14:20:52.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1565308e-49be-42e3-9b65-75164831a647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.949 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9908db-36c9-4ab6-8790-a594a6180a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.951 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44c478f5-3be7-4c68-bace-4198a6023e58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.969 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b3cb5f8b-f3d8-40c5-8d22-4e8146a1a93c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618428, 'reachable_time': 35019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359094, 'error': None, 'target': 'ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.971 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ec30aef5-5eb6-4cbb-86f9-bf221c914a9f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:20:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:52.971 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4476a642-cd3b-47a5-9643-90575423c58d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:52 np0005597378 systemd[1]: run-netns-ovnmeta\x2dec30aef5\x2d5eb6\x2d4cbb\x2d86f9\x2dbf221c914a9f.mount: Deactivated successfully.
Jan 27 09:20:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 268 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.135 238945 INFO nova.virt.libvirt.driver [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Deleting instance files /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca_del#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.136 238945 INFO nova.virt.libvirt.driver [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Deletion of /var/lib/nova/instances/98452226-e32f-475f-814f-d0eba538b8ca_del complete#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.139 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523638.1350229, 14ad708e-9b73-4e8e-822e-036be4f62cdd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.139 238945 INFO nova.compute.manager [-] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.177 238945 DEBUG nova.compute.manager [None req-96d3625e-f62b-466b-839b-9f4c234dd73c - - - - - -] [instance: 14ad708e-9b73-4e8e-822e-036be4f62cdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.210 238945 INFO nova.compute.manager [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Took 3.59 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.210 238945 DEBUG oslo.service.loopingcall [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.210 238945 DEBUG nova.compute.manager [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.211 238945 DEBUG nova.network.neutron [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.377 238945 DEBUG nova.network.neutron [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updated VIF entry in instance network info cache for port 4e0bfd53-3592-45ef-aef8-c273dbee749b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.378 238945 DEBUG nova.network.neutron [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "address": "fa:16:3e:9d:4f:ef", "network": {"id": "ec30aef5-5eb6-4cbb-86f9-bf221c914a9f", "bridge": "br-int", "label": "tempest-network-smoke--1488598611", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:4fef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bd6bb45-68", "ovs_interfaceid": "0bd6bb45-6845-4dd7-abd7-26549236c21b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.401 238945 DEBUG oslo_concurrency.lockutils [req-f423dedb-ba40-4f1b-9db7-61e14cf08b92 req-597f524a-8db1-4c2e-83da-0aa749caf5c5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-98452226-e32f-475f-814f-d0eba538b8ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.959 238945 DEBUG nova.compute.manager [req-189e096f-fc6d-40de-adad-765b7dad8f4e req-ed35f3a0-c57a-4635-b6c2-9f45a29ad1f6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.959 238945 DEBUG oslo_concurrency.lockutils [req-189e096f-fc6d-40de-adad-765b7dad8f4e req-ed35f3a0-c57a-4635-b6c2-9f45a29ad1f6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.960 238945 DEBUG oslo_concurrency.lockutils [req-189e096f-fc6d-40de-adad-765b7dad8f4e req-ed35f3a0-c57a-4635-b6c2-9f45a29ad1f6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.960 238945 DEBUG oslo_concurrency.lockutils [req-189e096f-fc6d-40de-adad-765b7dad8f4e req-ed35f3a0-c57a-4635-b6c2-9f45a29ad1f6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.960 238945 DEBUG nova.compute.manager [req-189e096f-fc6d-40de-adad-765b7dad8f4e req-ed35f3a0-c57a-4635-b6c2-9f45a29ad1f6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] No waiting events found dispatching network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:53 np0005597378 nova_compute[238941]: 2026-01-27 14:20:53.960 238945 WARNING nova.compute.manager [req-189e096f-fc6d-40de-adad-765b7dad8f4e req-ed35f3a0-c57a-4635-b6c2-9f45a29ad1f6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received unexpected event network-vif-plugged-0bd6bb45-6845-4dd7-abd7-26549236c21b for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.071 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.072 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.072 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.073 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.073 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.073 238945 WARNING nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received unexpected event network-vif-plugged-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.074 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-deleted-a6c25c1f-7e72-447c-98b1-66fc3fd447e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.074 238945 INFO nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Neutron deleted interface a6c25c1f-7e72-447c-98b1-66fc3fd447e1; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.075 238945 DEBUG nova.network.neutron [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.167 238945 DEBUG nova.objects.instance [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'system_metadata' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.202 238945 DEBUG nova.objects.instance [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lazy-loading 'flavor' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.235 238945 DEBUG nova.virt.libvirt.vif [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.236 238945 DEBUG nova.network.os_vif_util [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.237 238945 DEBUG nova.network.os_vif_util [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.241 238945 DEBUG nova.virt.libvirt.guest [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.247 238945 DEBUG nova.virt.libvirt.guest [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface>not found in domain: <domain type='kvm' id='159'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <name>instance-0000007f</name>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <uuid>8112a700-f12a-43be-a5c6-f0536e53b2c4</uuid>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:20:49</nova:creationTime>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:20:54 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <resource>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </resource>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <entry name='serial'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <entry name='uuid'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk' index='2'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config' index='1'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:20:a8:49'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target dev='tap4be63359-13'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <source path='/dev/pts/1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      </target>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/1'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <source path='/dev/pts/1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </console>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c559,c601</label>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c559,c601</imagelabel>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:20:54 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:20:54 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.248 238945 DEBUG nova.virt.libvirt.guest [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.254 238945 DEBUG nova.virt.libvirt.guest [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:b5:64"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6c25c1f-7e"/></interface> not found in domain: <domain type='kvm' id='159'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <name>instance-0000007f</name>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <uuid>8112a700-f12a-43be-a5c6-f0536e53b2c4</uuid>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:20:49</nova:creationTime>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:20:54 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <memory unit='KiB'>131072</memory>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <vcpu placement='static'>1</vcpu>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <resource>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <partition>/machine</partition>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </resource>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <sysinfo type='smbios'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <entry name='manufacturer'>RDO</entry>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <entry name='product'>OpenStack Compute</entry>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <entry name='serial'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <entry name='uuid'>8112a700-f12a-43be-a5c6-f0536e53b2c4</entry>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <entry name='family'>Virtual Machine</entry>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <boot dev='hd'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <smbios mode='sysinfo'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <vmcoreinfo state='on'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <cpu mode='custom' match='exact' check='full'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <model fallback='forbid'>EPYC-Rome</model>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <vendor>AMD</vendor>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='x2apic'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc-deadline'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='hypervisor'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='tsc_adjust'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='spec-ctrl'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='stibp'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='ssbd'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='cmp_legacy'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='overflow-recov'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='succor'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='ibrs'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='amd-ssbd'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='virt-ssbd'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='lbrv'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='tsc-scale'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='vmcb-clean'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='flushbyasid'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pause-filter'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='pfthreshold'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='xsaves'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='svm'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='require' name='topoext'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='npt'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <feature policy='disable' name='nrip-save'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <clock offset='utc'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <timer name='pit' tickpolicy='delay'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <timer name='hpet' present='no'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <on_poweroff>destroy</on_poweroff>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <on_reboot>restart</on_reboot>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <on_crash>destroy</on_crash>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <disk type='network' device='disk'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk' index='2'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target dev='vda' bus='virtio'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='virtio-disk0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <disk type='network' device='cdrom'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <driver name='qemu' type='raw' cache='none'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <auth username='openstack'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:        <secret type='ceph' uuid='4d8fd694-f443-5fb1-b612-70034b2f3c6e'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <source protocol='rbd' name='vms/8112a700-f12a-43be-a5c6-f0536e53b2c4_disk.config' index='1'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:        <host name='192.168.122.100' port='6789'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target dev='sda' bus='sata'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <readonly/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='sata0-0-0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='0' model='pcie-root'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pcie.0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='1' port='0x10'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='2' port='0x11'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.2'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='3' port='0x12'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.3'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='4' port='0x13'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.4'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='5' port='0x14'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.5'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='6' port='0x15'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.6'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='7' port='0x16'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.7'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='8' port='0x17'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.8'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='9' port='0x18'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.9'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='10' port='0x19'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.10'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='11' port='0x1a'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.11'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='12' port='0x1b'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.12'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='13' port='0x1c'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.13'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='14' port='0x1d'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.14'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='15' port='0x1e'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.15'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='16' port='0x1f'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.16'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='17' port='0x20'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.17'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='18' port='0x21'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.18'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='19' port='0x22'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.19'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='20' port='0x23'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.20'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='21' port='0x24'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.21'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='22' port='0x25'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.22'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='23' port='0x26'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.23'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='24' port='0x27'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.24'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-root-port'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target chassis='25' port='0x28'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.25'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model name='pcie-pci-bridge'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='pci.26'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='usb'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <controller type='sata' index='0'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='ide'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </controller>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <interface type='ethernet'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <mac address='fa:16:3e:20:a8:49'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target dev='tap4be63359-13'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model type='virtio'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <driver name='vhost' rx_queue_size='512'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <mtu size='1442'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='net0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <serial type='pty'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <source path='/dev/pts/1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target type='isa-serial' port='0'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:        <model name='isa-serial'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      </target>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <console type='pty' tty='/dev/pts/1'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <source path='/dev/pts/1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <log file='/var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4/console.log' append='off'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <target type='serial' port='0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='serial0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </console>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <input type='tablet' bus='usb'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='input0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='usb' bus='0' port='1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <input type='mouse' bus='ps2'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='input1'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <input type='keyboard' bus='ps2'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='input2'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </input>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <listen type='address' address='::0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </graphics>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <audio id='1' type='none'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <model type='virtio' heads='1' primary='yes'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='video0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <watchdog model='itco' action='reset'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='watchdog0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </watchdog>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <memballoon model='virtio'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <stats period='10'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='balloon0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <rng model='virtio'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <backend model='random'>/dev/urandom</backend>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <alias name='rng0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <label>system_u:system_r:svirt_t:s0:c559,c601</label>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c559,c601</imagelabel>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <label>+107:+107</label>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <imagelabel>+107:+107</imagelabel>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </seclabel>
Jan 27 09:20:54 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:20:54 np0005597378 nova_compute[238941]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.254 238945 WARNING nova.virt.libvirt.driver [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Detaching interface fa:16:3e:33:b5:64 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapa6c25c1f-7e' not found.
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.256 238945 DEBUG nova.virt.libvirt.vif [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.256 238945 DEBUG nova.network.os_vif_util [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converting VIF {"id": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "address": "fa:16:3e:33:b5:64", "network": {"id": "aa696622-36f6-4e49-a5aa-336a8636b3ee", "bridge": "br-int", "label": "tempest-network-smoke--1157639663", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6c25c1f-7e", "ovs_interfaceid": "a6c25c1f-7e72-447c-98b1-66fc3fd447e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.257 238945 DEBUG nova.network.os_vif_util [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.258 238945 DEBUG os_vif [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.260 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6c25c1f-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.261 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.263 238945 INFO os_vif [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b5:64,bridge_name='br-int',has_traffic_filtering=True,id=a6c25c1f-7e72-447c-98b1-66fc3fd447e1,network=Network(aa696622-36f6-4e49-a5aa-336a8636b3ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6c25c1f-7e')#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.263 238945 DEBUG nova.virt.libvirt.guest [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:name>tempest-TestNetworkBasicOps-server-82343629</nova:name>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:creationTime>2026-01-27 14:20:54</nova:creationTime>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:flavor name="m1.nano">
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:memory>128</nova:memory>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:disk>1</nova:disk>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:swap>0</nova:swap>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:vcpus>1</nova:vcpus>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </nova:flavor>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:owner>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </nova:owner>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  <nova:ports>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    <nova:port uuid="4be63359-1372-48ba-b3a8-f60edc16d879">
Jan 27 09:20:54 np0005597378 nova_compute[238941]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:    </nova:port>
Jan 27 09:20:54 np0005597378 nova_compute[238941]:  </nova:ports>
Jan 27 09:20:54 np0005597378 nova_compute[238941]: </nova:instance>
Jan 27 09:20:54 np0005597378 nova_compute[238941]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.266 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-unplugged-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.266 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.267 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.267 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.267 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] No waiting events found dispatching network-vif-unplugged-4e0bfd53-3592-45ef-aef8-c273dbee749b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.267 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-unplugged-4e0bfd53-3592-45ef-aef8-c273dbee749b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.267 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.268 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "98452226-e32f-475f-814f-d0eba538b8ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.268 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.268 238945 DEBUG oslo_concurrency.lockutils [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.268 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] No waiting events found dispatching network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.268 238945 WARNING nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received unexpected event network-vif-plugged-4e0bfd53-3592-45ef-aef8-c273dbee749b for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.268 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-deleted-0bd6bb45-6845-4dd7-abd7-26549236c21b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.269 238945 INFO nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Neutron deleted interface 0bd6bb45-6845-4dd7-abd7-26549236c21b; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.269 238945 DEBUG nova.network.neutron [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [{"id": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "address": "fa:16:3e:e7:f3:c2", "network": {"id": "bd37f3d1-36c6-44a7-9f3e-1ef294aba42f", "bridge": "br-int", "label": "tempest-network-smoke--873387377", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e0bfd53-35", "ovs_interfaceid": "4e0bfd53-3592-45ef-aef8-c273dbee749b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.345 238945 INFO nova.network.neutron [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Port a6c25c1f-7e72-447c-98b1-66fc3fd447e1 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.346 238945 DEBUG nova.network.neutron [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.357 238945 DEBUG nova.compute.manager [req-801ca72d-8c56-420a-ab7f-97ff9e460ce3 req-33a202f7-2011-4268-9bba-fb52ccb7a31b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Detach interface failed, port_id=0bd6bb45-6845-4dd7-abd7-26549236c21b, reason: Instance 98452226-e32f-475f-814f-d0eba538b8ca could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.374 238945 DEBUG oslo_concurrency.lockutils [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.424 238945 DEBUG oslo_concurrency.lockutils [None req-574bf5dc-e398-4cc1-be7d-a49d18dfccd1 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "interface-8112a700-f12a-43be-a5c6-f0536e53b2c4-a6c25c1f-7e72-447c-98b1-66fc3fd447e1" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:54 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:54Z|01409|binding|INFO|Releasing lport 506e7ffb-d74f-480e-9382-49f98d134f52 from this chassis (sb_readonly=0)
Jan 27 09:20:54 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:54Z|01410|binding|INFO|Releasing lport 60c58b14-38c7-4b18-a86f-4ef52a16b872 from this chassis (sb_readonly=0)
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.851 238945 DEBUG nova.network.neutron [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:54 np0005597378 nova_compute[238941]: 2026-01-27 14:20:54.977 238945 INFO nova.compute.manager [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Took 1.77 seconds to deallocate network for instance.#033[00m
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.031 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.032 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:20:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 221 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.133 238945 DEBUG oslo_concurrency.processutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.249310) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523655249431, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 896, "num_deletes": 250, "total_data_size": 1219438, "memory_usage": 1245096, "flush_reason": "Manual Compaction"}
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523655350490, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 754303, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47718, "largest_seqno": 48613, "table_properties": {"data_size": 750625, "index_size": 1394, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9733, "raw_average_key_size": 20, "raw_value_size": 742841, "raw_average_value_size": 1580, "num_data_blocks": 63, "num_entries": 470, "num_filter_entries": 470, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523575, "oldest_key_time": 1769523575, "file_creation_time": 1769523655, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 101185 microseconds, and 5966 cpu microseconds.
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.350541) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 754303 bytes OK
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.350559) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.430949) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.430988) EVENT_LOG_v1 {"time_micros": 1769523655430980, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.431011) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1215076, prev total WAL file size 1241894, number of live WAL files 2.
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.431652) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373536' seq:72057594037927935, type:22 .. '6D6772737461740032303037' seq:0, type:0; will stop at (end)
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(736KB)], [110(10MB)]
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523655431715, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11654686, "oldest_snapshot_seqno": -1}
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6922 keys, 8764448 bytes, temperature: kUnknown
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523655620619, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8764448, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8719837, "index_size": 26195, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 180353, "raw_average_key_size": 26, "raw_value_size": 8597850, "raw_average_value_size": 1242, "num_data_blocks": 1023, "num_entries": 6922, "num_filter_entries": 6922, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523655, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.621266) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8764448 bytes
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.673989) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 61.5 rd, 46.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.4 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(27.1) write-amplify(11.6) OK, records in: 7400, records dropped: 478 output_compression: NoCompression
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.674047) EVENT_LOG_v1 {"time_micros": 1769523655674026, "job": 66, "event": "compaction_finished", "compaction_time_micros": 189363, "compaction_time_cpu_micros": 24051, "output_level": 6, "num_output_files": 1, "total_output_size": 8764448, "num_input_records": 7400, "num_output_records": 6922, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523655674984, "job": 66, "event": "table_file_deletion", "file_number": 112}
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523655678118, "job": 66, "event": "table_file_deletion", "file_number": 110}
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.431539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.678187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.678194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.678197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.678199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:20:55.678201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:20:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2302703747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.855 238945 DEBUG oslo_concurrency.processutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.722s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.864 238945 DEBUG nova.compute.provider_tree [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.886 238945 DEBUG nova.scheduler.client.report [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.909 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.937 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.937 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.937 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.938 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.938 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.939 238945 INFO nova.compute.manager [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Terminating instance#033[00m
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.940 238945 DEBUG nova.compute.manager [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:20:55 np0005597378 nova_compute[238941]: 2026-01-27 14:20:55.945 238945 INFO nova.scheduler.client.report [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 98452226-e32f-475f-814f-d0eba538b8ca#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.011 238945 DEBUG oslo_concurrency.lockutils [None req-2af1e59f-0cd9-4a61-bbd0-ef45ef81e630 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "98452226-e32f-475f-814f-d0eba538b8ca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:56 np0005597378 kernel: tap4be63359-13 (unregistering): left promiscuous mode
Jan 27 09:20:56 np0005597378 NetworkManager[48904]: <info>  [1769523656.1663] device (tap4be63359-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:20:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:56Z|01411|binding|INFO|Releasing lport 4be63359-1372-48ba-b3a8-f60edc16d879 from this chassis (sb_readonly=0)
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.174 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:56Z|01412|binding|INFO|Setting lport 4be63359-1372-48ba-b3a8-f60edc16d879 down in Southbound
Jan 27 09:20:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:20:56Z|01413|binding|INFO|Removing iface tap4be63359-13 ovn-installed in OVS
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.177 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:56.183 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:a8:49 10.100.0.3'], port_security=['fa:16:3e:20:a8:49 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8112a700-f12a-43be-a5c6-f0536e53b2c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24a46af1-cafa-42b2-ad53-4a62558369c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d00cb397-90a2-41fb-b94f-8a302bfb5bea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=4be63359-1372-48ba-b3a8-f60edc16d879) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:20:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:56.185 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 4be63359-1372-48ba-b3a8-f60edc16d879 in datapath 8832cfc6-32b7-455a-a552-de53a2f1fc74 unbound from our chassis#033[00m
Jan 27 09:20:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:56.186 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8832cfc6-32b7-455a-a552-de53a2f1fc74, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:20:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:56.187 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebbd366-c432-47fb-aa70-acaacf51222c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.188 238945 DEBUG nova.compute.manager [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Received event network-vif-deleted-4e0bfd53-3592-45ef-aef8-c273dbee749b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.188 238945 DEBUG nova.compute.manager [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-changed-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:56.189 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74 namespace which is not needed anymore#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.189 238945 DEBUG nova.compute.manager [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing instance network info cache due to event network-changed-4be63359-1372-48ba-b3a8-f60edc16d879. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.190 238945 DEBUG oslo_concurrency.lockutils [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.190 238945 DEBUG oslo_concurrency.lockutils [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.191 238945 DEBUG nova.network.neutron [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Refreshing network info cache for port 4be63359-1372-48ba-b3a8-f60edc16d879 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.193 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:56 np0005597378 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Jan 27 09:20:56 np0005597378 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007f.scope: Consumed 17.135s CPU time.
Jan 27 09:20:56 np0005597378 systemd-machined[207425]: Machine qemu-159-instance-0000007f terminated.
Jan 27 09:20:56 np0005597378 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [NOTICE]   (356527) : haproxy version is 2.8.14-c23fe91
Jan 27 09:20:56 np0005597378 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [NOTICE]   (356527) : path to executable is /usr/sbin/haproxy
Jan 27 09:20:56 np0005597378 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [WARNING]  (356527) : Exiting Master process...
Jan 27 09:20:56 np0005597378 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [ALERT]    (356527) : Current worker (356529) exited with code 143 (Terminated)
Jan 27 09:20:56 np0005597378 neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74[356523]: [WARNING]  (356527) : All workers exited. Exiting... (0)
Jan 27 09:20:56 np0005597378 systemd[1]: libpod-2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd.scope: Deactivated successfully.
Jan 27 09:20:56 np0005597378 podman[359144]: 2026-01-27 14:20:56.373487036 +0000 UTC m=+0.096906007 container died 2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.382 238945 INFO nova.virt.libvirt.driver [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Instance destroyed successfully.#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.383 238945 DEBUG nova.objects.instance [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 8112a700-f12a-43be-a5c6-f0536e53b2c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.392 238945 DEBUG nova.compute.manager [req-6b8d9b3c-3d1d-4b68-8f58-08d006afd625 req-16355967-b7af-4282-ba9d-287e5972fb02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-unplugged-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.393 238945 DEBUG oslo_concurrency.lockutils [req-6b8d9b3c-3d1d-4b68-8f58-08d006afd625 req-16355967-b7af-4282-ba9d-287e5972fb02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.393 238945 DEBUG oslo_concurrency.lockutils [req-6b8d9b3c-3d1d-4b68-8f58-08d006afd625 req-16355967-b7af-4282-ba9d-287e5972fb02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.393 238945 DEBUG oslo_concurrency.lockutils [req-6b8d9b3c-3d1d-4b68-8f58-08d006afd625 req-16355967-b7af-4282-ba9d-287e5972fb02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.393 238945 DEBUG nova.compute.manager [req-6b8d9b3c-3d1d-4b68-8f58-08d006afd625 req-16355967-b7af-4282-ba9d-287e5972fb02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-unplugged-4be63359-1372-48ba-b3a8-f60edc16d879 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.394 238945 DEBUG nova.compute.manager [req-6b8d9b3c-3d1d-4b68-8f58-08d006afd625 req-16355967-b7af-4282-ba9d-287e5972fb02 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-unplugged-4be63359-1372-48ba-b3a8-f60edc16d879 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.396 238945 DEBUG nova.virt.libvirt.vif [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-82343629',display_name='tempest-TestNetworkBasicOps-server-82343629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-82343629',id=127,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuX53vPXsyg4uuukMBwqzOUN9xek4L9Qvv9qPilfxmJkwrI8TuSP5Wx/0L7VecZQiPrIg31jSJZB+XNydIweCj55kPL9N3Sk35CgiUftQVQfI4fRX7/PlPVrC2mFc9NHA==',key_name='tempest-TestNetworkBasicOps-448964136',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-409jsl1v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:19:07Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=8112a700-f12a-43be-a5c6-f0536e53b2c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.397 238945 DEBUG nova.network.os_vif_util [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.397 238945 DEBUG nova.network.os_vif_util [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.398 238945 DEBUG os_vif [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.400 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.400 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4be63359-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.402 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.404 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:56 np0005597378 nova_compute[238941]: 2026-01-27 14:20:56.406 238945 INFO os_vif [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:a8:49,bridge_name='br-int',has_traffic_filtering=True,id=4be63359-1372-48ba-b3a8-f60edc16d879,network=Network(8832cfc6-32b7-455a-a552-de53a2f1fc74),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4be63359-13')#033[00m
Jan 27 09:20:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd-userdata-shm.mount: Deactivated successfully.
Jan 27 09:20:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay-630e8eb25e7e6532b6eba282b7443efcb2ce88b3f255e03466027626d15b9600-merged.mount: Deactivated successfully.
Jan 27 09:20:56 np0005597378 podman[359144]: 2026-01-27 14:20:56.79252528 +0000 UTC m=+0.515944251 container cleanup 2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 09:20:56 np0005597378 systemd[1]: libpod-conmon-2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd.scope: Deactivated successfully.
Jan 27 09:20:57 np0005597378 podman[359199]: 2026-01-27 14:20:57.062087136 +0000 UTC m=+0.242522685 container remove 2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 09:20:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.071 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bded78-60e6-4bd6-b8cc-555f349eeaf4]: (4, ('Tue Jan 27 02:20:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74 (2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd)\n2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd\nTue Jan 27 02:20:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74 (2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd)\n2d03865028cc9bc609ed010280c1b2d6adbb88c34f6238222ba02246d66a80cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.073 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[edb4f2ef-d29e-43ee-b547-4ad29d7c3d87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.074 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8832cfc6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:20:57 np0005597378 nova_compute[238941]: 2026-01-27 14:20:57.077 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:57 np0005597378 kernel: tap8832cfc6-30: left promiscuous mode
Jan 27 09:20:57 np0005597378 nova_compute[238941]: 2026-01-27 14:20:57.092 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.095 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f140a0-014a-4f39-b532-a6556dfcb6f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 200 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 27 09:20:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.118 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[64aad816-4f42-4b0c-948e-f06e473019cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.120 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c248fabf-d551-4a74-9717-9eefbb5a4850]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.149 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ea760f31-d260-453c-9268-f99c989aedc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618540, 'reachable_time': 39178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359213, 'error': None, 'target': 'ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.152 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8832cfc6-32b7-455a-a552-de53a2f1fc74 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:20:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:20:57.152 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a33358-99af-44d4-be15-c096d469bc54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:20:57 np0005597378 systemd[1]: run-netns-ovnmeta\x2d8832cfc6\x2d32b7\x2d455a\x2da552\x2dde53a2f1fc74.mount: Deactivated successfully.
Jan 27 09:20:57 np0005597378 nova_compute[238941]: 2026-01-27 14:20:57.756 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:20:58 np0005597378 nova_compute[238941]: 2026-01-27 14:20:58.468 238945 DEBUG nova.compute.manager [req-1ac3fd95-4dde-4a65-9f9b-14249478ce2b req-0b66618c-0e36-4748-9b40-3258c7c3cd0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:58 np0005597378 nova_compute[238941]: 2026-01-27 14:20:58.468 238945 DEBUG oslo_concurrency.lockutils [req-1ac3fd95-4dde-4a65-9f9b-14249478ce2b req-0b66618c-0e36-4748-9b40-3258c7c3cd0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:58 np0005597378 nova_compute[238941]: 2026-01-27 14:20:58.468 238945 DEBUG oslo_concurrency.lockutils [req-1ac3fd95-4dde-4a65-9f9b-14249478ce2b req-0b66618c-0e36-4748-9b40-3258c7c3cd0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:58 np0005597378 nova_compute[238941]: 2026-01-27 14:20:58.468 238945 DEBUG oslo_concurrency.lockutils [req-1ac3fd95-4dde-4a65-9f9b-14249478ce2b req-0b66618c-0e36-4748-9b40-3258c7c3cd0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:20:58 np0005597378 nova_compute[238941]: 2026-01-27 14:20:58.469 238945 DEBUG nova.compute.manager [req-1ac3fd95-4dde-4a65-9f9b-14249478ce2b req-0b66618c-0e36-4748-9b40-3258c7c3cd0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] No waiting events found dispatching network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:20:58 np0005597378 nova_compute[238941]: 2026-01-27 14:20:58.469 238945 WARNING nova.compute.manager [req-1ac3fd95-4dde-4a65-9f9b-14249478ce2b req-0b66618c-0e36-4748-9b40-3258c7c3cd0f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received unexpected event network-vif-plugged-4be63359-1372-48ba-b3a8-f60edc16d879 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:20:58 np0005597378 nova_compute[238941]: 2026-01-27 14:20:58.776 238945 INFO nova.virt.libvirt.driver [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Deleting instance files /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4_del#033[00m
Jan 27 09:20:58 np0005597378 nova_compute[238941]: 2026-01-27 14:20:58.777 238945 INFO nova.virt.libvirt.driver [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Deletion of /var/lib/nova/instances/8112a700-f12a-43be-a5c6-f0536e53b2c4_del complete#033[00m
Jan 27 09:20:58 np0005597378 nova_compute[238941]: 2026-01-27 14:20:58.840 238945 INFO nova.compute.manager [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Took 2.90 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:20:58 np0005597378 nova_compute[238941]: 2026-01-27 14:20:58.842 238945 DEBUG oslo.service.loopingcall [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:20:58 np0005597378 nova_compute[238941]: 2026-01-27 14:20:58.843 238945 DEBUG nova.compute.manager [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:20:58 np0005597378 nova_compute[238941]: 2026-01-27 14:20:58.843 238945 DEBUG nova.network.neutron [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:20:59 np0005597378 nova_compute[238941]: 2026-01-27 14:20:59.028 238945 DEBUG nova.network.neutron [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updated VIF entry in instance network info cache for port 4be63359-1372-48ba-b3a8-f60edc16d879. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:20:59 np0005597378 nova_compute[238941]: 2026-01-27 14:20:59.029 238945 DEBUG nova.network.neutron [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [{"id": "4be63359-1372-48ba-b3a8-f60edc16d879", "address": "fa:16:3e:20:a8:49", "network": {"id": "8832cfc6-32b7-455a-a552-de53a2f1fc74", "bridge": "br-int", "label": "tempest-network-smoke--282448851", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4be63359-13", "ovs_interfaceid": "4be63359-1372-48ba-b3a8-f60edc16d879", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:59 np0005597378 nova_compute[238941]: 2026-01-27 14:20:59.045 238945 DEBUG oslo_concurrency.lockutils [req-03d6f6b3-cc1c-4a89-8b55-f2d7415f2bf0 req-eceb67a7-bb44-423c-8b62-a18b0106b1e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8112a700-f12a-43be-a5c6-f0536e53b2c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:20:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 156 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 27 09:20:59 np0005597378 nova_compute[238941]: 2026-01-27 14:20:59.445 238945 DEBUG nova.network.neutron [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:20:59 np0005597378 nova_compute[238941]: 2026-01-27 14:20:59.468 238945 INFO nova.compute.manager [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Took 0.63 seconds to deallocate network for instance.#033[00m
Jan 27 09:20:59 np0005597378 nova_compute[238941]: 2026-01-27 14:20:59.516 238945 DEBUG nova.compute.manager [req-6bc16d1a-3f62-4da5-94be-bd492c0b41e9 req-da6be2cd-bc53-414e-ac9b-4d8e63985064 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Received event network-vif-deleted-4be63359-1372-48ba-b3a8-f60edc16d879 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:20:59 np0005597378 nova_compute[238941]: 2026-01-27 14:20:59.520 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:20:59 np0005597378 nova_compute[238941]: 2026-01-27 14:20:59.520 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:20:59 np0005597378 nova_compute[238941]: 2026-01-27 14:20:59.580 238945 DEBUG oslo_concurrency.processutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:20:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:20:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/723021857' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:20:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:20:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/723021857' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:21:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:21:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1800287405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:21:00 np0005597378 nova_compute[238941]: 2026-01-27 14:21:00.172 238945 DEBUG oslo_concurrency.processutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:00 np0005597378 nova_compute[238941]: 2026-01-27 14:21:00.183 238945 DEBUG nova.compute.provider_tree [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:21:00 np0005597378 nova_compute[238941]: 2026-01-27 14:21:00.201 238945 DEBUG nova.scheduler.client.report [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:21:00 np0005597378 nova_compute[238941]: 2026-01-27 14:21:00.227 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:00 np0005597378 nova_compute[238941]: 2026-01-27 14:21:00.255 238945 INFO nova.scheduler.client.report [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 8112a700-f12a-43be-a5c6-f0536e53b2c4#033[00m
Jan 27 09:21:00 np0005597378 nova_compute[238941]: 2026-01-27 14:21:00.322 238945 DEBUG oslo_concurrency.lockutils [None req-583baa95-0e74-400e-829d-2e7f3df9ca17 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "8112a700-f12a-43be-a5c6-f0536e53b2c4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.384s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 1.3 MiB/s wr, 105 op/s
Jan 27 09:21:01 np0005597378 nova_compute[238941]: 2026-01-27 14:21:01.403 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:02 np0005597378 podman[359237]: 2026-01-27 14:21:02.734571221 +0000 UTC m=+0.073726729 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 27 09:21:02 np0005597378 nova_compute[238941]: 2026-01-27 14:21:02.758 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 189 KiB/s wr, 83 op/s
Jan 27 09:21:03 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:03Z|01414|binding|INFO|Releasing lport 506e7ffb-d74f-480e-9382-49f98d134f52 from this chassis (sb_readonly=0)
Jan 27 09:21:03 np0005597378 nova_compute[238941]: 2026-01-27 14:21:03.881 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:04 np0005597378 nova_compute[238941]: 2026-01-27 14:21:04.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:21:04 np0005597378 nova_compute[238941]: 2026-01-27 14:21:04.406 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:04 np0005597378 nova_compute[238941]: 2026-01-27 14:21:04.407 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:04 np0005597378 nova_compute[238941]: 2026-01-27 14:21:04.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:04 np0005597378 nova_compute[238941]: 2026-01-27 14:21:04.408 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:21:04 np0005597378 nova_compute[238941]: 2026-01-27 14:21:04.409 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:21:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554504756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.031 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.102 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.103 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:21:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 189 KiB/s wr, 86 op/s
Jan 27 09:21:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.272 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523650.2716956, 98452226-e32f-475f-814f-d0eba538b8ca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.273 238945 INFO nova.compute.manager [-] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.281 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.283 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3493MB free_disk=59.94181312341243GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.283 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.284 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.304 238945 DEBUG nova.compute.manager [None req-2b9be05a-6c8f-4810-b0b8-918ec49bfcf2 - - - - - -] [instance: 98452226-e32f-475f-814f-d0eba538b8ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.517 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9588e56d-325a-44ac-b589-16da13fbcc3d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.517 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.517 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:21:05 np0005597378 nova_compute[238941]: 2026-01-27 14:21:05.687 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:06 np0005597378 podman[359301]: 2026-01-27 14:21:06.239114242 +0000 UTC m=+0.074434718 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:21:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:21:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1959738751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:21:06 np0005597378 nova_compute[238941]: 2026-01-27 14:21:06.264 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:06 np0005597378 nova_compute[238941]: 2026-01-27 14:21:06.270 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:21:06 np0005597378 nova_compute[238941]: 2026-01-27 14:21:06.325 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:21:06 np0005597378 nova_compute[238941]: 2026-01-27 14:21:06.404 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:06 np0005597378 nova_compute[238941]: 2026-01-27 14:21:06.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:21:06 np0005597378 nova_compute[238941]: 2026-01-27 14:21:06.471 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 18 KiB/s wr, 44 op/s
Jan 27 09:21:07 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:07Z|01415|binding|INFO|Releasing lport 506e7ffb-d74f-480e-9382-49f98d134f52 from this chassis (sb_readonly=0)
Jan 27 09:21:07 np0005597378 nova_compute[238941]: 2026-01-27 14:21:07.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:07 np0005597378 nova_compute[238941]: 2026-01-27 14:21:07.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:21:07 np0005597378 nova_compute[238941]: 2026-01-27 14:21:07.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:21:07 np0005597378 nova_compute[238941]: 2026-01-27 14:21:07.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 09:21:07 np0005597378 nova_compute[238941]: 2026-01-27 14:21:07.758 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:21:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:21:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:21:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:21:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:21:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:21:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:21:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:21:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:21:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:21:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:21:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:21:08 np0005597378 podman[359472]: 2026-01-27 14:21:08.42852041 +0000 UTC m=+0.021017412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:21:08 np0005597378 podman[359472]: 2026-01-27 14:21:08.728176798 +0000 UTC m=+0.320673770 container create 0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 09:21:08 np0005597378 systemd[1]: Started libpod-conmon-0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248.scope.
Jan 27 09:21:08 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:21:09 np0005597378 podman[359472]: 2026-01-27 14:21:09.010696499 +0000 UTC m=+0.603193471 container init 0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 09:21:09 np0005597378 podman[359472]: 2026-01-27 14:21:09.017220143 +0000 UTC m=+0.609717115 container start 0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:21:09 np0005597378 xenodochial_raman[359488]: 167 167
Jan 27 09:21:09 np0005597378 systemd[1]: libpod-0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248.scope: Deactivated successfully.
Jan 27 09:21:09 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:21:09 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:21:09 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:21:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Jan 27 09:21:09 np0005597378 podman[359472]: 2026-01-27 14:21:09.12874844 +0000 UTC m=+0.721245412 container attach 0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 09:21:09 np0005597378 podman[359472]: 2026-01-27 14:21:09.129168351 +0000 UTC m=+0.721665323 container died 0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 09:21:09 np0005597378 systemd[1]: var-lib-containers-storage-overlay-07a8d867fabb55ca5d82ddcefd6d937cef41dc2c4ae947029bbed3e58756de81-merged.mount: Deactivated successfully.
Jan 27 09:21:09 np0005597378 podman[359472]: 2026-01-27 14:21:09.553417655 +0000 UTC m=+1.145914627 container remove 0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_raman, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:21:09 np0005597378 systemd[1]: libpod-conmon-0655a16880d840bb5bf996e82f7aaf064786006360875f4e18181aeef0a1d248.scope: Deactivated successfully.
Jan 27 09:21:09 np0005597378 podman[359513]: 2026-01-27 14:21:09.785809067 +0000 UTC m=+0.095836368 container create d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_edison, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:21:09 np0005597378 podman[359513]: 2026-01-27 14:21:09.712953233 +0000 UTC m=+0.022980554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:21:09 np0005597378 systemd[1]: Started libpod-conmon-d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3.scope.
Jan 27 09:21:09 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:21:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a403d821f1a290fc57c02c694a9f0a27c3f9a117ebd95536f7ca09d2d66665/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a403d821f1a290fc57c02c694a9f0a27c3f9a117ebd95536f7ca09d2d66665/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a403d821f1a290fc57c02c694a9f0a27c3f9a117ebd95536f7ca09d2d66665/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a403d821f1a290fc57c02c694a9f0a27c3f9a117ebd95536f7ca09d2d66665/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a403d821f1a290fc57c02c694a9f0a27c3f9a117ebd95536f7ca09d2d66665/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:09 np0005597378 podman[359513]: 2026-01-27 14:21:09.967699762 +0000 UTC m=+0.277727073 container init d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_edison, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:21:09 np0005597378 podman[359513]: 2026-01-27 14:21:09.974967526 +0000 UTC m=+0.284994827 container start d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 09:21:10 np0005597378 podman[359513]: 2026-01-27 14:21:10.018493678 +0000 UTC m=+0.328520999 container attach d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:21:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:10 np0005597378 gallant_edison[359529]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:21:10 np0005597378 gallant_edison[359529]: --> All data devices are unavailable
Jan 27 09:21:10 np0005597378 systemd[1]: libpod-d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3.scope: Deactivated successfully.
Jan 27 09:21:10 np0005597378 podman[359513]: 2026-01-27 14:21:10.459564421 +0000 UTC m=+0.769591732 container died d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:21:10 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a4a403d821f1a290fc57c02c694a9f0a27c3f9a117ebd95536f7ca09d2d66665-merged.mount: Deactivated successfully.
Jan 27 09:21:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:10Z|01416|binding|INFO|Releasing lport 506e7ffb-d74f-480e-9382-49f98d134f52 from this chassis (sb_readonly=0)
Jan 27 09:21:10 np0005597378 nova_compute[238941]: 2026-01-27 14:21:10.799 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:11 np0005597378 podman[359513]: 2026-01-27 14:21:11.052145418 +0000 UTC m=+1.362172719 container remove d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 09:21:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 4.1 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 27 09:21:11 np0005597378 systemd[1]: libpod-conmon-d9877a683e5c0157f5766995642cc679aa13017cd70ce525ed5bcf5d384bcea3.scope: Deactivated successfully.
Jan 27 09:21:11 np0005597378 nova_compute[238941]: 2026-01-27 14:21:11.380 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523656.3781226, 8112a700-f12a-43be-a5c6-f0536e53b2c4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:21:11 np0005597378 nova_compute[238941]: 2026-01-27 14:21:11.381 238945 INFO nova.compute.manager [-] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:21:11 np0005597378 nova_compute[238941]: 2026-01-27 14:21:11.401 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:21:11 np0005597378 nova_compute[238941]: 2026-01-27 14:21:11.406 238945 DEBUG nova.compute.manager [None req-e0aa029b-b567-4c90-9ef3-f715b03fd396 - - - - - -] [instance: 8112a700-f12a-43be-a5c6-f0536e53b2c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:11 np0005597378 nova_compute[238941]: 2026-01-27 14:21:11.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:11 np0005597378 podman[359623]: 2026-01-27 14:21:11.535444048 +0000 UTC m=+0.084145797 container create 60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_nobel, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 09:21:11 np0005597378 podman[359623]: 2026-01-27 14:21:11.474164352 +0000 UTC m=+0.022866121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:21:11 np0005597378 systemd[1]: Started libpod-conmon-60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30.scope.
Jan 27 09:21:11 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:21:11 np0005597378 podman[359623]: 2026-01-27 14:21:11.712694218 +0000 UTC m=+0.261395987 container init 60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 09:21:11 np0005597378 podman[359623]: 2026-01-27 14:21:11.717698402 +0000 UTC m=+0.266400151 container start 60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:21:11 np0005597378 happy_nobel[359640]: 167 167
Jan 27 09:21:11 np0005597378 systemd[1]: libpod-60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30.scope: Deactivated successfully.
Jan 27 09:21:11 np0005597378 podman[359623]: 2026-01-27 14:21:11.746061749 +0000 UTC m=+0.294763628 container attach 60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_nobel, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:21:11 np0005597378 podman[359623]: 2026-01-27 14:21:11.74642579 +0000 UTC m=+0.295127539 container died 60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:21:11 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ab8f77d4c8a34517c0d324a07a57a1a80e07502442bc1555d70e3192d0dbd135-merged.mount: Deactivated successfully.
Jan 27 09:21:12 np0005597378 podman[359623]: 2026-01-27 14:21:12.064498619 +0000 UTC m=+0.613200378 container remove 60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:21:12 np0005597378 systemd[1]: libpod-conmon-60825b3ec2a6c10233909080ddb329eb496214a42ad627ebf7a8d3e6db834f30.scope: Deactivated successfully.
Jan 27 09:21:12 np0005597378 podman[359665]: 2026-01-27 14:21:12.207104035 +0000 UTC m=+0.021798893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:21:12 np0005597378 nova_compute[238941]: 2026-01-27 14:21:12.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:21:12 np0005597378 podman[359665]: 2026-01-27 14:21:12.410827973 +0000 UTC m=+0.225522811 container create f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 09:21:12 np0005597378 systemd[1]: Started libpod-conmon-f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa.scope.
Jan 27 09:21:12 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:21:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ab5f7b7971e39b2f09aab7c9e0ed96adf52fa6bcdeaf988358613b09ad87ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ab5f7b7971e39b2f09aab7c9e0ed96adf52fa6bcdeaf988358613b09ad87ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ab5f7b7971e39b2f09aab7c9e0ed96adf52fa6bcdeaf988358613b09ad87ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ab5f7b7971e39b2f09aab7c9e0ed96adf52fa6bcdeaf988358613b09ad87ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:12 np0005597378 podman[359665]: 2026-01-27 14:21:12.693658942 +0000 UTC m=+0.508353810 container init f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:21:12 np0005597378 podman[359665]: 2026-01-27 14:21:12.700856234 +0000 UTC m=+0.515551072 container start f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:21:12 np0005597378 nova_compute[238941]: 2026-01-27 14:21:12.760 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:12 np0005597378 podman[359665]: 2026-01-27 14:21:12.854257779 +0000 UTC m=+0.668952647 container attach f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]: {
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:    "0": [
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:        {
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "devices": [
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "/dev/loop3"
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            ],
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_name": "ceph_lv0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_size": "21470642176",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "name": "ceph_lv0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "tags": {
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.cluster_name": "ceph",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.crush_device_class": "",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.encrypted": "0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.objectstore": "bluestore",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.osd_id": "0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.type": "block",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.vdo": "0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.with_tpm": "0"
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            },
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "type": "block",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "vg_name": "ceph_vg0"
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:        }
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:    ],
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:    "1": [
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:        {
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "devices": [
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "/dev/loop4"
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            ],
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_name": "ceph_lv1",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_size": "21470642176",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "name": "ceph_lv1",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "tags": {
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.cluster_name": "ceph",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.crush_device_class": "",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.encrypted": "0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.objectstore": "bluestore",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.osd_id": "1",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.type": "block",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.vdo": "0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.with_tpm": "0"
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            },
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "type": "block",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "vg_name": "ceph_vg1"
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:        }
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:    ],
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:    "2": [
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:        {
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "devices": [
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "/dev/loop5"
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            ],
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_name": "ceph_lv2",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_size": "21470642176",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "name": "ceph_lv2",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "tags": {
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.cluster_name": "ceph",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.crush_device_class": "",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.encrypted": "0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.objectstore": "bluestore",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.osd_id": "2",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.type": "block",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.vdo": "0",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:                "ceph.with_tpm": "0"
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            },
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "type": "block",
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:            "vg_name": "ceph_vg2"
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:        }
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]:    ]
Jan 27 09:21:12 np0005597378 nostalgic_mendeleev[359681]: }
Jan 27 09:21:12 np0005597378 systemd[1]: libpod-f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa.scope: Deactivated successfully.
Jan 27 09:21:12 np0005597378 podman[359665]: 2026-01-27 14:21:12.991500332 +0000 UTC m=+0.806195170 container died f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 09:21:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 341 B/s wr, 2 op/s
Jan 27 09:21:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c1ab5f7b7971e39b2f09aab7c9e0ed96adf52fa6bcdeaf988358613b09ad87ff-merged.mount: Deactivated successfully.
Jan 27 09:21:13 np0005597378 podman[359665]: 2026-01-27 14:21:13.287712657 +0000 UTC m=+1.102407495 container remove f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:21:13 np0005597378 systemd[1]: libpod-conmon-f669eb160c8ae652ab63705bc053091a7dc46833eb3dbcf9d67371cb262e23aa.scope: Deactivated successfully.
Jan 27 09:21:13 np0005597378 podman[359763]: 2026-01-27 14:21:13.737314699 +0000 UTC m=+0.023020146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:21:13 np0005597378 podman[359763]: 2026-01-27 14:21:13.928012769 +0000 UTC m=+0.213718196 container create e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_driscoll, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:21:13 np0005597378 systemd[1]: Started libpod-conmon-e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5.scope.
Jan 27 09:21:14 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:21:14 np0005597378 podman[359763]: 2026-01-27 14:21:14.129155577 +0000 UTC m=+0.414861034 container init e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_driscoll, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:21:14 np0005597378 podman[359763]: 2026-01-27 14:21:14.135878886 +0000 UTC m=+0.421584313 container start e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_driscoll, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 09:21:14 np0005597378 sweet_driscoll[359779]: 167 167
Jan 27 09:21:14 np0005597378 systemd[1]: libpod-e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5.scope: Deactivated successfully.
Jan 27 09:21:14 np0005597378 podman[359763]: 2026-01-27 14:21:14.219585711 +0000 UTC m=+0.505291138 container attach e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_driscoll, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 09:21:14 np0005597378 podman[359763]: 2026-01-27 14:21:14.220114485 +0000 UTC m=+0.505819902 container died e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_driscoll, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:21:14 np0005597378 systemd[1]: var-lib-containers-storage-overlay-551010d5a99cafae06bf94063ffc119c9a7e47e8de1ab59141de78f89922a141-merged.mount: Deactivated successfully.
Jan 27 09:21:14 np0005597378 podman[359763]: 2026-01-27 14:21:14.571698328 +0000 UTC m=+0.857403755 container remove e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_driscoll, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 09:21:14 np0005597378 systemd[1]: libpod-conmon-e41db358dda59159ace5d1e8ce835b0d21e7f3b616a8687794aedf9cdfc5edc5.scope: Deactivated successfully.
Jan 27 09:21:14 np0005597378 podman[359802]: 2026-01-27 14:21:14.73207133 +0000 UTC m=+0.020016265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:21:14 np0005597378 podman[359802]: 2026-01-27 14:21:14.777916023 +0000 UTC m=+0.065860928 container create 7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:21:14 np0005597378 systemd[1]: Started libpod-conmon-7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f.scope.
Jan 27 09:21:14 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:21:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce572727f94ad6ddf4d2f0e20daccf26fff4de660a73d854f85d0b7e3c3787fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce572727f94ad6ddf4d2f0e20daccf26fff4de660a73d854f85d0b7e3c3787fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce572727f94ad6ddf4d2f0e20daccf26fff4de660a73d854f85d0b7e3c3787fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce572727f94ad6ddf4d2f0e20daccf26fff4de660a73d854f85d0b7e3c3787fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:14 np0005597378 podman[359802]: 2026-01-27 14:21:14.961433652 +0000 UTC m=+0.249378597 container init 7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:21:14 np0005597378 podman[359802]: 2026-01-27 14:21:14.971453859 +0000 UTC m=+0.259398814 container start 7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 09:21:15 np0005597378 podman[359802]: 2026-01-27 14:21:15.067817691 +0000 UTC m=+0.355762606 container attach 7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 09:21:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Jan 27 09:21:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:15 np0005597378 nova_compute[238941]: 2026-01-27 14:21:15.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:21:15 np0005597378 nova_compute[238941]: 2026-01-27 14:21:15.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:21:15 np0005597378 nova_compute[238941]: 2026-01-27 14:21:15.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:21:15 np0005597378 nova_compute[238941]: 2026-01-27 14:21:15.672 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:21:15 np0005597378 nova_compute[238941]: 2026-01-27 14:21:15.673 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:21:15 np0005597378 nova_compute[238941]: 2026-01-27 14:21:15.674 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:21:15 np0005597378 nova_compute[238941]: 2026-01-27 14:21:15.674 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9588e56d-325a-44ac-b589-16da13fbcc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:21:15 np0005597378 lvm[359897]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:21:15 np0005597378 lvm[359898]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:21:15 np0005597378 lvm[359897]: VG ceph_vg0 finished
Jan 27 09:21:15 np0005597378 lvm[359898]: VG ceph_vg1 finished
Jan 27 09:21:15 np0005597378 lvm[359900]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:21:15 np0005597378 lvm[359900]: VG ceph_vg2 finished
Jan 27 09:21:15 np0005597378 upbeat_kare[359819]: {}
Jan 27 09:21:15 np0005597378 systemd[1]: libpod-7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f.scope: Deactivated successfully.
Jan 27 09:21:15 np0005597378 podman[359802]: 2026-01-27 14:21:15.827635141 +0000 UTC m=+1.115580056 container died 7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kare, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:21:15 np0005597378 systemd[1]: libpod-7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f.scope: Consumed 1.396s CPU time.
Jan 27 09:21:16 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ce572727f94ad6ddf4d2f0e20daccf26fff4de660a73d854f85d0b7e3c3787fd-merged.mount: Deactivated successfully.
Jan 27 09:21:16 np0005597378 podman[359802]: 2026-01-27 14:21:16.296480995 +0000 UTC m=+1.584425930 container remove 7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kare, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 09:21:16 np0005597378 systemd[1]: libpod-conmon-7fb4940ddf71352d306c64b037a28585341272a656215f8225fbbd86a78d169f.scope: Deactivated successfully.
Jan 27 09:21:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:21:16 np0005597378 nova_compute[238941]: 2026-01-27 14:21:16.409 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:21:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:21:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:21:16 np0005597378 nova_compute[238941]: 2026-01-27 14:21:16.506 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Jan 27 09:21:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:21:17
Jan 27 09:21:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:21:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:21:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'backups', 'vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.mgr']
Jan 27 09:21:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:21:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:21:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:21:17 np0005597378 nova_compute[238941]: 2026-01-27 14:21:17.762 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:21:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:21:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:21:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:21:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:21:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:21:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:21:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:21:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:21:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:21:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:21:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:21:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:21:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:21:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:21:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:21:18 np0005597378 nova_compute[238941]: 2026-01-27 14:21:18.765 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updating instance_info_cache with network_info: [{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:21:18 np0005597378 nova_compute[238941]: 2026-01-27 14:21:18.815 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:21:18 np0005597378 nova_compute[238941]: 2026-01-27 14:21:18.815 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:21:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 27 09:21:19 np0005597378 nova_compute[238941]: 2026-01-27 14:21:19.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:19 np0005597378 nova_compute[238941]: 2026-01-27 14:21:19.618 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:20 np0005597378 nova_compute[238941]: 2026-01-27 14:21:20.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:21:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 27 09:21:21 np0005597378 nova_compute[238941]: 2026-01-27 14:21:21.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:21:21 np0005597378 nova_compute[238941]: 2026-01-27 14:21:21.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 27 09:21:21 np0005597378 nova_compute[238941]: 2026-01-27 14:21:21.411 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 27 09:21:21 np0005597378 nova_compute[238941]: 2026-01-27 14:21:21.413 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:22 np0005597378 nova_compute[238941]: 2026-01-27 14:21:22.666 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:22 np0005597378 nova_compute[238941]: 2026-01-27 14:21:22.764 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 27 09:21:23 np0005597378 nova_compute[238941]: 2026-01-27 14:21:23.414 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:21:23 np0005597378 nova_compute[238941]: 2026-01-27 14:21:23.414 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:21:24 np0005597378 nova_compute[238941]: 2026-01-27 14:21:24.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:21:24 np0005597378 nova_compute[238941]: 2026-01-27 14:21:24.396 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 4.3 KiB/s wr, 0 op/s
Jan 27 09:21:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:26 np0005597378 nova_compute[238941]: 2026-01-27 14:21:26.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:26 np0005597378 nova_compute[238941]: 2026-01-27 14:21:26.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:26.543 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:21:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:26.545 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:21:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:26.851 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:62:3c 10.100.0.2 2001:db8::f816:3eff:fe55:623c'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe55:623c/64', 'neutron:device_id': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce356286-a5f0-495b-b6ce-1c56da2724d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14) old=Port_Binding(mac=['fa:16:3e:55:62:3c 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:21:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:26.852 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14 in datapath fadddb78-26b2-452e-a680-4fa4490a9885 updated#033[00m
Jan 27 09:21:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:26.853 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fadddb78-26b2-452e-a680-4fa4490a9885, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:21:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:26.855 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[46b05407-11b1-40e6-b7de-68ba35c91de5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Jan 27 09:21:27 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:27.547 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:27 np0005597378 nova_compute[238941]: 2026-01-27 14:21:27.678 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:27 np0005597378 nova_compute[238941]: 2026-01-27 14:21:27.678 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:27 np0005597378 nova_compute[238941]: 2026-01-27 14:21:27.724 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:21:27 np0005597378 nova_compute[238941]: 2026-01-27 14:21:27.765 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:27 np0005597378 nova_compute[238941]: 2026-01-27 14:21:27.821 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:27 np0005597378 nova_compute[238941]: 2026-01-27 14:21:27.821 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:27 np0005597378 nova_compute[238941]: 2026-01-27 14:21:27.830 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:21:27 np0005597378 nova_compute[238941]: 2026-01-27 14:21:27.831 238945 INFO nova.compute.claims [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007728918025508682 of space, bias 1.0, pg target 0.23186754076526045 quantized to 32 (current 32)
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693785037039282 of space, bias 1.0, pg target 0.20081355111117846 quantized to 32 (current 32)
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0441855045394863e-06 of space, bias 4.0, pg target 0.0012530226054473835 quantized to 16 (current 16)
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:21:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:21:27 np0005597378 nova_compute[238941]: 2026-01-27 14:21:27.987 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:21:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3251129006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.533 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.541 238945 DEBUG nova.compute.provider_tree [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.560 238945 DEBUG nova.scheduler.client.report [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.585 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.585 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.635 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.635 238945 DEBUG nova.network.neutron [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.656 238945 INFO nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.675 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.773 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.774 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.775 238945 INFO nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Creating image(s)#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.794 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.815 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.839 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.842 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.877 238945 DEBUG nova.policy [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0425e99118c045d98b41acd95be502b2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fab94160690148e98795259a1f20f590', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.914 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.915 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.916 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.916 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.936 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:28 np0005597378 nova_compute[238941]: 2026-01-27 14:21:28.939 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3a25c695-bd44-4d88-b931-920b89c75a4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 121 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.4 KiB/s wr, 0 op/s
Jan 27 09:21:29 np0005597378 nova_compute[238941]: 2026-01-27 14:21:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:21:29 np0005597378 nova_compute[238941]: 2026-01-27 14:21:29.839 238945 DEBUG nova.network.neutron [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Successfully created port: d5582334-4cd2-421a-84da-5575a8f8ba69 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:21:29 np0005597378 nova_compute[238941]: 2026-01-27 14:21:29.895 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:29 np0005597378 nova_compute[238941]: 2026-01-27 14:21:29.896 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:29 np0005597378 nova_compute[238941]: 2026-01-27 14:21:29.919 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:21:29 np0005597378 nova_compute[238941]: 2026-01-27 14:21:29.923 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3a25c695-bd44-4d88-b931-920b89c75a4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.984s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:29 np0005597378 nova_compute[238941]: 2026-01-27 14:21:29.985 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] resizing rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.024 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.024 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.031 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.031 238945 INFO nova.compute.claims [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:21:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.183 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.247 238945 DEBUG nova.objects.instance [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lazy-loading 'migration_context' on Instance uuid 3a25c695-bd44-4d88-b931-920b89c75a4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.264 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.264 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Ensure instance console log exists: /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.264 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.265 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.265 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:21:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1973623584' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.746 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.751 238945 DEBUG nova.compute.provider_tree [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.771 238945 DEBUG nova.scheduler.client.report [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.790 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.790 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.830 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.830 238945 DEBUG nova.network.neutron [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.852 238945 INFO nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.867 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.871 238945 DEBUG nova.network.neutron [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Successfully updated port: d5582334-4cd2-421a-84da-5575a8f8ba69 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.920 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.920 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquired lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.920 238945 DEBUG nova.network.neutron [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:21:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:30.973 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:62:3c 10.100.0.2 2001:db8:0:1:f816:3eff:fe55:623c 2001:db8::f816:3eff:fe55:623c'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe55:623c/64 2001:db8::f816:3eff:fe55:623c/64', 'neutron:device_id': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce356286-a5f0-495b-b6ce-1c56da2724d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14) old=Port_Binding(mac=['fa:16:3e:55:62:3c 10.100.0.2 2001:db8::f816:3eff:fe55:623c'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe55:623c/64', 'neutron:device_id': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:21:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:30.974 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14 in datapath fadddb78-26b2-452e-a680-4fa4490a9885 updated#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.976 238945 DEBUG nova.compute.manager [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-changed-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:21:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:30.977 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fadddb78-26b2-452e-a680-4fa4490a9885, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.977 238945 DEBUG nova.compute.manager [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Refreshing instance network info cache due to event network-changed-d5582334-4cd2-421a-84da-5575a8f8ba69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.977 238945 DEBUG oslo_concurrency.lockutils [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:21:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:30.978 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51b883f2-5249-4c9b-8036-38055815beb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.983 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.984 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:21:30 np0005597378 nova_compute[238941]: 2026-01-27 14:21:30.985 238945 INFO nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Creating image(s)#033[00m
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.007 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.029 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.054 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.058 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.097 238945 DEBUG nova.network.neutron [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:21:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 125 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 49 KiB/s wr, 11 op/s
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.137 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.138 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.139 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.139 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.160 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.166 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.350 238945 DEBUG nova.policy [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.395 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:21:31 np0005597378 nova_compute[238941]: 2026-01-27 14:21:31.418 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.285 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.371 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.443 238945 DEBUG nova.network.neutron [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Successfully updated port: a4f55f62-5a14-4d6a-ad2b-746f03792b7f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.460 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.460 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.460 238945 DEBUG nova.network.neutron [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.522 238945 DEBUG nova.network.neutron [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updating instance_info_cache with network_info: [{"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.544 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Releasing lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.545 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Instance network_info: |[{"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.545 238945 DEBUG oslo_concurrency.lockutils [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.545 238945 DEBUG nova.network.neutron [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Refreshing network info cache for port d5582334-4cd2-421a-84da-5575a8f8ba69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.548 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Start _get_guest_xml network_info=[{"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.552 238945 WARNING nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.559 238945 DEBUG nova.virt.libvirt.host [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.560 238945 DEBUG nova.virt.libvirt.host [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.568 238945 DEBUG nova.virt.libvirt.host [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.568 238945 DEBUG nova.virt.libvirt.host [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.569 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.569 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.569 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.570 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.570 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.570 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.571 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.572 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.572 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.573 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.573 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.573 238945 DEBUG nova.virt.hardware [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.577 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.670 238945 DEBUG nova.network.neutron [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:21:32 np0005597378 nova_compute[238941]: 2026-01-27 14:21:32.766 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.013 238945 DEBUG nova.objects.instance [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 6abeb4c6-8b43-49cb-8ced-7e612d456e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.027 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.027 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Ensure instance console log exists: /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.028 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.028 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.028 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 125 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 49 KiB/s wr, 11 op/s
Jan 27 09:21:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:21:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831442527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.147 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.165 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.169 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.222 238945 DEBUG nova.compute.manager [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-changed-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.224 238945 DEBUG nova.compute.manager [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Refreshing instance network info cache due to event network-changed-a4f55f62-5a14-4d6a-ad2b-746f03792b7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.224 238945 DEBUG oslo_concurrency.lockutils [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.684 238945 DEBUG nova.network.neutron [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Updating instance_info_cache with network_info: [{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.700 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.701 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Instance network_info: |[{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.701 238945 DEBUG oslo_concurrency.lockutils [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.701 238945 DEBUG nova.network.neutron [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Refreshing network info cache for port a4f55f62-5a14-4d6a-ad2b-746f03792b7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.706 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Start _get_guest_xml network_info=[{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.711 238945 WARNING nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.715 238945 DEBUG nova.virt.libvirt.host [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.716 238945 DEBUG nova.virt.libvirt.host [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.722 238945 DEBUG nova.virt.libvirt.host [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.723 238945 DEBUG nova.virt.libvirt.host [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.723 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.723 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.724 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.724 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.724 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.725 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.725 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.725 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.725 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.726 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.726 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.726 238945 DEBUG nova.virt.hardware [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.729 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:33 np0005597378 podman[360378]: 2026-01-27 14:21:33.746183271 +0000 UTC m=+0.084327192 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 09:21:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:21:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/842499350' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.769 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.772 238945 DEBUG nova.virt.libvirt.vif [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:21:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-32498425-acce',id=131,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCyTZUYN0U7tIy5WORnuZEUZsT78tKpw5fd3F5Gn4FZzj7CRmdxr09neY3gqgNMFonT/3xkHWS+Ja9dEjk5+JCZO+fYN/o3x4zZA2x5xWCMoq+ymn58/Jm9fO3o3fvxRpg==',key_name='tempest-TestSecurityGroupsBasicOps-971836491',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fab94160690148e98795259a1f20f590',ramdisk_id='',reservation_id='r-jq519wc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-32498425',owner_user_name='tempest-TestSecurityGroupsBasicOps-32498425-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:21:28Z,user_data=None,user_id='0425e99118c045d98b41acd95be502b2',uuid=3a25c695-bd44-4d88-b931-920b89c75a4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.773 238945 DEBUG nova.network.os_vif_util [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Converting VIF {"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.774 238945 DEBUG nova.network.os_vif_util [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.775 238945 DEBUG nova.objects.instance [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a25c695-bd44-4d88-b931-920b89c75a4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.800 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  <uuid>3a25c695-bd44-4d88-b931-920b89c75a4d</uuid>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  <name>instance-00000083</name>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749</nova:name>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:21:32</nova:creationTime>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:        <nova:user uuid="0425e99118c045d98b41acd95be502b2">tempest-TestSecurityGroupsBasicOps-32498425-project-member</nova:user>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:        <nova:project uuid="fab94160690148e98795259a1f20f590">tempest-TestSecurityGroupsBasicOps-32498425</nova:project>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:        <nova:port uuid="d5582334-4cd2-421a-84da-5575a8f8ba69">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <entry name="serial">3a25c695-bd44-4d88-b931-920b89c75a4d</entry>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <entry name="uuid">3a25c695-bd44-4d88-b931-920b89c75a4d</entry>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3a25c695-bd44-4d88-b931-920b89c75a4d_disk">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3a25c695-bd44-4d88-b931-920b89c75a4d_disk.config">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:e4:b5:e1"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <target dev="tapd5582334-4c"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/console.log" append="off"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:21:33 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:21:33 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:21:33 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:21:33 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.802 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Preparing to wait for external event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.802 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.802 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.802 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.803 238945 DEBUG nova.virt.libvirt.vif [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:21:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-32498425-acce',id=131,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCyTZUYN0U7tIy5WORnuZEUZsT78tKpw5fd3F5Gn4FZzj7CRmdxr09neY3gqgNMFonT/3xkHWS+Ja9dEjk5+JCZO+fYN/o3x4zZA2x5xWCMoq+ymn58/Jm9fO3o3fvxRpg==',key_name='tempest-TestSecurityGroupsBasicOps-971836491',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fab94160690148e98795259a1f20f590',ramdisk_id='',reservation_id='r-jq519wc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-32498425',owner_user_name='tempest-TestSecurityGroupsBasicOps-32498425-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:21:28Z,user_data=None,user_id='0425e99118c045d98b41acd95be502b2',uuid=3a25c695-bd44-4d88-b931-920b89c75a4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.804 238945 DEBUG nova.network.os_vif_util [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Converting VIF {"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.804 238945 DEBUG nova.network.os_vif_util [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.805 238945 DEBUG os_vif [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.805 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.806 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.806 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.813 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.813 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5582334-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.814 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5582334-4c, col_values=(('external_ids', {'iface-id': 'd5582334-4cd2-421a-84da-5575a8f8ba69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:b5:e1', 'vm-uuid': '3a25c695-bd44-4d88-b931-920b89c75a4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.815 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:33 np0005597378 NetworkManager[48904]: <info>  [1769523693.8167] manager: (tapd5582334-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/578)
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.823 238945 INFO os_vif [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c')#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.967 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.967 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.967 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] No VIF found with MAC fa:16:3e:e4:b5:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:21:33 np0005597378 nova_compute[238941]: 2026-01-27 14:21:33.968 238945 INFO nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Using config drive#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.001 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.118 238945 DEBUG nova.network.neutron [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updated VIF entry in instance network info cache for port d5582334-4cd2-421a-84da-5575a8f8ba69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.119 238945 DEBUG nova.network.neutron [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updating instance_info_cache with network_info: [{"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.144 238945 DEBUG oslo_concurrency.lockutils [req-5ebb2fd9-84fe-4163-9c01-9643cb6346d7 req-80fa9476-f333-499c-9cf2-f3ea653917d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:21:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:21:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3384227029' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.280 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.309 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.315 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.362 238945 INFO nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Creating config drive at /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/disk.config#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.368 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57yasr2d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.512 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57yasr2d" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.540 238945 DEBUG nova.storage.rbd_utils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] rbd image 3a25c695-bd44-4d88-b931-920b89c75a4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.544 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/disk.config 3a25c695-bd44-4d88-b931-920b89c75a4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.877 238945 DEBUG nova.network.neutron [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Updated VIF entry in instance network info cache for port a4f55f62-5a14-4d6a-ad2b-746f03792b7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.878 238945 DEBUG nova.network.neutron [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Updating instance_info_cache with network_info: [{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:21:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:21:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073990909' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.893 238945 DEBUG oslo_concurrency.lockutils [req-b68aa83d-87a3-4e7f-b178-c004d8730897 req-dc4ee18a-3115-4440-8db5-60b625e3e06d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.906 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.908 238945 DEBUG nova.virt.libvirt.vif [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:21:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-872916815',display_name='tempest-TestNetworkBasicOps-server-872916815',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-872916815',id=132,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7m/6styVe/ToH0ttZnTHak+uRq17TwaCCo8ae7UQUPtdz5Zha64mXR/MJWrC520IqJi6DVerdLvabiFzfIC2iMcAfQyaB+R8xWqw81GzdVIJnJj94TKBMxB3JbuVBnrQ==',key_name='tempest-TestNetworkBasicOps-1212636844',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-rgqne85h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:21:30Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=6abeb4c6-8b43-49cb-8ced-7e612d456e18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.908 238945 DEBUG nova.network.os_vif_util [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.909 238945 DEBUG nova.network.os_vif_util [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.910 238945 DEBUG nova.objects.instance [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6abeb4c6-8b43-49cb-8ced-7e612d456e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.929 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  <uuid>6abeb4c6-8b43-49cb-8ced-7e612d456e18</uuid>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  <name>instance-00000084</name>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkBasicOps-server-872916815</nova:name>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:21:33</nova:creationTime>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:        <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:        <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:        <nova:port uuid="a4f55f62-5a14-4d6a-ad2b-746f03792b7f">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <entry name="serial">6abeb4c6-8b43-49cb-8ced-7e612d456e18</entry>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <entry name="uuid">6abeb4c6-8b43-49cb-8ced-7e612d456e18</entry>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk.config">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:40:9e:43"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <target dev="tapa4f55f62-5a"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/console.log" append="off"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:21:34 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:21:34 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:21:34 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:21:34 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.930 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Preparing to wait for external event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.930 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.930 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.931 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.931 238945 DEBUG nova.virt.libvirt.vif [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:21:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-872916815',display_name='tempest-TestNetworkBasicOps-server-872916815',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-872916815',id=132,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7m/6styVe/ToH0ttZnTHak+uRq17TwaCCo8ae7UQUPtdz5Zha64mXR/MJWrC520IqJi6DVerdLvabiFzfIC2iMcAfQyaB+R8xWqw81GzdVIJnJj94TKBMxB3JbuVBnrQ==',key_name='tempest-TestNetworkBasicOps-1212636844',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-rgqne85h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:21:30Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=6abeb4c6-8b43-49cb-8ced-7e612d456e18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.932 238945 DEBUG nova.network.os_vif_util [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.932 238945 DEBUG nova.network.os_vif_util [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.933 238945 DEBUG os_vif [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.934 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.935 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.937 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.937 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4f55f62-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.938 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4f55f62-5a, col_values=(('external_ids', {'iface-id': 'a4f55f62-5a14-4d6a-ad2b-746f03792b7f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:9e:43', 'vm-uuid': '6abeb4c6-8b43-49cb-8ced-7e612d456e18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:34 np0005597378 NetworkManager[48904]: <info>  [1769523694.9405] manager: (tapa4f55f62-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/579)
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.941 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.948 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:34 np0005597378 nova_compute[238941]: 2026-01-27 14:21:34.949 238945 INFO os_vif [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a')#033[00m
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.077 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.077 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.077 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:40:9e:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.078 238945 INFO nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Using config drive#033[00m
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.105 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 191 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.6 MiB/s wr, 53 op/s
Jan 27 09:21:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.406 238945 INFO nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Creating config drive at /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/disk.config#033[00m
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.412 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0gm1_ki3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.571 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0gm1_ki3" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.693 238945 DEBUG nova.storage.rbd_utils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.699 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/disk.config 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.970 238945 DEBUG oslo_concurrency.processutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/disk.config 3a25c695-bd44-4d88-b931-920b89c75a4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.971 238945 INFO nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Deleting local config drive /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d/disk.config because it was imported into RBD.#033[00m
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.990 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:35 np0005597378 nova_compute[238941]: 2026-01-27 14:21:35.990 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.007 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:21:36 np0005597378 NetworkManager[48904]: <info>  [1769523696.0217] manager: (tapd5582334-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/580)
Jan 27 09:21:36 np0005597378 kernel: tapd5582334-4c: entered promiscuous mode
Jan 27 09:21:36 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:36Z|01417|binding|INFO|Claiming lport d5582334-4cd2-421a-84da-5575a8f8ba69 for this chassis.
Jan 27 09:21:36 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:36Z|01418|binding|INFO|d5582334-4cd2-421a-84da-5575a8f8ba69: Claiming fa:16:3e:e4:b5:e1 10.100.0.11
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.025 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.033 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:b5:e1 10.100.0.11'], port_security=['fa:16:3e:e4:b5:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3a25c695-bd44-4d88-b931-920b89c75a4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4eebbb2-d419-456a-965c-2d46e9651992', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fab94160690148e98795259a1f20f590', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1573b945-2648-4a63-9472-5b7adbc61404 3af35995-534d-4c3f-b4ab-f970d48d2dc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d78a1f4-b32d-4cb5-8458-3ffeab4c5d5d, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d5582334-4cd2-421a-84da-5575a8f8ba69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.035 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d5582334-4cd2-421a-84da-5575a8f8ba69 in datapath e4eebbb2-d419-456a-965c-2d46e9651992 bound to our chassis#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.037 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4eebbb2-d419-456a-965c-2d46e9651992#033[00m
Jan 27 09:21:36 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:36Z|01419|binding|INFO|Setting lport d5582334-4cd2-421a-84da-5575a8f8ba69 ovn-installed in OVS
Jan 27 09:21:36 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:36Z|01420|binding|INFO|Setting lport d5582334-4cd2-421a-84da-5575a8f8ba69 up in Southbound
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.047 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.049 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2b04eda6-9872-4763-aac0-c6972d19b1a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.050 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4eebbb2-d1 in ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:21:36 np0005597378 systemd-machined[207425]: New machine qemu-163-instance-00000083.
Jan 27 09:21:36 np0005597378 systemd-udevd[360595]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.054 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.053 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4eebbb2-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.053 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5216ef-5b9f-4d80-905f-7573f3cb0722]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.055 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[781b06d5-c7b3-44f0-887c-15cab84200cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 systemd[1]: Started Virtual Machine qemu-163-instance-00000083.
Jan 27 09:21:36 np0005597378 NetworkManager[48904]: <info>  [1769523696.0664] device (tapd5582334-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:21:36 np0005597378 NetworkManager[48904]: <info>  [1769523696.0675] device (tapd5582334-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.070 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[48fdcf64-9579-4939-bf7a-ee8fb542d1b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.093 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.094 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.096 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a829fc0c-fe98-42ba-85ef-02eaef8c11e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.100 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.100 238945 INFO nova.compute.claims [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.125 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa43723-7cef-4cf2-b62d-7d160153c59c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 systemd-udevd[360599]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.135 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[641ae959-bf2f-4830-b964-bd144403a787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 NetworkManager[48904]: <info>  [1769523696.1379] manager: (tape4eebbb2-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/581)
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.172 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[66a0c2a5-ab3f-4cf0-941d-9ada7e5ecee6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.175 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4a17b41c-1bfc-4332-b719-2d4d097b72fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 NetworkManager[48904]: <info>  [1769523696.1991] device (tape4eebbb2-d0): carrier: link connected
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.207 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[df6d5539-e57e-4ca9-bb38-2de3acd703e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.223 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[84691dd3-50fa-4f6d-97ca-cf0ac390e9b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4eebbb2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:93:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633582, 'reachable_time': 28324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360631, 'error': None, 'target': 'ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.239 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58c1beda-4ba5-4a59-8453-5131902e4927]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:93af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633582, 'tstamp': 633582}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360632, 'error': None, 'target': 'ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.251 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.256 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[540757db-9102-4a5e-ac06-e9ea0722d3e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4eebbb2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:93:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633582, 'reachable_time': 28324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360633, 'error': None, 'target': 'ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.287 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[515052af-cc31-499d-81b0-8fa680f54a9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.299 238945 DEBUG nova.compute.manager [req-da90a6e2-fa8e-405f-acd4-e977f0c9bfa1 req-01da9b1d-60be-4af5-acc2-2d83ef2826d9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.300 238945 DEBUG oslo_concurrency.lockutils [req-da90a6e2-fa8e-405f-acd4-e977f0c9bfa1 req-01da9b1d-60be-4af5-acc2-2d83ef2826d9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.300 238945 DEBUG oslo_concurrency.lockutils [req-da90a6e2-fa8e-405f-acd4-e977f0c9bfa1 req-01da9b1d-60be-4af5-acc2-2d83ef2826d9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.300 238945 DEBUG oslo_concurrency.lockutils [req-da90a6e2-fa8e-405f-acd4-e977f0c9bfa1 req-01da9b1d-60be-4af5-acc2-2d83ef2826d9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.301 238945 DEBUG nova.compute.manager [req-da90a6e2-fa8e-405f-acd4-e977f0c9bfa1 req-01da9b1d-60be-4af5-acc2-2d83ef2826d9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Processing event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.355 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[739b22cc-0962-421f-85a9-fbf7daf0a0d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.356 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4eebbb2-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.356 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.357 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4eebbb2-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:36 np0005597378 NetworkManager[48904]: <info>  [1769523696.3596] manager: (tape4eebbb2-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/582)
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.361 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:36 np0005597378 kernel: tape4eebbb2-d0: entered promiscuous mode
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.366 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4eebbb2-d0, col_values=(('external_ids', {'iface-id': '709ff178-4395-48d2-b261-d83dfcf66518'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:36 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:36Z|01421|binding|INFO|Releasing lport 709ff178-4395-48d2-b261-d83dfcf66518 from this chassis (sb_readonly=0)
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.369 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.388 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4eebbb2-d419-456a-965c-2d46e9651992.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4eebbb2-d419-456a-965c-2d46e9651992.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.389 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5530b7a3-b234-4477-ba68-e18060c4d7a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.389 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-e4eebbb2-d419-456a-965c-2d46e9651992
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/e4eebbb2-d419-456a-965c-2d46e9651992.pid.haproxy
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID e4eebbb2-d419-456a-965c-2d46e9651992
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:21:36 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:36.390 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992', 'env', 'PROCESS_TAG=haproxy-e4eebbb2-d419-456a-965c-2d46e9651992', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4eebbb2-d419-456a-965c-2d46e9651992.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:36 np0005597378 podman[360692]: 2026-01-27 14:21:36.735994402 +0000 UTC m=+0.079797561 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 09:21:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:21:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3921543415' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:21:36 np0005597378 podman[360732]: 2026-01-27 14:21:36.734702848 +0000 UTC m=+0.021286600 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.830 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.837 238945 DEBUG nova.compute.provider_tree [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.853 238945 DEBUG nova.scheduler.client.report [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.873 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.874 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.918 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.918 238945 DEBUG nova.network.neutron [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.936 238945 INFO nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:21:36 np0005597378 nova_compute[238941]: 2026-01-27 14:21:36.959 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.050 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.051 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.052 238945 INFO nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Creating image(s)#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.081 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 213 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.154 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.179 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.183 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.251 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523697.25062, 3a25c695-bd44-4d88-b931-920b89c75a4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.251 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] VM Started (Lifecycle Event)#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.254 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.267 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.270 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.274 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.281 238945 INFO nova.virt.libvirt.driver [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Instance spawned successfully.#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.281 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.283 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.283 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.283 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.284 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.609 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.613 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 11a944d0-c529-462a-a12d-95eadb9446a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.652 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.653 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523697.2508707, 3a25c695-bd44-4d88-b931-920b89c75a4d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.653 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.655 238945 DEBUG oslo_concurrency.processutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/disk.config 6abeb4c6-8b43-49cb-8ced-7e612d456e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.956s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.656 238945 INFO nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Deleting local config drive /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18/disk.config because it was imported into RBD.#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.663 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.664 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.664 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.665 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.665 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.666 238945 DEBUG nova.virt.libvirt.driver [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:37 np0005597378 podman[360732]: 2026-01-27 14:21:37.685441935 +0000 UTC m=+0.972025667 container create 619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.723 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:37 np0005597378 kernel: tapa4f55f62-5a: entered promiscuous mode
Jan 27 09:21:37 np0005597378 NetworkManager[48904]: <info>  [1769523697.7270] manager: (tapa4f55f62-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/583)
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.727 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523697.2569547, 3a25c695-bd44-4d88-b931-920b89c75a4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:21:37 np0005597378 systemd-udevd[360618]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.727 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:37Z|01422|binding|INFO|Claiming lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f for this chassis.
Jan 27 09:21:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:37Z|01423|binding|INFO|a4f55f62-5a14-4d6a-ad2b-746f03792b7f: Claiming fa:16:3e:40:9e:43 10.100.0.13
Jan 27 09:21:37 np0005597378 NetworkManager[48904]: <info>  [1769523697.7424] device (tapa4f55f62-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:21:37 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:37.742 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9e:43 10.100.0.13'], port_security=['fa:16:3e:40:9e:43 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6abeb4c6-8b43-49cb-8ced-7e612d456e18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22b98830-dbc4-457b-a04e-e9a5507f2880', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d09327dc-d172-4e92-ad71-bca4adce4888, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a4f55f62-5a14-4d6a-ad2b-746f03792b7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:21:37 np0005597378 NetworkManager[48904]: <info>  [1769523697.7441] device (tapa4f55f62-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.746 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:37Z|01424|binding|INFO|Setting lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f ovn-installed in OVS
Jan 27 09:21:37 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:37Z|01425|binding|INFO|Setting lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f up in Southbound
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.749 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.752 238945 INFO nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Took 8.98 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.752 238945 DEBUG nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.754 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:37 np0005597378 systemd-machined[207425]: New machine qemu-164-instance-00000084.
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.770 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:37 np0005597378 systemd[1]: Started Virtual Machine qemu-164-instance-00000084.
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.798 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:21:37 np0005597378 systemd[1]: Started libpod-conmon-619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b.scope.
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.812 238945 DEBUG nova.policy [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:21:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:21:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ecb7c310b1ada57892ec42dcba1840531fe6340349755d250fe4bcf2f638ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.839 238945 INFO nova.compute.manager [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Took 10.04 seconds to build instance.#033[00m
Jan 27 09:21:37 np0005597378 nova_compute[238941]: 2026-01-27 14:21:37.866 238945 DEBUG oslo_concurrency.lockutils [None req-366b9314-cf80-4fb1-b2b5-fd275ff4a52b 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:37 np0005597378 podman[360732]: 2026-01-27 14:21:37.944584371 +0000 UTC m=+1.231168133 container init 619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 09:21:37 np0005597378 podman[360732]: 2026-01-27 14:21:37.950420857 +0000 UTC m=+1.237004589 container start 619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:21:37 np0005597378 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [NOTICE]   (360900) : New worker (360913) forked
Jan 27 09:21:37 np0005597378 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [NOTICE]   (360900) : Loading success.
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.110 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a4f55f62-5a14-4d6a-ad2b-746f03792b7f in datapath 22b98830-dbc4-457b-a04e-e9a5507f2880 unbound from our chassis#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.111 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 22b98830-dbc4-457b-a04e-e9a5507f2880#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.126 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[837b62e7-8b50-46d6-8fed-f021c808289b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.127 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap22b98830-d1 in ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.129 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap22b98830-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.129 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b37269f8-4ff3-43e8-9d04-c5cc88377980]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.130 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce887488-95ee-40bc-8f19-ea52c99013b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.145 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fbaea4f1-09c4-4073-8c82-1b8a1b50badf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.164 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f20e94f1-f62e-4306-bc99-e4eda3a94404]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.195 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[672d08be-90f6-4ac1-9fb3-5e0100aa43e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.201 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e2bca96e-4f9e-4df3-8bb5-82a3295b1a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 NetworkManager[48904]: <info>  [1769523698.2039] manager: (tap22b98830-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/584)
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.226 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523698.2261512, 6abeb4c6-8b43-49cb-8ced-7e612d456e18 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.227 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] VM Started (Lifecycle Event)#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.235 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[264ea294-39db-4e2f-a60b-2d765397c1c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.238 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[638d9e41-0a1d-4a7d-899a-c83f366e10cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.245 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.249 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523698.2263637, 6abeb4c6-8b43-49cb-8ced-7e612d456e18 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.250 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:21:38 np0005597378 NetworkManager[48904]: <info>  [1769523698.2654] device (tap22b98830-d0): carrier: link connected
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.265 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.268 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.274 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe08f77-b7a0-460f-89ea-5102c91b25b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.282 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.298 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8c2d0e1e-6725-4dea-98a7-efcd0d206220]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22b98830-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:7a:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633788, 'reachable_time': 33465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360956, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.318 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa4b806-6327-45f1-a55c-df97fd9fab00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:7ab9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633788, 'tstamp': 633788}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360957, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.339 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a937fd0c-fc7a-4607-88ad-2f3c8f7a06a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22b98830-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:7a:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633788, 'reachable_time': 33465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360958, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.392 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[587cda72-bac2-40db-8b70-03f9950e560a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.446 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5b828a59-ed3b-4fa2-b344-e157abfc6815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.448 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22b98830-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.448 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.449 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22b98830-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.450 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:38 np0005597378 kernel: tap22b98830-d0: entered promiscuous mode
Jan 27 09:21:38 np0005597378 NetworkManager[48904]: <info>  [1769523698.4513] manager: (tap22b98830-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/585)
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.453 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.455 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap22b98830-d0, col_values=(('external_ids', {'iface-id': '590369bd-e8ed-4b9b-b108-a8b595693634'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:38 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:38Z|01426|binding|INFO|Releasing lport 590369bd-e8ed-4b9b-b108-a8b595693634 from this chassis (sb_readonly=0)
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.468 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.469 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/22b98830-dbc4-457b-a04e-e9a5507f2880.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/22b98830-dbc4-457b-a04e-e9a5507f2880.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.470 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2b66b5e6-85b3-4c94-80a4-aa674227281d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.470 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-22b98830-dbc4-457b-a04e-e9a5507f2880
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/22b98830-dbc4-457b-a04e-e9a5507f2880.pid.haproxy
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 22b98830-dbc4-457b-a04e-e9a5507f2880
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:21:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:38.471 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'env', 'PROCESS_TAG=haproxy-22b98830-dbc4-457b-a04e-e9a5507f2880', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/22b98830-dbc4-457b-a04e-e9a5507f2880.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.770 238945 DEBUG nova.compute.manager [req-e7693527-e84c-4de4-a890-28a2a0451bf8 req-8e31a357-d0be-479b-b067-3c1c1a216785 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.770 238945 DEBUG oslo_concurrency.lockutils [req-e7693527-e84c-4de4-a890-28a2a0451bf8 req-8e31a357-d0be-479b-b067-3c1c1a216785 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.771 238945 DEBUG oslo_concurrency.lockutils [req-e7693527-e84c-4de4-a890-28a2a0451bf8 req-8e31a357-d0be-479b-b067-3c1c1a216785 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.771 238945 DEBUG oslo_concurrency.lockutils [req-e7693527-e84c-4de4-a890-28a2a0451bf8 req-8e31a357-d0be-479b-b067-3c1c1a216785 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.771 238945 DEBUG nova.compute.manager [req-e7693527-e84c-4de4-a890-28a2a0451bf8 req-8e31a357-d0be-479b-b067-3c1c1a216785 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] No waiting events found dispatching network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:21:38 np0005597378 nova_compute[238941]: 2026-01-27 14:21:38.772 238945 WARNING nova.compute.manager [req-e7693527-e84c-4de4-a890-28a2a0451bf8 req-8e31a357-d0be-479b-b067-3c1c1a216785 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received unexpected event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:21:38 np0005597378 podman[360991]: 2026-01-27 14:21:38.792455862 +0000 UTC m=+0.020815897 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:21:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 213 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 27 09:21:39 np0005597378 nova_compute[238941]: 2026-01-27 14:21:39.301 238945 DEBUG nova.network.neutron [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Successfully created port: 41bc1922-0e8b-4e12-a842-f9f8d958cc6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:21:39 np0005597378 nova_compute[238941]: 2026-01-27 14:21:39.452 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 11a944d0-c529-462a-a12d-95eadb9446a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.839s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:39 np0005597378 podman[360991]: 2026-01-27 14:21:39.47500633 +0000 UTC m=+0.703366335 container create 6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:21:39 np0005597378 nova_compute[238941]: 2026-01-27 14:21:39.520 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:21:39 np0005597378 systemd[1]: Started libpod-conmon-6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac.scope.
Jan 27 09:21:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:21:39 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9741605712ec3cc4a06b9d254da75c34a200eec529ffa5682dc3ab358bbe7c10/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:39 np0005597378 podman[360991]: 2026-01-27 14:21:39.701139046 +0000 UTC m=+0.929499131 container init 6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:21:39 np0005597378 podman[360991]: 2026-01-27 14:21:39.712602142 +0000 UTC m=+0.940962187 container start 6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:21:39 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [NOTICE]   (361064) : New worker (361066) forked
Jan 27 09:21:39 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [NOTICE]   (361064) : Loading success.
Jan 27 09:21:39 np0005597378 nova_compute[238941]: 2026-01-27 14:21:39.877 238945 DEBUG nova.objects.instance [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 11a944d0-c529-462a-a12d-95eadb9446a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:21:39 np0005597378 nova_compute[238941]: 2026-01-27 14:21:39.895 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:21:39 np0005597378 nova_compute[238941]: 2026-01-27 14:21:39.896 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Ensure instance console log exists: /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:21:39 np0005597378 nova_compute[238941]: 2026-01-27 14:21:39.896 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:39 np0005597378 nova_compute[238941]: 2026-01-27 14:21:39.897 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:39 np0005597378 nova_compute[238941]: 2026-01-27 14:21:39.897 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:39 np0005597378 nova_compute[238941]: 2026-01-27 14:21:39.940 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:40 np0005597378 nova_compute[238941]: 2026-01-27 14:21:40.886 238945 DEBUG nova.network.neutron [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Successfully updated port: 41bc1922-0e8b-4e12-a842-f9f8d958cc6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:21:40 np0005597378 nova_compute[238941]: 2026-01-27 14:21:40.898 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:21:40 np0005597378 nova_compute[238941]: 2026-01-27 14:21:40.898 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:21:40 np0005597378 nova_compute[238941]: 2026-01-27 14:21:40.898 238945 DEBUG nova.network.neutron [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:21:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 240 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.5 MiB/s wr, 140 op/s
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.827 238945 DEBUG nova.network.neutron [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.929 238945 DEBUG nova.compute.manager [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.930 238945 DEBUG oslo_concurrency.lockutils [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.931 238945 DEBUG oslo_concurrency.lockutils [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.931 238945 DEBUG oslo_concurrency.lockutils [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.932 238945 DEBUG nova.compute.manager [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Processing event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.932 238945 DEBUG nova.compute.manager [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.933 238945 DEBUG oslo_concurrency.lockutils [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.934 238945 DEBUG oslo_concurrency.lockutils [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.934 238945 DEBUG oslo_concurrency.lockutils [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.935 238945 DEBUG nova.compute.manager [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] No waiting events found dispatching network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.935 238945 WARNING nova.compute.manager [req-622c5f6d-abbf-47f7-b5d6-cabea794bff6 req-c035f089-53f6-46c0-add7-f95ae6745731 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received unexpected event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.937 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.943 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.945 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523701.9432392, 6abeb4c6-8b43-49cb-8ced-7e612d456e18 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.945 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.962 238945 INFO nova.virt.libvirt.driver [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Instance spawned successfully.#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.962 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.972 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.975 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.988 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.989 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.990 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.991 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.991 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.992 238945 DEBUG nova.virt.libvirt.driver [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:41 np0005597378 nova_compute[238941]: 2026-01-27 14:21:41.998 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:21:42 np0005597378 nova_compute[238941]: 2026-01-27 14:21:42.057 238945 INFO nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Took 11.07 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:21:42 np0005597378 nova_compute[238941]: 2026-01-27 14:21:42.058 238945 DEBUG nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:42 np0005597378 nova_compute[238941]: 2026-01-27 14:21:42.117 238945 INFO nova.compute.manager [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Took 12.14 seconds to build instance.#033[00m
Jan 27 09:21:42 np0005597378 nova_compute[238941]: 2026-01-27 14:21:42.133 238945 DEBUG oslo_concurrency.lockutils [None req-5b2a09be-e165-4881-952a-e1d0cc7baab2 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:42 np0005597378 nova_compute[238941]: 2026-01-27 14:21:42.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:21:42 np0005597378 nova_compute[238941]: 2026-01-27 14:21:42.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 240 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.5 MiB/s wr, 128 op/s
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.772 238945 DEBUG nova.network.neutron [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updating instance_info_cache with network_info: [{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.860 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.861 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Instance network_info: |[{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.864 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Start _get_guest_xml network_info=[{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.870 238945 WARNING nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.876 238945 DEBUG nova.virt.libvirt.host [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.878 238945 DEBUG nova.virt.libvirt.host [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.885 238945 DEBUG nova.virt.libvirt.host [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.886 238945 DEBUG nova.virt.libvirt.host [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.887 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.887 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.888 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.888 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.888 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.889 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.889 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.889 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.890 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.890 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.890 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.891 238945 DEBUG nova.virt.hardware [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:21:43 np0005597378 nova_compute[238941]: 2026-01-27 14:21:43.894 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.426 238945 DEBUG nova.compute.manager [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-changed-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.427 238945 DEBUG nova.compute.manager [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Refreshing instance network info cache due to event network-changed-d5582334-4cd2-421a-84da-5575a8f8ba69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.427 238945 DEBUG oslo_concurrency.lockutils [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.428 238945 DEBUG oslo_concurrency.lockutils [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.428 238945 DEBUG nova.network.neutron [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Refreshing network info cache for port d5582334-4cd2-421a-84da-5575a8f8ba69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:21:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:21:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2505796886' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.497 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.521 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.524 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.557 238945 DEBUG nova.compute.manager [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-changed-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.557 238945 DEBUG nova.compute.manager [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Refreshing instance network info cache due to event network-changed-41bc1922-0e8b-4e12-a842-f9f8d958cc6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.558 238945 DEBUG oslo_concurrency.lockutils [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.558 238945 DEBUG oslo_concurrency.lockutils [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.558 238945 DEBUG nova.network.neutron [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Refreshing network info cache for port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:21:44 np0005597378 nova_compute[238941]: 2026-01-27 14:21:44.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:21:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1635879119' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.121 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.123 238945 DEBUG nova.virt.libvirt.vif [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:21:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1772227334',display_name='tempest-TestGettingAddress-server-1772227334',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1772227334',id=133,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7a94wbNcLIbMfLEVNT6Ywm0J4ThULf73rY3K1P6XjqcssI9mA4y0O7yNfHrJGIFW0sC/dFlfPPTPQC2MVSL5Gn/qBz15xmk9J+UY35a4PiIK9ZhxSE3A/WnGcgECUPew==',key_name='tempest-TestGettingAddress-1065796456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-8s66zbkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:21:36Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=11a944d0-c529-462a-a12d-95eadb9446a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.123 238945 DEBUG nova.network.os_vif_util [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 09:21:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 260 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.3 MiB/s wr, 186 op/s
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.126 238945 DEBUG nova.network.os_vif_util [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.127 238945 DEBUG nova.objects.instance [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11a944d0-c529-462a-a12d-95eadb9446a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.141 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  <uuid>11a944d0-c529-462a-a12d-95eadb9446a8</uuid>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  <name>instance-00000085</name>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-1772227334</nova:name>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:21:43</nova:creationTime>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:        <nova:port uuid="41bc1922-0e8b-4e12-a842-f9f8d958cc6f">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feda:bafd" ipVersion="6"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feda:bafd" ipVersion="6"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <entry name="serial">11a944d0-c529-462a-a12d-95eadb9446a8</entry>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <entry name="uuid">11a944d0-c529-462a-a12d-95eadb9446a8</entry>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/11a944d0-c529-462a-a12d-95eadb9446a8_disk">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/11a944d0-c529-462a-a12d-95eadb9446a8_disk.config">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:da:ba:fd"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <target dev="tap41bc1922-0e"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/console.log" append="off"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:21:45 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:21:45 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:21:45 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:21:45 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.147 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Preparing to wait for external event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.147 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.148 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.149 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.150 238945 DEBUG nova.virt.libvirt.vif [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:21:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1772227334',display_name='tempest-TestGettingAddress-server-1772227334',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1772227334',id=133,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7a94wbNcLIbMfLEVNT6Ywm0J4ThULf73rY3K1P6XjqcssI9mA4y0O7yNfHrJGIFW0sC/dFlfPPTPQC2MVSL5Gn/qBz15xmk9J+UY35a4PiIK9ZhxSE3A/WnGcgECUPew==',key_name='tempest-TestGettingAddress-1065796456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-8s66zbkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:21:36Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=11a944d0-c529-462a-a12d-95eadb9446a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 09:21:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.152 238945 DEBUG nova.network.os_vif_util [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.153 238945 DEBUG nova.network.os_vif_util [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.153 238945 DEBUG os_vif [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.154 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.155 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.157 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.158 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41bc1922-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.158 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41bc1922-0e, col_values=(('external_ids', {'iface-id': '41bc1922-0e8b-4e12-a842-f9f8d958cc6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:ba:fd', 'vm-uuid': '11a944d0-c529-462a-a12d-95eadb9446a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.160 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:21:45 np0005597378 NetworkManager[48904]: <info>  [1769523705.1612] manager: (tap41bc1922-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.162 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.167 238945 INFO os_vif [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e')
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.345 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.345 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.346 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:da:ba:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.346 238945 INFO nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Using config drive
Jan 27 09:21:45 np0005597378 nova_compute[238941]: 2026-01-27 14:21:45.369 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:21:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:46.326 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:21:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:46.327 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:21:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:46.328 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:21:46 np0005597378 nova_compute[238941]: 2026-01-27 14:21:46.842 238945 INFO nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Creating config drive at /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/disk.config
Jan 27 09:21:46 np0005597378 nova_compute[238941]: 2026-01-27 14:21:46.847 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcj1f4ayd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:21:46 np0005597378 nova_compute[238941]: 2026-01-27 14:21:46.954 238945 DEBUG nova.compute.manager [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-changed-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:21:46 np0005597378 nova_compute[238941]: 2026-01-27 14:21:46.955 238945 DEBUG nova.compute.manager [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Refreshing instance network info cache due to event network-changed-a4f55f62-5a14-4d6a-ad2b-746f03792b7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:21:46 np0005597378 nova_compute[238941]: 2026-01-27 14:21:46.955 238945 DEBUG oslo_concurrency.lockutils [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:21:46 np0005597378 nova_compute[238941]: 2026-01-27 14:21:46.955 238945 DEBUG oslo_concurrency.lockutils [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:21:46 np0005597378 nova_compute[238941]: 2026-01-27 14:21:46.956 238945 DEBUG nova.network.neutron [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Refreshing network info cache for port a4f55f62-5a14-4d6a-ad2b-746f03792b7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:21:46 np0005597378 nova_compute[238941]: 2026-01-27 14:21:46.989 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcj1f4ayd" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.018 238945 DEBUG nova.storage.rbd_utils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 11a944d0-c529-462a-a12d-95eadb9446a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.023 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/disk.config 11a944d0-c529-462a-a12d-95eadb9446a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 260 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.7 MiB/s wr, 176 op/s
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.188 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.189 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.190 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.190 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.191 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.192 238945 INFO nova.compute.manager [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Terminating instance#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.193 238945 DEBUG nova.compute.manager [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.337 238945 DEBUG nova.network.neutron [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updated VIF entry in instance network info cache for port d5582334-4cd2-421a-84da-5575a8f8ba69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.338 238945 DEBUG nova.network.neutron [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updating instance_info_cache with network_info: [{"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.354 238945 DEBUG oslo_concurrency.lockutils [req-164068d0-7f01-41cd-bc47-7d203be2e3eb req-3036a1ce-8207-4f86-98ce-d3ffdaefb766 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.736 238945 DEBUG nova.network.neutron [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updated VIF entry in instance network info cache for port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.737 238945 DEBUG nova.network.neutron [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updating instance_info_cache with network_info: [{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.751 238945 DEBUG oslo_concurrency.lockutils [req-b41828ed-ecb3-44c1-8864-4582d8e3aec1 req-b7228c62-e915-406a-994c-a0bc20acb637 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:21:47 np0005597378 nova_compute[238941]: 2026-01-27 14:21:47.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:21:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:21:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:21:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:21:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:21:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:21:48 np0005597378 kernel: tapa4f55f62-5a (unregistering): left promiscuous mode
Jan 27 09:21:48 np0005597378 NetworkManager[48904]: <info>  [1769523708.0545] device (tapa4f55f62-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:21:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:48Z|01427|binding|INFO|Releasing lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f from this chassis (sb_readonly=0)
Jan 27 09:21:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:48Z|01428|binding|INFO|Setting lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f down in Southbound
Jan 27 09:21:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:48Z|01429|binding|INFO|Removing iface tapa4f55f62-5a ovn-installed in OVS
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.067 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:48.076 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9e:43 10.100.0.13'], port_security=['fa:16:3e:40:9e:43 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6abeb4c6-8b43-49cb-8ced-7e612d456e18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22b98830-dbc4-457b-a04e-e9a5507f2880', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d09327dc-d172-4e92-ad71-bca4adce4888, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a4f55f62-5a14-4d6a-ad2b-746f03792b7f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:21:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:48.077 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a4f55f62-5a14-4d6a-ad2b-746f03792b7f in datapath 22b98830-dbc4-457b-a04e-e9a5507f2880 unbound from our chassis#033[00m
Jan 27 09:21:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:48.079 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22b98830-dbc4-457b-a04e-e9a5507f2880, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:21:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:48.080 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f0e88c-b2a6-4b9d-8212-6772eb87a98a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:48.084 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 namespace which is not needed anymore#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.087 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:48 np0005597378 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000084.scope: Deactivated successfully.
Jan 27 09:21:48 np0005597378 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000084.scope: Consumed 5.730s CPU time.
Jan 27 09:21:48 np0005597378 systemd-machined[207425]: Machine qemu-164-instance-00000084 terminated.
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.192 238945 DEBUG nova.network.neutron [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Updated VIF entry in instance network info cache for port a4f55f62-5a14-4d6a-ad2b-746f03792b7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.194 238945 DEBUG nova.network.neutron [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Updating instance_info_cache with network_info: [{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.219 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.223 238945 DEBUG oslo_concurrency.lockutils [req-9030aff2-36eb-4956-bb78-78fb50b3b805 req-0ef29de7-4f14-461d-8245-ed99d644cba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6abeb4c6-8b43-49cb-8ced-7e612d456e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.230 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.235 238945 INFO nova.virt.libvirt.driver [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Instance destroyed successfully.#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.235 238945 DEBUG nova.objects.instance [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 6abeb4c6-8b43-49cb-8ced-7e612d456e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.256 238945 DEBUG nova.virt.libvirt.vif [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:21:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-872916815',display_name='tempest-TestNetworkBasicOps-server-872916815',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-872916815',id=132,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7m/6styVe/ToH0ttZnTHak+uRq17TwaCCo8ae7UQUPtdz5Zha64mXR/MJWrC520IqJi6DVerdLvabiFzfIC2iMcAfQyaB+R8xWqw81GzdVIJnJj94TKBMxB3JbuVBnrQ==',key_name='tempest-TestNetworkBasicOps-1212636844',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:21:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-rgqne85h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:21:42Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=6abeb4c6-8b43-49cb-8ced-7e612d456e18,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.257 238945 DEBUG nova.network.os_vif_util [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.258 238945 DEBUG nova.network.os_vif_util [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.258 238945 DEBUG os_vif [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.261 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.261 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4f55f62-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.263 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.265 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.267 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.270 238945 INFO os_vif [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a')#033[00m
Jan 27 09:21:48 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [NOTICE]   (361064) : haproxy version is 2.8.14-c23fe91
Jan 27 09:21:48 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [NOTICE]   (361064) : path to executable is /usr/sbin/haproxy
Jan 27 09:21:48 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [WARNING]  (361064) : Exiting Master process...
Jan 27 09:21:48 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [ALERT]    (361064) : Current worker (361066) exited with code 143 (Terminated)
Jan 27 09:21:48 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[361060]: [WARNING]  (361064) : All workers exited. Exiting... (0)
Jan 27 09:21:48 np0005597378 systemd[1]: libpod-6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac.scope: Deactivated successfully.
Jan 27 09:21:48 np0005597378 podman[361242]: 2026-01-27 14:21:48.401956182 +0000 UTC m=+0.226938928 container died 6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.823 238945 DEBUG oslo_concurrency.processutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/disk.config 11a944d0-c529-462a-a12d-95eadb9446a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.800s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.823 238945 INFO nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Deleting local config drive /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8/disk.config because it was imported into RBD.#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.834 238945 DEBUG nova.compute.manager [req-21da2789-707c-4587-b9f8-4c9213b9790c req-a1191ec1-6088-44ca-b9c7-c052c772641c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-vif-unplugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.835 238945 DEBUG oslo_concurrency.lockutils [req-21da2789-707c-4587-b9f8-4c9213b9790c req-a1191ec1-6088-44ca-b9c7-c052c772641c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.835 238945 DEBUG oslo_concurrency.lockutils [req-21da2789-707c-4587-b9f8-4c9213b9790c req-a1191ec1-6088-44ca-b9c7-c052c772641c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.836 238945 DEBUG oslo_concurrency.lockutils [req-21da2789-707c-4587-b9f8-4c9213b9790c req-a1191ec1-6088-44ca-b9c7-c052c772641c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.836 238945 DEBUG nova.compute.manager [req-21da2789-707c-4587-b9f8-4c9213b9790c req-a1191ec1-6088-44ca-b9c7-c052c772641c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] No waiting events found dispatching network-vif-unplugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.837 238945 DEBUG nova.compute.manager [req-21da2789-707c-4587-b9f8-4c9213b9790c req-a1191ec1-6088-44ca-b9c7-c052c772641c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-vif-unplugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:21:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9741605712ec3cc4a06b9d254da75c34a200eec529ffa5682dc3ab358bbe7c10-merged.mount: Deactivated successfully.
Jan 27 09:21:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac-userdata-shm.mount: Deactivated successfully.
Jan 27 09:21:48 np0005597378 systemd-udevd[361221]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:21:48 np0005597378 kernel: tap41bc1922-0e: entered promiscuous mode
Jan 27 09:21:48 np0005597378 NetworkManager[48904]: <info>  [1769523708.8965] manager: (tap41bc1922-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/587)
Jan 27 09:21:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:48Z|01430|binding|INFO|Claiming lport 41bc1922-0e8b-4e12-a842-f9f8d958cc6f for this chassis.
Jan 27 09:21:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:48Z|01431|binding|INFO|41bc1922-0e8b-4e12-a842-f9f8d958cc6f: Claiming fa:16:3e:da:ba:fd 10.100.0.14 2001:db8:0:1:f816:3eff:feda:bafd 2001:db8::f816:3eff:feda:bafd
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.901 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:48 np0005597378 NetworkManager[48904]: <info>  [1769523708.9127] device (tap41bc1922-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:21:48 np0005597378 NetworkManager[48904]: <info>  [1769523708.9134] device (tap41bc1922-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:21:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:48.917 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:ba:fd 10.100.0.14 2001:db8:0:1:f816:3eff:feda:bafd 2001:db8::f816:3eff:feda:bafd'], port_security=['fa:16:3e:da:ba:fd 10.100.0.14 2001:db8:0:1:f816:3eff:feda:bafd 2001:db8::f816:3eff:feda:bafd'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:feda:bafd/64 2001:db8::f816:3eff:feda:bafd/64', 'neutron:device_id': '11a944d0-c529-462a-a12d-95eadb9446a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9e686e0d-68d4-4db8-8131-d0b7de93a06f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce356286-a5f0-495b-b6ce-1c56da2724d0, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=41bc1922-0e8b-4e12-a842-f9f8d958cc6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:21:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:48Z|01432|binding|INFO|Setting lport 41bc1922-0e8b-4e12-a842-f9f8d958cc6f ovn-installed in OVS
Jan 27 09:21:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:48Z|01433|binding|INFO|Setting lport 41bc1922-0e8b-4e12-a842-f9f8d958cc6f up in Southbound
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.928 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:48 np0005597378 nova_compute[238941]: 2026-01-27 14:21:48.931 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:48 np0005597378 systemd-machined[207425]: New machine qemu-165-instance-00000085.
Jan 27 09:21:48 np0005597378 systemd[1]: Started Virtual Machine qemu-165-instance-00000085.
Jan 27 09:21:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 260 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Jan 27 09:21:49 np0005597378 podman[361242]: 2026-01-27 14:21:49.224395535 +0000 UTC m=+1.049378281 container cleanup 6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:21:49 np0005597378 systemd[1]: libpod-conmon-6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac.scope: Deactivated successfully.
Jan 27 09:21:49 np0005597378 podman[361325]: 2026-01-27 14:21:49.586435047 +0000 UTC m=+0.334776616 container remove 6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.595 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0fec0001-e329-4f28-a865-e3ad563ab039]: (4, ('Tue Jan 27 02:21:48 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 (6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac)\n6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac\nTue Jan 27 02:21:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 (6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac)\n6d2b44bf7ffb21efd7574beadbf7c68dfbb0b4c1a398f4f64eded90a4199b9ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.597 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[295ea0c6-d0af-4c01-98df-78819f863e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.598 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22b98830-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.601 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:49 np0005597378 kernel: tap22b98830-d0: left promiscuous mode
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.620 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.624 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2c3274-6647-4902-9301-3a9a3976a2ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.640 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[75cb9feb-16ba-41fa-8a8b-a075a35c9619]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.642 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8fbf4825-8a0e-4891-b207-b666743131a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.660 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4c42068f-fc9e-4a8e-852e-09b4f2925e57]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633781, 'reachable_time': 41522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361378, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 systemd[1]: run-netns-ovnmeta\x2d22b98830\x2ddbc4\x2d457b\x2da04e\x2de9a5507f2880.mount: Deactivated successfully.
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.663 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.663 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[088cd923-435c-44dd-a322-7c7f0842a33e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.666 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f in datapath fadddb78-26b2-452e-a680-4fa4490a9885 unbound from our chassis#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.667 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fadddb78-26b2-452e-a680-4fa4490a9885#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.680 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7edd96-b005-4ae1-aa03-fc342b57ffba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.681 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfadddb78-21 in ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.682 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfadddb78-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.682 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b6823106-1a2e-4089-80af-f49b6b417792]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.683 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3961540b-dd1d-447f-bea7-d69652cfa628]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.697 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[085bb3ee-5ecb-4613-913c-39be6b4a8a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.719 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523709.7186847, 11a944d0-c529-462a-a12d-95eadb9446a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.719 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] VM Started (Lifecycle Event)#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.720 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c8354d44-e92c-40de-883f-6b8afd8d16f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.745 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.749 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523709.7188623, 11a944d0-c529-462a-a12d-95eadb9446a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.749 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.750 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7905613d-3cbd-4c32-b00d-7ea984d74fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 NetworkManager[48904]: <info>  [1769523709.7567] manager: (tapfadddb78-20): new Veth device (/org/freedesktop/NetworkManager/Devices/588)
Jan 27 09:21:49 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:49.756 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f08ec9ac-b653-4d37-ac41-568f9bf2c68a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.776 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.781 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.806 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.878 238945 DEBUG nova.compute.manager [req-49810b4b-894a-44e4-88bf-2e31133463e3 req-dd7d0bc5-88d8-472a-83ca-90f3f75131b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.879 238945 DEBUG oslo_concurrency.lockutils [req-49810b4b-894a-44e4-88bf-2e31133463e3 req-dd7d0bc5-88d8-472a-83ca-90f3f75131b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.879 238945 DEBUG oslo_concurrency.lockutils [req-49810b4b-894a-44e4-88bf-2e31133463e3 req-dd7d0bc5-88d8-472a-83ca-90f3f75131b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.879 238945 DEBUG oslo_concurrency.lockutils [req-49810b4b-894a-44e4-88bf-2e31133463e3 req-dd7d0bc5-88d8-472a-83ca-90f3f75131b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.880 238945 DEBUG nova.compute.manager [req-49810b4b-894a-44e4-88bf-2e31133463e3 req-dd7d0bc5-88d8-472a-83ca-90f3f75131b1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Processing event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.881 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.886 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523709.8855925, 11a944d0-c529-462a-a12d-95eadb9446a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.886 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.888 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.892 238945 INFO nova.virt.libvirt.driver [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Instance spawned successfully.#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.892 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.911 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.918 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.922 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.922 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.923 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.923 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.923 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.924 238945 DEBUG nova.virt.libvirt.driver [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.951 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.990 238945 INFO nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Took 12.94 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:21:49 np0005597378 nova_compute[238941]: 2026-01-27 14:21:49.991 238945 DEBUG nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.006 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3d5c77-9669-4cde-8814-5cc0dce3447d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.010 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ed941a8e-d8d5-4275-ac28-18d438d76cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:50 np0005597378 NetworkManager[48904]: <info>  [1769523710.0343] device (tapfadddb78-20): carrier: link connected
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.040 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c23c99e2-583c-4046-99fb-3d6dc2022e0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:50 np0005597378 nova_compute[238941]: 2026-01-27 14:21:50.055 238945 INFO nova.compute.manager [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Took 13.99 seconds to build instance.#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.061 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[93961ab6-1c9a-44df-8b33-3100cd77aff5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfadddb78-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:62:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634965, 'reachable_time': 35390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361403, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:50 np0005597378 nova_compute[238941]: 2026-01-27 14:21:50.077 238945 DEBUG oslo_concurrency.lockutils [None req-ec510de5-e107-4b00-beb5-214157eb3a29 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.077 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0d261a19-1974-4ede-a5ff-23d3b266646d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:623c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634965, 'tstamp': 634965}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361404, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.098 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[74e3b117-064f-4384-b5f4-0e89ee67602e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfadddb78-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:62:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634965, 'reachable_time': 35390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361405, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.131 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d291cc5d-f1df-46b0-bc06-d2d0a179a934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.191 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[144ead1d-26a1-4cb3-9a8c-2516f984ab83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.192 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfadddb78-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.193 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.193 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfadddb78-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:50 np0005597378 nova_compute[238941]: 2026-01-27 14:21:50.195 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:50 np0005597378 NetworkManager[48904]: <info>  [1769523710.1962] manager: (tapfadddb78-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/589)
Jan 27 09:21:50 np0005597378 kernel: tapfadddb78-20: entered promiscuous mode
Jan 27 09:21:50 np0005597378 nova_compute[238941]: 2026-01-27 14:21:50.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.199 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfadddb78-20, col_values=(('external_ids', {'iface-id': '2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:21:50 np0005597378 nova_compute[238941]: 2026-01-27 14:21:50.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:50 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:50Z|01434|binding|INFO|Releasing lport 2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14 from this chassis (sb_readonly=0)
Jan 27 09:21:50 np0005597378 nova_compute[238941]: 2026-01-27 14:21:50.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:50 np0005597378 nova_compute[238941]: 2026-01-27 14:21:50.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.218 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fadddb78-26b2-452e-a680-4fa4490a9885.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fadddb78-26b2-452e-a680-4fa4490a9885.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.219 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3513153d-12e6-4056-b722-1a1a14af1eb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.220 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-fadddb78-26b2-452e-a680-4fa4490a9885
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/fadddb78-26b2-452e-a680-4fa4490a9885.pid.haproxy
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID fadddb78-26b2-452e-a680-4fa4490a9885
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:21:50 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:21:50.520 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'env', 'PROCESS_TAG=haproxy-fadddb78-26b2-452e-a680-4fa4490a9885', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fadddb78-26b2-452e-a680-4fa4490a9885.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:21:50 np0005597378 nova_compute[238941]: 2026-01-27 14:21:50.925 238945 DEBUG nova.compute.manager [req-057ca749-afe6-4c3e-978f-84f11cc61ee1 req-45b63733-0e13-4125-8eb7-4eb7a8fbdadf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:21:50 np0005597378 nova_compute[238941]: 2026-01-27 14:21:50.925 238945 DEBUG oslo_concurrency.lockutils [req-057ca749-afe6-4c3e-978f-84f11cc61ee1 req-45b63733-0e13-4125-8eb7-4eb7a8fbdadf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:50 np0005597378 nova_compute[238941]: 2026-01-27 14:21:50.925 238945 DEBUG oslo_concurrency.lockutils [req-057ca749-afe6-4c3e-978f-84f11cc61ee1 req-45b63733-0e13-4125-8eb7-4eb7a8fbdadf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:50 np0005597378 nova_compute[238941]: 2026-01-27 14:21:50.925 238945 DEBUG oslo_concurrency.lockutils [req-057ca749-afe6-4c3e-978f-84f11cc61ee1 req-45b63733-0e13-4125-8eb7-4eb7a8fbdadf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:50 np0005597378 nova_compute[238941]: 2026-01-27 14:21:50.926 238945 DEBUG nova.compute.manager [req-057ca749-afe6-4c3e-978f-84f11cc61ee1 req-45b63733-0e13-4125-8eb7-4eb7a8fbdadf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] No waiting events found dispatching network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:21:50 np0005597378 nova_compute[238941]: 2026-01-27 14:21:50.926 238945 WARNING nova.compute.manager [req-057ca749-afe6-4c3e-978f-84f11cc61ee1 req-45b63733-0e13-4125-8eb7-4eb7a8fbdadf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Received unexpected event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:21:50 np0005597378 podman[361437]: 2026-01-27 14:21:50.897903972 +0000 UTC m=+0.023898688 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:21:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 238 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Jan 27 09:21:51 np0005597378 podman[361437]: 2026-01-27 14:21:51.428220158 +0000 UTC m=+0.554214824 container create eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:21:51 np0005597378 systemd[1]: Started libpod-conmon-eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b.scope.
Jan 27 09:21:51 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:21:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e343ece24984a12a7cdfd6f209a71eba45e9c527637fc2eceaf8acbff563793/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:21:51 np0005597378 podman[361437]: 2026-01-27 14:21:51.568087671 +0000 UTC m=+0.694082367 container init eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 09:21:51 np0005597378 podman[361437]: 2026-01-27 14:21:51.573863605 +0000 UTC m=+0.699858281 container start eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 09:21:51 np0005597378 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [NOTICE]   (361457) : New worker (361459) forked
Jan 27 09:21:51 np0005597378 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [NOTICE]   (361457) : Loading success.
Jan 27 09:21:51 np0005597378 nova_compute[238941]: 2026-01-27 14:21:51.882 238945 INFO nova.virt.libvirt.driver [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Deleting instance files /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18_del#033[00m
Jan 27 09:21:51 np0005597378 nova_compute[238941]: 2026-01-27 14:21:51.884 238945 INFO nova.virt.libvirt.driver [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Deletion of /var/lib/nova/instances/6abeb4c6-8b43-49cb-8ced-7e612d456e18_del complete#033[00m
Jan 27 09:21:51 np0005597378 nova_compute[238941]: 2026-01-27 14:21:51.938 238945 INFO nova.compute.manager [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Took 4.74 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:21:51 np0005597378 nova_compute[238941]: 2026-01-27 14:21:51.938 238945 DEBUG oslo.service.loopingcall [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:21:51 np0005597378 nova_compute[238941]: 2026-01-27 14:21:51.938 238945 DEBUG nova.compute.manager [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:21:51 np0005597378 nova_compute[238941]: 2026-01-27 14:21:51.939 238945 DEBUG nova.network.neutron [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:21:52 np0005597378 nova_compute[238941]: 2026-01-27 14:21:52.069 238945 DEBUG nova.compute.manager [req-ca627122-8443-4d64-aead-df1f01b6521e req-747c5710-43b3-46ff-b390-af5520f96ea0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:21:52 np0005597378 nova_compute[238941]: 2026-01-27 14:21:52.069 238945 DEBUG oslo_concurrency.lockutils [req-ca627122-8443-4d64-aead-df1f01b6521e req-747c5710-43b3-46ff-b390-af5520f96ea0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:52 np0005597378 nova_compute[238941]: 2026-01-27 14:21:52.070 238945 DEBUG oslo_concurrency.lockutils [req-ca627122-8443-4d64-aead-df1f01b6521e req-747c5710-43b3-46ff-b390-af5520f96ea0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:52 np0005597378 nova_compute[238941]: 2026-01-27 14:21:52.070 238945 DEBUG oslo_concurrency.lockutils [req-ca627122-8443-4d64-aead-df1f01b6521e req-747c5710-43b3-46ff-b390-af5520f96ea0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:52 np0005597378 nova_compute[238941]: 2026-01-27 14:21:52.070 238945 DEBUG nova.compute.manager [req-ca627122-8443-4d64-aead-df1f01b6521e req-747c5710-43b3-46ff-b390-af5520f96ea0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] No waiting events found dispatching network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:21:52 np0005597378 nova_compute[238941]: 2026-01-27 14:21:52.070 238945 WARNING nova.compute.manager [req-ca627122-8443-4d64-aead-df1f01b6521e req-747c5710-43b3-46ff-b390-af5520f96ea0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received unexpected event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f for instance with vm_state active and task_state None.#033[00m
Jan 27 09:21:52 np0005597378 nova_compute[238941]: 2026-01-27 14:21:52.773 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:52Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:b5:e1 10.100.0.11
Jan 27 09:21:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:21:52Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:b5:e1 10.100.0.11
Jan 27 09:21:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 238 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 886 KiB/s wr, 110 op/s
Jan 27 09:21:53 np0005597378 nova_compute[238941]: 2026-01-27 14:21:53.263 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:53 np0005597378 nova_compute[238941]: 2026-01-27 14:21:53.379 238945 DEBUG nova.network.neutron [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:21:53 np0005597378 nova_compute[238941]: 2026-01-27 14:21:53.398 238945 INFO nova.compute.manager [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Took 1.46 seconds to deallocate network for instance.#033[00m
Jan 27 09:21:53 np0005597378 nova_compute[238941]: 2026-01-27 14:21:53.447 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:21:53 np0005597378 nova_compute[238941]: 2026-01-27 14:21:53.447 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:21:53 np0005597378 nova_compute[238941]: 2026-01-27 14:21:53.537 238945 DEBUG oslo_concurrency.processutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:21:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:21:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2595634014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:21:54 np0005597378 nova_compute[238941]: 2026-01-27 14:21:54.137 238945 DEBUG oslo_concurrency.processutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:21:54 np0005597378 nova_compute[238941]: 2026-01-27 14:21:54.143 238945 DEBUG nova.compute.provider_tree [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:21:54 np0005597378 nova_compute[238941]: 2026-01-27 14:21:54.164 238945 DEBUG nova.scheduler.client.report [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:21:54 np0005597378 nova_compute[238941]: 2026-01-27 14:21:54.186 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:54 np0005597378 nova_compute[238941]: 2026-01-27 14:21:54.216 238945 DEBUG nova.compute.manager [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-changed-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:21:54 np0005597378 nova_compute[238941]: 2026-01-27 14:21:54.217 238945 DEBUG nova.compute.manager [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Refreshing instance network info cache due to event network-changed-41bc1922-0e8b-4e12-a842-f9f8d958cc6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:21:54 np0005597378 nova_compute[238941]: 2026-01-27 14:21:54.217 238945 DEBUG oslo_concurrency.lockutils [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:21:54 np0005597378 nova_compute[238941]: 2026-01-27 14:21:54.217 238945 DEBUG oslo_concurrency.lockutils [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:21:54 np0005597378 nova_compute[238941]: 2026-01-27 14:21:54.217 238945 DEBUG nova.network.neutron [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Refreshing network info cache for port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:21:54 np0005597378 nova_compute[238941]: 2026-01-27 14:21:54.222 238945 INFO nova.scheduler.client.report [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 6abeb4c6-8b43-49cb-8ced-7e612d456e18#033[00m
Jan 27 09:21:54 np0005597378 nova_compute[238941]: 2026-01-27 14:21:54.285 238945 DEBUG oslo_concurrency.lockutils [None req-fa4363d4-3e5c-48c5-914d-bc763f99107d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "6abeb4c6-8b43-49cb-8ced-7e612d456e18" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:21:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 237 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.7 MiB/s wr, 217 op/s
Jan 27 09:21:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:21:56 np0005597378 nova_compute[238941]: 2026-01-27 14:21:56.064 238945 DEBUG nova.network.neutron [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updated VIF entry in instance network info cache for port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:21:56 np0005597378 nova_compute[238941]: 2026-01-27 14:21:56.066 238945 DEBUG nova.network.neutron [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updating instance_info_cache with network_info: [{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:21:56 np0005597378 nova_compute[238941]: 2026-01-27 14:21:56.089 238945 DEBUG oslo_concurrency.lockutils [req-0340e141-6fe8-4fe4-8464-298a16a12c03 req-1a2caa09-2c2e-4813-b3db-9f7f753a21a2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:21:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 246 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 195 op/s
Jan 27 09:21:57 np0005597378 nova_compute[238941]: 2026-01-27 14:21:57.777 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:58 np0005597378 nova_compute[238941]: 2026-01-27 14:21:58.264 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:21:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 246 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Jan 27 09:21:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:21:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/626407197' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:21:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:21:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/626407197' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:22:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 246 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Jan 27 09:22:02 np0005597378 nova_compute[238941]: 2026-01-27 14:22:02.778 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 246 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.026 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.027 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523708.2320282, 6abeb4c6-8b43-49cb-8ced-7e612d456e18 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.027 238945 INFO nova.compute.manager [-] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:22:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:04Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:ba:fd 10.100.0.14
Jan 27 09:22:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:04Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:ba:fd 10.100.0.14
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.032 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.032 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.056 238945 DEBUG nova.compute.manager [None req-69afb754-3b30-473b-8548-536c01715d5b - - - - - -] [instance: 6abeb4c6-8b43-49cb-8ced-7e612d456e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.062 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.143 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.143 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.151 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.152 238945 INFO nova.compute.claims [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.305 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:04 np0005597378 podman[361510]: 2026-01-27 14:22:04.737481899 +0000 UTC m=+0.081766944 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:22:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:22:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2304090449' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.888 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.897 238945 DEBUG nova.compute.provider_tree [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.916 238945 DEBUG nova.scheduler.client.report [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.942 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.943 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.988 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:22:04 np0005597378 nova_compute[238941]: 2026-01-27 14:22:04.988 238945 DEBUG nova.network.neutron [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.008 238945 INFO nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.026 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:22:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 264 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.7 MiB/s wr, 186 op/s
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.131 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.133 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.133 238945 INFO nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Creating image(s)#033[00m
Jan 27 09:22:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.158 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.183 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.204 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.207 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.292 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.293 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.293 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.294 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.317 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.321 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3200f931-0872-4524-bbd2-c480c1cce88c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.405 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.406 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.406 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.406 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.406 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:05 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.645 238945 DEBUG nova.policy [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:22:05 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.706 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3200f931-0872-4524-bbd2-c480c1cce88c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.787 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.904 238945 DEBUG nova.objects.instance [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 3200f931-0872-4524-bbd2-c480c1cce88c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.929 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.930 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Ensure instance console log exists: /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.930 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.931 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:05 np0005597378 nova_compute[238941]: 2026-01-27 14:22:05.931 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:22:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2250629004' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.018 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.108 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.108 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.111 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.112 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.115 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.116 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.331 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.332 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3058MB free_disk=59.87547723017633GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.332 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.332 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.498 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9588e56d-325a-44ac-b589-16da13fbcc3d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.499 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 3a25c695-bd44-4d88-b931-920b89c75a4d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.499 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 11a944d0-c529-462a-a12d-95eadb9446a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.499 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 3200f931-0872-4524-bbd2-c480c1cce88c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.499 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.500 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.591 238945 DEBUG nova.compute.manager [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-changed-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.593 238945 DEBUG nova.compute.manager [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Refreshing instance network info cache due to event network-changed-d5582334-4cd2-421a-84da-5575a8f8ba69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.594 238945 DEBUG oslo_concurrency.lockutils [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.594 238945 DEBUG oslo_concurrency.lockutils [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.594 238945 DEBUG nova.network.neutron [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Refreshing network info cache for port d5582334-4cd2-421a-84da-5575a8f8ba69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.607 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.672 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.673 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.674 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.674 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.674 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.675 238945 INFO nova.compute.manager [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Terminating instance#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.676 238945 DEBUG nova.compute.manager [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:22:06 np0005597378 kernel: tapd5582334-4c (unregistering): left promiscuous mode
Jan 27 09:22:06 np0005597378 NetworkManager[48904]: <info>  [1769523726.7411] device (tapd5582334-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:22:06 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:06Z|01435|binding|INFO|Releasing lport d5582334-4cd2-421a-84da-5575a8f8ba69 from this chassis (sb_readonly=0)
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.759 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:06 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:06Z|01436|binding|INFO|Setting lport d5582334-4cd2-421a-84da-5575a8f8ba69 down in Southbound
Jan 27 09:22:06 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:06Z|01437|binding|INFO|Removing iface tapd5582334-4c ovn-installed in OVS
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:06.775 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:b5:e1 10.100.0.11'], port_security=['fa:16:3e:e4:b5:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3a25c695-bd44-4d88-b931-920b89c75a4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4eebbb2-d419-456a-965c-2d46e9651992', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fab94160690148e98795259a1f20f590', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1573b945-2648-4a63-9472-5b7adbc61404 3af35995-534d-4c3f-b4ab-f970d48d2dc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d78a1f4-b32d-4cb5-8458-3ffeab4c5d5d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=d5582334-4cd2-421a-84da-5575a8f8ba69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:22:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:06.777 154802 INFO neutron.agent.ovn.metadata.agent [-] Port d5582334-4cd2-421a-84da-5575a8f8ba69 in datapath e4eebbb2-d419-456a-965c-2d46e9651992 unbound from our chassis#033[00m
Jan 27 09:22:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:06.779 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4eebbb2-d419-456a-965c-2d46e9651992, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:22:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:06.780 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4457549b-eb3d-4d49-a375-daeaf7038795]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:06.780 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992 namespace which is not needed anymore#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.783 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:06 np0005597378 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000083.scope: Deactivated successfully.
Jan 27 09:22:06 np0005597378 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000083.scope: Consumed 14.764s CPU time.
Jan 27 09:22:06 np0005597378 systemd-machined[207425]: Machine qemu-163-instance-00000083 terminated.
Jan 27 09:22:06 np0005597378 podman[361732]: 2026-01-27 14:22:06.865000435 +0000 UTC m=+0.102293311 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.910 238945 INFO nova.virt.libvirt.driver [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Instance destroyed successfully.#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.911 238945 DEBUG nova.objects.instance [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lazy-loading 'resources' on Instance uuid 3a25c695-bd44-4d88-b931-920b89c75a4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:22:06 np0005597378 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [NOTICE]   (360900) : haproxy version is 2.8.14-c23fe91
Jan 27 09:22:06 np0005597378 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [NOTICE]   (360900) : path to executable is /usr/sbin/haproxy
Jan 27 09:22:06 np0005597378 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [WARNING]  (360900) : Exiting Master process...
Jan 27 09:22:06 np0005597378 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [ALERT]    (360900) : Current worker (360913) exited with code 143 (Terminated)
Jan 27 09:22:06 np0005597378 neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992[360881]: [WARNING]  (360900) : All workers exited. Exiting... (0)
Jan 27 09:22:06 np0005597378 systemd[1]: libpod-619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b.scope: Deactivated successfully.
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.927 238945 DEBUG nova.virt.libvirt.vif [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:21:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-32498425-access_point-1324723749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-32498425-acce',id=131,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCyTZUYN0U7tIy5WORnuZEUZsT78tKpw5fd3F5Gn4FZzj7CRmdxr09neY3gqgNMFonT/3xkHWS+Ja9dEjk5+JCZO+fYN/o3x4zZA2x5xWCMoq+ymn58/Jm9fO3o3fvxRpg==',key_name='tempest-TestSecurityGroupsBasicOps-971836491',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:21:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fab94160690148e98795259a1f20f590',ramdisk_id='',reservation_id='r-jq519wc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-32498425',owner_user_name='tempest-TestSecurityGroupsBasicOps-32498425-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:21:37Z,user_data=None,user_id='0425e99118c045d98b41acd95be502b2',uuid=3a25c695-bd44-4d88-b931-920b89c75a4d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.927 238945 DEBUG nova.network.os_vif_util [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Converting VIF {"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.928 238945 DEBUG nova.network.os_vif_util [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.928 238945 DEBUG os_vif [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.930 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.931 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5582334-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.933 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:06 np0005597378 podman[361791]: 2026-01-27 14:22:06.935276191 +0000 UTC m=+0.058704608 container died 619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:22:06 np0005597378 nova_compute[238941]: 2026-01-27 14:22:06.938 238945 INFO os_vif [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:b5:e1,bridge_name='br-int',has_traffic_filtering=True,id=d5582334-4cd2-421a-84da-5575a8f8ba69,network=Network(e4eebbb2-d419-456a-965c-2d46e9651992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5582334-4c')#033[00m
Jan 27 09:22:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b-userdata-shm.mount: Deactivated successfully.
Jan 27 09:22:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-81ecb7c310b1ada57892ec42dcba1840531fe6340349755d250fe4bcf2f638ba-merged.mount: Deactivated successfully.
Jan 27 09:22:06 np0005597378 podman[361791]: 2026-01-27 14:22:06.993021162 +0000 UTC m=+0.116449579 container cleanup 619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:22:07 np0005597378 systemd[1]: libpod-conmon-619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b.scope: Deactivated successfully.
Jan 27 09:22:07 np0005597378 podman[361845]: 2026-01-27 14:22:07.07608897 +0000 UTC m=+0.057470705 container remove 619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 09:22:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.083 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[820cab82-e542-4e38-b127-b5443b6b9997]: (4, ('Tue Jan 27 02:22:06 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992 (619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b)\n619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b\nTue Jan 27 02:22:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992 (619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b)\n619fdfac6f27fc77af4ece5e2454bee587fb47cd576c6f61489d87131ba1ae3b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.086 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e72b3dc8-1ef8-4c4f-b05f-36161ebf9efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.087 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4eebbb2-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:07 np0005597378 kernel: tape4eebbb2-d0: left promiscuous mode
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.106 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.111 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[87875c75-1d90-4ed9-9031-7ab852b52bc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.126 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[19522cf0-3087-429e-80f4-e84f614cff2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.128 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3b82d450-de16-4a43-8900-e76402b42631]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 274 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 837 KiB/s rd, 2.5 MiB/s wr, 101 op/s
Jan 27 09:22:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.144 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1e8976-989a-49a6-a454-23b4eec50271]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633574, 'reachable_time': 36279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361860, 'error': None, 'target': 'ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.147 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4eebbb2-d419-456a-965c-2d46e9651992 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:22:07 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:07.147 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[a150caf9-6265-4fb0-b265-7cb36749b5e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:07 np0005597378 systemd[1]: run-netns-ovnmeta\x2de4eebbb2\x2dd419\x2d456a\x2d965c\x2d2d46e9651992.mount: Deactivated successfully.
Jan 27 09:22:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:22:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3409343436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.257 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.262 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.281 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.308 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.309 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.330 238945 INFO nova.virt.libvirt.driver [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Deleting instance files /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d_del#033[00m
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.331 238945 INFO nova.virt.libvirt.driver [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Deletion of /var/lib/nova/instances/3a25c695-bd44-4d88-b931-920b89c75a4d_del complete#033[00m
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.374 238945 INFO nova.compute.manager [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.375 238945 DEBUG oslo.service.loopingcall [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.375 238945 DEBUG nova.compute.manager [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.375 238945 DEBUG nova.network.neutron [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:22:07 np0005597378 nova_compute[238941]: 2026-01-27 14:22:07.779 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.142 238945 DEBUG nova.network.neutron [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Successfully updated port: a4f55f62-5a14-4d6a-ad2b-746f03792b7f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.194 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-3200f931-0872-4524-bbd2-c480c1cce88c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.195 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-3200f931-0872-4524-bbd2-c480c1cce88c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.195 238945 DEBUG nova.network.neutron [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.196 238945 DEBUG nova.network.neutron [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.237 238945 INFO nova.compute.manager [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Took 0.86 seconds to deallocate network for instance.#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.286 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.287 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.309 238945 DEBUG nova.compute.manager [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received event network-changed-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.309 238945 DEBUG nova.compute.manager [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Refreshing instance network info cache due to event network-changed-a4f55f62-5a14-4d6a-ad2b-746f03792b7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.309 238945 DEBUG oslo_concurrency.lockutils [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3200f931-0872-4524-bbd2-c480c1cce88c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.370 238945 DEBUG oslo_concurrency.processutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.642 238945 DEBUG nova.network.neutron [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.688 238945 DEBUG nova.compute.manager [req-6bce6e1e-3a02-474a-aa5a-fc9d5a9007b7 req-110e62ea-a5f5-47a3-bb59-83e8074e426d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-vif-deleted-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:22:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1086789190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.942 238945 DEBUG oslo_concurrency.processutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.948 238945 DEBUG nova.compute.provider_tree [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:22:08 np0005597378 nova_compute[238941]: 2026-01-27 14:22:08.987 238945 DEBUG nova.scheduler.client.report [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.016 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.044 238945 INFO nova.scheduler.client.report [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Deleted allocations for instance 3a25c695-bd44-4d88-b931-920b89c75a4d#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.124 238945 DEBUG oslo_concurrency.lockutils [None req-6d057b5e-3310-4b3c-8cca-58e4ea32e917 0425e99118c045d98b41acd95be502b2 fab94160690148e98795259a1f20f590 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 274 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.842 238945 DEBUG nova.network.neutron [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Updating instance_info_cache with network_info: [{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.868 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-3200f931-0872-4524-bbd2-c480c1cce88c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.868 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Instance network_info: |[{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.868 238945 DEBUG oslo_concurrency.lockutils [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3200f931-0872-4524-bbd2-c480c1cce88c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.869 238945 DEBUG nova.network.neutron [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Refreshing network info cache for port a4f55f62-5a14-4d6a-ad2b-746f03792b7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.871 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Start _get_guest_xml network_info=[{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.877 238945 WARNING nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.887 238945 DEBUG nova.virt.libvirt.host [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.888 238945 DEBUG nova.virt.libvirt.host [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.892 238945 DEBUG nova.virt.libvirt.host [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.892 238945 DEBUG nova.virt.libvirt.host [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.893 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.893 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.894 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.894 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.894 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.894 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.894 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.894 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.895 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.895 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.895 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.895 238945 DEBUG nova.virt.hardware [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.899 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.939 238945 DEBUG nova.network.neutron [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updated VIF entry in instance network info cache for port d5582334-4cd2-421a-84da-5575a8f8ba69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.940 238945 DEBUG nova.network.neutron [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Updating instance_info_cache with network_info: [{"id": "d5582334-4cd2-421a-84da-5575a8f8ba69", "address": "fa:16:3e:e4:b5:e1", "network": {"id": "e4eebbb2-d419-456a-965c-2d46e9651992", "bridge": "br-int", "label": "tempest-network-smoke--177026913", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fab94160690148e98795259a1f20f590", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5582334-4c", "ovs_interfaceid": "d5582334-4cd2-421a-84da-5575a8f8ba69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:09 np0005597378 nova_compute[238941]: 2026-01-27 14:22:09.969 238945 DEBUG oslo_concurrency.lockutils [req-d880e6b3-9ccb-47af-a089-bc160c07ed06 req-8ec390e1-11de-420e-b4a2-dc924749bb97 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3a25c695-bd44-4d88-b931-920b89c75a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:22:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:10 np0005597378 nova_compute[238941]: 2026-01-27 14:22:10.310 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:22:10 np0005597378 nova_compute[238941]: 2026-01-27 14:22:10.445 238945 DEBUG nova.compute.manager [req-f0109565-2c15-4ff6-b186-619302e8856e req-cd69daab-28cd-4e11-b6a0-595b6bdad2d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:10 np0005597378 nova_compute[238941]: 2026-01-27 14:22:10.446 238945 DEBUG oslo_concurrency.lockutils [req-f0109565-2c15-4ff6-b186-619302e8856e req-cd69daab-28cd-4e11-b6a0-595b6bdad2d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:10 np0005597378 nova_compute[238941]: 2026-01-27 14:22:10.446 238945 DEBUG oslo_concurrency.lockutils [req-f0109565-2c15-4ff6-b186-619302e8856e req-cd69daab-28cd-4e11-b6a0-595b6bdad2d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:10 np0005597378 nova_compute[238941]: 2026-01-27 14:22:10.446 238945 DEBUG oslo_concurrency.lockutils [req-f0109565-2c15-4ff6-b186-619302e8856e req-cd69daab-28cd-4e11-b6a0-595b6bdad2d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3a25c695-bd44-4d88-b931-920b89c75a4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:10 np0005597378 nova_compute[238941]: 2026-01-27 14:22:10.446 238945 DEBUG nova.compute.manager [req-f0109565-2c15-4ff6-b186-619302e8856e req-cd69daab-28cd-4e11-b6a0-595b6bdad2d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] No waiting events found dispatching network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:22:10 np0005597378 nova_compute[238941]: 2026-01-27 14:22:10.446 238945 WARNING nova.compute.manager [req-f0109565-2c15-4ff6-b186-619302e8856e req-cd69daab-28cd-4e11-b6a0-595b6bdad2d7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Received unexpected event network-vif-plugged-d5582334-4cd2-421a-84da-5575a8f8ba69 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:22:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:22:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2742809661' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:22:10 np0005597378 nova_compute[238941]: 2026-01-27 14:22:10.534 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:10 np0005597378 nova_compute[238941]: 2026-01-27 14:22:10.558 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:10 np0005597378 nova_compute[238941]: 2026-01-27 14:22:10.563 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 246 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 3.9 MiB/s wr, 120 op/s
Jan 27 09:22:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:22:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1856171167' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.178 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.180 238945 DEBUG nova.virt.libvirt.vif [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-452791692',display_name='tempest-TestNetworkBasicOps-server-452791692',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-452791692',id=134,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOeubEW8B0i8bBUu2pGROeTiLW5GReoN5OSwvpU7vKKFdGW1nyCUBEb7ktILgYyUMXV9X5Vx1h/TjV6I9zaHk/IHBPbNMgX0M6RqoQybzvDWb9VdAI+sqAiPz/IhRuMvMQ==',key_name='tempest-TestNetworkBasicOps-872581748',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-3da2hs4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:05Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=3200f931-0872-4524-bbd2-c480c1cce88c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.180 238945 DEBUG nova.network.os_vif_util [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.182 238945 DEBUG nova.network.os_vif_util [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.183 238945 DEBUG nova.objects.instance [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3200f931-0872-4524-bbd2-c480c1cce88c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.197 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  <uuid>3200f931-0872-4524-bbd2-c480c1cce88c</uuid>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  <name>instance-00000086</name>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkBasicOps-server-452791692</nova:name>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:22:09</nova:creationTime>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:        <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:        <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:        <nova:port uuid="a4f55f62-5a14-4d6a-ad2b-746f03792b7f">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <entry name="serial">3200f931-0872-4524-bbd2-c480c1cce88c</entry>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <entry name="uuid">3200f931-0872-4524-bbd2-c480c1cce88c</entry>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3200f931-0872-4524-bbd2-c480c1cce88c_disk">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3200f931-0872-4524-bbd2-c480c1cce88c_disk.config">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:40:9e:43"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <target dev="tapa4f55f62-5a"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/console.log" append="off"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:22:11 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:22:11 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:22:11 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:22:11 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.197 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Preparing to wait for external event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.198 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.198 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.198 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.199 238945 DEBUG nova.virt.libvirt.vif [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-452791692',display_name='tempest-TestNetworkBasicOps-server-452791692',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-452791692',id=134,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOeubEW8B0i8bBUu2pGROeTiLW5GReoN5OSwvpU7vKKFdGW1nyCUBEb7ktILgYyUMXV9X5Vx1h/TjV6I9zaHk/IHBPbNMgX0M6RqoQybzvDWb9VdAI+sqAiPz/IhRuMvMQ==',key_name='tempest-TestNetworkBasicOps-872581748',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-3da2hs4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:05Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=3200f931-0872-4524-bbd2-c480c1cce88c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.199 238945 DEBUG nova.network.os_vif_util [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.200 238945 DEBUG nova.network.os_vif_util [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.200 238945 DEBUG os_vif [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.200 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.201 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.201 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.207 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.208 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4f55f62-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.208 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4f55f62-5a, col_values=(('external_ids', {'iface-id': 'a4f55f62-5a14-4d6a-ad2b-746f03792b7f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:9e:43', 'vm-uuid': '3200f931-0872-4524-bbd2-c480c1cce88c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:11 np0005597378 NetworkManager[48904]: <info>  [1769523731.2116] manager: (tapa4f55f62-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/590)
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.216 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.220 238945 INFO os_vif [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a')#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.295 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.295 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.296 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:40:9e:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.454 238945 INFO nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Using config drive#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.474 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.480 238945 DEBUG nova.network.neutron [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Updated VIF entry in instance network info cache for port a4f55f62-5a14-4d6a-ad2b-746f03792b7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.480 238945 DEBUG nova.network.neutron [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Updating instance_info_cache with network_info: [{"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.506 238945 DEBUG oslo_concurrency.lockutils [req-4ef86994-2fd8-4c62-986c-0223110dadeb req-8fd02ce1-1e9c-4139-8ed8-d05d2ad6d001 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3200f931-0872-4524-bbd2-c480c1cce88c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.750 238945 INFO nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Creating config drive at /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/disk.config#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.755 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyt1cr3k3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.896 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyt1cr3k3" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.924 238945 DEBUG nova.storage.rbd_utils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3200f931-0872-4524-bbd2-c480c1cce88c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:11 np0005597378 nova_compute[238941]: 2026-01-27 14:22:11.928 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/disk.config 3200f931-0872-4524-bbd2-c480c1cce88c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.180 238945 DEBUG oslo_concurrency.processutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/disk.config 3200f931-0872-4524-bbd2-c480c1cce88c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.181 238945 INFO nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Deleting local config drive /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c/disk.config because it was imported into RBD.#033[00m
Jan 27 09:22:12 np0005597378 kernel: tapa4f55f62-5a: entered promiscuous mode
Jan 27 09:22:12 np0005597378 NetworkManager[48904]: <info>  [1769523732.2413] manager: (tapa4f55f62-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/591)
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:12 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:12Z|01438|binding|INFO|Claiming lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f for this chassis.
Jan 27 09:22:12 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:12Z|01439|binding|INFO|a4f55f62-5a14-4d6a-ad2b-746f03792b7f: Claiming fa:16:3e:40:9e:43 10.100.0.13
Jan 27 09:22:12 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:12Z|01440|binding|INFO|Setting lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f ovn-installed in OVS
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:12 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:12Z|01441|binding|INFO|Setting lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f up in Southbound
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.263 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9e:43 10.100.0.13'], port_security=['fa:16:3e:40:9e:43 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3200f931-0872-4524-bbd2-c480c1cce88c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22b98830-dbc4-457b-a04e-e9a5507f2880', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '7', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d09327dc-d172-4e92-ad71-bca4adce4888, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a4f55f62-5a14-4d6a-ad2b-746f03792b7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.262 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.264 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a4f55f62-5a14-4d6a-ad2b-746f03792b7f in datapath 22b98830-dbc4-457b-a04e-e9a5507f2880 bound to our chassis#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.266 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 22b98830-dbc4-457b-a04e-e9a5507f2880#033[00m
Jan 27 09:22:12 np0005597378 systemd-udevd[362023]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:22:12 np0005597378 systemd-machined[207425]: New machine qemu-166-instance-00000086.
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.278 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bcb0ea-b584-47ed-8dfd-f886e6eb9555]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.279 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap22b98830-d1 in ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.283 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap22b98830-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.283 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5ccf62dc-bb15-49df-ac49-2f8920f83f70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 NetworkManager[48904]: <info>  [1769523732.2857] device (tapa4f55f62-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.284 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a987ae69-b699-4ba4-8a5c-b1f7fc0708fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 NetworkManager[48904]: <info>  [1769523732.2877] device (tapa4f55f62-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:22:12 np0005597378 systemd[1]: Started Virtual Machine qemu-166-instance-00000086.
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.298 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[22590f73-8325-4758-9252-30854e502948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.323 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0aca04b6-a957-4fa4-9eb2-a931906a1774]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.357 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2914ceae-1dc3-4a6d-ba77-bdeb7eb4e5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 systemd-udevd[362027]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:22:12 np0005597378 NetworkManager[48904]: <info>  [1769523732.3635] manager: (tap22b98830-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/592)
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.362 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[568876b6-b650-44aa-af34-750bbd15599b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.395 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f43a00-867c-401e-942d-670d51ab6ea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.398 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2221c55b-b014-4d0f-9f36-976db73ab1bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 NetworkManager[48904]: <info>  [1769523732.4226] device (tap22b98830-d0): carrier: link connected
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.430 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a609e27b-adc8-4e96-b536-9b771d8c826c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.449 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd34ad9-ec9f-4c60-9a8f-92268cec96e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22b98830-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:7a:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 419], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637204, 'reachable_time': 29936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362057, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.469 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9712ce39-367a-41b8-9b45-7535b32f8aa5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:7ab9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637204, 'tstamp': 637204}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362058, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.489 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[02138e79-8cee-426b-b2b4-e7f8a0f5432b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22b98830-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:7a:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 419], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637204, 'reachable_time': 29936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362059, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.524 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[331741c9-a3af-45e2-ae06-be8361441e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.592 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2e35bd-0e5a-40dc-b563-14ee388d7307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.593 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22b98830-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.593 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.594 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22b98830-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.595 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:12 np0005597378 kernel: tap22b98830-d0: entered promiscuous mode
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.598 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap22b98830-d0, col_values=(('external_ids', {'iface-id': '590369bd-e8ed-4b9b-b108-a8b595693634'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:12 np0005597378 NetworkManager[48904]: <info>  [1769523732.5994] manager: (tap22b98830-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Jan 27 09:22:12 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:12Z|01442|binding|INFO|Releasing lport 590369bd-e8ed-4b9b-b108-a8b595693634 from this chassis (sb_readonly=0)
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.600 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.616 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.617 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/22b98830-dbc4-457b-a04e-e9a5507f2880.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/22b98830-dbc4-457b-a04e-e9a5507f2880.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.619 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[77f69f68-a5df-4696-85e2-70347ea382aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.621 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-22b98830-dbc4-457b-a04e-e9a5507f2880
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/22b98830-dbc4-457b-a04e-e9a5507f2880.pid.haproxy
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 22b98830-dbc4-457b-a04e-e9a5507f2880
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:22:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:12.621 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'env', 'PROCESS_TAG=haproxy-22b98830-dbc4-457b-a04e-e9a5507f2880', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/22b98830-dbc4-457b-a04e-e9a5507f2880.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.781 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.817 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.818 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.839 238945 DEBUG nova.compute.manager [req-79359265-4a73-4057-90ef-58d248f5aabc req-8726a386-1522-4986-b2fd-e8579f7058ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.840 238945 DEBUG oslo_concurrency.lockutils [req-79359265-4a73-4057-90ef-58d248f5aabc req-8726a386-1522-4986-b2fd-e8579f7058ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.840 238945 DEBUG oslo_concurrency.lockutils [req-79359265-4a73-4057-90ef-58d248f5aabc req-8726a386-1522-4986-b2fd-e8579f7058ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.840 238945 DEBUG oslo_concurrency.lockutils [req-79359265-4a73-4057-90ef-58d248f5aabc req-8726a386-1522-4986-b2fd-e8579f7058ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.840 238945 DEBUG nova.compute.manager [req-79359265-4a73-4057-90ef-58d248f5aabc req-8726a386-1522-4986-b2fd-e8579f7058ac 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Processing event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.842 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.924 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.925 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.943 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.944 238945 INFO nova.compute.claims [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Claim successful on node compute-0.ctlplane.example.com
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.977 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.978 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523732.9769418, 3200f931-0872-4524-bbd2-c480c1cce88c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.979 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] VM Started (Lifecycle Event)
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.983 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.987 238945 INFO nova.virt.libvirt.driver [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Instance spawned successfully.
Jan 27 09:22:12 np0005597378 nova_compute[238941]: 2026-01-27 14:22:12.987 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.006 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.018 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.023 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.024 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.024 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.025 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.025 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.026 238945 DEBUG nova.virt.libvirt.driver [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.040 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.041 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523732.977989, 3200f931-0872-4524-bbd2-c480c1cce88c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.041 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] VM Paused (Lifecycle Event)
Jan 27 09:22:13 np0005597378 podman[362132]: 2026-01-27 14:22:13.055465967 +0000 UTC m=+0.080277483 container create bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.069 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.078 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523732.982937, 3200f931-0872-4524-bbd2-c480c1cce88c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.079 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] VM Resumed (Lifecycle Event)
Jan 27 09:22:13 np0005597378 systemd[1]: Started libpod-conmon-bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474.scope.
Jan 27 09:22:13 np0005597378 podman[362132]: 2026-01-27 14:22:13.002690749 +0000 UTC m=+0.027502285 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.109 238945 INFO nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Took 7.98 seconds to spawn the instance on the hypervisor.
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.110 238945 DEBUG nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.111 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.121 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:22:13 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:22:13 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d6be7271f1154f4ac4b0fc99db12404ab835ec4bc6c2926933a33539b47e60/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 246 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 3.9 MiB/s wr, 120 op/s
Jan 27 09:22:13 np0005597378 podman[362132]: 2026-01-27 14:22:13.156489783 +0000 UTC m=+0.181301299 container init bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 09:22:13 np0005597378 podman[362132]: 2026-01-27 14:22:13.168863634 +0000 UTC m=+0.193675160 container start bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.184 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:22:13 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [NOTICE]   (362150) : New worker (362152) forked
Jan 27 09:22:13 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [NOTICE]   (362150) : Loading success.
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.215 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.257 238945 INFO nova.compute.manager [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Took 9.15 seconds to build instance.
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.278 238945 DEBUG oslo_concurrency.lockutils [None req-7c65e52a-b791-40bd-b0db-af7971068444 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:13 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:13Z|01443|binding|INFO|Releasing lport 590369bd-e8ed-4b9b-b108-a8b595693634 from this chassis (sb_readonly=0)
Jan 27 09:22:13 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:13Z|01444|binding|INFO|Releasing lport 506e7ffb-d74f-480e-9382-49f98d134f52 from this chassis (sb_readonly=0)
Jan 27 09:22:13 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:13Z|01445|binding|INFO|Releasing lport 2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14 from this chassis (sb_readonly=0)
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.564 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:22:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:22:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3329905641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.826 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.833 238945 DEBUG nova.compute.provider_tree [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.852 238945 DEBUG nova.scheduler.client.report [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.878 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.879 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.927 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.927 238945 DEBUG nova.network.neutron [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.950 238945 INFO nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 09:22:13 np0005597378 nova_compute[238941]: 2026-01-27 14:22:13.971 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.079 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.081 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.081 238945 INFO nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Creating image(s)
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.110 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.141 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.178 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.184 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.228 238945 DEBUG nova.policy [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.269 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.269 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.270 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.270 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.299 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:22:14 np0005597378 nova_compute[238941]: 2026-01-27 14:22:14.305 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.015 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.078 238945 DEBUG nova.compute.manager [req-05cf021c-7dac-4611-aa14-ffbe0e3e94cb req-dfa3ca7e-c696-4f70-b0b9-679522a5b198 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.079 238945 DEBUG oslo_concurrency.lockutils [req-05cf021c-7dac-4611-aa14-ffbe0e3e94cb req-dfa3ca7e-c696-4f70-b0b9-679522a5b198 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.079 238945 DEBUG oslo_concurrency.lockutils [req-05cf021c-7dac-4611-aa14-ffbe0e3e94cb req-dfa3ca7e-c696-4f70-b0b9-679522a5b198 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.079 238945 DEBUG oslo_concurrency.lockutils [req-05cf021c-7dac-4611-aa14-ffbe0e3e94cb req-dfa3ca7e-c696-4f70-b0b9-679522a5b198 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.079 238945 DEBUG nova.compute.manager [req-05cf021c-7dac-4611-aa14-ffbe0e3e94cb req-dfa3ca7e-c696-4f70-b0b9-679522a5b198 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] No waiting events found dispatching network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.080 238945 WARNING nova.compute.manager [req-05cf021c-7dac-4611-aa14-ffbe0e3e94cb req-dfa3ca7e-c696-4f70-b0b9-679522a5b198 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received unexpected event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f for instance with vm_state active and task_state None.
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.118 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 09:22:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 246 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 172 op/s
Jan 27 09:22:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.316 238945 DEBUG nova.objects.instance [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid ddccc961-2581-4996-9b3f-b29ebc1c25d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.333 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.334 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Ensure instance console log exists: /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.334 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.335 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.335 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:15 np0005597378 nova_compute[238941]: 2026-01-27 14:22:15.666 238945 DEBUG nova.network.neutron [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Successfully created port: 34c6ae80-2857-4eb0-8a36-b7866038913b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.047 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.047 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.048 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.048 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.048 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.050 238945 INFO nova.compute.manager [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Terminating instance#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.051 238945 DEBUG nova.compute.manager [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:22:16 np0005597378 kernel: tapa4f55f62-5a (unregistering): left promiscuous mode
Jan 27 09:22:16 np0005597378 NetworkManager[48904]: <info>  [1769523736.1231] device (tapa4f55f62-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:22:16 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:16Z|01446|binding|INFO|Releasing lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f from this chassis (sb_readonly=0)
Jan 27 09:22:16 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:16Z|01447|binding|INFO|Setting lport a4f55f62-5a14-4d6a-ad2b-746f03792b7f down in Southbound
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.133 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:16 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:16Z|01448|binding|INFO|Removing iface tapa4f55f62-5a ovn-installed in OVS
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.150 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:16 np0005597378 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000086.scope: Deactivated successfully.
Jan 27 09:22:16 np0005597378 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000086.scope: Consumed 3.726s CPU time.
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.188 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9e:43 10.100.0.13'], port_security=['fa:16:3e:40:9e:43 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3200f931-0872-4524-bbd2-c480c1cce88c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22b98830-dbc4-457b-a04e-e9a5507f2880', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1225568176', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '9', 'neutron:security_group_ids': '682970d9-7b43-4d54-988c-bd869bf25c42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.226', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d09327dc-d172-4e92-ad71-bca4adce4888, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a4f55f62-5a14-4d6a-ad2b-746f03792b7f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:22:16 np0005597378 systemd-machined[207425]: Machine qemu-166-instance-00000086 terminated.
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.189 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a4f55f62-5a14-4d6a-ad2b-746f03792b7f in datapath 22b98830-dbc4-457b-a04e-e9a5507f2880 unbound from our chassis#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.191 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22b98830-dbc4-457b-a04e-e9a5507f2880, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.192 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[60d9ad6b-16f4-4293-878a-015d7397463e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.193 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 namespace which is not needed anymore#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.294 238945 INFO nova.virt.libvirt.driver [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Instance destroyed successfully.#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.295 238945 DEBUG nova.objects.instance [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 3200f931-0872-4524-bbd2-c480c1cce88c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:22:16 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [NOTICE]   (362150) : haproxy version is 2.8.14-c23fe91
Jan 27 09:22:16 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [NOTICE]   (362150) : path to executable is /usr/sbin/haproxy
Jan 27 09:22:16 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [WARNING]  (362150) : Exiting Master process...
Jan 27 09:22:16 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [WARNING]  (362150) : Exiting Master process...
Jan 27 09:22:16 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [ALERT]    (362150) : Current worker (362152) exited with code 143 (Terminated)
Jan 27 09:22:16 np0005597378 neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880[362145]: [WARNING]  (362150) : All workers exited. Exiting... (0)
Jan 27 09:22:16 np0005597378 systemd[1]: libpod-bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474.scope: Deactivated successfully.
Jan 27 09:22:16 np0005597378 podman[362379]: 2026-01-27 14:22:16.360876703 +0000 UTC m=+0.066202488 container died bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.416 238945 DEBUG nova.virt.libvirt.vif [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-452791692',display_name='tempest-TestNetworkBasicOps-server-452791692',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-452791692',id=134,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOeubEW8B0i8bBUu2pGROeTiLW5GReoN5OSwvpU7vKKFdGW1nyCUBEb7ktILgYyUMXV9X5Vx1h/TjV6I9zaHk/IHBPbNMgX0M6RqoQybzvDWb9VdAI+sqAiPz/IhRuMvMQ==',key_name='tempest-TestNetworkBasicOps-872581748',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:22:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-3da2hs4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:22:13Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=3200f931-0872-4524-bbd2-c480c1cce88c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.419 238945 DEBUG nova.network.os_vif_util [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "address": "fa:16:3e:40:9e:43", "network": {"id": "22b98830-dbc4-457b-a04e-e9a5507f2880", "bridge": "br-int", "label": "tempest-network-smoke--1935121230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f55f62-5a", "ovs_interfaceid": "a4f55f62-5a14-4d6a-ad2b-746f03792b7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.420 238945 DEBUG nova.network.os_vif_util [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.421 238945 DEBUG os_vif [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.423 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:16 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474-userdata-shm.mount: Deactivated successfully.
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.424 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4f55f62-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:16 np0005597378 systemd[1]: var-lib-containers-storage-overlay-94d6be7271f1154f4ac4b0fc99db12404ab835ec4bc6c2926933a33539b47e60-merged.mount: Deactivated successfully.
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.429 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.430 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.431 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.431 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.432 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.433 238945 INFO nova.compute.manager [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Terminating instance#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.435 238945 DEBUG nova.compute.manager [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.435 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.438 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.441 238945 INFO os_vif [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9e:43,bridge_name='br-int',has_traffic_filtering=True,id=a4f55f62-5a14-4d6a-ad2b-746f03792b7f,network=Network(22b98830-dbc4-457b-a04e-e9a5507f2880),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa4f55f62-5a')#033[00m
Jan 27 09:22:16 np0005597378 podman[362379]: 2026-01-27 14:22:16.486895646 +0000 UTC m=+0.192221431 container cleanup bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 09:22:16 np0005597378 systemd[1]: libpod-conmon-bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474.scope: Deactivated successfully.
Jan 27 09:22:16 np0005597378 kernel: tap09c77aca-6d (unregistering): left promiscuous mode
Jan 27 09:22:16 np0005597378 NetworkManager[48904]: <info>  [1769523736.5299] device (tap09c77aca-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.538 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:16 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:16Z|01449|binding|INFO|Releasing lport 09c77aca-6ddf-4429-a493-6659c2468c83 from this chassis (sb_readonly=0)
Jan 27 09:22:16 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:16Z|01450|binding|INFO|Setting lport 09c77aca-6ddf-4429-a493-6659c2468c83 down in Southbound
Jan 27 09:22:16 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:16Z|01451|binding|INFO|Removing iface tap09c77aca-6d ovn-installed in OVS
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.541 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.548 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:fd:e4 10.100.0.14'], port_security=['fa:16:3e:dc:fd:e4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9588e56d-325a-44ac-b589-16da13fbcc3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-820c1fd4-2071-45df-974d-54892e70889b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '649dea99-5b61-4f66-9587-d172de12a07d c497b409-cdfa-4ad1-9b57-9f3c97ba8246', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ababd73-2b6f-4f89-98d3-56671274acc6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=09c77aca-6ddf-4429-a493-6659c2468c83) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:16 np0005597378 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Deactivated successfully.
Jan 27 09:22:16 np0005597378 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Consumed 19.653s CPU time.
Jan 27 09:22:16 np0005597378 systemd-machined[207425]: Machine qemu-162-instance-00000082 terminated.
Jan 27 09:22:16 np0005597378 podman[362430]: 2026-01-27 14:22:16.63728937 +0000 UTC m=+0.124920645 container remove bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.644 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7117103b-4a20-4a45-9063-4d5db0b02a58]: (4, ('Tue Jan 27 02:22:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 (bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474)\nbb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474\nTue Jan 27 02:22:16 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 (bb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474)\nbb6d526afc558218b3a1cb6032bc24d29e4ad5e68df85b7dbcd43b1d509c0474\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.646 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[463f3f20-4e24-403f-941e-bc4553ba0161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.648 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22b98830-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.650 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:16 np0005597378 kernel: tap22b98830-d0: left promiscuous mode
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.671 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.674 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0384cc37-169f-426f-9848-09e990f6eca1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:16 np0005597378 NetworkManager[48904]: <info>  [1769523736.6791] manager: (tap09c77aca-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/594)
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.693 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb8a28d-d39b-4040-864e-0b5a9b314935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.695 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[207a7cfd-adbf-47ab-b87d-395b0c0a94f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.694 238945 INFO nova.virt.libvirt.driver [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Instance destroyed successfully.#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.694 238945 DEBUG nova.objects.instance [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid 9588e56d-325a-44ac-b589-16da13fbcc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.713 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b802a724-3649-4794-bf7d-aa359ef8a7c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637197, 'reachable_time': 21695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362505, 'error': None, 'target': 'ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.715 238945 DEBUG nova.virt.libvirt.vif [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:20:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1288478698',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=130,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3+gqlVzG9h7jXhfyoskTs2NCm6wAB3wVDlwONrKb4mWpkwLIK+XxA+6h41JzRCoN6TybE0DPiUgsj35t6yTYW/Hd7vrF1apMuU/h4HUaTJzVzqD1e3yepTjEIwWfGCDQ==',key_name='tempest-TestSecurityGroupsBasicOps-931992880',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:20:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-msmno0o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:20:33Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=9588e56d-325a-44ac-b589-16da13fbcc3d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.716 238945 DEBUG nova.network.os_vif_util [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.717 238945 DEBUG nova.network.os_vif_util [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.717 238945 DEBUG os_vif [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.718 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-22b98830-dbc4-457b-a04e-e9a5507f2880 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:22:16 np0005597378 systemd[1]: run-netns-ovnmeta\x2d22b98830\x2ddbc4\x2d457b\x2da04e\x2de9a5507f2880.mount: Deactivated successfully.
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.718 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c1fa08-9498-4963-83a8-67e1b9fa1ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.719 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.719 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 09c77aca-6ddf-4429-a493-6659c2468c83 in datapath 820c1fd4-2071-45df-974d-54892e70889b unbound from our chassis#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.719 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09c77aca-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.721 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.722 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 820c1fd4-2071-45df-974d-54892e70889b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.722 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[faa2d05f-8e2a-45d9-b209-8094ef9180c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:16 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:16.724 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-820c1fd4-2071-45df-974d-54892e70889b namespace which is not needed anymore#033[00m
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.724 238945 INFO os_vif [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:fd:e4,bridge_name='br-int',has_traffic_filtering=True,id=09c77aca-6ddf-4429-a493-6659c2468c83,network=Network(820c1fd4-2071-45df-974d-54892e70889b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09c77aca-6d')#033[00m
Jan 27 09:22:16 np0005597378 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [NOTICE]   (358661) : haproxy version is 2.8.14-c23fe91
Jan 27 09:22:16 np0005597378 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [NOTICE]   (358661) : path to executable is /usr/sbin/haproxy
Jan 27 09:22:16 np0005597378 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [WARNING]  (358661) : Exiting Master process...
Jan 27 09:22:16 np0005597378 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [ALERT]    (358661) : Current worker (358663) exited with code 143 (Terminated)
Jan 27 09:22:16 np0005597378 neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b[358657]: [WARNING]  (358661) : All workers exited. Exiting... (0)
Jan 27 09:22:16 np0005597378 systemd[1]: libpod-48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557.scope: Deactivated successfully.
Jan 27 09:22:16 np0005597378 podman[362546]: 2026-01-27 14:22:16.969402046 +0000 UTC m=+0.151793164 container died 48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:22:16 np0005597378 nova_compute[238941]: 2026-01-27 14:22:16.998 238945 DEBUG nova.network.neutron [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Successfully updated port: 34c6ae80-2857-4eb0-8a36-b7866038913b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.019 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.019 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.019 238945 DEBUG nova.network.neutron [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:22:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 261 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.9 MiB/s wr, 160 op/s
Jan 27 09:22:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:22:17
Jan 27 09:22:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:22:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:22:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'default.rgw.meta', '.mgr', 'backups', 'default.rgw.control', 'volumes', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data']
Jan 27 09:22:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.203 238945 DEBUG nova.network.neutron [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.285 238945 DEBUG nova.compute.manager [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-changed-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.285 238945 DEBUG nova.compute.manager [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Refreshing instance network info cache due to event network-changed-34c6ae80-2857-4eb0-8a36-b7866038913b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.285 238945 DEBUG oslo_concurrency.lockutils [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.347 238945 DEBUG nova.compute.manager [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-changed-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.348 238945 DEBUG nova.compute.manager [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Refreshing instance network info cache due to event network-changed-09c77aca-6ddf-4429-a493-6659c2468c83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.348 238945 DEBUG oslo_concurrency.lockutils [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.348 238945 DEBUG oslo_concurrency.lockutils [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.348 238945 DEBUG nova.network.neutron [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Refreshing network info cache for port 09c77aca-6ddf-4429-a493-6659c2468c83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.401 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.402 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.402 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 27 09:22:17 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557-userdata-shm.mount: Deactivated successfully.
Jan 27 09:22:17 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f2fc1971e69b0afb800322f8c5fff8c16f85eb53b98fe3874cad2ffac52b1fb4-merged.mount: Deactivated successfully.
Jan 27 09:22:17 np0005597378 podman[362546]: 2026-01-27 14:22:17.656210527 +0000 UTC m=+0.838601625 container cleanup 48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:22:17 np0005597378 systemd[1]: libpod-conmon-48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557.scope: Deactivated successfully.
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.775 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.775 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.776 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.776 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 11a944d0-c529-462a-a12d-95eadb9446a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.785 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:22:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:22:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:22:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:22:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:22:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:22:17 np0005597378 podman[362591]: 2026-01-27 14:22:17.872069318 +0000 UTC m=+0.177272782 container remove 48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:22:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.878 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[134cf808-6bbf-4c00-bc56-a698742af3cf]: (4, ('Tue Jan 27 02:22:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b (48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557)\n48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557\nTue Jan 27 02:22:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-820c1fd4-2071-45df-974d-54892e70889b (48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557)\n48cf5e15f899d01b54f2e81f81e35496b05b512ac8eb31b8b8fb147471c7f557\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.880 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[45b64bd8-0c04-451c-b481-b83e8058506d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.881 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap820c1fd4-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.882 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:17 np0005597378 kernel: tap820c1fd4-20: left promiscuous mode
Jan 27 09:22:17 np0005597378 nova_compute[238941]: 2026-01-27 14:22:17.896 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.899 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41893c65-e72d-47d4-8db8-39ed2cdfe6f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.912 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8daf6bb5-4f3e-4dc7-b088-fd929ffd5650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.914 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[240a48a6-18ce-4c80-982f-744ec92578d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.930 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[491c0b13-5f02-4091-afcd-aec3126a9d39]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627211, 'reachable_time': 31077, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362623, 'error': None, 'target': 'ovnmeta-820c1fd4-2071-45df-974d-54892e70889b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.933 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-820c1fd4-2071-45df-974d-54892e70889b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:22:17 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:17.933 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[cee046c7-efc5-4bfc-8ce3-0c80782eb67e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:17 np0005597378 systemd[1]: run-netns-ovnmeta\x2d820c1fd4\x2d2071\x2d45df\x2d974d\x2d54892e70889b.mount: Deactivated successfully.
Jan 27 09:22:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:22:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:22:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:22:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:22:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:22:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:22:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:22:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:22:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:22:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:22:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:22:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:22:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:22:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:22:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:22:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:22:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:22:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:22:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:22:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:22:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:22:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:22:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:22:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:22:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:22:18 np0005597378 podman[362686]: 2026-01-27 14:22:18.492791287 +0000 UTC m=+0.023838328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.666 238945 DEBUG nova.network.neutron [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updating instance_info_cache with network_info: [{"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:18 np0005597378 podman[362686]: 2026-01-27 14:22:18.68700216 +0000 UTC m=+0.218049161 container create 15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.691 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.691 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Instance network_info: |[{"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.691 238945 DEBUG oslo_concurrency.lockutils [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.691 238945 DEBUG nova.network.neutron [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Refreshing network info cache for port 34c6ae80-2857-4eb0-8a36-b7866038913b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.695 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Start _get_guest_xml network_info=[{"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.704 238945 WARNING nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.712 238945 DEBUG nova.virt.libvirt.host [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.713 238945 DEBUG nova.virt.libvirt.host [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.717 238945 DEBUG nova.virt.libvirt.host [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.717 238945 DEBUG nova.virt.libvirt.host [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.718 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.718 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.718 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.719 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.719 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.719 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.719 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.719 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.720 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.720 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.720 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.720 238945 DEBUG nova.virt.hardware [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:22:18 np0005597378 nova_compute[238941]: 2026-01-27 14:22:18.723 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:18 np0005597378 systemd[1]: Started libpod-conmon-15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa.scope.
Jan 27 09:22:18 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:22:18 np0005597378 podman[362686]: 2026-01-27 14:22:18.896565613 +0000 UTC m=+0.427612634 container init 15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 27 09:22:18 np0005597378 podman[362686]: 2026-01-27 14:22:18.905220174 +0000 UTC m=+0.436267185 container start 15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 09:22:18 np0005597378 vigorous_wu[362703]: 167 167
Jan 27 09:22:18 np0005597378 systemd[1]: libpod-15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa.scope: Deactivated successfully.
Jan 27 09:22:18 np0005597378 podman[362686]: 2026-01-27 14:22:18.972558522 +0000 UTC m=+0.503605533 container attach 15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:22:18 np0005597378 podman[362686]: 2026-01-27 14:22:18.973874087 +0000 UTC m=+0.504921078 container died 15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:22:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 261 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.3 MiB/s wr, 137 op/s
Jan 27 09:22:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fa5b89c0b9cc16c2a82fec11a0777e732d6dbb089efe0f3f7f9205688a18de77-merged.mount: Deactivated successfully.
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.220 238945 DEBUG nova.network.neutron [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updated VIF entry in instance network info cache for port 09c77aca-6ddf-4429-a493-6659c2468c83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.221 238945 DEBUG nova.network.neutron [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updating instance_info_cache with network_info: [{"id": "09c77aca-6ddf-4429-a493-6659c2468c83", "address": "fa:16:3e:dc:fd:e4", "network": {"id": "820c1fd4-2071-45df-974d-54892e70889b", "bridge": "br-int", "label": "tempest-network-smoke--1857688", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09c77aca-6d", "ovs_interfaceid": "09c77aca-6ddf-4429-a493-6659c2468c83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.245 238945 DEBUG oslo_concurrency.lockutils [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9588e56d-325a-44ac-b589-16da13fbcc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.245 238945 DEBUG nova.compute.manager [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received event network-vif-unplugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.246 238945 DEBUG oslo_concurrency.lockutils [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.246 238945 DEBUG oslo_concurrency.lockutils [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.246 238945 DEBUG oslo_concurrency.lockutils [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.246 238945 DEBUG nova.compute.manager [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] No waiting events found dispatching network-vif-unplugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.246 238945 DEBUG nova.compute.manager [req-e187e7b5-51d9-4184-8487-51b7f4dc96ba req-9dfe9c28-0bc7-49ca-9625-a5442d2730ce 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received event network-vif-unplugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:22:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:22:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713289761' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.306 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.328 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.331 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.427 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.428 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.428 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.428 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.428 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] No waiting events found dispatching network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.428 238945 WARNING nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Received unexpected event network-vif-plugged-a4f55f62-5a14-4d6a-ad2b-746f03792b7f for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.429 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-vif-unplugged-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.429 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.429 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.429 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.429 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] No waiting events found dispatching network-vif-unplugged-09c77aca-6ddf-4429-a493-6659c2468c83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.430 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-vif-unplugged-09c77aca-6ddf-4429-a493-6659c2468c83 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.430 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.430 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.430 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.430 238945 DEBUG oslo_concurrency.lockutils [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.430 238945 DEBUG nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] No waiting events found dispatching network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.431 238945 WARNING nova.compute.manager [req-44d48d53-4b16-49d8-8b37-b938667174da req-7fa10ef8-0ffc-40da-9637-cb5ab245ecee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received unexpected event network-vif-plugged-09c77aca-6ddf-4429-a493-6659c2468c83 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:22:19 np0005597378 podman[362686]: 2026-01-27 14:22:19.595191571 +0000 UTC m=+1.126238562 container remove 15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 09:22:19 np0005597378 systemd[1]: libpod-conmon-15d82240af0884b568ec879e557709c5400ef29ad2c1b7733fdc390b85355caa.scope: Deactivated successfully.
Jan 27 09:22:19 np0005597378 podman[362786]: 2026-01-27 14:22:19.779023187 +0000 UTC m=+0.026389055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:22:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:22:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/898390222' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.970 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.972 238945 DEBUG nova.virt.libvirt.vif [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-393288026',display_name='tempest-TestGettingAddress-server-393288026',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-393288026',id=135,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7a94wbNcLIbMfLEVNT6Ywm0J4ThULf73rY3K1P6XjqcssI9mA4y0O7yNfHrJGIFW0sC/dFlfPPTPQC2MVSL5Gn/qBz15xmk9J+UY35a4PiIK9ZhxSE3A/WnGcgECUPew==',key_name='tempest-TestGettingAddress-1065796456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-svgilpev',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:13Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ddccc961-2581-4996-9b3f-b29ebc1c25d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.972 238945 DEBUG nova.network.os_vif_util [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.973 238945 DEBUG nova.network.os_vif_util [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.974 238945 DEBUG nova.objects.instance [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid ddccc961-2581-4996-9b3f-b29ebc1c25d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:22:19 np0005597378 podman[362786]: 2026-01-27 14:22:19.996444901 +0000 UTC m=+0.243810729 container create 3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 09:22:19 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.996 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:22:19 np0005597378 nova_compute[238941]:  <uuid>ddccc961-2581-4996-9b3f-b29ebc1c25d5</uuid>
Jan 27 09:22:19 np0005597378 nova_compute[238941]:  <name>instance-00000087</name>
Jan 27 09:22:19 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:22:19 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:22:19 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:22:19 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:22:19 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:22:19 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-393288026</nova:name>
Jan 27 09:22:19 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:22:18</nova:creationTime>
Jan 27 09:22:19 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:22:19 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:22:19 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:22:19 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:        <nova:port uuid="34c6ae80-2857-4eb0-8a36-b7866038913b">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe88:47d3" ipVersion="6"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe88:47d3" ipVersion="6"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <entry name="serial">ddccc961-2581-4996-9b3f-b29ebc1c25d5</entry>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <entry name="uuid">ddccc961-2581-4996-9b3f-b29ebc1c25d5</entry>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk.config">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:88:47:d3"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <target dev="tap34c6ae80-28"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/console.log" append="off"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:22:20 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:22:20 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:22:20 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:22:20 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.999 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Preparing to wait for external event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:19.999 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.000 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.004 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.005 238945 DEBUG nova.virt.libvirt.vif [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-393288026',display_name='tempest-TestGettingAddress-server-393288026',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-393288026',id=135,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7a94wbNcLIbMfLEVNT6Ywm0J4ThULf73rY3K1P6XjqcssI9mA4y0O7yNfHrJGIFW0sC/dFlfPPTPQC2MVSL5Gn/qBz15xmk9J+UY35a4PiIK9ZhxSE3A/WnGcgECUPew==',key_name='tempest-TestGettingAddress-1065796456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-svgilpev',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:13Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ddccc961-2581-4996-9b3f-b29ebc1c25d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.006 238945 DEBUG nova.network.os_vif_util [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.007 238945 DEBUG nova.network.os_vif_util [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.007 238945 DEBUG os_vif [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.013 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.014 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.014 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.021 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.022 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34c6ae80-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.022 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap34c6ae80-28, col_values=(('external_ids', {'iface-id': '34c6ae80-2857-4eb0-8a36-b7866038913b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:47:d3', 'vm-uuid': 'ddccc961-2581-4996-9b3f-b29ebc1c25d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.024 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:20 np0005597378 NetworkManager[48904]: <info>  [1769523740.0256] manager: (tap34c6ae80-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/595)
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.026 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.032 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.032 238945 INFO os_vif [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28')#033[00m
Jan 27 09:22:20 np0005597378 systemd[1]: Started libpod-conmon-3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8.scope.
Jan 27 09:22:20 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:22:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dd429e1b9f919b80d426863750b37ee71807255a3806081e98b9acb15437f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dd429e1b9f919b80d426863750b37ee71807255a3806081e98b9acb15437f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dd429e1b9f919b80d426863750b37ee71807255a3806081e98b9acb15437f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dd429e1b9f919b80d426863750b37ee71807255a3806081e98b9acb15437f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:20 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dd429e1b9f919b80d426863750b37ee71807255a3806081e98b9acb15437f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.364 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.365 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.366 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:88:47:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.366 238945 INFO nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Using config drive#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.393 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.648 238945 INFO nova.virt.libvirt.driver [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Deleting instance files /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c_del#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.910 238945 INFO nova.virt.libvirt.driver [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Deletion of /var/lib/nova/instances/3200f931-0872-4524-bbd2-c480c1cce88c_del complete#033[00m
Jan 27 09:22:20 np0005597378 nova_compute[238941]: 2026-01-27 14:22:20.915 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updating instance_info_cache with network_info: [{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:21 np0005597378 podman[362786]: 2026-01-27 14:22:21.015469569 +0000 UTC m=+1.262835417 container init 3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 27 09:22:21 np0005597378 podman[362786]: 2026-01-27 14:22:21.028881797 +0000 UTC m=+1.276247655 container start 3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.051 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.052 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.053 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:22:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 200 op/s
Jan 27 09:22:21 np0005597378 podman[362786]: 2026-01-27 14:22:21.171089564 +0000 UTC m=+1.418455462 container attach 3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.220 238945 INFO nova.compute.manager [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Took 5.17 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.220 238945 DEBUG oslo.service.loopingcall [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.221 238945 DEBUG nova.compute.manager [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.221 238945 DEBUG nova.network.neutron [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.255 238945 WARNING nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] While synchronizing instance power states, found 4 instances in the database and 3 instances on the hypervisor.#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.256 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 9588e56d-325a-44ac-b589-16da13fbcc3d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.256 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 11a944d0-c529-462a-a12d-95eadb9446a8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.256 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid 3200f931-0872-4524-bbd2-c480c1cce88c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.256 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Triggering sync for uuid ddccc961-2581-4996-9b3f-b29ebc1c25d5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.256 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "9588e56d-325a-44ac-b589-16da13fbcc3d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.256 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.257 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "11a944d0-c529-462a-a12d-95eadb9446a8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.257 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "3200f931-0872-4524-bbd2-c480c1cce88c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.257 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.281 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "11a944d0-c529-462a-a12d-95eadb9446a8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:21 np0005597378 gifted_thompson[362807]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:22:21 np0005597378 gifted_thompson[362807]: --> All data devices are unavailable
Jan 27 09:22:21 np0005597378 systemd[1]: libpod-3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8.scope: Deactivated successfully.
Jan 27 09:22:21 np0005597378 podman[362786]: 2026-01-27 14:22:21.540641607 +0000 UTC m=+1.788007435 container died 3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.909 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523726.908318, 3a25c695-bd44-4d88-b931-920b89c75a4d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.910 238945 INFO nova.compute.manager [-] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.921 238945 INFO nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Creating config drive at /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/disk.config#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.925 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_d_467e6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.973 238945 DEBUG nova.network.neutron [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updated VIF entry in instance network info cache for port 34c6ae80-2857-4eb0-8a36-b7866038913b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.974 238945 DEBUG nova.network.neutron [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updating instance_info_cache with network_info: [{"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.977 238945 DEBUG nova.compute.manager [None req-77e35aaf-0dce-4cb8-a049-7cd2f8f2a6cb - - - - - -] [instance: 3a25c695-bd44-4d88-b931-920b89c75a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:22:21 np0005597378 nova_compute[238941]: 2026-01-27 14:22:21.994 238945 DEBUG oslo_concurrency.lockutils [req-03e22142-e9b1-4450-bfcf-f459bf3ced10 req-b89572af-98c1-401f-9dcc-46973aeca2e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:22:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8a8dd429e1b9f919b80d426863750b37ee71807255a3806081e98b9acb15437f-merged.mount: Deactivated successfully.
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.083 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_d_467e6" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.114 238945 DEBUG nova.storage.rbd_utils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.119 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/disk.config ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.303 238945 INFO nova.virt.libvirt.driver [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Deleting instance files /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d_del#033[00m
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.304 238945 INFO nova.virt.libvirt.driver [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Deletion of /var/lib/nova/instances/9588e56d-325a-44ac-b589-16da13fbcc3d_del complete#033[00m
Jan 27 09:22:22 np0005597378 podman[362786]: 2026-01-27 14:22:22.352390913 +0000 UTC m=+2.599756761 container remove 3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_thompson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.378 238945 INFO nova.compute.manager [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Took 5.94 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.378 238945 DEBUG oslo.service.loopingcall [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.378 238945 DEBUG nova.compute.manager [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.379 238945 DEBUG nova.network.neutron [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.401 238945 DEBUG nova.network.neutron [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:22 np0005597378 systemd[1]: libpod-conmon-3bc3bc057fefd22bc52ed9b2c170490ea8841a4b8b4ff1abd6468af2cd9e39d8.scope: Deactivated successfully.
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.428 238945 INFO nova.compute.manager [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Took 1.21 seconds to deallocate network for instance.#033[00m
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.478 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.479 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.568 238945 DEBUG oslo_concurrency.processutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.614 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:22:22 np0005597378 nova_compute[238941]: 2026-01-27 14:22:22.786 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:22 np0005597378 podman[362981]: 2026-01-27 14:22:22.90712822 +0000 UTC m=+0.116628174 container create 8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:22:22 np0005597378 podman[362981]: 2026-01-27 14:22:22.81983242 +0000 UTC m=+0.029332334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.068 238945 DEBUG oslo_concurrency.processutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/disk.config ddccc961-2581-4996-9b3f-b29ebc1c25d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.949s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.069 238945 INFO nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Deleting local config drive /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5/disk.config because it was imported into RBD.#033[00m
Jan 27 09:22:23 np0005597378 systemd[1]: Started libpod-conmon-8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789.scope.
Jan 27 09:22:23 np0005597378 kernel: tap34c6ae80-28: entered promiscuous mode
Jan 27 09:22:23 np0005597378 NetworkManager[48904]: <info>  [1769523743.1187] manager: (tap34c6ae80-28): new Tun device (/org/freedesktop/NetworkManager/Devices/596)
Jan 27 09:22:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:23Z|01452|binding|INFO|Claiming lport 34c6ae80-2857-4eb0-8a36-b7866038913b for this chassis.
Jan 27 09:22:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:23Z|01453|binding|INFO|34c6ae80-2857-4eb0-8a36-b7866038913b: Claiming fa:16:3e:88:47:d3 10.100.0.9 2001:db8:0:1:f816:3eff:fe88:47d3 2001:db8::f816:3eff:fe88:47d3
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:23 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.135 238945 DEBUG nova.network.neutron [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.141 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:47:d3 10.100.0.9 2001:db8:0:1:f816:3eff:fe88:47d3 2001:db8::f816:3eff:fe88:47d3'], port_security=['fa:16:3e:88:47:d3 10.100.0.9 2001:db8:0:1:f816:3eff:fe88:47d3 2001:db8::f816:3eff:fe88:47d3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fe88:47d3/64 2001:db8::f816:3eff:fe88:47d3/64', 'neutron:device_id': 'ddccc961-2581-4996-9b3f-b29ebc1c25d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9e686e0d-68d4-4db8-8131-d0b7de93a06f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce356286-a5f0-495b-b6ce-1c56da2724d0, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=34c6ae80-2857-4eb0-8a36-b7866038913b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:22:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.144 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 34c6ae80-2857-4eb0-8a36-b7866038913b in datapath fadddb78-26b2-452e-a680-4fa4490a9885 bound to our chassis#033[00m
Jan 27 09:22:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:23Z|01454|binding|INFO|Setting lport 34c6ae80-2857-4eb0-8a36-b7866038913b ovn-installed in OVS
Jan 27 09:22:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:23Z|01455|binding|INFO|Setting lport 34c6ae80-2857-4eb0-8a36-b7866038913b up in Southbound
Jan 27 09:22:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.146 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fadddb78-26b2-452e-a680-4fa4490a9885#033[00m
Jan 27 09:22:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/974001296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.146 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.150 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:23 np0005597378 systemd-machined[207425]: New machine qemu-167-instance-00000087.
Jan 27 09:22:23 np0005597378 systemd[1]: Started Virtual Machine qemu-167-instance-00000087.
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.166 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[37372fab-94e6-4dc5-9084-92f93b222b9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:23 np0005597378 systemd-udevd[363017]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.179 238945 DEBUG oslo_concurrency.processutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:23 np0005597378 NetworkManager[48904]: <info>  [1769523743.1881] device (tap34c6ae80-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:22:23 np0005597378 NetworkManager[48904]: <info>  [1769523743.1889] device (tap34c6ae80-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.190 238945 DEBUG nova.compute.provider_tree [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.198 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0811cb37-d811-456d-af08-ff583ff600d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.201 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd1e71d-693d-4af1-a97b-5d685e1c4963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.215 238945 INFO nova.compute.manager [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Took 0.84 seconds to deallocate network for instance.#033[00m
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.232 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[475bd899-c99b-4397-9098-da29cedc15e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.251 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1bd980-6601-4d53-9639-48cb093ffff7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfadddb78-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:62:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 24, 'tx_packets': 6, 'rx_bytes': 2000, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 24, 'tx_packets': 6, 'rx_bytes': 2000, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634965, 'reachable_time': 35390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363029, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.262 238945 DEBUG nova.scheduler.client.report [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.268 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[84b5dad1-2efb-4d73-83c2-d0052ab640e9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfadddb78-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634978, 'tstamp': 634978}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363030, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfadddb78-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634980, 'tstamp': 634980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363030, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.270 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfadddb78-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.272 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.273 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.273 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfadddb78-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.273 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.274 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfadddb78-20, col_values=(('external_ids', {'iface-id': '2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:22:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:23.274 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:22:23 np0005597378 podman[362981]: 2026-01-27 14:22:23.323203646 +0000 UTC m=+0.532703600 container init 8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:22:23 np0005597378 podman[362981]: 2026-01-27 14:22:23.339653045 +0000 UTC m=+0.549152949 container start 8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:22:23 np0005597378 compassionate_cerf[363003]: 167 167
Jan 27 09:22:23 np0005597378 systemd[1]: libpod-8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789.scope: Deactivated successfully.
Jan 27 09:22:23 np0005597378 podman[362981]: 2026-01-27 14:22:23.365113495 +0000 UTC m=+0.574613429 container attach 8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:22:23 np0005597378 podman[362981]: 2026-01-27 14:22:23.36567297 +0000 UTC m=+0.575172874 container died 8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.378 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.383 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.383 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.411 238945 INFO nova.scheduler.client.report [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 3200f931-0872-4524-bbd2-c480c1cce88c
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.451 238945 DEBUG nova.compute.manager [req-a3fa8640-e0c5-4727-b07c-b4b82a504185 req-d9bdf513-58fe-4634-b10e-9f9cb1200277 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Received event network-vif-deleted-09c77aca-6ddf-4429-a493-6659c2468c83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.470 238945 DEBUG oslo_concurrency.lockutils [None req-23c0283a-86f9-478c-b704-e67aa0c0bb93 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3200f931-0872-4524-bbd2-c480c1cce88c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.471 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "3200f931-0872-4524-bbd2-c480c1cce88c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.471 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.471 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "3200f931-0872-4524-bbd2-c480c1cce88c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.476 238945 DEBUG oslo_concurrency.processutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.545 238945 DEBUG nova.compute.manager [req-60d56010-3afb-4255-a592-83273b889434 req-155378e8-dd39-4641-ad60-1d4968e91588 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.545 238945 DEBUG oslo_concurrency.lockutils [req-60d56010-3afb-4255-a592-83273b889434 req-155378e8-dd39-4641-ad60-1d4968e91588 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.546 238945 DEBUG oslo_concurrency.lockutils [req-60d56010-3afb-4255-a592-83273b889434 req-155378e8-dd39-4641-ad60-1d4968e91588 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.546 238945 DEBUG oslo_concurrency.lockutils [req-60d56010-3afb-4255-a592-83273b889434 req-155378e8-dd39-4641-ad60-1d4968e91588 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.546 238945 DEBUG nova.compute.manager [req-60d56010-3afb-4255-a592-83273b889434 req-155378e8-dd39-4641-ad60-1d4968e91588 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Processing event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.722 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.723 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523743.721725, ddccc961-2581-4996-9b3f-b29ebc1c25d5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.724 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] VM Started (Lifecycle Event)
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.727 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.730 238945 INFO nova.virt.libvirt.driver [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Instance spawned successfully.
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.730 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 09:22:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9df16dc39001ec404e0848294614250c78b132a2c86eb6b9552ccda96ffd6a2a-merged.mount: Deactivated successfully.
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.752 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.755 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.791 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.792 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.792 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.793 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.793 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.793 238945 DEBUG nova.virt.libvirt.driver [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.812 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.812 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523743.722789, ddccc961-2581-4996-9b3f-b29ebc1c25d5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.812 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] VM Paused (Lifecycle Event)
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.843 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.847 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523743.726616, ddccc961-2581-4996-9b3f-b29ebc1c25d5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.847 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] VM Resumed (Lifecycle Event)
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.873 238945 INFO nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Took 9.79 seconds to spawn the instance on the hypervisor.
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.874 238945 DEBUG nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.875 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.881 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.917 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.935 238945 INFO nova.compute.manager [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Took 11.04 seconds to build instance.
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.950 238945 DEBUG oslo_concurrency.lockutils [None req-2b25621a-44c7-4a1a-8696-d653f6c9f798 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.950 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.951 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:22:23 np0005597378 nova_compute[238941]: 2026-01-27 14:22:23.951 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:24 np0005597378 podman[362981]: 2026-01-27 14:22:23.997527824 +0000 UTC m=+1.207027728 container remove 8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:22:24 np0005597378 systemd[1]: libpod-conmon-8b2aec97e16af11d843aa0f77781aec21027518af8e71f7f5c79b70bc2686789.scope: Deactivated successfully.
Jan 27 09:22:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:22:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2497958174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:22:24 np0005597378 nova_compute[238941]: 2026-01-27 14:22:24.051 238945 DEBUG oslo_concurrency.processutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:22:24 np0005597378 nova_compute[238941]: 2026-01-27 14:22:24.056 238945 DEBUG nova.compute.provider_tree [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:22:24 np0005597378 nova_compute[238941]: 2026-01-27 14:22:24.082 238945 DEBUG nova.scheduler.client.report [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:22:24 np0005597378 nova_compute[238941]: 2026-01-27 14:22:24.109 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:24 np0005597378 nova_compute[238941]: 2026-01-27 14:22:24.136 238945 INFO nova.scheduler.client.report [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance 9588e56d-325a-44ac-b589-16da13fbcc3d
Jan 27 09:22:24 np0005597378 nova_compute[238941]: 2026-01-27 14:22:24.202 238945 DEBUG oslo_concurrency.lockutils [None req-97b6f474-31e4-430d-9fb1-6ccbac7b89f8 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:24 np0005597378 nova_compute[238941]: 2026-01-27 14:22:24.204 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:22:24 np0005597378 nova_compute[238941]: 2026-01-27 14:22:24.205 238945 INFO nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 27 09:22:24 np0005597378 nova_compute[238941]: 2026-01-27 14:22:24.205 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "9588e56d-325a-44ac-b589-16da13fbcc3d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:22:24 np0005597378 podman[363116]: 2026-01-27 14:22:24.183309313 +0000 UTC m=+0.028280875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:22:24 np0005597378 podman[363116]: 2026-01-27 14:22:24.285142702 +0000 UTC m=+0.130114284 container create e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:22:24 np0005597378 systemd[1]: Started libpod-conmon-e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385.scope.
Jan 27 09:22:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:22:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cca6a904fd090cb3d52906b65fb38b510be4c4b13c5840400aa7819be197043/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cca6a904fd090cb3d52906b65fb38b510be4c4b13c5840400aa7819be197043/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cca6a904fd090cb3d52906b65fb38b510be4c4b13c5840400aa7819be197043/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cca6a904fd090cb3d52906b65fb38b510be4c4b13c5840400aa7819be197043/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:24 np0005597378 podman[363116]: 2026-01-27 14:22:24.547549825 +0000 UTC m=+0.392521387 container init e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:22:24 np0005597378 podman[363116]: 2026-01-27 14:22:24.556526204 +0000 UTC m=+0.401497746 container start e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:22:24 np0005597378 podman[363116]: 2026-01-27 14:22:24.673338413 +0000 UTC m=+0.518309985 container attach e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]: {
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:    "0": [
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:        {
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "devices": [
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "/dev/loop3"
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            ],
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_name": "ceph_lv0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_size": "21470642176",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "name": "ceph_lv0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "tags": {
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.cluster_name": "ceph",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.crush_device_class": "",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.encrypted": "0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.objectstore": "bluestore",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.osd_id": "0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.type": "block",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.vdo": "0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.with_tpm": "0"
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            },
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "type": "block",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "vg_name": "ceph_vg0"
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:        }
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:    ],
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:    "1": [
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:        {
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "devices": [
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "/dev/loop4"
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            ],
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_name": "ceph_lv1",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_size": "21470642176",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "name": "ceph_lv1",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "tags": {
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.cluster_name": "ceph",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.crush_device_class": "",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.encrypted": "0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.objectstore": "bluestore",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.osd_id": "1",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.type": "block",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.vdo": "0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.with_tpm": "0"
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            },
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "type": "block",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "vg_name": "ceph_vg1"
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:        }
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:    ],
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:    "2": [
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:        {
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "devices": [
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "/dev/loop5"
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            ],
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_name": "ceph_lv2",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_size": "21470642176",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "name": "ceph_lv2",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "tags": {
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.cluster_name": "ceph",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.crush_device_class": "",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.encrypted": "0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.objectstore": "bluestore",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.osd_id": "2",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.type": "block",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.vdo": "0",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:                "ceph.with_tpm": "0"
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            },
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "type": "block",
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:            "vg_name": "ceph_vg2"
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:        }
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]:    ]
Jan 27 09:22:24 np0005597378 upbeat_ptolemy[363132]: }
Jan 27 09:22:24 np0005597378 systemd[1]: libpod-e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385.scope: Deactivated successfully.
Jan 27 09:22:24 np0005597378 podman[363116]: 2026-01-27 14:22:24.919696169 +0000 UTC m=+0.764667711 container died e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 09:22:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4cca6a904fd090cb3d52906b65fb38b510be4c4b13c5840400aa7819be197043-merged.mount: Deactivated successfully.
Jan 27 09:22:25 np0005597378 nova_compute[238941]: 2026-01-27 14:22:25.026 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 27 09:22:25 np0005597378 podman[363116]: 2026-01-27 14:22:25.190919898 +0000 UTC m=+1.035891440 container remove e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:22:25 np0005597378 systemd[1]: libpod-conmon-e13d8ff0a6f3f6e01945ef4ed1a043017a4a43ef5291d45c87e7a945543f5385.scope: Deactivated successfully.
Jan 27 09:22:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:25 np0005597378 nova_compute[238941]: 2026-01-27 14:22:25.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:22:25 np0005597378 nova_compute[238941]: 2026-01-27 14:22:25.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:22:25 np0005597378 nova_compute[238941]: 2026-01-27 14:22:25.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:22:25 np0005597378 nova_compute[238941]: 2026-01-27 14:22:25.673 238945 DEBUG nova.compute.manager [req-71de0e07-dc6c-42e0-8a6b-a4e5c11e0c4b req-31c7f5e4-8cc1-46b9-aa6d-4ad34131fba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:25 np0005597378 nova_compute[238941]: 2026-01-27 14:22:25.674 238945 DEBUG oslo_concurrency.lockutils [req-71de0e07-dc6c-42e0-8a6b-a4e5c11e0c4b req-31c7f5e4-8cc1-46b9-aa6d-4ad34131fba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:25 np0005597378 nova_compute[238941]: 2026-01-27 14:22:25.675 238945 DEBUG oslo_concurrency.lockutils [req-71de0e07-dc6c-42e0-8a6b-a4e5c11e0c4b req-31c7f5e4-8cc1-46b9-aa6d-4ad34131fba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:25 np0005597378 nova_compute[238941]: 2026-01-27 14:22:25.675 238945 DEBUG oslo_concurrency.lockutils [req-71de0e07-dc6c-42e0-8a6b-a4e5c11e0c4b req-31c7f5e4-8cc1-46b9-aa6d-4ad34131fba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:25 np0005597378 nova_compute[238941]: 2026-01-27 14:22:25.675 238945 DEBUG nova.compute.manager [req-71de0e07-dc6c-42e0-8a6b-a4e5c11e0c4b req-31c7f5e4-8cc1-46b9-aa6d-4ad34131fba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] No waiting events found dispatching network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:22:25 np0005597378 nova_compute[238941]: 2026-01-27 14:22:25.676 238945 WARNING nova.compute.manager [req-71de0e07-dc6c-42e0-8a6b-a4e5c11e0c4b req-31c7f5e4-8cc1-46b9-aa6d-4ad34131fba3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received unexpected event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b for instance with vm_state active and task_state None.#033[00m
Jan 27 09:22:25 np0005597378 podman[363216]: 2026-01-27 14:22:25.725169508 +0000 UTC m=+0.119300276 container create 0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_payne, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:22:25 np0005597378 podman[363216]: 2026-01-27 14:22:25.631975811 +0000 UTC m=+0.026106589 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:22:26 np0005597378 systemd[1]: Started libpod-conmon-0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563.scope.
Jan 27 09:22:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:22:26 np0005597378 podman[363216]: 2026-01-27 14:22:26.127463735 +0000 UTC m=+0.521594503 container init 0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:22:26 np0005597378 podman[363216]: 2026-01-27 14:22:26.13473399 +0000 UTC m=+0.528864748 container start 0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:22:26 np0005597378 sad_payne[363232]: 167 167
Jan 27 09:22:26 np0005597378 systemd[1]: libpod-0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563.scope: Deactivated successfully.
Jan 27 09:22:26 np0005597378 conmon[363232]: conmon 0d5b56b1ae993edf496d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563.scope/container/memory.events
Jan 27 09:22:26 np0005597378 podman[363216]: 2026-01-27 14:22:26.24977661 +0000 UTC m=+0.643907368 container attach 0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_payne, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 09:22:26 np0005597378 podman[363216]: 2026-01-27 14:22:26.250616402 +0000 UTC m=+0.644747180 container died 0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_payne, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:22:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9d9c1a97368f14d46802c0cce61bb6164e0a5879b51760fa8f80e6f49b78f5c0-merged.mount: Deactivated successfully.
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Jan 27 09:22:27 np0005597378 podman[363216]: 2026-01-27 14:22:27.431617565 +0000 UTC m=+1.825748323 container remove 0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_payne, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:22:27 np0005597378 systemd[1]: libpod-conmon-0d5b56b1ae993edf496d2a82b0cb96e3b9ab9e80642bad241bae45c0fe519563.scope: Deactivated successfully.
Jan 27 09:22:27 np0005597378 podman[363257]: 2026-01-27 14:22:27.599589288 +0000 UTC m=+0.023540539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:22:27 np0005597378 podman[363257]: 2026-01-27 14:22:27.736408561 +0000 UTC m=+0.160359732 container create b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 09:22:27 np0005597378 nova_compute[238941]: 2026-01-27 14:22:27.788 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011214015772156847 of space, bias 1.0, pg target 0.3364204731647054 quantized to 32 (current 32)
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693709119421792 of space, bias 1.0, pg target 0.20081127358265374 quantized to 32 (current 32)
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0411115396594896e-06 of space, bias 4.0, pg target 0.0012493338475913875 quantized to 16 (current 16)
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:22:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:22:27 np0005597378 systemd[1]: Started libpod-conmon-b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa.scope.
Jan 27 09:22:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:22:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022aaca9f8e25ff4e77c7ec9c02ff36300982f425f8f0c6bc83b1db2e701fd75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022aaca9f8e25ff4e77c7ec9c02ff36300982f425f8f0c6bc83b1db2e701fd75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022aaca9f8e25ff4e77c7ec9c02ff36300982f425f8f0c6bc83b1db2e701fd75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022aaca9f8e25ff4e77c7ec9c02ff36300982f425f8f0c6bc83b1db2e701fd75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:28 np0005597378 podman[363257]: 2026-01-27 14:22:28.085815366 +0000 UTC m=+0.509766537 container init b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 09:22:28 np0005597378 podman[363257]: 2026-01-27 14:22:28.093118061 +0000 UTC m=+0.517069262 container start b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hugle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 09:22:28 np0005597378 nova_compute[238941]: 2026-01-27 14:22:28.095 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:28.096 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:22:28 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:28.097 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:22:28 np0005597378 podman[363257]: 2026-01-27 14:22:28.397745952 +0000 UTC m=+0.821697123 container attach b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 09:22:28 np0005597378 lvm[363351]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:22:28 np0005597378 lvm[363351]: VG ceph_vg0 finished
Jan 27 09:22:28 np0005597378 lvm[363353]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:22:28 np0005597378 lvm[363353]: VG ceph_vg1 finished
Jan 27 09:22:28 np0005597378 lvm[363355]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:22:28 np0005597378 lvm[363355]: VG ceph_vg2 finished
Jan 27 09:22:28 np0005597378 blissful_hugle[363274]: {}
Jan 27 09:22:28 np0005597378 systemd[1]: libpod-b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa.scope: Deactivated successfully.
Jan 27 09:22:28 np0005597378 systemd[1]: libpod-b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa.scope: Consumed 1.327s CPU time.
Jan 27 09:22:28 np0005597378 podman[363257]: 2026-01-27 14:22:28.993904565 +0000 UTC m=+1.417855776 container died b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hugle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:22:29 np0005597378 nova_compute[238941]: 2026-01-27 14:22:29.046 238945 DEBUG nova.compute.manager [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-changed-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:29 np0005597378 nova_compute[238941]: 2026-01-27 14:22:29.047 238945 DEBUG nova.compute.manager [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Refreshing instance network info cache due to event network-changed-34c6ae80-2857-4eb0-8a36-b7866038913b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:22:29 np0005597378 nova_compute[238941]: 2026-01-27 14:22:29.048 238945 DEBUG oslo_concurrency.lockutils [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:29 np0005597378 nova_compute[238941]: 2026-01-27 14:22:29.048 238945 DEBUG oslo_concurrency.lockutils [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:29 np0005597378 nova_compute[238941]: 2026-01-27 14:22:29.048 238945 DEBUG nova.network.neutron [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Refreshing network info cache for port 34c6ae80-2857-4eb0-8a36-b7866038913b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:22:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 749 KiB/s rd, 1.3 MiB/s wr, 99 op/s
Jan 27 09:22:29 np0005597378 systemd[1]: var-lib-containers-storage-overlay-022aaca9f8e25ff4e77c7ec9c02ff36300982f425f8f0c6bc83b1db2e701fd75-merged.mount: Deactivated successfully.
Jan 27 09:22:30 np0005597378 nova_compute[238941]: 2026-01-27 14:22:30.030 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:30 np0005597378 podman[363257]: 2026-01-27 14:22:30.118240885 +0000 UTC m=+2.542192056 container remove b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 09:22:30 np0005597378 systemd[1]: libpod-conmon-b79004363d96325db5f8c10c7edd7e69e0fb4fc0d6b862a57cafd9d5b65e97aa.scope: Deactivated successfully.
Jan 27 09:22:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:22:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:22:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:22:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:22:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 145 op/s
Jan 27 09:22:31 np0005597378 nova_compute[238941]: 2026-01-27 14:22:31.291 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523736.2910295, 3200f931-0872-4524-bbd2-c480c1cce88c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:22:31 np0005597378 nova_compute[238941]: 2026-01-27 14:22:31.292 238945 INFO nova.compute.manager [-] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:22:31 np0005597378 nova_compute[238941]: 2026-01-27 14:22:31.311 238945 DEBUG nova.compute.manager [None req-dc09689f-7f2d-4769-8b21-30d9d41e9443 - - - - - -] [instance: 3200f931-0872-4524-bbd2-c480c1cce88c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:22:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:31Z|01456|binding|INFO|Releasing lport 2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14 from this chassis (sb_readonly=0)
Jan 27 09:22:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:22:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:22:31 np0005597378 nova_compute[238941]: 2026-01-27 14:22:31.524 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:31 np0005597378 nova_compute[238941]: 2026-01-27 14:22:31.692 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523736.6916518, 9588e56d-325a-44ac-b589-16da13fbcc3d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:22:31 np0005597378 nova_compute[238941]: 2026-01-27 14:22:31.693 238945 INFO nova.compute.manager [-] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:22:31 np0005597378 nova_compute[238941]: 2026-01-27 14:22:31.714 238945 DEBUG nova.compute.manager [None req-e8ee3d49-b832-45bf-b009-209d6ca0616b - - - - - -] [instance: 9588e56d-325a-44ac-b589-16da13fbcc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:22:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:31Z|01457|binding|INFO|Releasing lport 2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14 from this chassis (sb_readonly=0)
Jan 27 09:22:31 np0005597378 nova_compute[238941]: 2026-01-27 14:22:31.799 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:32 np0005597378 nova_compute[238941]: 2026-01-27 14:22:32.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:22:32 np0005597378 nova_compute[238941]: 2026-01-27 14:22:32.789 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 82 op/s
Jan 27 09:22:33 np0005597378 nova_compute[238941]: 2026-01-27 14:22:33.683 238945 DEBUG nova.network.neutron [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updated VIF entry in instance network info cache for port 34c6ae80-2857-4eb0-8a36-b7866038913b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:22:33 np0005597378 nova_compute[238941]: 2026-01-27 14:22:33.684 238945 DEBUG nova.network.neutron [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updating instance_info_cache with network_info: [{"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:33 np0005597378 nova_compute[238941]: 2026-01-27 14:22:33.703 238945 DEBUG oslo_concurrency.lockutils [req-0a624b3f-a3a6-44e1-bd61-ab555e4cd442 req-cb22924a-fa0c-4f16-a00a-ee8b13626271 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:22:35 np0005597378 nova_compute[238941]: 2026-01-27 14:22:35.034 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 82 op/s
Jan 27 09:22:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:35 np0005597378 podman[363395]: 2026-01-27 14:22:35.755211493 +0000 UTC m=+0.066278041 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 09:22:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 166 KiB/s wr, 80 op/s
Jan 27 09:22:37 np0005597378 podman[363416]: 2026-01-27 14:22:37.742709711 +0000 UTC m=+0.084815804 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller)
Jan 27 09:22:37 np0005597378 nova_compute[238941]: 2026-01-27 14:22:37.791 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:38 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:38Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:47:d3 10.100.0.9
Jan 27 09:22:38 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:38Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:47:d3 10.100.0.9
Jan 27 09:22:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:38.099 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.303124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523758303145, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1141, "num_deletes": 251, "total_data_size": 1630186, "memory_usage": 1653336, "flush_reason": "Manual Compaction"}
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523758313814, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1602614, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48614, "largest_seqno": 49754, "table_properties": {"data_size": 1597185, "index_size": 2825, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12062, "raw_average_key_size": 19, "raw_value_size": 1586170, "raw_average_value_size": 2621, "num_data_blocks": 126, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523655, "oldest_key_time": 1769523655, "file_creation_time": 1769523758, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 10738 microseconds, and 4146 cpu microseconds.
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.313857) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1602614 bytes OK
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.313874) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.316200) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.316211) EVENT_LOG_v1 {"time_micros": 1769523758316208, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.316228) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1624876, prev total WAL file size 1624876, number of live WAL files 2.
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.316831) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1565KB)], [113(8559KB)]
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523758316889, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10367062, "oldest_snapshot_seqno": -1}
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7013 keys, 8580118 bytes, temperature: kUnknown
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523758368120, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8580118, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8535211, "index_size": 26299, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17541, "raw_key_size": 182988, "raw_average_key_size": 26, "raw_value_size": 8411978, "raw_average_value_size": 1199, "num_data_blocks": 1021, "num_entries": 7013, "num_filter_entries": 7013, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523758, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.368401) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8580118 bytes
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.370624) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.0 rd, 167.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.4 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 7527, records dropped: 514 output_compression: NoCompression
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.370646) EVENT_LOG_v1 {"time_micros": 1769523758370636, "job": 68, "event": "compaction_finished", "compaction_time_micros": 51310, "compaction_time_cpu_micros": 21234, "output_level": 6, "num_output_files": 1, "total_output_size": 8580118, "num_input_records": 7527, "num_output_records": 7013, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523758371039, "job": 68, "event": "table_file_deletion", "file_number": 115}
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523758372343, "job": 68, "event": "table_file_deletion", "file_number": 113}
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.316703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.372416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.372421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.372422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.372424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:22:38 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:22:38.372425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:22:39 np0005597378 nova_compute[238941]: 2026-01-27 14:22:39.084 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:39 np0005597378 nova_compute[238941]: 2026-01-27 14:22:39.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 167 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 166 KiB/s wr, 53 op/s
Jan 27 09:22:40 np0005597378 nova_compute[238941]: 2026-01-27 14:22:40.036 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 200 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Jan 27 09:22:42 np0005597378 nova_compute[238941]: 2026-01-27 14:22:42.792 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 200 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 09:22:43 np0005597378 nova_compute[238941]: 2026-01-27 14:22:43.216 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:43 np0005597378 nova_compute[238941]: 2026-01-27 14:22:43.502 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:45 np0005597378 nova_compute[238941]: 2026-01-27 14:22:45.038 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 09:22:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:46.327 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:46.328 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:46.328 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 200 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 27 09:22:47 np0005597378 nova_compute[238941]: 2026-01-27 14:22:47.760 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:47 np0005597378 nova_compute[238941]: 2026-01-27 14:22:47.760 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:47 np0005597378 nova_compute[238941]: 2026-01-27 14:22:47.784 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:22:47 np0005597378 nova_compute[238941]: 2026-01-27 14:22:47.795 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:22:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:22:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:22:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:22:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:22:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:22:47 np0005597378 nova_compute[238941]: 2026-01-27 14:22:47.879 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:47 np0005597378 nova_compute[238941]: 2026-01-27 14:22:47.879 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:47 np0005597378 nova_compute[238941]: 2026-01-27 14:22:47.888 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:22:47 np0005597378 nova_compute[238941]: 2026-01-27 14:22:47.888 238945 INFO nova.compute.claims [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.040 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.334 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.334 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.335 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.335 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.335 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.337 238945 INFO nova.compute.manager [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Terminating instance#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.338 238945 DEBUG nova.compute.manager [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.391 238945 DEBUG nova.compute.manager [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-changed-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.392 238945 DEBUG nova.compute.manager [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Refreshing instance network info cache due to event network-changed-34c6ae80-2857-4eb0-8a36-b7866038913b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.392 238945 DEBUG oslo_concurrency.lockutils [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.393 238945 DEBUG oslo_concurrency.lockutils [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.393 238945 DEBUG nova.network.neutron [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Refreshing network info cache for port 34c6ae80-2857-4eb0-8a36-b7866038913b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:22:48 np0005597378 kernel: tap34c6ae80-28 (unregistering): left promiscuous mode
Jan 27 09:22:48 np0005597378 NetworkManager[48904]: <info>  [1769523768.4456] device (tap34c6ae80-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:22:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:48Z|01458|binding|INFO|Releasing lport 34c6ae80-2857-4eb0-8a36-b7866038913b from this chassis (sb_readonly=0)
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.456 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:48Z|01459|binding|INFO|Setting lport 34c6ae80-2857-4eb0-8a36-b7866038913b down in Southbound
Jan 27 09:22:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:48Z|01460|binding|INFO|Removing iface tap34c6ae80-28 ovn-installed in OVS
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.477 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:47:d3 10.100.0.9 2001:db8:0:1:f816:3eff:fe88:47d3 2001:db8::f816:3eff:fe88:47d3'], port_security=['fa:16:3e:88:47:d3 10.100.0.9 2001:db8:0:1:f816:3eff:fe88:47d3 2001:db8::f816:3eff:fe88:47d3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fe88:47d3/64 2001:db8::f816:3eff:fe88:47d3/64', 'neutron:device_id': 'ddccc961-2581-4996-9b3f-b29ebc1c25d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9e686e0d-68d4-4db8-8131-d0b7de93a06f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce356286-a5f0-495b-b6ce-1c56da2724d0, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=34c6ae80-2857-4eb0-8a36-b7866038913b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.479 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 34c6ae80-2857-4eb0-8a36-b7866038913b in datapath fadddb78-26b2-452e-a680-4fa4490a9885 unbound from our chassis#033[00m
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.480 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fadddb78-26b2-452e-a680-4fa4490a9885#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.496 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f4aa924c-73c3-4a43-91f0-15b477253eed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:48 np0005597378 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Deactivated successfully.
Jan 27 09:22:48 np0005597378 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Consumed 13.454s CPU time.
Jan 27 09:22:48 np0005597378 systemd-machined[207425]: Machine qemu-167-instance-00000087 terminated.
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.526 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b95a6c14-05ae-437c-b403-21ee39e2d7a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.531 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2f03f2df-c0bc-4cbd-b240-c93451e74268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.566 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5e00d272-63cb-41e1-b8c2-bfab03e4dd8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.584 238945 INFO nova.virt.libvirt.driver [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Instance destroyed successfully.#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.586 238945 DEBUG nova.objects.instance [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid ddccc961-2581-4996-9b3f-b29ebc1c25d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.593 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a33b904-6222-4bb6-87e3-ff62fe0bd0c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfadddb78-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:62:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 3468, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 3468, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634965, 'reachable_time': 35390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363481, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.600 238945 DEBUG nova.virt.libvirt.vif [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:22:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-393288026',display_name='tempest-TestGettingAddress-server-393288026',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-393288026',id=135,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7a94wbNcLIbMfLEVNT6Ywm0J4ThULf73rY3K1P6XjqcssI9mA4y0O7yNfHrJGIFW0sC/dFlfPPTPQC2MVSL5Gn/qBz15xmk9J+UY35a4PiIK9ZhxSE3A/WnGcgECUPew==',key_name='tempest-TestGettingAddress-1065796456',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:22:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-svgilpev',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:22:23Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=ddccc961-2581-4996-9b3f-b29ebc1c25d5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.601 238945 DEBUG nova.network.os_vif_util [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.603 238945 DEBUG nova.network.os_vif_util [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.603 238945 DEBUG os_vif [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.605 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.605 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34c6ae80-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.609 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.611 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.612 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.615 238945 INFO os_vif [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:47:d3,bridge_name='br-int',has_traffic_filtering=True,id=34c6ae80-2857-4eb0-8a36-b7866038913b,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34c6ae80-28')#033[00m
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.619 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e41b1f47-bce0-4c56-8a31-56049dbcfb57]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfadddb78-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634978, 'tstamp': 634978}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363485, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfadddb78-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634980, 'tstamp': 634980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363485, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.621 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfadddb78-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.624 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfadddb78-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.625 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.625 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfadddb78-20, col_values=(('external_ids', {'iface-id': '2af358a4-9a76-4cfc-b0c5-aad3d7e5ea14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:48.626 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.636 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:22:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2560543148' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.668 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.675 238945 DEBUG nova.compute.provider_tree [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.693 238945 DEBUG nova.scheduler.client.report [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.713 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.714 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.770 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.771 238945 DEBUG nova.network.neutron [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.788 238945 INFO nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.808 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.899 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.901 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.901 238945 INFO nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Creating image(s)#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.923 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.943 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.964 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:48 np0005597378 nova_compute[238941]: 2026-01-27 14:22:48.968 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.013 238945 DEBUG nova.policy [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.050 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.051 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.052 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.052 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.072 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.076 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 200 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.884 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.885 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.912 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.989 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.989 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.997 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:22:49 np0005597378 nova_compute[238941]: 2026-01-27 14:22:49.997 238945 INFO nova.compute.claims [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.185 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.370 238945 DEBUG nova.network.neutron [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Successfully created port: b316b5fe-59c3-448f-897c-d7f990f2aeee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.375 238945 DEBUG nova.network.neutron [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updated VIF entry in instance network info cache for port 34c6ae80-2857-4eb0-8a36-b7866038913b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.375 238945 DEBUG nova.network.neutron [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updating instance_info_cache with network_info: [{"id": "34c6ae80-2857-4eb0-8a36-b7866038913b", "address": "fa:16:3e:88:47:d3", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:47d3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34c6ae80-28", "ovs_interfaceid": "34c6ae80-2857-4eb0-8a36-b7866038913b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.394 238945 DEBUG oslo_concurrency.lockutils [req-6187fd64-d2ee-42ee-b4c9-ee975b07d812 req-d0be30f1-134b-4cd5-8c01-45676cec7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-ddccc961-2581-4996-9b3f-b29ebc1c25d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:22:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:22:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4084899342' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.747 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.752 238945 DEBUG nova.compute.provider_tree [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.772 238945 DEBUG nova.scheduler.client.report [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.801 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.802 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.853 238945 DEBUG nova.compute.manager [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-vif-unplugged-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.853 238945 DEBUG oslo_concurrency.lockutils [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.853 238945 DEBUG oslo_concurrency.lockutils [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.853 238945 DEBUG oslo_concurrency.lockutils [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.854 238945 DEBUG nova.compute.manager [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] No waiting events found dispatching network-vif-unplugged-34c6ae80-2857-4eb0-8a36-b7866038913b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.854 238945 DEBUG nova.compute.manager [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-vif-unplugged-34c6ae80-2857-4eb0-8a36-b7866038913b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.854 238945 DEBUG nova.compute.manager [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.854 238945 DEBUG oslo_concurrency.lockutils [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.854 238945 DEBUG oslo_concurrency.lockutils [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.854 238945 DEBUG oslo_concurrency.lockutils [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.855 238945 DEBUG nova.compute.manager [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] No waiting events found dispatching network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.855 238945 WARNING nova.compute.manager [req-542f64d3-37c3-4388-8de6-62f03b162b6e req-bcc9471b-4180-48bb-bf08-94f7a560d7be 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received unexpected event network-vif-plugged-34c6ae80-2857-4eb0-8a36-b7866038913b for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.858 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.858 238945 DEBUG nova.network.neutron [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.878 238945 INFO nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:22:50 np0005597378 nova_compute[238941]: 2026-01-27 14:22:50.906 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.082 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.083 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.084 238945 INFO nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Creating image(s)#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.119 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.149 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 218 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.7 MiB/s wr, 70 op/s
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.288 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.293 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.386 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.387 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.388 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.389 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.422 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.427 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.542 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.604 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:22:51 np0005597378 nova_compute[238941]: 2026-01-27 14:22:51.913 238945 DEBUG nova.policy [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:22:52 np0005597378 nova_compute[238941]: 2026-01-27 14:22:52.412 238945 DEBUG nova.objects.instance [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid 73e1c4d9-d84d-42d0-a385-e816ca65b541 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:22:52 np0005597378 nova_compute[238941]: 2026-01-27 14:22:52.431 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:22:52 np0005597378 nova_compute[238941]: 2026-01-27 14:22:52.432 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Ensure instance console log exists: /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:22:52 np0005597378 nova_compute[238941]: 2026-01-27 14:22:52.432 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:52 np0005597378 nova_compute[238941]: 2026-01-27 14:22:52.433 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:52 np0005597378 nova_compute[238941]: 2026-01-27 14:22:52.433 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:52 np0005597378 nova_compute[238941]: 2026-01-27 14:22:52.765 238945 DEBUG nova.network.neutron [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Successfully updated port: b316b5fe-59c3-448f-897c-d7f990f2aeee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:22:52 np0005597378 nova_compute[238941]: 2026-01-27 14:22:52.781 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:52 np0005597378 nova_compute[238941]: 2026-01-27 14:22:52.782 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:52 np0005597378 nova_compute[238941]: 2026-01-27 14:22:52.782 238945 DEBUG nova.network.neutron [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:22:52 np0005597378 nova_compute[238941]: 2026-01-27 14:22:52.796 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.026 238945 DEBUG nova.compute.manager [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-changed-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.027 238945 DEBUG nova.compute.manager [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Refreshing instance network info cache due to event network-changed-b316b5fe-59c3-448f-897c-d7f990f2aeee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.027 238945 DEBUG oslo_concurrency.lockutils [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.028 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.077 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:22:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 218 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 795 KiB/s wr, 13 op/s
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.159 238945 DEBUG nova.network.neutron [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.260 238945 DEBUG nova.objects.instance [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid edc76197-7b28-4f2c-8086-0e78a3dcc8f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.275 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.276 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Ensure instance console log exists: /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.276 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.277 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.277 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.312 238945 INFO nova.virt.libvirt.driver [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Deleting instance files /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5_del#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.313 238945 INFO nova.virt.libvirt.driver [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Deletion of /var/lib/nova/instances/ddccc961-2581-4996-9b3f-b29ebc1c25d5_del complete#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.336 238945 DEBUG nova.network.neutron [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Successfully created port: 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.371 238945 INFO nova.compute.manager [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Took 5.03 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.372 238945 DEBUG oslo.service.loopingcall [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.372 238945 DEBUG nova.compute.manager [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.373 238945 DEBUG nova.network.neutron [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.934 238945 DEBUG nova.network.neutron [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updating instance_info_cache with network_info: [{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.961 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.962 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Instance network_info: |[{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.963 238945 DEBUG oslo_concurrency.lockutils [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.963 238945 DEBUG nova.network.neutron [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Refreshing network info cache for port b316b5fe-59c3-448f-897c-d7f990f2aeee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.968 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Start _get_guest_xml network_info=[{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.974 238945 WARNING nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.980 238945 DEBUG nova.virt.libvirt.host [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.981 238945 DEBUG nova.virt.libvirt.host [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.984 238945 DEBUG nova.virt.libvirt.host [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.985 238945 DEBUG nova.virt.libvirt.host [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.986 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.986 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.987 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.988 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.988 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.989 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.989 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.990 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.990 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.991 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.991 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.992 238945 DEBUG nova.virt.hardware [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:22:53 np0005597378 nova_compute[238941]: 2026-01-27 14:22:53.997 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.484 238945 DEBUG nova.network.neutron [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.511 238945 INFO nova.compute.manager [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Took 1.14 seconds to deallocate network for instance.#033[00m
Jan 27 09:22:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:22:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269803134' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.555 238945 DEBUG nova.compute.manager [req-09123c71-928f-47d6-b264-5c356e7a5f76 req-24ce5f71-e56b-4eeb-ad11-dabf8bbe0efd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Received event network-vif-deleted-34c6ae80-2857-4eb0-8a36-b7866038913b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.567 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.568 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.572 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.596 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.600 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.749 238945 DEBUG oslo_concurrency.processutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.788 238945 DEBUG nova.network.neutron [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Successfully updated port: 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.805 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.806 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.806 238945 DEBUG nova.network.neutron [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:22:54 np0005597378 nova_compute[238941]: 2026-01-27 14:22:54.984 238945 DEBUG nova.network.neutron [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:22:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 227 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.1 MiB/s wr, 62 op/s
Jan 27 09:22:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:22:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/295537586' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.183 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.185 238945 DEBUG nova.virt.libvirt.vif [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=136,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACLypJyTNxr/Fynn0x7Iox4aV5f8zs6GUMs/FFzJg47MW9rIYS5DVZM7fp21I1fAVy1UUd2Zq3YeUu5V+K6RS6iNFuNQPbzgLn+nM6VWellLvTCtw8csDZlLRuw7hOCNQ==',key_name='tempest-TestSecurityGroupsBasicOps-733108341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-jgbdz40r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:48Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=73e1c4d9-d84d-42d0-a385-e816ca65b541,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.185 238945 DEBUG nova.network.os_vif_util [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.186 238945 DEBUG nova.network.os_vif_util [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.188 238945 DEBUG nova.objects.instance [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid 73e1c4d9-d84d-42d0-a385-e816ca65b541 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.202 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  <uuid>73e1c4d9-d84d-42d0-a385-e816ca65b541</uuid>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  <name>instance-00000088</name>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653</nova:name>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:22:53</nova:creationTime>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:        <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:        <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:        <nova:port uuid="b316b5fe-59c3-448f-897c-d7f990f2aeee">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <entry name="serial">73e1c4d9-d84d-42d0-a385-e816ca65b541</entry>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <entry name="uuid">73e1c4d9-d84d-42d0-a385-e816ca65b541</entry>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/73e1c4d9-d84d-42d0-a385-e816ca65b541_disk">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/73e1c4d9-d84d-42d0-a385-e816ca65b541_disk.config">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:15:b9:af"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <target dev="tapb316b5fe-59"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/console.log" append="off"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:22:55 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:22:55 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:22:55 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:22:55 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.208 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Preparing to wait for external event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.208 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.209 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.209 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.210 238945 DEBUG nova.virt.libvirt.vif [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=136,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACLypJyTNxr/Fynn0x7Iox4aV5f8zs6GUMs/FFzJg47MW9rIYS5DVZM7fp21I1fAVy1UUd2Zq3YeUu5V+K6RS6iNFuNQPbzgLn+nM6VWellLvTCtw8csDZlLRuw7hOCNQ==',key_name='tempest-TestSecurityGroupsBasicOps-733108341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-jgbdz40r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:48Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=73e1c4d9-d84d-42d0-a385-e816ca65b541,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.211 238945 DEBUG nova.network.os_vif_util [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.211 238945 DEBUG nova.network.os_vif_util [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.212 238945 DEBUG os_vif [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.212 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.213 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.213 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.216 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.216 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb316b5fe-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.217 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb316b5fe-59, col_values=(('external_ids', {'iface-id': 'b316b5fe-59c3-448f-897c-d7f990f2aeee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:b9:af', 'vm-uuid': '73e1c4d9-d84d-42d0-a385-e816ca65b541'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:55 np0005597378 NetworkManager[48904]: <info>  [1769523775.2200] manager: (tapb316b5fe-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/597)
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.218 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.224 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.225 238945 INFO os_vif [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59')#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.282 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.283 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.283 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:15:b9:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.284 238945 INFO nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Using config drive#033[00m
Jan 27 09:22:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:22:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1154019641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.310 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.317 238945 DEBUG oslo_concurrency.processutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.323 238945 DEBUG nova.compute.provider_tree [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.343 238945 DEBUG nova.scheduler.client.report [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.367 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.393 238945 INFO nova.scheduler.client.report [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance ddccc961-2581-4996-9b3f-b29ebc1c25d5#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.468 238945 DEBUG oslo_concurrency.lockutils [None req-c6635207-e36b-40a4-9938-4bd93595b2d5 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "ddccc961-2581-4996-9b3f-b29ebc1c25d5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.676 238945 DEBUG nova.network.neutron [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updated VIF entry in instance network info cache for port b316b5fe-59c3-448f-897c-d7f990f2aeee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.676 238945 DEBUG nova.network.neutron [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updating instance_info_cache with network_info: [{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.691 238945 DEBUG oslo_concurrency.lockutils [req-e13f59a8-1d31-4a3e-87ba-dfde5197d11a req-060c4d5f-5a0a-4c8c-9d0d-b171c9faa35c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.868 238945 INFO nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Creating config drive at /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/disk.config#033[00m
Jan 27 09:22:55 np0005597378 nova_compute[238941]: 2026-01-27 14:22:55.873 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeasb3u2d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.029 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeasb3u2d" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.055 238945 DEBUG nova.storage.rbd_utils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.058 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/disk.config 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.231 238945 DEBUG oslo_concurrency.processutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/disk.config 73e1c4d9-d84d-42d0-a385-e816ca65b541_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.232 238945 INFO nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Deleting local config drive /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541/disk.config because it was imported into RBD.#033[00m
Jan 27 09:22:56 np0005597378 kernel: tapb316b5fe-59: entered promiscuous mode
Jan 27 09:22:56 np0005597378 NetworkManager[48904]: <info>  [1769523776.2740] manager: (tapb316b5fe-59): new Tun device (/org/freedesktop/NetworkManager/Devices/598)
Jan 27 09:22:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:56Z|01461|binding|INFO|Claiming lport b316b5fe-59c3-448f-897c-d7f990f2aeee for this chassis.
Jan 27 09:22:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:56Z|01462|binding|INFO|b316b5fe-59c3-448f-897c-d7f990f2aeee: Claiming fa:16:3e:15:b9:af 10.100.0.5
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.277 238945 DEBUG nova.network.neutron [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updating instance_info_cache with network_info: [{"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.283 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:b9:af 10.100.0.5'], port_security=['fa:16:3e:15:b9:af 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '73e1c4d9-d84d-42d0-a385-e816ca65b541', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05b4b753-ecde-4c48-a69b-2458162ac6c1 26585a9c-699d-4944-bb11-1f5060663014', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d080dd9-6e75-419f-9464-5edb98123c9a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b316b5fe-59c3-448f-897c-d7f990f2aeee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.284 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b316b5fe-59c3-448f-897c-d7f990f2aeee in datapath acd03ef9-9bfd-4078-adf3-4b0b930dc081 bound to our chassis#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.285 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acd03ef9-9bfd-4078-adf3-4b0b930dc081#033[00m
Jan 27 09:22:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:56Z|01463|binding|INFO|Setting lport b316b5fe-59c3-448f-897c-d7f990f2aeee ovn-installed in OVS
Jan 27 09:22:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:56Z|01464|binding|INFO|Setting lport b316b5fe-59c3-448f-897c-d7f990f2aeee up in Southbound
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.293 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.295 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.296 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Instance network_info: |[{"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.299 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Start _get_guest_xml network_info=[{"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.302 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f2715994-0f38-4f8e-9350-434ce75483c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.303 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapacd03ef9-91 in ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.305 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapacd03ef9-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.305 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[56168185-b19b-4dee-9ac8-3764d479bd50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.306 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5be7d3-f073-4f81-9135-391bdd10460b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.309 238945 WARNING nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:22:56 np0005597378 systemd-udevd[364021]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.314 238945 DEBUG nova.virt.libvirt.host [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.315 238945 DEBUG nova.virt.libvirt.host [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.316 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[2feeaf70-3ae9-4f74-b496-28c196ca0b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 systemd-machined[207425]: New machine qemu-168-instance-00000088.
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.321 238945 DEBUG nova.virt.libvirt.host [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.322 238945 DEBUG nova.virt.libvirt.host [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.323 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.323 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.323 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.323 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.324 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.324 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.324 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.324 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.325 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:22:56 np0005597378 NetworkManager[48904]: <info>  [1769523776.3274] device (tapb316b5fe-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.325 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:22:56 np0005597378 NetworkManager[48904]: <info>  [1769523776.3285] device (tapb316b5fe-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.328 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.328 238945 DEBUG nova.virt.hardware [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.332 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.332 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8f5402-5b25-4611-9c91-a5983ac8b63d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 systemd[1]: Started Virtual Machine qemu-168-instance-00000088.
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.366 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fb845ff8-184a-414a-b5da-20e7181616c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 systemd-udevd[364025]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.372 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a751d34-f2dd-425a-abbc-31459280ca9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 NetworkManager[48904]: <info>  [1769523776.3740] manager: (tapacd03ef9-90): new Veth device (/org/freedesktop/NetworkManager/Devices/599)
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.427 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[75fa3d08-177b-4b54-8f6a-9e4f41e23ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.430 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[15e85fc8-f6a0-460b-9ffe-bc4e06a87ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 NetworkManager[48904]: <info>  [1769523776.4602] device (tapacd03ef9-90): carrier: link connected
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.466 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9abbadd0-ddcd-4bbe-a284-420ad5907fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.494 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[83ebcdd8-ab32-4c51-a9f8-066b4902ecf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacd03ef9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:f2:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641608, 'reachable_time': 25840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364056, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.512 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a03f48b5-9eb3-4166-9536-d941f2db11fa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8d:f23f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641608, 'tstamp': 641608}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364073, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.538 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd1798e-db8f-44a4-b23f-4e3d81a10a7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacd03ef9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:f2:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641608, 'reachable_time': 25840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364075, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.576 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[73d83573-ee4d-4a38-b27f-fafd4140f31b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.580 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.581 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.581 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.582 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.582 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.585 238945 INFO nova.compute.manager [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Terminating instance#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.588 238945 DEBUG nova.compute.manager [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.649 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3447c9f4-d9b7-4bc3-a3ce-7775efafb54a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.650 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacd03ef9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.650 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.651 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacd03ef9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:56 np0005597378 NetworkManager[48904]: <info>  [1769523776.6534] manager: (tapacd03ef9-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/600)
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.653 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:56 np0005597378 kernel: tapacd03ef9-90: entered promiscuous mode
Jan 27 09:22:56 np0005597378 kernel: tap41bc1922-0e (unregistering): left promiscuous mode
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.659 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacd03ef9-90, col_values=(('external_ids', {'iface-id': '200fc390-2bd2-4617-9a70-937136a8fecc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:56Z|01465|binding|INFO|Releasing lport 200fc390-2bd2-4617-9a70-937136a8fecc from this chassis (sb_readonly=0)
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.661 238945 DEBUG nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received event network-changed-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.661 238945 DEBUG nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Refreshing instance network info cache due to event network-changed-82793acf-1cb9-47d4-91fb-ce8fcb0358b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:22:56 np0005597378 NetworkManager[48904]: <info>  [1769523776.6621] device (tap41bc1922-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.662 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.662 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.662 238945 DEBUG nova.network.neutron [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Refreshing network info cache for port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.663 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.685 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.691 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:56Z|01466|binding|INFO|Releasing lport 41bc1922-0e8b-4e12-a842-f9f8d958cc6f from this chassis (sb_readonly=0)
Jan 27 09:22:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:56Z|01467|binding|INFO|Setting lport 41bc1922-0e8b-4e12-a842-f9f8d958cc6f down in Southbound
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.693 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/acd03ef9-9bfd-4078-adf3-4b0b930dc081.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/acd03ef9-9bfd-4078-adf3-4b0b930dc081.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:22:56 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:56Z|01468|binding|INFO|Removing iface tap41bc1922-0e ovn-installed in OVS
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.693 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bfcd02ad-c9b5-4c74-8626-204e46e73e23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.694 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-acd03ef9-9bfd-4078-adf3-4b0b930dc081
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/acd03ef9-9bfd-4078-adf3-4b0b930dc081.pid.haproxy
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID acd03ef9-9bfd-4078-adf3-4b0b930dc081
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.695 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.695 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'env', 'PROCESS_TAG=haproxy-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/acd03ef9-9bfd-4078-adf3-4b0b930dc081.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:22:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:56.711 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:ba:fd 10.100.0.14 2001:db8:0:1:f816:3eff:feda:bafd 2001:db8::f816:3eff:feda:bafd'], port_security=['fa:16:3e:da:ba:fd 10.100.0.14 2001:db8:0:1:f816:3eff:feda:bafd 2001:db8::f816:3eff:feda:bafd'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:feda:bafd/64 2001:db8::f816:3eff:feda:bafd/64', 'neutron:device_id': '11a944d0-c529-462a-a12d-95eadb9446a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fadddb78-26b2-452e-a680-4fa4490a9885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9e686e0d-68d4-4db8-8131-d0b7de93a06f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce356286-a5f0-495b-b6ce-1c56da2724d0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=41bc1922-0e8b-4e12-a842-f9f8d958cc6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:22:56 np0005597378 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000085.scope: Deactivated successfully.
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.717 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:56 np0005597378 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000085.scope: Consumed 15.551s CPU time.
Jan 27 09:22:56 np0005597378 systemd-machined[207425]: Machine qemu-165-instance-00000085 terminated.
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.806 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523776.8054776, 73e1c4d9-d84d-42d0-a385-e816ca65b541 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.808 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] VM Started (Lifecycle Event)#033[00m
Jan 27 09:22:56 np0005597378 NetworkManager[48904]: <info>  [1769523776.8108] manager: (tap41bc1922-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/601)
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.828 238945 INFO nova.virt.libvirt.driver [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Instance destroyed successfully.#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.829 238945 DEBUG nova.objects.instance [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 11a944d0-c529-462a-a12d-95eadb9446a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.836 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.839 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523776.8065338, 73e1c4d9-d84d-42d0-a385-e816ca65b541 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.839 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.856 238945 DEBUG nova.virt.libvirt.vif [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:21:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1772227334',display_name='tempest-TestGettingAddress-server-1772227334',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1772227334',id=133,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7a94wbNcLIbMfLEVNT6Ywm0J4ThULf73rY3K1P6XjqcssI9mA4y0O7yNfHrJGIFW0sC/dFlfPPTPQC2MVSL5Gn/qBz15xmk9J+UY35a4PiIK9ZhxSE3A/WnGcgECUPew==',key_name='tempest-TestGettingAddress-1065796456',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:21:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-8s66zbkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:21:50Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=11a944d0-c529-462a-a12d-95eadb9446a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.857 238945 DEBUG nova.network.os_vif_util [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.860 238945 DEBUG nova.network.os_vif_util [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.860 238945 DEBUG os_vif [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.862 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.862 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41bc1922-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.865 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.865 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.867 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.869 238945 INFO os_vif [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:ba:fd,bridge_name='br-int',has_traffic_filtering=True,id=41bc1922-0e8b-4e12-a842-f9f8d958cc6f,network=Network(fadddb78-26b2-452e-a680-4fa4490a9885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41bc1922-0e')#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.887 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.906 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.935 238945 DEBUG nova.compute.manager [req-82bddc7f-1779-4b11-b45e-058b41219568 req-e383cf3c-4ea9-4c3b-9240-7cb873c1d42c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-vif-unplugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.935 238945 DEBUG oslo_concurrency.lockutils [req-82bddc7f-1779-4b11-b45e-058b41219568 req-e383cf3c-4ea9-4c3b-9240-7cb873c1d42c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.935 238945 DEBUG oslo_concurrency.lockutils [req-82bddc7f-1779-4b11-b45e-058b41219568 req-e383cf3c-4ea9-4c3b-9240-7cb873c1d42c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.936 238945 DEBUG oslo_concurrency.lockutils [req-82bddc7f-1779-4b11-b45e-058b41219568 req-e383cf3c-4ea9-4c3b-9240-7cb873c1d42c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.936 238945 DEBUG nova.compute.manager [req-82bddc7f-1779-4b11-b45e-058b41219568 req-e383cf3c-4ea9-4c3b-9240-7cb873c1d42c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] No waiting events found dispatching network-vif-unplugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:22:56 np0005597378 nova_compute[238941]: 2026-01-27 14:22:56.936 238945 DEBUG nova.compute.manager [req-82bddc7f-1779-4b11-b45e-058b41219568 req-e383cf3c-4ea9-4c3b-9240-7cb873c1d42c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-vif-unplugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:22:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:22:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2397535358' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.000 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.023 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.027 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 213 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 3.6 MiB/s wr, 84 op/s
Jan 27 09:22:57 np0005597378 podman[364200]: 2026-01-27 14:22:57.067555464 +0000 UTC m=+0.021973316 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:22:57 np0005597378 podman[364200]: 2026-01-27 14:22:57.189647183 +0000 UTC m=+0.144065015 container create 6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 09:22:57 np0005597378 systemd[1]: Started libpod-conmon-6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb.scope.
Jan 27 09:22:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:22:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38e1da3a7cf3c81236a9b90efb0a1df267af565e04214fa2e0abef3887c5e783/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:22:57 np0005597378 podman[364200]: 2026-01-27 14:22:57.313574361 +0000 UTC m=+0.267992193 container init 6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 09:22:57 np0005597378 podman[364200]: 2026-01-27 14:22:57.319374806 +0000 UTC m=+0.273792638 container start 6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:22:57 np0005597378 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [NOTICE]   (364239) : New worker (364241) forked
Jan 27 09:22:57 np0005597378 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [NOTICE]   (364239) : Loading success.
Jan 27 09:22:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.435 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f in datapath fadddb78-26b2-452e-a680-4fa4490a9885 unbound from our chassis#033[00m
Jan 27 09:22:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.437 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fadddb78-26b2-452e-a680-4fa4490a9885, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:22:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.437 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f877b11c-d780-4099-bf59-61679343cb01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.438 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885 namespace which is not needed anymore#033[00m
Jan 27 09:22:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:22:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 10K writes, 49K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1352 writes, 6162 keys, 1352 commit groups, 1.0 writes per commit group, ingest: 8.82 MB, 0.01 MB/s#012Interval WAL: 1352 writes, 1352 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     30.2      1.99              0.18        34    0.058       0      0       0.0       0.0#012  L6      1/0    8.18 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.5     74.5     62.6      4.36              0.72        33    0.132    196K    18K       0.0       0.0#012 Sum      1/0    8.18 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.5     51.2     52.5      6.35              0.90        67    0.095    196K    18K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.4     34.5     34.8      1.54              0.15        10    0.154     37K   2538       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     74.5     62.6      4.36              0.72        33    0.132    196K    18K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     30.3      1.98              0.18        33    0.060       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.1 total, 600.0 interval#012Flush(GB): cumulative 0.059, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.33 GB write, 0.08 MB/s write, 0.32 GB read, 0.08 MB/s read, 6.3 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 1.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 38.03 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.0004 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2388,36.58 MB,12.0328%) FilterBlock(68,557.92 KB,0.179226%) IndexBlock(68,929.64 KB,0.298636%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 27 09:22:57 np0005597378 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [NOTICE]   (361457) : haproxy version is 2.8.14-c23fe91
Jan 27 09:22:57 np0005597378 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [NOTICE]   (361457) : path to executable is /usr/sbin/haproxy
Jan 27 09:22:57 np0005597378 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [WARNING]  (361457) : Exiting Master process...
Jan 27 09:22:57 np0005597378 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [ALERT]    (361457) : Current worker (361459) exited with code 143 (Terminated)
Jan 27 09:22:57 np0005597378 neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885[361453]: [WARNING]  (361457) : All workers exited. Exiting... (0)
Jan 27 09:22:57 np0005597378 systemd[1]: libpod-eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b.scope: Deactivated successfully.
Jan 27 09:22:57 np0005597378 conmon[361453]: conmon eb35766c97a66921e667 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b.scope/container/memory.events
Jan 27 09:22:57 np0005597378 podman[364268]: 2026-01-27 14:22:57.611707479 +0000 UTC m=+0.071446988 container died eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 09:22:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:22:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1558728396' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.640 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.642 238945 DEBUG nova.virt.libvirt.vif [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-591157072',display_name='tempest-TestNetworkBasicOps-server-591157072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-591157072',id=137,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3+ZSjQKi5EB+akV9J717ai93W9Kt+YZsloIxUmd1JjYsCapSWKXNIyGRNdCuE9WQkR6ZXIYKnKeNMZEQXOKhgeP2S9rMEfY5MP3tGH8Db6p2l/cvx/6vbSV+ZPeDfkFQ==',key_name='tempest-TestNetworkBasicOps-127183240',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-d1v058hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:50Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=edc76197-7b28-4f2c-8086-0e78a3dcc8f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.642 238945 DEBUG nova.network.os_vif_util [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.643 238945 DEBUG nova.network.os_vif_util [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.644 238945 DEBUG nova.objects.instance [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid edc76197-7b28-4f2c-8086-0e78a3dcc8f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.670 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  <uuid>edc76197-7b28-4f2c-8086-0e78a3dcc8f9</uuid>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  <name>instance-00000089</name>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkBasicOps-server-591157072</nova:name>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:22:56</nova:creationTime>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:        <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:        <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:        <nova:port uuid="82793acf-1cb9-47d4-91fb-ce8fcb0358b3">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <entry name="serial">edc76197-7b28-4f2c-8086-0e78a3dcc8f9</entry>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <entry name="uuid">edc76197-7b28-4f2c-8086-0e78a3dcc8f9</entry>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk.config">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:a9:5c:45"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <target dev="tap82793acf-1c"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/console.log" append="off"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:22:57 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:22:57 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:22:57 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:22:57 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.670 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Preparing to wait for external event network-vif-plugged-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.670 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.671 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.671 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.672 238945 DEBUG nova.virt.libvirt.vif [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:22:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-591157072',display_name='tempest-TestNetworkBasicOps-server-591157072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-591157072',id=137,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3+ZSjQKi5EB+akV9J717ai93W9Kt+YZsloIxUmd1JjYsCapSWKXNIyGRNdCuE9WQkR6ZXIYKnKeNMZEQXOKhgeP2S9rMEfY5MP3tGH8Db6p2l/cvx/6vbSV+ZPeDfkFQ==',key_name='tempest-TestNetworkBasicOps-127183240',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-d1v058hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:22:50Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=edc76197-7b28-4f2c-8086-0e78a3dcc8f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.672 238945 DEBUG nova.network.os_vif_util [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.673 238945 DEBUG nova.network.os_vif_util [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.673 238945 DEBUG os_vif [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.674 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.674 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.677 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.677 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82793acf-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.677 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap82793acf-1c, col_values=(('external_ids', {'iface-id': '82793acf-1cb9-47d4-91fb-ce8fcb0358b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:5c:45', 'vm-uuid': 'edc76197-7b28-4f2c-8086-0e78a3dcc8f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:57 np0005597378 NetworkManager[48904]: <info>  [1769523777.6800] manager: (tap82793acf-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.685 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.686 238945 INFO os_vif [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c')#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.760 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.761 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.761 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:a9:5c:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.761 238945 INFO nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Using config drive#033[00m
Jan 27 09:22:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b-userdata-shm.mount: Deactivated successfully.
Jan 27 09:22:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0e343ece24984a12a7cdfd6f209a71eba45e9c527637fc2eceaf8acbff563793-merged.mount: Deactivated successfully.
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.784 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.798 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:57 np0005597378 podman[364268]: 2026-01-27 14:22:57.844073981 +0000 UTC m=+0.303813490 container cleanup eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:22:57 np0005597378 systemd[1]: libpod-conmon-eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b.scope: Deactivated successfully.
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.870 238945 INFO nova.virt.libvirt.driver [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Deleting instance files /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8_del#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.871 238945 INFO nova.virt.libvirt.driver [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Deletion of /var/lib/nova/instances/11a944d0-c529-462a-a12d-95eadb9446a8_del complete#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.937 238945 INFO nova.compute.manager [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Took 1.35 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.938 238945 DEBUG oslo.service.loopingcall [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.938 238945 DEBUG nova.compute.manager [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.939 238945 DEBUG nova.network.neutron [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:22:57 np0005597378 podman[364321]: 2026-01-27 14:22:57.983249575 +0000 UTC m=+0.117766214 container remove eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:22:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.988 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5a8745-23e3-4c1c-b5da-afe99e7aa260]: (4, ('Tue Jan 27 02:22:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885 (eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b)\neb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b\nTue Jan 27 02:22:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885 (eb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b)\neb35766c97a66921e667843474a9e60eb930b158d4eae8741452656ebb833f7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.990 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[30717e35-1697-4ca7-a423-01f93ae95ea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:57.991 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfadddb78-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:57 np0005597378 nova_compute[238941]: 2026-01-27 14:22:57.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:57 np0005597378 kernel: tapfadddb78-20: left promiscuous mode
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.009 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:58.012 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[29284ff3-c8ad-4dc3-a7fb-7a3066a57fc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:58.027 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0111ca-6ccd-4e93-bd28-d155fd6a27fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:58.029 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd245c9f-31b1-4d39-bc93-7269c2412d4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:58.049 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1962c7ab-fea8-4969-b6ff-80f4817417ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634936, 'reachable_time': 39051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364338, 'error': None, 'target': 'ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:58.052 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fadddb78-26b2-452e-a680-4fa4490a9885 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:22:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:58.052 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[009cadc1-0725-4c98-a10f-d786dc936389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:58 np0005597378 systemd[1]: run-netns-ovnmeta\x2dfadddb78\x2d26b2\x2d452e\x2da680\x2d4fa4490a9885.mount: Deactivated successfully.
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.703 238945 DEBUG nova.network.neutron [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updated VIF entry in instance network info cache for port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.704 238945 DEBUG nova.network.neutron [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updating instance_info_cache with network_info: [{"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.727 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.727 238945 DEBUG nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-changed-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.727 238945 DEBUG nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Refreshing instance network info cache due to event network-changed-41bc1922-0e8b-4e12-a842-f9f8d958cc6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.728 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.728 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.728 238945 DEBUG nova.network.neutron [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Refreshing network info cache for port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.900 238945 DEBUG nova.compute.manager [req-8ca7afd8-20a4-468d-adfc-ca1fa46f597c req-f2ed0061-34bc-4ac8-8a7c-0a5ae2511b16 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.900 238945 DEBUG oslo_concurrency.lockutils [req-8ca7afd8-20a4-468d-adfc-ca1fa46f597c req-f2ed0061-34bc-4ac8-8a7c-0a5ae2511b16 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.901 238945 DEBUG oslo_concurrency.lockutils [req-8ca7afd8-20a4-468d-adfc-ca1fa46f597c req-f2ed0061-34bc-4ac8-8a7c-0a5ae2511b16 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.901 238945 DEBUG oslo_concurrency.lockutils [req-8ca7afd8-20a4-468d-adfc-ca1fa46f597c req-f2ed0061-34bc-4ac8-8a7c-0a5ae2511b16 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.901 238945 DEBUG nova.compute.manager [req-8ca7afd8-20a4-468d-adfc-ca1fa46f597c req-f2ed0061-34bc-4ac8-8a7c-0a5ae2511b16 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Processing event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.902 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.905 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523778.9049835, 73e1c4d9-d84d-42d0-a385-e816ca65b541 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.906 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.908 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.912 238945 INFO nova.virt.libvirt.driver [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Instance spawned successfully.#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.912 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.928 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.933 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.939 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.939 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.940 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.940 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.940 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.941 238945 DEBUG nova.virt.libvirt.driver [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.963 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.989 238945 INFO nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Creating config drive at /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/disk.config#033[00m
Jan 27 09:22:58 np0005597378 nova_compute[238941]: 2026-01-27 14:22:58.995 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ju28p0a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.038 238945 INFO nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Took 10.14 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.039 238945 DEBUG nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.041 238945 DEBUG nova.compute.manager [req-545b5c8f-a51a-4ec2-ab49-301ac7aa549a req-2d720a3d-0c10-4a71-bf5f-7d3d892b153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.041 238945 DEBUG oslo_concurrency.lockutils [req-545b5c8f-a51a-4ec2-ab49-301ac7aa549a req-2d720a3d-0c10-4a71-bf5f-7d3d892b153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.041 238945 DEBUG oslo_concurrency.lockutils [req-545b5c8f-a51a-4ec2-ab49-301ac7aa549a req-2d720a3d-0c10-4a71-bf5f-7d3d892b153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.042 238945 DEBUG oslo_concurrency.lockutils [req-545b5c8f-a51a-4ec2-ab49-301ac7aa549a req-2d720a3d-0c10-4a71-bf5f-7d3d892b153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.042 238945 DEBUG nova.compute.manager [req-545b5c8f-a51a-4ec2-ab49-301ac7aa549a req-2d720a3d-0c10-4a71-bf5f-7d3d892b153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] No waiting events found dispatching network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.042 238945 WARNING nova.compute.manager [req-545b5c8f-a51a-4ec2-ab49-301ac7aa549a req-2d720a3d-0c10-4a71-bf5f-7d3d892b153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received unexpected event network-vif-plugged-41bc1922-0e8b-4e12-a842-f9f8d958cc6f for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.142 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ju28p0a" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 213 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 3.6 MiB/s wr, 84 op/s
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.172 238945 DEBUG nova.storage.rbd_utils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.177 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/disk.config edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.216 238945 INFO nova.compute.manager [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Took 11.37 seconds to build instance.#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.245 238945 DEBUG oslo_concurrency.lockutils [None req-238b212f-cec2-49bd-860e-6e35f0dbe260 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.330 238945 DEBUG oslo_concurrency.processutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/disk.config edc76197-7b28-4f2c-8086-0e78a3dcc8f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.331 238945 INFO nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Deleting local config drive /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9/disk.config because it was imported into RBD.#033[00m
Jan 27 09:22:59 np0005597378 kernel: tap82793acf-1c: entered promiscuous mode
Jan 27 09:22:59 np0005597378 NetworkManager[48904]: <info>  [1769523779.3813] manager: (tap82793acf-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/603)
Jan 27 09:22:59 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:59Z|01469|binding|INFO|Claiming lport 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 for this chassis.
Jan 27 09:22:59 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:59Z|01470|binding|INFO|82793acf-1cb9-47d4-91fb-ce8fcb0358b3: Claiming fa:16:3e:a9:5c:45 10.100.0.8
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.391 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:5c:45 10.100.0.8'], port_security=['fa:16:3e:a9:5c:45 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'edc76197-7b28-4f2c-8086-0e78a3dcc8f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2b3391f4-e251-4299-a7f3-1660fc8627f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f44adeb-7a52-4531-8e08-6f5563aa7c97, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82793acf-1cb9-47d4-91fb-ce8fcb0358b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.392 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 in datapath 4e6dcb02-8757-49a6-9c0f-33153afd479e bound to our chassis#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.393 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4e6dcb02-8757-49a6-9c0f-33153afd479e#033[00m
Jan 27 09:22:59 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:59Z|01471|binding|INFO|Setting lport 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 ovn-installed in OVS
Jan 27 09:22:59 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:59Z|01472|binding|INFO|Setting lport 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 up in Southbound
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.404 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.406 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[01b69429-0e40-4263-a33a-eecd32434429]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.406 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4e6dcb02-81 in ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.409 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4e6dcb02-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.409 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a73b070-ac9d-4608-9d21-9610cd30b4b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.412 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[05a5d49e-0ccd-4829-b428-27dae6cb73c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 systemd-udevd[364394]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.423 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[aed5cb1c-fdd1-4bb0-87f5-6383e8568df9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 systemd-machined[207425]: New machine qemu-169-instance-00000089.
Jan 27 09:22:59 np0005597378 NetworkManager[48904]: <info>  [1769523779.4460] device (tap82793acf-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:22:59 np0005597378 NetworkManager[48904]: <info>  [1769523779.4469] device (tap82793acf-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:22:59 np0005597378 systemd[1]: Started Virtual Machine qemu-169-instance-00000089.
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.456 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7120e733-f6cb-47d6-8444-93a739ad6ddd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.489 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb165db-a4e1-40e9-8a24-cf592c9f0734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 NetworkManager[48904]: <info>  [1769523779.4980] manager: (tap4e6dcb02-80): new Veth device (/org/freedesktop/NetworkManager/Devices/604)
Jan 27 09:22:59 np0005597378 systemd-udevd[364397]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.497 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8a896a62-0f43-4338-b0ea-6418f0df3bc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.536 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[74f03064-c63d-4e18-aef7-37a628a829d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.539 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f143440c-3fe8-4a5d-9b27-225f19875e05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 NetworkManager[48904]: <info>  [1769523779.5793] device (tap4e6dcb02-80): carrier: link connected
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.587 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8cba3974-e118-4481-a1c0-33d6fc5e0450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.607 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2cd189-108f-4b3c-a132-a2d502493230]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e6dcb02-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:a3:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641920, 'reachable_time': 26087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364425, 'error': None, 'target': 'ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:22:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3527131638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:22:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:22:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3527131638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.630 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a610e680-d70b-4610-a2f4-ceb928f297dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:a3cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641920, 'tstamp': 641920}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364426, 'error': None, 'target': 'ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.651 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[458394d9-36b6-4b30-b49b-24fd57cddd16]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e6dcb02-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:a3:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641920, 'reachable_time': 26087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364427, 'error': None, 'target': 'ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.682 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[53011ee4-3416-4105-8173-6f9b40a25b8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.750 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d30609-be44-4f5f-982d-cf14b409f08f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.752 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e6dcb02-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.752 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.753 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e6dcb02-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:59 np0005597378 kernel: tap4e6dcb02-80: entered promiscuous mode
Jan 27 09:22:59 np0005597378 NetworkManager[48904]: <info>  [1769523779.7553] manager: (tap4e6dcb02-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.757 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.760 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4e6dcb02-80, col_values=(('external_ids', {'iface-id': '1f366bd4-1a08-4a8c-b5c7-113e06697837'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.762 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:59 np0005597378 ovn_controller[144812]: 2026-01-27T14:22:59Z|01473|binding|INFO|Releasing lport 1f366bd4-1a08-4a8c-b5c7-113e06697837 from this chassis (sb_readonly=0)
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.763 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.764 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4e6dcb02-8757-49a6-9c0f-33153afd479e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4e6dcb02-8757-49a6-9c0f-33153afd479e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.764 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[902a53d5-23f8-41a0-a9d4-fbc80c0a8e0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.765 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-4e6dcb02-8757-49a6-9c0f-33153afd479e
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/4e6dcb02-8757-49a6-9c0f-33153afd479e.pid.haproxy
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 4e6dcb02-8757-49a6-9c0f-33153afd479e
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:22:59 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:22:59.767 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'env', 'PROCESS_TAG=haproxy-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4e6dcb02-8757-49a6-9c0f-33153afd479e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.779 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.916 238945 DEBUG nova.network.neutron [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:22:59 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.944 238945 INFO nova.compute.manager [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Took 2.00 seconds to deallocate network for instance.#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:22:59.999 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.000 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.095 238945 DEBUG oslo_concurrency.processutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.138 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523780.1176243, edc76197-7b28-4f2c-8086-0e78a3dcc8f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.139 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] VM Started (Lifecycle Event)#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.167 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.172 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523780.1179125, edc76197-7b28-4f2c-8086-0e78a3dcc8f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.172 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:23:00 np0005597378 podman[364499]: 2026-01-27 14:23:00.190313422 +0000 UTC m=+0.067386189 container create a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.192 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.201 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.218 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:23:00 np0005597378 systemd[1]: Started libpod-conmon-a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6.scope.
Jan 27 09:23:00 np0005597378 podman[364499]: 2026-01-27 14:23:00.15723197 +0000 UTC m=+0.034304757 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:23:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:23:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee070001db999123ec46736186fcea06d033d0cbac0f864ca8651276fd818478/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:00 np0005597378 podman[364499]: 2026-01-27 14:23:00.28666891 +0000 UTC m=+0.163741717 container init a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:23:00 np0005597378 podman[364499]: 2026-01-27 14:23:00.292598711 +0000 UTC m=+0.169671478 container start a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:23:00 np0005597378 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [NOTICE]   (364538) : New worker (364540) forked
Jan 27 09:23:00 np0005597378 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [NOTICE]   (364538) : Loading success.
Jan 27 09:23:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:23:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:23:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3677728563' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.729 238945 DEBUG oslo_concurrency.processutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.734 238945 DEBUG nova.compute.provider_tree [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.749 238945 DEBUG nova.scheduler.client.report [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.770 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.799 238945 INFO nova.scheduler.client.report [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 11a944d0-c529-462a-a12d-95eadb9446a8#033[00m
Jan 27 09:23:00 np0005597378 nova_compute[238941]: 2026-01-27 14:23:00.875 238945 DEBUG oslo_concurrency.lockutils [None req-db716feb-a511-41f7-8455-b6fd4348889c 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "11a944d0-c529-462a-a12d-95eadb9446a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.007 238945 DEBUG nova.compute.manager [req-9319fd3f-cef3-44aa-999b-cfb11724175d req-f06605a8-7352-423f-9ec5-ddc33b161adf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Received event network-vif-deleted-41bc1922-0e8b-4e12-a842-f9f8d958cc6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.133 238945 DEBUG nova.compute.manager [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received event network-vif-plugged-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.134 238945 DEBUG oslo_concurrency.lockutils [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.135 238945 DEBUG oslo_concurrency.lockutils [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.135 238945 DEBUG oslo_concurrency.lockutils [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.135 238945 DEBUG nova.compute.manager [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Processing event network-vif-plugged-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.135 238945 DEBUG nova.compute.manager [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received event network-vif-plugged-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.136 238945 DEBUG oslo_concurrency.lockutils [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.136 238945 DEBUG oslo_concurrency.lockutils [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.136 238945 DEBUG oslo_concurrency.lockutils [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.137 238945 DEBUG nova.compute.manager [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] No waiting events found dispatching network-vif-plugged-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.137 238945 WARNING nova.compute.manager [req-8e20dc82-95c6-404d-b7d2-8de6501dbbe4 req-6b40dc1c-fe68-4f13-bddb-89a65afcd10c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received unexpected event network-vif-plugged-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.138 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.150 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523781.1424851, edc76197-7b28-4f2c-8086-0e78a3dcc8f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.150 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:23:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 134 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 813 KiB/s rd, 3.6 MiB/s wr, 145 op/s
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.158 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.175 238945 INFO nova.virt.libvirt.driver [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Instance spawned successfully.#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.176 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.181 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.185 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.212 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.220 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.221 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.222 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.222 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.223 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.223 238945 DEBUG nova.virt.libvirt.driver [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.283 238945 INFO nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Took 10.20 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.283 238945 DEBUG nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.341 238945 INFO nova.compute.manager [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Took 11.37 seconds to build instance.#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.391 238945 DEBUG oslo_concurrency.lockutils [None req-e17eed1e-ae86-443a-bbf8-62f35e62873d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.665 238945 DEBUG nova.network.neutron [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updated VIF entry in instance network info cache for port 41bc1922-0e8b-4e12-a842-f9f8d958cc6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.666 238945 DEBUG nova.network.neutron [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Updating instance_info_cache with network_info: [{"id": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "address": "fa:16:3e:da:ba:fd", "network": {"id": "fadddb78-26b2-452e-a680-4fa4490a9885", "bridge": "br-int", "label": "tempest-network-smoke--1397912381", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:bafd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41bc1922-0e", "ovs_interfaceid": "41bc1922-0e8b-4e12-a842-f9f8d958cc6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.685 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-11a944d0-c529-462a-a12d-95eadb9446a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.686 238945 DEBUG nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.686 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.687 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.687 238945 DEBUG oslo_concurrency.lockutils [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.687 238945 DEBUG nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] No waiting events found dispatching network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:23:01 np0005597378 nova_compute[238941]: 2026-01-27 14:23:01.687 238945 WARNING nova.compute.manager [req-ee84621d-1518-4cd0-8332-8a047c1bc1cd req-fb4b9db6-16ff-430c-865f-53f5855aec8d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received unexpected event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee for instance with vm_state building and task_state spawning.
Jan 27 09:23:02 np0005597378 nova_compute[238941]: 2026-01-27 14:23:02.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:02 np0005597378 nova_compute[238941]: 2026-01-27 14:23:02.801 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 134 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 795 KiB/s rd, 2.8 MiB/s wr, 133 op/s
Jan 27 09:23:03 np0005597378 nova_compute[238941]: 2026-01-27 14:23:03.255 238945 DEBUG nova.compute.manager [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-changed-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:23:03 np0005597378 nova_compute[238941]: 2026-01-27 14:23:03.255 238945 DEBUG nova.compute.manager [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Refreshing instance network info cache due to event network-changed-b316b5fe-59c3-448f-897c-d7f990f2aeee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:23:03 np0005597378 nova_compute[238941]: 2026-01-27 14:23:03.255 238945 DEBUG oslo_concurrency.lockutils [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:23:03 np0005597378 nova_compute[238941]: 2026-01-27 14:23:03.256 238945 DEBUG oslo_concurrency.lockutils [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:23:03 np0005597378 nova_compute[238941]: 2026-01-27 14:23:03.256 238945 DEBUG nova.network.neutron [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Refreshing network info cache for port b316b5fe-59c3-448f-897c-d7f990f2aeee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 09:23:03 np0005597378 nova_compute[238941]: 2026-01-27 14:23:03.579 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523768.5743687, ddccc961-2581-4996-9b3f-b29ebc1c25d5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:23:03 np0005597378 nova_compute[238941]: 2026-01-27 14:23:03.579 238945 INFO nova.compute.manager [-] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] VM Stopped (Lifecycle Event)
Jan 27 09:23:03 np0005597378 nova_compute[238941]: 2026-01-27 14:23:03.598 238945 DEBUG nova.compute.manager [None req-fbb180be-4736-4be9-93dd-bdb502a0cb1e - - - - - -] [instance: ddccc961-2581-4996-9b3f-b29ebc1c25d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:23:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.8 MiB/s wr, 230 op/s
Jan 27 09:23:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:23:05 np0005597378 nova_compute[238941]: 2026-01-27 14:23:05.506 238945 DEBUG nova.network.neutron [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updated VIF entry in instance network info cache for port b316b5fe-59c3-448f-897c-d7f990f2aeee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 09:23:05 np0005597378 nova_compute[238941]: 2026-01-27 14:23:05.507 238945 DEBUG nova.network.neutron [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updating instance_info_cache with network_info: [{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:23:05 np0005597378 nova_compute[238941]: 2026-01-27 14:23:05.525 238945 DEBUG oslo_concurrency.lockutils [req-2bbbe606-c58b-4dc1-8181-ed1075091fd6 req-da64e52f-22d1-4ee1-81ae-a6955a8e60e6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:23:06 np0005597378 nova_compute[238941]: 2026-01-27 14:23:06.285 238945 DEBUG nova.compute.manager [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received event network-changed-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:23:06 np0005597378 nova_compute[238941]: 2026-01-27 14:23:06.285 238945 DEBUG nova.compute.manager [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Refreshing instance network info cache due to event network-changed-82793acf-1cb9-47d4-91fb-ce8fcb0358b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:23:06 np0005597378 nova_compute[238941]: 2026-01-27 14:23:06.285 238945 DEBUG oslo_concurrency.lockutils [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:23:06 np0005597378 nova_compute[238941]: 2026-01-27 14:23:06.285 238945 DEBUG oslo_concurrency.lockutils [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:23:06 np0005597378 nova_compute[238941]: 2026-01-27 14:23:06.286 238945 DEBUG nova.network.neutron [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Refreshing network info cache for port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 09:23:06 np0005597378 nova_compute[238941]: 2026-01-27 14:23:06.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:23:06 np0005597378 nova_compute[238941]: 2026-01-27 14:23:06.401 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:23:06 np0005597378 nova_compute[238941]: 2026-01-27 14:23:06.402 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:23:06 np0005597378 nova_compute[238941]: 2026-01-27 14:23:06.402 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:23:06 np0005597378 nova_compute[238941]: 2026-01-27 14:23:06.402 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:23:06 np0005597378 nova_compute[238941]: 2026-01-27 14:23:06.403 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:23:06 np0005597378 podman[364571]: 2026-01-27 14:23:06.725089055 +0000 UTC m=+0.056453254 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:23:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:23:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3727718488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.034 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:23:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 505 KiB/s wr, 197 op/s
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.241 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.242 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.248 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.248 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.495 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.499 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3298MB free_disk=59.94564992189407GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.501 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.502 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.626 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 73e1c4d9-d84d-42d0-a385-e816ca65b541 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.627 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance edc76197-7b28-4f2c-8086-0e78a3dcc8f9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.628 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.629 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.685 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.736 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:07 np0005597378 nova_compute[238941]: 2026-01-27 14:23:07.808 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:08 np0005597378 nova_compute[238941]: 2026-01-27 14:23:08.041 238945 DEBUG nova.network.neutron [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updated VIF entry in instance network info cache for port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 09:23:08 np0005597378 nova_compute[238941]: 2026-01-27 14:23:08.042 238945 DEBUG nova.network.neutron [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updating instance_info_cache with network_info: [{"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:23:08 np0005597378 nova_compute[238941]: 2026-01-27 14:23:08.064 238945 DEBUG oslo_concurrency.lockutils [req-db284d4c-187b-4bda-a818-b820f8a003ed req-160b59dc-9821-464a-b1fd-5c794de7a63c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:23:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:23:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82943654' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:23:08 np0005597378 nova_compute[238941]: 2026-01-27 14:23:08.311 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:23:08 np0005597378 nova_compute[238941]: 2026-01-27 14:23:08.317 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:23:08 np0005597378 nova_compute[238941]: 2026-01-27 14:23:08.331 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:23:08 np0005597378 nova_compute[238941]: 2026-01-27 14:23:08.348 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:23:08 np0005597378 nova_compute[238941]: 2026-01-27 14:23:08.349 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:23:08 np0005597378 podman[364615]: 2026-01-27 14:23:08.753318603 +0000 UTC m=+0.086046941 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 09:23:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:08Z|01474|binding|INFO|Releasing lport 1f366bd4-1a08-4a8c-b5c7-113e06697837 from this chassis (sb_readonly=0)
Jan 27 09:23:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:08Z|01475|binding|INFO|Releasing lport 200fc390-2bd2-4617-9a70-937136a8fecc from this chassis (sb_readonly=0)
Jan 27 09:23:08 np0005597378 nova_compute[238941]: 2026-01-27 14:23:08.976 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 175 op/s
Jan 27 09:23:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:23:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 176 op/s
Jan 27 09:23:11 np0005597378 nova_compute[238941]: 2026-01-27 14:23:11.349 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:23:11 np0005597378 nova_compute[238941]: 2026-01-27 14:23:11.908 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523776.8322167, 11a944d0-c529-462a-a12d-95eadb9446a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:23:11 np0005597378 nova_compute[238941]: 2026-01-27 14:23:11.908 238945 INFO nova.compute.manager [-] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] VM Stopped (Lifecycle Event)
Jan 27 09:23:11 np0005597378 nova_compute[238941]: 2026-01-27 14:23:11.935 238945 DEBUG nova.compute.manager [None req-8a4082ee-e0e3-4b07-a0a9-83305de1d7fb - - - - - -] [instance: 11a944d0-c529-462a-a12d-95eadb9446a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:23:12 np0005597378 nova_compute[238941]: 2026-01-27 14:23:12.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:23:12 np0005597378 nova_compute[238941]: 2026-01-27 14:23:12.741 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:12 np0005597378 nova_compute[238941]: 2026-01-27 14:23:12.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:12 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:12Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:b9:af 10.100.0.5
Jan 27 09:23:12 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:12Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:b9:af 10.100.0.5
Jan 27 09:23:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 134 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 12 KiB/s wr, 114 op/s
Jan 27 09:23:13 np0005597378 nova_compute[238941]: 2026-01-27 14:23:13.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:23:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 27 09:23:14 np0005597378 nova_compute[238941]: 2026-01-27 14:23:14.341 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:14 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:14Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:5c:45 10.100.0.8
Jan 27 09:23:14 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:14Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:5c:45 10.100.0.8
Jan 27 09:23:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 175 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.1 MiB/s wr, 177 op/s
Jan 27 09:23:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:23:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:23:17
Jan 27 09:23:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:23:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:23:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.meta']
Jan 27 09:23:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:23:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 195 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.2 MiB/s wr, 138 op/s
Jan 27 09:23:17 np0005597378 nova_compute[238941]: 2026-01-27 14:23:17.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:23:17 np0005597378 nova_compute[238941]: 2026-01-27 14:23:17.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:23:17 np0005597378 nova_compute[238941]: 2026-01-27 14:23:17.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:23:17 np0005597378 nova_compute[238941]: 2026-01-27 14:23:17.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:17 np0005597378 nova_compute[238941]: 2026-01-27 14:23:17.758 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:23:17 np0005597378 nova_compute[238941]: 2026-01-27 14:23:17.758 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:23:17 np0005597378 nova_compute[238941]: 2026-01-27 14:23:17.758 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:23:17 np0005597378 nova_compute[238941]: 2026-01-27 14:23:17.759 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 73e1c4d9-d84d-42d0-a385-e816ca65b541 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:23:17 np0005597378 nova_compute[238941]: 2026-01-27 14:23:17.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:23:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:23:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:23:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:23:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:23:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:23:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:23:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:23:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:23:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:23:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:23:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:23:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:23:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:23:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:23:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:23:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 195 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 606 KiB/s rd, 4.2 MiB/s wr, 121 op/s
Jan 27 09:23:19 np0005597378 nova_compute[238941]: 2026-01-27 14:23:19.265 238945 INFO nova.compute.manager [None req-ed906b09-bf04-43a4-8d3b-24d43ba7d609 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Get console output#033[00m
Jan 27 09:23:19 np0005597378 nova_compute[238941]: 2026-01-27 14:23:19.272 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:23:20 np0005597378 nova_compute[238941]: 2026-01-27 14:23:20.001 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updating instance_info_cache with network_info: [{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:23:20 np0005597378 nova_compute[238941]: 2026-01-27 14:23:20.024 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:23:20 np0005597378 nova_compute[238941]: 2026-01-27 14:23:20.025 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:23:20 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:20Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:5c:45 10.100.0.8
Jan 27 09:23:20 np0005597378 nova_compute[238941]: 2026-01-27 14:23:20.382 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:23:20 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:20Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:5c:45 10.100.0.8
Jan 27 09:23:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Jan 27 09:23:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:21.918 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:e9:0d 10.100.0.2 2001:db8::f816:3eff:fe45:e90d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe45:e90d/64', 'neutron:device_id': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=970192a6-003b-47e3-89a0-ee0f3cb35882, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2c64544a-77a5-4e81-a088-de5cbdfdbfdd) old=Port_Binding(mac=['fa:16:3e:45:e9:0d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:23:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:21.919 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2c64544a-77a5-4e81-a088-de5cbdfdbfdd in datapath 8b62a287-47a7-4adb-9afa-c15812d1a9e4 updated#033[00m
Jan 27 09:23:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:21.920 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b62a287-47a7-4adb-9afa-c15812d1a9e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:23:21 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:21.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[51d6eeae-34be-464f-9703-1c342377db39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:22 np0005597378 nova_compute[238941]: 2026-01-27 14:23:22.748 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:22 np0005597378 nova_compute[238941]: 2026-01-27 14:23:22.809 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 619 KiB/s rd, 4.3 MiB/s wr, 124 op/s
Jan 27 09:23:23 np0005597378 nova_compute[238941]: 2026-01-27 14:23:23.579 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:23 np0005597378 nova_compute[238941]: 2026-01-27 14:23:23.580 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:23 np0005597378 nova_compute[238941]: 2026-01-27 14:23:23.603 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:23:23 np0005597378 nova_compute[238941]: 2026-01-27 14:23:23.676 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:23 np0005597378 nova_compute[238941]: 2026-01-27 14:23:23.677 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:23 np0005597378 nova_compute[238941]: 2026-01-27 14:23:23.687 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:23:23 np0005597378 nova_compute[238941]: 2026-01-27 14:23:23.687 238945 INFO nova.compute.claims [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:23:23 np0005597378 nova_compute[238941]: 2026-01-27 14:23:23.827 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:23:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:23:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/926211527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.413 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.419 238945 DEBUG nova.compute.provider_tree [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.436 238945 DEBUG nova.scheduler.client.report [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.460 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.460 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.513 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.513 238945 DEBUG nova.network.neutron [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.537 238945 INFO nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.554 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.639 238945 DEBUG nova.compute.manager [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received event network-changed-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.640 238945 DEBUG nova.compute.manager [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Refreshing instance network info cache due to event network-changed-82793acf-1cb9-47d4-91fb-ce8fcb0358b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.640 238945 DEBUG oslo_concurrency.lockutils [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.641 238945 DEBUG oslo_concurrency.lockutils [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.641 238945 DEBUG nova.network.neutron [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Refreshing network info cache for port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.645 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.646 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.647 238945 INFO nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Creating image(s)#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.671 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.693 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.712 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.715 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.755 238945 DEBUG nova.policy [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.760 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.761 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.761 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.761 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.762 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.763 238945 INFO nova.compute.manager [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Terminating instance#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.764 238945 DEBUG nova.compute.manager [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.799 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.799 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.800 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.800 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:24 np0005597378 kernel: tap82793acf-1c (unregistering): left promiscuous mode
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.819 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:24 np0005597378 NetworkManager[48904]: <info>  [1769523804.8227] device (tap82793acf-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.826 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8495a58a-7371-4222-afef-f486eafff82d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:24 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:24Z|01476|binding|INFO|Releasing lport 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 from this chassis (sb_readonly=0)
Jan 27 09:23:24 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:24Z|01477|binding|INFO|Setting lport 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 down in Southbound
Jan 27 09:23:24 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:24Z|01478|binding|INFO|Removing iface tap82793acf-1c ovn-installed in OVS
Jan 27 09:23:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:24.841 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:5c:45 10.100.0.8'], port_security=['fa:16:3e:a9:5c:45 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'edc76197-7b28-4f2c-8086-0e78a3dcc8f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2b3391f4-e251-4299-a7f3-1660fc8627f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f44adeb-7a52-4531-8e08-6f5563aa7c97, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=82793acf-1cb9-47d4-91fb-ce8fcb0358b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:23:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:24.842 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3 in datapath 4e6dcb02-8757-49a6-9c0f-33153afd479e unbound from our chassis#033[00m
Jan 27 09:23:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:24.843 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4e6dcb02-8757-49a6-9c0f-33153afd479e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:23:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:24.844 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b6382f43-0eb9-41cb-bb10-099e63900704]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:24.845 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e namespace which is not needed anymore#033[00m
Jan 27 09:23:24 np0005597378 nova_compute[238941]: 2026-01-27 14:23:24.863 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:24 np0005597378 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000089.scope: Deactivated successfully.
Jan 27 09:23:24 np0005597378 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000089.scope: Consumed 13.812s CPU time.
Jan 27 09:23:24 np0005597378 systemd-machined[207425]: Machine qemu-169-instance-00000089 terminated.
Jan 27 09:23:24 np0005597378 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [NOTICE]   (364538) : haproxy version is 2.8.14-c23fe91
Jan 27 09:23:24 np0005597378 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [NOTICE]   (364538) : path to executable is /usr/sbin/haproxy
Jan 27 09:23:24 np0005597378 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [WARNING]  (364538) : Exiting Master process...
Jan 27 09:23:24 np0005597378 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [ALERT]    (364538) : Current worker (364540) exited with code 143 (Terminated)
Jan 27 09:23:24 np0005597378 neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e[364516]: [WARNING]  (364538) : All workers exited. Exiting... (0)
Jan 27 09:23:24 np0005597378 systemd[1]: libpod-a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6.scope: Deactivated successfully.
Jan 27 09:23:24 np0005597378 podman[364776]: 2026-01-27 14:23:24.98371015 +0000 UTC m=+0.052639099 container died a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.005 238945 INFO nova.virt.libvirt.driver [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Instance destroyed successfully.#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.005 238945 DEBUG nova.objects.instance [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid edc76197-7b28-4f2c-8086-0e78a3dcc8f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.019 238945 DEBUG nova.virt.libvirt.vif [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:22:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-591157072',display_name='tempest-TestNetworkBasicOps-server-591157072',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-591157072',id=137,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3+ZSjQKi5EB+akV9J717ai93W9Kt+YZsloIxUmd1JjYsCapSWKXNIyGRNdCuE9WQkR6ZXIYKnKeNMZEQXOKhgeP2S9rMEfY5MP3tGH8Db6p2l/cvx/6vbSV+ZPeDfkFQ==',key_name='tempest-TestNetworkBasicOps-127183240',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:23:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-d1v058hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:23:01Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=edc76197-7b28-4f2c-8086-0e78a3dcc8f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.020 238945 DEBUG nova.network.os_vif_util [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.022 238945 DEBUG nova.network.os_vif_util [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.022 238945 DEBUG os_vif [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.029 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.030 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82793acf-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.032 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.034 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.036 238945 INFO os_vif [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:5c:45,bridge_name='br-int',has_traffic_filtering=True,id=82793acf-1cb9-47d4-91fb-ce8fcb0358b3,network=Network(4e6dcb02-8757-49a6-9c0f-33153afd479e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82793acf-1c')#033[00m
Jan 27 09:23:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6-userdata-shm.mount: Deactivated successfully.
Jan 27 09:23:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ee070001db999123ec46736186fcea06d033d0cbac0f864ca8651276fd818478-merged.mount: Deactivated successfully.
Jan 27 09:23:25 np0005597378 podman[364776]: 2026-01-27 14:23:25.135014021 +0000 UTC m=+0.203942980 container cleanup a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:23:25 np0005597378 systemd[1]: libpod-conmon-a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6.scope: Deactivated successfully.
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.154 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 8495a58a-7371-4222-afef-f486eafff82d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 621 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Jan 27 09:23:25 np0005597378 podman[364842]: 2026-01-27 14:23:25.208295268 +0000 UTC m=+0.049304382 container remove a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:23:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.216 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4b0c78-bc8f-46e5-9a32-67980c22c90b]: (4, ('Tue Jan 27 02:23:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e (a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6)\na66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6\nTue Jan 27 02:23:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e (a66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6)\na66da5315ec0b4d900502151bde8c1dd30c9652c2c4d70d196c0729c1f6cc2a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.219 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[79937340-d74e-44f6-838e-3dfdeb20f651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.220 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e6dcb02-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.227 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.235 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:23:25 np0005597378 kernel: tap4e6dcb02-80: left promiscuous mode
Jan 27 09:23:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.243 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0566d7b0-6fb8-4a67-a5aa-e3e97737cbfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.260 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5007406f-6999-4e28-8dbb-894f12cdea8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.263 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcb9731-a2dd-4f27-8e31-a2a074fd27bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.277 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.282 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a73680d2-766d-4770-aaa9-5470144aa9fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641910, 'reachable_time': 33756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364912, 'error': None, 'target': 'ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:25 np0005597378 systemd[1]: run-netns-ovnmeta\x2d4e6dcb02\x2d8757\x2d49a6\x2d9c0f\x2d33153afd479e.mount: Deactivated successfully.
Jan 27 09:23:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.285 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4e6dcb02-8757-49a6-9c0f-33153afd479e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:23:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:25.285 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbc3631-cc8d-4220-88f0-15a2802b8a6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.339 238945 DEBUG nova.objects.instance [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid 8495a58a-7371-4222-afef-f486eafff82d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.355 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.356 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Ensure instance console log exists: /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.357 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.357 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.358 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.436 238945 INFO nova.virt.libvirt.driver [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Deleting instance files /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9_del#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.437 238945 INFO nova.virt.libvirt.driver [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Deletion of /var/lib/nova/instances/edc76197-7b28-4f2c-8086-0e78a3dcc8f9_del complete#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.501 238945 INFO nova.compute.manager [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.502 238945 DEBUG oslo.service.loopingcall [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.502 238945 DEBUG nova.compute.manager [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:23:25 np0005597378 nova_compute[238941]: 2026-01-27 14:23:25.502 238945 DEBUG nova.network.neutron [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:23:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:23:26 np0005597378 nova_compute[238941]: 2026-01-27 14:23:26.833 238945 DEBUG nova.network.neutron [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Successfully created port: 94907172-68c1-496b-a337-e4ff0944eba7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.090 238945 DEBUG nova.network.neutron [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.116 238945 INFO nova.compute.manager [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Took 1.61 seconds to deallocate network for instance.#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.152 238945 DEBUG nova.compute.manager [req-6037a953-bca5-4527-a1e2-b5f64226a7cd req-f7cf7a29-becf-427e-86cb-c66d18102334 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Received event network-vif-deleted-82793acf-1cb9-47d4-91fb-ce8fcb0358b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.165 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.166 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 173 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 1.2 MiB/s wr, 67 op/s
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.175 238945 DEBUG nova.network.neutron [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updated VIF entry in instance network info cache for port 82793acf-1cb9-47d4-91fb-ce8fcb0358b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.176 238945 DEBUG nova.network.neutron [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Updating instance_info_cache with network_info: [{"id": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "address": "fa:16:3e:a9:5c:45", "network": {"id": "4e6dcb02-8757-49a6-9c0f-33153afd479e", "bridge": "br-int", "label": "tempest-network-smoke--1112190473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82793acf-1c", "ovs_interfaceid": "82793acf-1cb9-47d4-91fb-ce8fcb0358b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.194 238945 DEBUG oslo_concurrency.lockutils [req-e04f6ba0-c3e8-4926-b069-c7a0460ccae4 req-52128252-42be-4af4-851e-db867b59c1ad 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-edc76197-7b28-4f2c-8086-0e78a3dcc8f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.239 238945 DEBUG oslo_concurrency.processutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.811 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:23:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1833627139' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.833 238945 DEBUG oslo_concurrency.processutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.839 238945 DEBUG nova.compute.provider_tree [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.858 238945 DEBUG nova.scheduler.client.report [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012926492419661725 of space, bias 1.0, pg target 0.38779477258985173 quantized to 32 (current 32)
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693635530565573 of space, bias 1.0, pg target 0.2008090659169672 quantized to 32 (current 32)
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0375097222243419e-06 of space, bias 4.0, pg target 0.0012450116666692104 quantized to 16 (current 16)
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:23:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.931 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:27 np0005597378 nova_compute[238941]: 2026-01-27 14:23:27.965 238945 INFO nova.scheduler.client.report [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance edc76197-7b28-4f2c-8086-0e78a3dcc8f9#033[00m
Jan 27 09:23:28 np0005597378 nova_compute[238941]: 2026-01-27 14:23:28.099 238945 DEBUG oslo_concurrency.lockutils [None req-e24a68a9-10fc-45f9-a051-384f9d221e0a 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "edc76197-7b28-4f2c-8086-0e78a3dcc8f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 173 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 93 KiB/s wr, 10 op/s
Jan 27 09:23:30 np0005597378 nova_compute[238941]: 2026-01-27 14:23:30.035 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:30.192 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:e9:0d 10.100.0.2 2001:db8:0:1:f816:3eff:fe45:e90d 2001:db8::f816:3eff:fe45:e90d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe45:e90d/64 2001:db8::f816:3eff:fe45:e90d/64', 'neutron:device_id': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=970192a6-003b-47e3-89a0-ee0f3cb35882, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2c64544a-77a5-4e81-a088-de5cbdfdbfdd) old=Port_Binding(mac=['fa:16:3e:45:e9:0d 10.100.0.2 2001:db8::f816:3eff:fe45:e90d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe45:e90d/64', 'neutron:device_id': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:23:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:30.193 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2c64544a-77a5-4e81-a088-de5cbdfdbfdd in datapath 8b62a287-47a7-4adb-9afa-c15812d1a9e4 updated#033[00m
Jan 27 09:23:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:30.194 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b62a287-47a7-4adb-9afa-c15812d1a9e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:23:30 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:30.195 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[98a2dde5-33db-4a83-ba04-eddc7a4be219]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:30 np0005597378 nova_compute[238941]: 2026-01-27 14:23:30.432 238945 DEBUG nova.network.neutron [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Successfully updated port: 94907172-68c1-496b-a337-e4ff0944eba7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:23:30 np0005597378 nova_compute[238941]: 2026-01-27 14:23:30.453 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-8495a58a-7371-4222-afef-f486eafff82d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:23:30 np0005597378 nova_compute[238941]: 2026-01-27 14:23:30.454 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-8495a58a-7371-4222-afef-f486eafff82d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:23:30 np0005597378 nova_compute[238941]: 2026-01-27 14:23:30.454 238945 DEBUG nova.network.neutron [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:23:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:23:30 np0005597378 nova_compute[238941]: 2026-01-27 14:23:30.603 238945 DEBUG nova.compute.manager [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-changed-94907172-68c1-496b-a337-e4ff0944eba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:30 np0005597378 nova_compute[238941]: 2026-01-27 14:23:30.604 238945 DEBUG nova.compute.manager [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Refreshing instance network info cache due to event network-changed-94907172-68c1-496b-a337-e4ff0944eba7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:23:30 np0005597378 nova_compute[238941]: 2026-01-27 14:23:30.604 238945 DEBUG oslo_concurrency.lockutils [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-8495a58a-7371-4222-afef-f486eafff82d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:23:30 np0005597378 nova_compute[238941]: 2026-01-27 14:23:30.685 238945 DEBUG nova.network.neutron [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:23:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:31.094 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:23:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:31.095 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:23:31 np0005597378 nova_compute[238941]: 2026-01-27 14:23:31.095 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 167 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 27 09:23:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:23:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:23:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:23:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:23:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:23:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:23:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:23:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:23:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:23:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:23:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:23:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:23:31 np0005597378 podman[365097]: 2026-01-27 14:23:31.740658105 +0000 UTC m=+0.041344206 container create 0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:23:31 np0005597378 systemd[1]: Started libpod-conmon-0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9.scope.
Jan 27 09:23:31 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:23:31 np0005597378 podman[365097]: 2026-01-27 14:23:31.719896455 +0000 UTC m=+0.020582576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:23:31 np0005597378 podman[365097]: 2026-01-27 14:23:31.836237772 +0000 UTC m=+0.136923873 container init 0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:23:31 np0005597378 podman[365097]: 2026-01-27 14:23:31.844735072 +0000 UTC m=+0.145421163 container start 0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:23:31 np0005597378 podman[365097]: 2026-01-27 14:23:31.848478652 +0000 UTC m=+0.149164763 container attach 0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 09:23:31 np0005597378 quirky_turing[365113]: 167 167
Jan 27 09:23:31 np0005597378 systemd[1]: libpod-0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9.scope: Deactivated successfully.
Jan 27 09:23:31 np0005597378 podman[365097]: 2026-01-27 14:23:31.853391476 +0000 UTC m=+0.154077587 container died 0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:23:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e24dfa7614d234602cecb434f6f4fe9935b40d95e627a22f51269e7f5d5b6298-merged.mount: Deactivated successfully.
Jan 27 09:23:31 np0005597378 podman[365097]: 2026-01-27 14:23:31.90068874 +0000 UTC m=+0.201374821 container remove 0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_turing, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 09:23:31 np0005597378 systemd[1]: libpod-conmon-0af57db9ad62aa9e368463e00492d3a4db35fb025f6950973fa2128dcb753fb9.scope: Deactivated successfully.
Jan 27 09:23:32 np0005597378 podman[365137]: 2026-01-27 14:23:32.081436726 +0000 UTC m=+0.038795398 container create 89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:23:32 np0005597378 systemd[1]: Started libpod-conmon-89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298.scope.
Jan 27 09:23:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:23:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74bf777f014f9c02f55c4fd3dc772bed354f0c373e2458e2eedfd5f744e34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74bf777f014f9c02f55c4fd3dc772bed354f0c373e2458e2eedfd5f744e34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74bf777f014f9c02f55c4fd3dc772bed354f0c373e2458e2eedfd5f744e34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74bf777f014f9c02f55c4fd3dc772bed354f0c373e2458e2eedfd5f744e34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74bf777f014f9c02f55c4fd3dc772bed354f0c373e2458e2eedfd5f744e34/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:32 np0005597378 podman[365137]: 2026-01-27 14:23:32.066534513 +0000 UTC m=+0.023893125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:23:32 np0005597378 podman[365137]: 2026-01-27 14:23:32.165653567 +0000 UTC m=+0.123012159 container init 89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:23:32 np0005597378 podman[365137]: 2026-01-27 14:23:32.173816917 +0000 UTC m=+0.131175499 container start 89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 09:23:32 np0005597378 podman[365137]: 2026-01-27 14:23:32.177881656 +0000 UTC m=+0.135240258 container attach 89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 09:23:32 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:23:32 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:23:32 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:23:32 np0005597378 inspiring_pike[365154]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:23:32 np0005597378 inspiring_pike[365154]: --> All data devices are unavailable
Jan 27 09:23:32 np0005597378 systemd[1]: libpod-89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298.scope: Deactivated successfully.
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.681 238945 DEBUG nova.network.neutron [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Updating instance_info_cache with network_info: [{"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:23:32 np0005597378 podman[365174]: 2026-01-27 14:23:32.68565603 +0000 UTC m=+0.024036749 container died 89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pike, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.703 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-8495a58a-7371-4222-afef-f486eafff82d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.703 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Instance network_info: |[{"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.703 238945 DEBUG oslo_concurrency.lockutils [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-8495a58a-7371-4222-afef-f486eafff82d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.704 238945 DEBUG nova.network.neutron [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Refreshing network info cache for port 94907172-68c1-496b-a337-e4ff0944eba7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.707 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Start _get_guest_xml network_info=[{"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:23:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-90e74bf777f014f9c02f55c4fd3dc772bed354f0c373e2458e2eedfd5f744e34-merged.mount: Deactivated successfully.
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.715 238945 WARNING nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:23:32 np0005597378 podman[365174]: 2026-01-27 14:23:32.726221924 +0000 UTC m=+0.064602623 container remove 89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pike, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.726 238945 DEBUG nova.virt.libvirt.host [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.728 238945 DEBUG nova.virt.libvirt.host [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:23:32 np0005597378 systemd[1]: libpod-conmon-89b205733ebb26a40cd09cabb162a0eb5de414a649292568e56095101a929298.scope: Deactivated successfully.
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.735 238945 DEBUG nova.virt.libvirt.host [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.736 238945 DEBUG nova.virt.libvirt.host [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.736 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.736 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.737 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.738 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.738 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.738 238945 DEBUG nova.virt.hardware [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.741 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:32 np0005597378 nova_compute[238941]: 2026-01-27 14:23:32.813 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 167 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 27 09:23:33 np0005597378 podman[365272]: 2026-01-27 14:23:33.247653726 +0000 UTC m=+0.113726308 container create 443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_keldysh, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:23:33 np0005597378 podman[365272]: 2026-01-27 14:23:33.154123274 +0000 UTC m=+0.020195876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:23:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:23:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1032077714' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.335 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:33 np0005597378 systemd[1]: Started libpod-conmon-443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5.scope.
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.358 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.363 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:23:33 np0005597378 podman[365272]: 2026-01-27 14:23:33.392546514 +0000 UTC m=+0.258619126 container init 443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_keldysh, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 09:23:33 np0005597378 podman[365272]: 2026-01-27 14:23:33.400483788 +0000 UTC m=+0.266556370 container start 443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_keldysh, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:23:33 np0005597378 focused_keldysh[365297]: 167 167
Jan 27 09:23:33 np0005597378 systemd[1]: libpod-443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5.scope: Deactivated successfully.
Jan 27 09:23:33 np0005597378 podman[365272]: 2026-01-27 14:23:33.4068758 +0000 UTC m=+0.272948422 container attach 443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:23:33 np0005597378 podman[365272]: 2026-01-27 14:23:33.407681152 +0000 UTC m=+0.273753744 container died 443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_keldysh, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:23:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-92c8c1c2d7d13a100bf4a3ca862ee8493d827faca505efb7775f4b73677da035-merged.mount: Deactivated successfully.
Jan 27 09:23:33 np0005597378 podman[365272]: 2026-01-27 14:23:33.622969569 +0000 UTC m=+0.489042151 container remove 443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_keldysh, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 09:23:33 np0005597378 systemd[1]: libpod-conmon-443801683d5aceff54cd2ef5c80363b6eb70c07f4f74a32ca0e980907a932ec5.scope: Deactivated successfully.
Jan 27 09:23:33 np0005597378 podman[365352]: 2026-01-27 14:23:33.849536868 +0000 UTC m=+0.097379907 container create 47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wright, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:23:33 np0005597378 podman[365352]: 2026-01-27 14:23:33.776265602 +0000 UTC m=+0.024108641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:23:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:23:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3375274103' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.951 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.954 238945 DEBUG nova.virt.libvirt.vif [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:23:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=138,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACLypJyTNxr/Fynn0x7Iox4aV5f8zs6GUMs/FFzJg47MW9rIYS5DVZM7fp21I1fAVy1UUd2Zq3YeUu5V+K6RS6iNFuNQPbzgLn+nM6VWellLvTCtw8csDZlLRuw7hOCNQ==',key_name='tempest-TestSecurityGroupsBasicOps-733108341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-0ksp4wp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:23:24Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=8495a58a-7371-4222-afef-f486eafff82d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.955 238945 DEBUG nova.network.os_vif_util [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.956 238945 DEBUG nova.network.os_vif_util [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.957 238945 DEBUG nova.objects.instance [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid 8495a58a-7371-4222-afef-f486eafff82d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:23:33 np0005597378 systemd[1]: Started libpod-conmon-47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956.scope.
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.976 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  <uuid>8495a58a-7371-4222-afef-f486eafff82d</uuid>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  <name>instance-0000008a</name>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077</nova:name>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:23:32</nova:creationTime>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:        <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:        <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:        <nova:port uuid="94907172-68c1-496b-a337-e4ff0944eba7">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <entry name="serial">8495a58a-7371-4222-afef-f486eafff82d</entry>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <entry name="uuid">8495a58a-7371-4222-afef-f486eafff82d</entry>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/8495a58a-7371-4222-afef-f486eafff82d_disk">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/8495a58a-7371-4222-afef-f486eafff82d_disk.config">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:67:9d:b1"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <target dev="tap94907172-68"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/console.log" append="off"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:23:33 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:23:33 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:23:33 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:23:33 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.977 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Preparing to wait for external event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.977 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.978 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.978 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.978 238945 DEBUG nova.virt.libvirt.vif [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:23:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=138,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACLypJyTNxr/Fynn0x7Iox4aV5f8zs6GUMs/FFzJg47MW9rIYS5DVZM7fp21I1fAVy1UUd2Zq3YeUu5V+K6RS6iNFuNQPbzgLn+nM6VWellLvTCtw8csDZlLRuw7hOCNQ==',key_name='tempest-TestSecurityGroupsBasicOps-733108341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-0ksp4wp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:23:24Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=8495a58a-7371-4222-afef-f486eafff82d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.979 238945 DEBUG nova.network.os_vif_util [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.979 238945 DEBUG nova.network.os_vif_util [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.980 238945 DEBUG os_vif [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.980 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.981 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.981 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.987 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94907172-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.987 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94907172-68, col_values=(('external_ids', {'iface-id': '94907172-68c1-496b-a337-e4ff0944eba7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:9d:b1', 'vm-uuid': '8495a58a-7371-4222-afef-f486eafff82d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:33 np0005597378 NetworkManager[48904]: <info>  [1769523813.9901] manager: (tap94907172-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.996 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:33 np0005597378 nova_compute[238941]: 2026-01-27 14:23:33.996 238945 INFO os_vif [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68')
Jan 27 09:23:34 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:23:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9805a6cb07d9dc9a0c56dfe930eaed65516fee1f62e2a6d184a06f65c25217de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9805a6cb07d9dc9a0c56dfe930eaed65516fee1f62e2a6d184a06f65c25217de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9805a6cb07d9dc9a0c56dfe930eaed65516fee1f62e2a6d184a06f65c25217de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9805a6cb07d9dc9a0c56dfe930eaed65516fee1f62e2a6d184a06f65c25217de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:34 np0005597378 podman[365352]: 2026-01-27 14:23:34.061674809 +0000 UTC m=+0.309517868 container init 47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:23:34 np0005597378 podman[365352]: 2026-01-27 14:23:34.068604296 +0000 UTC m=+0.316447325 container start 47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.091 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 09:23:34 np0005597378 podman[365352]: 2026-01-27 14:23:34.091773461 +0000 UTC m=+0.339616500 container attach 47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wright, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.091 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.091 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:67:9d:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.092 238945 INFO nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Using config drive
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.114 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:23:34 np0005597378 serene_wright[365371]: {
Jan 27 09:23:34 np0005597378 serene_wright[365371]:    "0": [
Jan 27 09:23:34 np0005597378 serene_wright[365371]:        {
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "devices": [
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "/dev/loop3"
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            ],
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_name": "ceph_lv0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_size": "21470642176",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "name": "ceph_lv0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "tags": {
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.cluster_name": "ceph",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.crush_device_class": "",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.encrypted": "0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.objectstore": "bluestore",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.osd_id": "0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.type": "block",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.vdo": "0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.with_tpm": "0"
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            },
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "type": "block",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "vg_name": "ceph_vg0"
Jan 27 09:23:34 np0005597378 serene_wright[365371]:        }
Jan 27 09:23:34 np0005597378 serene_wright[365371]:    ],
Jan 27 09:23:34 np0005597378 serene_wright[365371]:    "1": [
Jan 27 09:23:34 np0005597378 serene_wright[365371]:        {
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "devices": [
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "/dev/loop4"
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            ],
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_name": "ceph_lv1",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_size": "21470642176",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "name": "ceph_lv1",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "tags": {
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.cluster_name": "ceph",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.crush_device_class": "",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.encrypted": "0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.objectstore": "bluestore",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.osd_id": "1",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.type": "block",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.vdo": "0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.with_tpm": "0"
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            },
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "type": "block",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "vg_name": "ceph_vg1"
Jan 27 09:23:34 np0005597378 serene_wright[365371]:        }
Jan 27 09:23:34 np0005597378 serene_wright[365371]:    ],
Jan 27 09:23:34 np0005597378 serene_wright[365371]:    "2": [
Jan 27 09:23:34 np0005597378 serene_wright[365371]:        {
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "devices": [
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "/dev/loop5"
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            ],
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_name": "ceph_lv2",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_size": "21470642176",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "name": "ceph_lv2",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "tags": {
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.cluster_name": "ceph",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.crush_device_class": "",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.encrypted": "0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.objectstore": "bluestore",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.osd_id": "2",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.type": "block",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.vdo": "0",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:                "ceph.with_tpm": "0"
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            },
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "type": "block",
Jan 27 09:23:34 np0005597378 serene_wright[365371]:            "vg_name": "ceph_vg2"
Jan 27 09:23:34 np0005597378 serene_wright[365371]:        }
Jan 27 09:23:34 np0005597378 serene_wright[365371]:    ]
Jan 27 09:23:34 np0005597378 serene_wright[365371]: }
Jan 27 09:23:34 np0005597378 systemd[1]: libpod-47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956.scope: Deactivated successfully.
Jan 27 09:23:34 np0005597378 podman[365352]: 2026-01-27 14:23:34.366840919 +0000 UTC m=+0.614683948 container died 47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 09:23:34 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9805a6cb07d9dc9a0c56dfe930eaed65516fee1f62e2a6d184a06f65c25217de-merged.mount: Deactivated successfully.
Jan 27 09:23:34 np0005597378 podman[365352]: 2026-01-27 14:23:34.489615911 +0000 UTC m=+0.737458940 container remove 47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:23:34 np0005597378 systemd[1]: libpod-conmon-47da6c253f00fb6010d1164b3aefebe8e79f1e9fbc82276c2c91ac1a8bd5d956.scope: Deactivated successfully.
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.552 238945 INFO nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Creating config drive at /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/disk.config
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.557 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7c_263pv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:23:34 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:34Z|01479|binding|INFO|Releasing lport 200fc390-2bd2-4617-9a70-937136a8fecc from this chassis (sb_readonly=0)
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.696 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7c_263pv" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.719 238945 DEBUG nova.storage.rbd_utils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 8495a58a-7371-4222-afef-f486eafff82d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.722 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/disk.config 8495a58a-7371-4222-afef-f486eafff82d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.900 238945 DEBUG oslo_concurrency.processutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/disk.config 8495a58a-7371-4222-afef-f486eafff82d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.901 238945 INFO nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Deleting local config drive /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d/disk.config because it was imported into RBD.
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.919 238945 DEBUG nova.network.neutron [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Updated VIF entry in instance network info cache for port 94907172-68c1-496b-a337-e4ff0944eba7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.920 238945 DEBUG nova.network.neutron [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Updating instance_info_cache with network_info: [{"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:23:34 np0005597378 podman[365515]: 2026-01-27 14:23:34.932689139 +0000 UTC m=+0.048839538 container create 37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 09:23:34 np0005597378 kernel: tap94907172-68: entered promiscuous mode
Jan 27 09:23:34 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:34Z|01480|binding|INFO|Claiming lport 94907172-68c1-496b-a337-e4ff0944eba7 for this chassis.
Jan 27 09:23:34 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:34Z|01481|binding|INFO|94907172-68c1-496b-a337-e4ff0944eba7: Claiming fa:16:3e:67:9d:b1 10.100.0.6
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:34 np0005597378 NetworkManager[48904]: <info>  [1769523814.9571] manager: (tap94907172-68): new Tun device (/org/freedesktop/NetworkManager/Devices/607)
Jan 27 09:23:34 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:34Z|01482|binding|INFO|Setting lport 94907172-68c1-496b-a337-e4ff0944eba7 ovn-installed in OVS
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.973 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:34 np0005597378 nova_compute[238941]: 2026-01-27 14:23:34.975 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:34 np0005597378 systemd[1]: Started libpod-conmon-37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f.scope.
Jan 27 09:23:35 np0005597378 podman[365515]: 2026-01-27 14:23:34.906285307 +0000 UTC m=+0.022435726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.004 238945 DEBUG oslo_concurrency.lockutils [req-114f10f4-34f8-4773-924d-0a243a42cc58 req-d3cfcb54-331e-4da2-ae32-f6e5838d1c2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-8495a58a-7371-4222-afef-f486eafff82d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:23:35 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:23:35 np0005597378 systemd-machined[207425]: New machine qemu-170-instance-0000008a.
Jan 27 09:23:35 np0005597378 systemd[1]: Started Virtual Machine qemu-170-instance-0000008a.
Jan 27 09:23:35 np0005597378 systemd-udevd[365546]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:23:35 np0005597378 podman[365515]: 2026-01-27 14:23:35.03950326 +0000 UTC m=+0.155653679 container init 37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:23:35 np0005597378 NetworkManager[48904]: <info>  [1769523815.0456] device (tap94907172-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:23:35 np0005597378 NetworkManager[48904]: <info>  [1769523815.0465] device (tap94907172-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:23:35 np0005597378 podman[365515]: 2026-01-27 14:23:35.048273767 +0000 UTC m=+0.164424166 container start 37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elbakyan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:23:35 np0005597378 sad_elbakyan[365540]: 167 167
Jan 27 09:23:35 np0005597378 systemd[1]: libpod-37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f.scope: Deactivated successfully.
Jan 27 09:23:35 np0005597378 conmon[365540]: conmon 37a176055b9688a5f780 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f.scope/container/memory.events
Jan 27 09:23:35 np0005597378 podman[365515]: 2026-01-27 14:23:35.054890455 +0000 UTC m=+0.171040884 container attach 37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elbakyan, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 09:23:35 np0005597378 podman[365515]: 2026-01-27 14:23:35.055261964 +0000 UTC m=+0.171412383 container died 37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:23:35 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1f3c4f7b047d0c1df920188f8e54a7cd46df03c1ac1989c54167bdcd8a2334f2-merged.mount: Deactivated successfully.
Jan 27 09:23:35 np0005597378 podman[365515]: 2026-01-27 14:23:35.131584903 +0000 UTC m=+0.247735302 container remove 37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:23:35 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:35Z|01483|binding|INFO|Setting lport 94907172-68c1-496b-a337-e4ff0944eba7 up in Southbound
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.138 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:9d:b1 10.100.0.6'], port_security=['fa:16:3e:67:9d:b1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8495a58a-7371-4222-afef-f486eafff82d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05b4b753-ecde-4c48-a69b-2458162ac6c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d080dd9-6e75-419f-9464-5edb98123c9a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=94907172-68c1-496b-a337-e4ff0944eba7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.141 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 94907172-68c1-496b-a337-e4ff0944eba7 in datapath acd03ef9-9bfd-4078-adf3-4b0b930dc081 bound to our chassis#033[00m
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.150 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acd03ef9-9bfd-4078-adf3-4b0b930dc081#033[00m
Jan 27 09:23:35 np0005597378 systemd[1]: libpod-conmon-37a176055b9688a5f780119dda6c9f7af67b67e0e5aaf87cb0ecf8af3150350f.scope: Deactivated successfully.
Jan 27 09:23:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 167 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.193 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[546d4e09-f0bf-40e2-bc14-175451d02136]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.227 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8142cbad-9184-4475-bc37-41bb76f2a954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.236 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[abe8f7a9-4db3-4680-8904-55ceeed09e6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.264 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2487a863-738e-4e29-b651-67df2f3cec96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.282 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[66dab145-7570-4515-be38-495010091b24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacd03ef9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:f2:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641608, 'reachable_time': 25840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365581, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.298 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ea83d90d-25fd-46e8-8e98-6dbfeadf0423]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapacd03ef9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641623, 'tstamp': 641623}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365589, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapacd03ef9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641626, 'tstamp': 641626}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365589, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.300 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacd03ef9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.302 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.303 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacd03ef9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.303 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.304 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacd03ef9-90, col_values=(('external_ids', {'iface-id': '200fc390-2bd2-4617-9a70-937136a8fecc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:35.305 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:23:35 np0005597378 podman[365582]: 2026-01-27 14:23:35.336300264 +0000 UTC m=+0.050476502 container create 637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wilson, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:23:35 np0005597378 systemd[1]: Started libpod-conmon-637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb.scope.
Jan 27 09:23:35 np0005597378 podman[365582]: 2026-01-27 14:23:35.312379529 +0000 UTC m=+0.026555797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:23:35 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:23:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f01c8c23702c66fa01b070911a74a897eb6e237152ee3f6a0f96386ea41d9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f01c8c23702c66fa01b070911a74a897eb6e237152ee3f6a0f96386ea41d9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f01c8c23702c66fa01b070911a74a897eb6e237152ee3f6a0f96386ea41d9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f01c8c23702c66fa01b070911a74a897eb6e237152ee3f6a0f96386ea41d9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:35 np0005597378 podman[365582]: 2026-01-27 14:23:35.453165236 +0000 UTC m=+0.167341484 container init 637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wilson, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:23:35 np0005597378 podman[365582]: 2026-01-27 14:23:35.46518782 +0000 UTC m=+0.179364058 container start 637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wilson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:23:35 np0005597378 podman[365582]: 2026-01-27 14:23:35.476581507 +0000 UTC m=+0.190757775 container attach 637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wilson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.496 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523815.495936, 8495a58a-7371-4222-afef-f486eafff82d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.497 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] VM Started (Lifecycle Event)#033[00m
Jan 27 09:23:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.552 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.556 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523815.4960806, 8495a58a-7371-4222-afef-f486eafff82d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.557 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.679 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.682 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.704 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.851 238945 DEBUG nova.compute.manager [req-7d96ef0b-6460-40e2-84c2-fc0b7969413c req-472a18b6-84a4-4ebc-80e2-583026420992 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.852 238945 DEBUG oslo_concurrency.lockutils [req-7d96ef0b-6460-40e2-84c2-fc0b7969413c req-472a18b6-84a4-4ebc-80e2-583026420992 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.852 238945 DEBUG oslo_concurrency.lockutils [req-7d96ef0b-6460-40e2-84c2-fc0b7969413c req-472a18b6-84a4-4ebc-80e2-583026420992 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.852 238945 DEBUG oslo_concurrency.lockutils [req-7d96ef0b-6460-40e2-84c2-fc0b7969413c req-472a18b6-84a4-4ebc-80e2-583026420992 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.852 238945 DEBUG nova.compute.manager [req-7d96ef0b-6460-40e2-84c2-fc0b7969413c req-472a18b6-84a4-4ebc-80e2-583026420992 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Processing event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.853 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.855 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523815.8558033, 8495a58a-7371-4222-afef-f486eafff82d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.856 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.857 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.863 238945 INFO nova.virt.libvirt.driver [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Instance spawned successfully.#033[00m
Jan 27 09:23:35 np0005597378 nova_compute[238941]: 2026-01-27 14:23:35.864 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.017 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.023 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.023 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.024 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.024 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.024 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.025 238945 DEBUG nova.virt.libvirt.driver [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.029 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.102 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:23:36 np0005597378 lvm[365717]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:23:36 np0005597378 lvm[365717]: VG ceph_vg0 finished
Jan 27 09:23:36 np0005597378 lvm[365719]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:23:36 np0005597378 lvm[365719]: VG ceph_vg1 finished
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.140 238945 INFO nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Took 11.49 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.141 238945 DEBUG nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:23:36 np0005597378 lvm[365721]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:23:36 np0005597378 lvm[365721]: VG ceph_vg2 finished
Jan 27 09:23:36 np0005597378 lvm[365723]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:23:36 np0005597378 lvm[365723]: VG ceph_vg2 finished
Jan 27 09:23:36 np0005597378 interesting_wilson[365639]: {}
Jan 27 09:23:36 np0005597378 systemd[1]: libpod-637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb.scope: Deactivated successfully.
Jan 27 09:23:36 np0005597378 systemd[1]: libpod-637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb.scope: Consumed 1.301s CPU time.
Jan 27 09:23:36 np0005597378 podman[365582]: 2026-01-27 14:23:36.254276871 +0000 UTC m=+0.968453119 container died 637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.356 238945 INFO nova.compute.manager [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Took 12.71 seconds to build instance.#033[00m
Jan 27 09:23:36 np0005597378 systemd[1]: var-lib-containers-storage-overlay-91f01c8c23702c66fa01b070911a74a897eb6e237152ee3f6a0f96386ea41d9b-merged.mount: Deactivated successfully.
Jan 27 09:23:36 np0005597378 podman[365582]: 2026-01-27 14:23:36.395655094 +0000 UTC m=+1.109831332 container remove 637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 09:23:36 np0005597378 systemd[1]: libpod-conmon-637bc49328666ecc49ec34bd30985cc2de2a7a80937e18ebd4781846f1be2fbb.scope: Deactivated successfully.
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.428 238945 DEBUG oslo_concurrency.lockutils [None req-039a9c01-3980-4808-af99-0f0c4758891f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:23:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:23:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:23:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.681 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.682 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.698 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.786 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.787 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.793 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.793 238945 INFO nova.compute.claims [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:23:36 np0005597378 nova_compute[238941]: 2026-01-27 14:23:36.952 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 167 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 27 09:23:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:23:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:23:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:23:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225825362' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.488 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.494 238945 DEBUG nova.compute.provider_tree [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.518 238945 DEBUG nova.scheduler.client.report [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.544 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.545 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.597 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.597 238945 DEBUG nova.network.neutron [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.622 238945 INFO nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.646 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:23:37 np0005597378 podman[365783]: 2026-01-27 14:23:37.710076382 +0000 UTC m=+0.053554116 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.769 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.770 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.770 238945 INFO nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Creating image(s)#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.803 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.834 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.863 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.867 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.908 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.915 238945 DEBUG nova.policy [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.955 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.956 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.956 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.956 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.981 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:37 np0005597378 nova_compute[238941]: 2026-01-27 14:23:37.988 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.123 238945 DEBUG nova.compute.manager [req-ace4720e-c90b-4678-94c1-43b89ef2a2ef req-254ed9b4-9523-44bf-ab0e-0b35af3f9368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.124 238945 DEBUG oslo_concurrency.lockutils [req-ace4720e-c90b-4678-94c1-43b89ef2a2ef req-254ed9b4-9523-44bf-ab0e-0b35af3f9368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.125 238945 DEBUG oslo_concurrency.lockutils [req-ace4720e-c90b-4678-94c1-43b89ef2a2ef req-254ed9b4-9523-44bf-ab0e-0b35af3f9368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.125 238945 DEBUG oslo_concurrency.lockutils [req-ace4720e-c90b-4678-94c1-43b89ef2a2ef req-254ed9b4-9523-44bf-ab0e-0b35af3f9368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.125 238945 DEBUG nova.compute.manager [req-ace4720e-c90b-4678-94c1-43b89ef2a2ef req-254ed9b4-9523-44bf-ab0e-0b35af3f9368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] No waiting events found dispatching network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.125 238945 WARNING nova.compute.manager [req-ace4720e-c90b-4678-94c1-43b89ef2a2ef req-254ed9b4-9523-44bf-ab0e-0b35af3f9368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received unexpected event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.412 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.473 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.571 238945 DEBUG nova.objects.instance [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid a48b56d5-6e62-4476-bee9-dc8cf3c1759d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.592 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.592 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Ensure instance console log exists: /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.593 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.593 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.593 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.792 238945 DEBUG nova.network.neutron [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Successfully created port: b56b41e5-7177-4698-94c9-d69ffe22de91 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:23:38 np0005597378 nova_compute[238941]: 2026-01-27 14:23:38.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:39.097 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 167 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 MiB/s wr, 51 op/s
Jan 27 09:23:39 np0005597378 podman[365969]: 2026-01-27 14:23:39.769080419 +0000 UTC m=+0.098190628 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 27 09:23:40 np0005597378 nova_compute[238941]: 2026-01-27 14:23:40.002 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523805.001263, edc76197-7b28-4f2c-8086-0e78a3dcc8f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:23:40 np0005597378 nova_compute[238941]: 2026-01-27 14:23:40.003 238945 INFO nova.compute.manager [-] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:23:40 np0005597378 nova_compute[238941]: 2026-01-27 14:23:40.024 238945 DEBUG nova.compute.manager [None req-6b3b5dbe-1f5c-4fa3-913b-75d96a13f20c - - - - - -] [instance: edc76197-7b28-4f2c-8086-0e78a3dcc8f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:23:40 np0005597378 nova_compute[238941]: 2026-01-27 14:23:40.215 238945 DEBUG nova.network.neutron [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Successfully updated port: b56b41e5-7177-4698-94c9-d69ffe22de91 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:23:40 np0005597378 nova_compute[238941]: 2026-01-27 14:23:40.239 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:23:40 np0005597378 nova_compute[238941]: 2026-01-27 14:23:40.239 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:23:40 np0005597378 nova_compute[238941]: 2026-01-27 14:23:40.239 238945 DEBUG nova.network.neutron [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:23:40 np0005597378 nova_compute[238941]: 2026-01-27 14:23:40.360 238945 DEBUG nova.compute.manager [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-changed-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:40 np0005597378 nova_compute[238941]: 2026-01-27 14:23:40.360 238945 DEBUG nova.compute.manager [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Refreshing instance network info cache due to event network-changed-b56b41e5-7177-4698-94c9-d69ffe22de91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:23:40 np0005597378 nova_compute[238941]: 2026-01-27 14:23:40.360 238945 DEBUG oslo_concurrency.lockutils [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:23:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:23:40 np0005597378 nova_compute[238941]: 2026-01-27 14:23:40.982 238945 DEBUG nova.network.neutron [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:23:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 213 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 150 op/s
Jan 27 09:23:42 np0005597378 nova_compute[238941]: 2026-01-27 14:23:42.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 213 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 09:23:43 np0005597378 nova_compute[238941]: 2026-01-27 14:23:43.933 238945 DEBUG nova.network.neutron [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updating instance_info_cache with network_info: [{"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:23:43 np0005597378 nova_compute[238941]: 2026-01-27 14:23:43.963 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:23:43 np0005597378 nova_compute[238941]: 2026-01-27 14:23:43.964 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Instance network_info: |[{"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:23:43 np0005597378 nova_compute[238941]: 2026-01-27 14:23:43.964 238945 DEBUG oslo_concurrency.lockutils [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:23:43 np0005597378 nova_compute[238941]: 2026-01-27 14:23:43.965 238945 DEBUG nova.network.neutron [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Refreshing network info cache for port b56b41e5-7177-4698-94c9-d69ffe22de91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:23:43 np0005597378 nova_compute[238941]: 2026-01-27 14:23:43.968 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Start _get_guest_xml network_info=[{"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:23:43 np0005597378 nova_compute[238941]: 2026-01-27 14:23:43.972 238945 WARNING nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:23:43 np0005597378 nova_compute[238941]: 2026-01-27 14:23:43.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.011 238945 DEBUG nova.virt.libvirt.host [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.012 238945 DEBUG nova.virt.libvirt.host [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.015 238945 DEBUG nova.virt.libvirt.host [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.016 238945 DEBUG nova.virt.libvirt.host [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.017 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.017 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.018 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.018 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.018 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.019 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.019 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.020 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.020 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.020 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.021 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.021 238945 DEBUG nova.virt.hardware [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.024 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:23:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:23:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/462597305' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.597 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.619 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:44 np0005597378 nova_compute[238941]: 2026-01-27 14:23:44.625 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 213 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 09:23:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:23:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3719711396' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.217 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.219 238945 DEBUG nova.virt.libvirt.vif [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-562631681',display_name='tempest-TestGettingAddress-server-562631681',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-562631681',id=139,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFecr+tX4/5Xu1tZ9NkxPRNp+sU4Sm9UH/xXlVYsgMAtclcrsWrG7L7f6bcerkRRUeyQngAWsSlep0AlnkCJJUfVUUUYoEAh7gvJP5a8vhXSQvgUuH8L5c7NH6HiA18qA==',key_name='tempest-TestGettingAddress-1130984253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ncwzlcth',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:23:37Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=a48b56d5-6e62-4476-bee9-dc8cf3c1759d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.220 238945 DEBUG nova.network.os_vif_util [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.221 238945 DEBUG nova.network.os_vif_util [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.222 238945 DEBUG nova.objects.instance [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid a48b56d5-6e62-4476-bee9-dc8cf3c1759d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.238 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  <uuid>a48b56d5-6e62-4476-bee9-dc8cf3c1759d</uuid>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  <name>instance-0000008b</name>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-562631681</nova:name>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:23:43</nova:creationTime>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:        <nova:port uuid="b56b41e5-7177-4698-94c9-d69ffe22de91">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe6a:4a93" ipVersion="6"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe6a:4a93" ipVersion="6"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <entry name="serial">a48b56d5-6e62-4476-bee9-dc8cf3c1759d</entry>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <entry name="uuid">a48b56d5-6e62-4476-bee9-dc8cf3c1759d</entry>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk.config">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:6a:4a:93"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <target dev="tapb56b41e5-71"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/console.log" append="off"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:23:45 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:23:45 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:23:45 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:23:45 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.242 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Preparing to wait for external event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.242 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.242 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.242 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.243 238945 DEBUG nova.virt.libvirt.vif [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-562631681',display_name='tempest-TestGettingAddress-server-562631681',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-562631681',id=139,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFecr+tX4/5Xu1tZ9NkxPRNp+sU4Sm9UH/xXlVYsgMAtclcrsWrG7L7f6bcerkRRUeyQngAWsSlep0AlnkCJJUfVUUUYoEAh7gvJP5a8vhXSQvgUuH8L5c7NH6HiA18qA==',key_name='tempest-TestGettingAddress-1130984253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ncwzlcth',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:23:37Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=a48b56d5-6e62-4476-bee9-dc8cf3c1759d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.243 238945 DEBUG nova.network.os_vif_util [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.244 238945 DEBUG nova.network.os_vif_util [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.245 238945 DEBUG os_vif [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.245 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.245 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.246 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.248 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.248 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb56b41e5-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.248 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb56b41e5-71, col_values=(('external_ids', {'iface-id': 'b56b41e5-7177-4698-94c9-d69ffe22de91', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:4a:93', 'vm-uuid': 'a48b56d5-6e62-4476-bee9-dc8cf3c1759d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.250 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:45 np0005597378 NetworkManager[48904]: <info>  [1769523825.2515] manager: (tapb56b41e5-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/608)
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.252 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.260 238945 INFO os_vif [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71')#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.362 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.363 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.364 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:6a:4a:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.365 238945 INFO nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Using config drive#033[00m
Jan 27 09:23:45 np0005597378 nova_compute[238941]: 2026-01-27 14:23:45.399 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:23:46 np0005597378 nova_compute[238941]: 2026-01-27 14:23:46.128 238945 INFO nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Creating config drive at /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/disk.config#033[00m
Jan 27 09:23:46 np0005597378 nova_compute[238941]: 2026-01-27 14:23:46.133 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq0ihcxwo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:46 np0005597378 nova_compute[238941]: 2026-01-27 14:23:46.274 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq0ihcxwo" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:46 np0005597378 nova_compute[238941]: 2026-01-27 14:23:46.298 238945 DEBUG nova.storage.rbd_utils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:46 np0005597378 nova_compute[238941]: 2026-01-27 14:23:46.302 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/disk.config a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:46.328 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:46.329 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:46.329 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:46 np0005597378 nova_compute[238941]: 2026-01-27 14:23:46.596 238945 DEBUG oslo_concurrency.processutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/disk.config a48b56d5-6e62-4476-bee9-dc8cf3c1759d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:46 np0005597378 nova_compute[238941]: 2026-01-27 14:23:46.597 238945 INFO nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Deleting local config drive /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d/disk.config because it was imported into RBD.#033[00m
Jan 27 09:23:46 np0005597378 NetworkManager[48904]: <info>  [1769523826.6560] manager: (tapb56b41e5-71): new Tun device (/org/freedesktop/NetworkManager/Devices/609)
Jan 27 09:23:46 np0005597378 kernel: tapb56b41e5-71: entered promiscuous mode
Jan 27 09:23:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:46Z|01484|binding|INFO|Claiming lport b56b41e5-7177-4698-94c9-d69ffe22de91 for this chassis.
Jan 27 09:23:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:46Z|01485|binding|INFO|b56b41e5-7177-4698-94c9-d69ffe22de91: Claiming fa:16:3e:6a:4a:93 10.100.0.14 2001:db8:0:1:f816:3eff:fe6a:4a93 2001:db8::f816:3eff:fe6a:4a93
Jan 27 09:23:46 np0005597378 nova_compute[238941]: 2026-01-27 14:23:46.657 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:46Z|01486|binding|INFO|Setting lport b56b41e5-7177-4698-94c9-d69ffe22de91 ovn-installed in OVS
Jan 27 09:23:46 np0005597378 nova_compute[238941]: 2026-01-27 14:23:46.675 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:46 np0005597378 nova_compute[238941]: 2026-01-27 14:23:46.679 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:46 np0005597378 systemd-machined[207425]: New machine qemu-171-instance-0000008b.
Jan 27 09:23:46 np0005597378 systemd-udevd[366134]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:23:46 np0005597378 systemd[1]: Started Virtual Machine qemu-171-instance-0000008b.
Jan 27 09:23:46 np0005597378 NetworkManager[48904]: <info>  [1769523826.7206] device (tapb56b41e5-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:23:46 np0005597378 NetworkManager[48904]: <info>  [1769523826.7219] device (tapb56b41e5-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:23:47 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:47Z|01487|binding|INFO|Setting lport b56b41e5-7177-4698-94c9-d69ffe22de91 up in Southbound
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.033 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:4a:93 10.100.0.14 2001:db8:0:1:f816:3eff:fe6a:4a93 2001:db8::f816:3eff:fe6a:4a93'], port_security=['fa:16:3e:6a:4a:93 10.100.0.14 2001:db8:0:1:f816:3eff:fe6a:4a93 2001:db8::f816:3eff:fe6a:4a93'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe6a:4a93/64 2001:db8::f816:3eff:fe6a:4a93/64', 'neutron:device_id': 'a48b56d5-6e62-4476-bee9-dc8cf3c1759d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09ee8f62-586d-4295-89e6-85eb382ffb49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=970192a6-003b-47e3-89a0-ee0f3cb35882, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b56b41e5-7177-4698-94c9-d69ffe22de91) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.036 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b56b41e5-7177-4698-94c9-d69ffe22de91 in datapath 8b62a287-47a7-4adb-9afa-c15812d1a9e4 bound to our chassis#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.039 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8b62a287-47a7-4adb-9afa-c15812d1a9e4#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.052 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8a4097-be8b-4cc5-a3c1-84b1ac89b9ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.055 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8b62a287-41 in ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.057 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8b62a287-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.057 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f626f0b2-3e63-4b1a-9a08-49eb5081970f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.058 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[188e12aa-cda8-47e2-88e1-5f7f38992cec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.089 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[0f34c25f-a495-4023-b273-515c4ab57cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.119 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[911753b8-673e-4456-9688-48823f6c63b8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.150 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[90d67f33-8532-4fdd-941c-ecff64dbd260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 NetworkManager[48904]: <info>  [1769523827.1589] manager: (tap8b62a287-40): new Veth device (/org/freedesktop/NetworkManager/Devices/610)
Jan 27 09:23:47 np0005597378 systemd-udevd[366136]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.157 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4a773fb7-cd80-4459-b977-0137b1938696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.159 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523827.1588843, a48b56d5-6e62-4476-bee9-dc8cf3c1759d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.160 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] VM Started (Lifecycle Event)#033[00m
Jan 27 09:23:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 213 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.184 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.189 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523827.1591272, a48b56d5-6e62-4476-bee9-dc8cf3c1759d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.189 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.201 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f7fdce-be01-4de7-946a-1e4633944639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.204 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a722212f-1062-47fb-8b14-c283d062e8f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.213 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.218 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:23:47 np0005597378 NetworkManager[48904]: <info>  [1769523827.2322] device (tap8b62a287-40): carrier: link connected
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.238 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8bcf80e0-9b7f-4623-92e7-d732b7f3b7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.239 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.244 238945 DEBUG nova.network.neutron [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updated VIF entry in instance network info cache for port b56b41e5-7177-4698-94c9-d69ffe22de91. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.244 238945 DEBUG nova.network.neutron [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updating instance_info_cache with network_info: [{"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.263 238945 DEBUG oslo_concurrency.lockutils [req-3d1584e8-9513-48df-b003-c15b9825a2b9 req-111cf6da-626a-4e3d-a8ba-e92bbaf08ff6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6898998e-2f4d-45e0-9531-cf2812ed6953]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b62a287-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:e9:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 432], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646685, 'reachable_time': 21782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366209, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.287 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd8254b-b2af-4be6-8c60-b4aaa31ff55c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe45:e90d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646685, 'tstamp': 646685}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366210, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.307 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5afbe242-b330-4f93-b09d-0731b3deda3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b62a287-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:e9:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 432], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646685, 'reachable_time': 21782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366211, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.336 238945 DEBUG nova.compute.manager [req-89c27bf8-caab-4888-aea2-01d8cc0853d0 req-50c61612-7eb9-4933-b800-dc6fd1981332 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.337 238945 DEBUG oslo_concurrency.lockutils [req-89c27bf8-caab-4888-aea2-01d8cc0853d0 req-50c61612-7eb9-4933-b800-dc6fd1981332 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.337 238945 DEBUG oslo_concurrency.lockutils [req-89c27bf8-caab-4888-aea2-01d8cc0853d0 req-50c61612-7eb9-4933-b800-dc6fd1981332 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.337 238945 DEBUG oslo_concurrency.lockutils [req-89c27bf8-caab-4888-aea2-01d8cc0853d0 req-50c61612-7eb9-4933-b800-dc6fd1981332 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.337 238945 DEBUG nova.compute.manager [req-89c27bf8-caab-4888-aea2-01d8cc0853d0 req-50c61612-7eb9-4933-b800-dc6fd1981332 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Processing event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.338 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.341 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523827.3414006, a48b56d5-6e62-4476-bee9-dc8cf3c1759d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.341 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.342 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3611e810-dc7e-4b8f-b0eb-4618373c48fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.344 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.347 238945 INFO nova.virt.libvirt.driver [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Instance spawned successfully.#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.348 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.362 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.369 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.373 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.373 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.374 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.374 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.375 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.375 238945 DEBUG nova.virt.libvirt.driver [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.399 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.416 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[367f2934-d61f-4b8f-b726-9878616472ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.417 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b62a287-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.417 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.417 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b62a287-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:23:47 np0005597378 kernel: tap8b62a287-40: entered promiscuous mode
Jan 27 09:23:47 np0005597378 NetworkManager[48904]: <info>  [1769523827.4198] manager: (tap8b62a287-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/611)
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.419 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.422 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8b62a287-40, col_values=(('external_ids', {'iface-id': '2c64544a-77a5-4e81-a088-de5cbdfdbfdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:23:47 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:47Z|01488|binding|INFO|Releasing lport 2c64544a-77a5-4e81-a088-de5cbdfdbfdd from this chassis (sb_readonly=0)
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.423 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.430 238945 INFO nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Took 9.66 seconds to spawn the instance on the hypervisor.
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.431 238945 DEBUG nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.439 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.439 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8b62a287-47a7-4adb-9afa-c15812d1a9e4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8b62a287-47a7-4adb-9afa-c15812d1a9e4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.440 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[46c4039d-be60-4388-9267-22f32413f85b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.441 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-8b62a287-47a7-4adb-9afa-c15812d1a9e4
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/8b62a287-47a7-4adb-9afa-c15812d1a9e4.pid.haproxy
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 8b62a287-47a7-4adb-9afa-c15812d1a9e4
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 09:23:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:47.441 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'env', 'PROCESS_TAG=haproxy-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8b62a287-47a7-4adb-9afa-c15812d1a9e4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.489 238945 INFO nova.compute.manager [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Took 10.73 seconds to build instance.
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.504 238945 DEBUG oslo_concurrency.lockutils [None req-488a3123-a8d4-4310-9afb-3e418e7cb978 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:23:47 np0005597378 nova_compute[238941]: 2026-01-27 14:23:47.820 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:23:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:23:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:23:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:23:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:23:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:23:47 np0005597378 podman[366243]: 2026-01-27 14:23:47.853053152 +0000 UTC m=+0.067402519 container create 09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 27 09:23:47 np0005597378 systemd[1]: Started libpod-conmon-09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464.scope.
Jan 27 09:23:47 np0005597378 podman[366243]: 2026-01-27 14:23:47.813390503 +0000 UTC m=+0.027739950 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:23:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:23:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ce5402bbc95c4a252dd0faf198b68103aac7bfc53355d25b3cc66a82a3c46c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:47 np0005597378 podman[366243]: 2026-01-27 14:23:47.944443816 +0000 UTC m=+0.158793183 container init 09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:23:47 np0005597378 podman[366243]: 2026-01-27 14:23:47.95051981 +0000 UTC m=+0.164869177 container start 09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 27 09:23:47 np0005597378 neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4[366258]: [NOTICE]   (366262) : New worker (366264) forked
Jan 27 09:23:47 np0005597378 neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4[366258]: [NOTICE]   (366262) : Loading success.
Jan 27 09:23:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:48Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:9d:b1 10.100.0.6
Jan 27 09:23:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:48Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:9d:b1 10.100.0.6
Jan 27 09:23:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 228 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 144 op/s
Jan 27 09:23:49 np0005597378 nova_compute[238941]: 2026-01-27 14:23:49.474 238945 DEBUG nova.compute.manager [req-c5222f37-7bf9-4995-83d5-41cd27e50c8e req-fcd78921-8d45-4e20-bc98-ef3dd1daf0b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:23:49 np0005597378 nova_compute[238941]: 2026-01-27 14:23:49.474 238945 DEBUG oslo_concurrency.lockutils [req-c5222f37-7bf9-4995-83d5-41cd27e50c8e req-fcd78921-8d45-4e20-bc98-ef3dd1daf0b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:23:49 np0005597378 nova_compute[238941]: 2026-01-27 14:23:49.474 238945 DEBUG oslo_concurrency.lockutils [req-c5222f37-7bf9-4995-83d5-41cd27e50c8e req-fcd78921-8d45-4e20-bc98-ef3dd1daf0b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:23:49 np0005597378 nova_compute[238941]: 2026-01-27 14:23:49.475 238945 DEBUG oslo_concurrency.lockutils [req-c5222f37-7bf9-4995-83d5-41cd27e50c8e req-fcd78921-8d45-4e20-bc98-ef3dd1daf0b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:23:49 np0005597378 nova_compute[238941]: 2026-01-27 14:23:49.475 238945 DEBUG nova.compute.manager [req-c5222f37-7bf9-4995-83d5-41cd27e50c8e req-fcd78921-8d45-4e20-bc98-ef3dd1daf0b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] No waiting events found dispatching network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:23:49 np0005597378 nova_compute[238941]: 2026-01-27 14:23:49.475 238945 WARNING nova.compute.manager [req-c5222f37-7bf9-4995-83d5-41cd27e50c8e req-fcd78921-8d45-4e20-bc98-ef3dd1daf0b7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received unexpected event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 for instance with vm_state active and task_state None.
Jan 27 09:23:50 np0005597378 nova_compute[238941]: 2026-01-27 14:23:50.252 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:23:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:23:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 246 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 236 op/s
Jan 27 09:23:51 np0005597378 nova_compute[238941]: 2026-01-27 14:23:51.505 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:23:51 np0005597378 nova_compute[238941]: 2026-01-27 14:23:51.505 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:23:51 np0005597378 nova_compute[238941]: 2026-01-27 14:23:51.522 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 09:23:51 np0005597378 nova_compute[238941]: 2026-01-27 14:23:51.548 238945 DEBUG nova.compute.manager [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-changed-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:23:51 np0005597378 nova_compute[238941]: 2026-01-27 14:23:51.549 238945 DEBUG nova.compute.manager [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Refreshing instance network info cache due to event network-changed-b56b41e5-7177-4698-94c9-d69ffe22de91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:23:51 np0005597378 nova_compute[238941]: 2026-01-27 14:23:51.549 238945 DEBUG oslo_concurrency.lockutils [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:23:51 np0005597378 nova_compute[238941]: 2026-01-27 14:23:51.550 238945 DEBUG oslo_concurrency.lockutils [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:23:51 np0005597378 nova_compute[238941]: 2026-01-27 14:23:51.550 238945 DEBUG nova.network.neutron [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Refreshing network info cache for port b56b41e5-7177-4698-94c9-d69ffe22de91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 09:23:51 np0005597378 nova_compute[238941]: 2026-01-27 14:23:51.590 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:23:51 np0005597378 nova_compute[238941]: 2026-01-27 14:23:51.590 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:23:51 np0005597378 nova_compute[238941]: 2026-01-27 14:23:51.600 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 09:23:51 np0005597378 nova_compute[238941]: 2026-01-27 14:23:51.600 238945 INFO nova.compute.claims [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Claim successful on node compute-0.ctlplane.example.com
Jan 27 09:23:51 np0005597378 nova_compute[238941]: 2026-01-27 14:23:51.728 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:23:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:23:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/171017100' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.283 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.289 238945 DEBUG nova.compute.provider_tree [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.341 238945 DEBUG nova.scheduler.client.report [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.364 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.365 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.422 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.423 238945 DEBUG nova.network.neutron [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.455 238945 INFO nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.480 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.587 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.588 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.588 238945 INFO nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Creating image(s)
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.614 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.643 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.666 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.670 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.712 238945 DEBUG nova.policy [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.748 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.749 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.749 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.749 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.774 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.779 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3b760120-0ed3-4962-b9ba-775e88e9a482_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:23:52 np0005597378 nova_compute[238941]: 2026-01-27 14:23:52.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:53 np0005597378 nova_compute[238941]: 2026-01-27 14:23:53.130 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 3b760120-0ed3-4962-b9ba-775e88e9a482_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 246 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 09:23:53 np0005597378 nova_compute[238941]: 2026-01-27 14:23:53.192 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:23:53 np0005597378 nova_compute[238941]: 2026-01-27 14:23:53.297 238945 DEBUG nova.objects.instance [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid 3b760120-0ed3-4962-b9ba-775e88e9a482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:23:53 np0005597378 nova_compute[238941]: 2026-01-27 14:23:53.326 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:23:53 np0005597378 nova_compute[238941]: 2026-01-27 14:23:53.326 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Ensure instance console log exists: /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:23:53 np0005597378 nova_compute[238941]: 2026-01-27 14:23:53.327 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:53 np0005597378 nova_compute[238941]: 2026-01-27 14:23:53.327 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:53 np0005597378 nova_compute[238941]: 2026-01-27 14:23:53.328 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:53 np0005597378 nova_compute[238941]: 2026-01-27 14:23:53.501 238945 DEBUG nova.network.neutron [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Successfully created port: 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:23:53 np0005597378 nova_compute[238941]: 2026-01-27 14:23:53.550 238945 DEBUG nova.network.neutron [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updated VIF entry in instance network info cache for port b56b41e5-7177-4698-94c9-d69ffe22de91. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:23:53 np0005597378 nova_compute[238941]: 2026-01-27 14:23:53.550 238945 DEBUG nova.network.neutron [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updating instance_info_cache with network_info: [{"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:23:53 np0005597378 nova_compute[238941]: 2026-01-27 14:23:53.569 238945 DEBUG oslo_concurrency.lockutils [req-7aa1d17d-d81b-4a5f-9c23-41bdeacc2308 req-1a5b17d5-679d-45cf-a965-13fffcb3ed0c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:23:54 np0005597378 nova_compute[238941]: 2026-01-27 14:23:54.160 238945 DEBUG nova.network.neutron [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Successfully updated port: 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:23:54 np0005597378 nova_compute[238941]: 2026-01-27 14:23:54.175 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:23:54 np0005597378 nova_compute[238941]: 2026-01-27 14:23:54.175 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:23:54 np0005597378 nova_compute[238941]: 2026-01-27 14:23:54.175 238945 DEBUG nova.network.neutron [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:23:54 np0005597378 nova_compute[238941]: 2026-01-27 14:23:54.236 238945 DEBUG nova.compute.manager [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:54 np0005597378 nova_compute[238941]: 2026-01-27 14:23:54.236 238945 DEBUG nova.compute.manager [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing instance network info cache due to event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:23:54 np0005597378 nova_compute[238941]: 2026-01-27 14:23:54.237 238945 DEBUG oslo_concurrency.lockutils [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:23:54 np0005597378 nova_compute[238941]: 2026-01-27 14:23:54.315 238945 DEBUG nova.network.neutron [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:23:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 285 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 154 op/s
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.253 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.490 238945 DEBUG nova.network.neutron [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:23:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.515 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.515 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Instance network_info: |[{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.515 238945 DEBUG oslo_concurrency.lockutils [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.516 238945 DEBUG nova.network.neutron [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.518 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Start _get_guest_xml network_info=[{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.522 238945 WARNING nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.528 238945 DEBUG nova.virt.libvirt.host [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.528 238945 DEBUG nova.virt.libvirt.host [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.531 238945 DEBUG nova.virt.libvirt.host [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.531 238945 DEBUG nova.virt.libvirt.host [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.532 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.532 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.532 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.532 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.533 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.533 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.533 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.533 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.534 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.534 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.534 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.534 238945 DEBUG nova.virt.hardware [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.537 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.653 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.654 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.654 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.655 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.655 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.656 238945 INFO nova.compute.manager [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Terminating instance#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.657 238945 DEBUG nova.compute.manager [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:23:55 np0005597378 kernel: tap94907172-68 (unregistering): left promiscuous mode
Jan 27 09:23:55 np0005597378 NetworkManager[48904]: <info>  [1769523835.7210] device (tap94907172-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:23:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:55Z|01489|binding|INFO|Releasing lport 94907172-68c1-496b-a337-e4ff0944eba7 from this chassis (sb_readonly=0)
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.732 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:55Z|01490|binding|INFO|Setting lport 94907172-68c1-496b-a337-e4ff0944eba7 down in Southbound
Jan 27 09:23:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:55Z|01491|binding|INFO|Removing iface tap94907172-68 ovn-installed in OVS
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.735 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.749 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:9d:b1 10.100.0.6'], port_security=['fa:16:3e:67:9d:b1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8495a58a-7371-4222-afef-f486eafff82d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05b4b753-ecde-4c48-a69b-2458162ac6c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d080dd9-6e75-419f-9464-5edb98123c9a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=94907172-68c1-496b-a337-e4ff0944eba7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.751 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 94907172-68c1-496b-a337-e4ff0944eba7 in datapath acd03ef9-9bfd-4078-adf3-4b0b930dc081 unbound from our chassis#033[00m
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.752 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acd03ef9-9bfd-4078-adf3-4b0b930dc081#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.770 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.776 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[de04b8b0-44d4-4095-884e-64a49b87cd05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:55 np0005597378 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Jan 27 09:23:55 np0005597378 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008a.scope: Consumed 12.882s CPU time.
Jan 27 09:23:55 np0005597378 systemd-machined[207425]: Machine qemu-170-instance-0000008a terminated.
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.810 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[65aed5aa-85d8-4123-ad08-e16daacb0cf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.814 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1096a363-43a8-435f-8ccf-594a5e2b32af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.848 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef8af27-a8b3-4e1a-927b-afd606b8c4f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.868 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe71426-513b-484b-ac4e-8c5d71092a4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacd03ef9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:f2:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641608, 'reachable_time': 25840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366492, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.892 238945 INFO nova.virt.libvirt.driver [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Instance destroyed successfully.#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.892 238945 DEBUG nova.objects.instance [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid 8495a58a-7371-4222-afef-f486eafff82d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.894 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af611541-ab40-4283-8dd9-4c3778537b0b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapacd03ef9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641623, 'tstamp': 641623}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366495, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapacd03ef9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641626, 'tstamp': 641626}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366495, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.899 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacd03ef9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.901 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.904 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.905 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacd03ef9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.907 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.910 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacd03ef9-90, col_values=(('external_ids', {'iface-id': '200fc390-2bd2-4617-9a70-937136a8fecc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.909 238945 DEBUG nova.virt.libvirt.vif [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:23:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-0-1899411077',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=138,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACLypJyTNxr/Fynn0x7Iox4aV5f8zs6GUMs/FFzJg47MW9rIYS5DVZM7fp21I1fAVy1UUd2Zq3YeUu5V+K6RS6iNFuNQPbzgLn+nM6VWellLvTCtw8csDZlLRuw7hOCNQ==',key_name='tempest-TestSecurityGroupsBasicOps-733108341',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:23:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-0ksp4wp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:23:36Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=8495a58a-7371-4222-afef-f486eafff82d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.911 238945 DEBUG nova.network.os_vif_util [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "94907172-68c1-496b-a337-e4ff0944eba7", "address": "fa:16:3e:67:9d:b1", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94907172-68", "ovs_interfaceid": "94907172-68c1-496b-a337-e4ff0944eba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:23:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:55.911 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.912 238945 DEBUG nova.network.os_vif_util [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.913 238945 DEBUG os_vif [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.914 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.915 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94907172-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.917 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.918 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.921 238945 INFO os_vif [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:b1,bridge_name='br-int',has_traffic_filtering=True,id=94907172-68c1-496b-a337-e4ff0944eba7,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94907172-68')#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.969 238945 DEBUG nova.compute.manager [req-bedbbdf6-6cd7-437b-8de8-45c7b4dcaef6 req-9cbc1cd9-fe55-44c3-82cd-e55579cbb850 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-vif-unplugged-94907172-68c1-496b-a337-e4ff0944eba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.970 238945 DEBUG oslo_concurrency.lockutils [req-bedbbdf6-6cd7-437b-8de8-45c7b4dcaef6 req-9cbc1cd9-fe55-44c3-82cd-e55579cbb850 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.970 238945 DEBUG oslo_concurrency.lockutils [req-bedbbdf6-6cd7-437b-8de8-45c7b4dcaef6 req-9cbc1cd9-fe55-44c3-82cd-e55579cbb850 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.970 238945 DEBUG oslo_concurrency.lockutils [req-bedbbdf6-6cd7-437b-8de8-45c7b4dcaef6 req-9cbc1cd9-fe55-44c3-82cd-e55579cbb850 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.970 238945 DEBUG nova.compute.manager [req-bedbbdf6-6cd7-437b-8de8-45c7b4dcaef6 req-9cbc1cd9-fe55-44c3-82cd-e55579cbb850 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] No waiting events found dispatching network-vif-unplugged-94907172-68c1-496b-a337-e4ff0944eba7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:23:55 np0005597378 nova_compute[238941]: 2026-01-27 14:23:55.971 238945 DEBUG nova.compute.manager [req-bedbbdf6-6cd7-437b-8de8-45c7b4dcaef6 req-9cbc1cd9-fe55-44c3-82cd-e55579cbb850 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-vif-unplugged-94907172-68c1-496b-a337-e4ff0944eba7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:23:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:23:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/241753347' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.146 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.167 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.171 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.384 238945 INFO nova.virt.libvirt.driver [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Deleting instance files /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d_del#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.385 238945 INFO nova.virt.libvirt.driver [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Deletion of /var/lib/nova/instances/8495a58a-7371-4222-afef-f486eafff82d_del complete#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.447 238945 INFO nova.compute.manager [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.448 238945 DEBUG oslo.service.loopingcall [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.449 238945 DEBUG nova.compute.manager [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.449 238945 DEBUG nova.network.neutron [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:23:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:23:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1699487319' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.747 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.749 238945 DEBUG nova.virt.libvirt.vif [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1968375921',display_name='tempest-TestNetworkBasicOps-server-1968375921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1968375921',id=140,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO2tG0iX4tZcL3XZkbM582zqvrqtVTYPW16jky41KmlMQPu5aBJe/s0ZkPuNBq+T6QvN5iR8uPNh1bxal/m862xoL0jVGsVzwPs53IF9FOj+3Vl0QQ7KhYEcj7GLQKVuEQ==',key_name='tempest-TestNetworkBasicOps-1667824706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-acmy2o20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:23:52Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=3b760120-0ed3-4962-b9ba-775e88e9a482,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.749 238945 DEBUG nova.network.os_vif_util [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.750 238945 DEBUG nova.network.os_vif_util [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.751 238945 DEBUG nova.objects.instance [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b760120-0ed3-4962-b9ba-775e88e9a482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.767 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  <uuid>3b760120-0ed3-4962-b9ba-775e88e9a482</uuid>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  <name>instance-0000008c</name>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkBasicOps-server-1968375921</nova:name>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:23:55</nova:creationTime>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:        <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:        <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:        <nova:port uuid="9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <entry name="serial">3b760120-0ed3-4962-b9ba-775e88e9a482</entry>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <entry name="uuid">3b760120-0ed3-4962-b9ba-775e88e9a482</entry>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3b760120-0ed3-4962-b9ba-775e88e9a482_disk">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/3b760120-0ed3-4962-b9ba-775e88e9a482_disk.config">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:b1:ef:72"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <target dev="tap9d1f9be3-07"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/console.log" append="off"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:23:56 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:23:56 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:23:56 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:23:56 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.768 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Preparing to wait for external event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.768 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.769 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.769 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.770 238945 DEBUG nova.virt.libvirt.vif [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1968375921',display_name='tempest-TestNetworkBasicOps-server-1968375921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1968375921',id=140,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO2tG0iX4tZcL3XZkbM582zqvrqtVTYPW16jky41KmlMQPu5aBJe/s0ZkPuNBq+T6QvN5iR8uPNh1bxal/m862xoL0jVGsVzwPs53IF9FOj+3Vl0QQ7KhYEcj7GLQKVuEQ==',key_name='tempest-TestNetworkBasicOps-1667824706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-acmy2o20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:23:52Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=3b760120-0ed3-4962-b9ba-775e88e9a482,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.770 238945 DEBUG nova.network.os_vif_util [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.771 238945 DEBUG nova.network.os_vif_util [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.771 238945 DEBUG os_vif [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.772 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.773 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.773 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.776 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.776 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d1f9be3-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.777 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d1f9be3-07, col_values=(('external_ids', {'iface-id': '9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:ef:72', 'vm-uuid': '3b760120-0ed3-4962-b9ba-775e88e9a482'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.778 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:56 np0005597378 NetworkManager[48904]: <info>  [1769523836.7794] manager: (tap9d1f9be3-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.783 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.783 238945 INFO os_vif [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07')#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.839 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.840 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.840 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:b1:ef:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.841 238945 INFO nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Using config drive#033[00m
Jan 27 09:23:56 np0005597378 nova_compute[238941]: 2026-01-27 14:23:56.866 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 27 09:23:57 np0005597378 nova_compute[238941]: 2026-01-27 14:23:57.826 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:57 np0005597378 nova_compute[238941]: 2026-01-27 14:23:57.940 238945 INFO nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Creating config drive at /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/disk.config#033[00m
Jan 27 09:23:57 np0005597378 nova_compute[238941]: 2026-01-27 14:23:57.946 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyc2x7ba2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.047 238945 DEBUG nova.network.neutron [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.067 238945 INFO nova.compute.manager [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Took 1.62 seconds to deallocate network for instance.#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.069 238945 DEBUG nova.compute.manager [req-f8f39005-b920-47dd-a9c6-1dcb9f8e3000 req-f55cde77-d7d6-446d-9acb-01e1f7c958fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.069 238945 DEBUG oslo_concurrency.lockutils [req-f8f39005-b920-47dd-a9c6-1dcb9f8e3000 req-f55cde77-d7d6-446d-9acb-01e1f7c958fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "8495a58a-7371-4222-afef-f486eafff82d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.070 238945 DEBUG oslo_concurrency.lockutils [req-f8f39005-b920-47dd-a9c6-1dcb9f8e3000 req-f55cde77-d7d6-446d-9acb-01e1f7c958fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.071 238945 DEBUG oslo_concurrency.lockutils [req-f8f39005-b920-47dd-a9c6-1dcb9f8e3000 req-f55cde77-d7d6-446d-9acb-01e1f7c958fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.071 238945 DEBUG nova.compute.manager [req-f8f39005-b920-47dd-a9c6-1dcb9f8e3000 req-f55cde77-d7d6-446d-9acb-01e1f7c958fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] No waiting events found dispatching network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.071 238945 WARNING nova.compute.manager [req-f8f39005-b920-47dd-a9c6-1dcb9f8e3000 req-f55cde77-d7d6-446d-9acb-01e1f7c958fb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received unexpected event network-vif-plugged-94907172-68c1-496b-a337-e4ff0944eba7 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.104 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyc2x7ba2" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.130 238945 DEBUG nova.storage.rbd_utils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image 3b760120-0ed3-4962-b9ba-775e88e9a482_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.135 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/disk.config 3b760120-0ed3-4962-b9ba-775e88e9a482_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.174 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.175 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.282 238945 DEBUG oslo_concurrency.processutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/disk.config 3b760120-0ed3-4962-b9ba-775e88e9a482_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.282 238945 INFO nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Deleting local config drive /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482/disk.config because it was imported into RBD.#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.299 238945 DEBUG oslo_concurrency.processutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:23:58 np0005597378 kernel: tap9d1f9be3-07: entered promiscuous mode
Jan 27 09:23:58 np0005597378 NetworkManager[48904]: <info>  [1769523838.3376] manager: (tap9d1f9be3-07): new Tun device (/org/freedesktop/NetworkManager/Devices/613)
Jan 27 09:23:58 np0005597378 systemd-udevd[366484]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:23:58 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:58Z|01492|binding|INFO|Claiming lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for this chassis.
Jan 27 09:23:58 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:58Z|01493|binding|INFO|9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc: Claiming fa:16:3e:b1:ef:72 10.100.0.6
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.339 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.346 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:ef:72 10.100.0.6'], port_security=['fa:16:3e:b1:ef:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3b760120-0ed3-4962-b9ba-775e88e9a482', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59abc835-0295-4512-a74a-a69f40a71781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40bd7dba-01a9-428d-9280-5b6493a6f919', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72c4a6ff-2118-4ec2-9861-05a7a4bf207f, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.348 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc in datapath 59abc835-0295-4512-a74a-a69f40a71781 bound to our chassis#033[00m
Jan 27 09:23:58 np0005597378 NetworkManager[48904]: <info>  [1769523838.3497] device (tap9d1f9be3-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.350 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 59abc835-0295-4512-a74a-a69f40a71781#033[00m
Jan 27 09:23:58 np0005597378 NetworkManager[48904]: <info>  [1769523838.3506] device (tap9d1f9be3-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:23:58 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:58Z|01494|binding|INFO|Setting lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc ovn-installed in OVS
Jan 27 09:23:58 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:58Z|01495|binding|INFO|Setting lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc up in Southbound
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.357 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.361 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.364 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[75b699f8-625e-4b43-9233-a4971b73b19e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.365 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap59abc835-01 in ovnmeta-59abc835-0295-4512-a74a-a69f40a71781 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.366 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap59abc835-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.366 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5e73961d-c3f6-44c9-8da3-95075071ef49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.368 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e210a7aa-47dd-48dc-b95a-f57bd2119fb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 systemd-machined[207425]: New machine qemu-172-instance-0000008c.
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.379 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[46addfcc-9dd2-42a0-831f-458f30570531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 systemd[1]: Started Virtual Machine qemu-172-instance-0000008c.
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.403 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f340588-07a2-43eb-b9f7-f7d6d0b45d00]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.433 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[90d4070f-b3c5-4f61-a6ef-437121dfe992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 NetworkManager[48904]: <info>  [1769523838.4410] manager: (tap59abc835-00): new Veth device (/org/freedesktop/NetworkManager/Devices/614)
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.440 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[67a7b525-5687-4859-9c13-52ebb733162e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.474 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9a92ef-bfd1-4b17-91b0-37b9f2995bc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.477 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1a19ff47-7500-4d44-b7f8-4eb0f1313e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 NetworkManager[48904]: <info>  [1769523838.5010] device (tap59abc835-00): carrier: link connected
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.505 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3fdff4-503f-484b-a1a6-89baa5e1cfd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.526 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea6e975-8a3d-493a-8b3c-118ba4a83b62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap59abc835-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647812, 'reachable_time': 24292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366693, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.543 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[97bcf75f-831e-44f0-a103-66169daff3ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:8012'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647812, 'tstamp': 647812}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366694, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.563 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[65560d92-9792-457d-b42e-7b6e3e32768c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap59abc835-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647812, 'reachable_time': 24292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366695, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.596 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[73dfdaea-933b-4dae-8ade-6dc02225255b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.662 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b07965d2-d89f-43fb-80f8-5736d4e8f78e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.664 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59abc835-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.664 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.664 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59abc835-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.666 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:58 np0005597378 NetworkManager[48904]: <info>  [1769523838.6679] manager: (tap59abc835-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Jan 27 09:23:58 np0005597378 kernel: tap59abc835-00: entered promiscuous mode
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.671 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap59abc835-00, col_values=(('external_ids', {'iface-id': '21c9fe8e-89ff-4a00-8668-858e37e7400b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:23:58 np0005597378 ovn_controller[144812]: 2026-01-27T14:23:58Z|01496|binding|INFO|Releasing lport 21c9fe8e-89ff-4a00-8668-858e37e7400b from this chassis (sb_readonly=0)
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.672 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.690 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/59abc835-0295-4512-a74a-a69f40a71781.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/59abc835-0295-4512-a74a-a69f40a71781.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.691 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.691 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e9658d-6e95-45c1-abe2-7f25483807c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.692 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-59abc835-0295-4512-a74a-a69f40a71781
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/59abc835-0295-4512-a74a-a69f40a71781.pid.haproxy
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 59abc835-0295-4512-a74a-a69f40a71781
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:23:58 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:23:58.693 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'env', 'PROCESS_TAG=haproxy-59abc835-0295-4512-a74a-a69f40a71781', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/59abc835-0295-4512-a74a-a69f40a71781.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.844 238945 DEBUG nova.network.neutron [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updated VIF entry in instance network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.844 238945 DEBUG nova.network.neutron [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.858 238945 DEBUG oslo_concurrency.lockutils [req-1754de50-7cec-49ee-ba40-fd3c6fd863ce req-c5842bb6-8c1e-4b2c-8479-f9848577957f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:23:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:23:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2847981508' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.985 238945 DEBUG oslo_concurrency.processutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:23:58 np0005597378 nova_compute[238941]: 2026-01-27 14:23:58.992 238945 DEBUG nova.compute.provider_tree [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:23:59 np0005597378 nova_compute[238941]: 2026-01-27 14:23:59.007 238945 DEBUG nova.scheduler.client.report [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:23:59 np0005597378 nova_compute[238941]: 2026-01-27 14:23:59.034 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:59 np0005597378 nova_compute[238941]: 2026-01-27 14:23:59.063 238945 INFO nova.scheduler.client.report [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance 8495a58a-7371-4222-afef-f486eafff82d#033[00m
Jan 27 09:23:59 np0005597378 podman[366729]: 2026-01-27 14:23:59.132211544 +0000 UTC m=+0.084113130 container create 83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 27 09:23:59 np0005597378 nova_compute[238941]: 2026-01-27 14:23:59.137 238945 DEBUG oslo_concurrency.lockutils [None req-2e85feb5-1010-403f-abdf-840d8e5c65e6 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "8495a58a-7371-4222-afef-f486eafff82d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:23:59 np0005597378 podman[366729]: 2026-01-27 14:23:59.073156751 +0000 UTC m=+0.025058367 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:23:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 259 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 179 op/s
Jan 27 09:23:59 np0005597378 systemd[1]: Started libpod-conmon-83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109.scope.
Jan 27 09:23:59 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:23:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f852c612dce2e676500890dc6eb092ee976eb82f95fd913d4cc520dfa2d5244f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:23:59 np0005597378 podman[366729]: 2026-01-27 14:23:59.266738632 +0000 UTC m=+0.218640228 container init 83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:23:59 np0005597378 podman[366729]: 2026-01-27 14:23:59.274258245 +0000 UTC m=+0.226159841 container start 83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 09:23:59 np0005597378 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [NOTICE]   (366785) : New worker (366791) forked
Jan 27 09:23:59 np0005597378 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [NOTICE]   (366785) : Loading success.
Jan 27 09:23:59 np0005597378 nova_compute[238941]: 2026-01-27 14:23:59.394 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523839.3940551, 3b760120-0ed3-4962-b9ba-775e88e9a482 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:23:59 np0005597378 nova_compute[238941]: 2026-01-27 14:23:59.395 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] VM Started (Lifecycle Event)#033[00m
Jan 27 09:23:59 np0005597378 nova_compute[238941]: 2026-01-27 14:23:59.414 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:23:59 np0005597378 nova_compute[238941]: 2026-01-27 14:23:59.418 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523839.3942075, 3b760120-0ed3-4962-b9ba-775e88e9a482 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:23:59 np0005597378 nova_compute[238941]: 2026-01-27 14:23:59.418 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:23:59 np0005597378 nova_compute[238941]: 2026-01-27 14:23:59.435 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:23:59 np0005597378 nova_compute[238941]: 2026-01-27 14:23:59.438 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:23:59 np0005597378 nova_compute[238941]: 2026-01-27 14:23:59.454 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:23:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:23:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2824500380' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:23:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:23:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2824500380' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.160 238945 DEBUG nova.compute.manager [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Received event network-vif-deleted-94907172-68c1-496b-a337-e4ff0944eba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.160 238945 DEBUG nova.compute.manager [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.161 238945 DEBUG oslo_concurrency.lockutils [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.161 238945 DEBUG oslo_concurrency.lockutils [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.161 238945 DEBUG oslo_concurrency.lockutils [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.161 238945 DEBUG nova.compute.manager [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Processing event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.162 238945 DEBUG nova.compute.manager [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.162 238945 DEBUG oslo_concurrency.lockutils [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.162 238945 DEBUG oslo_concurrency.lockutils [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.162 238945 DEBUG oslo_concurrency.lockutils [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.162 238945 DEBUG nova.compute.manager [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.163 238945 WARNING nova.compute.manager [req-25fd1748-728e-47e4-9112-8489f6c35202 req-4525d476-665f-4fde-bb27-a6f55329153a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received unexpected event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.163 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.166 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523840.1662898, 3b760120-0ed3-4962-b9ba-775e88e9a482 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.166 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.168 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.170 238945 INFO nova.virt.libvirt.driver [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Instance spawned successfully.#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.171 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.190 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.194 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.195 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.195 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.195 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.196 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.196 238945 DEBUG nova.virt.libvirt.driver [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.200 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.236 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.261 238945 INFO nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Took 7.67 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.261 238945 DEBUG nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.351 238945 INFO nova.compute.manager [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Took 8.79 seconds to build instance.#033[00m
Jan 27 09:24:00 np0005597378 nova_compute[238941]: 2026-01-27 14:24:00.394 238945 DEBUG oslo_concurrency.lockutils [None req-5c02aed7-34e7-45f6-acbf-895784c9a14d 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:24:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:00Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:4a:93 10.100.0.14
Jan 27 09:24:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:00Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:4a:93 10.100.0.14
Jan 27 09:24:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 232 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.5 MiB/s wr, 186 op/s
Jan 27 09:24:01 np0005597378 nova_compute[238941]: 2026-01-27 14:24:01.781 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.288 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.289 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.289 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.289 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.289 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.290 238945 INFO nova.compute.manager [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Terminating instance#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.291 238945 DEBUG nova.compute.manager [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.394 238945 DEBUG nova.compute.manager [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-changed-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.395 238945 DEBUG nova.compute.manager [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Refreshing instance network info cache due to event network-changed-b316b5fe-59c3-448f-897c-d7f990f2aeee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.395 238945 DEBUG oslo_concurrency.lockutils [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.395 238945 DEBUG oslo_concurrency.lockutils [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.396 238945 DEBUG nova.network.neutron [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Refreshing network info cache for port b316b5fe-59c3-448f-897c-d7f990f2aeee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:24:02 np0005597378 kernel: tapb316b5fe-59 (unregistering): left promiscuous mode
Jan 27 09:24:02 np0005597378 NetworkManager[48904]: <info>  [1769523842.4261] device (tapb316b5fe-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:24:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:02Z|01497|binding|INFO|Releasing lport b316b5fe-59c3-448f-897c-d7f990f2aeee from this chassis (sb_readonly=0)
Jan 27 09:24:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:02Z|01498|binding|INFO|Setting lport b316b5fe-59c3-448f-897c-d7f990f2aeee down in Southbound
Jan 27 09:24:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:02Z|01499|binding|INFO|Removing iface tapb316b5fe-59 ovn-installed in OVS
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.440 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.453 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:b9:af 10.100.0.5'], port_security=['fa:16:3e:15:b9:af 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '73e1c4d9-d84d-42d0-a385-e816ca65b541', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05b4b753-ecde-4c48-a69b-2458162ac6c1 26585a9c-699d-4944-bb11-1f5060663014', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d080dd9-6e75-419f-9464-5edb98123c9a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b316b5fe-59c3-448f-897c-d7f990f2aeee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:24:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.455 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b316b5fe-59c3-448f-897c-d7f990f2aeee in datapath acd03ef9-9bfd-4078-adf3-4b0b930dc081 unbound from our chassis#033[00m
Jan 27 09:24:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.456 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network acd03ef9-9bfd-4078-adf3-4b0b930dc081, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:24:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.457 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[149ecee8-5d85-4934-9ace-2000dc3ec915]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.459 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081 namespace which is not needed anymore#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.459 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:02 np0005597378 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000088.scope: Deactivated successfully.
Jan 27 09:24:02 np0005597378 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000088.scope: Consumed 15.525s CPU time.
Jan 27 09:24:02 np0005597378 systemd-machined[207425]: Machine qemu-168-instance-00000088 terminated.
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.532 238945 INFO nova.virt.libvirt.driver [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Instance destroyed successfully.#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.533 238945 DEBUG nova.objects.instance [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid 73e1c4d9-d84d-42d0-a385-e816ca65b541 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.552 238945 DEBUG nova.virt.libvirt.vif [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:22:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-211052653',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=136,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACLypJyTNxr/Fynn0x7Iox4aV5f8zs6GUMs/FFzJg47MW9rIYS5DVZM7fp21I1fAVy1UUd2Zq3YeUu5V+K6RS6iNFuNQPbzgLn+nM6VWellLvTCtw8csDZlLRuw7hOCNQ==',key_name='tempest-TestSecurityGroupsBasicOps-733108341',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:22:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-jgbdz40r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:22:59Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=73e1c4d9-d84d-42d0-a385-e816ca65b541,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.553 238945 DEBUG nova.network.os_vif_util [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.554 238945 DEBUG nova.network.os_vif_util [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.554 238945 DEBUG os_vif [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.556 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb316b5fe-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.599 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.603 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.605 238945 INFO os_vif [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:b9:af,bridge_name='br-int',has_traffic_filtering=True,id=b316b5fe-59c3-448f-897c-d7f990f2aeee,network=Network(acd03ef9-9bfd-4078-adf3-4b0b930dc081),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb316b5fe-59')#033[00m
Jan 27 09:24:02 np0005597378 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [NOTICE]   (364239) : haproxy version is 2.8.14-c23fe91
Jan 27 09:24:02 np0005597378 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [NOTICE]   (364239) : path to executable is /usr/sbin/haproxy
Jan 27 09:24:02 np0005597378 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [WARNING]  (364239) : Exiting Master process...
Jan 27 09:24:02 np0005597378 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [WARNING]  (364239) : Exiting Master process...
Jan 27 09:24:02 np0005597378 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [ALERT]    (364239) : Current worker (364241) exited with code 143 (Terminated)
Jan 27 09:24:02 np0005597378 neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081[364235]: [WARNING]  (364239) : All workers exited. Exiting... (0)
Jan 27 09:24:02 np0005597378 systemd[1]: libpod-6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb.scope: Deactivated successfully.
Jan 27 09:24:02 np0005597378 conmon[364235]: conmon 6964a44a04bffd4c1511 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb.scope/container/memory.events
Jan 27 09:24:02 np0005597378 podman[366835]: 2026-01-27 14:24:02.697758961 +0000 UTC m=+0.126511912 container died 6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.702 238945 DEBUG nova.compute.manager [req-c8f41f88-1f9d-4480-a670-a267cea3a739 req-63f95b17-4861-4081-accd-99c0678f786d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-vif-unplugged-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.702 238945 DEBUG oslo_concurrency.lockutils [req-c8f41f88-1f9d-4480-a670-a267cea3a739 req-63f95b17-4861-4081-accd-99c0678f786d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.703 238945 DEBUG oslo_concurrency.lockutils [req-c8f41f88-1f9d-4480-a670-a267cea3a739 req-63f95b17-4861-4081-accd-99c0678f786d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.703 238945 DEBUG oslo_concurrency.lockutils [req-c8f41f88-1f9d-4480-a670-a267cea3a739 req-63f95b17-4861-4081-accd-99c0678f786d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.704 238945 DEBUG nova.compute.manager [req-c8f41f88-1f9d-4480-a670-a267cea3a739 req-63f95b17-4861-4081-accd-99c0678f786d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] No waiting events found dispatching network-vif-unplugged-b316b5fe-59c3-448f-897c-d7f990f2aeee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.704 238945 DEBUG nova.compute.manager [req-c8f41f88-1f9d-4480-a670-a267cea3a739 req-63f95b17-4861-4081-accd-99c0678f786d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-vif-unplugged-b316b5fe-59c3-448f-897c-d7f990f2aeee for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:24:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb-userdata-shm.mount: Deactivated successfully.
Jan 27 09:24:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-38e1da3a7cf3c81236a9b90efb0a1df267af565e04214fa2e0abef3887c5e783-merged.mount: Deactivated successfully.
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.826 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:02 np0005597378 podman[366835]: 2026-01-27 14:24:02.866731129 +0000 UTC m=+0.295484060 container cleanup 6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 09:24:02 np0005597378 systemd[1]: libpod-conmon-6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb.scope: Deactivated successfully.
Jan 27 09:24:02 np0005597378 podman[366882]: 2026-01-27 14:24:02.96545181 +0000 UTC m=+0.078013054 container remove 6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:24:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.972 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b122f8-560f-4146-91b3-098cc2198a70]: (4, ('Tue Jan 27 02:24:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081 (6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb)\n6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb\nTue Jan 27 02:24:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081 (6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb)\n6964a44a04bffd4c1511650129d1b1176e653c82224237aaec735f41bd6285eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.974 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bae52c18-7a84-4b21-86f8-4f9e38d56b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.975 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacd03ef9-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.977 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:02 np0005597378 kernel: tapacd03ef9-90: left promiscuous mode
Jan 27 09:24:02 np0005597378 nova_compute[238941]: 2026-01-27 14:24:02.994 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:02.998 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[88ac15b8-3c8a-4249-a645-183653542933]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:03.017 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc357ed-1128-481b-a9d5-08e65e944388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:03.018 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5106b1ca-8867-4aed-8a05-32e85e88b7f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:03.034 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3d94ac-bcdc-40d4-bb3f-4f69ad4ddd60]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641598, 'reachable_time': 41799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366898, 'error': None, 'target': 'ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:03 np0005597378 systemd[1]: run-netns-ovnmeta\x2dacd03ef9\x2d9bfd\x2d4078\x2dadf3\x2d4b0b930dc081.mount: Deactivated successfully.
Jan 27 09:24:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:03.038 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-acd03ef9-9bfd-4078-adf3-4b0b930dc081 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:24:03 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:03.039 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[ded98125-c884-4c15-80a1-c0487f98604f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:24:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 40K writes, 161K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s#012Cumulative WAL: 40K writes, 14K syncs, 2.83 writes per sync, written: 0.16 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5642 writes, 23K keys, 5642 commit groups, 1.0 writes per commit group, ingest: 28.69 MB, 0.05 MB/s#012Interval WAL: 5642 writes, 2062 syncs, 2.74 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:24:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 232 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Jan 27 09:24:03 np0005597378 nova_compute[238941]: 2026-01-27 14:24:03.820 238945 DEBUG nova.network.neutron [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updated VIF entry in instance network info cache for port b316b5fe-59c3-448f-897c-d7f990f2aeee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:24:03 np0005597378 nova_compute[238941]: 2026-01-27 14:24:03.821 238945 DEBUG nova.network.neutron [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updating instance_info_cache with network_info: [{"id": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "address": "fa:16:3e:15:b9:af", "network": {"id": "acd03ef9-9bfd-4078-adf3-4b0b930dc081", "bridge": "br-int", "label": "tempest-network-smoke--468321734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb316b5fe-59", "ovs_interfaceid": "b316b5fe-59c3-448f-897c-d7f990f2aeee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:03 np0005597378 nova_compute[238941]: 2026-01-27 14:24:03.883 238945 DEBUG oslo_concurrency.lockutils [req-ef3369b3-d7dc-4df4-ab3d-0754bcb6757b req-af39d077-85dd-4ba8-8195-9957decbd45d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-73e1c4d9-d84d-42d0-a385-e816ca65b541" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.729 238945 INFO nova.virt.libvirt.driver [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Deleting instance files /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541_del#033[00m
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.730 238945 INFO nova.virt.libvirt.driver [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Deletion of /var/lib/nova/instances/73e1c4d9-d84d-42d0-a385-e816ca65b541_del complete
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.788 238945 INFO nova.compute.manager [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Took 2.50 seconds to destroy the instance on the hypervisor.
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.789 238945 DEBUG oslo.service.loopingcall [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.789 238945 DEBUG nova.compute.manager [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.789 238945 DEBUG nova.network.neutron [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.819 238945 DEBUG nova.compute.manager [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.819 238945 DEBUG oslo_concurrency.lockutils [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.820 238945 DEBUG oslo_concurrency.lockutils [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.820 238945 DEBUG oslo_concurrency.lockutils [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.820 238945 DEBUG nova.compute.manager [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] No waiting events found dispatching network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.821 238945 WARNING nova.compute.manager [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received unexpected event network-vif-plugged-b316b5fe-59c3-448f-897c-d7f990f2aeee for instance with vm_state active and task_state deleting.
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.821 238945 DEBUG nova.compute.manager [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.821 238945 DEBUG nova.compute.manager [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing instance network info cache due to event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.822 238945 DEBUG oslo_concurrency.lockutils [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.822 238945 DEBUG oslo_concurrency.lockutils [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:24:04 np0005597378 nova_compute[238941]: 2026-01-27 14:24:04.822 238945 DEBUG nova.network.neutron [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 09:24:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 189 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 187 op/s
Jan 27 09:24:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:24:05 np0005597378 nova_compute[238941]: 2026-01-27 14:24:05.643 238945 DEBUG nova.network.neutron [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:24:05 np0005597378 nova_compute[238941]: 2026-01-27 14:24:05.663 238945 INFO nova.compute.manager [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Took 0.87 seconds to deallocate network for instance.
Jan 27 09:24:05 np0005597378 nova_compute[238941]: 2026-01-27 14:24:05.726 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:24:05 np0005597378 nova_compute[238941]: 2026-01-27 14:24:05.727 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:24:05 np0005597378 nova_compute[238941]: 2026-01-27 14:24:05.741 238945 DEBUG nova.compute.manager [req-d38a9cc7-0443-4e65-921a-f74a847d405c req-e0d54331-c665-4c66-9720-2a461636b72a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Received event network-vif-deleted-b316b5fe-59c3-448f-897c-d7f990f2aeee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:24:05 np0005597378 nova_compute[238941]: 2026-01-27 14:24:05.772 238945 DEBUG nova.scheduler.client.report [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 09:24:05 np0005597378 nova_compute[238941]: 2026-01-27 14:24:05.788 238945 DEBUG nova.scheduler.client.report [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 09:24:05 np0005597378 nova_compute[238941]: 2026-01-27 14:24:05.788 238945 DEBUG nova.compute.provider_tree [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 09:24:05 np0005597378 nova_compute[238941]: 2026-01-27 14:24:05.803 238945 DEBUG nova.scheduler.client.report [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 09:24:05 np0005597378 nova_compute[238941]: 2026-01-27 14:24:05.843 238945 DEBUG nova.scheduler.client.report [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 09:24:05 np0005597378 nova_compute[238941]: 2026-01-27 14:24:05.923 238945 DEBUG oslo_concurrency.processutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.172 238945 DEBUG nova.network.neutron [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updated VIF entry in instance network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.174 238945 DEBUG nova.network.neutron [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.216 238945 DEBUG oslo_concurrency.lockutils [req-2b4a3a39-ebb6-483b-9639-8adfc110b97f req-16a423b3-e5ee-41cb-b0f8-fa47d65159d5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.437 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:24:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:24:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3490138571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.490 238945 DEBUG oslo_concurrency.processutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.495 238945 DEBUG nova.compute.provider_tree [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.626 238945 DEBUG nova.scheduler.client.report [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.659 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.662 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.662 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.663 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.663 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.731 238945 INFO nova.scheduler.client.report [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance 73e1c4d9-d84d-42d0-a385-e816ca65b541
Jan 27 09:24:06 np0005597378 nova_compute[238941]: 2026-01-27 14:24:06.877 238945 DEBUG oslo_concurrency.lockutils [None req-b0c4900c-e057-48f9-af18-dcd3e661c123 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "73e1c4d9-d84d-42d0-a385-e816ca65b541" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:24:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 167 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 201 op/s
Jan 27 09:24:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:24:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687532963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.260 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.473 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.474 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.479 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.479 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.684 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.686 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3276MB free_disk=59.90927825495601GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.686 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.687 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.829 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.839 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance a48b56d5-6e62-4476-bee9-dc8cf3c1759d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.839 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 3b760120-0ed3-4962-b9ba-775e88e9a482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.840 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.840 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:24:07 np0005597378 nova_compute[238941]: 2026-01-27 14:24:07.890 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:24:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:24:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.3 total, 600.0 interval#012Cumulative writes: 42K writes, 166K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s#012Cumulative WAL: 42K writes, 15K syncs, 2.77 writes per sync, written: 0.16 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5853 writes, 25K keys, 5853 commit groups, 1.0 writes per commit group, ingest: 29.76 MB, 0.05 MB/s#012Interval WAL: 5853 writes, 2145 syncs, 2.73 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:24:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:24:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771885321' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:24:08 np0005597378 nova_compute[238941]: 2026-01-27 14:24:08.482 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:24:08 np0005597378 nova_compute[238941]: 2026-01-27 14:24:08.487 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:24:08 np0005597378 nova_compute[238941]: 2026-01-27 14:24:08.506 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:24:08 np0005597378 nova_compute[238941]: 2026-01-27 14:24:08.527 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:24:08 np0005597378 nova_compute[238941]: 2026-01-27 14:24:08.528 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:24:08 np0005597378 podman[366967]: 2026-01-27 14:24:08.7098533 +0000 UTC m=+0.047445331 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 09:24:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 167 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Jan 27 09:24:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:24:10 np0005597378 podman[366987]: 2026-01-27 14:24:10.769224208 +0000 UTC m=+0.110947883 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:24:10 np0005597378 nova_compute[238941]: 2026-01-27 14:24:10.891 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523835.8891525, 8495a58a-7371-4222-afef-f486eafff82d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:24:10 np0005597378 nova_compute[238941]: 2026-01-27 14:24:10.892 238945 INFO nova.compute.manager [-] [instance: 8495a58a-7371-4222-afef-f486eafff82d] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:24:10 np0005597378 nova_compute[238941]: 2026-01-27 14:24:10.997 238945 DEBUG nova.compute.manager [None req-23de46f7-f0b9-4960-a59f-fd47749d045d - - - - - -] [instance: 8495a58a-7371-4222-afef-f486eafff82d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:11 np0005597378 nova_compute[238941]: 2026-01-27 14:24:11.138 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:11 np0005597378 nova_compute[238941]: 2026-01-27 14:24:11.139 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 167 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 181 op/s
Jan 27 09:24:11 np0005597378 nova_compute[238941]: 2026-01-27 14:24:11.382 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:24:11 np0005597378 nova_compute[238941]: 2026-01-27 14:24:11.527 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:24:11 np0005597378 nova_compute[238941]: 2026-01-27 14:24:11.913 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:11 np0005597378 nova_compute[238941]: 2026-01-27 14:24:11.913 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:11 np0005597378 nova_compute[238941]: 2026-01-27 14:24:11.919 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:24:11 np0005597378 nova_compute[238941]: 2026-01-27 14:24:11.920 238945 INFO nova.compute.claims [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:24:12 np0005597378 nova_compute[238941]: 2026-01-27 14:24:12.321 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:12 np0005597378 nova_compute[238941]: 2026-01-27 14:24:12.322 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:12 np0005597378 nova_compute[238941]: 2026-01-27 14:24:12.584 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:12 np0005597378 nova_compute[238941]: 2026-01-27 14:24:12.627 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:12 np0005597378 nova_compute[238941]: 2026-01-27 14:24:12.629 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:24:12 np0005597378 nova_compute[238941]: 2026-01-27 14:24:12.800 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:12 np0005597378 nova_compute[238941]: 2026-01-27 14:24:12.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:24:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2470145415' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.153 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.159 238945 DEBUG nova.compute.provider_tree [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.184 238945 DEBUG nova.scheduler.client.report [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:24:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 167 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 387 KiB/s wr, 129 op/s
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.285 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.286 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.288 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.301 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.302 238945 INFO nova.compute.claims [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.378 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.378 238945 DEBUG nova.network.neutron [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.410 238945 INFO nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.440 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.523 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.561 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.563 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.564 238945 INFO nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Creating image(s)#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.591 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.614 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.636 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.641 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.676 238945 DEBUG nova.policy [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.713 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.713 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.714 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.714 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.732 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:13 np0005597378 nova_compute[238941]: 2026-01-27 14:24:13.735 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:24:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1207907209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:24:14 np0005597378 nova_compute[238941]: 2026-01-27 14:24:14.141 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:14 np0005597378 nova_compute[238941]: 2026-01-27 14:24:14.148 238945 DEBUG nova.compute.provider_tree [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:24:14 np0005597378 nova_compute[238941]: 2026-01-27 14:24:14.244 238945 DEBUG nova.scheduler.client.report [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:24:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:24:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4201.0 total, 600.0 interval#012Cumulative writes: 34K writes, 138K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 34K writes, 11K syncs, 2.87 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4081 writes, 17K keys, 4081 commit groups, 1.0 writes per commit group, ingest: 21.12 MB, 0.04 MB/s#012Interval WAL: 4081 writes, 1534 syncs, 2.66 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:24:14 np0005597378 nova_compute[238941]: 2026-01-27 14:24:14.711 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:14 np0005597378 nova_compute[238941]: 2026-01-27 14:24:14.712 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:24:14 np0005597378 nova_compute[238941]: 2026-01-27 14:24:14.850 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:24:14 np0005597378 nova_compute[238941]: 2026-01-27 14:24:14.850 238945 DEBUG nova.network.neutron [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:24:14 np0005597378 nova_compute[238941]: 2026-01-27 14:24:14.904 238945 INFO nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.000 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.095 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:15 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:15Z|01500|binding|INFO|Releasing lport 2c64544a-77a5-4e81-a088-de5cbdfdbfdd from this chassis (sb_readonly=0)
Jan 27 09:24:15 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:15Z|01501|binding|INFO|Releasing lport 21c9fe8e-89ff-4a00-8668-858e37e7400b from this chassis (sb_readonly=0)
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.162 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:24:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 185 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 167 op/s
Jan 27 09:24:15 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:15Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b1:ef:72 10.100.0.6
Jan 27 09:24:15 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:15Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b1:ef:72 10.100.0.6
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.291 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.296 238945 DEBUG nova.policy [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.300 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.301 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.302 238945 INFO nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Creating image(s)#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.326 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.343 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.365 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.369 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.441 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.442 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.443 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.444 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.464 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:15 np0005597378 nova_compute[238941]: 2026-01-27 14:24:15.466 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:24:16 np0005597378 nova_compute[238941]: 2026-01-27 14:24:16.015 238945 DEBUG nova.objects.instance [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid dc9117b7-6a0b-4142-a1be-23eca138e6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:24:16 np0005597378 nova_compute[238941]: 2026-01-27 14:24:16.063 238945 DEBUG nova.network.neutron [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Successfully created port: 1a78c49b-c423-4133-be6b-7c0298bc59ed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:24:16 np0005597378 nova_compute[238941]: 2026-01-27 14:24:16.170 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:24:16 np0005597378 nova_compute[238941]: 2026-01-27 14:24:16.170 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Ensure instance console log exists: /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:24:16 np0005597378 nova_compute[238941]: 2026-01-27 14:24:16.171 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:16 np0005597378 nova_compute[238941]: 2026-01-27 14:24:16.171 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:16 np0005597378 nova_compute[238941]: 2026-01-27 14:24:16.171 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:24:17
Jan 27 09:24:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:24:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:24:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'backups', '.rgw.root', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Jan 27 09:24:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:24:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 224 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 894 KiB/s rd, 3.5 MiB/s wr, 100 op/s
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.459 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.463 238945 DEBUG nova.network.neutron [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Successfully created port: 2bea04aa-c7f0-4939-8929-e4635c88700e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.526 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523842.5259473, 73e1c4d9-d84d-42d0-a385-e816ca65b541 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.527 238945 INFO nova.compute.manager [-] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.552 238945 DEBUG nova.compute.manager [None req-7597c2ce-ed78-4ff9-8bee-76dfd6d1d593 - - - - - -] [instance: 73e1c4d9-d84d-42d0-a385-e816ca65b541] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.614 238945 DEBUG nova.network.neutron [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Successfully updated port: 1a78c49b-c423-4133-be6b-7c0298bc59ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.631 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.633 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.633 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.633 238945 DEBUG nova.network.neutron [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.711 238945 DEBUG nova.compute.manager [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-changed-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.711 238945 DEBUG nova.compute.manager [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Refreshing instance network info cache due to event network-changed-1a78c49b-c423-4133-be6b-7c0298bc59ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.711 238945 DEBUG oslo_concurrency.lockutils [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.802 238945 DEBUG nova.network.neutron [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:24:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:24:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:24:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:24:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:24:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:24:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:24:17 np0005597378 nova_compute[238941]: 2026-01-27 14:24:17.869 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:24:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:24:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:24:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:24:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:24:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:24:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:24:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:24:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:24:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.328 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.862s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.392 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.488 238945 DEBUG nova.objects.instance [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid d26d9a39-75ed-4895-a69d-13ebb76c1e5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.515 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.516 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Ensure instance console log exists: /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.516 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.517 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.517 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.727 238945 DEBUG nova.network.neutron [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Successfully updated port: 2bea04aa-c7f0-4939-8929-e4635c88700e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.762 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.763 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.763 238945 DEBUG nova.network.neutron [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.863 238945 DEBUG nova.compute.manager [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-changed-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.863 238945 DEBUG nova.compute.manager [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Refreshing instance network info cache due to event network-changed-2bea04aa-c7f0-4939-8929-e4635c88700e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.863 238945 DEBUG oslo_concurrency.lockutils [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:24:18 np0005597378 nova_compute[238941]: 2026-01-27 14:24:18.970 238945 DEBUG nova.network.neutron [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:24:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 237 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.402 238945 DEBUG nova.network.neutron [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updating instance_info_cache with network_info: [{"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.473 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.473 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Instance network_info: |[{"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.474 238945 DEBUG oslo_concurrency.lockutils [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.474 238945 DEBUG nova.network.neutron [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Refreshing network info cache for port 1a78c49b-c423-4133-be6b-7c0298bc59ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.477 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Start _get_guest_xml network_info=[{"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.482 238945 WARNING nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.547 238945 DEBUG nova.virt.libvirt.host [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.548 238945 DEBUG nova.virt.libvirt.host [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.555 238945 DEBUG nova.virt.libvirt.host [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.555 238945 DEBUG nova.virt.libvirt.host [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.556 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.556 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.557 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.558 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.558 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.558 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.559 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.559 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.560 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.560 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.561 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.561 238945 DEBUG nova.virt.hardware [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:24:19 np0005597378 nova_compute[238941]: 2026-01-27 14:24:19.566 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:24:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232594521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.168 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.196 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.202 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:24:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:24:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355629532' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.819 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.821 238945 DEBUG nova.virt.libvirt.vif [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:24:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1604894193',display_name='tempest-TestGettingAddress-server-1604894193',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1604894193',id=141,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFecr+tX4/5Xu1tZ9NkxPRNp+sU4Sm9UH/xXlVYsgMAtclcrsWrG7L7f6bcerkRRUeyQngAWsSlep0AlnkCJJUfVUUUYoEAh7gvJP5a8vhXSQvgUuH8L5c7NH6HiA18qA==',key_name='tempest-TestGettingAddress-1130984253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-zetdo6gk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:24:13Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=dc9117b7-6a0b-4142-a1be-23eca138e6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.822 238945 DEBUG nova.network.os_vif_util [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.823 238945 DEBUG nova.network.os_vif_util [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.825 238945 DEBUG nova.objects.instance [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc9117b7-6a0b-4142-a1be-23eca138e6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.857 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  <uuid>dc9117b7-6a0b-4142-a1be-23eca138e6ed</uuid>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  <name>instance-0000008d</name>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-1604894193</nova:name>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:24:19</nova:creationTime>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:        <nova:port uuid="1a78c49b-c423-4133-be6b-7c0298bc59ed">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe95:7457" ipVersion="6"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe95:7457" ipVersion="6"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <entry name="serial">dc9117b7-6a0b-4142-a1be-23eca138e6ed</entry>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <entry name="uuid">dc9117b7-6a0b-4142-a1be-23eca138e6ed</entry>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk.config">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:95:74:57"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <target dev="tap1a78c49b-c4"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/console.log" append="off"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:24:20 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:24:20 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:24:20 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:24:20 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.858 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Preparing to wait for external event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.859 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.860 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.861 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.862 238945 DEBUG nova.virt.libvirt.vif [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:24:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1604894193',display_name='tempest-TestGettingAddress-server-1604894193',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1604894193',id=141,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFecr+tX4/5Xu1tZ9NkxPRNp+sU4Sm9UH/xXlVYsgMAtclcrsWrG7L7f6bcerkRRUeyQngAWsSlep0AlnkCJJUfVUUUYoEAh7gvJP5a8vhXSQvgUuH8L5c7NH6HiA18qA==',key_name='tempest-TestGettingAddress-1130984253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-zetdo6gk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:24:13Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=dc9117b7-6a0b-4142-a1be-23eca138e6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.862 238945 DEBUG nova.network.os_vif_util [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.864 238945 DEBUG nova.network.os_vif_util [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.864 238945 DEBUG os_vif [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.865 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.866 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.867 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.872 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.872 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a78c49b-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.873 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a78c49b-c4, col_values=(('external_ids', {'iface-id': '1a78c49b-c423-4133-be6b-7c0298bc59ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:74:57', 'vm-uuid': 'dc9117b7-6a0b-4142-a1be-23eca138e6ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:20 np0005597378 NetworkManager[48904]: <info>  [1769523860.8765] manager: (tap1a78c49b-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/616)
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.879 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:20 np0005597378 nova_compute[238941]: 2026-01-27 14:24:20.887 238945 INFO os_vif [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4')#033[00m
Jan 27 09:24:21 np0005597378 nova_compute[238941]: 2026-01-27 14:24:21.055 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:24:21 np0005597378 nova_compute[238941]: 2026-01-27 14:24:21.056 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:24:21 np0005597378 nova_compute[238941]: 2026-01-27 14:24:21.056 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:95:74:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:24:21 np0005597378 nova_compute[238941]: 2026-01-27 14:24:21.056 238945 INFO nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Using config drive#033[00m
Jan 27 09:24:21 np0005597378 nova_compute[238941]: 2026-01-27 14:24:21.086 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 5.7 MiB/s wr, 122 op/s
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.119 238945 INFO nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Creating config drive at /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/disk.config#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.124 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd7k1v185 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.260 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd7k1v185" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.293 238945 DEBUG nova.storage.rbd_utils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.297 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/disk.config dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.530 238945 DEBUG oslo_concurrency.processutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/disk.config dc9117b7-6a0b-4142-a1be-23eca138e6ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.532 238945 INFO nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Deleting local config drive /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed/disk.config because it was imported into RBD.#033[00m
Jan 27 09:24:22 np0005597378 kernel: tap1a78c49b-c4: entered promiscuous mode
Jan 27 09:24:22 np0005597378 NetworkManager[48904]: <info>  [1769523862.6009] manager: (tap1a78c49b-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/617)
Jan 27 09:24:22 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:22Z|01502|binding|INFO|Claiming lport 1a78c49b-c423-4133-be6b-7c0298bc59ed for this chassis.
Jan 27 09:24:22 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:22Z|01503|binding|INFO|1a78c49b-c423-4133-be6b-7c0298bc59ed: Claiming fa:16:3e:95:74:57 10.100.0.12 2001:db8:0:1:f816:3eff:fe95:7457 2001:db8::f816:3eff:fe95:7457
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.604 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.613 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:74:57 10.100.0.12 2001:db8:0:1:f816:3eff:fe95:7457 2001:db8::f816:3eff:fe95:7457'], port_security=['fa:16:3e:95:74:57 10.100.0.12 2001:db8:0:1:f816:3eff:fe95:7457 2001:db8::f816:3eff:fe95:7457'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe95:7457/64 2001:db8::f816:3eff:fe95:7457/64', 'neutron:device_id': 'dc9117b7-6a0b-4142-a1be-23eca138e6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09ee8f62-586d-4295-89e6-85eb382ffb49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=970192a6-003b-47e3-89a0-ee0f3cb35882, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1a78c49b-c423-4133-be6b-7c0298bc59ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.614 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1a78c49b-c423-4133-be6b-7c0298bc59ed in datapath 8b62a287-47a7-4adb-9afa-c15812d1a9e4 bound to our chassis#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.615 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8b62a287-47a7-4adb-9afa-c15812d1a9e4#033[00m
Jan 27 09:24:22 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:22Z|01504|binding|INFO|Setting lport 1a78c49b-c423-4133-be6b-7c0298bc59ed ovn-installed in OVS
Jan 27 09:24:22 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:22Z|01505|binding|INFO|Setting lport 1a78c49b-c423-4133-be6b-7c0298bc59ed up in Southbound
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.634 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb9b5d2-7175-4c8a-a96a-e8273180b3d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.634 238945 DEBUG nova.network.neutron [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updating instance_info_cache with network_info: [{"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.638 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:22 np0005597378 systemd-machined[207425]: New machine qemu-173-instance-0000008d.
Jan 27 09:24:22 np0005597378 systemd-udevd[367527]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.676 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.677 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Instance network_info: |[{"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:24:22 np0005597378 NetworkManager[48904]: <info>  [1769523862.6796] device (tap1a78c49b-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:24:22 np0005597378 NetworkManager[48904]: <info>  [1769523862.6805] device (tap1a78c49b-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:24:22 np0005597378 systemd[1]: Started Virtual Machine qemu-173-instance-0000008d.
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.678 238945 DEBUG oslo_concurrency.lockutils [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.679 238945 DEBUG nova.network.neutron [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Refreshing network info cache for port 2bea04aa-c7f0-4939-8929-e4635c88700e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.682 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Start _get_guest_xml network_info=[{"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.689 238945 WARNING nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.693 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a4101406-513f-48bb-b7b7-2afd963deeda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.695 238945 DEBUG nova.virt.libvirt.host [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.696 238945 DEBUG nova.virt.libvirt.host [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.697 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[6e39e263-da4c-49a6-87e2-bebe3f79fdf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.706 238945 DEBUG nova.virt.libvirt.host [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.707 238945 DEBUG nova.virt.libvirt.host [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.707 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.707 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.708 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.708 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.708 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.709 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.709 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.709 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.709 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.710 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.710 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.710 238945 DEBUG nova.virt.hardware [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.714 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.733 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[308eb3cd-50e0-4b68-b776-982f1d0923c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.754 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1b85ceee-c833-40a2-b9b5-a2cec8261347]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b62a287-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:e9:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 432], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646685, 'reachable_time': 21782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367539, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.772 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2563e918-0834-442a-9107-66f15860b9be]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8b62a287-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646700, 'tstamp': 646700}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367541, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8b62a287-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646703, 'tstamp': 646703}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367541, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.775 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b62a287-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.776 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.778 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b62a287-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.779 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.779 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8b62a287-40, col_values=(('external_ids', {'iface-id': '2c64544a-77a5-4e81-a088-de5cbdfdbfdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:22 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:22.780 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:24:22 np0005597378 nova_compute[238941]: 2026-01-27 14:24:22.872 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.071 238945 DEBUG nova.compute.manager [req-bf25d0ec-440d-4e58-b4d6-ab6c13c01628 req-24e65bbf-078c-4856-b6af-3b488faf452e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.072 238945 DEBUG oslo_concurrency.lockutils [req-bf25d0ec-440d-4e58-b4d6-ab6c13c01628 req-24e65bbf-078c-4856-b6af-3b488faf452e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.073 238945 DEBUG oslo_concurrency.lockutils [req-bf25d0ec-440d-4e58-b4d6-ab6c13c01628 req-24e65bbf-078c-4856-b6af-3b488faf452e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.074 238945 DEBUG oslo_concurrency.lockutils [req-bf25d0ec-440d-4e58-b4d6-ab6c13c01628 req-24e65bbf-078c-4856-b6af-3b488faf452e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.075 238945 DEBUG nova.compute.manager [req-bf25d0ec-440d-4e58-b4d6-ab6c13c01628 req-24e65bbf-078c-4856-b6af-3b488faf452e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Processing event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 5.7 MiB/s wr, 115 op/s
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.225 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523863.2252874, dc9117b7-6a0b-4142-a1be-23eca138e6ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.227 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] VM Started (Lifecycle Event)#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.230 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.233 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.237 238945 INFO nova.virt.libvirt.driver [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Instance spawned successfully.#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.237 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.273 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.281 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.285 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.285 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.286 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.286 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.287 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.287 238945 DEBUG nova.virt.libvirt.driver [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:24:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168032418' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.303 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.304 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523863.2263472, dc9117b7-6a0b-4142-a1be-23eca138e6ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.304 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.316 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.337 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.341 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.383 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.388 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523863.2345464, dc9117b7-6a0b-4142-a1be-23eca138e6ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.388 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.398 238945 INFO nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Took 9.84 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.398 238945 DEBUG nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.463 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.470 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.520 238945 INFO nova.compute.manager [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Took 11.62 seconds to build instance.#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.542 238945 DEBUG oslo_concurrency.lockutils [None req-6e2296b2-af40-4a54-97cb-c99d038ebb91 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:24:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306292712' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.923 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.925 238945 DEBUG nova.virt.libvirt.vif [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-369475052',display_name='tempest-TestNetworkBasicOps-server-369475052',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-369475052',id=142,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5w5T+3kCzYfWmq+5KqGrTxgprvjwVZGlRdvUqLd/42OfJ3cq/ld//vcwc0/1PXWydVvUOFEKiE2lZdeo8YOq3qITNuRPGs8LSPTJjJ2JVXnqR8zBCbMVeFNoTO3IF5Vg==',key_name='tempest-TestNetworkBasicOps-518969303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-77an57a6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:24:15Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=d26d9a39-75ed-4895-a69d-13ebb76c1e5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.925 238945 DEBUG nova.network.os_vif_util [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.926 238945 DEBUG nova.network.os_vif_util [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.927 238945 DEBUG nova.objects.instance [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid d26d9a39-75ed-4895-a69d-13ebb76c1e5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.966 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  <uuid>d26d9a39-75ed-4895-a69d-13ebb76c1e5d</uuid>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  <name>instance-0000008e</name>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkBasicOps-server-369475052</nova:name>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:24:22</nova:creationTime>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:        <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:        <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:        <nova:port uuid="2bea04aa-c7f0-4939-8929-e4635c88700e">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <entry name="serial">d26d9a39-75ed-4895-a69d-13ebb76c1e5d</entry>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <entry name="uuid">d26d9a39-75ed-4895-a69d-13ebb76c1e5d</entry>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk.config">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:fb:c6:84"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <target dev="tap2bea04aa-c7"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/console.log" append="off"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:24:23 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:24:23 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:24:23 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:24:23 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.967 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Preparing to wait for external event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.968 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.968 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.968 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.969 238945 DEBUG nova.virt.libvirt.vif [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-369475052',display_name='tempest-TestNetworkBasicOps-server-369475052',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-369475052',id=142,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5w5T+3kCzYfWmq+5KqGrTxgprvjwVZGlRdvUqLd/42OfJ3cq/ld//vcwc0/1PXWydVvUOFEKiE2lZdeo8YOq3qITNuRPGs8LSPTJjJ2JVXnqR8zBCbMVeFNoTO3IF5Vg==',key_name='tempest-TestNetworkBasicOps-518969303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-77an57a6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:24:15Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=d26d9a39-75ed-4895-a69d-13ebb76c1e5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.969 238945 DEBUG nova.network.os_vif_util [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.970 238945 DEBUG nova.network.os_vif_util [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.970 238945 DEBUG os_vif [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.971 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.971 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.972 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.975 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.975 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bea04aa-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.976 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2bea04aa-c7, col_values=(('external_ids', {'iface-id': '2bea04aa-c7f0-4939-8929-e4635c88700e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:c6:84', 'vm-uuid': 'd26d9a39-75ed-4895-a69d-13ebb76c1e5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.977 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:23 np0005597378 NetworkManager[48904]: <info>  [1769523863.9781] manager: (tap2bea04aa-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.980 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.983 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:23 np0005597378 nova_compute[238941]: 2026-01-27 14:24:23.984 238945 INFO os_vif [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7')#033[00m
Jan 27 09:24:24 np0005597378 nova_compute[238941]: 2026-01-27 14:24:24.176 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:24:24 np0005597378 nova_compute[238941]: 2026-01-27 14:24:24.177 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:24:24 np0005597378 nova_compute[238941]: 2026-01-27 14:24:24.177 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:fb:c6:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:24:24 np0005597378 nova_compute[238941]: 2026-01-27 14:24:24.178 238945 INFO nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Using config drive#033[00m
Jan 27 09:24:24 np0005597378 nova_compute[238941]: 2026-01-27 14:24:24.204 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:24 np0005597378 nova_compute[238941]: 2026-01-27 14:24:24.244 238945 DEBUG nova.network.neutron [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updated VIF entry in instance network info cache for port 1a78c49b-c423-4133-be6b-7c0298bc59ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:24:24 np0005597378 nova_compute[238941]: 2026-01-27 14:24:24.245 238945 DEBUG nova.network.neutron [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updating instance_info_cache with network_info: [{"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:24 np0005597378 nova_compute[238941]: 2026-01-27 14:24:24.260 238945 DEBUG oslo_concurrency.lockutils [req-8402532c-48ca-4478-aa38-5a3fc78db45c req-cd135356-bdef-4f19-8567-cd487a928e84 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:24:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 744 KiB/s rd, 5.7 MiB/s wr, 138 op/s
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.192 238945 INFO nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Creating config drive at /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/disk.config#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.202 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjk589w_x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.262 238945 DEBUG nova.compute.manager [req-f6afa2ec-63f1-4680-8c59-c0ed08301a9e req-9469dd49-a996-4f28-91de-3bc8663c2397 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.262 238945 DEBUG oslo_concurrency.lockutils [req-f6afa2ec-63f1-4680-8c59-c0ed08301a9e req-9469dd49-a996-4f28-91de-3bc8663c2397 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.263 238945 DEBUG oslo_concurrency.lockutils [req-f6afa2ec-63f1-4680-8c59-c0ed08301a9e req-9469dd49-a996-4f28-91de-3bc8663c2397 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.263 238945 DEBUG oslo_concurrency.lockutils [req-f6afa2ec-63f1-4680-8c59-c0ed08301a9e req-9469dd49-a996-4f28-91de-3bc8663c2397 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.263 238945 DEBUG nova.compute.manager [req-f6afa2ec-63f1-4680-8c59-c0ed08301a9e req-9469dd49-a996-4f28-91de-3bc8663c2397 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] No waiting events found dispatching network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.264 238945 WARNING nova.compute.manager [req-f6afa2ec-63f1-4680-8c59-c0ed08301a9e req-9469dd49-a996-4f28-91de-3bc8663c2397 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received unexpected event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed for instance with vm_state active and task_state None.#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.371 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjk589w_x" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.411 238945 DEBUG nova.storage.rbd_utils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.416 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/disk.config d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.456 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:24:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.700 238945 DEBUG nova.network.neutron [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updated VIF entry in instance network info cache for port 2bea04aa-c7f0-4939-8929-e4635c88700e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.701 238945 DEBUG nova.network.neutron [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updating instance_info_cache with network_info: [{"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.724 238945 DEBUG oslo_concurrency.lockutils [req-438d5a1e-aaf4-4174-aa95-6ef4a1e79e79 req-bf867a6b-a54d-4e72-9d7a-9c87f7d198e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.890 238945 DEBUG oslo_concurrency.processutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/disk.config d26d9a39-75ed-4895-a69d-13ebb76c1e5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.891 238945 INFO nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Deleting local config drive /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d/disk.config because it was imported into RBD.#033[00m
Jan 27 09:24:25 np0005597378 kernel: tap2bea04aa-c7: entered promiscuous mode
Jan 27 09:24:25 np0005597378 NetworkManager[48904]: <info>  [1769523865.9366] manager: (tap2bea04aa-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/619)
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.939 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:25Z|01506|binding|INFO|Claiming lport 2bea04aa-c7f0-4939-8929-e4635c88700e for this chassis.
Jan 27 09:24:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:25Z|01507|binding|INFO|2bea04aa-c7f0-4939-8929-e4635c88700e: Claiming fa:16:3e:fb:c6:84 10.100.0.10
Jan 27 09:24:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:25Z|01508|binding|INFO|Setting lport 2bea04aa-c7f0-4939-8929-e4635c88700e ovn-installed in OVS
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:25 np0005597378 nova_compute[238941]: 2026-01-27 14:24:25.963 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:25Z|01509|binding|INFO|Setting lport 2bea04aa-c7f0-4939-8929-e4635c88700e up in Southbound
Jan 27 09:24:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:25.967 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:c6:84 10.100.0.10'], port_security=['fa:16:3e:fb:c6:84 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd26d9a39-75ed-4895-a69d-13ebb76c1e5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59abc835-0295-4512-a74a-a69f40a71781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc325c3c-6581-442b-bd64-dc83fa8573bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72c4a6ff-2118-4ec2-9861-05a7a4bf207f, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2bea04aa-c7f0-4939-8929-e4635c88700e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:24:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:25.968 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2bea04aa-c7f0-4939-8929-e4635c88700e in datapath 59abc835-0295-4512-a74a-a69f40a71781 bound to our chassis#033[00m
Jan 27 09:24:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:25.969 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 59abc835-0295-4512-a74a-a69f40a71781#033[00m
Jan 27 09:24:25 np0005597378 systemd-udevd[367720]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:24:25 np0005597378 systemd-machined[207425]: New machine qemu-174-instance-0000008e.
Jan 27 09:24:25 np0005597378 NetworkManager[48904]: <info>  [1769523865.9944] device (tap2bea04aa-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:24:25 np0005597378 NetworkManager[48904]: <info>  [1769523865.9958] device (tap2bea04aa-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:24:25 np0005597378 systemd[1]: Started Virtual Machine qemu-174-instance-0000008e.
Jan 27 09:24:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.000 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0681678b-7c36-4ad8-b1b1-0b4807c52df3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.041 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7c3c48-ea34-4457-b207-b27b972ead8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.045 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[74e145cf-5d4d-4cc6-8ab6-3b65c8f70c79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.081 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0e644f-74ae-4954-8866-c5884f652989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.101 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f03fb2-53d5-48b3-9bd9-910428f30488]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap59abc835-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647812, 'reachable_time': 24292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367733, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.122 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e74ad6-4ddc-43cf-8c7e-a483a9763b37]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap59abc835-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647825, 'tstamp': 647825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367734, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap59abc835-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647828, 'tstamp': 647828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367734, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.124 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59abc835-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:26 np0005597378 nova_compute[238941]: 2026-01-27 14:24:26.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:26 np0005597378 nova_compute[238941]: 2026-01-27 14:24:26.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.127 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59abc835-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.127 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:24:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.128 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap59abc835-00, col_values=(('external_ids', {'iface-id': '21c9fe8e-89ff-4a00-8668-858e37e7400b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:26.128 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:24:26 np0005597378 nova_compute[238941]: 2026-01-27 14:24:26.655 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523866.6544719, d26d9a39-75ed-4895-a69d-13ebb76c1e5d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:24:26 np0005597378 nova_compute[238941]: 2026-01-27 14:24:26.655 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] VM Started (Lifecycle Event)#033[00m
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 151 op/s
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.334 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.340 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523866.6546416, d26d9a39-75ed-4895-a69d-13ebb76c1e5d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.341 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.381 238945 DEBUG nova.compute.manager [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.381 238945 DEBUG oslo_concurrency.lockutils [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.382 238945 DEBUG oslo_concurrency.lockutils [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.382 238945 DEBUG oslo_concurrency.lockutils [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.382 238945 DEBUG nova.compute.manager [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Processing event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.383 238945 DEBUG nova.compute.manager [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.383 238945 DEBUG oslo_concurrency.lockutils [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.383 238945 DEBUG oslo_concurrency.lockutils [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.383 238945 DEBUG oslo_concurrency.lockutils [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.384 238945 DEBUG nova.compute.manager [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] No waiting events found dispatching network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.384 238945 WARNING nova.compute.manager [req-10f490df-1c54-4b7e-b802-f06b3c32f97a req-b494fac0-0d13-49bd-bff6-5bdd6e51bd22 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received unexpected event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.385 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.386 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.389 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.393 238945 INFO nova.virt.libvirt.driver [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Instance spawned successfully.#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.394 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.396 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.400 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523867.390825, d26d9a39-75ed-4895-a69d-13ebb76c1e5d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.400 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.418 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.419 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.420 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.420 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.421 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.421 238945 DEBUG nova.virt.libvirt.driver [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.426 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.430 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.485 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.512 238945 INFO nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Took 12.21 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.513 238945 DEBUG nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.566 238945 INFO nova.compute.manager [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Took 14.78 seconds to build instance.#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.590 238945 DEBUG oslo_concurrency.lockutils [None req-b3134a71-95fc-4d11-b8d0-ff04407017d4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:27 np0005597378 nova_compute[238941]: 2026-01-27 14:24:27.873 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022245570925085185 of space, bias 1.0, pg target 0.6673671277525556 quantized to 32 (current 32)
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693412900987901 of space, bias 1.0, pg target 0.20080238702963704 quantized to 32 (current 32)
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.035957214709192e-06 of space, bias 4.0, pg target 0.0012431486576510303 quantized to 16 (current 16)
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:24:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:24:28 np0005597378 nova_compute[238941]: 2026-01-27 14:24:28.310 238945 DEBUG nova.compute.manager [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-changed-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:28 np0005597378 nova_compute[238941]: 2026-01-27 14:24:28.310 238945 DEBUG nova.compute.manager [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Refreshing instance network info cache due to event network-changed-1a78c49b-c423-4133-be6b-7c0298bc59ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:24:28 np0005597378 nova_compute[238941]: 2026-01-27 14:24:28.311 238945 DEBUG oslo_concurrency.lockutils [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:24:28 np0005597378 nova_compute[238941]: 2026-01-27 14:24:28.312 238945 DEBUG oslo_concurrency.lockutils [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:24:28 np0005597378 nova_compute[238941]: 2026-01-27 14:24:28.312 238945 DEBUG nova.network.neutron [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Refreshing network info cache for port 1a78c49b-c423-4133-be6b-7c0298bc59ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:24:28 np0005597378 nova_compute[238941]: 2026-01-27 14:24:28.979 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 169 op/s
Jan 27 09:24:29 np0005597378 nova_compute[238941]: 2026-01-27 14:24:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:24:29 np0005597378 nova_compute[238941]: 2026-01-27 14:24:29.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:24:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:24:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Jan 27 09:24:31 np0005597378 nova_compute[238941]: 2026-01-27 14:24:31.248 238945 DEBUG nova.network.neutron [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updated VIF entry in instance network info cache for port 1a78c49b-c423-4133-be6b-7c0298bc59ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:24:31 np0005597378 nova_compute[238941]: 2026-01-27 14:24:31.249 238945 DEBUG nova.network.neutron [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updating instance_info_cache with network_info: [{"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:31 np0005597378 nova_compute[238941]: 2026-01-27 14:24:31.294 238945 DEBUG oslo_concurrency.lockutils [req-bb974d16-6840-45aa-93b2-fd10a7baf65a req-730244c4-4b25-4b2c-b4f8-4a8c59ed1947 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:24:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:31.987 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:24:31 np0005597378 nova_compute[238941]: 2026-01-27 14:24:31.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:31.988 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:24:32 np0005597378 nova_compute[238941]: 2026-01-27 14:24:32.071 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:32 np0005597378 nova_compute[238941]: 2026-01-27 14:24:32.923 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:32 np0005597378 nova_compute[238941]: 2026-01-27 14:24:32.977 238945 DEBUG nova.compute.manager [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-changed-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:32 np0005597378 nova_compute[238941]: 2026-01-27 14:24:32.978 238945 DEBUG nova.compute.manager [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Refreshing instance network info cache due to event network-changed-2bea04aa-c7f0-4939-8929-e4635c88700e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:24:32 np0005597378 nova_compute[238941]: 2026-01-27 14:24:32.978 238945 DEBUG oslo_concurrency.lockutils [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:24:32 np0005597378 nova_compute[238941]: 2026-01-27 14:24:32.978 238945 DEBUG oslo_concurrency.lockutils [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:24:32 np0005597378 nova_compute[238941]: 2026-01-27 14:24:32.979 238945 DEBUG nova.network.neutron [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Refreshing network info cache for port 2bea04aa-c7f0-4939-8929-e4635c88700e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:24:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 37 KiB/s wr, 147 op/s
Jan 27 09:24:33 np0005597378 nova_compute[238941]: 2026-01-27 14:24:33.983 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:34 np0005597378 nova_compute[238941]: 2026-01-27 14:24:34.385 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:24:34 np0005597378 nova_compute[238941]: 2026-01-27 14:24:34.968 238945 DEBUG nova.network.neutron [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updated VIF entry in instance network info cache for port 2bea04aa-c7f0-4939-8929-e4635c88700e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:24:34 np0005597378 nova_compute[238941]: 2026-01-27 14:24:34.969 238945 DEBUG nova.network.neutron [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updating instance_info_cache with network_info: [{"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:34 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:34.989 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:35 np0005597378 nova_compute[238941]: 2026-01-27 14:24:35.014 238945 DEBUG oslo_concurrency.lockutils [req-898e282e-caca-4c4b-bdc7-455e8cda5935 req-46be3b1b-7426-4c6c-ba0b-3dc1af9eccf3 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:24:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 40 KiB/s wr, 148 op/s
Jan 27 09:24:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:24:36 np0005597378 nova_compute[238941]: 2026-01-27 14:24:36.687 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:36 np0005597378 nova_compute[238941]: 2026-01-27 14:24:36.690 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:36 np0005597378 nova_compute[238941]: 2026-01-27 14:24:36.706 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:24:36 np0005597378 nova_compute[238941]: 2026-01-27 14:24:36.792 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:36 np0005597378 nova_compute[238941]: 2026-01-27 14:24:36.793 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:36 np0005597378 nova_compute[238941]: 2026-01-27 14:24:36.802 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:24:36 np0005597378 nova_compute[238941]: 2026-01-27 14:24:36.803 238945 INFO nova.compute.claims [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:24:36 np0005597378 nova_compute[238941]: 2026-01-27 14:24:36.942 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 29 KiB/s wr, 131 op/s
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:24:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2255829608' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.578 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.587 238945 DEBUG nova.compute.provider_tree [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.608 238945 DEBUG nova.scheduler.client.report [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.635 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.636 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.676 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.677 238945 DEBUG nova.network.neutron [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.696 238945 INFO nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.717 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:24:37 np0005597378 podman[367945]: 2026-01-27 14:24:37.785192608 +0000 UTC m=+0.077081290 container create e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.800 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.802 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.802 238945 INFO nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Creating image(s)#033[00m
Jan 27 09:24:37 np0005597378 podman[367945]: 2026-01-27 14:24:37.740302378 +0000 UTC m=+0.032191060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:24:37 np0005597378 systemd[1]: Started libpod-conmon-e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e.scope.
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.838 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.867 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.891 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.898 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.943 238945 DEBUG nova.policy [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.985 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.986 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.986 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:37 np0005597378 nova_compute[238941]: 2026-01-27 14:24:37.986 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:38 np0005597378 nova_compute[238941]: 2026-01-27 14:24:38.012 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:38 np0005597378 nova_compute[238941]: 2026-01-27 14:24:38.015 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:38 np0005597378 podman[367945]: 2026-01-27 14:24:38.059099555 +0000 UTC m=+0.350988237 container init e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 09:24:38 np0005597378 podman[367945]: 2026-01-27 14:24:38.067008388 +0000 UTC m=+0.358897070 container start e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 09:24:38 np0005597378 dreamy_hellman[367979]: 167 167
Jan 27 09:24:38 np0005597378 systemd[1]: libpod-e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e.scope: Deactivated successfully.
Jan 27 09:24:38 np0005597378 podman[367945]: 2026-01-27 14:24:38.133912863 +0000 UTC m=+0.425801545 container attach e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:24:38 np0005597378 podman[367945]: 2026-01-27 14:24:38.135934217 +0000 UTC m=+0.427822899 container died e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 09:24:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:24:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:24:38 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:24:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c530fbf456348b3418c9170a157bec8bfe1ae60a7197f8336a1a2389ba65ff11-merged.mount: Deactivated successfully.
Jan 27 09:24:38 np0005597378 podman[367945]: 2026-01-27 14:24:38.336379183 +0000 UTC m=+0.628267865 container remove e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:24:38 np0005597378 systemd[1]: libpod-conmon-e04c26b9358d83a19cde1487e5077a5ed701ec89a8665cbc6cbc4096a38d5c6e.scope: Deactivated successfully.
Jan 27 09:24:38 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:38Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:74:57 10.100.0.12
Jan 27 09:24:38 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:38Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:74:57 10.100.0.12
Jan 27 09:24:38 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Jan 27 09:24:38 np0005597378 nova_compute[238941]: 2026-01-27 14:24:38.551 238945 DEBUG nova.network.neutron [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Successfully created port: b3ff749e-8765-47c6-88f6-a029bc9d426b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:24:38 np0005597378 podman[368079]: 2026-01-27 14:24:38.553317804 +0000 UTC m=+0.047786561 container create 5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:24:38 np0005597378 nova_compute[238941]: 2026-01-27 14:24:38.565 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:38 np0005597378 systemd[1]: Started libpod-conmon-5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c.scope.
Jan 27 09:24:38 np0005597378 podman[368079]: 2026-01-27 14:24:38.532043909 +0000 UTC m=+0.026512666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:24:38 np0005597378 nova_compute[238941]: 2026-01-27 14:24:38.636 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:24:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:24:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9919e7f95e38c0fa0bd353580956b6cce677b35c571b92cb67878796471c6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9919e7f95e38c0fa0bd353580956b6cce677b35c571b92cb67878796471c6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9919e7f95e38c0fa0bd353580956b6cce677b35c571b92cb67878796471c6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9919e7f95e38c0fa0bd353580956b6cce677b35c571b92cb67878796471c6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9919e7f95e38c0fa0bd353580956b6cce677b35c571b92cb67878796471c6d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:38 np0005597378 podman[368079]: 2026-01-27 14:24:38.69224413 +0000 UTC m=+0.186712907 container init 5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 09:24:38 np0005597378 podman[368079]: 2026-01-27 14:24:38.700643197 +0000 UTC m=+0.195111954 container start 5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:24:38 np0005597378 podman[368079]: 2026-01-27 14:24:38.711624633 +0000 UTC m=+0.206093420 container attach 5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 09:24:38 np0005597378 nova_compute[238941]: 2026-01-27 14:24:38.758 238945 DEBUG nova.objects.instance [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid 9d641ca9-51bf-4390-9b51-faf9982c1c8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:24:38 np0005597378 nova_compute[238941]: 2026-01-27 14:24:38.779 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:24:38 np0005597378 nova_compute[238941]: 2026-01-27 14:24:38.779 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Ensure instance console log exists: /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:24:38 np0005597378 nova_compute[238941]: 2026-01-27 14:24:38.780 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:38 np0005597378 nova_compute[238941]: 2026-01-27 14:24:38.780 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:38 np0005597378 nova_compute[238941]: 2026-01-27 14:24:38.780 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:38 np0005597378 nova_compute[238941]: 2026-01-27 14:24:38.986 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 336 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 127 op/s
Jan 27 09:24:39 np0005597378 tender_sanderson[368128]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:24:39 np0005597378 tender_sanderson[368128]: --> All data devices are unavailable
Jan 27 09:24:39 np0005597378 systemd[1]: libpod-5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c.scope: Deactivated successfully.
Jan 27 09:24:39 np0005597378 podman[368079]: 2026-01-27 14:24:39.255562842 +0000 UTC m=+0.750031599 container died 5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:24:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1c9919e7f95e38c0fa0bd353580956b6cce677b35c571b92cb67878796471c6d-merged.mount: Deactivated successfully.
Jan 27 09:24:39 np0005597378 podman[368079]: 2026-01-27 14:24:39.313432472 +0000 UTC m=+0.807901229 container remove 5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 09:24:39 np0005597378 systemd[1]: libpod-conmon-5c512ddee836acdafc144e2800135cf0dae69d1437fffef43ab1489254fcd40c.scope: Deactivated successfully.
Jan 27 09:24:39 np0005597378 podman[368187]: 2026-01-27 14:24:39.364759807 +0000 UTC m=+0.071786827 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 09:24:39 np0005597378 nova_compute[238941]: 2026-01-27 14:24:39.467 238945 DEBUG nova.network.neutron [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Successfully updated port: b3ff749e-8765-47c6-88f6-a029bc9d426b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:24:39 np0005597378 nova_compute[238941]: 2026-01-27 14:24:39.486 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:24:39 np0005597378 nova_compute[238941]: 2026-01-27 14:24:39.486 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:24:39 np0005597378 nova_compute[238941]: 2026-01-27 14:24:39.486 238945 DEBUG nova.network.neutron [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:24:39 np0005597378 nova_compute[238941]: 2026-01-27 14:24:39.606 238945 DEBUG nova.compute.manager [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-changed-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:39 np0005597378 nova_compute[238941]: 2026-01-27 14:24:39.606 238945 DEBUG nova.compute.manager [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Refreshing instance network info cache due to event network-changed-b3ff749e-8765-47c6-88f6-a029bc9d426b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:24:39 np0005597378 nova_compute[238941]: 2026-01-27 14:24:39.607 238945 DEBUG oslo_concurrency.lockutils [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:24:39 np0005597378 nova_compute[238941]: 2026-01-27 14:24:39.656 238945 DEBUG nova.network.neutron [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:24:39 np0005597378 podman[368276]: 2026-01-27 14:24:39.821774542 +0000 UTC m=+0.079112825 container create cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tu, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 09:24:39 np0005597378 podman[368276]: 2026-01-27 14:24:39.776910881 +0000 UTC m=+0.034249184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:24:39 np0005597378 systemd[1]: Started libpod-conmon-cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d.scope.
Jan 27 09:24:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:24:39 np0005597378 podman[368276]: 2026-01-27 14:24:39.9770803 +0000 UTC m=+0.234418603 container init cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:24:39 np0005597378 podman[368276]: 2026-01-27 14:24:39.989265569 +0000 UTC m=+0.246603892 container start cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:24:40 np0005597378 cool_tu[368292]: 167 167
Jan 27 09:24:40 np0005597378 systemd[1]: libpod-cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d.scope: Deactivated successfully.
Jan 27 09:24:40 np0005597378 podman[368276]: 2026-01-27 14:24:40.030915752 +0000 UTC m=+0.288254055 container attach cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 09:24:40 np0005597378 podman[368276]: 2026-01-27 14:24:40.032439573 +0000 UTC m=+0.289777866 container died cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:24:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f1a1a195e3b62c152ba4990b58be38aceb4f683619ca26d7c11ba252ca255509-merged.mount: Deactivated successfully.
Jan 27 09:24:40 np0005597378 podman[368276]: 2026-01-27 14:24:40.100993722 +0000 UTC m=+0.358332005 container remove cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 09:24:40 np0005597378 systemd[1]: libpod-conmon-cd4c929239dc51e782399d2fac5494f4671a8a34534887a3d8aad3b9b9b7f77d.scope: Deactivated successfully.
Jan 27 09:24:40 np0005597378 podman[368318]: 2026-01-27 14:24:40.316873444 +0000 UTC m=+0.052561108 container create 1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_visvesvaraya, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:24:40 np0005597378 systemd[1]: Started libpod-conmon-1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda.scope.
Jan 27 09:24:40 np0005597378 podman[368318]: 2026-01-27 14:24:40.291351996 +0000 UTC m=+0.027039700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:24:40 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:24:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73402b2dfab3c4d6fc0214bc29cf810cb8eb93d8f33def29c556a6ca3ee61b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73402b2dfab3c4d6fc0214bc29cf810cb8eb93d8f33def29c556a6ca3ee61b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73402b2dfab3c4d6fc0214bc29cf810cb8eb93d8f33def29c556a6ca3ee61b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73402b2dfab3c4d6fc0214bc29cf810cb8eb93d8f33def29c556a6ca3ee61b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:40 np0005597378 podman[368318]: 2026-01-27 14:24:40.436892071 +0000 UTC m=+0.172579755 container init 1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_visvesvaraya, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 09:24:40 np0005597378 podman[368318]: 2026-01-27 14:24:40.446743396 +0000 UTC m=+0.182431060 container start 1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Jan 27 09:24:40 np0005597378 podman[368318]: 2026-01-27 14:24:40.450982721 +0000 UTC m=+0.186670405 container attach 1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.495 238945 DEBUG nova.network.neutron [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updating instance_info_cache with network_info: [{"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.522 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.522 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Instance network_info: |[{"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.523 238945 DEBUG oslo_concurrency.lockutils [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.523 238945 DEBUG nova.network.neutron [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Refreshing network info cache for port b3ff749e-8765-47c6-88f6-a029bc9d426b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.527 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Start _get_guest_xml network_info=[{"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.532 238945 WARNING nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.541 238945 DEBUG nova.virt.libvirt.host [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.542 238945 DEBUG nova.virt.libvirt.host [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.545 238945 DEBUG nova.virt.libvirt.host [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.545 238945 DEBUG nova.virt.libvirt.host [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.546 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.546 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.546 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.547 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.547 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.547 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.547 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.548 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.548 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.548 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.549 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.549 238945 DEBUG nova.virt.hardware [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:24:40 np0005597378 nova_compute[238941]: 2026-01-27 14:24:40.551 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]: {
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:    "0": [
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:        {
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "devices": [
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "/dev/loop3"
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            ],
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_name": "ceph_lv0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_size": "21470642176",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "name": "ceph_lv0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "tags": {
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.cluster_name": "ceph",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.crush_device_class": "",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.encrypted": "0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.objectstore": "bluestore",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.osd_id": "0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.type": "block",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.vdo": "0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.with_tpm": "0"
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            },
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "type": "block",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "vg_name": "ceph_vg0"
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:        }
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:    ],
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:    "1": [
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:        {
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "devices": [
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "/dev/loop4"
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            ],
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_name": "ceph_lv1",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_size": "21470642176",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "name": "ceph_lv1",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "tags": {
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.cluster_name": "ceph",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.crush_device_class": "",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.encrypted": "0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.objectstore": "bluestore",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.osd_id": "1",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.type": "block",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.vdo": "0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.with_tpm": "0"
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            },
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "type": "block",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "vg_name": "ceph_vg1"
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:        }
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:    ],
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:    "2": [
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:        {
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "devices": [
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "/dev/loop5"
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            ],
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_name": "ceph_lv2",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_size": "21470642176",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "name": "ceph_lv2",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "tags": {
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.cluster_name": "ceph",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.crush_device_class": "",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.encrypted": "0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.objectstore": "bluestore",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.osd_id": "2",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.type": "block",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.vdo": "0",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:                "ceph.with_tpm": "0"
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            },
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "type": "block",
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:            "vg_name": "ceph_vg2"
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:        }
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]:    ]
Jan 27 09:24:40 np0005597378 eager_visvesvaraya[368335]: }
Jan 27 09:24:40 np0005597378 systemd[1]: libpod-1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda.scope: Deactivated successfully.
Jan 27 09:24:40 np0005597378 podman[368364]: 2026-01-27 14:24:40.804931006 +0000 UTC m=+0.029248890 container died 1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_visvesvaraya, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:24:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e73402b2dfab3c4d6fc0214bc29cf810cb8eb93d8f33def29c556a6ca3ee61b0-merged.mount: Deactivated successfully.
Jan 27 09:24:40 np0005597378 podman[368364]: 2026-01-27 14:24:40.853553437 +0000 UTC m=+0.077871291 container remove 1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 09:24:40 np0005597378 systemd[1]: libpod-conmon-1b95799e66b1b36713d36bade1ad739d3b3fbc331ed762abfd5bee3d5fea1cda.scope: Deactivated successfully.
Jan 27 09:24:40 np0005597378 podman[368379]: 2026-01-27 14:24:40.975136976 +0000 UTC m=+0.111419985 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 09:24:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:24:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/532367044' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.185 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 117 op/s
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.208 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.214 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:41 np0005597378 podman[368486]: 2026-01-27 14:24:41.345031601 +0000 UTC m=+0.041721145 container create 9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:24:41 np0005597378 systemd[1]: Started libpod-conmon-9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b.scope.
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.403 238945 DEBUG nova.network.neutron [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updated VIF entry in instance network info cache for port b3ff749e-8765-47c6-88f6-a029bc9d426b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.404 238945 DEBUG nova.network.neutron [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updating instance_info_cache with network_info: [{"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:41 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.421 238945 DEBUG oslo_concurrency.lockutils [req-0fecb24c-a823-4b56-95ef-adf82911a8ee req-d2bd71ad-3f75-427f-8040-649aae951b62 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:24:41 np0005597378 podman[368486]: 2026-01-27 14:24:41.327218402 +0000 UTC m=+0.023907976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:24:41 np0005597378 podman[368486]: 2026-01-27 14:24:41.435560493 +0000 UTC m=+0.132250047 container init 9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 09:24:41 np0005597378 podman[368486]: 2026-01-27 14:24:41.442627804 +0000 UTC m=+0.139317348 container start 9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:24:41 np0005597378 keen_archimedes[368522]: 167 167
Jan 27 09:24:41 np0005597378 systemd[1]: libpod-9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b.scope: Deactivated successfully.
Jan 27 09:24:41 np0005597378 conmon[368522]: conmon 9b61c59551f652d77fa4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b.scope/container/memory.events
Jan 27 09:24:41 np0005597378 podman[368486]: 2026-01-27 14:24:41.44877993 +0000 UTC m=+0.145469504 container attach 9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 09:24:41 np0005597378 podman[368486]: 2026-01-27 14:24:41.449768786 +0000 UTC m=+0.146458330 container died 9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:24:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-965e6b2f1194dfaf17c4b3503d8779e09bd344d8afa30db72b1e4566c2c7da17-merged.mount: Deactivated successfully.
Jan 27 09:24:41 np0005597378 podman[368486]: 2026-01-27 14:24:41.498933412 +0000 UTC m=+0.195622956 container remove 9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 09:24:41 np0005597378 systemd[1]: libpod-conmon-9b61c59551f652d77fa4a3bf2a4b7bfcb8326eae997fddb85144bf317b587b6b.scope: Deactivated successfully.
Jan 27 09:24:41 np0005597378 podman[368546]: 2026-01-27 14:24:41.70090753 +0000 UTC m=+0.053012241 container create d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_carson, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:24:41 np0005597378 systemd[1]: Started libpod-conmon-d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac.scope.
Jan 27 09:24:41 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:24:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fe3e2877c463320021140425433d8f4e2223509400b9e2807e48bbbef6c613/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fe3e2877c463320021140425433d8f4e2223509400b9e2807e48bbbef6c613/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fe3e2877c463320021140425433d8f4e2223509400b9e2807e48bbbef6c613/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fe3e2877c463320021140425433d8f4e2223509400b9e2807e48bbbef6c613/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:41 np0005597378 podman[368546]: 2026-01-27 14:24:41.676258634 +0000 UTC m=+0.028363375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:24:41 np0005597378 podman[368546]: 2026-01-27 14:24:41.783532138 +0000 UTC m=+0.135636869 container init d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_carson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:24:41 np0005597378 podman[368546]: 2026-01-27 14:24:41.790871925 +0000 UTC m=+0.142976636 container start d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 09:24:41 np0005597378 podman[368546]: 2026-01-27 14:24:41.795613993 +0000 UTC m=+0.147718714 container attach d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:24:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:24:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/847707508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.837 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.841 238945 DEBUG nova.virt.libvirt.vif [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=143,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGq+VoxsGmBbzc+D7tgsx0vjHmsPUWZSdmxqwRLFEOukACJkapOac1CwnGHBN3I+DYeVtyl+9o3eNYycx6pgOXLK2TRFDrqka4yppTyaJZN11t3rZ1Q0XfH8zG5pa3xqWQ==',key_name='tempest-TestSecurityGroupsBasicOps-1298980945',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-a3x2n6wg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:24:37Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=9d641ca9-51bf-4390-9b51-faf9982c1c8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.842 238945 DEBUG nova.network.os_vif_util [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.843 238945 DEBUG nova.network.os_vif_util [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.845 238945 DEBUG nova.objects.instance [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d641ca9-51bf-4390-9b51-faf9982c1c8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.860 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  <uuid>9d641ca9-51bf-4390-9b51-faf9982c1c8a</uuid>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  <name>instance-0000008f</name>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809</nova:name>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:24:40</nova:creationTime>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:        <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:        <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:        <nova:port uuid="b3ff749e-8765-47c6-88f6-a029bc9d426b">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <entry name="serial">9d641ca9-51bf-4390-9b51-faf9982c1c8a</entry>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <entry name="uuid">9d641ca9-51bf-4390-9b51-faf9982c1c8a</entry>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk.config">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:e5:aa:38"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <target dev="tapb3ff749e-87"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/console.log" append="off"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:24:41 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:24:41 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:24:41 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:24:41 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.869 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Preparing to wait for external event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.870 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.870 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.871 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.872 238945 DEBUG nova.virt.libvirt.vif [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=143,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGq+VoxsGmBbzc+D7tgsx0vjHmsPUWZSdmxqwRLFEOukACJkapOac1CwnGHBN3I+DYeVtyl+9o3eNYycx6pgOXLK2TRFDrqka4yppTyaJZN11t3rZ1Q0XfH8zG5pa3xqWQ==',key_name='tempest-TestSecurityGroupsBasicOps-1298980945',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-a3x2n6wg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:24:37Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=9d641ca9-51bf-4390-9b51-faf9982c1c8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.873 238945 DEBUG nova.network.os_vif_util [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.874 238945 DEBUG nova.network.os_vif_util [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.874 238945 DEBUG os_vif [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.875 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.876 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.877 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.881 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.882 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3ff749e-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.883 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3ff749e-87, col_values=(('external_ids', {'iface-id': 'b3ff749e-8765-47c6-88f6-a029bc9d426b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:aa:38', 'vm-uuid': '9d641ca9-51bf-4390-9b51-faf9982c1c8a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:41 np0005597378 NetworkManager[48904]: <info>  [1769523881.8866] manager: (tapb3ff749e-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.890 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.894 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.895 238945 INFO os_vif [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87')#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.965 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.979 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.980 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:e5:aa:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:24:41 np0005597378 nova_compute[238941]: 2026-01-27 14:24:41.980 238945 INFO nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Using config drive#033[00m
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.005 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.273 238945 INFO nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Creating config drive at /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/disk.config#033[00m
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.277 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf2aqshti execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.419 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf2aqshti" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.446 238945 DEBUG nova.storage.rbd_utils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.452 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/disk.config 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:42 np0005597378 lvm[368703]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:24:42 np0005597378 lvm[368704]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:24:42 np0005597378 lvm[368703]: VG ceph_vg0 finished
Jan 27 09:24:42 np0005597378 lvm[368704]: VG ceph_vg1 finished
Jan 27 09:24:42 np0005597378 lvm[368706]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:24:42 np0005597378 lvm[368706]: VG ceph_vg2 finished
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.608 238945 DEBUG oslo_concurrency.processutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/disk.config 9d641ca9-51bf-4390-9b51-faf9982c1c8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.609 238945 INFO nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Deleting local config drive /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a/disk.config because it was imported into RBD.#033[00m
Jan 27 09:24:42 np0005597378 competent_carson[368563]: {}
Jan 27 09:24:42 np0005597378 kernel: tapb3ff749e-87: entered promiscuous mode
Jan 27 09:24:42 np0005597378 NetworkManager[48904]: <info>  [1769523882.6672] manager: (tapb3ff749e-87): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Jan 27 09:24:42 np0005597378 systemd-udevd[368701]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:24:42 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:42Z|01510|binding|INFO|Claiming lport b3ff749e-8765-47c6-88f6-a029bc9d426b for this chassis.
Jan 27 09:24:42 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:42Z|01511|binding|INFO|b3ff749e-8765-47c6-88f6-a029bc9d426b: Claiming fa:16:3e:e5:aa:38 10.100.0.3
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.668 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.677 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:aa:38 10.100.0.3'], port_security=['fa:16:3e:e5:aa:38 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9d641ca9-51bf-4390-9b51-faf9982c1c8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56932f70-69fd-4849-9484-7365b82e7b06 e5c8c56a-2fd3-4fbe-a45d-eee3df467bf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9401c88-97fe-4738-854c-6d37b11b7963, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b3ff749e-8765-47c6-88f6-a029bc9d426b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.678 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b3ff749e-8765-47c6-88f6-a029bc9d426b in datapath 548c417a-f816-4e3a-8297-8c6898e6d0ec bound to our chassis#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.680 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 548c417a-f816-4e3a-8297-8c6898e6d0ec#033[00m
Jan 27 09:24:42 np0005597378 NetworkManager[48904]: <info>  [1769523882.6818] device (tapb3ff749e-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:24:42 np0005597378 NetworkManager[48904]: <info>  [1769523882.6845] device (tapb3ff749e-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:24:42 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:42Z|01512|binding|INFO|Setting lport b3ff749e-8765-47c6-88f6-a029bc9d426b ovn-installed in OVS
Jan 27 09:24:42 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:42Z|01513|binding|INFO|Setting lport b3ff749e-8765-47c6-88f6-a029bc9d426b up in Southbound
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.690 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:42 np0005597378 systemd[1]: libpod-d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac.scope: Deactivated successfully.
Jan 27 09:24:42 np0005597378 systemd[1]: libpod-d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac.scope: Consumed 1.376s CPU time.
Jan 27 09:24:42 np0005597378 podman[368546]: 2026-01-27 14:24:42.692039519 +0000 UTC m=+1.044144250 container died d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_carson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.692 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.692 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e8994b3-3bec-4442-b87b-cf194ace56a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.695 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap548c417a-f1 in ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.698 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap548c417a-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.698 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d57d5309-9a1b-43c5-ba6c-c2027f4dcee7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.700 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d450f62b-1c9f-4c83-8089-bdb5eb49a957]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:42 np0005597378 systemd-machined[207425]: New machine qemu-175-instance-0000008f.
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.718 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[85e6472b-b2a0-42a2-a11c-25ac3139fbb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:42 np0005597378 systemd[1]: Started Virtual Machine qemu-175-instance-0000008f.
Jan 27 09:24:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay-98fe3e2877c463320021140425433d8f4e2223509400b9e2807e48bbbef6c613-merged.mount: Deactivated successfully.
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.740 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[da36ba77-03b1-4d21-8869-def1daae9f50]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:42 np0005597378 podman[368546]: 2026-01-27 14:24:42.745063499 +0000 UTC m=+1.097168210 container remove d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_carson, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 09:24:42 np0005597378 systemd[1]: libpod-conmon-d183ce0d8b36f93761f6a49ee9c604eb8a3ee345125d8dea336ad2f203e56cac.scope: Deactivated successfully.
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.773 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fade6b0d-f37f-401d-bf51-bbf612e71721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.778 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1bba05-4cda-4875-b734-2d524f3a3f55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:42 np0005597378 NetworkManager[48904]: <info>  [1769523882.7798] manager: (tap548c417a-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/622)
Jan 27 09:24:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.813 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[02dad02e-5631-4298-8fd1-1236451b65c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.817 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4909b464-e577-4b60-af21-3e35fe9dc9ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:24:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:24:42 np0005597378 NetworkManager[48904]: <info>  [1769523882.8472] device (tap548c417a-f0): carrier: link connected
Jan 27 09:24:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.856 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[43f28bf0-93c5-4979-aa0b-5dea25045bc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.882 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d348c08-abee-4520-93da-d322564856e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap548c417a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:b1:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652247, 'reachable_time': 22686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368771, 'error': None, 'target': 'ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.900 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[91b875e7-8e69-4346-a648-c8f3fc25cd5c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:b13e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652247, 'tstamp': 652247}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368789, 'error': None, 'target': 'ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.908 238945 DEBUG nova.compute.manager [req-9ce7cd6a-5c1c-4c95-827b-315dc806cab0 req-5a5be890-de2d-41c4-9c9e-1292cce8ca05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.908 238945 DEBUG oslo_concurrency.lockutils [req-9ce7cd6a-5c1c-4c95-827b-315dc806cab0 req-5a5be890-de2d-41c4-9c9e-1292cce8ca05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.908 238945 DEBUG oslo_concurrency.lockutils [req-9ce7cd6a-5c1c-4c95-827b-315dc806cab0 req-5a5be890-de2d-41c4-9c9e-1292cce8ca05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.909 238945 DEBUG oslo_concurrency.lockutils [req-9ce7cd6a-5c1c-4c95-827b-315dc806cab0 req-5a5be890-de2d-41c4-9c9e-1292cce8ca05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.909 238945 DEBUG nova.compute.manager [req-9ce7cd6a-5c1c-4c95-827b-315dc806cab0 req-5a5be890-de2d-41c4-9c9e-1292cce8ca05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Processing event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[beac392a-2438-47fb-ab63-171a4f516ae8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap548c417a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:b1:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652247, 'reachable_time': 22686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368791, 'error': None, 'target': 'ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:42 np0005597378 nova_compute[238941]: 2026-01-27 14:24:42.927 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:42 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:42.956 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d179037e-304b-41af-9435-fe57562a6c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.019 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b65cbe37-910b-441b-a6ee-e3f2ec1805bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.020 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap548c417a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.021 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.021 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap548c417a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.022 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:43 np0005597378 NetworkManager[48904]: <info>  [1769523883.0235] manager: (tap548c417a-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/623)
Jan 27 09:24:43 np0005597378 kernel: tap548c417a-f0: entered promiscuous mode
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.026 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap548c417a-f0, col_values=(('external_ids', {'iface-id': '1b069ba6-066d-43f6-bff3-b1997a730e15'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:43Z|01514|binding|INFO|Releasing lport 1b069ba6-066d-43f6-bff3-b1997a730e15 from this chassis (sb_readonly=0)
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.044 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.045 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/548c417a-f816-4e3a-8297-8c6898e6d0ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/548c417a-f816-4e3a-8297-8c6898e6d0ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.046 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[36dc39db-927f-41a7-a710-b43d7b2470ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.047 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-548c417a-f816-4e3a-8297-8c6898e6d0ec
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/548c417a-f816-4e3a-8297-8c6898e6d0ec.pid.haproxy
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 548c417a-f816-4e3a-8297-8c6898e6d0ec
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:24:43 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:43.048 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'env', 'PROCESS_TAG=haproxy-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/548c417a-f816-4e3a-8297-8c6898e6d0ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:24:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 304 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Jan 27 09:24:43 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Jan 27 09:24:43 np0005597378 podman[368859]: 2026-01-27 14:24:43.423073365 +0000 UTC m=+0.022273321 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.710 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523883.7101893, 9d641ca9-51bf-4390-9b51-faf9982c1c8a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.711 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] VM Started (Lifecycle Event)#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.713 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.716 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.721 238945 INFO nova.virt.libvirt.driver [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Instance spawned successfully.#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.721 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.743 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.755 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.759 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.759 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.760 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.760 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.760 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.761 238945 DEBUG nova.virt.libvirt.driver [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:24:43 np0005597378 podman[368859]: 2026-01-27 14:24:43.779138178 +0000 UTC m=+0.378338114 container create 35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.800 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.800 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523883.7104437, 9d641ca9-51bf-4390-9b51-faf9982c1c8a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.801 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:24:43 np0005597378 systemd[1]: Started libpod-conmon-35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844.scope.
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.832 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.836 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523883.7163138, 9d641ca9-51bf-4390-9b51-faf9982c1c8a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.836 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:24:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.850 238945 INFO nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Took 6.05 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.850 238945 DEBUG nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f4f2afb3f556e728134c906f45ca8e8817bc0d5e26cfeb71cf8823f94ce481d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.862 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.866 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.889 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:24:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:24:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:24:43 np0005597378 podman[368859]: 2026-01-27 14:24:43.925020502 +0000 UTC m=+0.524220458 container init 35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:24:43 np0005597378 podman[368859]: 2026-01-27 14:24:43.93122058 +0000 UTC m=+0.530420516 container start 35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.945 238945 INFO nova.compute.manager [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Took 7.20 seconds to build instance.#033[00m
Jan 27 09:24:43 np0005597378 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [NOTICE]   (368884) : New worker (368886) forked
Jan 27 09:24:43 np0005597378 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [NOTICE]   (368884) : Loading success.
Jan 27 09:24:43 np0005597378 nova_compute[238941]: 2026-01-27 14:24:43.966 238945 DEBUG oslo_concurrency.lockutils [None req-eabbe000-4f24-475c-90d8-aaf901e425a1 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:44 np0005597378 nova_compute[238941]: 2026-01-27 14:24:44.986 238945 DEBUG nova.compute.manager [req-e6978ebd-c9ed-4082-bd1b-e8d684fb7312 req-7b8c87e7-dbc1-4d18-9b2d-749c7105d75e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:44 np0005597378 nova_compute[238941]: 2026-01-27 14:24:44.986 238945 DEBUG oslo_concurrency.lockutils [req-e6978ebd-c9ed-4082-bd1b-e8d684fb7312 req-7b8c87e7-dbc1-4d18-9b2d-749c7105d75e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:44 np0005597378 nova_compute[238941]: 2026-01-27 14:24:44.987 238945 DEBUG oslo_concurrency.lockutils [req-e6978ebd-c9ed-4082-bd1b-e8d684fb7312 req-7b8c87e7-dbc1-4d18-9b2d-749c7105d75e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:44 np0005597378 nova_compute[238941]: 2026-01-27 14:24:44.987 238945 DEBUG oslo_concurrency.lockutils [req-e6978ebd-c9ed-4082-bd1b-e8d684fb7312 req-7b8c87e7-dbc1-4d18-9b2d-749c7105d75e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:44 np0005597378 nova_compute[238941]: 2026-01-27 14:24:44.987 238945 DEBUG nova.compute.manager [req-e6978ebd-c9ed-4082-bd1b-e8d684fb7312 req-7b8c87e7-dbc1-4d18-9b2d-749c7105d75e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] No waiting events found dispatching network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:24:44 np0005597378 nova_compute[238941]: 2026-01-27 14:24:44.987 238945 WARNING nova.compute.manager [req-e6978ebd-c9ed-4082-bd1b-e8d684fb7312 req-7b8c87e7-dbc1-4d18-9b2d-749c7105d75e 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received unexpected event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b for instance with vm_state active and task_state None.#033[00m
Jan 27 09:24:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:45Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:c6:84 10.100.0.10
Jan 27 09:24:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:45Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:c6:84 10.100.0.10
Jan 27 09:24:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 388 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 407 KiB/s rd, 5.3 MiB/s wr, 116 op/s
Jan 27 09:24:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:24:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:46.329 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:46.330 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:46.331 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:46 np0005597378 nova_compute[238941]: 2026-01-27 14:24:46.886 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 393 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.0 MiB/s wr, 151 op/s
Jan 27 09:24:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:24:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:24:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:24:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:24:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:24:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:24:47 np0005597378 nova_compute[238941]: 2026-01-27 14:24:47.888 238945 DEBUG nova.compute.manager [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-changed-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:47 np0005597378 nova_compute[238941]: 2026-01-27 14:24:47.889 238945 DEBUG nova.compute.manager [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Refreshing instance network info cache due to event network-changed-1a78c49b-c423-4133-be6b-7c0298bc59ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:24:47 np0005597378 nova_compute[238941]: 2026-01-27 14:24:47.889 238945 DEBUG oslo_concurrency.lockutils [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:24:47 np0005597378 nova_compute[238941]: 2026-01-27 14:24:47.889 238945 DEBUG oslo_concurrency.lockutils [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:24:47 np0005597378 nova_compute[238941]: 2026-01-27 14:24:47.890 238945 DEBUG nova.network.neutron [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Refreshing network info cache for port 1a78c49b-c423-4133-be6b-7c0298bc59ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:24:47 np0005597378 nova_compute[238941]: 2026-01-27 14:24:47.929 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:47 np0005597378 nova_compute[238941]: 2026-01-27 14:24:47.962 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:47 np0005597378 nova_compute[238941]: 2026-01-27 14:24:47.962 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:47 np0005597378 nova_compute[238941]: 2026-01-27 14:24:47.962 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:47 np0005597378 nova_compute[238941]: 2026-01-27 14:24:47.963 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:47 np0005597378 nova_compute[238941]: 2026-01-27 14:24:47.963 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:47 np0005597378 nova_compute[238941]: 2026-01-27 14:24:47.964 238945 INFO nova.compute.manager [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Terminating instance#033[00m
Jan 27 09:24:47 np0005597378 nova_compute[238941]: 2026-01-27 14:24:47.965 238945 DEBUG nova.compute.manager [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:24:48 np0005597378 kernel: tap1a78c49b-c4 (unregistering): left promiscuous mode
Jan 27 09:24:48 np0005597378 NetworkManager[48904]: <info>  [1769523888.0163] device (tap1a78c49b-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:24:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:48Z|01515|binding|INFO|Releasing lport 1a78c49b-c423-4133-be6b-7c0298bc59ed from this chassis (sb_readonly=0)
Jan 27 09:24:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:48Z|01516|binding|INFO|Setting lport 1a78c49b-c423-4133-be6b-7c0298bc59ed down in Southbound
Jan 27 09:24:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:48Z|01517|binding|INFO|Removing iface tap1a78c49b-c4 ovn-installed in OVS
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.041 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.044 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:74:57 10.100.0.12 2001:db8:0:1:f816:3eff:fe95:7457 2001:db8::f816:3eff:fe95:7457'], port_security=['fa:16:3e:95:74:57 10.100.0.12 2001:db8:0:1:f816:3eff:fe95:7457 2001:db8::f816:3eff:fe95:7457'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe95:7457/64 2001:db8::f816:3eff:fe95:7457/64', 'neutron:device_id': 'dc9117b7-6a0b-4142-a1be-23eca138e6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09ee8f62-586d-4295-89e6-85eb382ffb49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=970192a6-003b-47e3-89a0-ee0f3cb35882, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=1a78c49b-c423-4133-be6b-7c0298bc59ed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.046 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 1a78c49b-c423-4133-be6b-7c0298bc59ed in datapath 8b62a287-47a7-4adb-9afa-c15812d1a9e4 unbound from our chassis#033[00m
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.047 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8b62a287-47a7-4adb-9afa-c15812d1a9e4#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.073 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5ea214-b79e-4a95-bcc8-9553ce3d116d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:48 np0005597378 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Jan 27 09:24:48 np0005597378 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008d.scope: Consumed 15.027s CPU time.
Jan 27 09:24:48 np0005597378 systemd-machined[207425]: Machine qemu-173-instance-0000008d terminated.
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.110 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed2b9bf-d269-4c4a-b712-c286fda03949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.114 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[74b0cd66-8d4b-4296-881f-135cb876990e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.150 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[47305a71-05eb-4ace-8c9b-f85f6c0c4beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.173 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95b7dcd2-aa1b-4ce6-ab09-0d546623c24b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b62a287-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:e9:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 432], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646685, 'reachable_time': 26957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368906, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.199 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e166fe03-ef11-44b4-9aee-d1c7ee07afd9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8b62a287-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646700, 'tstamp': 646700}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368909, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8b62a287-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646703, 'tstamp': 646703}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368909, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.201 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b62a287-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.230 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.236 238945 INFO nova.virt.libvirt.driver [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Instance destroyed successfully.#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.236 238945 DEBUG nova.objects.instance [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid dc9117b7-6a0b-4142-a1be-23eca138e6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.243 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b62a287-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.243 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.244 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8b62a287-40, col_values=(('external_ids', {'iface-id': '2c64544a-77a5-4e81-a088-de5cbdfdbfdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:48.244 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.252 238945 DEBUG nova.virt.libvirt.vif [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:24:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1604894193',display_name='tempest-TestGettingAddress-server-1604894193',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1604894193',id=141,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFecr+tX4/5Xu1tZ9NkxPRNp+sU4Sm9UH/xXlVYsgMAtclcrsWrG7L7f6bcerkRRUeyQngAWsSlep0AlnkCJJUfVUUUYoEAh7gvJP5a8vhXSQvgUuH8L5c7NH6HiA18qA==',key_name='tempest-TestGettingAddress-1130984253',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:24:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-zetdo6gk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:24:23Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=dc9117b7-6a0b-4142-a1be-23eca138e6ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.253 238945 DEBUG nova.network.os_vif_util [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.254 238945 DEBUG nova.network.os_vif_util [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.255 238945 DEBUG os_vif [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.258 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a78c49b-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.260 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.263 238945 INFO os_vif [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:74:57,bridge_name='br-int',has_traffic_filtering=True,id=1a78c49b-c423-4133-be6b-7c0298bc59ed,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a78c49b-c4')#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.587 238945 INFO nova.virt.libvirt.driver [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Deleting instance files /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed_del#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.588 238945 INFO nova.virt.libvirt.driver [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Deletion of /var/lib/nova/instances/dc9117b7-6a0b-4142-a1be-23eca138e6ed_del complete#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.655 238945 INFO nova.compute.manager [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.656 238945 DEBUG oslo.service.loopingcall [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.656 238945 DEBUG nova.compute.manager [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:24:48 np0005597378 nova_compute[238941]: 2026-01-27 14:24:48.657 238945 DEBUG nova.network.neutron [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:24:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 371 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 232 op/s
Jan 27 09:24:49 np0005597378 nova_compute[238941]: 2026-01-27 14:24:49.443 238945 DEBUG nova.network.neutron [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:49 np0005597378 nova_compute[238941]: 2026-01-27 14:24:49.458 238945 INFO nova.compute.manager [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Took 0.80 seconds to deallocate network for instance.#033[00m
Jan 27 09:24:49 np0005597378 nova_compute[238941]: 2026-01-27 14:24:49.496 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:49 np0005597378 nova_compute[238941]: 2026-01-27 14:24:49.497 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:49 np0005597378 nova_compute[238941]: 2026-01-27 14:24:49.622 238945 DEBUG oslo_concurrency.processutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.071 238945 DEBUG nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-vif-unplugged-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.072 238945 DEBUG oslo_concurrency.lockutils [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.072 238945 DEBUG oslo_concurrency.lockutils [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.073 238945 DEBUG oslo_concurrency.lockutils [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.073 238945 DEBUG nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] No waiting events found dispatching network-vif-unplugged-1a78c49b-c423-4133-be6b-7c0298bc59ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.073 238945 WARNING nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received unexpected event network-vif-unplugged-1a78c49b-c423-4133-be6b-7c0298bc59ed for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.074 238945 DEBUG nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.074 238945 DEBUG oslo_concurrency.lockutils [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.074 238945 DEBUG oslo_concurrency.lockutils [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.075 238945 DEBUG oslo_concurrency.lockutils [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.075 238945 DEBUG nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] No waiting events found dispatching network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.075 238945 WARNING nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received unexpected event network-vif-plugged-1a78c49b-c423-4133-be6b-7c0298bc59ed for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.076 238945 DEBUG nova.compute.manager [req-ae8c02e4-30a8-4373-b24d-44cc4f3b82af req-657151c5-1171-4d81-b0d1-e7232aae7cbc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Received event network-vif-deleted-1a78c49b-c423-4133-be6b-7c0298bc59ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.078 238945 DEBUG nova.compute.manager [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-changed-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.079 238945 DEBUG nova.compute.manager [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Refreshing instance network info cache due to event network-changed-b3ff749e-8765-47c6-88f6-a029bc9d426b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.079 238945 DEBUG oslo_concurrency.lockutils [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.079 238945 DEBUG oslo_concurrency.lockutils [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.080 238945 DEBUG nova.network.neutron [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Refreshing network info cache for port b3ff749e-8765-47c6-88f6-a029bc9d426b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:24:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:24:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1046044943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.236 238945 DEBUG oslo_concurrency.processutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.242 238945 DEBUG nova.compute.provider_tree [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.256 238945 DEBUG nova.scheduler.client.report [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.280 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.300 238945 INFO nova.scheduler.client.report [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance dc9117b7-6a0b-4142-a1be-23eca138e6ed#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.366 238945 DEBUG oslo_concurrency.lockutils [None req-b6935ad9-2f89-41c1-9423-7a3eefda6842 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "dc9117b7-6a0b-4142-a1be-23eca138e6ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.851 238945 DEBUG nova.network.neutron [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updated VIF entry in instance network info cache for port 1a78c49b-c423-4133-be6b-7c0298bc59ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.851 238945 DEBUG nova.network.neutron [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Updating instance_info_cache with network_info: [{"id": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "address": "fa:16:3e:95:74:57", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7457", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a78c49b-c4", "ovs_interfaceid": "1a78c49b-c423-4133-be6b-7c0298bc59ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:50 np0005597378 nova_compute[238941]: 2026-01-27 14:24:50.869 238945 DEBUG oslo_concurrency.lockutils [req-1c59e3c8-d6af-476d-9a6f-ac0cc9086576 req-118035e8-c2a0-40b2-8dae-ff7ae59532ff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dc9117b7-6a0b-4142-a1be-23eca138e6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:24:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 202 op/s
Jan 27 09:24:52 np0005597378 nova_compute[238941]: 2026-01-27 14:24:52.827 238945 DEBUG nova.network.neutron [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updated VIF entry in instance network info cache for port b3ff749e-8765-47c6-88f6-a029bc9d426b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:24:52 np0005597378 nova_compute[238941]: 2026-01-27 14:24:52.827 238945 DEBUG nova.network.neutron [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updating instance_info_cache with network_info: [{"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:52 np0005597378 nova_compute[238941]: 2026-01-27 14:24:52.853 238945 DEBUG oslo_concurrency.lockutils [req-3bcded08-d66d-46f8-beba-e5b65e872096 req-36ae5b4a-a104-4d19-aeef-b3103fb652eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:24:52 np0005597378 nova_compute[238941]: 2026-01-27 14:24:52.931 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:52 np0005597378 nova_compute[238941]: 2026-01-27 14:24:52.968 238945 DEBUG nova.compute.manager [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-changed-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:52 np0005597378 nova_compute[238941]: 2026-01-27 14:24:52.969 238945 DEBUG nova.compute.manager [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Refreshing instance network info cache due to event network-changed-b56b41e5-7177-4698-94c9-d69ffe22de91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:24:52 np0005597378 nova_compute[238941]: 2026-01-27 14:24:52.969 238945 DEBUG oslo_concurrency.lockutils [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:24:52 np0005597378 nova_compute[238941]: 2026-01-27 14:24:52.969 238945 DEBUG oslo_concurrency.lockutils [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:24:52 np0005597378 nova_compute[238941]: 2026-01-27 14:24:52.969 238945 DEBUG nova.network.neutron [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Refreshing network info cache for port b56b41e5-7177-4698-94c9-d69ffe22de91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.038 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.038 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.038 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.039 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.039 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.040 238945 INFO nova.compute.manager [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Terminating instance#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.041 238945 DEBUG nova.compute.manager [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:24:53 np0005597378 kernel: tapb56b41e5-71 (unregistering): left promiscuous mode
Jan 27 09:24:53 np0005597378 NetworkManager[48904]: <info>  [1769523893.1355] device (tapb56b41e5-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:24:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:53Z|01518|binding|INFO|Releasing lport b56b41e5-7177-4698-94c9-d69ffe22de91 from this chassis (sb_readonly=0)
Jan 27 09:24:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:53Z|01519|binding|INFO|Setting lport b56b41e5-7177-4698-94c9-d69ffe22de91 down in Southbound
Jan 27 09:24:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:53Z|01520|binding|INFO|Removing iface tapb56b41e5-71 ovn-installed in OVS
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.138 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.149 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:4a:93 10.100.0.14 2001:db8:0:1:f816:3eff:fe6a:4a93 2001:db8::f816:3eff:fe6a:4a93'], port_security=['fa:16:3e:6a:4a:93 10.100.0.14 2001:db8:0:1:f816:3eff:fe6a:4a93 2001:db8::f816:3eff:fe6a:4a93'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe6a:4a93/64 2001:db8::f816:3eff:fe6a:4a93/64', 'neutron:device_id': 'a48b56d5-6e62-4476-bee9-dc8cf3c1759d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09ee8f62-586d-4295-89e6-85eb382ffb49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=970192a6-003b-47e3-89a0-ee0f3cb35882, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b56b41e5-7177-4698-94c9-d69ffe22de91) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.150 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b56b41e5-7177-4698-94c9-d69ffe22de91 in datapath 8b62a287-47a7-4adb-9afa-c15812d1a9e4 unbound from our chassis#033[00m
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.152 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b62a287-47a7-4adb-9afa-c15812d1a9e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.153 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec65cfc0-4765-4bd3-aacb-2338b17a4a38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.154 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4 namespace which is not needed anymore#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 167 op/s
Jan 27 09:24:53 np0005597378 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Jan 27 09:24:53 np0005597378 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008b.scope: Consumed 15.477s CPU time.
Jan 27 09:24:53 np0005597378 systemd-machined[207425]: Machine qemu-171-instance-0000008b terminated.
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.259 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.279 238945 INFO nova.virt.libvirt.driver [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Instance destroyed successfully.#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.280 238945 DEBUG nova.objects.instance [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid a48b56d5-6e62-4476-bee9-dc8cf3c1759d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.299 238945 DEBUG nova.virt.libvirt.vif [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-562631681',display_name='tempest-TestGettingAddress-server-562631681',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-562631681',id=139,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFecr+tX4/5Xu1tZ9NkxPRNp+sU4Sm9UH/xXlVYsgMAtclcrsWrG7L7f6bcerkRRUeyQngAWsSlep0AlnkCJJUfVUUUYoEAh7gvJP5a8vhXSQvgUuH8L5c7NH6HiA18qA==',key_name='tempest-TestGettingAddress-1130984253',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:23:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-ncwzlcth',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:23:47Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=a48b56d5-6e62-4476-bee9-dc8cf3c1759d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.300 238945 DEBUG nova.network.os_vif_util [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.301 238945 DEBUG nova.network.os_vif_util [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.301 238945 DEBUG os_vif [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.303 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.304 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb56b41e5-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.305 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.308 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.308 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.311 238945 INFO os_vif [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:4a:93,bridge_name='br-int',has_traffic_filtering=True,id=b56b41e5-7177-4698-94c9-d69ffe22de91,network=Network(8b62a287-47a7-4adb-9afa-c15812d1a9e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56b41e5-71')#033[00m
Jan 27 09:24:53 np0005597378 neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4[366258]: [NOTICE]   (366262) : haproxy version is 2.8.14-c23fe91
Jan 27 09:24:53 np0005597378 neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4[366258]: [NOTICE]   (366262) : path to executable is /usr/sbin/haproxy
Jan 27 09:24:53 np0005597378 neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4[366258]: [ALERT]    (366262) : Current worker (366264) exited with code 143 (Terminated)
Jan 27 09:24:53 np0005597378 neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4[366258]: [WARNING]  (366262) : All workers exited. Exiting... (0)
Jan 27 09:24:53 np0005597378 systemd[1]: libpod-09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464.scope: Deactivated successfully.
Jan 27 09:24:53 np0005597378 podman[368988]: 2026-01-27 14:24:53.339410625 +0000 UTC m=+0.062464246 container died 09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:24:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464-userdata-shm.mount: Deactivated successfully.
Jan 27 09:24:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e0ce5402bbc95c4a252dd0faf198b68103aac7bfc53355d25b3cc66a82a3c46c-merged.mount: Deactivated successfully.
Jan 27 09:24:53 np0005597378 podman[368988]: 2026-01-27 14:24:53.390133073 +0000 UTC m=+0.113186684 container cleanup 09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:24:53 np0005597378 systemd[1]: libpod-conmon-09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464.scope: Deactivated successfully.
Jan 27 09:24:53 np0005597378 podman[369041]: 2026-01-27 14:24:53.4812494 +0000 UTC m=+0.059078014 container remove 09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.489 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[58ee0ff3-d80a-44e8-8428-e6f8c7a805b3]: (4, ('Tue Jan 27 02:24:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4 (09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464)\n09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464\nTue Jan 27 02:24:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4 (09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464)\n09b036107e24b27310f68ce67e0dce579bc127e167bbad577296bdc207011464\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.492 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[011056b2-df0b-42f4-b60b-d8cd99f4cad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.501 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b62a287-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:53 np0005597378 kernel: tap8b62a287-40: left promiscuous mode
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.520 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.523 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[566864d2-eac8-4572-8a0b-a82de72f0d72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.540 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c25a109c-92a5-47ba-9902-855f1ef566de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.542 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[81db15b4-ee60-4850-abc1-ef21b3232ece]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.561 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[dd64be77-012e-4e7e-9e41-ca2a0e23c6f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646676, 'reachable_time': 17026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369056, 'error': None, 'target': 'ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:53 np0005597378 systemd[1]: run-netns-ovnmeta\x2d8b62a287\x2d47a7\x2d4adb\x2d9afa\x2dc15812d1a9e4.mount: Deactivated successfully.
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.566 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8b62a287-47a7-4adb-9afa-c15812d1a9e4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:24:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:24:53.567 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff7b531-3e75-41f0-9131-1ce1f7d2d90f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.630 238945 INFO nova.virt.libvirt.driver [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Deleting instance files /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d_del#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.631 238945 INFO nova.virt.libvirt.driver [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Deletion of /var/lib/nova/instances/a48b56d5-6e62-4476-bee9-dc8cf3c1759d_del complete#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.693 238945 INFO nova.compute.manager [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.694 238945 DEBUG oslo.service.loopingcall [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.694 238945 DEBUG nova.compute.manager [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:24:53 np0005597378 nova_compute[238941]: 2026-01-27 14:24:53.694 238945 DEBUG nova.network.neutron [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:24:54 np0005597378 nova_compute[238941]: 2026-01-27 14:24:54.216 238945 DEBUG nova.network.neutron [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:54 np0005597378 nova_compute[238941]: 2026-01-27 14:24:54.237 238945 INFO nova.compute.manager [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Took 0.54 seconds to deallocate network for instance.#033[00m
Jan 27 09:24:54 np0005597378 nova_compute[238941]: 2026-01-27 14:24:54.282 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:54 np0005597378 nova_compute[238941]: 2026-01-27 14:24:54.283 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:54 np0005597378 nova_compute[238941]: 2026-01-27 14:24:54.285 238945 DEBUG nova.compute.manager [req-1e539cfa-3860-4aed-98e4-469cc85487ba req-6ab6a07d-8e78-4eec-90f3-f5fd1a04c3f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-vif-deleted-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:54 np0005597378 nova_compute[238941]: 2026-01-27 14:24:54.375 238945 DEBUG oslo_concurrency.processutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:24:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:24:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1410991503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:24:54 np0005597378 nova_compute[238941]: 2026-01-27 14:24:54.949 238945 DEBUG oslo_concurrency.processutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:24:54 np0005597378 nova_compute[238941]: 2026-01-27 14:24:54.956 238945 DEBUG nova.compute.provider_tree [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:24:54 np0005597378 nova_compute[238941]: 2026-01-27 14:24:54.974 238945 DEBUG nova.scheduler.client.report [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.001 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.027 238945 INFO nova.scheduler.client.report [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance a48b56d5-6e62-4476-bee9-dc8cf3c1759d#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.042 238945 DEBUG nova.compute.manager [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-vif-unplugged-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.043 238945 DEBUG oslo_concurrency.lockutils [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.043 238945 DEBUG oslo_concurrency.lockutils [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.043 238945 DEBUG oslo_concurrency.lockutils [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.043 238945 DEBUG nova.compute.manager [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] No waiting events found dispatching network-vif-unplugged-b56b41e5-7177-4698-94c9-d69ffe22de91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.043 238945 WARNING nova.compute.manager [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received unexpected event network-vif-unplugged-b56b41e5-7177-4698-94c9-d69ffe22de91 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.044 238945 DEBUG nova.compute.manager [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.044 238945 DEBUG oslo_concurrency.lockutils [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.044 238945 DEBUG oslo_concurrency.lockutils [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.044 238945 DEBUG oslo_concurrency.lockutils [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.044 238945 DEBUG nova.compute.manager [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] No waiting events found dispatching network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.044 238945 WARNING nova.compute.manager [req-f0251a5c-ebe8-4708-959b-ac64da369600 req-7462ddb8-9039-49a0-85e1-1d94d8df71c1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Received unexpected event network-vif-plugged-b56b41e5-7177-4698-94c9-d69ffe22de91 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.078 238945 DEBUG nova.network.neutron [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updated VIF entry in instance network info cache for port b56b41e5-7177-4698-94c9-d69ffe22de91. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.078 238945 DEBUG nova.network.neutron [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Updating instance_info_cache with network_info: [{"id": "b56b41e5-7177-4698-94c9-d69ffe22de91", "address": "fa:16:3e:6a:4a:93", "network": {"id": "8b62a287-47a7-4adb-9afa-c15812d1a9e4", "bridge": "br-int", "label": "tempest-network-smoke--419039562", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:4a93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56b41e5-71", "ovs_interfaceid": "b56b41e5-7177-4698-94c9-d69ffe22de91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.120 238945 DEBUG oslo_concurrency.lockutils [None req-7edcd7ad-9c01-49b2-9e49-2c23c9698544 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "a48b56d5-6e62-4476-bee9-dc8cf3c1759d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:24:55 np0005597378 nova_compute[238941]: 2026-01-27 14:24:55.122 238945 DEBUG oslo_concurrency.lockutils [req-aff1cbea-e389-495c-9345-239ea942e8c3 req-55d213d2-a508-4b90-91a4-f62b300fe707 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-a48b56d5-6e62-4476-bee9-dc8cf3c1759d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:24:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 276 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Jan 27 09:24:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:24:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 790 KiB/s wr, 167 op/s
Jan 27 09:24:57 np0005597378 nova_compute[238941]: 2026-01-27 14:24:57.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:58 np0005597378 nova_compute[238941]: 2026-01-27 14:24:58.307 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:24:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 262 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 162 op/s
Jan 27 09:24:59 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:59Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:aa:38 10.100.0.3
Jan 27 09:24:59 np0005597378 ovn_controller[144812]: 2026-01-27T14:24:59Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:aa:38 10.100.0.3
Jan 27 09:24:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:24:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3206007283' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:24:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:24:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3206007283' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:25:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:25:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 98 op/s
Jan 27 09:25:02 np0005597378 nova_compute[238941]: 2026-01-27 14:25:02.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Jan 27 09:25:03 np0005597378 nova_compute[238941]: 2026-01-27 14:25:03.290 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523888.231429, dc9117b7-6a0b-4142-a1be-23eca138e6ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:25:03 np0005597378 nova_compute[238941]: 2026-01-27 14:25:03.291 238945 INFO nova.compute.manager [-] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:25:03 np0005597378 nova_compute[238941]: 2026-01-27 14:25:03.309 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:03 np0005597378 nova_compute[238941]: 2026-01-27 14:25:03.314 238945 DEBUG nova.compute.manager [None req-96b51588-808c-4af9-a9ad-53b90d136874 - - - - - -] [instance: dc9117b7-6a0b-4142-a1be-23eca138e6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:25:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:04Z|01521|binding|INFO|Releasing lport 1b069ba6-066d-43f6-bff3-b1997a730e15 from this chassis (sb_readonly=0)
Jan 27 09:25:04 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:04Z|01522|binding|INFO|Releasing lport 21c9fe8e-89ff-4a00-8668-858e37e7400b from this chassis (sb_readonly=0)
Jan 27 09:25:04 np0005597378 nova_compute[238941]: 2026-01-27 14:25:04.548 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:05 np0005597378 nova_compute[238941]: 2026-01-27 14:25:05.179 238945 INFO nova.compute.manager [None req-69fbbb00-269a-4c37-95ce-c462e90beea9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Get console output#033[00m
Jan 27 09:25:05 np0005597378 nova_compute[238941]: 2026-01-27 14:25:05.185 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:25:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 2.2 MiB/s wr, 97 op/s
Jan 27 09:25:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:25:05 np0005597378 nova_compute[238941]: 2026-01-27 14:25:05.950 238945 DEBUG nova.compute.manager [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:05 np0005597378 nova_compute[238941]: 2026-01-27 14:25:05.950 238945 DEBUG nova.compute.manager [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing instance network info cache due to event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:25:05 np0005597378 nova_compute[238941]: 2026-01-27 14:25:05.950 238945 DEBUG oslo_concurrency.lockutils [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:05 np0005597378 nova_compute[238941]: 2026-01-27 14:25:05.950 238945 DEBUG oslo_concurrency.lockutils [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:05 np0005597378 nova_compute[238941]: 2026-01-27 14:25:05.951 238945 DEBUG nova.network.neutron [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:25:06 np0005597378 nova_compute[238941]: 2026-01-27 14:25:06.067 238945 DEBUG nova.compute.manager [req-9073bae0-31ca-4c5c-9b75-c786c9323c32 req-5bf5009a-85a2-4596-8247-88c4fe570cff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-unplugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:06 np0005597378 nova_compute[238941]: 2026-01-27 14:25:06.067 238945 DEBUG oslo_concurrency.lockutils [req-9073bae0-31ca-4c5c-9b75-c786c9323c32 req-5bf5009a-85a2-4596-8247-88c4fe570cff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:06 np0005597378 nova_compute[238941]: 2026-01-27 14:25:06.067 238945 DEBUG oslo_concurrency.lockutils [req-9073bae0-31ca-4c5c-9b75-c786c9323c32 req-5bf5009a-85a2-4596-8247-88c4fe570cff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:06 np0005597378 nova_compute[238941]: 2026-01-27 14:25:06.067 238945 DEBUG oslo_concurrency.lockutils [req-9073bae0-31ca-4c5c-9b75-c786c9323c32 req-5bf5009a-85a2-4596-8247-88c4fe570cff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:06 np0005597378 nova_compute[238941]: 2026-01-27 14:25:06.068 238945 DEBUG nova.compute.manager [req-9073bae0-31ca-4c5c-9b75-c786c9323c32 req-5bf5009a-85a2-4596-8247-88c4fe570cff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-unplugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:25:06 np0005597378 nova_compute[238941]: 2026-01-27 14:25:06.068 238945 WARNING nova.compute.manager [req-9073bae0-31ca-4c5c-9b75-c786c9323c32 req-5bf5009a-85a2-4596-8247-88c4fe570cff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received unexpected event network-vif-unplugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with vm_state active and task_state None.#033[00m
Jan 27 09:25:06 np0005597378 nova_compute[238941]: 2026-01-27 14:25:06.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:25:06 np0005597378 nova_compute[238941]: 2026-01-27 14:25:06.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:06 np0005597378 nova_compute[238941]: 2026-01-27 14:25:06.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:06 np0005597378 nova_compute[238941]: 2026-01-27 14:25:06.409 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:06 np0005597378 nova_compute[238941]: 2026-01-27 14:25:06.410 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:25:06 np0005597378 nova_compute[238941]: 2026-01-27 14:25:06.410 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:25:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4273199468' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:25:06 np0005597378 nova_compute[238941]: 2026-01-27 14:25:06.997 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.051 238945 INFO nova.compute.manager [None req-8925edc8-5b07-4149-b90f-a9c5efec31a9 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Get console output#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.055 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.122 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.122 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.126 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.126 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.129 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.130 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:25:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.318 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.319 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2976MB free_disk=59.85095764603466GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.320 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.320 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.401 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 3b760120-0ed3-4962-b9ba-775e88e9a482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.401 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance d26d9a39-75ed-4895-a69d-13ebb76c1e5d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.401 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 9d641ca9-51bf-4390-9b51-faf9982c1c8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.402 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.402 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.484 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.559 238945 DEBUG nova.network.neutron [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updated VIF entry in instance network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.560 238945 DEBUG nova.network.neutron [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.575 238945 DEBUG oslo_concurrency.lockutils [req-48ec868c-9c7a-4d86-bebf-5df66e81b7e5 req-cfc2df38-0a5c-45d9-8ff5-8d35fe2c72f7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:25:07 np0005597378 nova_compute[238941]: 2026-01-27 14:25:07.956 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:25:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605401281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.105 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.110 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.130 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.157 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.157 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.161 238945 DEBUG nova.compute.manager [req-441412fb-79a6-4db8-80fa-00ab7b2382d7 req-13d5a971-6b61-455f-b37c-0af9fff72d56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.162 238945 DEBUG oslo_concurrency.lockutils [req-441412fb-79a6-4db8-80fa-00ab7b2382d7 req-13d5a971-6b61-455f-b37c-0af9fff72d56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.162 238945 DEBUG oslo_concurrency.lockutils [req-441412fb-79a6-4db8-80fa-00ab7b2382d7 req-13d5a971-6b61-455f-b37c-0af9fff72d56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.163 238945 DEBUG oslo_concurrency.lockutils [req-441412fb-79a6-4db8-80fa-00ab7b2382d7 req-13d5a971-6b61-455f-b37c-0af9fff72d56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.163 238945 DEBUG nova.compute.manager [req-441412fb-79a6-4db8-80fa-00ab7b2382d7 req-13d5a971-6b61-455f-b37c-0af9fff72d56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.163 238945 WARNING nova.compute.manager [req-441412fb-79a6-4db8-80fa-00ab7b2382d7 req-13d5a971-6b61-455f-b37c-0af9fff72d56 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received unexpected event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with vm_state active and task_state None.#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.277 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523893.2756505, a48b56d5-6e62-4476-bee9-dc8cf3c1759d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.278 238945 INFO nova.compute.manager [-] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.301 238945 DEBUG nova.compute.manager [None req-f2c7575c-df51-4568-97f0-45df78cfe159 - - - - - -] [instance: a48b56d5-6e62-4476-bee9-dc8cf3c1759d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.310 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.817 238945 DEBUG nova.compute.manager [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.817 238945 DEBUG nova.compute.manager [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing instance network info cache due to event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.817 238945 DEBUG oslo_concurrency.lockutils [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.817 238945 DEBUG oslo_concurrency.lockutils [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:08 np0005597378 nova_compute[238941]: 2026-01-27 14:25:08.818 238945 DEBUG nova.network.neutron [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.079 238945 INFO nova.compute.manager [None req-94d6b557-9399-457b-a653-6a9710e2c016 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Get console output#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.086 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:25:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.459 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.460 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.461 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.461 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.462 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.464 238945 INFO nova.compute.manager [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Terminating instance#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.466 238945 DEBUG nova.compute.manager [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:25:09 np0005597378 kernel: tapb3ff749e-87 (unregistering): left promiscuous mode
Jan 27 09:25:09 np0005597378 NetworkManager[48904]: <info>  [1769523909.5188] device (tapb3ff749e-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:25:09 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:09Z|01523|binding|INFO|Releasing lport b3ff749e-8765-47c6-88f6-a029bc9d426b from this chassis (sb_readonly=0)
Jan 27 09:25:09 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:09Z|01524|binding|INFO|Setting lport b3ff749e-8765-47c6-88f6-a029bc9d426b down in Southbound
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.527 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:09 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:09Z|01525|binding|INFO|Removing iface tapb3ff749e-87 ovn-installed in OVS
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.529 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.535 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:aa:38 10.100.0.3'], port_security=['fa:16:3e:e5:aa:38 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9d641ca9-51bf-4390-9b51-faf9982c1c8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56932f70-69fd-4849-9484-7365b82e7b06 e5c8c56a-2fd3-4fbe-a45d-eee3df467bf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9401c88-97fe-4738-854c-6d37b11b7963, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b3ff749e-8765-47c6-88f6-a029bc9d426b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.536 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b3ff749e-8765-47c6-88f6-a029bc9d426b in datapath 548c417a-f816-4e3a-8297-8c6898e6d0ec unbound from our chassis#033[00m
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.537 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 548c417a-f816-4e3a-8297-8c6898e6d0ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.539 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b99bc290-9968-4ba6-8dbc-988980a9218a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.540 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec namespace which is not needed anymore#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:09 np0005597378 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Jan 27 09:25:09 np0005597378 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008f.scope: Consumed 16.010s CPU time.
Jan 27 09:25:09 np0005597378 systemd-machined[207425]: Machine qemu-175-instance-0000008f terminated.
Jan 27 09:25:09 np0005597378 podman[369124]: 2026-01-27 14:25:09.621925642 +0000 UTC m=+0.075852806 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:25:09 np0005597378 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [NOTICE]   (368884) : haproxy version is 2.8.14-c23fe91
Jan 27 09:25:09 np0005597378 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [NOTICE]   (368884) : path to executable is /usr/sbin/haproxy
Jan 27 09:25:09 np0005597378 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [WARNING]  (368884) : Exiting Master process...
Jan 27 09:25:09 np0005597378 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [ALERT]    (368884) : Current worker (368886) exited with code 143 (Terminated)
Jan 27 09:25:09 np0005597378 neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec[368880]: [WARNING]  (368884) : All workers exited. Exiting... (0)
Jan 27 09:25:09 np0005597378 systemd[1]: libpod-35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844.scope: Deactivated successfully.
Jan 27 09:25:09 np0005597378 podman[369166]: 2026-01-27 14:25:09.667747838 +0000 UTC m=+0.045556110 container died 35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.693 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:09 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844-userdata-shm.mount: Deactivated successfully.
Jan 27 09:25:09 np0005597378 systemd[1]: var-lib-containers-storage-overlay-6f4f2afb3f556e728134c906f45ca8e8817bc0d5e26cfeb71cf8823f94ce481d-merged.mount: Deactivated successfully.
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.705 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:09 np0005597378 podman[369166]: 2026-01-27 14:25:09.711607001 +0000 UTC m=+0.089415283 container cleanup 35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.713 238945 INFO nova.virt.libvirt.driver [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Instance destroyed successfully.#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.714 238945 DEBUG nova.objects.instance [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid 9d641ca9-51bf-4390-9b51-faf9982c1c8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:25:09 np0005597378 systemd[1]: libpod-conmon-35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844.scope: Deactivated successfully.
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.730 238945 DEBUG nova.virt.libvirt.vif [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-475252809',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=143,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGq+VoxsGmBbzc+D7tgsx0vjHmsPUWZSdmxqwRLFEOukACJkapOac1CwnGHBN3I+DYeVtyl+9o3eNYycx6pgOXLK2TRFDrqka4yppTyaJZN11t3rZ1Q0XfH8zG5pa3xqWQ==',key_name='tempest-TestSecurityGroupsBasicOps-1298980945',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:24:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-a3x2n6wg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:24:43Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=9d641ca9-51bf-4390-9b51-faf9982c1c8a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.731 238945 DEBUG nova.network.os_vif_util [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.731 238945 DEBUG nova.network.os_vif_util [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.732 238945 DEBUG os_vif [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.733 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.734 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3ff749e-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.735 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.736 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.738 238945 INFO os_vif [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:aa:38,bridge_name='br-int',has_traffic_filtering=True,id=b3ff749e-8765-47c6-88f6-a029bc9d426b,network=Network(548c417a-f816-4e3a-8297-8c6898e6d0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff749e-87')#033[00m
Jan 27 09:25:09 np0005597378 podman[369201]: 2026-01-27 14:25:09.779251186 +0000 UTC m=+0.045573711 container remove 35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.785 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7bddd6-2efe-42bb-9b78-32243d6e1f45]: (4, ('Tue Jan 27 02:25:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec (35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844)\n35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844\nTue Jan 27 02:25:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec (35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844)\n35d2c4ffb424a83503b277f72fdc14615c8ce74fb715e771be683d1820530844\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.787 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0cea4b1d-fdb6-40bd-a657-fb402cd043d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.788 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap548c417a-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:09 np0005597378 kernel: tap548c417a-f0: left promiscuous mode
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.804 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:09 np0005597378 nova_compute[238941]: 2026-01-27 14:25:09.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.807 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea2c516-6887-4c25-8127-dd8d1f6209b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.821 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b10900a7-e872-4ef3-8c7d-697fbfcf8c0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.823 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[77fdf60f-0e79-4b2b-9819-f1c13a55926b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.840 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb2199f-1a78-4a2b-9aa0-d0fc59e2bd18]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652239, 'reachable_time': 24182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369233, 'error': None, 'target': 'ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.842 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-548c417a-f816-4e3a-8297-8c6898e6d0ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:25:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:09.843 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[45607ad3-7136-4586-a9e4-7b80ba1ebfd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:09 np0005597378 systemd[1]: run-netns-ovnmeta\x2d548c417a\x2df816\x2d4e3a\x2d8297\x2d8c6898e6d0ec.mount: Deactivated successfully.
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.110 238945 INFO nova.virt.libvirt.driver [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Deleting instance files /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a_del#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.111 238945 INFO nova.virt.libvirt.driver [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Deletion of /var/lib/nova/instances/9d641ca9-51bf-4390-9b51-faf9982c1c8a_del complete#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.185 238945 INFO nova.compute.manager [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.185 238945 DEBUG oslo.service.loopingcall [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.185 238945 DEBUG nova.compute.manager [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.186 238945 DEBUG nova.network.neutron [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.281 238945 DEBUG nova.compute.manager [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.282 238945 DEBUG oslo_concurrency.lockutils [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.282 238945 DEBUG oslo_concurrency.lockutils [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.282 238945 DEBUG oslo_concurrency.lockutils [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.282 238945 DEBUG nova.compute.manager [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.283 238945 WARNING nova.compute.manager [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received unexpected event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with vm_state active and task_state None.#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.283 238945 DEBUG nova.compute.manager [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.283 238945 DEBUG oslo_concurrency.lockutils [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.283 238945 DEBUG oslo_concurrency.lockutils [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.284 238945 DEBUG oslo_concurrency.lockutils [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.284 238945 DEBUG nova.compute.manager [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.284 238945 WARNING nova.compute.manager [req-72675ca1-ab54-4938-b645-b8abae8efe8f req-888441f9-a4d8-4c48-beb6-0b8e4b5f91dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received unexpected event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with vm_state active and task_state None.#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.458 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.459 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.459 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.459 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.460 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.461 238945 INFO nova.compute.manager [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Terminating instance#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.462 238945 DEBUG nova.compute.manager [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.485 238945 DEBUG nova.network.neutron [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updated VIF entry in instance network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.486 238945 DEBUG nova.network.neutron [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.507 238945 DEBUG oslo_concurrency.lockutils [req-a0c2ef03-3c5a-4729-ab30-96964e1f2000 req-144f9e99-0948-416a-b230-2aba11e56baf 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:25:10 np0005597378 kernel: tap2bea04aa-c7 (unregistering): left promiscuous mode
Jan 27 09:25:10 np0005597378 NetworkManager[48904]: <info>  [1769523910.5178] device (tap2bea04aa-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:25:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.525 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:10Z|01526|binding|INFO|Releasing lport 2bea04aa-c7f0-4939-8929-e4635c88700e from this chassis (sb_readonly=0)
Jan 27 09:25:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:10Z|01527|binding|INFO|Setting lport 2bea04aa-c7f0-4939-8929-e4635c88700e down in Southbound
Jan 27 09:25:10 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:10Z|01528|binding|INFO|Removing iface tap2bea04aa-c7 ovn-installed in OVS
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.532 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:c6:84 10.100.0.10'], port_security=['fa:16:3e:fb:c6:84 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd26d9a39-75ed-4895-a69d-13ebb76c1e5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59abc835-0295-4512-a74a-a69f40a71781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc325c3c-6581-442b-bd64-dc83fa8573bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72c4a6ff-2118-4ec2-9861-05a7a4bf207f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=2bea04aa-c7f0-4939-8929-e4635c88700e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.534 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 2bea04aa-c7f0-4939-8929-e4635c88700e in datapath 59abc835-0295-4512-a74a-a69f40a71781 unbound from our chassis#033[00m
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.535 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 59abc835-0295-4512-a74a-a69f40a71781#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.560 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5c437577-6442-4dbd-82e5-bc4b2c5d0dd1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:10 np0005597378 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Jan 27 09:25:10 np0005597378 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008e.scope: Consumed 18.368s CPU time.
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.604 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0af9bcb4-e09d-47a6-b4dc-b0f142a48df5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:10 np0005597378 systemd-machined[207425]: Machine qemu-174-instance-0000008e terminated.
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.609 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[928ab22f-4b4a-4cc4-b664-bab6c27e5bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.641 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a9b4ac-cf65-4243-8074-10f084bae236]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.662 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8f8831-6ace-480c-9def-f2e680bf867a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap59abc835-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647812, 'reachable_time': 28951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369243, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.690 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[31914049-2563-41cc-b25e-ceff125236b0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap59abc835-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647825, 'tstamp': 647825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369244, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap59abc835-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647828, 'tstamp': 647828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369244, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.691 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59abc835-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.732 238945 INFO nova.virt.libvirt.driver [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Instance destroyed successfully.#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.732 238945 DEBUG nova.objects.instance [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid d26d9a39-75ed-4895-a69d-13ebb76c1e5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.734 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.735 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59abc835-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.735 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.736 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap59abc835-00, col_values=(('external_ids', {'iface-id': '21c9fe8e-89ff-4a00-8668-858e37e7400b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:10.736 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.747 238945 DEBUG nova.virt.libvirt.vif [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-369475052',display_name='tempest-TestNetworkBasicOps-server-369475052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-369475052',id=142,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5w5T+3kCzYfWmq+5KqGrTxgprvjwVZGlRdvUqLd/42OfJ3cq/ld//vcwc0/1PXWydVvUOFEKiE2lZdeo8YOq3qITNuRPGs8LSPTJjJ2JVXnqR8zBCbMVeFNoTO3IF5Vg==',key_name='tempest-TestNetworkBasicOps-518969303',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:24:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-77an57a6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:24:27Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=d26d9a39-75ed-4895-a69d-13ebb76c1e5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.748 238945 DEBUG nova.network.os_vif_util [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.748 238945 DEBUG nova.network.os_vif_util [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.749 238945 DEBUG os_vif [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.750 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.751 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bea04aa-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.752 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.753 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.755 238945 INFO os_vif [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:c6:84,bridge_name='br-int',has_traffic_filtering=True,id=2bea04aa-c7f0-4939-8929-e4635c88700e,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bea04aa-c7')#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.931 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-changed-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.931 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Refreshing instance network info cache due to event network-changed-b3ff749e-8765-47c6-88f6-a029bc9d426b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.931 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.932 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:10 np0005597378 nova_compute[238941]: 2026-01-27 14:25:10.932 238945 DEBUG nova.network.neutron [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Refreshing network info cache for port b3ff749e-8765-47c6-88f6-a029bc9d426b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:25:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 250 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 1.0 MiB/s wr, 99 op/s
Jan 27 09:25:11 np0005597378 nova_compute[238941]: 2026-01-27 14:25:11.438 238945 DEBUG nova.network.neutron [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:11 np0005597378 nova_compute[238941]: 2026-01-27 14:25:11.457 238945 INFO nova.compute.manager [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Took 1.27 seconds to deallocate network for instance.#033[00m
Jan 27 09:25:11 np0005597378 nova_compute[238941]: 2026-01-27 14:25:11.497 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:11 np0005597378 nova_compute[238941]: 2026-01-27 14:25:11.498 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:11 np0005597378 nova_compute[238941]: 2026-01-27 14:25:11.585 238945 INFO nova.virt.libvirt.driver [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Deleting instance files /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d_del#033[00m
Jan 27 09:25:11 np0005597378 nova_compute[238941]: 2026-01-27 14:25:11.586 238945 INFO nova.virt.libvirt.driver [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Deletion of /var/lib/nova/instances/d26d9a39-75ed-4895-a69d-13ebb76c1e5d_del complete#033[00m
Jan 27 09:25:11 np0005597378 nova_compute[238941]: 2026-01-27 14:25:11.590 238945 DEBUG oslo_concurrency.processutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:11 np0005597378 nova_compute[238941]: 2026-01-27 14:25:11.671 238945 INFO nova.compute.manager [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Took 1.21 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:25:11 np0005597378 nova_compute[238941]: 2026-01-27 14:25:11.672 238945 DEBUG oslo.service.loopingcall [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:25:11 np0005597378 nova_compute[238941]: 2026-01-27 14:25:11.672 238945 DEBUG nova.compute.manager [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:25:11 np0005597378 nova_compute[238941]: 2026-01-27 14:25:11.672 238945 DEBUG nova.network.neutron [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:25:11 np0005597378 podman[369277]: 2026-01-27 14:25:11.745048711 +0000 UTC m=+0.078306713 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 09:25:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:25:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2612821137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:25:12 np0005597378 nova_compute[238941]: 2026-01-27 14:25:12.238 238945 DEBUG oslo_concurrency.processutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:12 np0005597378 nova_compute[238941]: 2026-01-27 14:25:12.244 238945 DEBUG nova.compute.provider_tree [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:25:12 np0005597378 nova_compute[238941]: 2026-01-27 14:25:12.261 238945 DEBUG nova.scheduler.client.report [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:25:12 np0005597378 nova_compute[238941]: 2026-01-27 14:25:12.282 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:12 np0005597378 nova_compute[238941]: 2026-01-27 14:25:12.312 238945 INFO nova.scheduler.client.report [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance 9d641ca9-51bf-4390-9b51-faf9982c1c8a#033[00m
Jan 27 09:25:12 np0005597378 nova_compute[238941]: 2026-01-27 14:25:12.374 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-changed-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:12 np0005597378 nova_compute[238941]: 2026-01-27 14:25:12.374 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Refreshing instance network info cache due to event network-changed-2bea04aa-c7f0-4939-8929-e4635c88700e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:25:12 np0005597378 nova_compute[238941]: 2026-01-27 14:25:12.374 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:12 np0005597378 nova_compute[238941]: 2026-01-27 14:25:12.375 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:12 np0005597378 nova_compute[238941]: 2026-01-27 14:25:12.375 238945 DEBUG nova.network.neutron [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Refreshing network info cache for port 2bea04aa-c7f0-4939-8929-e4635c88700e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:25:12 np0005597378 nova_compute[238941]: 2026-01-27 14:25:12.411 238945 DEBUG oslo_concurrency.lockutils [None req-cae92d2f-ff47-4d4d-8e64-87a008ee6e4f 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.008 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.046 238945 DEBUG nova.compute.manager [req-b7e6e37d-0efe-45af-afe2-9792c3025c10 req-3cf62814-7084-4d8b-a654-291865c3a830 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-vif-deleted-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.157 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:25:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 250 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 48 KiB/s wr, 77 op/s
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.330 238945 DEBUG nova.network.neutron [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updated VIF entry in instance network info cache for port b3ff749e-8765-47c6-88f6-a029bc9d426b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.331 238945 DEBUG nova.network.neutron [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Updating instance_info_cache with network_info: [{"id": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "address": "fa:16:3e:e5:aa:38", "network": {"id": "548c417a-f816-4e3a-8297-8c6898e6d0ec", "bridge": "br-int", "label": "tempest-network-smoke--2121577413", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff749e-87", "ovs_interfaceid": "b3ff749e-8765-47c6-88f6-a029bc9d426b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.355 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-9d641ca9-51bf-4390-9b51-faf9982c1c8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.356 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-vif-unplugged-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.356 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.356 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.356 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.357 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] No waiting events found dispatching network-vif-unplugged-b3ff749e-8765-47c6-88f6-a029bc9d426b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.357 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-vif-unplugged-b3ff749e-8765-47c6-88f6-a029bc9d426b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.357 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.357 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.358 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.358 238945 DEBUG oslo_concurrency.lockutils [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "9d641ca9-51bf-4390-9b51-faf9982c1c8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.358 238945 DEBUG nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] No waiting events found dispatching network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.359 238945 WARNING nova.compute.manager [req-4369def2-a4cb-46c9-8436-0865aa291298 req-eca774ae-62ff-431c-af03-d99da0aa4621 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Received unexpected event network-vif-plugged-b3ff749e-8765-47c6-88f6-a029bc9d426b for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.409 238945 DEBUG nova.network.neutron [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.425 238945 INFO nova.compute.manager [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Took 1.75 seconds to deallocate network for instance.#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.466 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.467 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:13 np0005597378 nova_compute[238941]: 2026-01-27 14:25:13.539 238945 DEBUG oslo_concurrency.processutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:25:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3958367782' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.140 238945 DEBUG oslo_concurrency.processutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.147 238945 DEBUG nova.compute.provider_tree [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.167 238945 DEBUG nova.scheduler.client.report [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.197 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.218 238945 INFO nova.scheduler.client.report [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance d26d9a39-75ed-4895-a69d-13ebb76c1e5d#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.304 238945 DEBUG oslo_concurrency.lockutils [None req-2ed3dafe-7ae6-42bd-88bb-f710443304cf 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.426 238945 DEBUG nova.network.neutron [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updated VIF entry in instance network info cache for port 2bea04aa-c7f0-4939-8929-e4635c88700e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.427 238945 DEBUG nova.network.neutron [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Updating instance_info_cache with network_info: [{"id": "2bea04aa-c7f0-4939-8929-e4635c88700e", "address": "fa:16:3e:fb:c6:84", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bea04aa-c7", "ovs_interfaceid": "2bea04aa-c7f0-4939-8929-e4635c88700e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.450 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-d26d9a39-75ed-4895-a69d-13ebb76c1e5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.451 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-vif-unplugged-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.451 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.451 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.451 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.452 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] No waiting events found dispatching network-vif-unplugged-2bea04aa-c7f0-4939-8929-e4635c88700e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.452 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-vif-unplugged-2bea04aa-c7f0-4939-8929-e4635c88700e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.452 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.452 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.453 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.453 238945 DEBUG oslo_concurrency.lockutils [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "d26d9a39-75ed-4895-a69d-13ebb76c1e5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.453 238945 DEBUG nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] No waiting events found dispatching network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.453 238945 WARNING nova.compute.manager [req-7ff92178-5b3c-494d-a721-4a779a0906a8 req-7a0c5588-6e41-4804-8082-32030079dd9d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received unexpected event network-vif-plugged-2bea04aa-c7f0-4939-8929-e4635c88700e for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:25:14 np0005597378 nova_compute[238941]: 2026-01-27 14:25:14.574 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:15 np0005597378 nova_compute[238941]: 2026-01-27 14:25:15.176 238945 DEBUG nova.compute.manager [req-55d080fe-1ff3-490e-9fe6-6fe079a256d2 req-776dc277-ef48-435e-b122-8b8e09bf97ec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Received event network-vif-deleted-2bea04aa-c7f0-4939-8929-e4635c88700e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 151 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 53 KiB/s wr, 119 op/s
Jan 27 09:25:15 np0005597378 nova_compute[238941]: 2026-01-27 14:25:15.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:25:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:25:15 np0005597378 nova_compute[238941]: 2026-01-27 14:25:15.752 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:25:17
Jan 27 09:25:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:25:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:25:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'images', 'backups', 'vms', '.mgr', 'default.rgw.control', 'default.rgw.log', '.rgw.root']
Jan 27 09:25:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:25:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 121 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 19 KiB/s wr, 112 op/s
Jan 27 09:25:17 np0005597378 nova_compute[238941]: 2026-01-27 14:25:17.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:25:17 np0005597378 nova_compute[238941]: 2026-01-27 14:25:17.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:25:17 np0005597378 nova_compute[238941]: 2026-01-27 14:25:17.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:25:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:25:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:25:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:25:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:25:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:25:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:25:17 np0005597378 nova_compute[238941]: 2026-01-27 14:25:17.972 238945 DEBUG nova.compute.manager [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:17 np0005597378 nova_compute[238941]: 2026-01-27 14:25:17.973 238945 DEBUG nova.compute.manager [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing instance network info cache due to event network-changed-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:25:17 np0005597378 nova_compute[238941]: 2026-01-27 14:25:17.973 238945 DEBUG oslo_concurrency.lockutils [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:17 np0005597378 nova_compute[238941]: 2026-01-27 14:25:17.973 238945 DEBUG oslo_concurrency.lockutils [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:17 np0005597378 nova_compute[238941]: 2026-01-27 14:25:17.973 238945 DEBUG nova.network.neutron [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Refreshing network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.011 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.049 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.050 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.051 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.051 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.051 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.053 238945 INFO nova.compute.manager [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Terminating instance#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.054 238945 DEBUG nova.compute.manager [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.060 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:18 np0005597378 kernel: tap9d1f9be3-07 (unregistering): left promiscuous mode
Jan 27 09:25:18 np0005597378 NetworkManager[48904]: <info>  [1769523918.1093] device (tap9d1f9be3-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:25:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:18Z|01529|binding|INFO|Releasing lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc from this chassis (sb_readonly=0)
Jan 27 09:25:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:18Z|01530|binding|INFO|Setting lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc down in Southbound
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.118 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:18Z|01531|binding|INFO|Removing iface tap9d1f9be3-07 ovn-installed in OVS
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.121 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.130 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:ef:72 10.100.0.6'], port_security=['fa:16:3e:b1:ef:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3b760120-0ed3-4962-b9ba-775e88e9a482', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59abc835-0295-4512-a74a-a69f40a71781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '8', 'neutron:security_group_ids': '40bd7dba-01a9-428d-9280-5b6493a6f919', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72c4a6ff-2118-4ec2-9861-05a7a4bf207f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.132 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc in datapath 59abc835-0295-4512-a74a-a69f40a71781 unbound from our chassis#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.133 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 59abc835-0295-4512-a74a-a69f40a71781, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.134 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bce4dd76-78d9-4a24-b915-508a1f9984cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.134 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-59abc835-0295-4512-a74a-a69f40a71781 namespace which is not needed anymore#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:18 np0005597378 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Jan 27 09:25:18 np0005597378 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008c.scope: Consumed 16.316s CPU time.
Jan 27 09:25:18 np0005597378 systemd-machined[207425]: Machine qemu-172-instance-0000008c terminated.
Jan 27 09:25:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:25:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:25:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:25:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:25:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:25:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:25:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:25:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:25:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:25:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:25:18 np0005597378 kernel: tap9d1f9be3-07: entered promiscuous mode
Jan 27 09:25:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:18Z|01532|binding|INFO|Claiming lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for this chassis.
Jan 27 09:25:18 np0005597378 NetworkManager[48904]: <info>  [1769523918.2790] manager: (tap9d1f9be3-07): new Tun device (/org/freedesktop/NetworkManager/Devices/624)
Jan 27 09:25:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:18Z|01533|binding|INFO|9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc: Claiming fa:16:3e:b1:ef:72 10.100.0.6
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.279 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:18 np0005597378 kernel: tap9d1f9be3-07 (unregistering): left promiscuous mode
Jan 27 09:25:18 np0005597378 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [NOTICE]   (366785) : haproxy version is 2.8.14-c23fe91
Jan 27 09:25:18 np0005597378 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [NOTICE]   (366785) : path to executable is /usr/sbin/haproxy
Jan 27 09:25:18 np0005597378 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [WARNING]  (366785) : Exiting Master process...
Jan 27 09:25:18 np0005597378 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [WARNING]  (366785) : Exiting Master process...
Jan 27 09:25:18 np0005597378 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [ALERT]    (366785) : Current worker (366791) exited with code 143 (Terminated)
Jan 27 09:25:18 np0005597378 neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781[366762]: [WARNING]  (366785) : All workers exited. Exiting... (0)
Jan 27 09:25:18 np0005597378 systemd[1]: libpod-83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109.scope: Deactivated successfully.
Jan 27 09:25:18 np0005597378 conmon[366762]: conmon 83da3e4df95ba8acf105 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109.scope/container/memory.events
Jan 27 09:25:18 np0005597378 podman[369370]: 2026-01-27 14:25:18.296500913 +0000 UTC m=+0.057897653 container died 83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 09:25:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:18Z|01534|binding|INFO|Setting lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc ovn-installed in OVS
Jan 27 09:25:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:18Z|01535|if_status|INFO|Dropped 4 log messages in last 280 seconds (most recently, 280 seconds ago) due to excessive rate
Jan 27 09:25:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:18Z|01536|if_status|INFO|Not setting lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc down as sb is readonly
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.309 238945 INFO nova.virt.libvirt.driver [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Instance destroyed successfully.#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.313 238945 DEBUG nova.objects.instance [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid 3b760120-0ed3-4962-b9ba-775e88e9a482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:25:18 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109-userdata-shm.mount: Deactivated successfully.
Jan 27 09:25:18 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f852c612dce2e676500890dc6eb092ee976eb82f95fd913d4cc520dfa2d5244f-merged.mount: Deactivated successfully.
Jan 27 09:25:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:18Z|01537|binding|INFO|Releasing lport 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc from this chassis (sb_readonly=0)
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.340 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:ef:72 10.100.0.6'], port_security=['fa:16:3e:b1:ef:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3b760120-0ed3-4962-b9ba-775e88e9a482', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59abc835-0295-4512-a74a-a69f40a71781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '8', 'neutron:security_group_ids': '40bd7dba-01a9-428d-9280-5b6493a6f919', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72c4a6ff-2118-4ec2-9861-05a7a4bf207f, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:25:18 np0005597378 podman[369370]: 2026-01-27 14:25:18.34163273 +0000 UTC m=+0.103029450 container cleanup 83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.350 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:ef:72 10.100.0.6'], port_security=['fa:16:3e:b1:ef:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3b760120-0ed3-4962-b9ba-775e88e9a482', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-59abc835-0295-4512-a74a-a69f40a71781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '8', 'neutron:security_group_ids': '40bd7dba-01a9-428d-9280-5b6493a6f919', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72c4a6ff-2118-4ec2-9861-05a7a4bf207f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:18 np0005597378 systemd[1]: libpod-conmon-83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109.scope: Deactivated successfully.
Jan 27 09:25:18 np0005597378 podman[369400]: 2026-01-27 14:25:18.413232491 +0000 UTC m=+0.042289162 container remove 83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.419 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e6398400-d0c0-4bdc-a7cd-a57eeb713346]: (4, ('Tue Jan 27 02:25:18 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781 (83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109)\n83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109\nTue Jan 27 02:25:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-59abc835-0295-4512-a74a-a69f40a71781 (83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109)\n83da3e4df95ba8acf105b35e96a51cd991dcd24b740b37db9f9f459ebf201109\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.421 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ff38d5c6-6c8d-45a6-a75a-f2ff024a6bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.422 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59abc835-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.424 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:18 np0005597378 kernel: tap59abc835-00: left promiscuous mode
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.443 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.447 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[582f921a-399f-4145-9797-7d08d92ce4d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.461 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[114fa0d9-319d-40cd-9934-fe911f1ab155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.463 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6cef4ac4-b9a8-461b-9eca-74434cfd31a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.479 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fb881e30-2816-4478-b03d-6feb480a820b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647805, 'reachable_time': 38350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369419, 'error': None, 'target': 'ovnmeta-59abc835-0295-4512-a74a-a69f40a71781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.482 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-59abc835-0295-4512-a74a-a69f40a71781 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.482 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e51121-3be8-4fb5-9df8-1e091974addb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.483 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc in datapath 59abc835-0295-4512-a74a-a69f40a71781 unbound from our chassis#033[00m
Jan 27 09:25:18 np0005597378 systemd[1]: run-netns-ovnmeta\x2d59abc835\x2d0295\x2d4512\x2da74a\x2da69f40a71781.mount: Deactivated successfully.
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.483 238945 DEBUG nova.virt.libvirt.vif [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1968375921',display_name='tempest-TestNetworkBasicOps-server-1968375921',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1968375921',id=140,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO2tG0iX4tZcL3XZkbM582zqvrqtVTYPW16jky41KmlMQPu5aBJe/s0ZkPuNBq+T6QvN5iR8uPNh1bxal/m862xoL0jVGsVzwPs53IF9FOj+3Vl0QQ7KhYEcj7GLQKVuEQ==',key_name='tempest-TestNetworkBasicOps-1667824706',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:24:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-acmy2o20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:24:00Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=3b760120-0ed3-4962-b9ba-775e88e9a482,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.483 238945 DEBUG nova.network.os_vif_util [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.484 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 59abc835-0295-4512-a74a-a69f40a71781, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.484 238945 DEBUG nova.network.os_vif_util [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.485 238945 DEBUG os_vif [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.485 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e141b383-10a6-47ae-87b2-b8cf12a5b8c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.486 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc in datapath 59abc835-0295-4512-a74a-a69f40a71781 unbound from our chassis#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.487 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.487 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1f9be3-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.487 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 59abc835-0295-4512-a74a-a69f40a71781, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:25:18 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:18.488 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5a58a161-cefc-4e98-8996-e49bf5830fca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.495 238945 INFO os_vif [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:ef:72,bridge_name='br-int',has_traffic_filtering=True,id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc,network=Network(59abc835-0295-4512-a74a-a69f40a71781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d1f9be3-07')#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.774 238945 INFO nova.virt.libvirt.driver [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Deleting instance files /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482_del#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.775 238945 INFO nova.virt.libvirt.driver [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Deletion of /var/lib/nova/instances/3b760120-0ed3-4962-b9ba-775e88e9a482_del complete#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.841 238945 INFO nova.compute.manager [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.842 238945 DEBUG oslo.service.loopingcall [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.842 238945 DEBUG nova.compute.manager [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:25:18 np0005597378 nova_compute[238941]: 2026-01-27 14:25:18.843 238945 DEBUG nova.network.neutron [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:25:19 np0005597378 nova_compute[238941]: 2026-01-27 14:25:19.089 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 59 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 19 KiB/s wr, 129 op/s
Jan 27 09:25:19 np0005597378 nova_compute[238941]: 2026-01-27 14:25:19.261 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:19 np0005597378 nova_compute[238941]: 2026-01-27 14:25:19.565 238945 DEBUG nova.network.neutron [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updated VIF entry in instance network info cache for port 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:25:19 np0005597378 nova_compute[238941]: 2026-01-27 14:25:19.565 238945 DEBUG nova.network.neutron [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:19 np0005597378 nova_compute[238941]: 2026-01-27 14:25:19.602 238945 DEBUG oslo_concurrency.lockutils [req-e91b1bec-4dc6-476f-bb60-f7afe5550c91 req-33276f27-f6f7-4280-bdc5-ce46c13a6b00 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:25:19 np0005597378 nova_compute[238941]: 2026-01-27 14:25:19.603 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:19 np0005597378 nova_compute[238941]: 2026-01-27 14:25:19.603 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:25:19 np0005597378 nova_compute[238941]: 2026-01-27 14:25:19.603 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3b760120-0ed3-4962-b9ba-775e88e9a482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.109 238945 DEBUG nova.compute.manager [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-unplugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.109 238945 DEBUG oslo_concurrency.lockutils [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.109 238945 DEBUG oslo_concurrency.lockutils [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.110 238945 DEBUG oslo_concurrency.lockutils [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.110 238945 DEBUG nova.compute.manager [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-unplugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.110 238945 DEBUG nova.compute.manager [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-unplugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.110 238945 DEBUG nova.compute.manager [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.111 238945 DEBUG oslo_concurrency.lockutils [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.111 238945 DEBUG oslo_concurrency.lockutils [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.111 238945 DEBUG oslo_concurrency.lockutils [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.111 238945 DEBUG nova.compute.manager [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] No waiting events found dispatching network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.111 238945 WARNING nova.compute.manager [req-ba5ed7de-bda0-4384-9f6c-9c6e0154396e req-9f12362d-0d1b-41a3-aca9-b1a756979d05 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received unexpected event network-vif-plugged-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:25:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.887 238945 DEBUG nova.network.neutron [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:20 np0005597378 nova_compute[238941]: 2026-01-27 14:25:20.925 238945 INFO nova.compute.manager [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Took 2.08 seconds to deallocate network for instance.#033[00m
Jan 27 09:25:21 np0005597378 nova_compute[238941]: 2026-01-27 14:25:21.057 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:21 np0005597378 nova_compute[238941]: 2026-01-27 14:25:21.058 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:21 np0005597378 nova_compute[238941]: 2026-01-27 14:25:21.110 238945 DEBUG oslo_concurrency.processutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:21 np0005597378 nova_compute[238941]: 2026-01-27 14:25:21.179 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updating instance_info_cache with network_info: [{"id": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "address": "fa:16:3e:b1:ef:72", "network": {"id": "59abc835-0295-4512-a74a-a69f40a71781", "bridge": "br-int", "label": "tempest-network-smoke--316452736", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d1f9be3-07", "ovs_interfaceid": "9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:21 np0005597378 nova_compute[238941]: 2026-01-27 14:25:21.205 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-3b760120-0ed3-4962-b9ba-775e88e9a482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:25:21 np0005597378 nova_compute[238941]: 2026-01-27 14:25:21.206 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:25:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 41 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 7.8 KiB/s wr, 102 op/s
Jan 27 09:25:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:25:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150547422' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:25:21 np0005597378 nova_compute[238941]: 2026-01-27 14:25:21.676 238945 DEBUG oslo_concurrency.processutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:21 np0005597378 nova_compute[238941]: 2026-01-27 14:25:21.680 238945 DEBUG nova.compute.provider_tree [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:25:21 np0005597378 nova_compute[238941]: 2026-01-27 14:25:21.701 238945 DEBUG nova.scheduler.client.report [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:25:21 np0005597378 nova_compute[238941]: 2026-01-27 14:25:21.723 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:21 np0005597378 nova_compute[238941]: 2026-01-27 14:25:21.750 238945 INFO nova.scheduler.client.report [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance 3b760120-0ed3-4962-b9ba-775e88e9a482#033[00m
Jan 27 09:25:21 np0005597378 nova_compute[238941]: 2026-01-27 14:25:21.923 238945 DEBUG oslo_concurrency.lockutils [None req-5a97a66c-d41e-4c3d-9844-ec616b4c4ed4 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "3b760120-0ed3-4962-b9ba-775e88e9a482" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:22 np0005597378 nova_compute[238941]: 2026-01-27 14:25:22.771 238945 DEBUG nova.compute.manager [req-36a9249c-722f-484b-a6d0-4f0e3f9f4cda req-3aef918a-1cc2-4dcc-b793-8dc2161332f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Received event network-vif-deleted-9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:22 np0005597378 nova_compute[238941]: 2026-01-27 14:25:22.771 238945 INFO nova.compute.manager [req-36a9249c-722f-484b-a6d0-4f0e3f9f4cda req-3aef918a-1cc2-4dcc-b793-8dc2161332f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Neutron deleted interface 9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:25:22 np0005597378 nova_compute[238941]: 2026-01-27 14:25:22.772 238945 DEBUG nova.network.neutron [req-36a9249c-722f-484b-a6d0-4f0e3f9f4cda req-3aef918a-1cc2-4dcc-b793-8dc2161332f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Jan 27 09:25:22 np0005597378 nova_compute[238941]: 2026-01-27 14:25:22.775 238945 DEBUG nova.compute.manager [req-36a9249c-722f-484b-a6d0-4f0e3f9f4cda req-3aef918a-1cc2-4dcc-b793-8dc2161332f2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Detach interface failed, port_id=9d1f9be3-07ec-48e9-bcdb-e5871e5a8cbc, reason: Instance 3b760120-0ed3-4962-b9ba-775e88e9a482 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 09:25:23 np0005597378 nova_compute[238941]: 2026-01-27 14:25:23.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 41 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 7.5 KiB/s wr, 78 op/s
Jan 27 09:25:23 np0005597378 nova_compute[238941]: 2026-01-27 14:25:23.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:24 np0005597378 nova_compute[238941]: 2026-01-27 14:25:24.710 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523909.709377, 9d641ca9-51bf-4390-9b51-faf9982c1c8a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:25:24 np0005597378 nova_compute[238941]: 2026-01-27 14:25:24.711 238945 INFO nova.compute.manager [-] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:25:25 np0005597378 nova_compute[238941]: 2026-01-27 14:25:25.127 238945 DEBUG nova.compute.manager [None req-d00399ac-ba81-4411-9707-8e0cfd79ff03 - - - - - -] [instance: 9d641ca9-51bf-4390-9b51-faf9982c1c8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:25:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 7.5 KiB/s wr, 78 op/s
Jan 27 09:25:25 np0005597378 nova_compute[238941]: 2026-01-27 14:25:25.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:25:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:25:25 np0005597378 nova_compute[238941]: 2026-01-27 14:25:25.696 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523910.694067, d26d9a39-75ed-4895-a69d-13ebb76c1e5d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:25:25 np0005597378 nova_compute[238941]: 2026-01-27 14:25:25.697 238945 INFO nova.compute.manager [-] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:25:25 np0005597378 nova_compute[238941]: 2026-01-27 14:25:25.719 238945 DEBUG nova.compute.manager [None req-9bb93f8d-5b7e-4696-b2ae-ea17de826a62 - - - - - -] [instance: d26d9a39-75ed-4895-a69d-13ebb76c1e5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:25:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:26.465 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:d5:6d 10.100.0.2 2001:db8::f816:3eff:fef1:d56d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:d56d/64', 'neutron:device_id': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d0dd5362-2188-444d-9dd1-a00fea1ddb1a) old=Port_Binding(mac=['fa:16:3e:f1:d5:6d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:25:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:26.467 154802 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d0dd5362-2188-444d-9dd1-a00fea1ddb1a in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c updated#033[00m
Jan 27 09:25:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:26.469 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:25:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:26.471 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c488c869-dd45-4d02-8a48-4369b90d4316]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 KiB/s wr, 35 op/s
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4330125642162815e-05 of space, bias 1.0, pg target 0.004299037692648845 quantized to 32 (current 32)
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669327239905778 of space, bias 1.0, pg target 0.2007981719717334 quantized to 32 (current 32)
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0332713767079828e-06 of space, bias 4.0, pg target 0.0012399256520495793 quantized to 16 (current 16)
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:25:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:25:28 np0005597378 nova_compute[238941]: 2026-01-27 14:25:28.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:28 np0005597378 nova_compute[238941]: 2026-01-27 14:25:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:25:28 np0005597378 nova_compute[238941]: 2026-01-27 14:25:28.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 09:25:29 np0005597378 nova_compute[238941]: 2026-01-27 14:25:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:25:29 np0005597378 nova_compute[238941]: 2026-01-27 14:25:29.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:25:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:25:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 682 B/s wr, 7 op/s
Jan 27 09:25:33 np0005597378 nova_compute[238941]: 2026-01-27 14:25:33.066 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:25:33 np0005597378 nova_compute[238941]: 2026-01-27 14:25:33.306 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523918.3044662, 3b760120-0ed3-4962-b9ba-775e88e9a482 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:25:33 np0005597378 nova_compute[238941]: 2026-01-27 14:25:33.306 238945 INFO nova.compute.manager [-] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:25:33 np0005597378 nova_compute[238941]: 2026-01-27 14:25:33.326 238945 DEBUG nova.compute.manager [None req-208d0641-42f6-404e-be07-b5648a5e3ace - - - - - -] [instance: 3b760120-0ed3-4962-b9ba-775e88e9a482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:25:33 np0005597378 nova_compute[238941]: 2026-01-27 14:25:33.493 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:35.156 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:25:35 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:35.157 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:25:35 np0005597378 nova_compute[238941]: 2026-01-27 14:25:35.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:25:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:25:36 np0005597378 nova_compute[238941]: 2026-01-27 14:25:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:25:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:25:37 np0005597378 nova_compute[238941]: 2026-01-27 14:25:37.362 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6635dda1-c175-403d-ac21-0ec9dca6a77c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:37 np0005597378 nova_compute[238941]: 2026-01-27 14:25:37.363 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:37 np0005597378 nova_compute[238941]: 2026-01-27 14:25:37.397 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:25:37 np0005597378 nova_compute[238941]: 2026-01-27 14:25:37.489 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:37 np0005597378 nova_compute[238941]: 2026-01-27 14:25:37.490 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:37 np0005597378 nova_compute[238941]: 2026-01-27 14:25:37.497 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:25:37 np0005597378 nova_compute[238941]: 2026-01-27 14:25:37.498 238945 INFO nova.compute.claims [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:25:37 np0005597378 nova_compute[238941]: 2026-01-27 14:25:37.626 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:38 np0005597378 nova_compute[238941]: 2026-01-27 14:25:38.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:25:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012229480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:25:38 np0005597378 nova_compute[238941]: 2026-01-27 14:25:38.212 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:38 np0005597378 nova_compute[238941]: 2026-01-27 14:25:38.220 238945 DEBUG nova.compute.provider_tree [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:25:38 np0005597378 nova_compute[238941]: 2026-01-27 14:25:38.569 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:25:39 np0005597378 nova_compute[238941]: 2026-01-27 14:25:39.855 238945 DEBUG nova.scheduler.client.report [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:25:39 np0005597378 nova_compute[238941]: 2026-01-27 14:25:39.912 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:39 np0005597378 nova_compute[238941]: 2026-01-27 14:25:39.913 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:25:39 np0005597378 nova_compute[238941]: 2026-01-27 14:25:39.990 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:25:39 np0005597378 nova_compute[238941]: 2026-01-27 14:25:39.991 238945 DEBUG nova.network.neutron [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.013 238945 INFO nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.045 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.193 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.195 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.196 238945 INFO nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Creating image(s)#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.221 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.247 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.268 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.272 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.321 238945 DEBUG nova.policy [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.372 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.373 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.373 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.374 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.394 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.398 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:25:40 np0005597378 podman[369578]: 2026-01-27 14:25:40.722785578 +0000 UTC m=+0.058435298 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.841 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:40 np0005597378 nova_compute[238941]: 2026-01-27 14:25:40.911 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:25:41 np0005597378 nova_compute[238941]: 2026-01-27 14:25:41.030 238945 DEBUG nova.objects.instance [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 6635dda1-c175-403d-ac21-0ec9dca6a77c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:25:41 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:41.159 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:25:41 np0005597378 nova_compute[238941]: 2026-01-27 14:25:41.267 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:25:41 np0005597378 nova_compute[238941]: 2026-01-27 14:25:41.268 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Ensure instance console log exists: /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:25:41 np0005597378 nova_compute[238941]: 2026-01-27 14:25:41.268 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:41 np0005597378 nova_compute[238941]: 2026-01-27 14:25:41.269 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:41 np0005597378 nova_compute[238941]: 2026-01-27 14:25:41.269 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:42 np0005597378 nova_compute[238941]: 2026-01-27 14:25:42.146 238945 DEBUG nova.network.neutron [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Successfully created port: 8f387573-0891-4f0a-9601-3736c186d288 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:25:42 np0005597378 podman[369672]: 2026-01-27 14:25:42.750539503 +0000 UTC m=+0.091740136 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.071 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.112 238945 DEBUG nova.network.neutron [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Successfully updated port: 8f387573-0891-4f0a-9601-3736c186d288 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.143 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.143 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.143 238945 DEBUG nova.network.neutron [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:25:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.257 238945 DEBUG nova.compute.manager [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received event network-changed-8f387573-0891-4f0a-9601-3736c186d288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.257 238945 DEBUG nova.compute.manager [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Refreshing instance network info cache due to event network-changed-8f387573-0891-4f0a-9601-3736c186d288. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.258 238945 DEBUG oslo_concurrency.lockutils [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.399 238945 DEBUG nova.network.neutron [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.417 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.418 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.447 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.543 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.543 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.551 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.551 238945 INFO nova.compute.claims [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:25:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:25:43 np0005597378 nova_compute[238941]: 2026-01-27 14:25:43.730 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2071516544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.339 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.346 238945 DEBUG nova.compute.provider_tree [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.399 238945 DEBUG nova.scheduler.client.report [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.448 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.449 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.542 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.542 238945 DEBUG nova.network.neutron [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.588 238945 INFO nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.615 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:25:44 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.730 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.731 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.732 238945 INFO nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Creating image(s)#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.772 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.797 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.820 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.824 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.859 238945 DEBUG nova.policy [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.895 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.897 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.897 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.898 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.927 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:44 np0005597378 nova_compute[238941]: 2026-01-27 14:25:44.934 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 0131fc36-bc84-47cd-8067-04bef1ed346b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:44 np0005597378 podman[369991]: 2026-01-27 14:25:44.980723177 +0000 UTC m=+0.071729335 container create 842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 27 09:25:45 np0005597378 podman[369991]: 2026-01-27 14:25:44.935849557 +0000 UTC m=+0.026855745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:25:45 np0005597378 systemd[1]: Started libpod-conmon-842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56.scope.
Jan 27 09:25:45 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:25:45 np0005597378 podman[369991]: 2026-01-27 14:25:45.147625328 +0000 UTC m=+0.238631516 container init 842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:25:45 np0005597378 podman[369991]: 2026-01-27 14:25:45.15623415 +0000 UTC m=+0.247240308 container start 842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 09:25:45 np0005597378 blissful_margulis[370042]: 167 167
Jan 27 09:25:45 np0005597378 systemd[1]: libpod-842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56.scope: Deactivated successfully.
Jan 27 09:25:45 np0005597378 podman[369991]: 2026-01-27 14:25:45.187595236 +0000 UTC m=+0.278601414 container attach 842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:25:45 np0005597378 podman[369991]: 2026-01-27 14:25:45.189093167 +0000 UTC m=+0.280099345 container died 842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 09:25:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 67 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Jan 27 09:25:45 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1e296ed236997f97ad6a7863b331b574ff5531c1ac8f6f4e615fda33a6c7df3a-merged.mount: Deactivated successfully.
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.338 238945 DEBUG nova.network.neutron [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updating instance_info_cache with network_info: [{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.369 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.369 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Instance network_info: |[{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.370 238945 DEBUG oslo_concurrency.lockutils [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.370 238945 DEBUG nova.network.neutron [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Refreshing network info cache for port 8f387573-0891-4f0a-9601-3736c186d288 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.373 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Start _get_guest_xml network_info=[{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.383 238945 WARNING nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.388 238945 DEBUG nova.virt.libvirt.host [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.389 238945 DEBUG nova.virt.libvirt.host [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.397 238945 DEBUG nova.virt.libvirt.host [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.398 238945 DEBUG nova.virt.libvirt.host [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.398 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.399 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.399 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.399 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.400 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.400 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.400 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.401 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.401 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.401 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.401 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.401 238945 DEBUG nova.virt.hardware [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.405 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:45 np0005597378 podman[369991]: 2026-01-27 14:25:45.528734516 +0000 UTC m=+0.619740674 container remove 842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:25:45 np0005597378 systemd[1]: libpod-conmon-842f2c2babe7299b0ea446d2dc5723ae235bb948a233670930babed2fbee7e56.scope: Deactivated successfully.
Jan 27 09:25:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.632 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 0131fc36-bc84-47cd-8067-04bef1ed346b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.696 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.732 238945 DEBUG nova.network.neutron [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Successfully created port: a97b74ff-5e1f-4cb1-a688-f986acf75619 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:25:45 np0005597378 podman[370105]: 2026-01-27 14:25:45.739659274 +0000 UTC m=+0.067836301 container create 52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 09:25:45 np0005597378 podman[370105]: 2026-01-27 14:25:45.706231973 +0000 UTC m=+0.034409020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:25:45 np0005597378 systemd[1]: Started libpod-conmon-52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979.scope.
Jan 27 09:25:45 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:25:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f58d55674c6850ded9d8f9ca6a24ebed68bbc5f545083816aa4308849949d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f58d55674c6850ded9d8f9ca6a24ebed68bbc5f545083816aa4308849949d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f58d55674c6850ded9d8f9ca6a24ebed68bbc5f545083816aa4308849949d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f58d55674c6850ded9d8f9ca6a24ebed68bbc5f545083816aa4308849949d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9f58d55674c6850ded9d8f9ca6a24ebed68bbc5f545083816aa4308849949d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.910 238945 DEBUG nova.objects.instance [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid 0131fc36-bc84-47cd-8067-04bef1ed346b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:25:45 np0005597378 podman[370105]: 2026-01-27 14:25:45.952822883 +0000 UTC m=+0.280999940 container init 52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.959 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.959 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Ensure instance console log exists: /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.960 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.960 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:45 np0005597378 nova_compute[238941]: 2026-01-27 14:25:45.961 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:45 np0005597378 podman[370105]: 2026-01-27 14:25:45.962108474 +0000 UTC m=+0.290285501 container start 52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:25:45 np0005597378 podman[370105]: 2026-01-27 14:25:45.978478284 +0000 UTC m=+0.306655341 container attach 52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_austin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:25:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:25:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691873215' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.098 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.692s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.133 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.141 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:46.329 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:46.330 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:46.330 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:46 np0005597378 zealous_austin[370160]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:25:46 np0005597378 zealous_austin[370160]: --> All data devices are unavailable
Jan 27 09:25:46 np0005597378 systemd[1]: libpod-52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979.scope: Deactivated successfully.
Jan 27 09:25:46 np0005597378 podman[370105]: 2026-01-27 14:25:46.485028696 +0000 UTC m=+0.813205983 container died 52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:25:46 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9e9f58d55674c6850ded9d8f9ca6a24ebed68bbc5f545083816aa4308849949d-merged.mount: Deactivated successfully.
Jan 27 09:25:46 np0005597378 podman[370105]: 2026-01-27 14:25:46.594888789 +0000 UTC m=+0.923065806 container remove 52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_austin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 09:25:46 np0005597378 systemd[1]: libpod-conmon-52f27eb8ccf99b3534b571b714dd0950402c8864c407e05f1debc02b84da5979.scope: Deactivated successfully.
Jan 27 09:25:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:25:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/989244499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.771 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.773 238945 DEBUG nova.virt.libvirt.vif [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-34426443',display_name='tempest-TestGettingAddress-server-34426443',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-34426443',id=144,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0roQxlm7rBsyexGnEPiCXhcnKxfp9fw0cPDJksHn/jHZPciuBmzdmiU2uk+pbwv5Uo/ReziRDsxV2MeUBZqXHZ+jvBUJxD4UhnGHkH+Oh6gMJQT0kcpRrKqU08CfxQIQ==',key_name='tempest-TestGettingAddress-1692085847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-uysr3jco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:25:40Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6635dda1-c175-403d-ac21-0ec9dca6a77c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.774 238945 DEBUG nova.network.os_vif_util [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.775 238945 DEBUG nova.network.os_vif_util [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.776 238945 DEBUG nova.objects.instance [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6635dda1-c175-403d-ac21-0ec9dca6a77c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.815 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  <uuid>6635dda1-c175-403d-ac21-0ec9dca6a77c</uuid>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  <name>instance-00000090</name>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-34426443</nova:name>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:25:45</nova:creationTime>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:        <nova:port uuid="8f387573-0891-4f0a-9601-3736c186d288">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe86:c509" ipVersion="6"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <entry name="serial">6635dda1-c175-403d-ac21-0ec9dca6a77c</entry>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <entry name="uuid">6635dda1-c175-403d-ac21-0ec9dca6a77c</entry>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6635dda1-c175-403d-ac21-0ec9dca6a77c_disk">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6635dda1-c175-403d-ac21-0ec9dca6a77c_disk.config">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:86:c5:09"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <target dev="tap8f387573-08"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/console.log" append="off"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:25:46 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:25:46 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:25:46 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:25:46 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.816 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Preparing to wait for external event network-vif-plugged-8f387573-0891-4f0a-9601-3736c186d288 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.817 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.817 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.817 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.818 238945 DEBUG nova.virt.libvirt.vif [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-34426443',display_name='tempest-TestGettingAddress-server-34426443',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-34426443',id=144,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0roQxlm7rBsyexGnEPiCXhcnKxfp9fw0cPDJksHn/jHZPciuBmzdmiU2uk+pbwv5Uo/ReziRDsxV2MeUBZqXHZ+jvBUJxD4UhnGHkH+Oh6gMJQT0kcpRrKqU08CfxQIQ==',key_name='tempest-TestGettingAddress-1692085847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-uysr3jco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:25:40Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6635dda1-c175-403d-ac21-0ec9dca6a77c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.819 238945 DEBUG nova.network.os_vif_util [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.819 238945 DEBUG nova.network.os_vif_util [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.820 238945 DEBUG os_vif [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.821 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.821 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.822 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.826 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.826 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f387573-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.827 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f387573-08, col_values=(('external_ids', {'iface-id': '8f387573-0891-4f0a-9601-3736c186d288', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:c5:09', 'vm-uuid': '6635dda1-c175-403d-ac21-0ec9dca6a77c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.828 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:46 np0005597378 NetworkManager[48904]: <info>  [1769523946.8294] manager: (tap8f387573-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.830 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.836 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.837 238945 INFO os_vif [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08')#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.941 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.942 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.942 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:86:c5:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.942 238945 INFO nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Using config drive#033[00m
Jan 27 09:25:46 np0005597378 nova_compute[238941]: 2026-01-27 14:25:46.976 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:47 np0005597378 podman[370333]: 2026-01-27 14:25:47.124604084 +0000 UTC m=+0.055821686 container create a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ramanujan, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 09:25:47 np0005597378 systemd[1]: Started libpod-conmon-a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c.scope.
Jan 27 09:25:47 np0005597378 podman[370333]: 2026-01-27 14:25:47.095436627 +0000 UTC m=+0.026654249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:25:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:25:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 106 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.8 MiB/s wr, 29 op/s
Jan 27 09:25:47 np0005597378 podman[370333]: 2026-01-27 14:25:47.23938273 +0000 UTC m=+0.170600342 container init a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:25:47 np0005597378 podman[370333]: 2026-01-27 14:25:47.249865982 +0000 UTC m=+0.181083574 container start a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ramanujan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:25:47 np0005597378 epic_ramanujan[370349]: 167 167
Jan 27 09:25:47 np0005597378 systemd[1]: libpod-a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c.scope: Deactivated successfully.
Jan 27 09:25:47 np0005597378 podman[370333]: 2026-01-27 14:25:47.260618462 +0000 UTC m=+0.191836084 container attach a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 09:25:47 np0005597378 podman[370333]: 2026-01-27 14:25:47.26125253 +0000 UTC m=+0.192470122 container died a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ramanujan, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.279 238945 DEBUG nova.network.neutron [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Successfully updated port: a97b74ff-5e1f-4cb1-a688-f986acf75619 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:25:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-aac4f2044c0963c725cdcd39addf43b3f0a25d82c8fc1157117a692eb59d5edd-merged.mount: Deactivated successfully.
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.306 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.307 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.307 238945 DEBUG nova.network.neutron [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:25:47 np0005597378 podman[370333]: 2026-01-27 14:25:47.331211517 +0000 UTC m=+0.262429109 container remove a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ramanujan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:25:47 np0005597378 systemd[1]: libpod-conmon-a2f37c4cfbaab44adb652c9ac7310d2c07ed265144414f732924b2625500284c.scope: Deactivated successfully.
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.403 238945 DEBUG nova.compute.manager [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-changed-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.404 238945 DEBUG nova.compute.manager [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Refreshing instance network info cache due to event network-changed-a97b74ff-5e1f-4cb1-a688-f986acf75619. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.405 238945 DEBUG oslo_concurrency.lockutils [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.450 238945 INFO nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Creating config drive at /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/disk.config#033[00m
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.456 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy0cevp1d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:47 np0005597378 podman[370372]: 2026-01-27 14:25:47.500539333 +0000 UTC m=+0.047046590 container create a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.560 238945 DEBUG nova.network.neutron [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:25:47 np0005597378 systemd[1]: Started libpod-conmon-a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb.scope.
Jan 27 09:25:47 np0005597378 podman[370372]: 2026-01-27 14:25:47.475474087 +0000 UTC m=+0.021981364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:25:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.599 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy0cevp1d" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2b70f84faa0b8d8f6550438a70e50fd9709a02ead29431f8eef2c18865c452/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2b70f84faa0b8d8f6550438a70e50fd9709a02ead29431f8eef2c18865c452/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2b70f84faa0b8d8f6550438a70e50fd9709a02ead29431f8eef2c18865c452/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2b70f84faa0b8d8f6550438a70e50fd9709a02ead29431f8eef2c18865c452/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:47 np0005597378 podman[370372]: 2026-01-27 14:25:47.625645967 +0000 UTC m=+0.172153234 container init a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_perlman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.629 238945 DEBUG nova.storage.rbd_utils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:47 np0005597378 podman[370372]: 2026-01-27 14:25:47.633745915 +0000 UTC m=+0.180253162 container start a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_perlman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.635 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/disk.config 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:47 np0005597378 podman[370372]: 2026-01-27 14:25:47.639188922 +0000 UTC m=+0.185696169 container attach a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 09:25:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:25:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:25:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:25:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:25:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:25:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.877 238945 DEBUG nova.network.neutron [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updated VIF entry in instance network info cache for port 8f387573-0891-4f0a-9601-3736c186d288. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.878 238945 DEBUG nova.network.neutron [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updating instance_info_cache with network_info: [{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.896 238945 DEBUG oslo_concurrency.lockutils [req-1073a305-089f-437c-af7d-7792116b70c5 req-b8fbb0be-7538-414e-acf5-d4e13e6bb2f5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.931 238945 DEBUG oslo_concurrency.processutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/disk.config 6635dda1-c175-403d-ac21-0ec9dca6a77c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:47 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.931 238945 INFO nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Deleting local config drive /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c/disk.config because it was imported into RBD.#033[00m
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]: {
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:    "0": [
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:        {
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "devices": [
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "/dev/loop3"
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            ],
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_name": "ceph_lv0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_size": "21470642176",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "name": "ceph_lv0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "tags": {
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.cluster_name": "ceph",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.crush_device_class": "",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.encrypted": "0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.objectstore": "bluestore",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.osd_id": "0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.type": "block",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.vdo": "0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.with_tpm": "0"
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            },
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "type": "block",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "vg_name": "ceph_vg0"
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:        }
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:    ],
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:    "1": [
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:        {
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "devices": [
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "/dev/loop4"
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            ],
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_name": "ceph_lv1",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_size": "21470642176",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "name": "ceph_lv1",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "tags": {
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.cluster_name": "ceph",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.crush_device_class": "",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.encrypted": "0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.objectstore": "bluestore",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.osd_id": "1",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.type": "block",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.vdo": "0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.with_tpm": "0"
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            },
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "type": "block",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "vg_name": "ceph_vg1"
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:        }
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:    ],
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:    "2": [
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:        {
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "devices": [
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "/dev/loop5"
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            ],
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_name": "ceph_lv2",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_size": "21470642176",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "name": "ceph_lv2",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "tags": {
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.cluster_name": "ceph",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.crush_device_class": "",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.encrypted": "0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.objectstore": "bluestore",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.osd_id": "2",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.type": "block",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.vdo": "0",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:                "ceph.with_tpm": "0"
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            },
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "type": "block",
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:            "vg_name": "ceph_vg2"
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:        }
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]:    ]
Jan 27 09:25:47 np0005597378 thirsty_perlman[370391]: }
Jan 27 09:25:47 np0005597378 kernel: tap8f387573-08: entered promiscuous mode
Jan 27 09:25:48 np0005597378 NetworkManager[48904]: <info>  [1769523947.9996] manager: (tap8f387573-08): new Tun device (/org/freedesktop/NetworkManager/Devices/626)
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:47.999 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:48Z|01538|binding|INFO|Claiming lport 8f387573-0891-4f0a-9601-3736c186d288 for this chassis.
Jan 27 09:25:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:48Z|01539|binding|INFO|8f387573-0891-4f0a-9601-3736c186d288: Claiming fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509
Jan 27 09:25:48 np0005597378 systemd[1]: libpod-a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb.scope: Deactivated successfully.
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.019 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], port_security=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe86:c509/64', 'neutron:device_id': '6635dda1-c175-403d-ac21-0ec9dca6a77c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd296c7a5-f385-4534-8f02-e685c086b3e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8f387573-0891-4f0a-9601-3736c186d288) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.020 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8f387573-0891-4f0a-9601-3736c186d288 in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c bound to our chassis#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.022 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c#033[00m
Jan 27 09:25:48 np0005597378 systemd-udevd[370451]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.042 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[23ad30bb-3bf3-4c92-88d7-ebea236a141f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.044 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3bdc2751-91 in ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:25:48 np0005597378 systemd-machined[207425]: New machine qemu-176-instance-00000090.
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.047 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3bdc2751-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.047 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c1795219-d6ea-4327-add7-09bf0b1cb029]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.057 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8b6d82-5367-444a-b161-73ad2cb91f0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 NetworkManager[48904]: <info>  [1769523948.0609] device (tap8f387573-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:25:48 np0005597378 NetworkManager[48904]: <info>  [1769523948.0623] device (tap8f387573-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:25:48 np0005597378 systemd[1]: Started Virtual Machine qemu-176-instance-00000090.
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.078 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[80383a9c-3827-4d2c-8d04-6529791240e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 podman[370449]: 2026-01-27 14:25:48.081391297 +0000 UTC m=+0.037489901 container died a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_perlman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.085 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:48Z|01540|binding|INFO|Setting lport 8f387573-0891-4f0a-9601-3736c186d288 ovn-installed in OVS
Jan 27 09:25:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:48Z|01541|binding|INFO|Setting lport 8f387573-0891-4f0a-9601-3736c186d288 up in Southbound
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.092 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.106 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4fdd7d-80e7-4371-a6a9-3c24bfa57443]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.138 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[79c74b02-5ba5-4f57-a4b5-446cfe8aa33f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 systemd-udevd[370462]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:25:48 np0005597378 NetworkManager[48904]: <info>  [1769523948.1454] manager: (tap3bdc2751-90): new Veth device (/org/freedesktop/NetworkManager/Devices/627)
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.144 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[188f876f-5e97-44e3-ba06-f36aed855884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.176 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[0bce5814-44fa-4875-875a-d7b10f9770be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.181 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[96fdb0c0-299a-4949-b9dd-b9fa72d31bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5e2b70f84faa0b8d8f6550438a70e50fd9709a02ead29431f8eef2c18865c452-merged.mount: Deactivated successfully.
Jan 27 09:25:48 np0005597378 NetworkManager[48904]: <info>  [1769523948.2105] device (tap3bdc2751-90): carrier: link connected
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.218 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c390b219-c54b-43b8-8a2a-4aaf64a75e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.236 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[913534c0-6b6a-4b95-877a-2e9b3986f250]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bdc2751-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:d5:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658783, 'reachable_time': 38942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370498, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 podman[370449]: 2026-01-27 14:25:48.246440678 +0000 UTC m=+0.202539252 container remove a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_perlman, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.250 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9fbfc9-b957-4d81-8726-a842a669dbde]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:d56d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658783, 'tstamp': 658783}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370499, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 systemd[1]: libpod-conmon-a4b3b83cbac844ac2ff3ed5aa08844407fb073dce9d2d7e4d645ff48b4d31ecb.scope: Deactivated successfully.
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.268 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc2b088-4099-403d-886d-9a1ac949904a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bdc2751-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:d5:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658783, 'reachable_time': 38942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370500, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.323 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fb9f25-c8ad-410c-a175-ea96305f0472]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.396 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0f6009-443d-4b32-8e56-324a2a968e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.398 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bdc2751-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.399 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.399 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bdc2751-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:48 np0005597378 NetworkManager[48904]: <info>  [1769523948.4021] manager: (tap3bdc2751-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/628)
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.402 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:48 np0005597378 kernel: tap3bdc2751-90: entered promiscuous mode
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.407 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bdc2751-90, col_values=(('external_ids', {'iface-id': 'd0dd5362-2188-444d-9dd1-a00fea1ddb1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:48 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:48Z|01542|binding|INFO|Releasing lport d0dd5362-2188-444d-9dd1-a00fea1ddb1a from this chassis (sb_readonly=0)
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.409 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.409 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3bdc2751-918c-46d6-9a4d-729ae5cc6d9c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3bdc2751-918c-46d6-9a4d-729ae5cc6d9c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.410 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[afba0c1a-9091-49a5-bf53-d315db9339eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.411 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/3bdc2751-918c-46d6-9a4d-729ae5cc6d9c.pid.haproxy
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:25:48 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:48.412 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'env', 'PROCESS_TAG=haproxy-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3bdc2751-918c-46d6-9a4d-729ae5cc6d9c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.427 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.454 238945 DEBUG nova.network.neutron [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updating instance_info_cache with network_info: [{"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.508 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.508 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Instance network_info: |[{"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.509 238945 DEBUG oslo_concurrency.lockutils [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.509 238945 DEBUG nova.network.neutron [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Refreshing network info cache for port a97b74ff-5e1f-4cb1-a688-f986acf75619 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.511 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Start _get_guest_xml network_info=[{"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.518 238945 WARNING nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.523 238945 DEBUG nova.virt.libvirt.host [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.525 238945 DEBUG nova.virt.libvirt.host [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.538 238945 DEBUG nova.virt.libvirt.host [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.539 238945 DEBUG nova.virt.libvirt.host [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.540 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.540 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.540 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.540 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.541 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.541 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.541 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.541 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.541 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.542 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.542 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.542 238945 DEBUG nova.virt.hardware [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.546 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:48 np0005597378 podman[370618]: 2026-01-27 14:25:48.811790275 +0000 UTC m=+0.065969000 container create 638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_khorana, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:25:48 np0005597378 podman[370618]: 2026-01-27 14:25:48.777457809 +0000 UTC m=+0.031636564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:25:48 np0005597378 systemd[1]: Started libpod-conmon-638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a.scope.
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.904 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523948.903954, 6635dda1-c175-403d-ac21-0ec9dca6a77c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.906 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] VM Started (Lifecycle Event)#033[00m
Jan 27 09:25:48 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.931 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.936 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523948.904715, 6635dda1-c175-403d-ac21-0ec9dca6a77c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.936 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:25:48 np0005597378 podman[370659]: 2026-01-27 14:25:48.851399543 +0000 UTC m=+0.049895306 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:25:48 np0005597378 podman[370659]: 2026-01-27 14:25:48.948352568 +0000 UTC m=+0.146848311 container create 8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.962 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:25:48 np0005597378 nova_compute[238941]: 2026-01-27 14:25:48.966 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.020 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:25:49 np0005597378 systemd[1]: Started libpod-conmon-8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff.scope.
Jan 27 09:25:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:25:49 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ed0e26ea6428fe6635f263a1d3467fd96eb5dadaee64722fe6a65a5a04a8d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 134 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Jan 27 09:25:49 np0005597378 podman[370618]: 2026-01-27 14:25:49.239728176 +0000 UTC m=+0.493906921 container init 638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:25:49 np0005597378 podman[370618]: 2026-01-27 14:25:49.249446608 +0000 UTC m=+0.503625333 container start 638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:25:49 np0005597378 nifty_khorana[370677]: 167 167
Jan 27 09:25:49 np0005597378 systemd[1]: libpod-638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a.scope: Deactivated successfully.
Jan 27 09:25:49 np0005597378 podman[370618]: 2026-01-27 14:25:49.31589541 +0000 UTC m=+0.570074155 container attach 638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:25:49 np0005597378 podman[370618]: 2026-01-27 14:25:49.317190855 +0000 UTC m=+0.571369600 container died 638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_khorana, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:25:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:25:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/500750089' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.341 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.795s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.372 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.378 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.489 238945 DEBUG nova.compute.manager [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received event network-vif-plugged-8f387573-0891-4f0a-9601-3736c186d288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.490 238945 DEBUG oslo_concurrency.lockutils [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.490 238945 DEBUG oslo_concurrency.lockutils [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.490 238945 DEBUG oslo_concurrency.lockutils [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.491 238945 DEBUG nova.compute.manager [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Processing event network-vif-plugged-8f387573-0891-4f0a-9601-3736c186d288 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.491 238945 DEBUG nova.compute.manager [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received event network-vif-plugged-8f387573-0891-4f0a-9601-3736c186d288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.491 238945 DEBUG oslo_concurrency.lockutils [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.491 238945 DEBUG oslo_concurrency.lockutils [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.492 238945 DEBUG oslo_concurrency.lockutils [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.492 238945 DEBUG nova.compute.manager [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] No waiting events found dispatching network-vif-plugged-8f387573-0891-4f0a-9601-3736c186d288 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.492 238945 WARNING nova.compute.manager [req-cda4976a-6a05-4e71-bb86-c53d9842180a req-1f574c4c-fcbf-436d-8cf8-4788f3805cca 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received unexpected event network-vif-plugged-8f387573-0891-4f0a-9601-3736c186d288 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.493 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.500 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523949.4990327, 6635dda1-c175-403d-ac21-0ec9dca6a77c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.509 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.513 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.524 238945 INFO nova.virt.libvirt.driver [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Instance spawned successfully.#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.524 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.541 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.550 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.555 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.555 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.556 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.556 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.557 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.557 238945 DEBUG nova.virt.libvirt.driver [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.570 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.627 238945 INFO nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Took 9.43 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.628 238945 DEBUG nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:25:49 np0005597378 podman[370659]: 2026-01-27 14:25:49.642847048 +0000 UTC m=+0.841342821 container init 8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:25:49 np0005597378 podman[370659]: 2026-01-27 14:25:49.650598286 +0000 UTC m=+0.849094029 container start 8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:25:49 np0005597378 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [NOTICE]   (370744) : New worker (370746) forked
Jan 27 09:25:49 np0005597378 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [NOTICE]   (370744) : Loading success.
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.699 238945 INFO nova.compute.manager [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Took 12.24 seconds to build instance.#033[00m
Jan 27 09:25:49 np0005597378 nova_compute[238941]: 2026-01-27 14:25:49.732 238945 DEBUG oslo_concurrency.lockutils [None req-9e6dd3a1-8650-4c92-8b29-5d8d34ef6ee8 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:25:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332249388' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.052 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.674s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.054 238945 DEBUG nova.virt.libvirt.vif [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:25:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=145,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCh8iTJ79IQTfZKcyGTVMRNhc1Qm2nC8XJCZVveMYKvrisO0G2Hkg5wsreDtMeEvqvZQDV6CtGF/BIH4aUXaCB3Qn6MyyjLTrOg7bxvi4R7OJPdkJUXBnb+rWUYAQ0htRg==',key_name='tempest-TestSecurityGroupsBasicOps-1357076021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-0me1tmc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:25:44Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=0131fc36-bc84-47cd-8067-04bef1ed346b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.054 238945 DEBUG nova.network.os_vif_util [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.055 238945 DEBUG nova.network.os_vif_util [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.056 238945 DEBUG nova.objects.instance [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid 0131fc36-bc84-47cd-8067-04bef1ed346b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.069 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  <uuid>0131fc36-bc84-47cd-8067-04bef1ed346b</uuid>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  <name>instance-00000091</name>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658</nova:name>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:25:48</nova:creationTime>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:        <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:        <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:        <nova:port uuid="a97b74ff-5e1f-4cb1-a688-f986acf75619">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <entry name="serial">0131fc36-bc84-47cd-8067-04bef1ed346b</entry>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <entry name="uuid">0131fc36-bc84-47cd-8067-04bef1ed346b</entry>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/0131fc36-bc84-47cd-8067-04bef1ed346b_disk">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/0131fc36-bc84-47cd-8067-04bef1ed346b_disk.config">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:f4:e3:b5"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <target dev="tapa97b74ff-5e"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/console.log" append="off"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:25:50 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:25:50 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:25:50 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:25:50 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.070 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Preparing to wait for external event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.070 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.070 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.070 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.071 238945 DEBUG nova.virt.libvirt.vif [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:25:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=145,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCh8iTJ79IQTfZKcyGTVMRNhc1Qm2nC8XJCZVveMYKvrisO0G2Hkg5wsreDtMeEvqvZQDV6CtGF/BIH4aUXaCB3Qn6MyyjLTrOg7bxvi4R7OJPdkJUXBnb+rWUYAQ0htRg==',key_name='tempest-TestSecurityGroupsBasicOps-1357076021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-0me1tmc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:25:44Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=0131fc36-bc84-47cd-8067-04bef1ed346b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.071 238945 DEBUG nova.network.os_vif_util [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.071 238945 DEBUG nova.network.os_vif_util [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.072 238945 DEBUG os_vif [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.072 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.073 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.073 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.075 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.075 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa97b74ff-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.076 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa97b74ff-5e, col_values=(('external_ids', {'iface-id': 'a97b74ff-5e1f-4cb1-a688-f986acf75619', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:e3:b5', 'vm-uuid': '0131fc36-bc84-47cd-8067-04bef1ed346b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.078 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:50 np0005597378 NetworkManager[48904]: <info>  [1769523950.0794] manager: (tapa97b74ff-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.080 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:25:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay-00c6d09c8f477b50b0ef066919d4535e9b553bee0fed5015d1ece8398628f6f2-merged.mount: Deactivated successfully.
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.086 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.087 238945 INFO os_vif [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e')#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.192 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.193 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.193 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:f4:e3:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.194 238945 INFO nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Using config drive#033[00m
Jan 27 09:25:50 np0005597378 podman[370618]: 2026-01-27 14:25:50.216473597 +0000 UTC m=+1.470652322 container remove 638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:25:50 np0005597378 nova_compute[238941]: 2026-01-27 14:25:50.225 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:50 np0005597378 systemd[1]: libpod-conmon-638cdb2eff3b0ab418aca5f4d79f9cf8139ed7ddcf9d6cfc1b0325ba614e928a.scope: Deactivated successfully.
Jan 27 09:25:50 np0005597378 podman[370785]: 2026-01-27 14:25:50.371306723 +0000 UTC m=+0.026540877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:25:50 np0005597378 podman[370785]: 2026-01-27 14:25:50.467162178 +0000 UTC m=+0.122396302 container create 62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 09:25:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:25:50 np0005597378 systemd[1]: Started libpod-conmon-62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444.scope.
Jan 27 09:25:50 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:25:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf23d66e3963bfa9afd1d62b72dfb056acebf482323408f922c79e2a443a5c9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf23d66e3963bfa9afd1d62b72dfb056acebf482323408f922c79e2a443a5c9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf23d66e3963bfa9afd1d62b72dfb056acebf482323408f922c79e2a443a5c9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf23d66e3963bfa9afd1d62b72dfb056acebf482323408f922c79e2a443a5c9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:50 np0005597378 podman[370785]: 2026-01-27 14:25:50.66301492 +0000 UTC m=+0.318249074 container init 62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:25:50 np0005597378 podman[370785]: 2026-01-27 14:25:50.706665467 +0000 UTC m=+0.361899591 container start 62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 09:25:50 np0005597378 podman[370785]: 2026-01-27 14:25:50.754758394 +0000 UTC m=+0.409992548 container attach 62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:25:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 134 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 3.6 MiB/s wr, 74 op/s
Jan 27 09:25:51 np0005597378 lvm[370879]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:25:51 np0005597378 lvm[370879]: VG ceph_vg0 finished
Jan 27 09:25:51 np0005597378 lvm[370881]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:25:51 np0005597378 lvm[370881]: VG ceph_vg1 finished
Jan 27 09:25:51 np0005597378 lvm[370883]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:25:51 np0005597378 lvm[370883]: VG ceph_vg2 finished
Jan 27 09:25:51 np0005597378 affectionate_tu[370802]: {}
Jan 27 09:25:51 np0005597378 systemd[1]: libpod-62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444.scope: Deactivated successfully.
Jan 27 09:25:51 np0005597378 systemd[1]: libpod-62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444.scope: Consumed 1.422s CPU time.
Jan 27 09:25:51 np0005597378 podman[370785]: 2026-01-27 14:25:51.566388312 +0000 UTC m=+1.221622456 container died 62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:25:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay-bf23d66e3963bfa9afd1d62b72dfb056acebf482323408f922c79e2a443a5c9d-merged.mount: Deactivated successfully.
Jan 27 09:25:52 np0005597378 nova_compute[238941]: 2026-01-27 14:25:52.066 238945 INFO nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Creating config drive at /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/disk.config#033[00m
Jan 27 09:25:52 np0005597378 nova_compute[238941]: 2026-01-27 14:25:52.074 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4k3_av4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:52 np0005597378 nova_compute[238941]: 2026-01-27 14:25:52.230 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4k3_av4" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:52 np0005597378 nova_compute[238941]: 2026-01-27 14:25:52.277 238945 DEBUG nova.storage.rbd_utils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image 0131fc36-bc84-47cd-8067-04bef1ed346b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:52 np0005597378 nova_compute[238941]: 2026-01-27 14:25:52.290 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/disk.config 0131fc36-bc84-47cd-8067-04bef1ed346b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:52 np0005597378 podman[370785]: 2026-01-27 14:25:52.34052839 +0000 UTC m=+1.995762544 container remove 62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 09:25:52 np0005597378 systemd[1]: libpod-conmon-62edd8c3f9a09819f4d828214cf74dbe918a0c1aa582876ef25dbadb1d692444.scope: Deactivated successfully.
Jan 27 09:25:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:25:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:25:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:25:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:25:52 np0005597378 nova_compute[238941]: 2026-01-27 14:25:52.751 238945 DEBUG nova.network.neutron [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updated VIF entry in instance network info cache for port a97b74ff-5e1f-4cb1-a688-f986acf75619. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:25:52 np0005597378 nova_compute[238941]: 2026-01-27 14:25:52.753 238945 DEBUG nova.network.neutron [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updating instance_info_cache with network_info: [{"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:52 np0005597378 nova_compute[238941]: 2026-01-27 14:25:52.843 238945 DEBUG oslo_concurrency.processutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/disk.config 0131fc36-bc84-47cd-8067-04bef1ed346b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:52 np0005597378 nova_compute[238941]: 2026-01-27 14:25:52.843 238945 INFO nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Deleting local config drive /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b/disk.config because it was imported into RBD.#033[00m
Jan 27 09:25:52 np0005597378 nova_compute[238941]: 2026-01-27 14:25:52.854 238945 DEBUG oslo_concurrency.lockutils [req-d50c4563-c737-4d1c-b9f2-27d9fcd577be req-62204aa1-624c-4125-9551-224674e430c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:25:52 np0005597378 kernel: tapa97b74ff-5e: entered promiscuous mode
Jan 27 09:25:52 np0005597378 systemd-udevd[370880]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:25:52 np0005597378 NetworkManager[48904]: <info>  [1769523952.9302] manager: (tapa97b74ff-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/630)
Jan 27 09:25:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:52Z|01543|binding|INFO|Claiming lport a97b74ff-5e1f-4cb1-a688-f986acf75619 for this chassis.
Jan 27 09:25:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:52Z|01544|binding|INFO|a97b74ff-5e1f-4cb1-a688-f986acf75619: Claiming fa:16:3e:f4:e3:b5 10.100.0.8
Jan 27 09:25:52 np0005597378 NetworkManager[48904]: <info>  [1769523952.9321] device (tapa97b74ff-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:25:52 np0005597378 NetworkManager[48904]: <info>  [1769523952.9331] device (tapa97b74ff-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:25:52 np0005597378 nova_compute[238941]: 2026-01-27 14:25:52.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:52 np0005597378 nova_compute[238941]: 2026-01-27 14:25:52.937 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.947 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:e3:b5 10.100.0.8'], port_security=['fa:16:3e:f4:e3:b5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0131fc36-bc84-47cd-8067-04bef1ed346b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '401bcc9e-e379-4df5-b1b1-d040fa28b0f0 66468c20-6e25-42a7-908a-965ba4bd54ac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6618af43-1391-409b-869f-1324bc7e5707, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a97b74ff-5e1f-4cb1-a688-f986acf75619) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:25:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.948 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a97b74ff-5e1f-4cb1-a688-f986acf75619 in datapath 07470876-8c4c-4f83-bb7f-48d1eefc447e bound to our chassis#033[00m
Jan 27 09:25:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.949 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 07470876-8c4c-4f83-bb7f-48d1eefc447e#033[00m
Jan 27 09:25:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.960 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ce25c1d5-ce7e-453b-bf71-74c869813a7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.961 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap07470876-81 in ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:25:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.964 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap07470876-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:25:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.964 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[86e73c95-1f6f-4711-aa76-5bf4a4cd1eef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:52 np0005597378 systemd-machined[207425]: New machine qemu-177-instance-00000091.
Jan 27 09:25:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[31787a10-c5e8-41a0-8bb8-ec7f00edf215]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:52.977 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[756a51fc-86a7-43b3-b72a-531be2628415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:52 np0005597378 systemd[1]: Started Virtual Machine qemu-177-instance-00000091.
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.002 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5cc493-b00d-45da-b383-13da38e8b07a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:53Z|01545|binding|INFO|Setting lport a97b74ff-5e1f-4cb1-a688-f986acf75619 ovn-installed in OVS
Jan 27 09:25:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:53Z|01546|binding|INFO|Setting lport a97b74ff-5e1f-4cb1-a688-f986acf75619 up in Southbound
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.041 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e89c1c61-1414-4c7d-a982-1df4a7a08e76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.049 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[efed818a-3b52-49d2-823e-d2bdf2b5f98f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:53 np0005597378 NetworkManager[48904]: <info>  [1769523953.0513] manager: (tap07470876-80): new Veth device (/org/freedesktop/NetworkManager/Devices/631)
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.082 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[787db29f-62be-473c-bc87-96874734a7cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.088 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f56b1faf-66b6-4625-b311-0397d2c57ab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.089 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:53 np0005597378 NetworkManager[48904]: <info>  [1769523953.1152] device (tap07470876-80): carrier: link connected
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.122 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f94264be-e160-4422-b17c-365a430b6a82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.145 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[82882160-8fd3-48c9-93e5-875946ad774f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07470876-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:20:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659273, 'reachable_time': 39548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371006, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.165 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db948614-4d7a-4531-9d78-ab6ffb44d4d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:2039'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659273, 'tstamp': 659273}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371007, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.191 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[76943b21-9b12-419b-9753-cb1c308babae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07470876-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:20:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659273, 'reachable_time': 39548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371008, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.226 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[37a38666-8aaa-4e1f-93e6-50afa3e5a9de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 134 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 3.6 MiB/s wr, 74 op/s
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.289 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc0df9d-5de8-4416-b8db-acbb7eeff1ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.291 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07470876-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.291 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.292 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07470876-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:53 np0005597378 NetworkManager[48904]: <info>  [1769523953.2950] manager: (tap07470876-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/632)
Jan 27 09:25:53 np0005597378 kernel: tap07470876-80: entered promiscuous mode
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.298 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap07470876-80, col_values=(('external_ids', {'iface-id': 'd43985de-77e3-4402-a6c7-37813cd055a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:53 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:53Z|01547|binding|INFO|Releasing lport d43985de-77e3-4402-a6c7-37813cd055a9 from this chassis (sb_readonly=0)
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.318 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/07470876-8c4c-4f83-bb7f-48d1eefc447e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/07470876-8c4c-4f83-bb7f-48d1eefc447e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.319 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[73efdd7f-0b5e-4d10-a68c-670d63797ebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.320 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-07470876-8c4c-4f83-bb7f-48d1eefc447e
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/07470876-8c4c-4f83-bb7f-48d1eefc447e.pid.haproxy
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 07470876-8c4c-4f83-bb7f-48d1eefc447e
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:25:53 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:25:53.322 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'env', 'PROCESS_TAG=haproxy-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/07470876-8c4c-4f83-bb7f-48d1eefc447e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:25:53 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:25:53 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.597 238945 DEBUG nova.compute.manager [req-19f6d3ab-91ba-4c60-bbcb-e5fee69c08c4 req-d2c608f3-0be5-49b4-97f3-ec43d789bae7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.597 238945 DEBUG oslo_concurrency.lockutils [req-19f6d3ab-91ba-4c60-bbcb-e5fee69c08c4 req-d2c608f3-0be5-49b4-97f3-ec43d789bae7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.598 238945 DEBUG oslo_concurrency.lockutils [req-19f6d3ab-91ba-4c60-bbcb-e5fee69c08c4 req-d2c608f3-0be5-49b4-97f3-ec43d789bae7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.598 238945 DEBUG oslo_concurrency.lockutils [req-19f6d3ab-91ba-4c60-bbcb-e5fee69c08c4 req-d2c608f3-0be5-49b4-97f3-ec43d789bae7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.598 238945 DEBUG nova.compute.manager [req-19f6d3ab-91ba-4c60-bbcb-e5fee69c08c4 req-d2c608f3-0be5-49b4-97f3-ec43d789bae7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Processing event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.659 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523953.6587224, 0131fc36-bc84-47cd-8067-04bef1ed346b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.659 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] VM Started (Lifecycle Event)#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.662 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.665 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.669 238945 INFO nova.virt.libvirt.driver [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Instance spawned successfully.#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.669 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.696 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.701 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.715 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.716 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.717 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.718 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.720 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.721 238945 DEBUG nova.virt.libvirt.driver [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.762 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.762 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523953.6589286, 0131fc36-bc84-47cd-8067-04bef1ed346b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.763 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:25:53 np0005597378 podman[371080]: 2026-01-27 14:25:53.71028406 +0000 UTC m=+0.032808676 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.807 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.810 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523953.6652231, 0131fc36-bc84-47cd-8067-04bef1ed346b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.811 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.832 238945 INFO nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Took 9.10 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.832 238945 DEBUG nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.839 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.842 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:25:53 np0005597378 podman[371080]: 2026-01-27 14:25:53.872917076 +0000 UTC m=+0.195441672 container create 78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:25:53 np0005597378 systemd[1]: Started libpod-conmon-78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672.scope.
Jan 27 09:25:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:25:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0983a650b3169c921500c114c9fdc13efb4c9ab6f00a01ee418ff9eafbf101/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:25:53 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.997 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:54 np0005597378 nova_compute[238941]: 2026-01-27 14:25:53.999 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:54 np0005597378 nova_compute[238941]: 2026-01-27 14:25:54.002 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:25:54 np0005597378 nova_compute[238941]: 2026-01-27 14:25:54.019 238945 INFO nova.compute.manager [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Took 10.51 seconds to build instance.#033[00m
Jan 27 09:25:54 np0005597378 nova_compute[238941]: 2026-01-27 14:25:54.023 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:25:54 np0005597378 podman[371080]: 2026-01-27 14:25:54.062993192 +0000 UTC m=+0.385517808 container init 78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:25:54 np0005597378 podman[371080]: 2026-01-27 14:25:54.070563146 +0000 UTC m=+0.393087742 container start 78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:25:54 np0005597378 nova_compute[238941]: 2026-01-27 14:25:54.071 238945 DEBUG oslo_concurrency.lockutils [None req-1cfb73e1-d349-41ea-adff-b851319db2f7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:54 np0005597378 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [NOTICE]   (371100) : New worker (371102) forked
Jan 27 09:25:54 np0005597378 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [NOTICE]   (371100) : Loading success.
Jan 27 09:25:54 np0005597378 nova_compute[238941]: 2026-01-27 14:25:54.145 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:54 np0005597378 nova_compute[238941]: 2026-01-27 14:25:54.146 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:54 np0005597378 nova_compute[238941]: 2026-01-27 14:25:54.156 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:25:54 np0005597378 nova_compute[238941]: 2026-01-27 14:25:54.156 238945 INFO nova.compute.claims [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:25:54 np0005597378 nova_compute[238941]: 2026-01-27 14:25:54.363 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:25:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1511408161' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:25:54 np0005597378 nova_compute[238941]: 2026-01-27 14:25:54.955 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:54 np0005597378 nova_compute[238941]: 2026-01-27 14:25:54.962 238945 DEBUG nova.compute.provider_tree [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:25:54 np0005597378 nova_compute[238941]: 2026-01-27 14:25:54.993 238945 DEBUG nova.scheduler.client.report [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.018 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.021 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.079 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.085 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.086 238945 DEBUG nova.network.neutron [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.104 238945 INFO nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.140 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:25:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 134 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 112 op/s
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.247 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.249 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.250 238945 INFO nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Creating image(s)#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.278 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.308 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.343 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.350 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.404 238945 DEBUG nova.policy [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d46184e35e7421399cd129ff694002b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f23ddb09daa435abd7b8175bd920876', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.446 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.447 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.448 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.448 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.482 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.488 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f fb802d98-5381-45db-a4c3-c14ad2e557d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.685 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:55 np0005597378 NetworkManager[48904]: <info>  [1769523955.6858] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/633)
Jan 27 09:25:55 np0005597378 NetworkManager[48904]: <info>  [1769523955.6865] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.726 238945 DEBUG nova.compute.manager [req-1218f7f7-b763-447e-a087-83c3e17cddf0 req-e6450331-b0d8-4caf-80ee-9c5b3ac392cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.727 238945 DEBUG oslo_concurrency.lockutils [req-1218f7f7-b763-447e-a087-83c3e17cddf0 req-e6450331-b0d8-4caf-80ee-9c5b3ac392cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.727 238945 DEBUG oslo_concurrency.lockutils [req-1218f7f7-b763-447e-a087-83c3e17cddf0 req-e6450331-b0d8-4caf-80ee-9c5b3ac392cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.727 238945 DEBUG oslo_concurrency.lockutils [req-1218f7f7-b763-447e-a087-83c3e17cddf0 req-e6450331-b0d8-4caf-80ee-9c5b3ac392cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.727 238945 DEBUG nova.compute.manager [req-1218f7f7-b763-447e-a087-83c3e17cddf0 req-e6450331-b0d8-4caf-80ee-9c5b3ac392cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] No waiting events found dispatching network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.728 238945 WARNING nova.compute.manager [req-1218f7f7-b763-447e-a087-83c3e17cddf0 req-e6450331-b0d8-4caf-80ee-9c5b3ac392cd 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received unexpected event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:25:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:55Z|01548|binding|INFO|Releasing lport d0dd5362-2188-444d-9dd1-a00fea1ddb1a from this chassis (sb_readonly=0)
Jan 27 09:25:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:25:55Z|01549|binding|INFO|Releasing lport d43985de-77e3-4402-a6c7-37813cd055a9 from this chassis (sb_readonly=0)
Jan 27 09:25:55 np0005597378 nova_compute[238941]: 2026-01-27 14:25:55.810 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:56 np0005597378 nova_compute[238941]: 2026-01-27 14:25:56.150 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f fb802d98-5381-45db-a4c3-c14ad2e557d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:56 np0005597378 nova_compute[238941]: 2026-01-27 14:25:56.226 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] resizing rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:25:56 np0005597378 nova_compute[238941]: 2026-01-27 14:25:56.394 238945 DEBUG nova.objects.instance [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'migration_context' on Instance uuid fb802d98-5381-45db-a4c3-c14ad2e557d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:25:56 np0005597378 nova_compute[238941]: 2026-01-27 14:25:56.409 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:25:56 np0005597378 nova_compute[238941]: 2026-01-27 14:25:56.409 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Ensure instance console log exists: /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:25:56 np0005597378 nova_compute[238941]: 2026-01-27 14:25:56.411 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:25:56 np0005597378 nova_compute[238941]: 2026-01-27 14:25:56.412 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:25:56 np0005597378 nova_compute[238941]: 2026-01-27 14:25:56.412 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.020 238945 DEBUG nova.network.neutron [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Successfully created port: 184493bf-c349-4722-ac0f-2c428638e3d3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:25:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 134 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 144 op/s
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.669 238945 DEBUG nova.network.neutron [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Successfully updated port: 184493bf-c349-4722-ac0f-2c428638e3d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.700 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.701 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquired lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.701 238945 DEBUG nova.network.neutron [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.783 238945 DEBUG nova.compute.manager [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-changed-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.784 238945 DEBUG nova.compute.manager [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Refreshing instance network info cache due to event network-changed-184493bf-c349-4722-ac0f-2c428638e3d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.784 238945 DEBUG oslo_concurrency.lockutils [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.834 238945 DEBUG nova.compute.manager [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received event network-changed-8f387573-0891-4f0a-9601-3736c186d288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.834 238945 DEBUG nova.compute.manager [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Refreshing instance network info cache due to event network-changed-8f387573-0891-4f0a-9601-3736c186d288. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.835 238945 DEBUG oslo_concurrency.lockutils [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.835 238945 DEBUG oslo_concurrency.lockutils [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.835 238945 DEBUG nova.network.neutron [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Refreshing network info cache for port 8f387573-0891-4f0a-9601-3736c186d288 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:25:57 np0005597378 nova_compute[238941]: 2026-01-27 14:25:57.846 238945 DEBUG nova.network.neutron [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:25:58 np0005597378 nova_compute[238941]: 2026-01-27 14:25:58.093 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.076 238945 DEBUG nova.network.neutron [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updating instance_info_cache with network_info: [{"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.107 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Releasing lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.107 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Instance network_info: |[{"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.107 238945 DEBUG oslo_concurrency.lockutils [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.108 238945 DEBUG nova.network.neutron [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Refreshing network info cache for port 184493bf-c349-4722-ac0f-2c428638e3d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.113 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Start _get_guest_xml network_info=[{"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.116 238945 WARNING nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.120 238945 DEBUG nova.virt.libvirt.host [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.120 238945 DEBUG nova.virt.libvirt.host [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.125 238945 DEBUG nova.virt.libvirt.host [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.125 238945 DEBUG nova.virt.libvirt.host [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.126 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.126 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.126 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.127 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.127 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.127 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.127 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.128 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.128 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.128 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.128 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.128 238945 DEBUG nova.virt.hardware [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.131 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 156 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 196 op/s
Jan 27 09:25:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:25:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2078881261' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:25:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:25:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2078881261' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:25:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:25:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2016943676' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.748 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.772 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.776 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.954 238945 DEBUG nova.compute.manager [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-changed-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.955 238945 DEBUG nova.compute.manager [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Refreshing instance network info cache due to event network-changed-a97b74ff-5e1f-4cb1-a688-f986acf75619. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.955 238945 DEBUG oslo_concurrency.lockutils [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.955 238945 DEBUG oslo_concurrency.lockutils [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:25:59 np0005597378 nova_compute[238941]: 2026-01-27 14:25:59.956 238945 DEBUG nova.network.neutron [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Refreshing network info cache for port a97b74ff-5e1f-4cb1-a688-f986acf75619 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.083 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:26:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3382119676' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.351 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.353 238945 DEBUG nova.virt.libvirt.vif [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:25:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1721864726',display_name='tempest-TestNetworkBasicOps-server-1721864726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1721864726',id=146,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKiAfQFWh/P5sR8CJeetN5Y+QKWiGTUVdn0zcI4otWOiDUvErfJPsGzM/uL1uYTiQAgBsUOPfQ5T6SYnCAImXeFhJ2GTbmK3gQHdA7VHWmfjtXGf++SSpErbTeu32DMIvg==',key_name='tempest-TestNetworkBasicOps-63078609',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-lox03tl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:25:55Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=fb802d98-5381-45db-a4c3-c14ad2e557d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.353 238945 DEBUG nova.network.os_vif_util [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.354 238945 DEBUG nova.network.os_vif_util [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.356 238945 DEBUG nova.objects.instance [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'pci_devices' on Instance uuid fb802d98-5381-45db-a4c3-c14ad2e557d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.374 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  <uuid>fb802d98-5381-45db-a4c3-c14ad2e557d1</uuid>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  <name>instance-00000092</name>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestNetworkBasicOps-server-1721864726</nova:name>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:25:59</nova:creationTime>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:        <nova:user uuid="8d46184e35e7421399cd129ff694002b">tempest-TestNetworkBasicOps-761138983-project-member</nova:user>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:        <nova:project uuid="6f23ddb09daa435abd7b8175bd920876">tempest-TestNetworkBasicOps-761138983</nova:project>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:        <nova:port uuid="184493bf-c349-4722-ac0f-2c428638e3d3">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <entry name="serial">fb802d98-5381-45db-a4c3-c14ad2e557d1</entry>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <entry name="uuid">fb802d98-5381-45db-a4c3-c14ad2e557d1</entry>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/fb802d98-5381-45db-a4c3-c14ad2e557d1_disk">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/fb802d98-5381-45db-a4c3-c14ad2e557d1_disk.config">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:fe:84:96"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <target dev="tap184493bf-c3"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/console.log" append="off"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:26:00 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:26:00 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:26:00 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:26:00 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.375 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Preparing to wait for external event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.376 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.376 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.376 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.377 238945 DEBUG nova.virt.libvirt.vif [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:25:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1721864726',display_name='tempest-TestNetworkBasicOps-server-1721864726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1721864726',id=146,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKiAfQFWh/P5sR8CJeetN5Y+QKWiGTUVdn0zcI4otWOiDUvErfJPsGzM/uL1uYTiQAgBsUOPfQ5T6SYnCAImXeFhJ2GTbmK3gQHdA7VHWmfjtXGf++SSpErbTeu32DMIvg==',key_name='tempest-TestNetworkBasicOps-63078609',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-lox03tl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:25:55Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=fb802d98-5381-45db-a4c3-c14ad2e557d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.377 238945 DEBUG nova.network.os_vif_util [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.378 238945 DEBUG nova.network.os_vif_util [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.379 238945 DEBUG os_vif [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.379 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.380 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.380 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.384 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap184493bf-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.384 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap184493bf-c3, col_values=(('external_ids', {'iface-id': '184493bf-c349-4722-ac0f-2c428638e3d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:84:96', 'vm-uuid': 'fb802d98-5381-45db-a4c3-c14ad2e557d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:00 np0005597378 NetworkManager[48904]: <info>  [1769523960.3873] manager: (tap184493bf-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.390 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.394 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.395 238945 INFO os_vif [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3')#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.464 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.465 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.465 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] No VIF found with MAC fa:16:3e:fe:84:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.466 238945 INFO nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Using config drive#033[00m
Jan 27 09:26:00 np0005597378 nova_compute[238941]: 2026-01-27 14:26:00.491 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:26:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 181 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Jan 27 09:26:01 np0005597378 nova_compute[238941]: 2026-01-27 14:26:01.297 238945 INFO nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Creating config drive at /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/disk.config#033[00m
Jan 27 09:26:01 np0005597378 nova_compute[238941]: 2026-01-27 14:26:01.301 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_41f99vv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:01 np0005597378 nova_compute[238941]: 2026-01-27 14:26:01.448 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_41f99vv" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:01 np0005597378 nova_compute[238941]: 2026-01-27 14:26:01.482 238945 DEBUG nova.storage.rbd_utils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] rbd image fb802d98-5381-45db-a4c3-c14ad2e557d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:01 np0005597378 nova_compute[238941]: 2026-01-27 14:26:01.488 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/disk.config fb802d98-5381-45db-a4c3-c14ad2e557d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:01 np0005597378 nova_compute[238941]: 2026-01-27 14:26:01.882 238945 DEBUG oslo_concurrency.processutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/disk.config fb802d98-5381-45db-a4c3-c14ad2e557d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:01 np0005597378 nova_compute[238941]: 2026-01-27 14:26:01.883 238945 INFO nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Deleting local config drive /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1/disk.config because it was imported into RBD.#033[00m
Jan 27 09:26:01 np0005597378 NetworkManager[48904]: <info>  [1769523961.9372] manager: (tap184493bf-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/636)
Jan 27 09:26:01 np0005597378 kernel: tap184493bf-c3: entered promiscuous mode
Jan 27 09:26:01 np0005597378 nova_compute[238941]: 2026-01-27 14:26:01.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:01Z|01550|binding|INFO|Claiming lport 184493bf-c349-4722-ac0f-2c428638e3d3 for this chassis.
Jan 27 09:26:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:01Z|01551|binding|INFO|184493bf-c349-4722-ac0f-2c428638e3d3: Claiming fa:16:3e:fe:84:96 10.100.0.3
Jan 27 09:26:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.953 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:84:96 10.100.0.3'], port_security=['fa:16:3e:fe:84:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'fb802d98-5381-45db-a4c3-c14ad2e557d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b18e2903-a184-4f44-9330-e27dd970207e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7158c75-b922-4d85-bcb0-16239fae726c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8fdd222-623b-4052-bc0a-791c75513848, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=184493bf-c349-4722-ac0f-2c428638e3d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:26:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.955 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 184493bf-c349-4722-ac0f-2c428638e3d3 in datapath b18e2903-a184-4f44-9330-e27dd970207e bound to our chassis#033[00m
Jan 27 09:26:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.956 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b18e2903-a184-4f44-9330-e27dd970207e#033[00m
Jan 27 09:26:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.970 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ceec8c1e-c21f-4708-9043-e59b544d7d78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.971 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb18e2903-a1 in ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:26:01 np0005597378 systemd-udevd[371437]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:26:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.975 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb18e2903-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:26:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.975 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[88413609-e1f2-45e7-b4a2-c4ef8dfd36c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.976 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b03e5b-5fd5-489e-8be4-2f33b18cc766]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:01Z|01552|binding|INFO|Setting lport 184493bf-c349-4722-ac0f-2c428638e3d3 ovn-installed in OVS
Jan 27 09:26:01 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:01Z|01553|binding|INFO|Setting lport 184493bf-c349-4722-ac0f-2c428638e3d3 up in Southbound
Jan 27 09:26:01 np0005597378 NetworkManager[48904]: <info>  [1769523961.9906] device (tap184493bf-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:26:01 np0005597378 NetworkManager[48904]: <info>  [1769523961.9912] device (tap184493bf-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:26:01 np0005597378 nova_compute[238941]: 2026-01-27 14:26:01.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:01.990 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[de7c5ce4-6116-471c-ab48-758c54c5d2bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:01 np0005597378 systemd-machined[207425]: New machine qemu-178-instance-00000092.
Jan 27 09:26:02 np0005597378 systemd[1]: Started Virtual Machine qemu-178-instance-00000092.
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.015 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[aa36cf9e-5fd1-47c1-a09b-39a39e7642ff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.052 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[855295ee-0681-43bb-b7bb-54491cf83c42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:02 np0005597378 NetworkManager[48904]: <info>  [1769523962.0596] manager: (tapb18e2903-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/637)
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.058 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4e6009-e061-4d9e-b79c-b2720a42a1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.095 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec97d3f-0657-452a-aa13-c9625046168a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.099 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a400bb19-8b99-4420-9aef-b09c93ad0055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:02 np0005597378 NetworkManager[48904]: <info>  [1769523962.1309] device (tapb18e2903-a0): carrier: link connected
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.135 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b2946e-19f3-4e4b-bb0a-7c3abe8ccc39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.139 238945 DEBUG nova.network.neutron [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updated VIF entry in instance network info cache for port 8f387573-0891-4f0a-9601-3736c186d288. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.140 238945 DEBUG nova.network.neutron [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updating instance_info_cache with network_info: [{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.154 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[61c8db08-34a3-4d6f-bcef-6fbda3391dfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb18e2903-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:18:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 451], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660175, 'reachable_time': 31669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371470, 'error': None, 'target': 'ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.168 238945 DEBUG oslo_concurrency.lockutils [req-638c9b3a-6e86-486a-bdb1-cef2fd2ec54b req-edbea05c-aa0e-472d-8d47-da76a417b905 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.175 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc3c3d8-60d9-4cab-b770-a0e41a107c69]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:181e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 660175, 'tstamp': 660175}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371471, 'error': None, 'target': 'ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.193 238945 DEBUG nova.compute.manager [req-a5ecf4eb-9326-4f88-b92c-56ce2b5aed3c req-60d0e2d5-84c5-4d44-8487-234b165122b6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.193 238945 DEBUG oslo_concurrency.lockutils [req-a5ecf4eb-9326-4f88-b92c-56ce2b5aed3c req-60d0e2d5-84c5-4d44-8487-234b165122b6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.194 238945 DEBUG oslo_concurrency.lockutils [req-a5ecf4eb-9326-4f88-b92c-56ce2b5aed3c req-60d0e2d5-84c5-4d44-8487-234b165122b6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.194 238945 DEBUG oslo_concurrency.lockutils [req-a5ecf4eb-9326-4f88-b92c-56ce2b5aed3c req-60d0e2d5-84c5-4d44-8487-234b165122b6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.194 238945 DEBUG nova.compute.manager [req-a5ecf4eb-9326-4f88-b92c-56ce2b5aed3c req-60d0e2d5-84c5-4d44-8487-234b165122b6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Processing event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.198 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0683d6c0-d2dc-454c-bb5e-1e51113bae6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb18e2903-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:18:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 451], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660175, 'reachable_time': 31669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371472, 'error': None, 'target': 'ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.232 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c0410094-507c-4c07-9b38-82eda1169d27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.298 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fb14ff60-13b5-45f2-942c-c33b2762372d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.300 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb18e2903-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.300 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.301 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb18e2903-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:02 np0005597378 NetworkManager[48904]: <info>  [1769523962.3036] manager: (tapb18e2903-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/638)
Jan 27 09:26:02 np0005597378 kernel: tapb18e2903-a0: entered promiscuous mode
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.306 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.306 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb18e2903-a0, col_values=(('external_ids', {'iface-id': '4fbf5a6a-fa6c-49b1-b291-7451f8fe7b1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:02Z|01554|binding|INFO|Releasing lport 4fbf5a6a-fa6c-49b1-b291-7451f8fe7b1e from this chassis (sb_readonly=0)
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.324 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.326 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b18e2903-a184-4f44-9330-e27dd970207e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b18e2903-a184-4f44-9330-e27dd970207e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.327 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f68586-eec0-4e18-bed6-16803089630a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.328 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-b18e2903-a184-4f44-9330-e27dd970207e
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/b18e2903-a184-4f44-9330-e27dd970207e.pid.haproxy
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID b18e2903-a184-4f44-9330-e27dd970207e
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:26:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:02.330 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e', 'env', 'PROCESS_TAG=haproxy-b18e2903-a184-4f44-9330-e27dd970207e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b18e2903-a184-4f44-9330-e27dd970207e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.777 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523962.7771897, fb802d98-5381-45db-a4c3-c14ad2e557d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.778 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] VM Started (Lifecycle Event)#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.780 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.783 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.788 238945 INFO nova.virt.libvirt.driver [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Instance spawned successfully.#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.788 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.796 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.804 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.810 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.811 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.812 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.812 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.812 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:26:02 np0005597378 podman[371540]: 2026-01-27 14:26:02.71699386 +0000 UTC m=+0.021107690 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.813 238945 DEBUG nova.virt.libvirt.driver [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:26:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:02Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:c5:09 10.100.0.12
Jan 27 09:26:02 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:02Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:c5:09 10.100.0.12
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.835 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.835 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523962.7780385, fb802d98-5381-45db-a4c3-c14ad2e557d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.835 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.860 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.863 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523962.7824726, fb802d98-5381-45db-a4c3-c14ad2e557d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.864 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.870 238945 INFO nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Took 7.62 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.871 238945 DEBUG nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.885 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:26:02 np0005597378 nova_compute[238941]: 2026-01-27 14:26:02.889 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:26:03 np0005597378 podman[371540]: 2026-01-27 14:26:03.033652719 +0000 UTC m=+0.337766519 container create 2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 09:26:03 np0005597378 nova_compute[238941]: 2026-01-27 14:26:03.054 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:26:03 np0005597378 nova_compute[238941]: 2026-01-27 14:26:03.062 238945 DEBUG nova.network.neutron [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updated VIF entry in instance network info cache for port 184493bf-c349-4722-ac0f-2c428638e3d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:26:03 np0005597378 nova_compute[238941]: 2026-01-27 14:26:03.063 238945 DEBUG nova.network.neutron [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updating instance_info_cache with network_info: [{"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:03 np0005597378 nova_compute[238941]: 2026-01-27 14:26:03.066 238945 INFO nova.compute.manager [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Took 8.95 seconds to build instance.#033[00m
Jan 27 09:26:03 np0005597378 nova_compute[238941]: 2026-01-27 14:26:03.084 238945 DEBUG oslo_concurrency.lockutils [req-df8cd817-fdec-4e86-ab5f-a3bee45e7a75 req-f06e9976-9b7f-4b2f-8ee2-48bc5028b81f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:26:03 np0005597378 nova_compute[238941]: 2026-01-27 14:26:03.092 238945 DEBUG oslo_concurrency.lockutils [None req-b7b18b60-cd09-4eff-b1f3-b4a1a8b86c8b 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:03 np0005597378 nova_compute[238941]: 2026-01-27 14:26:03.097 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:03 np0005597378 systemd[1]: Started libpod-conmon-2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478.scope.
Jan 27 09:26:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:26:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c82db2d2d068220333869382cd542849d31de17f853203896cd066e2c627c15/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:03 np0005597378 nova_compute[238941]: 2026-01-27 14:26:03.207 238945 DEBUG nova.network.neutron [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updated VIF entry in instance network info cache for port a97b74ff-5e1f-4cb1-a688-f986acf75619. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:26:03 np0005597378 nova_compute[238941]: 2026-01-27 14:26:03.207 238945 DEBUG nova.network.neutron [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updating instance_info_cache with network_info: [{"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:03 np0005597378 podman[371540]: 2026-01-27 14:26:03.209656966 +0000 UTC m=+0.513770796 container init 2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:26:03 np0005597378 podman[371540]: 2026-01-27 14:26:03.218754322 +0000 UTC m=+0.522868122 container start 2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 09:26:03 np0005597378 nova_compute[238941]: 2026-01-27 14:26:03.224 238945 DEBUG oslo_concurrency.lockutils [req-777bfb8d-1445-45e0-bca8-f89adbf892f9 req-345ebcfd-84cd-4553-a3cc-2ec2453d0de5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:26:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 181 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Jan 27 09:26:03 np0005597378 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [NOTICE]   (371564) : New worker (371566) forked
Jan 27 09:26:03 np0005597378 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [NOTICE]   (371564) : Loading success.
Jan 27 09:26:04 np0005597378 nova_compute[238941]: 2026-01-27 14:26:04.279 238945 DEBUG nova.compute.manager [req-fbe5c3d6-3190-46d1-9f2f-a4a5298e8900 req-0abfb07e-c2a5-4e39-a6df-ca274c14dba6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:04 np0005597378 nova_compute[238941]: 2026-01-27 14:26:04.279 238945 DEBUG oslo_concurrency.lockutils [req-fbe5c3d6-3190-46d1-9f2f-a4a5298e8900 req-0abfb07e-c2a5-4e39-a6df-ca274c14dba6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:04 np0005597378 nova_compute[238941]: 2026-01-27 14:26:04.279 238945 DEBUG oslo_concurrency.lockutils [req-fbe5c3d6-3190-46d1-9f2f-a4a5298e8900 req-0abfb07e-c2a5-4e39-a6df-ca274c14dba6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:04 np0005597378 nova_compute[238941]: 2026-01-27 14:26:04.279 238945 DEBUG oslo_concurrency.lockutils [req-fbe5c3d6-3190-46d1-9f2f-a4a5298e8900 req-0abfb07e-c2a5-4e39-a6df-ca274c14dba6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:04 np0005597378 nova_compute[238941]: 2026-01-27 14:26:04.280 238945 DEBUG nova.compute.manager [req-fbe5c3d6-3190-46d1-9f2f-a4a5298e8900 req-0abfb07e-c2a5-4e39-a6df-ca274c14dba6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] No waiting events found dispatching network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:26:04 np0005597378 nova_compute[238941]: 2026-01-27 14:26:04.280 238945 WARNING nova.compute.manager [req-fbe5c3d6-3190-46d1-9f2f-a4a5298e8900 req-0abfb07e-c2a5-4e39-a6df-ca274c14dba6 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received unexpected event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:26:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 194 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.9 MiB/s wr, 243 op/s
Jan 27 09:26:05 np0005597378 nova_compute[238941]: 2026-01-27 14:26:05.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:26:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 211 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 247 op/s
Jan 27 09:26:07 np0005597378 nova_compute[238941]: 2026-01-27 14:26:07.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:26:07 np0005597378 nova_compute[238941]: 2026-01-27 14:26:07.404 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:07 np0005597378 nova_compute[238941]: 2026-01-27 14:26:07.404 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:07 np0005597378 nova_compute[238941]: 2026-01-27 14:26:07.404 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:07 np0005597378 nova_compute[238941]: 2026-01-27 14:26:07.404 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:26:07 np0005597378 nova_compute[238941]: 2026-01-27 14:26:07.405 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:07 np0005597378 nova_compute[238941]: 2026-01-27 14:26:07.999 238945 DEBUG nova.compute.manager [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-changed-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:07 np0005597378 nova_compute[238941]: 2026-01-27 14:26:07.999 238945 DEBUG nova.compute.manager [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Refreshing instance network info cache due to event network-changed-184493bf-c349-4722-ac0f-2c428638e3d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:07.999 238945 DEBUG oslo_concurrency.lockutils [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:26:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:26:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4190830493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.001 238945 DEBUG oslo_concurrency.lockutils [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.002 238945 DEBUG nova.network.neutron [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Refreshing network info cache for port 184493bf-c349-4722-ac0f-2c428638e3d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.026 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.102 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.102 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.105 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.106 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.108 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.108 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.295 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.297 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2974MB free_disk=59.90020981896669GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.297 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.297 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 6635dda1-c175-403d-ac21-0ec9dca6a77c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance 0131fc36-bc84-47cd-8067-04bef1ed346b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.470 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance fb802d98-5381-45db-a4c3-c14ad2e557d1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.471 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.471 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:26:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:08Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:e3:b5 10.100.0.8
Jan 27 09:26:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:08Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:e3:b5 10.100.0.8
Jan 27 09:26:08 np0005597378 nova_compute[238941]: 2026-01-27 14:26:08.620 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:09 np0005597378 nova_compute[238941]: 2026-01-27 14:26:09.120 238945 DEBUG nova.network.neutron [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updated VIF entry in instance network info cache for port 184493bf-c349-4722-ac0f-2c428638e3d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:26:09 np0005597378 nova_compute[238941]: 2026-01-27 14:26:09.121 238945 DEBUG nova.network.neutron [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updating instance_info_cache with network_info: [{"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:09 np0005597378 nova_compute[238941]: 2026-01-27 14:26:09.171 238945 DEBUG oslo_concurrency.lockutils [req-66ffb3ad-e4b8-4c2a-b057-a5da8b9f68ec req-49c7a3b4-e71f-4501-ac91-5a19c155e20b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:26:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 236 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.8 MiB/s wr, 242 op/s
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019138885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:26:09 np0005597378 nova_compute[238941]: 2026-01-27 14:26:09.276 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:09 np0005597378 nova_compute[238941]: 2026-01-27 14:26:09.281 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:26:09 np0005597378 nova_compute[238941]: 2026-01-27 14:26:09.313 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.374212) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523969374252, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2067, "num_deletes": 251, "total_data_size": 3271569, "memory_usage": 3317568, "flush_reason": "Manual Compaction"}
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Jan 27 09:26:09 np0005597378 nova_compute[238941]: 2026-01-27 14:26:09.401 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:26:09 np0005597378 nova_compute[238941]: 2026-01-27 14:26:09.402 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523969598262, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3203838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49755, "largest_seqno": 51821, "table_properties": {"data_size": 3194669, "index_size": 5663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19338, "raw_average_key_size": 20, "raw_value_size": 3176148, "raw_average_value_size": 3322, "num_data_blocks": 251, "num_entries": 956, "num_filter_entries": 956, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523759, "oldest_key_time": 1769523759, "file_creation_time": 1769523969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 224160 microseconds, and 11870 cpu microseconds.
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.598365) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3203838 bytes OK
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.598391) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.664590) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.664631) EVENT_LOG_v1 {"time_micros": 1769523969664623, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.664654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3262873, prev total WAL file size 3262873, number of live WAL files 2.
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.665558) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3128KB)], [116(8379KB)]
Jan 27 09:26:09 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523969665609, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11783956, "oldest_snapshot_seqno": -1}
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7455 keys, 10026072 bytes, temperature: kUnknown
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523970031732, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10026072, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9976909, "index_size": 29403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18693, "raw_key_size": 192899, "raw_average_key_size": 25, "raw_value_size": 9844667, "raw_average_value_size": 1320, "num_data_blocks": 1150, "num_entries": 7455, "num_filter_entries": 7455, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.031966) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10026072 bytes
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.131748) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.2 rd, 27.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.2 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7969, records dropped: 514 output_compression: NoCompression
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.131797) EVENT_LOG_v1 {"time_micros": 1769523970131778, "job": 70, "event": "compaction_finished", "compaction_time_micros": 366190, "compaction_time_cpu_micros": 26740, "output_level": 6, "num_output_files": 1, "total_output_size": 10026072, "num_input_records": 7969, "num_output_records": 7455, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523970133119, "job": 70, "event": "table_file_deletion", "file_number": 118}
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523970135735, "job": 70, "event": "table_file_deletion", "file_number": 116}
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:09.665452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.135819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.135824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.135825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.135828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:10.135831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:26:10 np0005597378 nova_compute[238941]: 2026-01-27 14:26:10.391 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:26:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 241 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Jan 27 09:26:11 np0005597378 podman[371620]: 2026-01-27 14:26:11.719167665 +0000 UTC m=+0.057618164 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:26:13 np0005597378 nova_compute[238941]: 2026-01-27 14:26:13.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 241 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 192 op/s
Jan 27 09:26:13 np0005597378 nova_compute[238941]: 2026-01-27 14:26:13.402 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:26:13 np0005597378 nova_compute[238941]: 2026-01-27 14:26:13.402 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:26:13 np0005597378 podman[371639]: 2026-01-27 14:26:13.761539174 +0000 UTC m=+0.103130231 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:26:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 205 op/s
Jan 27 09:26:15 np0005597378 nova_compute[238941]: 2026-01-27 14:26:15.395 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:26:15 np0005597378 nova_compute[238941]: 2026-01-27 14:26:15.630 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:15 np0005597378 nova_compute[238941]: 2026-01-27 14:26:15.630 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:15 np0005597378 nova_compute[238941]: 2026-01-27 14:26:15.678 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:26:15 np0005597378 nova_compute[238941]: 2026-01-27 14:26:15.770 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:15 np0005597378 nova_compute[238941]: 2026-01-27 14:26:15.770 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:15 np0005597378 nova_compute[238941]: 2026-01-27 14:26:15.776 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:26:15 np0005597378 nova_compute[238941]: 2026-01-27 14:26:15.777 238945 INFO nova.compute.claims [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:26:16 np0005597378 nova_compute[238941]: 2026-01-27 14:26:16.039 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:16 np0005597378 nova_compute[238941]: 2026-01-27 14:26:16.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:26:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:26:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/206535781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:26:16 np0005597378 nova_compute[238941]: 2026-01-27 14:26:16.643 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:16 np0005597378 nova_compute[238941]: 2026-01-27 14:26:16.651 238945 DEBUG nova.compute.provider_tree [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:26:16 np0005597378 nova_compute[238941]: 2026-01-27 14:26:16.674 238945 DEBUG nova.scheduler.client.report [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:26:16 np0005597378 nova_compute[238941]: 2026-01-27 14:26:16.704 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:16 np0005597378 nova_compute[238941]: 2026-01-27 14:26:16.704 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:26:16 np0005597378 nova_compute[238941]: 2026-01-27 14:26:16.783 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:26:16 np0005597378 nova_compute[238941]: 2026-01-27 14:26:16.783 238945 DEBUG nova.network.neutron [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:26:16 np0005597378 nova_compute[238941]: 2026-01-27 14:26:16.866 238945 INFO nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:26:16 np0005597378 nova_compute[238941]: 2026-01-27 14:26:16.891 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.068 238945 DEBUG nova.policy [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54150da90e49498bb01ba6afc80f5562', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90600d8549a94e0fa1932cd257a4f609', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.082 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.083 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.084 238945 INFO nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Creating image(s)#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.102 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.122 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.140 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.143 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:26:17
Jan 27 09:26:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:26:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:26:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'vms', 'backups', 'default.rgw.log']
Jan 27 09:26:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.210 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.211 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.211 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.212 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.229 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.231 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 251 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:26:17 np0005597378 nova_compute[238941]: 2026-01-27 14:26:17.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 09:26:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:26:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:26:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:26:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:26:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:26:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:26:18 np0005597378 nova_compute[238941]: 2026-01-27 14:26:18.102 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:18Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:84:96 10.100.0.3
Jan 27 09:26:18 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:18Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:84:96 10.100.0.3
Jan 27 09:26:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:26:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:26:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:26:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:26:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:26:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:26:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:26:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:26:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:26:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:26:18 np0005597378 nova_compute[238941]: 2026-01-27 14:26:18.293 238945 DEBUG nova.network.neutron [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Successfully created port: addbb44d-80d2-4bb4-ae54-d198de4a9755 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:26:18 np0005597378 nova_compute[238941]: 2026-01-27 14:26:18.396 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:26:18 np0005597378 nova_compute[238941]: 2026-01-27 14:26:18.396 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:26:18 np0005597378 nova_compute[238941]: 2026-01-27 14:26:18.396 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:26:18 np0005597378 nova_compute[238941]: 2026-01-27 14:26:18.424 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 27 09:26:18 np0005597378 nova_compute[238941]: 2026-01-27 14:26:18.673 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:26:18 np0005597378 nova_compute[238941]: 2026-01-27 14:26:18.674 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:26:18 np0005597378 nova_compute[238941]: 2026-01-27 14:26:18.674 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:26:18 np0005597378 nova_compute[238941]: 2026-01-27 14:26:18.674 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6635dda1-c175-403d-ac21-0ec9dca6a77c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:26:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 263 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 3.6 MiB/s wr, 108 op/s
Jan 27 09:26:20 np0005597378 nova_compute[238941]: 2026-01-27 14:26:20.139 238945 DEBUG nova.network.neutron [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Successfully updated port: addbb44d-80d2-4bb4-ae54-d198de4a9755 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:26:20 np0005597378 nova_compute[238941]: 2026-01-27 14:26:20.187 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:26:20 np0005597378 nova_compute[238941]: 2026-01-27 14:26:20.188 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquired lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:26:20 np0005597378 nova_compute[238941]: 2026-01-27 14:26:20.188 238945 DEBUG nova.network.neutron [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:26:20 np0005597378 nova_compute[238941]: 2026-01-27 14:26:20.355 238945 DEBUG nova.network.neutron [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:26:20 np0005597378 nova_compute[238941]: 2026-01-27 14:26:20.397 238945 DEBUG nova.compute.manager [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received event network-changed-addbb44d-80d2-4bb4-ae54-d198de4a9755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:20 np0005597378 nova_compute[238941]: 2026-01-27 14:26:20.398 238945 DEBUG nova.compute.manager [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Refreshing instance network info cache due to event network-changed-addbb44d-80d2-4bb4-ae54-d198de4a9755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:26:20 np0005597378 nova_compute[238941]: 2026-01-27 14:26:20.398 238945 DEBUG oslo_concurrency.lockutils [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:26:20 np0005597378 nova_compute[238941]: 2026-01-27 14:26:20.398 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:26:20 np0005597378 nova_compute[238941]: 2026-01-27 14:26:20.571 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:20 np0005597378 nova_compute[238941]: 2026-01-27 14:26:20.727 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] resizing rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:26:20 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Jan 27 09:26:20 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:20.824537) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:26:20 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Jan 27 09:26:20 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523980824568, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 353, "num_deletes": 256, "total_data_size": 185197, "memory_usage": 193336, "flush_reason": "Manual Compaction"}
Jan 27 09:26:20 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Jan 27 09:26:20 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523980937485, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 183800, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51822, "largest_seqno": 52174, "table_properties": {"data_size": 181634, "index_size": 330, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5278, "raw_average_key_size": 17, "raw_value_size": 177331, "raw_average_value_size": 595, "num_data_blocks": 15, "num_entries": 298, "num_filter_entries": 298, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523970, "oldest_key_time": 1769523970, "file_creation_time": 1769523980, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:26:20 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 112994 microseconds, and 1270 cpu microseconds.
Jan 27 09:26:20 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:20.937530) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 183800 bytes OK
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:20.937547) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.139584) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.139660) EVENT_LOG_v1 {"time_micros": 1769523981139644, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.139707) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 182806, prev total WAL file size 182806, number of live WAL files 2.
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.140628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303130' seq:72057594037927935, type:22 .. '6C6F676D0032323632' seq:0, type:0; will stop at (end)
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(179KB)], [119(9791KB)]
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523981140707, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10209872, "oldest_snapshot_seqno": -1}
Jan 27 09:26:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 292 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 3.2 MiB/s wr, 89 op/s
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7234 keys, 10101733 bytes, temperature: kUnknown
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523981392869, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10101733, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10053343, "index_size": 29206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18117, "raw_key_size": 189241, "raw_average_key_size": 26, "raw_value_size": 9924316, "raw_average_value_size": 1371, "num_data_blocks": 1138, "num_entries": 7234, "num_filter_entries": 7234, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769523981, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.398 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.398 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.418 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.393681) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10101733 bytes
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.487775) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 40.4 rd, 40.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.6 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(110.5) write-amplify(55.0) OK, records in: 7753, records dropped: 519 output_compression: NoCompression
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.487814) EVENT_LOG_v1 {"time_micros": 1769523981487799, "job": 72, "event": "compaction_finished", "compaction_time_micros": 252811, "compaction_time_cpu_micros": 23990, "output_level": 6, "num_output_files": 1, "total_output_size": 10101733, "num_input_records": 7753, "num_output_records": 7234, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523981488003, "job": 72, "event": "table_file_deletion", "file_number": 121}
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769523981489665, "job": 72, "event": "table_file_deletion", "file_number": 119}
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.140434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.489856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.489864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.489866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.489868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:26:21 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:26:21.489869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.505 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.505 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.510 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.510 238945 INFO nova.compute.claims [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.580 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updating instance_info_cache with network_info: [{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.593 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.593 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.672 238945 DEBUG nova.objects.instance [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'migration_context' on Instance uuid 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.704 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.744 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.745 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Ensure instance console log exists: /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.745 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.746 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:21 np0005597378 nova_compute[238941]: 2026-01-27 14:26:21.746 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:26:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3983530081' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.457 238945 DEBUG nova.network.neutron [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updating instance_info_cache with network_info: [{"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.472 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.478 238945 DEBUG nova.compute.provider_tree [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.500 238945 DEBUG nova.scheduler.client.report [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.505 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Releasing lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.505 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Instance network_info: |[{"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.506 238945 DEBUG oslo_concurrency.lockutils [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.506 238945 DEBUG nova.network.neutron [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Refreshing network info cache for port addbb44d-80d2-4bb4-ae54-d198de4a9755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.510 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Start _get_guest_xml network_info=[{"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.514 238945 WARNING nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.518 238945 DEBUG nova.virt.libvirt.host [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.519 238945 DEBUG nova.virt.libvirt.host [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.523 238945 DEBUG nova.virt.libvirt.host [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.524 238945 DEBUG nova.virt.libvirt.host [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.524 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.524 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.525 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.526 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.526 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.526 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.526 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.527 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.527 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.528 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.528 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.528 238945 DEBUG nova.virt.hardware [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.535 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.587 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.588 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.696 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.697 238945 DEBUG nova.network.neutron [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.777 238945 INFO nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.792 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.917 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.919 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.919 238945 INFO nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Creating image(s)#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.946 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:22 np0005597378 nova_compute[238941]: 2026-01-27 14:26:22.976 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.033 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.039 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:26:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2500607830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.089 238945 DEBUG nova.policy [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.127 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.153 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.157 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.193 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.194 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.195 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.195 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.222 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.227 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dab9f91a-166a-4055-95d9-c98bede611a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 292 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 395 KiB/s rd, 3.0 MiB/s wr, 80 op/s
Jan 27 09:26:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:26:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571737836' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.752 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.753 238945 DEBUG nova.virt.libvirt.vif [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:26:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-285236295',display_name='tempest-TestGettingAddress-server-285236295',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-285236295',id=147,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0roQxlm7rBsyexGnEPiCXhcnKxfp9fw0cPDJksHn/jHZPciuBmzdmiU2uk+pbwv5Uo/ReziRDsxV2MeUBZqXHZ+jvBUJxD4UhnGHkH+Oh6gMJQT0kcpRrKqU08CfxQIQ==',key_name='tempest-TestGettingAddress-1692085847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-8ldqv5w2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:26:16Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=1b157d23-83f3-456c-8dae-d4ac1bcf3cdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.754 238945 DEBUG nova.network.os_vif_util [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.755 238945 DEBUG nova.network.os_vif_util [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.756 238945 DEBUG nova.objects.instance [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.769 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  <uuid>1b157d23-83f3-456c-8dae-d4ac1bcf3cdb</uuid>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  <name>instance-00000093</name>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestGettingAddress-server-285236295</nova:name>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:26:22</nova:creationTime>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:        <nova:user uuid="54150da90e49498bb01ba6afc80f5562">tempest-TestGettingAddress-1672904195-project-member</nova:user>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:        <nova:project uuid="90600d8549a94e0fa1932cd257a4f609">tempest-TestGettingAddress-1672904195</nova:project>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:        <nova:port uuid="addbb44d-80d2-4bb4-ae54-d198de4a9755">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe3c:49d9" ipVersion="6"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <entry name="serial">1b157d23-83f3-456c-8dae-d4ac1bcf3cdb</entry>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <entry name="uuid">1b157d23-83f3-456c-8dae-d4ac1bcf3cdb</entry>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk.config">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:3c:49:d9"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <target dev="tapaddbb44d-80"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/console.log" append="off"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:26:23 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:26:23 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:26:23 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:26:23 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.770 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Preparing to wait for external event network-vif-plugged-addbb44d-80d2-4bb4-ae54-d198de4a9755 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.771 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.771 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.772 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.772 238945 DEBUG nova.virt.libvirt.vif [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:26:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-285236295',display_name='tempest-TestGettingAddress-server-285236295',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-285236295',id=147,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0roQxlm7rBsyexGnEPiCXhcnKxfp9fw0cPDJksHn/jHZPciuBmzdmiU2uk+pbwv5Uo/ReziRDsxV2MeUBZqXHZ+jvBUJxD4UhnGHkH+Oh6gMJQT0kcpRrKqU08CfxQIQ==',key_name='tempest-TestGettingAddress-1692085847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-8ldqv5w2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:26:16Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=1b157d23-83f3-456c-8dae-d4ac1bcf3cdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.773 238945 DEBUG nova.network.os_vif_util [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.774 238945 DEBUG nova.network.os_vif_util [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.774 238945 DEBUG os_vif [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.775 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.776 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.777 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.780 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.780 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaddbb44d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.781 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaddbb44d-80, col_values=(('external_ids', {'iface-id': 'addbb44d-80d2-4bb4-ae54-d198de4a9755', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:49:d9', 'vm-uuid': '1b157d23-83f3-456c-8dae-d4ac1bcf3cdb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:23 np0005597378 NetworkManager[48904]: <info>  [1769523983.7835] manager: (tapaddbb44d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/639)
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.785 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.789 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:23 np0005597378 nova_compute[238941]: 2026-01-27 14:26:23.789 238945 INFO os_vif [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80')#033[00m
Jan 27 09:26:24 np0005597378 nova_compute[238941]: 2026-01-27 14:26:24.046 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:26:24 np0005597378 nova_compute[238941]: 2026-01-27 14:26:24.047 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:26:24 np0005597378 nova_compute[238941]: 2026-01-27 14:26:24.047 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] No VIF found with MAC fa:16:3e:3c:49:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:26:24 np0005597378 nova_compute[238941]: 2026-01-27 14:26:24.047 238945 INFO nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Using config drive#033[00m
Jan 27 09:26:24 np0005597378 nova_compute[238941]: 2026-01-27 14:26:24.422 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:24 np0005597378 nova_compute[238941]: 2026-01-27 14:26:24.430 238945 DEBUG nova.network.neutron [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Successfully created port: b3dcf519-7c56-406e-a80a-e3a3bdf38620 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:26:24 np0005597378 nova_compute[238941]: 2026-01-27 14:26:24.847 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f dab9f91a-166a-4055-95d9-c98bede611a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:24 np0005597378 nova_compute[238941]: 2026-01-27 14:26:24.914 238945 INFO nova.compute.manager [None req-7fdac3cc-5029-4446-b03b-dd8695d9b2cb 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Get console output#033[00m
Jan 27 09:26:24 np0005597378 nova_compute[238941]: 2026-01-27 14:26:24.919 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:26:24 np0005597378 nova_compute[238941]: 2026-01-27 14:26:24.993 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.047 238945 INFO nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Creating config drive at /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/disk.config#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.052 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgqvxrdr9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.087 238945 DEBUG nova.network.neutron [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updated VIF entry in instance network info cache for port addbb44d-80d2-4bb4-ae54-d198de4a9755. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.088 238945 DEBUG nova.network.neutron [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updating instance_info_cache with network_info: [{"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.108 238945 DEBUG oslo_concurrency.lockutils [req-c05858e6-031b-4400-9f5c-28f536485192 req-2c704492-cf64-4947-9ee3-bcce29190cf1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.194 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgqvxrdr9" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.216 238945 DEBUG nova.storage.rbd_utils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] rbd image 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.220 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/disk.config 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 333 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 4.3 MiB/s wr, 115 op/s
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.371 238945 DEBUG nova.objects.instance [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid dab9f91a-166a-4055-95d9-c98bede611a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.376 238945 DEBUG nova.network.neutron [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Successfully updated port: b3dcf519-7c56-406e-a80a-e3a3bdf38620 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.407 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.408 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Ensure instance console log exists: /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.408 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.409 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.409 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.414 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.414 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.415 238945 DEBUG nova.network.neutron [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.484 238945 DEBUG nova.compute.manager [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-changed-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.484 238945 DEBUG nova.compute.manager [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Refreshing instance network info cache due to event network-changed-b3dcf519-7c56-406e-a80a-e3a3bdf38620. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.484 238945 DEBUG oslo_concurrency.lockutils [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:26:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:25Z|01555|binding|INFO|Releasing lport 4fbf5a6a-fa6c-49b1-b291-7451f8fe7b1e from this chassis (sb_readonly=0)
Jan 27 09:26:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:25Z|01556|binding|INFO|Releasing lport d0dd5362-2188-444d-9dd1-a00fea1ddb1a from this chassis (sb_readonly=0)
Jan 27 09:26:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:25Z|01557|binding|INFO|Releasing lport d43985de-77e3-4402-a6c7-37813cd055a9 from this chassis (sb_readonly=0)
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.624 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:25 np0005597378 nova_compute[238941]: 2026-01-27 14:26:25.636 238945 DEBUG nova.network.neutron [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:26:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.190 238945 DEBUG oslo_concurrency.processutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/disk.config 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.971s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.191 238945 INFO nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Deleting local config drive /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb/disk.config because it was imported into RBD.#033[00m
Jan 27 09:26:26 np0005597378 kernel: tapaddbb44d-80: entered promiscuous mode
Jan 27 09:26:26 np0005597378 NetworkManager[48904]: <info>  [1769523986.2515] manager: (tapaddbb44d-80): new Tun device (/org/freedesktop/NetworkManager/Devices/640)
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.253 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:26 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:26Z|01558|binding|INFO|Claiming lport addbb44d-80d2-4bb4-ae54-d198de4a9755 for this chassis.
Jan 27 09:26:26 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:26Z|01559|binding|INFO|addbb44d-80d2-4bb4-ae54-d198de4a9755: Claiming fa:16:3e:3c:49:d9 10.100.0.10 2001:db8::f816:3eff:fe3c:49d9
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.265 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:49:d9 10.100.0.10 2001:db8::f816:3eff:fe3c:49d9'], port_security=['fa:16:3e:3c:49:d9 10.100.0.10 2001:db8::f816:3eff:fe3c:49d9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8::f816:3eff:fe3c:49d9/64', 'neutron:device_id': '1b157d23-83f3-456c-8dae-d4ac1bcf3cdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd296c7a5-f385-4534-8f02-e685c086b3e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=addbb44d-80d2-4bb4-ae54-d198de4a9755) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.267 154802 INFO neutron.agent.ovn.metadata.agent [-] Port addbb44d-80d2-4bb4-ae54-d198de4a9755 in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c bound to our chassis#033[00m
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.268 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c#033[00m
Jan 27 09:26:26 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:26Z|01560|binding|INFO|Setting lport addbb44d-80d2-4bb4-ae54-d198de4a9755 ovn-installed in OVS
Jan 27 09:26:26 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:26Z|01561|binding|INFO|Setting lport addbb44d-80d2-4bb4-ae54-d198de4a9755 up in Southbound
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.277 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.288 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6698e49b-0b05-4fbc-a057-e3426e544d4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:26 np0005597378 systemd-machined[207425]: New machine qemu-179-instance-00000093.
Jan 27 09:26:26 np0005597378 systemd[1]: Started Virtual Machine qemu-179-instance-00000093.
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.322 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9db1e4cc-4385-493d-99ef-5633a9eea0cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.327 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5de9c2-b379-4efe-a920-e4ef247f56c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:26 np0005597378 systemd-udevd[372182]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:26:26 np0005597378 NetworkManager[48904]: <info>  [1769523986.3415] device (tapaddbb44d-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:26:26 np0005597378 NetworkManager[48904]: <info>  [1769523986.3420] device (tapaddbb44d-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.361 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f7a54f-ad02-416a-b897-cccfa073d6be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.380 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ad437bfa-86b1-4c76-a257-3aab4259b07e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bdc2751-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:d5:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658783, 'reachable_time': 38942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372192, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.406 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[12e1d6b0-51ee-45b3-b191-b5e6f70d3f61]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3bdc2751-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658797, 'tstamp': 658797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372194, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3bdc2751-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658801, 'tstamp': 658801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372194, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.409 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bdc2751-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.410 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.411 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.412 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bdc2751-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.413 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.413 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bdc2751-90, col_values=(('external_ids', {'iface-id': 'd0dd5362-2188-444d-9dd1-a00fea1ddb1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:26:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:26.414 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.750 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523986.7503045, 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.751 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] VM Started (Lifecycle Event)
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.785 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.789 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523986.7506728, 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.789 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] VM Paused (Lifecycle Event)
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.827 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.830 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:26:26 np0005597378 nova_compute[238941]: 2026-01-27 14:26:26.849 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 353 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 439 KiB/s rd, 4.9 MiB/s wr, 111 op/s
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.403 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.876 238945 DEBUG nova.compute.manager [req-07f28d82-f644-4644-b577-e7cbb2912bd3 req-fe2e6d0f-c694-4ec3-8313-ce4a3c10691d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received event network-vif-plugged-addbb44d-80d2-4bb4-ae54-d198de4a9755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.877 238945 DEBUG oslo_concurrency.lockutils [req-07f28d82-f644-4644-b577-e7cbb2912bd3 req-fe2e6d0f-c694-4ec3-8313-ce4a3c10691d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.877 238945 DEBUG oslo_concurrency.lockutils [req-07f28d82-f644-4644-b577-e7cbb2912bd3 req-fe2e6d0f-c694-4ec3-8313-ce4a3c10691d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.878 238945 DEBUG oslo_concurrency.lockutils [req-07f28d82-f644-4644-b577-e7cbb2912bd3 req-fe2e6d0f-c694-4ec3-8313-ce4a3c10691d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.878 238945 DEBUG nova.compute.manager [req-07f28d82-f644-4644-b577-e7cbb2912bd3 req-fe2e6d0f-c694-4ec3-8313-ce4a3c10691d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Processing event network-vif-plugged-addbb44d-80d2-4bb4-ae54-d198de4a9755 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.879 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.880 238945 INFO nova.compute.manager [None req-582571a4-f1e1-4b34-a916-d5b3614688c8 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Get console output
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.884 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523987.8839445, 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.884 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] VM Resumed (Lifecycle Event)
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.886 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.887 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.892 238945 INFO nova.virt.libvirt.driver [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Instance spawned successfully.
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.893 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002982403203178416 of space, bias 1.0, pg target 0.8947209609535248 quantized to 32 (current 32)
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693378901073319 of space, bias 1.0, pg target 0.20080136703219958 quantized to 32 (current 32)
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0344823325697998e-06 of space, bias 4.0, pg target 0.0012413787990837597 quantized to 16 (current 16)
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:26:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.925 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.926 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.926 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.926 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.927 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.927 238945 DEBUG nova.virt.libvirt.driver [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.930 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.932 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:26:27 np0005597378 nova_compute[238941]: 2026-01-27 14:26:27.965 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.006 238945 INFO nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Took 10.92 seconds to spawn the instance on the hypervisor.
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.007 238945 DEBUG nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.084 238945 INFO nova.compute.manager [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Took 12.34 seconds to build instance.
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.104 238945 DEBUG oslo_concurrency.lockutils [None req-672ac6a2-e040-471e-80fd-5cfae2f288b2 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.107 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.732 238945 DEBUG nova.network.neutron [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updating instance_info_cache with network_info: [{"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.762 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.763 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Instance network_info: |[{"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.763 238945 DEBUG oslo_concurrency.lockutils [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.763 238945 DEBUG nova.network.neutron [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Refreshing network info cache for port b3dcf519-7c56-406e-a80a-e3a3bdf38620 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.766 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Start _get_guest_xml network_info=[{"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.770 238945 WARNING nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.775 238945 DEBUG nova.virt.libvirt.host [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.777 238945 DEBUG nova.virt.libvirt.host [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.779 238945 DEBUG nova.virt.libvirt.host [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.780 238945 DEBUG nova.virt.libvirt.host [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.780 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.780 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.781 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.781 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.781 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.781 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.782 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.782 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.782 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.783 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.783 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.783 238945 DEBUG nova.virt.hardware [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.786 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:26:28 np0005597378 nova_compute[238941]: 2026-01-27 14:26:28.829 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:26:29 np0005597378 nova_compute[238941]: 2026-01-27 14:26:29.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:26:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 353 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 420 KiB/s rd, 4.3 MiB/s wr, 100 op/s
Jan 27 09:26:29 np0005597378 nova_compute[238941]: 2026-01-27 14:26:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:26:29 np0005597378 nova_compute[238941]: 2026-01-27 14:26:29.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:26:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:26:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506905460' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:26:29 np0005597378 nova_compute[238941]: 2026-01-27 14:26:29.532 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:29 np0005597378 nova_compute[238941]: 2026-01-27 14:26:29.556 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:29 np0005597378 nova_compute[238941]: 2026-01-27 14:26:29.562 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:26:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3563204661' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.193 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.195 238945 DEBUG nova.virt.libvirt.vif [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:26:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=148,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCh8iTJ79IQTfZKcyGTVMRNhc1Qm2nC8XJCZVveMYKvrisO0G2Hkg5wsreDtMeEvqvZQDV6CtGF/BIH4aUXaCB3Qn6MyyjLTrOg7bxvi4R7OJPdkJUXBnb+rWUYAQ0htRg==',key_name='tempest-TestSecurityGroupsBasicOps-1357076021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-oo4hn6a0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:26:22Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=dab9f91a-166a-4055-95d9-c98bede611a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.196 238945 DEBUG nova.network.os_vif_util [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.197 238945 DEBUG nova.network.os_vif_util [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.199 238945 DEBUG nova.objects.instance [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid dab9f91a-166a-4055-95d9-c98bede611a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.221 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  <uuid>dab9f91a-166a-4055-95d9-c98bede611a4</uuid>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  <name>instance-00000094</name>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674</nova:name>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:26:28</nova:creationTime>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:        <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:        <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:        <nova:port uuid="b3dcf519-7c56-406e-a80a-e3a3bdf38620">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <entry name="serial">dab9f91a-166a-4055-95d9-c98bede611a4</entry>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <entry name="uuid">dab9f91a-166a-4055-95d9-c98bede611a4</entry>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/dab9f91a-166a-4055-95d9-c98bede611a4_disk">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/dab9f91a-166a-4055-95d9-c98bede611a4_disk.config">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:d4:9d:97"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <target dev="tapb3dcf519-7c"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/console.log" append="off"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:26:30 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:26:30 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:26:30 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:26:30 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.221 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Preparing to wait for external event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.222 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.222 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.222 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.223 238945 DEBUG nova.virt.libvirt.vif [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:26:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=148,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCh8iTJ79IQTfZKcyGTVMRNhc1Qm2nC8XJCZVveMYKvrisO0G2Hkg5wsreDtMeEvqvZQDV6CtGF/BIH4aUXaCB3Qn6MyyjLTrOg7bxvi4R7OJPdkJUXBnb+rWUYAQ0htRg==',key_name='tempest-TestSecurityGroupsBasicOps-1357076021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-oo4hn6a0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:26:22Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=dab9f91a-166a-4055-95d9-c98bede611a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.224 238945 DEBUG nova.network.os_vif_util [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.224 238945 DEBUG nova.network.os_vif_util [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.225 238945 DEBUG os_vif [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.226 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.227 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.231 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3dcf519-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.232 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3dcf519-7c, col_values=(('external_ids', {'iface-id': 'b3dcf519-7c56-406e-a80a-e3a3bdf38620', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:9d:97', 'vm-uuid': 'dab9f91a-166a-4055-95d9-c98bede611a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.234 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.236 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:26:30 np0005597378 NetworkManager[48904]: <info>  [1769523990.2358] manager: (tapb3dcf519-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.242 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.244 238945 INFO os_vif [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c')#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.396 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.396 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.479 238945 DEBUG nova.compute.manager [req-c74880fb-9fe4-4902-9ffb-e11c2ea712db req-f86d27ef-a88d-4fc2-9644-0528dd6edf1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received event network-vif-plugged-addbb44d-80d2-4bb4-ae54-d198de4a9755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.480 238945 DEBUG oslo_concurrency.lockutils [req-c74880fb-9fe4-4902-9ffb-e11c2ea712db req-f86d27ef-a88d-4fc2-9644-0528dd6edf1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.480 238945 DEBUG oslo_concurrency.lockutils [req-c74880fb-9fe4-4902-9ffb-e11c2ea712db req-f86d27ef-a88d-4fc2-9644-0528dd6edf1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.480 238945 DEBUG oslo_concurrency.lockutils [req-c74880fb-9fe4-4902-9ffb-e11c2ea712db req-f86d27ef-a88d-4fc2-9644-0528dd6edf1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.481 238945 DEBUG nova.compute.manager [req-c74880fb-9fe4-4902-9ffb-e11c2ea712db req-f86d27ef-a88d-4fc2-9644-0528dd6edf1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] No waiting events found dispatching network-vif-plugged-addbb44d-80d2-4bb4-ae54-d198de4a9755 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.481 238945 WARNING nova.compute.manager [req-c74880fb-9fe4-4902-9ffb-e11c2ea712db req-f86d27ef-a88d-4fc2-9644-0528dd6edf1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received unexpected event network-vif-plugged-addbb44d-80d2-4bb4-ae54-d198de4a9755 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.506 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.506 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.507 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:d4:9d:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.507 238945 INFO nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Using config drive#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.528 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.575 238945 INFO nova.compute.manager [None req-398e471c-4844-47c6-b3ea-d96c39c88276 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Get console output#033[00m
Jan 27 09:26:30 np0005597378 nova_compute[238941]: 2026-01-27 14:26:30.579 282814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 27 09:26:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:26:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 162 op/s
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.476 238945 INFO nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Creating config drive at /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/disk.config#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.483 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppvufwu34 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.631 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppvufwu34" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.666 238945 DEBUG nova.storage.rbd_utils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image dab9f91a-166a-4055-95d9-c98bede611a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.671 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/disk.config dab9f91a-166a-4055-95d9-c98bede611a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.930 238945 DEBUG nova.network.neutron [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updated VIF entry in instance network info cache for port b3dcf519-7c56-406e-a80a-e3a3bdf38620. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.930 238945 DEBUG nova.network.neutron [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updating instance_info_cache with network_info: [{"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.947 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.947 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.948 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.948 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.948 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.949 238945 INFO nova.compute.manager [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Terminating instance#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.950 238945 DEBUG nova.compute.manager [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:26:31 np0005597378 nova_compute[238941]: 2026-01-27 14:26:31.951 238945 DEBUG oslo_concurrency.lockutils [req-b26609ab-f426-4bf5-bb11-2fd6ae13d9c2 req-deb7a326-9dfa-4b7e-9ba9-232adde5d181 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:26:32 np0005597378 kernel: tap184493bf-c3 (unregistering): left promiscuous mode
Jan 27 09:26:32 np0005597378 NetworkManager[48904]: <info>  [1769523992.3562] device (tap184493bf-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.365 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.366 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:32Z|01562|binding|INFO|Releasing lport 184493bf-c349-4722-ac0f-2c428638e3d3 from this chassis (sb_readonly=0)
Jan 27 09:26:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:32Z|01563|binding|INFO|Setting lport 184493bf-c349-4722-ac0f-2c428638e3d3 down in Southbound
Jan 27 09:26:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:32Z|01564|binding|INFO|Removing iface tap184493bf-c3 ovn-installed in OVS
Jan 27 09:26:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:32.372 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:84:96 10.100.0.3'], port_security=['fa:16:3e:fe:84:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'fb802d98-5381-45db-a4c3-c14ad2e557d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b18e2903-a184-4f44-9330-e27dd970207e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f23ddb09daa435abd7b8175bd920876', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7158c75-b922-4d85-bcb0-16239fae726c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8fdd222-623b-4052-bc0a-791c75513848, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=184493bf-c349-4722-ac0f-2c428638e3d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:26:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:32.373 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 184493bf-c349-4722-ac0f-2c428638e3d3 in datapath b18e2903-a184-4f44-9330-e27dd970207e unbound from our chassis#033[00m
Jan 27 09:26:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:32.374 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b18e2903-a184-4f44-9330-e27dd970207e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:26:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:32.374 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6919cd91-70fc-4645-a75d-175c2cffd3fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:32.375 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e namespace which is not needed anymore#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.393 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:32 np0005597378 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000092.scope: Deactivated successfully.
Jan 27 09:26:32 np0005597378 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000092.scope: Consumed 13.943s CPU time.
Jan 27 09:26:32 np0005597378 systemd-machined[207425]: Machine qemu-178-instance-00000092 terminated.
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.566 238945 DEBUG nova.compute.manager [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-changed-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.566 238945 DEBUG nova.compute.manager [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Refreshing instance network info cache due to event network-changed-184493bf-c349-4722-ac0f-2c428638e3d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.566 238945 DEBUG oslo_concurrency.lockutils [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.566 238945 DEBUG oslo_concurrency.lockutils [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.567 238945 DEBUG nova.network.neutron [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Refreshing network info cache for port 184493bf-c349-4722-ac0f-2c428638e3d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.589 238945 INFO nova.virt.libvirt.driver [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Instance destroyed successfully.#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.589 238945 DEBUG nova.objects.instance [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lazy-loading 'resources' on Instance uuid fb802d98-5381-45db-a4c3-c14ad2e557d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.604 238945 DEBUG nova.virt.libvirt.vif [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:25:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1721864726',display_name='tempest-TestNetworkBasicOps-server-1721864726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1721864726',id=146,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKiAfQFWh/P5sR8CJeetN5Y+QKWiGTUVdn0zcI4otWOiDUvErfJPsGzM/uL1uYTiQAgBsUOPfQ5T6SYnCAImXeFhJ2GTbmK3gQHdA7VHWmfjtXGf++SSpErbTeu32DMIvg==',key_name='tempest-TestNetworkBasicOps-63078609',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:26:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f23ddb09daa435abd7b8175bd920876',ramdisk_id='',reservation_id='r-lox03tl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-761138983',owner_user_name='tempest-TestNetworkBasicOps-761138983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:26:02Z,user_data=None,user_id='8d46184e35e7421399cd129ff694002b',uuid=fb802d98-5381-45db-a4c3-c14ad2e557d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.605 238945 DEBUG nova.network.os_vif_util [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converting VIF {"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.606 238945 DEBUG nova.network.os_vif_util [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.606 238945 DEBUG os_vif [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.610 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.610 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap184493bf-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.611 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.613 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.616 238945 DEBUG oslo_concurrency.processutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/disk.config dab9f91a-166a-4055-95d9-c98bede611a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.945s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.616 238945 INFO nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Deleting local config drive /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4/disk.config because it was imported into RBD.#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.617 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.619 238945 INFO os_vif [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:84:96,bridge_name='br-int',has_traffic_filtering=True,id=184493bf-c349-4722-ac0f-2c428638e3d3,network=Network(b18e2903-a184-4f44-9330-e27dd970207e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184493bf-c3')#033[00m
Jan 27 09:26:32 np0005597378 kernel: tapb3dcf519-7c: entered promiscuous mode
Jan 27 09:26:32 np0005597378 systemd-udevd[372367]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.678 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:32 np0005597378 NetworkManager[48904]: <info>  [1769523992.6792] manager: (tapb3dcf519-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/642)
Jan 27 09:26:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:32Z|01565|binding|INFO|Claiming lport b3dcf519-7c56-406e-a80a-e3a3bdf38620 for this chassis.
Jan 27 09:26:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:32Z|01566|binding|INFO|b3dcf519-7c56-406e-a80a-e3a3bdf38620: Claiming fa:16:3e:d4:9d:97 10.100.0.9
Jan 27 09:26:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:32.689 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:9d:97 10.100.0.9'], port_security=['fa:16:3e:d4:9d:97 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dab9f91a-166a-4055-95d9-c98bede611a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '401bcc9e-e379-4df5-b1b1-d040fa28b0f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6618af43-1391-409b-869f-1324bc7e5707, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b3dcf519-7c56-406e-a80a-e3a3bdf38620) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:26:32 np0005597378 NetworkManager[48904]: <info>  [1769523992.6955] device (tapb3dcf519-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:26:32 np0005597378 NetworkManager[48904]: <info>  [1769523992.6969] device (tapb3dcf519-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:26:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:32Z|01567|binding|INFO|Setting lport b3dcf519-7c56-406e-a80a-e3a3bdf38620 up in Southbound
Jan 27 09:26:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:32Z|01568|binding|INFO|Setting lport b3dcf519-7c56-406e-a80a-e3a3bdf38620 ovn-installed in OVS
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.704 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.706 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.711 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:32 np0005597378 systemd-machined[207425]: New machine qemu-180-instance-00000094.
Jan 27 09:26:32 np0005597378 systemd[1]: Started Virtual Machine qemu-180-instance-00000094.
Jan 27 09:26:32 np0005597378 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [NOTICE]   (371564) : haproxy version is 2.8.14-c23fe91
Jan 27 09:26:32 np0005597378 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [NOTICE]   (371564) : path to executable is /usr/sbin/haproxy
Jan 27 09:26:32 np0005597378 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [WARNING]  (371564) : Exiting Master process...
Jan 27 09:26:32 np0005597378 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [WARNING]  (371564) : Exiting Master process...
Jan 27 09:26:32 np0005597378 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [ALERT]    (371564) : Current worker (371566) exited with code 143 (Terminated)
Jan 27 09:26:32 np0005597378 neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e[371557]: [WARNING]  (371564) : All workers exited. Exiting... (0)
Jan 27 09:26:32 np0005597378 systemd[1]: libpod-2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478.scope: Deactivated successfully.
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.763 238945 DEBUG nova.compute.manager [req-e2360c83-fb8e-4c8d-9777-6a416d1e363a req-bbc169c0-dd4d-48c6-b613-cdca6a9d3d2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-vif-unplugged-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.763 238945 DEBUG oslo_concurrency.lockutils [req-e2360c83-fb8e-4c8d-9777-6a416d1e363a req-bbc169c0-dd4d-48c6-b613-cdca6a9d3d2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.764 238945 DEBUG oslo_concurrency.lockutils [req-e2360c83-fb8e-4c8d-9777-6a416d1e363a req-bbc169c0-dd4d-48c6-b613-cdca6a9d3d2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.764 238945 DEBUG oslo_concurrency.lockutils [req-e2360c83-fb8e-4c8d-9777-6a416d1e363a req-bbc169c0-dd4d-48c6-b613-cdca6a9d3d2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.764 238945 DEBUG nova.compute.manager [req-e2360c83-fb8e-4c8d-9777-6a416d1e363a req-bbc169c0-dd4d-48c6-b613-cdca6a9d3d2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] No waiting events found dispatching network-vif-unplugged-184493bf-c349-4722-ac0f-2c428638e3d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:26:32 np0005597378 nova_compute[238941]: 2026-01-27 14:26:32.764 238945 DEBUG nova.compute.manager [req-e2360c83-fb8e-4c8d-9777-6a416d1e363a req-bbc169c0-dd4d-48c6-b613-cdca6a9d3d2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-vif-unplugged-184493bf-c349-4722-ac0f-2c428638e3d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:26:32 np0005597378 podman[372384]: 2026-01-27 14:26:32.766830117 +0000 UTC m=+0.282544161 container died 2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:26:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478-userdata-shm.mount: Deactivated successfully.
Jan 27 09:26:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5c82db2d2d068220333869382cd542849d31de17f853203896cd066e2c627c15-merged.mount: Deactivated successfully.
Jan 27 09:26:32 np0005597378 podman[372384]: 2026-01-27 14:26:32.89632006 +0000 UTC m=+0.412034104 container cleanup 2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:26:32 np0005597378 systemd[1]: libpod-conmon-2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478.scope: Deactivated successfully.
Jan 27 09:26:33 np0005597378 nova_compute[238941]: 2026-01-27 14:26:33.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:33 np0005597378 podman[372464]: 2026-01-27 14:26:33.139548669 +0000 UTC m=+0.209391938 container remove 2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.146 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf14aab-eeac-4798-9e3e-123f3451949e]: (4, ('Tue Jan 27 02:26:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e (2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478)\n2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478\nTue Jan 27 02:26:32 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e (2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478)\n2f9e09790fa40d79de47f51df96ecb6a5e5d9d5401401938fef6ba3a28ddc478\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.148 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c46dc2c4-aaf4-45c2-83df-016e5320757e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.149 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb18e2903-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:33 np0005597378 nova_compute[238941]: 2026-01-27 14:26:33.150 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:33 np0005597378 kernel: tapb18e2903-a0: left promiscuous mode
Jan 27 09:26:33 np0005597378 nova_compute[238941]: 2026-01-27 14:26:33.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.169 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[af8c3098-a1da-4388-882e-119b3a3ba0a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.181 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae091cd-0cf1-44cc-ba16-8d91aed1c1d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.183 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b3380004-80d3-4433-a8c8-59b2970cf436]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.203 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9658a8-7456-4cc8-b376-16699c7ba114]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660166, 'reachable_time': 39941, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372513, 'error': None, 'target': 'ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:33 np0005597378 systemd[1]: run-netns-ovnmeta\x2db18e2903\x2da184\x2d4f44\x2d9330\x2de27dd970207e.mount: Deactivated successfully.
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.209 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b18e2903-a184-4f44-9330-e27dd970207e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.209 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[4091f487-7971-4a54-98d0-f14959962619]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.210 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b3dcf519-7c56-406e-a80a-e3a3bdf38620 in datapath 07470876-8c4c-4f83-bb7f-48d1eefc447e unbound from our chassis#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.211 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 07470876-8c4c-4f83-bb7f-48d1eefc447e#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.227 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a44a706b-7461-4470-9ee5-d172c54e3282]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 128 op/s
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.260 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ef64fac0-ca76-4fcf-9ff2-d07452e0719c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.265 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[8af87552-5c6e-45f5-9a86-1094193a9ffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.302 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[71c13da9-cece-4a40-a97f-1b83b10d4502]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.327 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[14d4a764-78a4-4d36-9152-a56d6d3b6bbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07470876-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:20:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659273, 'reachable_time': 39548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372526, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.346 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd29ae55-ab68-4969-aa70-00fe218f70c2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap07470876-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659287, 'tstamp': 659287}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372527, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap07470876-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659290, 'tstamp': 659290}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372527, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.348 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07470876-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:33 np0005597378 nova_compute[238941]: 2026-01-27 14:26:33.350 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:33 np0005597378 nova_compute[238941]: 2026-01-27 14:26:33.351 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:33 np0005597378 nova_compute[238941]: 2026-01-27 14:26:33.352 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523993.3518388, dab9f91a-166a-4055-95d9-c98bede611a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.352 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07470876-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:33 np0005597378 nova_compute[238941]: 2026-01-27 14:26:33.352 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] VM Started (Lifecycle Event)#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.352 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.352 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap07470876-80, col_values=(('external_ids', {'iface-id': 'd43985de-77e3-4402-a6c7-37813cd055a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:33 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:33.353 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:26:33 np0005597378 nova_compute[238941]: 2026-01-27 14:26:33.376 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:26:33 np0005597378 nova_compute[238941]: 2026-01-27 14:26:33.381 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523993.3519351, dab9f91a-166a-4055-95d9-c98bede611a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:26:33 np0005597378 nova_compute[238941]: 2026-01-27 14:26:33.381 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:26:33 np0005597378 nova_compute[238941]: 2026-01-27 14:26:33.403 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:26:33 np0005597378 nova_compute[238941]: 2026-01-27 14:26:33.407 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:26:33 np0005597378 nova_compute[238941]: 2026-01-27 14:26:33.443 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.050 238945 DEBUG nova.network.neutron [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updated VIF entry in instance network info cache for port 184493bf-c349-4722-ac0f-2c428638e3d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.050 238945 DEBUG nova.network.neutron [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updating instance_info_cache with network_info: [{"id": "184493bf-c349-4722-ac0f-2c428638e3d3", "address": "fa:16:3e:fe:84:96", "network": {"id": "b18e2903-a184-4f44-9330-e27dd970207e", "bridge": "br-int", "label": "tempest-network-smoke--1277636293", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f23ddb09daa435abd7b8175bd920876", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184493bf-c3", "ovs_interfaceid": "184493bf-c349-4722-ac0f-2c428638e3d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.139 238945 DEBUG oslo_concurrency.lockutils [req-2fa6a8d3-a908-4157-be86-55b806ff0cf6 req-efadd7ca-e246-4c2f-a31b-aec05cc1e9d0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-fb802d98-5381-45db-a4c3-c14ad2e557d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.685 238945 INFO nova.virt.libvirt.driver [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Deleting instance files /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1_del#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.686 238945 INFO nova.virt.libvirt.driver [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Deletion of /var/lib/nova/instances/fb802d98-5381-45db-a4c3-c14ad2e557d1_del complete#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.701 238945 DEBUG nova.compute.manager [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received event network-changed-addbb44d-80d2-4bb4-ae54-d198de4a9755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.702 238945 DEBUG nova.compute.manager [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Refreshing instance network info cache due to event network-changed-addbb44d-80d2-4bb4-ae54-d198de4a9755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.702 238945 DEBUG oslo_concurrency.lockutils [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.702 238945 DEBUG oslo_concurrency.lockutils [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.703 238945 DEBUG nova.network.neutron [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Refreshing network info cache for port addbb44d-80d2-4bb4-ae54-d198de4a9755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.749 238945 INFO nova.compute.manager [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Took 2.80 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.750 238945 DEBUG oslo.service.loopingcall [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.750 238945 DEBUG nova.compute.manager [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.750 238945 DEBUG nova.network.neutron [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.918 238945 DEBUG nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.919 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.919 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.919 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.919 238945 DEBUG nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] No waiting events found dispatching network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.919 238945 WARNING nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received unexpected event network-vif-plugged-184493bf-c349-4722-ac0f-2c428638e3d3 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.920 238945 DEBUG nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.920 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.920 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.920 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.920 238945 DEBUG nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Processing event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.920 238945 DEBUG nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.921 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.921 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.921 238945 DEBUG oslo_concurrency.lockutils [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.921 238945 DEBUG nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] No waiting events found dispatching network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.921 238945 WARNING nova.compute.manager [req-ecd4d410-89cd-45a0-af4f-714588597488 req-414f0470-6886-4cc9-bc73-a19aa906bd2a 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received unexpected event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.922 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.925 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769523994.9251897, dab9f91a-166a-4055-95d9-c98bede611a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.925 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.927 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.930 238945 INFO nova.virt.libvirt.driver [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Instance spawned successfully.#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.930 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.946 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.950 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.953 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.953 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.954 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.954 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.954 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.955 238945 DEBUG nova.virt.libvirt.driver [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:26:34 np0005597378 nova_compute[238941]: 2026-01-27 14:26:34.984 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:26:35 np0005597378 nova_compute[238941]: 2026-01-27 14:26:35.013 238945 INFO nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Took 12.10 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:26:35 np0005597378 nova_compute[238941]: 2026-01-27 14:26:35.013 238945 DEBUG nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:26:35 np0005597378 nova_compute[238941]: 2026-01-27 14:26:35.227 238945 INFO nova.compute.manager [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Took 13.76 seconds to build instance.#033[00m
Jan 27 09:26:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 324 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 160 op/s
Jan 27 09:26:35 np0005597378 nova_compute[238941]: 2026-01-27 14:26:35.248 238945 DEBUG oslo_concurrency.lockutils [None req-88bda194-748c-4bec-879b-5d47ec0f49d7 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:26:36 np0005597378 nova_compute[238941]: 2026-01-27 14:26:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:26:36 np0005597378 nova_compute[238941]: 2026-01-27 14:26:36.540 238945 DEBUG nova.network.neutron [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:36 np0005597378 nova_compute[238941]: 2026-01-27 14:26:36.590 238945 INFO nova.compute.manager [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Took 1.84 seconds to deallocate network for instance.#033[00m
Jan 27 09:26:36 np0005597378 nova_compute[238941]: 2026-01-27 14:26:36.656 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:36 np0005597378 nova_compute[238941]: 2026-01-27 14:26:36.658 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:36 np0005597378 nova_compute[238941]: 2026-01-27 14:26:36.798 238945 DEBUG nova.compute.manager [req-e276c531-cd21-41f8-a52f-d3823039e28f req-0cf52e96-a48e-4ac9-9dfd-908d32c3a9e8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Received event network-vif-deleted-184493bf-c349-4722-ac0f-2c428638e3d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:36 np0005597378 nova_compute[238941]: 2026-01-27 14:26:36.832 238945 DEBUG oslo_concurrency.processutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 132 op/s
Jan 27 09:26:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:26:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4108447260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:26:37 np0005597378 nova_compute[238941]: 2026-01-27 14:26:37.397 238945 DEBUG oslo_concurrency.processutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:37 np0005597378 nova_compute[238941]: 2026-01-27 14:26:37.403 238945 DEBUG nova.compute.provider_tree [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:26:37 np0005597378 nova_compute[238941]: 2026-01-27 14:26:37.423 238945 DEBUG nova.scheduler.client.report [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:26:37 np0005597378 nova_compute[238941]: 2026-01-27 14:26:37.455 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:37 np0005597378 nova_compute[238941]: 2026-01-27 14:26:37.476 238945 INFO nova.scheduler.client.report [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Deleted allocations for instance fb802d98-5381-45db-a4c3-c14ad2e557d1#033[00m
Jan 27 09:26:37 np0005597378 nova_compute[238941]: 2026-01-27 14:26:37.549 238945 DEBUG oslo_concurrency.lockutils [None req-dcc3da06-21c2-4ccf-88c7-c7d108ce2791 8d46184e35e7421399cd129ff694002b 6f23ddb09daa435abd7b8175bd920876 - - default default] Lock "fb802d98-5381-45db-a4c3-c14ad2e557d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:37 np0005597378 nova_compute[238941]: 2026-01-27 14:26:37.614 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:37 np0005597378 nova_compute[238941]: 2026-01-27 14:26:37.958 238945 DEBUG nova.network.neutron [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updated VIF entry in instance network info cache for port addbb44d-80d2-4bb4-ae54-d198de4a9755. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:26:37 np0005597378 nova_compute[238941]: 2026-01-27 14:26:37.959 238945 DEBUG nova.network.neutron [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updating instance_info_cache with network_info: [{"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:37 np0005597378 nova_compute[238941]: 2026-01-27 14:26:37.981 238945 DEBUG oslo_concurrency.lockutils [req-72d6c420-d6cb-4818-a0bf-56880f119865 req-f7b9c5fb-41dd-4b6c-9798-5b897f196f1b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:26:38 np0005597378 nova_compute[238941]: 2026-01-27 14:26:38.111 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:38 np0005597378 nova_compute[238941]: 2026-01-27 14:26:38.891 238945 DEBUG nova.compute.manager [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-changed-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:38 np0005597378 nova_compute[238941]: 2026-01-27 14:26:38.892 238945 DEBUG nova.compute.manager [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Refreshing instance network info cache due to event network-changed-b3dcf519-7c56-406e-a80a-e3a3bdf38620. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:26:38 np0005597378 nova_compute[238941]: 2026-01-27 14:26:38.892 238945 DEBUG oslo_concurrency.lockutils [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:26:38 np0005597378 nova_compute[238941]: 2026-01-27 14:26:38.892 238945 DEBUG oslo_concurrency.lockutils [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:26:38 np0005597378 nova_compute[238941]: 2026-01-27 14:26:38.892 238945 DEBUG nova.network.neutron [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Refreshing network info cache for port b3dcf519-7c56-406e-a80a-e3a3bdf38620 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:26:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 821 KiB/s wr, 123 op/s
Jan 27 09:26:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:26:40 np0005597378 nova_compute[238941]: 2026-01-27 14:26:40.971 238945 DEBUG nova.compute.manager [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-changed-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:40 np0005597378 nova_compute[238941]: 2026-01-27 14:26:40.971 238945 DEBUG nova.compute.manager [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Refreshing instance network info cache due to event network-changed-b3dcf519-7c56-406e-a80a-e3a3bdf38620. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:26:40 np0005597378 nova_compute[238941]: 2026-01-27 14:26:40.972 238945 DEBUG oslo_concurrency.lockutils [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:26:41 np0005597378 nova_compute[238941]: 2026-01-27 14:26:41.132 238945 DEBUG nova.network.neutron [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updated VIF entry in instance network info cache for port b3dcf519-7c56-406e-a80a-e3a3bdf38620. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:26:41 np0005597378 nova_compute[238941]: 2026-01-27 14:26:41.132 238945 DEBUG nova.network.neutron [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updating instance_info_cache with network_info: [{"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:41 np0005597378 nova_compute[238941]: 2026-01-27 14:26:41.151 238945 DEBUG oslo_concurrency.lockutils [req-2102966e-b839-4643-a595-0330c2ed79c6 req-76e4c916-0f0a-4139-aafe-27454af66368 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:26:41 np0005597378 nova_compute[238941]: 2026-01-27 14:26:41.152 238945 DEBUG oslo_concurrency.lockutils [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:26:41 np0005597378 nova_compute[238941]: 2026-01-27 14:26:41.153 238945 DEBUG nova.network.neutron [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Refreshing network info cache for port b3dcf519-7c56-406e-a80a-e3a3bdf38620 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:26:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 824 KiB/s wr, 187 op/s
Jan 27 09:26:42 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:42Z|01569|binding|INFO|Releasing lport d0dd5362-2188-444d-9dd1-a00fea1ddb1a from this chassis (sb_readonly=0)
Jan 27 09:26:42 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:42Z|01570|binding|INFO|Releasing lport d43985de-77e3-4402-a6c7-37813cd055a9 from this chassis (sb_readonly=0)
Jan 27 09:26:42 np0005597378 nova_compute[238941]: 2026-01-27 14:26:42.192 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:42 np0005597378 nova_compute[238941]: 2026-01-27 14:26:42.616 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:42 np0005597378 podman[372551]: 2026-01-27 14:26:42.75447145 +0000 UTC m=+0.080913574 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 09:26:42 np0005597378 nova_compute[238941]: 2026-01-27 14:26:42.963 238945 DEBUG nova.network.neutron [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updated VIF entry in instance network info cache for port b3dcf519-7c56-406e-a80a-e3a3bdf38620. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:26:42 np0005597378 nova_compute[238941]: 2026-01-27 14:26:42.963 238945 DEBUG nova.network.neutron [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updating instance_info_cache with network_info: [{"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:42 np0005597378 nova_compute[238941]: 2026-01-27 14:26:42.977 238945 DEBUG oslo_concurrency.lockutils [req-a9008d98-a89e-4f2f-a98b-6935d8c23400 req-db44bcb8-cad8-4880-8bd5-28140ea331a4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-dab9f91a-166a-4055-95d9-c98bede611a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:26:43 np0005597378 nova_compute[238941]: 2026-01-27 14:26:43.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 103 op/s
Jan 27 09:26:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:43Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3c:49:d9 10.100.0.10
Jan 27 09:26:43 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:43Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3c:49:d9 10.100.0.10
Jan 27 09:26:44 np0005597378 podman[372571]: 2026-01-27 14:26:44.764528847 +0000 UTC m=+0.109537695 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:26:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 311 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 138 op/s
Jan 27 09:26:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:26:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:46.331 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:46.332 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:46.333 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 27 09:26:47 np0005597378 nova_compute[238941]: 2026-01-27 14:26:47.587 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769523992.585851, fb802d98-5381-45db-a4c3-c14ad2e557d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:26:47 np0005597378 nova_compute[238941]: 2026-01-27 14:26:47.588 238945 INFO nova.compute.manager [-] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:26:47 np0005597378 nova_compute[238941]: 2026-01-27 14:26:47.605 238945 DEBUG nova.compute.manager [None req-7b816850-f4e2-4c85-b132-a5900dd5f5cd - - - - - -] [instance: fb802d98-5381-45db-a4c3-c14ad2e557d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:26:47 np0005597378 nova_compute[238941]: 2026-01-27 14:26:47.620 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:26:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:26:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:26:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:26:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:26:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:26:48 np0005597378 nova_compute[238941]: 2026-01-27 14:26:48.210 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:49Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:9d:97 10.100.0.9
Jan 27 09:26:49 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:49Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:9d:97 10.100.0.9
Jan 27 09:26:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Jan 27 09:26:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:26:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 191 op/s
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.244 238945 DEBUG nova.compute.manager [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received event network-changed-addbb44d-80d2-4bb4-ae54-d198de4a9755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.244 238945 DEBUG nova.compute.manager [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Refreshing instance network info cache due to event network-changed-addbb44d-80d2-4bb4-ae54-d198de4a9755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.244 238945 DEBUG oslo_concurrency.lockutils [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.244 238945 DEBUG oslo_concurrency.lockutils [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.245 238945 DEBUG nova.network.neutron [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Refreshing network info cache for port addbb44d-80d2-4bb4-ae54-d198de4a9755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.321 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.321 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.321 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.322 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.322 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.323 238945 INFO nova.compute.manager [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Terminating instance#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.324 238945 DEBUG nova.compute.manager [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.372 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:52 np0005597378 kernel: tapaddbb44d-80 (unregistering): left promiscuous mode
Jan 27 09:26:52 np0005597378 NetworkManager[48904]: <info>  [1769524012.5966] device (tapaddbb44d-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.618 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.619 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.635 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:52Z|01571|binding|INFO|Releasing lport addbb44d-80d2-4bb4-ae54-d198de4a9755 from this chassis (sb_readonly=0)
Jan 27 09:26:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:52Z|01572|binding|INFO|Setting lport addbb44d-80d2-4bb4-ae54-d198de4a9755 down in Southbound
Jan 27 09:26:52 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:52Z|01573|binding|INFO|Removing iface tapaddbb44d-80 ovn-installed in OVS
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.646 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:49:d9 10.100.0.10 2001:db8::f816:3eff:fe3c:49d9'], port_security=['fa:16:3e:3c:49:d9 10.100.0.10 2001:db8::f816:3eff:fe3c:49d9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8::f816:3eff:fe3c:49d9/64', 'neutron:device_id': '1b157d23-83f3-456c-8dae-d4ac1bcf3cdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd296c7a5-f385-4534-8f02-e685c086b3e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=addbb44d-80d2-4bb4-ae54-d198de4a9755) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.647 154802 INFO neutron.agent.ovn.metadata.agent [-] Port addbb44d-80d2-4bb4-ae54-d198de4a9755 in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c unbound from our chassis#033[00m
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.648 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.652 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.665 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[91faf482-0a65-4caf-ae19-11b4dfe7e944]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:52 np0005597378 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000093.scope: Deactivated successfully.
Jan 27 09:26:52 np0005597378 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000093.scope: Consumed 15.446s CPU time.
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.699 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[167a345c-938e-4cba-94b9-6d5f6be637ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:26:52 np0005597378 systemd-machined[207425]: Machine qemu-179-instance-00000093 terminated.
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.702 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[870a2c4c-3b45-47d6-a184-8bfce276db04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.736 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3efd4fa0-174f-4421-90a4-f05c43780114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.748 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.759 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[090091a6-b0e7-402f-b8de-4b0f5b29c5ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bdc2751-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:d5:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658783, 'reachable_time': 38942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372654, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.760 238945 INFO nova.virt.libvirt.driver [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Instance destroyed successfully.
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.761 238945 DEBUG nova.objects.instance [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.778 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[978c3551-914f-4509-bb03-ea6aaea980e7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3bdc2751-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658797, 'tstamp': 658797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372666, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3bdc2751-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658801, 'tstamp': 658801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372666, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.780 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bdc2751-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.782 238945 DEBUG nova.virt.libvirt.vif [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:26:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-285236295',display_name='tempest-TestGettingAddress-server-285236295',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-285236295',id=147,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0roQxlm7rBsyexGnEPiCXhcnKxfp9fw0cPDJksHn/jHZPciuBmzdmiU2uk+pbwv5Uo/ReziRDsxV2MeUBZqXHZ+jvBUJxD4UhnGHkH+Oh6gMJQT0kcpRrKqU08CfxQIQ==',key_name='tempest-TestGettingAddress-1692085847',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:26:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-8ldqv5w2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:26:28Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=1b157d23-83f3-456c-8dae-d4ac1bcf3cdb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.783 238945 DEBUG nova.network.os_vif_util [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.785 238945 DEBUG nova.network.os_vif_util [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.785 238945 DEBUG os_vif [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.787 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bdc2751-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.787 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.787 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.787 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bdc2751-90, col_values=(('external_ids', {'iface-id': 'd0dd5362-2188-444d-9dd1-a00fea1ddb1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:26:52 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:52.788 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.788 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaddbb44d-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.789 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:26:52 np0005597378 nova_compute[238941]: 2026-01-27 14:26:52.793 238945 INFO os_vif [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:49:d9,bridge_name='br-int',has_traffic_filtering=True,id=addbb44d-80d2-4bb4-ae54-d198de4a9755,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaddbb44d-80')
Jan 27 09:26:53 np0005597378 nova_compute[238941]: 2026-01-27 14:26:53.213 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:26:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Jan 27 09:26:53 np0005597378 nova_compute[238941]: 2026-01-27 14:26:53.299 238945 INFO nova.virt.libvirt.driver [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Deleting instance files /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_del
Jan 27 09:26:53 np0005597378 nova_compute[238941]: 2026-01-27 14:26:53.300 238945 INFO nova.virt.libvirt.driver [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Deletion of /var/lib/nova/instances/1b157d23-83f3-456c-8dae-d4ac1bcf3cdb_del complete
Jan 27 09:26:53 np0005597378 nova_compute[238941]: 2026-01-27 14:26:53.363 238945 INFO nova.compute.manager [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Took 1.04 seconds to destroy the instance on the hypervisor.
Jan 27 09:26:53 np0005597378 nova_compute[238941]: 2026-01-27 14:26:53.364 238945 DEBUG oslo.service.loopingcall [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 09:26:53 np0005597378 nova_compute[238941]: 2026-01-27 14:26:53.364 238945 DEBUG nova.compute.manager [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 09:26:53 np0005597378 nova_compute[238941]: 2026-01-27 14:26:53.364 238945 DEBUG nova.network.neutron [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 09:26:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:26:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:26:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:26:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:26:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:26:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:26:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:26:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:26:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:26:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:26:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:26:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:26:53 np0005597378 podman[372782]: 2026-01-27 14:26:53.992679438 +0000 UTC m=+0.057016439 container create cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_satoshi, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 09:26:54 np0005597378 systemd[1]: Started libpod-conmon-cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83.scope.
Jan 27 09:26:54 np0005597378 podman[372782]: 2026-01-27 14:26:53.959226136 +0000 UTC m=+0.023563167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:26:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:26:54 np0005597378 podman[372782]: 2026-01-27 14:26:54.105032388 +0000 UTC m=+0.169369439 container init cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 27 09:26:54 np0005597378 podman[372782]: 2026-01-27 14:26:54.113685111 +0000 UTC m=+0.178022112 container start cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 09:26:54 np0005597378 beautiful_satoshi[372799]: 167 167
Jan 27 09:26:54 np0005597378 systemd[1]: libpod-cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83.scope: Deactivated successfully.
Jan 27 09:26:54 np0005597378 conmon[372799]: conmon cc36ee37410346109f66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83.scope/container/memory.events
Jan 27 09:26:54 np0005597378 podman[372782]: 2026-01-27 14:26:54.12587453 +0000 UTC m=+0.190211531 container attach cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 27 09:26:54 np0005597378 podman[372782]: 2026-01-27 14:26:54.127554855 +0000 UTC m=+0.191891866 container died cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_satoshi, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 09:26:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ca2b83037add7d665080d6a083f5e70ec46381b8149ab254ee58d26622e69b4f-merged.mount: Deactivated successfully.
Jan 27 09:26:54 np0005597378 podman[372782]: 2026-01-27 14:26:54.215915208 +0000 UTC m=+0.280252199 container remove cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_satoshi, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:26:54 np0005597378 systemd[1]: libpod-conmon-cc36ee37410346109f66b7869e5199b9fd5dd5e5eee2607ad8889d6cff388d83.scope: Deactivated successfully.
Jan 27 09:26:54 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:26:54 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:26:54 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:26:54 np0005597378 podman[372825]: 2026-01-27 14:26:54.464614785 +0000 UTC m=+0.102841324 container create 5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 09:26:54 np0005597378 podman[372825]: 2026-01-27 14:26:54.386968742 +0000 UTC m=+0.025195311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:26:54 np0005597378 systemd[1]: Started libpod-conmon-5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1.scope.
Jan 27 09:26:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:26:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1095eb5b3acac1fc51fc33727901ab19e80b92f5d4fab9257a8a80c58a706527/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1095eb5b3acac1fc51fc33727901ab19e80b92f5d4fab9257a8a80c58a706527/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1095eb5b3acac1fc51fc33727901ab19e80b92f5d4fab9257a8a80c58a706527/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1095eb5b3acac1fc51fc33727901ab19e80b92f5d4fab9257a8a80c58a706527/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1095eb5b3acac1fc51fc33727901ab19e80b92f5d4fab9257a8a80c58a706527/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:54.621 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:54 np0005597378 podman[372825]: 2026-01-27 14:26:54.653946441 +0000 UTC m=+0.292173000 container init 5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_yonath, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:26:54 np0005597378 podman[372825]: 2026-01-27 14:26:54.662740569 +0000 UTC m=+0.300967108 container start 5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_yonath, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 09:26:54 np0005597378 podman[372825]: 2026-01-27 14:26:54.738830211 +0000 UTC m=+0.377056780 container attach 5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:26:54 np0005597378 nova_compute[238941]: 2026-01-27 14:26:54.777 238945 DEBUG nova.network.neutron [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updated VIF entry in instance network info cache for port addbb44d-80d2-4bb4-ae54-d198de4a9755. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:26:54 np0005597378 nova_compute[238941]: 2026-01-27 14:26:54.778 238945 DEBUG nova.network.neutron [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updating instance_info_cache with network_info: [{"id": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "address": "fa:16:3e:3c:49:d9", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:49d9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaddbb44d-80", "ovs_interfaceid": "addbb44d-80d2-4bb4-ae54-d198de4a9755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:54 np0005597378 nova_compute[238941]: 2026-01-27 14:26:54.819 238945 DEBUG oslo_concurrency.lockutils [req-c434842a-a35b-4627-ab6d-54f4f1118d39 req-d251c3af-2811-4cbd-854b-b552ff5e96f8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:26:54 np0005597378 nova_compute[238941]: 2026-01-27 14:26:54.824 238945 DEBUG nova.network.neutron [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:54 np0005597378 nova_compute[238941]: 2026-01-27 14:26:54.848 238945 INFO nova.compute.manager [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Took 1.48 seconds to deallocate network for instance.#033[00m
Jan 27 09:26:54 np0005597378 nova_compute[238941]: 2026-01-27 14:26:54.911 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:54 np0005597378 nova_compute[238941]: 2026-01-27 14:26:54.911 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:54 np0005597378 nova_compute[238941]: 2026-01-27 14:26:54.917 238945 DEBUG nova.compute.manager [req-cb3486ed-1127-4120-8fd1-ad64bdc61669 req-2877320c-f2aa-47b6-bdcc-c8625a8edb3d 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Received event network-vif-deleted-addbb44d-80d2-4bb4-ae54-d198de4a9755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.028 238945 DEBUG oslo_concurrency.processutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:55 np0005597378 elated_yonath[372842]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:26:55 np0005597378 elated_yonath[372842]: --> All data devices are unavailable
Jan 27 09:26:55 np0005597378 systemd[1]: libpod-5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1.scope: Deactivated successfully.
Jan 27 09:26:55 np0005597378 podman[372882]: 2026-01-27 14:26:55.220838439 +0000 UTC m=+0.026440194 container died 5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_yonath, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 09:26:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 311 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 673 KiB/s rd, 4.3 MiB/s wr, 146 op/s
Jan 27 09:26:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1095eb5b3acac1fc51fc33727901ab19e80b92f5d4fab9257a8a80c58a706527-merged.mount: Deactivated successfully.
Jan 27 09:26:55 np0005597378 podman[372882]: 2026-01-27 14:26:55.363698502 +0000 UTC m=+0.169300237 container remove 5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:26:55 np0005597378 systemd[1]: libpod-conmon-5d45d35c3bad433651d41f583b699300323627e263a17ffd21fa43ef1ec899a1.scope: Deactivated successfully.
Jan 27 09:26:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:26:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1624686725' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.654 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.654 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.655 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.655 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.655 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.657 238945 INFO nova.compute.manager [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Terminating instance#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.658 238945 DEBUG nova.compute.manager [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:26:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.679 238945 DEBUG oslo_concurrency.processutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.687 238945 DEBUG nova.compute.provider_tree [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:26:55 np0005597378 kernel: tapb3dcf519-7c (unregistering): left promiscuous mode
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.717 238945 DEBUG nova.scheduler.client.report [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:26:55 np0005597378 NetworkManager[48904]: <info>  [1769524015.7200] device (tapb3dcf519-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.729 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:55Z|01574|binding|INFO|Releasing lport b3dcf519-7c56-406e-a80a-e3a3bdf38620 from this chassis (sb_readonly=0)
Jan 27 09:26:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:55Z|01575|binding|INFO|Setting lport b3dcf519-7c56-406e-a80a-e3a3bdf38620 down in Southbound
Jan 27 09:26:55 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:55Z|01576|binding|INFO|Removing iface tapb3dcf519-7c ovn-installed in OVS
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.735 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.746 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:9d:97 10.100.0.9', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dab9f91a-166a-4055-95d9-c98bede611a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6618af43-1391-409b-869f-1324bc7e5707, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b3dcf519-7c56-406e-a80a-e3a3bdf38620) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.748 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b3dcf519-7c56-406e-a80a-e3a3bdf38620 in datapath 07470876-8c4c-4f83-bb7f-48d1eefc447e unbound from our chassis#033[00m
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.750 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 07470876-8c4c-4f83-bb7f-48d1eefc447e#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.749 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.750 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.773 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa36521-6392-4c53-8f87-15715e0ffe31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:55 np0005597378 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000094.scope: Deactivated successfully.
Jan 27 09:26:55 np0005597378 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000094.scope: Consumed 14.394s CPU time.
Jan 27 09:26:55 np0005597378 systemd-machined[207425]: Machine qemu-180-instance-00000094 terminated.
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.802 238945 INFO nova.scheduler.client.report [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb#033[00m
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.812 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ea559a6d-fd48-496e-a6ee-008bd3c602e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.815 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[873cc1d0-86f3-4b4a-8806-fd564a9e594b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.849 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[4d7becc9-b45f-44ca-8912-94f73aead3a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:55 np0005597378 podman[372969]: 2026-01-27 14:26:55.87012647 +0000 UTC m=+0.044075510 container create 626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.868 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0f99aa41-43ee-42fd-83b2-14ed09122679]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07470876-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:20:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 7, 'rx_bytes': 1412, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 7, 'rx_bytes': 1412, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659273, 'reachable_time': 39548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 13, 'inoctets': 936, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 13, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 936, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 13, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372985, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.890 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc004c81-c08b-4531-a4c2-d549fff90541]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap07470876-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659287, 'tstamp': 659287}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372987, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap07470876-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659290, 'tstamp': 659290}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372987, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.893 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07470876-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.896 238945 DEBUG oslo_concurrency.lockutils [None req-b712d36d-832f-4488-a4de-5bb97be4eb62 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "1b157d23-83f3-456c-8dae-d4ac1bcf3cdb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.901 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.901 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07470876-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.902 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.902 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap07470876-80, col_values=(('external_ids', {'iface-id': 'd43985de-77e3-4402-a6c7-37813cd055a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:55.902 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.904 238945 INFO nova.virt.libvirt.driver [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Instance destroyed successfully.#033[00m
Jan 27 09:26:55 np0005597378 nova_compute[238941]: 2026-01-27 14:26:55.905 238945 DEBUG nova.objects.instance [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid dab9f91a-166a-4055-95d9-c98bede611a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:26:55 np0005597378 systemd[1]: Started libpod-conmon-626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79.scope.
Jan 27 09:26:55 np0005597378 podman[372969]: 2026-01-27 14:26:55.851496368 +0000 UTC m=+0.025445438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:26:55 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:26:55 np0005597378 podman[372969]: 2026-01-27 14:26:55.971713809 +0000 UTC m=+0.145662879 container init 626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:26:55 np0005597378 podman[372969]: 2026-01-27 14:26:55.982758877 +0000 UTC m=+0.156707917 container start 626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 27 09:26:55 np0005597378 podman[372969]: 2026-01-27 14:26:55.988866152 +0000 UTC m=+0.162815212 container attach 626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:26:55 np0005597378 vibrant_hermann[373001]: 167 167
Jan 27 09:26:55 np0005597378 systemd[1]: libpod-626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79.scope: Deactivated successfully.
Jan 27 09:26:55 np0005597378 conmon[373001]: conmon 626ce4ec52ba859e32ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79.scope/container/memory.events
Jan 27 09:26:55 np0005597378 podman[372969]: 2026-01-27 14:26:55.992310085 +0000 UTC m=+0.166259125 container died 626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.012 238945 DEBUG nova.virt.libvirt.vif [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:26:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1446324674',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=148,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCh8iTJ79IQTfZKcyGTVMRNhc1Qm2nC8XJCZVveMYKvrisO0G2Hkg5wsreDtMeEvqvZQDV6CtGF/BIH4aUXaCB3Qn6MyyjLTrOg7bxvi4R7OJPdkJUXBnb+rWUYAQ0htRg==',key_name='tempest-TestSecurityGroupsBasicOps-1357076021',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-oo4hn6a0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:26:35Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=dab9f91a-166a-4055-95d9-c98bede611a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.013 238945 DEBUG nova.network.os_vif_util [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "address": "fa:16:3e:d4:9d:97", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3dcf519-7c", "ovs_interfaceid": "b3dcf519-7c56-406e-a80a-e3a3bdf38620", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.014 238945 DEBUG nova.network.os_vif_util [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.014 238945 DEBUG os_vif [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.017 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3dcf519-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2272c4f20ef0093d09e9c7fd81c05a19997020ccf68090c368b1cf65b3957a86-merged.mount: Deactivated successfully.
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.019 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.023 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.026 238945 INFO os_vif [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:9d:97,bridge_name='br-int',has_traffic_filtering=True,id=b3dcf519-7c56-406e-a80a-e3a3bdf38620,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3dcf519-7c')#033[00m
Jan 27 09:26:56 np0005597378 podman[372969]: 2026-01-27 14:26:56.036411245 +0000 UTC m=+0.210360285 container remove 626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:26:56 np0005597378 systemd[1]: libpod-conmon-626ce4ec52ba859e32eef24a4043916d6a39b9cbb0abc727785ab4c3dee2ce79.scope: Deactivated successfully.
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.090 238945 DEBUG nova.compute.manager [req-9916ec3d-e569-4012-9b14-a8fecc9c5953 req-c33dadc1-4050-47ef-8921-61e69df02734 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-vif-unplugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.091 238945 DEBUG oslo_concurrency.lockutils [req-9916ec3d-e569-4012-9b14-a8fecc9c5953 req-c33dadc1-4050-47ef-8921-61e69df02734 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.091 238945 DEBUG oslo_concurrency.lockutils [req-9916ec3d-e569-4012-9b14-a8fecc9c5953 req-c33dadc1-4050-47ef-8921-61e69df02734 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.091 238945 DEBUG oslo_concurrency.lockutils [req-9916ec3d-e569-4012-9b14-a8fecc9c5953 req-c33dadc1-4050-47ef-8921-61e69df02734 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.091 238945 DEBUG nova.compute.manager [req-9916ec3d-e569-4012-9b14-a8fecc9c5953 req-c33dadc1-4050-47ef-8921-61e69df02734 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] No waiting events found dispatching network-vif-unplugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.091 238945 DEBUG nova.compute.manager [req-9916ec3d-e569-4012-9b14-a8fecc9c5953 req-c33dadc1-4050-47ef-8921-61e69df02734 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-vif-unplugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:26:56 np0005597378 podman[373043]: 2026-01-27 14:26:56.260169898 +0000 UTC m=+0.042607899 container create 04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 09:26:56 np0005597378 systemd[1]: Started libpod-conmon-04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb.scope.
Jan 27 09:26:56 np0005597378 podman[373043]: 2026-01-27 14:26:56.243480609 +0000 UTC m=+0.025918630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:26:56 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:26:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1966a0a4a57e5d65439cf784e384fc6a5bd5a4a20008cb76be9dede4014666ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1966a0a4a57e5d65439cf784e384fc6a5bd5a4a20008cb76be9dede4014666ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1966a0a4a57e5d65439cf784e384fc6a5bd5a4a20008cb76be9dede4014666ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:56 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1966a0a4a57e5d65439cf784e384fc6a5bd5a4a20008cb76be9dede4014666ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.350 238945 INFO nova.virt.libvirt.driver [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Deleting instance files /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4_del#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.350 238945 INFO nova.virt.libvirt.driver [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Deletion of /var/lib/nova/instances/dab9f91a-166a-4055-95d9-c98bede611a4_del complete#033[00m
Jan 27 09:26:56 np0005597378 podman[373043]: 2026-01-27 14:26:56.36144007 +0000 UTC m=+0.143878091 container init 04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:26:56 np0005597378 podman[373043]: 2026-01-27 14:26:56.367806061 +0000 UTC m=+0.150244062 container start 04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 27 09:26:56 np0005597378 podman[373043]: 2026-01-27 14:26:56.370833073 +0000 UTC m=+0.153271104 container attach 04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.578 238945 INFO nova.compute.manager [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.581 238945 DEBUG oslo.service.loopingcall [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.582 238945 DEBUG nova.compute.manager [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.582 238945 DEBUG nova.network.neutron [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:26:56 np0005597378 nova_compute[238941]: 2026-01-27 14:26:56.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]: {
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:    "0": [
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:        {
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "devices": [
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "/dev/loop3"
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            ],
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_name": "ceph_lv0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_size": "21470642176",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "name": "ceph_lv0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "tags": {
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.cluster_name": "ceph",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.crush_device_class": "",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.encrypted": "0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.objectstore": "bluestore",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.osd_id": "0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.type": "block",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.vdo": "0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.with_tpm": "0"
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            },
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "type": "block",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "vg_name": "ceph_vg0"
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:        }
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:    ],
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:    "1": [
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:        {
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "devices": [
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "/dev/loop4"
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            ],
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_name": "ceph_lv1",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_size": "21470642176",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "name": "ceph_lv1",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "tags": {
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.cluster_name": "ceph",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.crush_device_class": "",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.encrypted": "0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.objectstore": "bluestore",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.osd_id": "1",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.type": "block",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.vdo": "0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.with_tpm": "0"
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            },
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "type": "block",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "vg_name": "ceph_vg1"
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:        }
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:    ],
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:    "2": [
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:        {
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "devices": [
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "/dev/loop5"
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            ],
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_name": "ceph_lv2",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_size": "21470642176",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "name": "ceph_lv2",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "tags": {
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.cluster_name": "ceph",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.crush_device_class": "",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.encrypted": "0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.objectstore": "bluestore",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.osd_id": "2",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.type": "block",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.vdo": "0",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:                "ceph.with_tpm": "0"
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            },
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "type": "block",
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:            "vg_name": "ceph_vg2"
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:        }
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]:    ]
Jan 27 09:26:56 np0005597378 focused_visvesvaraya[373060]: }
Jan 27 09:26:56 np0005597378 systemd[1]: libpod-04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb.scope: Deactivated successfully.
Jan 27 09:26:56 np0005597378 podman[373043]: 2026-01-27 14:26:56.68775881 +0000 UTC m=+0.470196881 container died 04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 09:26:56 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1966a0a4a57e5d65439cf784e384fc6a5bd5a4a20008cb76be9dede4014666ab-merged.mount: Deactivated successfully.
Jan 27 09:26:56 np0005597378 podman[373043]: 2026-01-27 14:26:56.734808789 +0000 UTC m=+0.517246790 container remove 04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:26:56 np0005597378 systemd[1]: libpod-conmon-04f5e6d40b65ab0ade72c6c1b6da705bb39a4ac8e8131b13a3fe7c78afc3aaeb.scope: Deactivated successfully.
Jan 27 09:26:57 np0005597378 podman[373144]: 2026-01-27 14:26:57.246357575 +0000 UTC m=+0.051640384 container create 40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:26:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 515 KiB/s rd, 3.0 MiB/s wr, 120 op/s
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.267 238945 DEBUG nova.compute.manager [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received event network-changed-8f387573-0891-4f0a-9601-3736c186d288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.267 238945 DEBUG nova.compute.manager [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Refreshing instance network info cache due to event network-changed-8f387573-0891-4f0a-9601-3736c186d288. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.267 238945 DEBUG oslo_concurrency.lockutils [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.268 238945 DEBUG oslo_concurrency.lockutils [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.268 238945 DEBUG nova.network.neutron [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Refreshing network info cache for port 8f387573-0891-4f0a-9601-3736c186d288 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:26:57 np0005597378 systemd[1]: Started libpod-conmon-40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5.scope.
Jan 27 09:26:57 np0005597378 podman[373144]: 2026-01-27 14:26:57.222281536 +0000 UTC m=+0.027564365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:26:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:26:57 np0005597378 podman[373144]: 2026-01-27 14:26:57.334252126 +0000 UTC m=+0.139534945 container init 40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:26:57 np0005597378 podman[373144]: 2026-01-27 14:26:57.342907079 +0000 UTC m=+0.148189878 container start 40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kilby, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:26:57 np0005597378 podman[373144]: 2026-01-27 14:26:57.346782894 +0000 UTC m=+0.152065693 container attach 40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:26:57 np0005597378 sweet_kilby[373160]: 167 167
Jan 27 09:26:57 np0005597378 systemd[1]: libpod-40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5.scope: Deactivated successfully.
Jan 27 09:26:57 np0005597378 podman[373144]: 2026-01-27 14:26:57.350314468 +0000 UTC m=+0.155597267 container died 40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:26:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7886797bd55eceed2107ddd3a118f2068b6c8924410b9bd62659e2747d5bb462-merged.mount: Deactivated successfully.
Jan 27 09:26:57 np0005597378 podman[373144]: 2026-01-27 14:26:57.402876406 +0000 UTC m=+0.208159205 container remove 40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_kilby, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:26:57 np0005597378 systemd[1]: libpod-conmon-40c8fcf398f9a84618a55bca760f0b63b1e8c439c15f14d6d80af09f619009c5.scope: Deactivated successfully.
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.498 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6635dda1-c175-403d-ac21-0ec9dca6a77c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.499 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.499 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.499 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.499 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.500 238945 INFO nova.compute.manager [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Terminating instance#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.501 238945 DEBUG nova.compute.manager [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.527 238945 DEBUG nova.network.neutron [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:57 np0005597378 kernel: tap8f387573-08 (unregistering): left promiscuous mode
Jan 27 09:26:57 np0005597378 NetworkManager[48904]: <info>  [1769524017.5454] device (tap8f387573-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.552 238945 INFO nova.compute.manager [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Took 0.97 seconds to deallocate network for instance.#033[00m
Jan 27 09:26:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:57Z|01577|binding|INFO|Releasing lport 8f387573-0891-4f0a-9601-3736c186d288 from this chassis (sb_readonly=0)
Jan 27 09:26:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:57Z|01578|binding|INFO|Setting lport 8f387573-0891-4f0a-9601-3736c186d288 down in Southbound
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:57Z|01579|binding|INFO|Removing iface tap8f387573-08 ovn-installed in OVS
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.562 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], port_security=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe86:c509/64', 'neutron:device_id': '6635dda1-c175-403d-ac21-0ec9dca6a77c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd296c7a5-f385-4534-8f02-e685c086b3e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8f387573-0891-4f0a-9601-3736c186d288) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.559 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.563 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8f387573-0891-4f0a-9601-3736c186d288 in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c unbound from our chassis#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.564 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.564 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8510fa3d-4c83-4bdf-b497-50116c21821c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.565 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c namespace which is not needed anymore#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:57 np0005597378 podman[373184]: 2026-01-27 14:26:57.594524534 +0000 UTC m=+0.049677190 container create 7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:26:57 np0005597378 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000090.scope: Deactivated successfully.
Jan 27 09:26:57 np0005597378 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000090.scope: Consumed 15.486s CPU time.
Jan 27 09:26:57 np0005597378 systemd-machined[207425]: Machine qemu-176-instance-00000090 terminated.
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.614 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.615 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:57 np0005597378 systemd[1]: Started libpod-conmon-7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8.scope.
Jan 27 09:26:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:26:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7559b58528d7635b9f29cfd0fa84456ef170d51b730c5ef0279a817add6ad1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7559b58528d7635b9f29cfd0fa84456ef170d51b730c5ef0279a817add6ad1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7559b58528d7635b9f29cfd0fa84456ef170d51b730c5ef0279a817add6ad1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7559b58528d7635b9f29cfd0fa84456ef170d51b730c5ef0279a817add6ad1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:26:57 np0005597378 podman[373184]: 2026-01-27 14:26:57.578511242 +0000 UTC m=+0.033663918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:26:57 np0005597378 podman[373184]: 2026-01-27 14:26:57.682226879 +0000 UTC m=+0.137379565 container init 7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:26:57 np0005597378 podman[373184]: 2026-01-27 14:26:57.690870002 +0000 UTC m=+0.146022658 container start 7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 09:26:57 np0005597378 podman[373184]: 2026-01-27 14:26:57.695198109 +0000 UTC m=+0.150350765 container attach 7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 09:26:57 np0005597378 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [NOTICE]   (370744) : haproxy version is 2.8.14-c23fe91
Jan 27 09:26:57 np0005597378 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [NOTICE]   (370744) : path to executable is /usr/sbin/haproxy
Jan 27 09:26:57 np0005597378 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [WARNING]  (370744) : Exiting Master process...
Jan 27 09:26:57 np0005597378 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [WARNING]  (370744) : Exiting Master process...
Jan 27 09:26:57 np0005597378 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [ALERT]    (370744) : Current worker (370746) exited with code 143 (Terminated)
Jan 27 09:26:57 np0005597378 neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c[370686]: [WARNING]  (370744) : All workers exited. Exiting... (0)
Jan 27 09:26:57 np0005597378 systemd[1]: libpod-8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff.scope: Deactivated successfully.
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.703 238945 DEBUG oslo_concurrency.processutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:26:57 np0005597378 podman[373221]: 2026-01-27 14:26:57.708318163 +0000 UTC m=+0.053700149 container died 8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:26:57 np0005597378 kernel: tap8f387573-08: entered promiscuous mode
Jan 27 09:26:57 np0005597378 NetworkManager[48904]: <info>  [1769524017.7183] manager: (tap8f387573-08): new Tun device (/org/freedesktop/NetworkManager/Devices/643)
Jan 27 09:26:57 np0005597378 kernel: tap8f387573-08 (unregistering): left promiscuous mode
Jan 27 09:26:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:57Z|01580|binding|INFO|Claiming lport 8f387573-0891-4f0a-9601-3736c186d288 for this chassis.
Jan 27 09:26:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:57Z|01581|binding|INFO|8f387573-0891-4f0a-9601-3736c186d288: Claiming fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.739 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], port_security=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe86:c509/64', 'neutron:device_id': '6635dda1-c175-403d-ac21-0ec9dca6a77c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd296c7a5-f385-4534-8f02-e685c086b3e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8f387573-0891-4f0a-9601-3736c186d288) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:26:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff-userdata-shm.mount: Deactivated successfully.
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.747 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:26:57Z|01582|binding|INFO|Releasing lport 8f387573-0891-4f0a-9601-3736c186d288 from this chassis (sb_readonly=0)
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.749 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-21ed0e26ea6428fe6635f263a1d3467fd96eb5dadaee64722fe6a65a5a04a8d6-merged.mount: Deactivated successfully.
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.758 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], port_security=['fa:16:3e:86:c5:09 10.100.0.12 2001:db8::f816:3eff:fe86:c509'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe86:c509/64', 'neutron:device_id': '6635dda1-c175-403d-ac21-0ec9dca6a77c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90600d8549a94e0fa1932cd257a4f609', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd296c7a5-f385-4534-8f02-e685c086b3e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba4236b-4771-4af0-bf5b-186f1b72959a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=8f387573-0891-4f0a-9601-3736c186d288) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.759 238945 INFO nova.virt.libvirt.driver [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Instance destroyed successfully.#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.759 238945 DEBUG nova.objects.instance [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lazy-loading 'resources' on Instance uuid 6635dda1-c175-403d-ac21-0ec9dca6a77c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.773 238945 DEBUG nova.virt.libvirt.vif [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-34426443',display_name='tempest-TestGettingAddress-server-34426443',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-34426443',id=144,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0roQxlm7rBsyexGnEPiCXhcnKxfp9fw0cPDJksHn/jHZPciuBmzdmiU2uk+pbwv5Uo/ReziRDsxV2MeUBZqXHZ+jvBUJxD4UhnGHkH+Oh6gMJQT0kcpRrKqU08CfxQIQ==',key_name='tempest-TestGettingAddress-1692085847',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:25:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90600d8549a94e0fa1932cd257a4f609',ramdisk_id='',reservation_id='r-uysr3jco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1672904195',owner_user_name='tempest-TestGettingAddress-1672904195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:25:49Z,user_data=None,user_id='54150da90e49498bb01ba6afc80f5562',uuid=6635dda1-c175-403d-ac21-0ec9dca6a77c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.774 238945 DEBUG nova.network.os_vif_util [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converting VIF {"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.775 238945 DEBUG nova.network.os_vif_util [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.775 238945 DEBUG os_vif [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.778 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.778 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f387573-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.780 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.784 238945 INFO os_vif [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:c5:09,bridge_name='br-int',has_traffic_filtering=True,id=8f387573-0891-4f0a-9601-3736c186d288,network=Network(3bdc2751-918c-46d6-9a4d-729ae5cc6d9c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f387573-08')#033[00m
Jan 27 09:26:57 np0005597378 podman[373221]: 2026-01-27 14:26:57.843640072 +0000 UTC m=+0.189022048 container cleanup 8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 09:26:57 np0005597378 systemd[1]: libpod-conmon-8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff.scope: Deactivated successfully.
Jan 27 09:26:57 np0005597378 podman[373282]: 2026-01-27 14:26:57.922714935 +0000 UTC m=+0.054626244 container remove 8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.929 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1256796b-563c-4c58-9a4f-517a8d0443ca]: (4, ('Tue Jan 27 02:26:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c (8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff)\n8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff\nTue Jan 27 02:26:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c (8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff)\n8e318577423c92058751eff053087774be49133a828b5b15aedbdb54dce30cff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.932 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9334b3-6c9d-4b99-b164-11c37180c2bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.933 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bdc2751-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.935 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:57 np0005597378 kernel: tap3bdc2751-90: left promiscuous mode
Jan 27 09:26:57 np0005597378 nova_compute[238941]: 2026-01-27 14:26:57.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.954 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[149a26a3-0e64-4104-b9a9-52fe1bbaf198]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.965 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4f7af7c8-c3d6-4183-86f9-acdcf62b3eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.966 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbb3930-ecf0-42af-864a-f9280bbe3977]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.986 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5bf9e1-b74c-4f42-b8df-74a6d61da692]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658775, 'reachable_time': 36253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373323, 'error': None, 'target': 'ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:57 np0005597378 systemd[1]: run-netns-ovnmeta\x2d3bdc2751\x2d918c\x2d46d6\x2d9a4d\x2d729ae5cc6d9c.mount: Deactivated successfully.
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.990 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3bdc2751-918c-46d6-9a4d-729ae5cc6d9c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.990 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[78941948-76c7-48fc-b3c3-b6ef90cc682c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.991 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8f387573-0891-4f0a-9601-3736c186d288 in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c unbound from our chassis#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.992 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.993 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[864f93ec-7ffb-49b8-a44a-c02dab5a20d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.994 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 8f387573-0891-4f0a-9601-3736c186d288 in datapath 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c unbound from our chassis#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.994 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3bdc2751-918c-46d6-9a4d-729ae5cc6d9c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:26:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:26:57.995 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95babf40-e573-448c-a529-e1a2835a2194]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.247 238945 INFO nova.virt.libvirt.driver [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Deleting instance files /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c_del#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.248 238945 INFO nova.virt.libvirt.driver [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Deletion of /var/lib/nova/instances/6635dda1-c175-403d-ac21-0ec9dca6a77c_del complete#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.283 238945 DEBUG nova.compute.manager [req-e356349d-a2fd-4e5b-be20-a86dd887f033 req-3c2ad917-84b4-4fe6-bd02-b5a80c1b17e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.284 238945 DEBUG oslo_concurrency.lockutils [req-e356349d-a2fd-4e5b-be20-a86dd887f033 req-3c2ad917-84b4-4fe6-bd02-b5a80c1b17e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.284 238945 DEBUG oslo_concurrency.lockutils [req-e356349d-a2fd-4e5b-be20-a86dd887f033 req-3c2ad917-84b4-4fe6-bd02-b5a80c1b17e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.284 238945 DEBUG oslo_concurrency.lockutils [req-e356349d-a2fd-4e5b-be20-a86dd887f033 req-3c2ad917-84b4-4fe6-bd02-b5a80c1b17e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.284 238945 DEBUG nova.compute.manager [req-e356349d-a2fd-4e5b-be20-a86dd887f033 req-3c2ad917-84b4-4fe6-bd02-b5a80c1b17e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] No waiting events found dispatching network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.284 238945 WARNING nova.compute.manager [req-e356349d-a2fd-4e5b-be20-a86dd887f033 req-3c2ad917-84b4-4fe6-bd02-b5a80c1b17e2 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received unexpected event network-vif-plugged-b3dcf519-7c56-406e-a80a-e3a3bdf38620 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.356 238945 INFO nova.compute.manager [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.356 238945 DEBUG oslo.service.loopingcall [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.357 238945 DEBUG nova.compute.manager [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.357 238945 DEBUG nova.network.neutron [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:26:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:26:58 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1613664742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.383 238945 DEBUG oslo_concurrency.processutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.390 238945 DEBUG nova.compute.provider_tree [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.407 238945 DEBUG nova.scheduler.client.report [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.456 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:58 np0005597378 lvm[373387]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:26:58 np0005597378 lvm[373387]: VG ceph_vg0 finished
Jan 27 09:26:58 np0005597378 lvm[373389]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:26:58 np0005597378 lvm[373389]: VG ceph_vg1 finished
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.493 238945 INFO nova.scheduler.client.report [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance dab9f91a-166a-4055-95d9-c98bede611a4#033[00m
Jan 27 09:26:58 np0005597378 lvm[373390]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:26:58 np0005597378 lvm[373390]: VG ceph_vg2 finished
Jan 27 09:26:58 np0005597378 vigilant_bardeen[373222]: {}
Jan 27 09:26:58 np0005597378 systemd[1]: libpod-7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8.scope: Deactivated successfully.
Jan 27 09:26:58 np0005597378 systemd[1]: libpod-7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8.scope: Consumed 1.395s CPU time.
Jan 27 09:26:58 np0005597378 nova_compute[238941]: 2026-01-27 14:26:58.651 238945 DEBUG oslo_concurrency.lockutils [None req-1a64fc95-859f-4921-ad74-3f5b24c2633a 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "dab9f91a-166a-4055-95d9-c98bede611a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:26:58 np0005597378 podman[373393]: 2026-01-27 14:26:58.694030547 +0000 UTC m=+0.034594645 container died 7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 09:26:58 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b7559b58528d7635b9f29cfd0fa84456ef170d51b730c5ef0279a817add6ad1e-merged.mount: Deactivated successfully.
Jan 27 09:26:58 np0005597378 podman[373393]: 2026-01-27 14:26:58.932354684 +0000 UTC m=+0.272918772 container remove 7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:26:58 np0005597378 systemd[1]: libpod-conmon-7c11b4eed1051df94e59446281e283b842eaf8dcfc653009808e76032e7d9fc8.scope: Deactivated successfully.
Jan 27 09:26:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:26:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:26:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:26:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:26:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Jan 27 09:26:59 np0005597378 nova_compute[238941]: 2026-01-27 14:26:59.366 238945 DEBUG nova.compute.manager [req-70f1f7f3-f0da-46e9-8130-47e117e2207a req-cdde07d6-bad7-49fe-b524-d6997dff2bd4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Received event network-vif-deleted-b3dcf519-7c56-406e-a80a-e3a3bdf38620 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:26:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:26:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4052223284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:26:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:26:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4052223284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:26:59 np0005597378 nova_compute[238941]: 2026-01-27 14:26:59.732 238945 DEBUG nova.network.neutron [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:26:59 np0005597378 nova_compute[238941]: 2026-01-27 14:26:59.823 238945 INFO nova.compute.manager [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Took 1.47 seconds to deallocate network for instance.#033[00m
Jan 27 09:26:59 np0005597378 nova_compute[238941]: 2026-01-27 14:26:59.917 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:26:59 np0005597378 nova_compute[238941]: 2026-01-27 14:26:59.918 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:26:59 np0005597378 nova_compute[238941]: 2026-01-27 14:26:59.993 238945 DEBUG oslo_concurrency.processutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:00 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:27:00 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.272 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.273 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.273 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.273 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.274 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.275 238945 INFO nova.compute.manager [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Terminating instance#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.277 238945 DEBUG nova.compute.manager [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:27:00 np0005597378 kernel: tapa97b74ff-5e (unregistering): left promiscuous mode
Jan 27 09:27:00 np0005597378 NetworkManager[48904]: <info>  [1769524020.4220] device (tapa97b74ff-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:27:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:00Z|01583|binding|INFO|Releasing lport a97b74ff-5e1f-4cb1-a688-f986acf75619 from this chassis (sb_readonly=0)
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.430 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:00Z|01584|binding|INFO|Setting lport a97b74ff-5e1f-4cb1-a688-f986acf75619 down in Southbound
Jan 27 09:27:00 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:00Z|01585|binding|INFO|Removing iface tapa97b74ff-5e ovn-installed in OVS
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.451 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:00 np0005597378 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000091.scope: Deactivated successfully.
Jan 27 09:27:00 np0005597378 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000091.scope: Consumed 15.576s CPU time.
Jan 27 09:27:00 np0005597378 systemd-machined[207425]: Machine qemu-177-instance-00000091 terminated.
Jan 27 09:27:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:00.517 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:e3:b5 10.100.0.8'], port_security=['fa:16:3e:f4:e3:b5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0131fc36-bc84-47cd-8067-04bef1ed346b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '401bcc9e-e379-4df5-b1b1-d040fa28b0f0 66468c20-6e25-42a7-908a-965ba4bd54ac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6618af43-1391-409b-869f-1324bc7e5707, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=a97b74ff-5e1f-4cb1-a688-f986acf75619) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:27:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:00.518 154802 INFO neutron.agent.ovn.metadata.agent [-] Port a97b74ff-5e1f-4cb1-a688-f986acf75619 in datapath 07470876-8c4c-4f83-bb7f-48d1eefc447e unbound from our chassis#033[00m
Jan 27 09:27:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:00.519 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 07470876-8c4c-4f83-bb7f-48d1eefc447e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:27:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:00.521 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7914d8af-ca0a-4787-8204-79ee891b1f2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:00.521 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e namespace which is not needed anymore#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.571 238945 DEBUG nova.compute.manager [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-changed-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.572 238945 DEBUG nova.compute.manager [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Refreshing instance network info cache due to event network-changed-a97b74ff-5e1f-4cb1-a688-f986acf75619. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.572 238945 DEBUG oslo_concurrency.lockutils [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.573 238945 DEBUG oslo_concurrency.lockutils [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.573 238945 DEBUG nova.network.neutron [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Refreshing network info cache for port a97b74ff-5e1f-4cb1-a688-f986acf75619 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:27:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:27:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2423389577' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.626 238945 DEBUG oslo_concurrency.processutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.632 238945 DEBUG nova.compute.provider_tree [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.656 238945 DEBUG nova.scheduler.client.report [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:27:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.689 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.708 238945 DEBUG nova.network.neutron [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updated VIF entry in instance network info cache for port 8f387573-0891-4f0a-9601-3736c186d288. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.709 238945 DEBUG nova.network.neutron [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Updating instance_info_cache with network_info: [{"id": "8f387573-0891-4f0a-9601-3736c186d288", "address": "fa:16:3e:86:c5:09", "network": {"id": "3bdc2751-918c-46d6-9a4d-729ae5cc6d9c", "bridge": "br-int", "label": "tempest-network-smoke--1740538235", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe86:c509", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90600d8549a94e0fa1932cd257a4f609", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f387573-08", "ovs_interfaceid": "8f387573-0891-4f0a-9601-3736c186d288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.715 238945 INFO nova.virt.libvirt.driver [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Instance destroyed successfully.#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.715 238945 DEBUG nova.objects.instance [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid 0131fc36-bc84-47cd-8067-04bef1ed346b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:27:00 np0005597378 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [NOTICE]   (371100) : haproxy version is 2.8.14-c23fe91
Jan 27 09:27:00 np0005597378 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [NOTICE]   (371100) : path to executable is /usr/sbin/haproxy
Jan 27 09:27:00 np0005597378 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [WARNING]  (371100) : Exiting Master process...
Jan 27 09:27:00 np0005597378 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [ALERT]    (371100) : Current worker (371102) exited with code 143 (Terminated)
Jan 27 09:27:00 np0005597378 neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e[371096]: [WARNING]  (371100) : All workers exited. Exiting... (0)
Jan 27 09:27:00 np0005597378 systemd[1]: libpod-78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672.scope: Deactivated successfully.
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.728 238945 INFO nova.scheduler.client.report [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Deleted allocations for instance 6635dda1-c175-403d-ac21-0ec9dca6a77c#033[00m
Jan 27 09:27:00 np0005597378 podman[373477]: 2026-01-27 14:27:00.732613515 +0000 UTC m=+0.109526705 container died 78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.757 238945 DEBUG nova.virt.libvirt.vif [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:25:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1941573658',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=145,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCh8iTJ79IQTfZKcyGTVMRNhc1Qm2nC8XJCZVveMYKvrisO0G2Hkg5wsreDtMeEvqvZQDV6CtGF/BIH4aUXaCB3Qn6MyyjLTrOg7bxvi4R7OJPdkJUXBnb+rWUYAQ0htRg==',key_name='tempest-TestSecurityGroupsBasicOps-1357076021',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:25:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-0me1tmc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:25:53Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=0131fc36-bc84-47cd-8067-04bef1ed346b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.758 238945 DEBUG nova.network.os_vif_util [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.758 238945 DEBUG nova.network.os_vif_util [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.759 238945 DEBUG os_vif [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.760 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.760 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa97b74ff-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.763 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.765 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.767 238945 INFO os_vif [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e3:b5,bridge_name='br-int',has_traffic_filtering=True,id=a97b74ff-5e1f-4cb1-a688-f986acf75619,network=Network(07470876-8c4c-4f83-bb7f-48d1eefc447e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa97b74ff-5e')#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.783 238945 DEBUG oslo_concurrency.lockutils [req-7993b2ec-de19-4149-af49-43ec628c4a92 req-f8e81376-6591-499c-a147-07f94d5a6e63 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6635dda1-c175-403d-ac21-0ec9dca6a77c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:27:00 np0005597378 nova_compute[238941]: 2026-01-27 14:27:00.862 238945 DEBUG oslo_concurrency.lockutils [None req-fad43a02-6071-4d70-944a-e85b5395dd69 54150da90e49498bb01ba6afc80f5562 90600d8549a94e0fa1932cd257a4f609 - - default default] Lock "6635dda1-c175-403d-ac21-0ec9dca6a77c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672-userdata-shm.mount: Deactivated successfully.
Jan 27 09:27:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay-bc0983a650b3169c921500c114c9fdc13efb4c9ab6f00a01ee418ff9eafbf101-merged.mount: Deactivated successfully.
Jan 27 09:27:00 np0005597378 podman[373477]: 2026-01-27 14:27:00.923982555 +0000 UTC m=+0.300895745 container cleanup 78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 09:27:00 np0005597378 systemd[1]: libpod-conmon-78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672.scope: Deactivated successfully.
Jan 27 09:27:01 np0005597378 podman[373537]: 2026-01-27 14:27:01.179166207 +0000 UTC m=+0.234247188 container remove 78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 09:27:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.186 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2cadb7-cdeb-4146-8de8-8bacfb9d4d57]: (4, ('Tue Jan 27 02:27:00 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e (78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672)\n78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672\nTue Jan 27 02:27:00 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e (78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672)\n78823dd937af86d603b695b406c17259ca0bcc411241c12a00bf635d7bcdf672\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.188 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdf4925-c225-48fe-a8b5-6b3e9689ba9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.188 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07470876-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:01 np0005597378 kernel: tap07470876-80: left promiscuous mode
Jan 27 09:27:01 np0005597378 nova_compute[238941]: 2026-01-27 14:27:01.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:01 np0005597378 nova_compute[238941]: 2026-01-27 14:27:01.204 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.208 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[48766a2e-8150-4541-92fb-67f55b3890dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.226 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[71f8e2b1-f525-46c3-a161-c167e92a6110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.228 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d2975c-7524-4595-bd6a-adefdf1bf6c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.250 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ee683ec9-102e-4444-a7a1-2b7d041dcfa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659265, 'reachable_time': 42729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373551, 'error': None, 'target': 'ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:01 np0005597378 systemd[1]: run-netns-ovnmeta\x2d07470876\x2d8c4c\x2d4f83\x2dbb7f\x2d48d1eefc447e.mount: Deactivated successfully.
Jan 27 09:27:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.254 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-07470876-8c4c-4f83-bb7f-48d1eefc447e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:27:01 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:01.254 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7e7a32-55be-4b2d-9f22-f25ec2e72e42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 121 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.2 MiB/s wr, 149 op/s
Jan 27 09:27:01 np0005597378 nova_compute[238941]: 2026-01-27 14:27:01.642 238945 DEBUG nova.compute.manager [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Received event network-vif-deleted-8f387573-0891-4f0a-9601-3736c186d288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:27:01 np0005597378 nova_compute[238941]: 2026-01-27 14:27:01.642 238945 INFO nova.compute.manager [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Neutron deleted interface 8f387573-0891-4f0a-9601-3736c186d288; detaching it from the instance and deleting it from the info cache#033[00m
Jan 27 09:27:01 np0005597378 nova_compute[238941]: 2026-01-27 14:27:01.642 238945 DEBUG nova.network.neutron [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Jan 27 09:27:01 np0005597378 nova_compute[238941]: 2026-01-27 14:27:01.644 238945 DEBUG nova.compute.manager [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Detach interface failed, port_id=8f387573-0891-4f0a-9601-3736c186d288, reason: Instance 6635dda1-c175-403d-ac21-0ec9dca6a77c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 27 09:27:01 np0005597378 nova_compute[238941]: 2026-01-27 14:27:01.645 238945 DEBUG nova.compute.manager [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-vif-unplugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:27:01 np0005597378 nova_compute[238941]: 2026-01-27 14:27:01.645 238945 DEBUG oslo_concurrency.lockutils [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:01 np0005597378 nova_compute[238941]: 2026-01-27 14:27:01.645 238945 DEBUG oslo_concurrency.lockutils [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:01 np0005597378 nova_compute[238941]: 2026-01-27 14:27:01.645 238945 DEBUG oslo_concurrency.lockutils [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:01 np0005597378 nova_compute[238941]: 2026-01-27 14:27:01.645 238945 DEBUG nova.compute.manager [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] No waiting events found dispatching network-vif-unplugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:27:01 np0005597378 nova_compute[238941]: 2026-01-27 14:27:01.646 238945 DEBUG nova.compute.manager [req-cbc5c25e-bc66-440f-8216-c47347748b8a req-5e53eed9-749f-46f8-a844-3325a9f75959 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-vif-unplugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:27:02 np0005597378 nova_compute[238941]: 2026-01-27 14:27:02.489 238945 INFO nova.virt.libvirt.driver [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Deleting instance files /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b_del#033[00m
Jan 27 09:27:02 np0005597378 nova_compute[238941]: 2026-01-27 14:27:02.491 238945 INFO nova.virt.libvirt.driver [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Deletion of /var/lib/nova/instances/0131fc36-bc84-47cd-8067-04bef1ed346b_del complete#033[00m
Jan 27 09:27:02 np0005597378 nova_compute[238941]: 2026-01-27 14:27:02.551 238945 INFO nova.compute.manager [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Took 2.27 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:27:02 np0005597378 nova_compute[238941]: 2026-01-27 14:27:02.552 238945 DEBUG oslo.service.loopingcall [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:27:02 np0005597378 nova_compute[238941]: 2026-01-27 14:27:02.552 238945 DEBUG nova.compute.manager [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:27:02 np0005597378 nova_compute[238941]: 2026-01-27 14:27:02.552 238945 DEBUG nova.network.neutron [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:27:02 np0005597378 nova_compute[238941]: 2026-01-27 14:27:02.588 238945 DEBUG nova.network.neutron [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updated VIF entry in instance network info cache for port a97b74ff-5e1f-4cb1-a688-f986acf75619. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:27:02 np0005597378 nova_compute[238941]: 2026-01-27 14:27:02.589 238945 DEBUG nova.network.neutron [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updating instance_info_cache with network_info: [{"id": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "address": "fa:16:3e:f4:e3:b5", "network": {"id": "07470876-8c4c-4f83-bb7f-48d1eefc447e", "bridge": "br-int", "label": "tempest-network-smoke--1796690166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa97b74ff-5e", "ovs_interfaceid": "a97b74ff-5e1f-4cb1-a688-f986acf75619", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:27:02 np0005597378 nova_compute[238941]: 2026-01-27 14:27:02.610 238945 DEBUG oslo_concurrency.lockutils [req-21074854-4fcd-4a32-b116-a965aa72911a req-e97e711f-ed25-4e96-880b-6c2a490310c7 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-0131fc36-bc84-47cd-8067-04bef1ed346b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.219 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 121 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 21 KiB/s wr, 85 op/s
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.413 238945 DEBUG nova.network.neutron [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.438 238945 INFO nova.compute.manager [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Took 0.89 seconds to deallocate network for instance.#033[00m
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.509 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.509 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.517 238945 DEBUG nova.compute.manager [req-ec45e4b3-01c9-412f-a994-4be0504298f7 req-09d85a35-f557-4cee-90c9-1cc2bcb26ba5 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-vif-deleted-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.562 238945 DEBUG nova.compute.manager [req-64d0f850-6933-46a8-b29a-d4a9228e8bc9 req-9597fe75-b85a-407a-86dd-1aa5961d9df9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.563 238945 DEBUG oslo_concurrency.lockutils [req-64d0f850-6933-46a8-b29a-d4a9228e8bc9 req-9597fe75-b85a-407a-86dd-1aa5961d9df9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.563 238945 DEBUG oslo_concurrency.lockutils [req-64d0f850-6933-46a8-b29a-d4a9228e8bc9 req-9597fe75-b85a-407a-86dd-1aa5961d9df9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.563 238945 DEBUG oslo_concurrency.lockutils [req-64d0f850-6933-46a8-b29a-d4a9228e8bc9 req-9597fe75-b85a-407a-86dd-1aa5961d9df9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.563 238945 DEBUG nova.compute.manager [req-64d0f850-6933-46a8-b29a-d4a9228e8bc9 req-9597fe75-b85a-407a-86dd-1aa5961d9df9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] No waiting events found dispatching network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.564 238945 WARNING nova.compute.manager [req-64d0f850-6933-46a8-b29a-d4a9228e8bc9 req-9597fe75-b85a-407a-86dd-1aa5961d9df9 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Received unexpected event network-vif-plugged-a97b74ff-5e1f-4cb1-a688-f986acf75619 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.565 238945 DEBUG oslo_concurrency.processutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:03 np0005597378 nova_compute[238941]: 2026-01-27 14:27:03.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:27:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3345977011' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:27:04 np0005597378 nova_compute[238941]: 2026-01-27 14:27:04.162 238945 DEBUG oslo_concurrency.processutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:04 np0005597378 nova_compute[238941]: 2026-01-27 14:27:04.169 238945 DEBUG nova.compute.provider_tree [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:27:04 np0005597378 nova_compute[238941]: 2026-01-27 14:27:04.186 238945 DEBUG nova.scheduler.client.report [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:27:04 np0005597378 nova_compute[238941]: 2026-01-27 14:27:04.205 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:04 np0005597378 nova_compute[238941]: 2026-01-27 14:27:04.240 238945 INFO nova.scheduler.client.report [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance 0131fc36-bc84-47cd-8067-04bef1ed346b#033[00m
Jan 27 09:27:04 np0005597378 nova_compute[238941]: 2026-01-27 14:27:04.319 238945 DEBUG oslo_concurrency.lockutils [None req-2a24cb3f-5cd6-4c43-a29d-ca5011337c56 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "0131fc36-bc84-47cd-8067-04bef1ed346b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 65 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 22 KiB/s wr, 100 op/s
Jan 27 09:27:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:27:05 np0005597378 nova_compute[238941]: 2026-01-27 14:27:05.762 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:05 np0005597378 nova_compute[238941]: 2026-01-27 14:27:05.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:06 np0005597378 nova_compute[238941]: 2026-01-27 14:27:06.168 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 20 KiB/s wr, 94 op/s
Jan 27 09:27:07 np0005597378 nova_compute[238941]: 2026-01-27 14:27:07.757 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524012.756274, 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:27:07 np0005597378 nova_compute[238941]: 2026-01-27 14:27:07.758 238945 INFO nova.compute.manager [-] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:27:07 np0005597378 nova_compute[238941]: 2026-01-27 14:27:07.782 238945 DEBUG nova.compute.manager [None req-653f91de-3319-4edb-b247-374a3beb9985 - - - - - -] [instance: 1b157d23-83f3-456c-8dae-d4ac1bcf3cdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:27:08 np0005597378 nova_compute[238941]: 2026-01-27 14:27:08.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:08 np0005597378 nova_compute[238941]: 2026-01-27 14:27:08.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:27:08 np0005597378 nova_compute[238941]: 2026-01-27 14:27:08.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:08 np0005597378 nova_compute[238941]: 2026-01-27 14:27:08.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:08 np0005597378 nova_compute[238941]: 2026-01-27 14:27:08.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:08 np0005597378 nova_compute[238941]: 2026-01-27 14:27:08.409 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:27:08 np0005597378 nova_compute[238941]: 2026-01-27 14:27:08.409 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:27:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745924196' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:27:09 np0005597378 nova_compute[238941]: 2026-01-27 14:27:09.026 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:09 np0005597378 nova_compute[238941]: 2026-01-27 14:27:09.200 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:27:09 np0005597378 nova_compute[238941]: 2026-01-27 14:27:09.201 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3553MB free_disk=59.987363575957716GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:27:09 np0005597378 nova_compute[238941]: 2026-01-27 14:27:09.201 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:09 np0005597378 nova_compute[238941]: 2026-01-27 14:27:09.202 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 17 KiB/s wr, 83 op/s
Jan 27 09:27:09 np0005597378 nova_compute[238941]: 2026-01-27 14:27:09.273 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:27:09 np0005597378 nova_compute[238941]: 2026-01-27 14:27:09.274 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:27:09 np0005597378 nova_compute[238941]: 2026-01-27 14:27:09.295 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:27:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3297055411' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:27:09 np0005597378 nova_compute[238941]: 2026-01-27 14:27:09.888 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:09 np0005597378 nova_compute[238941]: 2026-01-27 14:27:09.894 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:27:09 np0005597378 nova_compute[238941]: 2026-01-27 14:27:09.937 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:27:09 np0005597378 nova_compute[238941]: 2026-01-27 14:27:09.984 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:27:09 np0005597378 nova_compute[238941]: 2026-01-27 14:27:09.985 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:27:10 np0005597378 nova_compute[238941]: 2026-01-27 14:27:10.764 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:10 np0005597378 nova_compute[238941]: 2026-01-27 14:27:10.903 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524015.9025354, dab9f91a-166a-4055-95d9-c98bede611a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:27:10 np0005597378 nova_compute[238941]: 2026-01-27 14:27:10.904 238945 INFO nova.compute.manager [-] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:27:10 np0005597378 nova_compute[238941]: 2026-01-27 14:27:10.930 238945 DEBUG nova.compute.manager [None req-b9ec9fbd-2728-4f6b-9395-a1057ea01437 - - - - - -] [instance: dab9f91a-166a-4055-95d9-c98bede611a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:27:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 17 KiB/s wr, 83 op/s
Jan 27 09:27:12 np0005597378 nova_compute[238941]: 2026-01-27 14:27:12.750 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524017.7332735, 6635dda1-c175-403d-ac21-0ec9dca6a77c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:27:12 np0005597378 nova_compute[238941]: 2026-01-27 14:27:12.751 238945 INFO nova.compute.manager [-] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:27:12 np0005597378 nova_compute[238941]: 2026-01-27 14:27:12.788 238945 DEBUG nova.compute.manager [None req-7efc7414-c4a3-4b02-9416-d7c0418da38b - - - - - -] [instance: 6635dda1-c175-403d-ac21-0ec9dca6a77c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:27:12 np0005597378 nova_compute[238941]: 2026-01-27 14:27:12.985 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:27:13 np0005597378 nova_compute[238941]: 2026-01-27 14:27:13.221 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 09:27:13 np0005597378 podman[373621]: 2026-01-27 14:27:13.727377943 +0000 UTC m=+0.063736810 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 09:27:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 27 09:27:15 np0005597378 nova_compute[238941]: 2026-01-27 14:27:15.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:27:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:27:15 np0005597378 nova_compute[238941]: 2026-01-27 14:27:15.686 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:15 np0005597378 nova_compute[238941]: 2026-01-27 14:27:15.686 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:15 np0005597378 nova_compute[238941]: 2026-01-27 14:27:15.708 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:27:15 np0005597378 nova_compute[238941]: 2026-01-27 14:27:15.713 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524020.711243, 0131fc36-bc84-47cd-8067-04bef1ed346b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:27:15 np0005597378 nova_compute[238941]: 2026-01-27 14:27:15.713 238945 INFO nova.compute.manager [-] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:27:15 np0005597378 podman[373641]: 2026-01-27 14:27:15.741881422 +0000 UTC m=+0.087085621 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 27 09:27:15 np0005597378 nova_compute[238941]: 2026-01-27 14:27:15.762 238945 DEBUG nova.compute.manager [None req-849b9563-3f98-4ebe-b02c-f51bc88eb475 - - - - - -] [instance: 0131fc36-bc84-47cd-8067-04bef1ed346b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:27:15 np0005597378 nova_compute[238941]: 2026-01-27 14:27:15.766 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:15 np0005597378 nova_compute[238941]: 2026-01-27 14:27:15.806 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:15 np0005597378 nova_compute[238941]: 2026-01-27 14:27:15.807 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:15 np0005597378 nova_compute[238941]: 2026-01-27 14:27:15.814 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:27:15 np0005597378 nova_compute[238941]: 2026-01-27 14:27:15.815 238945 INFO nova.compute.claims [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:27:15 np0005597378 nova_compute[238941]: 2026-01-27 14:27:15.918 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:27:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1026539462' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.518 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.525 238945 DEBUG nova.compute.provider_tree [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.551 238945 DEBUG nova.scheduler.client.report [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.573 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.574 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.633 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.633 238945 DEBUG nova.network.neutron [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.651 238945 INFO nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.668 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.751 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.752 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.753 238945 INFO nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Creating image(s)
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.893 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.917 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.943 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:27:16 np0005597378 nova_compute[238941]: 2026-01-27 14:27:16.948 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:27:17 np0005597378 nova_compute[238941]: 2026-01-27 14:27:17.021 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:27:17 np0005597378 nova_compute[238941]: 2026-01-27 14:27:17.022 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:27:17 np0005597378 nova_compute[238941]: 2026-01-27 14:27:17.023 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:27:17 np0005597378 nova_compute[238941]: 2026-01-27 14:27:17.023 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:27:17 np0005597378 nova_compute[238941]: 2026-01-27 14:27:17.049 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:27:17 np0005597378 nova_compute[238941]: 2026-01-27 14:27:17.053 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e7d05a6a-847c-4124-bbb7-f122cb954501_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:27:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:27:17
Jan 27 09:27:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:27:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:27:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.rgw.root', 'backups', 'volumes', 'images', 'cephfs.cephfs.data']
Jan 27 09:27:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:27:17 np0005597378 nova_compute[238941]: 2026-01-27 14:27:17.172 238945 DEBUG nova.policy [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '15cb999473674ad581f5a98de252c28a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '805ab209134d4d70b18753f441ccc5a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 09:27:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Jan 27 09:27:17 np0005597378 nova_compute[238941]: 2026-01-27 14:27:17.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:27:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:27:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:27:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:27:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:27:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:27:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:27:18 np0005597378 nova_compute[238941]: 2026-01-27 14:27:18.166 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f e7d05a6a-847c-4124-bbb7-f122cb954501_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:27:18 np0005597378 nova_compute[238941]: 2026-01-27 14:27:18.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:27:18 np0005597378 nova_compute[238941]: 2026-01-27 14:27:18.238 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] resizing rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 27 09:27:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:27:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:27:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:27:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:27:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:27:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:27:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:27:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:27:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:27:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:27:18 np0005597378 nova_compute[238941]: 2026-01-27 14:27:18.350 238945 DEBUG nova.objects.instance [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lazy-loading 'migration_context' on Instance uuid e7d05a6a-847c-4124-bbb7-f122cb954501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:27:18 np0005597378 nova_compute[238941]: 2026-01-27 14:27:18.365 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 09:27:18 np0005597378 nova_compute[238941]: 2026-01-27 14:27:18.366 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Ensure instance console log exists: /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 09:27:18 np0005597378 nova_compute[238941]: 2026-01-27 14:27:18.366 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:27:18 np0005597378 nova_compute[238941]: 2026-01-27 14:27:18.367 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:27:18 np0005597378 nova_compute[238941]: 2026-01-27 14:27:18.367 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:27:18 np0005597378 nova_compute[238941]: 2026-01-27 14:27:18.596 238945 DEBUG nova.network.neutron [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Successfully created port: c5db635d-2d18-4cdb-9339-8474b028f04b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 09:27:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 41 MiB data, 958 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:27:19 np0005597378 nova_compute[238941]: 2026-01-27 14:27:19.448 238945 DEBUG nova.network.neutron [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Successfully updated port: c5db635d-2d18-4cdb-9339-8474b028f04b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 09:27:19 np0005597378 nova_compute[238941]: 2026-01-27 14:27:19.470 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:27:19 np0005597378 nova_compute[238941]: 2026-01-27 14:27:19.470 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquired lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:27:19 np0005597378 nova_compute[238941]: 2026-01-27 14:27:19.470 238945 DEBUG nova.network.neutron [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 09:27:19 np0005597378 nova_compute[238941]: 2026-01-27 14:27:19.555 238945 DEBUG nova.compute.manager [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-changed-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:27:19 np0005597378 nova_compute[238941]: 2026-01-27 14:27:19.556 238945 DEBUG nova.compute.manager [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Refreshing instance network info cache due to event network-changed-c5db635d-2d18-4cdb-9339-8474b028f04b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:27:19 np0005597378 nova_compute[238941]: 2026-01-27 14:27:19.556 238945 DEBUG oslo_concurrency.lockutils [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:27:19 np0005597378 nova_compute[238941]: 2026-01-27 14:27:19.621 238945 DEBUG nova.network.neutron [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.427 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.442 238945 DEBUG nova.network.neutron [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updating instance_info_cache with network_info: [{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.463 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Releasing lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.464 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Instance network_info: |[{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.464 238945 DEBUG oslo_concurrency.lockutils [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.465 238945 DEBUG nova.network.neutron [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Refreshing network info cache for port c5db635d-2d18-4cdb-9339-8474b028f04b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.468 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Start _get_guest_xml network_info=[{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.472 238945 WARNING nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.478 238945 DEBUG nova.virt.libvirt.host [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.478 238945 DEBUG nova.virt.libvirt.host [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.483 238945 DEBUG nova.virt.libvirt.host [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.483 238945 DEBUG nova.virt.libvirt.host [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.483 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.484 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.484 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.484 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.484 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.485 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.485 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.485 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.485 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.485 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.486 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.486 238945 DEBUG nova.virt.hardware [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.488 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:27:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:27:20 np0005597378 nova_compute[238941]: 2026-01-27 14:27:20.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:27:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236628048' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.057 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.085 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.089 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 88 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.600 238945 DEBUG nova.network.neutron [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updated VIF entry in instance network info cache for port c5db635d-2d18-4cdb-9339-8474b028f04b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.602 238945 DEBUG nova.network.neutron [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updating instance_info_cache with network_info: [{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.629 238945 DEBUG oslo_concurrency.lockutils [req-f02ca538-ae29-4c5f-8f9c-d8a0a58e73c2 req-05ede518-f013-4c84-8973-8a02471626c4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:27:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:27:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2841105619' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.745 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.747 238945 DEBUG nova.virt.libvirt.vif [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-371612586',display_name='tempest-TestSnapshotPattern-server-371612586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-371612586',id=149,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNc3kabcPAp53cNHKQDuO00NCZjux+eUa7mSEOBuVMwPChG+U7u0C7ZMMWjp5T2k1lbp+Ba5Z8QjzrSwhKhfIt41TEYXlz0I5nulM29GPEwEtNekD200rXhk5UEP8gcTGg==',key_name='tempest-TestSnapshotPattern-1633717199',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='805ab209134d4d70b18753f441ccc5a7',ramdisk_id='',reservation_id='r-z0d5npn4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2108848063',owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:27:16Z,user_data=None,user_id='15cb999473674ad581f5a98de252c28a',uuid=e7d05a6a-847c-4124-bbb7-f122cb954501,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.748 238945 DEBUG nova.network.os_vif_util [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converting VIF {"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.749 238945 DEBUG nova.network.os_vif_util [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.750 238945 DEBUG nova.objects.instance [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid e7d05a6a-847c-4124-bbb7-f122cb954501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.790 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  <uuid>e7d05a6a-847c-4124-bbb7-f122cb954501</uuid>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  <name>instance-00000095</name>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestSnapshotPattern-server-371612586</nova:name>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:27:20</nova:creationTime>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:        <nova:user uuid="15cb999473674ad581f5a98de252c28a">tempest-TestSnapshotPattern-2108848063-project-member</nova:user>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:        <nova:project uuid="805ab209134d4d70b18753f441ccc5a7">tempest-TestSnapshotPattern-2108848063</nova:project>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:        <nova:port uuid="c5db635d-2d18-4cdb-9339-8474b028f04b">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <entry name="serial">e7d05a6a-847c-4124-bbb7-f122cb954501</entry>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <entry name="uuid">e7d05a6a-847c-4124-bbb7-f122cb954501</entry>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e7d05a6a-847c-4124-bbb7-f122cb954501_disk">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/e7d05a6a-847c-4124-bbb7-f122cb954501_disk.config">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:e7:e8:81"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <target dev="tapc5db635d-2d"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/console.log" append="off"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:27:21 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:27:21 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:27:21 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:27:21 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.792 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Preparing to wait for external event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.792 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.793 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.794 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.795 238945 DEBUG nova.virt.libvirt.vif [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-371612586',display_name='tempest-TestSnapshotPattern-server-371612586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-371612586',id=149,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNc3kabcPAp53cNHKQDuO00NCZjux+eUa7mSEOBuVMwPChG+U7u0C7ZMMWjp5T2k1lbp+Ba5Z8QjzrSwhKhfIt41TEYXlz0I5nulM29GPEwEtNekD200rXhk5UEP8gcTGg==',key_name='tempest-TestSnapshotPattern-1633717199',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='805ab209134d4d70b18753f441ccc5a7',ramdisk_id='',reservation_id='r-z0d5npn4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2108848063',owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:27:16Z,user_data=None,user_id='15cb999473674ad581f5a98de252c28a',uuid=e7d05a6a-847c-4124-bbb7-f122cb954501,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.795 238945 DEBUG nova.network.os_vif_util [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converting VIF {"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.796 238945 DEBUG nova.network.os_vif_util [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.796 238945 DEBUG os_vif [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.797 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.797 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.798 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.802 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.802 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5db635d-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.803 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc5db635d-2d, col_values=(('external_ids', {'iface-id': 'c5db635d-2d18-4cdb-9339-8474b028f04b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:e8:81', 'vm-uuid': 'e7d05a6a-847c-4124-bbb7-f122cb954501'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.805 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:21 np0005597378 NetworkManager[48904]: <info>  [1769524041.8063] manager: (tapc5db635d-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.809 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.813 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:21 np0005597378 nova_compute[238941]: 2026-01-27 14:27:21.815 238945 INFO os_vif [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d')#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.083 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.084 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.084 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No VIF found with MAC fa:16:3e:e7:e8:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.085 238945 INFO nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Using config drive#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.109 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.220 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.220 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.254 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.333 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.333 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.340 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.340 238945 INFO nova.compute.claims [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.452 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.697 238945 INFO nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Creating config drive at /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/disk.config#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.750 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo55r8jqh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.897 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo55r8jqh" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.935 238945 DEBUG nova.storage.rbd_utils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image e7d05a6a-847c-4124-bbb7-f122cb954501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:27:22 np0005597378 nova_compute[238941]: 2026-01-27 14:27:22.942 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/disk.config e7d05a6a-847c-4124-bbb7-f122cb954501_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:27:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/583056511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.054 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.063 238945 DEBUG nova.compute.provider_tree [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.145 238945 DEBUG nova.scheduler.client.report [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.196 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.197 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.224 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 88 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.297 238945 DEBUG oslo_concurrency.processutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/disk.config e7d05a6a-847c-4124-bbb7-f122cb954501_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.298 238945 INFO nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Deleting local config drive /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501/disk.config because it was imported into RBD.#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.333 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.334 238945 DEBUG nova.network.neutron [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:27:23 np0005597378 kernel: tapc5db635d-2d: entered promiscuous mode
Jan 27 09:27:23 np0005597378 NetworkManager[48904]: <info>  [1769524043.3460] manager: (tapc5db635d-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/645)
Jan 27 09:27:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:23Z|01586|binding|INFO|Claiming lport c5db635d-2d18-4cdb-9339-8474b028f04b for this chassis.
Jan 27 09:27:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:23Z|01587|binding|INFO|c5db635d-2d18-4cdb-9339-8474b028f04b: Claiming fa:16:3e:e7:e8:81 10.100.0.13
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.345 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.370 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:e8:81 10.100.0.13'], port_security=['fa:16:3e:e7:e8:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e7d05a6a-847c-4124-bbb7-f122cb954501', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '805ab209134d4d70b18753f441ccc5a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f86cce-87c9-45ba-83fe-825a709960dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03b607c2-7b6e-41ad-bb2f-d8b59f61c333, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c5db635d-2d18-4cdb-9339-8474b028f04b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.372 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c5db635d-2d18-4cdb-9339-8474b028f04b in datapath 4d618b96-4a07-4d69-bf79-7e30a43f8748 bound to our chassis#033[00m
Jan 27 09:27:23 np0005597378 systemd-udevd[374009]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.373 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d618b96-4a07-4d69-bf79-7e30a43f8748#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.372 238945 INFO nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:27:23 np0005597378 systemd-machined[207425]: New machine qemu-181-instance-00000095.
Jan 27 09:27:23 np0005597378 NetworkManager[48904]: <info>  [1769524043.3834] device (tapc5db635d-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:27:23 np0005597378 NetworkManager[48904]: <info>  [1769524043.3839] device (tapc5db635d-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.388 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bd6d29ec-9adb-44c1-8e54-b0ebb6f6ef06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.390 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d618b96-41 in ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.391 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d618b96-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.392 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1314df-aad0-4486-aef2-3be783321753]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.393 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6a78ea48-28b8-47a5-aba7-33c6a391d4bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 systemd[1]: Started Virtual Machine qemu-181-instance-00000095.
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.407 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[8da36813-4229-4783-8ecc-e2e5b1896121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.409 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.418 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:23Z|01588|binding|INFO|Setting lport c5db635d-2d18-4cdb-9339-8474b028f04b ovn-installed in OVS
Jan 27 09:27:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:23Z|01589|binding|INFO|Setting lport c5db635d-2d18-4cdb-9339-8474b028f04b up in Southbound
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.439 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.452 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3bbc9488-046a-4991-b0be-112787432d6d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.485 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e323d1-4181-48dc-81e0-75fe4b7df50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.491 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[bc72ba7e-7eaf-411d-86ec-cf8230bf163b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 NetworkManager[48904]: <info>  [1769524043.4932] manager: (tap4d618b96-40): new Veth device (/org/freedesktop/NetworkManager/Devices/646)
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.527 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee7b515-1ffa-4c70-b6d8-c3fe2f656fbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.531 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e5074e60-e35f-47cb-a088-b14014451d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 NetworkManager[48904]: <info>  [1769524043.5594] device (tap4d618b96-40): carrier: link connected
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.570 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[e22c9ada-634b-4cf8-be51-e25dfdb60f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.595 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3511d554-42d6-4809-8f9a-542ce07bdec0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d618b96-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:25:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668318, 'reachable_time': 18816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374043, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.617 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[957319c8-cb03-4ef8-bc00-d2cf83c54c88]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:25d0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668318, 'tstamp': 668318}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374044, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.639 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e9878575-90b2-4cb6-8a09-7bfbe9fd9530]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d618b96-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:25:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668318, 'reachable_time': 18816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374045, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.660 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.662 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.663 238945 INFO nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Creating image(s)#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.678 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5e9382-5802-45cd-95a9-a004b638330b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.705 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.737 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.760 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfbb48f-ffb3-45fd-bb43-ed0fa1b91fb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.763 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d618b96-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.763 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.765 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d618b96-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:23 np0005597378 kernel: tap4d618b96-40: entered promiscuous mode
Jan 27 09:27:23 np0005597378 NetworkManager[48904]: <info>  [1769524043.7696] manager: (tap4d618b96-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/647)
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.768 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.775 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d618b96-40, col_values=(('external_ids', {'iface-id': '61475a7c-9045-4191-a533-3416010cde1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:23Z|01590|binding|INFO|Releasing lport 61475a7c-9045-4191-a533-3416010cde1f from this chassis (sb_readonly=0)
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.778 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.782 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d618b96-4a07-4d69-bf79-7e30a43f8748.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d618b96-4a07-4d69-bf79-7e30a43f8748.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.783 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[0e77db41-ae5a-404b-8f2c-ee03abf47ab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.784 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-4d618b96-4a07-4d69-bf79-7e30a43f8748
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/4d618b96-4a07-4d69-bf79-7e30a43f8748.pid.haproxy
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 4d618b96-4a07-4d69-bf79-7e30a43f8748
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:27:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:23.786 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'env', 'PROCESS_TAG=haproxy-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d618b96-4a07-4d69-bf79-7e30a43f8748.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.853 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.854 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.854 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.855 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.880 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:27:23 np0005597378 nova_compute[238941]: 2026-01-27 14:27:23.883 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c7923d56-2a41-4171-a525-a985a28fc016_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:24 np0005597378 nova_compute[238941]: 2026-01-27 14:27:24.148 238945 DEBUG nova.policy [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:27:24 np0005597378 podman[374171]: 2026-01-27 14:27:24.168915596 +0000 UTC m=+0.032082066 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:27:24 np0005597378 nova_compute[238941]: 2026-01-27 14:27:24.367 238945 DEBUG nova.compute.manager [req-e88c45e8-7116-4bc4-830b-d1eed3b5e115 req-f8a36081-672f-4c89-bdef-11d6eb77d0e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:27:24 np0005597378 nova_compute[238941]: 2026-01-27 14:27:24.368 238945 DEBUG oslo_concurrency.lockutils [req-e88c45e8-7116-4bc4-830b-d1eed3b5e115 req-f8a36081-672f-4c89-bdef-11d6eb77d0e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:24 np0005597378 nova_compute[238941]: 2026-01-27 14:27:24.368 238945 DEBUG oslo_concurrency.lockutils [req-e88c45e8-7116-4bc4-830b-d1eed3b5e115 req-f8a36081-672f-4c89-bdef-11d6eb77d0e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:24 np0005597378 nova_compute[238941]: 2026-01-27 14:27:24.368 238945 DEBUG oslo_concurrency.lockutils [req-e88c45e8-7116-4bc4-830b-d1eed3b5e115 req-f8a36081-672f-4c89-bdef-11d6eb77d0e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:24 np0005597378 nova_compute[238941]: 2026-01-27 14:27:24.369 238945 DEBUG nova.compute.manager [req-e88c45e8-7116-4bc4-830b-d1eed3b5e115 req-f8a36081-672f-4c89-bdef-11d6eb77d0e0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Processing event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:27:24 np0005597378 podman[374171]: 2026-01-27 14:27:24.590182298 +0000 UTC m=+0.453348758 container create 8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 09:27:24 np0005597378 systemd[1]: Started libpod-conmon-8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62.scope.
Jan 27 09:27:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:27:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452c9233bc3b6f5ba7ec9a6cb3d2f937271e9caae10adc6c0f0761b21388886a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:27:24 np0005597378 nova_compute[238941]: 2026-01-27 14:27:24.803 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f c7923d56-2a41-4171-a525-a985a28fc016_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.920s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:24 np0005597378 nova_compute[238941]: 2026-01-27 14:27:24.868 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:27:24 np0005597378 podman[374171]: 2026-01-27 14:27:24.873836498 +0000 UTC m=+0.737003008 container init 8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 09:27:24 np0005597378 podman[374171]: 2026-01-27 14:27:24.879607153 +0000 UTC m=+0.742773623 container start 8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:27:24 np0005597378 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [NOTICE]   (374254) : New worker (374269) forked
Jan 27 09:27:24 np0005597378 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [NOTICE]   (374254) : Loading success.
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.050 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.050 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524045.0496922, e7d05a6a-847c-4124-bbb7-f122cb954501 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.051 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] VM Started (Lifecycle Event)#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.055 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.059 238945 INFO nova.virt.libvirt.driver [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Instance spawned successfully.#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.059 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.180 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.183 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.193 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.193 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.194 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.194 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.194 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.194 238945 DEBUG nova.virt.libvirt.driver [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:27:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 103 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.3 MiB/s wr, 30 op/s
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.304 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.304 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524045.053036, e7d05a6a-847c-4124-bbb7-f122cb954501 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.304 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.379 238945 DEBUG nova.objects.instance [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid c7923d56-2a41-4171-a525-a985a28fc016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.409 238945 INFO nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Took 8.66 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.409 238945 DEBUG nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.425 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.429 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524045.0554972, e7d05a6a-847c-4124-bbb7-f122cb954501 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.430 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.453 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.453 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Ensure instance console log exists: /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.454 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.454 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.454 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.487 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.490 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.559 238945 INFO nova.compute.manager [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Took 9.78 seconds to build instance.#033[00m
Jan 27 09:27:25 np0005597378 nova_compute[238941]: 2026-01-27 14:27:25.703 238945 DEBUG oslo_concurrency.lockutils [None req-61cff1b5-e90a-4b77-a686-ab9edfec8d8e 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:27:26 np0005597378 nova_compute[238941]: 2026-01-27 14:27:26.285 238945 DEBUG nova.network.neutron [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Successfully created port: b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:27:26 np0005597378 nova_compute[238941]: 2026-01-27 14:27:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:27:26 np0005597378 nova_compute[238941]: 2026-01-27 14:27:26.805 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 115 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.4 MiB/s wr, 60 op/s
Jan 27 09:27:27 np0005597378 nova_compute[238941]: 2026-01-27 14:27:27.425 238945 DEBUG nova.compute.manager [req-85fdd10f-0895-48d8-82b4-56e9580e5b3d req-c5a580d1-75b9-4301-bd0f-1449a44f7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:27:27 np0005597378 nova_compute[238941]: 2026-01-27 14:27:27.425 238945 DEBUG oslo_concurrency.lockutils [req-85fdd10f-0895-48d8-82b4-56e9580e5b3d req-c5a580d1-75b9-4301-bd0f-1449a44f7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:27 np0005597378 nova_compute[238941]: 2026-01-27 14:27:27.425 238945 DEBUG oslo_concurrency.lockutils [req-85fdd10f-0895-48d8-82b4-56e9580e5b3d req-c5a580d1-75b9-4301-bd0f-1449a44f7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:27 np0005597378 nova_compute[238941]: 2026-01-27 14:27:27.426 238945 DEBUG oslo_concurrency.lockutils [req-85fdd10f-0895-48d8-82b4-56e9580e5b3d req-c5a580d1-75b9-4301-bd0f-1449a44f7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:27 np0005597378 nova_compute[238941]: 2026-01-27 14:27:27.426 238945 DEBUG nova.compute.manager [req-85fdd10f-0895-48d8-82b4-56e9580e5b3d req-c5a580d1-75b9-4301-bd0f-1449a44f7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] No waiting events found dispatching network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:27:27 np0005597378 nova_compute[238941]: 2026-01-27 14:27:27.426 238945 WARNING nova.compute.manager [req-85fdd10f-0895-48d8-82b4-56e9580e5b3d req-c5a580d1-75b9-4301-bd0f-1449a44f7c24 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received unexpected event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b for instance with vm_state active and task_state None.#033[00m
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000709625708526673 of space, bias 1.0, pg target 0.21288771255800187 quantized to 32 (current 32)
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693094015944289 of space, bias 1.0, pg target 0.20079282047832867 quantized to 32 (current 32)
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0288933055152603e-06 of space, bias 4.0, pg target 0.0012346719666183124 quantized to 16 (current 16)
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:27:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:27:28 np0005597378 nova_compute[238941]: 2026-01-27 14:27:28.044 238945 DEBUG nova.network.neutron [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Successfully updated port: b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:27:28 np0005597378 nova_compute[238941]: 2026-01-27 14:27:28.085 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:27:28 np0005597378 nova_compute[238941]: 2026-01-27 14:27:28.086 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:27:28 np0005597378 nova_compute[238941]: 2026-01-27 14:27:28.086 238945 DEBUG nova.network.neutron [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:27:28 np0005597378 nova_compute[238941]: 2026-01-27 14:27:28.226 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:28 np0005597378 nova_compute[238941]: 2026-01-27 14:27:28.358 238945 DEBUG nova.network.neutron [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:27:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 115 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.4 MiB/s wr, 60 op/s
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.358 238945 DEBUG nova.network.neutron [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updating instance_info_cache with network_info: [{"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.453 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.453 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Instance network_info: |[{"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.456 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Start _get_guest_xml network_info=[{"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:27:29 np0005597378 NetworkManager[48904]: <info>  [1769524049.4565] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/648)
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.456 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:29 np0005597378 NetworkManager[48904]: <info>  [1769524049.4574] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/649)
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.461 238945 WARNING nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.468 238945 DEBUG nova.virt.libvirt.host [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.469 238945 DEBUG nova.virt.libvirt.host [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.475 238945 DEBUG nova.virt.libvirt.host [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.476 238945 DEBUG nova.virt.libvirt.host [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.477 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.477 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.477 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.477 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.478 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.478 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.478 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.478 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.479 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.479 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.479 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.479 238945 DEBUG nova.virt.hardware [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.483 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:29Z|01591|binding|INFO|Releasing lport 61475a7c-9045-4191-a533-3416010cde1f from this chassis (sb_readonly=0)
Jan 27 09:27:29 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:29Z|01592|binding|INFO|Releasing lport 61475a7c-9045-4191-a533-3416010cde1f from this chassis (sb_readonly=0)
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.602 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.627 238945 DEBUG nova.compute.manager [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-changed-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.628 238945 DEBUG nova.compute.manager [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Refreshing instance network info cache due to event network-changed-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.628 238945 DEBUG oslo_concurrency.lockutils [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.629 238945 DEBUG oslo_concurrency.lockutils [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:27:29 np0005597378 nova_compute[238941]: 2026-01-27 14:27:29.629 238945 DEBUG nova.network.neutron [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Refreshing network info cache for port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:27:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:27:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2670059445' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.135 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.160 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.164 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.258 238945 DEBUG nova.compute.manager [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-changed-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.259 238945 DEBUG nova.compute.manager [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Refreshing instance network info cache due to event network-changed-c5db635d-2d18-4cdb-9339-8474b028f04b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.260 238945 DEBUG oslo_concurrency.lockutils [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.260 238945 DEBUG oslo_concurrency.lockutils [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.260 238945 DEBUG nova.network.neutron [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Refreshing network info cache for port c5db635d-2d18-4cdb-9339-8474b028f04b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:27:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:27:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2067412212' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:27:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.786 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.788 238945 DEBUG nova.virt.libvirt.vif [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=150,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLziTTnCpQiGwD+zfsbNvOm7z3D/S+qNwJSbq3Ty7c+UeSsIMUq4F1A96cXt5Rqv4yVv7AHtiaj6NhC4ex3LO4YKWjSY7OsLwRoADAIj8lBXbuJvokxKrQTVr+6ZTeWY5A==',key_name='tempest-TestSecurityGroupsBasicOps-986212093',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-3fs48m1e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:27:23Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=c7923d56-2a41-4171-a525-a985a28fc016,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.788 238945 DEBUG nova.network.os_vif_util [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.789 238945 DEBUG nova.network.os_vif_util [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.791 238945 DEBUG nova.objects.instance [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid c7923d56-2a41-4171-a525-a985a28fc016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.810 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  <uuid>c7923d56-2a41-4171-a525-a985a28fc016</uuid>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  <name>instance-00000096</name>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806</nova:name>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:27:29</nova:creationTime>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:        <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:        <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:        <nova:port uuid="b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <entry name="serial">c7923d56-2a41-4171-a525-a985a28fc016</entry>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <entry name="uuid">c7923d56-2a41-4171-a525-a985a28fc016</entry>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/c7923d56-2a41-4171-a525-a985a28fc016_disk">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/c7923d56-2a41-4171-a525-a985a28fc016_disk.config">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:f2:2b:9c"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <target dev="tapb5f45ab4-38"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/console.log" append="off"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:27:30 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:27:30 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:27:30 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:27:30 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.816 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Preparing to wait for external event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.817 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.817 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.817 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.818 238945 DEBUG nova.virt.libvirt.vif [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=150,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLziTTnCpQiGwD+zfsbNvOm7z3D/S+qNwJSbq3Ty7c+UeSsIMUq4F1A96cXt5Rqv4yVv7AHtiaj6NhC4ex3LO4YKWjSY7OsLwRoADAIj8lBXbuJvokxKrQTVr+6ZTeWY5A==',key_name='tempest-TestSecurityGroupsBasicOps-986212093',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-3fs48m1e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:27:23Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=c7923d56-2a41-4171-a525-a985a28fc016,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.819 238945 DEBUG nova.network.os_vif_util [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.819 238945 DEBUG nova.network.os_vif_util [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.820 238945 DEBUG os_vif [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.821 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.821 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.822 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.825 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5f45ab4-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.825 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5f45ab4-38, col_values=(('external_ids', {'iface-id': 'b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:2b:9c', 'vm-uuid': 'c7923d56-2a41-4171-a525-a985a28fc016'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.827 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:30 np0005597378 NetworkManager[48904]: <info>  [1769524050.8287] manager: (tapb5f45ab4-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/650)
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.830 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.834 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:30 np0005597378 nova_compute[238941]: 2026-01-27 14:27:30.834 238945 INFO os_vif [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38')#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.016 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.016 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.017 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:f2:2b:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.018 238945 INFO nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Using config drive#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.038 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:27:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.409 238945 INFO nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Creating config drive at /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/disk.config#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.414 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx8vc6e38 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.552 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx8vc6e38" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.577 238945 DEBUG nova.storage.rbd_utils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image c7923d56-2a41-4171-a525-a985a28fc016_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.580 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/disk.config c7923d56-2a41-4171-a525-a985a28fc016_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.617 238945 DEBUG nova.network.neutron [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updated VIF entry in instance network info cache for port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.618 238945 DEBUG nova.network.neutron [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updating instance_info_cache with network_info: [{"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.643 238945 DEBUG oslo_concurrency.lockutils [req-66dae831-d437-46e3-97ea-8fb4074a89b6 req-a9ab5e85-f151-4ac1-b78e-abe637b7b51f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.842 238945 DEBUG oslo_concurrency.processutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/disk.config c7923d56-2a41-4171-a525-a985a28fc016_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.843 238945 INFO nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Deleting local config drive /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016/disk.config because it was imported into RBD.#033[00m
Jan 27 09:27:31 np0005597378 kernel: tapb5f45ab4-38: entered promiscuous mode
Jan 27 09:27:31 np0005597378 NetworkManager[48904]: <info>  [1769524051.8950] manager: (tapb5f45ab4-38): new Tun device (/org/freedesktop/NetworkManager/Devices/651)
Jan 27 09:27:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:31Z|01593|binding|INFO|Claiming lport b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb for this chassis.
Jan 27 09:27:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:31Z|01594|binding|INFO|b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb: Claiming fa:16:3e:f2:2b:9c 10.100.0.14
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.935 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.939 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:31 np0005597378 systemd-udevd[374450]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:27:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:31Z|01595|binding|INFO|Setting lport b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb ovn-installed in OVS
Jan 27 09:27:31 np0005597378 nova_compute[238941]: 2026-01-27 14:27:31.956 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:31 np0005597378 NetworkManager[48904]: <info>  [1769524051.9591] device (tapb5f45ab4-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:27:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:31Z|01596|binding|INFO|Setting lport b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb up in Southbound
Jan 27 09:27:31 np0005597378 NetworkManager[48904]: <info>  [1769524051.9603] device (tapb5f45ab4-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:27:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.959 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:2b:9c 10.100.0.14'], port_security=['fa:16:3e:f2:2b:9c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c7923d56-2a41-4171-a525-a985a28fc016', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '660baac2-dc26-4ff6-a045-736abfa5b2f4 c2c5ff5e-9ee7-4797-83a9-9d36f0a33d37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1f4dd7-61a4-41de-900e-cd5d6044addb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:27:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.960 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb in datapath c6659e71-fbb8-4896-9a40-2262d5df9f38 bound to our chassis#033[00m
Jan 27 09:27:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.961 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6659e71-fbb8-4896-9a40-2262d5df9f38#033[00m
Jan 27 09:27:31 np0005597378 systemd-machined[207425]: New machine qemu-182-instance-00000096.
Jan 27 09:27:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.973 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3f745627-0f44-41b9-a866-999dfbfe38ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.974 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc6659e71-f1 in ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:27:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.976 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc6659e71-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:27:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.976 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a77a0e03-75a5-4e6c-a260-c6ccc9697994]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.977 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[754073cf-7d84-4398-a1bc-0650f2647b8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:31 np0005597378 systemd[1]: Started Virtual Machine qemu-182-instance-00000096.
Jan 27 09:27:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:31.991 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[584de95e-f7dc-487f-93d4-0d8d851ee9a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.006 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca2ec41-db4a-4921-ad23-248519e2cd0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.055 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9591d1c6-0975-4ba7-b724-13879ef95439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.060 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3b459d30-dce9-4b4e-b555-550ec2819f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:32 np0005597378 NetworkManager[48904]: <info>  [1769524052.0619] manager: (tapc6659e71-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/652)
Jan 27 09:27:32 np0005597378 systemd-udevd[374455]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.100 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[76ae8f0d-c764-478e-af0d-cd4a467b9c1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.102 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[1e407aa4-91de-4808-a13a-3fe8462c759c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:32 np0005597378 NetworkManager[48904]: <info>  [1769524052.1302] device (tapc6659e71-f0): carrier: link connected
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.135 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8e3bf6-759e-40c4-91f4-da5150056ca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.154 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fc9e54-e012-4dc7-a714-0936eb73c350]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6659e71-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:1d:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669175, 'reachable_time': 43763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374486, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.170 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[951d7e31-112f-48b1-8c6a-7ea7d78a7f78]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:1d8b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669175, 'tstamp': 669175}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374487, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.188 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e3df7f4b-9117-45d2-aee4-260b9816312b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6659e71-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:1d:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669175, 'reachable_time': 43763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374488, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.223 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[80056f21-76ef-4c8b-9f20-f2c11fd4c70f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.263 238945 DEBUG nova.compute.manager [req-306a3bae-fc00-46c6-8cd4-11f041da4d5c req-4e674338-1cdd-4f2c-805c-4e6b453ce183 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.263 238945 DEBUG oslo_concurrency.lockutils [req-306a3bae-fc00-46c6-8cd4-11f041da4d5c req-4e674338-1cdd-4f2c-805c-4e6b453ce183 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.264 238945 DEBUG oslo_concurrency.lockutils [req-306a3bae-fc00-46c6-8cd4-11f041da4d5c req-4e674338-1cdd-4f2c-805c-4e6b453ce183 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.264 238945 DEBUG oslo_concurrency.lockutils [req-306a3bae-fc00-46c6-8cd4-11f041da4d5c req-4e674338-1cdd-4f2c-805c-4e6b453ce183 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.264 238945 DEBUG nova.compute.manager [req-306a3bae-fc00-46c6-8cd4-11f041da4d5c req-4e674338-1cdd-4f2c-805c-4e6b453ce183 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Processing event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.279 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[7684a037-04b2-4d5c-83c9-28104cc1551f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.281 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6659e71-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.282 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.282 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6659e71-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.284 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:32 np0005597378 NetworkManager[48904]: <info>  [1769524052.2847] manager: (tapc6659e71-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/653)
Jan 27 09:27:32 np0005597378 kernel: tapc6659e71-f0: entered promiscuous mode
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.285 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.288 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6659e71-f0, col_values=(('external_ids', {'iface-id': '744ca588-fa03-49bd-91c4-9cf04119b46c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.290 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:32 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:32Z|01597|binding|INFO|Releasing lport 744ca588-fa03-49bd-91c4-9cf04119b46c from this chassis (sb_readonly=0)
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.291 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.293 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6659e71-fbb8-4896-9a40-2262d5df9f38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6659e71-fbb8-4896-9a40-2262d5df9f38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.294 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d6fa1e10-9ad4-4feb-92cc-9eaef4e58fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.294 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-c6659e71-fbb8-4896-9a40-2262d5df9f38
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/c6659e71-fbb8-4896-9a40-2262d5df9f38.pid.haproxy
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID c6659e71-fbb8-4896-9a40-2262d5df9f38
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:27:32 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:32.296 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'env', 'PROCESS_TAG=haproxy-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c6659e71-fbb8-4896-9a40-2262d5df9f38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.305 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.487 238945 DEBUG nova.network.neutron [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updated VIF entry in instance network info cache for port c5db635d-2d18-4cdb-9339-8474b028f04b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.487 238945 DEBUG nova.network.neutron [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updating instance_info_cache with network_info: [{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.548 238945 DEBUG oslo_concurrency.lockutils [req-04512d56-fdbb-4142-870f-969145e0550d req-d428df5b-7480-4b3b-8693-16c5fc422805 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:27:32 np0005597378 podman[374520]: 2026-01-27 14:27:32.721073786 +0000 UTC m=+0.093251506 container create ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:27:32 np0005597378 podman[374520]: 2026-01-27 14:27:32.64929205 +0000 UTC m=+0.021469790 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.807 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524052.8069386, c7923d56-2a41-4171-a525-a985a28fc016 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.808 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] VM Started (Lifecycle Event)#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.811 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.815 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.819 238945 INFO nova.virt.libvirt.driver [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Instance spawned successfully.#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.819 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:27:32 np0005597378 systemd[1]: Started libpod-conmon-ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1.scope.
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.849 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:27:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.854 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:27:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9397fc0828262d31d4311ffab7a6e8b659c7f57c5cdcedf60e73984a31d6826e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.873 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.874 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.874 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.875 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.876 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.876 238945 DEBUG nova.virt.libvirt.driver [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.927 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.928 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524052.807072, c7923d56-2a41-4171-a525-a985a28fc016 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.928 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:27:32 np0005597378 podman[374520]: 2026-01-27 14:27:32.969996919 +0000 UTC m=+0.342174739 container init ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:27:32 np0005597378 podman[374520]: 2026-01-27 14:27:32.976393992 +0000 UTC m=+0.348571712 container start ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.981 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.988 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524052.814141, c7923d56-2a41-4171-a525-a985a28fc016 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:27:32 np0005597378 nova_compute[238941]: 2026-01-27 14:27:32.989 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:27:33 np0005597378 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [NOTICE]   (374580) : New worker (374582) forked
Jan 27 09:27:33 np0005597378 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [NOTICE]   (374580) : Loading success.
Jan 27 09:27:33 np0005597378 nova_compute[238941]: 2026-01-27 14:27:33.012 238945 INFO nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Took 9.35 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:27:33 np0005597378 nova_compute[238941]: 2026-01-27 14:27:33.013 238945 DEBUG nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:27:33 np0005597378 nova_compute[238941]: 2026-01-27 14:27:33.036 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:27:33 np0005597378 nova_compute[238941]: 2026-01-27 14:27:33.042 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:27:33 np0005597378 nova_compute[238941]: 2026-01-27 14:27:33.089 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:27:33 np0005597378 nova_compute[238941]: 2026-01-27 14:27:33.191 238945 INFO nova.compute.manager [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Took 10.88 seconds to build instance.#033[00m
Jan 27 09:27:33 np0005597378 nova_compute[238941]: 2026-01-27 14:27:33.229 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 09:27:33 np0005597378 nova_compute[238941]: 2026-01-27 14:27:33.325 238945 DEBUG oslo_concurrency.lockutils [None req-be4837fc-98ed-45d2-87ef-cc0d21b5b700 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:34 np0005597378 nova_compute[238941]: 2026-01-27 14:27:34.372 238945 DEBUG nova.compute.manager [req-ad5da9eb-b44f-427e-86a7-3cb2614c884d req-c9e15e1a-702f-47dd-809c-423cfa06b842 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:27:34 np0005597378 nova_compute[238941]: 2026-01-27 14:27:34.373 238945 DEBUG oslo_concurrency.lockutils [req-ad5da9eb-b44f-427e-86a7-3cb2614c884d req-c9e15e1a-702f-47dd-809c-423cfa06b842 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:34 np0005597378 nova_compute[238941]: 2026-01-27 14:27:34.373 238945 DEBUG oslo_concurrency.lockutils [req-ad5da9eb-b44f-427e-86a7-3cb2614c884d req-c9e15e1a-702f-47dd-809c-423cfa06b842 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:34 np0005597378 nova_compute[238941]: 2026-01-27 14:27:34.374 238945 DEBUG oslo_concurrency.lockutils [req-ad5da9eb-b44f-427e-86a7-3cb2614c884d req-c9e15e1a-702f-47dd-809c-423cfa06b842 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:34 np0005597378 nova_compute[238941]: 2026-01-27 14:27:34.374 238945 DEBUG nova.compute.manager [req-ad5da9eb-b44f-427e-86a7-3cb2614c884d req-c9e15e1a-702f-47dd-809c-423cfa06b842 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] No waiting events found dispatching network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:27:34 np0005597378 nova_compute[238941]: 2026-01-27 14:27:34.375 238945 WARNING nova.compute.manager [req-ad5da9eb-b44f-427e-86a7-3cb2614c884d req-c9e15e1a-702f-47dd-809c-423cfa06b842 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received unexpected event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb for instance with vm_state active and task_state None.#033[00m
Jan 27 09:27:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Jan 27 09:27:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:27:35 np0005597378 nova_compute[238941]: 2026-01-27 14:27:35.827 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:36 np0005597378 nova_compute[238941]: 2026-01-27 14:27:36.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:27:36 np0005597378 nova_compute[238941]: 2026-01-27 14:27:36.817 238945 DEBUG nova.compute.manager [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-changed-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:27:36 np0005597378 nova_compute[238941]: 2026-01-27 14:27:36.818 238945 DEBUG nova.compute.manager [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Refreshing instance network info cache due to event network-changed-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:27:36 np0005597378 nova_compute[238941]: 2026-01-27 14:27:36.818 238945 DEBUG oslo_concurrency.lockutils [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:27:36 np0005597378 nova_compute[238941]: 2026-01-27 14:27:36.819 238945 DEBUG oslo_concurrency.lockutils [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:27:36 np0005597378 nova_compute[238941]: 2026-01-27 14:27:36.819 238945 DEBUG nova.network.neutron [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Refreshing network info cache for port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:27:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 171 op/s
Jan 27 09:27:37 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Jan 27 09:27:38 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:38Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:e8:81 10.100.0.13
Jan 27 09:27:38 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:38Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:e8:81 10.100.0.13
Jan 27 09:27:38 np0005597378 nova_compute[238941]: 2026-01-27 14:27:38.231 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:38 np0005597378 nova_compute[238941]: 2026-01-27 14:27:38.325 238945 DEBUG nova.network.neutron [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updated VIF entry in instance network info cache for port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:27:38 np0005597378 nova_compute[238941]: 2026-01-27 14:27:38.326 238945 DEBUG nova.network.neutron [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updating instance_info_cache with network_info: [{"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:27:38 np0005597378 nova_compute[238941]: 2026-01-27 14:27:38.355 238945 DEBUG oslo_concurrency.lockutils [req-932bc279-9e41-4b57-b54f-b50c5208d24a req-9298c719-3970-4227-8cb7-2cef4f751dda 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:27:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 134 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 141 op/s
Jan 27 09:27:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:27:40 np0005597378 nova_compute[238941]: 2026-01-27 14:27:40.829 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.3 MiB/s wr, 204 op/s
Jan 27 09:27:43 np0005597378 nova_compute[238941]: 2026-01-27 14:27:43.232 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 27 09:27:44 np0005597378 podman[374591]: 2026-01-27 14:27:44.729574477 +0000 UTC m=+0.063613567 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:27:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 143 op/s
Jan 27 09:27:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:45Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:2b:9c 10.100.0.14
Jan 27 09:27:45 np0005597378 nova_compute[238941]: 2026-01-27 14:27:45.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:27:45 np0005597378 ovn_controller[144812]: 2026-01-27T14:27:45Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:2b:9c 10.100.0.14
Jan 27 09:27:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:27:45 np0005597378 nova_compute[238941]: 2026-01-27 14:27:45.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:46.334 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:46.336 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:27:46.338 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:46 np0005597378 nova_compute[238941]: 2026-01-27 14:27:46.416 238945 DEBUG nova.compute.manager [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:27:46 np0005597378 nova_compute[238941]: 2026-01-27 14:27:46.490 238945 INFO nova.compute.manager [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] instance snapshotting#033[00m
Jan 27 09:27:46 np0005597378 podman[374609]: 2026-01-27 14:27:46.741138546 +0000 UTC m=+0.085124957 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 27 09:27:46 np0005597378 nova_compute[238941]: 2026-01-27 14:27:46.782 238945 INFO nova.virt.libvirt.driver [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Beginning live snapshot process#033[00m
Jan 27 09:27:46 np0005597378 nova_compute[238941]: 2026-01-27 14:27:46.918 238945 DEBUG nova.virt.libvirt.imagebackend [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 09:27:47 np0005597378 nova_compute[238941]: 2026-01-27 14:27:47.144 238945 DEBUG nova.storage.rbd_utils [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] creating snapshot(73956e0d3669496bb2b0c6a478d44a07) on rbd image(e7d05a6a-847c-4124-bbb7-f122cb954501_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 09:27:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 177 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 725 KiB/s rd, 3.0 MiB/s wr, 102 op/s
Jan 27 09:27:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Jan 27 09:27:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Jan 27 09:27:47 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Jan 27 09:27:47 np0005597378 nova_compute[238941]: 2026-01-27 14:27:47.402 238945 DEBUG nova.storage.rbd_utils [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] cloning vms/e7d05a6a-847c-4124-bbb7-f122cb954501_disk@73956e0d3669496bb2b0c6a478d44a07 to images/dc70b820-f623-4425-90a6-c6b104369526 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 09:27:47 np0005597378 nova_compute[238941]: 2026-01-27 14:27:47.569 238945 DEBUG nova.storage.rbd_utils [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] flattening images/dc70b820-f623-4425-90a6-c6b104369526 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 09:27:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:27:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:27:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:27:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:27:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:27:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:27:48 np0005597378 nova_compute[238941]: 2026-01-27 14:27:48.235 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:49 np0005597378 nova_compute[238941]: 2026-01-27 14:27:49.050 238945 DEBUG nova.storage.rbd_utils [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] removing snapshot(73956e0d3669496bb2b0c6a478d44a07) on rbd image(e7d05a6a-847c-4124-bbb7-f122cb954501_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 09:27:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 177 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 562 KiB/s rd, 3.7 MiB/s wr, 106 op/s
Jan 27 09:27:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Jan 27 09:27:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Jan 27 09:27:50 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Jan 27 09:27:50 np0005597378 nova_compute[238941]: 2026-01-27 14:27:50.205 238945 DEBUG nova.storage.rbd_utils [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] creating snapshot(snap) on rbd image(dc70b820-f623-4425-90a6-c6b104369526) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 09:27:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:27:50 np0005597378 nova_compute[238941]: 2026-01-27 14:27:50.832 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Jan 27 09:27:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Jan 27 09:27:50 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Jan 27 09:27:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 12 MiB/s wr, 258 op/s
Jan 27 09:27:52 np0005597378 nova_compute[238941]: 2026-01-27 14:27:52.261 238945 INFO nova.virt.libvirt.driver [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Snapshot image upload complete#033[00m
Jan 27 09:27:52 np0005597378 nova_compute[238941]: 2026-01-27 14:27:52.261 238945 INFO nova.compute.manager [None req-31da4a95-a12c-4e1c-9697-f2684e7a8e7d 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Took 5.77 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 27 09:27:53 np0005597378 nova_compute[238941]: 2026-01-27 14:27:53.237 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 10 MiB/s wr, 221 op/s
Jan 27 09:27:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.8 MiB/s wr, 195 op/s
Jan 27 09:27:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:27:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Jan 27 09:27:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Jan 27 09:27:55 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Jan 27 09:27:55 np0005597378 nova_compute[238941]: 2026-01-27 14:27:55.834 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:56 np0005597378 nova_compute[238941]: 2026-01-27 14:27:56.219 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:56 np0005597378 nova_compute[238941]: 2026-01-27 14:27:56.219 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:56 np0005597378 nova_compute[238941]: 2026-01-27 14:27:56.699 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:27:56 np0005597378 nova_compute[238941]: 2026-01-27 14:27:56.768 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:56 np0005597378 nova_compute[238941]: 2026-01-27 14:27:56.769 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.5 MiB/s wr, 66 op/s
Jan 27 09:27:57 np0005597378 nova_compute[238941]: 2026-01-27 14:27:57.353 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:57 np0005597378 nova_compute[238941]: 2026-01-27 14:27:57.354 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:57 np0005597378 nova_compute[238941]: 2026-01-27 14:27:57.364 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:27:57 np0005597378 nova_compute[238941]: 2026-01-27 14:27:57.365 238945 INFO nova.compute.claims [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:27:57 np0005597378 nova_compute[238941]: 2026-01-27 14:27:57.414 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:27:58 np0005597378 nova_compute[238941]: 2026-01-27 14:27:58.145 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:27:58 np0005597378 nova_compute[238941]: 2026-01-27 14:27:58.239 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:27:58 np0005597378 nova_compute[238941]: 2026-01-27 14:27:58.884 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 20 KiB/s wr, 27 op/s
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120845783' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.519 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.527 238945 DEBUG nova.compute.provider_tree [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.570 238945 DEBUG nova.scheduler.client.report [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.603 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.604 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.610 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.617 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.618 238945 INFO nova.compute.claims [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/567249251' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/567249251' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.739 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.739 238945 DEBUG nova.network.neutron [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.770 238945 INFO nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.795 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:27:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.875 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.973 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.976 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:27:59 np0005597378 nova_compute[238941]: 2026-01-27 14:27:59.976 238945 INFO nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Creating image(s)#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.006 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image bad9acc4-1999-4764-adea-156a129e9d4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.033 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image bad9acc4-1999-4764-adea-156a129e9d4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.066 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image bad9acc4-1999-4764-adea-156a129e9d4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.072 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "5c3458c97d293c7980156027efc0b203d772cbdc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.073 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "5c3458c97d293c7980156027efc0b203d772cbdc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.144 238945 DEBUG nova.policy [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '15cb999473674ad581f5a98de252c28a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '805ab209134d4d70b18753f441ccc5a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.280 238945 DEBUG nova.virt.libvirt.imagebackend [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/dc70b820-f623-4425-90a6-c6b104369526/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/dc70b820-f623-4425-90a6-c6b104369526/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 27 09:28:00 np0005597378 podman[375016]: 2026-01-27 14:28:00.31219739 +0000 UTC m=+0.055359853 container create 8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.339 238945 DEBUG nova.virt.libvirt.imagebackend [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Selected location: {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/dc70b820-f623-4425-90a6-c6b104369526/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.340 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] cloning images/dc70b820-f623-4425-90a6-c6b104369526@snap to None/bad9acc4-1999-4764-adea-156a129e9d4a_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 09:28:00 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 09:28:00 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:28:00 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:28:00 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:28:00 np0005597378 systemd[1]: Started libpod-conmon-8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2.scope.
Jan 27 09:28:00 np0005597378 podman[375016]: 2026-01-27 14:28:00.286844926 +0000 UTC m=+0.030007439 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:28:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:28:00 np0005597378 podman[375016]: 2026-01-27 14:28:00.422747731 +0000 UTC m=+0.165910244 container init 8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:28:00 np0005597378 podman[375016]: 2026-01-27 14:28:00.430989154 +0000 UTC m=+0.174151617 container start 8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclean, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:28:00 np0005597378 podman[375016]: 2026-01-27 14:28:00.436277947 +0000 UTC m=+0.179440460 container attach 8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclean, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 09:28:00 np0005597378 systemd[1]: libpod-8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2.scope: Deactivated successfully.
Jan 27 09:28:00 np0005597378 priceless_mclean[375081]: 167 167
Jan 27 09:28:00 np0005597378 conmon[375081]: conmon 8723030862ec3ba69c59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2.scope/container/memory.events
Jan 27 09:28:00 np0005597378 podman[375016]: 2026-01-27 14:28:00.440219973 +0000 UTC m=+0.183382436 container died 8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.464 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "5c3458c97d293c7980156027efc0b203d772cbdc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay-386cb7531690fefbfdfbc152548a93c9f5b2aca6bad95784b4c27fd870006e5e-merged.mount: Deactivated successfully.
Jan 27 09:28:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:28:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372251324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.501 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:00 np0005597378 podman[375016]: 2026-01-27 14:28:00.502457671 +0000 UTC m=+0.245620134 container remove 8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclean, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 09:28:00 np0005597378 systemd[1]: libpod-conmon-8723030862ec3ba69c59d50e6a84c353e8da3a3766c1177d48b34265c1b4c0b2.scope: Deactivated successfully.
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.570 238945 DEBUG nova.compute.provider_tree [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.578 238945 DEBUG nova.objects.instance [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lazy-loading 'migration_context' on Instance uuid bad9acc4-1999-4764-adea-156a129e9d4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.597 238945 DEBUG nova.scheduler.client.report [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.602 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.602 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Ensure instance console log exists: /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.603 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.603 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.603 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.618 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.618 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:28:00 np0005597378 podman[375182]: 2026-01-27 14:28:00.677082611 +0000 UTC m=+0.044636475 container create 863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lichterman, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.710 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.711 238945 DEBUG nova.network.neutron [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:28:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:00.714 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:28:00 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:00.715 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.742 238945 INFO nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:28:00 np0005597378 podman[375182]: 2026-01-27 14:28:00.656389783 +0000 UTC m=+0.023943647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.754 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:00 np0005597378 systemd[1]: Started libpod-conmon-863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207.scope.
Jan 27 09:28:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.779 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:28:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.797 238945 DEBUG nova.network.neutron [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Successfully created port: 27db5f4c-e0fe-4746-aa0a-99149d5341d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:28:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3320f6a0210bf5c42b62be51c61c822bf5975c7b2256faf9d095226e35302c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:28:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3320f6a0210bf5c42b62be51c61c822bf5975c7b2256faf9d095226e35302c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:28:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3320f6a0210bf5c42b62be51c61c822bf5975c7b2256faf9d095226e35302c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:28:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3320f6a0210bf5c42b62be51c61c822bf5975c7b2256faf9d095226e35302c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:28:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3320f6a0210bf5c42b62be51c61c822bf5975c7b2256faf9d095226e35302c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:28:00 np0005597378 podman[375182]: 2026-01-27 14:28:00.815507584 +0000 UTC m=+0.183061448 container init 863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lichterman, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:28:00 np0005597378 podman[375182]: 2026-01-27 14:28:00.824107396 +0000 UTC m=+0.191661240 container start 863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lichterman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:28:00 np0005597378 podman[375182]: 2026-01-27 14:28:00.827572959 +0000 UTC m=+0.195126803 container attach 863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lichterman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.836 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.880 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.881 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.882 238945 INFO nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Creating image(s)#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.905 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.926 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.949 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.953 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:00 np0005597378 nova_compute[238941]: 2026-01-27 14:28:00.996 238945 DEBUG nova.policy [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2610a627ed524b0ab448b5604167899e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '45344a38de5c4bc6b61680272082756a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.034 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.035 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.036 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.036 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.060 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.064 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b5d1a89f-53d1-4f04-90ed-309724685f10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 20 KiB/s wr, 25 op/s
Jan 27 09:28:01 np0005597378 compassionate_lichterman[375198]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:28:01 np0005597378 compassionate_lichterman[375198]: --> All data devices are unavailable
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.418 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f b5d1a89f-53d1-4f04-90ed-309724685f10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:01 np0005597378 systemd[1]: libpod-863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207.scope: Deactivated successfully.
Jan 27 09:28:01 np0005597378 podman[375182]: 2026-01-27 14:28:01.425865745 +0000 UTC m=+0.793419609 container died 863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:28:01 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2f3320f6a0210bf5c42b62be51c61c822bf5975c7b2256faf9d095226e35302c-merged.mount: Deactivated successfully.
Jan 27 09:28:01 np0005597378 podman[375182]: 2026-01-27 14:28:01.480149278 +0000 UTC m=+0.847703122 container remove 863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:28:01 np0005597378 systemd[1]: libpod-conmon-863460f69c7e525b62725c1052f40970727ecfd95c727c3f971a2074ef523207.scope: Deactivated successfully.
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.498 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] resizing rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.579 238945 DEBUG nova.objects.instance [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'migration_context' on Instance uuid b5d1a89f-53d1-4f04-90ed-309724685f10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.597 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.597 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Ensure instance console log exists: /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.598 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.598 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.598 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.897 238945 DEBUG nova.network.neutron [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Successfully updated port: 27db5f4c-e0fe-4746-aa0a-99149d5341d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.904 238945 DEBUG nova.network.neutron [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Successfully created port: 5818eb5a-5355-449d-8f54-1954097bdc8e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.917 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.918 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquired lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:28:01 np0005597378 nova_compute[238941]: 2026-01-27 14:28:01.918 238945 DEBUG nova.network.neutron [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:28:01 np0005597378 podman[375457]: 2026-01-27 14:28:01.969060324 +0000 UTC m=+0.044171853 container create 47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:28:02 np0005597378 nova_compute[238941]: 2026-01-27 14:28:02.007 238945 DEBUG nova.compute.manager [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-changed-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:02 np0005597378 nova_compute[238941]: 2026-01-27 14:28:02.007 238945 DEBUG nova.compute.manager [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Refreshing instance network info cache due to event network-changed-27db5f4c-e0fe-4746-aa0a-99149d5341d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:28:02 np0005597378 nova_compute[238941]: 2026-01-27 14:28:02.008 238945 DEBUG oslo_concurrency.lockutils [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:28:02 np0005597378 systemd[1]: Started libpod-conmon-47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4.scope.
Jan 27 09:28:02 np0005597378 podman[375457]: 2026-01-27 14:28:01.951233163 +0000 UTC m=+0.026344712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:28:02 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:28:02 np0005597378 podman[375457]: 2026-01-27 14:28:02.064410045 +0000 UTC m=+0.139521614 container init 47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:28:02 np0005597378 podman[375457]: 2026-01-27 14:28:02.072474792 +0000 UTC m=+0.147586311 container start 47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 09:28:02 np0005597378 podman[375457]: 2026-01-27 14:28:02.075743321 +0000 UTC m=+0.150854890 container attach 47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 09:28:02 np0005597378 cranky_wilson[375473]: 167 167
Jan 27 09:28:02 np0005597378 systemd[1]: libpod-47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4.scope: Deactivated successfully.
Jan 27 09:28:02 np0005597378 conmon[375473]: conmon 47630bd5374e812dfc4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4.scope/container/memory.events
Jan 27 09:28:02 np0005597378 podman[375457]: 2026-01-27 14:28:02.078625758 +0000 UTC m=+0.153737277 container died 47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:28:02 np0005597378 nova_compute[238941]: 2026-01-27 14:28:02.086 238945 DEBUG nova.network.neutron [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:28:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-921129668e78980e3664e1cd70a7f651c426e62720cc826a396eab91a1e561b7-merged.mount: Deactivated successfully.
Jan 27 09:28:02 np0005597378 podman[375457]: 2026-01-27 14:28:02.132622504 +0000 UTC m=+0.207734043 container remove 47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:28:02 np0005597378 systemd[1]: libpod-conmon-47630bd5374e812dfc4eb458b643beb36b1e67919a9cc527dd1f13dfedeb92c4.scope: Deactivated successfully.
Jan 27 09:28:02 np0005597378 podman[375496]: 2026-01-27 14:28:02.325198838 +0000 UTC m=+0.057035069 container create 75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banach, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:28:02 np0005597378 systemd[1]: Started libpod-conmon-75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495.scope.
Jan 27 09:28:02 np0005597378 podman[375496]: 2026-01-27 14:28:02.298480218 +0000 UTC m=+0.030316529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:28:02 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:28:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425ebe47dd7b01e7154c9df27db08b666b656ac4859e7def1047d3e9182adf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:28:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425ebe47dd7b01e7154c9df27db08b666b656ac4859e7def1047d3e9182adf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:28:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425ebe47dd7b01e7154c9df27db08b666b656ac4859e7def1047d3e9182adf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:28:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0425ebe47dd7b01e7154c9df27db08b666b656ac4859e7def1047d3e9182adf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:28:02 np0005597378 podman[375496]: 2026-01-27 14:28:02.420264972 +0000 UTC m=+0.152101213 container init 75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banach, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:28:02 np0005597378 podman[375496]: 2026-01-27 14:28:02.428804732 +0000 UTC m=+0.160640963 container start 75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banach, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:28:02 np0005597378 podman[375496]: 2026-01-27 14:28:02.431924316 +0000 UTC m=+0.163760597 container attach 75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banach, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 09:28:02 np0005597378 practical_banach[375513]: {
Jan 27 09:28:02 np0005597378 practical_banach[375513]:    "0": [
Jan 27 09:28:02 np0005597378 practical_banach[375513]:        {
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "devices": [
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "/dev/loop3"
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            ],
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_name": "ceph_lv0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_size": "21470642176",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "name": "ceph_lv0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "tags": {
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.cluster_name": "ceph",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.crush_device_class": "",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.encrypted": "0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.objectstore": "bluestore",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.osd_id": "0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.type": "block",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.vdo": "0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.with_tpm": "0"
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            },
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "type": "block",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "vg_name": "ceph_vg0"
Jan 27 09:28:02 np0005597378 practical_banach[375513]:        }
Jan 27 09:28:02 np0005597378 practical_banach[375513]:    ],
Jan 27 09:28:02 np0005597378 practical_banach[375513]:    "1": [
Jan 27 09:28:02 np0005597378 practical_banach[375513]:        {
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "devices": [
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "/dev/loop4"
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            ],
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_name": "ceph_lv1",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_size": "21470642176",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "name": "ceph_lv1",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "tags": {
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.cluster_name": "ceph",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.crush_device_class": "",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.encrypted": "0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.objectstore": "bluestore",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.osd_id": "1",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.type": "block",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.vdo": "0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.with_tpm": "0"
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            },
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "type": "block",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "vg_name": "ceph_vg1"
Jan 27 09:28:02 np0005597378 practical_banach[375513]:        }
Jan 27 09:28:02 np0005597378 practical_banach[375513]:    ],
Jan 27 09:28:02 np0005597378 practical_banach[375513]:    "2": [
Jan 27 09:28:02 np0005597378 practical_banach[375513]:        {
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "devices": [
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "/dev/loop5"
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            ],
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_name": "ceph_lv2",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_size": "21470642176",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "name": "ceph_lv2",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "tags": {
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.cluster_name": "ceph",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.crush_device_class": "",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.encrypted": "0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.objectstore": "bluestore",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.osd_id": "2",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.type": "block",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.vdo": "0",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:                "ceph.with_tpm": "0"
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            },
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "type": "block",
Jan 27 09:28:02 np0005597378 practical_banach[375513]:            "vg_name": "ceph_vg2"
Jan 27 09:28:02 np0005597378 practical_banach[375513]:        }
Jan 27 09:28:02 np0005597378 practical_banach[375513]:    ]
Jan 27 09:28:02 np0005597378 practical_banach[375513]: }
Jan 27 09:28:02 np0005597378 systemd[1]: libpod-75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495.scope: Deactivated successfully.
Jan 27 09:28:02 np0005597378 podman[375496]: 2026-01-27 14:28:02.781303129 +0000 UTC m=+0.513139400 container died 75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banach, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 09:28:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0425ebe47dd7b01e7154c9df27db08b666b656ac4859e7def1047d3e9182adf1-merged.mount: Deactivated successfully.
Jan 27 09:28:02 np0005597378 podman[375496]: 2026-01-27 14:28:02.823836735 +0000 UTC m=+0.555672976 container remove 75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banach, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 09:28:02 np0005597378 systemd[1]: libpod-conmon-75da65adaa008efc34021da48f9cabdd994e9b148c536f4331bbf7ac77814495.scope: Deactivated successfully.
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.000 238945 DEBUG nova.network.neutron [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updating instance_info_cache with network_info: [{"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.023 238945 DEBUG nova.network.neutron [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Successfully updated port: 5818eb5a-5355-449d-8f54-1954097bdc8e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.027 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Releasing lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.027 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Instance network_info: |[{"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.027 238945 DEBUG oslo_concurrency.lockutils [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.028 238945 DEBUG nova.network.neutron [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Refreshing network info cache for port 27db5f4c-e0fe-4746-aa0a-99149d5341d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.031 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Start _get_guest_xml network_info=[{"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T14:27:46Z,direct_url=<?>,disk_format='raw',id=dc70b820-f623-4425-90a6-c6b104369526,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-318241120',owner='805ab209134d4d70b18753f441ccc5a7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T14:27:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'dc70b820-f623-4425-90a6-c6b104369526'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.037 238945 WARNING nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.042 238945 DEBUG nova.virt.libvirt.host [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.044 238945 DEBUG nova.virt.libvirt.host [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.045 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.045 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquired lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.046 238945 DEBUG nova.network.neutron [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.053 238945 DEBUG nova.virt.libvirt.host [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.056 238945 DEBUG nova.virt.libvirt.host [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.056 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.056 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T14:27:46Z,direct_url=<?>,disk_format='raw',id=dc70b820-f623-4425-90a6-c6b104369526,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-318241120',owner='805ab209134d4d70b18753f441ccc5a7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T14:27:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.057 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.057 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.057 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.057 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.057 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.058 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.058 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.058 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.058 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.058 238945 DEBUG nova.virt.hardware [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.061 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.227 238945 DEBUG nova.network.neutron [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.240 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 20 KiB/s wr, 25 op/s
Jan 27 09:28:03 np0005597378 podman[375616]: 2026-01-27 14:28:03.322299388 +0000 UTC m=+0.046664999 container create 3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_thompson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 09:28:03 np0005597378 systemd[1]: Started libpod-conmon-3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f.scope.
Jan 27 09:28:03 np0005597378 podman[375616]: 2026-01-27 14:28:03.296807801 +0000 UTC m=+0.021173442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:28:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:28:03 np0005597378 podman[375616]: 2026-01-27 14:28:03.408719129 +0000 UTC m=+0.133084760 container init 3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:28:03 np0005597378 podman[375616]: 2026-01-27 14:28:03.414656029 +0000 UTC m=+0.139021640 container start 3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:28:03 np0005597378 gallant_thompson[375632]: 167 167
Jan 27 09:28:03 np0005597378 systemd[1]: libpod-3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f.scope: Deactivated successfully.
Jan 27 09:28:03 np0005597378 podman[375616]: 2026-01-27 14:28:03.420360663 +0000 UTC m=+0.144726304 container attach 3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 09:28:03 np0005597378 podman[375616]: 2026-01-27 14:28:03.422748637 +0000 UTC m=+0.147114248 container died 3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 09:28:03 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e9f56952ad8726c2adb3e56c5d229d403d9d56537637f6c845d04fea977c6a05-merged.mount: Deactivated successfully.
Jan 27 09:28:03 np0005597378 podman[375616]: 2026-01-27 14:28:03.483463655 +0000 UTC m=+0.207829266 container remove 3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_thompson, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:28:03 np0005597378 systemd[1]: libpod-conmon-3fae74e8cf1baaa2f51f4ed5a0a1a4c648387c04dfe7441db58cbcc908fb0a9f.scope: Deactivated successfully.
Jan 27 09:28:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:28:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1940784564' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.663 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:03 np0005597378 podman[375656]: 2026-01-27 14:28:03.665477183 +0000 UTC m=+0.044586523 container create f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_lehmann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.695 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image bad9acc4-1999-4764-adea-156a129e9d4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:28:03 np0005597378 nova_compute[238941]: 2026-01-27 14:28:03.703 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:03 np0005597378 systemd[1]: Started libpod-conmon-f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569.scope.
Jan 27 09:28:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:28:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706c6278ce310391441beab54edc86b7b72a3dbfd2d14ddb53d6f482ac7d2821/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:28:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706c6278ce310391441beab54edc86b7b72a3dbfd2d14ddb53d6f482ac7d2821/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:28:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706c6278ce310391441beab54edc86b7b72a3dbfd2d14ddb53d6f482ac7d2821/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:28:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706c6278ce310391441beab54edc86b7b72a3dbfd2d14ddb53d6f482ac7d2821/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:28:03 np0005597378 podman[375656]: 2026-01-27 14:28:03.64606316 +0000 UTC m=+0.025172530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:28:03 np0005597378 podman[375656]: 2026-01-27 14:28:03.751878163 +0000 UTC m=+0.130987523 container init f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 09:28:03 np0005597378 podman[375656]: 2026-01-27 14:28:03.759584711 +0000 UTC m=+0.138694051 container start f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_lehmann, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:28:03 np0005597378 podman[375656]: 2026-01-27 14:28:03.772916581 +0000 UTC m=+0.152025921 container attach f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:28:04 np0005597378 nova_compute[238941]: 2026-01-27 14:28:04.197 238945 DEBUG nova.compute.manager [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-changed-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:04.197 238945 DEBUG nova.compute.manager [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Refreshing instance network info cache due to event network-changed-5818eb5a-5355-449d-8f54-1954097bdc8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:04.197 238945 DEBUG oslo_concurrency.lockutils [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:04.203 238945 DEBUG nova.network.neutron [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Updating instance_info_cache with network_info: [{"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:04.210 238945 DEBUG nova.network.neutron [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updated VIF entry in instance network info cache for port 27db5f4c-e0fe-4746-aa0a-99149d5341d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:04.210 238945 DEBUG nova.network.neutron [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updating instance_info_cache with network_info: [{"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:04.478 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Releasing lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:04.478 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Instance network_info: |[{"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:04.479 238945 DEBUG oslo_concurrency.lockutils [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:04.479 238945 DEBUG nova.network.neutron [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Refreshing network info cache for port 5818eb5a-5355-449d-8f54-1954097bdc8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:04.482 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Start _get_guest_xml network_info=[{"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:04.483 238945 DEBUG oslo_concurrency.lockutils [req-f0097afe-e546-4fa2-9f37-818924528e89 req-f17ab597-fe6d-47f2-b06c-d51acacc3ba4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:28:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:04.718 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:28:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2185801788' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.004 238945 WARNING nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.010 238945 DEBUG nova.virt.libvirt.host [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.011 238945 DEBUG nova.virt.libvirt.host [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.015 238945 DEBUG nova.virt.libvirt.host [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.016 238945 DEBUG nova.virt.libvirt.host [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.016 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.016 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.017 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.017 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.017 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.018 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.018 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.018 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.018 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.019 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.019 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.019 238945 DEBUG nova.virt.hardware [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.024 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.065 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.067 238945 DEBUG nova.virt.libvirt.vif [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1569596201',display_name='tempest-TestSnapshotPattern-server-1569596201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1569596201',id=151,image_ref='dc70b820-f623-4425-90a6-c6b104369526',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNc3kabcPAp53cNHKQDuO00NCZjux+eUa7mSEOBuVMwPChG+U7u0C7ZMMWjp5T2k1lbp+Ba5Z8QjzrSwhKhfIt41TEYXlz0I5nulM29GPEwEtNekD200rXhk5UEP8gcTGg==',key_name='tempest-TestSnapshotPattern-1633717199',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='805ab209134d4d70b18753f441ccc5a7',ramdisk_id='',reservation_id='r-2guobcuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e7d05a6a-847c-4124-bbb7-f122cb954501',image_min_disk='1',image_min_ram='0',image_owner_id='805ab209134d4d70b18753f441ccc5a7',image_owner_project_name='tempest-TestSnapshotPattern-2108848063',image_owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member',image_user_id='15cb999473674ad581f5a98de252c28a',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2108848063',owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:27:59Z,user_data=None,user_id='15cb999473674ad581f5a98de252c28a',uuid=bad9acc
4-1999-4764-adea-156a129e9d4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.068 238945 DEBUG nova.network.os_vif_util [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converting VIF {"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.069 238945 DEBUG nova.network.os_vif_util [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.071 238945 DEBUG nova.objects.instance [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid bad9acc4-1999-4764-adea-156a129e9d4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.096 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  <uuid>bad9acc4-1999-4764-adea-156a129e9d4a</uuid>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  <name>instance-00000097</name>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestSnapshotPattern-server-1569596201</nova:name>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:28:03</nova:creationTime>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:        <nova:user uuid="15cb999473674ad581f5a98de252c28a">tempest-TestSnapshotPattern-2108848063-project-member</nova:user>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:        <nova:project uuid="805ab209134d4d70b18753f441ccc5a7">tempest-TestSnapshotPattern-2108848063</nova:project>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="dc70b820-f623-4425-90a6-c6b104369526"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:        <nova:port uuid="27db5f4c-e0fe-4746-aa0a-99149d5341d4">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <entry name="serial">bad9acc4-1999-4764-adea-156a129e9d4a</entry>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <entry name="uuid">bad9acc4-1999-4764-adea-156a129e9d4a</entry>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/bad9acc4-1999-4764-adea-156a129e9d4a_disk">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/bad9acc4-1999-4764-adea-156a129e9d4a_disk.config">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:15:40:d6"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <target dev="tap27db5f4c-e0"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/console.log" append="off"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <input type="keyboard" bus="usb"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:28:05 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:28:05 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:28:05 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:28:05 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.097 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Preparing to wait for external event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.097 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.098 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.098 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.099 238945 DEBUG nova.virt.libvirt.vif [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1569596201',display_name='tempest-TestSnapshotPattern-server-1569596201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1569596201',id=151,image_ref='dc70b820-f623-4425-90a6-c6b104369526',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNc3kabcPAp53cNHKQDuO00NCZjux+eUa7mSEOBuVMwPChG+U7u0C7ZMMWjp5T2k1lbp+Ba5Z8QjzrSwhKhfIt41TEYXlz0I5nulM29GPEwEtNekD200rXhk5UEP8gcTGg==',key_name='tempest-TestSnapshotPattern-1633717199',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='805ab209134d4d70b18753f441ccc5a7',ramdisk_id='',reservation_id='r-2guobcuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e7d05a6a-847c-4124-bbb7-f122cb954501',image_min_disk='1',image_min_ram='0',image_owner_id='805ab209134d4d70b18753f441ccc5a7',image_owner_project_name='tempest-TestSnapshotPattern-2108848063',image_owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member',image_user_id='15cb999473674ad581f5a98de252c28a',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2108848063',owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:27:59Z,user_data=None,user_id='15cb999473674ad581f5a98de252c28a',uu
id=bad9acc4-1999-4764-adea-156a129e9d4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.099 238945 DEBUG nova.network.os_vif_util [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converting VIF {"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.100 238945 DEBUG nova.network.os_vif_util [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.100 238945 DEBUG os_vif [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.102 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.102 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.106 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.106 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27db5f4c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.107 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27db5f4c-e0, col_values=(('external_ids', {'iface-id': '27db5f4c-e0fe-4746-aa0a-99149d5341d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:40:d6', 'vm-uuid': 'bad9acc4-1999-4764-adea-156a129e9d4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:05 np0005597378 NetworkManager[48904]: <info>  [1769524085.1100] manager: (tap27db5f4c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/654)
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.111 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.118 238945 INFO os_vif [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0')#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.181 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.181 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.182 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] No VIF found with MAC fa:16:3e:15:40:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.182 238945 INFO nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Using config drive#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.206 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image bad9acc4-1999-4764-adea-156a129e9d4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:28:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 302 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 957 KiB/s wr, 35 op/s
Jan 27 09:28:05 np0005597378 lvm[375834]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:28:05 np0005597378 lvm[375834]: VG ceph_vg1 finished
Jan 27 09:28:05 np0005597378 lvm[375833]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:28:05 np0005597378 lvm[375833]: VG ceph_vg0 finished
Jan 27 09:28:05 np0005597378 lvm[375836]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:28:05 np0005597378 lvm[375836]: VG ceph_vg2 finished
Jan 27 09:28:05 np0005597378 laughing_lehmann[375692]: {}
Jan 27 09:28:05 np0005597378 systemd[1]: libpod-f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569.scope: Deactivated successfully.
Jan 27 09:28:05 np0005597378 systemd[1]: libpod-f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569.scope: Consumed 3.016s CPU time.
Jan 27 09:28:05 np0005597378 podman[375656]: 2026-01-27 14:28:05.467978604 +0000 UTC m=+1.847087944 container died f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_lehmann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:28:05 np0005597378 systemd[1]: var-lib-containers-storage-overlay-706c6278ce310391441beab54edc86b7b72a3dbfd2d14ddb53d6f482ac7d2821-merged.mount: Deactivated successfully.
Jan 27 09:28:05 np0005597378 podman[375656]: 2026-01-27 14:28:05.522183336 +0000 UTC m=+1.901292676 container remove f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.526 238945 INFO nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Creating config drive at /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/disk.config#033[00m
Jan 27 09:28:05 np0005597378 systemd[1]: libpod-conmon-f33c0abd1de63aed1adeae808b954360ce73849d4ca20d7589cb633036774569.scope: Deactivated successfully.
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.533 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpahtvc5tp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:28:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:28:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:28:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:28:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:28:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3686320844' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.656 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.685 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.689 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.722 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpahtvc5tp" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.725 238945 DEBUG nova.network.neutron [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Updated VIF entry in instance network info cache for port 5818eb5a-5355-449d-8f54-1954097bdc8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.726 238945 DEBUG nova.network.neutron [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Updating instance_info_cache with network_info: [{"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.758 238945 DEBUG nova.storage.rbd_utils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] rbd image bad9acc4-1999-4764-adea-156a129e9d4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.762 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/disk.config bad9acc4-1999-4764-adea-156a129e9d4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.827 238945 DEBUG oslo_concurrency.lockutils [req-46367c3f-4a06-4a14-a511-14e846c93769 req-053abaab-390c-4fb4-a28c-8753a8d473eb 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.906 238945 DEBUG oslo_concurrency.processutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/disk.config bad9acc4-1999-4764-adea-156a129e9d4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.907 238945 INFO nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Deleting local config drive /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a/disk.config because it was imported into RBD.#033[00m
Jan 27 09:28:05 np0005597378 kernel: tap27db5f4c-e0: entered promiscuous mode
Jan 27 09:28:05 np0005597378 NetworkManager[48904]: <info>  [1769524085.9598] manager: (tap27db5f4c-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Jan 27 09:28:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:05Z|01598|binding|INFO|Claiming lport 27db5f4c-e0fe-4746-aa0a-99149d5341d4 for this chassis.
Jan 27 09:28:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:05Z|01599|binding|INFO|27db5f4c-e0fe-4746-aa0a-99149d5341d4: Claiming fa:16:3e:15:40:d6 10.100.0.7
Jan 27 09:28:05 np0005597378 systemd-udevd[375835]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.961 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:05 np0005597378 NetworkManager[48904]: <info>  [1769524085.9773] device (tap27db5f4c-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:28:05 np0005597378 NetworkManager[48904]: <info>  [1769524085.9787] device (tap27db5f4c-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:28:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:05Z|01600|binding|INFO|Setting lport 27db5f4c-e0fe-4746-aa0a-99149d5341d4 ovn-installed in OVS
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.984 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:05 np0005597378 nova_compute[238941]: 2026-01-27 14:28:05.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:06 np0005597378 systemd-machined[207425]: New machine qemu-183-instance-00000097.
Jan 27 09:28:06 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:28:06 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:28:06 np0005597378 systemd[1]: Started Virtual Machine qemu-183-instance-00000097.
Jan 27 09:28:06 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:06Z|01601|binding|INFO|Setting lport 27db5f4c-e0fe-4746-aa0a-99149d5341d4 up in Southbound
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.080 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:40:d6 10.100.0.7'], port_security=['fa:16:3e:15:40:d6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bad9acc4-1999-4764-adea-156a129e9d4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '805ab209134d4d70b18753f441ccc5a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42f86cce-87c9-45ba-83fe-825a709960dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03b607c2-7b6e-41ad-bb2f-d8b59f61c333, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=27db5f4c-e0fe-4746-aa0a-99149d5341d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.082 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 27db5f4c-e0fe-4746-aa0a-99149d5341d4 in datapath 4d618b96-4a07-4d69-bf79-7e30a43f8748 bound to our chassis#033[00m
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.083 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d618b96-4a07-4d69-bf79-7e30a43f8748#033[00m
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.101 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[43b157e9-c432-4ef8-915d-0942b74411c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.138 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f50f38f2-26ba-48dc-8287-84fe5d096fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.141 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[b78b2220-a7a6-410e-9a60-6c0a8069a556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.175 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e6280a-fc2d-4a73-b10d-1e856481fa0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.197 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[07d2e12c-cbe6-43ec-9fe3-dde715c7187f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d618b96-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:25:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668318, 'reachable_time': 18816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375978, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.216 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[44474195-2e3c-4ea3-940e-e1ba248a652b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d618b96-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668333, 'tstamp': 668333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375979, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d618b96-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668337, 'tstamp': 668337}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375979, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.218 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d618b96-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.220 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.221 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d618b96-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.221 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.222 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d618b96-40, col_values=(('external_ids', {'iface-id': '61475a7c-9045-4191-a533-3416010cde1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:06.222 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:28:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:28:06 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715858846' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.300 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.302 238945 DEBUG nova.virt.libvirt.vif [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=152,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLziTTnCpQiGwD+zfsbNvOm7z3D/S+qNwJSbq3Ty7c+UeSsIMUq4F1A96cXt5Rqv4yVv7AHtiaj6NhC4ex3LO4YKWjSY7OsLwRoADAIj8lBXbuJvokxKrQTVr+6ZTeWY5A==',key_name='tempest-TestSecurityGroupsBasicOps-986212093',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-i11giqrt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:28:00Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=b5d1a89f-53d1-4f04-90ed-309724685f10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.302 238945 DEBUG nova.network.os_vif_util [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.303 238945 DEBUG nova.network.os_vif_util [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.304 238945 DEBUG nova.objects.instance [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'pci_devices' on Instance uuid b5d1a89f-53d1-4f04-90ed-309724685f10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.318 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  <uuid>b5d1a89f-53d1-4f04-90ed-309724685f10</uuid>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  <name>instance-00000098</name>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388</nova:name>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:28:05</nova:creationTime>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:        <nova:user uuid="2610a627ed524b0ab448b5604167899e">tempest-TestSecurityGroupsBasicOps-165504025-project-member</nova:user>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:        <nova:project uuid="45344a38de5c4bc6b61680272082756a">tempest-TestSecurityGroupsBasicOps-165504025</nova:project>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:        <nova:port uuid="5818eb5a-5355-449d-8f54-1954097bdc8e">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <entry name="serial">b5d1a89f-53d1-4f04-90ed-309724685f10</entry>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <entry name="uuid">b5d1a89f-53d1-4f04-90ed-309724685f10</entry>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b5d1a89f-53d1-4f04-90ed-309724685f10_disk">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/b5d1a89f-53d1-4f04-90ed-309724685f10_disk.config">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:b5:e7:7d"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <target dev="tap5818eb5a-53"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/console.log" append="off"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:28:06 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:28:06 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:28:06 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:28:06 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.320 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Preparing to wait for external event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.320 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.320 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.320 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.321 238945 DEBUG nova.virt.libvirt.vif [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:27:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=152,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLziTTnCpQiGwD+zfsbNvOm7z3D/S+qNwJSbq3Ty7c+UeSsIMUq4F1A96cXt5Rqv4yVv7AHtiaj6NhC4ex3LO4YKWjSY7OsLwRoADAIj8lBXbuJvokxKrQTVr+6ZTeWY5A==',key_name='tempest-TestSecurityGroupsBasicOps-986212093',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-i11giqrt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:28:00Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=b5d1a89f-53d1-4f04-90ed-309724685f10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.321 238945 DEBUG nova.network.os_vif_util [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.322 238945 DEBUG nova.network.os_vif_util [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.322 238945 DEBUG os_vif [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.323 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.324 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.326 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.326 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5818eb5a-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.327 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5818eb5a-53, col_values=(('external_ids', {'iface-id': '5818eb5a-5355-449d-8f54-1954097bdc8e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:e7:7d', 'vm-uuid': 'b5d1a89f-53d1-4f04-90ed-309724685f10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.328 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:06 np0005597378 NetworkManager[48904]: <info>  [1769524086.3295] manager: (tap5818eb5a-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/656)
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.332 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.335 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.336 238945 INFO os_vif [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53')#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.429 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.430 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.430 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] No VIF found with MAC fa:16:3e:b5:e7:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.431 238945 INFO nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Using config drive#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.455 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.995 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524086.9954565, bad9acc4-1999-4764-adea-156a129e9d4a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:28:06 np0005597378 nova_compute[238941]: 2026-01-27 14:28:06.996 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] VM Started (Lifecycle Event)#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.029 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.033 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524086.996286, bad9acc4-1999-4764-adea-156a129e9d4a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.034 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.059 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.061 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.081 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:28:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.850 238945 INFO nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Creating config drive at /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/disk.config#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.855 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppiracux9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.986 238945 DEBUG nova.compute.manager [req-7f04d65f-d1c0-4962-8cdb-1ef297d96237 req-1501a3f6-ddb0-47a1-b575-4cc5af0518db 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.987 238945 DEBUG oslo_concurrency.lockutils [req-7f04d65f-d1c0-4962-8cdb-1ef297d96237 req-1501a3f6-ddb0-47a1-b575-4cc5af0518db 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.988 238945 DEBUG oslo_concurrency.lockutils [req-7f04d65f-d1c0-4962-8cdb-1ef297d96237 req-1501a3f6-ddb0-47a1-b575-4cc5af0518db 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.988 238945 DEBUG oslo_concurrency.lockutils [req-7f04d65f-d1c0-4962-8cdb-1ef297d96237 req-1501a3f6-ddb0-47a1-b575-4cc5af0518db 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.988 238945 DEBUG nova.compute.manager [req-7f04d65f-d1c0-4962-8cdb-1ef297d96237 req-1501a3f6-ddb0-47a1-b575-4cc5af0518db 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Processing event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.989 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.994 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524087.993696, bad9acc4-1999-4764-adea-156a129e9d4a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.994 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:28:07 np0005597378 nova_compute[238941]: 2026-01-27 14:28:07.997 238945 DEBUG nova.virt.libvirt.driver [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.000 238945 INFO nova.virt.libvirt.driver [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Instance spawned successfully.#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.001 238945 INFO nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Took 8.03 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.001 238945 DEBUG nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.003 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppiracux9" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.036 238945 DEBUG nova.storage.rbd_utils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] rbd image b5d1a89f-53d1-4f04-90ed-309724685f10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.040 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/disk.config b5d1a89f-53d1-4f04-90ed-309724685f10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.085 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.095 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.120 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.147 238945 INFO nova.compute.manager [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Took 10.83 seconds to build instance.#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.164 238945 DEBUG oslo_concurrency.lockutils [None req-a98f117d-f1dc-4f57-8038-556c1a060abe 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.192 238945 DEBUG oslo_concurrency.processutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/disk.config b5d1a89f-53d1-4f04-90ed-309724685f10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.193 238945 INFO nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Deleting local config drive /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10/disk.config because it was imported into RBD.#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:08 np0005597378 kernel: tap5818eb5a-53: entered promiscuous mode
Jan 27 09:28:08 np0005597378 NetworkManager[48904]: <info>  [1769524088.2497] manager: (tap5818eb5a-53): new Tun device (/org/freedesktop/NetworkManager/Devices/657)
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.254 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:08Z|01602|binding|INFO|Claiming lport 5818eb5a-5355-449d-8f54-1954097bdc8e for this chassis.
Jan 27 09:28:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:08Z|01603|binding|INFO|5818eb5a-5355-449d-8f54-1954097bdc8e: Claiming fa:16:3e:b5:e7:7d 10.100.0.7
Jan 27 09:28:08 np0005597378 NetworkManager[48904]: <info>  [1769524088.2658] device (tap5818eb5a-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:28:08 np0005597378 NetworkManager[48904]: <info>  [1769524088.2667] device (tap5818eb5a-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.265 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:e7:7d 10.100.0.7'], port_security=['fa:16:3e:b5:e7:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b5d1a89f-53d1-4f04-90ed-309724685f10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '660baac2-dc26-4ff6-a045-736abfa5b2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1f4dd7-61a4-41de-900e-cd5d6044addb, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5818eb5a-5355-449d-8f54-1954097bdc8e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.267 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5818eb5a-5355-449d-8f54-1954097bdc8e in datapath c6659e71-fbb8-4896-9a40-2262d5df9f38 bound to our chassis#033[00m
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.269 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6659e71-fbb8-4896-9a40-2262d5df9f38#033[00m
Jan 27 09:28:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:08Z|01604|binding|INFO|Setting lport 5818eb5a-5355-449d-8f54-1954097bdc8e ovn-installed in OVS
Jan 27 09:28:08 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:08Z|01605|binding|INFO|Setting lport 5818eb5a-5355-449d-8f54-1954097bdc8e up in Southbound
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.281 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.286 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a00b6b21-d5b7-4614-ba7f-39a3cc0e3a86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.286 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:08 np0005597378 systemd-machined[207425]: New machine qemu-184-instance-00000098.
Jan 27 09:28:08 np0005597378 systemd[1]: Started Virtual Machine qemu-184-instance-00000098.
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.321 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[13b5e335-ae40-439d-859f-63a38b1e67d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.325 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3a7b1475-fd98-46fc-9a2a-f6190824aef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.365 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[78da6ee9-c2d3-460c-ab91-aa095b925d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.384 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[72d69baa-8b17-4067-b6b2-d09af6eb0e4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6659e71-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:1d:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669175, 'reachable_time': 43763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376111, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.402 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[49ac9c49-0e62-40c5-87a6-e8ff6f39a478]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc6659e71-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669186, 'tstamp': 669186}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376112, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc6659e71-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669189, 'tstamp': 669189}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376112, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.404 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6659e71-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.405 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.409 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6659e71-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.410 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.410 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6659e71-f0, col_values=(('external_ids', {'iface-id': '744ca588-fa03-49bd-91c4-9cf04119b46c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:08.411 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.895 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524088.895224, b5d1a89f-53d1-4f04-90ed-309724685f10 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.897 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] VM Started (Lifecycle Event)#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.919 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.923 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524088.8954268, b5d1a89f-53d1-4f04-90ed-309724685f10 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.924 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.941 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.945 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:28:08 np0005597378 nova_compute[238941]: 2026-01-27 14:28:08.964 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:28:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 27 09:28:09 np0005597378 nova_compute[238941]: 2026-01-27 14:28:09.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:28:09 np0005597378 nova_compute[238941]: 2026-01-27 14:28:09.401 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:09 np0005597378 nova_compute[238941]: 2026-01-27 14:28:09.401 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:09 np0005597378 nova_compute[238941]: 2026-01-27 14:28:09.402 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:09 np0005597378 nova_compute[238941]: 2026-01-27 14:28:09.402 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:28:09 np0005597378 nova_compute[238941]: 2026-01-27 14:28:09.402 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:09 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:28:09 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1505677351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:28:09 np0005597378 nova_compute[238941]: 2026-01-27 14:28:09.996 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.072 238945 DEBUG nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.072 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.073 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.073 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.073 238945 DEBUG nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] No waiting events found dispatching network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.073 238945 WARNING nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received unexpected event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.074 238945 DEBUG nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.074 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.074 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.075 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.075 238945 DEBUG nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Processing event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.075 238945 DEBUG nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.075 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.076 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.076 238945 DEBUG oslo_concurrency.lockutils [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.076 238945 DEBUG nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] No waiting events found dispatching network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.076 238945 WARNING nova.compute.manager [req-b439b09f-9755-4e4a-8d35-ba866fe63f8e req-add2139a-e0f1-4a82-b47e-a8c802dd6bec 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received unexpected event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e for instance with vm_state building and task_state spawning.#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.078 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.103 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524090.0828004, b5d1a89f-53d1-4f04-90ed-309724685f10 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.104 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.107 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.110 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.110 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.114 238945 INFO nova.virt.libvirt.driver [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Instance spawned successfully.#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.115 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.117 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.117 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.122 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.122 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.123 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.127 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.131 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.131 238945 DEBUG nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.135 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.136 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.136 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.137 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.137 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.137 238945 DEBUG nova.virt.libvirt.driver [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.143 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.186 238945 INFO nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Took 9.31 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.187 238945 DEBUG nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.245 238945 INFO nova.compute.manager [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Took 12.12 seconds to build instance.#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.260 238945 DEBUG oslo_concurrency.lockutils [None req-291ef5fc-7e29-476e-bcef-57e11efd59b9 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.348 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.349 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2930MB free_disk=59.8754213526845GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.350 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.350 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.453 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance e7d05a6a-847c-4124-bbb7-f122cb954501 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.454 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance c7923d56-2a41-4171-a525-a985a28fc016 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.454 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance bad9acc4-1999-4764-adea-156a129e9d4a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.454 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Instance b5d1a89f-53d1-4f04-90ed-309724685f10 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.455 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.455 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:28:10 np0005597378 nova_compute[238941]: 2026-01-27 14:28:10.540 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:28:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:28:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272372764' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:28:11 np0005597378 nova_compute[238941]: 2026-01-27 14:28:11.155 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:11 np0005597378 nova_compute[238941]: 2026-01-27 14:28:11.163 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:28:11 np0005597378 nova_compute[238941]: 2026-01-27 14:28:11.181 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:28:11 np0005597378 nova_compute[238941]: 2026-01-27 14:28:11.203 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:28:11 np0005597378 nova_compute[238941]: 2026-01-27 14:28:11.203 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Jan 27 09:28:11 np0005597378 nova_compute[238941]: 2026-01-27 14:28:11.328 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:13 np0005597378 nova_compute[238941]: 2026-01-27 14:28:13.149 238945 DEBUG nova.compute.manager [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-changed-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:13 np0005597378 nova_compute[238941]: 2026-01-27 14:28:13.150 238945 DEBUG nova.compute.manager [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Refreshing instance network info cache due to event network-changed-27db5f4c-e0fe-4746-aa0a-99149d5341d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:28:13 np0005597378 nova_compute[238941]: 2026-01-27 14:28:13.151 238945 DEBUG oslo_concurrency.lockutils [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:28:13 np0005597378 nova_compute[238941]: 2026-01-27 14:28:13.151 238945 DEBUG oslo_concurrency.lockutils [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:28:13 np0005597378 nova_compute[238941]: 2026-01-27 14:28:13.151 238945 DEBUG nova.network.neutron [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Refreshing network info cache for port 27db5f4c-e0fe-4746-aa0a-99149d5341d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:28:13 np0005597378 nova_compute[238941]: 2026-01-27 14:28:13.204 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:28:13 np0005597378 nova_compute[238941]: 2026-01-27 14:28:13.244 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Jan 27 09:28:14 np0005597378 nova_compute[238941]: 2026-01-27 14:28:14.542 238945 DEBUG nova.compute.manager [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-changed-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:14 np0005597378 nova_compute[238941]: 2026-01-27 14:28:14.544 238945 DEBUG nova.compute.manager [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Refreshing instance network info cache due to event network-changed-5818eb5a-5355-449d-8f54-1954097bdc8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:28:14 np0005597378 nova_compute[238941]: 2026-01-27 14:28:14.544 238945 DEBUG oslo_concurrency.lockutils [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:28:14 np0005597378 nova_compute[238941]: 2026-01-27 14:28:14.544 238945 DEBUG oslo_concurrency.lockutils [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:28:14 np0005597378 nova_compute[238941]: 2026-01-27 14:28:14.545 238945 DEBUG nova.network.neutron [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Refreshing network info cache for port 5818eb5a-5355-449d-8f54-1954097bdc8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:28:15 np0005597378 nova_compute[238941]: 2026-01-27 14:28:15.166 238945 DEBUG nova.network.neutron [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updated VIF entry in instance network info cache for port 27db5f4c-e0fe-4746-aa0a-99149d5341d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:28:15 np0005597378 nova_compute[238941]: 2026-01-27 14:28:15.167 238945 DEBUG nova.network.neutron [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updating instance_info_cache with network_info: [{"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:28:15 np0005597378 nova_compute[238941]: 2026-01-27 14:28:15.230 238945 DEBUG oslo_concurrency.lockutils [req-41f998f5-29d3-4d4e-8d6e-aed5453ec848 req-8981d8f6-7071-452b-8b76-ccb83cdd20a1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:28:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Jan 27 09:28:15 np0005597378 podman[376201]: 2026-01-27 14:28:15.720439078 +0000 UTC m=+0.056259438 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 27 09:28:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:28:16 np0005597378 nova_compute[238941]: 2026-01-27 14:28:16.330 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:16 np0005597378 nova_compute[238941]: 2026-01-27 14:28:16.352 238945 DEBUG nova.network.neutron [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Updated VIF entry in instance network info cache for port 5818eb5a-5355-449d-8f54-1954097bdc8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:28:16 np0005597378 nova_compute[238941]: 2026-01-27 14:28:16.352 238945 DEBUG nova.network.neutron [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Updating instance_info_cache with network_info: [{"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:28:16 np0005597378 nova_compute[238941]: 2026-01-27 14:28:16.373 238945 DEBUG oslo_concurrency.lockutils [req-0693b2c8-77d5-4b87-8e03-75c620bbd584 req-c6c0a073-826b-400b-9106-79d24a6af696 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-b5d1a89f-53d1-4f04-90ed-309724685f10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:28:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:28:17
Jan 27 09:28:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:28:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:28:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'volumes', '.rgw.root', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups']
Jan 27 09:28:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:28:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.0 MiB/s wr, 176 op/s
Jan 27 09:28:17 np0005597378 nova_compute[238941]: 2026-01-27 14:28:17.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:28:17 np0005597378 podman[376221]: 2026-01-27 14:28:17.757601167 +0000 UTC m=+0.086086013 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 09:28:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:28:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:28:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:28:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:28:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:28:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:28:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:28:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:28:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:28:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:28:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:28:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:28:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:28:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:28:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:28:18 np0005597378 nova_compute[238941]: 2026-01-27 14:28:18.284 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:28:18 np0005597378 nova_compute[238941]: 2026-01-27 14:28:18.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:28:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 150 op/s
Jan 27 09:28:20 np0005597378 nova_compute[238941]: 2026-01-27 14:28:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:28:20 np0005597378 nova_compute[238941]: 2026-01-27 14:28:20.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:28:20 np0005597378 nova_compute[238941]: 2026-01-27 14:28:20.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:28:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:28:21 np0005597378 nova_compute[238941]: 2026-01-27 14:28:21.089 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:28:21 np0005597378 nova_compute[238941]: 2026-01-27 14:28:21.090 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:28:21 np0005597378 nova_compute[238941]: 2026-01-27 14:28:21.090 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:28:21 np0005597378 nova_compute[238941]: 2026-01-27 14:28:21.091 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e7d05a6a-847c-4124-bbb7-f122cb954501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:28:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 31 KiB/s wr, 152 op/s
Jan 27 09:28:21 np0005597378 nova_compute[238941]: 2026-01-27 14:28:21.333 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:22 np0005597378 nova_compute[238941]: 2026-01-27 14:28:22.527 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updating instance_info_cache with network_info: [{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:28:22 np0005597378 nova_compute[238941]: 2026-01-27 14:28:22.547 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:28:22 np0005597378 nova_compute[238941]: 2026-01-27 14:28:22.548 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:28:23 np0005597378 nova_compute[238941]: 2026-01-27 14:28:23.287 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.0 KiB/s wr, 65 op/s
Jan 27 09:28:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:23Z|00196|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.7
Jan 27 09:28:23 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:23Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:15:40:d6 10.100.0.7
Jan 27 09:28:24 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:24Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:e7:7d 10.100.0.7
Jan 27 09:28:24 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:24Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:e7:7d 10.100.0.7
Jan 27 09:28:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 345 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.6 MiB/s wr, 122 op/s
Jan 27 09:28:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:28:26 np0005597378 nova_compute[238941]: 2026-01-27 14:28:26.335 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:26 np0005597378 nova_compute[238941]: 2026-01-27 14:28:26.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:28:26 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:26Z|00200|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.7
Jan 27 09:28:26 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:26Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:15:40:d6 10.100.0.7
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 362 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0023861462198803804 of space, bias 1.0, pg target 0.7158438659641141 quantized to 32 (current 32)
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014270836001162247 of space, bias 1.0, pg target 0.42812508003486743 quantized to 32 (current 32)
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0339544800146488e-06 of space, bias 4.0, pg target 0.0012407453760175785 quantized to 16 (current 16)
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:28:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:28:28 np0005597378 nova_compute[238941]: 2026-01-27 14:28:28.289 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:28Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:40:d6 10.100.0.7
Jan 27 09:28:28 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:28Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:40:d6 10.100.0.7
Jan 27 09:28:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 362 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 109 op/s
Jan 27 09:28:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:28:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 376 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 119 op/s
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.336 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.526 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.527 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.527 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.527 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.527 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.530 238945 INFO nova.compute.manager [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Terminating instance#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.532 238945 DEBUG nova.compute.manager [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:28:31 np0005597378 kernel: tap5818eb5a-53 (unregistering): left promiscuous mode
Jan 27 09:28:31 np0005597378 NetworkManager[48904]: <info>  [1769524111.7424] device (tap5818eb5a-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:28:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:31Z|01606|binding|INFO|Releasing lport 5818eb5a-5355-449d-8f54-1954097bdc8e from this chassis (sb_readonly=0)
Jan 27 09:28:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:31Z|01607|binding|INFO|Setting lport 5818eb5a-5355-449d-8f54-1954097bdc8e down in Southbound
Jan 27 09:28:31 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:31Z|01608|binding|INFO|Removing iface tap5818eb5a-53 ovn-installed in OVS
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.797 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.813 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:e7:7d 10.100.0.7'], port_security=['fa:16:3e:b5:e7:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b5d1a89f-53d1-4f04-90ed-309724685f10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '01596a0a-3358-4f9a-9cb8-e8e51a411fde', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1f4dd7-61a4-41de-900e-cd5d6044addb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=5818eb5a-5355-449d-8f54-1954097bdc8e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.814 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 5818eb5a-5355-449d-8f54-1954097bdc8e in datapath c6659e71-fbb8-4896-9a40-2262d5df9f38 unbound from our chassis#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.816 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6659e71-fbb8-4896-9a40-2262d5df9f38#033[00m
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.837 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b7f556-b5bb-4190-8e69-72effa54aabe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.869 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[10466522-43e7-49a2-9db2-9ba2ea4ae7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:31 np0005597378 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000098.scope: Deactivated successfully.
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.872 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f4573b7f-97a4-46e0-a5c8-df4cb10078fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:31 np0005597378 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000098.scope: Consumed 15.780s CPU time.
Jan 27 09:28:31 np0005597378 systemd-machined[207425]: Machine qemu-184-instance-00000098 terminated.
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.902 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5d60a3-8d80-417f-b869-5642f50b6ef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.922 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3f36e003-dd5a-48f3-adeb-b3bbd9862093]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6659e71-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:1d:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669175, 'reachable_time': 43763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376259, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.943 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[5082b89d-30e4-49f2-8f36-9e9c2c09927e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc6659e71-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669186, 'tstamp': 669186}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376260, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc6659e71-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669189, 'tstamp': 669189}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376260, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.944 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6659e71-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.946 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.953 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6659e71-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.953 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.953 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6659e71-f0, col_values=(('external_ids', {'iface-id': '744ca588-fa03-49bd-91c4-9cf04119b46c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:31 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:31.954 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.969 238945 INFO nova.virt.libvirt.driver [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Instance destroyed successfully.#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.970 238945 DEBUG nova.objects.instance [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid b5d1a89f-53d1-4f04-90ed-309724685f10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.985 238945 DEBUG nova.virt.libvirt.vif [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:27:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-gen-1-1759966388',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-gen',id=152,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLziTTnCpQiGwD+zfsbNvOm7z3D/S+qNwJSbq3Ty7c+UeSsIMUq4F1A96cXt5Rqv4yVv7AHtiaj6NhC4ex3LO4YKWjSY7OsLwRoADAIj8lBXbuJvokxKrQTVr+6ZTeWY5A==',key_name='tempest-TestSecurityGroupsBasicOps-986212093',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:28:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-i11giqrt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:28:10Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=b5d1a89f-53d1-4f04-90ed-309724685f10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.987 238945 DEBUG nova.network.os_vif_util [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "5818eb5a-5355-449d-8f54-1954097bdc8e", "address": "fa:16:3e:b5:e7:7d", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5818eb5a-53", "ovs_interfaceid": "5818eb5a-5355-449d-8f54-1954097bdc8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.987 238945 DEBUG nova.network.os_vif_util [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.988 238945 DEBUG os_vif [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.990 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5818eb5a-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.991 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.993 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:31 np0005597378 nova_compute[238941]: 2026-01-27 14:28:31.995 238945 INFO os_vif [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:e7:7d,bridge_name='br-int',has_traffic_filtering=True,id=5818eb5a-5355-449d-8f54-1954097bdc8e,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5818eb5a-53')#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.288 238945 DEBUG nova.compute.manager [req-19f21f6a-ae1c-4f1e-a97c-219b80245929 req-57f7710b-9a42-4126-bc1d-ac57e46689e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-vif-unplugged-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.289 238945 DEBUG oslo_concurrency.lockutils [req-19f21f6a-ae1c-4f1e-a97c-219b80245929 req-57f7710b-9a42-4126-bc1d-ac57e46689e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.290 238945 DEBUG oslo_concurrency.lockutils [req-19f21f6a-ae1c-4f1e-a97c-219b80245929 req-57f7710b-9a42-4126-bc1d-ac57e46689e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.290 238945 DEBUG oslo_concurrency.lockutils [req-19f21f6a-ae1c-4f1e-a97c-219b80245929 req-57f7710b-9a42-4126-bc1d-ac57e46689e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.290 238945 DEBUG nova.compute.manager [req-19f21f6a-ae1c-4f1e-a97c-219b80245929 req-57f7710b-9a42-4126-bc1d-ac57e46689e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] No waiting events found dispatching network-vif-unplugged-5818eb5a-5355-449d-8f54-1954097bdc8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.290 238945 DEBUG nova.compute.manager [req-19f21f6a-ae1c-4f1e-a97c-219b80245929 req-57f7710b-9a42-4126-bc1d-ac57e46689e1 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-vif-unplugged-5818eb5a-5355-449d-8f54-1954097bdc8e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.484 238945 INFO nova.virt.libvirt.driver [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Deleting instance files /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10_del#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.485 238945 INFO nova.virt.libvirt.driver [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Deletion of /var/lib/nova/instances/b5d1a89f-53d1-4f04-90ed-309724685f10_del complete#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.531 238945 INFO nova.compute.manager [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.533 238945 DEBUG oslo.service.loopingcall [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.533 238945 DEBUG nova.compute.manager [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:28:32 np0005597378 nova_compute[238941]: 2026-01-27 14:28:32.533 238945 DEBUG nova.network.neutron [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:28:33 np0005597378 nova_compute[238941]: 2026-01-27 14:28:33.146 238945 DEBUG nova.network.neutron [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:28:33 np0005597378 nova_compute[238941]: 2026-01-27 14:28:33.160 238945 INFO nova.compute.manager [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Took 0.63 seconds to deallocate network for instance.#033[00m
Jan 27 09:28:33 np0005597378 nova_compute[238941]: 2026-01-27 14:28:33.204 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:33 np0005597378 nova_compute[238941]: 2026-01-27 14:28:33.204 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:33 np0005597378 nova_compute[238941]: 2026-01-27 14:28:33.214 238945 DEBUG nova.compute.manager [req-f0baa21e-ca4b-4a2a-a969-cdb6b4759410 req-683cf889-8a03-41a3-9142-6a724f17b998 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-vif-deleted-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:33 np0005597378 nova_compute[238941]: 2026-01-27 14:28:33.292 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 376 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Jan 27 09:28:34 np0005597378 nova_compute[238941]: 2026-01-27 14:28:34.398 238945 DEBUG nova.compute.manager [req-c8484d16-443f-4916-8ae2-d4fc2821c7b9 req-1de83b86-2dc9-49ff-9edf-1d1fb1f11baa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:34 np0005597378 nova_compute[238941]: 2026-01-27 14:28:34.398 238945 DEBUG oslo_concurrency.lockutils [req-c8484d16-443f-4916-8ae2-d4fc2821c7b9 req-1de83b86-2dc9-49ff-9edf-1d1fb1f11baa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:34 np0005597378 nova_compute[238941]: 2026-01-27 14:28:34.399 238945 DEBUG oslo_concurrency.lockutils [req-c8484d16-443f-4916-8ae2-d4fc2821c7b9 req-1de83b86-2dc9-49ff-9edf-1d1fb1f11baa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:34 np0005597378 nova_compute[238941]: 2026-01-27 14:28:34.399 238945 DEBUG oslo_concurrency.lockutils [req-c8484d16-443f-4916-8ae2-d4fc2821c7b9 req-1de83b86-2dc9-49ff-9edf-1d1fb1f11baa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:34 np0005597378 nova_compute[238941]: 2026-01-27 14:28:34.399 238945 DEBUG nova.compute.manager [req-c8484d16-443f-4916-8ae2-d4fc2821c7b9 req-1de83b86-2dc9-49ff-9edf-1d1fb1f11baa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] No waiting events found dispatching network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:28:34 np0005597378 nova_compute[238941]: 2026-01-27 14:28:34.399 238945 WARNING nova.compute.manager [req-c8484d16-443f-4916-8ae2-d4fc2821c7b9 req-1de83b86-2dc9-49ff-9edf-1d1fb1f11baa 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Received unexpected event network-vif-plugged-5818eb5a-5355-449d-8f54-1954097bdc8e for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:28:34 np0005597378 nova_compute[238941]: 2026-01-27 14:28:34.730 238945 DEBUG oslo_concurrency.processutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:28:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197138154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:28:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 329 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 129 op/s
Jan 27 09:28:35 np0005597378 nova_compute[238941]: 2026-01-27 14:28:35.333 238945 DEBUG oslo_concurrency.processutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:35 np0005597378 nova_compute[238941]: 2026-01-27 14:28:35.338 238945 DEBUG nova.compute.provider_tree [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:28:35 np0005597378 nova_compute[238941]: 2026-01-27 14:28:35.356 238945 DEBUG nova.scheduler.client.report [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:28:35 np0005597378 nova_compute[238941]: 2026-01-27 14:28:35.384 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:35 np0005597378 nova_compute[238941]: 2026-01-27 14:28:35.413 238945 INFO nova.scheduler.client.report [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance b5d1a89f-53d1-4f04-90ed-309724685f10#033[00m
Jan 27 09:28:35 np0005597378 nova_compute[238941]: 2026-01-27 14:28:35.489 238945 DEBUG oslo_concurrency.lockutils [None req-f4c218d8-bddc-485c-a7b6-0b9d8aff3438 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "b5d1a89f-53d1-4f04-90ed-309724685f10" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:28:36 np0005597378 nova_compute[238941]: 2026-01-27 14:28:36.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 297 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 738 KiB/s rd, 1.1 MiB/s wr, 89 op/s
Jan 27 09:28:37 np0005597378 nova_compute[238941]: 2026-01-27 14:28:37.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.285 238945 DEBUG nova.compute.manager [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-changed-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.286 238945 DEBUG nova.compute.manager [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Refreshing instance network info cache due to event network-changed-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.286 238945 DEBUG oslo_concurrency.lockutils [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.286 238945 DEBUG oslo_concurrency.lockutils [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.286 238945 DEBUG nova.network.neutron [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Refreshing network info cache for port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.293 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.459 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.460 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.460 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.460 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.460 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.462 238945 INFO nova.compute.manager [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Terminating instance#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.462 238945 DEBUG nova.compute.manager [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:28:38 np0005597378 kernel: tapb5f45ab4-38 (unregistering): left promiscuous mode
Jan 27 09:28:38 np0005597378 NetworkManager[48904]: <info>  [1769524118.6851] device (tapb5f45ab4-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:28:38 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:38Z|01609|binding|INFO|Releasing lport b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb from this chassis (sb_readonly=0)
Jan 27 09:28:38 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:38Z|01610|binding|INFO|Setting lport b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb down in Southbound
Jan 27 09:28:38 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:38Z|01611|binding|INFO|Removing iface tapb5f45ab4-38 ovn-installed in OVS
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.693 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:38.735 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:2b:9c 10.100.0.14'], port_security=['fa:16:3e:f2:2b:9c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c7923d56-2a41-4171-a525-a985a28fc016', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '45344a38de5c4bc6b61680272082756a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '660baac2-dc26-4ff6-a045-736abfa5b2f4 c2c5ff5e-9ee7-4797-83a9-9d36f0a33d37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad1f4dd7-61a4-41de-900e-cd5d6044addb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:28:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:38.736 154802 INFO neutron.agent.ovn.metadata.agent [-] Port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb in datapath c6659e71-fbb8-4896-9a40-2262d5df9f38 unbound from our chassis#033[00m
Jan 27 09:28:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:38.738 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6659e71-fbb8-4896-9a40-2262d5df9f38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:28:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:38.738 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fddd5559-f69b-499a-855e-496aea1ac5ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:38 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:38.740 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38 namespace which is not needed anymore#033[00m
Jan 27 09:28:38 np0005597378 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000096.scope: Deactivated successfully.
Jan 27 09:28:38 np0005597378 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000096.scope: Consumed 16.218s CPU time.
Jan 27 09:28:38 np0005597378 systemd-machined[207425]: Machine qemu-182-instance-00000096 terminated.
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.897 238945 INFO nova.virt.libvirt.driver [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Instance destroyed successfully.#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.899 238945 DEBUG nova.objects.instance [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lazy-loading 'resources' on Instance uuid c7923d56-2a41-4171-a525-a985a28fc016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.947 238945 DEBUG nova.virt.libvirt.vif [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:27:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-165504025-access_point-1676094806',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-165504025-acc',id=150,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLziTTnCpQiGwD+zfsbNvOm7z3D/S+qNwJSbq3Ty7c+UeSsIMUq4F1A96cXt5Rqv4yVv7AHtiaj6NhC4ex3LO4YKWjSY7OsLwRoADAIj8lBXbuJvokxKrQTVr+6ZTeWY5A==',key_name='tempest-TestSecurityGroupsBasicOps-986212093',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:27:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='45344a38de5c4bc6b61680272082756a',ramdisk_id='',reservation_id='r-3fs48m1e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-165504025',owner_user_name='tempest-TestSecurityGroupsBasicOps-165504025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:27:33Z,user_data=None,user_id='2610a627ed524b0ab448b5604167899e',uuid=c7923d56-2a41-4171-a525-a985a28fc016,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.948 238945 DEBUG nova.network.os_vif_util [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converting VIF {"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.949 238945 DEBUG nova.network.os_vif_util [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.949 238945 DEBUG os_vif [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.951 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5f45ab4-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:38 np0005597378 nova_compute[238941]: 2026-01-27 14:28:38.956 238945 INFO os_vif [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb,network=Network(c6659e71-fbb8-4896-9a40-2262d5df9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5f45ab4-38')#033[00m
Jan 27 09:28:38 np0005597378 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [NOTICE]   (374580) : haproxy version is 2.8.14-c23fe91
Jan 27 09:28:38 np0005597378 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [NOTICE]   (374580) : path to executable is /usr/sbin/haproxy
Jan 27 09:28:38 np0005597378 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [WARNING]  (374580) : Exiting Master process...
Jan 27 09:28:38 np0005597378 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [WARNING]  (374580) : Exiting Master process...
Jan 27 09:28:38 np0005597378 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [ALERT]    (374580) : Current worker (374582) exited with code 143 (Terminated)
Jan 27 09:28:38 np0005597378 neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38[374576]: [WARNING]  (374580) : All workers exited. Exiting... (0)
Jan 27 09:28:38 np0005597378 systemd[1]: libpod-ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1.scope: Deactivated successfully.
Jan 27 09:28:39 np0005597378 podman[376335]: 2026-01-27 14:28:39.001838394 +0000 UTC m=+0.171961389 container died ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:28:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9397fc0828262d31d4311ffab7a6e8b659c7f57c5cdcedf60e73984a31d6826e-merged.mount: Deactivated successfully.
Jan 27 09:28:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1-userdata-shm.mount: Deactivated successfully.
Jan 27 09:28:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 297 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 85 KiB/s wr, 38 op/s
Jan 27 09:28:39 np0005597378 podman[376335]: 2026-01-27 14:28:39.667479445 +0000 UTC m=+0.837602440 container cleanup ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 09:28:39 np0005597378 systemd[1]: libpod-conmon-ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1.scope: Deactivated successfully.
Jan 27 09:28:39 np0005597378 podman[376394]: 2026-01-27 14:28:39.765029836 +0000 UTC m=+0.074885241 container remove ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:28:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.772 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8096d829-2f91-4860-aa2a-400ea3725405]: (4, ('Tue Jan 27 02:28:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38 (ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1)\nad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1\nTue Jan 27 02:28:39 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38 (ad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1)\nad87901373cb2cc9abc93ae6e465fdc71843e57e91ed526ee187414459e17ed1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.774 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2bc820-7503-4b42-8001-25ae754aa5c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.775 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6659e71-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:39 np0005597378 nova_compute[238941]: 2026-01-27 14:28:39.777 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:39 np0005597378 kernel: tapc6659e71-f0: left promiscuous mode
Jan 27 09:28:39 np0005597378 nova_compute[238941]: 2026-01-27 14:28:39.792 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.795 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3000dec6-c173-4b63-87a2-bcbc938882a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.813 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b0206efd-ddc8-45d8-92dd-64f5c6f3cc11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.815 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6c34b3f3-d72e-4460-991b-b23db0f52bc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.832 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[a7135c82-3360-4a2b-9272-a16c42245038]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669165, 'reachable_time': 44640, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376410, 'error': None, 'target': 'ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.835 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c6659e71-fbb8-4896-9a40-2262d5df9f38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:28:39 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:39.835 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[36da3808-951b-412a-aaca-a9b9a1c34a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:39 np0005597378 systemd[1]: run-netns-ovnmeta\x2dc6659e71\x2dfbb8\x2d4896\x2d9a40\x2d2262d5df9f38.mount: Deactivated successfully.
Jan 27 09:28:39 np0005597378 nova_compute[238941]: 2026-01-27 14:28:39.973 238945 INFO nova.virt.libvirt.driver [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Deleting instance files /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016_del#033[00m
Jan 27 09:28:39 np0005597378 nova_compute[238941]: 2026-01-27 14:28:39.974 238945 INFO nova.virt.libvirt.driver [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Deletion of /var/lib/nova/instances/c7923d56-2a41-4171-a525-a985a28fc016_del complete#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.052 238945 INFO nova.compute.manager [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Took 1.59 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.053 238945 DEBUG oslo.service.loopingcall [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.053 238945 DEBUG nova.compute.manager [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.054 238945 DEBUG nova.network.neutron [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.143 238945 DEBUG nova.network.neutron [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updated VIF entry in instance network info cache for port b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.143 238945 DEBUG nova.network.neutron [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updating instance_info_cache with network_info: [{"id": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "address": "fa:16:3e:f2:2b:9c", "network": {"id": "c6659e71-fbb8-4896-9a40-2262d5df9f38", "bridge": "br-int", "label": "tempest-network-smoke--2092928014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "45344a38de5c4bc6b61680272082756a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5f45ab4-38", "ovs_interfaceid": "b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.180 238945 DEBUG oslo_concurrency.lockutils [req-230981d6-b600-49d2-b32b-1866c3123ad4 req-d8bbf6f9-5067-4d51-8605-4264272a5f96 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-c7923d56-2a41-4171-a525-a985a28fc016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.369 238945 DEBUG nova.compute.manager [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-vif-unplugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.369 238945 DEBUG oslo_concurrency.lockutils [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.369 238945 DEBUG oslo_concurrency.lockutils [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.369 238945 DEBUG oslo_concurrency.lockutils [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.370 238945 DEBUG nova.compute.manager [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] No waiting events found dispatching network-vif-unplugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.370 238945 DEBUG nova.compute.manager [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-vif-unplugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.370 238945 DEBUG nova.compute.manager [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.371 238945 DEBUG oslo_concurrency.lockutils [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "c7923d56-2a41-4171-a525-a985a28fc016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.371 238945 DEBUG oslo_concurrency.lockutils [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.371 238945 DEBUG oslo_concurrency.lockutils [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.371 238945 DEBUG nova.compute.manager [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] No waiting events found dispatching network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.371 238945 WARNING nova.compute.manager [req-a46f92ba-d02c-4e70-a94c-fbd1f0d5cfe3 req-b42ae981-4c3d-4c27-8c5a-2868a2716854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received unexpected event network-vif-plugged-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.639 238945 DEBUG nova.network.neutron [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.660 238945 INFO nova.compute.manager [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Took 0.61 seconds to deallocate network for instance.#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.708 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.708 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.713 238945 DEBUG nova.compute.manager [req-012621bf-6e25-44d4-99b7-a89822d8c114 req-8ed5e052-81c8-4e0f-bd3c-c14db0b2a656 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Received event network-vif-deleted-b5f45ab4-38f3-4b5a-81cb-2ccf6c6477fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:28:40 np0005597378 nova_compute[238941]: 2026-01-27 14:28:40.790 238945 DEBUG oslo_concurrency.processutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:28:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 270 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 89 KiB/s wr, 55 op/s
Jan 27 09:28:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:28:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870739130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:28:41 np0005597378 nova_compute[238941]: 2026-01-27 14:28:41.364 238945 DEBUG oslo_concurrency.processutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:28:41 np0005597378 nova_compute[238941]: 2026-01-27 14:28:41.370 238945 DEBUG nova.compute.provider_tree [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:28:41 np0005597378 nova_compute[238941]: 2026-01-27 14:28:41.411 238945 DEBUG nova.scheduler.client.report [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:28:41 np0005597378 nova_compute[238941]: 2026-01-27 14:28:41.451 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:41 np0005597378 nova_compute[238941]: 2026-01-27 14:28:41.475 238945 INFO nova.scheduler.client.report [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Deleted allocations for instance c7923d56-2a41-4171-a525-a985a28fc016#033[00m
Jan 27 09:28:41 np0005597378 nova_compute[238941]: 2026-01-27 14:28:41.544 238945 DEBUG oslo_concurrency.lockutils [None req-d5d69801-d70b-42da-a974-0a6a579c490e 2610a627ed524b0ab448b5604167899e 45344a38de5c4bc6b61680272082756a - - default default] Lock "c7923d56-2a41-4171-a525-a985a28fc016" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:43 np0005597378 nova_compute[238941]: 2026-01-27 14:28:43.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 270 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 21 KiB/s wr, 45 op/s
Jan 27 09:28:43 np0005597378 nova_compute[238941]: 2026-01-27 14:28:43.953 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:44 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:44Z|01612|binding|INFO|Releasing lport 61475a7c-9045-4191-a533-3416010cde1f from this chassis (sb_readonly=0)
Jan 27 09:28:44 np0005597378 nova_compute[238941]: 2026-01-27 14:28:44.679 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 21 KiB/s wr, 56 op/s
Jan 27 09:28:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:28:45 np0005597378 nova_compute[238941]: 2026-01-27 14:28:45.918 238945 DEBUG nova.compute.manager [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:28:45 np0005597378 nova_compute[238941]: 2026-01-27 14:28:45.966 238945 INFO nova.compute.manager [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] instance snapshotting#033[00m
Jan 27 09:28:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:46.335 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:46.335 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:46.336 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:46 np0005597378 nova_compute[238941]: 2026-01-27 14:28:46.389 238945 INFO nova.virt.libvirt.driver [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Beginning live snapshot process#033[00m
Jan 27 09:28:46 np0005597378 podman[376434]: 2026-01-27 14:28:46.755240713 +0000 UTC m=+0.090540112 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:28:46 np0005597378 nova_compute[238941]: 2026-01-27 14:28:46.968 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524111.9677024, b5d1a89f-53d1-4f04-90ed-309724685f10 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:28:46 np0005597378 nova_compute[238941]: 2026-01-27 14:28:46.969 238945 INFO nova.compute.manager [-] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:28:47 np0005597378 nova_compute[238941]: 2026-01-27 14:28:47.170 238945 DEBUG nova.compute.manager [None req-83bf25e4-9cb4-44a3-923d-d33cf18a44b6 - - - - - -] [instance: b5d1a89f-53d1-4f04-90ed-309724685f10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:28:47 np0005597378 nova_compute[238941]: 2026-01-27 14:28:47.267 238945 DEBUG nova.storage.rbd_utils [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] creating snapshot(028d73cfc13142d481c436fb5ec98798) on rbd image(bad9acc4-1999-4764-adea-156a129e9d4a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 09:28:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 20 KiB/s wr, 48 op/s
Jan 27 09:28:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Jan 27 09:28:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Jan 27 09:28:47 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Jan 27 09:28:47 np0005597378 nova_compute[238941]: 2026-01-27 14:28:47.518 238945 DEBUG nova.storage.rbd_utils [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] cloning vms/bad9acc4-1999-4764-adea-156a129e9d4a_disk@028d73cfc13142d481c436fb5ec98798 to images/770adb1e-f49a-4067-80be-2ce1da90973c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 09:28:47 np0005597378 nova_compute[238941]: 2026-01-27 14:28:47.765 238945 DEBUG nova.storage.rbd_utils [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] flattening images/770adb1e-f49a-4067-80be-2ce1da90973c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 09:28:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:28:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:28:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:28:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:28:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:28:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:28:48 np0005597378 nova_compute[238941]: 2026-01-27 14:28:48.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:48 np0005597378 nova_compute[238941]: 2026-01-27 14:28:48.557 238945 DEBUG nova.storage.rbd_utils [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] removing snapshot(028d73cfc13142d481c436fb5ec98798) on rbd image(bad9acc4-1999-4764-adea-156a129e9d4a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 09:28:48 np0005597378 podman[376578]: 2026-01-27 14:28:48.762568328 +0000 UTC m=+0.107722726 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 27 09:28:48 np0005597378 nova_compute[238941]: 2026-01-27 14:28:48.956 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 153 KiB/s rd, 5.4 KiB/s wr, 37 op/s
Jan 27 09:28:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Jan 27 09:28:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Jan 27 09:28:49 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Jan 27 09:28:49 np0005597378 nova_compute[238941]: 2026-01-27 14:28:49.620 238945 DEBUG nova.storage.rbd_utils [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] creating snapshot(snap) on rbd image(770adb1e-f49a-4067-80be-2ce1da90973c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 09:28:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Jan 27 09:28:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Jan 27 09:28:50 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Jan 27 09:28:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:28:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 319 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 171 op/s
Jan 27 09:28:53 np0005597378 nova_compute[238941]: 2026-01-27 14:28:53.096 238945 INFO nova.virt.libvirt.driver [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Snapshot image upload complete#033[00m
Jan 27 09:28:53 np0005597378 nova_compute[238941]: 2026-01-27 14:28:53.097 238945 INFO nova.compute.manager [None req-6ec16d27-7d06-4961-a891-47f991056d54 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Took 7.13 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 27 09:28:53 np0005597378 nova_compute[238941]: 2026-01-27 14:28:53.300 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 319 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 164 op/s
Jan 27 09:28:53 np0005597378 nova_compute[238941]: 2026-01-27 14:28:53.895 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524118.8938851, c7923d56-2a41-4171-a525-a985a28fc016 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:28:53 np0005597378 nova_compute[238941]: 2026-01-27 14:28:53.895 238945 INFO nova.compute.manager [-] [instance: c7923d56-2a41-4171-a525-a985a28fc016] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:28:53 np0005597378 nova_compute[238941]: 2026-01-27 14:28:53.920 238945 DEBUG nova.compute.manager [None req-f0ff1799-cf7a-4a0c-9ff4-9d3f907fe090 - - - - - -] [instance: c7923d56-2a41-4171-a525-a985a28fc016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:28:53 np0005597378 nova_compute[238941]: 2026-01-27 14:28:53.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Jan 27 09:28:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Jan 27 09:28:54 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Jan 27 09:28:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 11 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 290 active+clean; 323 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 180 op/s
Jan 27 09:28:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:28:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Jan 27 09:28:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Jan 27 09:28:55 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Jan 27 09:28:56 np0005597378 nova_compute[238941]: 2026-01-27 14:28:56.777 238945 DEBUG nova.compute.manager [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-changed-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:56 np0005597378 nova_compute[238941]: 2026-01-27 14:28:56.778 238945 DEBUG nova.compute.manager [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Refreshing instance network info cache due to event network-changed-27db5f4c-e0fe-4746-aa0a-99149d5341d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:28:56 np0005597378 nova_compute[238941]: 2026-01-27 14:28:56.778 238945 DEBUG oslo_concurrency.lockutils [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:28:56 np0005597378 nova_compute[238941]: 2026-01-27 14:28:56.778 238945 DEBUG oslo_concurrency.lockutils [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:28:56 np0005597378 nova_compute[238941]: 2026-01-27 14:28:56.778 238945 DEBUG nova.network.neutron [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Refreshing network info cache for port 27db5f4c-e0fe-4746-aa0a-99149d5341d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:28:56 np0005597378 nova_compute[238941]: 2026-01-27 14:28:56.918 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:56 np0005597378 nova_compute[238941]: 2026-01-27 14:28:56.918 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:56 np0005597378 nova_compute[238941]: 2026-01-27 14:28:56.918 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:56 np0005597378 nova_compute[238941]: 2026-01-27 14:28:56.919 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:56 np0005597378 nova_compute[238941]: 2026-01-27 14:28:56.919 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:56 np0005597378 nova_compute[238941]: 2026-01-27 14:28:56.920 238945 INFO nova.compute.manager [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Terminating instance#033[00m
Jan 27 09:28:56 np0005597378 nova_compute[238941]: 2026-01-27 14:28:56.921 238945 DEBUG nova.compute.manager [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:28:57 np0005597378 kernel: tap27db5f4c-e0 (unregistering): left promiscuous mode
Jan 27 09:28:57 np0005597378 NetworkManager[48904]: <info>  [1769524137.0397] device (tap27db5f4c-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:28:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:57Z|01613|binding|INFO|Releasing lport 27db5f4c-e0fe-4746-aa0a-99149d5341d4 from this chassis (sb_readonly=0)
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.046 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:57Z|01614|binding|INFO|Setting lport 27db5f4c-e0fe-4746-aa0a-99149d5341d4 down in Southbound
Jan 27 09:28:57 np0005597378 ovn_controller[144812]: 2026-01-27T14:28:57Z|01615|binding|INFO|Removing iface tap27db5f4c-e0 ovn-installed in OVS
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.048 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:57 np0005597378 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000097.scope: Deactivated successfully.
Jan 27 09:28:57 np0005597378 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000097.scope: Consumed 17.729s CPU time.
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.108 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:40:d6 10.100.0.7'], port_security=['fa:16:3e:15:40:d6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bad9acc4-1999-4764-adea-156a129e9d4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '805ab209134d4d70b18753f441ccc5a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f86cce-87c9-45ba-83fe-825a709960dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03b607c2-7b6e-41ad-bb2f-d8b59f61c333, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=27db5f4c-e0fe-4746-aa0a-99149d5341d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.109 154802 INFO neutron.agent.ovn.metadata.agent [-] Port 27db5f4c-e0fe-4746-aa0a-99149d5341d4 in datapath 4d618b96-4a07-4d69-bf79-7e30a43f8748 unbound from our chassis#033[00m
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.110 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d618b96-4a07-4d69-bf79-7e30a43f8748#033[00m
Jan 27 09:28:57 np0005597378 systemd-machined[207425]: Machine qemu-183-instance-00000097 terminated.
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.130 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1461b08e-d888-40ee-baaf-62020738a8ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.172 238945 INFO nova.virt.libvirt.driver [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Instance destroyed successfully.#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.172 238945 DEBUG nova.objects.instance [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lazy-loading 'resources' on Instance uuid bad9acc4-1999-4764-adea-156a129e9d4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.174 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c6fecd-5a7a-44a1-82f8-06349264a4b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.178 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0015bc-ff87-47a6-81d3-95e0d5f93f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.212 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[cb76cfc0-7e34-472a-945a-498ceb688618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.233 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9e5ef0-19ea-4f50-b40a-c9d92e3ac4ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d618b96-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:25:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668318, 'reachable_time': 18816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376645, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.250 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4c763567-1a1a-4a26-b72b-5d31edb8e162]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d618b96-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668333, 'tstamp': 668333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376646, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d618b96-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668337, 'tstamp': 668337}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376646, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.251 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d618b96-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.253 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.258 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d618b96-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.259 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.259 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d618b96-40, col_values=(('external_ids', {'iface-id': '61475a7c-9045-4191-a533-3416010cde1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:57 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:28:57.259 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.315 238945 DEBUG nova.virt.libvirt.vif [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:27:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1569596201',display_name='tempest-TestSnapshotPattern-server-1569596201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1569596201',id=151,image_ref='dc70b820-f623-4425-90a6-c6b104369526',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNc3kabcPAp53cNHKQDuO00NCZjux+eUa7mSEOBuVMwPChG+U7u0C7ZMMWjp5T2k1lbp+Ba5Z8QjzrSwhKhfIt41TEYXlz0I5nulM29GPEwEtNekD200rXhk5UEP8gcTGg==',key_name='tempest-TestSnapshotPattern-1633717199',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:28:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='805ab209134d4d70b18753f441ccc5a7',ramdisk_id='',reservation_id='r-2guobcuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e7d05a6a-847c-4124-bbb7-f122cb954501',image_min_disk='1',image_min_ram='0',image_owner_id='805ab209134d4d70b18753f441ccc5a7',image_owner_project_name='tempest-TestSnapshotPattern-2108848063',image_owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member',image_user_id='15cb999473674ad581f5a98de252c28a',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-2108848063',owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:28:53Z,user_data=None,user_id='15cb999473674ad581f5a98de252c28a',uuid=bad9acc4-1999-4764-adea-156a129e9d4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.316 238945 DEBUG nova.network.os_vif_util [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converting VIF {"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.317 238945 DEBUG nova.network.os_vif_util [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.317 238945 DEBUG os_vif [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.319 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.319 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27db5f4c-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.323 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 11 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 290 active+clean; 297 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 337 KiB/s wr, 69 op/s
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.325 238945 INFO os_vif [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:40:d6,bridge_name='br-int',has_traffic_filtering=True,id=27db5f4c-e0fe-4746-aa0a-99149d5341d4,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27db5f4c-e0')#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.552 238945 DEBUG nova.compute.manager [req-94a5402d-6ebe-4cba-9a26-37095e679eaf req-904be8c4-6764-47b1-9d85-7a20b93e7854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-vif-unplugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.552 238945 DEBUG oslo_concurrency.lockutils [req-94a5402d-6ebe-4cba-9a26-37095e679eaf req-904be8c4-6764-47b1-9d85-7a20b93e7854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.553 238945 DEBUG oslo_concurrency.lockutils [req-94a5402d-6ebe-4cba-9a26-37095e679eaf req-904be8c4-6764-47b1-9d85-7a20b93e7854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.553 238945 DEBUG oslo_concurrency.lockutils [req-94a5402d-6ebe-4cba-9a26-37095e679eaf req-904be8c4-6764-47b1-9d85-7a20b93e7854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.554 238945 DEBUG nova.compute.manager [req-94a5402d-6ebe-4cba-9a26-37095e679eaf req-904be8c4-6764-47b1-9d85-7a20b93e7854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] No waiting events found dispatching network-vif-unplugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:28:57 np0005597378 nova_compute[238941]: 2026-01-27 14:28:57.554 238945 DEBUG nova.compute.manager [req-94a5402d-6ebe-4cba-9a26-37095e679eaf req-904be8c4-6764-47b1-9d85-7a20b93e7854 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-vif-unplugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:28:58 np0005597378 nova_compute[238941]: 2026-01-27 14:28:58.005 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:58 np0005597378 nova_compute[238941]: 2026-01-27 14:28:58.301 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:28:58 np0005597378 nova_compute[238941]: 2026-01-27 14:28:58.621 238945 DEBUG nova.network.neutron [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updated VIF entry in instance network info cache for port 27db5f4c-e0fe-4746-aa0a-99149d5341d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:28:58 np0005597378 nova_compute[238941]: 2026-01-27 14:28:58.621 238945 DEBUG nova.network.neutron [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updating instance_info_cache with network_info: [{"id": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "address": "fa:16:3e:15:40:d6", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27db5f4c-e0", "ovs_interfaceid": "27db5f4c-e0fe-4746-aa0a-99149d5341d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:28:58 np0005597378 nova_compute[238941]: 2026-01-27 14:28:58.644 238945 DEBUG oslo_concurrency.lockutils [req-343cb09f-ff12-4915-9846-d7d56907c918 req-3dde55cc-fdae-4ef9-bd92-f25475ed9999 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-bad9acc4-1999-4764-adea-156a129e9d4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:28:59 np0005597378 nova_compute[238941]: 2026-01-27 14:28:59.125 238945 INFO nova.virt.libvirt.driver [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Deleting instance files /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a_del#033[00m
Jan 27 09:28:59 np0005597378 nova_compute[238941]: 2026-01-27 14:28:59.126 238945 INFO nova.virt.libvirt.driver [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Deletion of /var/lib/nova/instances/bad9acc4-1999-4764-adea-156a129e9d4a_del complete#033[00m
Jan 27 09:28:59 np0005597378 nova_compute[238941]: 2026-01-27 14:28:59.199 238945 INFO nova.compute.manager [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Took 2.28 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:28:59 np0005597378 nova_compute[238941]: 2026-01-27 14:28:59.200 238945 DEBUG oslo.service.loopingcall [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:28:59 np0005597378 nova_compute[238941]: 2026-01-27 14:28:59.200 238945 DEBUG nova.compute.manager [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:28:59 np0005597378 nova_compute[238941]: 2026-01-27 14:28:59.201 238945 DEBUG nova.network.neutron [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:28:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 11 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 290 active+clean; 297 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 284 KiB/s wr, 58 op/s
Jan 27 09:28:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:28:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/208787029' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:28:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:28:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/208787029' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:28:59 np0005597378 nova_compute[238941]: 2026-01-27 14:28:59.784 238945 DEBUG nova.compute.manager [req-a19f8d3e-a572-4ac9-9dc2-b7544690348c req-497e0c82-0e16-4810-9b95-5df8530e4ee4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:28:59 np0005597378 nova_compute[238941]: 2026-01-27 14:28:59.784 238945 DEBUG oslo_concurrency.lockutils [req-a19f8d3e-a572-4ac9-9dc2-b7544690348c req-497e0c82-0e16-4810-9b95-5df8530e4ee4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:28:59 np0005597378 nova_compute[238941]: 2026-01-27 14:28:59.784 238945 DEBUG oslo_concurrency.lockutils [req-a19f8d3e-a572-4ac9-9dc2-b7544690348c req-497e0c82-0e16-4810-9b95-5df8530e4ee4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:28:59 np0005597378 nova_compute[238941]: 2026-01-27 14:28:59.785 238945 DEBUG oslo_concurrency.lockutils [req-a19f8d3e-a572-4ac9-9dc2-b7544690348c req-497e0c82-0e16-4810-9b95-5df8530e4ee4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:28:59 np0005597378 nova_compute[238941]: 2026-01-27 14:28:59.785 238945 DEBUG nova.compute.manager [req-a19f8d3e-a572-4ac9-9dc2-b7544690348c req-497e0c82-0e16-4810-9b95-5df8530e4ee4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] No waiting events found dispatching network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:28:59 np0005597378 nova_compute[238941]: 2026-01-27 14:28:59.785 238945 WARNING nova.compute.manager [req-a19f8d3e-a572-4ac9-9dc2-b7544690348c req-497e0c82-0e16-4810-9b95-5df8530e4ee4 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received unexpected event network-vif-plugged-27db5f4c-e0fe-4746-aa0a-99149d5341d4 for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:29:00 np0005597378 nova_compute[238941]: 2026-01-27 14:29:00.192 238945 DEBUG nova.network.neutron [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:29:00 np0005597378 nova_compute[238941]: 2026-01-27 14:29:00.212 238945 INFO nova.compute.manager [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Took 1.01 seconds to deallocate network for instance.#033[00m
Jan 27 09:29:00 np0005597378 nova_compute[238941]: 2026-01-27 14:29:00.295 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:00 np0005597378 nova_compute[238941]: 2026-01-27 14:29:00.295 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:00 np0005597378 nova_compute[238941]: 2026-01-27 14:29:00.364 238945 DEBUG oslo_concurrency.processutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:29:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:29:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:29:00 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2358806203' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:29:00 np0005597378 nova_compute[238941]: 2026-01-27 14:29:00.968 238945 DEBUG oslo_concurrency.processutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:29:00 np0005597378 nova_compute[238941]: 2026-01-27 14:29:00.976 238945 DEBUG nova.compute.provider_tree [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:29:00 np0005597378 nova_compute[238941]: 2026-01-27 14:29:00.991 238945 DEBUG nova.scheduler.client.report [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:29:01 np0005597378 nova_compute[238941]: 2026-01-27 14:29:01.015 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:01 np0005597378 nova_compute[238941]: 2026-01-27 14:29:01.075 238945 INFO nova.scheduler.client.report [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Deleted allocations for instance bad9acc4-1999-4764-adea-156a129e9d4a#033[00m
Jan 27 09:29:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 286 KiB/s wr, 102 op/s
Jan 27 09:29:01 np0005597378 nova_compute[238941]: 2026-01-27 14:29:01.419 238945 DEBUG oslo_concurrency.lockutils [None req-4ad19eba-64b3-4807-9781-0d28cd43c65c 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "bad9acc4-1999-4764-adea-156a129e9d4a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:01 np0005597378 nova_compute[238941]: 2026-01-27 14:29:01.879 238945 DEBUG nova.compute.manager [req-92b2ba51-4b78-4dd4-b3c6-7ea866a3a800 req-2206acd7-d4a2-4ded-a312-6fd93edac9dc 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Received event network-vif-deleted-27db5f4c-e0fe-4746-aa0a-99149d5341d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:29:02 np0005597378 nova_compute[238941]: 2026-01-27 14:29:02.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:02 np0005597378 nova_compute[238941]: 2026-01-27 14:29:02.321 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Jan 27 09:29:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Jan 27 09:29:02 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Jan 27 09:29:03 np0005597378 nova_compute[238941]: 2026-01-27 14:29:03.305 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 7.2 KiB/s wr, 90 op/s
Jan 27 09:29:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 142 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.9 KiB/s wr, 63 op/s
Jan 27 09:29:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.456 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:29:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.459 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.504 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.698 238945 DEBUG nova.compute.manager [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-changed-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.698 238945 DEBUG nova.compute.manager [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Refreshing instance network info cache due to event network-changed-c5db635d-2d18-4cdb-9339-8474b028f04b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.698 238945 DEBUG oslo_concurrency.lockutils [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.699 238945 DEBUG oslo_concurrency.lockutils [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.699 238945 DEBUG nova.network.neutron [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Refreshing network info cache for port c5db635d-2d18-4cdb-9339-8474b028f04b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.779 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.780 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.780 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.780 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.781 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.782 238945 INFO nova.compute.manager [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Terminating instance#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.784 238945 DEBUG nova.compute.manager [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:29:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:29:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Jan 27 09:29:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Jan 27 09:29:05 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Jan 27 09:29:05 np0005597378 kernel: tapc5db635d-2d (unregistering): left promiscuous mode
Jan 27 09:29:05 np0005597378 NetworkManager[48904]: <info>  [1769524145.8410] device (tapc5db635d-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.848 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:05Z|01616|binding|INFO|Releasing lport c5db635d-2d18-4cdb-9339-8474b028f04b from this chassis (sb_readonly=0)
Jan 27 09:29:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:05Z|01617|binding|INFO|Setting lport c5db635d-2d18-4cdb-9339-8474b028f04b down in Southbound
Jan 27 09:29:05 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:05Z|01618|binding|INFO|Removing iface tapc5db635d-2d ovn-installed in OVS
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.858 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:e8:81 10.100.0.13'], port_security=['fa:16:3e:e7:e8:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e7d05a6a-847c-4124-bbb7-f122cb954501', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '805ab209134d4d70b18753f441ccc5a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42f86cce-87c9-45ba-83fe-825a709960dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03b607c2-7b6e-41ad-bb2f-d8b59f61c333, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=c5db635d-2d18-4cdb-9339-8474b028f04b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:29:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.860 154802 INFO neutron.agent.ovn.metadata.agent [-] Port c5db635d-2d18-4cdb-9339-8474b028f04b in datapath 4d618b96-4a07-4d69-bf79-7e30a43f8748 unbound from our chassis#033[00m
Jan 27 09:29:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.861 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d618b96-4a07-4d69-bf79-7e30a43f8748, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:29:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.862 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[38198d31-1df1-4956-bb3d-8cabbda452d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:05 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:05.863 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748 namespace which is not needed anymore#033[00m
Jan 27 09:29:05 np0005597378 nova_compute[238941]: 2026-01-27 14:29:05.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:05 np0005597378 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000095.scope: Deactivated successfully.
Jan 27 09:29:05 np0005597378 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000095.scope: Consumed 17.707s CPU time.
Jan 27 09:29:05 np0005597378 systemd-machined[207425]: Machine qemu-181-instance-00000095 terminated.
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.029 238945 INFO nova.virt.libvirt.driver [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Instance destroyed successfully.#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.030 238945 DEBUG nova.objects.instance [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lazy-loading 'resources' on Instance uuid e7d05a6a-847c-4124-bbb7-f122cb954501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:29:06 np0005597378 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [NOTICE]   (374254) : haproxy version is 2.8.14-c23fe91
Jan 27 09:29:06 np0005597378 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [NOTICE]   (374254) : path to executable is /usr/sbin/haproxy
Jan 27 09:29:06 np0005597378 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [WARNING]  (374254) : Exiting Master process...
Jan 27 09:29:06 np0005597378 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [ALERT]    (374254) : Current worker (374269) exited with code 143 (Terminated)
Jan 27 09:29:06 np0005597378 neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748[374186]: [WARNING]  (374254) : All workers exited. Exiting... (0)
Jan 27 09:29:06 np0005597378 systemd[1]: libpod-8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62.scope: Deactivated successfully.
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.045 238945 DEBUG nova.virt.libvirt.vif [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:27:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-371612586',display_name='tempest-TestSnapshotPattern-server-371612586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-371612586',id=149,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNc3kabcPAp53cNHKQDuO00NCZjux+eUa7mSEOBuVMwPChG+U7u0C7ZMMWjp5T2k1lbp+Ba5Z8QjzrSwhKhfIt41TEYXlz0I5nulM29GPEwEtNekD200rXhk5UEP8gcTGg==',key_name='tempest-TestSnapshotPattern-1633717199',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:27:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='805ab209134d4d70b18753f441ccc5a7',ramdisk_id='',reservation_id='r-z0d5npn4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-2108848063',owner_user_name='tempest-TestSnapshotPattern-2108848063-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:27:52Z,user_data=None,user_id='15cb999473674ad581f5a98de252c28a',uuid=e7d05a6a-847c-4124-bbb7-f122cb954501,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.047 238945 DEBUG nova.network.os_vif_util [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converting VIF {"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:29:06 np0005597378 podman[376764]: 2026-01-27 14:29:06.048450774 +0000 UTC m=+0.075920929 container died 8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.048 238945 DEBUG nova.network.os_vif_util [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.049 238945 DEBUG os_vif [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.053 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.053 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5db635d-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.055 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.058 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.060 238945 INFO os_vif [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:e8:81,bridge_name='br-int',has_traffic_filtering=True,id=c5db635d-2d18-4cdb-9339-8474b028f04b,network=Network(4d618b96-4a07-4d69-bf79-7e30a43f8748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5db635d-2d')#033[00m
Jan 27 09:29:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62-userdata-shm.mount: Deactivated successfully.
Jan 27 09:29:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-452c9233bc3b6f5ba7ec9a6cb3d2f937271e9caae10adc6c0f0761b21388886a-merged.mount: Deactivated successfully.
Jan 27 09:29:06 np0005597378 podman[376764]: 2026-01-27 14:29:06.1295221 +0000 UTC m=+0.156992255 container cleanup 8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 09:29:06 np0005597378 systemd[1]: libpod-conmon-8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62.scope: Deactivated successfully.
Jan 27 09:29:06 np0005597378 podman[376841]: 2026-01-27 14:29:06.256476673 +0000 UTC m=+0.097340535 container remove 8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:29:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.264 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd551b2-92a1-47e6-8422-a227dba842f8]: (4, ('Tue Jan 27 02:29:05 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748 (8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62)\n8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62\nTue Jan 27 02:29:06 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748 (8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62)\n8e600bb6593e3a54b7486ce8adf50d9b7154d35f96917fee6864d7fd4db71d62\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.266 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba9c37a-bec4-411b-984b-04c00f563bed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.267 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d618b96-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.269 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:06 np0005597378 kernel: tap4d618b96-40: left promiscuous mode
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.289 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.290 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.293 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[231d82dd-745a-4024-8639-a5e78f051b9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.310 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[d628c57f-7cc9-4615-a8ac-773c5abe9cea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.311 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[1835ec1e-a8e2-443b-8a1e-60c68ab1eca1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.334 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac97fd56-4f11-4941-beb8-2ed471caea6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668310, 'reachable_time': 30188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376886, 'error': None, 'target': 'ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.337 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d618b96-4a07-4d69-bf79-7e30a43f8748 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:29:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:06.337 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[ee67ed89-efc7-4589-b496-67bf76509966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:06 np0005597378 systemd[1]: run-netns-ovnmeta\x2d4d618b96\x2d4a07\x2d4d69\x2dbf79\x2d7e30a43f8748.mount: Deactivated successfully.
Jan 27 09:29:06 np0005597378 podman[376876]: 2026-01-27 14:29:06.404044833 +0000 UTC m=+0.101145998 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:29:06 np0005597378 podman[376876]: 2026-01-27 14:29:06.500880475 +0000 UTC m=+0.197981650 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.771 238945 INFO nova.virt.libvirt.driver [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Deleting instance files /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501_del#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.772 238945 INFO nova.virt.libvirt.driver [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Deletion of /var/lib/nova/instances/e7d05a6a-847c-4124-bbb7-f122cb954501_del complete#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.821 238945 INFO nova.compute.manager [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Took 1.04 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.821 238945 DEBUG oslo.service.loopingcall [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.821 238945 DEBUG nova.compute.manager [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:29:06 np0005597378 nova_compute[238941]: 2026-01-27 14:29:06.821 238945 DEBUG nova.network.neutron [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:29:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 78 op/s
Jan 27 09:29:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:29:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:29:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:29:07 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.721 238945 DEBUG nova.network.neutron [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updated VIF entry in instance network info cache for port c5db635d-2d18-4cdb-9339-8474b028f04b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.721 238945 DEBUG nova.network.neutron [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updating instance_info_cache with network_info: [{"id": "c5db635d-2d18-4cdb-9339-8474b028f04b", "address": "fa:16:3e:e7:e8:81", "network": {"id": "4d618b96-4a07-4d69-bf79-7e30a43f8748", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1458448468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "805ab209134d4d70b18753f441ccc5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5db635d-2d", "ovs_interfaceid": "c5db635d-2d18-4cdb-9339-8474b028f04b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.741 238945 DEBUG nova.network.neutron [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.743 238945 DEBUG oslo_concurrency.lockutils [req-64e33225-5c0b-4613-ae64-05f5496b62ab req-e5580fc3-b437-459b-81dc-98ed30995452 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-e7d05a6a-847c-4124-bbb7-f122cb954501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.759 238945 INFO nova.compute.manager [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Took 0.94 seconds to deallocate network for instance.#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.796 238945 DEBUG nova.compute.manager [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-vif-unplugged-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.796 238945 DEBUG oslo_concurrency.lockutils [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.796 238945 DEBUG oslo_concurrency.lockutils [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.796 238945 DEBUG oslo_concurrency.lockutils [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.797 238945 DEBUG nova.compute.manager [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] No waiting events found dispatching network-vif-unplugged-c5db635d-2d18-4cdb-9339-8474b028f04b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.797 238945 DEBUG nova.compute.manager [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-vif-unplugged-c5db635d-2d18-4cdb-9339-8474b028f04b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.797 238945 DEBUG nova.compute.manager [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.797 238945 DEBUG oslo_concurrency.lockutils [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.798 238945 DEBUG oslo_concurrency.lockutils [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.798 238945 DEBUG oslo_concurrency.lockutils [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.798 238945 DEBUG nova.compute.manager [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] No waiting events found dispatching network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.798 238945 WARNING nova.compute.manager [req-ca8224f1-529e-43ce-9345-0b0d865a5d63 req-3502e4f1-5260-4bc4-acc4-add8d4dbaace 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received unexpected event network-vif-plugged-c5db635d-2d18-4cdb-9339-8474b028f04b for instance with vm_state active and task_state deleting.#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.807 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.807 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.834 238945 DEBUG nova.scheduler.client.report [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.849 238945 DEBUG nova.scheduler.client.report [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.850 238945 DEBUG nova.compute.provider_tree [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.862 238945 DEBUG nova.scheduler.client.report [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.881 238945 DEBUG nova.scheduler.client.report [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 09:29:07 np0005597378 nova_compute[238941]: 2026-01-27 14:29:07.912 238945 DEBUG oslo_concurrency.processutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:29:08 np0005597378 nova_compute[238941]: 2026-01-27 14:29:08.238 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:29:08 np0005597378 nova_compute[238941]: 2026-01-27 14:29:08.307 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:29:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908747889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:29:08 np0005597378 nova_compute[238941]: 2026-01-27 14:29:08.585 238945 DEBUG oslo_concurrency.processutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:29:08 np0005597378 nova_compute[238941]: 2026-01-27 14:29:08.592 238945 DEBUG nova.compute.provider_tree [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:29:08 np0005597378 nova_compute[238941]: 2026-01-27 14:29:08.617 238945 DEBUG nova.scheduler.client.report [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:29:08 np0005597378 nova_compute[238941]: 2026-01-27 14:29:08.646 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:29:08 np0005597378 podman[377227]: 2026-01-27 14:29:08.672717046 +0000 UTC m=+0.115527486 container create 44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_euler, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:29:08 np0005597378 podman[377227]: 2026-01-27 14:29:08.583965833 +0000 UTC m=+0.026776293 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:29:08 np0005597378 nova_compute[238941]: 2026-01-27 14:29:08.681 238945 INFO nova.scheduler.client.report [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Deleted allocations for instance e7d05a6a-847c-4124-bbb7-f122cb954501
Jan 27 09:29:08 np0005597378 systemd[1]: Started libpod-conmon-44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3.scope.
Jan 27 09:29:08 np0005597378 nova_compute[238941]: 2026-01-27 14:29:08.762 238945 DEBUG oslo_concurrency.lockutils [None req-32000b61-a352-4783-8d15-22b7c9a6c3fa 15cb999473674ad581f5a98de252c28a 805ab209134d4d70b18753f441ccc5a7 - - default default] Lock "e7d05a6a-847c-4124-bbb7-f122cb954501" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:29:08 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:29:08 np0005597378 podman[377227]: 2026-01-27 14:29:08.864023716 +0000 UTC m=+0.306834176 container init 44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_euler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:29:08 np0005597378 podman[377227]: 2026-01-27 14:29:08.873625775 +0000 UTC m=+0.316436215 container start 44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 09:29:08 np0005597378 heuristic_euler[377243]: 167 167
Jan 27 09:29:08 np0005597378 systemd[1]: libpod-44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3.scope: Deactivated successfully.
Jan 27 09:29:08 np0005597378 podman[377227]: 2026-01-27 14:29:08.917433326 +0000 UTC m=+0.360243776 container attach 44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_euler, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:29:08 np0005597378 podman[377227]: 2026-01-27 14:29:08.918030642 +0000 UTC m=+0.360841082 container died 44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 27 09:29:09 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b3db711d0b1bc553b85ec82260ad6d240a61a83016e8c4b7fda267bd8d1c073e-merged.mount: Deactivated successfully.
Jan 27 09:29:09 np0005597378 podman[377227]: 2026-01-27 14:29:09.15079365 +0000 UTC m=+0.593604100 container remove 44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_euler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:29:09 np0005597378 systemd[1]: libpod-conmon-44026a5c9bf354a2ca0abbc32223afb107676ed3092fa3f7df9378f3be6db0c3.scope: Deactivated successfully.
Jan 27 09:29:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 34 op/s
Jan 27 09:29:09 np0005597378 podman[377269]: 2026-01-27 14:29:09.340913216 +0000 UTC m=+0.055645371 container create d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_swirles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:29:09 np0005597378 systemd[1]: Started libpod-conmon-d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98.scope.
Jan 27 09:29:09 np0005597378 podman[377269]: 2026-01-27 14:29:09.317408533 +0000 UTC m=+0.032140708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:29:09 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:29:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d7b4498628d3bdddd41545948aebf719bd725c838c59bbb7bb45bcd5f9da9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d7b4498628d3bdddd41545948aebf719bd725c838c59bbb7bb45bcd5f9da9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d7b4498628d3bdddd41545948aebf719bd725c838c59bbb7bb45bcd5f9da9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d7b4498628d3bdddd41545948aebf719bd725c838c59bbb7bb45bcd5f9da9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d7b4498628d3bdddd41545948aebf719bd725c838c59bbb7bb45bcd5f9da9b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:09 np0005597378 podman[377269]: 2026-01-27 14:29:09.453406581 +0000 UTC m=+0.168138756 container init d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:29:09 np0005597378 podman[377269]: 2026-01-27 14:29:09.46191197 +0000 UTC m=+0.176644115 container start d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_swirles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:29:09 np0005597378 podman[377269]: 2026-01-27 14:29:09.47082546 +0000 UTC m=+0.185557645 container attach d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:29:09 np0005597378 nova_compute[238941]: 2026-01-27 14:29:09.892 238945 DEBUG nova.compute.manager [req-3da51ccf-c1a5-4c5d-842c-7c4077c51126 req-ed7d52a2-e9f5-4d6c-abf7-a657ab96c509 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Received event network-vif-deleted-c5db635d-2d18-4cdb-9339-8474b028f04b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:29:09 np0005597378 frosty_swirles[377286]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:29:09 np0005597378 frosty_swirles[377286]: --> All data devices are unavailable
Jan 27 09:29:09 np0005597378 systemd[1]: libpod-d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98.scope: Deactivated successfully.
Jan 27 09:29:09 np0005597378 podman[377269]: 2026-01-27 14:29:09.96508412 +0000 UTC m=+0.679816275 container died d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_swirles, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:29:10 np0005597378 systemd[1]: var-lib-containers-storage-overlay-72d7b4498628d3bdddd41545948aebf719bd725c838c59bbb7bb45bcd5f9da9b-merged.mount: Deactivated successfully.
Jan 27 09:29:10 np0005597378 podman[377269]: 2026-01-27 14:29:10.068436817 +0000 UTC m=+0.783168972 container remove d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Jan 27 09:29:10 np0005597378 systemd[1]: libpod-conmon-d64aa35c4d4886789c4488928f1869e46b8204f55e43e37afff0fa1ef7874a98.scope: Deactivated successfully.
Jan 27 09:29:10 np0005597378 podman[377379]: 2026-01-27 14:29:10.556904421 +0000 UTC m=+0.046484254 container create 4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elion, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 09:29:10 np0005597378 systemd[1]: Started libpod-conmon-4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32.scope.
Jan 27 09:29:10 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:29:10 np0005597378 podman[377379]: 2026-01-27 14:29:10.535187195 +0000 UTC m=+0.024767078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:29:10 np0005597378 podman[377379]: 2026-01-27 14:29:10.645275054 +0000 UTC m=+0.134854897 container init 4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elion, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:29:10 np0005597378 podman[377379]: 2026-01-27 14:29:10.653120036 +0000 UTC m=+0.142699899 container start 4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 09:29:10 np0005597378 podman[377379]: 2026-01-27 14:29:10.658205883 +0000 UTC m=+0.147785726 container attach 4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elion, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:29:10 np0005597378 affectionate_elion[377394]: 167 167
Jan 27 09:29:10 np0005597378 systemd[1]: libpod-4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32.scope: Deactivated successfully.
Jan 27 09:29:10 np0005597378 conmon[377394]: conmon 4ce789d5d4c727645944 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32.scope/container/memory.events
Jan 27 09:29:10 np0005597378 podman[377379]: 2026-01-27 14:29:10.661734838 +0000 UTC m=+0.151314681 container died 4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 09:29:10 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8d36f2958cbe34e3967cdb34aeb71754727419fdfdb35c8456be0d7c088573ab-merged.mount: Deactivated successfully.
Jan 27 09:29:10 np0005597378 podman[377379]: 2026-01-27 14:29:10.705672573 +0000 UTC m=+0.195252406 container remove 4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_elion, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:29:10 np0005597378 systemd[1]: libpod-conmon-4ce789d5d4c7276459443956a69e46f12dd7d9ce9c59a294fc717ebf774bab32.scope: Deactivated successfully.
Jan 27 09:29:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:29:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Jan 27 09:29:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Jan 27 09:29:10 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Jan 27 09:29:10 np0005597378 podman[377419]: 2026-01-27 14:29:10.888525805 +0000 UTC m=+0.049578689 container create 8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bouman, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 09:29:10 np0005597378 systemd[1]: Started libpod-conmon-8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb.scope.
Jan 27 09:29:10 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:29:10 np0005597378 podman[377419]: 2026-01-27 14:29:10.868879425 +0000 UTC m=+0.029932319 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:29:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0803bf962b15aac67a2e67c6b112cd9dceb20972d39fe72d4caa33c5773a02ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0803bf962b15aac67a2e67c6b112cd9dceb20972d39fe72d4caa33c5773a02ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0803bf962b15aac67a2e67c6b112cd9dceb20972d39fe72d4caa33c5773a02ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:10 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0803bf962b15aac67a2e67c6b112cd9dceb20972d39fe72d4caa33c5773a02ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:10 np0005597378 podman[377419]: 2026-01-27 14:29:10.980631178 +0000 UTC m=+0.141684082 container init 8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bouman, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:29:10 np0005597378 podman[377419]: 2026-01-27 14:29:10.98886047 +0000 UTC m=+0.149913354 container start 8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bouman, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 09:29:10 np0005597378 podman[377419]: 2026-01-27 14:29:10.992221971 +0000 UTC m=+0.153274865 container attach 8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 27 09:29:11 np0005597378 nova_compute[238941]: 2026-01-27 14:29:11.055 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:11 np0005597378 elated_bouman[377435]: {
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:    "0": [
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:        {
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "devices": [
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "/dev/loop3"
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            ],
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_name": "ceph_lv0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_size": "21470642176",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "name": "ceph_lv0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "tags": {
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.cluster_name": "ceph",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.crush_device_class": "",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.encrypted": "0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.objectstore": "bluestore",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.osd_id": "0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.type": "block",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.vdo": "0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.with_tpm": "0"
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            },
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "type": "block",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "vg_name": "ceph_vg0"
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:        }
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:    ],
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:    "1": [
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:        {
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "devices": [
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "/dev/loop4"
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            ],
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_name": "ceph_lv1",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_size": "21470642176",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "name": "ceph_lv1",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "tags": {
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.cluster_name": "ceph",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.crush_device_class": "",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.encrypted": "0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.objectstore": "bluestore",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.osd_id": "1",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.type": "block",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.vdo": "0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.with_tpm": "0"
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            },
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "type": "block",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "vg_name": "ceph_vg1"
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:        }
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:    ],
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:    "2": [
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:        {
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "devices": [
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "/dev/loop5"
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            ],
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_name": "ceph_lv2",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_size": "21470642176",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "name": "ceph_lv2",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "tags": {
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.cluster_name": "ceph",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.crush_device_class": "",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.encrypted": "0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.objectstore": "bluestore",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.osd_id": "2",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.type": "block",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.vdo": "0",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:                "ceph.with_tpm": "0"
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            },
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "type": "block",
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:            "vg_name": "ceph_vg2"
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:        }
Jan 27 09:29:11 np0005597378 elated_bouman[377435]:    ]
Jan 27 09:29:11 np0005597378 elated_bouman[377435]: }
Jan 27 09:29:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 41 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Jan 27 09:29:11 np0005597378 systemd[1]: libpod-8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb.scope: Deactivated successfully.
Jan 27 09:29:11 np0005597378 conmon[377435]: conmon 8991e3cd47adcfa4e471 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb.scope/container/memory.events
Jan 27 09:29:11 np0005597378 podman[377419]: 2026-01-27 14:29:11.34150441 +0000 UTC m=+0.502557294 container died 8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bouman, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:29:11 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0803bf962b15aac67a2e67c6b112cd9dceb20972d39fe72d4caa33c5773a02ea-merged.mount: Deactivated successfully.
Jan 27 09:29:11 np0005597378 nova_compute[238941]: 2026-01-27 14:29:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:29:11 np0005597378 podman[377419]: 2026-01-27 14:29:11.401736735 +0000 UTC m=+0.562789639 container remove 8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:29:11 np0005597378 nova_compute[238941]: 2026-01-27 14:29:11.408 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:11 np0005597378 nova_compute[238941]: 2026-01-27 14:29:11.409 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:11 np0005597378 nova_compute[238941]: 2026-01-27 14:29:11.409 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:11 np0005597378 nova_compute[238941]: 2026-01-27 14:29:11.410 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:29:11 np0005597378 nova_compute[238941]: 2026-01-27 14:29:11.410 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:29:11 np0005597378 systemd[1]: libpod-conmon-8991e3cd47adcfa4e4711109640400be740c5ecefc780a23de4988d1ca3b35bb.scope: Deactivated successfully.
Jan 27 09:29:11 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:11.461 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:29:11 np0005597378 podman[377539]: 2026-01-27 14:29:11.889378115 +0000 UTC m=+0.044483450 container create 32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 09:29:11 np0005597378 systemd[1]: Started libpod-conmon-32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274.scope.
Jan 27 09:29:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:29:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490987053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:29:11 np0005597378 podman[377539]: 2026-01-27 14:29:11.867375403 +0000 UTC m=+0.022480768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:29:11 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:29:11 np0005597378 nova_compute[238941]: 2026-01-27 14:29:11.978 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:29:11 np0005597378 podman[377539]: 2026-01-27 14:29:11.995021135 +0000 UTC m=+0.150126490 container init 32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:29:12 np0005597378 podman[377539]: 2026-01-27 14:29:12.005374884 +0000 UTC m=+0.160480209 container start 32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:29:12 np0005597378 podman[377539]: 2026-01-27 14:29:12.009225348 +0000 UTC m=+0.164330763 container attach 32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 09:29:12 np0005597378 elated_engelbart[377556]: 167 167
Jan 27 09:29:12 np0005597378 systemd[1]: libpod-32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274.scope: Deactivated successfully.
Jan 27 09:29:12 np0005597378 podman[377539]: 2026-01-27 14:29:12.014563702 +0000 UTC m=+0.169669027 container died 32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:29:12 np0005597378 systemd[1]: var-lib-containers-storage-overlay-228e52457e1009adf06bc66ed15e6635dc31f9f6b90cbdad38be2864017818c0-merged.mount: Deactivated successfully.
Jan 27 09:29:12 np0005597378 podman[377539]: 2026-01-27 14:29:12.079049251 +0000 UTC m=+0.234154576 container remove 32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 09:29:12 np0005597378 systemd[1]: libpod-conmon-32f9e2943bd4734de050dcde8645eae36ed3259aed668d441202d38b5c1ba274.scope: Deactivated successfully.
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.169 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524137.1694882, bad9acc4-1999-4764-adea-156a129e9d4a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.170 238945 INFO nova.compute.manager [-] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.197 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.199 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3448MB free_disk=59.98734220955521GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.200 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.200 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.204 238945 DEBUG nova.compute.manager [None req-e5787648-f805-47ae-b3d6-a483d66963cf - - - - - -] [instance: bad9acc4-1999-4764-adea-156a129e9d4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.259 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.259 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.274 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:29:12 np0005597378 podman[377583]: 2026-01-27 14:29:12.295419736 +0000 UTC m=+0.082552627 container create 5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:29:12 np0005597378 podman[377583]: 2026-01-27 14:29:12.241215895 +0000 UTC m=+0.028348786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:29:12 np0005597378 systemd[1]: Started libpod-conmon-5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0.scope.
Jan 27 09:29:12 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:29:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d954dc2d03aa019194f1625bff6d9d8505743cace5038a9d2a927edc1042b223/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d954dc2d03aa019194f1625bff6d9d8505743cace5038a9d2a927edc1042b223/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d954dc2d03aa019194f1625bff6d9d8505743cace5038a9d2a927edc1042b223/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d954dc2d03aa019194f1625bff6d9d8505743cace5038a9d2a927edc1042b223/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:12 np0005597378 podman[377583]: 2026-01-27 14:29:12.404270632 +0000 UTC m=+0.191403533 container init 5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 09:29:12 np0005597378 podman[377583]: 2026-01-27 14:29:12.412241827 +0000 UTC m=+0.199374708 container start 5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 09:29:12 np0005597378 podman[377583]: 2026-01-27 14:29:12.418854625 +0000 UTC m=+0.205987516 container attach 5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mestorf, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 27 09:29:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:29:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272893029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.857 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.864 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.882 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.921 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:29:12 np0005597378 nova_compute[238941]: 2026-01-27 14:29:12.921 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:29:13 np0005597378 lvm[377700]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:29:13 np0005597378 lvm[377700]: VG ceph_vg0 finished
Jan 27 09:29:13 np0005597378 lvm[377701]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:29:13 np0005597378 lvm[377701]: VG ceph_vg1 finished
Jan 27 09:29:13 np0005597378 lvm[377703]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:29:13 np0005597378 lvm[377703]: VG ceph_vg2 finished
Jan 27 09:29:13 np0005597378 sad_mestorf[377601]: {}
Jan 27 09:29:13 np0005597378 systemd[1]: libpod-5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0.scope: Deactivated successfully.
Jan 27 09:29:13 np0005597378 systemd[1]: libpod-5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0.scope: Consumed 1.327s CPU time.
Jan 27 09:29:13 np0005597378 podman[377583]: 2026-01-27 14:29:13.23702082 +0000 UTC m=+1.024153711 container died 5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:29:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d954dc2d03aa019194f1625bff6d9d8505743cace5038a9d2a927edc1042b223-merged.mount: Deactivated successfully.
Jan 27 09:29:13 np0005597378 podman[377583]: 2026-01-27 14:29:13.281137699 +0000 UTC m=+1.068270580 container remove 5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mestorf, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:29:13 np0005597378 systemd[1]: libpod-conmon-5f6fb2159adfd1850c2ce7c40c4fc9fa260a28df3b4f9a26ee2bb667354ff7f0.scope: Deactivated successfully.
Jan 27 09:29:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:29:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 41 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.1 KiB/s wr, 44 op/s
Jan 27 09:29:13 np0005597378 nova_compute[238941]: 2026-01-27 14:29:13.353 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:29:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:29:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:29:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:29:14 np0005597378 nova_compute[238941]: 2026-01-27 14:29:14.199 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:29:14 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:29:14 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:29:14 np0005597378 nova_compute[238941]: 2026-01-27 14:29:14.497 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:29:14 np0005597378 nova_compute[238941]: 2026-01-27 14:29:14.922 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:29:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 41 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.822289) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524155822392, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1870, "num_deletes": 254, "total_data_size": 2952718, "memory_usage": 2988472, "flush_reason": "Manual Compaction"}
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524155837912, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1796290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52175, "largest_seqno": 54044, "table_properties": {"data_size": 1789699, "index_size": 3473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16999, "raw_average_key_size": 21, "raw_value_size": 1775263, "raw_average_value_size": 2210, "num_data_blocks": 157, "num_entries": 803, "num_filter_entries": 803, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769523981, "oldest_key_time": 1769523981, "file_creation_time": 1769524155, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 15680 microseconds, and 6365 cpu microseconds.
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.837971) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1796290 bytes OK
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.837996) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.840079) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.840093) EVENT_LOG_v1 {"time_micros": 1769524155840089, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.840113) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2944678, prev total WAL file size 2944678, number of live WAL files 2.
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.841220) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303036' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1754KB)], [122(9864KB)]
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524155841256, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11898023, "oldest_snapshot_seqno": -1}
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7596 keys, 9630334 bytes, temperature: kUnknown
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524155902026, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9630334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9581548, "index_size": 28706, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19013, "raw_key_size": 197104, "raw_average_key_size": 25, "raw_value_size": 9448162, "raw_average_value_size": 1243, "num_data_blocks": 1125, "num_entries": 7596, "num_filter_entries": 7596, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524155, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.902506) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9630334 bytes
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.903937) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.3 rd, 158.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(12.0) write-amplify(5.4) OK, records in: 8037, records dropped: 441 output_compression: NoCompression
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.903972) EVENT_LOG_v1 {"time_micros": 1769524155903955, "job": 74, "event": "compaction_finished", "compaction_time_micros": 60929, "compaction_time_cpu_micros": 28850, "output_level": 6, "num_output_files": 1, "total_output_size": 9630334, "num_input_records": 8037, "num_output_records": 7596, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524155904721, "job": 74, "event": "table_file_deletion", "file_number": 124}
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524155906663, "job": 74, "event": "table_file_deletion", "file_number": 122}
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.841081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.906700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.906706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.906708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.906710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:29:15 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:15.906712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:29:16 np0005597378 nova_compute[238941]: 2026-01-27 14:29:16.060 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:29:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:29:17
Jan 27 09:29:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:29:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:29:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'volumes', 'backups', 'images', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root']
Jan 27 09:29:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:29:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 41 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Jan 27 09:29:17 np0005597378 podman[377744]: 2026-01-27 14:29:17.734595564 +0000 UTC m=+0.070498792 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 09:29:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:29:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:29:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:29:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:29:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:29:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:29:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:29:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:29:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:29:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:29:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:29:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:29:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:29:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:29:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:29:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:29:18 np0005597378 nova_compute[238941]: 2026-01-27 14:29:18.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:29:18 np0005597378 nova_compute[238941]: 2026-01-27 14:29:18.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:29:18 np0005597378 nova_compute[238941]: 2026-01-27 14:29:18.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:29:18 np0005597378 nova_compute[238941]: 2026-01-27 14:29:18.958 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:29:18 np0005597378 nova_compute[238941]: 2026-01-27 14:29:18.959 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:29:18 np0005597378 nova_compute[238941]: 2026-01-27 14:29:18.976 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.047 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.048 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.056 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.057 238945 INFO nova.compute.claims [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Claim successful on node compute-0.ctlplane.example.com
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.163 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:29:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 41 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Jan 27 09:29:19 np0005597378 podman[377785]: 2026-01-27 14:29:19.772899114 +0000 UTC m=+0.101731745 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 09:29:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:29:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2222739364' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.822 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.829 238945 DEBUG nova.compute.provider_tree [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.847 238945 DEBUG nova.scheduler.client.report [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.884 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.885 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.930 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.931 238945 DEBUG nova.network.neutron [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.955 238945 INFO nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:29:19 np0005597378 nova_compute[238941]: 2026-01-27 14:29:19.971 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.066 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.068 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.068 238945 INFO nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Creating image(s)#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.096 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.123 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.148 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.152 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.233 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.235 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.236 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.236 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.269 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.275 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.364 238945 DEBUG nova.policy [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '46ab77ba8e764d19b7827d3cc5bd53ab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '24cbefea6247422aafb138daa54f3eea', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.601 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.663 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] resizing rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.744 238945 DEBUG nova.objects.instance [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'migration_context' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.760 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.761 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Ensure instance console log exists: /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.761 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.762 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:20 np0005597378 nova_compute[238941]: 2026-01-27 14:29:20.762 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:29:21 np0005597378 nova_compute[238941]: 2026-01-27 14:29:21.027 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524146.0263906, e7d05a6a-847c-4124-bbb7-f122cb954501 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:29:21 np0005597378 nova_compute[238941]: 2026-01-27 14:29:21.028 238945 INFO nova.compute.manager [-] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:29:21 np0005597378 nova_compute[238941]: 2026-01-27 14:29:21.044 238945 DEBUG nova.compute.manager [None req-9c288388-ebc8-4d64-8518-243d55ee994e - - - - - -] [instance: e7d05a6a-847c-4124-bbb7-f122cb954501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:29:21 np0005597378 nova_compute[238941]: 2026-01-27 14:29:21.062 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:21 np0005597378 nova_compute[238941]: 2026-01-27 14:29:21.264 238945 DEBUG nova.network.neutron [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Successfully created port: cc9d6b78-ae76-435f-a504-d4720a04f2b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 27 09:29:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 41 MiB data, 994 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:29:21 np0005597378 nova_compute[238941]: 2026-01-27 14:29:21.910 238945 DEBUG nova.network.neutron [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Successfully updated port: cc9d6b78-ae76-435f-a504-d4720a04f2b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 27 09:29:21 np0005597378 nova_compute[238941]: 2026-01-27 14:29:21.935 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:29:21 np0005597378 nova_compute[238941]: 2026-01-27 14:29:21.936 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:29:21 np0005597378 nova_compute[238941]: 2026-01-27 14:29:21.936 238945 DEBUG nova.network.neutron [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.026 238945 DEBUG nova.compute.manager [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.027 238945 DEBUG nova.compute.manager [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing instance network info cache due to event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.027 238945 DEBUG oslo_concurrency.lockutils [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.120 238945 DEBUG nova.network.neutron [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.449 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.762 238945 DEBUG nova.network.neutron [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.785 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.786 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance network_info: |[{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.786 238945 DEBUG oslo_concurrency.lockutils [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.787 238945 DEBUG nova.network.neutron [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.789 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Start _get_guest_xml network_info=[{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.795 238945 WARNING nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.801 238945 DEBUG nova.virt.libvirt.host [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.802 238945 DEBUG nova.virt.libvirt.host [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.805 238945 DEBUG nova.virt.libvirt.host [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.805 238945 DEBUG nova.virt.libvirt.host [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.806 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.806 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.807 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.807 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.807 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.807 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.808 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.808 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.808 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.808 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.809 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.809 238945 DEBUG nova.virt.hardware [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:29:22 np0005597378 nova_compute[238941]: 2026-01-27 14:29:22.812 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:29:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:29:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1282702768' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:29:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 55 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 535 KiB/s wr, 12 op/s
Jan 27 09:29:23 np0005597378 nova_compute[238941]: 2026-01-27 14:29:23.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:23 np0005597378 nova_compute[238941]: 2026-01-27 14:29:23.371 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:29:23 np0005597378 nova_compute[238941]: 2026-01-27 14:29:23.402 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:29:23 np0005597378 nova_compute[238941]: 2026-01-27 14:29:23.411 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:29:23 np0005597378 nova_compute[238941]: 2026-01-27 14:29:23.980 238945 DEBUG nova.network.neutron [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updated VIF entry in instance network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:29:23 np0005597378 nova_compute[238941]: 2026-01-27 14:29:23.981 238945 DEBUG nova.network.neutron [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.007 238945 DEBUG oslo_concurrency.lockutils [req-d8cfb9a5-aed9-489f-abaa-6a7feeef344a req-202aab8a-4a7f-4be2-9efe-d413fe096775 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:29:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:29:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2395935134' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.054 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.056 238945 DEBUG nova.virt.libvirt.vif [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-347034573',display_name='tempest-TestShelveInstance-server-347034573',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-347034573',id=153,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMo003pXsdGxGaXJXNJkmVGj6iZij/8YUcfuE/aix01MC+tyBXUNrywSWGyE6IgqN1L+kooGDPknA7/r0afvROAgp26qEMx4bIlura66h+lQt2j4DLXrtHi61pF1fMJeFw==',key_name='tempest-TestShelveInstance-972839056',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='24cbefea6247422aafb138daa54f3eea',ramdisk_id='',reservation_id='r-21jh6eyt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-532292556',owner_user_name='tempest-TestShelveInstance-532292556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:29:19Z,user_data=None,user_id='46ab77ba8e764d19b7827d3cc5bd53ab',uuid=6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.057 238945 DEBUG nova.network.os_vif_util [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converting VIF {"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.058 238945 DEBUG nova.network.os_vif_util [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.059 238945 DEBUG nova.objects.instance [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'pci_devices' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.082 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  <uuid>6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c</uuid>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  <name>instance-00000099</name>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestShelveInstance-server-347034573</nova:name>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:29:22</nova:creationTime>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:        <nova:user uuid="46ab77ba8e764d19b7827d3cc5bd53ab">tempest-TestShelveInstance-532292556-project-member</nova:user>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:        <nova:project uuid="24cbefea6247422aafb138daa54f3eea">tempest-TestShelveInstance-532292556</nova:project>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:        <nova:port uuid="cc9d6b78-ae76-435f-a504-d4720a04f2b4">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <entry name="serial">6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c</entry>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <entry name="uuid">6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c</entry>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:75:3d:65"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <target dev="tapcc9d6b78-ae"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/console.log" append="off"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:29:24 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:29:24 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:29:24 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:29:24 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.086 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Preparing to wait for external event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.087 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.088 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.088 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.090 238945 DEBUG nova.virt.libvirt.vif [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T14:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-347034573',display_name='tempest-TestShelveInstance-server-347034573',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-347034573',id=153,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMo003pXsdGxGaXJXNJkmVGj6iZij/8YUcfuE/aix01MC+tyBXUNrywSWGyE6IgqN1L+kooGDPknA7/r0afvROAgp26qEMx4bIlura66h+lQt2j4DLXrtHi61pF1fMJeFw==',key_name='tempest-TestShelveInstance-972839056',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='24cbefea6247422aafb138daa54f3eea',ramdisk_id='',reservation_id='r-21jh6eyt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-532292556',owner_user_name='tempest-TestShelveInstance-532292556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:29:19Z,user_data=None,user_id='46ab77ba8e764d19b7827d3cc5bd53ab',uuid=6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.091 238945 DEBUG nova.network.os_vif_util [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converting VIF {"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.092 238945 DEBUG nova.network.os_vif_util [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.093 238945 DEBUG os_vif [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.094 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.095 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.096 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.102 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc9d6b78-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.102 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc9d6b78-ae, col_values=(('external_ids', {'iface-id': 'cc9d6b78-ae76-435f-a504-d4720a04f2b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:3d:65', 'vm-uuid': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.104 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:24 np0005597378 NetworkManager[48904]: <info>  [1769524164.1052] manager: (tapcc9d6b78-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/658)
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.106 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.110 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.111 238945 INFO os_vif [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae')#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.162 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.162 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.163 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No VIF found with MAC fa:16:3e:75:3d:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.163 238945 INFO nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Using config drive#033[00m
Jan 27 09:29:24 np0005597378 nova_compute[238941]: 2026-01-27 14:29:24.188 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.272 238945 INFO nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Creating config drive at /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config#033[00m
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.279 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj8naf5y8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:29:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.428 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj8naf5y8" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.454 238945 DEBUG nova.storage.rbd_utils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.458 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.611 238945 DEBUG oslo_concurrency.processutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.613 238945 INFO nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deleting local config drive /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config because it was imported into RBD.#033[00m
Jan 27 09:29:25 np0005597378 kernel: tapcc9d6b78-ae: entered promiscuous mode
Jan 27 09:29:25 np0005597378 NetworkManager[48904]: <info>  [1769524165.6742] manager: (tapcc9d6b78-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/659)
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:25Z|01619|binding|INFO|Claiming lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 for this chassis.
Jan 27 09:29:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:25Z|01620|binding|INFO|cc9d6b78-ae76-435f-a504-d4720a04f2b4: Claiming fa:16:3e:75:3d:65 10.100.0.14
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.678 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.689 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:3d:65 10.100.0.14'], port_security=['fa:16:3e:75:3d:65 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24cbefea6247422aafb138daa54f3eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ed98009b-1ec4-4a35-9d75-c323dbd95769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49a82fa8-eb33-4428-971b-f8e14e58ddd3, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc9d6b78-ae76-435f-a504-d4720a04f2b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.690 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc9d6b78-ae76-435f-a504-d4720a04f2b4 in datapath 37b14166-b0d0-402b-94a9-ec6d48de23a0 bound to our chassis#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.691 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 37b14166-b0d0-402b-94a9-ec6d48de23a0#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.706 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3c56999a-3672-46bf-8016-c10288c474ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.707 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap37b14166-b1 in ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.708 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap37b14166-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.708 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[95540c9f-b95a-4648-bd6e-e8e45777916a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.709 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9beb67-4a3e-455e-9569-9b5d12caa886]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:25 np0005597378 systemd-machined[207425]: New machine qemu-185-instance-00000099.
Jan 27 09:29:25 np0005597378 systemd-udevd[378115]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:29:25 np0005597378 systemd[1]: Started Virtual Machine qemu-185-instance-00000099.
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.734 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc9e7db-3abe-4f77-8d79-5930637162f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:25 np0005597378 NetworkManager[48904]: <info>  [1769524165.7400] device (tapcc9d6b78-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:29:25 np0005597378 NetworkManager[48904]: <info>  [1769524165.7408] device (tapcc9d6b78-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.757 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:25Z|01621|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 ovn-installed in OVS
Jan 27 09:29:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:25Z|01622|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 up in Southbound
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.762 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c2880931-6c83-4ce0-8925-592cf105232a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.764 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.800 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[833f1c49-aa59-4404-a815-4900a88770a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.806 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f010a440-9413-4f12-a436-ec6aef8b3562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:25 np0005597378 NetworkManager[48904]: <info>  [1769524165.8073] manager: (tap37b14166-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/660)
Jan 27 09:29:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.841 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[f5583e23-4cdf-467c-a06f-40013b9aa549]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.844 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b40b2a-c6ce-4318-a68b-5a5c058efb08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:25 np0005597378 NetworkManager[48904]: <info>  [1769524165.8706] device (tap37b14166-b0): carrier: link connected
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.882 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[9484d07c-5d03-4f12-b446-c1878de985b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.907 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b4921cb6-9349-49f3-84ff-b2ddb4af9abe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37b14166-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:6a:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680549, 'reachable_time': 24979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378147, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.926 238945 DEBUG nova.compute.manager [req-cc2bc365-3049-46d5-9f41-1bc285ae5cdc req-959688f2-7b4d-44eb-aac5-b32cf4aea6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.927 238945 DEBUG oslo_concurrency.lockutils [req-cc2bc365-3049-46d5-9f41-1bc285ae5cdc req-959688f2-7b4d-44eb-aac5-b32cf4aea6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.927 238945 DEBUG oslo_concurrency.lockutils [req-cc2bc365-3049-46d5-9f41-1bc285ae5cdc req-959688f2-7b4d-44eb-aac5-b32cf4aea6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.928 238945 DEBUG oslo_concurrency.lockutils [req-cc2bc365-3049-46d5-9f41-1bc285ae5cdc req-959688f2-7b4d-44eb-aac5-b32cf4aea6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:25 np0005597378 nova_compute[238941]: 2026-01-27 14:29:25.928 238945 DEBUG nova.compute.manager [req-cc2bc365-3049-46d5-9f41-1bc285ae5cdc req-959688f2-7b4d-44eb-aac5-b32cf4aea6a0 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Processing event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.929 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[08ce56ef-fdf4-4735-9db8-7c3fa3ce1293]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:6a0b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680549, 'tstamp': 680549}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378148, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.951 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[71198b51-1fb4-449e-8ad7-e8e9cb0b9bf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37b14166-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:6a:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680549, 'reachable_time': 24979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378149, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:25.984 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf51f59-8d01-4b17-a8c0-c37f758a1d3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.056 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[35ca503d-35ad-4641-821c-5c61f5554414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.058 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37b14166-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.058 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.058 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37b14166-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:29:26 np0005597378 kernel: tap37b14166-b0: entered promiscuous mode
Jan 27 09:29:26 np0005597378 NetworkManager[48904]: <info>  [1769524166.0626] manager: (tap37b14166-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Jan 27 09:29:26 np0005597378 nova_compute[238941]: 2026-01-27 14:29:26.064 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.064 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap37b14166-b0, col_values=(('external_ids', {'iface-id': 'e3de2cc2-b8d6-417c-834f-e33c5933da91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:29:26 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:26Z|01623|binding|INFO|Releasing lport e3de2cc2-b8d6-417c-834f-e33c5933da91 from this chassis (sb_readonly=0)
Jan 27 09:29:26 np0005597378 nova_compute[238941]: 2026-01-27 14:29:26.081 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.082 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/37b14166-b0d0-402b-94a9-ec6d48de23a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/37b14166-b0d0-402b-94a9-ec6d48de23a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.084 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[e8968d5c-796a-42f7-b85b-9ec96f9bc1de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.084 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-37b14166-b0d0-402b-94a9-ec6d48de23a0
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/37b14166-b0d0-402b-94a9-ec6d48de23a0.pid.haproxy
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 37b14166-b0d0-402b-94a9-ec6d48de23a0
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:29:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:26.085 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'env', 'PROCESS_TAG=haproxy-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/37b14166-b0d0-402b-94a9-ec6d48de23a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:29:26 np0005597378 nova_compute[238941]: 2026-01-27 14:29:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:29:26 np0005597378 podman[378181]: 2026-01-27 14:29:26.471197498 +0000 UTC m=+0.051161242 container create 317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:29:26 np0005597378 systemd[1]: Started libpod-conmon-317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918.scope.
Jan 27 09:29:26 np0005597378 podman[378181]: 2026-01-27 14:29:26.441715003 +0000 UTC m=+0.021678777 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:29:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:29:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b59187ddc5e2276c6a96b40b8111eadb866f412f9f26ed13fb909971a552535/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:29:26 np0005597378 podman[378181]: 2026-01-27 14:29:26.563938138 +0000 UTC m=+0.143901882 container init 317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:29:26 np0005597378 podman[378181]: 2026-01-27 14:29:26.572129199 +0000 UTC m=+0.152092953 container start 317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:29:26 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [NOTICE]   (378201) : New worker (378203) forked
Jan 27 09:29:26 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [NOTICE]   (378201) : Loading success.
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.133 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.136 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524167.133405, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.137 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Started (Lifecycle Event)#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.140 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.143 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance spawned successfully.#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.143 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.281 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.286 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.291 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.291 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.292 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.292 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.293 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.293 238945 DEBUG nova.virt.libvirt.driver [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.325 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.326 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524167.1345432, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.326 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.346 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.349 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524167.1395671, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.349 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.354 238945 INFO nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Took 7.29 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.354 238945 DEBUG nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.377 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.380 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.407 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.417 238945 INFO nova.compute.manager [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Took 8.40 seconds to build instance.#033[00m
Jan 27 09:29:27 np0005597378 nova_compute[238941]: 2026-01-27 14:29:27.435 238945 DEBUG oslo_concurrency.lockutils [None req-eb979cf8-1864-46cd-8e55-6af3038af6eb 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036163104033030795 of space, bias 1.0, pg target 0.10848931209909238 quantized to 32 (current 32)
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006694266780121233 of space, bias 1.0, pg target 0.200828003403637 quantized to 32 (current 32)
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0461261389334235e-06 of space, bias 4.0, pg target 0.0012553513667201083 quantized to 16 (current 16)
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:29:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:29:28 np0005597378 nova_compute[238941]: 2026-01-27 14:29:28.025 238945 DEBUG nova.compute.manager [req-34eb8cac-3e33-4982-9a37-ca78e52d0807 req-79cbb3d1-4423-45cd-a99a-56665df30a88 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:29:28 np0005597378 nova_compute[238941]: 2026-01-27 14:29:28.026 238945 DEBUG oslo_concurrency.lockutils [req-34eb8cac-3e33-4982-9a37-ca78e52d0807 req-79cbb3d1-4423-45cd-a99a-56665df30a88 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:28 np0005597378 nova_compute[238941]: 2026-01-27 14:29:28.026 238945 DEBUG oslo_concurrency.lockutils [req-34eb8cac-3e33-4982-9a37-ca78e52d0807 req-79cbb3d1-4423-45cd-a99a-56665df30a88 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:28 np0005597378 nova_compute[238941]: 2026-01-27 14:29:28.027 238945 DEBUG oslo_concurrency.lockutils [req-34eb8cac-3e33-4982-9a37-ca78e52d0807 req-79cbb3d1-4423-45cd-a99a-56665df30a88 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:28 np0005597378 nova_compute[238941]: 2026-01-27 14:29:28.027 238945 DEBUG nova.compute.manager [req-34eb8cac-3e33-4982-9a37-ca78e52d0807 req-79cbb3d1-4423-45cd-a99a-56665df30a88 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:29:28 np0005597378 nova_compute[238941]: 2026-01-27 14:29:28.027 238945 WARNING nova.compute.manager [req-34eb8cac-3e33-4982-9a37-ca78e52d0807 req-79cbb3d1-4423-45cd-a99a-56665df30a88 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:29:28 np0005597378 nova_compute[238941]: 2026-01-27 14:29:28.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:29 np0005597378 nova_compute[238941]: 2026-01-27 14:29:29.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 27 09:29:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:29:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Jan 27 09:29:31 np0005597378 nova_compute[238941]: 2026-01-27 14:29:31.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:29:33 np0005597378 nova_compute[238941]: 2026-01-27 14:29:33.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:33 np0005597378 NetworkManager[48904]: <info>  [1769524173.2173] manager: (patch-br-int-to-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Jan 27 09:29:33 np0005597378 NetworkManager[48904]: <info>  [1769524173.2183] manager: (patch-provnet-a63207b1-64f9-41f1-9e51-549fc13442d4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/663)
Jan 27 09:29:33 np0005597378 nova_compute[238941]: 2026-01-27 14:29:33.305 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:33 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:33Z|01624|binding|INFO|Releasing lport e3de2cc2-b8d6-417c-834f-e33c5933da91 from this chassis (sb_readonly=0)
Jan 27 09:29:33 np0005597378 nova_compute[238941]: 2026-01-27 14:29:33.318 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:33 np0005597378 nova_compute[238941]: 2026-01-27 14:29:33.361 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 27 09:29:33 np0005597378 nova_compute[238941]: 2026-01-27 14:29:33.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:29:33 np0005597378 nova_compute[238941]: 2026-01-27 14:29:33.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:29:33 np0005597378 nova_compute[238941]: 2026-01-27 14:29:33.812 238945 DEBUG nova.compute.manager [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:29:33 np0005597378 nova_compute[238941]: 2026-01-27 14:29:33.812 238945 DEBUG nova.compute.manager [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing instance network info cache due to event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:29:33 np0005597378 nova_compute[238941]: 2026-01-27 14:29:33.812 238945 DEBUG oslo_concurrency.lockutils [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:29:33 np0005597378 nova_compute[238941]: 2026-01-27 14:29:33.813 238945 DEBUG oslo_concurrency.lockutils [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:29:33 np0005597378 nova_compute[238941]: 2026-01-27 14:29:33.813 238945 DEBUG nova.network.neutron [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:29:34 np0005597378 nova_compute[238941]: 2026-01-27 14:29:34.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 88 op/s
Jan 27 09:29:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:29:35 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Jan 27 09:29:35 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:35.972720) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:29:35 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Jan 27 09:29:35 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524175972827, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 408, "num_deletes": 251, "total_data_size": 299742, "memory_usage": 307472, "flush_reason": "Manual Compaction"}
Jan 27 09:29:35 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524176030273, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 297057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54045, "largest_seqno": 54452, "table_properties": {"data_size": 294619, "index_size": 538, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5901, "raw_average_key_size": 18, "raw_value_size": 289873, "raw_average_value_size": 917, "num_data_blocks": 24, "num_entries": 316, "num_filter_entries": 316, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524156, "oldest_key_time": 1769524156, "file_creation_time": 1769524175, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 57651 microseconds, and 2550 cpu microseconds.
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.030391) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 297057 bytes OK
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.030416) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.273452) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.273493) EVENT_LOG_v1 {"time_micros": 1769524176273485, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:29:36 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.273517) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 297163, prev total WAL file size 297163, number of live WAL files 2.
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.273975) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(290KB)], [125(9404KB)]
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524176273999, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 9927391, "oldest_snapshot_seqno": -1}
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7403 keys, 8223880 bytes, temperature: kUnknown
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524176390226, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8223880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8177666, "index_size": 26581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 193786, "raw_average_key_size": 26, "raw_value_size": 8048957, "raw_average_value_size": 1087, "num_data_blocks": 1027, "num_entries": 7403, "num_filter_entries": 7403, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.390522) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8223880 bytes
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.446373) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.4 rd, 70.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.2 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(61.1) write-amplify(27.7) OK, records in: 7912, records dropped: 509 output_compression: NoCompression
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.446427) EVENT_LOG_v1 {"time_micros": 1769524176446407, "job": 76, "event": "compaction_finished", "compaction_time_micros": 116311, "compaction_time_cpu_micros": 22464, "output_level": 6, "num_output_files": 1, "total_output_size": 8223880, "num_input_records": 7912, "num_output_records": 7403, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524176446748, "job": 76, "event": "table_file_deletion", "file_number": 127}
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524176448634, "job": 76, "event": "table_file_deletion", "file_number": 125}
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.273914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.448677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.448683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.448685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.448687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:29:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:29:36.448688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:29:37 np0005597378 nova_compute[238941]: 2026-01-27 14:29:37.144 238945 DEBUG nova.network.neutron [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updated VIF entry in instance network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:29:37 np0005597378 nova_compute[238941]: 2026-01-27 14:29:37.145 238945 DEBUG nova.network.neutron [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:29:37 np0005597378 nova_compute[238941]: 2026-01-27 14:29:37.196 238945 DEBUG oslo_concurrency.lockutils [req-8c046080-743f-4c6e-8fd0-713a2accbe44 req-5d5b4bad-ca6f-4bc6-8d40-24810ce5ab87 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:29:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 09:29:38 np0005597378 nova_compute[238941]: 2026-01-27 14:29:38.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:39 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Jan 27 09:29:39 np0005597378 nova_compute[238941]: 2026-01-27 14:29:39.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 27 09:29:39 np0005597378 nova_compute[238941]: 2026-01-27 14:29:39.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:29:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:29:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:40Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:75:3d:65 10.100.0.14
Jan 27 09:29:40 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:40Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:3d:65 10.100.0.14
Jan 27 09:29:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 98 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.0 MiB/s wr, 53 op/s
Jan 27 09:29:43 np0005597378 nova_compute[238941]: 2026-01-27 14:29:43.365 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 106 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 498 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Jan 27 09:29:44 np0005597378 nova_compute[238941]: 2026-01-27 14:29:44.115 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 09:29:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:29:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:46.336 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:46.337 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:46.339 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:46 np0005597378 nova_compute[238941]: 2026-01-27 14:29:46.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:29:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 27 09:29:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:29:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:29:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:29:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:29:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:29:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:29:48 np0005597378 nova_compute[238941]: 2026-01-27 14:29:48.367 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:48 np0005597378 podman[378259]: 2026-01-27 14:29:48.723214999 +0000 UTC m=+0.061912382 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 27 09:29:49 np0005597378 nova_compute[238941]: 2026-01-27 14:29:49.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 09:29:50 np0005597378 nova_compute[238941]: 2026-01-27 14:29:50.706 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:50 np0005597378 nova_compute[238941]: 2026-01-27 14:29:50.706 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:50 np0005597378 nova_compute[238941]: 2026-01-27 14:29:50.707 238945 INFO nova.compute.manager [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Shelving#033[00m
Jan 27 09:29:50 np0005597378 nova_compute[238941]: 2026-01-27 14:29:50.727 238945 DEBUG nova.virt.libvirt.driver [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 27 09:29:50 np0005597378 podman[378279]: 2026-01-27 14:29:50.772504424 +0000 UTC m=+0.109572796 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 09:29:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:29:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 27 09:29:53 np0005597378 nova_compute[238941]: 2026-01-27 14:29:53.369 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 1.1 MiB/s wr, 57 op/s
Jan 27 09:29:53 np0005597378 nova_compute[238941]: 2026-01-27 14:29:53.744 238945 INFO nova.virt.libvirt.driver [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance shutdown successfully after 3 seconds.#033[00m
Jan 27 09:29:54 np0005597378 kernel: tapcc9d6b78-ae (unregistering): left promiscuous mode
Jan 27 09:29:54 np0005597378 NetworkManager[48904]: <info>  [1769524194.1004] device (tapcc9d6b78-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:29:54 np0005597378 nova_compute[238941]: 2026-01-27 14:29:54.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:54 np0005597378 nova_compute[238941]: 2026-01-27 14:29:54.112 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:54 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:54Z|01625|binding|INFO|Releasing lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 from this chassis (sb_readonly=0)
Jan 27 09:29:54 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:54Z|01626|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 down in Southbound
Jan 27 09:29:54 np0005597378 ovn_controller[144812]: 2026-01-27T14:29:54Z|01627|binding|INFO|Removing iface tapcc9d6b78-ae ovn-installed in OVS
Jan 27 09:29:54 np0005597378 nova_compute[238941]: 2026-01-27 14:29:54.120 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:54.120 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:3d:65 10.100.0.14'], port_security=['fa:16:3e:75:3d:65 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24cbefea6247422aafb138daa54f3eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed98009b-1ec4-4a35-9d75-c323dbd95769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49a82fa8-eb33-4428-971b-f8e14e58ddd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc9d6b78-ae76-435f-a504-d4720a04f2b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:29:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:54.122 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc9d6b78-ae76-435f-a504-d4720a04f2b4 in datapath 37b14166-b0d0-402b-94a9-ec6d48de23a0 unbound from our chassis#033[00m
Jan 27 09:29:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:54.123 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37b14166-b0d0-402b-94a9-ec6d48de23a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:29:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:54.124 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[801ca5b1-0172-43a3-beb9-ccafefc18596]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:54 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:54.124 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 namespace which is not needed anymore#033[00m
Jan 27 09:29:54 np0005597378 nova_compute[238941]: 2026-01-27 14:29:54.135 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:54 np0005597378 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000099.scope: Deactivated successfully.
Jan 27 09:29:54 np0005597378 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000099.scope: Consumed 15.145s CPU time.
Jan 27 09:29:54 np0005597378 systemd-machined[207425]: Machine qemu-185-instance-00000099 terminated.
Jan 27 09:29:54 np0005597378 nova_compute[238941]: 2026-01-27 14:29:54.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:54 np0005597378 nova_compute[238941]: 2026-01-27 14:29:54.423 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:54 np0005597378 nova_compute[238941]: 2026-01-27 14:29:54.427 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance destroyed successfully.#033[00m
Jan 27 09:29:54 np0005597378 nova_compute[238941]: 2026-01-27 14:29:54.427 238945 DEBUG nova.objects.instance [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'numa_topology' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:29:54 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [NOTICE]   (378201) : haproxy version is 2.8.14-c23fe91
Jan 27 09:29:54 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [NOTICE]   (378201) : path to executable is /usr/sbin/haproxy
Jan 27 09:29:54 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [WARNING]  (378201) : Exiting Master process...
Jan 27 09:29:54 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [ALERT]    (378201) : Current worker (378203) exited with code 143 (Terminated)
Jan 27 09:29:54 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[378196]: [WARNING]  (378201) : All workers exited. Exiting... (0)
Jan 27 09:29:54 np0005597378 systemd[1]: libpod-317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918.scope: Deactivated successfully.
Jan 27 09:29:54 np0005597378 podman[378330]: 2026-01-27 14:29:54.481931864 +0000 UTC m=+0.271138164 container died 317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:29:54 np0005597378 nova_compute[238941]: 2026-01-27 14:29:54.683 238945 INFO nova.virt.libvirt.driver [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Beginning cold snapshot process#033[00m
Jan 27 09:29:54 np0005597378 nova_compute[238941]: 2026-01-27 14:29:54.834 238945 DEBUG nova.virt.libvirt.imagebackend [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No parent info for deec719f-9679-4d33-adfe-db01148e4a56; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 27 09:29:55 np0005597378 nova_compute[238941]: 2026-01-27 14:29:55.004 238945 DEBUG nova.storage.rbd_utils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] creating snapshot(970d45d7d9014df3a5d855a028cfdd7d) on rbd image(6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 09:29:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918-userdata-shm.mount: Deactivated successfully.
Jan 27 09:29:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9b59187ddc5e2276c6a96b40b8111eadb866f412f9f26ed13fb909971a552535-merged.mount: Deactivated successfully.
Jan 27 09:29:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 664 KiB/s wr, 29 op/s
Jan 27 09:29:55 np0005597378 nova_compute[238941]: 2026-01-27 14:29:55.657 238945 DEBUG nova.compute.manager [req-5fda9546-1985-489b-b395-2397f095a2aa req-fa33fa06-b09c-453d-91ea-e2ef938be708 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-unplugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:29:55 np0005597378 nova_compute[238941]: 2026-01-27 14:29:55.658 238945 DEBUG oslo_concurrency.lockutils [req-5fda9546-1985-489b-b395-2397f095a2aa req-fa33fa06-b09c-453d-91ea-e2ef938be708 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:55 np0005597378 nova_compute[238941]: 2026-01-27 14:29:55.659 238945 DEBUG oslo_concurrency.lockutils [req-5fda9546-1985-489b-b395-2397f095a2aa req-fa33fa06-b09c-453d-91ea-e2ef938be708 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:55 np0005597378 nova_compute[238941]: 2026-01-27 14:29:55.660 238945 DEBUG oslo_concurrency.lockutils [req-5fda9546-1985-489b-b395-2397f095a2aa req-fa33fa06-b09c-453d-91ea-e2ef938be708 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:55 np0005597378 nova_compute[238941]: 2026-01-27 14:29:55.660 238945 DEBUG nova.compute.manager [req-5fda9546-1985-489b-b395-2397f095a2aa req-fa33fa06-b09c-453d-91ea-e2ef938be708 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-unplugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:29:55 np0005597378 nova_compute[238941]: 2026-01-27 14:29:55.660 238945 WARNING nova.compute.manager [req-5fda9546-1985-489b-b395-2397f095a2aa req-fa33fa06-b09c-453d-91ea-e2ef938be708 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-unplugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 27 09:29:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Jan 27 09:29:55 np0005597378 podman[378330]: 2026-01-27 14:29:55.98426777 +0000 UTC m=+1.773474060 container cleanup 317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Jan 27 09:29:55 np0005597378 systemd[1]: libpod-conmon-317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918.scope: Deactivated successfully.
Jan 27 09:29:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Jan 27 09:29:56 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Jan 27 09:29:56 np0005597378 podman[378421]: 2026-01-27 14:29:56.514632603 +0000 UTC m=+0.510059107 container remove 317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:29:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.522 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[455ea70e-ae0d-4ad3-9104-66af3149c89c]: (4, ('Tue Jan 27 02:29:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 (317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918)\n317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918\nTue Jan 27 02:29:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 (317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918)\n317b4a1f6d1c9c322baf05f91828a50c80265e892bb31f5f8285c295328ff918\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.525 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f6abaf39-f30c-4278-a2ae-d3f48e723e15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.526 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37b14166-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:29:56 np0005597378 nova_compute[238941]: 2026-01-27 14:29:56.528 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:56 np0005597378 kernel: tap37b14166-b0: left promiscuous mode
Jan 27 09:29:56 np0005597378 nova_compute[238941]: 2026-01-27 14:29:56.550 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.554 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4081c1f4-9ebd-4ec3-a67a-ad8d5a51e4e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.569 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b42cbb22-d965-491d-af88-f5302677d89a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.570 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[458d98de-db87-4aad-9364-fcc436c7d498]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.590 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4124d095-4e89-481d-b3dd-c55566fc8dfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680541, 'reachable_time': 22351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378439, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.593 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:29:56 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:29:56.593 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[fc39e58a-5d27-47e1-ae60-0a82266ef655]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:29:56 np0005597378 systemd[1]: run-netns-ovnmeta\x2d37b14166\x2db0d0\x2d402b\x2d94a9\x2dec6d48de23a0.mount: Deactivated successfully.
Jan 27 09:29:56 np0005597378 nova_compute[238941]: 2026-01-27 14:29:56.822 238945 DEBUG nova.storage.rbd_utils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] cloning vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk@970d45d7d9014df3a5d855a028cfdd7d to images/3cc27298-a2ca-4936-ba67-d36ba64e6fdc clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 09:29:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 47 KiB/s wr, 5 op/s
Jan 27 09:29:57 np0005597378 nova_compute[238941]: 2026-01-27 14:29:57.657 238945 DEBUG nova.storage.rbd_utils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] flattening images/3cc27298-a2ca-4936-ba67-d36ba64e6fdc flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 09:29:57 np0005597378 nova_compute[238941]: 2026-01-27 14:29:57.998 238945 DEBUG nova.compute.manager [req-e5c2de1c-727e-482e-958b-9f1bbecc5e3c req-e5be5343-0b32-4faa-a24b-947e9ce864ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:29:57 np0005597378 nova_compute[238941]: 2026-01-27 14:29:57.999 238945 DEBUG oslo_concurrency.lockutils [req-e5c2de1c-727e-482e-958b-9f1bbecc5e3c req-e5be5343-0b32-4faa-a24b-947e9ce864ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:29:58 np0005597378 nova_compute[238941]: 2026-01-27 14:29:57.999 238945 DEBUG oslo_concurrency.lockutils [req-e5c2de1c-727e-482e-958b-9f1bbecc5e3c req-e5be5343-0b32-4faa-a24b-947e9ce864ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:29:58 np0005597378 nova_compute[238941]: 2026-01-27 14:29:58.000 238945 DEBUG oslo_concurrency.lockutils [req-e5c2de1c-727e-482e-958b-9f1bbecc5e3c req-e5be5343-0b32-4faa-a24b-947e9ce864ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:29:58 np0005597378 nova_compute[238941]: 2026-01-27 14:29:58.000 238945 DEBUG nova.compute.manager [req-e5c2de1c-727e-482e-958b-9f1bbecc5e3c req-e5be5343-0b32-4faa-a24b-947e9ce864ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:29:58 np0005597378 nova_compute[238941]: 2026-01-27 14:29:58.001 238945 WARNING nova.compute.manager [req-e5c2de1c-727e-482e-958b-9f1bbecc5e3c req-e5be5343-0b32-4faa-a24b-947e9ce864ee 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 27 09:29:58 np0005597378 nova_compute[238941]: 2026-01-27 14:29:58.371 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:59 np0005597378 nova_compute[238941]: 2026-01-27 14:29:59.123 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:29:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 126 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 911 KiB/s rd, 373 KiB/s wr, 34 op/s
Jan 27 09:29:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:29:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3738922964' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:29:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:29:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3738922964' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:30:00 np0005597378 nova_compute[238941]: 2026-01-27 14:30:00.744 238945 DEBUG nova.storage.rbd_utils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] removing snapshot(970d45d7d9014df3a5d855a028cfdd7d) on rbd image(6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 27 09:30:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:30:01 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 27 09:30:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 136 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.1 MiB/s wr, 38 op/s
Jan 27 09:30:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Jan 27 09:30:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Jan 27 09:30:01 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Jan 27 09:30:03 np0005597378 nova_compute[238941]: 2026-01-27 14:30:03.125 238945 DEBUG nova.storage.rbd_utils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] creating snapshot(snap) on rbd image(3cc27298-a2ca-4936-ba67-d36ba64e6fdc) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 27 09:30:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 168 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.7 MiB/s wr, 93 op/s
Jan 27 09:30:03 np0005597378 nova_compute[238941]: 2026-01-27 14:30:03.424 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:04 np0005597378 nova_compute[238941]: 2026-01-27 14:30:04.126 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Jan 27 09:30:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Jan 27 09:30:05 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Jan 27 09:30:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 112 op/s
Jan 27 09:30:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:30:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.4 MiB/s wr, 75 op/s
Jan 27 09:30:07 np0005597378 nova_compute[238941]: 2026-01-27 14:30:07.640 238945 INFO nova.virt.libvirt.driver [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Snapshot image upload complete
Jan 27 09:30:07 np0005597378 nova_compute[238941]: 2026-01-27 14:30:07.641 238945 DEBUG nova.compute.manager [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:30:07 np0005597378 nova_compute[238941]: 2026-01-27 14:30:07.736 238945 INFO nova.compute.manager [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Shelve offloading
Jan 27 09:30:07 np0005597378 nova_compute[238941]: 2026-01-27 14:30:07.743 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance destroyed successfully.
Jan 27 09:30:07 np0005597378 nova_compute[238941]: 2026-01-27 14:30:07.743 238945 DEBUG nova.compute.manager [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:30:07 np0005597378 nova_compute[238941]: 2026-01-27 14:30:07.746 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:30:07 np0005597378 nova_compute[238941]: 2026-01-27 14:30:07.746 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:30:07 np0005597378 nova_compute[238941]: 2026-01-27 14:30:07.746 238945 DEBUG nova.network.neutron [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 09:30:08 np0005597378 nova_compute[238941]: 2026-01-27 14:30:08.406 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.008 238945 DEBUG nova.network.neutron [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.025 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:30:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 104 op/s
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.425 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524194.4243894, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.426 238945 INFO nova.compute.manager [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Stopped (Lifecycle Event)
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.444 238945 DEBUG nova.compute.manager [None req-8996c8ea-0091-4994-af92-7eb0b6c5ecf5 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.448 238945 DEBUG nova.compute.manager [None req-8996c8ea-0091-4994-af92-7eb0b6c5ecf5 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.464 238945 INFO nova.compute.manager [None req-8996c8ea-0091-4994-af92-7eb0b6c5ecf5 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Jan 27 09:30:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:09.605 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 09:30:09 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:09.605 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.609 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.643 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance destroyed successfully.
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.643 238945 DEBUG nova.objects.instance [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'resources' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.663 238945 DEBUG nova.virt.libvirt.vif [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T14:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-347034573',display_name='tempest-TestShelveInstance-server-347034573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-347034573',id=153,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMo003pXsdGxGaXJXNJkmVGj6iZij/8YUcfuE/aix01MC+tyBXUNrywSWGyE6IgqN1L+kooGDPknA7/r0afvROAgp26qEMx4bIlura66h+lQt2j4DLXrtHi61pF1fMJeFw==',key_name='tempest-TestShelveInstance-972839056',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:29:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='24cbefea6247422aafb138daa54f3eea',ramdisk_id='',reservation_id='r-21jh6eyt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-532292556',owner_user_name='tempest-TestShelveInstance-532292556-project-member',shelved_at='2026-01-27T14:30:07.641433',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3cc27298-a2ca-4936-ba67-d36ba64e6fdc'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:29:54Z,user_data=None,user_id='46ab77ba8e764d19b7827d3cc5bd53ab',uuid=6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.664 238945 DEBUG nova.network.os_vif_util [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converting VIF {"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.664 238945 DEBUG nova.network.os_vif_util [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.665 238945 DEBUG os_vif [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.666 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.667 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc9d6b78-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.668 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.670 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.672 238945 INFO os_vif [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae')
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.711 238945 DEBUG nova.compute.manager [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.711 238945 DEBUG nova.compute.manager [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing instance network info cache due to event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.712 238945 DEBUG oslo_concurrency.lockutils [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.712 238945 DEBUG oslo_concurrency.lockutils [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 09:30:09 np0005597378 nova_compute[238941]: 2026-01-27 14:30:09.712 238945 DEBUG nova.network.neutron [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 09:30:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:10.607 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 09:30:10 np0005597378 nova_compute[238941]: 2026-01-27 14:30:10.943 238945 DEBUG nova.network.neutron [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updated VIF entry in instance network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 09:30:10 np0005597378 nova_compute[238941]: 2026-01-27 14:30:10.944 238945 DEBUG nova.network.neutron [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": null, "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 09:30:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:30:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Jan 27 09:30:10 np0005597378 nova_compute[238941]: 2026-01-27 14:30:10.991 238945 DEBUG oslo_concurrency.lockutils [req-d2feeb17-95ef-4068-8d73-5687e234ca09 req-3b88436d-e423-4192-b4e1-b99da8fef950 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 09:30:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Jan 27 09:30:11 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Jan 27 09:30:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 27 09:30:13 np0005597378 nova_compute[238941]: 2026-01-27 14:30:13.237 238945 INFO nova.virt.libvirt.driver [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deleting instance files /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_del
Jan 27 09:30:13 np0005597378 nova_compute[238941]: 2026-01-27 14:30:13.238 238945 INFO nova.virt.libvirt.driver [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deletion of /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_del complete
Jan 27 09:30:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 178 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.3 KiB/s wr, 55 op/s
Jan 27 09:30:13 np0005597378 nova_compute[238941]: 2026-01-27 14:30:13.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:30:13 np0005597378 nova_compute[238941]: 2026-01-27 14:30:13.408 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:30:13 np0005597378 nova_compute[238941]: 2026-01-27 14:30:13.679 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:30:13 np0005597378 nova_compute[238941]: 2026-01-27 14:30:13.680 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:30:13 np0005597378 nova_compute[238941]: 2026-01-27 14:30:13.680 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:30:13 np0005597378 nova_compute[238941]: 2026-01-27 14:30:13.680 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:30:13 np0005597378 nova_compute[238941]: 2026-01-27 14:30:13.681 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:30:14 np0005597378 nova_compute[238941]: 2026-01-27 14:30:14.028 238945 INFO nova.scheduler.client.report [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Deleted allocations for instance 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771392497' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:30:14 np0005597378 nova_compute[238941]: 2026-01-27 14:30:14.267 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:30:14 np0005597378 nova_compute[238941]: 2026-01-27 14:30:14.455 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:30:14 np0005597378 nova_compute[238941]: 2026-01-27 14:30:14.456 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3520MB free_disk=59.953105873428285GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:30:14 np0005597378 nova_compute[238941]: 2026-01-27 14:30:14.456 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:30:14 np0005597378 nova_compute[238941]: 2026-01-27 14:30:14.457 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:30:14 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:30:14 np0005597378 nova_compute[238941]: 2026-01-27 14:30:14.546 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:30:14 np0005597378 podman[378717]: 2026-01-27 14:30:14.623084213 +0000 UTC m=+0.065916889 container create f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_galois, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:30:14 np0005597378 nova_compute[238941]: 2026-01-27 14:30:14.669 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:14 np0005597378 podman[378717]: 2026-01-27 14:30:14.579481896 +0000 UTC m=+0.022314602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:30:14 np0005597378 systemd[1]: Started libpod-conmon-f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175.scope.
Jan 27 09:30:14 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:30:14 np0005597378 podman[378717]: 2026-01-27 14:30:14.781794513 +0000 UTC m=+0.224627239 container init f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 09:30:14 np0005597378 podman[378717]: 2026-01-27 14:30:14.791150185 +0000 UTC m=+0.233982861 container start f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:30:14 np0005597378 mystifying_galois[378733]: 167 167
Jan 27 09:30:14 np0005597378 systemd[1]: libpod-f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175.scope: Deactivated successfully.
Jan 27 09:30:14 np0005597378 conmon[378733]: conmon f2397fc47835faf7e325 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175.scope/container/memory.events
Jan 27 09:30:14 np0005597378 nova_compute[238941]: 2026-01-27 14:30:14.827 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:30:14 np0005597378 nova_compute[238941]: 2026-01-27 14:30:14.827 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:30:14 np0005597378 podman[378717]: 2026-01-27 14:30:14.840954038 +0000 UTC m=+0.283786744 container attach f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 09:30:14 np0005597378 podman[378717]: 2026-01-27 14:30:14.841506333 +0000 UTC m=+0.284339009 container died f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_galois, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:30:14 np0005597378 nova_compute[238941]: 2026-01-27 14:30:14.847 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:30:15 np0005597378 systemd[1]: var-lib-containers-storage-overlay-763d212d4771a3206565ac8108213a2522b31f811890b5b2f6d05aeb9589cfcc-merged.mount: Deactivated successfully.
Jan 27 09:30:15 np0005597378 podman[378717]: 2026-01-27 14:30:15.203544986 +0000 UTC m=+0.646377662 container remove f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_galois, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:30:15 np0005597378 systemd[1]: libpod-conmon-f2397fc47835faf7e325f54d48825f4df141c37dd18d09d500d5fef01a652175.scope: Deactivated successfully.
Jan 27 09:30:15 np0005597378 podman[378777]: 2026-01-27 14:30:15.375398001 +0000 UTC m=+0.041606823 container create f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 09:30:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.9 KiB/s wr, 65 op/s
Jan 27 09:30:15 np0005597378 systemd[1]: Started libpod-conmon-f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3.scope.
Jan 27 09:30:15 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:30:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f689ba040aa013cf1340646d2204cdfe4f7b9089fe75decfc44db29031633ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f689ba040aa013cf1340646d2204cdfe4f7b9089fe75decfc44db29031633ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f689ba040aa013cf1340646d2204cdfe4f7b9089fe75decfc44db29031633ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:15 np0005597378 podman[378777]: 2026-01-27 14:30:15.358841065 +0000 UTC m=+0.025049907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:30:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f689ba040aa013cf1340646d2204cdfe4f7b9089fe75decfc44db29031633ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f689ba040aa013cf1340646d2204cdfe4f7b9089fe75decfc44db29031633ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:15 np0005597378 podman[378777]: 2026-01-27 14:30:15.466376915 +0000 UTC m=+0.132585777 container init f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:30:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:30:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4216225823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:30:15 np0005597378 podman[378777]: 2026-01-27 14:30:15.474138584 +0000 UTC m=+0.140347416 container start f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:30:15 np0005597378 podman[378777]: 2026-01-27 14:30:15.477941946 +0000 UTC m=+0.144150878 container attach f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 09:30:15 np0005597378 nova_compute[238941]: 2026-01-27 14:30:15.503 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:30:15 np0005597378 nova_compute[238941]: 2026-01-27 14:30:15.512 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:30:15 np0005597378 nova_compute[238941]: 2026-01-27 14:30:15.595 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:30:15 np0005597378 nova_compute[238941]: 2026-01-27 14:30:15.861 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:30:15 np0005597378 nova_compute[238941]: 2026-01-27 14:30:15.862 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:15 np0005597378 nova_compute[238941]: 2026-01-27 14:30:15.863 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:15 np0005597378 nova_compute[238941]: 2026-01-27 14:30:15.869 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:15 np0005597378 keen_chaplygin[378794]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:30:15 np0005597378 keen_chaplygin[378794]: --> All data devices are unavailable
Jan 27 09:30:15 np0005597378 systemd[1]: libpod-f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3.scope: Deactivated successfully.
Jan 27 09:30:16 np0005597378 podman[378777]: 2026-01-27 14:30:16.000290523 +0000 UTC m=+0.666499365 container died f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:30:16 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5f689ba040aa013cf1340646d2204cdfe4f7b9089fe75decfc44db29031633ce-merged.mount: Deactivated successfully.
Jan 27 09:30:16 np0005597378 podman[378777]: 2026-01-27 14:30:16.047303052 +0000 UTC m=+0.713511884 container remove f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:30:16 np0005597378 systemd[1]: libpod-conmon-f520acf9103873ebf9b3f8cd6856521a07ce27d613d25fb3e49994ae7d9255d3.scope: Deactivated successfully.
Jan 27 09:30:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:30:16 np0005597378 nova_compute[238941]: 2026-01-27 14:30:16.263 238945 DEBUG oslo_concurrency.lockutils [None req-64472076-05dd-4615-b19b-74f5d5e2ef52 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 25.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:16 np0005597378 podman[378889]: 2026-01-27 14:30:16.499017753 +0000 UTC m=+0.038971311 container create 728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lehmann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:30:16 np0005597378 systemd[1]: Started libpod-conmon-728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae.scope.
Jan 27 09:30:16 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:30:16 np0005597378 podman[378889]: 2026-01-27 14:30:16.58009886 +0000 UTC m=+0.120052448 container init 728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:30:16 np0005597378 podman[378889]: 2026-01-27 14:30:16.483912056 +0000 UTC m=+0.023865634 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:30:16 np0005597378 podman[378889]: 2026-01-27 14:30:16.586160024 +0000 UTC m=+0.126113582 container start 728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lehmann, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 09:30:16 np0005597378 podman[378889]: 2026-01-27 14:30:16.589862053 +0000 UTC m=+0.129815741 container attach 728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 09:30:16 np0005597378 jovial_lehmann[378906]: 167 167
Jan 27 09:30:16 np0005597378 systemd[1]: libpod-728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae.scope: Deactivated successfully.
Jan 27 09:30:16 np0005597378 podman[378889]: 2026-01-27 14:30:16.59086063 +0000 UTC m=+0.130814198 container died 728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lehmann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:30:16 np0005597378 systemd[1]: var-lib-containers-storage-overlay-12e4e1c326729eb7a244bd40966500be5bea909ff66f948a06bcd47e5444fb0c-merged.mount: Deactivated successfully.
Jan 27 09:30:16 np0005597378 podman[378889]: 2026-01-27 14:30:16.623882231 +0000 UTC m=+0.163835789 container remove 728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:30:16 np0005597378 systemd[1]: libpod-conmon-728ec4ccf10ac75a22816ad07381b671335575beab66752b5ec22fec21b636ae.scope: Deactivated successfully.
Jan 27 09:30:16 np0005597378 podman[378929]: 2026-01-27 14:30:16.811722867 +0000 UTC m=+0.045310334 container create b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hermann, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 09:30:16 np0005597378 systemd[1]: Started libpod-conmon-b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d.scope.
Jan 27 09:30:16 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:30:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1340616f1e8f2a09014611e4a6d83485499919bc4a8104415625abad71b402d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1340616f1e8f2a09014611e4a6d83485499919bc4a8104415625abad71b402d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1340616f1e8f2a09014611e4a6d83485499919bc4a8104415625abad71b402d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1340616f1e8f2a09014611e4a6d83485499919bc4a8104415625abad71b402d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:16 np0005597378 podman[378929]: 2026-01-27 14:30:16.791977945 +0000 UTC m=+0.025565442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:30:16 np0005597378 podman[378929]: 2026-01-27 14:30:16.897305055 +0000 UTC m=+0.130892522 container init b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hermann, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 09:30:16 np0005597378 podman[378929]: 2026-01-27 14:30:16.905586038 +0000 UTC m=+0.139173515 container start b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 09:30:16 np0005597378 podman[378929]: 2026-01-27 14:30:16.910053268 +0000 UTC m=+0.143640755 container attach b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hermann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:30:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:30:17
Jan 27 09:30:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:30:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:30:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.log', '.mgr', 'vms', 'default.rgw.control']
Jan 27 09:30:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]: {
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:    "0": [
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:        {
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "devices": [
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "/dev/loop3"
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            ],
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_name": "ceph_lv0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_size": "21470642176",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "name": "ceph_lv0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "tags": {
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.cluster_name": "ceph",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.crush_device_class": "",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.encrypted": "0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.objectstore": "bluestore",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.osd_id": "0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.type": "block",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.vdo": "0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.with_tpm": "0"
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            },
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "type": "block",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "vg_name": "ceph_vg0"
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:        }
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:    ],
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:    "1": [
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:        {
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "devices": [
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "/dev/loop4"
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            ],
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_name": "ceph_lv1",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_size": "21470642176",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "name": "ceph_lv1",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "tags": {
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.cluster_name": "ceph",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.crush_device_class": "",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.encrypted": "0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.objectstore": "bluestore",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.osd_id": "1",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.type": "block",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.vdo": "0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.with_tpm": "0"
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            },
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "type": "block",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "vg_name": "ceph_vg1"
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:        }
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:    ],
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:    "2": [
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:        {
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "devices": [
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "/dev/loop5"
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            ],
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_name": "ceph_lv2",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_size": "21470642176",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "name": "ceph_lv2",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "tags": {
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.cluster_name": "ceph",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.crush_device_class": "",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.encrypted": "0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.objectstore": "bluestore",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.osd_id": "2",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.type": "block",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.vdo": "0",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:                "ceph.with_tpm": "0"
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            },
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "type": "block",
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:            "vg_name": "ceph_vg2"
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:        }
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]:    ]
Jan 27 09:30:17 np0005597378 awesome_hermann[378945]: }
Jan 27 09:30:17 np0005597378 systemd[1]: libpod-b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d.scope: Deactivated successfully.
Jan 27 09:30:17 np0005597378 podman[378929]: 2026-01-27 14:30:17.218462105 +0000 UTC m=+0.452049592 container died b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hermann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:30:17 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1340616f1e8f2a09014611e4a6d83485499919bc4a8104415625abad71b402d3-merged.mount: Deactivated successfully.
Jan 27 09:30:17 np0005597378 podman[378929]: 2026-01-27 14:30:17.259868513 +0000 UTC m=+0.493455970 container remove b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hermann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:30:17 np0005597378 systemd[1]: libpod-conmon-b4a8fa5ae19071c96609d1bae53494879c2b3029e6bf286af9070b8df7ee793d.scope: Deactivated successfully.
Jan 27 09:30:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.9 KiB/s wr, 65 op/s
Jan 27 09:30:17 np0005597378 podman[379027]: 2026-01-27 14:30:17.756251999 +0000 UTC m=+0.045223151 container create d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 09:30:17 np0005597378 systemd[1]: Started libpod-conmon-d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8.scope.
Jan 27 09:30:17 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:30:17 np0005597378 podman[379027]: 2026-01-27 14:30:17.737252207 +0000 UTC m=+0.026223379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:30:17 np0005597378 podman[379027]: 2026-01-27 14:30:17.836827192 +0000 UTC m=+0.125798364 container init d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:30:17 np0005597378 podman[379027]: 2026-01-27 14:30:17.842967957 +0000 UTC m=+0.131939109 container start d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:30:17 np0005597378 determined_yonath[379043]: 167 167
Jan 27 09:30:17 np0005597378 podman[379027]: 2026-01-27 14:30:17.847677755 +0000 UTC m=+0.136648907 container attach d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 09:30:17 np0005597378 systemd[1]: libpod-d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8.scope: Deactivated successfully.
Jan 27 09:30:17 np0005597378 podman[379027]: 2026-01-27 14:30:17.848445025 +0000 UTC m=+0.137416177 container died d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:30:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:30:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:30:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:30:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:30:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:30:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:30:17 np0005597378 nova_compute[238941]: 2026-01-27 14:30:17.864 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:30:17 np0005597378 systemd[1]: var-lib-containers-storage-overlay-249f4a3feb58029fa96b4852a3d1eacea81299182ed6d9eaa0d0cb47c9bb9906-merged.mount: Deactivated successfully.
Jan 27 09:30:17 np0005597378 podman[379027]: 2026-01-27 14:30:17.895276479 +0000 UTC m=+0.184247631 container remove d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 09:30:17 np0005597378 systemd[1]: libpod-conmon-d2847779069f3a35c6f322a686201bf4e8668d8d38dc0af90d5e17dfa71031d8.scope: Deactivated successfully.
Jan 27 09:30:18 np0005597378 podman[379067]: 2026-01-27 14:30:18.072180099 +0000 UTC m=+0.044929312 container create e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:30:18 np0005597378 systemd[1]: Started libpod-conmon-e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92.scope.
Jan 27 09:30:18 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:30:18 np0005597378 podman[379067]: 2026-01-27 14:30:18.052821217 +0000 UTC m=+0.025570460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:30:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c189a3bcbba69fcd48aa0ea93de294bc9bf2946ad1b447ab1598c77c85b7f03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c189a3bcbba69fcd48aa0ea93de294bc9bf2946ad1b447ab1598c77c85b7f03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c189a3bcbba69fcd48aa0ea93de294bc9bf2946ad1b447ab1598c77c85b7f03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:18 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c189a3bcbba69fcd48aa0ea93de294bc9bf2946ad1b447ab1598c77c85b7f03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:18 np0005597378 podman[379067]: 2026-01-27 14:30:18.172372372 +0000 UTC m=+0.145121605 container init e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 09:30:18 np0005597378 podman[379067]: 2026-01-27 14:30:18.179509524 +0000 UTC m=+0.152258737 container start e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 09:30:18 np0005597378 podman[379067]: 2026-01-27 14:30:18.18528345 +0000 UTC m=+0.158032663 container attach e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 09:30:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:30:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:30:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:30:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:30:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:30:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:30:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:30:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:30:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:30:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:30:18 np0005597378 nova_compute[238941]: 2026-01-27 14:30:18.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:30:18 np0005597378 nova_compute[238941]: 2026-01-27 14:30:18.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:30:18 np0005597378 nova_compute[238941]: 2026-01-27 14:30:18.410 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:18 np0005597378 lvm[379171]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:30:18 np0005597378 lvm[379168]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:30:18 np0005597378 lvm[379171]: VG ceph_vg1 finished
Jan 27 09:30:18 np0005597378 lvm[379168]: VG ceph_vg0 finished
Jan 27 09:30:18 np0005597378 lvm[379175]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:30:18 np0005597378 lvm[379175]: VG ceph_vg2 finished
Jan 27 09:30:18 np0005597378 podman[379158]: 2026-01-27 14:30:18.921796622 +0000 UTC m=+0.061023207 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 09:30:19 np0005597378 lucid_gauss[379083]: {}
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.030 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.030 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.030 238945 INFO nova.compute.manager [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Unshelving#033[00m
Jan 27 09:30:19 np0005597378 systemd[1]: libpod-e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92.scope: Deactivated successfully.
Jan 27 09:30:19 np0005597378 systemd[1]: libpod-e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92.scope: Consumed 1.385s CPU time.
Jan 27 09:30:19 np0005597378 podman[379067]: 2026-01-27 14:30:19.05295452 +0000 UTC m=+1.025703743 container died e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Jan 27 09:30:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay-2c189a3bcbba69fcd48aa0ea93de294bc9bf2946ad1b447ab1598c77c85b7f03-merged.mount: Deactivated successfully.
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.100 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.100 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.106 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'pci_requests' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.125 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'numa_topology' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:30:19 np0005597378 podman[379067]: 2026-01-27 14:30:19.126231796 +0000 UTC m=+1.098980999 container remove e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 09:30:19 np0005597378 systemd[1]: libpod-conmon-e725098e60e5b96ec28dfe0e16a79af878b1388ef61f3f9f23c67cd39772eb92.scope: Deactivated successfully.
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.139 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.139 238945 INFO nova.compute.claims [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:30:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:30:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:30:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.220 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:30:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:30:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 120 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.671 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:30:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2956723293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.796 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.802 238945 DEBUG nova.compute.provider_tree [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.819 238945 DEBUG nova.scheduler.client.report [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:30:19 np0005597378 nova_compute[238941]: 2026-01-27 14:30:19.839 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:20 np0005597378 nova_compute[238941]: 2026-01-27 14:30:20.150 238945 INFO nova.network.neutron [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating port cc9d6b78-ae76-435f-a504-d4720a04f2b4 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 27 09:30:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:30:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:30:20 np0005597378 nova_compute[238941]: 2026-01-27 14:30:20.752 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:30:20 np0005597378 nova_compute[238941]: 2026-01-27 14:30:20.753 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:30:20 np0005597378 nova_compute[238941]: 2026-01-27 14:30:20.753 238945 DEBUG nova.network.neutron [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:30:20 np0005597378 nova_compute[238941]: 2026-01-27 14:30:20.835 238945 DEBUG nova.compute.manager [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:30:20 np0005597378 nova_compute[238941]: 2026-01-27 14:30:20.836 238945 DEBUG nova.compute.manager [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing instance network info cache due to event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:30:20 np0005597378 nova_compute[238941]: 2026-01-27 14:30:20.836 238945 DEBUG oslo_concurrency.lockutils [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:30:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:30:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 120 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 34 op/s
Jan 27 09:30:21 np0005597378 podman[379244]: 2026-01-27 14:30:21.773064616 +0000 UTC m=+0.111684023 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 09:30:21 np0005597378 nova_compute[238941]: 2026-01-27 14:30:21.880 238945 DEBUG nova.network.neutron [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:30:21 np0005597378 nova_compute[238941]: 2026-01-27 14:30:21.899 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:30:21 np0005597378 nova_compute[238941]: 2026-01-27 14:30:21.900 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:30:21 np0005597378 nova_compute[238941]: 2026-01-27 14:30:21.901 238945 INFO nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Creating image(s)#033[00m
Jan 27 09:30:21 np0005597378 nova_compute[238941]: 2026-01-27 14:30:21.924 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:30:21 np0005597378 nova_compute[238941]: 2026-01-27 14:30:21.927 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:30:21 np0005597378 nova_compute[238941]: 2026-01-27 14:30:21.929 238945 DEBUG oslo_concurrency.lockutils [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:30:21 np0005597378 nova_compute[238941]: 2026-01-27 14:30:21.929 238945 DEBUG nova.network.neutron [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:30:21 np0005597378 nova_compute[238941]: 2026-01-27 14:30:21.973 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:30:21 np0005597378 nova_compute[238941]: 2026-01-27 14:30:21.997 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:30:22 np0005597378 nova_compute[238941]: 2026-01-27 14:30:22.001 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "088a62d2d959f489158508aa0878fb0664bdcc71" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:22 np0005597378 nova_compute[238941]: 2026-01-27 14:30:22.002 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "088a62d2d959f489158508aa0878fb0664bdcc71" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:22 np0005597378 nova_compute[238941]: 2026-01-27 14:30:22.219 238945 DEBUG nova.virt.libvirt.imagebackend [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Image locations are: [{'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/3cc27298-a2ca-4936-ba67-d36ba64e6fdc/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/3cc27298-a2ca-4936-ba67-d36ba64e6fdc/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 27 09:30:22 np0005597378 nova_compute[238941]: 2026-01-27 14:30:22.261 238945 DEBUG nova.virt.libvirt.imagebackend [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Selected location: {'url': 'rbd://4d8fd694-f443-5fb1-b612-70034b2f3c6e/images/3cc27298-a2ca-4936-ba67-d36ba64e6fdc/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 27 09:30:22 np0005597378 nova_compute[238941]: 2026-01-27 14:30:22.262 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] cloning images/3cc27298-a2ca-4936-ba67-d36ba64e6fdc@snap to None/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 27 09:30:22 np0005597378 nova_compute[238941]: 2026-01-27 14:30:22.341 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "088a62d2d959f489158508aa0878fb0664bdcc71" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:22 np0005597378 nova_compute[238941]: 2026-01-27 14:30:22.448 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'migration_context' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:30:22 np0005597378 nova_compute[238941]: 2026-01-27 14:30:22.613 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] flattening vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 27 09:30:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 120 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.406 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.413 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.579 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Image rbd:vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.580 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.581 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Ensure instance console log exists: /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.581 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.581 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.582 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.584 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Start _get_guest_xml network_info=[{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T14:29:50Z,direct_url=<?>,disk_format='raw',id=3cc27298-a2ca-4936-ba67-d36ba64e6fdc,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-347034573-shelved',owner='24cbefea6247422aafb138daa54f3eea',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T14:30:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.588 238945 WARNING nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.595 238945 DEBUG nova.virt.libvirt.host [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.596 238945 DEBUG nova.virt.libvirt.host [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.601 238945 DEBUG nova.virt.libvirt.host [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.601 238945 DEBUG nova.virt.libvirt.host [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.602 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.602 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-27T14:29:50Z,direct_url=<?>,disk_format='raw',id=3cc27298-a2ca-4936-ba67-d36ba64e6fdc,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-347034573-shelved',owner='24cbefea6247422aafb138daa54f3eea',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-27T14:30:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.602 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.603 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.604 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.604 238945 DEBUG nova.virt.hardware [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.604 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.620 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.746 238945 DEBUG nova.network.neutron [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updated VIF entry in instance network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.747 238945 DEBUG nova.network.neutron [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.761 238945 DEBUG oslo_concurrency.lockutils [req-04567c88-53c4-4d69-96a8-836cf1fcf287 req-c54f84df-b446-4b2f-b846-241d7aa5eeb8 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.761 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.762 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 27 09:30:23 np0005597378 nova_compute[238941]: 2026-01-27 14:30:23.762 238945 DEBUG nova.objects.instance [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:30:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:30:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/153977960' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.205 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.230 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.234 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.675 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:30:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739056456' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.796 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.799 238945 DEBUG nova.virt.libvirt.vif [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-347034573',display_name='tempest-TestShelveInstance-server-347034573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-347034573',id=153,image_ref='3cc27298-a2ca-4936-ba67-d36ba64e6fdc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-972839056',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:29:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='24cbefea6247422aafb138daa54f3eea',ramdisk_id='',reservation_id='r-21jh6eyt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-532292556',owner_user_name='tempest-TestShelveInstance-532292556-project-member',shelved_at='2026-01-27T14:30:07.641433',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3cc27298-a2ca-4936-ba67-d36ba64e6fdc'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:30:19Z,user_data=None,user_id='46ab77ba8e764d19b7827d3cc5bd53ab',uuid=6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.799 238945 DEBUG nova.network.os_vif_util [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converting VIF {"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.801 238945 DEBUG nova.network.os_vif_util [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.803 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'pci_devices' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.827 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  <uuid>6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c</uuid>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  <name>instance-00000099</name>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <nova:name>tempest-TestShelveInstance-server-347034573</nova:name>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:30:23</nova:creationTime>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:        <nova:user uuid="46ab77ba8e764d19b7827d3cc5bd53ab">tempest-TestShelveInstance-532292556-project-member</nova:user>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:        <nova:project uuid="24cbefea6247422aafb138daa54f3eea">tempest-TestShelveInstance-532292556</nova:project>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="3cc27298-a2ca-4936-ba67-d36ba64e6fdc"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <nova:ports>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:        <nova:port uuid="cc9d6b78-ae76-435f-a504-d4720a04f2b4">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:        </nova:port>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      </nova:ports>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <entry name="serial">6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c</entry>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <entry name="uuid">6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c</entry>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <interface type="ethernet">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <mac address="fa:16:3e:75:3d:65"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <driver name="vhost" rx_queue_size="512"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <mtu size="1442"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <target dev="tapcc9d6b78-ae"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    </interface>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/console.log" append="off"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <input type="keyboard" bus="usb"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:30:24 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:30:24 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:30:24 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:30:24 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.828 238945 DEBUG nova.compute.manager [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Preparing to wait for external event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.828 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.828 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.828 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.829 238945 DEBUG nova.virt.libvirt.vif [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-347034573',display_name='tempest-TestShelveInstance-server-347034573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-347034573',id=153,image_ref='3cc27298-a2ca-4936-ba67-d36ba64e6fdc',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-972839056',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:29:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='24cbefea6247422aafb138daa54f3eea',ramdisk_id='',reservation_id='r-21jh6eyt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virt
io',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-532292556',owner_user_name='tempest-TestShelveInstance-532292556-project-member',shelved_at='2026-01-27T14:30:07.641433',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3cc27298-a2ca-4936-ba67-d36ba64e6fdc'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T14:30:19Z,user_data=None,user_id='46ab77ba8e764d19b7827d3cc5bd53ab',uuid=6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.829 238945 DEBUG nova.network.os_vif_util [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converting VIF {"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.830 238945 DEBUG nova.network.os_vif_util [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.830 238945 DEBUG os_vif [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.831 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.832 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.835 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.836 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc9d6b78-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.836 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc9d6b78-ae, col_values=(('external_ids', {'iface-id': 'cc9d6b78-ae76-435f-a504-d4720a04f2b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:3d:65', 'vm-uuid': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.838 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:24 np0005597378 NetworkManager[48904]: <info>  [1769524224.8389] manager: (tapcc9d6b78-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/664)
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.840 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.844 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.844 238945 INFO os_vif [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae')#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.922 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.922 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.922 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] No VIF found with MAC fa:16:3e:75:3d:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.923 238945 INFO nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Using config drive#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.944 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.964 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:30:24 np0005597378 nova_compute[238941]: 2026-01-27 14:30:24.998 238945 DEBUG nova.objects.instance [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'keypairs' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:30:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 174 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 88 op/s
Jan 27 09:30:25 np0005597378 nova_compute[238941]: 2026-01-27 14:30:25.503 238945 INFO nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Creating config drive at /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config#033[00m
Jan 27 09:30:25 np0005597378 nova_compute[238941]: 2026-01-27 14:30:25.510 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkzzfe_uz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:30:25 np0005597378 nova_compute[238941]: 2026-01-27 14:30:25.656 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkzzfe_uz" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:30:25 np0005597378 nova_compute[238941]: 2026-01-27 14:30:25.695 238945 DEBUG nova.storage.rbd_utils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] rbd image 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:30:25 np0005597378 nova_compute[238941]: 2026-01-27 14:30:25.698 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:30:25 np0005597378 nova_compute[238941]: 2026-01-27 14:30:25.867 238945 DEBUG oslo_concurrency.processutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:30:25 np0005597378 nova_compute[238941]: 2026-01-27 14:30:25.868 238945 INFO nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deleting local config drive /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c/disk.config because it was imported into RBD.#033[00m
Jan 27 09:30:25 np0005597378 nova_compute[238941]: 2026-01-27 14:30:25.895 238945 DEBUG nova.network.neutron [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:30:25 np0005597378 nova_compute[238941]: 2026-01-27 14:30:25.912 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:30:25 np0005597378 nova_compute[238941]: 2026-01-27 14:30:25.912 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 27 09:30:25 np0005597378 kernel: tapcc9d6b78-ae: entered promiscuous mode
Jan 27 09:30:25 np0005597378 NetworkManager[48904]: <info>  [1769524225.9231] manager: (tapcc9d6b78-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Jan 27 09:30:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:25Z|01628|binding|INFO|Claiming lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 for this chassis.
Jan 27 09:30:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:25Z|01629|binding|INFO|cc9d6b78-ae76-435f-a504-d4720a04f2b4: Claiming fa:16:3e:75:3d:65 10.100.0.14
Jan 27 09:30:25 np0005597378 nova_compute[238941]: 2026-01-27 14:30:25.925 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.931 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:3d:65 10.100.0.14'], port_security=['fa:16:3e:75:3d:65 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24cbefea6247422aafb138daa54f3eea', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'ed98009b-1ec4-4a35-9d75-c323dbd95769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49a82fa8-eb33-4428-971b-f8e14e58ddd3, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc9d6b78-ae76-435f-a504-d4720a04f2b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:30:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.932 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc9d6b78-ae76-435f-a504-d4720a04f2b4 in datapath 37b14166-b0d0-402b-94a9-ec6d48de23a0 bound to our chassis#033[00m
Jan 27 09:30:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.933 154802 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 37b14166-b0d0-402b-94a9-ec6d48de23a0#033[00m
Jan 27 09:30:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:25Z|01630|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 ovn-installed in OVS
Jan 27 09:30:25 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:25Z|01631|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 up in Southbound
Jan 27 09:30:25 np0005597378 nova_compute[238941]: 2026-01-27 14:30:25.941 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:25 np0005597378 nova_compute[238941]: 2026-01-27 14:30:25.943 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.945 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[779f4cc9-11e3-49ce-89b9-66345571ddca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.947 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap37b14166-b1 in ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 27 09:30:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.952 247546 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap37b14166-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 27 09:30:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.952 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[f430b15f-ad75-43e3-b17c-4113c433de14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.954 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[96d82f34-fd7e-4e38-aa8d-b02c198c650f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:25 np0005597378 systemd-udevd[379619]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 09:30:25 np0005597378 systemd-machined[207425]: New machine qemu-186-instance-00000099.
Jan 27 09:30:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.968 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[3e01ca05-e20e-4f9c-884b-4bec65bc164f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:25 np0005597378 NetworkManager[48904]: <info>  [1769524225.9741] device (tapcc9d6b78-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 09:30:25 np0005597378 NetworkManager[48904]: <info>  [1769524225.9750] device (tapcc9d6b78-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 09:30:25 np0005597378 systemd[1]: Started Virtual Machine qemu-186-instance-00000099.
Jan 27 09:30:25 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:25.986 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[eae5209d-b9f8-4bd0-846f-cea3c37201ec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.016 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[ba70e6aa-789a-4837-9275-7489aef6b645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.020 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[c484b2cf-9222-43d8-b06f-d77e26d395f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:26 np0005597378 NetworkManager[48904]: <info>  [1769524226.0216] manager: (tap37b14166-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/666)
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.049 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[39ef3a08-e0c4-408b-92d8-98804ae30d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.052 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[df56bdc4-108c-459a-b294-5d513236a1bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:26 np0005597378 NetworkManager[48904]: <info>  [1769524226.0754] device (tap37b14166-b0): carrier: link connected
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.080 247819 DEBUG oslo.privsep.daemon [-] privsep: reply[47444e43-5028-4d34-a687-73fa79be27de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.099 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[cfd174af-859d-4c0d-ad0e-987d59939342]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37b14166-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:6a:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686569, 'reachable_time': 24758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379651, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.121 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6be91877-53b2-4f8f-8c61-a7d1970ba74e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:6a0b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686569, 'tstamp': 686569}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379652, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.126 238945 DEBUG nova.compute.manager [req-99fc6279-d095-4ae5-973e-4161df6bc4df req-cb36c288-b3b7-4130-adb2-08e481eca89c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.127 238945 DEBUG oslo_concurrency.lockutils [req-99fc6279-d095-4ae5-973e-4161df6bc4df req-cb36c288-b3b7-4130-adb2-08e481eca89c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.128 238945 DEBUG oslo_concurrency.lockutils [req-99fc6279-d095-4ae5-973e-4161df6bc4df req-cb36c288-b3b7-4130-adb2-08e481eca89c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.128 238945 DEBUG oslo_concurrency.lockutils [req-99fc6279-d095-4ae5-973e-4161df6bc4df req-cb36c288-b3b7-4130-adb2-08e481eca89c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.129 238945 DEBUG nova.compute.manager [req-99fc6279-d095-4ae5-973e-4161df6bc4df req-cb36c288-b3b7-4130-adb2-08e481eca89c 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Processing event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.140 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[b5277a50-2b98-42a0-a669-f5164c6016dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37b14166-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:6a:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686569, 'reachable_time': 24758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379653, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.169 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3b04c8-78f5-4011-b86e-3abaf6ac147e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.233 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[426178a4-e72c-493f-807c-88dc3b488cfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.235 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37b14166-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.235 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.235 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37b14166-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:30:26 np0005597378 NetworkManager[48904]: <info>  [1769524226.2381] manager: (tap37b14166-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/667)
Jan 27 09:30:26 np0005597378 kernel: tap37b14166-b0: entered promiscuous mode
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.237 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.240 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap37b14166-b0, col_values=(('external_ids', {'iface-id': 'e3de2cc2-b8d6-417c-834f-e33c5933da91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:30:26 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:26Z|01632|binding|INFO|Releasing lport e3de2cc2-b8d6-417c-834f-e33c5933da91 from this chassis (sb_readonly=0)
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.241 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.255 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.255 154802 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/37b14166-b0d0-402b-94a9-ec6d48de23a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/37b14166-b0d0-402b-94a9-ec6d48de23a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.256 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[4671065e-6d5d-4eb4-bd32-3acb5a2cbf2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.256 154802 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: global
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    log         /dev/log local0 debug
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    log-tag     haproxy-metadata-proxy-37b14166-b0d0-402b-94a9-ec6d48de23a0
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    user        root
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    group       root
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    maxconn     1024
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    pidfile     /var/lib/neutron/external/pids/37b14166-b0d0-402b-94a9-ec6d48de23a0.pid.haproxy
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    daemon
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: defaults
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    log global
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    mode http
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    option httplog
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    option dontlognull
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    option http-server-close
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    option forwardfor
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    retries                 3
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    timeout http-request    30s
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    timeout connect         30s
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    timeout client          32s
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    timeout server          32s
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    timeout http-keep-alive 30s
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: listen listener
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    bind 169.254.169.254:80
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    server metadata /var/lib/neutron/metadata_proxy
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]:    http-request add-header X-OVN-Network-ID 37b14166-b0d0-402b-94a9-ec6d48de23a0
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 27 09:30:26 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:26.257 154802 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'env', 'PROCESS_TAG=haproxy-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/37b14166-b0d0-402b-94a9-ec6d48de23a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.515 238945 DEBUG nova.compute.manager [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.516 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524226.5143204, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.516 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Started (Lifecycle Event)#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.520 238945 DEBUG nova.virt.libvirt.driver [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.524 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance spawned successfully.#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.546 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.549 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.569 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.570 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524226.5156908, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.570 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Paused (Lifecycle Event)#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.594 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.597 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524226.5186973, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.597 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Resumed (Lifecycle Event)#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.613 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.616 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 27 09:30:26 np0005597378 nova_compute[238941]: 2026-01-27 14:30:26.652 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:30:26 np0005597378 podman[379727]: 2026-01-27 14:30:26.616535868 +0000 UTC m=+0.026584008 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 09:30:26 np0005597378 podman[379727]: 2026-01-27 14:30:26.755105754 +0000 UTC m=+0.165153884 container create 17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 09:30:26 np0005597378 systemd[1]: Started libpod-conmon-17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856.scope.
Jan 27 09:30:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:30:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcb8c7a9ce2ea7d9fea44a3b30c55db6a3d64d1c7f4492566ebf663816cc0e8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 09:30:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Jan 27 09:30:26 np0005597378 podman[379727]: 2026-01-27 14:30:26.835904923 +0000 UTC m=+0.245953043 container init 17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:30:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Jan 27 09:30:26 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Jan 27 09:30:26 np0005597378 podman[379727]: 2026-01-27 14:30:26.843195 +0000 UTC m=+0.253243120 container start 17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:30:26 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [NOTICE]   (379746) : New worker (379748) forked
Jan 27 09:30:26 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [NOTICE]   (379746) : Loading success.
Jan 27 09:30:27 np0005597378 nova_compute[238941]: 2026-01-27 14:30:27.160 238945 DEBUG nova.compute.manager [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:30:27 np0005597378 nova_compute[238941]: 2026-01-27 14:30:27.243 238945 DEBUG oslo_concurrency.lockutils [None req-55cfa2ae-d8aa-4592-855a-2bbba756d130 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 8.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 174 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.3 MiB/s wr, 86 op/s
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005557109052535495 of space, bias 1.0, pg target 0.16671327157606483 quantized to 32 (current 32)
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014281404385570126 of space, bias 1.0, pg target 0.42844213156710376 quantized to 32 (current 32)
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0472128941940284e-06 of space, bias 4.0, pg target 0.0012566554730328342 quantized to 16 (current 16)
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:30:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:30:28 np0005597378 nova_compute[238941]: 2026-01-27 14:30:28.202 238945 DEBUG nova.compute.manager [req-c66bc845-5563-4131-b7e0-99cf44818cce req-4e3d41f6-e019-44ae-94df-4c4b63de169b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:30:28 np0005597378 nova_compute[238941]: 2026-01-27 14:30:28.203 238945 DEBUG oslo_concurrency.lockutils [req-c66bc845-5563-4131-b7e0-99cf44818cce req-4e3d41f6-e019-44ae-94df-4c4b63de169b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:28 np0005597378 nova_compute[238941]: 2026-01-27 14:30:28.203 238945 DEBUG oslo_concurrency.lockutils [req-c66bc845-5563-4131-b7e0-99cf44818cce req-4e3d41f6-e019-44ae-94df-4c4b63de169b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:28 np0005597378 nova_compute[238941]: 2026-01-27 14:30:28.203 238945 DEBUG oslo_concurrency.lockutils [req-c66bc845-5563-4131-b7e0-99cf44818cce req-4e3d41f6-e019-44ae-94df-4c4b63de169b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:28 np0005597378 nova_compute[238941]: 2026-01-27 14:30:28.203 238945 DEBUG nova.compute.manager [req-c66bc845-5563-4131-b7e0-99cf44818cce req-4e3d41f6-e019-44ae-94df-4c4b63de169b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:30:28 np0005597378 nova_compute[238941]: 2026-01-27 14:30:28.203 238945 WARNING nova.compute.manager [req-c66bc845-5563-4131-b7e0-99cf44818cce req-4e3d41f6-e019-44ae-94df-4c4b63de169b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state active and task_state None.#033[00m
Jan 27 09:30:28 np0005597378 nova_compute[238941]: 2026-01-27 14:30:28.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:30:28 np0005597378 nova_compute[238941]: 2026-01-27 14:30:28.416 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 142 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 4.7 MiB/s wr, 182 op/s
Jan 27 09:30:29 np0005597378 nova_compute[238941]: 2026-01-27 14:30:29.839 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:30:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Jan 27 09:30:32 np0005597378 nova_compute[238941]: 2026-01-27 14:30:32.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:30:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 207 op/s
Jan 27 09:30:33 np0005597378 nova_compute[238941]: 2026-01-27 14:30:33.418 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:34 np0005597378 nova_compute[238941]: 2026-01-27 14:30:34.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:35 np0005597378 nova_compute[238941]: 2026-01-27 14:30:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:30:35 np0005597378 nova_compute[238941]: 2026-01-27 14:30:35.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:30:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.4 MiB/s wr, 122 op/s
Jan 27 09:30:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:30:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Jan 27 09:30:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Jan 27 09:30:36 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Jan 27 09:30:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.4 MiB/s wr, 122 op/s
Jan 27 09:30:38 np0005597378 nova_compute[238941]: 2026-01-27 14:30:38.421 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:38 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:38Z|00206|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:3d:65 10.100.0.14
Jan 27 09:30:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 727 KiB/s rd, 29 KiB/s wr, 45 op/s
Jan 27 09:30:39 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:39Z|01633|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Jan 27 09:30:39 np0005597378 nova_compute[238941]: 2026-01-27 14:30:39.845 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:40 np0005597378 nova_compute[238941]: 2026-01-27 14:30:40.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:30:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:30:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 515 KiB/s rd, 16 KiB/s wr, 40 op/s
Jan 27 09:30:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 645 KiB/s rd, 16 KiB/s wr, 53 op/s
Jan 27 09:30:43 np0005597378 nova_compute[238941]: 2026-01-27 14:30:43.422 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:44 np0005597378 nova_compute[238941]: 2026-01-27 14:30:44.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 123 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 28 KiB/s wr, 56 op/s
Jan 27 09:30:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:30:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.337 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.338 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.339 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.459 238945 DEBUG nova.compute.manager [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.459 238945 DEBUG nova.compute.manager [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing instance network info cache due to event network-changed-cc9d6b78-ae76-435f-a504-d4720a04f2b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.459 238945 DEBUG oslo_concurrency.lockutils [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.459 238945 DEBUG oslo_concurrency.lockutils [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquired lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.460 238945 DEBUG nova.network.neutron [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Refreshing network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.549 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.550 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.550 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.550 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.550 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.551 238945 INFO nova.compute.manager [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Terminating instance#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.552 238945 DEBUG nova.compute.manager [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:30:46 np0005597378 kernel: tapcc9d6b78-ae (unregistering): left promiscuous mode
Jan 27 09:30:46 np0005597378 NetworkManager[48904]: <info>  [1769524246.6248] device (tapcc9d6b78-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.635 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:46Z|01634|binding|INFO|Releasing lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 from this chassis (sb_readonly=0)
Jan 27 09:30:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:46Z|01635|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 down in Southbound
Jan 27 09:30:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:46Z|01636|binding|INFO|Removing iface tapcc9d6b78-ae ovn-installed in OVS
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.649 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:3d:65 10.100.0.14'], port_security=['fa:16:3e:75:3d:65 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24cbefea6247422aafb138daa54f3eea', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'ed98009b-1ec4-4a35-9d75-c323dbd95769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49a82fa8-eb33-4428-971b-f8e14e58ddd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc9d6b78-ae76-435f-a504-d4720a04f2b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:30:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.651 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc9d6b78-ae76-435f-a504-d4720a04f2b4 in datapath 37b14166-b0d0-402b-94a9-ec6d48de23a0 unbound from our chassis#033[00m
Jan 27 09:30:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.652 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37b14166-b0d0-402b-94a9-ec6d48de23a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:30:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.653 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[68e8af34-d377-420d-986f-4486ea937641]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.654 154802 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 namespace which is not needed anymore#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:46 np0005597378 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Deactivated successfully.
Jan 27 09:30:46 np0005597378 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Consumed 13.522s CPU time.
Jan 27 09:30:46 np0005597378 systemd-machined[207425]: Machine qemu-186-instance-00000099 terminated.
Jan 27 09:30:46 np0005597378 kernel: tapcc9d6b78-ae: entered promiscuous mode
Jan 27 09:30:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:46Z|01637|binding|INFO|Claiming lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 for this chassis.
Jan 27 09:30:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:46Z|01638|binding|INFO|cc9d6b78-ae76-435f-a504-d4720a04f2b4: Claiming fa:16:3e:75:3d:65 10.100.0.14
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.776 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:46 np0005597378 kernel: tapcc9d6b78-ae (unregistering): left promiscuous mode
Jan 27 09:30:46 np0005597378 NetworkManager[48904]: <info>  [1769524246.7802] manager: (tapcc9d6b78-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/668)
Jan 27 09:30:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.784 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:3d:65 10.100.0.14'], port_security=['fa:16:3e:75:3d:65 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24cbefea6247422aafb138daa54f3eea', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'ed98009b-1ec4-4a35-9d75-c323dbd95769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49a82fa8-eb33-4428-971b-f8e14e58ddd3, chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc9d6b78-ae76-435f-a504-d4720a04f2b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:30:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:46Z|01639|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 ovn-installed in OVS
Jan 27 09:30:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:46Z|01640|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 up in Southbound
Jan 27 09:30:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:46Z|01641|binding|INFO|Releasing lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 from this chassis (sb_readonly=1)
Jan 27 09:30:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:46Z|01642|if_status|INFO|Dropped 3 log messages in last 329 seconds (most recently, 328 seconds ago) due to excessive rate
Jan 27 09:30:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:46Z|01643|if_status|INFO|Not setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 down as sb is readonly
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.801 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:46Z|01644|binding|INFO|Removing iface tapcc9d6b78-ae ovn-installed in OVS
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.802 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:46Z|01645|binding|INFO|Releasing lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 from this chassis (sb_readonly=0)
Jan 27 09:30:46 np0005597378 ovn_controller[144812]: 2026-01-27T14:30:46Z|01646|binding|INFO|Setting lport cc9d6b78-ae76-435f-a504-d4720a04f2b4 down in Southbound
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.805 238945 INFO nova.virt.libvirt.driver [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Instance destroyed successfully.#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.806 238945 DEBUG nova.objects.instance [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lazy-loading 'resources' on Instance uuid 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:30:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:46.810 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:3d:65 10.100.0.14'], port_security=['fa:16:3e:75:3d:65 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24cbefea6247422aafb138daa54f3eea', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'ed98009b-1ec4-4a35-9d75-c323dbd95769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49a82fa8-eb33-4428-971b-f8e14e58ddd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>], logical_port=cc9d6b78-ae76-435f-a504-d4720a04f2b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5b44385dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.814 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:46 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [NOTICE]   (379746) : haproxy version is 2.8.14-c23fe91
Jan 27 09:30:46 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [NOTICE]   (379746) : path to executable is /usr/sbin/haproxy
Jan 27 09:30:46 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [WARNING]  (379746) : Exiting Master process...
Jan 27 09:30:46 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [WARNING]  (379746) : Exiting Master process...
Jan 27 09:30:46 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [ALERT]    (379746) : Current worker (379748) exited with code 143 (Terminated)
Jan 27 09:30:46 np0005597378 neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0[379742]: [WARNING]  (379746) : All workers exited. Exiting... (0)
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.818 238945 DEBUG nova.virt.libvirt.vif [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-27T14:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-347034573',display_name='tempest-TestShelveInstance-server-347034573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-347034573',id=153,image_ref='deec719f-9679-4d33-adfe-db01148e4a56',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMo003pXsdGxGaXJXNJkmVGj6iZij/8YUcfuE/aix01MC+tyBXUNrywSWGyE6IgqN1L+kooGDPknA7/r0afvROAgp26qEMx4bIlura66h+lQt2j4DLXrtHi61pF1fMJeFw==',key_name='tempest-TestShelveInstance-972839056',keypairs=<?>,launch_index=0,launched_at=2026-01-27T14:30:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='24cbefea6247422aafb138daa54f3eea',ramdisk_id='',reservation_id='r-21jh6eyt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='deec719f-9679-4d33-adfe-db01148e4a56',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-532292556',owner_user_name='tempest-TestShelveInstance-532292556-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T14:30:27Z,user_data=None,user_id='46ab77ba8e764d19b7827d3cc5bd53ab',uuid=6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.819 238945 DEBUG nova.network.os_vif_util [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converting VIF {"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 27 09:30:46 np0005597378 systemd[1]: libpod-17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856.scope: Deactivated successfully.
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.819 238945 DEBUG nova.network.os_vif_util [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.820 238945 DEBUG os_vif [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.821 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.822 238945 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc9d6b78-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.823 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.826 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 27 09:30:46 np0005597378 podman[379783]: 2026-01-27 14:30:46.82675394 +0000 UTC m=+0.070714749 container died 17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:30:46 np0005597378 nova_compute[238941]: 2026-01-27 14:30:46.828 238945 INFO os_vif [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:3d:65,bridge_name='br-int',has_traffic_filtering=True,id=cc9d6b78-ae76-435f-a504-d4720a04f2b4,network=Network(37b14166-b0d0-402b-94a9-ec6d48de23a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9d6b78-ae')#033[00m
Jan 27 09:30:46 np0005597378 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856-userdata-shm.mount: Deactivated successfully.
Jan 27 09:30:46 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ebcb8c7a9ce2ea7d9fea44a3b30c55db6a3d64d1c7f4492566ebf663816cc0e8-merged.mount: Deactivated successfully.
Jan 27 09:30:46 np0005597378 podman[379783]: 2026-01-27 14:30:46.95322943 +0000 UTC m=+0.197190239 container cleanup 17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:30:46 np0005597378 systemd[1]: libpod-conmon-17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856.scope: Deactivated successfully.
Jan 27 09:30:47 np0005597378 podman[379837]: 2026-01-27 14:30:47.245143103 +0000 UTC m=+0.267157346 container remove 17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.252 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[41cee6f6-11bd-45a7-bd45-87de2e0eb104]: (4, ('Tue Jan 27 02:30:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 (17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856)\n17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856\nTue Jan 27 02:30:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 (17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856)\n17fb4ef158bd69a835948fa27a0027df3a6902f7f9e5ef3ebea4ecdb7b415856\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.255 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[fd12eec9-6cf0-44d1-905c-742acf7d0304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.255 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37b14166-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:30:47 np0005597378 nova_compute[238941]: 2026-01-27 14:30:47.258 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:47 np0005597378 kernel: tap37b14166-b0: left promiscuous mode
Jan 27 09:30:47 np0005597378 nova_compute[238941]: 2026-01-27 14:30:47.274 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.278 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1c6595-1734-402c-855d-1f8f18d37733]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.294 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[69bf4d6f-cd85-4a1e-a97b-f54608f0e421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.296 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[ec54b280-c1af-468f-a75b-8385237d5f22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.315 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[762e925a-071e-4d09-b134-945a95f858dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686563, 'reachable_time': 43985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379852, 'error': None, 'target': 'ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:47 np0005597378 systemd[1]: run-netns-ovnmeta\x2d37b14166\x2db0d0\x2d402b\x2d94a9\x2dec6d48de23a0.mount: Deactivated successfully.
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.321 155324 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-37b14166-b0d0-402b-94a9-ec6d48de23a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.321 155324 DEBUG oslo.privsep.daemon [-] privsep: reply[f048a3b1-9520-4caa-96d4-d490d43cae4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.323 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc9d6b78-ae76-435f-a504-d4720a04f2b4 in datapath 37b14166-b0d0-402b-94a9-ec6d48de23a0 unbound from our chassis#033[00m
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.324 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37b14166-b0d0-402b-94a9-ec6d48de23a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.325 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[db06ed8d-6580-47b5-8bfd-d084cb2182b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.326 154802 INFO neutron.agent.ovn.metadata.agent [-] Port cc9d6b78-ae76-435f-a504-d4720a04f2b4 in datapath 37b14166-b0d0-402b-94a9-ec6d48de23a0 unbound from our chassis#033[00m
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.327 154802 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37b14166-b0d0-402b-94a9-ec6d48de23a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 27 09:30:47 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:30:47.327 247546 DEBUG oslo.privsep.daemon [-] privsep: reply[3a48e368-e982-4c1e-8f6f-ae344098df0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 27 09:30:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 123 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 578 KiB/s rd, 25 KiB/s wr, 50 op/s
Jan 27 09:30:47 np0005597378 nova_compute[238941]: 2026-01-27 14:30:47.425 238945 INFO nova.virt.libvirt.driver [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deleting instance files /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_del#033[00m
Jan 27 09:30:47 np0005597378 nova_compute[238941]: 2026-01-27 14:30:47.426 238945 INFO nova.virt.libvirt.driver [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deletion of /var/lib/nova/instances/6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c_del complete#033[00m
Jan 27 09:30:47 np0005597378 nova_compute[238941]: 2026-01-27 14:30:47.680 238945 INFO nova.compute.manager [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:30:47 np0005597378 nova_compute[238941]: 2026-01-27 14:30:47.681 238945 DEBUG oslo.service.loopingcall [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:30:47 np0005597378 nova_compute[238941]: 2026-01-27 14:30:47.681 238945 DEBUG nova.compute.manager [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:30:47 np0005597378 nova_compute[238941]: 2026-01-27 14:30:47.681 238945 DEBUG nova.network.neutron [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:30:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:30:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:30:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:30:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:30:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:30:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.335 238945 DEBUG nova.network.neutron [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updated VIF entry in instance network info cache for port cc9d6b78-ae76-435f-a504-d4720a04f2b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.336 238945 DEBUG nova.network.neutron [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [{"id": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "address": "fa:16:3e:75:3d:65", "network": {"id": "37b14166-b0d0-402b-94a9-ec6d48de23a0", "bridge": "br-int", "label": "tempest-TestShelveInstance-1156622816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "24cbefea6247422aafb138daa54f3eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9d6b78-ae", "ovs_interfaceid": "cc9d6b78-ae76-435f-a504-d4720a04f2b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.360 238945 DEBUG oslo_concurrency.lockutils [req-d4635455-f7fd-4794-80ee-2117f29aedbd req-e5effe60-d074-4467-8538-507c31b17fff 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Releasing lock "refresh_cache-6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.424 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.495 238945 DEBUG nova.network.neutron [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.512 238945 INFO nova.compute.manager [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Took 0.83 seconds to deallocate network for instance.#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.559 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.560 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.567 238945 DEBUG nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-unplugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.567 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.568 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.568 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.568 238945 DEBUG nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-unplugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.568 238945 WARNING nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-unplugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.569 238945 DEBUG nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.569 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.569 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.569 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.569 238945 DEBUG nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.570 238945 WARNING nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.570 238945 DEBUG nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.570 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Acquiring lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.570 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.570 238945 DEBUG oslo_concurrency.lockutils [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.571 238945 DEBUG nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] No waiting events found dispatching network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.571 238945 WARNING nova.compute.manager [req-5040af3c-09d4-4b38-8959-84268b311095 req-f5ce434c-102a-4b74-afd6-f2ec96f2ba5f 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received unexpected event network-vif-plugged-cc9d6b78-ae76-435f-a504-d4720a04f2b4 for instance with vm_state deleted and task_state None.#033[00m
Jan 27 09:30:48 np0005597378 nova_compute[238941]: 2026-01-27 14:30:48.611 238945 DEBUG oslo_concurrency.processutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:30:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:30:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583735736' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:30:49 np0005597378 nova_compute[238941]: 2026-01-27 14:30:49.198 238945 DEBUG oslo_concurrency.processutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:30:49 np0005597378 nova_compute[238941]: 2026-01-27 14:30:49.206 238945 DEBUG nova.compute.provider_tree [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:30:49 np0005597378 nova_compute[238941]: 2026-01-27 14:30:49.237 238945 DEBUG nova.scheduler.client.report [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:30:49 np0005597378 nova_compute[238941]: 2026-01-27 14:30:49.266 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:49 np0005597378 nova_compute[238941]: 2026-01-27 14:30:49.301 238945 INFO nova.scheduler.client.report [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Deleted allocations for instance 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c#033[00m
Jan 27 09:30:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 55 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 756 KiB/s rd, 25 KiB/s wr, 62 op/s
Jan 27 09:30:49 np0005597378 nova_compute[238941]: 2026-01-27 14:30:49.595 238945 DEBUG oslo_concurrency.lockutils [None req-dc2f6b81-772d-4383-bd79-0992b3b08636 46ab77ba8e764d19b7827d3cc5bd53ab 24cbefea6247422aafb138daa54f3eea - - default default] Lock "6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:30:49 np0005597378 podman[379875]: 2026-01-27 14:30:49.707254293 +0000 UTC m=+0.050554915 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:30:50 np0005597378 nova_compute[238941]: 2026-01-27 14:30:50.656 238945 DEBUG nova.compute.manager [req-5c77ec66-225e-4cd3-9220-912121f63aa5 req-32a6d74e-beee-4905-84e6-27e0bab2f22b 9fe55394286f46f2b54725e0b57f5e55 c6ec452db71b4b1e88c6140101bebb68 - - default default] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Received event network-vif-deleted-cc9d6b78-ae76-435f-a504-d4720a04f2b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 27 09:30:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:30:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 509 KiB/s rd, 13 KiB/s wr, 63 op/s
Jan 27 09:30:51 np0005597378 nova_compute[238941]: 2026-01-27 14:30:51.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:52 np0005597378 podman[379896]: 2026-01-27 14:30:52.735152341 +0000 UTC m=+0.080335368 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:30:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 12 KiB/s wr, 45 op/s
Jan 27 09:30:53 np0005597378 nova_compute[238941]: 2026-01-27 14:30:53.427 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 232 KiB/s rd, 12 KiB/s wr, 34 op/s
Jan 27 09:30:55 np0005597378 nova_compute[238941]: 2026-01-27 14:30:55.981 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:30:56 np0005597378 nova_compute[238941]: 2026-01-27 14:30:56.111 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:56 np0005597378 nova_compute[238941]: 2026-01-27 14:30:56.828 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 27 09:30:58 np0005597378 nova_compute[238941]: 2026-01-27 14:30:58.428 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:30:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 27 09:30:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:30:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3960377269' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:30:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:30:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3960377269' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:31:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:31:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 17 op/s
Jan 27 09:31:01 np0005597378 nova_compute[238941]: 2026-01-27 14:31:01.803 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524246.800956, 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:31:01 np0005597378 nova_compute[238941]: 2026-01-27 14:31:01.804 238945 INFO nova.compute.manager [-] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:31:01 np0005597378 nova_compute[238941]: 2026-01-27 14:31:01.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:01 np0005597378 nova_compute[238941]: 2026-01-27 14:31:01.890 238945 DEBUG nova.compute.manager [None req-54ce7784-45a8-422e-81c7-92a405de1415 - - - - - -] [instance: 6bef8b7c-ddf4-47e6-a6d4-8ac990637f7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:31:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:03 np0005597378 nova_compute[238941]: 2026-01-27 14:31:03.434 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:31:06 np0005597378 nova_compute[238941]: 2026-01-27 14:31:06.833 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:08 np0005597378 nova_compute[238941]: 2026-01-27 14:31:08.435 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:31:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:11 np0005597378 nova_compute[238941]: 2026-01-27 14:31:11.836 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:13 np0005597378 nova_compute[238941]: 2026-01-27 14:31:13.437 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:31:14.971 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:31:14 np0005597378 nova_compute[238941]: 2026-01-27 14:31:14.972 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:14 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:31:14.973 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:31:15 np0005597378 nova_compute[238941]: 2026-01-27 14:31:15.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:31:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:15 np0005597378 nova_compute[238941]: 2026-01-27 14:31:15.438 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:31:15 np0005597378 nova_compute[238941]: 2026-01-27 14:31:15.439 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:31:15 np0005597378 nova_compute[238941]: 2026-01-27 14:31:15.439 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:31:15 np0005597378 nova_compute[238941]: 2026-01-27 14:31:15.439 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:31:15 np0005597378 nova_compute[238941]: 2026-01-27 14:31:15.440 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:31:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:31:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1157081034' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:31:16 np0005597378 nova_compute[238941]: 2026-01-27 14:31:16.039 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:31:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:31:16 np0005597378 nova_compute[238941]: 2026-01-27 14:31:16.237 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:31:16 np0005597378 nova_compute[238941]: 2026-01-27 14:31:16.238 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3488MB free_disk=59.987323991023004GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:31:16 np0005597378 nova_compute[238941]: 2026-01-27 14:31:16.238 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:31:16 np0005597378 nova_compute[238941]: 2026-01-27 14:31:16.239 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:31:16 np0005597378 nova_compute[238941]: 2026-01-27 14:31:16.528 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:31:16 np0005597378 nova_compute[238941]: 2026-01-27 14:31:16.529 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:31:16 np0005597378 nova_compute[238941]: 2026-01-27 14:31:16.552 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:31:16 np0005597378 nova_compute[238941]: 2026-01-27 14:31:16.838 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:31:17
Jan 27 09:31:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:31:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:31:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'images', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 27 09:31:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:31:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:31:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2492673560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:31:17 np0005597378 nova_compute[238941]: 2026-01-27 14:31:17.185 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:31:17 np0005597378 nova_compute[238941]: 2026-01-27 14:31:17.190 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:31:17 np0005597378 nova_compute[238941]: 2026-01-27 14:31:17.259 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:31:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:17 np0005597378 nova_compute[238941]: 2026-01-27 14:31:17.613 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:31:17 np0005597378 nova_compute[238941]: 2026-01-27 14:31:17.614 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:31:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:31:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:31:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:31:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:31:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:31:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:31:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:31:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:31:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:31:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:31:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:31:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:31:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:31:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:31:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:31:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:31:18 np0005597378 nova_compute[238941]: 2026-01-27 14:31:18.440 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:18 np0005597378 nova_compute[238941]: 2026-01-27 14:31:18.615 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:31:19 np0005597378 nova_compute[238941]: 2026-01-27 14:31:19.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:31:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:31:20 np0005597378 podman[380074]: 2026-01-27 14:31:20.220679352 +0000 UTC m=+0.059470964 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 09:31:20 np0005597378 nova_compute[238941]: 2026-01-27 14:31:20.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:31:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:31:20 np0005597378 podman[380134]: 2026-01-27 14:31:20.534157426 +0000 UTC m=+0.027347978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:31:20 np0005597378 podman[380134]: 2026-01-27 14:31:20.66814587 +0000 UTC m=+0.161336402 container create 5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:31:20 np0005597378 systemd[1]: Started libpod-conmon-5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369.scope.
Jan 27 09:31:20 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:31:20 np0005597378 podman[380134]: 2026-01-27 14:31:20.859658795 +0000 UTC m=+0.352849337 container init 5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 09:31:20 np0005597378 podman[380134]: 2026-01-27 14:31:20.869907761 +0000 UTC m=+0.363098283 container start 5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 09:31:20 np0005597378 podman[380134]: 2026-01-27 14:31:20.874789742 +0000 UTC m=+0.367980284 container attach 5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:31:20 np0005597378 pedantic_pasteur[380151]: 167 167
Jan 27 09:31:20 np0005597378 systemd[1]: libpod-5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369.scope: Deactivated successfully.
Jan 27 09:31:20 np0005597378 conmon[380151]: conmon 5990cc94a368d1687dcd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369.scope/container/memory.events
Jan 27 09:31:20 np0005597378 podman[380134]: 2026-01-27 14:31:20.877762043 +0000 UTC m=+0.370952565 container died 5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:31:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8d9e22fa43a69419f3a270d5304197e0726207c9c43c0357b1d9ff85c9d2ab21-merged.mount: Deactivated successfully.
Jan 27 09:31:20 np0005597378 podman[380134]: 2026-01-27 14:31:20.926297732 +0000 UTC m=+0.419488254 container remove 5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 09:31:20 np0005597378 systemd[1]: libpod-conmon-5990cc94a368d1687dcd517e6fbec918ff31b61ff8ce19eb31daac0d6d5d6369.scope: Deactivated successfully.
Jan 27 09:31:21 np0005597378 podman[380176]: 2026-01-27 14:31:21.098547107 +0000 UTC m=+0.042323182 container create dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heisenberg, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 09:31:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:31:21 np0005597378 systemd[1]: Started libpod-conmon-dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc.scope.
Jan 27 09:31:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:31:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9416f543550d4a615d04e182f2e80b04855a8f0d9c78e0e61b3166742bc3a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:31:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9416f543550d4a615d04e182f2e80b04855a8f0d9c78e0e61b3166742bc3a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:31:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9416f543550d4a615d04e182f2e80b04855a8f0d9c78e0e61b3166742bc3a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:31:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9416f543550d4a615d04e182f2e80b04855a8f0d9c78e0e61b3166742bc3a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:31:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9416f543550d4a615d04e182f2e80b04855a8f0d9c78e0e61b3166742bc3a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:31:21 np0005597378 podman[380176]: 2026-01-27 14:31:21.081180469 +0000 UTC m=+0.024956574 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:31:21 np0005597378 podman[380176]: 2026-01-27 14:31:21.200389214 +0000 UTC m=+0.144165309 container init dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:31:21 np0005597378 podman[380176]: 2026-01-27 14:31:21.207382242 +0000 UTC m=+0.151158317 container start dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heisenberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 27 09:31:21 np0005597378 podman[380176]: 2026-01-27 14:31:21.217736421 +0000 UTC m=+0.161512526 container attach dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:31:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:21 np0005597378 practical_heisenberg[380193]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:31:21 np0005597378 practical_heisenberg[380193]: --> All data devices are unavailable
Jan 27 09:31:21 np0005597378 systemd[1]: libpod-dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc.scope: Deactivated successfully.
Jan 27 09:31:21 np0005597378 podman[380176]: 2026-01-27 14:31:21.682536656 +0000 UTC m=+0.626312741 container died dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 09:31:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8c9416f543550d4a615d04e182f2e80b04855a8f0d9c78e0e61b3166742bc3a2-merged.mount: Deactivated successfully.
Jan 27 09:31:21 np0005597378 podman[380176]: 2026-01-27 14:31:21.726884313 +0000 UTC m=+0.670660388 container remove dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_heisenberg, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:31:21 np0005597378 systemd[1]: libpod-conmon-dbbf85482b548022879bb1131aa84e1cb82005196a9604a65721d113396141cc.scope: Deactivated successfully.
Jan 27 09:31:21 np0005597378 nova_compute[238941]: 2026-01-27 14:31:21.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:22 np0005597378 podman[380287]: 2026-01-27 14:31:22.250416922 +0000 UTC m=+0.043631648 container create 7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 09:31:22 np0005597378 systemd[1]: Started libpod-conmon-7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902.scope.
Jan 27 09:31:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:31:22 np0005597378 podman[380287]: 2026-01-27 14:31:22.230882095 +0000 UTC m=+0.024096841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:31:22 np0005597378 podman[380287]: 2026-01-27 14:31:22.32826748 +0000 UTC m=+0.121482226 container init 7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_montalcini, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:31:22 np0005597378 podman[380287]: 2026-01-27 14:31:22.336606405 +0000 UTC m=+0.129821131 container start 7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_montalcini, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:31:22 np0005597378 podman[380287]: 2026-01-27 14:31:22.340314626 +0000 UTC m=+0.133529352 container attach 7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:31:22 np0005597378 intelligent_montalcini[380303]: 167 167
Jan 27 09:31:22 np0005597378 systemd[1]: libpod-7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902.scope: Deactivated successfully.
Jan 27 09:31:22 np0005597378 conmon[380303]: conmon 7665c6d1cb2c383e04ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902.scope/container/memory.events
Jan 27 09:31:22 np0005597378 podman[380287]: 2026-01-27 14:31:22.343747429 +0000 UTC m=+0.136962155 container died 7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_montalcini, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:31:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-125c7e2286fa94f35c69fa5a05f71632605b305de62588228b247448f86a6ea7-merged.mount: Deactivated successfully.
Jan 27 09:31:22 np0005597378 podman[380287]: 2026-01-27 14:31:22.387815847 +0000 UTC m=+0.181030573 container remove 7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:31:22 np0005597378 systemd[1]: libpod-conmon-7665c6d1cb2c383e04ba5d2c8849393a2fed2fd6cd33ef8183c2258fc0749902.scope: Deactivated successfully.
Jan 27 09:31:22 np0005597378 podman[380327]: 2026-01-27 14:31:22.576713821 +0000 UTC m=+0.052200938 container create 5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kowalevski, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 09:31:22 np0005597378 systemd[1]: Started libpod-conmon-5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc.scope.
Jan 27 09:31:22 np0005597378 podman[380327]: 2026-01-27 14:31:22.552900909 +0000 UTC m=+0.028388056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:31:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:31:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572cf11e9a4af52fb05d2618c43685f6e9b6367faec2fa81950b1e8160406ae7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:31:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572cf11e9a4af52fb05d2618c43685f6e9b6367faec2fa81950b1e8160406ae7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:31:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572cf11e9a4af52fb05d2618c43685f6e9b6367faec2fa81950b1e8160406ae7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:31:22 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572cf11e9a4af52fb05d2618c43685f6e9b6367faec2fa81950b1e8160406ae7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:31:22 np0005597378 podman[380327]: 2026-01-27 14:31:22.666254596 +0000 UTC m=+0.141741743 container init 5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kowalevski, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:31:22 np0005597378 podman[380327]: 2026-01-27 14:31:22.673229004 +0000 UTC m=+0.148716121 container start 5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 09:31:22 np0005597378 podman[380327]: 2026-01-27 14:31:22.696347807 +0000 UTC m=+0.171835014 container attach 5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kowalevski, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]: {
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:    "0": [
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:        {
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "devices": [
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "/dev/loop3"
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            ],
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_name": "ceph_lv0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_size": "21470642176",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "name": "ceph_lv0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "tags": {
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.cluster_name": "ceph",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.crush_device_class": "",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.encrypted": "0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.objectstore": "bluestore",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.osd_id": "0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.type": "block",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.vdo": "0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.with_tpm": "0"
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            },
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "type": "block",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "vg_name": "ceph_vg0"
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:        }
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:    ],
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:    "1": [
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:        {
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "devices": [
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "/dev/loop4"
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            ],
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_name": "ceph_lv1",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_size": "21470642176",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "name": "ceph_lv1",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "tags": {
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.cluster_name": "ceph",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.crush_device_class": "",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.encrypted": "0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.objectstore": "bluestore",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.osd_id": "1",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.type": "block",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.vdo": "0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.with_tpm": "0"
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            },
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "type": "block",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "vg_name": "ceph_vg1"
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:        }
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:    ],
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:    "2": [
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:        {
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "devices": [
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "/dev/loop5"
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            ],
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_name": "ceph_lv2",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_size": "21470642176",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "name": "ceph_lv2",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "tags": {
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.cluster_name": "ceph",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.crush_device_class": "",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.encrypted": "0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.objectstore": "bluestore",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.osd_id": "2",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.type": "block",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.vdo": "0",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:                "ceph.with_tpm": "0"
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            },
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "type": "block",
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:            "vg_name": "ceph_vg2"
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:        }
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]:    ]
Jan 27 09:31:22 np0005597378 reverent_kowalevski[380342]: }
Jan 27 09:31:23 np0005597378 systemd[1]: libpod-5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc.scope: Deactivated successfully.
Jan 27 09:31:23 np0005597378 podman[380327]: 2026-01-27 14:31:23.015852575 +0000 UTC m=+0.491339702 container died 5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:31:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-572cf11e9a4af52fb05d2618c43685f6e9b6367faec2fa81950b1e8160406ae7-merged.mount: Deactivated successfully.
Jan 27 09:31:23 np0005597378 podman[380327]: 2026-01-27 14:31:23.093543929 +0000 UTC m=+0.569031046 container remove 5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kowalevski, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 09:31:23 np0005597378 systemd[1]: libpod-conmon-5eebb8053034912d5c9cdc303cbed71a49a653bea3d74f5dd771f1fd7bfed0bc.scope: Deactivated successfully.
Jan 27 09:31:23 np0005597378 podman[380352]: 2026-01-27 14:31:23.207796731 +0000 UTC m=+0.159245436 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:31:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:23 np0005597378 nova_compute[238941]: 2026-01-27 14:31:23.442 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:23 np0005597378 podman[380446]: 2026-01-27 14:31:23.623140741 +0000 UTC m=+0.053087552 container create 9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_elbakyan, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:31:23 np0005597378 systemd[1]: Started libpod-conmon-9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff.scope.
Jan 27 09:31:23 np0005597378 podman[380446]: 2026-01-27 14:31:23.594297474 +0000 UTC m=+0.024244315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:31:23 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:31:23 np0005597378 podman[380446]: 2026-01-27 14:31:23.722970754 +0000 UTC m=+0.152917575 container init 9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_elbakyan, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:31:23 np0005597378 podman[380446]: 2026-01-27 14:31:23.7302128 +0000 UTC m=+0.160159611 container start 9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_elbakyan, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 09:31:23 np0005597378 happy_elbakyan[380462]: 167 167
Jan 27 09:31:23 np0005597378 systemd[1]: libpod-9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff.scope: Deactivated successfully.
Jan 27 09:31:23 np0005597378 conmon[380462]: conmon 9e044d9977b93936a828 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff.scope/container/memory.events
Jan 27 09:31:23 np0005597378 podman[380446]: 2026-01-27 14:31:23.74136473 +0000 UTC m=+0.171311541 container attach 9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_elbakyan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:31:23 np0005597378 podman[380446]: 2026-01-27 14:31:23.741859533 +0000 UTC m=+0.171806344 container died 9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:31:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-51833a8b7bbb32a8cda2fcc8e0246c8eac29edb411456a61326ec8625be63de4-merged.mount: Deactivated successfully.
Jan 27 09:31:23 np0005597378 podman[380446]: 2026-01-27 14:31:23.822962971 +0000 UTC m=+0.252909782 container remove 9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_elbakyan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 09:31:23 np0005597378 systemd[1]: libpod-conmon-9e044d9977b93936a82870e09c1566c22141bd8387084d9a08d1bf76cad4dfff.scope: Deactivated successfully.
Jan 27 09:31:24 np0005597378 podman[380485]: 2026-01-27 14:31:24.008078353 +0000 UTC m=+0.066389061 container create 694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kowalevski, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 09:31:24 np0005597378 podman[380485]: 2026-01-27 14:31:23.967451557 +0000 UTC m=+0.025762285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:31:24 np0005597378 systemd[1]: Started libpod-conmon-694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01.scope.
Jan 27 09:31:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:31:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f1927ebfdd91d62a348bc16a4bbaf19aa4454cebe26c669603a88bb3aa07498/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:31:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f1927ebfdd91d62a348bc16a4bbaf19aa4454cebe26c669603a88bb3aa07498/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:31:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f1927ebfdd91d62a348bc16a4bbaf19aa4454cebe26c669603a88bb3aa07498/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:31:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f1927ebfdd91d62a348bc16a4bbaf19aa4454cebe26c669603a88bb3aa07498/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:31:24 np0005597378 podman[380485]: 2026-01-27 14:31:24.221825808 +0000 UTC m=+0.280136546 container init 694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kowalevski, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:31:24 np0005597378 podman[380485]: 2026-01-27 14:31:24.228895518 +0000 UTC m=+0.287206226 container start 694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:31:24 np0005597378 podman[380485]: 2026-01-27 14:31:24.244374086 +0000 UTC m=+0.302684794 container attach 694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:31:24 np0005597378 nova_compute[238941]: 2026-01-27 14:31:24.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:31:24 np0005597378 nova_compute[238941]: 2026-01-27 14:31:24.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:31:24 np0005597378 nova_compute[238941]: 2026-01-27 14:31:24.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:31:24 np0005597378 nova_compute[238941]: 2026-01-27 14:31:24.452 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:31:24 np0005597378 lvm[380582]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:31:24 np0005597378 lvm[380582]: VG ceph_vg1 finished
Jan 27 09:31:24 np0005597378 lvm[380581]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:31:24 np0005597378 lvm[380581]: VG ceph_vg0 finished
Jan 27 09:31:24 np0005597378 lvm[380584]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:31:24 np0005597378 lvm[380584]: VG ceph_vg2 finished
Jan 27 09:31:24 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:31:24.975 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:31:24 np0005597378 lvm[380586]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:31:24 np0005597378 lvm[380586]: VG ceph_vg2 finished
Jan 27 09:31:25 np0005597378 quirky_kowalevski[380502]: {}
Jan 27 09:31:25 np0005597378 systemd[1]: libpod-694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01.scope: Deactivated successfully.
Jan 27 09:31:25 np0005597378 systemd[1]: libpod-694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01.scope: Consumed 1.335s CPU time.
Jan 27 09:31:25 np0005597378 podman[380485]: 2026-01-27 14:31:25.052649753 +0000 UTC m=+1.110960471 container died 694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kowalevski, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:31:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5f1927ebfdd91d62a348bc16a4bbaf19aa4454cebe26c669603a88bb3aa07498-merged.mount: Deactivated successfully.
Jan 27 09:31:25 np0005597378 podman[380485]: 2026-01-27 14:31:25.187164581 +0000 UTC m=+1.245475289 container remove 694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kowalevski, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:31:25 np0005597378 systemd[1]: libpod-conmon-694e6bb9562339a36e25b96ede77d9301a725930f8f3336ceb0be480c8bcdc01.scope: Deactivated successfully.
Jan 27 09:31:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:31:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:31:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:31:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:31:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:31:26 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:31:26 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:31:26 np0005597378 nova_compute[238941]: 2026-01-27 14:31:26.844 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:27 np0005597378 nova_compute[238941]: 2026-01-27 14:31:27.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:31:27 np0005597378 nova_compute[238941]: 2026-01-27 14:31:27.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5957432969392584e-05 of space, bias 1.0, pg target 0.004787229890817775 quantized to 32 (current 32)
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697820004071157 of space, bias 1.0, pg target 0.2009346001221347 quantized to 32 (current 32)
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0490759032122082e-06 of space, bias 4.0, pg target 0.0012588910838546498 quantized to 16 (current 16)
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:31:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:31:28 np0005597378 nova_compute[238941]: 2026-01-27 14:31:28.401 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:31:28 np0005597378 nova_compute[238941]: 2026-01-27 14:31:28.401 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:31:28 np0005597378 nova_compute[238941]: 2026-01-27 14:31:28.412 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:31:28 np0005597378 nova_compute[238941]: 2026-01-27 14:31:28.444 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:28 np0005597378 nova_compute[238941]: 2026-01-27 14:31:28.508 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 27 09:31:28 np0005597378 nova_compute[238941]: 2026-01-27 14:31:28.656 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:31:28 np0005597378 nova_compute[238941]: 2026-01-27 14:31:28.657 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:31:28 np0005597378 nova_compute[238941]: 2026-01-27 14:31:28.664 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 27 09:31:28 np0005597378 nova_compute[238941]: 2026-01-27 14:31:28.665 238945 INFO nova.compute.claims [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 27 09:31:28 np0005597378 nova_compute[238941]: 2026-01-27 14:31:28.884 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:31:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:31:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2557784195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:31:29 np0005597378 nova_compute[238941]: 2026-01-27 14:31:29.522 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:31:29 np0005597378 nova_compute[238941]: 2026-01-27 14:31:29.531 238945 DEBUG nova.compute.provider_tree [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:31:29 np0005597378 nova_compute[238941]: 2026-01-27 14:31:29.563 238945 DEBUG nova.scheduler.client.report [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:31:29 np0005597378 nova_compute[238941]: 2026-01-27 14:31:29.666 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:31:29 np0005597378 nova_compute[238941]: 2026-01-27 14:31:29.667 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 27 09:31:29 np0005597378 nova_compute[238941]: 2026-01-27 14:31:29.755 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 27 09:31:29 np0005597378 nova_compute[238941]: 2026-01-27 14:31:29.756 238945 DEBUG nova.network.neutron [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 27 09:31:29 np0005597378 nova_compute[238941]: 2026-01-27 14:31:29.859 238945 INFO nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 27 09:31:29 np0005597378 nova_compute[238941]: 2026-01-27 14:31:29.964 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.000 238945 DEBUG nova.network.neutron [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.000 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.105 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.107 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.107 238945 INFO nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Creating image(s)#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.134 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.158 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.180 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.184 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.281 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.283 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.284 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.284 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "285e7430fe92ea66e9eadd94d86f83f43a584b0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.315 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 27 09:31:30 np0005597378 nova_compute[238941]: 2026-01-27 14:31:30.320 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:31:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:31:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.457 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.539 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] resizing rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.795 238945 DEBUG nova.objects.instance [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lazy-loading 'migration_context' on Instance uuid af4d7d14-2c7f-4aa7-b66c-da3512878ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.817 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.818 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Ensure instance console log exists: /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.818 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.819 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.819 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.821 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'deec719f-9679-4d33-adfe-db01148e4a56'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.827 238945 WARNING nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.833 238945 DEBUG nova.virt.libvirt.host [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.834 238945 DEBUG nova.virt.libvirt.host [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.837 238945 DEBUG nova.virt.libvirt.host [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.837 238945 DEBUG nova.virt.libvirt.host [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.838 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.838 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T13:38:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2c4f8bd6-d183-4572-a6f5-6353e4694398',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T13:38:08Z,direct_url=<?>,disk_format='qcow2',id=deec719f-9679-4d33-adfe-db01148e4a56,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b2d057bb74245b8be119fa9985925d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T13:38:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.839 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.839 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.839 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.840 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.840 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.840 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.840 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.841 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.841 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.841 238945 DEBUG nova.virt.hardware [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.844 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:31:31 np0005597378 nova_compute[238941]: 2026-01-27 14:31:31.878 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:31:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:31:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1700492135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:31:32 np0005597378 nova_compute[238941]: 2026-01-27 14:31:32.548 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.704s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:31:32 np0005597378 nova_compute[238941]: 2026-01-27 14:31:32.577 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:31:32 np0005597378 nova_compute[238941]: 2026-01-27 14:31:32.583 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:31:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 27 09:31:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213299692' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 27 09:31:33 np0005597378 nova_compute[238941]: 2026-01-27 14:31:33.239 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:31:33 np0005597378 nova_compute[238941]: 2026-01-27 14:31:33.241 238945 DEBUG nova.objects.instance [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lazy-loading 'pci_devices' on Instance uuid af4d7d14-2c7f-4aa7-b66c-da3512878ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 09:31:33 np0005597378 nova_compute[238941]: 2026-01-27 14:31:33.257 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  <uuid>af4d7d14-2c7f-4aa7-b66c-da3512878ac1</uuid>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  <name>instance-0000009a</name>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  <memory>131072</memory>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  <vcpu>1</vcpu>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  <metadata>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <nova:name>tempest-AggregatesAdminTestJSON-server-39503680</nova:name>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <nova:creationTime>2026-01-27 14:31:31</nova:creationTime>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <nova:flavor name="m1.nano">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:        <nova:memory>128</nova:memory>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:        <nova:disk>1</nova:disk>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:        <nova:swap>0</nova:swap>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:        <nova:ephemeral>0</nova:ephemeral>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:        <nova:vcpus>1</nova:vcpus>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      </nova:flavor>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <nova:owner>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:        <nova:user uuid="fb634d48c75a441ea60491ec31b4bc44">tempest-AggregatesAdminTestJSON-1181524138-project-member</nova:user>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:        <nova:project uuid="bb645826f58740318309ea0ff8a3c3fd">tempest-AggregatesAdminTestJSON-1181524138</nova:project>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      </nova:owner>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <nova:root type="image" uuid="deec719f-9679-4d33-adfe-db01148e4a56"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <nova:ports/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    </nova:instance>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  </metadata>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  <sysinfo type="smbios">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <system>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <entry name="manufacturer">RDO</entry>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <entry name="product">OpenStack Compute</entry>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <entry name="serial">af4d7d14-2c7f-4aa7-b66c-da3512878ac1</entry>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <entry name="uuid">af4d7d14-2c7f-4aa7-b66c-da3512878ac1</entry>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <entry name="family">Virtual Machine</entry>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    </system>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  </sysinfo>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  <os>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <boot dev="hd"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <smbios mode="sysinfo"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  </os>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  <features>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <acpi/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <apic/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <vmcoreinfo/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  </features>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  <clock offset="utc">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <timer name="pit" tickpolicy="delay"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <timer name="hpet" present="no"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  </clock>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  <cpu mode="host-model" match="exact">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <topology sockets="1" cores="1" threads="1"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  </cpu>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  <devices>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <disk type="network" device="disk">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <target dev="vda" bus="virtio"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <disk type="network" device="cdrom">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <driver type="raw" cache="none"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <source protocol="rbd" name="vms/af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk.config">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:        <host name="192.168.122.100" port="6789"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      </source>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <auth username="openstack">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:        <secret type="ceph" uuid="4d8fd694-f443-5fb1-b612-70034b2f3c6e"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      </auth>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <target dev="sda" bus="sata"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    </disk>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <serial type="pty">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <log file="/var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/console.log" append="off"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    </serial>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <video>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <model type="virtio"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    </video>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <input type="tablet" bus="usb"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <rng model="virtio">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <backend model="random">/dev/urandom</backend>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    </rng>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="pci" model="pcie-root-port"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <controller type="usb" index="0"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    <memballoon model="virtio">
Jan 27 09:31:33 np0005597378 nova_compute[238941]:      <stats period="10"/>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:    </memballoon>
Jan 27 09:31:33 np0005597378 nova_compute[238941]:  </devices>
Jan 27 09:31:33 np0005597378 nova_compute[238941]: </domain>
Jan 27 09:31:33 np0005597378 nova_compute[238941]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 09:31:33 np0005597378 ovn_controller[144812]: 2026-01-27T14:31:33Z|01647|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Jan 27 09:31:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 52 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 478 KiB/s wr, 11 op/s
Jan 27 09:31:33 np0005597378 nova_compute[238941]: 2026-01-27 14:31:33.448 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:31:34 np0005597378 nova_compute[238941]: 2026-01-27 14:31:34.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:31:35 np0005597378 nova_compute[238941]: 2026-01-27 14:31:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:31:35 np0005597378 nova_compute[238941]: 2026-01-27 14:31:35.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 09:31:35 np0005597378 nova_compute[238941]: 2026-01-27 14:31:35.398 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 09:31:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:31:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:31:36 np0005597378 nova_compute[238941]: 2026-01-27 14:31:36.882 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:31:37 np0005597378 nova_compute[238941]: 2026-01-27 14:31:37.398 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:31:37 np0005597378 nova_compute[238941]: 2026-01-27 14:31:37.398 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:31:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:31:38 np0005597378 nova_compute[238941]: 2026-01-27 14:31:38.449 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:31:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:31:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:31:41 np0005597378 nova_compute[238941]: 2026-01-27 14:31:41.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:31:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:31:41 np0005597378 nova_compute[238941]: 2026-01-27 14:31:41.884 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:31:43 np0005597378 nova_compute[238941]: 2026-01-27 14:31:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:31:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 27 09:31:43 np0005597378 nova_compute[238941]: 2026-01-27 14:31:43.423 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 09:31:43 np0005597378 nova_compute[238941]: 2026-01-27 14:31:43.423 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 09:31:43 np0005597378 nova_compute[238941]: 2026-01-27 14:31:43.424 238945 INFO nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Using config drive
Jan 27 09:31:43 np0005597378 nova_compute[238941]: 2026-01-27 14:31:43.457 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:31:43 np0005597378 nova_compute[238941]: 2026-01-27 14:31:43.464 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:31:43 np0005597378 nova_compute[238941]: 2026-01-27 14:31:43.789 238945 INFO nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Creating config drive at /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/disk.config
Jan 27 09:31:43 np0005597378 nova_compute[238941]: 2026-01-27 14:31:43.794 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn74k_pv9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:31:43 np0005597378 nova_compute[238941]: 2026-01-27 14:31:43.944 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn74k_pv9" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:31:43 np0005597378 nova_compute[238941]: 2026-01-27 14:31:43.971 238945 DEBUG nova.storage.rbd_utils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] rbd image af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 27 09:31:43 np0005597378 nova_compute[238941]: 2026-01-27 14:31:43.974 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/disk.config af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:31:44 np0005597378 nova_compute[238941]: 2026-01-27 14:31:44.261 238945 DEBUG oslo_concurrency.processutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/disk.config af4d7d14-2c7f-4aa7-b66c-da3512878ac1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:31:44 np0005597378 nova_compute[238941]: 2026-01-27 14:31:44.263 238945 INFO nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Deleting local config drive /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1/disk.config because it was imported into RBD.
Jan 27 09:31:44 np0005597378 systemd-machined[207425]: New machine qemu-187-instance-0000009a.
Jan 27 09:31:44 np0005597378 systemd[1]: Started Virtual Machine qemu-187-instance-0000009a.
Jan 27 09:31:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.3 MiB/s wr, 16 op/s
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.530 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524305.529889, af4d7d14-2c7f-4aa7-b66c-da3512878ac1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.532 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] VM Resumed (Lifecycle Event)
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.535 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.536 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.540 238945 INFO nova.virt.libvirt.driver [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Instance spawned successfully.
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.540 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.635 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.638 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.722 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.723 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.724 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.724 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.724 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.725 238945 DEBUG nova.virt.libvirt.driver [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.729 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.730 238945 DEBUG nova.virt.driver [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] Emitting event <LifecycleEvent: 1769524305.5300288, af4d7d14-2c7f-4aa7-b66c-da3512878ac1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.730 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] VM Started (Lifecycle Event)
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.831 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.835 238945 DEBUG nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.899 238945 INFO nova.compute.manager [None req-72d72d71-56fb-4691-a1bd-9364342bcc27 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.946 238945 INFO nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Took 15.84 seconds to spawn the instance on the hypervisor.#033[00m
Jan 27 09:31:45 np0005597378 nova_compute[238941]: 2026-01-27 14:31:45.947 238945 DEBUG nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:31:46 np0005597378 nova_compute[238941]: 2026-01-27 14:31:46.025 238945 INFO nova.compute.manager [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Took 17.40 seconds to build instance.#033[00m
Jan 27 09:31:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:31:46 np0005597378 nova_compute[238941]: 2026-01-27 14:31:46.119 238945 DEBUG oslo_concurrency.lockutils [None req-8bf5228f-0ac0-4077-af3e-e4aeafc88afc fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:31:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:31:46.339 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:31:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:31:46.339 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:31:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:31:46.339 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:31:46 np0005597378 nova_compute[238941]: 2026-01-27 14:31:46.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Jan 27 09:31:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:31:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:31:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:31:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:31:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:31:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.169 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.170 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.170 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.171 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.171 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.172 238945 INFO nova.compute.manager [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Terminating instance#033[00m
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.173 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "refresh_cache-af4d7d14-2c7f-4aa7-b66c-da3512878ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.173 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquired lock "refresh_cache-af4d7d14-2c7f-4aa7-b66c-da3512878ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.174 238945 DEBUG nova.network.neutron [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.357 238945 DEBUG nova.network.neutron [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.451 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.774 238945 DEBUG nova.network.neutron [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.795 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Releasing lock "refresh_cache-af4d7d14-2c7f-4aa7-b66c-da3512878ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 27 09:31:48 np0005597378 nova_compute[238941]: 2026-01-27 14:31:48.795 238945 DEBUG nova.compute.manager [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 27 09:31:48 np0005597378 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Jan 27 09:31:48 np0005597378 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Consumed 4.536s CPU time.
Jan 27 09:31:48 np0005597378 systemd-machined[207425]: Machine qemu-187-instance-0000009a terminated.
Jan 27 09:31:49 np0005597378 nova_compute[238941]: 2026-01-27 14:31:49.020 238945 INFO nova.virt.libvirt.driver [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Instance destroyed successfully.#033[00m
Jan 27 09:31:49 np0005597378 nova_compute[238941]: 2026-01-27 14:31:49.021 238945 DEBUG nova.objects.instance [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lazy-loading 'resources' on Instance uuid af4d7d14-2c7f-4aa7-b66c-da3512878ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 27 09:31:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 42 op/s
Jan 27 09:31:50 np0005597378 podman[381015]: 2026-01-27 14:31:50.723090458 +0000 UTC m=+0.062387393 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 09:31:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:31:51 np0005597378 nova_compute[238941]: 2026-01-27 14:31:51.395 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:31:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 79 op/s
Jan 27 09:31:51 np0005597378 nova_compute[238941]: 2026-01-27 14:31:51.887 238945 INFO nova.virt.libvirt.driver [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Deleting instance files /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1_del#033[00m
Jan 27 09:31:51 np0005597378 nova_compute[238941]: 2026-01-27 14:31:51.888 238945 INFO nova.virt.libvirt.driver [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Deletion of /var/lib/nova/instances/af4d7d14-2c7f-4aa7-b66c-da3512878ac1_del complete#033[00m
Jan 27 09:31:51 np0005597378 nova_compute[238941]: 2026-01-27 14:31:51.890 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:51 np0005597378 nova_compute[238941]: 2026-01-27 14:31:51.961 238945 INFO nova.compute.manager [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Took 3.16 seconds to destroy the instance on the hypervisor.#033[00m
Jan 27 09:31:51 np0005597378 nova_compute[238941]: 2026-01-27 14:31:51.961 238945 DEBUG oslo.service.loopingcall [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 27 09:31:51 np0005597378 nova_compute[238941]: 2026-01-27 14:31:51.962 238945 DEBUG nova.compute.manager [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 27 09:31:51 np0005597378 nova_compute[238941]: 2026-01-27 14:31:51.962 238945 DEBUG nova.network.neutron [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 27 09:31:52 np0005597378 nova_compute[238941]: 2026-01-27 14:31:52.190 238945 DEBUG nova.network.neutron [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 27 09:31:52 np0005597378 nova_compute[238941]: 2026-01-27 14:31:52.209 238945 DEBUG nova.network.neutron [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 27 09:31:52 np0005597378 nova_compute[238941]: 2026-01-27 14:31:52.224 238945 INFO nova.compute.manager [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Took 0.26 seconds to deallocate network for instance.#033[00m
Jan 27 09:31:52 np0005597378 nova_compute[238941]: 2026-01-27 14:31:52.307 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:31:52 np0005597378 nova_compute[238941]: 2026-01-27 14:31:52.308 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:31:52 np0005597378 nova_compute[238941]: 2026-01-27 14:31:52.361 238945 DEBUG oslo_concurrency.processutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:31:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:31:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784553942' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:31:53 np0005597378 nova_compute[238941]: 2026-01-27 14:31:53.018 238945 DEBUG oslo_concurrency.processutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:31:53 np0005597378 nova_compute[238941]: 2026-01-27 14:31:53.040 238945 DEBUG nova.compute.provider_tree [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:31:53 np0005597378 nova_compute[238941]: 2026-01-27 14:31:53.060 238945 DEBUG nova.scheduler.client.report [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:31:53 np0005597378 nova_compute[238941]: 2026-01-27 14:31:53.083 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:31:53 np0005597378 nova_compute[238941]: 2026-01-27 14:31:53.111 238945 INFO nova.scheduler.client.report [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Deleted allocations for instance af4d7d14-2c7f-4aa7-b66c-da3512878ac1#033[00m
Jan 27 09:31:53 np0005597378 nova_compute[238941]: 2026-01-27 14:31:53.190 238945 DEBUG oslo_concurrency.lockutils [None req-e29999a1-5bba-4ec4-a260-e43c7d2fe0a7 fb634d48c75a441ea60491ec31b4bc44 bb645826f58740318309ea0ff8a3c3fd - - default default] Lock "af4d7d14-2c7f-4aa7-b66c-da3512878ac1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:31:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 77 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 95 op/s
Jan 27 09:31:53 np0005597378 nova_compute[238941]: 2026-01-27 14:31:53.455 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:53 np0005597378 podman[381055]: 2026-01-27 14:31:53.753489756 +0000 UTC m=+0.093819172 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 09:31:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 27 09:31:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:31:56 np0005597378 nova_compute[238941]: 2026-01-27 14:31:56.891 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Jan 27 09:31:58 np0005597378 nova_compute[238941]: 2026-01-27 14:31:58.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:31:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Jan 27 09:31:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:31:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4059195204' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:31:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:31:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4059195204' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:32:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:32:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 943 KiB/s rd, 1.2 KiB/s wr, 57 op/s
Jan 27 09:32:01 np0005597378 nova_compute[238941]: 2026-01-27 14:32:01.894 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Jan 27 09:32:03 np0005597378 nova_compute[238941]: 2026-01-27 14:32:03.474 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:04 np0005597378 nova_compute[238941]: 2026-01-27 14:32:04.019 238945 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769524309.0182176, af4d7d14-2c7f-4aa7-b66c-da3512878ac1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 27 09:32:04 np0005597378 nova_compute[238941]: 2026-01-27 14:32:04.020 238945 INFO nova.compute.manager [-] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] VM Stopped (Lifecycle Event)#033[00m
Jan 27 09:32:04 np0005597378 nova_compute[238941]: 2026-01-27 14:32:04.149 238945 DEBUG nova.compute.manager [None req-e80c9166-afe8-4da1-a9ca-1d6b66347e02 - - - - - -] [instance: af4d7d14-2c7f-4aa7-b66c-da3512878ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 27 09:32:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 4 op/s
Jan 27 09:32:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:32:06 np0005597378 nova_compute[238941]: 2026-01-27 14:32:06.896 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:08 np0005597378 nova_compute[238941]: 2026-01-27 14:32:08.476 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:10 np0005597378 nova_compute[238941]: 2026-01-27 14:32:10.827 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:32:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:32:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:11 np0005597378 nova_compute[238941]: 2026-01-27 14:32:11.897 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:32:12.893 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:32:12 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:32:12.895 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:32:12 np0005597378 nova_compute[238941]: 2026-01-27 14:32:12.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:13 np0005597378 nova_compute[238941]: 2026-01-27 14:32:13.480 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:13 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:32:13.896 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:32:15 np0005597378 nova_compute[238941]: 2026-01-27 14:32:15.384 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:32:15 np0005597378 nova_compute[238941]: 2026-01-27 14:32:15.417 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:32:15 np0005597378 nova_compute[238941]: 2026-01-27 14:32:15.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:32:15 np0005597378 nova_compute[238941]: 2026-01-27 14:32:15.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:32:15 np0005597378 nova_compute[238941]: 2026-01-27 14:32:15.418 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:32:15 np0005597378 nova_compute[238941]: 2026-01-27 14:32:15.418 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:32:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:32:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661429069' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:32:15 np0005597378 nova_compute[238941]: 2026-01-27 14:32:15.997 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:32:16 np0005597378 nova_compute[238941]: 2026-01-27 14:32:16.151 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:32:16 np0005597378 nova_compute[238941]: 2026-01-27 14:32:16.152 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3481MB free_disk=59.987322613596916GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:32:16 np0005597378 nova_compute[238941]: 2026-01-27 14:32:16.153 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:32:16 np0005597378 nova_compute[238941]: 2026-01-27 14:32:16.153 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:32:16 np0005597378 nova_compute[238941]: 2026-01-27 14:32:16.209 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:32:16 np0005597378 nova_compute[238941]: 2026-01-27 14:32:16.209 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:32:16 np0005597378 nova_compute[238941]: 2026-01-27 14:32:16.223 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:32:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:32:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:32:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280265649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:32:16 np0005597378 nova_compute[238941]: 2026-01-27 14:32:16.820 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:32:16 np0005597378 nova_compute[238941]: 2026-01-27 14:32:16.830 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:32:16 np0005597378 nova_compute[238941]: 2026-01-27 14:32:16.855 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:32:16 np0005597378 nova_compute[238941]: 2026-01-27 14:32:16.886 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:32:16 np0005597378 nova_compute[238941]: 2026-01-27 14:32:16.887 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:32:16 np0005597378 nova_compute[238941]: 2026-01-27 14:32:16.900 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:32:17
Jan 27 09:32:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:32:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:32:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'images', '.rgw.root', 'default.rgw.control', 'volumes', 'backups']
Jan 27 09:32:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:32:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:32:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:32:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:32:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:32:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:32:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:32:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:32:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:32:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:32:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:32:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:32:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:32:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:32:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:32:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:32:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:32:18 np0005597378 nova_compute[238941]: 2026-01-27 14:32:18.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:18 np0005597378 nova_compute[238941]: 2026-01-27 14:32:18.885 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:32:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:20 np0005597378 nova_compute[238941]: 2026-01-27 14:32:20.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:32:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:32:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:21 np0005597378 podman[381127]: 2026-01-27 14:32:21.70927763 +0000 UTC m=+0.050459692 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:32:21 np0005597378 nova_compute[238941]: 2026-01-27 14:32:21.802 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:32:21 np0005597378 nova_compute[238941]: 2026-01-27 14:32:21.904 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:22 np0005597378 nova_compute[238941]: 2026-01-27 14:32:22.400 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:32:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:23 np0005597378 nova_compute[238941]: 2026-01-27 14:32:23.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:24 np0005597378 podman[381147]: 2026-01-27 14:32:24.839395805 +0000 UTC m=+0.182805751 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:32:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:32:26 np0005597378 nova_compute[238941]: 2026-01-27 14:32:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:32:26 np0005597378 nova_compute[238941]: 2026-01-27 14:32:26.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:32:26 np0005597378 nova_compute[238941]: 2026-01-27 14:32:26.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:32:26 np0005597378 nova_compute[238941]: 2026-01-27 14:32:26.407 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:32:26 np0005597378 podman[381316]: 2026-01-27 14:32:26.768566372 +0000 UTC m=+0.086407922 container create 6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shannon, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:32:26 np0005597378 podman[381316]: 2026-01-27 14:32:26.703170148 +0000 UTC m=+0.021011718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:32:26 np0005597378 systemd[1]: Started libpod-conmon-6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8.scope.
Jan 27 09:32:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:32:26 np0005597378 nova_compute[238941]: 2026-01-27 14:32:26.906 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:32:26 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:32:27 np0005597378 podman[381316]: 2026-01-27 14:32:27.034188715 +0000 UTC m=+0.352030295 container init 6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:32:27 np0005597378 podman[381316]: 2026-01-27 14:32:27.042348385 +0000 UTC m=+0.360189935 container start 6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 09:32:27 np0005597378 recursing_shannon[381333]: 167 167
Jan 27 09:32:27 np0005597378 systemd[1]: libpod-6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8.scope: Deactivated successfully.
Jan 27 09:32:27 np0005597378 conmon[381333]: conmon 6ec1bad600fa09aa6b4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8.scope/container/memory.events
Jan 27 09:32:27 np0005597378 podman[381316]: 2026-01-27 14:32:27.180257265 +0000 UTC m=+0.498098825 container attach 6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:32:27 np0005597378 podman[381316]: 2026-01-27 14:32:27.181850168 +0000 UTC m=+0.499691718 container died 6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:27 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e4e9e2a23b260f1ee3caee5f0fb6bb137fa8a6e51ecad8cd8025b103c6d4070a-merged.mount: Deactivated successfully.
Jan 27 09:32:27 np0005597378 podman[381316]: 2026-01-27 14:32:27.825896297 +0000 UTC m=+1.143737847 container remove 6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 09:32:27 np0005597378 systemd[1]: libpod-conmon-6ec1bad600fa09aa6b4a5ff904959f77d78ff1068151982bc54f372210de4cb8.scope: Deactivated successfully.
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.598039455554165e-05 of space, bias 1.0, pg target 0.004794118366662495 quantized to 32 (current 32)
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697960506001278 of space, bias 1.0, pg target 0.20093881518003834 quantized to 32 (current 32)
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0482996494546334e-06 of space, bias 4.0, pg target 0.0012579595793455601 quantized to 16 (current 16)
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:32:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:32:27 np0005597378 podman[381357]: 2026-01-27 14:32:27.998021219 +0000 UTC m=+0.055494478 container create 2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_galileo, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:32:28 np0005597378 systemd[1]: Started libpod-conmon-2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5.scope.
Jan 27 09:32:28 np0005597378 podman[381357]: 2026-01-27 14:32:27.967728332 +0000 UTC m=+0.025201611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:32:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:32:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134c6989ea9aacdfef800495bbe33922a38f071debf27637721d6cf1f53da1cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:32:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134c6989ea9aacdfef800495bbe33922a38f071debf27637721d6cf1f53da1cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:32:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134c6989ea9aacdfef800495bbe33922a38f071debf27637721d6cf1f53da1cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:32:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134c6989ea9aacdfef800495bbe33922a38f071debf27637721d6cf1f53da1cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:32:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134c6989ea9aacdfef800495bbe33922a38f071debf27637721d6cf1f53da1cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:32:28 np0005597378 podman[381357]: 2026-01-27 14:32:28.123679427 +0000 UTC m=+0.181152716 container init 2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:32:28 np0005597378 podman[381357]: 2026-01-27 14:32:28.132550117 +0000 UTC m=+0.190023376 container start 2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:32:28 np0005597378 podman[381357]: 2026-01-27 14:32:28.158931369 +0000 UTC m=+0.216404628 container attach 2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:32:28 np0005597378 nova_compute[238941]: 2026-01-27 14:32:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:32:28 np0005597378 nova_compute[238941]: 2026-01-27 14:32:28.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:28 np0005597378 intelligent_galileo[381373]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:32:28 np0005597378 intelligent_galileo[381373]: --> All data devices are unavailable
Jan 27 09:32:28 np0005597378 systemd[1]: libpod-2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5.scope: Deactivated successfully.
Jan 27 09:32:28 np0005597378 podman[381357]: 2026-01-27 14:32:28.68924485 +0000 UTC m=+0.746718119 container died 2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 09:32:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-134c6989ea9aacdfef800495bbe33922a38f071debf27637721d6cf1f53da1cb-merged.mount: Deactivated successfully.
Jan 27 09:32:29 np0005597378 podman[381357]: 2026-01-27 14:32:29.363939605 +0000 UTC m=+1.421412864 container remove 2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Jan 27 09:32:29 np0005597378 systemd[1]: libpod-conmon-2df04b8e17cc80917b674d67c35d3ac0e3e43f2d2cebb7641cdc63399fce81f5.scope: Deactivated successfully.
Jan 27 09:32:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:29 np0005597378 podman[381469]: 2026-01-27 14:32:29.83876079 +0000 UTC m=+0.019913158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:32:30 np0005597378 podman[381469]: 2026-01-27 14:32:30.036615797 +0000 UTC m=+0.217768145 container create a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_mclaren, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:32:30 np0005597378 systemd[1]: Started libpod-conmon-a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542.scope.
Jan 27 09:32:30 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:32:30 np0005597378 podman[381469]: 2026-01-27 14:32:30.262222211 +0000 UTC m=+0.443374649 container init a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:32:30 np0005597378 podman[381469]: 2026-01-27 14:32:30.272871107 +0000 UTC m=+0.454023495 container start a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_mclaren, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:32:30 np0005597378 zen_mclaren[381486]: 167 167
Jan 27 09:32:30 np0005597378 systemd[1]: libpod-a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542.scope: Deactivated successfully.
Jan 27 09:32:30 np0005597378 podman[381469]: 2026-01-27 14:32:30.28184484 +0000 UTC m=+0.462997188 container attach a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:32:30 np0005597378 podman[381469]: 2026-01-27 14:32:30.282365343 +0000 UTC m=+0.463517701 container died a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:32:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0cb7c381007f8d1530721c0b718ea6db10f5a51146cf074e9a86ba520b5776d8-merged.mount: Deactivated successfully.
Jan 27 09:32:30 np0005597378 podman[381469]: 2026-01-27 14:32:30.473075046 +0000 UTC m=+0.654227394 container remove a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_mclaren, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:32:30 np0005597378 systemd[1]: libpod-conmon-a102ba30c9b3282618e60a6fb9a6f363b7e80c370fc92783eaa6444a79a8d542.scope: Deactivated successfully.
Jan 27 09:32:30 np0005597378 podman[381512]: 2026-01-27 14:32:30.727556549 +0000 UTC m=+0.080628665 container create 319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:32:30 np0005597378 podman[381512]: 2026-01-27 14:32:30.671751944 +0000 UTC m=+0.024824070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:32:30 np0005597378 systemd[1]: Started libpod-conmon-319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a.scope.
Jan 27 09:32:30 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:32:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a02a3867c77c9b4c1172ab654f5b2c720ba04e502060ff78b1dce3fa1e016a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:32:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a02a3867c77c9b4c1172ab654f5b2c720ba04e502060ff78b1dce3fa1e016a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:32:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a02a3867c77c9b4c1172ab654f5b2c720ba04e502060ff78b1dce3fa1e016a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:32:30 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a02a3867c77c9b4c1172ab654f5b2c720ba04e502060ff78b1dce3fa1e016a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:32:31 np0005597378 podman[381512]: 2026-01-27 14:32:31.082509813 +0000 UTC m=+0.435581949 container init 319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:32:31 np0005597378 podman[381512]: 2026-01-27 14:32:31.089895911 +0000 UTC m=+0.442968017 container start 319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_allen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:32:31 np0005597378 podman[381512]: 2026-01-27 14:32:31.12467563 +0000 UTC m=+0.477747756 container attach 319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_allen, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:32:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:32:31 np0005597378 charming_allen[381528]: {
Jan 27 09:32:31 np0005597378 charming_allen[381528]:    "0": [
Jan 27 09:32:31 np0005597378 charming_allen[381528]:        {
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "devices": [
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "/dev/loop3"
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            ],
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_name": "ceph_lv0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_size": "21470642176",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "name": "ceph_lv0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "tags": {
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.cluster_name": "ceph",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.crush_device_class": "",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.encrypted": "0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.objectstore": "bluestore",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.osd_id": "0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.type": "block",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.vdo": "0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.with_tpm": "0"
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            },
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "type": "block",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "vg_name": "ceph_vg0"
Jan 27 09:32:31 np0005597378 charming_allen[381528]:        }
Jan 27 09:32:31 np0005597378 charming_allen[381528]:    ],
Jan 27 09:32:31 np0005597378 charming_allen[381528]:    "1": [
Jan 27 09:32:31 np0005597378 charming_allen[381528]:        {
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "devices": [
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "/dev/loop4"
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            ],
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_name": "ceph_lv1",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_size": "21470642176",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "name": "ceph_lv1",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "tags": {
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.cluster_name": "ceph",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.crush_device_class": "",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.encrypted": "0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.objectstore": "bluestore",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.osd_id": "1",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.type": "block",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.vdo": "0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.with_tpm": "0"
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            },
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "type": "block",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "vg_name": "ceph_vg1"
Jan 27 09:32:31 np0005597378 charming_allen[381528]:        }
Jan 27 09:32:31 np0005597378 charming_allen[381528]:    ],
Jan 27 09:32:31 np0005597378 charming_allen[381528]:    "2": [
Jan 27 09:32:31 np0005597378 charming_allen[381528]:        {
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "devices": [
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "/dev/loop5"
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            ],
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_name": "ceph_lv2",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_size": "21470642176",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "name": "ceph_lv2",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "tags": {
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.cluster_name": "ceph",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.crush_device_class": "",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.encrypted": "0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.objectstore": "bluestore",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.osd_id": "2",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.type": "block",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.vdo": "0",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:                "ceph.with_tpm": "0"
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            },
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "type": "block",
Jan 27 09:32:31 np0005597378 charming_allen[381528]:            "vg_name": "ceph_vg2"
Jan 27 09:32:31 np0005597378 charming_allen[381528]:        }
Jan 27 09:32:31 np0005597378 charming_allen[381528]:    ]
Jan 27 09:32:31 np0005597378 charming_allen[381528]: }
Jan 27 09:32:31 np0005597378 systemd[1]: libpod-319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a.scope: Deactivated successfully.
Jan 27 09:32:31 np0005597378 podman[381512]: 2026-01-27 14:32:31.373040017 +0000 UTC m=+0.726112193 container died 319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_allen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:32:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-08a02a3867c77c9b4c1172ab654f5b2c720ba04e502060ff78b1dce3fa1e016a-merged.mount: Deactivated successfully.
Jan 27 09:32:31 np0005597378 nova_compute[238941]: 2026-01-27 14:32:31.909 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:32 np0005597378 podman[381512]: 2026-01-27 14:32:32.053421967 +0000 UTC m=+1.406494073 container remove 319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_allen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:32:32 np0005597378 systemd[1]: libpod-conmon-319922628beb98a1f749b96bfd8b2ab19a992eb10a2db49ada563f6d35c63e4a.scope: Deactivated successfully.
Jan 27 09:32:32 np0005597378 podman[381612]: 2026-01-27 14:32:32.52484862 +0000 UTC m=+0.027801040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:32:32 np0005597378 podman[381612]: 2026-01-27 14:32:32.618699531 +0000 UTC m=+0.121651941 container create 2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 09:32:32 np0005597378 systemd[1]: Started libpod-conmon-2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1.scope.
Jan 27 09:32:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:32:32 np0005597378 podman[381612]: 2026-01-27 14:32:32.842119676 +0000 UTC m=+0.345072186 container init 2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:32:32 np0005597378 podman[381612]: 2026-01-27 14:32:32.851570621 +0000 UTC m=+0.354523021 container start 2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:32:32 np0005597378 angry_black[381629]: 167 167
Jan 27 09:32:32 np0005597378 systemd[1]: libpod-2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1.scope: Deactivated successfully.
Jan 27 09:32:32 np0005597378 podman[381612]: 2026-01-27 14:32:32.877072499 +0000 UTC m=+0.380024919 container attach 2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:32:32 np0005597378 podman[381612]: 2026-01-27 14:32:32.877768578 +0000 UTC m=+0.380720978 container died 2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 09:32:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b0cfd1da0f1b6e92fb84bc6bde2f24966ffd0cf6c9c0370f0fb1e2591ee6dfd6-merged.mount: Deactivated successfully.
Jan 27 09:32:33 np0005597378 podman[381612]: 2026-01-27 14:32:33.192166447 +0000 UTC m=+0.695118847 container remove 2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:32:33 np0005597378 systemd[1]: libpod-conmon-2da7d7ca2095ec1245a9ace995d16e0750f29807e284b197c090cda96d64e7c1.scope: Deactivated successfully.
Jan 27 09:32:33 np0005597378 podman[381652]: 2026-01-27 14:32:33.407099653 +0000 UTC m=+0.106104302 container create 2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wozniak, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:32:33 np0005597378 podman[381652]: 2026-01-27 14:32:33.324465075 +0000 UTC m=+0.023469754 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:32:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:33 np0005597378 nova_compute[238941]: 2026-01-27 14:32:33.487 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:33 np0005597378 systemd[1]: Started libpod-conmon-2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73.scope.
Jan 27 09:32:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:32:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c949698589fd1eceeecba03f766d0154df40e44de4ef0785397dc842c8fcd718/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:32:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c949698589fd1eceeecba03f766d0154df40e44de4ef0785397dc842c8fcd718/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:32:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c949698589fd1eceeecba03f766d0154df40e44de4ef0785397dc842c8fcd718/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:32:33 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c949698589fd1eceeecba03f766d0154df40e44de4ef0785397dc842c8fcd718/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:32:33 np0005597378 podman[381652]: 2026-01-27 14:32:33.619048109 +0000 UTC m=+0.318052798 container init 2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wozniak, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:32:33 np0005597378 podman[381652]: 2026-01-27 14:32:33.629663786 +0000 UTC m=+0.328668435 container start 2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wozniak, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:32:33 np0005597378 podman[381652]: 2026-01-27 14:32:33.66692512 +0000 UTC m=+0.365929769 container attach 2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 09:32:34 np0005597378 lvm[381745]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:32:34 np0005597378 lvm[381748]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:32:34 np0005597378 lvm[381745]: VG ceph_vg0 finished
Jan 27 09:32:34 np0005597378 lvm[381748]: VG ceph_vg1 finished
Jan 27 09:32:34 np0005597378 lvm[381750]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:32:34 np0005597378 lvm[381750]: VG ceph_vg2 finished
Jan 27 09:32:34 np0005597378 lvm[381751]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:32:34 np0005597378 lvm[381751]: VG ceph_vg1 finished
Jan 27 09:32:34 np0005597378 lvm[381753]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:32:34 np0005597378 lvm[381753]: VG ceph_vg1 finished
Jan 27 09:32:34 np0005597378 unruffled_wozniak[381669]: {}
Jan 27 09:32:34 np0005597378 systemd[1]: libpod-2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73.scope: Deactivated successfully.
Jan 27 09:32:34 np0005597378 podman[381652]: 2026-01-27 14:32:34.47561741 +0000 UTC m=+1.174622059 container died 2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wozniak, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:32:34 np0005597378 systemd[1]: libpod-2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73.scope: Consumed 1.291s CPU time.
Jan 27 09:32:34 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c949698589fd1eceeecba03f766d0154df40e44de4ef0785397dc842c8fcd718-merged.mount: Deactivated successfully.
Jan 27 09:32:34 np0005597378 podman[381652]: 2026-01-27 14:32:34.793116132 +0000 UTC m=+1.492120811 container remove 2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wozniak, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 09:32:34 np0005597378 systemd[1]: libpod-conmon-2a6dbbbe73c48e125094992ec7a1a02254de4aefdc499b98fe73f4d7ca6acd73.scope: Deactivated successfully.
Jan 27 09:32:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:32:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:32:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:32:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:32:35 np0005597378 nova_compute[238941]: 2026-01-27 14:32:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:32:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:32:35 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:32:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:32:36 np0005597378 nova_compute[238941]: 2026-01-27 14:32:36.912 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:38 np0005597378 nova_compute[238941]: 2026-01-27 14:32:38.490 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:39 np0005597378 nova_compute[238941]: 2026-01-27 14:32:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:32:39 np0005597378 nova_compute[238941]: 2026-01-27 14:32:39.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:32:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:41 np0005597378 ovn_controller[144812]: 2026-01-27T14:32:41Z|01648|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 27 09:32:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:32:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:41 np0005597378 nova_compute[238941]: 2026-01-27 14:32:41.915 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:43 np0005597378 nova_compute[238941]: 2026-01-27 14:32:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:32:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:43 np0005597378 nova_compute[238941]: 2026-01-27 14:32:43.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:32:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:32:46.340 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:32:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:32:46.340 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:32:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:32:46.340 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:32:46 np0005597378 nova_compute[238941]: 2026-01-27 14:32:46.916 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:32:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:32:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:32:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:32:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:32:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:32:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:32:48 np0005597378 nova_compute[238941]: 2026-01-27 14:32:48.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Jan 27 09:32:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Jan 27 09:32:49 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Jan 27 09:32:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 614 B/s wr, 7 op/s
Jan 27 09:32:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:32:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 716 B/s wr, 8 op/s
Jan 27 09:32:51 np0005597378 nova_compute[238941]: 2026-01-27 14:32:51.919 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:52 np0005597378 podman[381792]: 2026-01-27 14:32:52.73307983 +0000 UTC m=+0.071045518 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 09:32:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 716 B/s wr, 9 op/s
Jan 27 09:32:53 np0005597378 nova_compute[238941]: 2026-01-27 14:32:53.533 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 27 09:32:55 np0005597378 podman[381811]: 2026-01-27 14:32:55.796456115 +0000 UTC m=+0.133429001 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 09:32:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:32:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Jan 27 09:32:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Jan 27 09:32:56 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Jan 27 09:32:56 np0005597378 nova_compute[238941]: 2026-01-27 14:32:56.922 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 970 B/s wr, 20 op/s
Jan 27 09:32:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:32:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 12K writes, 55K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1420 writes, 6164 keys, 1420 commit groups, 1.0 writes per commit group, ingest: 8.99 MB, 0.01 MB/s#012Interval WAL: 1420 writes, 1420 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     27.2      2.40              0.20        38    0.063       0      0       0.0       0.0#012  L6      1/0    7.84 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.7     71.1     60.0      5.15              0.83        37    0.139    227K    20K       0.0       0.0#012 Sum      1/0    7.84 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.7     48.5     49.6      7.55              1.03        75    0.101    227K    20K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.9     34.6     34.3      1.21              0.12         8    0.151     31K   1983       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     71.1     60.0      5.15              0.83        37    0.139    227K    20K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     27.3      2.39              0.20        37    0.065       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.1 total, 600.0 interval#012Flush(GB): cumulative 0.064, interval 0.005#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.37 GB write, 0.08 MB/s write, 0.36 GB read, 0.08 MB/s read, 7.6 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 43.06 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000455 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2691,41.40 MB,13.617%) FilterBlock(76,644.67 KB,0.207093%) IndexBlock(76,1.04 MB,0.341531%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 27 09:32:58 np0005597378 nova_compute[238941]: 2026-01-27 14:32:58.535 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:32:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 27 09:32:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:32:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3572855815' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:32:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:32:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3572855815' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:33:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:33:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 716 B/s wr, 16 op/s
Jan 27 09:33:01 np0005597378 nova_compute[238941]: 2026-01-27 14:33:01.924 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 716 B/s wr, 15 op/s
Jan 27 09:33:03 np0005597378 nova_compute[238941]: 2026-01-27 14:33:03.538 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:33:06 np0005597378 nova_compute[238941]: 2026-01-27 14:33:06.926 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:08 np0005597378 nova_compute[238941]: 2026-01-27 14:33:08.539 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:33:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:11 np0005597378 nova_compute[238941]: 2026-01-27 14:33:11.928 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:13 np0005597378 nova_compute[238941]: 2026-01-27 14:33:13.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:33:16 np0005597378 nova_compute[238941]: 2026-01-27 14:33:16.929 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:33:17
Jan 27 09:33:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:33:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:33:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'backups', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes']
Jan 27 09:33:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:33:17 np0005597378 nova_compute[238941]: 2026-01-27 14:33:17.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:33:17 np0005597378 nova_compute[238941]: 2026-01-27 14:33:17.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:33:17 np0005597378 nova_compute[238941]: 2026-01-27 14:33:17.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:33:17 np0005597378 nova_compute[238941]: 2026-01-27 14:33:17.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:33:17 np0005597378 nova_compute[238941]: 2026-01-27 14:33:17.411 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:33:17 np0005597378 nova_compute[238941]: 2026-01-27 14:33:17.411 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:33:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:33:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:33:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:33:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:33:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:33:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:33:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:33:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/324934641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.036 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.172 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.173 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3534MB free_disk=59.987321018241346GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.173 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.173 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.245 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.245 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.265 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:33:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:33:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:33:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:33:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:33:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:33:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:33:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:33:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:33:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:33:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.544 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:33:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186799720' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.837 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.842 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.857 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.858 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:33:18 np0005597378 nova_compute[238941]: 2026-01-27 14:33:18.859 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:33:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:20 np0005597378 nova_compute[238941]: 2026-01-27 14:33:20.858 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:33:20 np0005597378 nova_compute[238941]: 2026-01-27 14:33:20.859 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:33:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:33:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:21 np0005597378 nova_compute[238941]: 2026-01-27 14:33:21.932 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:22 np0005597378 nova_compute[238941]: 2026-01-27 14:33:22.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:33:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:23 np0005597378 nova_compute[238941]: 2026-01-27 14:33:23.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:23 np0005597378 podman[381881]: 2026-01-27 14:33:23.70624332 +0000 UTC m=+0.047591514 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.151857) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524404151913, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2092, "num_deletes": 254, "total_data_size": 3447439, "memory_usage": 3507312, "flush_reason": "Manual Compaction"}
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524404180614, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3390389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54453, "largest_seqno": 56544, "table_properties": {"data_size": 3380743, "index_size": 6139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19571, "raw_average_key_size": 20, "raw_value_size": 3361491, "raw_average_value_size": 3508, "num_data_blocks": 271, "num_entries": 958, "num_filter_entries": 958, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524177, "oldest_key_time": 1769524177, "file_creation_time": 1769524404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 28899 microseconds, and 8243 cpu microseconds.
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.180762) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3390389 bytes OK
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.180812) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.186759) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.186808) EVENT_LOG_v1 {"time_micros": 1769524404186796, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.186838) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3438661, prev total WAL file size 3438661, number of live WAL files 2.
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.188842) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3310KB)], [128(8031KB)]
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524404188875, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11614269, "oldest_snapshot_seqno": -1}
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7837 keys, 9958886 bytes, temperature: kUnknown
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524404256456, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 9958886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9907790, "index_size": 30375, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 203540, "raw_average_key_size": 25, "raw_value_size": 9769522, "raw_average_value_size": 1246, "num_data_blocks": 1184, "num_entries": 7837, "num_filter_entries": 7837, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.256828) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 9958886 bytes
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.262102) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.5 rd, 147.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.8 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 8361, records dropped: 524 output_compression: NoCompression
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.262157) EVENT_LOG_v1 {"time_micros": 1769524404262136, "job": 78, "event": "compaction_finished", "compaction_time_micros": 67717, "compaction_time_cpu_micros": 26353, "output_level": 6, "num_output_files": 1, "total_output_size": 9958886, "num_input_records": 8361, "num_output_records": 7837, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524404263116, "job": 78, "event": "table_file_deletion", "file_number": 130}
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524404264868, "job": 78, "event": "table_file_deletion", "file_number": 128}
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.188763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.265103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.265114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.265117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.265118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:33:24 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:24.265121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:33:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:33:26 np0005597378 nova_compute[238941]: 2026-01-27 14:33:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:33:26 np0005597378 nova_compute[238941]: 2026-01-27 14:33:26.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:33:26 np0005597378 nova_compute[238941]: 2026-01-27 14:33:26.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:33:26 np0005597378 nova_compute[238941]: 2026-01-27 14:33:26.409 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:33:26 np0005597378 podman[381900]: 2026-01-27 14:33:26.737699803 +0000 UTC m=+0.079673130 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 27 09:33:26 np0005597378 nova_compute[238941]: 2026-01-27 14:33:26.934 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:27 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6006989009276167e-05 of space, bias 1.0, pg target 0.00480209670278285 quantized to 32 (current 32)
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695198129379571 of space, bias 1.0, pg target 0.20085594388138714 quantized to 32 (current 32)
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0482996494546334e-06 of space, bias 4.0, pg target 0.0012579595793455601 quantized to 16 (current 16)
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:33:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:33:28 np0005597378 nova_compute[238941]: 2026-01-27 14:33:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:33:28 np0005597378 nova_compute[238941]: 2026-01-27 14:33:28.549 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:33:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:31 np0005597378 nova_compute[238941]: 2026-01-27 14:33:31.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:33 np0005597378 nova_compute[238941]: 2026-01-27 14:33:33.550 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:33:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:33:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:33:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:33:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:33:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:33:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:33:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:33:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:33:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:33:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:33:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:33:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:33:36 np0005597378 nova_compute[238941]: 2026-01-27 14:33:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:33:36 np0005597378 podman[382076]: 2026-01-27 14:33:36.535531524 +0000 UTC m=+0.115612129 container create 674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:33:36 np0005597378 podman[382076]: 2026-01-27 14:33:36.446171244 +0000 UTC m=+0.026251879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:33:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:33:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:33:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:33:36 np0005597378 systemd[1]: Started libpod-conmon-674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7.scope.
Jan 27 09:33:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:33:36 np0005597378 podman[382076]: 2026-01-27 14:33:36.663781663 +0000 UTC m=+0.243862308 container init 674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:33:36 np0005597378 podman[382076]: 2026-01-27 14:33:36.671166462 +0000 UTC m=+0.251247077 container start 674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_keldysh, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:33:36 np0005597378 awesome_keldysh[382093]: 167 167
Jan 27 09:33:36 np0005597378 systemd[1]: libpod-674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7.scope: Deactivated successfully.
Jan 27 09:33:36 np0005597378 podman[382076]: 2026-01-27 14:33:36.679343532 +0000 UTC m=+0.259424147 container attach 674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_keldysh, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 09:33:36 np0005597378 podman[382076]: 2026-01-27 14:33:36.679674021 +0000 UTC m=+0.259754636 container died 674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_keldysh, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:33:36 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8ac968c1499d5601369c72767d9854bc981bd1403651b222f859c89be6100f4f-merged.mount: Deactivated successfully.
Jan 27 09:33:36 np0005597378 podman[382076]: 2026-01-27 14:33:36.765620799 +0000 UTC m=+0.345701414 container remove 674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:33:36 np0005597378 systemd[1]: libpod-conmon-674ed0ab6404b394f408ab00750984ead810d7affcb96ae11951fbf4d8dfb0f7.scope: Deactivated successfully.
Jan 27 09:33:36 np0005597378 podman[382117]: 2026-01-27 14:33:36.935591553 +0000 UTC m=+0.048848069 container create ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:33:36 np0005597378 nova_compute[238941]: 2026-01-27 14:33:36.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:36 np0005597378 systemd[1]: Started libpod-conmon-ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f.scope.
Jan 27 09:33:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:33:37 np0005597378 podman[382117]: 2026-01-27 14:33:36.908949764 +0000 UTC m=+0.022206310 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:33:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a8716ad436b3a076d1dde279d3de9dfbae3f7e5b313aaf013ca3f56adacdb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:33:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a8716ad436b3a076d1dde279d3de9dfbae3f7e5b313aaf013ca3f56adacdb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:33:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a8716ad436b3a076d1dde279d3de9dfbae3f7e5b313aaf013ca3f56adacdb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:33:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a8716ad436b3a076d1dde279d3de9dfbae3f7e5b313aaf013ca3f56adacdb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:33:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a8716ad436b3a076d1dde279d3de9dfbae3f7e5b313aaf013ca3f56adacdb9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:33:37 np0005597378 podman[382117]: 2026-01-27 14:33:37.030853622 +0000 UTC m=+0.144110158 container init ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:33:37 np0005597378 podman[382117]: 2026-01-27 14:33:37.037042118 +0000 UTC m=+0.150298634 container start ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_northcutt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 09:33:37 np0005597378 podman[382117]: 2026-01-27 14:33:37.045188888 +0000 UTC m=+0.158445404 container attach ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 27 09:33:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:37 np0005597378 optimistic_northcutt[382134]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:33:37 np0005597378 optimistic_northcutt[382134]: --> All data devices are unavailable
Jan 27 09:33:37 np0005597378 systemd[1]: libpod-ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f.scope: Deactivated successfully.
Jan 27 09:33:37 np0005597378 podman[382117]: 2026-01-27 14:33:37.526813866 +0000 UTC m=+0.640070382 container died ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_northcutt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:33:37 np0005597378 systemd[1]: var-lib-containers-storage-overlay-10a8716ad436b3a076d1dde279d3de9dfbae3f7e5b313aaf013ca3f56adacdb9-merged.mount: Deactivated successfully.
Jan 27 09:33:37 np0005597378 podman[382117]: 2026-01-27 14:33:37.685357892 +0000 UTC m=+0.798614398 container remove ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_northcutt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 09:33:37 np0005597378 systemd[1]: libpod-conmon-ddd9293c307ca6ae5a77065eeb9d5e6f60a265f75ec67fe5842da4b6bb28422f.scope: Deactivated successfully.
Jan 27 09:33:38 np0005597378 podman[382228]: 2026-01-27 14:33:38.180463705 +0000 UTC m=+0.076903726 container create bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bouman, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:33:38 np0005597378 systemd[1]: Started libpod-conmon-bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc.scope.
Jan 27 09:33:38 np0005597378 podman[382228]: 2026-01-27 14:33:38.129269873 +0000 UTC m=+0.025709914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:33:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:33:38 np0005597378 podman[382228]: 2026-01-27 14:33:38.28596818 +0000 UTC m=+0.182408231 container init bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:33:38 np0005597378 podman[382228]: 2026-01-27 14:33:38.29374731 +0000 UTC m=+0.190187331 container start bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 09:33:38 np0005597378 flamboyant_bouman[382244]: 167 167
Jan 27 09:33:38 np0005597378 systemd[1]: libpod-bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc.scope: Deactivated successfully.
Jan 27 09:33:38 np0005597378 conmon[382244]: conmon bdbb84fad0b696b56633 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc.scope/container/memory.events
Jan 27 09:33:38 np0005597378 podman[382228]: 2026-01-27 14:33:38.313480142 +0000 UTC m=+0.209920193 container attach bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 09:33:38 np0005597378 podman[382228]: 2026-01-27 14:33:38.314386256 +0000 UTC m=+0.210826307 container died bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:33:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b8a5b54d09db4645700b921f0da17d4dd81b6d153b43906d44ca8b0017ae5676-merged.mount: Deactivated successfully.
Jan 27 09:33:38 np0005597378 podman[382228]: 2026-01-27 14:33:38.500889996 +0000 UTC m=+0.397330017 container remove bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bouman, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:33:38 np0005597378 systemd[1]: libpod-conmon-bdbb84fad0b696b5663382cdd24f98e0dac2ef1043da038b074466415b8500bc.scope: Deactivated successfully.
Jan 27 09:33:38 np0005597378 nova_compute[238941]: 2026-01-27 14:33:38.551 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:38 np0005597378 podman[382268]: 2026-01-27 14:33:38.73242555 +0000 UTC m=+0.104503790 container create 00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:33:38 np0005597378 podman[382268]: 2026-01-27 14:33:38.652861364 +0000 UTC m=+0.024939624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:33:38 np0005597378 systemd[1]: Started libpod-conmon-00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab.scope.
Jan 27 09:33:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:33:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543b4dc337774dd0bb4e62347a0b24d480a199ac21796af42c7e50a6e07162fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:33:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543b4dc337774dd0bb4e62347a0b24d480a199ac21796af42c7e50a6e07162fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:33:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543b4dc337774dd0bb4e62347a0b24d480a199ac21796af42c7e50a6e07162fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:33:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543b4dc337774dd0bb4e62347a0b24d480a199ac21796af42c7e50a6e07162fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:33:38 np0005597378 podman[382268]: 2026-01-27 14:33:38.894574473 +0000 UTC m=+0.266652733 container init 00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 09:33:38 np0005597378 podman[382268]: 2026-01-27 14:33:38.900489012 +0000 UTC m=+0.272567252 container start 00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 09:33:38 np0005597378 podman[382268]: 2026-01-27 14:33:38.907882331 +0000 UTC m=+0.279960571 container attach 00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_roentgen, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]: {
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:    "0": [
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:        {
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "devices": [
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "/dev/loop3"
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            ],
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_name": "ceph_lv0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_size": "21470642176",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "name": "ceph_lv0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "tags": {
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.cluster_name": "ceph",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.crush_device_class": "",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.encrypted": "0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.objectstore": "bluestore",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.osd_id": "0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.type": "block",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.vdo": "0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.with_tpm": "0"
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            },
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "type": "block",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "vg_name": "ceph_vg0"
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:        }
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:    ],
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:    "1": [
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:        {
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "devices": [
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "/dev/loop4"
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            ],
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_name": "ceph_lv1",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_size": "21470642176",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "name": "ceph_lv1",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "tags": {
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.cluster_name": "ceph",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.crush_device_class": "",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.encrypted": "0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.objectstore": "bluestore",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.osd_id": "1",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.type": "block",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.vdo": "0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.with_tpm": "0"
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            },
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "type": "block",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "vg_name": "ceph_vg1"
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:        }
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:    ],
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:    "2": [
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:        {
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "devices": [
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "/dev/loop5"
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            ],
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_name": "ceph_lv2",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_size": "21470642176",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "name": "ceph_lv2",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "tags": {
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.cluster_name": "ceph",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.crush_device_class": "",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.encrypted": "0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.objectstore": "bluestore",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.osd_id": "2",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.type": "block",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.vdo": "0",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:                "ceph.with_tpm": "0"
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            },
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "type": "block",
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:            "vg_name": "ceph_vg2"
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:        }
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]:    ]
Jan 27 09:33:39 np0005597378 magical_roentgen[382284]: }
Jan 27 09:33:39 np0005597378 systemd[1]: libpod-00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab.scope: Deactivated successfully.
Jan 27 09:33:39 np0005597378 podman[382268]: 2026-01-27 14:33:39.219435654 +0000 UTC m=+0.591513894 container died 00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_roentgen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:33:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay-543b4dc337774dd0bb4e62347a0b24d480a199ac21796af42c7e50a6e07162fe-merged.mount: Deactivated successfully.
Jan 27 09:33:39 np0005597378 podman[382268]: 2026-01-27 14:33:39.283473271 +0000 UTC m=+0.655551511 container remove 00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_roentgen, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:33:39 np0005597378 systemd[1]: libpod-conmon-00db12702e07a9514c46624b90024b5a5dd1e35e64720d4cb7d3ba5607d761ab.scope: Deactivated successfully.
Jan 27 09:33:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:39 np0005597378 podman[382367]: 2026-01-27 14:33:39.730506017 +0000 UTC m=+0.039775624 container create 8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_neumann, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 09:33:39 np0005597378 systemd[1]: Started libpod-conmon-8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd.scope.
Jan 27 09:33:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:33:39 np0005597378 podman[382367]: 2026-01-27 14:33:39.713380715 +0000 UTC m=+0.022650342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:33:39 np0005597378 podman[382367]: 2026-01-27 14:33:39.822088056 +0000 UTC m=+0.131357683 container init 8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_neumann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 09:33:39 np0005597378 podman[382367]: 2026-01-27 14:33:39.83444671 +0000 UTC m=+0.143716307 container start 8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_neumann, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:33:39 np0005597378 podman[382367]: 2026-01-27 14:33:39.838564291 +0000 UTC m=+0.147833918 container attach 8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_neumann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 27 09:33:39 np0005597378 boring_neumann[382383]: 167 167
Jan 27 09:33:39 np0005597378 systemd[1]: libpod-8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd.scope: Deactivated successfully.
Jan 27 09:33:39 np0005597378 conmon[382383]: conmon 8d8e8f9638c63598f2ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd.scope/container/memory.events
Jan 27 09:33:39 np0005597378 podman[382367]: 2026-01-27 14:33:39.842117837 +0000 UTC m=+0.151387454 container died 8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:33:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay-22b994254cc0f6916d85afa875a069e049f0cf57a56618f01f779b2815daf099-merged.mount: Deactivated successfully.
Jan 27 09:33:39 np0005597378 podman[382367]: 2026-01-27 14:33:39.890161693 +0000 UTC m=+0.199431300 container remove 8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:33:39 np0005597378 systemd[1]: libpod-conmon-8d8e8f9638c63598f2aee96f0a4fb1d58698ebbc3605ced5a1eb9f17210c92fd.scope: Deactivated successfully.
Jan 27 09:33:40 np0005597378 podman[382406]: 2026-01-27 14:33:40.098728728 +0000 UTC m=+0.065186000 container create 38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:33:40 np0005597378 systemd[1]: Started libpod-conmon-38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0.scope.
Jan 27 09:33:40 np0005597378 podman[382406]: 2026-01-27 14:33:40.060348952 +0000 UTC m=+0.026806234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:33:40 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:33:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059a5ccc1fe6c27e1637d7d5e9e163a7d3c285bc5397ae83788e62c15c4edfaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:33:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059a5ccc1fe6c27e1637d7d5e9e163a7d3c285bc5397ae83788e62c15c4edfaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:33:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059a5ccc1fe6c27e1637d7d5e9e163a7d3c285bc5397ae83788e62c15c4edfaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:33:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059a5ccc1fe6c27e1637d7d5e9e163a7d3c285bc5397ae83788e62c15c4edfaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:33:40 np0005597378 podman[382406]: 2026-01-27 14:33:40.194542681 +0000 UTC m=+0.160999983 container init 38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:33:40 np0005597378 podman[382406]: 2026-01-27 14:33:40.202578148 +0000 UTC m=+0.169035420 container start 38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_burnell, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 09:33:40 np0005597378 podman[382406]: 2026-01-27 14:33:40.209185846 +0000 UTC m=+0.175643118 container attach 38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:33:40 np0005597378 lvm[382498]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:33:40 np0005597378 lvm[382498]: VG ceph_vg0 finished
Jan 27 09:33:40 np0005597378 lvm[382501]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:33:40 np0005597378 lvm[382501]: VG ceph_vg1 finished
Jan 27 09:33:40 np0005597378 lvm[382503]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:33:40 np0005597378 lvm[382503]: VG ceph_vg2 finished
Jan 27 09:33:40 np0005597378 lvm[382504]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:33:40 np0005597378 lvm[382504]: VG ceph_vg1 finished
Jan 27 09:33:41 np0005597378 sharp_burnell[382422]: {}
Jan 27 09:33:41 np0005597378 systemd[1]: libpod-38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0.scope: Deactivated successfully.
Jan 27 09:33:41 np0005597378 systemd[1]: libpod-38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0.scope: Consumed 1.453s CPU time.
Jan 27 09:33:41 np0005597378 podman[382406]: 2026-01-27 14:33:41.080508625 +0000 UTC m=+1.046965897 container died 38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:33:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-059a5ccc1fe6c27e1637d7d5e9e163a7d3c285bc5397ae83788e62c15c4edfaa-merged.mount: Deactivated successfully.
Jan 27 09:33:41 np0005597378 podman[382406]: 2026-01-27 14:33:41.149968358 +0000 UTC m=+1.116425630 container remove 38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_burnell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:33:41 np0005597378 systemd[1]: libpod-conmon-38884190a57ccb1994e0d0778da9fe5aaa4be750c54350ec57b137dbbc456bb0.scope: Deactivated successfully.
Jan 27 09:33:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:33:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:33:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:33:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:33:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:33:41 np0005597378 nova_compute[238941]: 2026-01-27 14:33:41.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:33:41 np0005597378 nova_compute[238941]: 2026-01-27 14:33:41.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:33:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:41 np0005597378 nova_compute[238941]: 2026-01-27 14:33:41.941 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:33:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:33:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:43 np0005597378 nova_compute[238941]: 2026-01-27 14:33:43.553 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:44 np0005597378 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 27 09:33:44 np0005597378 systemd[1]: virtsecretd.service: Consumed 1.169s CPU time.
Jan 27 09:33:45 np0005597378 nova_compute[238941]: 2026-01-27 14:33:45.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:33:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:33:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:33:46.341 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:33:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:33:46.342 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:33:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:33:46.342 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.352864) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524426353033, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 439, "num_deletes": 256, "total_data_size": 378329, "memory_usage": 388008, "flush_reason": "Manual Compaction"}
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524426359323, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 375705, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56545, "largest_seqno": 56983, "table_properties": {"data_size": 373037, "index_size": 703, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6177, "raw_average_key_size": 18, "raw_value_size": 367747, "raw_average_value_size": 1094, "num_data_blocks": 31, "num_entries": 336, "num_filter_entries": 336, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524405, "oldest_key_time": 1769524405, "file_creation_time": 1769524426, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 6538 microseconds, and 3144 cpu microseconds.
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.359424) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 375705 bytes OK
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.359450) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.364568) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.364591) EVENT_LOG_v1 {"time_micros": 1769524426364585, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.364611) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 375600, prev total WAL file size 375600, number of live WAL files 2.
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.365147) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323631' seq:72057594037927935, type:22 .. '6C6F676D0032353133' seq:0, type:0; will stop at (end)
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(366KB)], [131(9725KB)]
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524426365178, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 10334591, "oldest_snapshot_seqno": -1}
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7649 keys, 10213422 bytes, temperature: kUnknown
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524426425393, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10213422, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10162747, "index_size": 30466, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 200573, "raw_average_key_size": 26, "raw_value_size": 10026775, "raw_average_value_size": 1310, "num_data_blocks": 1186, "num_entries": 7649, "num_filter_entries": 7649, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524426, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.425708) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10213422 bytes
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.428298) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.3 rd, 169.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(54.7) write-amplify(27.2) OK, records in: 8173, records dropped: 524 output_compression: NoCompression
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.428349) EVENT_LOG_v1 {"time_micros": 1769524426428313, "job": 80, "event": "compaction_finished", "compaction_time_micros": 60341, "compaction_time_cpu_micros": 25829, "output_level": 6, "num_output_files": 1, "total_output_size": 10213422, "num_input_records": 8173, "num_output_records": 7649, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524426428786, "job": 80, "event": "table_file_deletion", "file_number": 133}
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524426431300, "job": 80, "event": "table_file_deletion", "file_number": 131}
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.365048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.431523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.431531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.431533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.431536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:33:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:33:46.431539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:33:46 np0005597378 nova_compute[238941]: 2026-01-27 14:33:46.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:33:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:33:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:33:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:33:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:33:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:33:48 np0005597378 nova_compute[238941]: 2026-01-27 14:33:48.555 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:33:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:51 np0005597378 nova_compute[238941]: 2026-01-27 14:33:51.946 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:53 np0005597378 nova_compute[238941]: 2026-01-27 14:33:53.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:33:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:53 np0005597378 nova_compute[238941]: 2026-01-27 14:33:53.556 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:54 np0005597378 podman[382545]: 2026-01-27 14:33:54.747440293 +0000 UTC m=+0.087981114 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:33:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:33:56 np0005597378 nova_compute[238941]: 2026-01-27 14:33:56.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:57 np0005597378 podman[382565]: 2026-01-27 14:33:57.761851747 +0000 UTC m=+0.099270748 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 09:33:58 np0005597378 nova_compute[238941]: 2026-01-27 14:33:58.557 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:33:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:33:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:33:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1698543454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:33:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:33:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1698543454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:34:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:34:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:01 np0005597378 nova_compute[238941]: 2026-01-27 14:34:01.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:34:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 4800.1 total, 600.0 interval
                                              Cumulative writes: 44K writes, 180K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                              Cumulative WAL: 44K writes, 15K syncs, 2.82 writes per sync, written: 0.18 GB, 0.04 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 4347 writes, 18K keys, 4347 commit groups, 1.0 writes per commit group, ingest: 21.46 MB, 0.04 MB/s
                                              Interval WAL: 4347 writes, 1605 syncs, 2.71 writes per sync, written: 0.02 GB, 0.04 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:34:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:03 np0005597378 nova_compute[238941]: 2026-01-27 14:34:03.558 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:34:06 np0005597378 nova_compute[238941]: 2026-01-27 14:34:06.952 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:34:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 4800.3 total, 600.0 interval
                                              Cumulative writes: 47K writes, 182K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                              Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.17 GB, 0.04 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 4239 writes, 16K keys, 4239 commit groups, 1.0 writes per commit group, ingest: 16.36 MB, 0.03 MB/s
                                              Interval WAL: 4239 writes, 1717 syncs, 2.47 writes per sync, written: 0.02 GB, 0.03 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:34:08 np0005597378 nova_compute[238941]: 2026-01-27 14:34:08.560 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:34:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:11 np0005597378 nova_compute[238941]: 2026-01-27 14:34:11.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:13 np0005597378 nova_compute[238941]: 2026-01-27 14:34:13.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:34:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 4801.0 total, 600.0 interval
                                              Cumulative writes: 37K writes, 153K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                              Cumulative WAL: 37K writes, 13K syncs, 2.86 writes per sync, written: 0.16 GB, 0.03 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 3476 writes, 15K keys, 3476 commit groups, 1.0 writes per commit group, ingest: 17.17 MB, 0.03 MB/s
                                              Interval WAL: 3476 writes, 1293 syncs, 2.69 writes per sync, written: 0.02 GB, 0.03 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:34:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:34:16 np0005597378 nova_compute[238941]: 2026-01-27 14:34:16.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:34:17
Jan 27 09:34:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:34:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:34:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'backups', '.rgw.root', 'images', '.mgr', 'cephfs.cephfs.data']
Jan 27 09:34:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:34:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:34:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:34:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:34:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:34:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:34:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:34:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:34:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:34:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:34:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:34:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:34:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:34:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:34:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:34:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:34:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:34:18 np0005597378 nova_compute[238941]: 2026-01-27 14:34:18.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:34:18 np0005597378 nova_compute[238941]: 2026-01-27 14:34:18.529 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:34:18 np0005597378 nova_compute[238941]: 2026-01-27 14:34:18.529 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:34:18 np0005597378 nova_compute[238941]: 2026-01-27 14:34:18.529 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:34:18 np0005597378 nova_compute[238941]: 2026-01-27 14:34:18.529 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:34:18 np0005597378 nova_compute[238941]: 2026-01-27 14:34:18.530 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:34:18 np0005597378 nova_compute[238941]: 2026-01-27 14:34:18.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:34:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2820041330' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:34:19 np0005597378 nova_compute[238941]: 2026-01-27 14:34:19.206 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:34:19 np0005597378 nova_compute[238941]: 2026-01-27 14:34:19.397 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:34:19 np0005597378 nova_compute[238941]: 2026-01-27 14:34:19.399 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3525MB free_disk=59.987321018241346GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:34:19 np0005597378 nova_compute[238941]: 2026-01-27 14:34:19.399 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:34:19 np0005597378 nova_compute[238941]: 2026-01-27 14:34:19.399 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:34:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:19 np0005597378 nova_compute[238941]: 2026-01-27 14:34:19.692 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:34:19 np0005597378 nova_compute[238941]: 2026-01-27 14:34:19.693 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:34:19 np0005597378 nova_compute[238941]: 2026-01-27 14:34:19.771 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 09:34:19 np0005597378 nova_compute[238941]: 2026-01-27 14:34:19.856 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 09:34:19 np0005597378 nova_compute[238941]: 2026-01-27 14:34:19.857 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 09:34:19 np0005597378 nova_compute[238941]: 2026-01-27 14:34:19.879 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 09:34:19 np0005597378 nova_compute[238941]: 2026-01-27 14:34:19.908 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 09:34:19 np0005597378 nova_compute[238941]: 2026-01-27 14:34:19.932 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:34:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:34:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/358968772' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:34:20 np0005597378 nova_compute[238941]: 2026-01-27 14:34:20.551 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:34:20 np0005597378 nova_compute[238941]: 2026-01-27 14:34:20.558 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:34:20 np0005597378 nova_compute[238941]: 2026-01-27 14:34:20.620 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:34:20 np0005597378 nova_compute[238941]: 2026-01-27 14:34:20.622 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:34:20 np0005597378 nova_compute[238941]: 2026-01-27 14:34:20.622 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:34:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:34:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:21 np0005597378 nova_compute[238941]: 2026-01-27 14:34:21.959 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:22 np0005597378 nova_compute[238941]: 2026-01-27 14:34:22.623 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:34:22 np0005597378 nova_compute[238941]: 2026-01-27 14:34:22.623 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:34:22 np0005597378 nova_compute[238941]: 2026-01-27 14:34:22.624 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:34:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:23 np0005597378 nova_compute[238941]: 2026-01-27 14:34:23.566 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:25 np0005597378 podman[382634]: 2026-01-27 14:34:25.720591548 +0000 UTC m=+0.060449111 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:34:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:34:26 np0005597378 nova_compute[238941]: 2026-01-27 14:34:26.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:34:26 np0005597378 nova_compute[238941]: 2026-01-27 14:34:26.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:34:26 np0005597378 nova_compute[238941]: 2026-01-27 14:34:26.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:34:26 np0005597378 nova_compute[238941]: 2026-01-27 14:34:26.501 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:34:26 np0005597378 nova_compute[238941]: 2026-01-27 14:34:26.963 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:27 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 09:34:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6006989009276167e-05 of space, bias 1.0, pg target 0.00480209670278285 quantized to 32 (current 32)
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006695198129379571 of space, bias 1.0, pg target 0.20085594388138714 quantized to 32 (current 32)
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0482996494546334e-06 of space, bias 4.0, pg target 0.0012579595793455601 quantized to 16 (current 16)
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:34:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:34:28 np0005597378 nova_compute[238941]: 2026-01-27 14:34:28.568 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:28 np0005597378 podman[382654]: 2026-01-27 14:34:28.744617692 +0000 UTC m=+0.085781074 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:34:29 np0005597378 nova_compute[238941]: 2026-01-27 14:34:29.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:34:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:34:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:31 np0005597378 nova_compute[238941]: 2026-01-27 14:34:31.966 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:33 np0005597378 nova_compute[238941]: 2026-01-27 14:34:33.570 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:34:36 np0005597378 nova_compute[238941]: 2026-01-27 14:34:36.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:34:36 np0005597378 nova_compute[238941]: 2026-01-27 14:34:36.969 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:38 np0005597378 nova_compute[238941]: 2026-01-27 14:34:38.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:34:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:41 np0005597378 nova_compute[238941]: 2026-01-27 14:34:41.973 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:34:42 np0005597378 podman[382821]: 2026-01-27 14:34:42.50169561 +0000 UTC m=+0.044565713 container create f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:34:42 np0005597378 systemd[1]: Started libpod-conmon-f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d.scope.
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:34:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:34:42 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:34:42 np0005597378 podman[382821]: 2026-01-27 14:34:42.479078591 +0000 UTC m=+0.021948714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:34:42 np0005597378 podman[382821]: 2026-01-27 14:34:42.588143102 +0000 UTC m=+0.131013225 container init f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 09:34:42 np0005597378 podman[382821]: 2026-01-27 14:34:42.596205729 +0000 UTC m=+0.139075832 container start f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:34:42 np0005597378 podman[382821]: 2026-01-27 14:34:42.601919593 +0000 UTC m=+0.144789766 container attach f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:34:42 np0005597378 focused_mirzakhani[382837]: 167 167
Jan 27 09:34:42 np0005597378 systemd[1]: libpod-f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d.scope: Deactivated successfully.
Jan 27 09:34:42 np0005597378 podman[382821]: 2026-01-27 14:34:42.604051121 +0000 UTC m=+0.146921224 container died f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 09:34:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay-6267e7443e1182100d25b393761f35de07a28634cc206350e18e5d0c3a733ddc-merged.mount: Deactivated successfully.
Jan 27 09:34:42 np0005597378 podman[382821]: 2026-01-27 14:34:42.657562824 +0000 UTC m=+0.200432927 container remove f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 09:34:42 np0005597378 systemd[1]: libpod-conmon-f697cd78686bac12f640d1519e7aa9d7a951a47766b6e13ea8597aa23238ca2d.scope: Deactivated successfully.
Jan 27 09:34:42 np0005597378 podman[382860]: 2026-01-27 14:34:42.849528301 +0000 UTC m=+0.073131904 container create d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:34:42 np0005597378 podman[382860]: 2026-01-27 14:34:42.800053107 +0000 UTC m=+0.023656730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:34:42 np0005597378 systemd[1]: Started libpod-conmon-d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648.scope.
Jan 27 09:34:42 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:34:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb59546eee9c2c087ff2b1d732d69b0fe6bc41f5259fb830e6ab1e057e1c3da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:34:42 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb59546eee9c2c087ff2b1d732d69b0fe6bc41f5259fb830e6ab1e057e1c3da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:34:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb59546eee9c2c087ff2b1d732d69b0fe6bc41f5259fb830e6ab1e057e1c3da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:34:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb59546eee9c2c087ff2b1d732d69b0fe6bc41f5259fb830e6ab1e057e1c3da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:34:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb59546eee9c2c087ff2b1d732d69b0fe6bc41f5259fb830e6ab1e057e1c3da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:34:43 np0005597378 podman[382860]: 2026-01-27 14:34:43.032890756 +0000 UTC m=+0.256494389 container init d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 09:34:43 np0005597378 podman[382860]: 2026-01-27 14:34:43.040267634 +0000 UTC m=+0.263871237 container start d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:34:43 np0005597378 podman[382860]: 2026-01-27 14:34:43.146987003 +0000 UTC m=+0.370590626 container attach d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kare, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 09:34:43 np0005597378 nova_compute[238941]: 2026-01-27 14:34:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:34:43 np0005597378 nova_compute[238941]: 2026-01-27 14:34:43.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:34:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:43 np0005597378 objective_kare[382876]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:34:43 np0005597378 objective_kare[382876]: --> All data devices are unavailable
Jan 27 09:34:43 np0005597378 systemd[1]: libpod-d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648.scope: Deactivated successfully.
Jan 27 09:34:43 np0005597378 podman[382860]: 2026-01-27 14:34:43.555383856 +0000 UTC m=+0.778987459 container died d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kare, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:34:43 np0005597378 nova_compute[238941]: 2026-01-27 14:34:43.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cdb59546eee9c2c087ff2b1d732d69b0fe6bc41f5259fb830e6ab1e057e1c3da-merged.mount: Deactivated successfully.
Jan 27 09:34:43 np0005597378 podman[382860]: 2026-01-27 14:34:43.973438941 +0000 UTC m=+1.197042544 container remove d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 09:34:44 np0005597378 systemd[1]: libpod-conmon-d2def0b3f02dea1e14d4bc448aad948c14f170545b31c53a6d6675a495e41648.scope: Deactivated successfully.
Jan 27 09:34:44 np0005597378 podman[382967]: 2026-01-27 14:34:44.452953463 +0000 UTC m=+0.063263707 container create 609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:34:44 np0005597378 systemd[1]: Started libpod-conmon-609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c.scope.
Jan 27 09:34:44 np0005597378 podman[382967]: 2026-01-27 14:34:44.417139218 +0000 UTC m=+0.027449462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:34:44 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:34:44 np0005597378 podman[382967]: 2026-01-27 14:34:44.659597706 +0000 UTC m=+0.269908000 container init 609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:34:44 np0005597378 podman[382967]: 2026-01-27 14:34:44.667874419 +0000 UTC m=+0.278184663 container start 609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 09:34:44 np0005597378 exciting_agnesi[382983]: 167 167
Jan 27 09:34:44 np0005597378 systemd[1]: libpod-609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c.scope: Deactivated successfully.
Jan 27 09:34:44 np0005597378 podman[382967]: 2026-01-27 14:34:44.748257837 +0000 UTC m=+0.358568111 container attach 609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:34:44 np0005597378 podman[382967]: 2026-01-27 14:34:44.748889064 +0000 UTC m=+0.359199308 container died 609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:34:44 np0005597378 systemd[1]: var-lib-containers-storage-overlay-76c9ef4048ba98599eb2e1e0a2f60206cb1b8ccf496ac587848675674ba14ab9-merged.mount: Deactivated successfully.
Jan 27 09:34:45 np0005597378 podman[382967]: 2026-01-27 14:34:45.108212424 +0000 UTC m=+0.718522668 container remove 609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 09:34:45 np0005597378 systemd[1]: libpod-conmon-609615e9ff92497a589251ca3d10aec3d18009fdc7b885e71b8d37e3cda2d13c.scope: Deactivated successfully.
Jan 27 09:34:45 np0005597378 podman[383007]: 2026-01-27 14:34:45.289180315 +0000 UTC m=+0.047707217 container create 3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 09:34:45 np0005597378 systemd[1]: Started libpod-conmon-3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61.scope.
Jan 27 09:34:45 np0005597378 podman[383007]: 2026-01-27 14:34:45.268764204 +0000 UTC m=+0.027291126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:34:45 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:34:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0a901fecd2ed556dc0f6c58347cb31ac3dadbdb2ac471d5e32ee009df93f415/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:34:45 np0005597378 nova_compute[238941]: 2026-01-27 14:34:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:34:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0a901fecd2ed556dc0f6c58347cb31ac3dadbdb2ac471d5e32ee009df93f415/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:34:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0a901fecd2ed556dc0f6c58347cb31ac3dadbdb2ac471d5e32ee009df93f415/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:34:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0a901fecd2ed556dc0f6c58347cb31ac3dadbdb2ac471d5e32ee009df93f415/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:34:45 np0005597378 podman[383007]: 2026-01-27 14:34:45.459619881 +0000 UTC m=+0.218146803 container init 3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 27 09:34:45 np0005597378 podman[383007]: 2026-01-27 14:34:45.468154931 +0000 UTC m=+0.226681833 container start 3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:34:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:45 np0005597378 podman[383007]: 2026-01-27 14:34:45.513636858 +0000 UTC m=+0.272163790 container attach 3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]: {
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:    "0": [
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:        {
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "devices": [
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "/dev/loop3"
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            ],
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_name": "ceph_lv0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_size": "21470642176",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "name": "ceph_lv0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "tags": {
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.cluster_name": "ceph",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.crush_device_class": "",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.encrypted": "0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.objectstore": "bluestore",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.osd_id": "0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.type": "block",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.vdo": "0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.with_tpm": "0"
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            },
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "type": "block",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "vg_name": "ceph_vg0"
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:        }
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:    ],
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:    "1": [
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:        {
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "devices": [
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "/dev/loop4"
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            ],
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_name": "ceph_lv1",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_size": "21470642176",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "name": "ceph_lv1",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "tags": {
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.cluster_name": "ceph",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.crush_device_class": "",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.encrypted": "0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.objectstore": "bluestore",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.osd_id": "1",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.type": "block",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.vdo": "0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.with_tpm": "0"
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            },
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "type": "block",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "vg_name": "ceph_vg1"
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:        }
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:    ],
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:    "2": [
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:        {
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "devices": [
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "/dev/loop5"
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            ],
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_name": "ceph_lv2",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_size": "21470642176",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "name": "ceph_lv2",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "tags": {
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.cluster_name": "ceph",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.crush_device_class": "",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.encrypted": "0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.objectstore": "bluestore",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.osd_id": "2",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.type": "block",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.vdo": "0",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:                "ceph.with_tpm": "0"
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            },
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "type": "block",
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:            "vg_name": "ceph_vg2"
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:        }
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]:    ]
Jan 27 09:34:45 np0005597378 pedantic_dirac[383024]: }
Jan 27 09:34:45 np0005597378 systemd[1]: libpod-3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61.scope: Deactivated successfully.
Jan 27 09:34:45 np0005597378 podman[383007]: 2026-01-27 14:34:45.794258596 +0000 UTC m=+0.552785498 container died 3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:34:45 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f0a901fecd2ed556dc0f6c58347cb31ac3dadbdb2ac471d5e32ee009df93f415-merged.mount: Deactivated successfully.
Jan 27 09:34:45 np0005597378 podman[383007]: 2026-01-27 14:34:45.971374973 +0000 UTC m=+0.729901875 container remove 3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:34:46 np0005597378 systemd[1]: libpod-conmon-3b3f1f23affbde1e3f5259e555b80e192103db660e0f3a39db8a1655784d7a61.scope: Deactivated successfully.
Jan 27 09:34:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:34:46.343 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:34:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:34:46.344 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:34:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:34:46.344 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:34:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:34:46 np0005597378 podman[383110]: 2026-01-27 14:34:46.4566872 +0000 UTC m=+0.028879489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:34:46 np0005597378 podman[383110]: 2026-01-27 14:34:46.595674799 +0000 UTC m=+0.167867038 container create 2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:34:46 np0005597378 systemd[1]: Started libpod-conmon-2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928.scope.
Jan 27 09:34:46 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:34:46 np0005597378 podman[383110]: 2026-01-27 14:34:46.942870153 +0000 UTC m=+0.515062402 container init 2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 09:34:46 np0005597378 podman[383110]: 2026-01-27 14:34:46.950011725 +0000 UTC m=+0.522203964 container start 2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:34:46 np0005597378 nifty_rhodes[383126]: 167 167
Jan 27 09:34:46 np0005597378 systemd[1]: libpod-2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928.scope: Deactivated successfully.
Jan 27 09:34:46 np0005597378 nova_compute[238941]: 2026-01-27 14:34:46.976 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:46 np0005597378 podman[383110]: 2026-01-27 14:34:46.984617549 +0000 UTC m=+0.556809788 container attach 2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_rhodes, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:34:46 np0005597378 podman[383110]: 2026-01-27 14:34:46.984907156 +0000 UTC m=+0.557099385 container died 2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 09:34:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-54faa5ef59de4d1deb707b814c6e602c1d2fe29e768719b0d12a85aa437c6ca2-merged.mount: Deactivated successfully.
Jan 27 09:34:47 np0005597378 podman[383110]: 2026-01-27 14:34:47.258576337 +0000 UTC m=+0.830768576 container remove 2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_rhodes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:34:47 np0005597378 systemd[1]: libpod-conmon-2c042650bb732339c9f5735c276d444af31d0a7554ad1bcf9e01ef357ccbb928.scope: Deactivated successfully.
Jan 27 09:34:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:34:47 np0005597378 podman[383151]: 2026-01-27 14:34:47.412845127 +0000 UTC m=+0.028904340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:34:47 np0005597378 podman[383151]: 2026-01-27 14:34:47.551969729 +0000 UTC m=+0.168028902 container create 1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_edison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:34:47 np0005597378 systemd[1]: Started libpod-conmon-1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998.scope.
Jan 27 09:34:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:34:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b75e4d9d168e54b6bdc25a6ee53de6b0a974a3f0b2d4a029044adce2404381e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:34:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b75e4d9d168e54b6bdc25a6ee53de6b0a974a3f0b2d4a029044adce2404381e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:34:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b75e4d9d168e54b6bdc25a6ee53de6b0a974a3f0b2d4a029044adce2404381e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:34:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b75e4d9d168e54b6bdc25a6ee53de6b0a974a3f0b2d4a029044adce2404381e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:34:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:34:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:34:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:34:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:34:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:34:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:34:47 np0005597378 podman[383151]: 2026-01-27 14:34:47.900730635 +0000 UTC m=+0.516789828 container init 1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_edison, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 09:34:47 np0005597378 podman[383151]: 2026-01-27 14:34:47.908980517 +0000 UTC m=+0.525039690 container start 1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_edison, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:34:47 np0005597378 podman[383151]: 2026-01-27 14:34:47.925008609 +0000 UTC m=+0.541067782 container attach 1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_edison, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:34:48 np0005597378 nova_compute[238941]: 2026-01-27 14:34:48.574 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:48 np0005597378 lvm[383246]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:34:48 np0005597378 lvm[383245]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:34:48 np0005597378 lvm[383245]: VG ceph_vg0 finished
Jan 27 09:34:48 np0005597378 lvm[383246]: VG ceph_vg1 finished
Jan 27 09:34:48 np0005597378 lvm[383248]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:34:48 np0005597378 lvm[383248]: VG ceph_vg2 finished
Jan 27 09:34:48 np0005597378 dazzling_edison[383167]: {}
Jan 27 09:34:48 np0005597378 systemd[1]: libpod-1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998.scope: Deactivated successfully.
Jan 27 09:34:48 np0005597378 podman[383151]: 2026-01-27 14:34:48.83686309 +0000 UTC m=+1.452922283 container died 1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_edison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:34:48 np0005597378 systemd[1]: libpod-1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998.scope: Consumed 1.508s CPU time.
Jan 27 09:34:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8b75e4d9d168e54b6bdc25a6ee53de6b0a974a3f0b2d4a029044adce2404381e-merged.mount: Deactivated successfully.
Jan 27 09:34:49 np0005597378 podman[383151]: 2026-01-27 14:34:49.087442058 +0000 UTC m=+1.703501231 container remove 1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:34:49 np0005597378 systemd[1]: libpod-conmon-1245226d0665b83cb619f20a6ac9f9e15eee364b6ff4ea05e0f3a0e156839998.scope: Deactivated successfully.
Jan 27 09:34:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:34:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:34:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:34:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:34:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 09:34:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:34:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:34:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:34:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 09:34:51 np0005597378 nova_compute[238941]: 2026-01-27 14:34:51.979 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 27 09:34:53 np0005597378 nova_compute[238941]: 2026-01-27 14:34:53.621 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Jan 27 09:34:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Jan 27 09:34:54 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Jan 27 09:34:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 9 op/s
Jan 27 09:34:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:34:56 np0005597378 podman[383290]: 2026-01-27 14:34:56.718185558 +0000 UTC m=+0.055394714 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 27 09:34:57 np0005597378 nova_compute[238941]: 2026-01-27 14:34:57.018 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 9 op/s
Jan 27 09:34:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Jan 27 09:34:58 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Jan 27 09:34:58 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Jan 27 09:34:58 np0005597378 nova_compute[238941]: 2026-01-27 14:34:58.623 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:34:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 13 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.6 KiB/s wr, 46 op/s
Jan 27 09:34:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:34:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1900967748' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:34:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:34:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1900967748' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:34:59 np0005597378 podman[383309]: 2026-01-27 14:34:59.733771845 +0000 UTC m=+0.076036512 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 09:35:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:35:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Jan 27 09:35:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Jan 27 09:35:01 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Jan 27 09:35:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 4.9 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.8 KiB/s wr, 56 op/s
Jan 27 09:35:02 np0005597378 nova_compute[238941]: 2026-01-27 14:35:02.020 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:35:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 462 KiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.4 KiB/s wr, 64 op/s
Jan 27 09:35:03 np0005597378 nova_compute[238941]: 2026-01-27 14:35:03.625 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:35:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.4 KiB/s wr, 70 op/s
Jan 27 09:35:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:35:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Jan 27 09:35:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Jan 27 09:35:06 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Jan 27 09:35:07 np0005597378 nova_compute[238941]: 2026-01-27 14:35:07.022 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:35:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 895 B/s wr, 25 op/s
Jan 27 09:35:08 np0005597378 nova_compute[238941]: 2026-01-27 14:35:08.626 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:35:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 126 B/s wr, 46 op/s
Jan 27 09:35:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:35:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 102 B/s wr, 51 op/s
Jan 27 09:35:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Jan 27 09:35:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Jan 27 09:35:11 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Jan 27 09:35:12 np0005597378 nova_compute[238941]: 2026-01-27 14:35:12.024 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:35:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 462 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 895 B/s wr, 69 op/s
Jan 27 09:35:13 np0005597378 nova_compute[238941]: 2026-01-27 14:35:13.627 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:35:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 KiB/s wr, 88 op/s
Jan 27 09:35:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:35:17 np0005597378 nova_compute[238941]: 2026-01-27 14:35:17.026 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:35:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:35:17
Jan 27 09:35:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:35:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:35:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.log', 'images', 'default.rgw.meta']
Jan 27 09:35:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:35:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 1.6 KiB/s wr, 78 op/s
Jan 27 09:35:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:35:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:35:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:35:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:35:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:35:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:35:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:35:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:35:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:35:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:35:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:35:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:35:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:35:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:35:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:35:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:35:18 np0005597378 nova_compute[238941]: 2026-01-27 14:35:18.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:35:18 np0005597378 nova_compute[238941]: 2026-01-27 14:35:18.425 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:35:18 np0005597378 nova_compute[238941]: 2026-01-27 14:35:18.425 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:35:18 np0005597378 nova_compute[238941]: 2026-01-27 14:35:18.426 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:35:18 np0005597378 nova_compute[238941]: 2026-01-27 14:35:18.426 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:35:18 np0005597378 nova_compute[238941]: 2026-01-27 14:35:18.426 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:35:18 np0005597378 nova_compute[238941]: 2026-01-27 14:35:18.629 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:35:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:35:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3104041873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:35:18 np0005597378 nova_compute[238941]: 2026-01-27 14:35:18.982 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:35:19 np0005597378 nova_compute[238941]: 2026-01-27 14:35:19.157 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:35:19 np0005597378 nova_compute[238941]: 2026-01-27 14:35:19.159 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3500MB free_disk=59.98731577582657GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:35:19 np0005597378 nova_compute[238941]: 2026-01-27 14:35:19.159 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:35:19 np0005597378 nova_compute[238941]: 2026-01-27 14:35:19.159 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:35:19 np0005597378 nova_compute[238941]: 2026-01-27 14:35:19.230 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:35:19 np0005597378 nova_compute[238941]: 2026-01-27 14:35:19.230 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:35:19 np0005597378 nova_compute[238941]: 2026-01-27 14:35:19.251 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:35:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 KiB/s wr, 59 op/s
Jan 27 09:35:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:35:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003957050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:35:19 np0005597378 nova_compute[238941]: 2026-01-27 14:35:19.816 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:35:19 np0005597378 nova_compute[238941]: 2026-01-27 14:35:19.821 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:35:19 np0005597378 nova_compute[238941]: 2026-01-27 14:35:19.840 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:35:19 np0005597378 nova_compute[238941]: 2026-01-27 14:35:19.842 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:35:19 np0005597378 nova_compute[238941]: 2026-01-27 14:35:19.842 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:35:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:35:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.6 KiB/s wr, 45 op/s
Jan 27 09:35:22 np0005597378 nova_compute[238941]: 2026-01-27 14:35:22.029 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:35:22 np0005597378 nova_compute[238941]: 2026-01-27 14:35:22.842 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:35:23 np0005597378 nova_compute[238941]: 2026-01-27 14:35:23.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:35:23 np0005597378 nova_compute[238941]: 2026-01-27 14:35:23.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:35:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Jan 27 09:35:23 np0005597378 nova_compute[238941]: 2026-01-27 14:35:23.630 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:35:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 19 op/s
Jan 27 09:35:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:35:27 np0005597378 nova_compute[238941]: 2026-01-27 14:35:27.031 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:35:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:27 np0005597378 podman[383380]: 2026-01-27 14:35:27.719198539 +0000 UTC m=+0.058662263 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6094379657303952e-05 of space, bias 1.0, pg target 0.004828313897191186 quantized to 32 (current 32)
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.021134189914476e-06 of space, bias 1.0, pg target 0.0012063402569743428 quantized to 32 (current 32)
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0537023756073548e-06 of space, bias 4.0, pg target 0.0012644428507288259 quantized to 16 (current 16)
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:35:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:35:28 np0005597378 nova_compute[238941]: 2026-01-27 14:35:28.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:35:28 np0005597378 nova_compute[238941]: 2026-01-27 14:35:28.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:35:28 np0005597378 nova_compute[238941]: 2026-01-27 14:35:28.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:35:28 np0005597378 nova_compute[238941]: 2026-01-27 14:35:28.400 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:35:28 np0005597378 nova_compute[238941]: 2026-01-27 14:35:28.632 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:35:29 np0005597378 nova_compute[238941]: 2026-01-27 14:35:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:35:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:30 np0005597378 podman[383399]: 2026-01-27 14:35:30.731356621 +0000 UTC m=+0.077650784 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 09:35:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:35:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:32 np0005597378 nova_compute[238941]: 2026-01-27 14:35:32.034 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:35:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:33 np0005597378 nova_compute[238941]: 2026-01-27 14:35:33.634 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:35:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:35:37 np0005597378 nova_compute[238941]: 2026-01-27 14:35:37.036 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:35:37 np0005597378 nova_compute[238941]: 2026-01-27 14:35:37.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:35:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:38 np0005597378 nova_compute[238941]: 2026-01-27 14:35:38.635 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:35:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:35:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:42 np0005597378 nova_compute[238941]: 2026-01-27 14:35:42.038 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:35:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:43 np0005597378 nova_compute[238941]: 2026-01-27 14:35:43.638 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:35:44 np0005597378 nova_compute[238941]: 2026-01-27 14:35:44.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:35:44 np0005597378 nova_compute[238941]: 2026-01-27 14:35:44.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:35:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:35:46.344 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:35:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:35:46.345 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:35:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:35:46.345 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:35:46 np0005597378 nova_compute[238941]: 2026-01-27 14:35:46.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:35:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:35:47 np0005597378 nova_compute[238941]: 2026-01-27 14:35:47.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:35:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:35:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:35:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:35:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:35:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:35:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:35:48 np0005597378 nova_compute[238941]: 2026-01-27 14:35:48.639 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:35:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:35:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:35:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:35:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:35:50 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:35:50 np0005597378 podman[383640]: 2026-01-27 14:35:50.929134156 +0000 UTC m=+0.115467245 container create 038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_chebyshev, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 09:35:50 np0005597378 podman[383640]: 2026-01-27 14:35:50.834000871 +0000 UTC m=+0.020333990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:35:51 np0005597378 systemd[1]: Started libpod-conmon-038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee.scope.
Jan 27 09:35:51 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:35:51 np0005597378 podman[383640]: 2026-01-27 14:35:51.127264399 +0000 UTC m=+0.313597508 container init 038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 09:35:51 np0005597378 podman[383640]: 2026-01-27 14:35:51.137372992 +0000 UTC m=+0.323706081 container start 038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 09:35:51 np0005597378 laughing_chebyshev[383655]: 167 167
Jan 27 09:35:51 np0005597378 systemd[1]: libpod-038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee.scope: Deactivated successfully.
Jan 27 09:35:51 np0005597378 podman[383640]: 2026-01-27 14:35:51.160963078 +0000 UTC m=+0.347296197 container attach 038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:35:51 np0005597378 podman[383640]: 2026-01-27 14:35:51.162463078 +0000 UTC m=+0.348796167 container died 038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_chebyshev, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:35:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay-121566fced849a697e0cba5ec956958f57eb4be8bb13fbb6b858625882fa5f63-merged.mount: Deactivated successfully.
Jan 27 09:35:51 np0005597378 podman[383640]: 2026-01-27 14:35:51.312521085 +0000 UTC m=+0.498854164 container remove 038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 27 09:35:51 np0005597378 systemd[1]: libpod-conmon-038648763b0cc8b507cbe2c4b9aabb731aeec9ba340b54538370800833f744ee.scope: Deactivated successfully.
Jan 27 09:35:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:35:51 np0005597378 podman[383682]: 2026-01-27 14:35:51.484562825 +0000 UTC m=+0.054651875 container create d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 09:35:51 np0005597378 systemd[1]: Started libpod-conmon-d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e.scope.
Jan 27 09:35:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:51 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:35:51 np0005597378 podman[383682]: 2026-01-27 14:35:51.45102255 +0000 UTC m=+0.021111620 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:35:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5284f2f2d36d6181e94bd738db33eb52741bc4a9309c7ec77882c5cc1c2cff62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:35:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5284f2f2d36d6181e94bd738db33eb52741bc4a9309c7ec77882c5cc1c2cff62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:35:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5284f2f2d36d6181e94bd738db33eb52741bc4a9309c7ec77882c5cc1c2cff62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:35:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5284f2f2d36d6181e94bd738db33eb52741bc4a9309c7ec77882c5cc1c2cff62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:35:51 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5284f2f2d36d6181e94bd738db33eb52741bc4a9309c7ec77882c5cc1c2cff62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:35:51 np0005597378 podman[383682]: 2026-01-27 14:35:51.571612012 +0000 UTC m=+0.141701082 container init d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 09:35:51 np0005597378 podman[383682]: 2026-01-27 14:35:51.577614205 +0000 UTC m=+0.147703255 container start d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:35:51 np0005597378 podman[383682]: 2026-01-27 14:35:51.591352384 +0000 UTC m=+0.161441434 container attach d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:35:52 np0005597378 nova_compute[238941]: 2026-01-27 14:35:52.042 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:35:52 np0005597378 inspiring_wozniak[383698]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:35:52 np0005597378 inspiring_wozniak[383698]: --> All data devices are unavailable
Jan 27 09:35:52 np0005597378 systemd[1]: libpod-d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e.scope: Deactivated successfully.
Jan 27 09:35:52 np0005597378 podman[383682]: 2026-01-27 14:35:52.07914326 +0000 UTC m=+0.649232310 container died d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:35:52 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5284f2f2d36d6181e94bd738db33eb52741bc4a9309c7ec77882c5cc1c2cff62-merged.mount: Deactivated successfully.
Jan 27 09:35:52 np0005597378 podman[383682]: 2026-01-27 14:35:52.132279253 +0000 UTC m=+0.702368303 container remove d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:35:52 np0005597378 systemd[1]: libpod-conmon-d144fb965748c3f5c9fc9cad378ca666f80d4ea5d947ea2e699415bf901fca5e.scope: Deactivated successfully.
Jan 27 09:35:52 np0005597378 podman[383791]: 2026-01-27 14:35:52.590997604 +0000 UTC m=+0.054522982 container create 3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:35:52 np0005597378 systemd[1]: Started libpod-conmon-3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3.scope.
Jan 27 09:35:52 np0005597378 podman[383791]: 2026-01-27 14:35:52.566134884 +0000 UTC m=+0.029660252 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:35:52 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:35:52 np0005597378 podman[383791]: 2026-01-27 14:35:52.679572393 +0000 UTC m=+0.143097811 container init 3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 09:35:52 np0005597378 podman[383791]: 2026-01-27 14:35:52.687110726 +0000 UTC m=+0.150636064 container start 3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:35:52 np0005597378 podman[383791]: 2026-01-27 14:35:52.690452636 +0000 UTC m=+0.153977994 container attach 3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 09:35:52 np0005597378 heuristic_herschel[383807]: 167 167
Jan 27 09:35:52 np0005597378 podman[383791]: 2026-01-27 14:35:52.693026935 +0000 UTC m=+0.156552283 container died 3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:35:52 np0005597378 systemd[1]: libpod-3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3.scope: Deactivated successfully.
Jan 27 09:35:52 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8f3880172d531c66927ab4c20171efc9df8d631ebc89793a4fc3fe9871d186e4-merged.mount: Deactivated successfully.
Jan 27 09:35:52 np0005597378 podman[383791]: 2026-01-27 14:35:52.731515194 +0000 UTC m=+0.195040522 container remove 3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 09:35:52 np0005597378 systemd[1]: libpod-conmon-3db9c579b0929b2a8d61ccfa68aceefb22354e92b67cc6fe2a2321e7802b8fc3.scope: Deactivated successfully.
Jan 27 09:35:52 np0005597378 podman[383830]: 2026-01-27 14:35:52.932000101 +0000 UTC m=+0.080155413 container create c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:35:52 np0005597378 podman[383830]: 2026-01-27 14:35:52.87746139 +0000 UTC m=+0.025616712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:35:52 np0005597378 systemd[1]: Started libpod-conmon-c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e.scope.
Jan 27 09:35:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:35:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f26f93f7db8fb3af1c170c72f63f7559085327ebeae0046480a727abe78ef8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:35:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f26f93f7db8fb3af1c170c72f63f7559085327ebeae0046480a727abe78ef8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:35:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f26f93f7db8fb3af1c170c72f63f7559085327ebeae0046480a727abe78ef8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:35:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f26f93f7db8fb3af1c170c72f63f7559085327ebeae0046480a727abe78ef8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:35:53 np0005597378 podman[383830]: 2026-01-27 14:35:53.046704444 +0000 UTC m=+0.194859756 container init c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lumiere, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:35:53 np0005597378 podman[383830]: 2026-01-27 14:35:53.053449236 +0000 UTC m=+0.201604538 container start c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:35:53 np0005597378 podman[383830]: 2026-01-27 14:35:53.074537105 +0000 UTC m=+0.222692427 container attach c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lumiere, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]: {
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:    "0": [
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:        {
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "devices": [
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "/dev/loop3"
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            ],
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_name": "ceph_lv0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_size": "21470642176",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "name": "ceph_lv0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "tags": {
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.cluster_name": "ceph",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.crush_device_class": "",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.encrypted": "0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.objectstore": "bluestore",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.osd_id": "0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.type": "block",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.vdo": "0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.with_tpm": "0"
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            },
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "type": "block",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "vg_name": "ceph_vg0"
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:        }
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:    ],
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:    "1": [
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:        {
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "devices": [
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "/dev/loop4"
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            ],
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_name": "ceph_lv1",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_size": "21470642176",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "name": "ceph_lv1",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "tags": {
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.cluster_name": "ceph",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.crush_device_class": "",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.encrypted": "0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.objectstore": "bluestore",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.osd_id": "1",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.type": "block",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.vdo": "0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.with_tpm": "0"
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            },
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "type": "block",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "vg_name": "ceph_vg1"
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:        }
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:    ],
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:    "2": [
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:        {
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "devices": [
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "/dev/loop5"
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            ],
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_name": "ceph_lv2",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_size": "21470642176",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "name": "ceph_lv2",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "tags": {
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.cluster_name": "ceph",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.crush_device_class": "",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.encrypted": "0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.objectstore": "bluestore",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.osd_id": "2",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.type": "block",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.vdo": "0",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:                "ceph.with_tpm": "0"
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            },
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "type": "block",
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:            "vg_name": "ceph_vg2"
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:        }
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]:    ]
Jan 27 09:35:53 np0005597378 hungry_lumiere[383847]: }
Jan 27 09:35:53 np0005597378 systemd[1]: libpod-c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e.scope: Deactivated successfully.
Jan 27 09:35:53 np0005597378 podman[383830]: 2026-01-27 14:35:53.367275059 +0000 UTC m=+0.515430361 container died c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lumiere, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 27 09:35:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay-07f26f93f7db8fb3af1c170c72f63f7559085327ebeae0046480a727abe78ef8-merged.mount: Deactivated successfully.
Jan 27 09:35:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:53 np0005597378 podman[383830]: 2026-01-27 14:35:53.530746288 +0000 UTC m=+0.678901630 container remove c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lumiere, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:35:53 np0005597378 systemd[1]: libpod-conmon-c8e9c71613c6f864b575470a0e30b25c2c3853214f68e31a46e4675f25b6a60e.scope: Deactivated successfully.
Jan 27 09:35:53 np0005597378 nova_compute[238941]: 2026-01-27 14:35:53.641 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:35:54 np0005597378 podman[383931]: 2026-01-27 14:35:54.034449732 +0000 UTC m=+0.063706419 container create ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:35:54 np0005597378 podman[383931]: 2026-01-27 14:35:53.991274207 +0000 UTC m=+0.020530914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:35:54 np0005597378 systemd[1]: Started libpod-conmon-ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7.scope.
Jan 27 09:35:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:35:54 np0005597378 podman[383931]: 2026-01-27 14:35:54.17307571 +0000 UTC m=+0.202332397 container init ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lamarr, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:35:54 np0005597378 podman[383931]: 2026-01-27 14:35:54.184424756 +0000 UTC m=+0.213681443 container start ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lamarr, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 09:35:54 np0005597378 blissful_lamarr[383947]: 167 167
Jan 27 09:35:54 np0005597378 systemd[1]: libpod-ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7.scope: Deactivated successfully.
Jan 27 09:35:54 np0005597378 podman[383931]: 2026-01-27 14:35:54.213683945 +0000 UTC m=+0.242940662 container attach ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lamarr, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:35:54 np0005597378 podman[383931]: 2026-01-27 14:35:54.214857917 +0000 UTC m=+0.244114634 container died ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:35:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-75e25a017e5334f0477c50fe723cbeed328bb4f005e59afcb29c82835ba52383-merged.mount: Deactivated successfully.
Jan 27 09:35:54 np0005597378 podman[383931]: 2026-01-27 14:35:54.310549118 +0000 UTC m=+0.339805835 container remove ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:35:54 np0005597378 systemd[1]: libpod-conmon-ca609b48fa35005c4fc4ac6146785546b48b99c0dc0642f957f9543de58027e7.scope: Deactivated successfully.
Jan 27 09:35:54 np0005597378 podman[383974]: 2026-01-27 14:35:54.472195917 +0000 UTC m=+0.041083928 container create 5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:35:54 np0005597378 systemd[1]: Started libpod-conmon-5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638.scope.
Jan 27 09:35:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:35:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c1e7c58d37041178e8827da1e66d4abecea33d42c8d32274d9389ceb8d799/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:35:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c1e7c58d37041178e8827da1e66d4abecea33d42c8d32274d9389ceb8d799/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:35:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c1e7c58d37041178e8827da1e66d4abecea33d42c8d32274d9389ceb8d799/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:35:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c1e7c58d37041178e8827da1e66d4abecea33d42c8d32274d9389ceb8d799/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:35:54 np0005597378 podman[383974]: 2026-01-27 14:35:54.454614273 +0000 UTC m=+0.023502304 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:35:54 np0005597378 podman[383974]: 2026-01-27 14:35:54.563348466 +0000 UTC m=+0.132236507 container init 5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kilby, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:35:54 np0005597378 podman[383974]: 2026-01-27 14:35:54.570271522 +0000 UTC m=+0.139159523 container start 5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:35:54 np0005597378 podman[383974]: 2026-01-27 14:35:54.574060525 +0000 UTC m=+0.142948536 container attach 5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:35:55 np0005597378 lvm[384069]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:35:55 np0005597378 lvm[384069]: VG ceph_vg1 finished
Jan 27 09:35:55 np0005597378 lvm[384068]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:35:55 np0005597378 lvm[384068]: VG ceph_vg0 finished
Jan 27 09:35:55 np0005597378 lvm[384071]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:35:55 np0005597378 lvm[384071]: VG ceph_vg2 finished
Jan 27 09:35:55 np0005597378 unruffled_kilby[383990]: {}
Jan 27 09:35:55 np0005597378 systemd[1]: libpod-5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638.scope: Deactivated successfully.
Jan 27 09:35:55 np0005597378 systemd[1]: libpod-5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638.scope: Consumed 1.361s CPU time.
Jan 27 09:35:55 np0005597378 podman[383974]: 2026-01-27 14:35:55.407042599 +0000 UTC m=+0.975930610 container died 5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:35:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-842c1e7c58d37041178e8827da1e66d4abecea33d42c8d32274d9389ceb8d799-merged.mount: Deactivated successfully.
Jan 27 09:35:55 np0005597378 podman[383974]: 2026-01-27 14:35:55.450356077 +0000 UTC m=+1.019244088 container remove 5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:35:55 np0005597378 systemd[1]: libpod-conmon-5d980f98310e57a5f1d169c1b3feff9e97f2a46e447282a29adfddd44de97638.scope: Deactivated successfully.
Jan 27 09:35:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:35:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:35:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:35:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:35:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:35:56 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:35:56 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:35:57 np0005597378 nova_compute[238941]: 2026-01-27 14:35:57.046 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:35:57 np0005597378 nova_compute[238941]: 2026-01-27 14:35:57.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:35:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:58 np0005597378 nova_compute[238941]: 2026-01-27 14:35:58.645 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:35:58 np0005597378 podman[384109]: 2026-01-27 14:35:58.735626147 +0000 UTC m=+0.068266732 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:35:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:35:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:35:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4271944443' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:35:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:35:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4271944443' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:36:00 np0005597378 ceph-osd[86941]: bluestore.MempoolThread fragmentation_score=0.004502 took=0.000055s
Jan 27 09:36:00 np0005597378 ceph-osd[88005]: bluestore.MempoolThread fragmentation_score=0.004046 took=0.000057s
Jan 27 09:36:00 np0005597378 ceph-osd[85897]: bluestore.MempoolThread fragmentation_score=0.004528 took=0.000084s
Jan 27 09:36:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:36:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:01 np0005597378 podman[384130]: 2026-01-27 14:36:01.783292938 +0000 UTC m=+0.118846576 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 09:36:02 np0005597378 nova_compute[238941]: 2026-01-27 14:36:02.048 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:36:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:03 np0005597378 nova_compute[238941]: 2026-01-27 14:36:03.646 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:36:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:36:07 np0005597378 nova_compute[238941]: 2026-01-27 14:36:07.050 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:36:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:08 np0005597378 nova_compute[238941]: 2026-01-27 14:36:08.646 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:36:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:36:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:12 np0005597378 nova_compute[238941]: 2026-01-27 14:36:12.053 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:36:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:13 np0005597378 nova_compute[238941]: 2026-01-27 14:36:13.650 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:36:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:36:17 np0005597378 nova_compute[238941]: 2026-01-27 14:36:17.055 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:36:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:36:17
Jan 27 09:36:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:36:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:36:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.meta', 'vms', 'volumes', 'cephfs.cephfs.data', '.mgr']
Jan 27 09:36:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:36:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:36:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:36:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:36:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:36:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:36:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:36:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:36:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:36:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:36:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:36:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:36:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:36:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:36:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:36:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:36:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:36:18 np0005597378 nova_compute[238941]: 2026-01-27 14:36:18.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:36:18 np0005597378 nova_compute[238941]: 2026-01-27 14:36:18.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:36:18 np0005597378 nova_compute[238941]: 2026-01-27 14:36:18.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:36:18 np0005597378 nova_compute[238941]: 2026-01-27 14:36:18.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:36:18 np0005597378 nova_compute[238941]: 2026-01-27 14:36:18.411 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:36:18 np0005597378 nova_compute[238941]: 2026-01-27 14:36:18.411 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:36:18 np0005597378 nova_compute[238941]: 2026-01-27 14:36:18.652 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:36:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2948348745' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:36:18 np0005597378 nova_compute[238941]: 2026-01-27 14:36:18.993 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:36:19 np0005597378 nova_compute[238941]: 2026-01-27 14:36:19.173 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:36:19 np0005597378 nova_compute[238941]: 2026-01-27 14:36:19.174 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3567MB free_disk=59.98731577582657GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:36:19 np0005597378 nova_compute[238941]: 2026-01-27 14:36:19.175 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:36:19 np0005597378 nova_compute[238941]: 2026-01-27 14:36:19.175 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:36:19 np0005597378 nova_compute[238941]: 2026-01-27 14:36:19.242 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:36:19 np0005597378 nova_compute[238941]: 2026-01-27 14:36:19.242 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:36:19 np0005597378 nova_compute[238941]: 2026-01-27 14:36:19.258 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:36:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:36:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225372502' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:36:19 np0005597378 nova_compute[238941]: 2026-01-27 14:36:19.994 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:36:20 np0005597378 nova_compute[238941]: 2026-01-27 14:36:20.001 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:36:20 np0005597378 nova_compute[238941]: 2026-01-27 14:36:20.019 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:36:20 np0005597378 nova_compute[238941]: 2026-01-27 14:36:20.020 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:36:20 np0005597378 nova_compute[238941]: 2026-01-27 14:36:20.021 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:36:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:36:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:22 np0005597378 nova_compute[238941]: 2026-01-27 14:36:22.058 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:23 np0005597378 nova_compute[238941]: 2026-01-27 14:36:23.655 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:25 np0005597378 nova_compute[238941]: 2026-01-27 14:36:25.020 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:36:25 np0005597378 nova_compute[238941]: 2026-01-27 14:36:25.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:36:25 np0005597378 nova_compute[238941]: 2026-01-27 14:36:25.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:36:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:36:27 np0005597378 nova_compute[238941]: 2026-01-27 14:36:27.061 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6094379657303952e-05 of space, bias 1.0, pg target 0.004828313897191186 quantized to 32 (current 32)
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.021134189914476e-06 of space, bias 1.0, pg target 0.0012063402569743428 quantized to 32 (current 32)
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0537023756073548e-06 of space, bias 4.0, pg target 0.0012644428507288259 quantized to 16 (current 16)
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:36:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:36:28 np0005597378 nova_compute[238941]: 2026-01-27 14:36:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:36:28 np0005597378 nova_compute[238941]: 2026-01-27 14:36:28.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 09:36:28 np0005597378 nova_compute[238941]: 2026-01-27 14:36:28.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 09:36:28 np0005597378 nova_compute[238941]: 2026-01-27 14:36:28.401 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 09:36:28 np0005597378 nova_compute[238941]: 2026-01-27 14:36:28.656 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:29 np0005597378 podman[384200]: 2026-01-27 14:36:29.701359925 +0000 UTC m=+0.045814627 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:36:30 np0005597378 nova_compute[238941]: 2026-01-27 14:36:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:36:30 np0005597378 nova_compute[238941]: 2026-01-27 14:36:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:36:30 np0005597378 nova_compute[238941]: 2026-01-27 14:36:30.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 09:36:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:36:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:32 np0005597378 nova_compute[238941]: 2026-01-27 14:36:32.063 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:32 np0005597378 podman[384220]: 2026-01-27 14:36:32.73247534 +0000 UTC m=+0.073505234 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 27 09:36:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:33 np0005597378 nova_compute[238941]: 2026-01-27 14:36:33.659 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:36:37 np0005597378 nova_compute[238941]: 2026-01-27 14:36:37.065 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:38 np0005597378 nova_compute[238941]: 2026-01-27 14:36:38.430 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:36:38 np0005597378 nova_compute[238941]: 2026-01-27 14:36:38.660 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:36:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:42 np0005597378 nova_compute[238941]: 2026-01-27 14:36:42.068 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:43 np0005597378 nova_compute[238941]: 2026-01-27 14:36:43.663 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:44 np0005597378 nova_compute[238941]: 2026-01-27 14:36:44.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:36:44 np0005597378 nova_compute[238941]: 2026-01-27 14:36:44.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:36:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:36:46.346 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:36:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:36:46.346 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:36:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:36:46.346 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:36:46 np0005597378 nova_compute[238941]: 2026-01-27 14:36:46.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:36:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:36:47 np0005597378 nova_compute[238941]: 2026-01-27 14:36:47.069 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:36:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:36:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:36:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:36:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:36:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:36:48 np0005597378 nova_compute[238941]: 2026-01-27 14:36:48.663 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:49 np0005597378 nova_compute[238941]: 2026-01-27 14:36:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:36:49 np0005597378 nova_compute[238941]: 2026-01-27 14:36:49.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 09:36:49 np0005597378 nova_compute[238941]: 2026-01-27 14:36:49.401 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 09:36:49 np0005597378 nova_compute[238941]: 2026-01-27 14:36:49.402 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:36:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:36:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:52 np0005597378 nova_compute[238941]: 2026-01-27 14:36:52.072 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:53 np0005597378 nova_compute[238941]: 2026-01-27 14:36:53.665 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:36:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:36:56 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:36:56 np0005597378 podman[384391]: 2026-01-27 14:36:56.856593182 +0000 UTC m=+0.057798790 container create 90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_grothendieck, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:36:56 np0005597378 systemd[1]: Started libpod-conmon-90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745.scope.
Jan 27 09:36:56 np0005597378 podman[384391]: 2026-01-27 14:36:56.824378774 +0000 UTC m=+0.025584402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:36:56 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:36:56 np0005597378 podman[384391]: 2026-01-27 14:36:56.960090424 +0000 UTC m=+0.161296072 container init 90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:36:56 np0005597378 podman[384391]: 2026-01-27 14:36:56.967449422 +0000 UTC m=+0.168655030 container start 90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 09:36:56 np0005597378 unruffled_grothendieck[384407]: 167 167
Jan 27 09:36:56 np0005597378 systemd[1]: libpod-90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745.scope: Deactivated successfully.
Jan 27 09:36:56 np0005597378 podman[384391]: 2026-01-27 14:36:56.99594075 +0000 UTC m=+0.197146378 container attach 90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:36:56 np0005597378 podman[384391]: 2026-01-27 14:36:56.997795639 +0000 UTC m=+0.199001257 container died 90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_grothendieck, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:36:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fc4fd7b9042e4ac65621b9ba3ab0e7eeec3aa9322b85b76da09e8d23c5b0dea4-merged.mount: Deactivated successfully.
Jan 27 09:36:57 np0005597378 nova_compute[238941]: 2026-01-27 14:36:57.074 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:36:57 np0005597378 podman[384391]: 2026-01-27 14:36:57.098163067 +0000 UTC m=+0.299368675 container remove 90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:36:57 np0005597378 systemd[1]: libpod-conmon-90e28a253d351fa4ad740905f08369b83f08d8bd8b4ae694f9c4bd71db34b745.scope: Deactivated successfully.
Jan 27 09:36:57 np0005597378 podman[384433]: 2026-01-27 14:36:57.268216403 +0000 UTC m=+0.045800506 container create 22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 09:36:57 np0005597378 systemd[1]: Started libpod-conmon-22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f.scope.
Jan 27 09:36:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:36:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6be319db92cb15d97cfd2714cd10e39ad35e6e3094d3275a8a191da62a4802f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:36:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6be319db92cb15d97cfd2714cd10e39ad35e6e3094d3275a8a191da62a4802f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:36:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6be319db92cb15d97cfd2714cd10e39ad35e6e3094d3275a8a191da62a4802f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:36:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6be319db92cb15d97cfd2714cd10e39ad35e6e3094d3275a8a191da62a4802f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:36:57 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6be319db92cb15d97cfd2714cd10e39ad35e6e3094d3275a8a191da62a4802f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:36:57 np0005597378 podman[384433]: 2026-01-27 14:36:57.250364882 +0000 UTC m=+0.027949005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:36:57 np0005597378 podman[384433]: 2026-01-27 14:36:57.356847723 +0000 UTC m=+0.134431866 container init 22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noether, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:36:57 np0005597378 podman[384433]: 2026-01-27 14:36:57.367207522 +0000 UTC m=+0.144791625 container start 22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noether, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:36:57 np0005597378 podman[384433]: 2026-01-27 14:36:57.379630977 +0000 UTC m=+0.157215130 container attach 22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noether, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:36:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 463 KiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:36:57 np0005597378 angry_noether[384449]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:36:57 np0005597378 angry_noether[384449]: --> All data devices are unavailable
Jan 27 09:36:57 np0005597378 systemd[1]: libpod-22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f.scope: Deactivated successfully.
Jan 27 09:36:57 np0005597378 podman[384433]: 2026-01-27 14:36:57.826104748 +0000 UTC m=+0.603688881 container died 22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:36:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Jan 27 09:36:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c6be319db92cb15d97cfd2714cd10e39ad35e6e3094d3275a8a191da62a4802f-merged.mount: Deactivated successfully.
Jan 27 09:36:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Jan 27 09:36:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Jan 27 09:36:57 np0005597378 podman[384433]: 2026-01-27 14:36:57.96446992 +0000 UTC m=+0.742054023 container remove 22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_noether, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:36:57 np0005597378 systemd[1]: libpod-conmon-22e5e70844b64056fb5a64bc6cd4208a4a3bf9c69753ad86d1503728d1903c5f.scope: Deactivated successfully.
Jan 27 09:36:58 np0005597378 podman[384542]: 2026-01-27 14:36:58.435603485 +0000 UTC m=+0.048876359 container create 2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_davinci, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:36:58 np0005597378 podman[384542]: 2026-01-27 14:36:58.41202448 +0000 UTC m=+0.025297394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:36:58 np0005597378 systemd[1]: Started libpod-conmon-2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98.scope.
Jan 27 09:36:58 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:36:58 np0005597378 podman[384542]: 2026-01-27 14:36:58.661736504 +0000 UTC m=+0.275009398 container init 2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_davinci, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 09:36:58 np0005597378 nova_compute[238941]: 2026-01-27 14:36:58.667 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:36:58 np0005597378 podman[384542]: 2026-01-27 14:36:58.67049888 +0000 UTC m=+0.283771764 container start 2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:36:58 np0005597378 admiring_davinci[384558]: 167 167
Jan 27 09:36:58 np0005597378 systemd[1]: libpod-2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98.scope: Deactivated successfully.
Jan 27 09:36:58 np0005597378 podman[384542]: 2026-01-27 14:36:58.67976166 +0000 UTC m=+0.293034564 container attach 2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_davinci, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:36:58 np0005597378 podman[384542]: 2026-01-27 14:36:58.681018334 +0000 UTC m=+0.294291218 container died 2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_davinci, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:36:58 np0005597378 systemd[1]: var-lib-containers-storage-overlay-39f1c8c03b0db14a7f7ede0d3364308ba960ef0da10a47a35565155430c6f6ce-merged.mount: Deactivated successfully.
Jan 27 09:36:59 np0005597378 podman[384542]: 2026-01-27 14:36:59.006582664 +0000 UTC m=+0.619855548 container remove 2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_davinci, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:36:59 np0005597378 systemd[1]: libpod-conmon-2dfd0d7220263bc6cd2e772a4c1c75e7c720926282d2e526841d1b4940bb2e98.scope: Deactivated successfully.
Jan 27 09:36:59 np0005597378 podman[384582]: 2026-01-27 14:36:59.174068201 +0000 UTC m=+0.030956966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:36:59 np0005597378 podman[384582]: 2026-01-27 14:36:59.332800631 +0000 UTC m=+0.189689386 container create 3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:36:59 np0005597378 systemd[1]: Started libpod-conmon-3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b.scope.
Jan 27 09:36:59 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:36:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0303f3296f320c68d24cd5cef7bf74e3d2d3f79566ca211a523f87a8df315c6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:36:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0303f3296f320c68d24cd5cef7bf74e3d2d3f79566ca211a523f87a8df315c6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:36:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0303f3296f320c68d24cd5cef7bf74e3d2d3f79566ca211a523f87a8df315c6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:36:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0303f3296f320c68d24cd5cef7bf74e3d2d3f79566ca211a523f87a8df315c6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:36:59 np0005597378 podman[384582]: 2026-01-27 14:36:59.465035908 +0000 UTC m=+0.321924703 container init 3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:36:59 np0005597378 podman[384582]: 2026-01-27 14:36:59.473985589 +0000 UTC m=+0.330874334 container start 3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:36:59 np0005597378 podman[384582]: 2026-01-27 14:36:59.482145899 +0000 UTC m=+0.339034644 container attach 3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 09:36:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 27 09:36:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:36:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/222453428' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:36:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:36:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/222453428' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:36:59 np0005597378 nice_bell[384599]: {
Jan 27 09:36:59 np0005597378 nice_bell[384599]:    "0": [
Jan 27 09:36:59 np0005597378 nice_bell[384599]:        {
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "devices": [
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "/dev/loop3"
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            ],
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_name": "ceph_lv0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_size": "21470642176",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "name": "ceph_lv0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "tags": {
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.cluster_name": "ceph",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.crush_device_class": "",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.encrypted": "0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.objectstore": "bluestore",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.osd_id": "0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.type": "block",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.vdo": "0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.with_tpm": "0"
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            },
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "type": "block",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "vg_name": "ceph_vg0"
Jan 27 09:36:59 np0005597378 nice_bell[384599]:        }
Jan 27 09:36:59 np0005597378 nice_bell[384599]:    ],
Jan 27 09:36:59 np0005597378 nice_bell[384599]:    "1": [
Jan 27 09:36:59 np0005597378 nice_bell[384599]:        {
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "devices": [
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "/dev/loop4"
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            ],
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_name": "ceph_lv1",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_size": "21470642176",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "name": "ceph_lv1",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "tags": {
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.cluster_name": "ceph",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.crush_device_class": "",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.encrypted": "0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.objectstore": "bluestore",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.osd_id": "1",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.type": "block",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.vdo": "0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.with_tpm": "0"
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            },
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "type": "block",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "vg_name": "ceph_vg1"
Jan 27 09:36:59 np0005597378 nice_bell[384599]:        }
Jan 27 09:36:59 np0005597378 nice_bell[384599]:    ],
Jan 27 09:36:59 np0005597378 nice_bell[384599]:    "2": [
Jan 27 09:36:59 np0005597378 nice_bell[384599]:        {
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "devices": [
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "/dev/loop5"
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            ],
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_name": "ceph_lv2",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_size": "21470642176",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "name": "ceph_lv2",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "tags": {
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.cluster_name": "ceph",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.crush_device_class": "",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.encrypted": "0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.objectstore": "bluestore",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.osd_id": "2",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.type": "block",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.vdo": "0",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:                "ceph.with_tpm": "0"
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            },
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "type": "block",
Jan 27 09:36:59 np0005597378 nice_bell[384599]:            "vg_name": "ceph_vg2"
Jan 27 09:36:59 np0005597378 nice_bell[384599]:        }
Jan 27 09:36:59 np0005597378 nice_bell[384599]:    ]
Jan 27 09:36:59 np0005597378 nice_bell[384599]: }
Jan 27 09:36:59 np0005597378 systemd[1]: libpod-3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b.scope: Deactivated successfully.
Jan 27 09:36:59 np0005597378 podman[384582]: 2026-01-27 14:36:59.831527731 +0000 UTC m=+0.688416496 container died 3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:36:59 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0303f3296f320c68d24cd5cef7bf74e3d2d3f79566ca211a523f87a8df315c6e-merged.mount: Deactivated successfully.
Jan 27 09:36:59 np0005597378 podman[384582]: 2026-01-27 14:36:59.893659887 +0000 UTC m=+0.750548632 container remove 3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 09:36:59 np0005597378 systemd[1]: libpod-conmon-3b4a91e0263bd16d667a3e30f38b3f52eda3fdf028a57db74c52143fb595b47b.scope: Deactivated successfully.
Jan 27 09:36:59 np0005597378 podman[384609]: 2026-01-27 14:36:59.943408308 +0000 UTC m=+0.082507246 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 09:37:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Jan 27 09:37:00 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Jan 27 09:37:00 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Jan 27 09:37:00 np0005597378 podman[384701]: 2026-01-27 14:37:00.369542211 +0000 UTC m=+0.040551014 container create c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rosalind, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:37:00 np0005597378 systemd[1]: Started libpod-conmon-c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab.scope.
Jan 27 09:37:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:37:00 np0005597378 podman[384701]: 2026-01-27 14:37:00.445580022 +0000 UTC m=+0.116588865 container init c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 09:37:00 np0005597378 podman[384701]: 2026-01-27 14:37:00.352717477 +0000 UTC m=+0.023726310 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:37:00 np0005597378 podman[384701]: 2026-01-27 14:37:00.453926737 +0000 UTC m=+0.124935540 container start c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rosalind, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:37:00 np0005597378 sad_rosalind[384718]: 167 167
Jan 27 09:37:00 np0005597378 systemd[1]: libpod-c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab.scope: Deactivated successfully.
Jan 27 09:37:00 np0005597378 podman[384701]: 2026-01-27 14:37:00.460633498 +0000 UTC m=+0.131642301 container attach c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rosalind, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:37:00 np0005597378 podman[384701]: 2026-01-27 14:37:00.46108329 +0000 UTC m=+0.132092113 container died c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:37:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b25087d39781e1d379ccc8c809d801a670b32b99fdd1893bf4e1c3db9d0d8f53-merged.mount: Deactivated successfully.
Jan 27 09:37:00 np0005597378 podman[384701]: 2026-01-27 14:37:00.52044624 +0000 UTC m=+0.191455033 container remove c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rosalind, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:37:00 np0005597378 systemd[1]: libpod-conmon-c403eb10c913edbf83711f6c31386cedb3ff65c67862f8cf3f98dd2fe840f8ab.scope: Deactivated successfully.
Jan 27 09:37:00 np0005597378 podman[384742]: 2026-01-27 14:37:00.697768153 +0000 UTC m=+0.045352174 container create e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 09:37:00 np0005597378 systemd[1]: Started libpod-conmon-e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af.scope.
Jan 27 09:37:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:37:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2911fc42450d5a96e15df2bb8f2a04ebb05d5717f0c946053d81fa41d482ac3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:37:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2911fc42450d5a96e15df2bb8f2a04ebb05d5717f0c946053d81fa41d482ac3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:37:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2911fc42450d5a96e15df2bb8f2a04ebb05d5717f0c946053d81fa41d482ac3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:37:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2911fc42450d5a96e15df2bb8f2a04ebb05d5717f0c946053d81fa41d482ac3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:37:00 np0005597378 podman[384742]: 2026-01-27 14:37:00.679253414 +0000 UTC m=+0.026837455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:37:00 np0005597378 podman[384742]: 2026-01-27 14:37:00.785123878 +0000 UTC m=+0.132707929 container init e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 09:37:00 np0005597378 podman[384742]: 2026-01-27 14:37:00.794980154 +0000 UTC m=+0.142564175 container start e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:37:00 np0005597378 podman[384742]: 2026-01-27 14:37:00.798713015 +0000 UTC m=+0.146297056 container attach e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:37:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:37:01 np0005597378 lvm[384839]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:37:01 np0005597378 lvm[384839]: VG ceph_vg1 finished
Jan 27 09:37:01 np0005597378 lvm[384838]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:37:01 np0005597378 lvm[384838]: VG ceph_vg0 finished
Jan 27 09:37:01 np0005597378 lvm[384841]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:37:01 np0005597378 lvm[384841]: VG ceph_vg2 finished
Jan 27 09:37:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 3.6 MiB/s wr, 30 op/s
Jan 27 09:37:01 np0005597378 sweet_mirzakhani[384759]: {}
Jan 27 09:37:01 np0005597378 systemd[1]: libpod-e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af.scope: Deactivated successfully.
Jan 27 09:37:01 np0005597378 systemd[1]: libpod-e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af.scope: Consumed 1.360s CPU time.
Jan 27 09:37:01 np0005597378 podman[384742]: 2026-01-27 14:37:01.649397057 +0000 UTC m=+0.996981088 container died e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 27 09:37:01 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b2911fc42450d5a96e15df2bb8f2a04ebb05d5717f0c946053d81fa41d482ac3-merged.mount: Deactivated successfully.
Jan 27 09:37:01 np0005597378 podman[384742]: 2026-01-27 14:37:01.710188596 +0000 UTC m=+1.057772617 container remove e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 09:37:01 np0005597378 systemd[1]: libpod-conmon-e8c4cb46f7bc3378d17e053cfb4d4dd94a68b19b6b21864f88d88981703e80af.scope: Deactivated successfully.
Jan 27 09:37:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:37:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:37:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:37:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:37:02 np0005597378 nova_compute[238941]: 2026-01-27 14:37:02.078 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:02 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:37:02 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:37:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 37 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 4.6 MiB/s wr, 46 op/s
Jan 27 09:37:03 np0005597378 nova_compute[238941]: 2026-01-27 14:37:03.668 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:03 np0005597378 podman[384882]: 2026-01-27 14:37:03.775119554 +0000 UTC m=+0.109251958 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Jan 27 09:37:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 27 09:37:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:37:07 np0005597378 nova_compute[238941]: 2026-01-27 14:37:07.082 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.3 MiB/s wr, 39 op/s
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.628086) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524627628134, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1910, "num_deletes": 253, "total_data_size": 3195377, "memory_usage": 3249240, "flush_reason": "Manual Compaction"}
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524627673078, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 3130614, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56984, "largest_seqno": 58893, "table_properties": {"data_size": 3121683, "index_size": 5616, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17872, "raw_average_key_size": 20, "raw_value_size": 3103945, "raw_average_value_size": 3515, "num_data_blocks": 249, "num_entries": 883, "num_filter_entries": 883, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524426, "oldest_key_time": 1769524426, "file_creation_time": 1769524627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 45051 microseconds, and 7054 cpu microseconds.
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.673136) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 3130614 bytes OK
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.673159) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.682517) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.682552) EVENT_LOG_v1 {"time_micros": 1769524627682545, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.682573) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3187262, prev total WAL file size 3187262, number of live WAL files 2.
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.683591) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(3057KB)], [134(9974KB)]
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524627683646, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 13344036, "oldest_snapshot_seqno": -1}
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 8011 keys, 11601867 bytes, temperature: kUnknown
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524627764689, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11601867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11547378, "index_size": 33384, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20037, "raw_key_size": 208769, "raw_average_key_size": 26, "raw_value_size": 11403618, "raw_average_value_size": 1423, "num_data_blocks": 1305, "num_entries": 8011, "num_filter_entries": 8011, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.764928) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11601867 bytes
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.768517) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.5 rd, 143.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.7 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 8532, records dropped: 521 output_compression: NoCompression
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.768534) EVENT_LOG_v1 {"time_micros": 1769524627768526, "job": 82, "event": "compaction_finished", "compaction_time_micros": 81121, "compaction_time_cpu_micros": 25045, "output_level": 6, "num_output_files": 1, "total_output_size": 11601867, "num_input_records": 8532, "num_output_records": 8011, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524627769033, "job": 82, "event": "table_file_deletion", "file_number": 136}
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524627770747, "job": 82, "event": "table_file_deletion", "file_number": 134}
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.683463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.770772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.770776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.770777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.770778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:37:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:07.770780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:37:08 np0005597378 nova_compute[238941]: 2026-01-27 14:37:08.670 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 18 op/s
Jan 27 09:37:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:37:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.1 MiB/s wr, 12 op/s
Jan 27 09:37:12 np0005597378 nova_compute[238941]: 2026-01-27 14:37:12.084 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.0 MiB/s wr, 11 op/s
Jan 27 09:37:13 np0005597378 nova_compute[238941]: 2026-01-27 14:37:13.700 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 379 KiB/s wr, 0 op/s
Jan 27 09:37:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:37:17 np0005597378 nova_compute[238941]: 2026-01-27 14:37:17.087 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:37:17
Jan 27 09:37:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:37:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:37:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'images', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.log']
Jan 27 09:37:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:37:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:37:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:37:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:37:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:37:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:37:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:37:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:37:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:37:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:37:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:37:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:37:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:37:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:37:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:37:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:37:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:37:18 np0005597378 nova_compute[238941]: 2026-01-27 14:37:18.414 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:37:18 np0005597378 nova_compute[238941]: 2026-01-27 14:37:18.444 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:37:18 np0005597378 nova_compute[238941]: 2026-01-27 14:37:18.444 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:37:18 np0005597378 nova_compute[238941]: 2026-01-27 14:37:18.445 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:37:18 np0005597378 nova_compute[238941]: 2026-01-27 14:37:18.445 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:37:18 np0005597378 nova_compute[238941]: 2026-01-27 14:37:18.445 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:37:18 np0005597378 nova_compute[238941]: 2026-01-27 14:37:18.702 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:37:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417110961' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:37:19 np0005597378 nova_compute[238941]: 2026-01-27 14:37:19.044 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:37:19 np0005597378 nova_compute[238941]: 2026-01-27 14:37:19.201 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:37:19 np0005597378 nova_compute[238941]: 2026-01-27 14:37:19.202 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3550MB free_disk=59.98731359001249GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:37:19 np0005597378 nova_compute[238941]: 2026-01-27 14:37:19.202 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:37:19 np0005597378 nova_compute[238941]: 2026-01-27 14:37:19.203 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:37:19 np0005597378 nova_compute[238941]: 2026-01-27 14:37:19.271 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:37:19 np0005597378 nova_compute[238941]: 2026-01-27 14:37:19.272 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:37:19 np0005597378 nova_compute[238941]: 2026-01-27 14:37:19.289 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:37:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:37:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/858766454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:37:19 np0005597378 nova_compute[238941]: 2026-01-27 14:37:19.882 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:37:19 np0005597378 nova_compute[238941]: 2026-01-27 14:37:19.889 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:37:19 np0005597378 nova_compute[238941]: 2026-01-27 14:37:19.921 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:37:19 np0005597378 nova_compute[238941]: 2026-01-27 14:37:19.923 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:37:19 np0005597378 nova_compute[238941]: 2026-01-27 14:37:19.924 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:37:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:37:20.572 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:37:20 np0005597378 nova_compute[238941]: 2026-01-27 14:37:20.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:20 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:37:20.574 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:37:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:37:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:22 np0005597378 nova_compute[238941]: 2026-01-27 14:37:22.090 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:23 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:37:23.576 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:37:23 np0005597378 nova_compute[238941]: 2026-01-27 14:37:23.705 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:25 np0005597378 nova_compute[238941]: 2026-01-27 14:37:25.892 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:37:25 np0005597378 nova_compute[238941]: 2026-01-27 14:37:25.893 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:37:26 np0005597378 nova_compute[238941]: 2026-01-27 14:37:26.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:37:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:37:27 np0005597378 nova_compute[238941]: 2026-01-27 14:37:27.091 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.613081700868452e-05 of space, bias 1.0, pg target 0.0048392451026053555 quantized to 32 (current 32)
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006698150843422635 of space, bias 1.0, pg target 0.20094452530267903 quantized to 32 (current 32)
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547891308679597e-06 of space, bias 4.0, pg target 0.0012657469570415518 quantized to 16 (current 16)
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:37:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:37:28 np0005597378 nova_compute[238941]: 2026-01-27 14:37:28.706 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Jan 27 09:37:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Jan 27 09:37:28 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Jan 27 09:37:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 818 B/s rd, 102 B/s wr, 1 op/s
Jan 27 09:37:30 np0005597378 nova_compute[238941]: 2026-01-27 14:37:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:37:30 np0005597378 nova_compute[238941]: 2026-01-27 14:37:30.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:37:30 np0005597378 nova_compute[238941]: 2026-01-27 14:37:30.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:37:30 np0005597378 nova_compute[238941]: 2026-01-27 14:37:30.399 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:37:30 np0005597378 nova_compute[238941]: 2026-01-27 14:37:30.399 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:37:30 np0005597378 podman[384953]: 2026-01-27 14:37:30.748739114 +0000 UTC m=+0.086900945 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:37:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:37:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 409 B/s wr, 2 op/s
Jan 27 09:37:32 np0005597378 nova_compute[238941]: 2026-01-27 14:37:32.094 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1023 B/s wr, 20 op/s
Jan 27 09:37:33 np0005597378 nova_compute[238941]: 2026-01-27 14:37:33.708 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:34 np0005597378 podman[384972]: 2026-01-27 14:37:34.742001375 +0000 UTC m=+0.086577946 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Jan 27 09:37:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 27 09:37:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:37:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Jan 27 09:37:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Jan 27 09:37:36 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Jan 27 09:37:37 np0005597378 nova_compute[238941]: 2026-01-27 14:37:37.096 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Jan 27 09:37:38 np0005597378 nova_compute[238941]: 2026-01-27 14:37:38.712 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:39 np0005597378 nova_compute[238941]: 2026-01-27 14:37:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:37:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Jan 27 09:37:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:37:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1023 B/s wr, 22 op/s
Jan 27 09:37:42 np0005597378 nova_compute[238941]: 2026-01-27 14:37:42.099 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 409 B/s wr, 4 op/s
Jan 27 09:37:43 np0005597378 nova_compute[238941]: 2026-01-27 14:37:43.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:44 np0005597378 nova_compute[238941]: 2026-01-27 14:37:44.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:37:44 np0005597378 nova_compute[238941]: 2026-01-27 14:37:44.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:37:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:37:46.347 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:37:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:37:46.347 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:37:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:37:46.347 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.666571) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524666666635, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 562, "num_deletes": 252, "total_data_size": 583838, "memory_usage": 593616, "flush_reason": "Manual Compaction"}
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524666744184, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 404979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58894, "largest_seqno": 59455, "table_properties": {"data_size": 402204, "index_size": 746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7470, "raw_average_key_size": 20, "raw_value_size": 396492, "raw_average_value_size": 1092, "num_data_blocks": 34, "num_entries": 363, "num_filter_entries": 363, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524628, "oldest_key_time": 1769524628, "file_creation_time": 1769524666, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 77691 microseconds, and 2229 cpu microseconds.
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.744254) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 404979 bytes OK
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.744286) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.791143) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.791229) EVENT_LOG_v1 {"time_micros": 1769524666791212, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.791274) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 580708, prev total WAL file size 580708, number of live WAL files 2.
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.792265) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353039' seq:0, type:0; will stop at (end)
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(395KB)], [137(11MB)]
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524666792662, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 12006846, "oldest_snapshot_seqno": -1}
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7872 keys, 8831573 bytes, temperature: kUnknown
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524666873535, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8831573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8782357, "index_size": 28431, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19717, "raw_key_size": 206072, "raw_average_key_size": 26, "raw_value_size": 8645352, "raw_average_value_size": 1098, "num_data_blocks": 1099, "num_entries": 7872, "num_filter_entries": 7872, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524666, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.873978) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8831573 bytes
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.891019) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.3 rd, 109.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 11.1 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(51.5) write-amplify(21.8) OK, records in: 8374, records dropped: 502 output_compression: NoCompression
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.891086) EVENT_LOG_v1 {"time_micros": 1769524666891061, "job": 84, "event": "compaction_finished", "compaction_time_micros": 80960, "compaction_time_cpu_micros": 40593, "output_level": 6, "num_output_files": 1, "total_output_size": 8831573, "num_input_records": 8374, "num_output_records": 7872, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524666891581, "job": 84, "event": "table_file_deletion", "file_number": 139}
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524666894505, "job": 84, "event": "table_file_deletion", "file_number": 137}
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.792123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.894551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.894557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.894559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.894561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:37:46 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:37:46.894563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:37:47 np0005597378 nova_compute[238941]: 2026-01-27 14:37:47.101 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:37:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:37:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:37:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:37:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:37:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:37:48 np0005597378 nova_compute[238941]: 2026-01-27 14:37:48.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:37:48 np0005597378 nova_compute[238941]: 2026-01-27 14:37:48.714 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:37:52 np0005597378 nova_compute[238941]: 2026-01-27 14:37:52.104 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:53 np0005597378 nova_compute[238941]: 2026-01-27 14:37:53.717 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:37:57 np0005597378 nova_compute[238941]: 2026-01-27 14:37:57.105 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:57 np0005597378 nova_compute[238941]: 2026-01-27 14:37:57.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:37:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:58 np0005597378 nova_compute[238941]: 2026-01-27 14:37:58.718 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:37:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:37:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:37:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/108643896' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:37:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:37:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/108643896' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:38:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:38:01 np0005597378 podman[384999]: 2026-01-27 14:38:01.707418713 +0000 UTC m=+0.052574410 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:38:02 np0005597378 nova_compute[238941]: 2026-01-27 14:38:02.108 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:38:02 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:38:03 np0005597378 podman[385160]: 2026-01-27 14:38:03.052815196 +0000 UTC m=+0.049895667 container create 576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_roentgen, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:38:03 np0005597378 systemd[1]: Started libpod-conmon-576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d.scope.
Jan 27 09:38:03 np0005597378 podman[385160]: 2026-01-27 14:38:03.024569465 +0000 UTC m=+0.021649956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:38:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:38:03 np0005597378 podman[385160]: 2026-01-27 14:38:03.156018799 +0000 UTC m=+0.153099290 container init 576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_roentgen, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:38:03 np0005597378 podman[385160]: 2026-01-27 14:38:03.165838475 +0000 UTC m=+0.162918946 container start 576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_roentgen, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:38:03 np0005597378 nostalgic_roentgen[385177]: 167 167
Jan 27 09:38:03 np0005597378 systemd[1]: libpod-576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d.scope: Deactivated successfully.
Jan 27 09:38:03 np0005597378 podman[385160]: 2026-01-27 14:38:03.188000901 +0000 UTC m=+0.185081372 container attach 576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_roentgen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:38:03 np0005597378 podman[385160]: 2026-01-27 14:38:03.188956258 +0000 UTC m=+0.186036759 container died 576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_roentgen, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:38:03 np0005597378 systemd[1]: var-lib-containers-storage-overlay-f267542e14b791fe62b6d91264e5c86fd3ee31260d229f870cdf198ee03734eb-merged.mount: Deactivated successfully.
Jan 27 09:38:03 np0005597378 podman[385160]: 2026-01-27 14:38:03.322685194 +0000 UTC m=+0.319765665 container remove 576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_roentgen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 27 09:38:03 np0005597378 systemd[1]: libpod-conmon-576d75c0b3fc5905b32a184155e6eb6eb3ad1d8a5676674f4524a20f0eb2d67d.scope: Deactivated successfully.
Jan 27 09:38:03 np0005597378 podman[385201]: 2026-01-27 14:38:03.546626553 +0000 UTC m=+0.081870009 container create 00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 09:38:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:03 np0005597378 podman[385201]: 2026-01-27 14:38:03.491858486 +0000 UTC m=+0.027101972 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:38:03 np0005597378 systemd[1]: Started libpod-conmon-00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0.scope.
Jan 27 09:38:03 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:38:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f6b4c1a05ad17ad6669a53c1e00880b29b93ccf196be4e83a12f74f32768e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:38:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f6b4c1a05ad17ad6669a53c1e00880b29b93ccf196be4e83a12f74f32768e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:38:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f6b4c1a05ad17ad6669a53c1e00880b29b93ccf196be4e83a12f74f32768e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:38:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f6b4c1a05ad17ad6669a53c1e00880b29b93ccf196be4e83a12f74f32768e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:38:03 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f6b4c1a05ad17ad6669a53c1e00880b29b93ccf196be4e83a12f74f32768e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:38:03 np0005597378 podman[385201]: 2026-01-27 14:38:03.651665186 +0000 UTC m=+0.186908662 container init 00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:38:03 np0005597378 podman[385201]: 2026-01-27 14:38:03.660934346 +0000 UTC m=+0.196177802 container start 00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:38:03 np0005597378 podman[385201]: 2026-01-27 14:38:03.666046154 +0000 UTC m=+0.201289610 container attach 00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 09:38:03 np0005597378 nova_compute[238941]: 2026-01-27 14:38:03.720 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:04 np0005597378 exciting_euclid[385217]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:38:04 np0005597378 exciting_euclid[385217]: --> All data devices are unavailable
Jan 27 09:38:04 np0005597378 systemd[1]: libpod-00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0.scope: Deactivated successfully.
Jan 27 09:38:04 np0005597378 podman[385237]: 2026-01-27 14:38:04.214582388 +0000 UTC m=+0.027831442 container died 00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:38:04 np0005597378 systemd[1]: var-lib-containers-storage-overlay-62f6b4c1a05ad17ad6669a53c1e00880b29b93ccf196be4e83a12f74f32768e1-merged.mount: Deactivated successfully.
Jan 27 09:38:04 np0005597378 podman[385237]: 2026-01-27 14:38:04.550914448 +0000 UTC m=+0.364163482 container remove 00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 09:38:04 np0005597378 systemd[1]: libpod-conmon-00ed2bfad0cde268774a83b01a45d562dcfa764c4a7bb6b1055dc779fcdb1bf0.scope: Deactivated successfully.
Jan 27 09:38:05 np0005597378 podman[385313]: 2026-01-27 14:38:05.021680704 +0000 UTC m=+0.048488539 container create 221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_snyder, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:38:05 np0005597378 systemd[1]: Started libpod-conmon-221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69.scope.
Jan 27 09:38:05 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:38:05 np0005597378 podman[385313]: 2026-01-27 14:38:05.000134663 +0000 UTC m=+0.026942498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:38:05 np0005597378 podman[385313]: 2026-01-27 14:38:05.108817133 +0000 UTC m=+0.135624988 container init 221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 09:38:05 np0005597378 podman[385313]: 2026-01-27 14:38:05.116387528 +0000 UTC m=+0.143195363 container start 221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_snyder, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:38:05 np0005597378 beautiful_snyder[385335]: 167 167
Jan 27 09:38:05 np0005597378 systemd[1]: libpod-221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69.scope: Deactivated successfully.
Jan 27 09:38:05 np0005597378 podman[385313]: 2026-01-27 14:38:05.126155992 +0000 UTC m=+0.152963867 container attach 221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_snyder, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 09:38:05 np0005597378 podman[385313]: 2026-01-27 14:38:05.127556139 +0000 UTC m=+0.154363984 container died 221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 09:38:05 np0005597378 podman[385327]: 2026-01-27 14:38:05.160571589 +0000 UTC m=+0.100387028 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 09:38:05 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d26c7c3cd2b33b0e94a311e527ad93c3b4b466e2dc7da26bc0b7212192fc1584-merged.mount: Deactivated successfully.
Jan 27 09:38:05 np0005597378 podman[385313]: 2026-01-27 14:38:05.187967359 +0000 UTC m=+0.214775194 container remove 221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_snyder, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 09:38:05 np0005597378 systemd[1]: libpod-conmon-221214d83f8f68136b8605585a5c9a6256d54f381a9285e57a7495b91f83cf69.scope: Deactivated successfully.
Jan 27 09:38:05 np0005597378 podman[385377]: 2026-01-27 14:38:05.436929013 +0000 UTC m=+0.116774741 container create cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 09:38:05 np0005597378 podman[385377]: 2026-01-27 14:38:05.347167021 +0000 UTC m=+0.027012769 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:38:05 np0005597378 systemd[1]: Started libpod-conmon-cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2.scope.
Jan 27 09:38:05 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:38:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c7da33c3096257a2f9da51235be0663d40232c425346f28f9d3d2a8af1fa8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:38:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c7da33c3096257a2f9da51235be0663d40232c425346f28f9d3d2a8af1fa8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:38:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c7da33c3096257a2f9da51235be0663d40232c425346f28f9d3d2a8af1fa8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:38:05 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c7da33c3096257a2f9da51235be0663d40232c425346f28f9d3d2a8af1fa8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:38:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:05 np0005597378 podman[385377]: 2026-01-27 14:38:05.615850338 +0000 UTC m=+0.295696086 container init cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 09:38:05 np0005597378 podman[385377]: 2026-01-27 14:38:05.622886767 +0000 UTC m=+0.302732485 container start cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_benz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:38:05 np0005597378 podman[385377]: 2026-01-27 14:38:05.630157664 +0000 UTC m=+0.310003422 container attach cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_benz, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:38:05 np0005597378 elastic_benz[385392]: {
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:    "0": [
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:        {
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "devices": [
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "/dev/loop3"
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            ],
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_name": "ceph_lv0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_size": "21470642176",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "name": "ceph_lv0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "tags": {
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.cluster_name": "ceph",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.crush_device_class": "",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.encrypted": "0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.objectstore": "bluestore",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.osd_id": "0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.type": "block",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.vdo": "0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.with_tpm": "0"
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            },
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "type": "block",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "vg_name": "ceph_vg0"
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:        }
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:    ],
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:    "1": [
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:        {
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "devices": [
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "/dev/loop4"
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            ],
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_name": "ceph_lv1",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_size": "21470642176",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "name": "ceph_lv1",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "tags": {
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.cluster_name": "ceph",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.crush_device_class": "",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.encrypted": "0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.objectstore": "bluestore",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.osd_id": "1",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.type": "block",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.vdo": "0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.with_tpm": "0"
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            },
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "type": "block",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "vg_name": "ceph_vg1"
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:        }
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:    ],
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:    "2": [
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:        {
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "devices": [
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "/dev/loop5"
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            ],
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_name": "ceph_lv2",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_size": "21470642176",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "name": "ceph_lv2",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "tags": {
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.cluster_name": "ceph",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.crush_device_class": "",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.encrypted": "0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.objectstore": "bluestore",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.osd_id": "2",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.type": "block",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.vdo": "0",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:                "ceph.with_tpm": "0"
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            },
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "type": "block",
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:            "vg_name": "ceph_vg2"
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:        }
Jan 27 09:38:05 np0005597378 elastic_benz[385392]:    ]
Jan 27 09:38:05 np0005597378 elastic_benz[385392]: }
Jan 27 09:38:05 np0005597378 systemd[1]: libpod-cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2.scope: Deactivated successfully.
Jan 27 09:38:05 np0005597378 conmon[385392]: conmon cb082c792724af9e5019 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2.scope/container/memory.events
Jan 27 09:38:05 np0005597378 podman[385377]: 2026-01-27 14:38:05.959771313 +0000 UTC m=+0.639617041 container died cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:38:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e5c7da33c3096257a2f9da51235be0663d40232c425346f28f9d3d2a8af1fa8b-merged.mount: Deactivated successfully.
Jan 27 09:38:06 np0005597378 podman[385377]: 2026-01-27 14:38:06.207039002 +0000 UTC m=+0.886884740 container remove cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_benz, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:38:06 np0005597378 systemd[1]: libpod-conmon-cb082c792724af9e5019195d4f0b2edd847ef4dd70a612bdc37710c7affd67f2.scope: Deactivated successfully.
Jan 27 09:38:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:38:06 np0005597378 podman[385477]: 2026-01-27 14:38:06.725841572 +0000 UTC m=+0.055274751 container create 6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_beaver, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:38:06 np0005597378 systemd[1]: Started libpod-conmon-6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030.scope.
Jan 27 09:38:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:38:06 np0005597378 podman[385477]: 2026-01-27 14:38:06.699524822 +0000 UTC m=+0.028958051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:38:06 np0005597378 podman[385477]: 2026-01-27 14:38:06.811194964 +0000 UTC m=+0.140628233 container init 6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_beaver, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:38:06 np0005597378 podman[385477]: 2026-01-27 14:38:06.818607524 +0000 UTC m=+0.148040703 container start 6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_beaver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:38:06 np0005597378 podman[385477]: 2026-01-27 14:38:06.82253868 +0000 UTC m=+0.151971859 container attach 6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_beaver, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:38:06 np0005597378 zen_beaver[385494]: 167 167
Jan 27 09:38:06 np0005597378 systemd[1]: libpod-6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030.scope: Deactivated successfully.
Jan 27 09:38:06 np0005597378 podman[385477]: 2026-01-27 14:38:06.826448766 +0000 UTC m=+0.155881935 container died 6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:38:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a69514564fb25750a6fc1957e835e2145a133aefb6097ed5e01c82952ffca464-merged.mount: Deactivated successfully.
Jan 27 09:38:06 np0005597378 podman[385477]: 2026-01-27 14:38:06.87001304 +0000 UTC m=+0.199446229 container remove 6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 27 09:38:06 np0005597378 systemd[1]: libpod-conmon-6fde69638e3901a33f8471aa243481d4d616217a9fbce83a8c23a449754bb030.scope: Deactivated successfully.
Jan 27 09:38:07 np0005597378 podman[385517]: 2026-01-27 14:38:07.056624843 +0000 UTC m=+0.050908384 container create fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_herschel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:38:07 np0005597378 systemd[1]: Started libpod-conmon-fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170.scope.
Jan 27 09:38:07 np0005597378 nova_compute[238941]: 2026-01-27 14:38:07.110 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:07 np0005597378 podman[385517]: 2026-01-27 14:38:07.035617836 +0000 UTC m=+0.029901407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:38:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:38:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14df5e8d0c779db936b6b377babd053b272ba193957a9691002fcd170e5b3d1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:38:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14df5e8d0c779db936b6b377babd053b272ba193957a9691002fcd170e5b3d1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:38:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14df5e8d0c779db936b6b377babd053b272ba193957a9691002fcd170e5b3d1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:38:07 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14df5e8d0c779db936b6b377babd053b272ba193957a9691002fcd170e5b3d1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:38:07 np0005597378 podman[385517]: 2026-01-27 14:38:07.158893191 +0000 UTC m=+0.153176762 container init fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_herschel, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 09:38:07 np0005597378 podman[385517]: 2026-01-27 14:38:07.167305878 +0000 UTC m=+0.161589409 container start fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 09:38:07 np0005597378 podman[385517]: 2026-01-27 14:38:07.243993126 +0000 UTC m=+0.238276667 container attach fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:38:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:08 np0005597378 lvm[385611]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:38:08 np0005597378 lvm[385611]: VG ceph_vg0 finished
Jan 27 09:38:08 np0005597378 lvm[385612]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:38:08 np0005597378 lvm[385612]: VG ceph_vg1 finished
Jan 27 09:38:08 np0005597378 lvm[385614]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:38:08 np0005597378 lvm[385614]: VG ceph_vg2 finished
Jan 27 09:38:08 np0005597378 jolly_herschel[385533]: {}
Jan 27 09:38:08 np0005597378 systemd[1]: libpod-fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170.scope: Deactivated successfully.
Jan 27 09:38:08 np0005597378 systemd[1]: libpod-fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170.scope: Consumed 1.541s CPU time.
Jan 27 09:38:08 np0005597378 podman[385517]: 2026-01-27 14:38:08.122771026 +0000 UTC m=+1.117054597 container died fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:38:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-14df5e8d0c779db936b6b377babd053b272ba193957a9691002fcd170e5b3d1f-merged.mount: Deactivated successfully.
Jan 27 09:38:08 np0005597378 podman[385517]: 2026-01-27 14:38:08.227975173 +0000 UTC m=+1.222258714 container remove fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 09:38:08 np0005597378 systemd[1]: libpod-conmon-fac6f0e9a8e2160b89ded96e33686ac77665ca93044a57740213ffe4d6b5a170.scope: Deactivated successfully.
Jan 27 09:38:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:38:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:38:08 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:38:08 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:38:08 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:38:08 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:38:08 np0005597378 nova_compute[238941]: 2026-01-27 14:38:08.723 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:38:12 np0005597378 nova_compute[238941]: 2026-01-27 14:38:12.114 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:13 np0005597378 nova_compute[238941]: 2026-01-27 14:38:13.724 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:38:17 np0005597378 nova_compute[238941]: 2026-01-27 14:38:17.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:38:17
Jan 27 09:38:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:38:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:38:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'backups', '.mgr']
Jan 27 09:38:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:38:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:38:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:38:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:38:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:38:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:38:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:38:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:38:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:38:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:38:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:38:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:38:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:38:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:38:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:38:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:38:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:38:18 np0005597378 nova_compute[238941]: 2026-01-27 14:38:18.726 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:19 np0005597378 nova_compute[238941]: 2026-01-27 14:38:19.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:38:19 np0005597378 nova_compute[238941]: 2026-01-27 14:38:19.421 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:38:19 np0005597378 nova_compute[238941]: 2026-01-27 14:38:19.422 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:38:19 np0005597378 nova_compute[238941]: 2026-01-27 14:38:19.422 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:38:19 np0005597378 nova_compute[238941]: 2026-01-27 14:38:19.422 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:38:19 np0005597378 nova_compute[238941]: 2026-01-27 14:38:19.422 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:38:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:38:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/656926820' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:38:20 np0005597378 nova_compute[238941]: 2026-01-27 14:38:20.183 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.760s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:38:20 np0005597378 nova_compute[238941]: 2026-01-27 14:38:20.381 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:38:20 np0005597378 nova_compute[238941]: 2026-01-27 14:38:20.383 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3546MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:38:20 np0005597378 nova_compute[238941]: 2026-01-27 14:38:20.383 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:38:20 np0005597378 nova_compute[238941]: 2026-01-27 14:38:20.384 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:38:20 np0005597378 nova_compute[238941]: 2026-01-27 14:38:20.441 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:38:20 np0005597378 nova_compute[238941]: 2026-01-27 14:38:20.441 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:38:20 np0005597378 nova_compute[238941]: 2026-01-27 14:38:20.457 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:38:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:38:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589773761' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:38:21 np0005597378 nova_compute[238941]: 2026-01-27 14:38:21.110 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:38:21 np0005597378 nova_compute[238941]: 2026-01-27 14:38:21.118 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:38:21 np0005597378 nova_compute[238941]: 2026-01-27 14:38:21.135 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:38:21 np0005597378 nova_compute[238941]: 2026-01-27 14:38:21.138 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:38:21 np0005597378 nova_compute[238941]: 2026-01-27 14:38:21.138 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:38:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:38:22 np0005597378 nova_compute[238941]: 2026-01-27 14:38:22.119 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:23 np0005597378 nova_compute[238941]: 2026-01-27 14:38:23.727 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:26 np0005597378 nova_compute[238941]: 2026-01-27 14:38:26.139 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:38:26 np0005597378 nova_compute[238941]: 2026-01-27 14:38:26.139 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:38:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:38:27 np0005597378 nova_compute[238941]: 2026-01-27 14:38:27.122 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:27 np0005597378 nova_compute[238941]: 2026-01-27 14:38:27.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:38:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:38:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:38:28 np0005597378 nova_compute[238941]: 2026-01-27 14:38:28.730 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:31 np0005597378 nova_compute[238941]: 2026-01-27 14:38:31.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:38:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:38:32 np0005597378 nova_compute[238941]: 2026-01-27 14:38:32.124 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:32 np0005597378 nova_compute[238941]: 2026-01-27 14:38:32.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:38:32 np0005597378 nova_compute[238941]: 2026-01-27 14:38:32.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:38:32 np0005597378 nova_compute[238941]: 2026-01-27 14:38:32.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:38:32 np0005597378 nova_compute[238941]: 2026-01-27 14:38:32.471 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:38:32 np0005597378 podman[385698]: 2026-01-27 14:38:32.740946404 +0000 UTC m=+0.077605193 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:38:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:33 np0005597378 nova_compute[238941]: 2026-01-27 14:38:33.732 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:35 np0005597378 podman[385717]: 2026-01-27 14:38:35.748407912 +0000 UTC m=+0.088043876 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:38:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:38:37 np0005597378 nova_compute[238941]: 2026-01-27 14:38:37.127 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:38 np0005597378 nova_compute[238941]: 2026-01-27 14:38:38.735 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:39 np0005597378 nova_compute[238941]: 2026-01-27 14:38:39.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:38:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:38:42 np0005597378 nova_compute[238941]: 2026-01-27 14:38:42.128 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:43 np0005597378 nova_compute[238941]: 2026-01-27 14:38:43.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:38:46.348 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:38:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:38:46.348 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:38:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:38:46.348 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:38:46 np0005597378 nova_compute[238941]: 2026-01-27 14:38:46.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:38:46 np0005597378 nova_compute[238941]: 2026-01-27 14:38:46.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:38:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:38:47 np0005597378 nova_compute[238941]: 2026-01-27 14:38:47.131 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:38:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:38:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:38:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:38:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:38:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:38:48 np0005597378 nova_compute[238941]: 2026-01-27 14:38:48.740 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:49 np0005597378 nova_compute[238941]: 2026-01-27 14:38:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:38:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:38:52 np0005597378 nova_compute[238941]: 2026-01-27 14:38:52.133 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:53 np0005597378 nova_compute[238941]: 2026-01-27 14:38:53.741 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:38:57 np0005597378 nova_compute[238941]: 2026-01-27 14:38:57.135 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:58 np0005597378 nova_compute[238941]: 2026-01-27 14:38:58.743 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:38:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:38:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:38:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/76920615' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:38:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:38:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/76920615' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:39:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:39:02 np0005597378 nova_compute[238941]: 2026-01-27 14:39:02.136 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:03 np0005597378 podman[385744]: 2026-01-27 14:39:03.71114175 +0000 UTC m=+0.055725134 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 09:39:03 np0005597378 nova_compute[238941]: 2026-01-27 14:39:03.745 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:05 np0005597378 podman[385764]: 2026-01-27 14:39:05.876416304 +0000 UTC m=+0.078763145 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 09:39:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:39:07 np0005597378 nova_compute[238941]: 2026-01-27 14:39:07.139 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:08 np0005597378 nova_compute[238941]: 2026-01-27 14:39:08.748 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:09 np0005597378 podman[385883]: 2026-01-27 14:39:09.249669286 +0000 UTC m=+0.281712648 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:39:09 np0005597378 podman[385883]: 2026-01-27 14:39:09.351156923 +0000 UTC m=+0.383200335 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:39:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:39:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:39:10 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:39:10 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:39:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:39:11 np0005597378 podman[386211]: 2026-01-27 14:39:11.732923506 +0000 UTC m=+0.095641751 container create 40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 09:39:11 np0005597378 podman[386211]: 2026-01-27 14:39:11.659825214 +0000 UTC m=+0.022543479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:39:11 np0005597378 systemd[1]: Started libpod-conmon-40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621.scope.
Jan 27 09:39:11 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:39:11 np0005597378 podman[386211]: 2026-01-27 14:39:11.933144685 +0000 UTC m=+0.295862960 container init 40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:39:11 np0005597378 podman[386211]: 2026-01-27 14:39:11.939868897 +0000 UTC m=+0.302587172 container start 40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:39:11 np0005597378 intelligent_chaum[386227]: 167 167
Jan 27 09:39:11 np0005597378 systemd[1]: libpod-40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621.scope: Deactivated successfully.
Jan 27 09:39:11 np0005597378 podman[386211]: 2026-01-27 14:39:11.984823829 +0000 UTC m=+0.347542074 container attach 40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:39:11 np0005597378 podman[386211]: 2026-01-27 14:39:11.985190598 +0000 UTC m=+0.347908843 container died 40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 27 09:39:12 np0005597378 nova_compute[238941]: 2026-01-27 14:39:12.141 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:12 np0005597378 systemd[1]: var-lib-containers-storage-overlay-52cf720069b4e54b8050179939a0fe97f188ba6d55aed6692be66037f5d3a9b4-merged.mount: Deactivated successfully.
Jan 27 09:39:12 np0005597378 podman[386211]: 2026-01-27 14:39:12.270608496 +0000 UTC m=+0.633326781 container remove 40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:39:12 np0005597378 systemd[1]: libpod-conmon-40e2ef196e77d0207ea76710b66b3cf938e4d0263c9968b665022dc442712621.scope: Deactivated successfully.
Jan 27 09:39:12 np0005597378 podman[386253]: 2026-01-27 14:39:12.468562234 +0000 UTC m=+0.071805737 container create d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_galois, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:39:12 np0005597378 podman[386253]: 2026-01-27 14:39:12.419740297 +0000 UTC m=+0.022983830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:39:12 np0005597378 systemd[1]: Started libpod-conmon-d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07.scope.
Jan 27 09:39:12 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:39:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82064daa6fc5839be1635c4593523a67461d045d807816ea8384900148671dee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:39:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82064daa6fc5839be1635c4593523a67461d045d807816ea8384900148671dee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:39:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82064daa6fc5839be1635c4593523a67461d045d807816ea8384900148671dee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:39:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82064daa6fc5839be1635c4593523a67461d045d807816ea8384900148671dee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:39:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82064daa6fc5839be1635c4593523a67461d045d807816ea8384900148671dee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:39:12 np0005597378 podman[386253]: 2026-01-27 14:39:12.576451384 +0000 UTC m=+0.179694917 container init d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_galois, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:39:12 np0005597378 podman[386253]: 2026-01-27 14:39:12.583553245 +0000 UTC m=+0.186796768 container start d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:39:12 np0005597378 podman[386253]: 2026-01-27 14:39:12.595921719 +0000 UTC m=+0.199165242 container attach d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_galois, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 09:39:13 np0005597378 kind_galois[386269]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:39:13 np0005597378 kind_galois[386269]: --> All data devices are unavailable
Jan 27 09:39:13 np0005597378 systemd[1]: libpod-d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07.scope: Deactivated successfully.
Jan 27 09:39:13 np0005597378 podman[386289]: 2026-01-27 14:39:13.134761371 +0000 UTC m=+0.029214559 container died d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 27 09:39:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-82064daa6fc5839be1635c4593523a67461d045d807816ea8384900148671dee-merged.mount: Deactivated successfully.
Jan 27 09:39:13 np0005597378 podman[386289]: 2026-01-27 14:39:13.357971481 +0000 UTC m=+0.252424649 container remove d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:39:13 np0005597378 systemd[1]: libpod-conmon-d22254ca7a890404665572ed64dec745d5ea27a0c50792578f2b31fe8fba3e07.scope: Deactivated successfully.
Jan 27 09:39:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:13 np0005597378 nova_compute[238941]: 2026-01-27 14:39:13.751 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:13 np0005597378 podman[386366]: 2026-01-27 14:39:13.833683819 +0000 UTC m=+0.024796339 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:39:13 np0005597378 podman[386366]: 2026-01-27 14:39:13.991544797 +0000 UTC m=+0.182657287 container create 8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 09:39:14 np0005597378 systemd[1]: Started libpod-conmon-8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314.scope.
Jan 27 09:39:14 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:39:14 np0005597378 podman[386366]: 2026-01-27 14:39:14.214979043 +0000 UTC m=+0.406091553 container init 8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Jan 27 09:39:14 np0005597378 podman[386366]: 2026-01-27 14:39:14.226905774 +0000 UTC m=+0.418018264 container start 8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_turing, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 09:39:14 np0005597378 wizardly_turing[386382]: 167 167
Jan 27 09:39:14 np0005597378 systemd[1]: libpod-8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314.scope: Deactivated successfully.
Jan 27 09:39:14 np0005597378 conmon[386382]: conmon 8481936d5361be223480 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314.scope/container/memory.events
Jan 27 09:39:14 np0005597378 podman[386366]: 2026-01-27 14:39:14.36173061 +0000 UTC m=+0.552843120 container attach 8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_turing, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:39:14 np0005597378 podman[386366]: 2026-01-27 14:39:14.36245724 +0000 UTC m=+0.553569750 container died 8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:39:14 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ede43c802a2781b5e53b406135f48417c66aefcb1b36037c12e6ab0236e58e92-merged.mount: Deactivated successfully.
Jan 27 09:39:14 np0005597378 podman[386366]: 2026-01-27 14:39:14.627551959 +0000 UTC m=+0.818664449 container remove 8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_turing, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 09:39:14 np0005597378 systemd[1]: libpod-conmon-8481936d5361be223480462040f3cabe6a50d76ee73b17aece020e0add245314.scope: Deactivated successfully.
Jan 27 09:39:14 np0005597378 podman[386406]: 2026-01-27 14:39:14.831257473 +0000 UTC m=+0.059704581 container create 4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 09:39:14 np0005597378 systemd[1]: Started libpod-conmon-4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2.scope.
Jan 27 09:39:14 np0005597378 podman[386406]: 2026-01-27 14:39:14.811450939 +0000 UTC m=+0.039898067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:39:14 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:39:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecd18042948ff7e6e668880419f54cafb5a400e296dfb3e5f23d923f61a7ed6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:39:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecd18042948ff7e6e668880419f54cafb5a400e296dfb3e5f23d923f61a7ed6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:39:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecd18042948ff7e6e668880419f54cafb5a400e296dfb3e5f23d923f61a7ed6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:39:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecd18042948ff7e6e668880419f54cafb5a400e296dfb3e5f23d923f61a7ed6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:39:14 np0005597378 podman[386406]: 2026-01-27 14:39:14.947510118 +0000 UTC m=+0.175957256 container init 4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 27 09:39:14 np0005597378 podman[386406]: 2026-01-27 14:39:14.956381517 +0000 UTC m=+0.184828625 container start 4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 09:39:14 np0005597378 podman[386406]: 2026-01-27 14:39:14.96537525 +0000 UTC m=+0.193822418 container attach 4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]: {
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:    "0": [
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:        {
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "devices": [
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "/dev/loop3"
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            ],
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_name": "ceph_lv0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_size": "21470642176",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "name": "ceph_lv0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "tags": {
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.cluster_name": "ceph",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.crush_device_class": "",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.encrypted": "0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.objectstore": "bluestore",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.osd_id": "0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.type": "block",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.vdo": "0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.with_tpm": "0"
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            },
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "type": "block",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "vg_name": "ceph_vg0"
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:        }
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:    ],
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:    "1": [
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:        {
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "devices": [
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "/dev/loop4"
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            ],
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_name": "ceph_lv1",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_size": "21470642176",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "name": "ceph_lv1",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "tags": {
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.cluster_name": "ceph",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.crush_device_class": "",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.encrypted": "0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.objectstore": "bluestore",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.osd_id": "1",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.type": "block",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.vdo": "0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.with_tpm": "0"
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            },
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "type": "block",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "vg_name": "ceph_vg1"
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:        }
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:    ],
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:    "2": [
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:        {
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "devices": [
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "/dev/loop5"
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            ],
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_name": "ceph_lv2",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_size": "21470642176",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "name": "ceph_lv2",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "tags": {
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.cluster_name": "ceph",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.crush_device_class": "",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.encrypted": "0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.objectstore": "bluestore",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.osd_id": "2",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.type": "block",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.vdo": "0",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:                "ceph.with_tpm": "0"
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            },
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "type": "block",
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:            "vg_name": "ceph_vg2"
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:        }
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]:    ]
Jan 27 09:39:15 np0005597378 beautiful_hugle[386423]: }
Jan 27 09:39:15 np0005597378 systemd[1]: libpod-4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2.scope: Deactivated successfully.
Jan 27 09:39:15 np0005597378 podman[386406]: 2026-01-27 14:39:15.274320922 +0000 UTC m=+0.502768050 container died 4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 09:39:15 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fecd18042948ff7e6e668880419f54cafb5a400e296dfb3e5f23d923f61a7ed6-merged.mount: Deactivated successfully.
Jan 27 09:39:15 np0005597378 podman[386406]: 2026-01-27 14:39:15.506993416 +0000 UTC m=+0.735440524 container remove 4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hugle, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:39:15 np0005597378 systemd[1]: libpod-conmon-4b470f132c05a321f9d83377466d27e3087aff6a85c67412198dc09d057a2df2.scope: Deactivated successfully.
Jan 27 09:39:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:15 np0005597378 podman[386506]: 2026-01-27 14:39:15.98506673 +0000 UTC m=+0.042646352 container create f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_tharp, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Jan 27 09:39:16 np0005597378 systemd[1]: Started libpod-conmon-f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1.scope.
Jan 27 09:39:16 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:39:16 np0005597378 podman[386506]: 2026-01-27 14:39:15.962476211 +0000 UTC m=+0.020055853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:39:16 np0005597378 podman[386506]: 2026-01-27 14:39:16.072339693 +0000 UTC m=+0.129919345 container init f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 09:39:16 np0005597378 podman[386506]: 2026-01-27 14:39:16.079234479 +0000 UTC m=+0.136814101 container start f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:39:16 np0005597378 podman[386506]: 2026-01-27 14:39:16.083250987 +0000 UTC m=+0.140830609 container attach f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 09:39:16 np0005597378 nifty_tharp[386522]: 167 167
Jan 27 09:39:16 np0005597378 systemd[1]: libpod-f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1.scope: Deactivated successfully.
Jan 27 09:39:16 np0005597378 podman[386506]: 2026-01-27 14:39:16.086613868 +0000 UTC m=+0.144193490 container died f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:39:16 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ae2ea9244e29728d35ba43f5f4dc2b082b841cc3d4192f5b732f6d38b2d06a76-merged.mount: Deactivated successfully.
Jan 27 09:39:16 np0005597378 podman[386506]: 2026-01-27 14:39:16.142477854 +0000 UTC m=+0.200057476 container remove f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:39:16 np0005597378 systemd[1]: libpod-conmon-f4ab516b9483a68c6045077c00144f04835e236afb7b20fb141a7f6476e381e1.scope: Deactivated successfully.
Jan 27 09:39:16 np0005597378 podman[386546]: 2026-01-27 14:39:16.345584992 +0000 UTC m=+0.050697558 container create ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bhaskara, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:39:16 np0005597378 systemd[1]: Started libpod-conmon-ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676.scope.
Jan 27 09:39:16 np0005597378 podman[386546]: 2026-01-27 14:39:16.31952424 +0000 UTC m=+0.024636816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:39:16 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:39:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8866ff0ee73da2f8ebda83b2dbd26bb0491552dd411b0ac08b60a82dc7098f19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:39:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8866ff0ee73da2f8ebda83b2dbd26bb0491552dd411b0ac08b60a82dc7098f19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:39:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8866ff0ee73da2f8ebda83b2dbd26bb0491552dd411b0ac08b60a82dc7098f19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:39:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8866ff0ee73da2f8ebda83b2dbd26bb0491552dd411b0ac08b60a82dc7098f19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:39:16 np0005597378 podman[386546]: 2026-01-27 14:39:16.445203529 +0000 UTC m=+0.150316095 container init ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:39:16 np0005597378 podman[386546]: 2026-01-27 14:39:16.453592385 +0000 UTC m=+0.158704951 container start ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:39:16 np0005597378 podman[386546]: 2026-01-27 14:39:16.460698017 +0000 UTC m=+0.165810613 container attach ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bhaskara, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 09:39:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:39:17 np0005597378 nova_compute[238941]: 2026-01-27 14:39:17.145 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:17 np0005597378 lvm[386642]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:39:17 np0005597378 lvm[386642]: VG ceph_vg1 finished
Jan 27 09:39:17 np0005597378 lvm[386641]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:39:17 np0005597378 lvm[386641]: VG ceph_vg0 finished
Jan 27 09:39:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:39:17
Jan 27 09:39:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:39:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:39:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.mgr', 'volumes']
Jan 27 09:39:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:39:17 np0005597378 lvm[386644]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:39:17 np0005597378 lvm[386644]: VG ceph_vg2 finished
Jan 27 09:39:17 np0005597378 lvm[386646]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:39:17 np0005597378 lvm[386646]: VG ceph_vg2 finished
Jan 27 09:39:17 np0005597378 lvm[386648]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:39:17 np0005597378 brave_bhaskara[386563]: {}
Jan 27 09:39:17 np0005597378 lvm[386648]: VG ceph_vg2 finished
Jan 27 09:39:17 np0005597378 systemd[1]: libpod-ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676.scope: Deactivated successfully.
Jan 27 09:39:17 np0005597378 systemd[1]: libpod-ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676.scope: Consumed 1.427s CPU time.
Jan 27 09:39:17 np0005597378 podman[386546]: 2026-01-27 14:39:17.330858833 +0000 UTC m=+1.035971399 container died ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:39:17 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8866ff0ee73da2f8ebda83b2dbd26bb0491552dd411b0ac08b60a82dc7098f19-merged.mount: Deactivated successfully.
Jan 27 09:39:17 np0005597378 podman[386546]: 2026-01-27 14:39:17.434594181 +0000 UTC m=+1.139706747 container remove ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:39:17 np0005597378 systemd[1]: libpod-conmon-ca6dbbc54599241b7d914424adbd87156e3253bee8d81615a5e8ed6c66417676.scope: Deactivated successfully.
Jan 27 09:39:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:39:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:39:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:39:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:39:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:39:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:39:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:39:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:39:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:39:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:39:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:39:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:39:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:39:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:39:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:39:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:39:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:39:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:39:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:39:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:39:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:39:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:39:18 np0005597378 nova_compute[238941]: 2026-01-27 14:39:18.751 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:21 np0005597378 nova_compute[238941]: 2026-01-27 14:39:21.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:39:21 np0005597378 nova_compute[238941]: 2026-01-27 14:39:21.447 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:39:21 np0005597378 nova_compute[238941]: 2026-01-27 14:39:21.448 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:39:21 np0005597378 nova_compute[238941]: 2026-01-27 14:39:21.448 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:39:21 np0005597378 nova_compute[238941]: 2026-01-27 14:39:21.448 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:39:21 np0005597378 nova_compute[238941]: 2026-01-27 14:39:21.448 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:39:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:39:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:39:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3066967691' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.130 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.147 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.302 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.303 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3516MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.303 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.303 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.535 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.537 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.661 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.781 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.781 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.795 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.832 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 09:39:22 np0005597378 nova_compute[238941]: 2026-01-27 14:39:22.857 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:39:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:39:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137054054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:39:23 np0005597378 nova_compute[238941]: 2026-01-27 14:39:23.447 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:39:23 np0005597378 nova_compute[238941]: 2026-01-27 14:39:23.453 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:39:23 np0005597378 nova_compute[238941]: 2026-01-27 14:39:23.473 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:39:23 np0005597378 nova_compute[238941]: 2026-01-27 14:39:23.476 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:39:23 np0005597378 nova_compute[238941]: 2026-01-27 14:39:23.476 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:39:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:23 np0005597378 nova_compute[238941]: 2026-01-27 14:39:23.753 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:39:27 np0005597378 nova_compute[238941]: 2026-01-27 14:39:27.151 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:27 np0005597378 nova_compute[238941]: 2026-01-27 14:39:27.480 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:39:27 np0005597378 nova_compute[238941]: 2026-01-27 14:39:27.481 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:39:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:39:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:39:28 np0005597378 nova_compute[238941]: 2026-01-27 14:39:28.755 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:29 np0005597378 nova_compute[238941]: 2026-01-27 14:39:29.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:39:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:31 np0005597378 nova_compute[238941]: 2026-01-27 14:39:31.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:39:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:39:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:32 np0005597378 nova_compute[238941]: 2026-01-27 14:39:32.153 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:32 np0005597378 nova_compute[238941]: 2026-01-27 14:39:32.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:39:32 np0005597378 nova_compute[238941]: 2026-01-27 14:39:32.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:39:32 np0005597378 nova_compute[238941]: 2026-01-27 14:39:32.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:39:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:33 np0005597378 nova_compute[238941]: 2026-01-27 14:39:33.757 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:34 np0005597378 podman[386733]: 2026-01-27 14:39:34.718299519 +0000 UTC m=+0.060487812 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:39:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:39:36 np0005597378 podman[386754]: 2026-01-27 14:39:36.759878057 +0000 UTC m=+0.095898967 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251202, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 27 09:39:37 np0005597378 nova_compute[238941]: 2026-01-27 14:39:37.154 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:38 np0005597378 nova_compute[238941]: 2026-01-27 14:39:38.760 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:39:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:41 np0005597378 nova_compute[238941]: 2026-01-27 14:39:41.857 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:39:41 np0005597378 nova_compute[238941]: 2026-01-27 14:39:41.858 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:39:42 np0005597378 nova_compute[238941]: 2026-01-27 14:39:42.156 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:43 np0005597378 nova_compute[238941]: 2026-01-27 14:39:43.763 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:39:46.348 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:39:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:39:46.348 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:39:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:39:46.348 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:39:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:39:47 np0005597378 nova_compute[238941]: 2026-01-27 14:39:47.159 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:47 np0005597378 nova_compute[238941]: 2026-01-27 14:39:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:39:47 np0005597378 nova_compute[238941]: 2026-01-27 14:39:47.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:39:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:39:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:39:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:39:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:39:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:39:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:39:48 np0005597378 nova_compute[238941]: 2026-01-27 14:39:48.764 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:51 np0005597378 nova_compute[238941]: 2026-01-27 14:39:51.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:39:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:39:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:52 np0005597378 nova_compute[238941]: 2026-01-27 14:39:52.161 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:53 np0005597378 systemd-logind[786]: New session 51 of user zuul.
Jan 27 09:39:53 np0005597378 systemd[1]: Started Session 51 of User zuul.
Jan 27 09:39:53 np0005597378 nova_compute[238941]: 2026-01-27 14:39:53.765 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:39:55.575 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:39:55 np0005597378 nova_compute[238941]: 2026-01-27 14:39:55.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:39:55.575 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:39:55 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:39:55.576 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:39:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:39:57 np0005597378 nova_compute[238941]: 2026-01-27 14:39:57.163 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:57 np0005597378 nova_compute[238941]: 2026-01-27 14:39:57.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:39:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:58 np0005597378 nova_compute[238941]: 2026-01-27 14:39:58.767 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:39:59 np0005597378 systemd[1]: session-51.scope: Deactivated successfully.
Jan 27 09:39:59 np0005597378 systemd-logind[786]: Session 51 logged out. Waiting for processes to exit.
Jan 27 09:39:59 np0005597378 systemd-logind[786]: Removed session 51.
Jan 27 09:39:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:39:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:39:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3740279027' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:39:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:39:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3740279027' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:40:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:40:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:02 np0005597378 nova_compute[238941]: 2026-01-27 14:40:02.165 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:04 np0005597378 nova_compute[238941]: 2026-01-27 14:40:04.088 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:05 np0005597378 podman[387040]: 2026-01-27 14:40:05.752419972 +0000 UTC m=+0.092456721 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:40:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:40:07 np0005597378 nova_compute[238941]: 2026-01-27 14:40:07.166 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:07 np0005597378 podman[387059]: 2026-01-27 14:40:07.748280003 +0000 UTC m=+0.083550613 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:40:08 np0005597378 nova_compute[238941]: 2026-01-27 14:40:08.770 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:40:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:12 np0005597378 nova_compute[238941]: 2026-01-27 14:40:12.169 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:13 np0005597378 nova_compute[238941]: 2026-01-27 14:40:13.771 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:40:17 np0005597378 nova_compute[238941]: 2026-01-27 14:40:17.170 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:40:17
Jan 27 09:40:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:40:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:40:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.log', '.mgr']
Jan 27 09:40:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:40:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:40:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:40:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:40:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:40:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:40:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:40:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:40:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:40:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:40:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:40:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:40:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:40:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:40:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:40:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:40:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:40:18 np0005597378 podman[387227]: 2026-01-27 14:40:18.7498155 +0000 UTC m=+0.047084033 container create 7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:40:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:40:18 np0005597378 nova_compute[238941]: 2026-01-27 14:40:18.773 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:18 np0005597378 systemd[1]: Started libpod-conmon-7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600.scope.
Jan 27 09:40:18 np0005597378 podman[387227]: 2026-01-27 14:40:18.725760685 +0000 UTC m=+0.023029198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:40:18 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:40:18 np0005597378 podman[387227]: 2026-01-27 14:40:18.856680347 +0000 UTC m=+0.153948950 container init 7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 09:40:18 np0005597378 podman[387227]: 2026-01-27 14:40:18.868260777 +0000 UTC m=+0.165529270 container start 7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 09:40:18 np0005597378 wonderful_perlman[387243]: 167 167
Jan 27 09:40:18 np0005597378 systemd[1]: libpod-7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600.scope: Deactivated successfully.
Jan 27 09:40:18 np0005597378 podman[387227]: 2026-01-27 14:40:18.876888079 +0000 UTC m=+0.174156622 container attach 7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:40:18 np0005597378 podman[387227]: 2026-01-27 14:40:18.877525245 +0000 UTC m=+0.174793778 container died 7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 09:40:18 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5db546a7a533c63c76ae3daacdd34e68e7693194baba889719ee7d8b6db21085-merged.mount: Deactivated successfully.
Jan 27 09:40:18 np0005597378 podman[387227]: 2026-01-27 14:40:18.949298901 +0000 UTC m=+0.246567384 container remove 7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 09:40:18 np0005597378 systemd[1]: libpod-conmon-7fae627b725a8dfcad784ed0c931bb19aa1b17fee020cb7c91784bc4211ed600.scope: Deactivated successfully.
Jan 27 09:40:19 np0005597378 podman[387267]: 2026-01-27 14:40:19.117557404 +0000 UTC m=+0.049902160 container create a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_almeida, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:40:19 np0005597378 systemd[1]: Started libpod-conmon-a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e.scope.
Jan 27 09:40:19 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:40:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3574235348298deb26c6df0e03e11f62bfeacb3bf4858be122b164c598d532d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:40:19 np0005597378 podman[387267]: 2026-01-27 14:40:19.095734028 +0000 UTC m=+0.028078794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:40:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3574235348298deb26c6df0e03e11f62bfeacb3bf4858be122b164c598d532d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:40:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3574235348298deb26c6df0e03e11f62bfeacb3bf4858be122b164c598d532d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:40:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3574235348298deb26c6df0e03e11f62bfeacb3bf4858be122b164c598d532d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:40:19 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3574235348298deb26c6df0e03e11f62bfeacb3bf4858be122b164c598d532d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:40:19 np0005597378 podman[387267]: 2026-01-27 14:40:19.203631352 +0000 UTC m=+0.135976108 container init a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:40:19 np0005597378 podman[387267]: 2026-01-27 14:40:19.210626569 +0000 UTC m=+0.142971315 container start a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 09:40:19 np0005597378 podman[387267]: 2026-01-27 14:40:19.215385027 +0000 UTC m=+0.147729773 container attach a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:40:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:19 np0005597378 funny_almeida[387283]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:40:19 np0005597378 funny_almeida[387283]: --> All data devices are unavailable
Jan 27 09:40:19 np0005597378 systemd[1]: libpod-a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e.scope: Deactivated successfully.
Jan 27 09:40:19 np0005597378 podman[387267]: 2026-01-27 14:40:19.730835212 +0000 UTC m=+0.663179948 container died a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_almeida, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:40:19 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a3574235348298deb26c6df0e03e11f62bfeacb3bf4858be122b164c598d532d-merged.mount: Deactivated successfully.
Jan 27 09:40:19 np0005597378 podman[387267]: 2026-01-27 14:40:19.820133307 +0000 UTC m=+0.752478053 container remove a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_almeida, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:40:19 np0005597378 systemd[1]: libpod-conmon-a09e3c46062e646c8f4a20920fb497bac3bb916e65a57fa56121a8cea0d5464e.scope: Deactivated successfully.
Jan 27 09:40:20 np0005597378 podman[387381]: 2026-01-27 14:40:20.293812131 +0000 UTC m=+0.060635247 container create c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:40:20 np0005597378 podman[387381]: 2026-01-27 14:40:20.255618127 +0000 UTC m=+0.022441273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:40:20 np0005597378 systemd[1]: Started libpod-conmon-c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0.scope.
Jan 27 09:40:20 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:40:20 np0005597378 podman[387381]: 2026-01-27 14:40:20.452409625 +0000 UTC m=+0.219232761 container init c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 27 09:40:20 np0005597378 podman[387381]: 2026-01-27 14:40:20.460739318 +0000 UTC m=+0.227562444 container start c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:40:20 np0005597378 nervous_mayer[387397]: 167 167
Jan 27 09:40:20 np0005597378 systemd[1]: libpod-c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0.scope: Deactivated successfully.
Jan 27 09:40:20 np0005597378 podman[387381]: 2026-01-27 14:40:20.482710688 +0000 UTC m=+0.249533804 container attach c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:40:20 np0005597378 podman[387381]: 2026-01-27 14:40:20.484154437 +0000 UTC m=+0.250977543 container died c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:40:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cb3d486756a5626e110de9338fefd345832b9dee4caa06797b4c0691d2e64c40-merged.mount: Deactivated successfully.
Jan 27 09:40:20 np0005597378 podman[387381]: 2026-01-27 14:40:20.660906207 +0000 UTC m=+0.427729333 container remove c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mayer, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:40:20 np0005597378 systemd[1]: libpod-conmon-c83ae76aed2062a31793a61544979c27d3724c9e43df7939067b148e77b2a0f0.scope: Deactivated successfully.
Jan 27 09:40:20 np0005597378 podman[387425]: 2026-01-27 14:40:20.909923776 +0000 UTC m=+0.052244942 container create d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_meitner, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 09:40:20 np0005597378 systemd[1]: Started libpod-conmon-d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7.scope.
Jan 27 09:40:20 np0005597378 podman[387425]: 2026-01-27 14:40:20.882781978 +0000 UTC m=+0.025103164 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:40:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:40:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b12ec0ba4c196b4467cdd7d99a34e0387494870100cfb3448504e935f7952/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:40:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b12ec0ba4c196b4467cdd7d99a34e0387494870100cfb3448504e935f7952/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:40:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b12ec0ba4c196b4467cdd7d99a34e0387494870100cfb3448504e935f7952/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:40:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b12ec0ba4c196b4467cdd7d99a34e0387494870100cfb3448504e935f7952/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:40:21 np0005597378 podman[387425]: 2026-01-27 14:40:21.043834178 +0000 UTC m=+0.186155364 container init d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_meitner, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:40:21 np0005597378 podman[387425]: 2026-01-27 14:40:21.051297308 +0000 UTC m=+0.193618484 container start d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 09:40:21 np0005597378 podman[387425]: 2026-01-27 14:40:21.067912373 +0000 UTC m=+0.210233589 container attach d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]: {
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:    "0": [
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:        {
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "devices": [
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "/dev/loop3"
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            ],
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_name": "ceph_lv0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_size": "21470642176",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "name": "ceph_lv0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "tags": {
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.cluster_name": "ceph",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.crush_device_class": "",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.encrypted": "0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.objectstore": "bluestore",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.osd_id": "0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.type": "block",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.vdo": "0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.with_tpm": "0"
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            },
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "type": "block",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "vg_name": "ceph_vg0"
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:        }
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:    ],
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:    "1": [
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:        {
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "devices": [
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "/dev/loop4"
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            ],
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_name": "ceph_lv1",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_size": "21470642176",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "name": "ceph_lv1",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "tags": {
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.cluster_name": "ceph",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.crush_device_class": "",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.encrypted": "0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.objectstore": "bluestore",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.osd_id": "1",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.type": "block",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.vdo": "0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.with_tpm": "0"
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            },
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "type": "block",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "vg_name": "ceph_vg1"
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:        }
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:    ],
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:    "2": [
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:        {
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "devices": [
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "/dev/loop5"
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            ],
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_name": "ceph_lv2",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_size": "21470642176",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "name": "ceph_lv2",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "tags": {
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.cluster_name": "ceph",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.crush_device_class": "",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.encrypted": "0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.objectstore": "bluestore",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.osd_id": "2",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.type": "block",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.vdo": "0",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:                "ceph.with_tpm": "0"
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            },
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "type": "block",
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:            "vg_name": "ceph_vg2"
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:        }
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]:    ]
Jan 27 09:40:21 np0005597378 hardcore_meitner[387441]: }
Jan 27 09:40:21 np0005597378 nova_compute[238941]: 2026-01-27 14:40:21.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:40:21 np0005597378 systemd[1]: libpod-d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7.scope: Deactivated successfully.
Jan 27 09:40:21 np0005597378 podman[387425]: 2026-01-27 14:40:21.40707336 +0000 UTC m=+0.549394546 container died d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:40:21 np0005597378 nova_compute[238941]: 2026-01-27 14:40:21.410 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:40:21 np0005597378 nova_compute[238941]: 2026-01-27 14:40:21.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:40:21 np0005597378 nova_compute[238941]: 2026-01-27 14:40:21.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:40:21 np0005597378 nova_compute[238941]: 2026-01-27 14:40:21.411 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:40:21 np0005597378 nova_compute[238941]: 2026-01-27 14:40:21.411 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:40:21 np0005597378 systemd[1]: var-lib-containers-storage-overlay-263b12ec0ba4c196b4467cdd7d99a34e0387494870100cfb3448504e935f7952-merged.mount: Deactivated successfully.
Jan 27 09:40:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:40:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:21 np0005597378 podman[387425]: 2026-01-27 14:40:21.904203803 +0000 UTC m=+1.046524969 container remove d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_meitner, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 09:40:21 np0005597378 systemd[1]: libpod-conmon-d8bfd7595224bef80b4cc7de85cdd855f7383e9dd64c3295991d22e188ca18a7.scope: Deactivated successfully.
Jan 27 09:40:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:40:21 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/572764043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:40:21 np0005597378 nova_compute[238941]: 2026-01-27 14:40:21.994 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:40:22 np0005597378 nova_compute[238941]: 2026-01-27 14:40:22.172 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:22 np0005597378 nova_compute[238941]: 2026-01-27 14:40:22.183 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:40:22 np0005597378 nova_compute[238941]: 2026-01-27 14:40:22.184 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3531MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:40:22 np0005597378 nova_compute[238941]: 2026-01-27 14:40:22.185 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:40:22 np0005597378 nova_compute[238941]: 2026-01-27 14:40:22.185 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:40:22 np0005597378 nova_compute[238941]: 2026-01-27 14:40:22.300 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:40:22 np0005597378 nova_compute[238941]: 2026-01-27 14:40:22.301 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:40:22 np0005597378 nova_compute[238941]: 2026-01-27 14:40:22.326 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:40:22 np0005597378 podman[387549]: 2026-01-27 14:40:22.445031048 +0000 UTC m=+0.076155773 container create 3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:40:22 np0005597378 podman[387549]: 2026-01-27 14:40:22.394555714 +0000 UTC m=+0.025680469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:40:22 np0005597378 systemd[1]: Started libpod-conmon-3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2.scope.
Jan 27 09:40:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:40:22 np0005597378 podman[387549]: 2026-01-27 14:40:22.603616102 +0000 UTC m=+0.234740917 container init 3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 09:40:22 np0005597378 podman[387549]: 2026-01-27 14:40:22.610221558 +0000 UTC m=+0.241346283 container start 3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:40:22 np0005597378 pedantic_davinci[387582]: 167 167
Jan 27 09:40:22 np0005597378 systemd[1]: libpod-3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2.scope: Deactivated successfully.
Jan 27 09:40:22 np0005597378 podman[387549]: 2026-01-27 14:40:22.667419753 +0000 UTC m=+0.298544498 container attach 3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 09:40:22 np0005597378 podman[387549]: 2026-01-27 14:40:22.667859375 +0000 UTC m=+0.298984090 container died 3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:40:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a5a1f7c0bb13b4184b1c6d1858c8056d76a2a19f632f19132573eb75cf4583c7-merged.mount: Deactivated successfully.
Jan 27 09:40:22 np0005597378 podman[387549]: 2026-01-27 14:40:22.797013589 +0000 UTC m=+0.428138314 container remove 3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:40:22 np0005597378 systemd[1]: libpod-conmon-3d8a0d5a315334fb56002365c8c23c799761a815e7a522aaf0d4f6da3f6692a2.scope: Deactivated successfully.
Jan 27 09:40:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:40:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3281487720' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:40:22 np0005597378 nova_compute[238941]: 2026-01-27 14:40:22.893 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:40:22 np0005597378 nova_compute[238941]: 2026-01-27 14:40:22.900 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:40:22 np0005597378 nova_compute[238941]: 2026-01-27 14:40:22.962 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:40:22 np0005597378 nova_compute[238941]: 2026-01-27 14:40:22.965 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:40:22 np0005597378 nova_compute[238941]: 2026-01-27 14:40:22.965 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:40:22 np0005597378 podman[387607]: 2026-01-27 14:40:22.968789385 +0000 UTC m=+0.052746395 container create 98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kare, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 09:40:23 np0005597378 systemd[1]: Started libpod-conmon-98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c.scope.
Jan 27 09:40:23 np0005597378 podman[387607]: 2026-01-27 14:40:22.937910187 +0000 UTC m=+0.021867217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:40:23 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:40:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee9e8e21b3b2cdbc9bddcda68559da8b67d833000ab42c1e1000cf1a6fe1c1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:40:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee9e8e21b3b2cdbc9bddcda68559da8b67d833000ab42c1e1000cf1a6fe1c1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:40:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee9e8e21b3b2cdbc9bddcda68559da8b67d833000ab42c1e1000cf1a6fe1c1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:40:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee9e8e21b3b2cdbc9bddcda68559da8b67d833000ab42c1e1000cf1a6fe1c1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:40:23 np0005597378 podman[387607]: 2026-01-27 14:40:23.083580514 +0000 UTC m=+0.167537574 container init 98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kare, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 09:40:23 np0005597378 podman[387607]: 2026-01-27 14:40:23.090470829 +0000 UTC m=+0.174427829 container start 98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kare, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:40:23 np0005597378 podman[387607]: 2026-01-27 14:40:23.097918149 +0000 UTC m=+0.181875159 container attach 98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:40:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:23 np0005597378 lvm[387701]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:40:23 np0005597378 lvm[387700]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:40:23 np0005597378 lvm[387701]: VG ceph_vg1 finished
Jan 27 09:40:23 np0005597378 lvm[387700]: VG ceph_vg0 finished
Jan 27 09:40:23 np0005597378 nova_compute[238941]: 2026-01-27 14:40:23.776 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:23 np0005597378 lvm[387703]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:40:23 np0005597378 lvm[387703]: VG ceph_vg2 finished
Jan 27 09:40:23 np0005597378 pedantic_kare[387622]: {}
Jan 27 09:40:23 np0005597378 systemd[1]: libpod-98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c.scope: Deactivated successfully.
Jan 27 09:40:23 np0005597378 podman[387607]: 2026-01-27 14:40:23.912764704 +0000 UTC m=+0.996721734 container died 98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:40:23 np0005597378 systemd[1]: libpod-98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c.scope: Consumed 1.287s CPU time.
Jan 27 09:40:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4ee9e8e21b3b2cdbc9bddcda68559da8b67d833000ab42c1e1000cf1a6fe1c1f-merged.mount: Deactivated successfully.
Jan 27 09:40:23 np0005597378 podman[387607]: 2026-01-27 14:40:23.972179707 +0000 UTC m=+1.056136717 container remove 98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kare, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:40:23 np0005597378 systemd[1]: libpod-conmon-98f936289925117b8a60b8cb15051cd5da44ae8294baeff7f1f9b96f1d7b055c.scope: Deactivated successfully.
Jan 27 09:40:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:40:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:40:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:40:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:40:25 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:40:25 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:40:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:40:27 np0005597378 nova_compute[238941]: 2026-01-27 14:40:27.174 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:27 np0005597378 nova_compute[238941]: 2026-01-27 14:40:27.968 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:40:27 np0005597378 nova_compute[238941]: 2026-01-27 14:40:27.968 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:40:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:40:28 np0005597378 nova_compute[238941]: 2026-01-27 14:40:28.777 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:29 np0005597378 nova_compute[238941]: 2026-01-27 14:40:29.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:40:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:40:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:32 np0005597378 nova_compute[238941]: 2026-01-27 14:40:32.214 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:32 np0005597378 nova_compute[238941]: 2026-01-27 14:40:32.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:40:33 np0005597378 nova_compute[238941]: 2026-01-27 14:40:33.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:40:33 np0005597378 nova_compute[238941]: 2026-01-27 14:40:33.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:40:33 np0005597378 nova_compute[238941]: 2026-01-27 14:40:33.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:40:33 np0005597378 nova_compute[238941]: 2026-01-27 14:40:33.396 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:40:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:33 np0005597378 nova_compute[238941]: 2026-01-27 14:40:33.779 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:36 np0005597378 podman[387741]: 2026-01-27 14:40:36.255184016 +0000 UTC m=+0.087693884 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:40:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:40:37 np0005597378 nova_compute[238941]: 2026-01-27 14:40:37.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:38 np0005597378 podman[387760]: 2026-01-27 14:40:38.7482162 +0000 UTC m=+0.083912972 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:40:38 np0005597378 nova_compute[238941]: 2026-01-27 14:40:38.781 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:41 np0005597378 nova_compute[238941]: 2026-01-27 14:40:41.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:40:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:40:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:42 np0005597378 nova_compute[238941]: 2026-01-27 14:40:42.267 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:43 np0005597378 nova_compute[238941]: 2026-01-27 14:40:43.788 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:40:46.349 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:40:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:40:46.350 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:40:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:40:46.350 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:40:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:40:47 np0005597378 nova_compute[238941]: 2026-01-27 14:40:47.270 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:40:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:40:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:40:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:40:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:40:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:40:48 np0005597378 nova_compute[238941]: 2026-01-27 14:40:48.785 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:49 np0005597378 nova_compute[238941]: 2026-01-27 14:40:49.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:40:49 np0005597378 nova_compute[238941]: 2026-01-27 14:40:49.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:40:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:51 np0005597378 nova_compute[238941]: 2026-01-27 14:40:51.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:40:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:40:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:52 np0005597378 nova_compute[238941]: 2026-01-27 14:40:52.271 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:53 np0005597378 nova_compute[238941]: 2026-01-27 14:40:53.788 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.787459) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524856787484, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1742, "num_deletes": 251, "total_data_size": 2911568, "memory_usage": 2953104, "flush_reason": "Manual Compaction"}
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524856801933, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 2850303, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59456, "largest_seqno": 61197, "table_properties": {"data_size": 2842200, "index_size": 4981, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16276, "raw_average_key_size": 19, "raw_value_size": 2826138, "raw_average_value_size": 3471, "num_data_blocks": 222, "num_entries": 814, "num_filter_entries": 814, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524667, "oldest_key_time": 1769524667, "file_creation_time": 1769524856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 14641 microseconds, and 6569 cpu microseconds.
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.802087) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 2850303 bytes OK
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.802151) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.803929) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.803960) EVENT_LOG_v1 {"time_micros": 1769524856803953, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.803984) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 2904115, prev total WAL file size 2904115, number of live WAL files 2.
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.805054) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(2783KB)], [140(8624KB)]
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524856805113, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11681876, "oldest_snapshot_seqno": -1}
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8172 keys, 9938503 bytes, temperature: kUnknown
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524856864418, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 9938503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9886198, "index_size": 30752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 212921, "raw_average_key_size": 26, "raw_value_size": 9742862, "raw_average_value_size": 1192, "num_data_blocks": 1193, "num_entries": 8172, "num_filter_entries": 8172, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.864686) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 9938503 bytes
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.866732) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.6 rd, 167.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 8.4 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 8686, records dropped: 514 output_compression: NoCompression
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.866748) EVENT_LOG_v1 {"time_micros": 1769524856866740, "job": 86, "event": "compaction_finished", "compaction_time_micros": 59416, "compaction_time_cpu_micros": 22826, "output_level": 6, "num_output_files": 1, "total_output_size": 9938503, "num_input_records": 8686, "num_output_records": 8172, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524856867566, "job": 86, "event": "table_file_deletion", "file_number": 142}
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524856869391, "job": 86, "event": "table_file_deletion", "file_number": 140}
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.804976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.869496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.869503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.869505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.869506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:40:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:40:56.869508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:40:57 np0005597378 nova_compute[238941]: 2026-01-27 14:40:57.273 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:40:58 np0005597378 nova_compute[238941]: 2026-01-27 14:40:58.789 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:40:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:40:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3391454043' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:40:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:40:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3391454043' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:40:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:41:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:02 np0005597378 nova_compute[238941]: 2026-01-27 14:41:02.275 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:03 np0005597378 nova_compute[238941]: 2026-01-27 14:41:03.793 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:06 np0005597378 systemd-logind[786]: New session 52 of user zuul.
Jan 27 09:41:06 np0005597378 systemd[1]: Started Session 52 of User zuul.
Jan 27 09:41:06 np0005597378 podman[387788]: 2026-01-27 14:41:06.347666044 +0000 UTC m=+0.056091815 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 09:41:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:41:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:41:06.987 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:41:06 np0005597378 nova_compute[238941]: 2026-01-27 14:41:06.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:06 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:41:06.988 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:41:07 np0005597378 nova_compute[238941]: 2026-01-27 14:41:07.277 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:08 np0005597378 nova_compute[238941]: 2026-01-27 14:41:08.794 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:09 np0005597378 podman[387979]: 2026-01-27 14:41:09.76609336 +0000 UTC m=+0.104530395 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 27 09:41:10 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:41:10.989 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:41:11 np0005597378 systemd-logind[786]: New session 53 of user zuul.
Jan 27 09:41:11 np0005597378 systemd[1]: Started Session 53 of User zuul.
Jan 27 09:41:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:41:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:11 np0005597378 systemd[1]: Reloading.
Jan 27 09:41:12 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 09:41:12 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 09:41:12 np0005597378 nova_compute[238941]: 2026-01-27 14:41:12.280 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:12 np0005597378 systemd[1]: Starting dnf makecache...
Jan 27 09:41:12 np0005597378 systemd[1]: Reloading.
Jan 27 09:41:12 np0005597378 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 09:41:12 np0005597378 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 09:41:12 np0005597378 dnf[388175]: Metadata cache refreshed recently.
Jan 27 09:41:12 np0005597378 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 27 09:41:12 np0005597378 systemd[1]: Finished dnf makecache.
Jan 27 09:41:12 np0005597378 systemd[1]: Starting Podman API Socket...
Jan 27 09:41:12 np0005597378 systemd[1]: Listening on Podman API Socket.
Jan 27 09:41:13 np0005597378 dbus-broker-launch[773]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Jan 27 09:41:13 np0005597378 systemd[1]: podman.socket: Deactivated successfully.
Jan 27 09:41:13 np0005597378 systemd[1]: Closed Podman API Socket.
Jan 27 09:41:13 np0005597378 systemd[1]: Stopping Podman API Socket...
Jan 27 09:41:13 np0005597378 systemd[1]: Starting Podman API Socket...
Jan 27 09:41:13 np0005597378 systemd[1]: Listening on Podman API Socket.
Jan 27 09:41:13 np0005597378 systemd-logind[786]: New session 54 of user zuul.
Jan 27 09:41:13 np0005597378 systemd[1]: Started Session 54 of User zuul.
Jan 27 09:41:13 np0005597378 systemd[1]: Starting Podman API Service...
Jan 27 09:41:13 np0005597378 systemd[1]: Started Podman API Service.
Jan 27 09:41:13 np0005597378 podman[388243]: time="2026-01-27T14:41:13Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 27 09:41:13 np0005597378 podman[388243]: time="2026-01-27T14:41:13Z" level=info msg="Setting parallel job count to 25"
Jan 27 09:41:13 np0005597378 podman[388243]: time="2026-01-27T14:41:13Z" level=info msg="Using sqlite as database backend"
Jan 27 09:41:13 np0005597378 podman[388243]: time="2026-01-27T14:41:13Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 27 09:41:13 np0005597378 podman[388243]: time="2026-01-27T14:41:13Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 27 09:41:13 np0005597378 podman[388243]: time="2026-01-27T14:41:13Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 27 09:41:13 np0005597378 podman[388243]: @ - - [27/Jan/2026:14:41:13 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Jan 27 09:41:13 np0005597378 podman[388243]: @ - - [27/Jan/2026:14:41:13 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 22535 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Jan 27 09:41:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:13 np0005597378 nova_compute[238941]: 2026-01-27 14:41:13.807 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:41:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:41:17
Jan 27 09:41:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:41:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:41:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['volumes', 'images', 'vms', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr']
Jan 27 09:41:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:41:17 np0005597378 nova_compute[238941]: 2026-01-27 14:41:17.283 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:41:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:41:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:41:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:41:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:41:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:41:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:41:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:41:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:41:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:41:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:41:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:41:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:41:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:41:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:41:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:41:18 np0005597378 nova_compute[238941]: 2026-01-27 14:41:18.809 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:41:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:22 np0005597378 nova_compute[238941]: 2026-01-27 14:41:22.285 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:22 np0005597378 nova_compute[238941]: 2026-01-27 14:41:22.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:41:22 np0005597378 nova_compute[238941]: 2026-01-27 14:41:22.417 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:41:22 np0005597378 nova_compute[238941]: 2026-01-27 14:41:22.417 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:41:22 np0005597378 nova_compute[238941]: 2026-01-27 14:41:22.417 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:41:22 np0005597378 nova_compute[238941]: 2026-01-27 14:41:22.417 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:41:22 np0005597378 nova_compute[238941]: 2026-01-27 14:41:22.418 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:41:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:41:22 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2850832318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:41:22 np0005597378 nova_compute[238941]: 2026-01-27 14:41:22.959 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:41:23 np0005597378 nova_compute[238941]: 2026-01-27 14:41:23.119 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:41:23 np0005597378 nova_compute[238941]: 2026-01-27 14:41:23.121 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3548MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:41:23 np0005597378 nova_compute[238941]: 2026-01-27 14:41:23.121 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:41:23 np0005597378 nova_compute[238941]: 2026-01-27 14:41:23.122 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:41:23 np0005597378 nova_compute[238941]: 2026-01-27 14:41:23.183 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:41:23 np0005597378 nova_compute[238941]: 2026-01-27 14:41:23.183 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:41:23 np0005597378 nova_compute[238941]: 2026-01-27 14:41:23.203 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:41:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:41:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1044059050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:41:23 np0005597378 nova_compute[238941]: 2026-01-27 14:41:23.784 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:41:23 np0005597378 nova_compute[238941]: 2026-01-27 14:41:23.792 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:41:23 np0005597378 nova_compute[238941]: 2026-01-27 14:41:23.812 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:23 np0005597378 nova_compute[238941]: 2026-01-27 14:41:23.816 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:41:23 np0005597378 nova_compute[238941]: 2026-01-27 14:41:23.818 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:41:23 np0005597378 nova_compute[238941]: 2026-01-27 14:41:23.818 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:41:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:41:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:41:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:41:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:41:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:41:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:41:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:41:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:41:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:41:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:41:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:41:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:41:25 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:41:25 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:41:25 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:41:25 np0005597378 podman[388442]: 2026-01-27 14:41:25.283806304 +0000 UTC m=+0.044347571 container create 52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 09:41:25 np0005597378 systemd[1]: Started libpod-conmon-52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830.scope.
Jan 27 09:41:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:41:25 np0005597378 podman[388442]: 2026-01-27 14:41:25.264008802 +0000 UTC m=+0.024550079 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:41:25 np0005597378 podman[388442]: 2026-01-27 14:41:25.374079635 +0000 UTC m=+0.134620922 container init 52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:41:25 np0005597378 podman[388442]: 2026-01-27 14:41:25.381838233 +0000 UTC m=+0.142379500 container start 52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 09:41:25 np0005597378 frosty_lichterman[388458]: 167 167
Jan 27 09:41:25 np0005597378 systemd[1]: libpod-52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830.scope: Deactivated successfully.
Jan 27 09:41:25 np0005597378 conmon[388458]: conmon 52bbdc25b9d33709ae9a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830.scope/container/memory.events
Jan 27 09:41:25 np0005597378 podman[388442]: 2026-01-27 14:41:25.390215188 +0000 UTC m=+0.150756445 container attach 52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 09:41:25 np0005597378 podman[388442]: 2026-01-27 14:41:25.391209724 +0000 UTC m=+0.151750991 container died 52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:41:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cb25cb7a1d6357bd06e40d4adc0c3f6640cc5eb46dd027e589a46d1be9e6a8e1-merged.mount: Deactivated successfully.
Jan 27 09:41:25 np0005597378 podman[388442]: 2026-01-27 14:41:25.451053259 +0000 UTC m=+0.211594526 container remove 52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:41:25 np0005597378 systemd[1]: libpod-conmon-52bbdc25b9d33709ae9ada9041130e1e849448475815982f1d36b02888f8e830.scope: Deactivated successfully.
Jan 27 09:41:25 np0005597378 podman[388481]: 2026-01-27 14:41:25.625160989 +0000 UTC m=+0.049362875 container create 41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_sutherland, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 09:41:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:25 np0005597378 systemd[1]: Started libpod-conmon-41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23.scope.
Jan 27 09:41:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:41:25 np0005597378 podman[388481]: 2026-01-27 14:41:25.600701763 +0000 UTC m=+0.024903679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:41:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb03444d60260b6a1349cc92f0ac730095c3b374fa18aa80c0cfdd590ac97c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:41:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb03444d60260b6a1349cc92f0ac730095c3b374fa18aa80c0cfdd590ac97c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:41:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb03444d60260b6a1349cc92f0ac730095c3b374fa18aa80c0cfdd590ac97c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:41:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb03444d60260b6a1349cc92f0ac730095c3b374fa18aa80c0cfdd590ac97c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:41:25 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb03444d60260b6a1349cc92f0ac730095c3b374fa18aa80c0cfdd590ac97c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:41:25 np0005597378 podman[388481]: 2026-01-27 14:41:25.722575242 +0000 UTC m=+0.146777148 container init 41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:41:25 np0005597378 podman[388481]: 2026-01-27 14:41:25.730542426 +0000 UTC m=+0.154744312 container start 41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_sutherland, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:41:25 np0005597378 podman[388481]: 2026-01-27 14:41:25.736976178 +0000 UTC m=+0.161178084 container attach 41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:41:26 np0005597378 laughing_sutherland[388497]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:41:26 np0005597378 laughing_sutherland[388497]: --> All data devices are unavailable
Jan 27 09:41:26 np0005597378 systemd[1]: libpod-41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23.scope: Deactivated successfully.
Jan 27 09:41:26 np0005597378 podman[388481]: 2026-01-27 14:41:26.227234356 +0000 UTC m=+0.651436232 container died 41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 09:41:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ffb03444d60260b6a1349cc92f0ac730095c3b374fa18aa80c0cfdd590ac97c1-merged.mount: Deactivated successfully.
Jan 27 09:41:26 np0005597378 podman[388481]: 2026-01-27 14:41:26.28252114 +0000 UTC m=+0.706723026 container remove 41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:41:26 np0005597378 systemd[1]: libpod-conmon-41a61688c482c68d528d08a437802bd5f3d96390d584fe7b10bcb1208a8f7a23.scope: Deactivated successfully.
Jan 27 09:41:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:41:26 np0005597378 podman[388591]: 2026-01-27 14:41:26.718987636 +0000 UTC m=+0.042338217 container create ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tesla, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:41:26 np0005597378 systemd[1]: Started libpod-conmon-ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c.scope.
Jan 27 09:41:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:41:26 np0005597378 podman[388591]: 2026-01-27 14:41:26.788711976 +0000 UTC m=+0.112062597 container init ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:41:26 np0005597378 podman[388591]: 2026-01-27 14:41:26.795696533 +0000 UTC m=+0.119047114 container start ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tesla, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:41:26 np0005597378 podman[388591]: 2026-01-27 14:41:26.699994937 +0000 UTC m=+0.023345558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:41:26 np0005597378 podman[388591]: 2026-01-27 14:41:26.800166094 +0000 UTC m=+0.123516705 container attach ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 27 09:41:26 np0005597378 elegant_tesla[388607]: 167 167
Jan 27 09:41:26 np0005597378 systemd[1]: libpod-ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c.scope: Deactivated successfully.
Jan 27 09:41:26 np0005597378 podman[388591]: 2026-01-27 14:41:26.801236032 +0000 UTC m=+0.124586623 container died ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:41:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay-561354ad403b29e926d858e5b4c5e86a312c8b93631d25e9aacb429255171b42-merged.mount: Deactivated successfully.
Jan 27 09:41:26 np0005597378 podman[388591]: 2026-01-27 14:41:26.846406404 +0000 UTC m=+0.169756995 container remove ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tesla, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:41:26 np0005597378 systemd[1]: libpod-conmon-ccf256c3786c3ebdf83478fde3b374da8d7f1ecc2981d350df42186e8eb5118c.scope: Deactivated successfully.
Jan 27 09:41:27 np0005597378 podman[388631]: 2026-01-27 14:41:27.005435218 +0000 UTC m=+0.039618253 container create 063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:41:27 np0005597378 systemd[1]: Started libpod-conmon-063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0.scope.
Jan 27 09:41:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:41:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3008d58e58ea67d52319f402b915d2c47366aac5dacdecf0e130451b377082ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:41:27 np0005597378 podman[388631]: 2026-01-27 14:41:26.989097201 +0000 UTC m=+0.023280256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:41:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3008d58e58ea67d52319f402b915d2c47366aac5dacdecf0e130451b377082ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:41:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3008d58e58ea67d52319f402b915d2c47366aac5dacdecf0e130451b377082ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:41:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3008d58e58ea67d52319f402b915d2c47366aac5dacdecf0e130451b377082ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:41:27 np0005597378 podman[388631]: 2026-01-27 14:41:27.102282036 +0000 UTC m=+0.136465091 container init 063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 09:41:27 np0005597378 podman[388631]: 2026-01-27 14:41:27.10765512 +0000 UTC m=+0.141838155 container start 063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 09:41:27 np0005597378 podman[388631]: 2026-01-27 14:41:27.111082663 +0000 UTC m=+0.145265728 container attach 063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:41:27 np0005597378 nova_compute[238941]: 2026-01-27 14:41:27.287 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]: {
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:    "0": [
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:        {
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "devices": [
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "/dev/loop3"
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            ],
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_name": "ceph_lv0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_size": "21470642176",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "name": "ceph_lv0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "tags": {
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.cluster_name": "ceph",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.crush_device_class": "",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.encrypted": "0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.objectstore": "bluestore",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.osd_id": "0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.type": "block",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.vdo": "0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.with_tpm": "0"
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            },
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "type": "block",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "vg_name": "ceph_vg0"
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:        }
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:    ],
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:    "1": [
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:        {
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "devices": [
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "/dev/loop4"
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            ],
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_name": "ceph_lv1",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_size": "21470642176",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "name": "ceph_lv1",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "tags": {
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.cluster_name": "ceph",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.crush_device_class": "",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.encrypted": "0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.objectstore": "bluestore",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.osd_id": "1",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.type": "block",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.vdo": "0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.with_tpm": "0"
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            },
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "type": "block",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "vg_name": "ceph_vg1"
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:        }
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:    ],
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:    "2": [
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:        {
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "devices": [
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "/dev/loop5"
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            ],
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_name": "ceph_lv2",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_size": "21470642176",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "name": "ceph_lv2",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "tags": {
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.cluster_name": "ceph",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.crush_device_class": "",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.encrypted": "0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.objectstore": "bluestore",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.osd_id": "2",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.type": "block",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.vdo": "0",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:                "ceph.with_tpm": "0"
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            },
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "type": "block",
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:            "vg_name": "ceph_vg2"
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:        }
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]:    ]
Jan 27 09:41:27 np0005597378 unruffled_pike[388646]: }
Jan 27 09:41:27 np0005597378 systemd[1]: libpod-063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0.scope: Deactivated successfully.
Jan 27 09:41:27 np0005597378 podman[388631]: 2026-01-27 14:41:27.450626509 +0000 UTC m=+0.484809574 container died 063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:41:27 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3008d58e58ea67d52319f402b915d2c47366aac5dacdecf0e130451b377082ec-merged.mount: Deactivated successfully.
Jan 27 09:41:27 np0005597378 podman[388631]: 2026-01-27 14:41:27.533000968 +0000 UTC m=+0.567184013 container remove 063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:41:27 np0005597378 systemd[1]: libpod-conmon-063f796709b8c319da469f7076509f4a25ef3062a45eb079516ad5adb5b044c0.scope: Deactivated successfully.
Jan 27 09:41:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:28 np0005597378 podman[388728]: 2026-01-27 14:41:28.051960508 +0000 UTC m=+0.067965985 container create 25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldberg, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:41:28 np0005597378 podman[388728]: 2026-01-27 14:41:28.010657599 +0000 UTC m=+0.026663096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:41:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:41:28 np0005597378 systemd[1]: Started libpod-conmon-25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba.scope.
Jan 27 09:41:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:41:28 np0005597378 podman[388728]: 2026-01-27 14:41:28.202914025 +0000 UTC m=+0.218919532 container init 25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldberg, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:41:28 np0005597378 podman[388728]: 2026-01-27 14:41:28.209975496 +0000 UTC m=+0.225980963 container start 25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:41:28 np0005597378 optimistic_goldberg[388744]: 167 167
Jan 27 09:41:28 np0005597378 systemd[1]: libpod-25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba.scope: Deactivated successfully.
Jan 27 09:41:28 np0005597378 podman[388728]: 2026-01-27 14:41:28.2172354 +0000 UTC m=+0.233240867 container attach 25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldberg, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 09:41:28 np0005597378 podman[388728]: 2026-01-27 14:41:28.21762514 +0000 UTC m=+0.233630627 container died 25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldberg, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:41:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d4c183c76fd668cc7e0fc3f0875ce07a5d813bb0ac0e51703b0dfb698a8411db-merged.mount: Deactivated successfully.
Jan 27 09:41:28 np0005597378 podman[388728]: 2026-01-27 14:41:28.3943643 +0000 UTC m=+0.410369767 container remove 25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 09:41:28 np0005597378 systemd[1]: libpod-conmon-25147e1184398661478d1df82e3e3c35fe258172667cea8a314fe585734085ba.scope: Deactivated successfully.
Jan 27 09:41:28 np0005597378 podman[388243]: time="2026-01-27T14:41:28Z" level=info msg="Received shutdown.Stop(), terminating!" PID=388243
Jan 27 09:41:28 np0005597378 systemd[1]: podman.service: Deactivated successfully.
Jan 27 09:41:28 np0005597378 podman[388767]: 2026-01-27 14:41:28.574839451 +0000 UTC m=+0.047439503 container create ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:41:28 np0005597378 systemd[1]: Started libpod-conmon-ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465.scope.
Jan 27 09:41:28 np0005597378 podman[388767]: 2026-01-27 14:41:28.551560986 +0000 UTC m=+0.024161058 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:41:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:41:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb12810c083ec5adc344d7cad1f509793b8365091f50727e203be68d4699ef93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:41:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb12810c083ec5adc344d7cad1f509793b8365091f50727e203be68d4699ef93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:41:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb12810c083ec5adc344d7cad1f509793b8365091f50727e203be68d4699ef93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:41:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb12810c083ec5adc344d7cad1f509793b8365091f50727e203be68d4699ef93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:41:28 np0005597378 podman[388767]: 2026-01-27 14:41:28.674782511 +0000 UTC m=+0.147382593 container init ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 27 09:41:28 np0005597378 podman[388767]: 2026-01-27 14:41:28.681747788 +0000 UTC m=+0.154347840 container start ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 09:41:28 np0005597378 podman[388767]: 2026-01-27 14:41:28.689620239 +0000 UTC m=+0.162220311 container attach ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:41:28 np0005597378 nova_compute[238941]: 2026-01-27 14:41:28.813 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:29 np0005597378 lvm[388862]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:41:29 np0005597378 lvm[388861]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:41:29 np0005597378 lvm[388861]: VG ceph_vg0 finished
Jan 27 09:41:29 np0005597378 lvm[388862]: VG ceph_vg1 finished
Jan 27 09:41:29 np0005597378 lvm[388864]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:41:29 np0005597378 lvm[388864]: VG ceph_vg2 finished
Jan 27 09:41:29 np0005597378 lucid_villani[388783]: {}
Jan 27 09:41:29 np0005597378 systemd[1]: libpod-ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465.scope: Deactivated successfully.
Jan 27 09:41:29 np0005597378 systemd[1]: libpod-ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465.scope: Consumed 1.474s CPU time.
Jan 27 09:41:29 np0005597378 podman[388767]: 2026-01-27 14:41:29.593688837 +0000 UTC m=+1.066288889 container died ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:41:29 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cb12810c083ec5adc344d7cad1f509793b8365091f50727e203be68d4699ef93-merged.mount: Deactivated successfully.
Jan 27 09:41:29 np0005597378 podman[388767]: 2026-01-27 14:41:29.665547684 +0000 UTC m=+1.138147736 container remove ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:41:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:29 np0005597378 systemd[1]: libpod-conmon-ad7318e1eb19fe8d5188bc3d0c888a94c69f0cb6777f07eb4cda384e4f2f0465.scope: Deactivated successfully.
Jan 27 09:41:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:41:29 np0005597378 nova_compute[238941]: 2026-01-27 14:41:29.820 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:41:29 np0005597378 nova_compute[238941]: 2026-01-27 14:41:29.821 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:41:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:41:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:41:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:41:30 np0005597378 nova_compute[238941]: 2026-01-27 14:41:30.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:41:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:41:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:41:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.757220) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524891757300, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 550, "num_deletes": 256, "total_data_size": 550231, "memory_usage": 561656, "flush_reason": "Manual Compaction"}
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524891803001, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 545351, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61198, "largest_seqno": 61747, "table_properties": {"data_size": 542318, "index_size": 1004, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7020, "raw_average_key_size": 18, "raw_value_size": 536188, "raw_average_value_size": 1422, "num_data_blocks": 44, "num_entries": 377, "num_filter_entries": 377, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524857, "oldest_key_time": 1769524857, "file_creation_time": 1769524891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 45831 microseconds, and 4644 cpu microseconds.
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.803063) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 545351 bytes OK
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.803086) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.882937) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.882984) EVENT_LOG_v1 {"time_micros": 1769524891882975, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.883012) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 547109, prev total WAL file size 560577, number of live WAL files 2.
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.924288) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353132' seq:72057594037927935, type:22 .. '6C6F676D0032373634' seq:0, type:0; will stop at (end)
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(532KB)], [143(9705KB)]
Jan 27 09:41:31 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524891924371, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 10483854, "oldest_snapshot_seqno": -1}
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 8026 keys, 10365066 bytes, temperature: kUnknown
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524892052480, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10365066, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10312728, "index_size": 31169, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20101, "raw_key_size": 210821, "raw_average_key_size": 26, "raw_value_size": 10170874, "raw_average_value_size": 1267, "num_data_blocks": 1209, "num_entries": 8026, "num_filter_entries": 8026, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769524891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.052760) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10365066 bytes
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.057591) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.8 rd, 80.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.5 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(38.2) write-amplify(19.0) OK, records in: 8549, records dropped: 523 output_compression: NoCompression
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.057616) EVENT_LOG_v1 {"time_micros": 1769524892057604, "job": 88, "event": "compaction_finished", "compaction_time_micros": 128184, "compaction_time_cpu_micros": 29565, "output_level": 6, "num_output_files": 1, "total_output_size": 10365066, "num_input_records": 8549, "num_output_records": 8026, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524892057878, "job": 88, "event": "table_file_deletion", "file_number": 145}
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769524892060140, "job": 88, "event": "table_file_deletion", "file_number": 143}
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:31.924182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.060170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.060174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.060175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.060177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:41:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:41:32.060179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:41:32 np0005597378 nova_compute[238941]: 2026-01-27 14:41:32.291 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:33 np0005597378 nova_compute[238941]: 2026-01-27 14:41:33.815 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:34 np0005597378 nova_compute[238941]: 2026-01-27 14:41:34.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:41:35 np0005597378 nova_compute[238941]: 2026-01-27 14:41:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:41:35 np0005597378 nova_compute[238941]: 2026-01-27 14:41:35.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:41:35 np0005597378 nova_compute[238941]: 2026-01-27 14:41:35.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:41:35 np0005597378 nova_compute[238941]: 2026-01-27 14:41:35.410 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:41:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:41:36 np0005597378 podman[388911]: 2026-01-27 14:41:36.763224428 +0000 UTC m=+0.094416793 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:41:36 np0005597378 systemd[1]: session-52.scope: Deactivated successfully.
Jan 27 09:41:36 np0005597378 systemd-logind[786]: Session 52 logged out. Waiting for processes to exit.
Jan 27 09:41:36 np0005597378 systemd-logind[786]: Removed session 52.
Jan 27 09:41:36 np0005597378 systemd[1]: session-53.scope: Deactivated successfully.
Jan 27 09:41:36 np0005597378 systemd[1]: session-53.scope: Consumed 1.155s CPU time.
Jan 27 09:41:36 np0005597378 systemd-logind[786]: Session 53 logged out. Waiting for processes to exit.
Jan 27 09:41:36 np0005597378 systemd-logind[786]: Removed session 53.
Jan 27 09:41:37 np0005597378 nova_compute[238941]: 2026-01-27 14:41:37.294 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:37 np0005597378 systemd[1]: session-54.scope: Deactivated successfully.
Jan 27 09:41:37 np0005597378 systemd-logind[786]: Session 54 logged out. Waiting for processes to exit.
Jan 27 09:41:37 np0005597378 systemd-logind[786]: Removed session 54.
Jan 27 09:41:38 np0005597378 nova_compute[238941]: 2026-01-27 14:41:38.816 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:39 np0005597378 nova_compute[238941]: 2026-01-27 14:41:39.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:41:39 np0005597378 nova_compute[238941]: 2026-01-27 14:41:39.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 09:41:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:40 np0005597378 podman[388975]: 2026-01-27 14:41:40.79264874 +0000 UTC m=+0.127066369 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:41:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:41:42 np0005597378 nova_compute[238941]: 2026-01-27 14:41:42.296 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:43 np0005597378 nova_compute[238941]: 2026-01-27 14:41:43.399 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:41:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:43 np0005597378 nova_compute[238941]: 2026-01-27 14:41:43.818 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:41:46.350 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:41:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:41:46.351 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:41:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:41:46.351 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:41:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:41:47 np0005597378 nova_compute[238941]: 2026-01-27 14:41:47.298 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:41:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:41:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:41:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:41:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:41:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:41:48 np0005597378 nova_compute[238941]: 2026-01-27 14:41:48.819 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:49 np0005597378 nova_compute[238941]: 2026-01-27 14:41:49.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:41:49 np0005597378 nova_compute[238941]: 2026-01-27 14:41:49.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:41:49 np0005597378 nova_compute[238941]: 2026-01-27 14:41:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:41:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:51 np0005597378 nova_compute[238941]: 2026-01-27 14:41:51.435 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:41:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:41:52 np0005597378 nova_compute[238941]: 2026-01-27 14:41:52.301 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:53 np0005597378 nova_compute[238941]: 2026-01-27 14:41:53.820 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:41:57 np0005597378 nova_compute[238941]: 2026-01-27 14:41:57.302 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:41:58 np0005597378 nova_compute[238941]: 2026-01-27 14:41:58.822 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:41:59 np0005597378 nova_compute[238941]: 2026-01-27 14:41:59.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:41:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:41:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2703705961' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:41:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:41:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2703705961' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:41:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:42:02 np0005597378 nova_compute[238941]: 2026-01-27 14:42:02.304 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:02 np0005597378 nova_compute[238941]: 2026-01-27 14:42:02.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:42:02 np0005597378 nova_compute[238941]: 2026-01-27 14:42:02.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 27 09:42:02 np0005597378 nova_compute[238941]: 2026-01-27 14:42:02.405 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 27 09:42:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:03 np0005597378 nova_compute[238941]: 2026-01-27 14:42:03.824 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:42:07 np0005597378 nova_compute[238941]: 2026-01-27 14:42:07.306 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:07 np0005597378 podman[389003]: 2026-01-27 14:42:07.714302306 +0000 UTC m=+0.057466852 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 09:42:08 np0005597378 nova_compute[238941]: 2026-01-27 14:42:08.826 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:11 np0005597378 podman[389022]: 2026-01-27 14:42:11.740220614 +0000 UTC m=+0.075535827 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 09:42:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:42:12 np0005597378 nova_compute[238941]: 2026-01-27 14:42:12.308 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:13 np0005597378 nova_compute[238941]: 2026-01-27 14:42:13.827 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:42:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:42:17
Jan 27 09:42:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:42:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:42:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images']
Jan 27 09:42:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:42:17 np0005597378 nova_compute[238941]: 2026-01-27 14:42:17.310 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:42:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:42:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:42:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:42:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:42:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:42:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:42:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:42:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:42:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:42:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:42:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:42:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:42:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:42:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:42:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:42:18 np0005597378 nova_compute[238941]: 2026-01-27 14:42:18.829 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:42:22 np0005597378 nova_compute[238941]: 2026-01-27 14:42:22.311 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:22 np0005597378 nova_compute[238941]: 2026-01-27 14:42:22.405 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:42:22 np0005597378 nova_compute[238941]: 2026-01-27 14:42:22.438 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:42:22 np0005597378 nova_compute[238941]: 2026-01-27 14:42:22.439 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:42:22 np0005597378 nova_compute[238941]: 2026-01-27 14:42:22.439 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:42:22 np0005597378 nova_compute[238941]: 2026-01-27 14:42:22.439 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:42:22 np0005597378 nova_compute[238941]: 2026-01-27 14:42:22.440 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:42:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:42:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3917090285' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.034 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.245 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.246 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3591MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.246 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.247 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.318 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.319 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.343 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:42:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.831 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:42:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165816354' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.951 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.958 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.974 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.976 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:42:23 np0005597378 nova_compute[238941]: 2026-01-27 14:42:23.976 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:42:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:42:27 np0005597378 nova_compute[238941]: 2026-01-27 14:42:27.312 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:42:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:42:28 np0005597378 nova_compute[238941]: 2026-01-27 14:42:28.833 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:42:30 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:42:30 np0005597378 nova_compute[238941]: 2026-01-27 14:42:30.954 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:42:30 np0005597378 nova_compute[238941]: 2026-01-27 14:42:30.955 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:42:30 np0005597378 nova_compute[238941]: 2026-01-27 14:42:30.955 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:42:31 np0005597378 podman[389235]: 2026-01-27 14:42:31.085073385 +0000 UTC m=+0.043930400 container create d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_germain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:42:31 np0005597378 systemd[1]: Started libpod-conmon-d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2.scope.
Jan 27 09:42:31 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:42:31 np0005597378 podman[389235]: 2026-01-27 14:42:31.065717526 +0000 UTC m=+0.024574571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:42:31 np0005597378 podman[389235]: 2026-01-27 14:42:31.179059406 +0000 UTC m=+0.137916451 container init d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:42:31 np0005597378 podman[389235]: 2026-01-27 14:42:31.186325 +0000 UTC m=+0.145182015 container start d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:42:31 np0005597378 podman[389235]: 2026-01-27 14:42:31.191403077 +0000 UTC m=+0.150260112 container attach d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_germain, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:42:31 np0005597378 relaxed_germain[389251]: 167 167
Jan 27 09:42:31 np0005597378 systemd[1]: libpod-d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2.scope: Deactivated successfully.
Jan 27 09:42:31 np0005597378 podman[389235]: 2026-01-27 14:42:31.193125533 +0000 UTC m=+0.151982568 container died d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_germain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 09:42:31 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7d40ff9589d48644260350241010f2971aa867dbd351dddfa1d02b986f03d89c-merged.mount: Deactivated successfully.
Jan 27 09:42:31 np0005597378 podman[389235]: 2026-01-27 14:42:31.242052205 +0000 UTC m=+0.200909220 container remove d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_germain, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 09:42:31 np0005597378 systemd[1]: libpod-conmon-d73a9ce1c2cb6e23cb1ac0ea30f4c071c6d0aaf940cb33308c63715649417df2.scope: Deactivated successfully.
Jan 27 09:42:31 np0005597378 podman[389274]: 2026-01-27 14:42:31.410480402 +0000 UTC m=+0.044599257 container create 89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wiles, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:42:31 np0005597378 systemd[1]: Started libpod-conmon-89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7.scope.
Jan 27 09:42:31 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:42:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa16cc431af117c9a0a210cbc54c03b07dfdc3047f351bf6b3c757665776461/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:42:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa16cc431af117c9a0a210cbc54c03b07dfdc3047f351bf6b3c757665776461/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:42:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa16cc431af117c9a0a210cbc54c03b07dfdc3047f351bf6b3c757665776461/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:42:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa16cc431af117c9a0a210cbc54c03b07dfdc3047f351bf6b3c757665776461/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:42:31 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa16cc431af117c9a0a210cbc54c03b07dfdc3047f351bf6b3c757665776461/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:42:31 np0005597378 podman[389274]: 2026-01-27 14:42:31.391541795 +0000 UTC m=+0.025660680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:42:31 np0005597378 podman[389274]: 2026-01-27 14:42:31.499260683 +0000 UTC m=+0.133379568 container init 89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wiles, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:42:31 np0005597378 podman[389274]: 2026-01-27 14:42:31.512241092 +0000 UTC m=+0.146359957 container start 89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 27 09:42:31 np0005597378 podman[389274]: 2026-01-27 14:42:31.517568255 +0000 UTC m=+0.151687150 container attach 89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wiles, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 27 09:42:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:42:31 np0005597378 gracious_wiles[389292]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:42:31 np0005597378 gracious_wiles[389292]: --> All data devices are unavailable
Jan 27 09:42:31 np0005597378 systemd[1]: libpod-89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7.scope: Deactivated successfully.
Jan 27 09:42:31 np0005597378 podman[389274]: 2026-01-27 14:42:31.992638156 +0000 UTC m=+0.626757051 container died 89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:42:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4fa16cc431af117c9a0a210cbc54c03b07dfdc3047f351bf6b3c757665776461-merged.mount: Deactivated successfully.
Jan 27 09:42:32 np0005597378 podman[389274]: 2026-01-27 14:42:32.044697692 +0000 UTC m=+0.678816557 container remove 89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:42:32 np0005597378 systemd[1]: libpod-conmon-89c5602b52ff1de3e034cbda776b15ca93fc1273ee2696941a47d8a185c44fb7.scope: Deactivated successfully.
Jan 27 09:42:32 np0005597378 nova_compute[238941]: 2026-01-27 14:42:32.315 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:42:32 np0005597378 podman[389387]: 2026-01-27 14:42:32.474559501 +0000 UTC m=+0.037708192 container create 21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:42:32 np0005597378 systemd[1]: Started libpod-conmon-21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8.scope.
Jan 27 09:42:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:42:32 np0005597378 podman[389387]: 2026-01-27 14:42:32.459018145 +0000 UTC m=+0.022166856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:42:32 np0005597378 podman[389387]: 2026-01-27 14:42:32.558226385 +0000 UTC m=+0.121375106 container init 21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gates, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 27 09:42:32 np0005597378 podman[389387]: 2026-01-27 14:42:32.565536971 +0000 UTC m=+0.128685652 container start 21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gates, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 09:42:32 np0005597378 podman[389387]: 2026-01-27 14:42:32.569151409 +0000 UTC m=+0.132300120 container attach 21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gates, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:42:32 np0005597378 brave_gates[389403]: 167 167
Jan 27 09:42:32 np0005597378 systemd[1]: libpod-21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8.scope: Deactivated successfully.
Jan 27 09:42:32 np0005597378 podman[389387]: 2026-01-27 14:42:32.572367515 +0000 UTC m=+0.135516236 container died 21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:42:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3dfdef79ad42db3918303b768095aa83fedaf77f05eb764d6045f29ff278d2ad-merged.mount: Deactivated successfully.
Jan 27 09:42:32 np0005597378 podman[389387]: 2026-01-27 14:42:32.609612594 +0000 UTC m=+0.172761285 container remove 21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gates, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 09:42:32 np0005597378 systemd[1]: libpod-conmon-21ec51251d25755efe1ab29437a56f1a30f0e15b7a3b9933f6ecdecf04ada5a8.scope: Deactivated successfully.
Jan 27 09:42:32 np0005597378 podman[389427]: 2026-01-27 14:42:32.764858697 +0000 UTC m=+0.038824102 container create 3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:42:32 np0005597378 systemd[1]: Started libpod-conmon-3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb.scope.
Jan 27 09:42:32 np0005597378 podman[389427]: 2026-01-27 14:42:32.748627392 +0000 UTC m=+0.022592817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:42:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:42:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bff7bb7bd58f1ad57e530b7d07ecf11bf14a434751035d525dcd465549d2d53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:42:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bff7bb7bd58f1ad57e530b7d07ecf11bf14a434751035d525dcd465549d2d53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:42:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bff7bb7bd58f1ad57e530b7d07ecf11bf14a434751035d525dcd465549d2d53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:42:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bff7bb7bd58f1ad57e530b7d07ecf11bf14a434751035d525dcd465549d2d53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:42:32 np0005597378 podman[389427]: 2026-01-27 14:42:32.865150717 +0000 UTC m=+0.139116142 container init 3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:42:32 np0005597378 podman[389427]: 2026-01-27 14:42:32.871914479 +0000 UTC m=+0.145879874 container start 3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:42:32 np0005597378 podman[389427]: 2026-01-27 14:42:32.877591511 +0000 UTC m=+0.151556936 container attach 3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]: {
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:    "0": [
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:        {
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "devices": [
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "/dev/loop3"
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            ],
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_name": "ceph_lv0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_size": "21470642176",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "name": "ceph_lv0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "tags": {
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.cluster_name": "ceph",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.crush_device_class": "",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.encrypted": "0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.objectstore": "bluestore",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.osd_id": "0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.type": "block",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.vdo": "0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.with_tpm": "0"
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            },
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "type": "block",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "vg_name": "ceph_vg0"
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:        }
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:    ],
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:    "1": [
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:        {
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "devices": [
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "/dev/loop4"
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            ],
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_name": "ceph_lv1",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_size": "21470642176",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "name": "ceph_lv1",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "tags": {
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.cluster_name": "ceph",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.crush_device_class": "",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.encrypted": "0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.objectstore": "bluestore",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.osd_id": "1",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.type": "block",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.vdo": "0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.with_tpm": "0"
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            },
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "type": "block",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "vg_name": "ceph_vg1"
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:        }
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:    ],
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:    "2": [
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:        {
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "devices": [
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "/dev/loop5"
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            ],
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_name": "ceph_lv2",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_size": "21470642176",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "name": "ceph_lv2",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "tags": {
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.cluster_name": "ceph",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.crush_device_class": "",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.encrypted": "0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.objectstore": "bluestore",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.osd_id": "2",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.type": "block",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.vdo": "0",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:                "ceph.with_tpm": "0"
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            },
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "type": "block",
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:            "vg_name": "ceph_vg2"
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:        }
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]:    ]
Jan 27 09:42:33 np0005597378 jovial_chatterjee[389444]: }
Jan 27 09:42:33 np0005597378 systemd[1]: libpod-3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb.scope: Deactivated successfully.
Jan 27 09:42:33 np0005597378 podman[389427]: 2026-01-27 14:42:33.178122282 +0000 UTC m=+0.452087707 container died 3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chatterjee, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 09:42:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9bff7bb7bd58f1ad57e530b7d07ecf11bf14a434751035d525dcd465549d2d53-merged.mount: Deactivated successfully.
Jan 27 09:42:33 np0005597378 podman[389427]: 2026-01-27 14:42:33.225674217 +0000 UTC m=+0.499639622 container remove 3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 09:42:33 np0005597378 systemd[1]: libpod-conmon-3fa35bfc897d6e6e03274744a0a6e54f5965a4c7fdf0425b219ea44a8ac7b7fb.scope: Deactivated successfully.
Jan 27 09:42:33 np0005597378 podman[389528]: 2026-01-27 14:42:33.694848031 +0000 UTC m=+0.059609241 container create 15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hellman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 09:42:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:33 np0005597378 systemd[1]: Started libpod-conmon-15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16.scope.
Jan 27 09:42:33 np0005597378 podman[389528]: 2026-01-27 14:42:33.660711594 +0000 UTC m=+0.025472834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:42:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:42:33 np0005597378 podman[389528]: 2026-01-27 14:42:33.796608729 +0000 UTC m=+0.161369959 container init 15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:42:33 np0005597378 podman[389528]: 2026-01-27 14:42:33.802181279 +0000 UTC m=+0.166942479 container start 15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hellman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:42:33 np0005597378 kind_hellman[389544]: 167 167
Jan 27 09:42:33 np0005597378 systemd[1]: libpod-15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16.scope: Deactivated successfully.
Jan 27 09:42:33 np0005597378 conmon[389544]: conmon 15633298cc40ee33399b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16.scope/container/memory.events
Jan 27 09:42:33 np0005597378 podman[389528]: 2026-01-27 14:42:33.832885732 +0000 UTC m=+0.197646962 container attach 15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hellman, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:42:33 np0005597378 podman[389528]: 2026-01-27 14:42:33.833315684 +0000 UTC m=+0.198076894 container died 15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hellman, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:42:33 np0005597378 nova_compute[238941]: 2026-01-27 14:42:33.836 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-41312f277ceb1e45bddf2a8877cb6f7f30dd9d0c032b1ae0a459d932046fef04-merged.mount: Deactivated successfully.
Jan 27 09:42:33 np0005597378 podman[389528]: 2026-01-27 14:42:33.903797054 +0000 UTC m=+0.268558264 container remove 15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hellman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 27 09:42:33 np0005597378 systemd[1]: libpod-conmon-15633298cc40ee33399bd2b2d1e4eca349785481ca2e484e6b36b80d60591c16.scope: Deactivated successfully.
Jan 27 09:42:34 np0005597378 podman[389568]: 2026-01-27 14:42:34.08743494 +0000 UTC m=+0.040312093 container create 7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 27 09:42:34 np0005597378 systemd[1]: Started libpod-conmon-7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12.scope.
Jan 27 09:42:34 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:42:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a0d5589ed8e3f84ea4af73500a1565522e19ba025cb11aa592c08973332b8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:42:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a0d5589ed8e3f84ea4af73500a1565522e19ba025cb11aa592c08973332b8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:42:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a0d5589ed8e3f84ea4af73500a1565522e19ba025cb11aa592c08973332b8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:42:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a0d5589ed8e3f84ea4af73500a1565522e19ba025cb11aa592c08973332b8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:42:34 np0005597378 podman[389568]: 2026-01-27 14:42:34.070209808 +0000 UTC m=+0.023086991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:42:34 np0005597378 podman[389568]: 2026-01-27 14:42:34.173531708 +0000 UTC m=+0.126408881 container init 7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 27 09:42:34 np0005597378 podman[389568]: 2026-01-27 14:42:34.179860488 +0000 UTC m=+0.132737641 container start 7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 27 09:42:34 np0005597378 podman[389568]: 2026-01-27 14:42:34.183397534 +0000 UTC m=+0.136274707 container attach 7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:42:34 np0005597378 lvm[389663]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:42:34 np0005597378 lvm[389664]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:42:34 np0005597378 lvm[389664]: VG ceph_vg1 finished
Jan 27 09:42:34 np0005597378 lvm[389663]: VG ceph_vg0 finished
Jan 27 09:42:34 np0005597378 lvm[389666]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:42:34 np0005597378 lvm[389666]: VG ceph_vg2 finished
Jan 27 09:42:34 np0005597378 vigorous_bouman[389585]: {}
Jan 27 09:42:35 np0005597378 systemd[1]: libpod-7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12.scope: Deactivated successfully.
Jan 27 09:42:35 np0005597378 systemd[1]: libpod-7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12.scope: Consumed 1.345s CPU time.
Jan 27 09:42:35 np0005597378 podman[389568]: 2026-01-27 14:42:35.011220676 +0000 UTC m=+0.964097829 container died 7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:42:35 np0005597378 systemd[1]: var-lib-containers-storage-overlay-87a0d5589ed8e3f84ea4af73500a1565522e19ba025cb11aa592c08973332b8c-merged.mount: Deactivated successfully.
Jan 27 09:42:35 np0005597378 podman[389568]: 2026-01-27 14:42:35.059403678 +0000 UTC m=+1.012280831 container remove 7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bouman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:42:35 np0005597378 systemd[1]: libpod-conmon-7570e0840ab050b4a2c68f4f1f642a74698189c64d6d9bec869886405c855a12.scope: Deactivated successfully.
Jan 27 09:42:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:42:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:42:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:42:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:42:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:42:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:42:36 np0005597378 nova_compute[238941]: 2026-01-27 14:42:36.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:42:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:42:37 np0005597378 nova_compute[238941]: 2026-01-27 14:42:37.318 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:37 np0005597378 nova_compute[238941]: 2026-01-27 14:42:37.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:42:37 np0005597378 nova_compute[238941]: 2026-01-27 14:42:37.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:42:37 np0005597378 nova_compute[238941]: 2026-01-27 14:42:37.384 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:42:37 np0005597378 nova_compute[238941]: 2026-01-27 14:42:37.404 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:42:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:38 np0005597378 podman[389704]: 2026-01-27 14:42:38.733462219 +0000 UTC m=+0.067286476 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:42:38 np0005597378 nova_compute[238941]: 2026-01-27 14:42:38.838 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:42:42 np0005597378 nova_compute[238941]: 2026-01-27 14:42:42.320 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:42 np0005597378 podman[389726]: 2026-01-27 14:42:42.746207953 +0000 UTC m=+0.082996527 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 09:42:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:43 np0005597378 nova_compute[238941]: 2026-01-27 14:42:43.840 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:44 np0005597378 nova_compute[238941]: 2026-01-27 14:42:44.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:42:44 np0005597378 nova_compute[238941]: 2026-01-27 14:42:44.802 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:42:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:42:46.352 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:42:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:42:46.353 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:42:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:42:46.353 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:42:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:42:47 np0005597378 nova_compute[238941]: 2026-01-27 14:42:47.322 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:42:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:42:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:42:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:42:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:42:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:42:48 np0005597378 nova_compute[238941]: 2026-01-27 14:42:48.841 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:50 np0005597378 nova_compute[238941]: 2026-01-27 14:42:50.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:42:50 np0005597378 nova_compute[238941]: 2026-01-27 14:42:50.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:42:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:42:52 np0005597378 nova_compute[238941]: 2026-01-27 14:42:52.324 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:52 np0005597378 nova_compute[238941]: 2026-01-27 14:42:52.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:42:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:53 np0005597378 nova_compute[238941]: 2026-01-27 14:42:53.844 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:42:57 np0005597378 nova_compute[238941]: 2026-01-27 14:42:57.325 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:42:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 13K writes, 62K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1339 writes, 6320 keys, 1339 commit groups, 1.0 writes per commit group, ingest: 8.84 MB, 0.01 MB/s#012Interval WAL: 1339 writes, 1339 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     28.8      2.62              0.23        44    0.059       0      0       0.0       0.0#012  L6      1/0    9.88 MB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   4.9     76.9     65.2      5.63              1.00        43    0.131    278K    23K       0.0       0.0#012 Sum      1/0    9.88 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.9     52.5     53.7      8.25              1.23        87    0.095    278K    23K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     95.1     98.1      0.70              0.20        12    0.058     50K   3108       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   0.0     76.9     65.2      5.63              1.00        43    0.131    278K    23K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     28.9      2.61              0.23        43    0.061       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.1 total, 600.0 interval#012Flush(GB): cumulative 0.074, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.43 GB write, 0.08 MB/s write, 0.42 GB read, 0.08 MB/s read, 8.2 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 50.38 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000482 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3152,48.37 MB,15.9102%) FilterBlock(88,779.05 KB,0.250259%) IndexBlock(88,1.25 MB,0.412324%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 27 09:42:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:42:58 np0005597378 nova_compute[238941]: 2026-01-27 14:42:58.846 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:42:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:42:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2606469699' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:42:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:42:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2606469699' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:42:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:43:02 np0005597378 nova_compute[238941]: 2026-01-27 14:43:02.326 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:03 np0005597378 nova_compute[238941]: 2026-01-27 14:43:03.849 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:43:07 np0005597378 nova_compute[238941]: 2026-01-27 14:43:07.329 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:08 np0005597378 nova_compute[238941]: 2026-01-27 14:43:08.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:09 np0005597378 podman[389753]: 2026-01-27 14:43:09.718160648 +0000 UTC m=+0.053889827 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 09:43:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:43:12 np0005597378 nova_compute[238941]: 2026-01-27 14:43:12.331 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:13 np0005597378 podman[389772]: 2026-01-27 14:43:13.77256745 +0000 UTC m=+0.112061586 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:43:13 np0005597378 nova_compute[238941]: 2026-01-27 14:43:13.851 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:43:17
Jan 27 09:43:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:43:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:43:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'volumes', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups']
Jan 27 09:43:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:43:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:43:17 np0005597378 nova_compute[238941]: 2026-01-27 14:43:17.333 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:43:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:43:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:43:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:43:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:43:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:43:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:43:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:43:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:43:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:43:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:43:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:43:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:43:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:43:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:43:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:43:18 np0005597378 nova_compute[238941]: 2026-01-27 14:43:18.853 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:43:22 np0005597378 nova_compute[238941]: 2026-01-27 14:43:22.335 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:22 np0005597378 nova_compute[238941]: 2026-01-27 14:43:22.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:43:22 np0005597378 nova_compute[238941]: 2026-01-27 14:43:22.445 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:43:22 np0005597378 nova_compute[238941]: 2026-01-27 14:43:22.446 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:43:22 np0005597378 nova_compute[238941]: 2026-01-27 14:43:22.447 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:43:22 np0005597378 nova_compute[238941]: 2026-01-27 14:43:22.447 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:43:22 np0005597378 nova_compute[238941]: 2026-01-27 14:43:22.447 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:43:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:43:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569491326' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:43:23 np0005597378 nova_compute[238941]: 2026-01-27 14:43:23.082 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:43:23 np0005597378 nova_compute[238941]: 2026-01-27 14:43:23.301 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:43:23 np0005597378 nova_compute[238941]: 2026-01-27 14:43:23.303 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3561MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:43:23 np0005597378 nova_compute[238941]: 2026-01-27 14:43:23.303 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:43:23 np0005597378 nova_compute[238941]: 2026-01-27 14:43:23.303 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:43:23 np0005597378 nova_compute[238941]: 2026-01-27 14:43:23.673 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:43:23 np0005597378 nova_compute[238941]: 2026-01-27 14:43:23.674 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:43:23 np0005597378 nova_compute[238941]: 2026-01-27 14:43:23.693 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:43:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:23 np0005597378 nova_compute[238941]: 2026-01-27 14:43:23.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:43:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1114900228' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:43:24 np0005597378 nova_compute[238941]: 2026-01-27 14:43:24.318 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:43:24 np0005597378 nova_compute[238941]: 2026-01-27 14:43:24.324 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:43:24 np0005597378 nova_compute[238941]: 2026-01-27 14:43:24.408 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:43:24 np0005597378 nova_compute[238941]: 2026-01-27 14:43:24.410 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:43:24 np0005597378 nova_compute[238941]: 2026-01-27 14:43:24.411 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:43:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:43:27 np0005597378 nova_compute[238941]: 2026-01-27 14:43:27.336 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:43:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:43:28 np0005597378 nova_compute[238941]: 2026-01-27 14:43:28.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:43:32 np0005597378 nova_compute[238941]: 2026-01-27 14:43:32.338 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:32 np0005597378 nova_compute[238941]: 2026-01-27 14:43:32.411 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:43:32 np0005597378 nova_compute[238941]: 2026-01-27 14:43:32.412 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:43:32 np0005597378 nova_compute[238941]: 2026-01-27 14:43:32.412 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:43:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:33 np0005597378 nova_compute[238941]: 2026-01-27 14:43:33.856 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:43:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:43:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:43:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:43:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:43:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:43:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:43:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:43:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:43:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:43:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:43:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:43:36 np0005597378 podman[389984]: 2026-01-27 14:43:36.478017984 +0000 UTC m=+0.059354833 container create 34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:43:36 np0005597378 systemd[1]: Started libpod-conmon-34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2.scope.
Jan 27 09:43:36 np0005597378 podman[389984]: 2026-01-27 14:43:36.447596108 +0000 UTC m=+0.028933047 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:43:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:43:36 np0005597378 podman[389984]: 2026-01-27 14:43:36.588905278 +0000 UTC m=+0.170242137 container init 34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:43:36 np0005597378 podman[389984]: 2026-01-27 14:43:36.598814193 +0000 UTC m=+0.180151042 container start 34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_babbage, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:43:36 np0005597378 practical_babbage[390000]: 167 167
Jan 27 09:43:36 np0005597378 systemd[1]: libpod-34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2.scope: Deactivated successfully.
Jan 27 09:43:36 np0005597378 conmon[390000]: conmon 34e6b20295f16d29b3c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2.scope/container/memory.events
Jan 27 09:43:36 np0005597378 podman[389984]: 2026-01-27 14:43:36.607860846 +0000 UTC m=+0.189197725 container attach 34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:43:36 np0005597378 podman[389984]: 2026-01-27 14:43:36.608496343 +0000 UTC m=+0.189833192 container died 34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_babbage, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 09:43:36 np0005597378 systemd[1]: var-lib-containers-storage-overlay-41dd30f4bd7b091af88e877d236db72f2ab32aa2a4af90e860c8b15569649dee-merged.mount: Deactivated successfully.
Jan 27 09:43:36 np0005597378 podman[389984]: 2026-01-27 14:43:36.743478344 +0000 UTC m=+0.324815193 container remove 34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:43:36 np0005597378 systemd[1]: libpod-conmon-34e6b20295f16d29b3c7ad0facd29d7732261a4fed35c465d6082daf83287cc2.scope: Deactivated successfully.
Jan 27 09:43:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:43:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:43:36 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:43:36 np0005597378 podman[390024]: 2026-01-27 14:43:36.960528985 +0000 UTC m=+0.080376747 container create b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:43:36 np0005597378 podman[390024]: 2026-01-27 14:43:36.90180691 +0000 UTC m=+0.021654692 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:43:37 np0005597378 systemd[1]: Started libpod-conmon-b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77.scope.
Jan 27 09:43:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:43:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc547473c6f7f0e52fa9e37a64b8e7975ceda32ac90a1db14e08ebbbafad49a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:43:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc547473c6f7f0e52fa9e37a64b8e7975ceda32ac90a1db14e08ebbbafad49a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:43:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc547473c6f7f0e52fa9e37a64b8e7975ceda32ac90a1db14e08ebbbafad49a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:43:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc547473c6f7f0e52fa9e37a64b8e7975ceda32ac90a1db14e08ebbbafad49a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:43:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc547473c6f7f0e52fa9e37a64b8e7975ceda32ac90a1db14e08ebbbafad49a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:43:37 np0005597378 podman[390024]: 2026-01-27 14:43:37.075752555 +0000 UTC m=+0.195600347 container init b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:43:37 np0005597378 podman[390024]: 2026-01-27 14:43:37.085183949 +0000 UTC m=+0.205031711 container start b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:43:37 np0005597378 podman[390024]: 2026-01-27 14:43:37.099363639 +0000 UTC m=+0.219211411 container attach b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 09:43:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:43:37 np0005597378 nova_compute[238941]: 2026-01-27 14:43:37.340 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:37 np0005597378 awesome_kalam[390041]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:43:37 np0005597378 awesome_kalam[390041]: --> All data devices are unavailable
Jan 27 09:43:37 np0005597378 systemd[1]: libpod-b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77.scope: Deactivated successfully.
Jan 27 09:43:37 np0005597378 podman[390024]: 2026-01-27 14:43:37.613166609 +0000 UTC m=+0.733014391 container died b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 09:43:37 np0005597378 systemd[1]: var-lib-containers-storage-overlay-cc547473c6f7f0e52fa9e37a64b8e7975ceda32ac90a1db14e08ebbbafad49a2-merged.mount: Deactivated successfully.
Jan 27 09:43:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:37 np0005597378 podman[390024]: 2026-01-27 14:43:37.764692523 +0000 UTC m=+0.884540285 container remove b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 09:43:37 np0005597378 systemd[1]: libpod-conmon-b3d4761d783a5ef67ee0d6a193b74295678add5574c2caf403143ec6c4ca2a77.scope: Deactivated successfully.
Jan 27 09:43:38 np0005597378 podman[390137]: 2026-01-27 14:43:38.304240084 +0000 UTC m=+0.086262444 container create 4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:43:38 np0005597378 podman[390137]: 2026-01-27 14:43:38.244498492 +0000 UTC m=+0.026520882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:43:38 np0005597378 nova_compute[238941]: 2026-01-27 14:43:38.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:43:38 np0005597378 nova_compute[238941]: 2026-01-27 14:43:38.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:43:38 np0005597378 nova_compute[238941]: 2026-01-27 14:43:38.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:43:38 np0005597378 nova_compute[238941]: 2026-01-27 14:43:38.441 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:43:38 np0005597378 nova_compute[238941]: 2026-01-27 14:43:38.443 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:43:38 np0005597378 systemd[1]: Started libpod-conmon-4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d.scope.
Jan 27 09:43:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:43:38 np0005597378 podman[390137]: 2026-01-27 14:43:38.539044502 +0000 UTC m=+0.321066912 container init 4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:43:38 np0005597378 podman[390137]: 2026-01-27 14:43:38.547415596 +0000 UTC m=+0.329437956 container start 4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ritchie, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:43:38 np0005597378 crazy_ritchie[390153]: 167 167
Jan 27 09:43:38 np0005597378 systemd[1]: libpod-4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d.scope: Deactivated successfully.
Jan 27 09:43:38 np0005597378 podman[390137]: 2026-01-27 14:43:38.558983296 +0000 UTC m=+0.341005686 container attach 4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:43:38 np0005597378 podman[390137]: 2026-01-27 14:43:38.559548612 +0000 UTC m=+0.341570982 container died 4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:43:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-14a620e62a148dafe5e8eba24a553c907697f8fcfcf55822aade066c334db617-merged.mount: Deactivated successfully.
Jan 27 09:43:38 np0005597378 podman[390137]: 2026-01-27 14:43:38.673143959 +0000 UTC m=+0.455166319 container remove 4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 09:43:38 np0005597378 systemd[1]: libpod-conmon-4669362d026a99340866e6f749f3140f085745f75b5f76f2b63e60edac5c7b4d.scope: Deactivated successfully.
Jan 27 09:43:38 np0005597378 nova_compute[238941]: 2026-01-27 14:43:38.858 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:38 np0005597378 podman[390177]: 2026-01-27 14:43:38.899397246 +0000 UTC m=+0.094326200 container create e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 09:43:38 np0005597378 podman[390177]: 2026-01-27 14:43:38.827763875 +0000 UTC m=+0.022692849 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:43:38 np0005597378 systemd[1]: Started libpod-conmon-e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc.scope.
Jan 27 09:43:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:43:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af78635dd3683b8d3c5219bd2610ee90daf31ff9da3cd5934699a01ef8a2a2cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:43:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af78635dd3683b8d3c5219bd2610ee90daf31ff9da3cd5934699a01ef8a2a2cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:43:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af78635dd3683b8d3c5219bd2610ee90daf31ff9da3cd5934699a01ef8a2a2cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:43:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af78635dd3683b8d3c5219bd2610ee90daf31ff9da3cd5934699a01ef8a2a2cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:43:38 np0005597378 podman[390177]: 2026-01-27 14:43:38.998932117 +0000 UTC m=+0.193861091 container init e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:43:39 np0005597378 podman[390177]: 2026-01-27 14:43:39.008273567 +0000 UTC m=+0.203202521 container start e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 09:43:39 np0005597378 podman[390177]: 2026-01-27 14:43:39.03670254 +0000 UTC m=+0.231631494 container attach e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]: {
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:    "0": [
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:        {
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "devices": [
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "/dev/loop3"
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            ],
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_name": "ceph_lv0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_size": "21470642176",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "name": "ceph_lv0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "tags": {
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.cluster_name": "ceph",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.crush_device_class": "",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.encrypted": "0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.objectstore": "bluestore",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.osd_id": "0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.type": "block",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.vdo": "0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.with_tpm": "0"
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            },
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "type": "block",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "vg_name": "ceph_vg0"
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:        }
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:    ],
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:    "1": [
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:        {
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "devices": [
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "/dev/loop4"
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            ],
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_name": "ceph_lv1",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_size": "21470642176",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "name": "ceph_lv1",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "tags": {
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.cluster_name": "ceph",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.crush_device_class": "",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.encrypted": "0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.objectstore": "bluestore",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.osd_id": "1",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.type": "block",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.vdo": "0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.with_tpm": "0"
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            },
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "type": "block",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "vg_name": "ceph_vg1"
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:        }
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:    ],
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:    "2": [
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:        {
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "devices": [
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "/dev/loop5"
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            ],
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_name": "ceph_lv2",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_size": "21470642176",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "name": "ceph_lv2",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "tags": {
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.cluster_name": "ceph",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.crush_device_class": "",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.encrypted": "0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.objectstore": "bluestore",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.osd_id": "2",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.type": "block",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.vdo": "0",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:                "ceph.with_tpm": "0"
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            },
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "type": "block",
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:            "vg_name": "ceph_vg2"
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:        }
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]:    ]
Jan 27 09:43:39 np0005597378 relaxed_ellis[390193]: }
Jan 27 09:43:39 np0005597378 systemd[1]: libpod-e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc.scope: Deactivated successfully.
Jan 27 09:43:39 np0005597378 podman[390177]: 2026-01-27 14:43:39.30555204 +0000 UTC m=+0.500480994 container died e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:43:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay-af78635dd3683b8d3c5219bd2610ee90daf31ff9da3cd5934699a01ef8a2a2cd-merged.mount: Deactivated successfully.
Jan 27 09:43:39 np0005597378 podman[390177]: 2026-01-27 14:43:39.367954984 +0000 UTC m=+0.562883938 container remove e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 09:43:39 np0005597378 systemd[1]: libpod-conmon-e94a51ba44293adec688dcb2b2e87a80435aec4e5a7e9ff9ae64bb0ba2a80efc.scope: Deactivated successfully.
Jan 27 09:43:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:39 np0005597378 podman[390276]: 2026-01-27 14:43:39.926765201 +0000 UTC m=+0.077774376 container create 0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:43:39 np0005597378 systemd[1]: Started libpod-conmon-0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d.scope.
Jan 27 09:43:39 np0005597378 podman[390276]: 2026-01-27 14:43:39.880622083 +0000 UTC m=+0.031631288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:43:39 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:43:40 np0005597378 podman[390276]: 2026-01-27 14:43:40.042073904 +0000 UTC m=+0.193083159 container init 0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:43:40 np0005597378 podman[390290]: 2026-01-27 14:43:40.045760743 +0000 UTC m=+0.084475757 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 09:43:40 np0005597378 podman[390276]: 2026-01-27 14:43:40.053630834 +0000 UTC m=+0.204640009 container start 0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 27 09:43:40 np0005597378 dreamy_panini[390299]: 167 167
Jan 27 09:43:40 np0005597378 systemd[1]: libpod-0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d.scope: Deactivated successfully.
Jan 27 09:43:40 np0005597378 podman[390276]: 2026-01-27 14:43:40.064523386 +0000 UTC m=+0.215532561 container attach 0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 27 09:43:40 np0005597378 podman[390276]: 2026-01-27 14:43:40.064973628 +0000 UTC m=+0.215982803 container died 0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:43:40 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d35233b98e15399d767334bdad86578afd90fa27b5708295de5ef99e796e1ced-merged.mount: Deactivated successfully.
Jan 27 09:43:40 np0005597378 podman[390276]: 2026-01-27 14:43:40.164544679 +0000 UTC m=+0.315553854 container remove 0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:43:40 np0005597378 systemd[1]: libpod-conmon-0e9ddd79a8d9831d6d53b92ef3555a002cac4c3b5d7dbff7842efed46f6f708d.scope: Deactivated successfully.
Jan 27 09:43:40 np0005597378 podman[390336]: 2026-01-27 14:43:40.344388073 +0000 UTC m=+0.048648006 container create 091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:43:40 np0005597378 systemd[1]: Started libpod-conmon-091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19.scope.
Jan 27 09:43:40 np0005597378 podman[390336]: 2026-01-27 14:43:40.320417569 +0000 UTC m=+0.024677502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:43:40 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:43:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aee7647a6153d62c52e357841ec695f542c1f7eb9ede4d4c02c8977bc49033a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:43:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aee7647a6153d62c52e357841ec695f542c1f7eb9ede4d4c02c8977bc49033a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:43:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aee7647a6153d62c52e357841ec695f542c1f7eb9ede4d4c02c8977bc49033a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:43:40 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aee7647a6153d62c52e357841ec695f542c1f7eb9ede4d4c02c8977bc49033a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:43:40 np0005597378 podman[390336]: 2026-01-27 14:43:40.455180404 +0000 UTC m=+0.159440367 container init 091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:43:40 np0005597378 podman[390336]: 2026-01-27 14:43:40.463085395 +0000 UTC m=+0.167345328 container start 091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:43:40 np0005597378 podman[390336]: 2026-01-27 14:43:40.490480951 +0000 UTC m=+0.194740884 container attach 091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:43:41 np0005597378 lvm[390431]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:43:41 np0005597378 lvm[390431]: VG ceph_vg0 finished
Jan 27 09:43:41 np0005597378 lvm[390432]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:43:41 np0005597378 lvm[390432]: VG ceph_vg1 finished
Jan 27 09:43:41 np0005597378 lvm[390434]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:43:41 np0005597378 lvm[390434]: VG ceph_vg2 finished
Jan 27 09:43:41 np0005597378 great_lalande[390353]: {}
Jan 27 09:43:41 np0005597378 systemd[1]: libpod-091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19.scope: Deactivated successfully.
Jan 27 09:43:41 np0005597378 systemd[1]: libpod-091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19.scope: Consumed 1.466s CPU time.
Jan 27 09:43:41 np0005597378 podman[390336]: 2026-01-27 14:43:41.36927732 +0000 UTC m=+1.073537253 container died 091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:43:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4aee7647a6153d62c52e357841ec695f542c1f7eb9ede4d4c02c8977bc49033a-merged.mount: Deactivated successfully.
Jan 27 09:43:41 np0005597378 podman[390336]: 2026-01-27 14:43:41.4184724 +0000 UTC m=+1.122732333 container remove 091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 09:43:41 np0005597378 systemd[1]: libpod-conmon-091c356fefb63d98a7c061313fe6231fd6027ba1b78f92605669ffb10bbc9b19.scope: Deactivated successfully.
Jan 27 09:43:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:43:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:43:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:43:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:43:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:43:42 np0005597378 nova_compute[238941]: 2026-01-27 14:43:42.342 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:43:42 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:43:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:43 np0005597378 nova_compute[238941]: 2026-01-27 14:43:43.862 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:44 np0005597378 podman[390474]: 2026-01-27 14:43:44.77112209 +0000 UTC m=+0.101927775 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 09:43:45 np0005597378 nova_compute[238941]: 2026-01-27 14:43:45.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:43:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:43:46.354 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:43:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:43:46.355 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:43:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:43:46.355 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:43:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:43:47 np0005597378 nova_compute[238941]: 2026-01-27 14:43:47.346 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:43:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:43:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:43:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:43:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:43:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:43:48 np0005597378 nova_compute[238941]: 2026-01-27 14:43:48.862 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:51 np0005597378 nova_compute[238941]: 2026-01-27 14:43:51.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:43:51 np0005597378 nova_compute[238941]: 2026-01-27 14:43:51.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:43:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:43:52 np0005597378 nova_compute[238941]: 2026-01-27 14:43:52.347 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:52 np0005597378 nova_compute[238941]: 2026-01-27 14:43:52.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:43:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:53 np0005597378 nova_compute[238941]: 2026-01-27 14:43:53.865 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:43:57 np0005597378 nova_compute[238941]: 2026-01-27 14:43:57.349 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:43:58 np0005597378 nova_compute[238941]: 2026-01-27 14:43:58.867 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:43:59 np0005597378 nova_compute[238941]: 2026-01-27 14:43:59.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:43:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:43:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861966499' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:43:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:43:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861966499' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:43:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:44:02 np0005597378 nova_compute[238941]: 2026-01-27 14:44:02.351 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:44:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 45K writes, 15K syncs, 2.81 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 357 writes, 828 keys, 357 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s#012Interval WAL: 357 writes, 165 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:44:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:03 np0005597378 nova_compute[238941]: 2026-01-27 14:44:03.871 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:44:07 np0005597378 nova_compute[238941]: 2026-01-27 14:44:07.353 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:44:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.3 total, 600.0 interval#012Cumulative writes: 47K writes, 183K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 507 writes, 1307 keys, 507 commit groups, 1.0 writes per commit group, ingest: 0.54 MB, 0.00 MB/s#012Interval WAL: 507 writes, 229 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:44:08 np0005597378 nova_compute[238941]: 2026-01-27 14:44:08.872 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:10 np0005597378 podman[390500]: 2026-01-27 14:44:10.715486024 +0000 UTC m=+0.052669654 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:44:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:44:12 np0005597378 nova_compute[238941]: 2026-01-27 14:44:12.355 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:13 np0005597378 nova_compute[238941]: 2026-01-27 14:44:13.874 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:44:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5401.0 total, 600.0 interval#012Cumulative writes: 38K writes, 155K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s#012Cumulative WAL: 38K writes, 13K syncs, 2.85 writes per sync, written: 0.16 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 477 writes, 1189 keys, 477 commit groups, 1.0 writes per commit group, ingest: 0.64 MB, 0.00 MB/s#012Interval WAL: 477 writes, 216 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:44:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:15 np0005597378 podman[390519]: 2026-01-27 14:44:15.751512754 +0000 UTC m=+0.083535152 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:44:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:44:17
Jan 27 09:44:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:44:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:44:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['backups', '.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 27 09:44:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:44:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:44:17 np0005597378 nova_compute[238941]: 2026-01-27 14:44:17.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:44:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:44:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:44:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:44:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:44:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:44:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:44:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:44:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:44:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:44:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:44:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:44:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:44:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:44:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:44:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:44:18 np0005597378 nova_compute[238941]: 2026-01-27 14:44:18.925 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:44:22 np0005597378 nova_compute[238941]: 2026-01-27 14:44:22.357 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:22 np0005597378 nova_compute[238941]: 2026-01-27 14:44:22.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:44:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:23 np0005597378 nova_compute[238941]: 2026-01-27 14:44:23.927 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:23 np0005597378 nova_compute[238941]: 2026-01-27 14:44:23.965 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:44:23 np0005597378 nova_compute[238941]: 2026-01-27 14:44:23.966 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:44:23 np0005597378 nova_compute[238941]: 2026-01-27 14:44:23.966 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:44:23 np0005597378 nova_compute[238941]: 2026-01-27 14:44:23.966 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:44:23 np0005597378 nova_compute[238941]: 2026-01-27 14:44:23.966 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:44:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:44:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1182101310' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:44:24 np0005597378 nova_compute[238941]: 2026-01-27 14:44:24.535 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:44:24 np0005597378 nova_compute[238941]: 2026-01-27 14:44:24.673 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:44:24 np0005597378 nova_compute[238941]: 2026-01-27 14:44:24.674 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3562MB free_disk=59.98730885889381GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:44:24 np0005597378 nova_compute[238941]: 2026-01-27 14:44:24.675 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:44:24 np0005597378 nova_compute[238941]: 2026-01-27 14:44:24.675 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:44:24 np0005597378 nova_compute[238941]: 2026-01-27 14:44:24.849 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:44:24 np0005597378 nova_compute[238941]: 2026-01-27 14:44:24.849 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:44:24 np0005597378 nova_compute[238941]: 2026-01-27 14:44:24.947 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 09:44:25 np0005597378 nova_compute[238941]: 2026-01-27 14:44:25.024 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 09:44:25 np0005597378 nova_compute[238941]: 2026-01-27 14:44:25.025 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 09:44:25 np0005597378 nova_compute[238941]: 2026-01-27 14:44:25.036 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 09:44:25 np0005597378 nova_compute[238941]: 2026-01-27 14:44:25.062 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 09:44:25 np0005597378 nova_compute[238941]: 2026-01-27 14:44:25.079 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:44:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:44:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3040869253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:44:25 np0005597378 nova_compute[238941]: 2026-01-27 14:44:25.660 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:44:25 np0005597378 nova_compute[238941]: 2026-01-27 14:44:25.666 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:44:25 np0005597378 nova_compute[238941]: 2026-01-27 14:44:25.685 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:44:25 np0005597378 nova_compute[238941]: 2026-01-27 14:44:25.686 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:44:25 np0005597378 nova_compute[238941]: 2026-01-27 14:44:25.687 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:44:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:27 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 09:44:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:44:27 np0005597378 nova_compute[238941]: 2026-01-27 14:44:27.358 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6209684390454133e-05 of space, bias 1.0, pg target 0.00486290531713624 quantized to 32 (current 32)
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696006054290456 of space, bias 1.0, pg target 0.20088018162871366 quantized to 32 (current 32)
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0547425556425053e-06 of space, bias 4.0, pg target 0.0012656910667710065 quantized to 16 (current 16)
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:44:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:44:28 np0005597378 nova_compute[238941]: 2026-01-27 14:44:28.928 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:44:32 np0005597378 nova_compute[238941]: 2026-01-27 14:44:32.360 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:33 np0005597378 nova_compute[238941]: 2026-01-27 14:44:33.931 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:34 np0005597378 nova_compute[238941]: 2026-01-27 14:44:34.688 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:44:34 np0005597378 nova_compute[238941]: 2026-01-27 14:44:34.689 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:44:34 np0005597378 nova_compute[238941]: 2026-01-27 14:44:34.689 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:44:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:44:37 np0005597378 nova_compute[238941]: 2026-01-27 14:44:37.361 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:38 np0005597378 nova_compute[238941]: 2026-01-27 14:44:38.935 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:39 np0005597378 nova_compute[238941]: 2026-01-27 14:44:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:44:39 np0005597378 nova_compute[238941]: 2026-01-27 14:44:39.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:44:39 np0005597378 nova_compute[238941]: 2026-01-27 14:44:39.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:44:39 np0005597378 nova_compute[238941]: 2026-01-27 14:44:39.420 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:44:39 np0005597378 nova_compute[238941]: 2026-01-27 14:44:39.420 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:44:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:41 np0005597378 podman[390613]: 2026-01-27 14:44:41.712495513 +0000 UTC m=+0.047747682 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:44:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:44:42 np0005597378 nova_compute[238941]: 2026-01-27 14:44:42.362 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:44:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:44:43 np0005597378 podman[390819]: 2026-01-27 14:44:43.08261051 +0000 UTC m=+0.054656627 container create 5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_benz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 09:44:43 np0005597378 systemd[1]: Started libpod-conmon-5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a.scope.
Jan 27 09:44:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:44:43 np0005597378 podman[390819]: 2026-01-27 14:44:43.06321873 +0000 UTC m=+0.035264847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:44:43 np0005597378 podman[390819]: 2026-01-27 14:44:43.166294585 +0000 UTC m=+0.138340732 container init 5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_benz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 09:44:43 np0005597378 podman[390819]: 2026-01-27 14:44:43.174900956 +0000 UTC m=+0.146947063 container start 5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:44:43 np0005597378 relaxed_benz[390836]: 167 167
Jan 27 09:44:43 np0005597378 systemd[1]: libpod-5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a.scope: Deactivated successfully.
Jan 27 09:44:43 np0005597378 podman[390819]: 2026-01-27 14:44:43.181933324 +0000 UTC m=+0.153979461 container attach 5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_benz, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:44:43 np0005597378 conmon[390836]: conmon 5ccacfad7498767a37d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a.scope/container/memory.events
Jan 27 09:44:43 np0005597378 podman[390841]: 2026-01-27 14:44:43.221977758 +0000 UTC m=+0.025662589 container died 5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 09:44:43 np0005597378 systemd[1]: var-lib-containers-storage-overlay-24cf38f179a277d76f8f3f89f0e7bb9c13b8b2fb720c1bf02693174bf2ecd275-merged.mount: Deactivated successfully.
Jan 27 09:44:43 np0005597378 podman[390841]: 2026-01-27 14:44:43.465503839 +0000 UTC m=+0.269188640 container remove 5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_benz, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:44:43 np0005597378 systemd[1]: libpod-conmon-5ccacfad7498767a37d6d00b24f4299600263aa7f6c398b4a6ab5b066a315a8a.scope: Deactivated successfully.
Jan 27 09:44:43 np0005597378 podman[390864]: 2026-01-27 14:44:43.649916106 +0000 UTC m=+0.058608103 container create 844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dewdney, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 09:44:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Jan 27 09:44:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:44:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:44:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:44:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:44:43 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:44:43 np0005597378 podman[390864]: 2026-01-27 14:44:43.614407124 +0000 UTC m=+0.023099141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:44:43 np0005597378 systemd[1]: Started libpod-conmon-844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745.scope.
Jan 27 09:44:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Jan 27 09:44:43 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Jan 27 09:44:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:44:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:44:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54d46dad53ed4f5f33ec8f6099ee72b77eeb39dedb7acf09ff503fa1e9392d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:44:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54d46dad53ed4f5f33ec8f6099ee72b77eeb39dedb7acf09ff503fa1e9392d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:44:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54d46dad53ed4f5f33ec8f6099ee72b77eeb39dedb7acf09ff503fa1e9392d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:44:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54d46dad53ed4f5f33ec8f6099ee72b77eeb39dedb7acf09ff503fa1e9392d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:44:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54d46dad53ed4f5f33ec8f6099ee72b77eeb39dedb7acf09ff503fa1e9392d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:44:43 np0005597378 podman[390864]: 2026-01-27 14:44:43.833986273 +0000 UTC m=+0.242678300 container init 844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 09:44:43 np0005597378 podman[390864]: 2026-01-27 14:44:43.840698702 +0000 UTC m=+0.249390689 container start 844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dewdney, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:44:43 np0005597378 podman[390864]: 2026-01-27 14:44:43.853808214 +0000 UTC m=+0.262500211 container attach 844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:44:43 np0005597378 nova_compute[238941]: 2026-01-27 14:44:43.936 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:44 np0005597378 happy_dewdney[390880]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:44:44 np0005597378 happy_dewdney[390880]: --> All data devices are unavailable
Jan 27 09:44:44 np0005597378 systemd[1]: libpod-844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745.scope: Deactivated successfully.
Jan 27 09:44:44 np0005597378 conmon[390880]: conmon 844898808bb5010d904f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745.scope/container/memory.events
Jan 27 09:44:44 np0005597378 podman[390864]: 2026-01-27 14:44:44.353585259 +0000 UTC m=+0.762277276 container died 844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dewdney, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:44:44 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1d54d46dad53ed4f5f33ec8f6099ee72b77eeb39dedb7acf09ff503fa1e9392d-merged.mount: Deactivated successfully.
Jan 27 09:44:44 np0005597378 podman[390864]: 2026-01-27 14:44:44.420692749 +0000 UTC m=+0.829384746 container remove 844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dewdney, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 09:44:44 np0005597378 systemd[1]: libpod-conmon-844898808bb5010d904fbe7ca56f06428a540057bdb4ba861929b78e56ac0745.scope: Deactivated successfully.
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.860543) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525084860597, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1758, "num_deletes": 251, "total_data_size": 2936414, "memory_usage": 2987144, "flush_reason": "Manual Compaction"}
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525084905115, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 2886255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61748, "largest_seqno": 63505, "table_properties": {"data_size": 2878067, "index_size": 5001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16496, "raw_average_key_size": 20, "raw_value_size": 2861782, "raw_average_value_size": 3477, "num_data_blocks": 223, "num_entries": 823, "num_filter_entries": 823, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769524891, "oldest_key_time": 1769524891, "file_creation_time": 1769525084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 44619 microseconds, and 6145 cpu microseconds.
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:44:44 np0005597378 podman[390974]: 2026-01-27 14:44:44.906965121 +0000 UTC m=+0.062562880 container create b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leavitt, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.905164) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 2886255 bytes OK
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.905183) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.914360) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.914394) EVENT_LOG_v1 {"time_micros": 1769525084914387, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.914413) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2928909, prev total WAL file size 2928909, number of live WAL files 2.
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.915254) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(2818KB)], [146(10122KB)]
Jan 27 09:44:44 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525084915279, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13251321, "oldest_snapshot_seqno": -1}
Jan 27 09:44:44 np0005597378 podman[390974]: 2026-01-27 14:44:44.869004093 +0000 UTC m=+0.024601872 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:44:45 np0005597378 systemd[1]: Started libpod-conmon-b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26.scope.
Jan 27 09:44:45 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8331 keys, 11442923 bytes, temperature: kUnknown
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525085090961, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11442923, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11387513, "index_size": 33473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 217782, "raw_average_key_size": 26, "raw_value_size": 11239154, "raw_average_value_size": 1349, "num_data_blocks": 1302, "num_entries": 8331, "num_filter_entries": 8331, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.091232) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11442923 bytes
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.112498) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 75.4 rd, 65.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.9 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(8.6) write-amplify(4.0) OK, records in: 8849, records dropped: 518 output_compression: NoCompression
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.112537) EVENT_LOG_v1 {"time_micros": 1769525085112522, "job": 90, "event": "compaction_finished", "compaction_time_micros": 175764, "compaction_time_cpu_micros": 26806, "output_level": 6, "num_output_files": 1, "total_output_size": 11442923, "num_input_records": 8849, "num_output_records": 8331, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525085113153, "job": 90, "event": "table_file_deletion", "file_number": 148}
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525085114987, "job": 90, "event": "table_file_deletion", "file_number": 146}
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:44.915205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.115088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.115093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.115095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.115096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:44:45 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:44:45.115098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:44:45 np0005597378 podman[390974]: 2026-01-27 14:44:45.118114224 +0000 UTC m=+0.273712013 container init b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leavitt, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:44:45 np0005597378 podman[390974]: 2026-01-27 14:44:45.124967528 +0000 UTC m=+0.280565287 container start b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:44:45 np0005597378 affectionate_leavitt[390990]: 167 167
Jan 27 09:44:45 np0005597378 systemd[1]: libpod-b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26.scope: Deactivated successfully.
Jan 27 09:44:45 np0005597378 podman[390974]: 2026-01-27 14:44:45.183198329 +0000 UTC m=+0.338796088 container attach b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 27 09:44:45 np0005597378 podman[390974]: 2026-01-27 14:44:45.184838283 +0000 UTC m=+0.340436072 container died b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:44:45 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d610ab6d1f20218956dae84c5ba8f8266a9926e1e116b163b861403194c88860-merged.mount: Deactivated successfully.
Jan 27 09:44:45 np0005597378 nova_compute[238941]: 2026-01-27 14:44:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:44:45 np0005597378 podman[390974]: 2026-01-27 14:44:45.465974443 +0000 UTC m=+0.621572193 container remove b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 09:44:45 np0005597378 systemd[1]: libpod-conmon-b019854c35a5d53fd41ec816170c0ae68d6c6ff41e6fe42785090c283f9dce26.scope: Deactivated successfully.
Jan 27 09:44:45 np0005597378 podman[391013]: 2026-01-27 14:44:45.626498339 +0000 UTC m=+0.040641541 container create 29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_napier, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:44:45 np0005597378 systemd[1]: Started libpod-conmon-29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9.scope.
Jan 27 09:44:45 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:44:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc2e8f91d6eb61e9861b0b6205f8804a7147017ba74af29465d245e59703587/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:44:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc2e8f91d6eb61e9861b0b6205f8804a7147017ba74af29465d245e59703587/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:44:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc2e8f91d6eb61e9861b0b6205f8804a7147017ba74af29465d245e59703587/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:44:45 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc2e8f91d6eb61e9861b0b6205f8804a7147017ba74af29465d245e59703587/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:44:45 np0005597378 podman[391013]: 2026-01-27 14:44:45.607340995 +0000 UTC m=+0.021484227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:44:45 np0005597378 podman[391013]: 2026-01-27 14:44:45.710047599 +0000 UTC m=+0.124190821 container init 29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_napier, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:44:45 np0005597378 podman[391013]: 2026-01-27 14:44:45.717691565 +0000 UTC m=+0.131834767 container start 29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:44:45 np0005597378 podman[391013]: 2026-01-27 14:44:45.721902758 +0000 UTC m=+0.136045960 container attach 29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_napier, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:44:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 0 op/s
Jan 27 09:44:46 np0005597378 modest_napier[391029]: {
Jan 27 09:44:46 np0005597378 modest_napier[391029]:    "0": [
Jan 27 09:44:46 np0005597378 modest_napier[391029]:        {
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "devices": [
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "/dev/loop3"
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            ],
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_name": "ceph_lv0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_size": "21470642176",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "name": "ceph_lv0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "tags": {
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.cluster_name": "ceph",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.crush_device_class": "",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.encrypted": "0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.objectstore": "bluestore",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.osd_id": "0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.type": "block",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.vdo": "0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.with_tpm": "0"
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            },
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "type": "block",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "vg_name": "ceph_vg0"
Jan 27 09:44:46 np0005597378 modest_napier[391029]:        }
Jan 27 09:44:46 np0005597378 modest_napier[391029]:    ],
Jan 27 09:44:46 np0005597378 modest_napier[391029]:    "1": [
Jan 27 09:44:46 np0005597378 modest_napier[391029]:        {
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "devices": [
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "/dev/loop4"
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            ],
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_name": "ceph_lv1",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_size": "21470642176",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "name": "ceph_lv1",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "tags": {
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.cluster_name": "ceph",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.crush_device_class": "",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.encrypted": "0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.objectstore": "bluestore",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.osd_id": "1",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.type": "block",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.vdo": "0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.with_tpm": "0"
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            },
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "type": "block",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "vg_name": "ceph_vg1"
Jan 27 09:44:46 np0005597378 modest_napier[391029]:        }
Jan 27 09:44:46 np0005597378 modest_napier[391029]:    ],
Jan 27 09:44:46 np0005597378 modest_napier[391029]:    "2": [
Jan 27 09:44:46 np0005597378 modest_napier[391029]:        {
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "devices": [
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "/dev/loop5"
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            ],
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_name": "ceph_lv2",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_size": "21470642176",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "name": "ceph_lv2",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "tags": {
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.cluster_name": "ceph",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.crush_device_class": "",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.encrypted": "0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.objectstore": "bluestore",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.osd_id": "2",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.type": "block",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.vdo": "0",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:                "ceph.with_tpm": "0"
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            },
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "type": "block",
Jan 27 09:44:46 np0005597378 modest_napier[391029]:            "vg_name": "ceph_vg2"
Jan 27 09:44:46 np0005597378 modest_napier[391029]:        }
Jan 27 09:44:46 np0005597378 modest_napier[391029]:    ]
Jan 27 09:44:46 np0005597378 modest_napier[391029]: }
Jan 27 09:44:46 np0005597378 systemd[1]: libpod-29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9.scope: Deactivated successfully.
Jan 27 09:44:46 np0005597378 podman[391013]: 2026-01-27 14:44:46.054406885 +0000 UTC m=+0.468550107 container died 29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_napier, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:44:46 np0005597378 systemd[1]: var-lib-containers-storage-overlay-abc2e8f91d6eb61e9861b0b6205f8804a7147017ba74af29465d245e59703587-merged.mount: Deactivated successfully.
Jan 27 09:44:46 np0005597378 podman[391013]: 2026-01-27 14:44:46.184755811 +0000 UTC m=+0.598899013 container remove 29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:44:46 np0005597378 systemd[1]: libpod-conmon-29b371cc40573e8e1cf99d0f8b3bf70891f6aaca516bd9d218ec006e90401db9.scope: Deactivated successfully.
Jan 27 09:44:46 np0005597378 podman[391040]: 2026-01-27 14:44:46.274487139 +0000 UTC m=+0.192034893 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 27 09:44:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:44:46.355 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:44:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:44:46.356 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:44:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:44:46.356 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:44:46 np0005597378 podman[391137]: 2026-01-27 14:44:46.734057675 +0000 UTC m=+0.089944244 container create b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:44:46 np0005597378 podman[391137]: 2026-01-27 14:44:46.679537852 +0000 UTC m=+0.035424471 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:44:46 np0005597378 systemd[1]: Started libpod-conmon-b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa.scope.
Jan 27 09:44:46 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:44:46 np0005597378 podman[391137]: 2026-01-27 14:44:46.86065216 +0000 UTC m=+0.216538759 container init b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ride, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:44:46 np0005597378 podman[391137]: 2026-01-27 14:44:46.872721614 +0000 UTC m=+0.228608193 container start b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 09:44:46 np0005597378 intelligent_ride[391153]: 167 167
Jan 27 09:44:46 np0005597378 systemd[1]: libpod-b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa.scope: Deactivated successfully.
Jan 27 09:44:46 np0005597378 podman[391137]: 2026-01-27 14:44:46.89122269 +0000 UTC m=+0.247109289 container attach b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 09:44:46 np0005597378 podman[391137]: 2026-01-27 14:44:46.891741653 +0000 UTC m=+0.247628242 container died b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:44:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Jan 27 09:44:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Jan 27 09:44:47 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Jan 27 09:44:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-91f4b4f2023258e993920d19678c69dd77c755202746ca17b2b75b4015bd6e61-merged.mount: Deactivated successfully.
Jan 27 09:44:47 np0005597378 podman[391137]: 2026-01-27 14:44:47.112202587 +0000 UTC m=+0.468089166 container remove b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ride, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:44:47 np0005597378 systemd[1]: libpod-conmon-b3275bd2d3032f8904e2fdd64fd30ca1278954a09632462c7d106dc765d0c5fa.scope: Deactivated successfully.
Jan 27 09:44:47 np0005597378 podman[391178]: 2026-01-27 14:44:47.287558789 +0000 UTC m=+0.049600420 container create 095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 09:44:47 np0005597378 systemd[1]: Started libpod-conmon-095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90.scope.
Jan 27 09:44:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:44:47 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:44:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96278554374de106b4e6da72fb9c69f0f3fdcabec62f6f7300dba3ef3cb89214/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:44:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96278554374de106b4e6da72fb9c69f0f3fdcabec62f6f7300dba3ef3cb89214/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:44:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96278554374de106b4e6da72fb9c69f0f3fdcabec62f6f7300dba3ef3cb89214/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:44:47 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96278554374de106b4e6da72fb9c69f0f3fdcabec62f6f7300dba3ef3cb89214/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:44:47 np0005597378 podman[391178]: 2026-01-27 14:44:47.264482341 +0000 UTC m=+0.026523982 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:44:47 np0005597378 nova_compute[238941]: 2026-01-27 14:44:47.367 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:47 np0005597378 podman[391178]: 2026-01-27 14:44:47.441858578 +0000 UTC m=+0.203900229 container init 095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wright, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 09:44:47 np0005597378 podman[391178]: 2026-01-27 14:44:47.449179405 +0000 UTC m=+0.211221036 container start 095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:44:47 np0005597378 podman[391178]: 2026-01-27 14:44:47.48551885 +0000 UTC m=+0.247560461 container attach 095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:44:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 383 B/s wr, 1 op/s
Jan 27 09:44:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:44:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:44:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:44:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:44:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:44:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:44:48 np0005597378 lvm[391274]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:44:48 np0005597378 lvm[391274]: VG ceph_vg1 finished
Jan 27 09:44:48 np0005597378 lvm[391273]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:44:48 np0005597378 lvm[391273]: VG ceph_vg0 finished
Jan 27 09:44:48 np0005597378 lvm[391276]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:44:48 np0005597378 lvm[391276]: VG ceph_vg2 finished
Jan 27 09:44:48 np0005597378 tender_wright[391195]: {}
Jan 27 09:44:48 np0005597378 systemd[1]: libpod-095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90.scope: Deactivated successfully.
Jan 27 09:44:48 np0005597378 podman[391178]: 2026-01-27 14:44:48.285125425 +0000 UTC m=+1.047167046 container died 095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wright, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 09:44:48 np0005597378 systemd[1]: libpod-095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90.scope: Consumed 1.300s CPU time.
Jan 27 09:44:48 np0005597378 systemd[1]: var-lib-containers-storage-overlay-96278554374de106b4e6da72fb9c69f0f3fdcabec62f6f7300dba3ef3cb89214-merged.mount: Deactivated successfully.
Jan 27 09:44:48 np0005597378 podman[391178]: 2026-01-27 14:44:48.431618554 +0000 UTC m=+1.193660175 container remove 095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:44:48 np0005597378 systemd[1]: libpod-conmon-095968d01edf6dd43843bc276c87283242cb7ff3a10e620f0764f6419ed07f90.scope: Deactivated successfully.
Jan 27 09:44:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:44:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:44:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:44:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:44:48 np0005597378 nova_compute[238941]: 2026-01-27 14:44:48.938 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:44:49 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:44:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 8.5 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.5 KiB/s wr, 55 op/s
Jan 27 09:44:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 462 KiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Jan 27 09:44:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:44:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Jan 27 09:44:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Jan 27 09:44:52 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Jan 27 09:44:52 np0005597378 nova_compute[238941]: 2026-01-27 14:44:52.369 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Jan 27 09:44:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Jan 27 09:44:53 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Jan 27 09:44:53 np0005597378 nova_compute[238941]: 2026-01-27 14:44:53.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:44:53 np0005597378 nova_compute[238941]: 2026-01-27 14:44:53.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:44:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 13 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 1.9 MiB/s wr, 83 op/s
Jan 27 09:44:53 np0005597378 nova_compute[238941]: 2026-01-27 14:44:53.940 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:54 np0005597378 nova_compute[238941]: 2026-01-27 14:44:54.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:44:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Jan 27 09:44:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:44:57 np0005597378 nova_compute[238941]: 2026-01-27 14:44:57.370 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 2.6 MiB/s wr, 16 op/s
Jan 27 09:44:58 np0005597378 nova_compute[238941]: 2026-01-27 14:44:58.942 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:44:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:44:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3712303200' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:44:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:44:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3712303200' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:44:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Jan 27 09:45:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 2.2 MiB/s wr, 12 op/s
Jan 27 09:45:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:45:02 np0005597378 nova_compute[238941]: 2026-01-27 14:45:02.372 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.2 MiB/s wr, 11 op/s
Jan 27 09:45:03 np0005597378 nova_compute[238941]: 2026-01-27 14:45:03.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 683 KiB/s wr, 9 op/s
Jan 27 09:45:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:45:07 np0005597378 nova_compute[238941]: 2026-01-27 14:45:07.374 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 426 B/s wr, 9 op/s
Jan 27 09:45:08 np0005597378 nova_compute[238941]: 2026-01-27 14:45:08.944 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 426 B/s wr, 44 op/s
Jan 27 09:45:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 64 op/s
Jan 27 09:45:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:45:12 np0005597378 nova_compute[238941]: 2026-01-27 14:45:12.376 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:12 np0005597378 podman[391317]: 2026-01-27 14:45:12.717370223 +0000 UTC m=+0.054438561 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 09:45:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 27 09:45:13 np0005597378 nova_compute[238941]: 2026-01-27 14:45:13.946 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 73 op/s
Jan 27 09:45:16 np0005597378 podman[391336]: 2026-01-27 14:45:16.7432654 +0000 UTC m=+0.087188880 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:45:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:45:17
Jan 27 09:45:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:45:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:45:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.mgr', 'images', 'default.rgw.log', 'backups', 'vms']
Jan 27 09:45:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:45:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:45:17 np0005597378 nova_compute[238941]: 2026-01-27 14:45:17.378 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 70 op/s
Jan 27 09:45:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:45:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:45:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:45:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:45:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:45:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:45:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:45:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:45:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:45:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:45:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:45:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:45:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:45:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:45:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:45:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:45:18 np0005597378 nova_compute[238941]: 2026-01-27 14:45:18.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 70 op/s
Jan 27 09:45:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 35 op/s
Jan 27 09:45:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:45:22 np0005597378 nova_compute[238941]: 2026-01-27 14:45:22.380 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:23 np0005597378 nova_compute[238941]: 2026-01-27 14:45:23.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:45:23 np0005597378 nova_compute[238941]: 2026-01-27 14:45:23.430 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:45:23 np0005597378 nova_compute[238941]: 2026-01-27 14:45:23.431 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:45:23 np0005597378 nova_compute[238941]: 2026-01-27 14:45:23.431 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:45:23 np0005597378 nova_compute[238941]: 2026-01-27 14:45:23.431 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:45:23 np0005597378 nova_compute[238941]: 2026-01-27 14:45:23.431 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:45:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Jan 27 09:45:23 np0005597378 nova_compute[238941]: 2026-01-27 14:45:23.950 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:23 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:45:23 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3193806835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:45:23 np0005597378 nova_compute[238941]: 2026-01-27 14:45:23.998 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:45:24 np0005597378 nova_compute[238941]: 2026-01-27 14:45:24.136 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:45:24 np0005597378 nova_compute[238941]: 2026-01-27 14:45:24.137 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3557MB free_disk=59.987300782464445GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:45:24 np0005597378 nova_compute[238941]: 2026-01-27 14:45:24.137 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:45:24 np0005597378 nova_compute[238941]: 2026-01-27 14:45:24.137 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:45:24 np0005597378 nova_compute[238941]: 2026-01-27 14:45:24.249 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:45:24 np0005597378 nova_compute[238941]: 2026-01-27 14:45:24.250 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:45:24 np0005597378 nova_compute[238941]: 2026-01-27 14:45:24.269 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:45:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:45:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1443395814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:45:24 np0005597378 nova_compute[238941]: 2026-01-27 14:45:24.814 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:45:24 np0005597378 nova_compute[238941]: 2026-01-27 14:45:24.822 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:45:24 np0005597378 nova_compute[238941]: 2026-01-27 14:45:24.864 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:45:24 np0005597378 nova_compute[238941]: 2026-01-27 14:45:24.867 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:45:24 np0005597378 nova_compute[238941]: 2026-01-27 14:45:24.867 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:45:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:45:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:45:27 np0005597378 nova_compute[238941]: 2026-01-27 14:45:27.381 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6344317842167927e-05 of space, bias 1.0, pg target 0.004903295352650378 quantized to 32 (current 32)
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003375809290620695 of space, bias 1.0, pg target 0.10127427871862085 quantized to 32 (current 32)
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0566987151115941e-06 of space, bias 4.0, pg target 0.001268038458133913 quantized to 16 (current 16)
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:45:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:45:28 np0005597378 nova_compute[238941]: 2026-01-27 14:45:28.951 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:45:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:45:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:45:32 np0005597378 nova_compute[238941]: 2026-01-27 14:45:32.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:45:33 np0005597378 nova_compute[238941]: 2026-01-27 14:45:33.868 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:45:33 np0005597378 nova_compute[238941]: 2026-01-27 14:45:33.868 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:45:33 np0005597378 nova_compute[238941]: 2026-01-27 14:45:33.952 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:34 np0005597378 nova_compute[238941]: 2026-01-27 14:45:34.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:45:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:45:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:45:37 np0005597378 nova_compute[238941]: 2026-01-27 14:45:37.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:45:38 np0005597378 nova_compute[238941]: 2026-01-27 14:45:38.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:39 np0005597378 nova_compute[238941]: 2026-01-27 14:45:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:45:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:45:40 np0005597378 nova_compute[238941]: 2026-01-27 14:45:40.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:45:40 np0005597378 nova_compute[238941]: 2026-01-27 14:45:40.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:45:40 np0005597378 nova_compute[238941]: 2026-01-27 14:45:40.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:45:40 np0005597378 nova_compute[238941]: 2026-01-27 14:45:40.403 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:45:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:45:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:45:42 np0005597378 nova_compute[238941]: 2026-01-27 14:45:42.386 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:43 np0005597378 podman[391406]: 2026-01-27 14:45:43.706309306 +0000 UTC m=+0.050872605 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:45:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:45:43 np0005597378 nova_compute[238941]: 2026-01-27 14:45:43.954 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:45 np0005597378 nova_compute[238941]: 2026-01-27 14:45:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:45:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:45:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:45:46.356 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:45:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:45:46.357 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:45:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:45:46.357 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:45:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:45:47 np0005597378 nova_compute[238941]: 2026-01-27 14:45:47.387 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:47 np0005597378 podman[391425]: 2026-01-27 14:45:47.728136654 +0000 UTC m=+0.074501320 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:45:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 27 09:45:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:45:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:45:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:45:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:45:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:45:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:45:48 np0005597378 nova_compute[238941]: 2026-01-27 14:45:48.956 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Jan 27 09:45:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Jan 27 09:45:49 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Jan 27 09:45:49 np0005597378 podman[391594]: 2026-01-27 14:45:49.651099398 +0000 UTC m=+0.037677101 container create 542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banzai, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:45:49 np0005597378 systemd[1]: Started libpod-conmon-542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e.scope.
Jan 27 09:45:49 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:45:49 np0005597378 podman[391594]: 2026-01-27 14:45:49.634934844 +0000 UTC m=+0.021512567 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:45:49 np0005597378 podman[391594]: 2026-01-27 14:45:49.735409849 +0000 UTC m=+0.121987572 container init 542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banzai, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:45:49 np0005597378 podman[391594]: 2026-01-27 14:45:49.742169971 +0000 UTC m=+0.128747674 container start 542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banzai, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:45:49 np0005597378 modest_banzai[391611]: 167 167
Jan 27 09:45:49 np0005597378 systemd[1]: libpod-542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e.scope: Deactivated successfully.
Jan 27 09:45:49 np0005597378 podman[391594]: 2026-01-27 14:45:49.75554819 +0000 UTC m=+0.142125913 container attach 542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 09:45:49 np0005597378 podman[391594]: 2026-01-27 14:45:49.75667941 +0000 UTC m=+0.143257113 container died 542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:45:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.8 KiB/s rd, 307 B/s wr, 7 op/s
Jan 27 09:45:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c963d3f153c687b3f91c2c44aab814154f5259eaa99ae092a5d5492730d0dfa8-merged.mount: Deactivated successfully.
Jan 27 09:45:49 np0005597378 podman[391594]: 2026-01-27 14:45:49.808311735 +0000 UTC m=+0.194889438 container remove 542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banzai, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:45:49 np0005597378 systemd[1]: libpod-conmon-542df552ebd111a6e42f308afaa0d6c63d595ff0b07fb0bff5c756d6fe8b7d3e.scope: Deactivated successfully.
Jan 27 09:45:50 np0005597378 podman[391636]: 2026-01-27 14:45:49.95799738 +0000 UTC m=+0.024079417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:45:50 np0005597378 podman[391636]: 2026-01-27 14:45:50.053009218 +0000 UTC m=+0.119091225 container create 3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:45:50 np0005597378 systemd[1]: Started libpod-conmon-3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993.scope.
Jan 27 09:45:50 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:45:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89b412d3f2badcde1d9a0b57ee158cb6b1e71e851e226c2ba3d0381134470e80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89b412d3f2badcde1d9a0b57ee158cb6b1e71e851e226c2ba3d0381134470e80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89b412d3f2badcde1d9a0b57ee158cb6b1e71e851e226c2ba3d0381134470e80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:50 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89b412d3f2badcde1d9a0b57ee158cb6b1e71e851e226c2ba3d0381134470e80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:50 np0005597378 podman[391636]: 2026-01-27 14:45:50.207899472 +0000 UTC m=+0.273981499 container init 3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 09:45:50 np0005597378 podman[391636]: 2026-01-27 14:45:50.215462065 +0000 UTC m=+0.281544072 container start 3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:45:50 np0005597378 podman[391636]: 2026-01-27 14:45:50.257132033 +0000 UTC m=+0.323214060 container attach 3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]: [
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:    {
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:        "available": false,
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:        "being_replaced": false,
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:        "ceph_device_lvm": false,
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:        "lsm_data": {},
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:        "lvs": [],
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:        "path": "/dev/sr0",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:        "rejected_reasons": [
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "Insufficient space (<5GB)",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "Has a FileSystem"
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:        ],
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:        "sys_api": {
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "actuators": null,
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "device_nodes": [
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:                "sr0"
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            ],
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "devname": "sr0",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "human_readable_size": "482.00 KB",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "id_bus": "ata",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "model": "QEMU DVD-ROM",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "nr_requests": "2",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "parent": "/dev/sr0",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "partitions": {},
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "path": "/dev/sr0",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "removable": "1",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "rev": "2.5+",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "ro": "0",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "rotational": "1",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "sas_address": "",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "sas_device_handle": "",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "scheduler_mode": "mq-deadline",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "sectors": 0,
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "sectorsize": "2048",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "size": 493568.0,
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "support_discard": "2048",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "type": "disk",
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:            "vendor": "QEMU"
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:        }
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]:    }
Jan 27 09:45:50 np0005597378 hopeful_keldysh[391652]: ]
Jan 27 09:45:50 np0005597378 systemd[1]: libpod-3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993.scope: Deactivated successfully.
Jan 27 09:45:50 np0005597378 podman[392425]: 2026-01-27 14:45:50.809567619 +0000 UTC m=+0.024883768 container died 3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 09:45:50 np0005597378 systemd[1]: var-lib-containers-storage-overlay-89b412d3f2badcde1d9a0b57ee158cb6b1e71e851e226c2ba3d0381134470e80-merged.mount: Deactivated successfully.
Jan 27 09:45:50 np0005597378 podman[392425]: 2026-01-27 14:45:50.9598292 +0000 UTC m=+0.175145329 container remove 3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:45:50 np0005597378 systemd[1]: libpod-conmon-3eb1fd34823893afa67688f7756da8e096a1cc8d05419198fa389ef8d014b993.scope: Deactivated successfully.
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:45:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:45:51 np0005597378 podman[392501]: 2026-01-27 14:45:51.443148602 +0000 UTC m=+0.023928772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:45:51 np0005597378 podman[392501]: 2026-01-27 14:45:51.545349124 +0000 UTC m=+0.126129274 container create f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:45:51 np0005597378 systemd[1]: Started libpod-conmon-f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63.scope.
Jan 27 09:45:51 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:45:51 np0005597378 podman[392501]: 2026-01-27 14:45:51.735858593 +0000 UTC m=+0.316638773 container init f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wu, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 27 09:45:51 np0005597378 podman[392501]: 2026-01-27 14:45:51.743691023 +0000 UTC m=+0.324471183 container start f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wu, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:45:51 np0005597378 podman[392501]: 2026-01-27 14:45:51.749606032 +0000 UTC m=+0.330386182 container attach f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wu, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:45:51 np0005597378 pensive_wu[392518]: 167 167
Jan 27 09:45:51 np0005597378 systemd[1]: libpod-f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63.scope: Deactivated successfully.
Jan 27 09:45:51 np0005597378 podman[392501]: 2026-01-27 14:45:51.75177778 +0000 UTC m=+0.332557930 container died f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:45:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 13 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 614 B/s wr, 8 op/s
Jan 27 09:45:51 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3f75d22cd94d43132b01d1370e1a112ae5a2d5e9265d65f1466b18375733b682-merged.mount: Deactivated successfully.
Jan 27 09:45:51 np0005597378 podman[392501]: 2026-01-27 14:45:51.832064154 +0000 UTC m=+0.412844304 container remove f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wu, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:45:51 np0005597378 systemd[1]: libpod-conmon-f6594df90bf3435756934be23ed3f32b6f636821569373ef2ed7f39db2426b63.scope: Deactivated successfully.
Jan 27 09:45:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:45:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:45:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:45:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:45:52 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:45:52 np0005597378 podman[392541]: 2026-01-27 14:45:52.023827857 +0000 UTC m=+0.049985702 container create 5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 27 09:45:52 np0005597378 systemd[1]: Started libpod-conmon-5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809.scope.
Jan 27 09:45:52 np0005597378 podman[392541]: 2026-01-27 14:45:52.000772678 +0000 UTC m=+0.026930543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:45:52 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:45:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff980ba69b8bd54127f438d0fba316018d49e018f45799f081f2c79b9aefa0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff980ba69b8bd54127f438d0fba316018d49e018f45799f081f2c79b9aefa0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff980ba69b8bd54127f438d0fba316018d49e018f45799f081f2c79b9aefa0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff980ba69b8bd54127f438d0fba316018d49e018f45799f081f2c79b9aefa0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:52 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff980ba69b8bd54127f438d0fba316018d49e018f45799f081f2c79b9aefa0f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:52 np0005597378 podman[392541]: 2026-01-27 14:45:52.158625462 +0000 UTC m=+0.184783317 container init 5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banzai, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:45:52 np0005597378 podman[392541]: 2026-01-27 14:45:52.166231565 +0000 UTC m=+0.192389410 container start 5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banzai, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:45:52 np0005597378 podman[392541]: 2026-01-27 14:45:52.18467173 +0000 UTC m=+0.210829575 container attach 5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banzai, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:45:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:45:52 np0005597378 nova_compute[238941]: 2026-01-27 14:45:52.388 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:52 np0005597378 xenodochial_banzai[392558]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:45:52 np0005597378 xenodochial_banzai[392558]: --> All data devices are unavailable
Jan 27 09:45:52 np0005597378 systemd[1]: libpod-5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809.scope: Deactivated successfully.
Jan 27 09:45:52 np0005597378 podman[392578]: 2026-01-27 14:45:52.698892992 +0000 UTC m=+0.029514952 container died 5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 09:45:52 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fff980ba69b8bd54127f438d0fba316018d49e018f45799f081f2c79b9aefa0f-merged.mount: Deactivated successfully.
Jan 27 09:45:52 np0005597378 podman[392578]: 2026-01-27 14:45:52.752647644 +0000 UTC m=+0.083269584 container remove 5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banzai, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:45:52 np0005597378 systemd[1]: libpod-conmon-5f7749541675fe49a24ba40399ccba12e4189524a9b0d8c6bbddce96edb82809.scope: Deactivated successfully.
Jan 27 09:45:53 np0005597378 podman[392653]: 2026-01-27 14:45:53.205051467 +0000 UTC m=+0.035745059 container create 45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_babbage, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 09:45:53 np0005597378 systemd[1]: Started libpod-conmon-45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0.scope.
Jan 27 09:45:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:45:53 np0005597378 podman[392653]: 2026-01-27 14:45:53.282773322 +0000 UTC m=+0.113466944 container init 45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_babbage, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:45:53 np0005597378 podman[392653]: 2026-01-27 14:45:53.191023721 +0000 UTC m=+0.021717333 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:45:53 np0005597378 podman[392653]: 2026-01-27 14:45:53.29014561 +0000 UTC m=+0.120839202 container start 45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:45:53 np0005597378 podman[392653]: 2026-01-27 14:45:53.293746537 +0000 UTC m=+0.124440129 container attach 45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:45:53 np0005597378 trusting_babbage[392669]: 167 167
Jan 27 09:45:53 np0005597378 systemd[1]: libpod-45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0.scope: Deactivated successfully.
Jan 27 09:45:53 np0005597378 podman[392653]: 2026-01-27 14:45:53.297194439 +0000 UTC m=+0.127888031 container died 45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_babbage, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:45:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3d98ff2f40e98fd79e1a89ecfecc8c0dd866d4ff6981007c97c7ad6a3db5d79f-merged.mount: Deactivated successfully.
Jan 27 09:45:53 np0005597378 podman[392653]: 2026-01-27 14:45:53.333517743 +0000 UTC m=+0.164211335 container remove 45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 27 09:45:53 np0005597378 systemd[1]: libpod-conmon-45a0f51de325e2aaf6abc8dad57bce5d9964488ce0b079d4e91932afb06629d0.scope: Deactivated successfully.
Jan 27 09:45:53 np0005597378 podman[392694]: 2026-01-27 14:45:53.507157 +0000 UTC m=+0.043918028 container create 8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:45:53 np0005597378 systemd[1]: Started libpod-conmon-8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81.scope.
Jan 27 09:45:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:45:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd4b284057b7d30ff421ec2122924d155618ff8419925c19fc4a0fc9ba8671f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd4b284057b7d30ff421ec2122924d155618ff8419925c19fc4a0fc9ba8671f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd4b284057b7d30ff421ec2122924d155618ff8419925c19fc4a0fc9ba8671f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd4b284057b7d30ff421ec2122924d155618ff8419925c19fc4a0fc9ba8671f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:53 np0005597378 podman[392694]: 2026-01-27 14:45:53.486773683 +0000 UTC m=+0.023534741 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:45:53 np0005597378 podman[392694]: 2026-01-27 14:45:53.587924417 +0000 UTC m=+0.124685465 container init 8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lumiere, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:45:53 np0005597378 podman[392694]: 2026-01-27 14:45:53.596347073 +0000 UTC m=+0.133108121 container start 8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lumiere, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:45:53 np0005597378 podman[392694]: 2026-01-27 14:45:53.601256254 +0000 UTC m=+0.138017282 container attach 8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 09:45:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 462 KiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]: {
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:    "0": [
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:        {
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "devices": [
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "/dev/loop3"
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            ],
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_name": "ceph_lv0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_size": "21470642176",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "name": "ceph_lv0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "tags": {
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.cluster_name": "ceph",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.crush_device_class": "",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.encrypted": "0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.objectstore": "bluestore",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.osd_id": "0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.type": "block",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.vdo": "0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.with_tpm": "0"
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            },
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "type": "block",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "vg_name": "ceph_vg0"
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:        }
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:    ],
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:    "1": [
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:        {
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "devices": [
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "/dev/loop4"
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            ],
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_name": "ceph_lv1",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_size": "21470642176",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "name": "ceph_lv1",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "tags": {
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.cluster_name": "ceph",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.crush_device_class": "",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.encrypted": "0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.objectstore": "bluestore",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.osd_id": "1",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.type": "block",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.vdo": "0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.with_tpm": "0"
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            },
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "type": "block",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "vg_name": "ceph_vg1"
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:        }
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:    ],
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:    "2": [
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:        {
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "devices": [
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "/dev/loop5"
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            ],
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_name": "ceph_lv2",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_size": "21470642176",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "name": "ceph_lv2",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "tags": {
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.cluster_name": "ceph",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.crush_device_class": "",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.encrypted": "0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.objectstore": "bluestore",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.osd_id": "2",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.type": "block",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.vdo": "0",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:                "ceph.with_tpm": "0"
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            },
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "type": "block",
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:            "vg_name": "ceph_vg2"
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:        }
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]:    ]
Jan 27 09:45:53 np0005597378 objective_lumiere[392710]: }
Jan 27 09:45:53 np0005597378 systemd[1]: libpod-8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81.scope: Deactivated successfully.
Jan 27 09:45:53 np0005597378 podman[392694]: 2026-01-27 14:45:53.902760471 +0000 UTC m=+0.439521519 container died 8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:45:53 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ecd4b284057b7d30ff421ec2122924d155618ff8419925c19fc4a0fc9ba8671f-merged.mount: Deactivated successfully.
Jan 27 09:45:53 np0005597378 podman[392694]: 2026-01-27 14:45:53.953352278 +0000 UTC m=+0.490113306 container remove 8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lumiere, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:45:53 np0005597378 nova_compute[238941]: 2026-01-27 14:45:53.957 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:53 np0005597378 systemd[1]: libpod-conmon-8a7dd0f7603d6705e1e9f882fe904222e0d1c02b7de1b2c39096cfefa9117c81.scope: Deactivated successfully.
Jan 27 09:45:54 np0005597378 nova_compute[238941]: 2026-01-27 14:45:54.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:45:54 np0005597378 nova_compute[238941]: 2026-01-27 14:45:54.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:45:54 np0005597378 podman[392791]: 2026-01-27 14:45:54.421605526 +0000 UTC m=+0.043489547 container create 9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moore, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:45:54 np0005597378 systemd[1]: Started libpod-conmon-9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236.scope.
Jan 27 09:45:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:45:54 np0005597378 podman[392791]: 2026-01-27 14:45:54.404273082 +0000 UTC m=+0.026157143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:45:54 np0005597378 podman[392791]: 2026-01-27 14:45:54.507037578 +0000 UTC m=+0.128921609 container init 9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:45:54 np0005597378 podman[392791]: 2026-01-27 14:45:54.513283406 +0000 UTC m=+0.135167427 container start 9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 09:45:54 np0005597378 podman[392791]: 2026-01-27 14:45:54.517366954 +0000 UTC m=+0.139251005 container attach 9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:45:54 np0005597378 exciting_moore[392807]: 167 167
Jan 27 09:45:54 np0005597378 systemd[1]: libpod-9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236.scope: Deactivated successfully.
Jan 27 09:45:54 np0005597378 podman[392791]: 2026-01-27 14:45:54.51942391 +0000 UTC m=+0.141307951 container died 9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:45:54 np0005597378 systemd[1]: var-lib-containers-storage-overlay-45665bfe7b3d02d98329365522f3a460324ce642fe77be3b4b84e5c0f143fc37-merged.mount: Deactivated successfully.
Jan 27 09:45:54 np0005597378 podman[392791]: 2026-01-27 14:45:54.593786604 +0000 UTC m=+0.215670625 container remove 9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moore, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 27 09:45:54 np0005597378 systemd[1]: libpod-conmon-9b9879917f8506dd66c41186092fc58c6767cf530d41d837b0528e80bd076236.scope: Deactivated successfully.
Jan 27 09:45:54 np0005597378 podman[392831]: 2026-01-27 14:45:54.772916959 +0000 UTC m=+0.046333504 container create 64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noether, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:45:54 np0005597378 systemd[1]: Started libpod-conmon-64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511.scope.
Jan 27 09:45:54 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:45:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be595b8d43a2fc0dbdb0cbd62fe0c040bd7f816ee809ab88aef8ae6474db5afc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be595b8d43a2fc0dbdb0cbd62fe0c040bd7f816ee809ab88aef8ae6474db5afc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be595b8d43a2fc0dbdb0cbd62fe0c040bd7f816ee809ab88aef8ae6474db5afc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:54 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be595b8d43a2fc0dbdb0cbd62fe0c040bd7f816ee809ab88aef8ae6474db5afc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:45:54 np0005597378 podman[392831]: 2026-01-27 14:45:54.754964307 +0000 UTC m=+0.028380882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:45:54 np0005597378 podman[392831]: 2026-01-27 14:45:54.851237949 +0000 UTC m=+0.124654544 container init 64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 09:45:54 np0005597378 podman[392831]: 2026-01-27 14:45:54.859535062 +0000 UTC m=+0.132951607 container start 64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noether, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 27 09:45:54 np0005597378 podman[392831]: 2026-01-27 14:45:54.865762809 +0000 UTC m=+0.139179384 container attach 64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 09:45:55 np0005597378 nova_compute[238941]: 2026-01-27 14:45:55.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:45:55 np0005597378 lvm[392926]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:45:55 np0005597378 lvm[392926]: VG ceph_vg0 finished
Jan 27 09:45:55 np0005597378 lvm[392927]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:45:55 np0005597378 lvm[392927]: VG ceph_vg1 finished
Jan 27 09:45:55 np0005597378 lvm[392929]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:45:55 np0005597378 lvm[392929]: VG ceph_vg2 finished
Jan 27 09:45:55 np0005597378 lvm[392931]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:45:55 np0005597378 lvm[392931]: VG ceph_vg2 finished
Jan 27 09:45:55 np0005597378 hungry_noether[392847]: {}
Jan 27 09:45:55 np0005597378 systemd[1]: libpod-64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511.scope: Deactivated successfully.
Jan 27 09:45:55 np0005597378 podman[392831]: 2026-01-27 14:45:55.643286783 +0000 UTC m=+0.916703348 container died 64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noether, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 09:45:55 np0005597378 systemd[1]: libpod-64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511.scope: Consumed 1.363s CPU time.
Jan 27 09:45:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-be595b8d43a2fc0dbdb0cbd62fe0c040bd7f816ee809ab88aef8ae6474db5afc-merged.mount: Deactivated successfully.
Jan 27 09:45:55 np0005597378 podman[392831]: 2026-01-27 14:45:55.750863017 +0000 UTC m=+1.024279572 container remove 64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:45:55 np0005597378 systemd[1]: libpod-conmon-64761ba30b75f8b60859e1b2889cd0f4eeadfdef094ac6d58f422055dced6511.scope: Deactivated successfully.
Jan 27 09:45:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 462 KiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 27 09:45:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:45:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:45:55 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:45:55 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:45:56 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:45:56 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:45:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:45:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Jan 27 09:45:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Jan 27 09:45:57 np0005597378 ceph-mon[75090]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Jan 27 09:45:57 np0005597378 nova_compute[238941]: 2026-01-27 14:45:57.407 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 462 KiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 20 op/s
Jan 27 09:45:58 np0005597378 nova_compute[238941]: 2026-01-27 14:45:58.958 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:45:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:45:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/792448894' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:45:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:45:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/792448894' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:45:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.1 KiB/s wr, 17 op/s
Jan 27 09:46:00 np0005597378 nova_compute[238941]: 2026-01-27 14:46:00.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:46:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 27 09:46:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:46:02 np0005597378 nova_compute[238941]: 2026-01-27 14:46:02.410 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:03 np0005597378 nova_compute[238941]: 2026-01-27 14:46:03.960 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.354579) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525167354633, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 983, "num_deletes": 252, "total_data_size": 1416830, "memory_usage": 1446920, "flush_reason": "Manual Compaction"}
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525167366912, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 909879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63506, "largest_seqno": 64488, "table_properties": {"data_size": 905839, "index_size": 1691, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10530, "raw_average_key_size": 20, "raw_value_size": 897171, "raw_average_value_size": 1783, "num_data_blocks": 76, "num_entries": 503, "num_filter_entries": 503, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525085, "oldest_key_time": 1769525085, "file_creation_time": 1769525167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 12383 microseconds, and 4104 cpu microseconds.
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.366964) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 909879 bytes OK
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.366985) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.372665) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.372707) EVENT_LOG_v1 {"time_micros": 1769525167372698, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.372740) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1412133, prev total WAL file size 1412133, number of live WAL files 2.
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.373449) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353038' seq:72057594037927935, type:22 .. '6D6772737461740032373539' seq:0, type:0; will stop at (end)
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(888KB)], [149(10MB)]
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525167373485, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 12352802, "oldest_snapshot_seqno": -1}
Jan 27 09:46:07 np0005597378 nova_compute[238941]: 2026-01-27 14:46:07.462 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8350 keys, 9476017 bytes, temperature: kUnknown
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525167464443, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 9476017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9423997, "index_size": 30019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 218370, "raw_average_key_size": 26, "raw_value_size": 9278856, "raw_average_value_size": 1111, "num_data_blocks": 1161, "num_entries": 8350, "num_filter_entries": 8350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.464840) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 9476017 bytes
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.467384) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.6 rd, 104.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.9 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(24.0) write-amplify(10.4) OK, records in: 8834, records dropped: 484 output_compression: NoCompression
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.467434) EVENT_LOG_v1 {"time_micros": 1769525167467417, "job": 92, "event": "compaction_finished", "compaction_time_micros": 91084, "compaction_time_cpu_micros": 28983, "output_level": 6, "num_output_files": 1, "total_output_size": 9476017, "num_input_records": 8834, "num_output_records": 8350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525167467935, "job": 92, "event": "table_file_deletion", "file_number": 151}
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525167470635, "job": 92, "event": "table_file_deletion", "file_number": 149}
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.373346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.470777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.470785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.470787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.470789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:46:07 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:46:07.470791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:46:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:08 np0005597378 nova_compute[238941]: 2026-01-27 14:46:08.961 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:46:12 np0005597378 nova_compute[238941]: 2026-01-27 14:46:12.463 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:13 np0005597378 nova_compute[238941]: 2026-01-27 14:46:13.963 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:14 np0005597378 podman[392972]: 2026-01-27 14:46:14.736761512 +0000 UTC m=+0.069313210 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:46:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:46:17
Jan 27 09:46:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:46:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:46:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'backups', '.mgr', 'images', 'default.rgw.log']
Jan 27 09:46:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:46:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:46:17 np0005597378 nova_compute[238941]: 2026-01-27 14:46:17.465 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:46:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:46:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:46:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:46:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:46:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:46:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:46:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:46:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:46:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:46:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:46:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:46:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:46:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:46:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:46:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:46:18 np0005597378 podman[392991]: 2026-01-27 14:46:18.752981179 +0000 UTC m=+0.092151663 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 09:46:18 np0005597378 nova_compute[238941]: 2026-01-27 14:46:18.964 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:46:22 np0005597378 nova_compute[238941]: 2026-01-27 14:46:22.467 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:23 np0005597378 nova_compute[238941]: 2026-01-27 14:46:23.967 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:24 np0005597378 nova_compute[238941]: 2026-01-27 14:46:24.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:46:24 np0005597378 nova_compute[238941]: 2026-01-27 14:46:24.412 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:46:24 np0005597378 nova_compute[238941]: 2026-01-27 14:46:24.412 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:46:24 np0005597378 nova_compute[238941]: 2026-01-27 14:46:24.413 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:46:24 np0005597378 nova_compute[238941]: 2026-01-27 14:46:24.413 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:46:24 np0005597378 nova_compute[238941]: 2026-01-27 14:46:24.413 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:46:24 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:46:24 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365443614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:46:24 np0005597378 nova_compute[238941]: 2026-01-27 14:46:24.998 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:46:25 np0005597378 nova_compute[238941]: 2026-01-27 14:46:25.154 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:46:25 np0005597378 nova_compute[238941]: 2026-01-27 14:46:25.156 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3532MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:46:25 np0005597378 nova_compute[238941]: 2026-01-27 14:46:25.156 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:46:25 np0005597378 nova_compute[238941]: 2026-01-27 14:46:25.156 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:46:25 np0005597378 nova_compute[238941]: 2026-01-27 14:46:25.225 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:46:25 np0005597378 nova_compute[238941]: 2026-01-27 14:46:25.226 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:46:25 np0005597378 nova_compute[238941]: 2026-01-27 14:46:25.244 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:46:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:46:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535561177' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:46:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:25 np0005597378 nova_compute[238941]: 2026-01-27 14:46:25.814 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:46:25 np0005597378 nova_compute[238941]: 2026-01-27 14:46:25.823 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:46:25 np0005597378 nova_compute[238941]: 2026-01-27 14:46:25.840 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:46:25 np0005597378 nova_compute[238941]: 2026-01-27 14:46:25.841 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:46:25 np0005597378 nova_compute[238941]: 2026-01-27 14:46:25.842 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:46:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:46:27 np0005597378 nova_compute[238941]: 2026-01-27 14:46:27.470 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:46:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:46:28 np0005597378 nova_compute[238941]: 2026-01-27 14:46:28.968 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:46:32 np0005597378 nova_compute[238941]: 2026-01-27 14:46:32.472 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:33 np0005597378 nova_compute[238941]: 2026-01-27 14:46:33.843 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:46:33 np0005597378 nova_compute[238941]: 2026-01-27 14:46:33.971 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:35 np0005597378 nova_compute[238941]: 2026-01-27 14:46:35.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:46:35 np0005597378 nova_compute[238941]: 2026-01-27 14:46:35.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:46:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:46:37 np0005597378 nova_compute[238941]: 2026-01-27 14:46:37.473 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:38 np0005597378 nova_compute[238941]: 2026-01-27 14:46:38.985 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:40 np0005597378 nova_compute[238941]: 2026-01-27 14:46:40.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:46:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:46:42 np0005597378 nova_compute[238941]: 2026-01-27 14:46:42.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:46:42 np0005597378 nova_compute[238941]: 2026-01-27 14:46:42.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:46:42 np0005597378 nova_compute[238941]: 2026-01-27 14:46:42.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:46:42 np0005597378 nova_compute[238941]: 2026-01-27 14:46:42.411 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:46:42 np0005597378 nova_compute[238941]: 2026-01-27 14:46:42.475 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:43 np0005597378 nova_compute[238941]: 2026-01-27 14:46:43.987 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:45 np0005597378 nova_compute[238941]: 2026-01-27 14:46:45.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:46:45 np0005597378 nova_compute[238941]: 2026-01-27 14:46:45.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:46:45 np0005597378 nova_compute[238941]: 2026-01-27 14:46:45.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 09:46:45 np0005597378 podman[393058]: 2026-01-27 14:46:45.733308399 +0000 UTC m=+0.070448661 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:46:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:46:46.358 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:46:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:46:46.358 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:46:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:46:46.359 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:46:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:46:47 np0005597378 nova_compute[238941]: 2026-01-27 14:46:47.477 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:46:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:46:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:46:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:46:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:46:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:46:48 np0005597378 nova_compute[238941]: 2026-01-27 14:46:48.989 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:49 np0005597378 podman[393078]: 2026-01-27 14:46:49.746403332 +0000 UTC m=+0.089271005 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 09:46:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:46:52 np0005597378 nova_compute[238941]: 2026-01-27 14:46:52.479 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:53 np0005597378 nova_compute[238941]: 2026-01-27 14:46:53.990 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:55 np0005597378 nova_compute[238941]: 2026-01-27 14:46:55.401 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:46:55 np0005597378 nova_compute[238941]: 2026-01-27 14:46:55.401 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:46:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:46:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:46:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:46:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:46:57 np0005597378 nova_compute[238941]: 2026-01-27 14:46:57.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:46:57 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:46:57 np0005597378 nova_compute[238941]: 2026-01-27 14:46:57.482 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:57 np0005597378 podman[393315]: 2026-01-27 14:46:57.681969608 +0000 UTC m=+0.080082538 container create 8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:46:57 np0005597378 systemd[1]: Started libpod-conmon-8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1.scope.
Jan 27 09:46:57 np0005597378 podman[393315]: 2026-01-27 14:46:57.625455763 +0000 UTC m=+0.023568713 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:46:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:46:57 np0005597378 podman[393315]: 2026-01-27 14:46:57.758521702 +0000 UTC m=+0.156634652 container init 8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:46:57 np0005597378 podman[393315]: 2026-01-27 14:46:57.765267183 +0000 UTC m=+0.163380123 container start 8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:46:57 np0005597378 frosty_euclid[393331]: 167 167
Jan 27 09:46:57 np0005597378 systemd[1]: libpod-8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1.scope: Deactivated successfully.
Jan 27 09:46:57 np0005597378 podman[393315]: 2026-01-27 14:46:57.775383774 +0000 UTC m=+0.173496734 container attach 8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 27 09:46:57 np0005597378 podman[393315]: 2026-01-27 14:46:57.775689972 +0000 UTC m=+0.173802912 container died 8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 09:46:57 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8690d800a4bf8666a22af9c89d50d608ca5a4b9850963be75f70ebbe46b79ad1-merged.mount: Deactivated successfully.
Jan 27 09:46:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:57 np0005597378 podman[393315]: 2026-01-27 14:46:57.854539737 +0000 UTC m=+0.252652667 container remove 8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 09:46:57 np0005597378 systemd[1]: libpod-conmon-8f1fb8261847f8a189640d9ed2125a7a13b2f74b2b070172a3a95081d0926ef1.scope: Deactivated successfully.
Jan 27 09:46:58 np0005597378 podman[393357]: 2026-01-27 14:46:58.039709754 +0000 UTC m=+0.047315101 container create bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nobel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:46:58 np0005597378 systemd[1]: Started libpod-conmon-bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09.scope.
Jan 27 09:46:58 np0005597378 podman[393357]: 2026-01-27 14:46:58.016683235 +0000 UTC m=+0.024288602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:46:58 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:46:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d015444f741c2f440dd7d611749aba3dda84101d40fbc40731b07638407e227/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:46:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d015444f741c2f440dd7d611749aba3dda84101d40fbc40731b07638407e227/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:46:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d015444f741c2f440dd7d611749aba3dda84101d40fbc40731b07638407e227/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:46:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d015444f741c2f440dd7d611749aba3dda84101d40fbc40731b07638407e227/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:46:58 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d015444f741c2f440dd7d611749aba3dda84101d40fbc40731b07638407e227/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:46:58 np0005597378 podman[393357]: 2026-01-27 14:46:58.145278885 +0000 UTC m=+0.152884242 container init bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:46:58 np0005597378 podman[393357]: 2026-01-27 14:46:58.153096985 +0000 UTC m=+0.160702322 container start bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:46:58 np0005597378 podman[393357]: 2026-01-27 14:46:58.157302677 +0000 UTC m=+0.164908054 container attach bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 27 09:46:58 np0005597378 elated_nobel[393373]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:46:58 np0005597378 elated_nobel[393373]: --> All data devices are unavailable
Jan 27 09:46:58 np0005597378 systemd[1]: libpod-bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09.scope: Deactivated successfully.
Jan 27 09:46:58 np0005597378 podman[393357]: 2026-01-27 14:46:58.660714039 +0000 UTC m=+0.668319376 container died bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 09:46:58 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1d015444f741c2f440dd7d611749aba3dda84101d40fbc40731b07638407e227-merged.mount: Deactivated successfully.
Jan 27 09:46:58 np0005597378 podman[393357]: 2026-01-27 14:46:58.726214186 +0000 UTC m=+0.733819513 container remove bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 09:46:58 np0005597378 systemd[1]: libpod-conmon-bf2aaf0dfe3ebe8eef5d96f854a07901c8fb1a896a70e8c9e17cf7ecb7a5aa09.scope: Deactivated successfully.
Jan 27 09:46:58 np0005597378 nova_compute[238941]: 2026-01-27 14:46:58.992 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:46:59 np0005597378 podman[393468]: 2026-01-27 14:46:59.220865602 +0000 UTC m=+0.054029109 container create 3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:46:59 np0005597378 systemd[1]: Started libpod-conmon-3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468.scope.
Jan 27 09:46:59 np0005597378 podman[393468]: 2026-01-27 14:46:59.193579531 +0000 UTC m=+0.026743068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:46:59 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:46:59 np0005597378 podman[393468]: 2026-01-27 14:46:59.314080253 +0000 UTC m=+0.147243790 container init 3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:46:59 np0005597378 podman[393468]: 2026-01-27 14:46:59.321978675 +0000 UTC m=+0.155142172 container start 3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:46:59 np0005597378 strange_agnesi[393485]: 167 167
Jan 27 09:46:59 np0005597378 systemd[1]: libpod-3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468.scope: Deactivated successfully.
Jan 27 09:46:59 np0005597378 podman[393468]: 2026-01-27 14:46:59.341250281 +0000 UTC m=+0.174413788 container attach 3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:46:59 np0005597378 podman[393468]: 2026-01-27 14:46:59.342035613 +0000 UTC m=+0.175199120 container died 3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:46:59 np0005597378 systemd[1]: var-lib-containers-storage-overlay-881410437b04dc2c3d963cb297f96227fb1905158d272ccb4783a29dee653b79-merged.mount: Deactivated successfully.
Jan 27 09:46:59 np0005597378 podman[393468]: 2026-01-27 14:46:59.476745875 +0000 UTC m=+0.309909382 container remove 3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:46:59 np0005597378 systemd[1]: libpod-conmon-3d717f5945141da5d060147068f9ee23fae2e871e3a6032f94879722146d4468.scope: Deactivated successfully.
Jan 27 09:46:59 np0005597378 podman[393514]: 2026-01-27 14:46:59.673562925 +0000 UTC m=+0.048504753 container create 88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:46:59 np0005597378 systemd[1]: Started libpod-conmon-88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2.scope.
Jan 27 09:46:59 np0005597378 systemd-logind[786]: New session 55 of user zuul.
Jan 27 09:46:59 np0005597378 podman[393514]: 2026-01-27 14:46:59.651539674 +0000 UTC m=+0.026481522 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:46:59 np0005597378 systemd[1]: Started Session 55 of User zuul.
Jan 27 09:46:59 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:46:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458e7b92b2f2a5409905585f8bf6187df50fc30e078cbd97af677785f09515e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:46:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458e7b92b2f2a5409905585f8bf6187df50fc30e078cbd97af677785f09515e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:46:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458e7b92b2f2a5409905585f8bf6187df50fc30e078cbd97af677785f09515e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:46:59 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458e7b92b2f2a5409905585f8bf6187df50fc30e078cbd97af677785f09515e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:46:59 np0005597378 podman[393514]: 2026-01-27 14:46:59.817212417 +0000 UTC m=+0.192154275 container init 88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:46:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:46:59 np0005597378 podman[393514]: 2026-01-27 14:46:59.826940218 +0000 UTC m=+0.201882046 container start 88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:46:59 np0005597378 podman[393514]: 2026-01-27 14:46:59.880794883 +0000 UTC m=+0.255736721 container attach 88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 27 09:47:00 np0005597378 tender_brown[393532]: {
Jan 27 09:47:00 np0005597378 tender_brown[393532]:    "0": [
Jan 27 09:47:00 np0005597378 tender_brown[393532]:        {
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "devices": [
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "/dev/loop3"
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            ],
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_name": "ceph_lv0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_size": "21470642176",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "name": "ceph_lv0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "tags": {
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.cluster_name": "ceph",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.crush_device_class": "",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.encrypted": "0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.objectstore": "bluestore",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.osd_id": "0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.type": "block",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.vdo": "0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.with_tpm": "0"
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            },
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "type": "block",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "vg_name": "ceph_vg0"
Jan 27 09:47:00 np0005597378 tender_brown[393532]:        }
Jan 27 09:47:00 np0005597378 tender_brown[393532]:    ],
Jan 27 09:47:00 np0005597378 tender_brown[393532]:    "1": [
Jan 27 09:47:00 np0005597378 tender_brown[393532]:        {
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "devices": [
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "/dev/loop4"
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            ],
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_name": "ceph_lv1",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_size": "21470642176",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "name": "ceph_lv1",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "tags": {
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.cluster_name": "ceph",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.crush_device_class": "",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.encrypted": "0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.objectstore": "bluestore",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.osd_id": "1",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.type": "block",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.vdo": "0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.with_tpm": "0"
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            },
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "type": "block",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "vg_name": "ceph_vg1"
Jan 27 09:47:00 np0005597378 tender_brown[393532]:        }
Jan 27 09:47:00 np0005597378 tender_brown[393532]:    ],
Jan 27 09:47:00 np0005597378 tender_brown[393532]:    "2": [
Jan 27 09:47:00 np0005597378 tender_brown[393532]:        {
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "devices": [
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "/dev/loop5"
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            ],
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_name": "ceph_lv2",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_size": "21470642176",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "name": "ceph_lv2",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "tags": {
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.cluster_name": "ceph",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.crush_device_class": "",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.encrypted": "0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.objectstore": "bluestore",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.osd_id": "2",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.type": "block",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.vdo": "0",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:                "ceph.with_tpm": "0"
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            },
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "type": "block",
Jan 27 09:47:00 np0005597378 tender_brown[393532]:            "vg_name": "ceph_vg2"
Jan 27 09:47:00 np0005597378 tender_brown[393532]:        }
Jan 27 09:47:00 np0005597378 tender_brown[393532]:    ]
Jan 27 09:47:00 np0005597378 tender_brown[393532]: }
Jan 27 09:47:00 np0005597378 systemd[1]: libpod-88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2.scope: Deactivated successfully.
Jan 27 09:47:00 np0005597378 podman[393514]: 2026-01-27 14:47:00.157731341 +0000 UTC m=+0.532673169 container died 88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:47:00 np0005597378 systemd[1]: var-lib-containers-storage-overlay-458e7b92b2f2a5409905585f8bf6187df50fc30e078cbd97af677785f09515e2-merged.mount: Deactivated successfully.
Jan 27 09:47:00 np0005597378 podman[393514]: 2026-01-27 14:47:00.399255738 +0000 UTC m=+0.774197566 container remove 88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:47:01 np0005597378 systemd[1]: libpod-conmon-88f111a7068cc3dc519d6987c9c5ed828e05dd45835a131422d77c4703b0edf2.scope: Deactivated successfully.
Jan 27 09:47:01 np0005597378 podman[393698]: 2026-01-27 14:47:01.639446111 +0000 UTC m=+0.045123771 container create acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:47:01 np0005597378 systemd[1]: Started libpod-conmon-acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6.scope.
Jan 27 09:47:01 np0005597378 podman[393698]: 2026-01-27 14:47:01.618789716 +0000 UTC m=+0.024467406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:47:01 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:47:01 np0005597378 podman[393698]: 2026-01-27 14:47:01.753756287 +0000 UTC m=+0.159433987 container init acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 09:47:01 np0005597378 podman[393698]: 2026-01-27 14:47:01.761808022 +0000 UTC m=+0.167485682 container start acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 27 09:47:01 np0005597378 festive_northcutt[393715]: 167 167
Jan 27 09:47:01 np0005597378 systemd[1]: libpod-acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6.scope: Deactivated successfully.
Jan 27 09:47:01 np0005597378 podman[393698]: 2026-01-27 14:47:01.795811375 +0000 UTC m=+0.201489045 container attach acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 09:47:01 np0005597378 podman[393698]: 2026-01-27 14:47:01.796556214 +0000 UTC m=+0.202233904 container died acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:47:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:01 np0005597378 systemd[1]: var-lib-containers-storage-overlay-618ea6bbb84c03d9e052c564eb2fa76be3bfe84cf4248a656f71dac98120d3db-merged.mount: Deactivated successfully.
Jan 27 09:47:01 np0005597378 podman[393698]: 2026-01-27 14:47:01.902517587 +0000 UTC m=+0.308195247 container remove acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:47:01 np0005597378 systemd[1]: libpod-conmon-acdffaefd41763dadcddc3035d8ec53f222c730c29b5c8d91ab7000f3699ddc6.scope: Deactivated successfully.
Jan 27 09:47:02 np0005597378 podman[393760]: 2026-01-27 14:47:02.090388985 +0000 UTC m=+0.060872584 container create 63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_diffie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 09:47:02 np0005597378 systemd[1]: Started libpod-conmon-63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268.scope.
Jan 27 09:47:02 np0005597378 podman[393760]: 2026-01-27 14:47:02.053243258 +0000 UTC m=+0.023726887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:47:02 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:47:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed28c6230c1510eb4d1544d96a823d277e43233df5e28fd3874c47f9ccae4d0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:47:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed28c6230c1510eb4d1544d96a823d277e43233df5e28fd3874c47f9ccae4d0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:47:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed28c6230c1510eb4d1544d96a823d277e43233df5e28fd3874c47f9ccae4d0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:47:02 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed28c6230c1510eb4d1544d96a823d277e43233df5e28fd3874c47f9ccae4d0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:47:02 np0005597378 podman[393760]: 2026-01-27 14:47:02.215893262 +0000 UTC m=+0.186376891 container init 63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_diffie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:47:02 np0005597378 podman[393760]: 2026-01-27 14:47:02.22627351 +0000 UTC m=+0.196757109 container start 63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_diffie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:47:02 np0005597378 podman[393760]: 2026-01-27 14:47:02.248709951 +0000 UTC m=+0.219193550 container attach 63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_diffie, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:47:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:47:02 np0005597378 nova_compute[238941]: 2026-01-27 14:47:02.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:47:02 np0005597378 nova_compute[238941]: 2026-01-27 14:47:02.485 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:03 np0005597378 lvm[393902]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:47:03 np0005597378 lvm[393899]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:47:03 np0005597378 lvm[393899]: VG ceph_vg0 finished
Jan 27 09:47:03 np0005597378 lvm[393902]: VG ceph_vg1 finished
Jan 27 09:47:03 np0005597378 lvm[393904]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:47:03 np0005597378 lvm[393904]: VG ceph_vg2 finished
Jan 27 09:47:03 np0005597378 sleepy_diffie[393776]: {}
Jan 27 09:47:03 np0005597378 systemd[1]: libpod-63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268.scope: Deactivated successfully.
Jan 27 09:47:03 np0005597378 podman[393760]: 2026-01-27 14:47:03.155031829 +0000 UTC m=+1.125515448 container died 63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_diffie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 27 09:47:03 np0005597378 systemd[1]: libpod-63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268.scope: Consumed 1.506s CPU time.
Jan 27 09:47:03 np0005597378 systemd[1]: var-lib-containers-storage-overlay-ed28c6230c1510eb4d1544d96a823d277e43233df5e28fd3874c47f9ccae4d0f-merged.mount: Deactivated successfully.
Jan 27 09:47:03 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.22988 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:03 np0005597378 podman[393760]: 2026-01-27 14:47:03.426929022 +0000 UTC m=+1.397412621 container remove 63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:47:03 np0005597378 systemd[1]: libpod-conmon-63524bbc988d855b1e615116c2cacb7fd398753ed6192a3d6f8c3ccf83478268.scope: Deactivated successfully.
Jan 27 09:47:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:47:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:47:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:47:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:47:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:04 np0005597378 nova_compute[238941]: 2026-01-27 14:47:03.997 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:04 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.22990 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:04 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:47:04 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:47:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 27 09:47:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023217482' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 27 09:47:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:06 np0005597378 nova_compute[238941]: 2026-01-27 14:47:06.405 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:47:06 np0005597378 nova_compute[238941]: 2026-01-27 14:47:06.406 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 27 09:47:06 np0005597378 nova_compute[238941]: 2026-01-27 14:47:06.502 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 27 09:47:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:47:07 np0005597378 nova_compute[238941]: 2026-01-27 14:47:07.488 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:08 np0005597378 ovs-vsctl[394076]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 27 09:47:09 np0005597378 nova_compute[238941]: 2026-01-27 14:47:08.999 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:09 np0005597378 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 27 09:47:09 np0005597378 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 27 09:47:09 np0005597378 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 27 09:47:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:10 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: cache status {prefix=cache status} (starting...)
Jan 27 09:47:10 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: client ls {prefix=client ls} (starting...)
Jan 27 09:47:10 np0005597378 lvm[394416]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:47:10 np0005597378 lvm[394416]: VG ceph_vg0 finished
Jan 27 09:47:10 np0005597378 lvm[394423]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:47:10 np0005597378 lvm[394423]: VG ceph_vg1 finished
Jan 27 09:47:10 np0005597378 lvm[394435]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:47:10 np0005597378 lvm[394435]: VG ceph_vg2 finished
Jan 27 09:47:10 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.22994 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:11 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: damage ls {prefix=damage ls} (starting...)
Jan 27 09:47:11 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump loads {prefix=dump loads} (starting...)
Jan 27 09:47:11 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.22996 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:11 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 27 09:47:11 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 27 09:47:11 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 27 09:47:11 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.22998 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:11 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 27 09:47:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 27 09:47:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3770657714' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 27 09:47:12 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 27 09:47:12 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 27 09:47:12 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23002 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:12 np0005597378 ceph-mgr[75385]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:47:12 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: 2026-01-27T14:47:12.236+0000 7f5469e41640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:47:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:47:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:47:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351197988' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:47:12 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: ops {prefix=ops} (starting...)
Jan 27 09:47:12 np0005597378 nova_compute[238941]: 2026-01-27 14:47:12.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 27 09:47:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2265232475' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 27 09:47:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 27 09:47:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3957874114' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 27 09:47:13 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: session ls {prefix=session ls} (starting...)
Jan 27 09:47:13 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: status {prefix=status} (starting...)
Jan 27 09:47:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 27 09:47:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1806728884' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 27 09:47:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 27 09:47:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2091920010' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 27 09:47:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:13 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23016 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:14 np0005597378 nova_compute[238941]: 2026-01-27 14:47:14.000 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 27 09:47:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334324496' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 27 09:47:14 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23018 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:14 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 27 09:47:14 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3254999333' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 27 09:47:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 27 09:47:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/818623392' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 27 09:47:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 27 09:47:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3704116878' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 27 09:47:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 27 09:47:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314841788' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 27 09:47:15 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 27 09:47:15 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3473200911' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 27 09:47:16 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23032 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:16 np0005597378 ceph-mgr[75385]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 09:47:16 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: 2026-01-27T14:47:16.447+0000 7f5469e41640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 09:47:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 27 09:47:16 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3769637560' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 27 09:47:16 np0005597378 podman[395225]: 2026-01-27 14:47:16.74159701 +0000 UTC m=+0.071402997 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 09:47:16 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23036 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 27 09:47:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089456621' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:47:17
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.control', 'images']
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505800 session 0x564bcd672a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 277127168 unmapped: 39501824 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee7f9000/0x0/0x4ffc00000, data 0x24c444e/0x2653000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 277127168 unmapped: 39501824 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3088385 data_alloc: 234881024 data_used: 22455368
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4c00 session 0x564bccd91500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bd40a9dc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77c400 session 0x564bca3cca80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c1000 session 0x564bcc228540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.537541389s of 13.847627640s, submitted: 53
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee7f9000/0x0/0x4ffc00000, data 0x24c444e/0x2653000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [0,0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c1000 session 0x564bd34cc540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4c00 session 0x564bccddda40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bccddc540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505800 session 0x564bcd673180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77c400 session 0x564bccf4ca80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3113459 data_alloc: 234881024 data_used: 22455368
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee458000/0x0/0x4ffc00000, data 0x286544e/0x29f4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279404544 unmapped: 37224448 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee458000/0x0/0x4ffc00000, data 0x286544e/0x29f4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281124864 unmapped: 35504128 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee458000/0x0/0x4ffc00000, data 0x286544e/0x29f4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281870336 unmapped: 34758656 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3190203 data_alloc: 234881024 data_used: 23307336
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283385856 unmapped: 33243136 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283385856 unmapped: 33243136 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcfd85800 session 0x564bcea6a700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283385856 unmapped: 33243136 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bccdacc40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283385856 unmapped: 33243136 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eda2a000/0x0/0x4ffc00000, data 0x329344e/0x3422000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7e800 session 0x564bd3846540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.417898178s of 11.741650581s, submitted: 77
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92e000 session 0x564bcd675c00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283705344 unmapped: 32923648 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3195012 data_alloc: 234881024 data_used: 23341128
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283828224 unmapped: 32800768 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 32120832 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 32120832 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 32120832 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed9e4000/0x0/0x4ffc00000, data 0x32d8471/0x3468000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 32120832 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3217540 data_alloc: 234881024 data_used: 24972395
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 32120832 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 32120832 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 32047104 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 32047104 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 32047104 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed9db000/0x0/0x4ffc00000, data 0x32e1471/0x3471000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3217740 data_alloc: 234881024 data_used: 24976491
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 32047104 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.264063835s of 12.286459923s, submitted: 9
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284590080 unmapped: 32038912 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 31358976 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed7e3000/0x0/0x4ffc00000, data 0x34d1471/0x3661000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 32030720 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2833000 session 0x564bd08d7dc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bcd675180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7e800 session 0x564bd3846000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92e000 session 0x564bd40a9500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 32030720 heap: 316628992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcfd85800 session 0x564bd40a96c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3800 session 0x564bca9f8000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bcc88ac40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7e800 session 0x564bd2688700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92e000 session 0x564bd08d7a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3300662 data_alloc: 234881024 data_used: 25094763
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 35799040 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece88000/0x0/0x4ffc00000, data 0x3e33481/0x3fc4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece88000/0x0/0x4ffc00000, data 0x3e33481/0x3fc4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3300678 data_alloc: 234881024 data_used: 25094763
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf73800 session 0x564bd3847880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca63e000 session 0x564bcb50dc00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bcefdac40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.198307991s of 11.552964211s, submitted: 36
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7e800 session 0x564bcd674e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 35790848 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x3e334a4/0x3fc5000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 285048832 unmapped: 35782656 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd87a000 session 0x564bcefda000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc72e400 session 0x564bccddd180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a800 session 0x564bcb5f8e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3234263 data_alloc: 234881024 data_used: 21292155
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283049984 unmapped: 37781504 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283049984 unmapped: 37781504 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283049984 unmapped: 37781504 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed440000/0x0/0x4ffc00000, data 0x387a481/0x3a0b000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283049984 unmapped: 37781504 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283049984 unmapped: 37781504 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3235575 data_alloc: 234881024 data_used: 21398651
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 38100992 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed440000/0x0/0x4ffc00000, data 0x387a481/0x3a0b000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca803000 session 0x564bcea96540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29a800 session 0x564bccf3b6c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.402175903s of 10.445899010s, submitted: 22
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd87b800 session 0x564bd2689340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3166167 data_alloc: 234881024 data_used: 25227071
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee252000/0x0/0x4ffc00000, data 0x2a69481/0x2bfa000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3166167 data_alloc: 234881024 data_used: 25227071
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 33677312 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 289079296 unmapped: 31752192 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee02d000/0x0/0x4ffc00000, data 0x2c8e481/0x2e1f000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [0,0,0,0,0,2,0,7])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 290439168 unmapped: 30392320 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 289742848 unmapped: 31088640 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.614546776s of 10.402552605s, submitted: 71
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292241408 unmapped: 28590080 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed845000/0x0/0x4ffc00000, data 0x3470481/0x3601000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3234551 data_alloc: 234881024 data_used: 25616191
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292241408 unmapped: 28590080 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed7b2000/0x0/0x4ffc00000, data 0x34fb481/0x368c000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292241408 unmapped: 28590080 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292241408 unmapped: 28590080 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292249600 unmapped: 28581888 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292249600 unmapped: 28581888 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf73800 session 0x564bcb082540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92e000 session 0x564bd40a88c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92e000 session 0x564bd37c0000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061853 data_alloc: 218103808 data_used: 15602991
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 34234368 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee9fe000/0x0/0x4ffc00000, data 0x210b44e/0x229a000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 34234368 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee9fe000/0x0/0x4ffc00000, data 0x210b44e/0x229a000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 34234368 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 34234368 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 34234368 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061853 data_alloc: 218103808 data_used: 15602991
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 34234368 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdf3c400 session 0x564bcceba000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7f000 session 0x564bd34cd180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.748150826s of 12.829801559s, submitted: 49
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ad400 session 0x564bccd91a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee9fe000/0x0/0x4ffc00000, data 0x210b44e/0x229a000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4efe80000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2892661 data_alloc: 218103808 data_used: 6161164
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4efe80000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2892661 data_alloc: 218103808 data_used: 6161164
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4efe80000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279715840 unmapped: 41115648 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4efe80000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279724032 unmapped: 41107456 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279724032 unmapped: 41107456 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4efe80000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279724032 unmapped: 41107456 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2892661 data_alloc: 218103808 data_used: 6161164
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279724032 unmapped: 41107456 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279724032 unmapped: 41107456 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4efe80000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279732224 unmapped: 41099264 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279732224 unmapped: 41099264 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279732224 unmapped: 41099264 heap: 320831488 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.525594711s of 18.546001434s, submitted: 11
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca89000 session 0x564bd08d6fc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ad400 session 0x564bccb4ec40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2ded800 session 0x564bccb4f880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bd08d6a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb79c00 session 0x564bd34cda40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2937569 data_alloc: 218103808 data_used: 6161164
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef6c8000/0x0/0x4ffc00000, data 0x15f544e/0x1784000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef6c8000/0x0/0x4ffc00000, data 0x15f544e/0x1784000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2937569 data_alloc: 218103808 data_used: 6161164
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef6c8000/0x0/0x4ffc00000, data 0x15f544e/0x1784000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef6c8000/0x0/0x4ffc00000, data 0x15f544e/0x1784000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2937569 data_alloc: 218103808 data_used: 6161164
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278552576 unmapped: 46481408 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef6c8000/0x0/0x4ffc00000, data 0x15f544e/0x1784000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278560768 unmapped: 46473216 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bcefdaa80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ad400 session 0x564bccebae00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb79c00 session 0x564bcefda700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bd08d6e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278560768 unmapped: 46473216 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.824385643s of 13.901331902s, submitted: 4
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281247744 unmapped: 43786240 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef6c8000/0x0/0x4ffc00000, data 0x15f544e/0x1784000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bcb64b340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505800 session 0x564bcc228a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ad400 session 0x564bd2688e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2963267 data_alloc: 218103808 data_used: 6161164
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb79c00 session 0x564bd40a8540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bcea976c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bd0fa6540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef321000/0x0/0x4ffc00000, data 0x199c44e/0x1b2b000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2963523 data_alloc: 218103808 data_used: 6192908
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a400 session 0x564bd08d68c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026627 data_alloc: 234881024 data_used: 16850700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef321000/0x0/0x4ffc00000, data 0x199c44e/0x1b2b000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef321000/0x0/0x4ffc00000, data 0x199c44e/0x1b2b000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026627 data_alloc: 234881024 data_used: 16850700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278568960 unmapped: 46465024 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.068483353s of 17.878995895s, submitted: 9
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44834816 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280870912 unmapped: 44163072 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eef66000/0x0/0x4ffc00000, data 0x1d4f44e/0x1ede000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280788992 unmapped: 44244992 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3065495 data_alloc: 234881024 data_used: 17904396
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280379392 unmapped: 44654592 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280379392 unmapped: 44654592 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eedbf000/0x0/0x4ffc00000, data 0x1efe44e/0x208d000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280379392 unmapped: 44654592 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081447 data_alloc: 234881024 data_used: 18177804
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eedad000/0x0/0x4ffc00000, data 0x1f0f44e/0x209e000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda8000/0x0/0x4ffc00000, data 0x1f1544e/0x20a4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.841414452s of 12.251655579s, submitted: 62
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081847 data_alloc: 234881024 data_used: 18185996
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b7800 session 0x564bd26896c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda8000/0x0/0x4ffc00000, data 0x1f1544e/0x20a4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92e400 session 0x564bd2689880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1191800 session 0x564bcc2296c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd800 session 0x564bd34cd880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf030400 session 0x564bcb082380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081847 data_alloc: 234881024 data_used: 18185996
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda8000/0x0/0x4ffc00000, data 0x1f1544e/0x20a4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081847 data_alloc: 234881024 data_used: 18185996
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281427968 unmapped: 43606016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda8000/0x0/0x4ffc00000, data 0x1f1544e/0x20a4000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281436160 unmapped: 43597824 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081847 data_alloc: 234881024 data_used: 18185996
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.594211578s of 16.665733337s, submitted: 1
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281444352 unmapped: 43589632 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154c00 session 0x564bcefda8c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda7000/0x0/0x4ffc00000, data 0x1f15471/0x20a5000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda7000/0x0/0x4ffc00000, data 0x1f15471/0x20a5000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3083024 data_alloc: 234881024 data_used: 18185996
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda7000/0x0/0x4ffc00000, data 0x1f15471/0x20a5000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 43581440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda7000/0x0/0x4ffc00000, data 0x1f15471/0x20a5000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eeda5000/0x0/0x4ffc00000, data 0x1f16471/0x20a6000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29a800 session 0x564bd08d68c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154000 session 0x564bcefda700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3121904 data_alloc: 234881024 data_used: 18190092
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab800 session 0x564bcefdaa80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf71000 session 0x564bd08d6a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab800 session 0x564bccb4ec40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee6a6000/0x0/0x4ffc00000, data 0x2616471/0x27a6000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee6a6000/0x0/0x4ffc00000, data 0x2616471/0x27a6000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee6a6000/0x0/0x4ffc00000, data 0x2616471/0x27a6000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3121904 data_alloc: 234881024 data_used: 18190092
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 43573248 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee6a6000/0x0/0x4ffc00000, data 0x2616471/0x27a6000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 43565056 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 43565056 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.467000961s of 17.345470428s, submitted: 9
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2dec800 session 0x564bcb64ae00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 43565056 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee6a6000/0x0/0x4ffc00000, data 0x2616471/0x27a6000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 43565056 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3123593 data_alloc: 234881024 data_used: 18190092
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 43565056 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282001408 unmapped: 43032576 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282001408 unmapped: 43032576 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42180608 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee362000/0x0/0x4ffc00000, data 0x295a471/0x2aea000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283099136 unmapped: 41934848 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199893 data_alloc: 234881024 data_used: 24596881
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283099136 unmapped: 41934848 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee113000/0x0/0x4ffc00000, data 0x2ba9471/0x2d39000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee10f000/0x0/0x4ffc00000, data 0x2bad471/0x2d3d000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202475 data_alloc: 234881024 data_used: 24596881
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee10f000/0x0/0x4ffc00000, data 0x2bad471/0x2d3d000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283107328 unmapped: 41926656 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202475 data_alloc: 234881024 data_used: 24596881
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.933923721s of 17.879743576s, submitted: 28
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283246592 unmapped: 41787392 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 40787968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee077000/0x0/0x4ffc00000, data 0x2c45471/0x2dd5000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 40787968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 40779776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 40779776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edcc8000/0x0/0x4ffc00000, data 0x2ff4471/0x3184000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227579 data_alloc: 234881024 data_used: 24596881
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284262400 unmapped: 40771584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284786688 unmapped: 40247296 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284262400 unmapped: 40771584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c4800 session 0x564bd26896c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd87b400 session 0x564bcc2296c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3c00 session 0x564bcd92b880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284270592 unmapped: 40763392 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab800 session 0x564bca3cd880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd87b400 session 0x564bd40a9dc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edc3f000/0x0/0x4ffc00000, data 0x307d471/0x320d000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284270592 unmapped: 40763392 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3238596 data_alloc: 234881024 data_used: 24600465
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284270592 unmapped: 40763392 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284270592 unmapped: 40763392 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edc3f000/0x0/0x4ffc00000, data 0x307d471/0x320d000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.306918144s of 11.977951050s, submitted: 35
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284418048 unmapped: 40615936 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb603400 session 0x564bcefdb500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284565504 unmapped: 40468480 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 40173568 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3245456 data_alloc: 234881024 data_used: 25204625
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 40173568 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edc1b000/0x0/0x4ffc00000, data 0x30a1471/0x3231000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 40173568 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd27ae800 session 0x564bcea968c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc72f800 session 0x564bd08d7180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 40165376 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1190800 session 0x564bca3cca80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1191c00 session 0x564bccd91500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edc1b000/0x0/0x4ffc00000, data 0x30a1471/0x3231000, compress 0x0/0x0/0x0, omap 0x44263, meta 0xed6bd9d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 40165376 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282451968 unmapped: 42582016 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110624 data_alloc: 234881024 data_used: 17701265
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282460160 unmapped: 42573824 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc72f800 session 0x564bd34cd880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282468352 unmapped: 42565632 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab800 session 0x564bcefda540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42557440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42557440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eecff000/0x0/0x4ffc00000, data 0x1fbd44e/0x214c000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42557440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3109348 data_alloc: 234881024 data_used: 17697169
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42557440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42557440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.256325722s of 14.974139214s, submitted: 35
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42557440 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eecff000/0x0/0x4ffc00000, data 0x1fbd44e/0x214c000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 42450944 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283549696 unmapped: 41484288 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee864000/0x0/0x4ffc00000, data 0x245944e/0x25e8000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e2c00 session 0x564bca944540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d2c00 session 0x564bccddc000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3145406 data_alloc: 234881024 data_used: 17815918
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283549696 unmapped: 41484288 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 283566080 unmapped: 41467904 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45916160 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef3b0000/0x0/0x4ffc00000, data 0x190d44e/0x1a9c000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab800 session 0x564bccdac380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45916160 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45916160 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef3b0000/0x0/0x4ffc00000, data 0x190d44e/0x1a9c000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028768 data_alloc: 218103808 data_used: 9668974
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45916160 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef3b0000/0x0/0x4ffc00000, data 0x190d44e/0x1a9c000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ad400 session 0x564bccb4e000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a400 session 0x564bcd92ae00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45916160 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.923320770s of 10.008566856s, submitted: 47
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef3b0000/0x0/0x4ffc00000, data 0x190d44e/0x1a9c000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2dec400 session 0x564bd40a88c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976926 data_alloc: 218103808 data_used: 6798190
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976926 data_alloc: 218103808 data_used: 6798190
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976926 data_alloc: 218103808 data_used: 6798190
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45907968 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976926 data_alloc: 218103808 data_used: 6798190
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976926 data_alloc: 218103808 data_used: 6798190
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45899776 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45891584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45891584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46143, meta 0xed69ebd), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45891584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45891584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976926 data_alloc: 218103808 data_used: 6798190
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45891584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45891584 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.975721359s of 29.299236298s, submitted: 7
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278233088 unmapped: 46800896 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278233088 unmapped: 46800896 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bcb082540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75bc00 session 0x564bca9f9a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd000 session 0x564bd08d7c00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f000 session 0x564bd2688380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 36216832 heap: 325033984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ef90d000/0x0/0x4ffc00000, data 0x13b044e/0x153f000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [0,0,0,0,0,0,6,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcfd85c00 session 0x564bd34cc1c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bccf3aa80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd000 session 0x564bca944c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048717 data_alloc: 218103808 data_used: 6798190
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75bc00 session 0x564bd08d76c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f000 session 0x564bd34cd880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278675456 unmapped: 50036736 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278675456 unmapped: 50036736 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee75b000/0x0/0x4ffc00000, data 0x256145e/0x26f1000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802400 session 0x564bccb4ec40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 51806208 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bccb4f880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd000 session 0x564bd34cc1c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75bc00 session 0x564bca944540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f000 session 0x564bccb4e000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 276914176 unmapped: 51798016 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c4c00 session 0x564bcb5f8a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 276914176 unmapped: 51798016 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee75b000/0x0/0x4ffc00000, data 0x256145e/0x26f1000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd000 session 0x564bcefdb880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3094544 data_alloc: 218103808 data_used: 6798190
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 276914176 unmapped: 51798016 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75bc00 session 0x564bd08d6e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bd37c0a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 276914176 unmapped: 51798016 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.292297363s of 10.319147110s, submitted: 52
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee75b000/0x0/0x4ffc00000, data 0x256145e/0x26f1000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 276922368 unmapped: 51789824 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f000 session 0x564bcc2b5500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 277266432 unmapped: 51445760 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280354816 unmapped: 48357376 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb4800 session 0x564bd34cc540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bd2688700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd000 session 0x564bcb50dc00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75bc00 session 0x564bccf3b500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f000 session 0x564bcc229500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202504 data_alloc: 218103808 data_used: 13735278
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278290432 unmapped: 50421760 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278290432 unmapped: 50421760 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee014000/0x0/0x4ffc00000, data 0x2ca845e/0x2e38000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 278290432 unmapped: 50421760 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280125440 unmapped: 48586752 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280125440 unmapped: 48586752 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3243848 data_alloc: 234881024 data_used: 20634990
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280133632 unmapped: 48578560 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280133632 unmapped: 48578560 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.034842491s of 10.587124825s, submitted: 13
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280133632 unmapped: 48578560 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee014000/0x0/0x4ffc00000, data 0x2ca845e/0x2e38000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154800 session 0x564bcdb70e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7fc00 session 0x564bcdb71880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280133632 unmapped: 48578560 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcfd84400 session 0x564bd3847880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 280141824 unmapped: 48570368 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3207442 data_alloc: 218103808 data_used: 13741422
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee0ed000/0x0/0x4ffc00000, data 0x2bcf45e/0x2d5f000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [0,0,0,0,0,0,0,0,22,38])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284434432 unmapped: 44277760 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 284631040 unmapped: 44081152 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bcea6a700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 40509440 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 41099264 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 41099264 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3280638 data_alloc: 234881024 data_used: 21418862
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 41099264 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed9a0000/0x0/0x4ffc00000, data 0x32f045e/0x3480000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 41099264 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 41099264 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.594480991s of 11.006065369s, submitted: 158
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 41771008 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 41771008 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed9a8000/0x0/0x4ffc00000, data 0x331445e/0x34a4000, compress 0x0/0x0/0x0, omap 0x46177, meta 0xed69e89), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274686 data_alloc: 234881024 data_used: 21418862
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 41771008 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 41771008 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 39976960 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 39976960 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed577000/0x0/0x4ffc00000, data 0x373f45e/0x38cf000, compress 0x0/0x0/0x0, omap 0x461ab, meta 0xed69e55), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 290045952 unmapped: 38666240 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314548 data_alloc: 234881024 data_used: 22401902
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 290045952 unmapped: 38666240 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c5400 session 0x564bcd672fc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2ded400 session 0x564bcdb70a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802000 session 0x564bd2688540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca7fc00 session 0x564bd40a9180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c5400 session 0x564bd08d7dc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 290275328 unmapped: 38436864 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5800 session 0x564bca9f8e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c0800 session 0x564bd37c0380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c0800 session 0x564bccddd180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee78d000/0x0/0x4ffc00000, data 0x247a44e/0x2609000, compress 0x0/0x0/0x0, omap 0x461ab, meta 0xed69e55), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3158102 data_alloc: 218103808 data_used: 15260526
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee78d000/0x0/0x4ffc00000, data 0x247a44e/0x2609000, compress 0x0/0x0/0x0, omap 0x461ab, meta 0xed69e55), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.389715195s of 13.823884010s, submitted: 103
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c1000 session 0x564bd40a9500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 41263104 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bcb50ddc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bcea96c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bdc00 session 0x564bcd673dc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bdc00 session 0x564bd40a9a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 40894464 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee843000/0x0/0x4ffc00000, data 0x247a44e/0x2609000, compress 0x0/0x0/0x0, omap 0x461ab, meta 0xed69e55), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bcc228540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bcb64aa80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c0800 session 0x564bd3846000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd01c1000 session 0x564bccf3b180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bdc00 session 0x564bccf4da40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219585 data_alloc: 234881024 data_used: 19721070
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 40894464 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee2b3000/0x0/0x4ffc00000, data 0x2a0a44e/0x2b99000, compress 0x0/0x0/0x0, omap 0x461ab, meta 0xed69e55), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1191000 session 0x564bca8d56c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc7400 session 0x564bd34cc700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 40894464 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc155c00 session 0x564bccdac540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cdc00 session 0x564bd37c0000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286269440 unmapped: 42442752 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286269440 unmapped: 42442752 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd51c4800 session 0x564bd08d6700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286285824 unmapped: 42426368 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7400 session 0x564bd08d7340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bcb50ca80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3119265 data_alloc: 218103808 data_used: 14723950
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286285824 unmapped: 42426368 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286285824 unmapped: 42426368 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eed6c000/0x0/0x4ffc00000, data 0x1f5144e/0x20e0000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc400 session 0x564bd08d7a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.727919579s of 10.244180679s, submitted: 52
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bdc00 session 0x564bd0fa7500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286294016 unmapped: 42418176 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4aa400 session 0x564bccb4fc00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3150675 data_alloc: 234881024 data_used: 19611705
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eed6b000/0x0/0x4ffc00000, data 0x1f5145e/0x20e1000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eed6b000/0x0/0x4ffc00000, data 0x1f5145e/0x20e1000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3150675 data_alloc: 234881024 data_used: 19611705
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eed6b000/0x0/0x4ffc00000, data 0x1f5145e/0x20e1000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eed6b000/0x0/0x4ffc00000, data 0x1f5145e/0x20e1000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 42115072 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.951860428s of 13.079006195s, submitted: 2
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3163961 data_alloc: 234881024 data_used: 19634233
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee91e000/0x0/0x4ffc00000, data 0x239e45e/0x252e000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee91e000/0x0/0x4ffc00000, data 0x239e45e/0x252e000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3183435 data_alloc: 234881024 data_used: 19988537
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee91e000/0x0/0x4ffc00000, data 0x239e45e/0x252e000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 41459712 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3185531 data_alloc: 234881024 data_used: 20037689
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288301056 unmapped: 40411136 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288301056 unmapped: 40411136 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288301056 unmapped: 40411136 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee91e000/0x0/0x4ffc00000, data 0x239e45e/0x252e000, compress 0x0/0x0/0x0, omap 0x46084, meta 0xed69f7c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bca944540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7400 session 0x564bd3846380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288301056 unmapped: 40411136 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb78400 session 0x564bd0fa6380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf936c00 session 0x564bccd91500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4aa400 session 0x564bcc88a1c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bcefda540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.225875854s of 13.874032021s, submitted: 39
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 293642240 unmapped: 35069952 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3240692 data_alloc: 234881024 data_used: 20037689
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40370176 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2832000 session 0x564bccf4ca80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e2000 session 0x564bccddc000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29ac00 session 0x564bcb64ba40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29ac00 session 0x564bccebbc00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb78400 session 0x564bd3846fc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bd40a8540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e2000 session 0x564bcb082e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40370176 heap: 328712192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2832000 session 0x564bd3847c00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc154400 session 0x564bcb64ae00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297467904 unmapped: 34922496 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4aa400 session 0x564bccebba40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29ac00 session 0x564bcc2b5a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb78400 session 0x564bd2689500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e2000 session 0x564bd08d6000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4aa400 session 0x564bcd675a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb5400 session 0x564bceffb180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee2f3000/0x0/0x4ffc00000, data 0x29c84c0/0x2b59000, compress 0x0/0x0/0x0, omap 0x46282, meta 0xed69d7e), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 43606016 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bcb64ae00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 43606016 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccd90e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3191338 data_alloc: 218103808 data_used: 13826121
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 43606016 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd35bf400 session 0x564bd37c1a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccd91a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca88800 session 0x564bcc2b5880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb504000 session 0x564bd08d6540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 43606016 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf937400 session 0x564bcb082c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ee316000/0x0/0x4ffc00000, data 0x29a44d0/0x2b36000, compress 0x0/0x0/0x0, omap 0x46282, meta 0xed69d7e), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286957568 unmapped: 45432832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f800 session 0x564bccb4efc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb5400 session 0x564bcd92bdc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286957568 unmapped: 45432832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77d000 session 0x564bd0fa6700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.195973396s of 10.071157455s, submitted: 69
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b6800 session 0x564bcc967a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eea39000/0x0/0x4ffc00000, data 0x1e444d0/0x1fd6000, compress 0x0/0x0/0x0, omap 0x46282, meta 0xed69d7e), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 45776896 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3117018 data_alloc: 218103808 data_used: 13180489
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 45891584 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 45891584 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eee52000/0x0/0x4ffc00000, data 0x1e684d0/0x1ffa000, compress 0x0/0x0/0x0, omap 0x46282, meta 0xed69d7e), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172698 data_alloc: 234881024 data_used: 22532169
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172698 data_alloc: 234881024 data_used: 22532169
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eee52000/0x0/0x4ffc00000, data 0x1e684d0/0x1ffa000, compress 0x0/0x0/0x0, omap 0x46282, meta 0xed69d7e), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 43663360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.509855270s of 11.519581795s, submitted: 2
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 292397056 unmapped: 39993344 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38412288 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed07d000/0x0/0x4ffc00000, data 0x2a974d0/0x2c29000, compress 0x0/0x0/0x0, omap 0x46282, meta 0xff09d7e), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298221568 unmapped: 34168832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301956 data_alloc: 234881024 data_used: 23806025
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb952000/0x0/0x4ffc00000, data 0x30284d0/0x31ba000, compress 0x0/0x0/0x0, omap 0x46282, meta 0x110a9d7e), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb931000/0x0/0x4ffc00000, data 0x30494d0/0x31db000, compress 0x0/0x0/0x0, omap 0x46282, meta 0x110a9d7e), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3303836 data_alloc: 234881024 data_used: 23838793
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb90d000/0x0/0x4ffc00000, data 0x306d4d0/0x31ff000, compress 0x0/0x0/0x0, omap 0x46282, meta 0x110a9d7e), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 31670272 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.609977722s of 13.966870308s, submitted: 193
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5000 session 0x564bca945340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e400 session 0x564bccdadc00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5000 session 0x564bccf3b340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb5400 session 0x564bcb64b880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b6800 session 0x564bcd92ba40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28f000 session 0x564bccf3a8c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb012c00 session 0x564bd08d7a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77d000 session 0x564bca9f8540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3181847 data_alloc: 218103808 data_used: 13606473
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb157000/0x0/0x4ffc00000, data 0x38234d0/0x39b5000, compress 0x0/0x0/0x0, omap 0x4644d, meta 0x110a9bb3), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298147840 unmapped: 34242560 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298147840 unmapped: 34242560 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298147840 unmapped: 34242560 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb9e000/0x0/0x4ffc00000, data 0x26644d0/0x27f6000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298147840 unmapped: 34242560 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb9e000/0x0/0x4ffc00000, data 0x26644d0/0x27f6000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298147840 unmapped: 34242560 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb9e000/0x0/0x4ffc00000, data 0x26644d0/0x27f6000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3181847 data_alloc: 218103808 data_used: 13606473
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298147840 unmapped: 34242560 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 34324480 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 34324480 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b6c00 session 0x564bca8d5500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec313000/0x0/0x4ffc00000, data 0x26674d0/0x27f9000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 34324480 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf937800 session 0x564bcea6aa80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc72e400 session 0x564bca8d56c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 34324480 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bcb64aa80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.311414719s of 10.554850578s, submitted: 54
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3181203 data_alloc: 218103808 data_used: 13606473
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 34324480 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297664512 unmapped: 34725888 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec313000/0x0/0x4ffc00000, data 0x26674d0/0x27f9000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297664512 unmapped: 34725888 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77d000 session 0x564bccd91500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b6c00 session 0x564bceffac40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf937800 session 0x564bcd92ae00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bca9f8000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec312000/0x0/0x4ffc00000, data 0x26674e0/0x27fa000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [0,0,0,0,0,0,2])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7c00 session 0x564bcc88a1c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7c00 session 0x564bd08d7a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccf3b340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77d000 session 0x564bd0fa6700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b6c00 session 0x564bd08d6540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec312000/0x0/0x4ffc00000, data 0x26674e0/0x27fa000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3279111 data_alloc: 234881024 data_used: 18917961
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb4f000/0x0/0x4ffc00000, data 0x2e2a4e0/0x2fbd000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb4f000/0x0/0x4ffc00000, data 0x2e2a4e0/0x2fbd000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb49000/0x0/0x4ffc00000, data 0x2e304e0/0x2fc3000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2833000 session 0x564bcb082e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2833000 session 0x564bd37c1a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3279359 data_alloc: 234881024 data_used: 18917961
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 34529280 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccd90e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.832988739s of 11.122942924s, submitted: 20
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77d000 session 0x564bceffb180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298164224 unmapped: 34226176 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301809664 unmapped: 30580736 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 29835264 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ead00000/0x0/0x4ffc00000, data 0x3c714e0/0x3e04000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 29835264 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414368 data_alloc: 234881024 data_used: 23732798
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 29835264 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eacfd000/0x0/0x4ffc00000, data 0x3c744e0/0x3e07000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 29835264 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3000 session 0x564bcc967880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2dec000 session 0x564bd34cc8c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bcb0836c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3000 session 0x564bcefda540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 29835264 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77d000 session 0x564bd34cc000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2833000 session 0x564bd40a9180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf72000 session 0x564bcea97dc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bd40a8380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3000 session 0x564bca945340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 29442048 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea9ad000/0x0/0x4ffc00000, data 0x3fcb4f0/0x415f000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 29442048 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bcd92b6c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc72e400 session 0x564bcea961c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440674 data_alloc: 234881024 data_used: 23732798
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 29442048 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd51c4c00 session 0x564bccdaddc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb504000 session 0x564bccd91a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.345057487s of 10.011887550s, submitted: 175
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bcb50ddc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297811968 unmapped: 34578432 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 297811968 unmapped: 34578432 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bd34cda40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3000 session 0x564bd08d6c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298287104 unmapped: 34103296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299843584 unmapped: 32546816 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebc06000/0x0/0x4ffc00000, data 0x2d714f0/0x2f05000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306964 data_alloc: 234881024 data_used: 18332238
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299876352 unmapped: 32514048 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299876352 unmapped: 32514048 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4aa400 session 0x564bca8d4540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bccddc8c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 32505856 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebbfe000/0x0/0x4ffc00000, data 0x2d794f0/0x2f0d000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bd40a8540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cf947e/0x1e8b000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3175879 data_alloc: 234881024 data_used: 16287806
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecc81000/0x0/0x4ffc00000, data 0x1cf946e/0x1e8a000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecc81000/0x0/0x4ffc00000, data 0x1cf946e/0x1e8a000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3175879 data_alloc: 234881024 data_used: 16287806
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 32776192 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.237992287s of 15.121253014s, submitted: 66
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 25411584 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304930816 unmapped: 27459584 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec318000/0x0/0x4ffc00000, data 0x266346e/0x27f4000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec1a8000/0x0/0x4ffc00000, data 0x27d346e/0x2964000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305102848 unmapped: 27287552 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] 
** DB Stats **
Uptime(secs): 4201.0 total, 600.0 interval
Cumulative writes: 34K writes, 138K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
Cumulative WAL: 34K writes, 11K syncs, 2.87 writes per sync, written: 0.14 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4081 writes, 17K keys, 4081 commit groups, 1.0 writes per commit group, ingest: 21.12 MB, 0.04 MB/s
Interval WAL: 4081 writes, 1534 syncs, 2.66 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305168384 unmapped: 27222016 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf936800 session 0x564bd3846000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2ded800 session 0x564bd0fa6380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb012400 session 0x564bcefda8c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bccddce00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3257933 data_alloc: 234881024 data_used: 17033180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 29966336 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf936800 session 0x564bd40a8000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 29949952 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebfc0000/0x0/0x4ffc00000, data 0x29bb46e/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,4])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 29949952 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bca9f9dc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2ded800 session 0x564bd40a9500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab400 session 0x564bd40a96c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bcdb71c00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab400 session 0x564bd2689dc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 30007296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf78000/0x0/0x4ffc00000, data 0x2a0346e/0x2b94000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 30007296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3265482 data_alloc: 234881024 data_used: 17118684
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 30007296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 30007296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd35be000 session 0x564bd40a8540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 30007296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 30007296 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf78000/0x0/0x4ffc00000, data 0x2a0346e/0x2b94000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.775732040s of 12.617618561s, submitted: 138
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3800 session 0x564bcb5f8e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3275959 data_alloc: 234881024 data_used: 18429404
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf57000/0x0/0x4ffc00000, data 0x2a2446e/0x2bb5000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf57000/0x0/0x4ffc00000, data 0x2a2446e/0x2bb5000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3276991 data_alloc: 234881024 data_used: 18522076
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 29999104 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302399488 unmapped: 29990912 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302399488 unmapped: 29990912 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302399488 unmapped: 29990912 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf54000/0x0/0x4ffc00000, data 0x2a2746e/0x2bb8000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.947317123s of 11.255783081s, submitted: 13
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3288089 data_alloc: 234881024 data_used: 18534364
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305111040 unmapped: 27279360 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 28082176 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eba91000/0x0/0x4ffc00000, data 0x2eea46e/0x307b000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [0,0,0,0,0,0,0,0,3])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 28024832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1152400 session 0x564bcea6bdc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bcea6b6c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab400 session 0x564bccddc700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d3800 session 0x564bd08d7a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1152400 session 0x564bccd90e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 28024832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eba8a000/0x0/0x4ffc00000, data 0x2ef0497/0x3082000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 28024832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334804 data_alloc: 234881024 data_used: 18751452
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 28024832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 28024832 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd15fc400 session 0x564bd2689880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 27877376 heap: 332390400 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb72e000/0x0/0x4ffc00000, data 0x324c4d0/0x33de000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [0,0,0,0,0,4])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305635328 unmapped: 29892608 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305635328 unmapped: 29892608 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.188038826s of 10.137979507s, submitted: 118
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410308 data_alloc: 234881024 data_used: 20932572
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305651712 unmapped: 29876224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305651712 unmapped: 29876224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bd34cda40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca88400 session 0x564bccd91a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305651712 unmapped: 29876224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd1152400 session 0x564bccd90700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 31211520 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb393000/0x0/0x4ffc00000, data 0x35e74d0/0x3779000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 31211520 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361712 data_alloc: 234881024 data_used: 19404252
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 31211520 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 31211520 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7c00 session 0x564bcd92ba40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 31211520 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce4b6c00 session 0x564bcc228540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb36f000/0x0/0x4ffc00000, data 0x360b4d0/0x379d000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bcc967a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298622976 unmapped: 36904960 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb83000/0x0/0x4ffc00000, data 0x2a824c0/0x2c13000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298622976 unmapped: 36904960 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3245586 data_alloc: 218103808 data_used: 11161036
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298622976 unmapped: 36904960 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebb83000/0x0/0x4ffc00000, data 0x2a824c0/0x2c13000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 298622976 unmapped: 36904960 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.757285118s of 12.069029808s, submitted: 50
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 34119680 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 34111488 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 34111488 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb8df000/0x0/0x4ffc00000, data 0x309b4c0/0x322c000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3300052 data_alloc: 218103808 data_used: 12017100
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 34111488 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 34111488 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 34103296 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 34095104 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 34086912 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb8dd000/0x0/0x4ffc00000, data 0x309e4c0/0x322f000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3297812 data_alloc: 218103808 data_used: 12021196
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 34062336 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 34021376 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 34021376 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 34021376 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.515048027s of 11.809061050s, submitted: 156
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a8000 session 0x564bd08d6c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab400 session 0x564bcd675a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4c00 session 0x564bcdb70a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb8da000/0x0/0x4ffc00000, data 0x30a14c0/0x3232000, compress 0x0/0x0/0x0, omap 0x4664b, meta 0x110a99b5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 34013184 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5000 session 0x564bca945340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf70800 session 0x564bcefdbdc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3214588 data_alloc: 218103808 data_used: 9016780
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 33980416 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4400 session 0x564bd0fa7c00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec86b000/0x0/0x4ffc00000, data 0x1d0445e/0x1e94000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3152718 data_alloc: 218103808 data_used: 8879564
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec86b000/0x0/0x4ffc00000, data 0x1d0445e/0x1e94000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c4800 session 0x564bccf3aa80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6000 session 0x564bcc2b5dc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 33972224 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29ac00 session 0x564bd0fa6a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec86b000/0x0/0x4ffc00000, data 0x1d0445e/0x1e94000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec86b000/0x0/0x4ffc00000, data 0x1d0445e/0x1e94000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 33955840 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 33955840 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048120 data_alloc: 218103808 data_used: 6116698
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048120 data_alloc: 218103808 data_used: 6116698
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 33947648 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048120 data_alloc: 218103808 data_used: 6116698
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 33939456 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048120 data_alloc: 218103808 data_used: 6116698
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 33931264 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 33931264 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 33931264 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 33931264 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2decc00 session 0x564bd2689340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccd90540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bd38476c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bcc88a1c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 33931264 heap: 335527936 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.519496918s of 31.023237228s, submitted: 70
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2ded800 session 0x564bd38468c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bd40a9180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bd3846e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bcb083180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2decc00 session 0x564bcea96380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115619 data_alloc: 218103808 data_used: 6116698
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed097000/0x0/0x4ffc00000, data 0x18e44bf/0x1a75000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 43687936 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd51c5000 session 0x564bcd6728c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccd908c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bcc229500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bcea961c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2decc00 session 0x564bd37c0e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c4400 session 0x564bcea6a700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151123 data_alloc: 218103808 data_used: 6116698
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 46833664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bcea6afc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 46833664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bcea96000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbda000/0x0/0x4ffc00000, data 0x1da14bf/0x1f32000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bccddda40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 46899200 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 46899200 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 46899200 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3213774 data_alloc: 234881024 data_used: 16147817
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 46694400 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 46694400 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.585988045s of 12.116201401s, submitted: 52
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccdaddc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbd9000/0x0/0x4ffc00000, data 0x1da14cf/0x1f33000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 46546944 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x1dc54cf/0x1f57000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 46301184 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c5800 session 0x564bd3846c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccebae00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bd2688e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bccf4c8c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301932544 unmapped: 45146112 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x1dc54cf/0x1f57000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bd0fa68c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc730800 session 0x564bcd674a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc730800 session 0x564bccf4cfc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3282758 data_alloc: 234881024 data_used: 20738921
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bca8d5500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bcc2b4000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ac800 session 0x564bccd956c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec517000/0x0/0x4ffc00000, data 0x24634cf/0x25f5000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec517000/0x0/0x4ffc00000, data 0x24634cf/0x25f5000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bceffac40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323618 data_alloc: 234881024 data_used: 20788073
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bccf4ca80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303890432 unmapped: 43188224 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bcd6756c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 43180032 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.031028748s of 10.043293953s, submitted: 53
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf18000/0x0/0x4ffc00000, data 0x2a60502/0x2bf4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 43081728 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306823168 unmapped: 40255488 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 38076416 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3435075 data_alloc: 234881024 data_used: 25682281
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311992320 unmapped: 35086336 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb1d7000/0x0/0x4ffc00000, data 0x3799502/0x392d000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311320576 unmapped: 35758080 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311320576 unmapped: 35758080 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb15e000/0x0/0x4ffc00000, data 0x381a502/0x39ae000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3448499 data_alloc: 234881024 data_used: 27217257
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb15e000/0x0/0x4ffc00000, data 0x381a502/0x39ae000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.520648003s of 11.550464630s, submitted: 124
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312623104 unmapped: 34455552 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313344000 unmapped: 33734656 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x3b6e502/0x3d02000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,0,19])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3496971 data_alloc: 234881024 data_used: 27254121
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313868288 unmapped: 33210368 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bd34cd180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bd37c0000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca88400 session 0x564bd37c08c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bd08d6700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313876480 unmapped: 33202176 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313925632 unmapped: 33153024 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 27820032 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea442000/0x0/0x4ffc00000, data 0x453353b/0x46c9000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,3,2])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bd37c0000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314040320 unmapped: 33038336 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcfd84800 session 0x564bcd6756c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd400 session 0x564bd37c1dc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3543038 data_alloc: 234881024 data_used: 28028281
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea425000/0x0/0x4ffc00000, data 0x455053b/0x46e6000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314040320 unmapped: 33038336 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7800 session 0x564bcb64ac40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bd0fa7a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314048512 unmapped: 33030144 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bd26881c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314146816 unmapped: 32931840 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.908327103s of 10.164081573s, submitted: 122
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bcc88a1c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318701568 unmapped: 28377088 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd400 session 0x564bcefdbc00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc7400 session 0x564bca9f8540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bd3847880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd400 session 0x564bcc229500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bcdb70540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd27af800 session 0x564bcefdac40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e99d4000/0x0/0x4ffc00000, data 0x4fa159d/0x5138000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314507264 unmapped: 32571392 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bd37c0e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3617665 data_alloc: 234881024 data_used: 28233081
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314507264 unmapped: 32571392 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314507264 unmapped: 32571392 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315170816 unmapped: 31907840 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3400 session 0x564bd40a8a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318849024 unmapped: 28229632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a9800 session 0x564bccebb6c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318849024 unmapped: 28229632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e99d0000/0x0/0x4ffc00000, data 0x4fa45f9/0x513c000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3649761 data_alloc: 234881024 data_used: 33556857
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318849024 unmapped: 28229632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf138000 session 0x564bca8d41c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ac800 session 0x564bd08d7340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccf3b180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29bc00 session 0x564bccb4efc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 28090368 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315604992 unmapped: 31473664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.126866341s of 10.051178932s, submitted: 51
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315604992 unmapped: 31473664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a9800 session 0x564bcc88ac40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315949056 unmapped: 31129600 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3579543 data_alloc: 251658240 data_used: 36066169
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea867000/0x0/0x4ffc00000, data 0x410e5c6/0x42a4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea868000/0x0/0x4ffc00000, data 0x410e5c6/0x42a4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3613059 data_alloc: 251658240 data_used: 36390742
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325566464 unmapped: 21512192 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325615616 unmapped: 21463040 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325099520 unmapped: 21979136 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325156864 unmapped: 21921792 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea368000/0x0/0x4ffc00000, data 0x460e5c6/0x47a4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325156864 unmapped: 21921792 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3618871 data_alloc: 251658240 data_used: 36960086
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325156864 unmapped: 21921792 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.973931313s of 12.332912445s, submitted: 76
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 329621504 unmapped: 17457152 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 329662464 unmapped: 17416192 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330014720 unmapped: 17063936 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330039296 unmapped: 17039360 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e99e5000/0x0/0x4ffc00000, data 0x4f905c6/0x5126000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3684395 data_alloc: 251658240 data_used: 38274902
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330072064 unmapped: 17006592 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bd34ccfc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccf4c8c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330072064 unmapped: 17006592 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf70800 session 0x564bd34cd6c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328065024 unmapped: 19013632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328065024 unmapped: 19013632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ac800 session 0x564bd0fa6e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf138000 session 0x564bccddc700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328065024 unmapped: 19013632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea3ec000/0x0/0x4ffc00000, data 0x458d531/0x4720000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92f800 session 0x564bcd672fc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406949 data_alloc: 234881024 data_used: 21413057
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea3ec000/0x0/0x4ffc00000, data 0x458d531/0x4720000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 23830528 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.216124535s of 10.042542458s, submitted: 195
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2decc00 session 0x564bcea6a540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bd34cd500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eaef3000/0x0/0x4ffc00000, data 0x31ac4cf/0x333e000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 23846912 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bccf4c700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 25698304 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 25698304 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec876000/0x0/0x4ffc00000, data 0x210544e/0x2294000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77c400 session 0x564bcd674e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505000 session 0x564bccd95880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 25698304 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3111213 data_alloc: 218103808 data_used: 6241262
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb1c000/0x0/0x4ffc00000, data 0xe6144e/0xff0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb504c00 session 0x564bccb4f880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103637 data_alloc: 218103808 data_used: 6136814
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103637 data_alloc: 218103808 data_used: 6136814
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103637 data_alloc: 218103808 data_used: 6136814
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311730176 unmapped: 35348480 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.195335388s of 20.409894943s, submitted: 46
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325222400 unmapped: 21856256 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb6800 session 0x564bccd94c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bccebb6c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb504c00 session 0x564bd08d7340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505000 session 0x564bccb4efc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77c400 session 0x564bccddc700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece5c000/0x0/0x4ffc00000, data 0x1b204b0/0x1cb0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece5c000/0x0/0x4ffc00000, data 0x1b204b0/0x1cb0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184004 data_alloc: 218103808 data_used: 6136814
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd87a800 session 0x564bcc229180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece5c000/0x0/0x4ffc00000, data 0x1b204b0/0x1cb0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [0,0,0,0,0,0,0,0,5,2])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bcea6a540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb78400 session 0x564bd34cc380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311828480 unmapped: 48390144 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf030c00 session 0x564bd2688700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4abc00 session 0x564bccd90380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf030000 session 0x564bd37c08c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311582720 unmapped: 48635904 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338534 data_alloc: 234881024 data_used: 19637230
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2ccc00 session 0x564bcb0821c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a400 session 0x564bcb082e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bcefdb880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338534 data_alloc: 234881024 data_used: 19637230
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab400 session 0x564bca8d4e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 316588032 unmapped: 43630592 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320102400 unmapped: 40116224 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320102400 unmapped: 40116224 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.074495316s of 19.268436432s, submitted: 41
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467488 data_alloc: 251658240 data_used: 33608686
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322863104 unmapped: 37355520 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb859000/0x0/0x4ffc00000, data 0x311b4b0/0x32ab000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb847000/0x0/0x4ffc00000, data 0x31274b0/0x32b7000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3478950 data_alloc: 251658240 data_used: 33842158
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb847000/0x0/0x4ffc00000, data 0x31274b0/0x32b7000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326033408 unmapped: 34185216 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb064000/0x0/0x4ffc00000, data 0x39184b0/0x3aa8000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326049792 unmapped: 34168832 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3522832 data_alloc: 251658240 data_used: 33960942
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d2000 session 0x564bceffb180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326049792 unmapped: 34168832 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 280 handle_osd_map epochs [280,281], i have 281, src has [1,281]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.465786934s of 10.867195129s, submitted: 144
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bcc2d2000 session 0x564bd08d7880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bcb75a400 session 0x564bcea6bdc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bcb4ab400 session 0x564bcb50d6c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 344834048 unmapped: 15384576 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bce2ccc00 session 0x564bd08d76c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 18046976 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333619200 unmapped: 26599424 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 282 ms_handle_reset con 0x564bd12e3c00 session 0x564bccddda40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333627392 unmapped: 26591232 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bcb4ab400 session 0x564bccdaddc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3678431 data_alloc: 251658240 data_used: 42726398
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 283 heartbeat osd_stat(store_statfs(0x4e9b73000/0x0/0x4ffc00000, data 0x4e02cae/0x4f97000, compress 0x0/0x0/0x0, omap 0x47530, meta 0x110a8ad0), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bcb75a400 session 0x564bccddc1c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333643776 unmapped: 26574848 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bcc2d2000 session 0x564bd3846c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bce2ccc00 session 0x564bd40a8c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 26558464 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 26558464 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 26558464 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333676544 unmapped: 26542080 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3680133 data_alloc: 251658240 data_used: 42726398
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333676544 unmapped: 26542080 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6d000/0x0/0x4ffc00000, data 0x4e062e5/0x4f9d000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333676544 unmapped: 26542080 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333692928 unmapped: 26525696 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333692928 unmapped: 26525696 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd1152000 session 0x564bcb0836c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bcea96000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bccb4f500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bccf4da40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd2dedc00 session 0x564bccf3a000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce2ccc00 session 0x564bca9f8380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce2ccc00 session 0x564bd40a9500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bccd956c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bd08d6c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6d000/0x0/0x4ffc00000, data 0x4e062e5/0x4f9d000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 26517504 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.470180511s of 13.830414772s, submitted: 70
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bcd92ba40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd2dedc00 session 0x564bcea97340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bcb50d880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bca944c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bd0fa6540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3730339 data_alloc: 251658240 data_used: 42726398
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce2ccc00 session 0x564bccdac540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933f000/0x0/0x4ffc00000, data 0x56362e5/0x57cd000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd12e3800 session 0x564bcd672a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd12e3800 session 0x564bcd6728c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bd08d6e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bcea6aa80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bd08d68c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3732101 data_alloc: 251658240 data_used: 42726398
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933e000/0x0/0x4ffc00000, data 0x56362f5/0x57ce000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb0a8800 session 0x564bd08d6000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bd3846700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 23896064 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933d000/0x0/0x4ffc00000, data 0x5636305/0x57cf000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 344580096 unmapped: 15638528 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 11714560 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3862183 data_alloc: 268435456 data_used: 61940222
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 7340032 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 7340032 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933d000/0x0/0x4ffc00000, data 0x5636305/0x57cf000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 7307264 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 7307264 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 7307264 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.061373711s of 15.209449768s, submitted: 14
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3862183 data_alloc: 268435456 data_used: 61940222
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933b000/0x0/0x4ffc00000, data 0x5637305/0x57d0000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3875583 data_alloc: 268435456 data_used: 64652286
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 6529024 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 6168576 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 4472832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359596032 unmapped: 3768320 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e892e000/0x0/0x4ffc00000, data 0x6044305/0x61dd000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 3522560 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3951443 data_alloc: 268435456 data_used: 66069502
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8908000/0x0/0x4ffc00000, data 0x606b305/0x6204000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.538804054s of 14.776687622s, submitted: 86
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bd2689500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bca9f9500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3950891 data_alloc: 268435456 data_used: 66069502
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 3481600 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb0a9800 session 0x564bccd90380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8905000/0x0/0x4ffc00000, data 0x606e305/0x6207000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 3457024 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9942000/0x0/0x4ffc00000, data 0x4e072f5/0x4f9f000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3792688 data_alloc: 268435456 data_used: 57512958
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcf138000 session 0x564bd34cddc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc154c00 session 0x564bccdacfc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9942000/0x0/0x4ffc00000, data 0x4e072f5/0x4f9f000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb0a9800 session 0x564bd3846fc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3597436 data_alloc: 251658240 data_used: 43418622
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb0c1000/0x0/0x4ffc00000, data 0x38b32f5/0x3a4b000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb0c1000/0x0/0x4ffc00000, data 0x38b32f5/0x3a4b000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598844 data_alloc: 251658240 data_used: 43578366
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce92f800 session 0x564bccb4f880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 284 handle_osd_map epochs [284,285], i have 285, src has [1,285]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.184745789s of 16.264896393s, submitted: 22
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bcc29bc00 session 0x564bcd92b6c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bca63e000 session 0x564bccd94c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bd2830400 session 0x564bd3847dc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 285 heartbeat osd_stat(store_statfs(0x4eb0c1000/0x0/0x4ffc00000, data 0x38b32f5/0x3a4b000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 369352704 unmapped: 6619136 heap: 375971840 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bcb0a9800 session 0x564bcea97dc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49987584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 286 heartbeat osd_stat(store_statfs(0x4e84b7000/0x0/0x4ffc00000, data 0x64b9e91/0x6653000, compress 0x0/0x0/0x0, omap 0x47b9e, meta 0x110a8462), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 286 ms_handle_reset con 0x564bcc154c00 session 0x564bccf3aa80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355352576 unmapped: 50036736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355352576 unmapped: 50036736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e84b4000/0x0/0x4ffc00000, data 0x64bba81/0x6656000, compress 0x0/0x0/0x0, omap 0x47c79, meta 0x110a8387), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3859240 data_alloc: 251658240 data_used: 48092158
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 50012160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 50012160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 50003968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 288 ms_handle_reset con 0x564bcb543000 session 0x564bccebbc00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 49954816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb0b3000/0x0/0x4ffc00000, data 0x38ba245/0x3a57000, compress 0x0/0x0/0x0, omap 0x481df, meta 0x110a7e21), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 49954816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3640010 data_alloc: 251658240 data_used: 48092158
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 289 ms_handle_reset con 0x564bce2ccc00 session 0x564bcc2b56c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 289 ms_handle_reset con 0x564bcb015400 session 0x564bcdb70e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.908384323s of 10.784473419s, submitted: 63
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 289 heartbeat osd_stat(store_statfs(0x4eb0b0000/0x0/0x4ffc00000, data 0x38bbce0/0x3a5a000, compress 0x0/0x0/0x0, omap 0x4866c, meta 0x110a7994), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 289 ms_handle_reset con 0x564bce92f000 session 0x564bcb082380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 289 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x38bbcd0/0x3a59000, compress 0x0/0x0/0x0, omap 0x4866c, meta 0x110a7994), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 49938432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 289 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x38bbcd0/0x3a59000, compress 0x0/0x0/0x0, omap 0x4866c, meta 0x110a7994), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3639513 data_alloc: 251658240 data_used: 48092771
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 49938432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 49938432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 290 ms_handle_reset con 0x564bcccb5000 session 0x564bccd94540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 337772544 unmapped: 67616768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 337772544 unmapped: 67616768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec58e000/0x0/0x4ffc00000, data 0x23de84e/0x257b000, compress 0x0/0x0/0x0, omap 0x48749, meta 0x110a78b7), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 290 ms_handle_reset con 0x564bca5c5400 session 0x564bd34cc000
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 290 ms_handle_reset con 0x564bce92f400 session 0x564bd37c0540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 67878912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 291 ms_handle_reset con 0x564bce2cdc00 session 0x564bcd674e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194344 data_alloc: 218103808 data_used: 3716593
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed76f000/0x0/0x4ffc00000, data 0xe50287/0xfed000, compress 0x0/0x0/0x0, omap 0x48bd9, meta 0x110a7427), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.671979904s of 12.600020409s, submitted: 72
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197054 data_alloc: 218103808 data_used: 3720591
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edb1a000/0x0/0x4ffc00000, data 0xe51d06/0xff0000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x110a7349), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197054 data_alloc: 218103808 data_used: 3720591
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edb1a000/0x0/0x4ffc00000, data 0xe51d06/0xff0000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x110a7349), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcca7ec00 session 0x564bcb0821c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bccf73000 session 0x564bd0fa6a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc200c00 session 0x564bd37c0380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29bc00 session 0x564bd2689340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc200c00 session 0x564bcdb70a80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29bc00 session 0x564bd2688700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcca7ec00 session 0x564bcb50c1c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bccf73000 session 0x564bcb50c380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bce2cdc00 session 0x564bcb64bdc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3233345 data_alloc: 218103808 data_used: 3720591
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: mgrc ms_handle_reset ms_handle_reset con 0x564bcb543800
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: mgrc handle_mgr_configure stats_period=5
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcb543c00 session 0x564bcc2b5500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec493000/0x0/0x4ffc00000, data 0x1338d78/0x14d9000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcb4ac400 session 0x564bcc967a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc200c00 session 0x564bcea97180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.606670380s of 14.736115456s, submitted: 46
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29bc00 session 0x564bd34cc380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 85032960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29b800 session 0x564bd08d7500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3233898 data_alloc: 218103808 data_used: 3720607
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320339968 unmapped: 85049344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263082 data_alloc: 218103808 data_used: 8572831
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263082 data_alloc: 218103808 data_used: 8572831
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.413036346s of 13.421483994s, submitted: 4
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323371008 unmapped: 82018304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec365000/0x0/0x4ffc00000, data 0x1465d9b/0x1607000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321503232 unmapped: 83886080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321150976 unmapped: 84238336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebda1000/0x0/0x4ffc00000, data 0x1a29d9b/0x1bcb000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [0,0,0,0,0,4])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3313756 data_alloc: 218103808 data_used: 9641375
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebd59000/0x0/0x4ffc00000, data 0x1a71d9b/0x1c13000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314012 data_alloc: 218103808 data_used: 9649567
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321544192 unmapped: 83845120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebd59000/0x0/0x4ffc00000, data 0x1a71d9b/0x1c13000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321544192 unmapped: 83845120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.950139046s of 10.105495453s, submitted: 85
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321552384 unmapped: 83836928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321552384 unmapped: 83836928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314116 data_alloc: 218103808 data_used: 9674143
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebd18000/0x0/0x4ffc00000, data 0x1ab2d9b/0x1c54000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcca7ec00 session 0x564bceffa380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bccf73000 session 0x564bcefdaa80
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcb543c00 session 0x564bca8d4e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321560576 unmapped: 83828736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc200c00 session 0x564bd08d7880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3321978 data_alloc: 218103808 data_used: 9670047
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321552384 unmapped: 83836928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ebd11000/0x0/0x4ffc00000, data 0x1ab49a9/0x1c59000, compress 0x0/0x0/0x0, omap 0x4914a, meta 0x12246eb6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc29bc00 session 0x564bccf3a1c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc29b800 session 0x564bccf3b880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 70385664 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.576875687s of 10.103278160s, submitted: 32
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 327180288 unmapped: 78209024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 76718080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc29bc00 session 0x564bccd91a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 75661312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424611 data_alloc: 234881024 data_used: 16704927
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322871296 unmapped: 82518016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 294 ms_handle_reset con 0x564bcb543c00 session 0x564bccebba40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 294 heartbeat osd_stat(store_statfs(0x4e9de4000/0x0/0x4ffc00000, data 0x2841537/0x29e6000, compress 0x0/0x0/0x0, omap 0x493eb, meta 0x133e6c15), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 294 heartbeat osd_stat(store_statfs(0x4e9de4000/0x0/0x4ffc00000, data 0x2841537/0x29e6000, compress 0x0/0x0/0x0, omap 0x493eb, meta 0x133e6c15), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcc200c00 session 0x564bccf3b500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 295 heartbeat osd_stat(store_statfs(0x4e9de0000/0x0/0x4ffc00000, data 0x2843151/0x29ea000, compress 0x0/0x0/0x0, omap 0x4975b, meta 0x133e68a5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcca7ec00 session 0x564bcc88b180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcb543c00 session 0x564bcb64a540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429181 data_alloc: 234881024 data_used: 16704943
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcc200c00 session 0x564bd0fa6700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322887680 unmapped: 82501632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 295 heartbeat osd_stat(store_statfs(0x4e9de2000/0x0/0x4ffc00000, data 0x2843151/0x29ea000, compress 0x0/0x0/0x0, omap 0x4975b, meta 0x133e68a5), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.745933533s of 10.679224014s, submitted: 58
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314638336 unmapped: 90750976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3310157 data_alloc: 218103808 data_used: 3724703
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314646528 unmapped: 90742784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bcefdb880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x1be3b5e/0x1d8a000, compress 0x0/0x0/0x0, omap 0x49c01, meta 0x133e63ff), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x1be3b3b/0x1d89000, compress 0x0/0x0/0x0, omap 0x49c01, meta 0x133e63ff), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3309609 data_alloc: 218103808 data_used: 3720607
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x1be3b3b/0x1d89000, compress 0x0/0x0/0x0, omap 0x49c01, meta 0x133e63ff), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29bc00 session 0x564bccb4f500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bccf73000 session 0x564bca3cd880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb543c00 session 0x564bccddc1c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc200c00 session 0x564bd3846540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.256450653s of 12.440944672s, submitted: 37
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29bc00 session 0x564bca8d4700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bcc228540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311357 data_alloc: 218103808 data_used: 3728764
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcccb6400 session 0x564bccd95500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb543c00 session 0x564bca85d6c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc200c00 session 0x564bccdaddc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bceffb180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314580992 unmapped: 90808320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29bc00 session 0x564bcc88a540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb4ab800 session 0x564bcd674c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb543c00 session 0x564bcb50c8c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4e9172000/0x0/0x4ffc00000, data 0x34b2bad/0x365a000, compress 0x0/0x0/0x0, omap 0x49f17, meta 0x133e60e9), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc200c00 session 0x564bcc229180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bcc2b4380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 296 handle_osd_map epochs [296,297], i have 297, src has [1,297]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 297 ms_handle_reset con 0x564bcdf3d400 session 0x564bccd90fc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e916e000/0x0/0x4ffc00000, data 0x34b472b/0x365b000, compress 0x0/0x0/0x0, omap 0x4a2f8, meta 0x133e5d08), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422141 data_alloc: 218103808 data_used: 9147162
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e9efa000/0x0/0x4ffc00000, data 0x272972b/0x28d0000, compress 0x0/0x0/0x0, omap 0x4a2e3, meta 0x133e5d1d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467581 data_alloc: 234881024 data_used: 16831258
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e9efa000/0x0/0x4ffc00000, data 0x272972b/0x28d0000, compress 0x0/0x0/0x0, omap 0x4a2e3, meta 0x133e5d1d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.041012764s of 13.806105614s, submitted: 93
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3470355 data_alloc: 234881024 data_used: 16831258
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325451776 unmapped: 79937536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4e9ef7000/0x0/0x4ffc00000, data 0x272b1aa/0x28d3000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491451 data_alloc: 234881024 data_used: 19464474
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4e9ef7000/0x0/0x4ffc00000, data 0x272b1aa/0x28d3000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325844992 unmapped: 79544320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326017024 unmapped: 79372288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326017024 unmapped: 79372288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.217539787s of 10.448942184s, submitted: 41
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29bc00 session 0x564bd0fa6fc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc72f800 session 0x564bcd675dc0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326017024 unmapped: 79372288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcb543c00 session 0x564bca944540
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318521344 unmapped: 86867968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318521344 unmapped: 86867968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc200c00 session 0x564bceffa380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29b800 session 0x564bcd674380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcdf3d400 session 0x564bd40a8380
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcb543c00 session 0x564bcc228e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.690319061s of 43.858875275s, submitted: 23
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc200c00 session 0x564bca9f81c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29b800 session 0x564bca944e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc72f800 session 0x564bd0fa6c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcca88800 session 0x564bccf3a1c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcb543c00 session 0x564bca85c8c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3290573 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3290573 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3290573 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc200c00 session 0x564bccddda40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 86474752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 86474752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314125 data_alloc: 218103808 data_used: 7700648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29b800 session 0x564bcb64b880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc72f800 session 0x564bccf3b500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.886404037s of 19.264944077s, submitted: 16
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcd87ac00 session 0x564bccf3a8c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313589760 unmapped: 91799552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.420921326s of 58.578922272s, submitted: 6
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [0,0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313622528 unmapped: 91766784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eb7c5000/0x0/0x4ffc00000, data 0xe5dd28/0x1005000, compress 0x0/0x0/0x0, omap 0x4a87b, meta 0x133e5785), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 299 ms_handle_reset con 0x564bcb543c00 session 0x564bca944e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253803 data_alloc: 218103808 data_used: 3732648
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eb7c5000/0x0/0x4ffc00000, data 0xe5dbf4/0x1003000, compress 0x0/0x0/0x0, omap 0x4a19b, meta 0x133e5e65), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 91660288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312721408 unmapped: 92667904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312721408 unmapped: 92667904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4801.0 total, 600.0 interval
Cumulative writes: 37K writes, 153K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
Cumulative WAL: 37K writes, 13K syncs, 2.86 writes per sync, written: 0.16 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3476 writes, 15K keys, 3476 commit groups, 1.0 writes per commit group, ingest: 17.17 MB, 0.03 MB/s
Interval WAL: 3476 writes, 1293 syncs, 2.69 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312786944 unmapped: 92602368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 ms_handle_reset con 0x564bcc200c00 session 0x564bd40a9a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 ms_handle_reset con 0x564bcc29b800 session 0x564bcb64ba40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315449344 unmapped: 89939968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315449344 unmapped: 89939968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315449344 unmapped: 89939968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 8389765
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315457536 unmapped: 89931776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 8389765
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 123.139335632s of 125.560668945s, submitted: 63
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315473920 unmapped: 89915392 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315490304 unmapped: 89899008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 301 ms_handle_reset con 0x564bcc72f800 session 0x564bcc967880
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313696256 unmapped: 91693056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ebc34000/0x0/0x4ffc00000, data 0x9f1230/0xb97000, compress 0x0/0x0/0x0, omap 0x4a737, meta 0x133e58c9), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3228506 data_alloc: 218103808 data_used: 3740719
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 302 ms_handle_reset con 0x564bcd87ac00 session 0x564bccebb500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309805056 unmapped: 95584256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ec430000/0x0/0x4ffc00000, data 0x1f4865/0x39a000, compress 0x0/0x0/0x0, omap 0x4acd7, meta 0x133e5329), peers [0,1] op hist [0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305979392 unmapped: 99409920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ec430000/0x0/0x4ffc00000, data 0x1f4865/0x39a000, compress 0x0/0x0/0x0, omap 0x4acd7, meta 0x133e5329), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3173403 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305979392 unmapped: 99409920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305979392 unmapped: 99409920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.102717400s of 11.751947403s, submitted: 81
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305987584 unmapped: 99401728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305987584 unmapped: 99401728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 304 heartbeat osd_stat(store_statfs(0x4ec42f000/0x0/0x4ffc00000, data 0x1f62e4/0x39d000, compress 0x0/0x0/0x0, omap 0x4adb3, meta 0x133e524d), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306003968 unmapped: 99385344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3175457 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306012160 unmapped: 99377152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 304 heartbeat osd_stat(store_statfs(0x4ec42f000/0x0/0x4ffc00000, data 0x1f62e4/0x39d000, compress 0x0/0x0/0x0, omap 0x4adb3, meta 0x133e524d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306012160 unmapped: 99377152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306012160 unmapped: 99377152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 304 handle_osd_map epochs [304,305], i have 305, src has [1,305]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 ms_handle_reset con 0x564bcb543c00 session 0x564bd2689340
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23038 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread fragmentation_score=0.004046 took=0.000057s
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307159040 unmapped: 98230272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 108.550910950s of 110.262329102s, submitted: 108
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [0,0,0,0,1])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307208192 unmapped: 98181120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307216384 unmapped: 98172928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 305 handle_osd_map epochs [305,306], i have 306, src has [1,306]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 306 ms_handle_reset con 0x564bcc200c00 session 0x564bcd675a40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307265536 unmapped: 98123776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3234062 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307273728 unmapped: 98115584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 307 ms_handle_reset con 0x564bcc29b800 session 0x564bcefdb500
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307314688 unmapped: 98074624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307339264 unmapped: 98050048 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.985069275s of 32.682743073s, submitted: 30
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3265706 data_alloc: 218103808 data_used: 140300
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307363840 unmapped: 98025472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 308 ms_handle_reset con 0x564bcc72f800 session 0x564bccb4e700
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 308 heartbeat osd_stat(store_statfs(0x4eb7ac000/0x0/0x4ffc00000, data 0xe6d231/0x101e000, compress 0x0/0x0/0x0, omap 0x4be07, meta 0x133e41f9), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 308 heartbeat osd_stat(store_statfs(0x4eb7ac000/0x0/0x4ffc00000, data 0xe6d20e/0x101d000, compress 0x0/0x0/0x0, omap 0x4be07, meta 0x133e41f9), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263809 data_alloc: 218103808 data_used: 140284
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307421184 unmapped: 97968128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307478528 unmapped: 97910784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307478528 unmapped: 97910784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307478528 unmapped: 97910784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307494912 unmapped: 97894400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307494912 unmapped: 97894400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307494912 unmapped: 97894400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307519488 unmapped: 97869824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307519488 unmapped: 97869824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307519488 unmapped: 97869824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307544064 unmapped: 97845248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307544064 unmapped: 97845248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307552256 unmapped: 97837056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307552256 unmapped: 97837056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307683328 unmapped: 97705984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307716096 unmapped: 97673216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307716096 unmapped: 97673216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 97615872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 97615872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307822592 unmapped: 97566720 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307822592 unmapped: 97566720 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307822592 unmapped: 97566720 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307830784 unmapped: 97558528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 97484800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 97484800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 97484800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 97435648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308002816 unmapped: 97386496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308002816 unmapped: 97386496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308011008 unmapped: 97378304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308011008 unmapped: 97378304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308027392 unmapped: 97361920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308043776 unmapped: 97345536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308043776 unmapped: 97345536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308051968 unmapped: 97337344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308051968 unmapped: 97337344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308051968 unmapped: 97337344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308060160 unmapped: 97329152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308060160 unmapped: 97329152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308060160 unmapped: 97329152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308092928 unmapped: 97296384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 97247232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 97247232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 97206272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 97206272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 97206272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 5401.0 total, 600.0 interval
                                              Cumulative writes: 38K writes, 155K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                              Cumulative WAL: 38K writes, 13K syncs, 2.85 writes per sync, written: 0.16 GB, 0.03 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 477 writes, 1189 keys, 477 commit groups, 1.0 writes per commit group, ingest: 0.64 MB, 0.00 MB/s
                                              Interval WAL: 477 writes, 216 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308199424 unmapped: 97189888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: mgrc ms_handle_reset ms_handle_reset con 0x564bce92f400
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: mgrc handle_mgr_configure stats_period=5
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308314112 unmapped: 97075200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 ms_handle_reset con 0x564bcb4ac400 session 0x564bcb50dc00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 nova_compute[238941]: 2026-01-27 14:47:17.491 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 97058816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 97058816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 97058816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 434.266174316s of 434.955169678s, submitted: 42
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 309 handle_osd_map epochs [309,310], i have 310, src has [1,310]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 310 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 310 ms_handle_reset con 0x564bcc200c00 session 0x564bccd90e00
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 96952320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 96952320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227396 data_alloc: 218103808 data_used: 144345
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 96952320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 311 ms_handle_reset con 0x564bcc29b800 session 0x564bd1a7b6c0
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308461568 unmapped: 96927744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ec418000/0x0/0x4ffc00000, data 0x202407/0x3b2000, compress 0x0/0x0/0x0, omap 0x4c713, meta 0x133e38ed), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3206379 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ec419000/0x0/0x4ffc00000, data 0x20242a/0x3b3000, compress 0x0/0x0/0x0, omap 0x4c713, meta 0x133e38ed), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314171392 unmapped: 91217920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309534720 unmapped: 95854592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 ms_handle_reset con 0x564bcc72f800 session 0x564bde13f180
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3238116 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3238116 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.222564697s of 18.768489838s, submitted: 94
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308658176 unmapped: 96731136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308658176 unmapped: 96731136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.976013184s of 45.642684937s, submitted: 114
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 314 ms_handle_reset con 0x564bcd87ac00 session 0x564bdd5f8c40
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ebf9e000/0x0/0x4ffc00000, data 0x207612/0x3bb000, compress 0x0/0x0/0x0, omap 0x4d4af, meta 0x133e2b51), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217266 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ebf9e000/0x0/0x4ffc00000, data 0x207612/0x3bb000, compress 0x0/0x0/0x0, omap 0x4d4af, meta 0x133e2b51), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217266 data_alloc: 218103808 data_used: 148406
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308748288 unmapped: 96641024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308748288 unmapped: 96641024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308748288 unmapped: 96641024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308789248 unmapped: 96600064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308822016 unmapped: 96567296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308822016 unmapped: 96567296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308822016 unmapped: 96567296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308838400 unmapped: 96550912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308871168 unmapped: 96518144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308903936 unmapped: 96485376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: do_command 'config diff' '{prefix=config diff}'
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: do_command 'config show' '{prefix=config show}'
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308920320 unmapped: 96468992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308789248 unmapped: 96600064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:17 np0005597378 ceph-osd[88005]: do_command 'log dump' '{prefix=log dump}'
Jan 27 09:47:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 27 09:47:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1918225937' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:47:17 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23042 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 09:47:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 09:47:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 27 09:47:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/728652625' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 27 09:47:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:47:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:47:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:47:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:47:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:47:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:47:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:47:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:47:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:47:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:47:18 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23046 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:19 np0005597378 nova_compute[238941]: 2026-01-27 14:47:19.003 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:20 np0005597378 podman[395522]: 2026-01-27 14:47:20.667760152 +0000 UTC m=+0.113401453 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 09:47:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b97e7400 session 0x5640b6b3c000
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 78921728 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b6eea1c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6eff800 session 0x5640b9734e00
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7154400 session 0x5640b90e0380
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.533633232s of 14.548633575s, submitted: 8
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eeb35000/0x0/0x4ffc00000, data 0x2187987/0x2317000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [1,0,0,6,10])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b6b468c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9569c00 session 0x5640b54fd340
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b97e7400 session 0x5640b6b41180
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fa400 session 0x5640ba1cafc0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7154400 session 0x5640b9680c40
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462274 data_alloc: 234881024 data_used: 22724844
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 72417280 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 71557120 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed797000/0x0/0x4ffc00000, data 0x351d9f9/0x36af000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 71548928 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 71548928 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 71548928 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472196 data_alloc: 234881024 data_used: 22835436
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 71548928 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 71532544 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed797000/0x0/0x4ffc00000, data 0x351d9f9/0x36af000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 71532544 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 71532544 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300040192 unmapped: 67035136 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3529300 data_alloc: 234881024 data_used: 32349436
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300040192 unmapped: 67035136 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed797000/0x0/0x4ffc00000, data 0x351d9f9/0x36af000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 67002368 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed797000/0x0/0x4ffc00000, data 0x351d9f9/0x36af000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 67002368 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.448032379s of 13.833637238s, submitted: 146
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 67002368 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01800 session 0x5640b9193500
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f9c00 session 0x5640b920c380
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed5b3000/0x0/0x4ffc00000, data 0x3706a22/0x3899000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300523520 unmapped: 66551808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3554177 data_alloc: 234881024 data_used: 32349436
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300523520 unmapped: 66551808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300523520 unmapped: 66551808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300523520 unmapped: 66551808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 300523520 unmapped: 66551808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed4b1000/0x0/0x4ffc00000, data 0x3808a5b/0x399b000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302030848 unmapped: 65044480 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3609885 data_alloc: 234881024 data_used: 33374460
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305397760 unmapped: 61677568 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305545216 unmapped: 61530112 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305545216 unmapped: 61530112 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305545216 unmapped: 61530112 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305545216 unmapped: 61530112 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ecbf4000/0x0/0x4ffc00000, data 0x40bca5b/0x424f000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3618507 data_alloc: 234881024 data_used: 33571068
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305545216 unmapped: 61530112 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.140190125s of 12.468670845s, submitted: 126
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ecbf4000/0x0/0x4ffc00000, data 0x40bca5b/0x424f000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306184192 unmapped: 60891136 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306888704 unmapped: 60186624 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306888704 unmapped: 60186624 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306888704 unmapped: 60186624 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3630171 data_alloc: 234881024 data_used: 35418364
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306888704 unmapped: 60186624 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ecbf4000/0x0/0x4ffc00000, data 0x40bca5b/0x424f000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306888704 unmapped: 60186624 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ecbf4000/0x0/0x4ffc00000, data 0x40bca5b/0x424f000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 60850176 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ecbfb000/0x0/0x4ffc00000, data 0x40bda5b/0x4250000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 60850176 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 60850176 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3623875 data_alloc: 234881024 data_used: 35422460
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 60850176 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 60850176 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.487917900s of 11.496400833s, submitted: 4
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 57704448 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ecbfb000/0x0/0x4ffc00000, data 0x40bda5b/0x4250000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [0,0,0,1,1])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310304768 unmapped: 56770560 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310509568 unmapped: 56565760 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079800 session 0x5640b7ca9880
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9569c00 session 0x5640b920d340
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9569c00 session 0x5640b71081c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01800 session 0x5640b98b5880
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3682975 data_alloc: 234881024 data_used: 36983036
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310509568 unmapped: 56565760 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079800 session 0x5640b98b21c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec363000/0x0/0x4ffc00000, data 0x4955a5b/0x4ae8000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310566912 unmapped: 56508416 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310566912 unmapped: 56508416 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310566912 unmapped: 56508416 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebb86000/0x0/0x4ffc00000, data 0x5133a5b/0x52c6000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310566912 unmapped: 56508416 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3724287 data_alloc: 234881024 data_used: 36987132
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310566912 unmapped: 56508416 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebb84000/0x0/0x4ffc00000, data 0x5135a5b/0x52c8000, compress 0x0/0x0/0x0, omap 0x60699, meta 0xed4f967), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310566912 unmapped: 56508416 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310583296 unmapped: 56492032 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310591488 unmapped: 56483840 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.196700096s of 11.650994301s, submitted: 128
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3882c00 session 0x5640b9680380
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310599680 unmapped: 56475648 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3728408 data_alloc: 234881024 data_used: 36987644
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 310599680 unmapped: 56475648 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd6c00 session 0x5640b71fd6c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebb82000/0x0/0x4ffc00000, data 0x5137a5b/0x52ca000, compress 0x0/0x0/0x0, omap 0x60c5d, meta 0xed4f3a3), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01800 session 0x5640ba1ca8c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306831360 unmapped: 60243968 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306831360 unmapped: 60243968 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec4db000/0x0/0x4ffc00000, data 0x45b19f9/0x4743000, compress 0x0/0x0/0x0, omap 0x6111d, meta 0xed4eee3), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306831360 unmapped: 60243968 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306831360 unmapped: 60243968 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3640956 data_alloc: 234881024 data_used: 33567484
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306831360 unmapped: 60243968 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306831360 unmapped: 60243968 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec4db000/0x0/0x4ffc00000, data 0x45b19f9/0x4743000, compress 0x0/0x0/0x0, omap 0x6111d, meta 0xed4eee3), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 58966016 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec4db000/0x0/0x4ffc00000, data 0x45b19f9/0x4743000, compress 0x0/0x0/0x0, omap 0x6111d, meta 0xed4eee3), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 58966016 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.024010658s of 10.095094681s, submitted: 37
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3cd000 session 0x5640b97bafc0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3cbc00 session 0x5640b6b076c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 61030400 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df7400 session 0x5640b9b77340
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3492677 data_alloc: 234881024 data_used: 25441558
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4edb28000/0x0/0x4ffc00000, data 0x31949d9/0x3324000, compress 0x0/0x0/0x0, omap 0x617c3, meta 0xed4e83d), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4edb28000/0x0/0x4ffc00000, data 0x31949d9/0x3324000, compress 0x0/0x0/0x0, omap 0x617c3, meta 0xed4e83d), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3492677 data_alloc: 234881024 data_used: 25441558
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 61022208 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4edb28000/0x0/0x4ffc00000, data 0x31949d9/0x3324000, compress 0x0/0x0/0x0, omap 0x617c3, meta 0xed4e83d), peers [0,2] op hist [0,0,0,0,0,0,1,6])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306421760 unmapped: 60653568 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.363403320s of 10.096714973s, submitted: 67
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306634752 unmapped: 60440576 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521465 data_alloc: 234881024 data_used: 25474326
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 59383808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 59383808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed7d0000/0x0/0x4ffc00000, data 0x34eb9d9/0x367b000, compress 0x0/0x0/0x0, omap 0x617c3, meta 0xed4e83d), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 59383808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 59383808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 59383808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521465 data_alloc: 234881024 data_used: 25474326
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 59383808 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640be672400 session 0x5640b9681dc0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0288800 session 0x5640b9b76e00
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ed7d0000/0x0/0x4ffc00000, data 0x34eb9d9/0x367b000, compress 0x0/0x0/0x0, omap 0x617c3, meta 0xed4e83d), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640be672400 session 0x5640b920ca80
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 64520192 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 64520192 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 64520192 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 64520192 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413133 data_alloc: 218103808 data_used: 18318614
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 64520192 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 64520192 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.984126091s of 13.101228714s, submitted: 47
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3da800 session 0x5640b7d4ca80
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ee306000/0x0/0x4ffc00000, data 0x29b69d9/0x2b46000, compress 0x0/0x0/0x0, omap 0x6197c, meta 0xed4e684), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 302563328 unmapped: 64512000 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995e400 session 0x5640b9517a40
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3243663 data_alloc: 218103808 data_used: 11600134
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ef305000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3243663 data_alloc: 218103808 data_used: 11600134
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ef305000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ef305000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3243663 data_alloc: 218103808 data_used: 11600134
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298172416 unmapped: 68902912 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298180608 unmapped: 68894720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298180608 unmapped: 68894720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298180608 unmapped: 68894720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ef305000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3243663 data_alloc: 218103808 data_used: 11600134
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 298180608 unmapped: 68894720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bebffc00 session 0x5640b9a15340
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995e400 session 0x5640ba1e21c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3da800 session 0x5640b7109180
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640be672400 session 0x5640b7d4ddc0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.870647430s of 18.944410324s, submitted: 32
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0288800 session 0x5640ba1d7a40
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 70705152 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 70705152 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eefa5000/0x0/0x4ffc00000, data 0x1d19967/0x1ea7000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 70705152 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 70705152 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3293423 data_alloc: 218103808 data_used: 11604132
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eefa5000/0x0/0x4ffc00000, data 0x1d19967/0x1ea7000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3293423 data_alloc: 218103808 data_used: 11604132
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eefa5000/0x0/0x4ffc00000, data 0x1d19967/0x1ea7000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 70696960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eefa5000/0x0/0x4ffc00000, data 0x1d19967/0x1ea7000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 70688768 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 70688768 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.191104889s of 13.239007950s, submitted: 18
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 70680576 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3356318 data_alloc: 218103808 data_used: 11604132
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 62136320 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ee79c000/0x0/0x4ffc00000, data 0x2522967/0x26b0000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bebff000 session 0x5640b6b06fc0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995e400 session 0x5640ba1d61c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3da800 session 0x5640ba1d76c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 70868992 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640be672400 session 0x5640b6b3cfc0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0288800 session 0x5640b90e1dc0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 70868992 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 70868992 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ee75b000/0x0/0x4ffc00000, data 0x2563967/0x26f1000, compress 0x0/0x0/0x0, omap 0x61e34, meta 0xed4e1cc), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2800 session 0x5640b97bbdc0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 70705152 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3da800 session 0x5640b6fd1500
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3347200 data_alloc: 218103808 data_used: 11604132
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 70705152 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b98b3dc0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296312832 unmapped: 70762496 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd8f7400 session 0x5640b786a700
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fb000 session 0x5640b9a148c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ee736000/0x0/0x4ffc00000, data 0x258798a/0x2716000, compress 0x0/0x0/0x0, omap 0x61b13, meta 0xed4e4ed), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 70615040 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 70615040 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 70615040 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ee711000/0x0/0x4ffc00000, data 0x25ab9ad/0x273b000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xed4dfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3439150 data_alloc: 234881024 data_used: 25950884
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ee711000/0x0/0x4ffc00000, data 0x25ab9ad/0x273b000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xed4dfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3443758 data_alloc: 234881024 data_used: 26737316
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 297951232 unmapped: 69124096 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.646347046s of 18.530632019s, submitted: 38
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4edf4f000/0x0/0x4ffc00000, data 0x2d6d9ad/0x2efd000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xed4dfa7), peers [0,2] op hist [0,0,0,0,0,1,0,0,0,37,21])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306290688 unmapped: 60784640 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 63889408 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eda6f000/0x0/0x4ffc00000, data 0x324d9ad/0x33dd000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xed4dfa7), peers [0,2] op hist [0,0,0,1])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3528430 data_alloc: 234881024 data_used: 27931300
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306724864 unmapped: 60350464 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec810000/0x0/0x4ffc00000, data 0x330c9ad/0x349c000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [0,0,0,0,0,0,0,0,6])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec810000/0x0/0x4ffc00000, data 0x330c9ad/0x349c000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [0,0,0,0,0,0,0,0,3])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305307648 unmapped: 61767680 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 305905664 unmapped: 61169664 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306176000 unmapped: 60899328 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306176000 unmapped: 60899328 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599658 data_alloc: 234881024 data_used: 28404388
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306184192 unmapped: 60891136 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 60874752 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebd25000/0x0/0x4ffc00000, data 0x3df79ad/0x3f87000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 60874752 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 60874752 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 60874752 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.914390564s of 12.325402260s, submitted: 165
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3605296 data_alloc: 234881024 data_used: 28404388
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 60874752 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306208768 unmapped: 60866560 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebd03000/0x0/0x4ffc00000, data 0x3e199ad/0x3fa9000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01400 session 0x5640b9a14e00
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01400 session 0x5640b9a80e00
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b7d17180
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fb000 session 0x5640b98b2700
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306208768 unmapped: 60866560 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3da800 session 0x5640ba1e36c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd8f7400 session 0x5640b9286fc0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd8f7400 session 0x5640b6fd01c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b9077880
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01400 session 0x5640b6eeb180
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306307072 unmapped: 60768256 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306307072 unmapped: 60768256 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3659931 data_alloc: 234881024 data_used: 28404388
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306372608 unmapped: 60702720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb738000/0x0/0x4ffc00000, data 0x43e2a1f/0x4574000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306372608 unmapped: 60702720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306372608 unmapped: 60702720 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb738000/0x0/0x4ffc00000, data 0x43e2a1f/0x4574000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3658083 data_alloc: 234881024 data_used: 28404388
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb735000/0x0/0x4ffc00000, data 0x43e5a1f/0x4577000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb735000/0x0/0x4ffc00000, data 0x43e5a1f/0x4577000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0287c00 session 0x5640ba1e2e00
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d8000 session 0x5640ba1cbc00
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3658083 data_alloc: 234881024 data_used: 28404388
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 60694528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d8800 session 0x5640b786aa80
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.240999222s of 17.293668747s, submitted: 46
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd718c00 session 0x5640b7ca8c40
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb735000/0x0/0x4ffc00000, data 0x43e5a1f/0x4577000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3660587 data_alloc: 234881024 data_used: 28404404
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306388992 unmapped: 60686336 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb710000/0x0/0x4ffc00000, data 0x440aa1f/0x459c000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306397184 unmapped: 60678144 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3661219 data_alloc: 234881024 data_used: 28404404
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 306397184 unmapped: 60678144 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0286800 session 0x5640b6b07dc0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 309174272 unmapped: 57901056 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3ca000 session 0x5640b9734380
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb0e6000/0x0/0x4ffc00000, data 0x4a33a81/0x4bc6000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311115776 unmapped: 55959552 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.070935249s of 10.887783051s, submitted: 39
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb0e6000/0x0/0x4ffc00000, data 0x4a33a81/0x4bc6000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3742080 data_alloc: 234881024 data_used: 34068660
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb0e4000/0x0/0x4ffc00000, data 0x4a34a81/0x4bc7000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d0c00 session 0x5640b9517c00
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311148544 unmapped: 55926784 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3742537 data_alloc: 234881024 data_used: 34068660
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 311156736 unmapped: 55918592 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb0e4000/0x0/0x4ffc00000, data 0x4a34aa4/0x4bc8000, compress 0x0/0x0/0x0, omap 0x62059, meta 0xfeedfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 53526528 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 316825600 unmapped: 50249728 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.375137329s of 10.545452118s, submitted: 8
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 316858368 unmapped: 50216960 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320348160 unmapped: 46727168 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3840167 data_alloc: 251658240 data_used: 40271540
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 324935680 unmapped: 42139648 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e943c000/0x0/0x4ffc00000, data 0x552eaa4/0x56c2000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 325689344 unmapped: 41385984 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327958528 unmapped: 39116800 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327958528 unmapped: 39116800 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327958528 unmapped: 39116800 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865789 data_alloc: 251658240 data_used: 41956532
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327958528 unmapped: 39116800 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9388000/0x0/0x4ffc00000, data 0x55eaaa4/0x577e000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327958528 unmapped: 39116800 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327958528 unmapped: 39116800 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9388000/0x0/0x4ffc00000, data 0x55eaaa4/0x577e000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327966720 unmapped: 39108608 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327966720 unmapped: 39108608 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865029 data_alloc: 251658240 data_used: 41956532
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 327966720 unmapped: 39108608 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.607978821s of 12.608144760s, submitted: 173
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e936d000/0x0/0x4ffc00000, data 0x560baa4/0x579f000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 328597504 unmapped: 38477824 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332881920 unmapped: 34193408 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e853d000/0x0/0x4ffc00000, data 0x643baa4/0x65cf000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 35569664 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e846f000/0x0/0x4ffc00000, data 0x6509aa4/0x669d000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 331751424 unmapped: 35323904 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3960693 data_alloc: 251658240 data_used: 43873460
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 331636736 unmapped: 35438592 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8448000/0x0/0x4ffc00000, data 0x6530aa4/0x66c4000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [0,0,0,0,0,1])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e83ff000/0x0/0x4ffc00000, data 0x6579aa4/0x670d000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 35364864 heap: 367075328 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 37806080 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 37797888 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bca50000 session 0x5640b9a816c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332038144 unmapped: 50323456 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7715000/0x0/0x4ffc00000, data 0x7263aa4/0x73f7000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4045123 data_alloc: 251658240 data_used: 44094644
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b98b5340
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332169216 unmapped: 50192384 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c8800 session 0x5640ba1cba40
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332300288 unmapped: 50061312 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3ccc00 session 0x5640ba1caa80
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.283923626s of 11.258297920s, submitted: 173
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7712000/0x0/0x4ffc00000, data 0x7266aa4/0x73fa000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332562432 unmapped: 49799168 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332570624 unmapped: 49790976 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3882800 session 0x5640ba1cb6c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332578816 unmapped: 49782784 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4051812 data_alloc: 251658240 data_used: 44819124
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334946304 unmapped: 47415296 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e76ed000/0x0/0x4ffc00000, data 0x728baa4/0x741f000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 36003840 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e76ed000/0x0/0x4ffc00000, data 0x728baa4/0x741f000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e76ed000/0x0/0x4ffc00000, data 0x728baa4/0x741f000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 36003840 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd8f6400 session 0x5640b71fda40
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 35987456 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e76ed000/0x0/0x4ffc00000, data 0x728baa4/0x741f000, compress 0x0/0x0/0x0, omap 0x62059, meta 0x1108dfa7), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c69400 session 0x5640b9681c00
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df7400 session 0x5640b6b40540
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 42655744 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3951982 data_alloc: 251658240 data_used: 48821428
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 42631168 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337174528 unmapped: 45187072 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.485056400s of 10.048836708s, submitted: 78
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d6c00 session 0x5640b9b76fc0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 45178880 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d3400 session 0x5640b6b06540
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 45170688 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9e9f000/0x0/0x4ffc00000, data 0x4add9ad/0x4c6d000, compress 0x0/0x0/0x0, omap 0x62130, meta 0x1108ded0), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9e9f000/0x0/0x4ffc00000, data 0x4add9ad/0x4c6d000, compress 0x0/0x0/0x0, omap 0x62130, meta 0x1108ded0), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 45170688 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3793986 data_alloc: 251658240 data_used: 41436093
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 45170688 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 45170688 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9e9f000/0x0/0x4ffc00000, data 0x4add9ad/0x4c6d000, compress 0x0/0x0/0x0, omap 0x62130, meta 0x1108ded0), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 45137920 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 43114496 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 337780736 unmapped: 44580864 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3833070 data_alloc: 251658240 data_used: 41411517
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340049920 unmapped: 42311680 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b7d176c0
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995e400 session 0x5640b786ba40
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340099072 unmapped: 42262528 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e97b7000/0x0/0x4ffc00000, data 0x51c39ad/0x5353000, compress 0x0/0x0/0x0, omap 0x62130, meta 0x1108ded0), peers [0,2] op hist [0,0,0,0,0,1])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.659010410s of 10.050826073s, submitted: 118
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340418560 unmapped: 41943040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9774000/0x0/0x4ffc00000, data 0x52009ad/0x5390000, compress 0x0/0x0/0x0, omap 0x6225a, meta 0x1108dda6), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334176256 unmapped: 48185344 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640be672800 session 0x5640b97baa80
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 48168960 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3683376 data_alloc: 234881024 data_used: 35200957
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 48168960 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9e2a000/0x0/0x4ffc00000, data 0x3db298a/0x3f41000, compress 0x0/0x0/0x0, omap 0x62b15, meta 0x1108d4eb), peers [0,2] op hist [])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 48168960 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d4400 session 0x5640b9a14540
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3880800 session 0x5640b938bc00
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333848576 unmapped: 48513024 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329408512 unmapped: 52953088 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb5a4000/0x0/0x4ffc00000, data 0x2ac298a/0x2c51000, compress 0x0/0x0/0x0, omap 0x62cd4, meta 0x1108d32c), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329424896 unmapped: 52936704 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995e400 session 0x5640b92cf880
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3514266 data_alloc: 234881024 data_used: 26392642
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:21 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3514266 data_alloc: 234881024 data_used: 26392642
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebedd000/0x0/0x4ffc00000, data 0x2a9e967/0x2c2c000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.876317024s of 17.976917267s, submitted: 106
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3514610 data_alloc: 234881024 data_used: 26392642
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebedd000/0x0/0x4ffc00000, data 0x2aa1967/0x2c2f000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebedd000/0x0/0x4ffc00000, data 0x2aa1967/0x2c2f000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebedd000/0x0/0x4ffc00000, data 0x2aa1967/0x2c2f000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3514658 data_alloc: 234881024 data_used: 26404895
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebedd000/0x0/0x4ffc00000, data 0x2aa1967/0x2c2f000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebed7000/0x0/0x4ffc00000, data 0x2aa7967/0x2c35000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3514770 data_alloc: 234881024 data_used: 26404895
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 52928512 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.048960686s of 12.287158012s, submitted: 4
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebed7000/0x0/0x4ffc00000, data 0x2aa7967/0x2c35000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3515058 data_alloc: 234881024 data_used: 26404895
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 52920320 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3515058 data_alloc: 234881024 data_used: 26404895
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ebed7000/0x0/0x4ffc00000, data 0x2aa7967/0x2c35000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 43859968 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e7cc00 session 0x5640ba1d61c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329474048 unmapped: 52887552 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb521000/0x0/0x4ffc00000, data 0x345d967/0x35eb000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc000 session 0x5640b786a700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b9a148c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0287c00 session 0x5640b98b2700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc000 session 0x5640b7d17180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.118188858s of 10.095643997s, submitted: 17
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329474048 unmapped: 52887552 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb51f000/0x0/0x4ffc00000, data 0x345d9a0/0x35ed000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [0,0,0,0,0,0,0,3])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b9076a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e7cc00 session 0x5640b9287880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329687040 unmapped: 52674560 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995e400 session 0x5640b9b761c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b918a800 session 0x5640b9a14e00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc000 session 0x5640b97bba40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b71088c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329687040 unmapped: 52674560 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fb000 session 0x5640b71fce00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3603187 data_alloc: 234881024 data_used: 26404911
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f7dc00 session 0x5640b9b76540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329687040 unmapped: 52674560 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640b7109880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b7ca9180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329695232 unmapped: 52666368 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f7dc00 session 0x5640b97bb500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc000 session 0x5640b9516700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329695232 unmapped: 52666368 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3881000 session 0x5640b96801c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c8400 session 0x5640b9ab3a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc000 session 0x5640b98b3c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37c3a3f/0x3957000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330776576 unmapped: 51585024 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640ba1d7340
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b9a80e00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340049920 unmapped: 42311680 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3727735 data_alloc: 234881024 data_used: 32755775
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f7dc00 session 0x5640b6fd01c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea864000/0x0/0x4ffc00000, data 0x4114a3f/0x42a8000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3748541 data_alloc: 234881024 data_used: 36906825
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d3400 session 0x5640ba1d7880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc000 session 0x5640b7d4d340
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea864000/0x0/0x4ffc00000, data 0x4114a3f/0x42a8000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efcc00 session 0x5640b95161c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3881000 session 0x5640b7e0f6c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.737668037s of 16.296592712s, submitted: 80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7154000 session 0x5640b786ae00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 51576832 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f7dc00 session 0x5640b6b3da40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea864000/0x0/0x4ffc00000, data 0x4114a3f/0x42a8000, compress 0x0/0x0/0x0, omap 0x62c4f, meta 0x1108d3b1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3750841 data_alloc: 234881024 data_used: 36906923
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330768384 unmapped: 51593216 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 45686784 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea763000/0x0/0x4ffc00000, data 0x4216a2f/0x43a9000, compress 0x0/0x0/0x0, omap 0x62e0b, meta 0x1108d1f5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,58])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 48439296 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640b6b07880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 335233024 unmapped: 47128576 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823884 data_alloc: 234881024 data_used: 38943033
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea394000/0x0/0x4ffc00000, data 0x45d799a/0x4767000, compress 0x0/0x0/0x0, omap 0x630ef, meta 0x1108cf11), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823884 data_alloc: 234881024 data_used: 38943033
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea394000/0x0/0x4ffc00000, data 0x45d799a/0x4767000, compress 0x0/0x0/0x0, omap 0x630ef, meta 0x1108cf11), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 46039040 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.759158134s of 13.952198982s, submitted: 143
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340115456 unmapped: 42246144 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340115456 unmapped: 42246144 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e95d4000/0x0/0x4ffc00000, data 0x539f99a/0x552f000, compress 0x0/0x0/0x0, omap 0x630ef, meta 0x1108cf11), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3901432 data_alloc: 234881024 data_used: 39353524
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 43212800 heap: 382361600 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9569c00 session 0x5640ba1d7180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b9451340
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd8f7400 session 0x5640b9517c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7000 session 0x5640b9450700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 37896192 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b920c1c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 47456256 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fb000 session 0x5640b6fd1500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995fc00 session 0x5640b6b408c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8a62000/0x0/0x4ffc00000, data 0x5f1a99a/0x60aa000, compress 0x0/0x0/0x0, omap 0x630ef, meta 0x1108cf11), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640b9516540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332873728 unmapped: 53690368 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e98cd000/0x0/0x4ffc00000, data 0x4d65967/0x4ef3000, compress 0x0/0x0/0x0, omap 0x6358f, meta 0x1108ca71), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332873728 unmapped: 53690368 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3815905 data_alloc: 234881024 data_used: 31462052
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332873728 unmapped: 53690368 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9c16000/0x0/0x4ffc00000, data 0x4d68967/0x4ef6000, compress 0x0/0x0/0x0, omap 0x6358f, meta 0x1108ca71), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332873728 unmapped: 53690368 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9c00 session 0x5640b786b500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9c00 session 0x5640b98b5a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332873728 unmapped: 53690368 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b9a81500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.418496132s of 10.503636360s, submitted: 179
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640b9517180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333029376 unmapped: 53534720 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333029376 unmapped: 53534720 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3869123 data_alloc: 234881024 data_used: 36156811
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332996608 unmapped: 53567488 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719c00 session 0x5640ba1e21c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e922b000/0x0/0x4ffc00000, data 0x57519a0/0x58e1000, compress 0x0/0x0/0x0, omap 0x6358f, meta 0x1108ca71), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3882c00 session 0x5640b6b40380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333520896 unmapped: 53043200 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fb000 session 0x5640b98b3a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995fc00 session 0x5640ba1e2a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d48c00 session 0x5640b6eea700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3882800 session 0x5640b7108000
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333520896 unmapped: 53043200 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 332791808 unmapped: 53772288 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9b89000/0x0/0x4ffc00000, data 0x4be69d9/0x4d76000, compress 0x0/0x0/0x0, omap 0x6374b, meta 0x1108c8b5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 59842560 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621343 data_alloc: 218103808 data_used: 19343755
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 59842560 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b9735dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 59842560 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb1bb000/0x0/0x4ffc00000, data 0x37c29c9/0x3951000, compress 0x0/0x0/0x0, omap 0x63beb, meta 0x1108c415), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 59842560 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.704000950s of 10.077165604s, submitted: 89
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640b96ab6c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 59842560 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640ba1e2380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623776 data_alloc: 218103808 data_used: 19522955
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb1ba000/0x0/0x4ffc00000, data 0x37c29ec/0x3952000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb1ba000/0x0/0x4ffc00000, data 0x37c29ec/0x3952000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3680992 data_alloc: 234881024 data_used: 25910667
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb1ba000/0x0/0x4ffc00000, data 0x37c29ec/0x3952000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3680992 data_alloc: 234881024 data_used: 25910667
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 59826176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.229514122s of 12.687259674s, submitted: 6
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330219520 unmapped: 56344576 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 333275136 unmapped: 53288960 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea5d9000/0x0/0x4ffc00000, data 0x439b9ec/0x452b000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 51666944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 51666944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3771550 data_alloc: 234881024 data_used: 28053899
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 51601408 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334979072 unmapped: 51585024 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334979072 unmapped: 51585024 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea5d9000/0x0/0x4ffc00000, data 0x439b9ec/0x452b000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334979072 unmapped: 51585024 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334430208 unmapped: 52133888 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea5c0000/0x0/0x4ffc00000, data 0x43bc9ec/0x454c000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3765094 data_alloc: 234881024 data_used: 28057995
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334430208 unmapped: 52133888 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.441932678s of 10.344219208s, submitted: 150
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea5c0000/0x0/0x4ffc00000, data 0x43bc9ec/0x454c000, compress 0x0/0x0/0x0, omap 0x63ecf, meta 0x1108c131), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334430208 unmapped: 52133888 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334430208 unmapped: 52133888 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334430208 unmapped: 52133888 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c69400 session 0x5640b7108000
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 334446592 unmapped: 52117504 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3766876 data_alloc: 234881024 data_used: 28131723
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339697664 unmapped: 46866432 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea081000/0x0/0x4ffc00000, data 0x48fb9ec/0x4a8b000, compress 0x0/0x0/0x0, omap 0x63ce8, meta 0x1108c318), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,11])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338714624 unmapped: 47849472 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3883c00 session 0x5640ba1e3180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b6eeb880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b97bb880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b7c86a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b918b400 session 0x5640ba1ca1c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c69400 session 0x5640b71fc540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3883c00 session 0x5640ba1e2c40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 322633728 unmapped: 63930368 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea080000/0x0/0x4ffc00000, data 0x48fc9ec/0x4a8c000, compress 0x0/0x0/0x0, omap 0x63ce8, meta 0x1108c318), peers [0,2] op hist [0,0,0,0,0,2,0,0,3])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b9b77180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b6b46fc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329523200 unmapped: 57040896 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9568400 session 0x5640b7108e00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b918b400 session 0x5640b9451a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c69400 session 0x5640b6b3cc40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b920c000
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b786a8c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c41000 session 0x5640b98b2540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 63922176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650019 data_alloc: 218103808 data_used: 16050059
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bca51800 session 0x5640b6fadc00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 63922176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eaab6000/0x0/0x4ffc00000, data 0x39d0977/0x3b5f000, compress 0x0/0x0/0x0, omap 0x6405a, meta 0x1108bfa6), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd7c00 session 0x5640b9680000
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 63922176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd7c00 session 0x5640b6b461c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.291111946s of 11.023561478s, submitted: 67
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d4800 session 0x5640b9680540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b918a400 session 0x5640b7d4ddc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995f800 session 0x5640b9287500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 63922176 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 318595072 unmapped: 67969024 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c41000 session 0x5640ba1d61c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 318595072 unmapped: 67969024 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bfd02800 session 0x5640b96abc00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461619 data_alloc: 218103808 data_used: 11604230
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 318603264 unmapped: 67960832 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec6fe000/0x0/0x4ffc00000, data 0x227e99a/0x240e000, compress 0x0/0x0/0x0, omap 0x6412f, meta 0x1108bed1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 318603264 unmapped: 67960832 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 318603264 unmapped: 67960832 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3534711 data_alloc: 234881024 data_used: 23817494
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec6fd000/0x0/0x4ffc00000, data 0x227e9bd/0x240f000, compress 0x0/0x0/0x0, omap 0x64315, meta 0x1108bceb), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3534711 data_alloc: 234881024 data_used: 23817494
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 65609728 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.370122910s of 14.617922783s, submitted: 45
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ec347000/0x0/0x4ffc00000, data 0x26349bd/0x27c5000, compress 0x0/0x0/0x0, omap 0x64315, meta 0x1108bceb), peers [0,2] op hist [0,0,0,0,0,0,0,0,20,15])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326696960 unmapped: 59867136 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 325984256 unmapped: 60579840 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329678848 unmapped: 56885248 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3656759 data_alloc: 234881024 data_used: 24887574
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329760768 unmapped: 56803328 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb35a000/0x0/0x4ffc00000, data 0x36219bd/0x37b2000, compress 0x0/0x0/0x0, omap 0x64315, meta 0x1108bceb), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329760768 unmapped: 56803328 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329760768 unmapped: 56803328 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329760768 unmapped: 56803328 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329768960 unmapped: 56795136 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3653135 data_alloc: 234881024 data_used: 24887574
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329768960 unmapped: 56795136 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb357000/0x0/0x4ffc00000, data 0x36249bd/0x37b5000, compress 0x0/0x0/0x0, omap 0x64315, meta 0x1108bceb), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329777152 unmapped: 56786944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb357000/0x0/0x4ffc00000, data 0x36249bd/0x37b5000, compress 0x0/0x0/0x0, omap 0x64315, meta 0x1108bceb), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329777152 unmapped: 56786944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329777152 unmapped: 56786944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329777152 unmapped: 56786944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3653135 data_alloc: 234881024 data_used: 24887574
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bb3e9800 session 0x5640b7d17a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d4c00 session 0x5640b6b3c380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b96aa540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3cc000 session 0x5640ba1d6700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.829696655s of 14.025759697s, submitted: 159
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efd000 session 0x5640b7e19c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329777152 unmapped: 56786944 heap: 386564096 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b92861c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d4c00 session 0x5640ba1d7340
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bfd02800 session 0x5640b9680a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7ea7400 session 0x5640b9681180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efd000 session 0x5640b6b47dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b9517180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bb3e9800 session 0x5640b97bbc00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326344704 unmapped: 61841408 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326344704 unmapped: 61841408 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eacd4000/0x0/0x4ffc00000, data 0x35859fc/0x3716000, compress 0x0/0x0/0x0, omap 0x6498b, meta 0x1108b675), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326344704 unmapped: 61841408 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326344704 unmapped: 61841408 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620090 data_alloc: 218103808 data_used: 17666310
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326344704 unmapped: 61841408 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326344704 unmapped: 61841408 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326352896 unmapped: 61833216 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eacd4000/0x0/0x4ffc00000, data 0x35859fc/0x3716000, compress 0x0/0x0/0x0, omap 0x6498b, meta 0x1108b675), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326352896 unmapped: 61833216 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eacd4000/0x0/0x4ffc00000, data 0x35859fc/0x3716000, compress 0x0/0x0/0x0, omap 0x6498b, meta 0x1108b675), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326352896 unmapped: 61833216 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620090 data_alloc: 218103808 data_used: 17666310
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.730665207s of 10.020484924s, submitted: 79
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326361088 unmapped: 61825024 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3881c00 session 0x5640b6b408c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 326672384 unmapped: 61513728 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 329056256 unmapped: 59129856 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb3d2000/0x0/0x4ffc00000, data 0x35a99fc/0x373a000, compress 0x0/0x0/0x0, omap 0x64ead, meta 0x1108b153), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330817536 unmapped: 57368576 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d38400 session 0x5640b9517c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efd000 session 0x5640b6fd0fc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b6fd16c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bb3e9800 session 0x5640b920c540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3881c00 session 0x5640ba1cbc00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3728348 data_alloc: 234881024 data_used: 29568755
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eafe9000/0x0/0x4ffc00000, data 0x39929fc/0x3b23000, compress 0x0/0x0/0x0, omap 0x650c9, meta 0x1108af37), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eafe9000/0x0/0x4ffc00000, data 0x39929fc/0x3b23000, compress 0x0/0x0/0x0, omap 0x650c9, meta 0x1108af37), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3728460 data_alloc: 234881024 data_used: 29568755
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b7e0e540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b6b40700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efd000 session 0x5640ba1ca540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.269579887s of 11.586714745s, submitted: 32
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640ba1e2c40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 330973184 unmapped: 57212928 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eafe6000/0x0/0x4ffc00000, data 0x3994a0c/0x3b26000, compress 0x0/0x0/0x0, omap 0x650c9, meta 0x1108af37), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 49938432 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 48340992 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3834990 data_alloc: 234881024 data_used: 35554547
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea59d000/0x0/0x4ffc00000, data 0x43dca0c/0x456e000, compress 0x0/0x0/0x0, omap 0x650c9, meta 0x1108af37), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 48340992 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 48340992 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea59d000/0x0/0x4ffc00000, data 0x43dca0c/0x456e000, compress 0x0/0x0/0x0, omap 0x6515b, meta 0x1108aea5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ea59d000/0x0/0x4ffc00000, data 0x43dca0c/0x456e000, compress 0x0/0x0/0x0, omap 0x6515b, meta 0x1108aea5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 48340992 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d47000 session 0x5640b786a700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b9286a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b92ce1c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efd000 session 0x5640b9735a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343056384 unmapped: 45129728 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b9b76a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d47000 session 0x5640b98b5a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b98b48c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b92876c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efd000 session 0x5640b98b3dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341000192 unmapped: 54099968 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3903175 data_alloc: 234881024 data_used: 35558643
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341000192 unmapped: 54099968 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7400 session 0x5640b7108e00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9c41000 session 0x5640b6b3c540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341008384 unmapped: 54091776 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9a93000/0x0/0x4ffc00000, data 0x4ee7a0c/0x5079000, compress 0x0/0x0/0x0, omap 0x64e05, meta 0x1108b1fb), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b62d9c00 session 0x5640b96aa000
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 56606720 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 56606720 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.088479042s of 11.825679779s, submitted: 189
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b6b3d6c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 56598528 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb0e2000/0x0/0x4ffc00000, data 0x389a99a/0x3a2a000, compress 0x0/0x0/0x0, omap 0x6598c, meta 0x1108a674), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3750570 data_alloc: 234881024 data_used: 21596801
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 53477376 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340164608 unmapped: 54935552 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 54231040 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640ba1d6a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7e6d800 session 0x5640b786b880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3688952 data_alloc: 234881024 data_used: 27207297
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0286400 session 0x5640ba1d7dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb7bc000/0x0/0x4ffc00000, data 0x31c0977/0x334f000, compress 0x0/0x0/0x0, omap 0x65f11, meta 0x1108a0ef), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb7bc000/0x0/0x4ffc00000, data 0x31c0977/0x334f000, compress 0x0/0x0/0x0, omap 0x65f11, meta 0x1108a0ef), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.3 total, 600.0 interval#012Cumulative writes: 42K writes, 166K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s#012Cumulative WAL: 42K writes, 15K syncs, 2.77 writes per sync, written: 0.16 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5853 writes, 25K keys, 5853 commit groups, 1.0 writes per commit group, ingest: 29.76 MB, 0.05 MB/s#012Interval WAL: 5853 writes, 2145 syncs, 2.73 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3688612 data_alloc: 234881024 data_used: 27207297
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 56696832 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.558639526s of 13.316927910s, submitted: 148
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eb79b000/0x0/0x4ffc00000, data 0x31e1977/0x3370000, compress 0x0/0x0/0x0, omap 0x65f11, meta 0x1108a0ef), peers [0,2] op hist [0,0,0,0,0,0,0,4,8])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340959232 unmapped: 54140928 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 52207616 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0288c00 session 0x5640b7109180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719000 session 0x5640b938bc00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0288c00 session 0x5640b7ca8c40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b92cfdc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 52207616 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3766733 data_alloc: 234881024 data_used: 27806302
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342966272 unmapped: 52133888 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b7109180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7ea6c00 session 0x5640b6b47dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7ea6c00 session 0x5640ba1d7dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b6fd1500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719000 session 0x5640b7ca9340
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0288c00 session 0x5640b9107180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01800 session 0x5640b9b76c40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b9734380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4eabd2000/0x0/0x4ffc00000, data 0x3d9c987/0x3f2c000, compress 0x0/0x0/0x0, omap 0x65f11, meta 0x1108a0ef), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079c00 session 0x5640b9681c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 52789248 heap: 395100160 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 43294720 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 53288960 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01800 session 0x5640ba1cb500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7ea6c00 session 0x5640b6b40700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719000 session 0x5640ba1ca540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719000 session 0x5640b9286a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b9b76a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 54337536 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8f61000/0x0/0x4ffc00000, data 0x4872997/0x4a03000, compress 0x0/0x0/0x0, omap 0x65f11, meta 0x1222a0ef), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3845163 data_alloc: 234881024 data_used: 27933294
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 54337536 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f8c00 session 0x5640b9450700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01000 session 0x5640b9ab3180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 54337536 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 54337536 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995fc00 session 0x5640b6fd0fc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.445695877s of 10.866799355s, submitted: 121
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995fc00 session 0x5640b97bbc00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342294528 unmapped: 54419456 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f8c00 session 0x5640b6fd1880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 54411264 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0286c00 session 0x5640b7d16fc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3864208 data_alloc: 234881024 data_used: 30846574
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 54403072 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8f45000/0x0/0x4ffc00000, data 0x4896997/0x4a27000, compress 0x0/0x0/0x0, omap 0x66421, meta 0x12229bdf), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3ce000 session 0x5640b6b3c8c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1800 session 0x5640b9286fc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 54394880 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 54394880 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x48ba9a7/0x4a4c000, compress 0x0/0x0/0x0, omap 0x66421, meta 0x12229bdf), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 54558720 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 49537024 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3929050 data_alloc: 251658240 data_used: 40919678
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 49537024 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x48ba9a7/0x4a4c000, compress 0x0/0x0/0x0, omap 0x66421, meta 0x12229bdf), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 49537024 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 49528832 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8f20000/0x0/0x4ffc00000, data 0x48ba9a7/0x4a4c000, compress 0x0/0x0/0x0, omap 0x66421, meta 0x12229bdf), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 49528832 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 49528832 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3929050 data_alloc: 251658240 data_used: 40919678
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 49528832 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.873625755s of 13.021769524s, submitted: 13
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 48062464 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8995000/0x0/0x4ffc00000, data 0x4e459a7/0x4fd7000, compress 0x0/0x0/0x0, omap 0x66421, meta 0x12229bdf), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 48062464 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d8800 session 0x5640ba1e3dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7c00 session 0x5640b9a80c40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0289c00 session 0x5640b6b3d880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059000 session 0x5640b786b880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8995000/0x0/0x4ffc00000, data 0x4e459a7/0x4fd7000, compress 0x0/0x0/0x0, omap 0x66421, meta 0x12229bdf), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348782592 unmapped: 47931392 heap: 396713984 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b6fad6c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059000 session 0x5640b6b47880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b9286700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7c00 session 0x5640b9287880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d8800 session 0x5640b98b4000
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 48947200 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4051601 data_alloc: 251658240 data_used: 41120382
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 48947200 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ad5000/0x0/0x4ffc00000, data 0x5b649b7/0x5cf7000, compress 0x0/0x0/0x0, omap 0x66079, meta 0x133c9f87), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 48939008 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1c00 session 0x5640b9516e00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059000 session 0x5640b71fc540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 48939008 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b6b3da40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7c00 session 0x5640b7ca8380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 46178304 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e66ae000/0x0/0x4ffc00000, data 0x5f8a9c7/0x611e000, compress 0x0/0x0/0x0, omap 0x66079, meta 0x133c9f87), peers [0,2] op hist [0,0,0,0,0,0,2,1,17,7])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 42532864 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4157492 data_alloc: 251658240 data_used: 49897102
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 33980416 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.746216297s of 10.166923523s, submitted: 138
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 34447360 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 34447360 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b6b3d6c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01000 session 0x5640b6eebc00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 34447360 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640b98b2c40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e5b4b000/0x0/0x4ffc00000, data 0x594e9b7/0x5ae1000, compress 0x0/0x0/0x0, omap 0x66822, meta 0x145697de), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 34283520 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4104402 data_alloc: 251658240 data_used: 49179614
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 34275328 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 34275328 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 34275328 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e5b44000/0x0/0x4ffc00000, data 0x59559b7/0x5ae8000, compress 0x0/0x0/0x0, omap 0x66822, meta 0x145697de), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bb3e9800 session 0x5640b6b06540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3881c00 session 0x5640b7109c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 34275328 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d9400 session 0x5640b95161c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 34201600 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6b7b000/0x0/0x4ffc00000, data 0x491f9a7/0x4ab1000, compress 0x0/0x0/0x0, omap 0x6651f, meta 0x14569ae1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3983359 data_alloc: 251658240 data_used: 47805918
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 34201600 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 34201600 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 34193408 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.613125801s of 12.069079399s, submitted: 109
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366813184 unmapped: 33579008 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6410000/0x0/0x4ffc00000, data 0x508a9a7/0x521c000, compress 0x0/0x0/0x0, omap 0x6651f, meta 0x14569ae1), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366813184 unmapped: 33579008 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4036159 data_alloc: 251658240 data_used: 47977950
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366387200 unmapped: 34004992 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366387200 unmapped: 34004992 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 33898496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e638c000/0x0/0x4ffc00000, data 0x510e9a7/0x52a0000, compress 0x0/0x0/0x0, omap 0x665ad, meta 0x14569a53), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 33898496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 33898496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4035461 data_alloc: 251658240 data_used: 47977950
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 33898496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e638c000/0x0/0x4ffc00000, data 0x510e9a7/0x52a0000, compress 0x0/0x0/0x0, omap 0x665ad, meta 0x14569a53), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366510080 unmapped: 33882112 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 33865728 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 33857536 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 33857536 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.526194572s of 11.816063881s, submitted: 168
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d8800 session 0x5640b9077880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3883800 session 0x5640ba1caa80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3840913 data_alloc: 234881024 data_used: 34823118
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d47000 session 0x5640ba1e3340
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e636b000/0x0/0x4ffc00000, data 0x512f9a7/0x52c1000, compress 0x0/0x0/0x0, omap 0x6663b, meta 0x145699c5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 40239104 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b995fc00 session 0x5640b920c540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f8c00 session 0x5640b6b07c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 46194688 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b63cc800 session 0x5640ba1d6fc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8a67000/0x0/0x4ffc00000, data 0x2a37967/0x2bc5000, compress 0x0/0x0/0x0, omap 0x67419, meta 0x14568be7), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659823 data_alloc: 234881024 data_used: 23450558
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b000 session 0x5640b9451dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b9516540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 46186496 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8a67000/0x0/0x4ffc00000, data 0x2a37967/0x2bc5000, compress 0x0/0x0/0x0, omap 0x67419, meta 0x14568be7), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0289c00 session 0x5640b6b40e00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476109 data_alloc: 218103808 data_used: 11588030
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476109 data_alloc: 218103808 data_used: 11588030
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476109 data_alloc: 218103808 data_used: 11588030
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 54706176 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54697984 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54697984 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54697984 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54697984 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476109 data_alloc: 218103808 data_used: 11588030
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 54697984 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345702400 unmapped: 54689792 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345702400 unmapped: 54689792 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345702400 unmapped: 54689792 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345702400 unmapped: 54689792 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476109 data_alloc: 218103808 data_used: 11588030
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df1000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x6726f, meta 0x14568d91), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b9451340
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bfd03000 session 0x5640b9107180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640ba1d7a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b9a14fc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345702400 unmapped: 54689792 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.651613235s of 31.024972916s, submitted: 126
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b000 session 0x5640b7ca8c40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0289c00 session 0x5640b92cefc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059400 session 0x5640b9681c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b7ca9dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b9450380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9b74000/0x0/0x4ffc00000, data 0x192a967/0x1ab8000, compress 0x0/0x0/0x0, omap 0x67481, meta 0x14568b7f), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9b74000/0x0/0x4ffc00000, data 0x192a967/0x1ab8000, compress 0x0/0x0/0x0, omap 0x67481, meta 0x14568b7f), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efc800 session 0x5640b96aa380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b6b06380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bb168c00 session 0x5640b71fd180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efc800 session 0x5640b96aa380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498350 data_alloc: 218103808 data_used: 11588030
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 52559872 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b7ca9dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b97bac40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b9517180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fa800 session 0x5640b98b4000
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efc800 session 0x5640b92cefc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54771712 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54771712 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b96ab6c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54468608 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54468608 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e972f000/0x0/0x4ffc00000, data 0x1d6e9c9/0x1efd000, compress 0x0/0x0/0x0, omap 0x67693, meta 0x1456896d), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533112 data_alloc: 218103808 data_used: 11687358
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54468608 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 55836672 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 55836672 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.144528389s of 12.913942337s, submitted: 54
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 55836672 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344432640 unmapped: 55959552 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e972f000/0x0/0x4ffc00000, data 0x1d6e9c9/0x1efd000, compress 0x0/0x0/0x0, omap 0x67693, meta 0x1456896d), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572908 data_alloc: 218103808 data_used: 18347454
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344432640 unmapped: 55959552 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e972f000/0x0/0x4ffc00000, data 0x1d6e9c9/0x1efd000, compress 0x0/0x0/0x0, omap 0x67693, meta 0x1456896d), peers [0,2] op hist [0,0,0,0,0,0,0,12])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640b9a81500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d1000 session 0x5640b7d17c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079000 session 0x5640b9a81dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1400 session 0x5640b9450700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640ba1e3dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8d7f000/0x0/0x4ffc00000, data 0x271da2b/0x28ad000, compress 0x0/0x0/0x0, omap 0x67c15, meta 0x145683eb), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633542 data_alloc: 218103808 data_used: 18347454
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8d7f000/0x0/0x4ffc00000, data 0x271da2b/0x28ad000, compress 0x0/0x0/0x0, omap 0x67c15, meta 0x145683eb), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 61710336 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dd400 session 0x5640b7e0f6c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 61349888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e86f9000/0x0/0x4ffc00000, data 0x2da3a2b/0x2f33000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e86dc000/0x0/0x4ffc00000, data 0x2dc0a2b/0x2f50000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [0,0,0,0,0,0,2])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.446774483s of 10.051395416s, submitted: 119
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 61341696 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 59219968 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3756251 data_alloc: 234881024 data_used: 29613817
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 58171392 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 56483840 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7e96000/0x0/0x4ffc00000, data 0x3600a2b/0x3790000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 56426496 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7e96000/0x0/0x4ffc00000, data 0x3600a2b/0x3790000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354656256 unmapped: 56238080 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823769 data_alloc: 234881024 data_used: 30526713
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7e74000/0x0/0x4ffc00000, data 0x3622a2b/0x37b2000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.131614208s of 10.359419823s, submitted: 101
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357040128 unmapped: 53854208 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3842933 data_alloc: 234881024 data_used: 30547193
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 55320576 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x39eda2b/0x3b7d000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [2,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 53493760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 53493760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 53485568 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364568576 unmapped: 46325760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3909324 data_alloc: 234881024 data_used: 31506169
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b800 session 0x5640ba1e3c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357416960 unmapped: 53477376 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d4800 session 0x5640b92861c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640b9286fc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7100000/0x0/0x4ffc00000, data 0x439ca2b/0x452c000, compress 0x0/0x0/0x0, omap 0x67c56, meta 0x145683aa), peers [0,2] op hist [0,0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7100000/0x0/0x4ffc00000, data 0x439ca2b/0x452c000, compress 0x0/0x0/0x0, omap 0x67c56, meta 0x145683aa), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 53469184 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1400 session 0x5640b9287dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b800 session 0x5640ba1e2540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 53452800 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dd400 session 0x5640b98b5a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b97e7800 session 0x5640ba1cb500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640b9450380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1400 session 0x5640b7ca8c40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 53452800 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.177800179s of 10.095353127s, submitted: 95
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b800 session 0x5640b9107180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3939380 data_alloc: 234881024 data_used: 31603961
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 53264384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 52322304 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3982520 data_alloc: 234881024 data_used: 38296825
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 52322304 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 52322304 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01c00 session 0x5640b9681180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3880400 session 0x5640b6b3c8c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b71fce00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358580224 unmapped: 52314112 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca7000/0x0/0x4ffc00000, data 0x47f4a4e/0x4985000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [0,0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 58081280 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.268082619s of 10.108275414s, submitted: 55
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3880400 session 0x5640b9287500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7cf9000/0x0/0x4ffc00000, data 0x37a29ec/0x3932000, compress 0x0/0x0/0x0, omap 0x680cb, meta 0x14567f35), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3829731 data_alloc: 234881024 data_used: 27148025
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3857131 data_alloc: 234881024 data_used: 31654551
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7cf7000/0x0/0x4ffc00000, data 0x37a39ec/0x3933000, compress 0x0/0x0/0x0, omap 0x68156, meta 0x14567eaa), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 55926784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 55926784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e734d000/0x0/0x4ffc00000, data 0x41489ec/0x42d8000, compress 0x0/0x0/0x0, omap 0x68156, meta 0x14567eaa), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3924971 data_alloc: 234881024 data_used: 31896215
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7330000/0x0/0x4ffc00000, data 0x41649ec/0x42f4000, compress 0x0/0x0/0x0, omap 0x68156, meta 0x14567eaa), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.040444374s of 12.756207466s, submitted: 88
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 54910976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3972453 data_alloc: 234881024 data_used: 32158359
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6b5d000/0x0/0x4ffc00000, data 0x493f9ec/0x4acf000, compress 0x0/0x0/0x0, omap 0x681c0, meta 0x14567e40), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6b5d000/0x0/0x4ffc00000, data 0x493f9ec/0x4acf000, compress 0x0/0x0/0x0, omap 0x681c0, meta 0x14567e40), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd0000 session 0x5640b7ca8380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355794944 unmapped: 55099392 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059400 session 0x5640b9517500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 58834944 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7c01000/0x0/0x4ffc00000, data 0x389b9ec/0x3a2b000, compress 0x0/0x0/0x0, omap 0x68afb, meta 0x14567505), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 58834944 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3831183 data_alloc: 234881024 data_used: 24707735
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9000 session 0x5640b786b500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 58834944 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b98b3c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 59375616 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.104784966s of 10.713698387s, submitted: 126
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b6b40700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b92cee00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 59375616 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3883400 session 0x5640ba1d6a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 63700992 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 63700992 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e917c000/0x0/0x4ffc00000, data 0x23219c9/0x24b0000, compress 0x0/0x0/0x0, omap 0x695bd, meta 0x14566a43), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3ccc00 session 0x5640b786ba40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642934 data_alloc: 218103808 data_used: 16325236
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 63700992 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e917c000/0x0/0x4ffc00000, data 0x23219c9/0x24b0000, compress 0x0/0x0/0x0, omap 0x695bd, meta 0x14566a43), peers [0,2] op hist [0,0,4])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b98b56c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531764 data_alloc: 218103808 data_used: 11587700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531764 data_alloc: 218103808 data_used: 11587700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531764 data_alloc: 218103808 data_used: 11587700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9800 session 0x5640ba1e3a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d39c00 session 0x5640b97356c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d3c00 session 0x5640b9286700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719800 session 0x5640b97bb500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.670858383s of 19.665880203s, submitted: 79
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 65576960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b9286a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d39c00 session 0x5640b6fd16c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9800 session 0x5640b90776c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d3c00 session 0x5640b7d4d340
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dfc00 session 0x5640b6b408c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9c1e000/0x0/0x4ffc00000, data 0x187f977/0x1a0e000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548283 data_alloc: 218103808 data_used: 11591714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b92cefc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d9400 session 0x5640b6b3c540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7078800 session 0x5640b7ca81c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dd800 session 0x5640ba1e2c40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b920da40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7078800 session 0x5640b7d16fc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b96801c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b9734c40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340164608 unmapped: 70729728 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d9400 session 0x5640b938bc00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340246528 unmapped: 70647808 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3596850 data_alloc: 218103808 data_used: 11595826
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9579000/0x0/0x4ffc00000, data 0x1f229aa/0x20b3000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9579000/0x0/0x4ffc00000, data 0x1f229aa/0x20b3000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607218 data_alloc: 218103808 data_used: 13302322
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9579000/0x0/0x4ffc00000, data 0x1f229aa/0x20b3000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.352157593s of 14.543478966s, submitted: 36
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d46c00 session 0x5640b9287880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9555000/0x0/0x4ffc00000, data 0x1f469aa/0x20d7000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9555000/0x0/0x4ffc00000, data 0x1f469aa/0x20d7000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3645779 data_alloc: 218103808 data_used: 19124786
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343064576 unmapped: 67829760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9249000/0x0/0x4ffc00000, data 0x22529aa/0x23e3000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679649 data_alloc: 218103808 data_used: 19853874
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.882978439s of 11.147073746s, submitted: 50
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 66945024 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9228000/0x0/0x4ffc00000, data 0x22739aa/0x2404000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 66945024 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 63848448 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744277 data_alloc: 218103808 data_used: 20693554
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 63062016 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079000 session 0x5640ba1e2e00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 63062016 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640b9568c00 session 0x5640b6b40380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640b99fb800 session 0x5640b9680a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640b9534000 session 0x5640b786ae00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 281 heartbeat osd_stat(store_statfs(0x4e8903000/0x0/0x4ffc00000, data 0x2b93546/0x2d25000, compress 0x0/0x0/0x0, omap 0x69d80, meta 0x14566280), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356671488 unmapped: 57999360 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640c0286400 session 0x5640b7d16e00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349102080 unmapped: 65568768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349110272 unmapped: 65560576 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 282 heartbeat osd_stat(store_statfs(0x4e78f4000/0x0/0x4ffc00000, data 0x3ba6546/0x3d38000, compress 0x0/0x0/0x0, omap 0x6a58b, meta 0x14565a75), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 282 ms_handle_reset con 0x5640c0286400 session 0x5640b7108e00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3861772 data_alloc: 234881024 data_used: 23754802
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 282 heartbeat osd_stat(store_statfs(0x4e78ef000/0x0/0x4ffc00000, data 0x3ba8136/0x3d3b000, compress 0x0/0x0/0x0, omap 0x6a7b3, meta 0x1456584d), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 65544192 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640b995ec00 session 0x5640b6b06a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640b9cd7400 session 0x5640b6eeae00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640b99fbc00 session 0x5640b6fd0540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640bd8f6800 session 0x5640b9a14540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 283 heartbeat osd_stat(store_statfs(0x4e78e7000/0x0/0x4ffc00000, data 0x3bb0cee/0x3d45000, compress 0x0/0x0/0x0, omap 0x6a8dc, meta 0x14565724), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3863930 data_alloc: 234881024 data_used: 23762994
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.907689095s of 12.907720566s, submitted: 145
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 65454080 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e78e2000/0x0/0x4ffc00000, data 0x3bb276d/0x3d48000, compress 0x0/0x0/0x0, omap 0x72121, meta 0x1455dedf), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7155400 session 0x5640ba1e2a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640bd8f7c00 session 0x5640b94508c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9cd6000 session 0x5640b90776c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3c8000 session 0x5640b71fc700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3dd800 session 0x5640b7d16fc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3dd800 session 0x5640b97356c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869944 data_alloc: 234881024 data_used: 23762994
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7155400 session 0x5640b6b40380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9cd6000 session 0x5640b7c86380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3c8000 session 0x5640b92cee00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640bd8f7c00 session 0x5640b786ba40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 65454080 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7155400 session 0x5640ba1e3a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9cd6000 session 0x5640b9450700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b99f8800 session 0x5640b9a148c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880792 data_alloc: 234881024 data_used: 23762994
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9568400 session 0x5640b938a700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.016672134s of 10.084982872s, submitted: 23
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9d3bc00 session 0x5640ba1ca700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349249536 unmapped: 65421312 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72624, meta 0x1455d9dc), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9568400 session 0x5640b9b761c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 65413120 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 65396736 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 65355776 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3921622 data_alloc: 234881024 data_used: 29672498
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3928406 data_alloc: 234881024 data_used: 30782514
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.948271751s of 10.989070892s, submitted: 14
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3928734 data_alloc: 234881024 data_used: 30782514
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 61562880 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77ba000/0x0/0x4ffc00000, data 0x3cda7cf/0x3e71000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 61431808 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 61259776 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 63668224 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 63668224 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4007460 data_alloc: 234881024 data_used: 32667186
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351010816 unmapped: 63660032 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6e39000/0x0/0x4ffc00000, data 0x465c7cf/0x47f3000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.841988564s of 10.036563873s, submitted: 48
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 63225856 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 63225856 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 63225856 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6dcb000/0x0/0x4ffc00000, data 0x46ca7cf/0x4861000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 63217664 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4015500 data_alloc: 234881024 data_used: 32663090
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 63217664 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b995f000 session 0x5640b6b3d500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b99f8800 session 0x5640b97bb6c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351076352 unmapped: 63594496 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3d1800 session 0x5640b97bba40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e74ff000/0x0/0x4ffc00000, data 0x3f967cf/0x412d000, compress 0x0/0x0/0x0, omap 0x73428, meta 0x1455cbd8), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3961220 data_alloc: 234881024 data_used: 31524914
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e74ff000/0x0/0x4ffc00000, data 0x3f967cf/0x412d000, compress 0x0/0x0/0x0, omap 0x73428, meta 0x1455cbd8), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7078800 session 0x5640b938afc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.606713295s of 11.753137589s, submitted: 38
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3d2c00 session 0x5640ba1e3c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9568400 session 0x5640b90e1a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 63528960 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3817836 data_alloc: 234881024 data_used: 24588338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 63528960 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e84d3000/0x0/0x4ffc00000, data 0x2fc07cf/0x3157000, compress 0x0/0x0/0x0, omap 0x73a55, meta 0x1455c5ab), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 63528960 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3822060 data_alloc: 234881024 data_used: 25673778
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e84d3000/0x0/0x4ffc00000, data 0x2fc07cf/0x3157000, compress 0x0/0x0/0x0, omap 0x73a55, meta 0x1455c5ab), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640c0286400 session 0x5640b98b56c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640ba3d7400 session 0x5640b786afc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640b6efe800 session 0x5640b9517dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640b995f800 session 0x5640ba1d6a80
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 361463808 unmapped: 57409536 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640b9568400 session 0x5640b9a81c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 361488384 unmapped: 57384960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.244441986s of 10.926932335s, submitted: 60
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 286 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b786a1c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 57737216 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640ba3c9400 session 0x5640b9516000
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4009024 data_alloc: 234881024 data_used: 34502286
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362250240 unmapped: 56623104 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 54108160 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e6a93000/0x0/0x4ffc00000, data 0x49f8b85/0x4b95000, compress 0x0/0x0/0x0, omap 0x74419, meta 0x1455bbe7), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640ba3c9c00 session 0x5640b98b2c40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640b9568400 session 0x5640b7ca9dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640b995f800 session 0x5640b9516000
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 54042624 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 54034432 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 288 ms_handle_reset con 0x5640ba3c9400 session 0x5640b90e1a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362643456 unmapped: 56229888 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3886545 data_alloc: 234881024 data_used: 34502270
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e829b000/0x0/0x4ffc00000, data 0x31f371f/0x338f000, compress 0x0/0x0/0x0, omap 0x744a3, meta 0x1455bb5d), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 56221696 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 289 ms_handle_reset con 0x5640b9d3bc00 session 0x5640b9b77c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 289 ms_handle_reset con 0x5640b7155400 session 0x5640b9a816c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 56221696 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 55132160 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 289 ms_handle_reset con 0x5640b7155400 session 0x5640b9735500
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3830131 data_alloc: 234881024 data_used: 30469660
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e88ab000/0x0/0x4ffc00000, data 0x2be5158/0x2d81000, compress 0x0/0x0/0x0, omap 0x74ec9, meta 0x1455b137), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.118906975s of 12.735915184s, submitted: 117
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 289 handle_osd_map epochs [290,290], i have 290, src has [1,290]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 55091200 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 290 ms_handle_reset con 0x5640b97e7c00 session 0x5640b96aa700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 63946752 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 63946752 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 290 ms_handle_reset con 0x5640bb169800 session 0x5640b6b401c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 290 ms_handle_reset con 0x5640c3883800 session 0x5640b9517880
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681049 data_alloc: 218103808 data_used: 13614620
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 70680576 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 291 ms_handle_reset con 0x5640ba3d8800 session 0x5640b9286700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9dcd000/0x0/0x4ffc00000, data 0x16c07a0/0x185c000, compress 0x0/0x0/0x0, omap 0x75cc0, meta 0x1455a340), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 71811072 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 71811072 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 71811072 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 71729152 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617990 data_alloc: 218103808 data_used: 8318460
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 71729152 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9dcd000/0x0/0x4ffc00000, data 0x16c07a0/0x185c000, compress 0x0/0x0/0x0, omap 0x75cc0, meta 0x1455a340), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9dcb000/0x0/0x4ffc00000, data 0x16c221f/0x185f000, compress 0x0/0x0/0x0, omap 0x76803, meta 0x145597fd), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620700 data_alloc: 218103808 data_used: 8322521
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9dcb000/0x0/0x4ffc00000, data 0x16c221f/0x185f000, compress 0x0/0x0/0x0, omap 0x76803, meta 0x145597fd), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9dcb000/0x0/0x4ffc00000, data 0x16c221f/0x185f000, compress 0x0/0x0/0x0, omap 0x76803, meta 0x145597fd), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.065912247s of 18.099792480s, submitted: 113
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 68370432 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b9c40400 session 0x5640b7d16e00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b97e6000 session 0x5640b938b180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b6efd000 session 0x5640ba1e2e00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640bd8f6000 session 0x5640b6b40380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640ba3cb400 session 0x5640ba1e3a40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662479 data_alloc: 218103808 data_used: 8322521
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: mgrc ms_handle_reset ms_handle_reset con 0x5640b6efdc00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: mgrc handle_mgr_configure stats_period=5
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9785000/0x0/0x4ffc00000, data 0x1d0a21f/0x1ea7000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640ba3d1c00 session 0x5640b9a14540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3664423 data_alloc: 218103808 data_used: 8322521
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 71499776 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b9535c00 session 0x5640b92cf180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b99f1800 session 0x5640b786bc00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640bca50400 session 0x5640b9680fc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 71499776 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696939 data_alloc: 218103808 data_used: 13693401
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 73564160 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696939 data_alloc: 218103808 data_used: 13693401
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 73555968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 73555968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 73555968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.409816742s of 18.537841797s, submitted: 24
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349388800 unmapped: 69484544 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 69459968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3743019 data_alloc: 218103808 data_used: 14239193
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f3000/0x0/0x4ffc00000, data 0x229421f/0x2431000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744719 data_alloc: 218103808 data_used: 14058969
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f3000/0x0/0x4ffc00000, data 0x229421f/0x2431000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f3000/0x0/0x4ffc00000, data 0x229421f/0x2431000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f8000/0x0/0x4ffc00000, data 0x229721f/0x2434000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.511266708s of 10.248086929s, submitted: 77
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 69058560 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 69058560 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741927 data_alloc: 218103808 data_used: 14042585
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b63cd800 session 0x5640b95176c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f6000/0x0/0x4ffc00000, data 0x229921f/0x2436000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b9ce7c00 session 0x5640b938bdc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640ba3de800 session 0x5640b6eea700
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f6000/0x0/0x4ffc00000, data 0x229921f/0x2436000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640bfd03000 session 0x5640b920c540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740759 data_alloc: 218103808 data_used: 14042585
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 292 handle_osd_map epochs [293,293], i have 293, src has [1,293]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e91f6000/0x0/0x4ffc00000, data 0x229921f/0x2436000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640b9cd7c00 session 0x5640b9a14000
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640b9ce7c00 session 0x5640b7d16540
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640ba3d1c00 session 0x5640ba1e3c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 63897600 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.870169163s of 10.112051964s, submitted: 64
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 371507200 unmapped: 56066048 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640ba3de800 session 0x5640b92ce380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 73777152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3893632 data_alloc: 234881024 data_used: 20694489
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 73777152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e7f39000/0x0/0x4ffc00000, data 0x3550a0d/0x36f1000, compress 0x0/0x0/0x0, omap 0x76e61, meta 0x1455919f), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 73768960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 294 ms_handle_reset con 0x5640b9c69c00 session 0x5640b9516380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 73752576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 79953920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 294 handle_osd_map epochs [295,295], i have 295, src has [1,295]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 79945728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640c0288800 session 0x5640b9a14e00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3885340 data_alloc: 234881024 data_used: 20694489
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e7f36000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x76f88, meta 0x14559078), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 79945728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640b9c69c00 session 0x5640b98b56c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640b9ce7c00 session 0x5640ba1d6fc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640ba3d1c00 session 0x5640b9735dc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e7f38000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x76f88, meta 0x14559078), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.974083900s of 10.401470184s, submitted: 69
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e7f38000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x76f88, meta 0x14559078), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347660288 unmapped: 79912960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3884700 data_alloc: 234881024 data_used: 20694489
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e7f38000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x77129, meta 0x14558ed7), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9d48c00 session 0x5640b6b46fc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8b0c000/0x0/0x4ffc00000, data 0x297d044/0x2b20000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760712 data_alloc: 218103808 data_used: 8322521
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8b0c000/0x0/0x4ffc00000, data 0x297d044/0x2b20000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8b0c000/0x0/0x4ffc00000, data 0x297d044/0x2b20000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760712 data_alloc: 218103808 data_used: 8322521
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640bfd03000 session 0x5640b786ba40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9c69c00 session 0x5640b786afc0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9ce7c00 session 0x5640ba1d6c40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9d48c00 session 0x5640b71fce00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640ba3d1c00 session 0x5640b98b48c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9ce6400 session 0x5640ba1d7c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.588809013s of 13.410536766s, submitted: 46
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9c69c00 session 0x5640b7ca81c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 71589888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e7d9c000/0x0/0x4ffc00000, data 0x36ed044/0x3890000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355991552 unmapped: 71581696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e7d9c000/0x0/0x4ffc00000, data 0x36ed044/0x3890000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355991552 unmapped: 71581696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640c0288400 session 0x5640ba1d76c0
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3896309 data_alloc: 234881024 data_used: 27934169
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355999744 unmapped: 71573504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 297 ms_handle_reset con 0x5640b9569800 session 0x5640b6fd1180
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243abf5/0x25df000, compress 0x0/0x0/0x0, omap 0x77732, meta 0x145588ce), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771885 data_alloc: 218103808 data_used: 13085758
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243abf5/0x25df000, compress 0x0/0x0/0x0, omap 0x77732, meta 0x145588ce), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.633598328s of 13.003002167s, submitted: 52
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3775315 data_alloc: 218103808 data_used: 13089756
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3782995 data_alloc: 218103808 data_used: 13831132
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3785683 data_alloc: 218103808 data_used: 14646236
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640be673400 session 0x5640b7d17c00
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.810759544s of 10.873312950s, submitted: 14
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640c0289c00 session 0x5640b786a380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343990272 unmapped: 83582976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343998464 unmapped: 83574784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343998464 unmapped: 83574784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.330020905s of 43.506050110s, submitted: 29
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347160576 unmapped: 80412672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9568c00 session 0x5640b97bac40
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707310 data_alloc: 218103808 data_used: 8331193
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b7e6d400 session 0x5640b9516380
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 83558400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:22 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640c0289c00 session 0x5640b786b880
Jan 27 09:47:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:24 np0005597378 nova_compute[238941]: 2026-01-27 14:47:24.004 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:24 np0005597378 rsyslogd[1006]: imjournal: 4802 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 27 09:47:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:47:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 09:47:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 09:47:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:26 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23050 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 27 09:47:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2219337045' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 27 09:47:26 np0005597378 nova_compute[238941]: 2026-01-27 14:47:26.478 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:47:26 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23054 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 39403520 heap: 341786624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 41050112 heap: 341786624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed0ae000/0x0/0x4ffc00000, data 0x18d1ca4/0x1a5e000, compress 0x0/0x0/0x0, omap 0x5a382, meta 0x11095c7e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3237449 data_alloc: 218103808 data_used: 7039315
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 41050112 heap: 341786624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 41050112 heap: 341786624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed0ae000/0x0/0x4ffc00000, data 0x18d1ca4/0x1a5e000, compress 0x0/0x0/0x0, omap 0x5a382, meta 0x11095c7e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 41050112 heap: 341786624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263d04ddc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638574c00 session 0x562635f5d6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1400 session 0x56263d04c000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.282125473s of 13.709013939s, submitted: 123
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa9400 session 0x562636139dc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263d91d400 session 0x5626376b0e00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626388abdc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638574c00 session 0x56263d04cfc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa9400 session 0x562638761c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 48685056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1400 session 0x562638abe540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec1f5000/0x0/0x4ffc00000, data 0x2789cb4/0x2917000, compress 0x0/0x0/0x0, omap 0x5a69a, meta 0x11095966), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 48685056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327062 data_alloc: 218103808 data_used: 7043411
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 48685056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 48685056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 48668672 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 48668672 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec1f5000/0x0/0x4ffc00000, data 0x2789cb4/0x2917000, compress 0x0/0x0/0x0, omap 0x5a69a, meta 0x11095966), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302473216 unmapped: 47185920 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3365138 data_alloc: 218103808 data_used: 7575891
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302538752 unmapped: 47120384 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302612480 unmapped: 47046656 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302678016 unmapped: 46981120 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638580000 session 0x56263a56b6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302678016 unmapped: 46981120 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263693b180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebbb2000/0x0/0x4ffc00000, data 0x2dbecb4/0x2f4c000, compress 0x0/0x0/0x0, omap 0x5a69a, meta 0x11095966), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302678016 unmapped: 46981120 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638574c00 session 0x56263e63c700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.514797211s of 11.883688927s, submitted: 103
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa9400 session 0x56263693b6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369227 data_alloc: 218103808 data_used: 7395667
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302522368 unmapped: 47136768 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302243840 unmapped: 47415296 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebbbc000/0x0/0x4ffc00000, data 0x2dc1cc4/0x2f50000, compress 0x0/0x0/0x0, omap 0x5a970, meta 0x11095690), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebbbc000/0x0/0x4ffc00000, data 0x2dc1cc4/0x2f50000, compress 0x0/0x0/0x0, omap 0x5a970, meta 0x11095690), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457243 data_alloc: 234881024 data_used: 18931957
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebbbc000/0x0/0x4ffc00000, data 0x2dc1cc4/0x2f50000, compress 0x0/0x0/0x0, omap 0x5a970, meta 0x11095690), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457443 data_alloc: 234881024 data_used: 18931957
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 46350336 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.160750389s of 12.198937416s, submitted: 15
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 42065920 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eadbf000/0x0/0x4ffc00000, data 0x3bb8cc4/0x3d47000, compress 0x0/0x0/0x0, omap 0x5a970, meta 0x11095690), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 40919040 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 40919040 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ead25000/0x0/0x4ffc00000, data 0x3c4acc4/0x3dd9000, compress 0x0/0x0/0x0, omap 0x5a970, meta 0x11095690), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565661 data_alloc: 234881024 data_used: 20443381
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 40919040 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf400 session 0x56263a1a9a40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e77c00 session 0x56263d04c380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e77c00 session 0x56263a29fdc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636e9b6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638574c00 session 0x562635c95340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea911000/0x0/0x4ffc00000, data 0x406bd26/0x41fb000, compress 0x0/0x0/0x0, omap 0x5abf2, meta 0x1109540e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3596297 data_alloc: 234881024 data_used: 20443381
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa9400 session 0x562636a328c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf400 session 0x56263841da40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 40591360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf400 session 0x56263d04c8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.246704102s of 11.646099091s, submitted: 161
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e77c00 session 0x562638a90700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309084160 unmapped: 40574976 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602935 data_alloc: 234881024 data_used: 20545781
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea8d9000/0x0/0x4ffc00000, data 0x40a2d36/0x4233000, compress 0x0/0x0/0x0, omap 0x5ac76, meta 0x1109538a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1400 session 0x562636139c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626388aa540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309084160 unmapped: 40574976 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa9400 session 0x5626395dd180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec483000/0x0/0x4ffc00000, data 0x2337d16/0x24c6000, compress 0x0/0x0/0x0, omap 0x5b222, meta 0x11094dde), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 47357952 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec475000/0x0/0x4ffc00000, data 0x2344d16/0x24d3000, compress 0x0/0x0/0x0, omap 0x5b222, meta 0x11094dde), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 47357952 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 47357952 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 47357952 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339542 data_alloc: 218103808 data_used: 7282405
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 47357952 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec475000/0x0/0x4ffc00000, data 0x2344d16/0x24d3000, compress 0x0/0x0/0x0, omap 0x5b222, meta 0x11094dde), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 47316992 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 46997504 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 46997504 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec475000/0x0/0x4ffc00000, data 0x2344d16/0x24d3000, compress 0x0/0x0/0x0, omap 0x5b222, meta 0x11094dde), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.952528000s of 10.060560226s, submitted: 59
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9400 session 0x56263693afc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263854f800 session 0x562636138c40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x56263d04cc40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262920 data_alloc: 218103808 data_used: 7301861
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed2bd000/0x0/0x4ffc00000, data 0x16c0c91/0x184d000, compress 0x0/0x0/0x0, omap 0x5b32a, meta 0x11094cd6), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed2bd000/0x0/0x4ffc00000, data 0x16c0c91/0x184d000, compress 0x0/0x0/0x0, omap 0x5b32a, meta 0x11094cd6), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262920 data_alloc: 218103808 data_used: 7301861
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 46964736 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309084160 unmapped: 40574976 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 40460288 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.112121582s of 10.013898849s, submitted: 115
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 310910976 unmapped: 38748160 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eca2b000/0x0/0x4ffc00000, data 0x1f4cc91/0x20d9000, compress 0x0/0x0/0x0, omap 0x5b35d, meta 0x11094ca3), peers [1,2] op hist [0,0,2])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340796 data_alloc: 218103808 data_used: 8396384
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec796000/0x0/0x4ffc00000, data 0x21e1c91/0x236e000, compress 0x0/0x0/0x0, omap 0x5b35d, meta 0x11094ca3), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 311443456 unmapped: 38215680 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 311443456 unmapped: 38215680 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 311443456 unmapped: 38215680 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 311443456 unmapped: 38215680 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 311443456 unmapped: 38215680 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340924 data_alloc: 218103808 data_used: 8400480
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 311443456 unmapped: 38215680 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626387ae380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638574c00 session 0x56263e63c8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec791000/0x0/0x4ffc00000, data 0x21eec91/0x237b000, compress 0x0/0x0/0x0, omap 0x5b35d, meta 0x11094ca3), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562639c50c40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309985280 unmapped: 39673856 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309985280 unmapped: 39673856 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecfff000/0x0/0x4ffc00000, data 0x129ac1f/0x1425000, compress 0x0/0x0/0x0, omap 0x5b630, meta 0x110949d0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309985280 unmapped: 39673856 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309985280 unmapped: 39673856 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3209237 data_alloc: 218103808 data_used: 3282528
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecfff000/0x0/0x4ffc00000, data 0x129ac1f/0x1425000, compress 0x0/0x0/0x0, omap 0x5b630, meta 0x110949d0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309985280 unmapped: 39673856 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecfff000/0x0/0x4ffc00000, data 0x129ac1f/0x1425000, compress 0x0/0x0/0x0, omap 0x5b630, meta 0x110949d0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 309985280 unmapped: 39673856 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.917388916s of 13.230425835s, submitted: 102
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636a11800 session 0x562639c50fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636ab6000 session 0x5626361396c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638336fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5b267, meta 0x11094d99), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136470 data_alloc: 218103808 data_used: 222718
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5b267, meta 0x11094d99), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5b267, meta 0x11094d99), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136470 data_alloc: 218103808 data_used: 222718
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5b267, meta 0x11094d99), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5b267, meta 0x11094d99), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136470 data_alloc: 218103808 data_used: 222718
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 41574400 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308092928 unmapped: 41566208 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308092928 unmapped: 41566208 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308092928 unmapped: 41566208 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136470 data_alloc: 218103808 data_used: 222718
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4edfb6000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5b267, meta 0x11094d99), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263854f800 session 0x5626376b16c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562638aa4700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636a11800 session 0x562638aa5c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 41549824 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636ab6000 session 0x5626395dc8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.774730682s of 18.882884979s, submitted: 49
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638abf180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9400 session 0x56263693a8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626395dddc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636a11800 session 0x562638761180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636ab6000 session 0x56263d04c8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed89b000/0x0/0x4ffc00000, data 0x10e6c5e/0x1271000, compress 0x0/0x0/0x0, omap 0x5af9d, meta 0x11095063), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184934 data_alloc: 218103808 data_used: 226779
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 41525248 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed89b000/0x0/0x4ffc00000, data 0x10e6c5e/0x1271000, compress 0x0/0x0/0x0, omap 0x5af9d, meta 0x11095063), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184934 data_alloc: 218103808 data_used: 226779
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed89b000/0x0/0x4ffc00000, data 0x10e6c5e/0x1271000, compress 0x0/0x0/0x0, omap 0x5af9d, meta 0x11095063), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed89b000/0x0/0x4ffc00000, data 0x10e6c5e/0x1271000, compress 0x0/0x0/0x0, omap 0x5af9d, meta 0x11095063), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263a56b6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e77c00 session 0x5626361e0700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x56263d04c380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636a11800 session 0x562635f5d180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.819248199s of 13.036633492s, submitted: 43
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 41517056 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253181 data_alloc: 218103808 data_used: 226779
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ed899000/0x0/0x4ffc00000, data 0x10e6c97/0x1273000, compress 0x0/0x0/0x0, omap 0x5b021, meta 0x11094fdf), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,13])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 317382656 unmapped: 32276480 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636ab6000 session 0x56263693b6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626395dcc40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf400 session 0x562638a8f500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626388aa700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636a11800 session 0x56263841d6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 41320448 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636ab6000 session 0x56263a56bc00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 41320448 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecf4c000/0x0/0x4ffc00000, data 0x1a33cd0/0x1bc0000, compress 0x0/0x0/0x0, omap 0x5b021, meta 0x11094fdf), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636e9b500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 41320448 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1400 session 0x562638866c40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x56263841d6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 41320448 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263e63c8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253999 data_alloc: 218103808 data_used: 227323
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308346880 unmapped: 41312256 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x56263a1a9180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecf4b000/0x0/0x4ffc00000, data 0x1a33ce0/0x1bc1000, compress 0x0/0x0/0x0, omap 0x5b0a5, meta 0x11094f5b), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x562638a90fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 41738240 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6b0400 session 0x562636e9b6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 41902080 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 41902080 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecf4a000/0x0/0x4ffc00000, data 0x1a33cf0/0x1bc2000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3349457 data_alloc: 234881024 data_used: 14025357
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecf4a000/0x0/0x4ffc00000, data 0x1a33cf0/0x1bc2000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3349457 data_alloc: 234881024 data_used: 14025357
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ecf4a000/0x0/0x4ffc00000, data 0x1a33cf0/0x1bc2000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.473999023s of 18.295272827s, submitted: 46
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307019776 unmapped: 42639360 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 307748864 unmapped: 41910272 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 41205760 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 40976384 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3418665 data_alloc: 234881024 data_used: 14271117
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314621952 unmapped: 35037184 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebdc0000/0x0/0x4ffc00000, data 0x2bb7cf0/0x2d46000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [0,0,0,0,0,0,0,0,3])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebce1000/0x0/0x4ffc00000, data 0x2c8dcf0/0x2e1c000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314531840 unmapped: 35127296 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 35971072 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 313696256 unmapped: 35962880 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 313696256 unmapped: 35962880 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489353 data_alloc: 234881024 data_used: 14922381
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebaf3000/0x0/0x4ffc00000, data 0x2e88cf0/0x3017000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315072512 unmapped: 34586624 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebaf3000/0x0/0x4ffc00000, data 0x2e88cf0/0x3017000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315072512 unmapped: 34586624 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebadf000/0x0/0x4ffc00000, data 0x2e96cf0/0x3025000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315072512 unmapped: 34586624 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315072512 unmapped: 34586624 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebadf000/0x0/0x4ffc00000, data 0x2e96cf0/0x3025000, compress 0x0/0x0/0x0, omap 0x5b129, meta 0x11094ed7), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.095812321s of 12.819896698s, submitted: 257
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315072512 unmapped: 34586624 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487699 data_alloc: 234881024 data_used: 14917773
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315072512 unmapped: 34586624 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x5626388ab500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x56263a29fdc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857b000 session 0x562638894a80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857d800 session 0x562638761c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315080704 unmapped: 34578432 heap: 349659136 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5000 session 0x562638a91a40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857b000 session 0x562636240c40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857d800 session 0x562636138c40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315121664 unmapped: 50331648 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x56263a1a96c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x5626387aefc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf15000/0x0/0x4ffc00000, data 0x3a68cf0/0x3bf7000, compress 0x0/0x0/0x0, omap 0x5ae5f, meta 0x110951a1), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315121664 unmapped: 50331648 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315121664 unmapped: 50331648 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578740 data_alloc: 234881024 data_used: 14925965
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315121664 unmapped: 50331648 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315121664 unmapped: 50331648 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eab45000/0x0/0x4ffc00000, data 0x3e38cf0/0x3fc7000, compress 0x0/0x0/0x0, omap 0x5ab11, meta 0x110954ef), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578244 data_alloc: 234881024 data_used: 14925965
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eab45000/0x0/0x4ffc00000, data 0x3e38cf0/0x3fc7000, compress 0x0/0x0/0x0, omap 0x5ab11, meta 0x110954ef), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562639081c00 session 0x56263d04ddc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857b000 session 0x562635f5cc40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315129856 unmapped: 50323456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578244 data_alloc: 234881024 data_used: 14925965
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857d800 session 0x5626361e0a80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.101070404s of 16.262178421s, submitted: 32
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315138048 unmapped: 50315264 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eab45000/0x0/0x4ffc00000, data 0x3e38cf0/0x3fc7000, compress 0x0/0x0/0x0, omap 0x5ab11, meta 0x110954ef), peers [1,2] op hist [0,0,0,0,0,0,1])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eab44000/0x0/0x4ffc00000, data 0x3e38d00/0x3fc8000, compress 0x0/0x0/0x0, omap 0x5a847, meta 0x110957b9), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315138048 unmapped: 50315264 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x56263841ddc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315146240 unmapped: 50307072 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3582691 data_alloc: 234881024 data_used: 14927501
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eab3a000/0x0/0x4ffc00000, data 0x3e42d00/0x3fd2000, compress 0x0/0x0/0x0, omap 0x5a847, meta 0x110957b9), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315154432 unmapped: 50298880 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583323 data_alloc: 234881024 data_used: 14927501
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x562638aa48c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576400 session 0x562636a32c40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576400 session 0x562638abfdc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x562638a90540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319225856 unmapped: 46227456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.778729439s of 10.137582779s, submitted: 29
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857b000 session 0x562636276fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857d800 session 0x562636a336c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x56263d04cc40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x562636241a40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x5626387ae1c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322863104 unmapped: 42590208 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea322000/0x0/0x4ffc00000, data 0x4658d10/0x47e9000, compress 0x0/0x0/0x0, omap 0x5aac9, meta 0x11095537), peers [1,2] op hist [0,0,0,0,0,1])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322969600 unmapped: 42483712 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322969600 unmapped: 42483712 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322969600 unmapped: 42483712 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea319000/0x0/0x4ffc00000, data 0x4661d10/0x47f2000, compress 0x0/0x0/0x0, omap 0x5aac9, meta 0x11095537), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3734090 data_alloc: 251658240 data_used: 29160413
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323002368 unmapped: 42450944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea319000/0x0/0x4ffc00000, data 0x4661d10/0x47f2000, compress 0x0/0x0/0x0, omap 0x5aac9, meta 0x11095537), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323035136 unmapped: 42418176 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576400 session 0x56263a56a8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857b000 session 0x5626388aa540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323035136 unmapped: 42418176 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857d800 session 0x562636138e00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x5626395dc540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323051520 unmapped: 42401792 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2f5000/0x0/0x4ffc00000, data 0x4685d20/0x4817000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323051520 unmapped: 42401792 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3736728 data_alloc: 251658240 data_used: 29160941
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2f5000/0x0/0x4ffc00000, data 0x4685d20/0x4817000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323633152 unmapped: 41820160 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 326557696 unmapped: 38895616 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2f2000/0x0/0x4ffc00000, data 0x4688d20/0x481a000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 329383936 unmapped: 36069376 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2f2000/0x0/0x4ffc00000, data 0x4688d20/0x481a000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.584152222s of 12.301334381s, submitted: 24
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332668928 unmapped: 32784384 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333275136 unmapped: 32178176 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3822184 data_alloc: 251658240 data_used: 34807789
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332218368 unmapped: 33234944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9b33000/0x0/0x4ffc00000, data 0x4e47d20/0x4fd9000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332447744 unmapped: 33005568 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9b28000/0x0/0x4ffc00000, data 0x4e51d20/0x4fe3000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3834170 data_alloc: 251658240 data_used: 35872749
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9b23000/0x0/0x4ffc00000, data 0x4e57d20/0x4fe9000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9b23000/0x0/0x4ffc00000, data 0x4e57d20/0x4fe9000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332496896 unmapped: 32956416 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3834554 data_alloc: 251658240 data_used: 35885037
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.191211700s of 12.030132294s, submitted: 51
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332537856 unmapped: 32915456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333897728 unmapped: 31555584 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333070336 unmapped: 32382976 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e960e000/0x0/0x4ffc00000, data 0x5366d20/0x54f8000, compress 0x0/0x0/0x0, omap 0x5ab4d, meta 0x110954b3), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333094912 unmapped: 32358400 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333283328 unmapped: 32169984 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3873838 data_alloc: 251658240 data_used: 36207597
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x5626376b1880
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb0000 session 0x562638abfc00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576c00 session 0x562635f5c8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333389824 unmapped: 32063488 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562636276fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 30531584 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8ddf000/0x0/0x4ffc00000, data 0x5b91d59/0x5d25000, compress 0x0/0x0/0x0, omap 0x5abd1, meta 0x1109542f), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,13])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 22028288 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x5626362401c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576c00 session 0x562638a916c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 30228480 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb0000 session 0x56263a29fa40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x56263e63cc40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c023000 session 0x562639c50fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 30228480 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3935679 data_alloc: 251658240 data_used: 36011005
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x56263693a1c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 30228480 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576c00 session 0x56263693ae00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb0000 session 0x562638894380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.909857750s of 11.693741798s, submitted: 127
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 30228480 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335233024 unmapped: 30220288 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb1c00 session 0x562636241880
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8de0000/0x0/0x4ffc00000, data 0x5b96dc5/0x5d2c000, compress 0x0/0x0/0x0, omap 0x5ac55, meta 0x110953ab), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 30212096 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8de0000/0x0/0x4ffc00000, data 0x5b96dc5/0x5d2c000, compress 0x0/0x0/0x0, omap 0x5ac55, meta 0x110953ab), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 335282176 unmapped: 30171136 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3946278 data_alloc: 251658240 data_used: 37111805
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 24846336 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 23404544 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8de0000/0x0/0x4ffc00000, data 0x5b96dc5/0x5d2c000, compress 0x0/0x0/0x0, omap 0x5ac55, meta 0x110953ab), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 23265280 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576400 session 0x562638a90700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857b000 session 0x5626388ab340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 23257088 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x5626388abc00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6ae800 session 0x5626361e1a40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342228992 unmapped: 23224320 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3856554 data_alloc: 251658240 data_used: 35891181
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342228992 unmapped: 23224320 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9b16000/0x0/0x4ffc00000, data 0x4e61db5/0x4ff6000, compress 0x0/0x0/0x0, omap 0x5af28, meta 0x110950d8), peers [1,2] op hist [0,0,0,0,0,4])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 31539200 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.520123959s of 10.120170593s, submitted: 73
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576400 session 0x56263841c000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 31531008 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369d0c00 session 0x562638a90a80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb278000/0x0/0x4ffc00000, data 0x36fcda5/0x3890000, compress 0x0/0x0/0x0, omap 0x5ac5e, meta 0x110953a2), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 31531008 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb27a000/0x0/0x4ffc00000, data 0x36ffd95/0x3892000, compress 0x0/0x0/0x0, omap 0x5a646, meta 0x110959ba), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 31531008 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621246 data_alloc: 234881024 data_used: 20337133
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 31531008 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 31531008 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb27a000/0x0/0x4ffc00000, data 0x36ffd95/0x3892000, compress 0x0/0x0/0x0, omap 0x5a646, meta 0x110959ba), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 31531008 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 30859264 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 330194944 unmapped: 35258368 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3697116 data_alloc: 234881024 data_used: 20649453
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636a11800 session 0x56263a56b6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636ab6000 session 0x562636a32fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 330203136 unmapped: 35250176 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea54f000/0x0/0x4ffc00000, data 0x4421d95/0x45b4000, compress 0x0/0x0/0x0, omap 0x5a646, meta 0x110959ba), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.223359108s of 10.027532578s, submitted: 121
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 330244096 unmapped: 35209216 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 328015872 unmapped: 37437440 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 328032256 unmapped: 37421056 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576400 session 0x5626395dd6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 328032256 unmapped: 37421056 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578999 data_alloc: 234881024 data_used: 17696221
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 328032256 unmapped: 37421056 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 328032256 unmapped: 37421056 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626388abdc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eac7a000/0x0/0x4ffc00000, data 0x3542d13/0x36d2000, compress 0x0/0x0/0x0, omap 0x5a400, meta 0x11095c00), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636139dc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 326991872 unmapped: 38461440 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263a56a540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379442 data_alloc: 218103808 data_used: 8005483
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e4000/0x0/0x4ffc00000, data 0x1f02c91/0x208f000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 46194688 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379442 data_alloc: 218103808 data_used: 8005483
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319266816 unmapped: 46186496 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319266816 unmapped: 46186496 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e4000/0x0/0x4ffc00000, data 0x1f02c91/0x208f000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e4000/0x0/0x4ffc00000, data 0x1f02c91/0x208f000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 27 09:47:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/558187243' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379442 data_alloc: 218103808 data_used: 8005483
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e4000/0x0/0x4ffc00000, data 0x1f02c91/0x208f000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319275008 unmapped: 46178304 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379442 data_alloc: 218103808 data_used: 8005483
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 46170112 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 46170112 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e4000/0x0/0x4ffc00000, data 0x1f02c91/0x208f000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 46170112 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e4000/0x0/0x4ffc00000, data 0x1f02c91/0x208f000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 46170112 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 46170112 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379442 data_alloc: 218103808 data_used: 8005483
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 46170112 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.273204803s of 30.385374069s, submitted: 58
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e2000/0x0/0x4ffc00000, data 0x1f03c91/0x2090000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379658 data_alloc: 218103808 data_used: 8005483
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319291392 unmapped: 46161920 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ec6e2000/0x0/0x4ffc00000, data 0x1f03c91/0x2090000, compress 0x0/0x0/0x0, omap 0x5a508, meta 0x11095af8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319299584 unmapped: 46153728 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381012 data_alloc: 218103808 data_used: 8009481
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562635f5cfc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 45096960 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eca7b000/0x0/0x4ffc00000, data 0x1f03cba/0x2091000, compress 0x0/0x0/0x0, omap 0x5a58c, meta 0x11095a74), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x562638a6fdc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 45096960 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562638a90fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f708800 session 0x562636e9b500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562638761dc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263d04d180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.236966133s of 10.079217911s, submitted: 25
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 325074944 unmapped: 40378368 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562638760380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 45096960 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x562636139180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 45096960 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f705c00 session 0x562638761340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457295 data_alloc: 218103808 data_used: 8009481
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562638abfdc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 45096960 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebecc000/0x0/0x4ffc00000, data 0x2ab2cf3/0x2c40000, compress 0x0/0x0/0x0, omap 0x5a694, meta 0x1109596c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638abe540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320389120 unmapped: 45064192 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320397312 unmapped: 45056000 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebea8000/0x0/0x4ffc00000, data 0x2ad6cf3/0x2c64000, compress 0x0/0x0/0x0, omap 0x5a718, meta 0x110958e8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x5626387608c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x562639c50700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626361e2000 session 0x56263a1a9180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562638abec40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562638aa4380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 45031424 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 325566464 unmapped: 39886848 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509537 data_alloc: 218103808 data_used: 8536329
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638abf340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562636a32a80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567800 session 0x5626361e1180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626376b0c40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626361e1500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea83c000/0x0/0x4ffc00000, data 0x2fa0d2c/0x3130000, compress 0x0/0x0/0x0, omap 0x5a44e, meta 0x12235bb2), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320741376 unmapped: 44711936 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320741376 unmapped: 44711936 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320741376 unmapped: 44711936 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea83c000/0x0/0x4ffc00000, data 0x2fa0d65/0x3130000, compress 0x0/0x0/0x0, omap 0x5a44e, meta 0x12235bb2), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322691072 unmapped: 42762240 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322691072 unmapped: 42762240 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561881 data_alloc: 234881024 data_used: 19123993
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562636138e00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea83c000/0x0/0x4ffc00000, data 0x2fa0d65/0x3130000, compress 0x0/0x0/0x0, omap 0x5a44e, meta 0x12235bb2), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322691072 unmapped: 42762240 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562639c51340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322691072 unmapped: 42762240 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322691072 unmapped: 42762240 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626376e8800 session 0x562638aa48c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.028289795s of 15.859889984s, submitted: 56
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x5626395dd180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322699264 unmapped: 42754048 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626387ae380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322707456 unmapped: 42745856 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566617 data_alloc: 234881024 data_used: 19136281
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea83a000/0x0/0x4ffc00000, data 0x2fa0d98/0x3132000, compress 0x0/0x0/0x0, omap 0x5a69d, meta 0x12235963), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320790528 unmapped: 44662784 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322027520 unmapped: 43425792 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638895c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf78000/0x0/0x4ffc00000, data 0x2862d98/0x29f4000, compress 0x0/0x0/0x0, omap 0x5a7a5, meta 0x1223585b), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf78000/0x0/0x4ffc00000, data 0x2862d98/0x29f4000, compress 0x0/0x0/0x0, omap 0x5a7a5, meta 0x1223585b), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf6f000/0x0/0x4ffc00000, data 0x286bd98/0x29fd000, compress 0x0/0x0/0x0, omap 0x5a7a5, meta 0x1223585b), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3503495 data_alloc: 234881024 data_used: 13474089
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf6f000/0x0/0x4ffc00000, data 0x286bd98/0x29fd000, compress 0x0/0x0/0x0, omap 0x5a7a5, meta 0x1223585b), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf6f000/0x0/0x4ffc00000, data 0x286bd98/0x29fd000, compress 0x0/0x0/0x0, omap 0x5a7a5, meta 0x1223585b), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3503495 data_alloc: 234881024 data_used: 13474089
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322068480 unmapped: 43384832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322076672 unmapped: 43376640 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.716560364s of 14.380043030s, submitted: 66
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323321856 unmapped: 42131456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea8f5000/0x0/0x4ffc00000, data 0x2ee5d98/0x3077000, compress 0x0/0x0/0x0, omap 0x5a7a5, meta 0x1223585b), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323321856 unmapped: 42131456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323321856 unmapped: 42131456 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3542699 data_alloc: 234881024 data_used: 13711657
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323690496 unmapped: 41762816 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263d91d800 session 0x56263a56a000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4400 session 0x5626361e1c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626387ae540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562635f5d340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x562635f5d6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263d91d800 session 0x562635f5c1c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263857cc00 session 0x562635f5c000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562638a6ee00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562635c94540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327229440 unmapped: 38223872 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x562638866e00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x562638abe700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327188480 unmapped: 38264832 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x562635f5da40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327204864 unmapped: 38248448 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e96fe000/0x0/0x4ffc00000, data 0x2f36da8/0x30c9000, compress 0x0/0x0/0x0, omap 0x5a5e3, meta 0x133d5a1d), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327204864 unmapped: 38248448 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539905 data_alloc: 234881024 data_used: 13070121
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327204864 unmapped: 38248448 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327204864 unmapped: 38248448 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.797038078s of 10.519598007s, submitted: 129
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 38240256 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626376b0c40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e96e1000/0x0/0x4ffc00000, data 0x2f57dcb/0x30eb000, compress 0x0/0x0/0x0, omap 0x5a667, meta 0x133d5999), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e96e1000/0x0/0x4ffc00000, data 0x2f57dcb/0x30eb000, compress 0x0/0x0/0x0, omap 0x5a667, meta 0x133d5999), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 38240256 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x5626387afa40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327213056 unmapped: 38240256 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x562635c95180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x56263693a000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566326 data_alloc: 234881024 data_used: 16118071
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263d91d800 session 0x5626387ae540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x56263a56a000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x562635f5d340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x56263d04c700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327237632 unmapped: 38215680 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e90f7000/0x0/0x4ffc00000, data 0x3540ddb/0x36d5000, compress 0x0/0x0/0x0, omap 0x5a8b6, meta 0x133d574a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x5626361e1c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638576000 session 0x5626361e1180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562639c50700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 327254016 unmapped: 38199296 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e90f1000/0x0/0x4ffc00000, data 0x3546ddb/0x36db000, compress 0x0/0x0/0x0, omap 0x5a8e9, meta 0x133d5717), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x5626368b8000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af000 session 0x562636277a40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 326230016 unmapped: 39223296 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321814528 unmapped: 43638784 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319922176 unmapped: 45531136 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3394987 data_alloc: 218103808 data_used: 4390632
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x562638a8e540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa95000/0x0/0x4ffc00000, data 0x1ba5cf7/0x1d36000, compress 0x0/0x0/0x0, omap 0x5a727, meta 0x133d58d9), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 45522944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f70a000 session 0x56263e63d340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x562636e9b880
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 45522944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.852835178s of 10.088224411s, submitted: 106
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626395dda40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 45522944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x56263a1a96c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa98000/0x0/0x4ffc00000, data 0x1ba5cc4/0x1d34000, compress 0x0/0x0/0x0, omap 0x5a7ab, meta 0x133d5855), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x562638a908c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 45522944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 45522944 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395823 data_alloc: 218103808 data_used: 4453952
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa98000/0x0/0x4ffc00000, data 0x1ba5ca1/0x1d33000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa98000/0x0/0x4ffc00000, data 0x1ba5ca1/0x1d33000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3425647 data_alloc: 234881024 data_used: 9457216
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa98000/0x0/0x4ffc00000, data 0x1ba5ca1/0x1d33000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3425647 data_alloc: 234881024 data_used: 9457216
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.098381042s of 12.684741020s, submitted: 9
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa98000/0x0/0x4ffc00000, data 0x1ba5ca1/0x1d33000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319315968 unmapped: 46137344 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaa99000/0x0/0x4ffc00000, data 0x1ba5ca1/0x1d33000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [0,0,3,4])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320708608 unmapped: 44744704 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3490673 data_alloc: 234881024 data_used: 9727517
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea1a0000/0x0/0x4ffc00000, data 0x2495ca1/0x2623000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea1a0000/0x0/0x4ffc00000, data 0x2495ca1/0x2623000, compress 0x0/0x0/0x0, omap 0x5a9fa, meta 0x133d5606), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3490673 data_alloc: 234881024 data_used: 9727517
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.549484253s of 10.240197182s, submitted: 74
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea1a9000/0x0/0x4ffc00000, data 0x2495ca1/0x2623000, compress 0x0/0x0/0x0, omap 0x5a6ac, meta 0x133d5954), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320880640 unmapped: 44572672 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af000 session 0x56263a29ea80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x562636139500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320897024 unmapped: 44556288 heap: 365453312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3485965 data_alloc: 234881024 data_used: 9727517
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 329383936 unmapped: 38174720 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322043904 unmapped: 45514752 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638abefc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e988c000/0x0/0x4ffc00000, data 0x2db3c91/0x2f40000, compress 0x0/0x0/0x0, omap 0x5a7b4, meta 0x133d584c), peers [1,2] op hist [0,0,0,0,0,0,3])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320364544 unmapped: 47194112 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea650000/0x0/0x4ffc00000, data 0x1feecba/0x217c000, compress 0x0/0x0/0x0, omap 0x5aa03, meta 0x133d55fd), peers [1,2] op hist [0,0,0,0,0,0,0,0,2])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x56263e63d180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x56263a56afc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321847296 unmapped: 45711360 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626369f7c00 session 0x5626361e1500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321847296 unmapped: 45711360 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463956 data_alloc: 218103808 data_used: 4402701
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321847296 unmapped: 45711360 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.671667099s of 11.104839325s, submitted: 100
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626361e0540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321847296 unmapped: 45711360 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x5626388ab500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562636a336c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af000 session 0x5626387aefc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 315785216 unmapped: 51773440 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638a6e700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x562638894a80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eadab000/0x0/0x4ffc00000, data 0x1894c81/0x1a20000, compress 0x0/0x0/0x0, omap 0x5aa0c, meta 0x133d55f4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 312672256 unmapped: 54886400 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562639c51340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 312672256 unmapped: 54886400 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263841d6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eadac000/0x0/0x4ffc00000, data 0x1894c4e/0x1a1e000, compress 0x0/0x0/0x0, omap 0x5a80c, meta 0x133d57f4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366135 data_alloc: 218103808 data_used: 176621
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314204160 unmapped: 53354496 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eadab000/0x0/0x4ffc00000, data 0x1894c5e/0x1a1f000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314204160 unmapped: 53354496 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3443191 data_alloc: 234881024 data_used: 13085677
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eadab000/0x0/0x4ffc00000, data 0x1894c5e/0x1a1f000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3443191 data_alloc: 234881024 data_used: 13085677
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 53444608 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.105199814s of 14.853006363s, submitted: 65
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 316129280 unmapped: 51429376 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaac7000/0x0/0x4ffc00000, data 0x1b7ac5e/0x1d05000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 316391424 unmapped: 51167232 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320069632 unmapped: 47489024 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321216512 unmapped: 46342144 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543373 data_alloc: 234881024 data_used: 15076200
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321216512 unmapped: 46342144 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea0e2000/0x0/0x4ffc00000, data 0x2559c5e/0x26e4000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321216512 unmapped: 46342144 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321216512 unmapped: 46342144 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea0e2000/0x0/0x4ffc00000, data 0x2559c5e/0x26e4000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321216512 unmapped: 46342144 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321216512 unmapped: 46342144 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543517 data_alloc: 234881024 data_used: 15080296
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321224704 unmapped: 46333952 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321224704 unmapped: 46333952 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea0e2000/0x0/0x4ffc00000, data 0x2559c5e/0x26e4000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321224704 unmapped: 46333952 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321224704 unmapped: 46333952 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea0e2000/0x0/0x4ffc00000, data 0x2559c5e/0x26e4000, compress 0x0/0x0/0x0, omap 0x5a890, meta 0x133d5770), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 321224704 unmapped: 46333952 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543645 data_alloc: 234881024 data_used: 15084392
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.778573990s of 13.795457840s, submitted: 147
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562638a6fc00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x5626395dc8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562638a6e700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319807488 unmapped: 47751168 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562635f5c700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eae7e000/0x0/0x4ffc00000, data 0x16abbec/0x1834000, compress 0x0/0x0/0x0, omap 0x5aa1c, meta 0x133d55e4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319807488 unmapped: 47751168 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319807488 unmapped: 47751168 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3425558 data_alloc: 234881024 data_used: 9444712
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eae7e000/0x0/0x4ffc00000, data 0x16abbec/0x1834000, compress 0x0/0x0/0x0, omap 0x5aa1c, meta 0x133d55e4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eae7e000/0x0/0x4ffc00000, data 0x16abbec/0x1834000, compress 0x0/0x0/0x0, omap 0x5aa1c, meta 0x133d55e4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638abee00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x56263841ddc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3425558 data_alloc: 234881024 data_used: 9444712
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709c00 session 0x562636a336c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.914650917s of 10.026164055s, submitted: 28
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x56263841d180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 319815680 unmapped: 47742976 heap: 367558656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x562635f5c380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af000 session 0x562636241a40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eaf96000/0x0/0x4ffc00000, data 0x16abc1f/0x1836000, compress 0x0/0x0/0x0, omap 0x5aaa0, meta 0x133d5560), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709400 session 0x5626388661c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626385bb800 session 0x562638895180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x56263a1a8540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320888832 unmapped: 54108160 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x56263a56a700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320888832 unmapped: 54108160 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492968 data_alloc: 234881024 data_used: 9977094
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320888832 unmapped: 54108160 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320888832 unmapped: 54108160 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320888832 unmapped: 54108160 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea603000/0x0/0x4ffc00000, data 0x203dc81/0x21c9000, compress 0x0/0x0/0x0, omap 0x5ab24, meta 0x133d54dc), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320888832 unmapped: 54108160 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea603000/0x0/0x4ffc00000, data 0x203dc81/0x21c9000, compress 0x0/0x0/0x0, omap 0x5ab24, meta 0x133d54dc), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320897024 unmapped: 54099968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492968 data_alloc: 234881024 data_used: 9977094
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320897024 unmapped: 54099968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.280591011s of 11.585002899s, submitted: 37
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320897024 unmapped: 54099968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af000 session 0x562636e9afc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea603000/0x0/0x4ffc00000, data 0x203dc81/0x21c9000, compress 0x0/0x0/0x0, omap 0x5aba8, meta 0x133d5458), peers [1,2] op hist [0,1])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 320913408 unmapped: 54083584 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322011136 unmapped: 52985856 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322945024 unmapped: 52051968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565237 data_alloc: 234881024 data_used: 20024680
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322945024 unmapped: 52051968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322945024 unmapped: 52051968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322945024 unmapped: 52051968 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea553000/0x0/0x4ffc00000, data 0x20ecca4/0x2279000, compress 0x0/0x0/0x0, omap 0x5aba8, meta 0x133d5458), peers [1,2] op hist [0,0,0,0,0,0,2,1])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562638abefc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af800 session 0x562635f5c540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322961408 unmapped: 52035584 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322961408 unmapped: 52035584 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604710 data_alloc: 234881024 data_used: 20024680
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322961408 unmapped: 52035584 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x5626361e0a80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638a6e000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562639c50000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638a6e1c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322961408 unmapped: 52035584 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x56263693afc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9f85000/0x0/0x4ffc00000, data 0x26bacd3/0x2846000, compress 0x0/0x0/0x0, omap 0x5ad34, meta 0x133d52cc), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 322961408 unmapped: 52035584 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x56263841d500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.516821861s of 11.828451157s, submitted: 52
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626368b81c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 323117056 unmapped: 51879936 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 325648384 unmapped: 49348608 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659588 data_alloc: 234881024 data_used: 20766021
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 325656576 unmapped: 49340416 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e963c000/0x0/0x4ffc00000, data 0x2ffcd06/0x318a000, compress 0x0/0x0/0x0, omap 0x5adb8, meta 0x133d5248), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331366400 unmapped: 43630592 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x562638a6e8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x56263e63c000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331366400 unmapped: 43630592 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 40K writes, 161K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s#012Cumulative WAL: 40K writes, 14K syncs, 2.83 writes per sync, written: 0.16 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5642 writes, 23K keys, 5642 commit groups, 1.0 writes per commit group, ingest: 28.69 MB, 0.05 MB/s#012Interval WAL: 5642 writes, 2062 syncs, 2.74 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578226 data_alloc: 234881024 data_used: 19053381
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567000 session 0x5626376b01c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea245000/0x0/0x4ffc00000, data 0x23f9d06/0x2587000, compress 0x0/0x0/0x0, omap 0x5ae3c, meta 0x133d51c4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea266000/0x0/0x4ffc00000, data 0x23d8d06/0x2566000, compress 0x0/0x0/0x0, omap 0x5aec0, meta 0x133d5140), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea266000/0x0/0x4ffc00000, data 0x23d8d06/0x2566000, compress 0x0/0x0/0x0, omap 0x5aec0, meta 0x133d5140), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3579066 data_alloc: 234881024 data_used: 19061573
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 43622400 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.386153221s of 13.340146065s, submitted: 128
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 331382784 unmapped: 43614208 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333471744 unmapped: 41525248 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 332832768 unmapped: 42164224 heap: 374996992 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 350167040 unmapped: 31178752 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8e01000/0x0/0x4ffc00000, data 0x383cd2f/0x39cb000, compress 0x0/0x0/0x0, omap 0x5af44, meta 0x133d50bc), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,19,5])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3735097 data_alloc: 234881024 data_used: 19163973
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x56263693b340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 333103104 unmapped: 48242688 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638867a40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 37617664 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344776704 unmapped: 36569088 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x5626368b9340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x56263693afc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 46792704 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 46792704 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3787864 data_alloc: 234881024 data_used: 19168069
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8268000/0x0/0x4ffc00000, data 0x43d4dca/0x4564000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f6af000 session 0x562638aa4fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 46792704 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562635f5c540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 46792704 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636e9afc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.066846371s of 10.832829475s, submitted: 126
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x56263a1a8540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 46784512 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 46784512 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 37871616 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3887723 data_alloc: 251658240 data_used: 33749317
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 37863424 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8263000/0x0/0x4ffc00000, data 0x43d7dfd/0x4569000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x562635c95180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8263000/0x0/0x4ffc00000, data 0x43d7dfd/0x4569000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 37863424 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 37863424 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 30842880 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 29327360 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3952452 data_alloc: 251658240 data_used: 42720069
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8262000/0x0/0x4ffc00000, data 0x43d7e20/0x456a000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 29286400 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8260000/0x0/0x4ffc00000, data 0x43d8e20/0x456b000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 29229056 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8260000/0x0/0x4ffc00000, data 0x43d8e20/0x456b000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 29196288 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 29196288 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 29188096 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3952332 data_alloc: 251658240 data_used: 42724165
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.854118347s of 13.052800179s, submitted: 16
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 23945216 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7455000/0x0/0x4ffc00000, data 0x51d6e20/0x5369000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 22454272 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 22274048 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9800 session 0x562638abf6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 21979136 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e6ea0000/0x0/0x4ffc00000, data 0x5791e20/0x5924000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 21979136 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4092076 data_alloc: 251658240 data_used: 43747141
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 21970944 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 21970944 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9800 session 0x56263d04da40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 21970944 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 361398272 unmapped: 19947520 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 17752064 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4159215 data_alloc: 268435456 data_used: 46688167
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e66d9000/0x0/0x4ffc00000, data 0x5f5fe43/0x60f3000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e66d9000/0x0/0x4ffc00000, data 0x5f5fe43/0x60f3000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.015433311s of 10.192104340s, submitted: 236
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 17088512 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 16621568 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 16621568 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x562638a6fc00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x562638a6f6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638aa4000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356769792 unmapped: 24576000 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e859f000/0x0/0x4ffc00000, data 0x409adae/0x422b000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356769792 unmapped: 24576000 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3898790 data_alloc: 251658240 data_used: 29610407
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356769792 unmapped: 24576000 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356769792 unmapped: 24576000 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356769792 unmapped: 24576000 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f709400 session 0x562638a90540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636240fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 23592960 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9887000/0x0/0x4ffc00000, data 0x2db0d29/0x2f3f000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 23592960 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3721518 data_alloc: 234881024 data_used: 25086754
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 23592960 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 23592960 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.379924774s of 12.045909882s, submitted: 109
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 22413312 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 358752256 unmapped: 22593536 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8d90000/0x0/0x4ffc00000, data 0x389fd29/0x3a2e000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 23707648 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3798102 data_alloc: 234881024 data_used: 26283677
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 23707648 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8d78000/0x0/0x4ffc00000, data 0x38bdd29/0x3a4c000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 23707648 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 23838720 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 23838720 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8d80000/0x0/0x4ffc00000, data 0x38bdd29/0x3a4c000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 23838720 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3793574 data_alloc: 234881024 data_used: 26373789
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 23838720 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 23756800 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8d80000/0x0/0x4ffc00000, data 0x38bdd29/0x3a4c000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 23756800 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 23756800 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626388aa000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.792613029s of 12.164447784s, submitted: 220
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 27852800 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659698 data_alloc: 234881024 data_used: 19529373
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9800 session 0x5626376b0fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84000 session 0x5626387aec40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 27852800 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x562639c51dc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9e08000/0x0/0x4ffc00000, data 0x2835d06/0x29c3000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487376 data_alloc: 218103808 data_used: 7650938
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb01e000/0x0/0x4ffc00000, data 0x1620c81/0x17ac000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 36634624 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562639c51880
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562636a33340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb01e000/0x0/0x4ffc00000, data 0x1620c81/0x17ac000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638a6e8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368131 data_alloc: 218103808 data_used: 210421
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368131 data_alloc: 218103808 data_used: 210421
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368131 data_alloc: 218103808 data_used: 210421
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 43433984 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368131 data_alloc: 218103808 data_used: 210421
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eba26000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 43425792 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368131 data_alloc: 218103808 data_used: 210421
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.325378418s of 30.658605576s, submitted: 84
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x5626395dcfc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 43417600 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb45d000/0x0/0x4ffc00000, data 0x11e6bec/0x136f000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 43417600 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 43417600 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 43417600 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb45d000/0x0/0x4ffc00000, data 0x11e6bec/0x136f000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9800 session 0x562636a33880
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638a6ea80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x56263841ddc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562638a6e700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337936384 unmapped: 43409408 heap: 381345792 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3423271 data_alloc: 218103808 data_used: 218480
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4eb45d000/0x0/0x4ffc00000, data 0x11e6bec/0x136f000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [0,0,0,5])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562635f5c700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263f567400 session 0x5626376b01c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562636e9a8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562639c51500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x56263693bc00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337903616 unmapped: 49225728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x5626395dddc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84000 session 0x56263d04c540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337903616 unmapped: 49225728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x56263841d6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638a8f180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 49250304 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 49250304 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x56263e63c540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 49250304 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3499050 data_alloc: 218103808 data_used: 502640
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x5626368b9180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 50249728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea7fa000/0x0/0x4ffc00000, data 0x1e47c1f/0x1fd2000, compress 0x0/0x0/0x0, omap 0x5af86, meta 0x133d507a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 50249728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb7000 session 0x5626361396c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.254443169s of 11.891675949s, submitted: 38
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x56263a29f500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 50249728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea7f8000/0x0/0x4ffc00000, data 0x1e47c52/0x1fd4000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 50249728 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 338894848 unmapped: 48234496 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621838 data_alloc: 234881024 data_used: 20356992
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x5626395dd500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x562636139c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562638aa5dc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa6400 session 0x56263693b180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea7f8000/0x0/0x4ffc00000, data 0x1e47c52/0x1fd4000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [0,0,0,0,0,0,6])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa6400 session 0x56263a1a8a80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638abf500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562638866e00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x562638abec40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x562638336fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 47947776 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3665712 data_alloc: 234881024 data_used: 20361088
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 44548096 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea306000/0x0/0x4ffc00000, data 0x2338c62/0x24c6000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [0,0,0,0,2])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x56263a56ba40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 44359680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.644995689s of 10.271551132s, submitted: 99
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 44359680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343900160 unmapped: 43229184 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 343941120 unmapped: 43188224 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3779574 data_alloc: 234881024 data_used: 25505168
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 41418752 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 41418752 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e939a000/0x0/0x4ffc00000, data 0x32a4c62/0x3432000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345800704 unmapped: 41328640 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 41320448 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3800006 data_alloc: 234881024 data_used: 25910672
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9371000/0x0/0x4ffc00000, data 0x32cdc62/0x345b000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e9350000/0x0/0x4ffc00000, data 0x32eec62/0x347c000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.878005981s of 11.467146873s, submitted: 73
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 40263680 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8b89000/0x0/0x4ffc00000, data 0x3ab5c62/0x3c43000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [0,0,0,0,0,0,0,26,1,0,37])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 33619968 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3854794 data_alloc: 234881024 data_used: 25941392
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 34275328 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e8a1f000/0x0/0x4ffc00000, data 0x3c17c62/0x3da5000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 33955840 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352518144 unmapped: 34611200 heap: 387129344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 361627648 unmapped: 26558464 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 35397632 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3924144 data_alloc: 234881024 data_used: 27091344
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562636138e00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 35397632 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7fe6000/0x0/0x4ffc00000, data 0x4658c62/0x47e6000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [0,0,0,0,0,0,5,1])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 33816576 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x5626388aa1c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x5626395dd180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562639081c00 session 0x562638a6efc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x5626387608c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 33538048 heap: 388186112 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x5626388aac40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 27222016 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x562635c94fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 2.818206310s of 10.488877296s, submitted: 160
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x5626368b8e00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263e7ad000 session 0x562635c95880
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562636a328c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355876864 unmapped: 33931264 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x56263d04d880
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3979494 data_alloc: 251658240 data_used: 27296144
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x56263a56a8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x56263a29f180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263e7ad000 session 0x56263e63ce00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356032512 unmapped: 33775616 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356032512 unmapped: 33775616 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e78c3000/0x0/0x4ffc00000, data 0x4d79c81/0x4f09000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356032512 unmapped: 33775616 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 32161792 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x5626395dca80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 32161792 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4034809 data_alloc: 251658240 data_used: 35521424
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x5626361e0540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357646336 unmapped: 32161792 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x56263a56ac40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 32030720 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562636139180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fa6400 session 0x56263a56a700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626388b3c00 session 0x56263d04c700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 32006144 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7882000/0x0/0x4ffc00000, data 0x4dbac81/0x4f4a000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 32006144 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626395dca80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.917056084s of 10.204359055s, submitted: 30
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 32006144 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3901475 data_alloc: 251658240 data_used: 29260191
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e882e000/0x0/0x4ffc00000, data 0x3e0dc81/0x3f9d000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 29286400 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942835 data_alloc: 251658240 data_used: 36173215
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 26451968 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 25460736 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7e80000/0x0/0x4ffc00000, data 0x47bcc81/0x494c000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 25305088 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 25305088 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 25305088 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4020779 data_alloc: 251658240 data_used: 37471647
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7dd4000/0x0/0x4ffc00000, data 0x4867c81/0x49f7000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 364568576 unmapped: 25239552 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7dd4000/0x0/0x4ffc00000, data 0x4867c81/0x49f7000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.449850082s of 11.943284035s, submitted: 103
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 22986752 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366157824 unmapped: 23650304 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e761d000/0x0/0x4ffc00000, data 0x5011c81/0x51a1000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 22552576 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e75f2000/0x0/0x4ffc00000, data 0x5042c81/0x51d2000, compress 0x0/0x0/0x0, omap 0x67540, meta 0x133c8ac0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 22552576 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4082607 data_alloc: 251658240 data_used: 38308255
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 22552576 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 22552576 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263e7ad000 session 0x5626395dc1c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562638aa41c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263fa84c00 session 0x562635c95880
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 25837568 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 25837568 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e89ca000/0x0/0x4ffc00000, data 0x3c73c71/0x3e02000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 25837568 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3906876 data_alloc: 251658240 data_used: 28780447
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x5626361e16c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x562638abf500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 30547968 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626376b0fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 30547968 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562639c501c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.503918648s of 11.192797661s, submitted: 119
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x5626388b3c00 session 0x56263841c540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 36823040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 36823040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ead00000/0x0/0x4ffc00000, data 0x193fc2f/0x1acb000, compress 0x0/0x0/0x0, omap 0x66c38, meta 0x133c93c8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 36823040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562638a6f500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x56263a1a8c40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3596093 data_alloc: 234881024 data_used: 12481920
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 43991040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 43991040 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638abea80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebc77000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x67e58, meta 0x133c81a8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434552 data_alloc: 218103808 data_used: 226602
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebc77000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x67e58, meta 0x133c81a8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434552 data_alloc: 218103808 data_used: 226602
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebc77000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x67e58, meta 0x133c81a8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ebc77000/0x0/0x4ffc00000, data 0x9ccbec/0xb55000, compress 0x0/0x0/0x0, omap 0x67e58, meta 0x133c81a8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434552 data_alloc: 218103808 data_used: 226602
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x5626368b8e00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb8c00 session 0x56263a56a8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626387ae1c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636a32a80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.327098846s of 19.182558060s, submitted: 61
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 43974656 heap: 389808128 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562639c51c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea44e000/0x0/0x4ffc00000, data 0x1055bec/0x11de000, compress 0x0/0x0/0x0, omap 0x67edc, meta 0x14568124), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480238 data_alloc: 218103808 data_used: 226602
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea44e000/0x0/0x4ffc00000, data 0x1055bec/0x11de000, compress 0x0/0x0/0x0, omap 0x67edc, meta 0x14568124), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346103808 unmapped: 47906816 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea44e000/0x0/0x4ffc00000, data 0x1055bec/0x11de000, compress 0x0/0x0/0x0, omap 0x67edc, meta 0x14568124), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562635f5cc40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 47742976 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263c7ba400 session 0x562639c50000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562639c51180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636edb800 session 0x562636241c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562638866fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 47554560 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562638866c40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb4c00 session 0x562635f5c540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x562638866700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638faf000 session 0x562638aa5a40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb5c00 session 0x562635c94c40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3506993 data_alloc: 218103808 data_used: 229162
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2e7000/0x0/0x4ffc00000, data 0x11bac25/0x1345000, compress 0x0/0x0/0x0, omap 0x683f8, meta 0x14567c08), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2e7000/0x0/0x4ffc00000, data 0x11bac5e/0x1345000, compress 0x0/0x0/0x0, omap 0x6843a, meta 0x14567bc6), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x56263e7ad000 session 0x5626388ab500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540273 data_alloc: 218103808 data_used: 5777706
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x5626387ae8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 47538176 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562638fb9c00 session 0x562638867a40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.888916016s of 14.541015625s, submitted: 67
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 ms_handle_reset con 0x562636e76400 session 0x5626368b9340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346234880 unmapped: 47775744 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345595904 unmapped: 48414720 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4ea2e5000/0x0/0x4ffc00000, data 0x11bac91/0x1347000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x14567b42), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 48660480 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 48660480 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548848 data_alloc: 218103808 data_used: 6713146
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 48726016 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 47292416 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e83e1000/0x0/0x4ffc00000, data 0x1f15c91/0x20a2000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3648314 data_alloc: 218103808 data_used: 8061754
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e83e1000/0x0/0x4ffc00000, data 0x1f15c91/0x20a2000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 45064192 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e83e1000/0x0/0x4ffc00000, data 0x1f15c91/0x20a2000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.727126122s of 11.154335976s, submitted: 116
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 46096384 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e83e7000/0x0/0x4ffc00000, data 0x1f18c91/0x20a5000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 45105152 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7c65000/0x0/0x4ffc00000, data 0x268cc91/0x2819000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 42582016 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692260 data_alloc: 218103808 data_used: 9312058
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 41943040 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 heartbeat osd_stat(store_statfs(0x4e7bd1000/0x0/0x4ffc00000, data 0x2728c91/0x28b5000, compress 0x0/0x0/0x0, omap 0x684be, meta 0x15707b42), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 41893888 heap: 394010624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 281 ms_handle_reset con 0x56263e7ad000 session 0x562636a33dc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373047296 unmapped: 30949376 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 281 ms_handle_reset con 0x562638fa7c00 session 0x562638895340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 36478976 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 282 ms_handle_reset con 0x562638521c00 session 0x56263693b6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367558656 unmapped: 36438016 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3811512 data_alloc: 234881024 data_used: 23295290
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x562636e76400 session 0x562638abf180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 36421632 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e7232000/0x0/0x4ffc00000, data 0x30c5037/0x3256000, compress 0x0/0x0/0x0, omap 0x6890c, meta 0x157076f4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x562638fa7c00 session 0x562638a6f180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x562638fb9c00 session 0x562638a8f340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x56263e7ad000 session 0x562638895500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367583232 unmapped: 36413440 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367583232 unmapped: 36413440 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367583232 unmapped: 36413440 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367583232 unmapped: 36413440 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817490 data_alloc: 234881024 data_used: 23295290
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 283 heartbeat osd_stat(store_statfs(0x4e7211000/0x0/0x4ffc00000, data 0x30e6037/0x3277000, compress 0x0/0x0/0x0, omap 0x6890c, meta 0x157076f4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.522968292s of 13.043628693s, submitted: 160
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 283 ms_handle_reset con 0x56263fa85400 session 0x56263a29f180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 283 handle_osd_map epochs [284,284], i have 284, src has [1,284]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 40198144 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 40198144 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7210000/0x0/0x4ffc00000, data 0x30e7ab6/0x327a000, compress 0x0/0x0/0x0, omap 0x68dbe, meta 0x15707242), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 40198144 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 40198144 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e720a000/0x0/0x4ffc00000, data 0x30edab6/0x3280000, compress 0x0/0x0/0x0, omap 0x68dbe, meta 0x15707242), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fabc00 session 0x5626368b96c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562636e76400 session 0x562638abe540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x56263693ba40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x5626388aa700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263f70a800 session 0x562638a6ea80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x5626395dcfc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562636e76400 session 0x5626388aa8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x56263693aa80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x562638aa4000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 40230912 heap: 403996672 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812172 data_alloc: 234881024 data_used: 23295388
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x562638760a80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263f70a800 session 0x56263a29fc00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562636e76400 session 0x562636a336c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x562636a33340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x5626395dddc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x562639c50a80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263f70a800 session 0x56263841d340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562636e76400 session 0x562635f5ca80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x562638aa4380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65fe000/0x0/0x4ffc00000, data 0x3cf8b38/0x3e8e000, compress 0x0/0x0/0x0, omap 0x690ba, meta 0x15706f46), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x5626395dd340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65fb000/0x0/0x4ffc00000, data 0x3cfbb38/0x3e91000, compress 0x0/0x0/0x0, omap 0x690ba, meta 0x15706f46), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3892114 data_alloc: 234881024 data_used: 23295388
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.734030724s of 10.003081322s, submitted: 72
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x56263693ac40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263fa85800 session 0x56263a29f6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 43188224 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x562636e9b6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb9c00 session 0x562638a91180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 43114496 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 43098112 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65d5000/0x0/0x4ffc00000, data 0x3d1fb7e/0x3eb7000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 43098112 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3908832 data_alloc: 234881024 data_used: 24891324
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65d5000/0x0/0x4ffc00000, data 0x3d1fb7e/0x3eb7000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3977056 data_alloc: 251658240 data_used: 35594258
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.488423347s of 10.504862785s, submitted: 8
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65d5000/0x0/0x4ffc00000, data 0x3d1fb7e/0x3eb7000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e65cf000/0x0/0x4ffc00000, data 0x3d25b7e/0x3ebd000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 40624128 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3978904 data_alloc: 251658240 data_used: 35614738
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 369606656 unmapped: 38592512 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 369795072 unmapped: 38404096 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6350000/0x0/0x4ffc00000, data 0x3fa4b7e/0x413c000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [1])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 369795072 unmapped: 38404096 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373407744 unmapped: 34791424 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373547008 unmapped: 34652160 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4076132 data_alloc: 251658240 data_used: 40669202
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373547008 unmapped: 34652160 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.055529594s of 10.484923363s, submitted: 135
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5a98000/0x0/0x4ffc00000, data 0x4856b7e/0x49ee000, compress 0x0/0x0/0x0, omap 0x69b3b, meta 0x157064c5), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5a9b000/0x0/0x4ffc00000, data 0x4859b7e/0x49f1000, compress 0x0/0x0/0x0, omap 0x69b7e, meta 0x15706482), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4075004 data_alloc: 251658240 data_used: 41074706
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263e7ad000 session 0x56263841d340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 374857728 unmapped: 33341440 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263857c400 session 0x5626361e0540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263f709000 session 0x562635f5c8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 34734080 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 34734080 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 34734080 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 34734080 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3871386 data_alloc: 251658240 data_used: 27356674
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6dc7000/0x0/0x4ffc00000, data 0x337bae9/0x3510000, compress 0x0/0x0/0x0, omap 0x69e4c, meta 0x157061b4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373391360 unmapped: 34807808 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373391360 unmapped: 34807808 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6f79000/0x0/0x4ffc00000, data 0x337eae9/0x3513000, compress 0x0/0x0/0x0, omap 0x69e4c, meta 0x157061b4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.176199913s of 11.587745667s, submitted: 63
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638faf000 session 0x562636138e00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fb5c00 session 0x5626361e1c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373407744 unmapped: 34791424 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373407744 unmapped: 34791424 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x56263857c400 session 0x56263841d500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7730000/0x0/0x4ffc00000, data 0x29f5a44/0x2b86000, compress 0x0/0x0/0x0, omap 0x69828, meta 0x157067d8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 34766848 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796769 data_alloc: 234881024 data_used: 25398637
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 34766848 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 34766848 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 34766848 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 34758656 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 34758656 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3803169 data_alloc: 234881024 data_used: 26738029
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7730000/0x0/0x4ffc00000, data 0x29f5a44/0x2b86000, compress 0x0/0x0/0x0, omap 0x69828, meta 0x157067d8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 34758656 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7730000/0x0/0x4ffc00000, data 0x29f5a44/0x2b86000, compress 0x0/0x0/0x0, omap 0x69828, meta 0x157067d8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 ms_handle_reset con 0x562638fa7c00 session 0x562638760540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 285 ms_handle_reset con 0x562638fb9c00 session 0x562639c516c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 285 ms_handle_reset con 0x562638fa7c00 session 0x562635c94380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373448704 unmapped: 34750464 heap: 408199168 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 285 ms_handle_reset con 0x56263857c400 session 0x56263841da40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.418519974s of 10.006520271s, submitted: 98
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 285 ms_handle_reset con 0x562638faf000 session 0x562636a33c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 381730816 unmapped: 30670848 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 285 heartbeat osd_stat(store_statfs(0x4e6cf8000/0x0/0x4ffc00000, data 0x35fe6a4/0x3792000, compress 0x0/0x0/0x0, omap 0x69b40, meta 0x157064c0), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 285 heartbeat osd_stat(store_statfs(0x4e64f8000/0x0/0x4ffc00000, data 0x3dfe642/0x3f91000, compress 0x0/0x0/0x0, omap 0x69c06, meta 0x157063fa), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 286 ms_handle_reset con 0x562638fb5c00 session 0x562636240000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 381747200 unmapped: 30654464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 286 handle_osd_map epochs [286,287], i have 287, src has [1,287]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 287 ms_handle_reset con 0x56263e7ad000 session 0x562638a8f500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 33800192 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3945739 data_alloc: 251658240 data_used: 33148749
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e64f3000/0x0/0x4ffc00000, data 0x3e01dea/0x3f97000, compress 0x0/0x0/0x0, omap 0x6a11a, meta 0x15705ee6), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 33800192 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 287 ms_handle_reset con 0x56263857c400 session 0x5626388aa000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 287 ms_handle_reset con 0x562638fa7c00 session 0x562638a8ee00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e64ed000/0x0/0x4ffc00000, data 0x3e06dea/0x3f9c000, compress 0x0/0x0/0x0, omap 0x6a11a, meta 0x15705ee6), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 33792000 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 287 ms_handle_reset con 0x562638faf000 session 0x5626387afdc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 33783808 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e64ed000/0x0/0x4ffc00000, data 0x3e06dea/0x3f9c000, compress 0x0/0x0/0x0, omap 0x6a11a, meta 0x15705ee6), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 288 ms_handle_reset con 0x562638fb5c00 session 0x56263d04cfc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 33751040 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e78f2000/0x0/0x4ffc00000, data 0x2a01994/0x2b97000, compress 0x0/0x0/0x0, omap 0x6a57c, meta 0x15705a84), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 33751040 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3845165 data_alloc: 251658240 data_used: 33145238
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378667008 unmapped: 33734656 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 289 ms_handle_reset con 0x562636e76400 session 0x562638a8efc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.065670967s of 10.130508423s, submitted: 116
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 289 ms_handle_reset con 0x562636e76400 session 0x56263693ae00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e7b75000/0x0/0x4ffc00000, data 0x277f3fc/0x2914000, compress 0x0/0x0/0x0, omap 0x6a6e8, meta 0x15705918), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3817959 data_alloc: 251658240 data_used: 30548260
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 33726464 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 378683392 unmapped: 33718272 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 290 ms_handle_reset con 0x56263857c400 session 0x56263a56b6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 43450368 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 290 ms_handle_reset con 0x562636edb800 session 0x562638a6e1c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 290 ms_handle_reset con 0x562638fb4c00 session 0x562638a90c40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 43450368 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3727389 data_alloc: 234881024 data_used: 20496543
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e850b000/0x0/0x4ffc00000, data 0x1de9f8a/0x1f7f000, compress 0x0/0x0/0x0, omap 0x6b308, meta 0x15704cf8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 291 ms_handle_reset con 0x562638fa7c00 session 0x56263a56afc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 58941440 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 58941440 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 58941440 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: mgrc ms_handle_reset ms_handle_reset con 0x562638581c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: mgrc handle_mgr_configure stats_period=5
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 58941440 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e9914000/0x0/0x4ffc00000, data 0x9dfa25/0xb76000, compress 0x0/0x0/0x0, omap 0x6ba62, meta 0x1570459e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.494709969s of 12.215576172s, submitted: 67
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541037 data_alloc: 218103808 data_used: 243871
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541037 data_alloc: 218103808 data_used: 243871
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9911000/0x0/0x4ffc00000, data 0x9e14a4/0xb79000, compress 0x0/0x0/0x0, omap 0x6bac6, meta 0x1570453a), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 59006976 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562636e76400 session 0x562636a33500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562636edb800 session 0x562638a91500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x56263857c400 session 0x562638abe8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638fb4c00 session 0x5626361e0fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638faf000 session 0x5626368b8a80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604067 data_alloc: 218103808 data_used: 243871
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f04000/0x0/0x4ffc00000, data 0x13f04a4/0x1588000, compress 0x0/0x0/0x0, omap 0x6bb4a, meta 0x157044b6), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f04000/0x0/0x4ffc00000, data 0x13f04a4/0x1588000, compress 0x0/0x0/0x0, omap 0x6bb4a, meta 0x157044b6), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562636e76400 session 0x562638a6ec40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562636edb800 session 0x562638abefc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 58974208 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x56263857c400 session 0x562638895180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.598681450s of 14.721137047s, submitted: 19
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638fb4c00 session 0x5626388ab340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 58957824 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3606954 data_alloc: 218103808 data_used: 243871
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 58957824 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638582000 session 0x5626368b8380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 58793984 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f03000/0x0/0x4ffc00000, data 0x13f04b4/0x1589000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x15704270), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669550 data_alloc: 234881024 data_used: 10778783
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f03000/0x0/0x4ffc00000, data 0x13f04b4/0x1589000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x15704270), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f03000/0x0/0x4ffc00000, data 0x13f04b4/0x1589000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x15704270), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f03000/0x0/0x4ffc00000, data 0x13f04b4/0x1589000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x15704270), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669550 data_alloc: 234881024 data_used: 10778783
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 356827136 unmapped: 55574528 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.448341370s of 13.470945358s, submitted: 8
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 50331648 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70f8000/0x0/0x4ffc00000, data 0x205b4b4/0x21f4000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 50331648 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749476 data_alloc: 234881024 data_used: 11847839
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753968 data_alloc: 234881024 data_used: 11991199
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754864 data_alloc: 234881024 data_used: 12019871
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562635f2a400 session 0x56263e63cfc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x56263857c400 session 0x562638a90e00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.421809196s of 15.024203300s, submitted: 73
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 ms_handle_reset con 0x562638fb5c00 session 0x56263841da40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e70ee000/0x0/0x4ffc00000, data 0x20654b4/0x21fe000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 50323456 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754732 data_alloc: 234881024 data_used: 12019871
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 50315264 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 293 ms_handle_reset con 0x562635f2a000 session 0x5626395dda40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362086400 unmapped: 50315264 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 370376704 unmapped: 42024960 heap: 412401664 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e654f000/0x0/0x4ffc00000, data 0x2c03050/0x2d9d000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373841920 unmapped: 42541056 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 293 ms_handle_reset con 0x5626361e2c00 session 0x562638abefc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e6292000/0x0/0x4ffc00000, data 0x2ec0050/0x305a000, compress 0x0/0x0/0x0, omap 0x6bd90, meta 0x168a4270), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 373850112 unmapped: 42532864 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3892972 data_alloc: 234881024 data_used: 23853215
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 293 handle_osd_map epochs [293,294], i have 294, src has [1,294]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e628d000/0x0/0x4ffc00000, data 0x2ec1c40/0x305d000, compress 0x0/0x0/0x0, omap 0x6c1ee, meta 0x168a3e12), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 49930240 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 49930240 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 294 ms_handle_reset con 0x562638576800 session 0x56263a1a8380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 49930240 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.211944580s of 10.252316475s, submitted: 25
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 52764672 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 52764672 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3877934 data_alloc: 234881024 data_used: 23853215
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 52740096 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e628c000/0x0/0x4ffc00000, data 0x2ec37f8/0x3060000, compress 0x0/0x0/0x0, omap 0x6c1ee, meta 0x168a3e12), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 52740096 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 52740096 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363651072 unmapped: 52731904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 363651072 unmapped: 52731904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880622 data_alloc: 234881024 data_used: 23853215
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e790b000/0x0/0x4ffc00000, data 0x1841277/0x19df000, compress 0x0/0x0/0x0, omap 0x6c2d6, meta 0x168a3d2a), peers [1,2] op hist [0,0,0,1])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357261312 unmapped: 59121664 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562635f2a000 session 0x5626368b8a80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e790e000/0x0/0x4ffc00000, data 0x1841267/0x19de000, compress 0x0/0x0/0x0, omap 0x6c51c, meta 0x168a3ae4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3651281 data_alloc: 218103808 data_used: 243871
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e790e000/0x0/0x4ffc00000, data 0x1841267/0x19de000, compress 0x0/0x0/0x0, omap 0x6c51c, meta 0x168a3ae4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3651281 data_alloc: 218103808 data_used: 243871
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x5626361e2c00 session 0x56263e63cc40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x56263857c400 session 0x56263841c700
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562638fb5c00 session 0x562637cfd6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 357277696 unmapped: 59105280 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.305338860s of 18.014606476s, submitted: 58
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x56263f70a400 session 0x562636240000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562635f2a000 session 0x56263d04da40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x5626361e2c00 session 0x562638761880
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x56263857c400 session 0x562638a8ec40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562638fb5c00 session 0x5626362776c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 365330432 unmapped: 51052544 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562638fb0400 session 0x562638a90c40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367149056 unmapped: 49233920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562635f2a000 session 0x562638a91500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e70b5000/0x0/0x4ffc00000, data 0x209a267/0x2237000, compress 0x0/0x0/0x0, omap 0x6c97e, meta 0x168a3682), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x5626361e2c00 session 0x56263a1a9a40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 367222784 unmapped: 49160192 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x56263857c400 session 0x5626361e0000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 ms_handle_reset con 0x562638fb0400 session 0x562636240a80
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 55984128 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739790 data_alloc: 234881024 data_used: 14649386
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7090000/0x0/0x4ffc00000, data 0x20be277/0x225c000, compress 0x0/0x0/0x0, omap 0x6cc54, meta 0x168a33ac), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 297 ms_handle_reset con 0x562636edd000 session 0x562636139500
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e7ee4000/0x0/0x4ffc00000, data 0x1266e67/0x1406000, compress 0x0/0x0/0x0, omap 0x72214, meta 0x1689ddec), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e7ee4000/0x0/0x4ffc00000, data 0x1266e67/0x1406000, compress 0x0/0x0/0x0, omap 0x72214, meta 0x1689ddec), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652990 data_alloc: 218103808 data_used: 3370026
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e7ee4000/0x0/0x4ffc00000, data 0x1266e67/0x1406000, compress 0x0/0x0/0x0, omap 0x72214, meta 0x1689ddec), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.193080902s of 13.809218407s, submitted: 48
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3655620 data_alloc: 218103808 data_used: 3370026
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 63987712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7ee3000/0x0/0x4ffc00000, data 0x12688e6/0x1409000, compress 0x0/0x0/0x0, omap 0x72746, meta 0x1689d8ba), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674000 data_alloc: 218103808 data_used: 5127210
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7ee3000/0x0/0x4ffc00000, data 0x12688e6/0x1409000, compress 0x0/0x0/0x0, omap 0x72746, meta 0x1689d8ba), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675408 data_alloc: 218103808 data_used: 5172266
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.305813789s of 10.456849098s, submitted: 14
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb5c00 session 0x5626361e1c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb2400 session 0x5626387af180
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 63963136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562635f2a000 session 0x5626361e0380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 63946752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 63938560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 63938560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 63938560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 63938560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 63930368 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 63930368 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 63930368 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352460800 unmapped: 63922176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 63913984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 63913984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3587237 data_alloc: 218103808 data_used: 239658
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 63913984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 63913984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x72cf2, meta 0x1689d30e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352477184 unmapped: 63905792 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.542278290s of 43.837818146s, submitted: 40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352477184 unmapped: 63905792 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x5626361e2c00 session 0x56263a56afc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x56263857c400 session 0x562638a90fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cff000/0x0/0x4ffc00000, data 0x144c8d6/0x15ec000, compress 0x0/0x0/0x0, omap 0x72f38, meta 0x1689d0c8), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562635f2a000 session 0x562639c516c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654654 data_alloc: 218103808 data_used: 239658
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x5626361e2c00 session 0x562638894540
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb2400 session 0x562638867880
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb5c00 session 0x562639c51c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cff000/0x0/0x4ffc00000, data 0x144d8d6/0x15ed000, compress 0x0/0x0/0x0, omap 0x72f6a, meta 0x1689d096), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654654 data_alloc: 218103808 data_used: 239658
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 63569920 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cff000/0x0/0x4ffc00000, data 0x144d8d6/0x15ed000, compress 0x0/0x0/0x0, omap 0x72f6a, meta 0x1689d096), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cff000/0x0/0x4ffc00000, data 0x144d8d6/0x15ed000, compress 0x0/0x0/0x0, omap 0x72f6a, meta 0x1689d096), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654654 data_alloc: 218103808 data_used: 239658
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb0400 session 0x562638a90380
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 63561728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562635f2a000 session 0x5626388abdc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.725785255s of 13.664438248s, submitted: 25
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x5626361e2c00 session 0x56263a29f340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 63528960 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 63520768 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cfe000/0x0/0x4ffc00000, data 0x144d8e6/0x15ee000, compress 0x0/0x0/0x0, omap 0x72fee, meta 0x1689d012), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 62726144 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702084 data_alloc: 218103808 data_used: 7995434
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cfe000/0x0/0x4ffc00000, data 0x144d8e6/0x15ee000, compress 0x0/0x0/0x0, omap 0x72fee, meta 0x1689d012), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e7cfe000/0x0/0x4ffc00000, data 0x144d8e6/0x15ee000, compress 0x0/0x0/0x0, omap 0x72fee, meta 0x1689d012), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 60653568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 60653568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb0400 session 0x56263841cfc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb2400 session 0x562636277a40
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355729408 unmapped: 60653568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 ms_handle_reset con 0x562638fb5c00 session 0x562636e9b6c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 62988288 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 62971904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 62971904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 62971904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 62971904 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 62963712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 62963712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 62963712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 62963712 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595399 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8761000/0x0/0x4ffc00000, data 0x9eb8d6/0xb8b000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 62955520 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 62939136 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 298 handle_osd_map epochs [298,299], i have 299, src has [1,299]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 64.200675964s of 64.930892944s, submitted: 27
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 62922752 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875c000/0x0/0x4ffc00000, data 0x9ed4c6/0xb8e000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 299 ms_handle_reset con 0x562635f2a000 session 0x562638a6e8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3598893 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875c000/0x0/0x4ffc00000, data 0x9ed4c6/0xb8e000, compress 0x0/0x0/0x0, omap 0x732b8, meta 0x1689cd48), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 299 handle_osd_map epochs [299,300], i have 300, src has [1,300]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 62914560 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602131 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 62898176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 62898176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 62898176 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 62889984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 62889984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 62889984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 62889984 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 62881792 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 62881792 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 62873600 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 62865408 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 62849024 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 62840832 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 62840832 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 62840832 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 62840832 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 62832640 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 62832640 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 62832640 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 62832640 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 62824448 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 62816256 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 62816256 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 62816256 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 62808064 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 62791680 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 62775296 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 62775296 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 62775296 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.1 total, 600.0 interval
Cumulative writes: 44K writes, 180K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
Cumulative WAL: 44K writes, 15K syncs, 2.82 writes per sync, written: 0.18 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4347 writes, 18K keys, 4347 commit groups, 1.0 writes per commit group, ingest: 21.46 MB, 0.04 MB/s
Interval WAL: 4347 writes, 1605 syncs, 2.71 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 62775296 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 62767104 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 62767104 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 62767104 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 62767104 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 62758912 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62750720 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 62734336 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 62726144 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 62726144 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 62717952 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 62701568 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8759000/0x0/0x4ffc00000, data 0x9eef45/0xb91000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601667 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 62693376 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 124.636978149s of 125.629234314s, submitted: 25
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 301 ms_handle_reset con 0x5626361e2c00 session 0x562638a90fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 62668800 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 62660608 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e8756000/0x0/0x4ffc00000, data 0x9f0b35/0xb94000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604441 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 62660608 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 62660608 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 302 ms_handle_reset con 0x562638fb0400 session 0x562639c51c00
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e8f53000/0x0/0x4ffc00000, data 0x1f2725/0x397000, compress 0x0/0x0/0x0, omap 0x7331c, meta 0x1689cce4), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 62636032 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 62636032 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353771520 unmapped: 62611456 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570185 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353771520 unmapped: 62611456 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 303 heartbeat osd_stat(store_statfs(0x4e8f52000/0x0/0x4ffc00000, data 0x1f41c0/0x39a000, compress 0x0/0x0/0x0, omap 0x73380, meta 0x1689cc80), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 62578688 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 62578688 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 303 heartbeat osd_stat(store_statfs(0x4e8f52000/0x0/0x4ffc00000, data 0x1f41c0/0x39a000, compress 0x0/0x0/0x0, omap 0x73380, meta 0x1689cc80), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 62578688 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.508638382s of 11.227730751s, submitted: 54
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 62562304 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572495 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 62537728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 62537728 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353853440 unmapped: 62529536 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e8f4f000/0x0/0x4ffc00000, data 0x1f5c3f/0x39d000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62521344 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353886208 unmapped: 62496768 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 62488576 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 62480384 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353910784 unmapped: 62472192 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353910784 unmapped: 62472192 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353910784 unmapped: 62472192 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 62464000 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 62447616 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62439424 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 62423040 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread fragmentation_score=0.004528 took=0.000084s
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353968128 unmapped: 62414848 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353968128 unmapped: 62414848 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 62406656 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 62390272 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354000896 unmapped: 62382080 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354000896 unmapped: 62382080 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 62373888 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 62365696 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62349312 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62349312 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62349312 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62349312 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 62341120 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 62341120 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 62341120 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 62341120 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 62332928 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 62324736 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 62324736 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 62324736 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 62324736 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 62308352 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 62308352 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 62308352 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 62308352 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 62300160 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575269 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 110.093017578s of 112.068550110s, submitted: 102
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 62283776 heap: 416382976 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 306 ms_handle_reset con 0x562638fb2400 session 0x562638867340
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f4a000/0x0/0x4ffc00000, data 0x1f77db/0x3a0000, compress 0x0/0x0/0x0, omap 0x738b2, meta 0x1689c74e), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 362487808 unmapped: 62291968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8748000/0x0/0x4ffc00000, data 0x9f939a/0xba4000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [0,1])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 307 ms_handle_reset con 0x56263fa84800 session 0x5626387ae000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 70656000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 70656000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 70656000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 70656000 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8743000/0x0/0x4ffc00000, data 0x9faf36/0xba7000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624230 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 307 handle_osd_map epochs [307,308], i have 308, src has [1,308]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.350141525s of 30.872489929s, submitted: 13
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 308 ms_handle_reset con 0x562635f2a000 session 0x562638760fc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8742000/0x0/0x4ffc00000, data 0x9fcb26/0xbaa000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3626044 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 70565888 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8742000/0x0/0x4ffc00000, data 0x9fcb26/0xbaa000, compress 0x0/0x0/0x0, omap 0x73d80, meta 0x1689c280), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 70516736 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 70492160 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 70459392 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 70434816 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 70410240 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 70336512 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 70336512 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 70320128 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 70262784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 70238208 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 70205440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 70205440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 70180864 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 70164480 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 70746112 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 70746112 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 70746112 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 70729728 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354058240 unmapped: 70721536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354066432 unmapped: 70713344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354066432 unmapped: 70713344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354066432 unmapped: 70713344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354066432 unmapped: 70713344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 70696960 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 70688768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 70688768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 70688768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 70688768 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 70680576 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 70664192 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 70647808 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 70639616 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 70631424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354156544 unmapped: 70623232 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 70615040 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 70606848 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 70606848 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 70598656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 70598656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 70598656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 70598656 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354189312 unmapped: 70590464 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 70582272 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 70574080 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 70557696 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 70557696 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 70557696 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 70549504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 70549504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 70549504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 70549504 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 70541312 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 70533120 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 70508544 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 70500352 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 70483968 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 70475776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 70475776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 70475776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 70475776 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 70467584 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 70451200 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 70443008 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 70426624 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 70418432 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 70402048 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 70393856 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 70369280 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 70361088 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 70352896 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 70344704 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 70328320 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 70311936 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 70303744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 70303744 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 70287360 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 70270976 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 70262784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 45K writes, 15K syncs, 2.81 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 357 writes, 828 keys, 357 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s#012Interval WAL: 357 writes, 165 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 70262784 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 70254592 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 70246400 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: mgrc ms_handle_reset ms_handle_reset con 0x56263e7ad000
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: mgrc handle_mgr_configure stats_period=5
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 70230016 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 ms_handle_reset con 0x562638582000 session 0x56263a29e1c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 70221824 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 70213632 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 70205440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 70205440 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 70197248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 70197248 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629538 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e873d000/0x0/0x4ffc00000, data 0x9fe5a5/0xbad000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 70189056 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 434.666351318s of 434.940032959s, submitted: 23
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 310 ms_handle_reset con 0x562638582000 session 0x56263bf8fdc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3632312 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e873a000/0x0/0x4ffc00000, data 0xa00195/0xbb0000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 70172672 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 70156288 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 311 ms_handle_reset con 0x562638fb0400 session 0x56263f0ec8c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 70148096 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 70139904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 311 heartbeat osd_stat(store_statfs(0x4e8f37000/0x0/0x4ffc00000, data 0x201d62/0x3b2000, compress 0x0/0x0/0x0, omap 0x73de4, meta 0x1689c21c), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594414 data_alloc: 218103808 data_used: 215308
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 70139904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 70139904 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354648064 unmapped: 70131712 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 311 handle_osd_map epochs [311,312], i have 312, src has [1,312]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 312 ms_handle_reset con 0x562638fb4c00 session 0x5626368b88c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354656256 unmapped: 70123520 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 312 handle_osd_map epochs [312,313], i have 312, src has [1,313]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 ms_handle_reset con 0x562638fb2400 session 0x56263d3468c0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354672640 unmapped: 70107136 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643536 data_alloc: 218103808 data_used: 219369
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354672640 unmapped: 70107136 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e872f000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e872f000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354672640 unmapped: 70107136 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e872f000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e872f000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643536 data_alloc: 218103808 data_used: 219369
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 70090752 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.107755661s of 18.608486176s, submitted: 55
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354697216 unmapped: 70082560 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354697216 unmapped: 70082560 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219369
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354705408 unmapped: 70074368 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354705408 unmapped: 70074368 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354762752 unmapped: 70017024 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 70008832 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 69992448 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 69984256 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 69976064 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 69967872 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354820096 unmapped: 69959680 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642352 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 69951488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 69951488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 69951488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 69951488 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.278427124s of 46.877170563s, submitted: 124
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354844672 unmapped: 69935104 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 314 ms_handle_reset con 0x562638fb4c00 session 0x56263c68cfc0
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0xa0538d/0xbb9000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604245 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 69902336 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 69902336 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 69902336 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 69894144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e8f31000/0x0/0x4ffc00000, data 0x206f6d/0x3bb000, compress 0x0/0x0/0x0, omap 0x74316, meta 0x1689bcea), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 69894144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604245 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 69894144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 69894144 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 314 handle_osd_map epochs [314,315], i have 315, src has [1,315]
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 69877760 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 69877760 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 69877760 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 69877760 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 69861376 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 69853184 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 69844992 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 69844992 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 69828608 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 69828608 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 69828608 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 69820416 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 69812224 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 69787648 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 69787648 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 69763072 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 69754880 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 69738496 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 69738496 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 69730304 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 69722112 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 69713920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 69713920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 69713920 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 69697536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 69697536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 69697536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 69697536 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 69689344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 69689344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 69689344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 69689344 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8f2c000/0x0/0x4ffc00000, data 0x2089ec/0x3be000, compress 0x0/0x0/0x0, omap 0x7437a, meta 0x1689bc86), peers [1,2] op hist [])
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 69681152 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 355172352 unmapped: 69607424 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: do_command 'config diff' '{prefix=config diff}'
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: do_command 'config show' '{prefix=config show}'
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 69918720 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607739 data_alloc: 218103808 data_used: 219521
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 69795840 heap: 424779776 old mem: 2845415832 new mem: 2845415832
Jan 27 09:47:26 np0005597378 ceph-osd[85897]: do_command 'log dump' '{prefix=log dump}'
Jan 27 09:47:27 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23058 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 27 09:47:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3788561174' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 27 09:47:27 np0005597378 nova_compute[238941]: 2026-01-27 14:47:27.495 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:27 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23061 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 27 09:47:27 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816851273' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:28 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:47:28 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:28 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23064 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:28 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 27 09:47:28 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3737120404' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 27 09:47:28 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23068 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:29 np0005597378 nova_compute[238941]: 2026-01-27 14:47:29.005 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:29 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23072 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:29 np0005597378 ceph-mgr[75385]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:47:29 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: 2026-01-27T14:47:29.367+0000 7f5469e41640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:47:29 np0005597378 nova_compute[238941]: 2026-01-27 14:47:29.380 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:47:29 np0005597378 nova_compute[238941]: 2026-01-27 14:47:29.381 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:47:29 np0005597378 nova_compute[238941]: 2026-01-27 14:47:29.381 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:47:29 np0005597378 nova_compute[238941]: 2026-01-27 14:47:29.381 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:47:29 np0005597378 nova_compute[238941]: 2026-01-27 14:47:29.381 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:47:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 27 09:47:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/60045833' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 27 09:47:29 np0005597378 systemd[1]: Starting Hostname Service...
Jan 27 09:47:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:29 np0005597378 systemd[1]: Started Hostname Service.
Jan 27 09:47:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 27 09:47:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/944304695' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 27 09:47:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:47:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1111005101' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:47:30 np0005597378 nova_compute[238941]: 2026-01-27 14:47:30.027 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:47:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 27 09:47:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315050942' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 27 09:47:30 np0005597378 nova_compute[238941]: 2026-01-27 14:47:30.221 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:47:30 np0005597378 nova_compute[238941]: 2026-01-27 14:47:30.222 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3290MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:47:30 np0005597378 nova_compute[238941]: 2026-01-27 14:47:30.222 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:47:30 np0005597378 nova_compute[238941]: 2026-01-27 14:47:30.223 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:47:30 np0005597378 nova_compute[238941]: 2026-01-27 14:47:30.292 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:47:30 np0005597378 nova_compute[238941]: 2026-01-27 14:47:30.292 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:47:30 np0005597378 nova_compute[238941]: 2026-01-27 14:47:30.309 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:47:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 27 09:47:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331249373' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 27 09:47:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:47:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 27 09:47:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2971010448' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 27 09:47:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:47:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313145398' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:47:30 np0005597378 nova_compute[238941]: 2026-01-27 14:47:30.963 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:47:30 np0005597378 nova_compute[238941]: 2026-01-27 14:47:30.971 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:47:31 np0005597378 nova_compute[238941]: 2026-01-27 14:47:30.999 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:47:31 np0005597378 nova_compute[238941]: 2026-01-27 14:47:31.001 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:47:31 np0005597378 nova_compute[238941]: 2026-01-27 14:47:31.001 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:47:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 27 09:47:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3404152035' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 27 09:47:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 27 09:47:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3976210741' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 27 09:47:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 27 09:47:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355082506' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 27 09:47:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 27 09:47:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707408235' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 27 09:47:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 27 09:47:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4097357149' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 27 09:47:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 27 09:47:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272100240' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 27 09:47:32 np0005597378 nova_compute[238941]: 2026-01-27 14:47:32.496 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 27 09:47:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7334381' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 27 09:47:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 27 09:47:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744761522' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 27 09:47:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 27 09:47:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671057901' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 27 09:47:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 27 09:47:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3002644301' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 27 09:47:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 27 09:47:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/808190826' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 27 09:47:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:34 np0005597378 nova_compute[238941]: 2026-01-27 14:47:34.009 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:34 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23110 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:34 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23112 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:34 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23114 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:34 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23116 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:35 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23118 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 09:47:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 09:47:35 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23122 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 09:47:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 09:47:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:47:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 27 09:47:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926782370' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 27 09:47:36 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23126 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:36 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23130 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 27 09:47:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4093314445' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 27 09:47:37 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23132 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 27 09:47:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1349487275' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 27 09:47:37 np0005597378 nova_compute[238941]: 2026-01-27 14:47:37.499 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 27 09:47:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807352349' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 27 09:47:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 27 09:47:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 27 09:47:37 np0005597378 nova_compute[238941]: 2026-01-27 14:47:37.905 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:47:37 np0005597378 nova_compute[238941]: 2026-01-27 14:47:37.905 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:47:37 np0005597378 nova_compute[238941]: 2026-01-27 14:47:37.906 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:47:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 27 09:47:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 27 09:47:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 27 09:47:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4179080454' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 27 09:47:38 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23148 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:39 np0005597378 nova_compute[238941]: 2026-01-27 14:47:39.012 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 27 09:47:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352156603' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 27 09:47:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 27 09:47:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/857514706' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 27 09:47:40 np0005597378 nova_compute[238941]: 2026-01-27 14:47:40.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:47:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:47:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 27 09:47:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1770664638' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 27 09:47:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 27 09:47:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3729897472' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 27 09:47:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:41 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23158 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:42 np0005597378 nova_compute[238941]: 2026-01-27 14:47:42.500 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 27 09:47:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/935337891' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 27 09:47:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 27 09:47:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875681752' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 27 09:47:43 np0005597378 nova_compute[238941]: 2026-01-27 14:47:43.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:47:43 np0005597378 nova_compute[238941]: 2026-01-27 14:47:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:47:43 np0005597378 nova_compute[238941]: 2026-01-27 14:47:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:47:43 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23164 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:43 np0005597378 nova_compute[238941]: 2026-01-27 14:47:43.911 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:47:44 np0005597378 nova_compute[238941]: 2026-01-27 14:47:44.013 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 27 09:47:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544651841' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 27 09:47:44 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23168 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:44 np0005597378 ovs-appctl[398526]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 09:47:44 np0005597378 ovs-appctl[398530]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 09:47:44 np0005597378 ovs-appctl[398532]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 09:47:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:47:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:47:46.360 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:47:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:47:46.360 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:47:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:47:46.360 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:47:46 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23170 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:46 np0005597378 nova_compute[238941]: 2026-01-27 14:47:46.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:47:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 27 09:47:46 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1142045414' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 27 09:47:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Jan 27 09:47:47 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3679927810' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Jan 27 09:47:47 np0005597378 nova_compute[238941]: 2026-01-27 14:47:47.501 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:47 np0005597378 podman[399098]: 2026-01-27 14:47:47.710824594 +0000 UTC m=+0.049711615 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 09:47:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:47:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:47:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:47:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:47:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:47:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:47:47 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23176 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23178 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:47:48 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:47:48 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Jan 27 09:47:48 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/772876600' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Jan 27 09:47:49 np0005597378 nova_compute[238941]: 2026-01-27 14:47:49.015 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:49 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0)
Jan 27 09:47:49 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3310534896' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Jan 27 09:47:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:49 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23184 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:50 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23186 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:47:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 27 09:47:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1692393944' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 27 09:47:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:47:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Jan 27 09:47:51 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174933280' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Jan 27 09:47:51 np0005597378 podman[399515]: 2026-01-27 14:47:51.776268231 +0000 UTC m=+0.096638653 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:47:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Jan 27 09:47:52 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/850345507' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Jan 27 09:47:52 np0005597378 nova_compute[238941]: 2026-01-27 14:47:52.503 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:52 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23194 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Jan 27 09:47:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1027994241' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 27 09:47:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:53 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Jan 27 09:47:53 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980833364' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Jan 27 09:47:54 np0005597378 nova_compute[238941]: 2026-01-27 14:47:54.017 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Jan 27 09:47:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/786882611' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Jan 27 09:47:54 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Jan 27 09:47:54 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1235868757' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Jan 27 09:47:55 np0005597378 nova_compute[238941]: 2026-01-27 14:47:55.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:47:55 np0005597378 nova_compute[238941]: 2026-01-27 14:47:55.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:47:55 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23204 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:47:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Jan 27 09:47:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4036259770' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Jan 27 09:47:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Jan 27 09:47:56 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/30471050' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Jan 27 09:47:57 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23210 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:57 np0005597378 nova_compute[238941]: 2026-01-27 14:47:57.505 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Jan 27 09:47:57 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1697893355' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Jan 27 09:47:57 np0005597378 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 27 09:47:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:47:58 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23214 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:58 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23216 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:47:58 np0005597378 systemd[1]: Starting Time & Date Service...
Jan 27 09:47:58 np0005597378 systemd[1]: Started Time & Date Service.
Jan 27 09:47:59 np0005597378 nova_compute[238941]: 2026-01-27 14:47:59.018 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:47:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Jan 27 09:47:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790057176' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Jan 27 09:47:59 np0005597378 nova_compute[238941]: 2026-01-27 14:47:59.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:47:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Jan 27 09:47:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000509352' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Jan 27 09:47:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:47:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2794510053' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:47:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:47:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2794510053' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:47:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23226 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:48:00 np0005597378 nova_compute[238941]: 2026-01-27 14:48:00.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23228 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:00 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:48:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Jan 27 09:48:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398748693' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 27 09:48:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:48:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Jan 27 09:48:01 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1428007650' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Jan 27 09:48:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:01 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23234 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:48:02 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23236 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:48:02 np0005597378 nova_compute[238941]: 2026-01-27 14:48:02.508 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 27 09:48:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1888897363' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 27 09:48:03 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Jan 27 09:48:03 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/302793628' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Jan 27 09:48:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:04 np0005597378 nova_compute[238941]: 2026-01-27 14:48:04.019 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:48:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:48:04 np0005597378 podman[400708]: 2026-01-27 14:48:04.76959518 +0000 UTC m=+0.061663625 container create 8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 09:48:04 np0005597378 podman[400708]: 2026-01-27 14:48:04.7289714 +0000 UTC m=+0.021039865 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:48:04 np0005597378 systemd[1]: Started libpod-conmon-8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a.scope.
Jan 27 09:48:04 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:48:04 np0005597378 podman[400708]: 2026-01-27 14:48:04.893524714 +0000 UTC m=+0.185593189 container init 8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 27 09:48:04 np0005597378 podman[400708]: 2026-01-27 14:48:04.902531166 +0000 UTC m=+0.194599611 container start 8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mayer, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:48:04 np0005597378 podman[400708]: 2026-01-27 14:48:04.913053478 +0000 UTC m=+0.205121923 container attach 8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mayer, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:48:04 np0005597378 systemd[1]: libpod-8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a.scope: Deactivated successfully.
Jan 27 09:48:04 np0005597378 practical_mayer[400725]: 167 167
Jan 27 09:48:04 np0005597378 conmon[400725]: conmon 8d4fae21481f37015d59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a.scope/container/memory.events
Jan 27 09:48:04 np0005597378 podman[400708]: 2026-01-27 14:48:04.919571392 +0000 UTC m=+0.211639847 container died 8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:48:04 np0005597378 systemd[1]: var-lib-containers-storage-overlay-90fc6e44a0ba621e5605ae6bfa1483cbf03d5cc7181ace18f3b30bd6f8099d95-merged.mount: Deactivated successfully.
Jan 27 09:48:05 np0005597378 podman[400708]: 2026-01-27 14:48:05.002490437 +0000 UTC m=+0.294558892 container remove 8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:48:05 np0005597378 systemd[1]: libpod-conmon-8d4fae21481f37015d59a6b4905cddca7330944afa00e8a9c9ce9e489d39ec8a.scope: Deactivated successfully.
Jan 27 09:48:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 27 09:48:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:48:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:48:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:48:05 np0005597378 podman[400750]: 2026-01-27 14:48:05.138753261 +0000 UTC m=+0.023611424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:48:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:06 np0005597378 podman[400750]: 2026-01-27 14:48:06.043832156 +0000 UTC m=+0.928690289 container create b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mahavira, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:48:06 np0005597378 systemd[1]: Started libpod-conmon-b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81.scope.
Jan 27 09:48:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:48:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:48:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac46df5815b30e4afc1091276869a0ed910ff2cefb9fa4965e8bb10cf24df9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:48:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac46df5815b30e4afc1091276869a0ed910ff2cefb9fa4965e8bb10cf24df9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:48:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac46df5815b30e4afc1091276869a0ed910ff2cefb9fa4965e8bb10cf24df9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:48:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac46df5815b30e4afc1091276869a0ed910ff2cefb9fa4965e8bb10cf24df9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:48:06 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac46df5815b30e4afc1091276869a0ed910ff2cefb9fa4965e8bb10cf24df9c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:48:06 np0005597378 podman[400750]: 2026-01-27 14:48:06.20430064 +0000 UTC m=+1.089158773 container init b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 09:48:06 np0005597378 podman[400750]: 2026-01-27 14:48:06.212693685 +0000 UTC m=+1.097551808 container start b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 09:48:06 np0005597378 podman[400750]: 2026-01-27 14:48:06.247044086 +0000 UTC m=+1.131902209 container attach b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 09:48:06 np0005597378 peaceful_mahavira[400766]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:48:06 np0005597378 peaceful_mahavira[400766]: --> All data devices are unavailable
Jan 27 09:48:06 np0005597378 systemd[1]: libpod-b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81.scope: Deactivated successfully.
Jan 27 09:48:06 np0005597378 podman[400750]: 2026-01-27 14:48:06.790137262 +0000 UTC m=+1.674995405 container died b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 27 09:48:06 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1ac46df5815b30e4afc1091276869a0ed910ff2cefb9fa4965e8bb10cf24df9c-merged.mount: Deactivated successfully.
Jan 27 09:48:06 np0005597378 podman[400750]: 2026-01-27 14:48:06.885215082 +0000 UTC m=+1.770073205 container remove b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 09:48:06 np0005597378 systemd[1]: libpod-conmon-b1299ac880c18cb0369425050162e7b36091d384c4106929fa996f903fcf9e81.scope: Deactivated successfully.
Jan 27 09:48:07 np0005597378 podman[400862]: 2026-01-27 14:48:07.425271437 +0000 UTC m=+0.090282813 container create d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elion, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:48:07 np0005597378 podman[400862]: 2026-01-27 14:48:07.358448235 +0000 UTC m=+0.023459641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:48:07 np0005597378 systemd[1]: Started libpod-conmon-d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6.scope.
Jan 27 09:48:07 np0005597378 nova_compute[238941]: 2026-01-27 14:48:07.513 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:48:07 np0005597378 podman[400862]: 2026-01-27 14:48:07.576727409 +0000 UTC m=+0.241738805 container init d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:48:07 np0005597378 podman[400862]: 2026-01-27 14:48:07.582942516 +0000 UTC m=+0.247953892 container start d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 09:48:07 np0005597378 nervous_elion[400879]: 167 167
Jan 27 09:48:07 np0005597378 systemd[1]: libpod-d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6.scope: Deactivated successfully.
Jan 27 09:48:07 np0005597378 podman[400862]: 2026-01-27 14:48:07.649874201 +0000 UTC m=+0.314885597 container attach d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 09:48:07 np0005597378 podman[400862]: 2026-01-27 14:48:07.650833917 +0000 UTC m=+0.315845293 container died d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elion, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:48:07 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5af129cc94c6b4d1713083d610b328650c7562a28f3d1cb8a76f4bbf733ccfe5-merged.mount: Deactivated successfully.
Jan 27 09:48:07 np0005597378 podman[400862]: 2026-01-27 14:48:07.749111933 +0000 UTC m=+0.414123309 container remove d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elion, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:48:07 np0005597378 systemd[1]: libpod-conmon-d6c51eb01f5de25593182e096e89b072cc01d50b41129432b33d63703d4fb5c6.scope: Deactivated successfully.
Jan 27 09:48:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:07 np0005597378 podman[400904]: 2026-01-27 14:48:07.922627226 +0000 UTC m=+0.058234372 container create 58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cray, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:48:07 np0005597378 systemd[1]: Started libpod-conmon-58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c.scope.
Jan 27 09:48:07 np0005597378 podman[400904]: 2026-01-27 14:48:07.887951866 +0000 UTC m=+0.023559032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:48:07 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:48:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592130bc166baf6f09a982d9047c8726da9e2d8af63def75235c78be1c076609/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:48:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592130bc166baf6f09a982d9047c8726da9e2d8af63def75235c78be1c076609/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:48:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592130bc166baf6f09a982d9047c8726da9e2d8af63def75235c78be1c076609/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:48:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592130bc166baf6f09a982d9047c8726da9e2d8af63def75235c78be1c076609/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:48:08 np0005597378 podman[400904]: 2026-01-27 14:48:08.021706624 +0000 UTC m=+0.157313790 container init 58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:48:08 np0005597378 podman[400904]: 2026-01-27 14:48:08.028967628 +0000 UTC m=+0.164574774 container start 58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cray, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:48:08 np0005597378 podman[400904]: 2026-01-27 14:48:08.036160371 +0000 UTC m=+0.171767517 container attach 58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cray, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:48:08 np0005597378 romantic_cray[400920]: {
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:    "0": [
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:        {
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "devices": [
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "/dev/loop3"
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            ],
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_name": "ceph_lv0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_size": "21470642176",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "name": "ceph_lv0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "tags": {
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.cluster_name": "ceph",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.crush_device_class": "",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.encrypted": "0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.objectstore": "bluestore",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.osd_id": "0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.type": "block",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.vdo": "0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.with_tpm": "0"
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            },
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "type": "block",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "vg_name": "ceph_vg0"
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:        }
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:    ],
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:    "1": [
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:        {
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "devices": [
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "/dev/loop4"
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            ],
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_name": "ceph_lv1",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_size": "21470642176",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "name": "ceph_lv1",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "tags": {
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.cluster_name": "ceph",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.crush_device_class": "",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.encrypted": "0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.objectstore": "bluestore",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.osd_id": "1",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.type": "block",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.vdo": "0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.with_tpm": "0"
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            },
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "type": "block",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "vg_name": "ceph_vg1"
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:        }
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:    ],
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:    "2": [
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:        {
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "devices": [
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "/dev/loop5"
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            ],
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_name": "ceph_lv2",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_size": "21470642176",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "name": "ceph_lv2",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "tags": {
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.cluster_name": "ceph",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.crush_device_class": "",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.encrypted": "0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.objectstore": "bluestore",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.osd_id": "2",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.type": "block",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.vdo": "0",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:                "ceph.with_tpm": "0"
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            },
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "type": "block",
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:            "vg_name": "ceph_vg2"
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:        }
Jan 27 09:48:08 np0005597378 romantic_cray[400920]:    ]
Jan 27 09:48:08 np0005597378 romantic_cray[400920]: }
Jan 27 09:48:08 np0005597378 systemd[1]: libpod-58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c.scope: Deactivated successfully.
Jan 27 09:48:08 np0005597378 podman[400904]: 2026-01-27 14:48:08.338292815 +0000 UTC m=+0.473899961 container died 58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cray, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 09:48:08 np0005597378 systemd[1]: var-lib-containers-storage-overlay-592130bc166baf6f09a982d9047c8726da9e2d8af63def75235c78be1c076609-merged.mount: Deactivated successfully.
Jan 27 09:48:08 np0005597378 podman[400904]: 2026-01-27 14:48:08.854736846 +0000 UTC m=+0.990344002 container remove 58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cray, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 27 09:48:08 np0005597378 systemd[1]: libpod-conmon-58f792596b3016d689d0aac28dcbc31c92cad585b85dc7dfa758729d553ac09c.scope: Deactivated successfully.
Jan 27 09:48:09 np0005597378 nova_compute[238941]: 2026-01-27 14:48:09.020 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:09 np0005597378 podman[401005]: 2026-01-27 14:48:09.281985485 +0000 UTC m=+0.020382848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:48:09 np0005597378 podman[401005]: 2026-01-27 14:48:09.400442313 +0000 UTC m=+0.138839646 container create 30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:48:09 np0005597378 systemd[1]: Started libpod-conmon-30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055.scope.
Jan 27 09:48:09 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:48:09 np0005597378 podman[401005]: 2026-01-27 14:48:09.500424164 +0000 UTC m=+0.238821497 container init 30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 27 09:48:09 np0005597378 podman[401005]: 2026-01-27 14:48:09.507015831 +0000 UTC m=+0.245413164 container start 30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hodgkin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 09:48:09 np0005597378 stoic_hodgkin[401022]: 167 167
Jan 27 09:48:09 np0005597378 systemd[1]: libpod-30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055.scope: Deactivated successfully.
Jan 27 09:48:09 np0005597378 podman[401005]: 2026-01-27 14:48:09.523485582 +0000 UTC m=+0.261882935 container attach 30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:48:09 np0005597378 podman[401005]: 2026-01-27 14:48:09.524176351 +0000 UTC m=+0.262573684 container died 30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 27 09:48:09 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9bb252260aa1f5473d2638343ae8075abc869c25204be79f52453a2dc01247b2-merged.mount: Deactivated successfully.
Jan 27 09:48:09 np0005597378 podman[401005]: 2026-01-27 14:48:09.629232529 +0000 UTC m=+0.367629862 container remove 30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hodgkin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:48:09 np0005597378 systemd[1]: libpod-conmon-30923147e62d231504f21e2fe7059edf567e0499520c6832ec4e12aa05c54055.scope: Deactivated successfully.
Jan 27 09:48:09 np0005597378 podman[401047]: 2026-01-27 14:48:09.822165983 +0000 UTC m=+0.049055726 container create 415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_euclid, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:48:09 np0005597378 systemd[1]: Started libpod-conmon-415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253.scope.
Jan 27 09:48:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:09 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:48:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b51a990696f06eb98e3a6b7080044cb77281f421bec9e0bc0f6c24d705ab03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:48:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b51a990696f06eb98e3a6b7080044cb77281f421bec9e0bc0f6c24d705ab03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:48:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b51a990696f06eb98e3a6b7080044cb77281f421bec9e0bc0f6c24d705ab03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:48:09 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b51a990696f06eb98e3a6b7080044cb77281f421bec9e0bc0f6c24d705ab03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:48:09 np0005597378 podman[401047]: 2026-01-27 14:48:09.797935373 +0000 UTC m=+0.024825156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:48:09 np0005597378 podman[401047]: 2026-01-27 14:48:09.904163722 +0000 UTC m=+0.131053475 container init 415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_euclid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:48:09 np0005597378 podman[401047]: 2026-01-27 14:48:09.912708811 +0000 UTC m=+0.139598554 container start 415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_euclid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:48:09 np0005597378 podman[401047]: 2026-01-27 14:48:09.919558015 +0000 UTC m=+0.146447758 container attach 415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 09:48:10 np0005597378 lvm[401142]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:48:10 np0005597378 lvm[401142]: VG ceph_vg1 finished
Jan 27 09:48:10 np0005597378 lvm[401141]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:48:10 np0005597378 lvm[401141]: VG ceph_vg0 finished
Jan 27 09:48:10 np0005597378 lvm[401144]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:48:10 np0005597378 lvm[401144]: VG ceph_vg2 finished
Jan 27 09:48:10 np0005597378 jolly_euclid[401063]: {}
Jan 27 09:48:10 np0005597378 systemd[1]: libpod-415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253.scope: Deactivated successfully.
Jan 27 09:48:10 np0005597378 systemd[1]: libpod-415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253.scope: Consumed 1.365s CPU time.
Jan 27 09:48:10 np0005597378 podman[401047]: 2026-01-27 14:48:10.792898669 +0000 UTC m=+1.019788412 container died 415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:48:10 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e0b51a990696f06eb98e3a6b7080044cb77281f421bec9e0bc0f6c24d705ab03-merged.mount: Deactivated successfully.
Jan 27 09:48:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:48:11 np0005597378 podman[401047]: 2026-01-27 14:48:11.115017998 +0000 UTC m=+1.341907741 container remove 415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:48:11 np0005597378 systemd[1]: libpod-conmon-415e641dacab5d1e71aaa6b6c82f075df98862e4b4cf8d212eb92a14ef9da253.scope: Deactivated successfully.
Jan 27 09:48:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:48:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:48:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:48:11 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:48:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:12 np0005597378 nova_compute[238941]: 2026-01-27 14:48:12.516 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:48:12 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:48:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:14 np0005597378 nova_compute[238941]: 2026-01-27 14:48:14.023 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.426820) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525296426865, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1449, "num_deletes": 251, "total_data_size": 2036878, "memory_usage": 2063600, "flush_reason": "Manual Compaction"}
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525296639083, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 1993740, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64489, "largest_seqno": 65937, "table_properties": {"data_size": 1986831, "index_size": 3855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16321, "raw_average_key_size": 20, "raw_value_size": 1972319, "raw_average_value_size": 2522, "num_data_blocks": 172, "num_entries": 782, "num_filter_entries": 782, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525168, "oldest_key_time": 1769525168, "file_creation_time": 1769525296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 212313 microseconds, and 6612 cpu microseconds.
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.639132) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 1993740 bytes OK
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.639154) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.727093) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.727133) EVENT_LOG_v1 {"time_micros": 1769525296727125, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.727158) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2030132, prev total WAL file size 2030132, number of live WAL files 2.
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.728020) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(1947KB)], [152(9253KB)]
Jan 27 09:48:16 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525296728070, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11469757, "oldest_snapshot_seqno": -1}
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8618 keys, 9700986 bytes, temperature: kUnknown
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525297020075, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 9700986, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9647114, "index_size": 31231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21573, "raw_key_size": 225264, "raw_average_key_size": 26, "raw_value_size": 9497194, "raw_average_value_size": 1102, "num_data_blocks": 1205, "num_entries": 8618, "num_filter_entries": 8618, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.020426) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 9700986 bytes
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.033955) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 39.3 rd, 33.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 9132, records dropped: 514 output_compression: NoCompression
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.034009) EVENT_LOG_v1 {"time_micros": 1769525297033994, "job": 94, "event": "compaction_finished", "compaction_time_micros": 292137, "compaction_time_cpu_micros": 22938, "output_level": 6, "num_output_files": 1, "total_output_size": 9700986, "num_input_records": 9132, "num_output_records": 8618, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525297034535, "job": 94, "event": "table_file_deletion", "file_number": 154}
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525297035912, "job": 94, "event": "table_file_deletion", "file_number": 152}
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:16.727835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.036010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.036016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.036018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.036021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:48:17 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:17.036023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:48:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:48:17
Jan 27 09:48:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:48:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:48:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta']
Jan 27 09:48:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:48:17 np0005597378 nova_compute[238941]: 2026-01-27 14:48:17.517 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:48:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:48:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:48:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:48:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:48:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:48:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:48:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:48:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:48:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:48:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:48:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:48:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:48:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:48:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:48:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:48:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:48:18 np0005597378 podman[401187]: 2026-01-27 14:48:18.513724006 +0000 UTC m=+0.091101855 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 09:48:19 np0005597378 nova_compute[238941]: 2026-01-27 14:48:19.025 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:48:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:48:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:22 np0005597378 nova_compute[238941]: 2026-01-27 14:48:22.521 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:48:22 np0005597378 podman[401207]: 2026-01-27 14:48:22.79229961 +0000 UTC m=+0.130571703 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 09:48:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:24 np0005597378 nova_compute[238941]: 2026-01-27 14:48:24.027 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:48:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:48:27 np0005597378 nova_compute[238941]: 2026-01-27 14:48:27.525 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:48:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:48:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:48:28 np0005597378 nova_compute[238941]: 2026-01-27 14:48:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:48:28 np0005597378 nova_compute[238941]: 2026-01-27 14:48:28.525 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:48:28 np0005597378 nova_compute[238941]: 2026-01-27 14:48:28.526 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:48:28 np0005597378 nova_compute[238941]: 2026-01-27 14:48:28.526 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:48:28 np0005597378 nova_compute[238941]: 2026-01-27 14:48:28.526 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 09:48:28 np0005597378 nova_compute[238941]: 2026-01-27 14:48:28.526 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:48:28 np0005597378 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 09:48:28 np0005597378 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 09:48:29 np0005597378 nova_compute[238941]: 2026-01-27 14:48:29.029 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:48:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:48:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1839871760' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:48:29 np0005597378 nova_compute[238941]: 2026-01-27 14:48:29.207 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:48:29 np0005597378 nova_compute[238941]: 2026-01-27 14:48:29.349 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 09:48:29 np0005597378 nova_compute[238941]: 2026-01-27 14:48:29.350 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3354MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 09:48:29 np0005597378 nova_compute[238941]: 2026-01-27 14:48:29.351 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 09:48:29 np0005597378 nova_compute[238941]: 2026-01-27 14:48:29.351 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 09:48:29 np0005597378 nova_compute[238941]: 2026-01-27 14:48:29.738 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 09:48:29 np0005597378 nova_compute[238941]: 2026-01-27 14:48:29.738 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 09:48:29 np0005597378 nova_compute[238941]: 2026-01-27 14:48:29.754 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 09:48:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:48:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/247283241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:48:30 np0005597378 nova_compute[238941]: 2026-01-27 14:48:30.311 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 09:48:30 np0005597378 nova_compute[238941]: 2026-01-27 14:48:30.319 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 09:48:30 np0005597378 nova_compute[238941]: 2026-01-27 14:48:30.395 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 09:48:30 np0005597378 nova_compute[238941]: 2026-01-27 14:48:30.396 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 09:48:30 np0005597378 nova_compute[238941]: 2026-01-27 14:48:30.397 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 09:48:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:48:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:32 np0005597378 nova_compute[238941]: 2026-01-27 14:48:32.530 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:48:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:34 np0005597378 nova_compute[238941]: 2026-01-27 14:48:34.032 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:48:35 np0005597378 nova_compute[238941]: 2026-01-27 14:48:35.396 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:48:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:48:37 np0005597378 nova_compute[238941]: 2026-01-27 14:48:37.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:48:37 np0005597378 nova_compute[238941]: 2026-01-27 14:48:37.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:48:37 np0005597378 nova_compute[238941]: 2026-01-27 14:48:37.532 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:48:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:39 np0005597378 nova_compute[238941]: 2026-01-27 14:48:39.036 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:40 np0005597378 nova_compute[238941]: 2026-01-27 14:48:40.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:48:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:48:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:42 np0005597378 nova_compute[238941]: 2026-01-27 14:48:42.559 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:43 np0005597378 nova_compute[238941]: 2026-01-27 14:48:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:48:43 np0005597378 nova_compute[238941]: 2026-01-27 14:48:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:48:43 np0005597378 nova_compute[238941]: 2026-01-27 14:48:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:48:43 np0005597378 nova_compute[238941]: 2026-01-27 14:48:43.399 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:48:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:44 np0005597378 nova_compute[238941]: 2026-01-27 14:48:44.039 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:45 np0005597378 systemd[1]: session-55.scope: Deactivated successfully.
Jan 27 09:48:45 np0005597378 systemd[1]: session-55.scope: Consumed 3min 17.768s CPU time, 1.0G memory peak, read 432.8M from disk, written 404.4M to disk.
Jan 27 09:48:45 np0005597378 systemd-logind[786]: Session 55 logged out. Waiting for processes to exit.
Jan 27 09:48:45 np0005597378 systemd-logind[786]: Removed session 55.
Jan 27 09:48:45 np0005597378 systemd-logind[786]: New session 56 of user zuul.
Jan 27 09:48:45 np0005597378 systemd[1]: Started Session 56 of User zuul.
Jan 27 09:48:45 np0005597378 systemd[1]: session-56.scope: Deactivated successfully.
Jan 27 09:48:45 np0005597378 systemd-logind[786]: Session 56 logged out. Waiting for processes to exit.
Jan 27 09:48:45 np0005597378 systemd-logind[786]: Removed session 56.
Jan 27 09:48:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:45 np0005597378 systemd-logind[786]: New session 57 of user zuul.
Jan 27 09:48:45 np0005597378 systemd[1]: Started Session 57 of User zuul.
Jan 27 09:48:46 np0005597378 systemd[1]: session-57.scope: Deactivated successfully.
Jan 27 09:48:46 np0005597378 systemd-logind[786]: Session 57 logged out. Waiting for processes to exit.
Jan 27 09:48:46 np0005597378 systemd-logind[786]: Removed session 57.
Jan 27 09:48:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:48:46.361 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:48:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:48:46.361 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:48:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:48:46.361 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:48:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:48:47 np0005597378 nova_compute[238941]: 2026-01-27 14:48:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:48:47 np0005597378 nova_compute[238941]: 2026-01-27 14:48:47.563 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:48:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:48:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:48:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:48:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:48:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:48:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:48 np0005597378 podman[401340]: 2026-01-27 14:48:48.721187979 +0000 UTC m=+0.062374924 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 09:48:49 np0005597378 nova_compute[238941]: 2026-01-27 14:48:49.040 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:48:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:52 np0005597378 nova_compute[238941]: 2026-01-27 14:48:52.567 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:53 np0005597378 podman[401359]: 2026-01-27 14:48:53.764189577 +0000 UTC m=+0.109407345 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:48:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:54 np0005597378 nova_compute[238941]: 2026-01-27 14:48:54.041 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:56 np0005597378 nova_compute[238941]: 2026-01-27 14:48:56.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:48:56 np0005597378 nova_compute[238941]: 2026-01-27 14:48:56.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.429190) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525336429231, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 549, "num_deletes": 255, "total_data_size": 577172, "memory_usage": 588488, "flush_reason": "Manual Compaction"}
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525336461004, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 572120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65938, "largest_seqno": 66486, "table_properties": {"data_size": 569099, "index_size": 992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6718, "raw_average_key_size": 18, "raw_value_size": 563161, "raw_average_value_size": 1530, "num_data_blocks": 45, "num_entries": 368, "num_filter_entries": 368, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525297, "oldest_key_time": 1769525297, "file_creation_time": 1769525336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 31854 microseconds, and 2703 cpu microseconds.
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.461044) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 572120 bytes OK
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.461062) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.499850) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.499927) EVENT_LOG_v1 {"time_micros": 1769525336499917, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.499991) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 574082, prev total WAL file size 574082, number of live WAL files 2.
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.500737) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373633' seq:72057594037927935, type:22 .. '6C6F676D0033303134' seq:0, type:0; will stop at (end)
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(558KB)], [155(9473KB)]
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525336500795, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 10273106, "oldest_snapshot_seqno": -1}
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8469 keys, 10167525 bytes, temperature: kUnknown
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525336598659, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 10167525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10113546, "index_size": 31722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 223080, "raw_average_key_size": 26, "raw_value_size": 9965116, "raw_average_value_size": 1176, "num_data_blocks": 1224, "num_entries": 8469, "num_filter_entries": 8469, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.599047) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 10167525 bytes
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.607801) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.7 rd, 103.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.3 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(35.7) write-amplify(17.8) OK, records in: 8986, records dropped: 517 output_compression: NoCompression
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.607836) EVENT_LOG_v1 {"time_micros": 1769525336607822, "job": 96, "event": "compaction_finished", "compaction_time_micros": 98077, "compaction_time_cpu_micros": 24348, "output_level": 6, "num_output_files": 1, "total_output_size": 10167525, "num_input_records": 8986, "num_output_records": 8469, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525336608769, "job": 96, "event": "table_file_deletion", "file_number": 157}
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525336611167, "job": 96, "event": "table_file_deletion", "file_number": 155}
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.500602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.611358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.611363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.611365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.611367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:48:56 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:48:56.611368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:48:57 np0005597378 nova_compute[238941]: 2026-01-27 14:48:57.572 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:48:59 np0005597378 nova_compute[238941]: 2026-01-27 14:48:59.044 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:48:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:48:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4162700777' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:48:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:48:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4162700777' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:48:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:01 np0005597378 nova_compute[238941]: 2026-01-27 14:49:01.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:49:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:49:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:02 np0005597378 nova_compute[238941]: 2026-01-27 14:49:02.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:04 np0005597378 nova_compute[238941]: 2026-01-27 14:49:04.045 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:49:07 np0005597378 nova_compute[238941]: 2026-01-27 14:49:07.578 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:09 np0005597378 nova_compute[238941]: 2026-01-27 14:49:09.049 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:49:11 np0005597378 podman[401480]: 2026-01-27 14:49:11.853602904 +0000 UTC m=+0.083985353 container exec da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:49:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:11 np0005597378 podman[401480]: 2026-01-27 14:49:11.961706115 +0000 UTC m=+0.192088564 container exec_died da35e91e4dd6369512e3fe7cb281587ce7f4b21b427ce026251a4e4f9ef64a0c (image=quay.io/ceph/ceph:v20, name=ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:49:12 np0005597378 nova_compute[238941]: 2026-01-27 14:49:12.582 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:49:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:49:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:49:12 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:49:13 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:49:13 np0005597378 podman[401807]: 2026-01-27 14:49:13.775286726 +0000 UTC m=+0.042135811 container create a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:49:13 np0005597378 systemd[1]: Started libpod-conmon-a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de.scope.
Jan 27 09:49:13 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:49:13 np0005597378 podman[401807]: 2026-01-27 14:49:13.754764335 +0000 UTC m=+0.021613440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:49:13 np0005597378 podman[401807]: 2026-01-27 14:49:13.855738653 +0000 UTC m=+0.122587758 container init a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:49:13 np0005597378 podman[401807]: 2026-01-27 14:49:13.863117862 +0000 UTC m=+0.129966937 container start a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gates, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:49:13 np0005597378 podman[401807]: 2026-01-27 14:49:13.866874492 +0000 UTC m=+0.133723587 container attach a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:49:13 np0005597378 relaxed_gates[401822]: 167 167
Jan 27 09:49:13 np0005597378 systemd[1]: libpod-a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de.scope: Deactivated successfully.
Jan 27 09:49:13 np0005597378 conmon[401822]: conmon a5ab6f565bd86c76ea01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de.scope/container/memory.events
Jan 27 09:49:13 np0005597378 podman[401807]: 2026-01-27 14:49:13.869974796 +0000 UTC m=+0.136823881 container died a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:49:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c22ddd31e7a02afe03dce397e174e6174d59f57fa89e2bced8b913f7fa4d40b8-merged.mount: Deactivated successfully.
Jan 27 09:49:13 np0005597378 podman[401807]: 2026-01-27 14:49:13.912875446 +0000 UTC m=+0.179724531 container remove a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 27 09:49:13 np0005597378 systemd[1]: libpod-conmon-a5ab6f565bd86c76ea014ac44eeb4357230e21766cf03d0ea2f29658b677c9de.scope: Deactivated successfully.
Jan 27 09:49:14 np0005597378 nova_compute[238941]: 2026-01-27 14:49:14.050 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:14 np0005597378 podman[401846]: 2026-01-27 14:49:14.070647127 +0000 UTC m=+0.047681559 container create a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sutherland, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:49:14 np0005597378 systemd[1]: Started libpod-conmon-a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6.scope.
Jan 27 09:49:14 np0005597378 podman[401846]: 2026-01-27 14:49:14.047424984 +0000 UTC m=+0.024459446 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:49:14 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:49:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde86623fa5c3dc8e3870427d6cab2f567dea8d397fd9fc95768195d6b170360/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:49:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde86623fa5c3dc8e3870427d6cab2f567dea8d397fd9fc95768195d6b170360/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:49:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde86623fa5c3dc8e3870427d6cab2f567dea8d397fd9fc95768195d6b170360/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:49:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde86623fa5c3dc8e3870427d6cab2f567dea8d397fd9fc95768195d6b170360/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:49:14 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde86623fa5c3dc8e3870427d6cab2f567dea8d397fd9fc95768195d6b170360/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:49:14 np0005597378 podman[401846]: 2026-01-27 14:49:14.168940133 +0000 UTC m=+0.145974585 container init a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 09:49:14 np0005597378 podman[401846]: 2026-01-27 14:49:14.178408218 +0000 UTC m=+0.155442650 container start a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:49:14 np0005597378 podman[401846]: 2026-01-27 14:49:14.185432866 +0000 UTC m=+0.162467298 container attach a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:49:14 np0005597378 great_sutherland[401862]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:49:14 np0005597378 great_sutherland[401862]: --> All data devices are unavailable
Jan 27 09:49:14 np0005597378 systemd[1]: libpod-a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6.scope: Deactivated successfully.
Jan 27 09:49:14 np0005597378 podman[401846]: 2026-01-27 14:49:14.661581227 +0000 UTC m=+0.638615679 container died a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sutherland, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:49:14 np0005597378 systemd[1]: var-lib-containers-storage-overlay-bde86623fa5c3dc8e3870427d6cab2f567dea8d397fd9fc95768195d6b170360-merged.mount: Deactivated successfully.
Jan 27 09:49:14 np0005597378 podman[401846]: 2026-01-27 14:49:14.722253393 +0000 UTC m=+0.699287825 container remove a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sutherland, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 09:49:14 np0005597378 systemd[1]: libpod-conmon-a09f848f0c223e6bfcb896b4b50926a4d8040244091fe3562d4b2a386bed89b6.scope: Deactivated successfully.
Jan 27 09:49:15 np0005597378 podman[401955]: 2026-01-27 14:49:15.192987149 +0000 UTC m=+0.058680394 container create 8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:49:15 np0005597378 systemd[1]: Started libpod-conmon-8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8.scope.
Jan 27 09:49:15 np0005597378 podman[401955]: 2026-01-27 14:49:15.161436123 +0000 UTC m=+0.027129388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:49:15 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:49:15 np0005597378 podman[401955]: 2026-01-27 14:49:15.28698169 +0000 UTC m=+0.152674935 container init 8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:49:15 np0005597378 podman[401955]: 2026-01-27 14:49:15.295889039 +0000 UTC m=+0.161582284 container start 8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 09:49:15 np0005597378 laughing_mclaren[401971]: 167 167
Jan 27 09:49:15 np0005597378 systemd[1]: libpod-8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8.scope: Deactivated successfully.
Jan 27 09:49:15 np0005597378 podman[401955]: 2026-01-27 14:49:15.302466996 +0000 UTC m=+0.168160241 container attach 8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 09:49:15 np0005597378 podman[401955]: 2026-01-27 14:49:15.303205905 +0000 UTC m=+0.168899150 container died 8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:49:15 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4334d56cdbbf490de2b97d8c6574e960e38826cb1c25c98f55fcd43905079016-merged.mount: Deactivated successfully.
Jan 27 09:49:15 np0005597378 podman[401955]: 2026-01-27 14:49:15.356395502 +0000 UTC m=+0.222088747 container remove 8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 27 09:49:15 np0005597378 systemd[1]: libpod-conmon-8816a6605dba29bf7c69e65e3afbb3deaf32e65f5a35a4f1cd240b12c51f6dc8.scope: Deactivated successfully.
Jan 27 09:49:15 np0005597378 podman[401995]: 2026-01-27 14:49:15.523550986 +0000 UTC m=+0.044499565 container create a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 27 09:49:15 np0005597378 systemd[1]: Started libpod-conmon-a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2.scope.
Jan 27 09:49:15 np0005597378 podman[401995]: 2026-01-27 14:49:15.504540825 +0000 UTC m=+0.025489434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:49:15 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:49:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f949543370c4c7d47c6c32e3f827bd41a395807db36c7a80c5f39364df53b37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:49:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f949543370c4c7d47c6c32e3f827bd41a395807db36c7a80c5f39364df53b37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:49:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f949543370c4c7d47c6c32e3f827bd41a395807db36c7a80c5f39364df53b37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:49:15 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f949543370c4c7d47c6c32e3f827bd41a395807db36c7a80c5f39364df53b37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:49:15 np0005597378 podman[401995]: 2026-01-27 14:49:15.624263786 +0000 UTC m=+0.145212385 container init a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 09:49:15 np0005597378 podman[401995]: 2026-01-27 14:49:15.632186279 +0000 UTC m=+0.153134858 container start a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:49:15 np0005597378 podman[401995]: 2026-01-27 14:49:15.635929099 +0000 UTC m=+0.156877678 container attach a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:49:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]: {
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:    "0": [
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:        {
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "devices": [
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "/dev/loop3"
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            ],
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_name": "ceph_lv0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_size": "21470642176",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "name": "ceph_lv0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "tags": {
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.cluster_name": "ceph",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.crush_device_class": "",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.encrypted": "0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.objectstore": "bluestore",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.osd_id": "0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.type": "block",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.vdo": "0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.with_tpm": "0"
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            },
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "type": "block",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "vg_name": "ceph_vg0"
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:        }
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:    ],
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:    "1": [
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:        {
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "devices": [
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "/dev/loop4"
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            ],
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_name": "ceph_lv1",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_size": "21470642176",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "name": "ceph_lv1",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "tags": {
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.cluster_name": "ceph",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.crush_device_class": "",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.encrypted": "0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.objectstore": "bluestore",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.osd_id": "1",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.type": "block",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.vdo": "0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.with_tpm": "0"
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            },
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "type": "block",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "vg_name": "ceph_vg1"
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:        }
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:    ],
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:    "2": [
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:        {
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "devices": [
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "/dev/loop5"
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            ],
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_name": "ceph_lv2",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_size": "21470642176",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "name": "ceph_lv2",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "tags": {
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.cluster_name": "ceph",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.crush_device_class": "",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.encrypted": "0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.objectstore": "bluestore",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.osd_id": "2",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.type": "block",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.vdo": "0",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:                "ceph.with_tpm": "0"
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            },
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "type": "block",
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:            "vg_name": "ceph_vg2"
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:        }
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]:    ]
Jan 27 09:49:15 np0005597378 stupefied_kilby[402011]: }
Jan 27 09:49:15 np0005597378 systemd[1]: libpod-a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2.scope: Deactivated successfully.
Jan 27 09:49:15 np0005597378 podman[401995]: 2026-01-27 14:49:15.952815698 +0000 UTC m=+0.473764277 container died a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:49:15 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5f949543370c4c7d47c6c32e3f827bd41a395807db36c7a80c5f39364df53b37-merged.mount: Deactivated successfully.
Jan 27 09:49:16 np0005597378 podman[401995]: 2026-01-27 14:49:16.013932038 +0000 UTC m=+0.534880617 container remove a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:49:16 np0005597378 systemd[1]: libpod-conmon-a762b50eedc2a22bc5d2ffd8a65b38567a0661d601e5c35512e95ee3b8a18bd2.scope: Deactivated successfully.
Jan 27 09:49:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:49:16 np0005597378 podman[402091]: 2026-01-27 14:49:16.492454712 +0000 UTC m=+0.040690893 container create 4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:49:16 np0005597378 systemd[1]: Started libpod-conmon-4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93.scope.
Jan 27 09:49:16 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:49:16 np0005597378 podman[402091]: 2026-01-27 14:49:16.472396413 +0000 UTC m=+0.020632604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:49:16 np0005597378 podman[402091]: 2026-01-27 14:49:16.58040788 +0000 UTC m=+0.128644081 container init 4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:49:16 np0005597378 podman[402091]: 2026-01-27 14:49:16.586188365 +0000 UTC m=+0.134424536 container start 4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_panini, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 09:49:16 np0005597378 fervent_panini[402107]: 167 167
Jan 27 09:49:16 np0005597378 systemd[1]: libpod-4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93.scope: Deactivated successfully.
Jan 27 09:49:16 np0005597378 conmon[402107]: conmon 4f016c57150b8b0210d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93.scope/container/memory.events
Jan 27 09:49:16 np0005597378 podman[402091]: 2026-01-27 14:49:16.599716398 +0000 UTC m=+0.147952599 container attach 4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:49:16 np0005597378 podman[402091]: 2026-01-27 14:49:16.600459049 +0000 UTC m=+0.148695230 container died 4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 27 09:49:16 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a1c30ea026e668b1b5a1c196ae8aa6cd0d1880a045eb439c1d5f5a2a0609114b-merged.mount: Deactivated successfully.
Jan 27 09:49:16 np0005597378 podman[402091]: 2026-01-27 14:49:16.653310915 +0000 UTC m=+0.201547076 container remove 4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:49:16 np0005597378 systemd[1]: libpod-conmon-4f016c57150b8b0210d14f414800b360570da841441464e5d4969f2c1ae03e93.scope: Deactivated successfully.
Jan 27 09:49:16 np0005597378 podman[402132]: 2026-01-27 14:49:16.809772702 +0000 UTC m=+0.037677701 container create fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 09:49:16 np0005597378 systemd[1]: Started libpod-conmon-fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc.scope.
Jan 27 09:49:16 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:49:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8559ba676775d5257eada187d36e08fe43899b4667cd8a53bd6b0c49c28c5f7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:49:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8559ba676775d5257eada187d36e08fe43899b4667cd8a53bd6b0c49c28c5f7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:49:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8559ba676775d5257eada187d36e08fe43899b4667cd8a53bd6b0c49c28c5f7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:49:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8559ba676775d5257eada187d36e08fe43899b4667cd8a53bd6b0c49c28c5f7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:49:16 np0005597378 podman[402132]: 2026-01-27 14:49:16.79364712 +0000 UTC m=+0.021552129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:49:16 np0005597378 podman[402132]: 2026-01-27 14:49:16.908930922 +0000 UTC m=+0.136835931 container init fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_snyder, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:49:16 np0005597378 podman[402132]: 2026-01-27 14:49:16.916999118 +0000 UTC m=+0.144904107 container start fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_snyder, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:49:16 np0005597378 podman[402132]: 2026-01-27 14:49:16.923506552 +0000 UTC m=+0.151411561 container attach fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_snyder, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:49:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:49:17
Jan 27 09:49:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:49:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:49:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'images', '.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'backups']
Jan 27 09:49:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:49:17 np0005597378 nova_compute[238941]: 2026-01-27 14:49:17.587 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:17 np0005597378 lvm[402227]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:49:17 np0005597378 lvm[402226]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:49:17 np0005597378 lvm[402227]: VG ceph_vg1 finished
Jan 27 09:49:17 np0005597378 lvm[402226]: VG ceph_vg0 finished
Jan 27 09:49:17 np0005597378 lvm[402229]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:49:17 np0005597378 lvm[402229]: VG ceph_vg2 finished
Jan 27 09:49:17 np0005597378 thirsty_snyder[402148]: {}
Jan 27 09:49:17 np0005597378 systemd[1]: libpod-fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc.scope: Deactivated successfully.
Jan 27 09:49:17 np0005597378 podman[402132]: 2026-01-27 14:49:17.803550806 +0000 UTC m=+1.031455825 container died fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:49:17 np0005597378 systemd[1]: libpod-fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc.scope: Consumed 1.382s CPU time.
Jan 27 09:49:17 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8559ba676775d5257eada187d36e08fe43899b4667cd8a53bd6b0c49c28c5f7a-merged.mount: Deactivated successfully.
Jan 27 09:49:17 np0005597378 podman[402132]: 2026-01-27 14:49:17.870960034 +0000 UTC m=+1.098865023 container remove fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_snyder, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 09:49:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:49:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:49:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:49:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:49:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:49:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:49:17 np0005597378 systemd[1]: libpod-conmon-fce24b1619e7eb1f4f52d4d085d153458686e316f7180f286e7d39c048e39bfc.scope: Deactivated successfully.
Jan 27 09:49:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:49:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:49:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:49:17 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:49:17 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:49:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:49:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:49:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:49:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:49:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:49:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:49:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:49:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:49:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:49:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:49:18 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:49:19 np0005597378 nova_compute[238941]: 2026-01-27 14:49:19.053 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:19 np0005597378 podman[402271]: 2026-01-27 14:49:19.718384624 +0000 UTC m=+0.057246047 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 09:49:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:49:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:22 np0005597378 nova_compute[238941]: 2026-01-27 14:49:22.592 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:24 np0005597378 nova_compute[238941]: 2026-01-27 14:49:24.054 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:24 np0005597378 podman[402291]: 2026-01-27 14:49:24.750215459 +0000 UTC m=+0.087714972 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 27 09:49:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:49:27 np0005597378 nova_compute[238941]: 2026-01-27 14:49:27.642 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:49:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:49:28 np0005597378 nova_compute[238941]: 2026-01-27 14:49:28.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:49:28 np0005597378 nova_compute[238941]: 2026-01-27 14:49:28.567 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:49:28 np0005597378 nova_compute[238941]: 2026-01-27 14:49:28.567 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:49:28 np0005597378 nova_compute[238941]: 2026-01-27 14:49:28.567 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:49:28 np0005597378 nova_compute[238941]: 2026-01-27 14:49:28.568 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:49:28 np0005597378 nova_compute[238941]: 2026-01-27 14:49:28.568 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:49:29 np0005597378 nova_compute[238941]: 2026-01-27 14:49:29.056 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:49:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918361996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:49:29 np0005597378 nova_compute[238941]: 2026-01-27 14:49:29.157 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:49:29 np0005597378 nova_compute[238941]: 2026-01-27 14:49:29.326 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:49:29 np0005597378 nova_compute[238941]: 2026-01-27 14:49:29.327 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3453MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:49:29 np0005597378 nova_compute[238941]: 2026-01-27 14:49:29.327 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:49:29 np0005597378 nova_compute[238941]: 2026-01-27 14:49:29.328 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:49:29 np0005597378 nova_compute[238941]: 2026-01-27 14:49:29.647 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:49:29 np0005597378 nova_compute[238941]: 2026-01-27 14:49:29.648 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:49:29 np0005597378 nova_compute[238941]: 2026-01-27 14:49:29.800 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 09:49:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:29 np0005597378 nova_compute[238941]: 2026-01-27 14:49:29.949 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 09:49:29 np0005597378 nova_compute[238941]: 2026-01-27 14:49:29.949 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 09:49:29 np0005597378 nova_compute[238941]: 2026-01-27 14:49:29.970 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 09:49:29 np0005597378 nova_compute[238941]: 2026-01-27 14:49:29.995 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 09:49:30 np0005597378 nova_compute[238941]: 2026-01-27 14:49:30.018 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:49:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:49:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880629263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:49:30 np0005597378 nova_compute[238941]: 2026-01-27 14:49:30.607 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:49:30 np0005597378 nova_compute[238941]: 2026-01-27 14:49:30.616 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:49:30 np0005597378 nova_compute[238941]: 2026-01-27 14:49:30.654 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:49:30 np0005597378 nova_compute[238941]: 2026-01-27 14:49:30.656 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:49:30 np0005597378 nova_compute[238941]: 2026-01-27 14:49:30.657 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:49:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:49:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:32 np0005597378 nova_compute[238941]: 2026-01-27 14:49:32.645 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:34 np0005597378 nova_compute[238941]: 2026-01-27 14:49:34.057 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:49:37 np0005597378 nova_compute[238941]: 2026-01-27 14:49:37.648 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:37 np0005597378 nova_compute[238941]: 2026-01-27 14:49:37.657 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:49:37 np0005597378 nova_compute[238941]: 2026-01-27 14:49:37.657 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:49:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:38 np0005597378 nova_compute[238941]: 2026-01-27 14:49:38.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:49:39 np0005597378 nova_compute[238941]: 2026-01-27 14:49:39.059 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:49:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:42 np0005597378 nova_compute[238941]: 2026-01-27 14:49:42.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:49:42 np0005597378 nova_compute[238941]: 2026-01-27 14:49:42.653 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:43 np0005597378 nova_compute[238941]: 2026-01-27 14:49:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:49:43 np0005597378 nova_compute[238941]: 2026-01-27 14:49:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:49:43 np0005597378 nova_compute[238941]: 2026-01-27 14:49:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:49:43 np0005597378 nova_compute[238941]: 2026-01-27 14:49:43.469 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:49:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:44 np0005597378 nova_compute[238941]: 2026-01-27 14:49:44.061 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:49:46.361 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:49:46.361 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:49:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:49:46.362 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:49:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:49:47 np0005597378 nova_compute[238941]: 2026-01-27 14:49:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:49:47 np0005597378 nova_compute[238941]: 2026-01-27 14:49:47.657 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:49:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:49:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:49:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:49:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:49:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:49:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:49 np0005597378 nova_compute[238941]: 2026-01-27 14:49:49.106 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:49:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:50 np0005597378 podman[402363]: 2026-01-27 14:49:50.722941414 +0000 UTC m=+0.055850219 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 09:49:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:49:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:52 np0005597378 nova_compute[238941]: 2026-01-27 14:49:52.660 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:49:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:54 np0005597378 nova_compute[238941]: 2026-01-27 14:49:54.109 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:49:55 np0005597378 podman[402383]: 2026-01-27 14:49:55.729955386 +0000 UTC m=+0.075665611 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:49:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:49:57 np0005597378 nova_compute[238941]: 2026-01-27 14:49:57.664 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:49:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:49:58 np0005597378 nova_compute[238941]: 2026-01-27 14:49:58.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:49:58 np0005597378 nova_compute[238941]: 2026-01-27 14:49:58.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:49:59 np0005597378 nova_compute[238941]: 2026-01-27 14:49:59.111 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:49:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:49:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3066238430' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:49:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:49:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3066238430' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:49:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:50:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:02 np0005597378 nova_compute[238941]: 2026-01-27 14:50:02.668 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:50:03 np0005597378 nova_compute[238941]: 2026-01-27 14:50:03.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:50:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:04 np0005597378 nova_compute[238941]: 2026-01-27 14:50:04.113 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:50:04 np0005597378 nova_compute[238941]: 2026-01-27 14:50:04.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:50:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:50:07 np0005597378 nova_compute[238941]: 2026-01-27 14:50:07.671 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:50:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:09 np0005597378 nova_compute[238941]: 2026-01-27 14:50:09.114 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:50:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:50:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:12 np0005597378 nova_compute[238941]: 2026-01-27 14:50:12.674 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:50:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:14 np0005597378 nova_compute[238941]: 2026-01-27 14:50:14.117 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:50:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:50:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:50:17
Jan 27 09:50:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:50:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:50:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'vms', 'volumes', 'default.rgw.meta', '.rgw.root']
Jan 27 09:50:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:50:17 np0005597378 nova_compute[238941]: 2026-01-27 14:50:17.678 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:50:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:50:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:50:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:50:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:50:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:50:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:50:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:50:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:50:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:50:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:50:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:50:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:50:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:50:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:50:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:50:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:50:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:50:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:50:18 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:50:18 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:50:19 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:50:19 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:50:19 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:50:19 np0005597378 nova_compute[238941]: 2026-01-27 14:50:19.169 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:50:19 np0005597378 podman[402553]: 2026-01-27 14:50:19.23327211 +0000 UTC m=+0.024890599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:50:19 np0005597378 podman[402553]: 2026-01-27 14:50:19.560421733 +0000 UTC m=+0.352040202 container create 3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_bhabha, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:50:19 np0005597378 systemd[1]: Started libpod-conmon-3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64.scope.
Jan 27 09:50:19 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:50:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:20 np0005597378 podman[402553]: 2026-01-27 14:50:20.007816563 +0000 UTC m=+0.799435072 container init 3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_bhabha, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:50:20 np0005597378 podman[402553]: 2026-01-27 14:50:20.019613399 +0000 UTC m=+0.811231868 container start 3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_bhabha, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:50:20 np0005597378 quirky_bhabha[402569]: 167 167
Jan 27 09:50:20 np0005597378 systemd[1]: libpod-3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64.scope: Deactivated successfully.
Jan 27 09:50:20 np0005597378 podman[402553]: 2026-01-27 14:50:20.102163053 +0000 UTC m=+0.893781542 container attach 3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:50:20 np0005597378 podman[402553]: 2026-01-27 14:50:20.103681594 +0000 UTC m=+0.895300063 container died 3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_bhabha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:50:20 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7060c0a798156331986e9a9ce7ad05325562298461e5ca41edb14f5de4bc0a35-merged.mount: Deactivated successfully.
Jan 27 09:50:20 np0005597378 podman[402553]: 2026-01-27 14:50:20.867601613 +0000 UTC m=+1.659220082 container remove 3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:50:20 np0005597378 systemd[1]: libpod-conmon-3f1a64406dc4d838d8351987f8b32c1dd7efaf51f5dc1121123c370445d20f64.scope: Deactivated successfully.
Jan 27 09:50:21 np0005597378 podman[402587]: 2026-01-27 14:50:21.007580287 +0000 UTC m=+0.067195073 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:50:21 np0005597378 podman[402610]: 2026-01-27 14:50:21.084223363 +0000 UTC m=+0.084352153 container create 9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 09:50:21 np0005597378 podman[402610]: 2026-01-27 14:50:21.025899069 +0000 UTC m=+0.026027889 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:50:21 np0005597378 systemd[1]: Started libpod-conmon-9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8.scope.
Jan 27 09:50:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:50:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a809d918364ddc3abb9417c51c72e6f4b6b2dc9b17f06b5d9ca329e47bd639/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:50:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a809d918364ddc3abb9417c51c72e6f4b6b2dc9b17f06b5d9ca329e47bd639/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:50:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a809d918364ddc3abb9417c51c72e6f4b6b2dc9b17f06b5d9ca329e47bd639/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:50:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a809d918364ddc3abb9417c51c72e6f4b6b2dc9b17f06b5d9ca329e47bd639/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:50:21 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a809d918364ddc3abb9417c51c72e6f4b6b2dc9b17f06b5d9ca329e47bd639/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:50:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:50:21 np0005597378 podman[402610]: 2026-01-27 14:50:21.440219781 +0000 UTC m=+0.440348601 container init 9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 27 09:50:21 np0005597378 podman[402610]: 2026-01-27 14:50:21.450694072 +0000 UTC m=+0.450822862 container start 9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:50:21 np0005597378 podman[402610]: 2026-01-27 14:50:21.548847315 +0000 UTC m=+0.548976135 container attach 9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:50:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:21 np0005597378 tender_archimedes[402628]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:50:21 np0005597378 tender_archimedes[402628]: --> All data devices are unavailable
Jan 27 09:50:22 np0005597378 systemd[1]: libpod-9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8.scope: Deactivated successfully.
Jan 27 09:50:22 np0005597378 podman[402610]: 2026-01-27 14:50:22.018616984 +0000 UTC m=+1.018745784 container died 9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 09:50:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e8a809d918364ddc3abb9417c51c72e6f4b6b2dc9b17f06b5d9ca329e47bd639-merged.mount: Deactivated successfully.
Jan 27 09:50:22 np0005597378 podman[402610]: 2026-01-27 14:50:22.103137771 +0000 UTC m=+1.103266571 container remove 9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_archimedes, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:50:22 np0005597378 systemd[1]: libpod-conmon-9fe3476b7b070a611f89e2d23801361fbc672533ddeac541c6c9e8fda90e4fe8.scope: Deactivated successfully.
Jan 27 09:50:22 np0005597378 podman[402718]: 2026-01-27 14:50:22.625052219 +0000 UTC m=+0.040632141 container create e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:50:22 np0005597378 systemd[1]: Started libpod-conmon-e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c.scope.
Jan 27 09:50:22 np0005597378 nova_compute[238941]: 2026-01-27 14:50:22.681 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:22 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:50:22 np0005597378 podman[402718]: 2026-01-27 14:50:22.608069874 +0000 UTC m=+0.023649816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:50:22 np0005597378 podman[402718]: 2026-01-27 14:50:22.709027301 +0000 UTC m=+0.124607253 container init e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:50:22 np0005597378 podman[402718]: 2026-01-27 14:50:22.717598631 +0000 UTC m=+0.133178563 container start e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hodgkin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:50:22 np0005597378 podman[402718]: 2026-01-27 14:50:22.722637006 +0000 UTC m=+0.138216948 container attach e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 27 09:50:22 np0005597378 distracted_hodgkin[402734]: 167 167
Jan 27 09:50:22 np0005597378 systemd[1]: libpod-e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c.scope: Deactivated successfully.
Jan 27 09:50:22 np0005597378 conmon[402734]: conmon e4f56f385ea8e71b1ef6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c.scope/container/memory.events
Jan 27 09:50:22 np0005597378 podman[402718]: 2026-01-27 14:50:22.725430491 +0000 UTC m=+0.141010433 container died e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 09:50:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-b936afda2e4f8a8ab18ae973ce9ae64ce9c57e81f73dd1576fb8933080ff9f8f-merged.mount: Deactivated successfully.
Jan 27 09:50:22 np0005597378 podman[402718]: 2026-01-27 14:50:22.766449001 +0000 UTC m=+0.182028923 container remove e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:50:22 np0005597378 systemd[1]: libpod-conmon-e4f56f385ea8e71b1ef6290ee1607036ccccad3e2633650298fdee6875640d2c.scope: Deactivated successfully.
Jan 27 09:50:22 np0005597378 podman[402757]: 2026-01-27 14:50:22.956459388 +0000 UTC m=+0.048862092 container create 52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bartik, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:50:22 np0005597378 systemd[1]: Started libpod-conmon-52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746.scope.
Jan 27 09:50:23 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:50:23 np0005597378 podman[402757]: 2026-01-27 14:50:22.934529969 +0000 UTC m=+0.026932703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:50:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4849983eb79a163250c92e240d690284a1a2951e09750f0c4e1d4b3c8651b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:50:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4849983eb79a163250c92e240d690284a1a2951e09750f0c4e1d4b3c8651b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:50:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4849983eb79a163250c92e240d690284a1a2951e09750f0c4e1d4b3c8651b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:50:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4849983eb79a163250c92e240d690284a1a2951e09750f0c4e1d4b3c8651b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:50:23 np0005597378 podman[402757]: 2026-01-27 14:50:23.046251686 +0000 UTC m=+0.138654410 container init 52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 27 09:50:23 np0005597378 podman[402757]: 2026-01-27 14:50:23.054750664 +0000 UTC m=+0.147153358 container start 52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bartik, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:50:23 np0005597378 podman[402757]: 2026-01-27 14:50:23.05907798 +0000 UTC m=+0.151480704 container attach 52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]: {
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:    "0": [
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:        {
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "devices": [
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "/dev/loop3"
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            ],
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_name": "ceph_lv0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_size": "21470642176",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "name": "ceph_lv0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "tags": {
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.cluster_name": "ceph",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.crush_device_class": "",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.encrypted": "0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.objectstore": "bluestore",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.osd_id": "0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.type": "block",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.vdo": "0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.with_tpm": "0"
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            },
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "type": "block",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "vg_name": "ceph_vg0"
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:        }
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:    ],
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:    "1": [
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:        {
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "devices": [
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "/dev/loop4"
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            ],
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_name": "ceph_lv1",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_size": "21470642176",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "name": "ceph_lv1",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "tags": {
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.cluster_name": "ceph",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.crush_device_class": "",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.encrypted": "0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.objectstore": "bluestore",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.osd_id": "1",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.type": "block",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.vdo": "0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.with_tpm": "0"
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            },
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "type": "block",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "vg_name": "ceph_vg1"
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:        }
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:    ],
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:    "2": [
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:        {
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "devices": [
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "/dev/loop5"
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            ],
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_name": "ceph_lv2",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_size": "21470642176",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "name": "ceph_lv2",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "tags": {
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.cluster_name": "ceph",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.crush_device_class": "",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.encrypted": "0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.objectstore": "bluestore",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.osd_id": "2",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.type": "block",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.vdo": "0",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:                "ceph.with_tpm": "0"
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            },
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "type": "block",
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:            "vg_name": "ceph_vg2"
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:        }
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]:    ]
Jan 27 09:50:23 np0005597378 stoic_bartik[402776]: }
Jan 27 09:50:23 np0005597378 systemd[1]: libpod-52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746.scope: Deactivated successfully.
Jan 27 09:50:23 np0005597378 podman[402757]: 2026-01-27 14:50:23.378386254 +0000 UTC m=+0.470788968 container died 52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bartik, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 27 09:50:23 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7d4849983eb79a163250c92e240d690284a1a2951e09750f0c4e1d4b3c8651b4-merged.mount: Deactivated successfully.
Jan 27 09:50:23 np0005597378 podman[402757]: 2026-01-27 14:50:23.417893243 +0000 UTC m=+0.510295947 container remove 52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bartik, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:50:23 np0005597378 systemd[1]: libpod-conmon-52e7159023a769e0437cd3481d74f7352a36a3525fe4470f81c02ed8057d7746.scope: Deactivated successfully.
Jan 27 09:50:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:23 np0005597378 podman[402860]: 2026-01-27 14:50:23.946790879 +0000 UTC m=+0.054390410 container create bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:50:23 np0005597378 systemd[1]: Started libpod-conmon-bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9.scope.
Jan 27 09:50:24 np0005597378 podman[402860]: 2026-01-27 14:50:23.922701643 +0000 UTC m=+0.030301194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:50:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:50:24 np0005597378 podman[402860]: 2026-01-27 14:50:24.06389105 +0000 UTC m=+0.171490611 container init bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:50:24 np0005597378 podman[402860]: 2026-01-27 14:50:24.070211929 +0000 UTC m=+0.177811460 container start bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 27 09:50:24 np0005597378 podman[402860]: 2026-01-27 14:50:24.073855677 +0000 UTC m=+0.181455228 container attach bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 27 09:50:24 np0005597378 optimistic_golick[402876]: 167 167
Jan 27 09:50:24 np0005597378 systemd[1]: libpod-bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9.scope: Deactivated successfully.
Jan 27 09:50:24 np0005597378 conmon[402876]: conmon bc2d5ef83d2a7d357d89 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9.scope/container/memory.events
Jan 27 09:50:24 np0005597378 podman[402860]: 2026-01-27 14:50:24.077874245 +0000 UTC m=+0.185473776 container died bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 09:50:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay-71f814f62298f6407c0858cf4f7571cd3031b327f596864254622fc676e8fe42-merged.mount: Deactivated successfully.
Jan 27 09:50:24 np0005597378 podman[402860]: 2026-01-27 14:50:24.11981584 +0000 UTC m=+0.227415381 container remove bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:50:24 np0005597378 systemd[1]: libpod-conmon-bc2d5ef83d2a7d357d8962e015a6521a6464f82890335270d276c702d80946a9.scope: Deactivated successfully.
Jan 27 09:50:24 np0005597378 nova_compute[238941]: 2026-01-27 14:50:24.171 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:24 np0005597378 podman[402898]: 2026-01-27 14:50:24.293645802 +0000 UTC m=+0.044454104 container create ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 27 09:50:24 np0005597378 systemd[1]: Started libpod-conmon-ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24.scope.
Jan 27 09:50:24 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:50:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6981752976f56327be799036072a88301c7b47cd6c3f6db1bc5a8149df3c20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:50:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6981752976f56327be799036072a88301c7b47cd6c3f6db1bc5a8149df3c20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:50:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6981752976f56327be799036072a88301c7b47cd6c3f6db1bc5a8149df3c20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:50:24 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6981752976f56327be799036072a88301c7b47cd6c3f6db1bc5a8149df3c20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:50:24 np0005597378 podman[402898]: 2026-01-27 14:50:24.276016639 +0000 UTC m=+0.026824961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:50:24 np0005597378 podman[402898]: 2026-01-27 14:50:24.382908636 +0000 UTC m=+0.133716968 container init ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 09:50:24 np0005597378 podman[402898]: 2026-01-27 14:50:24.390207112 +0000 UTC m=+0.141015414 container start ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 27 09:50:24 np0005597378 podman[402898]: 2026-01-27 14:50:24.393630523 +0000 UTC m=+0.144438825 container attach ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:50:25 np0005597378 lvm[402992]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:50:25 np0005597378 lvm[402993]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:50:25 np0005597378 lvm[402992]: VG ceph_vg0 finished
Jan 27 09:50:25 np0005597378 lvm[402993]: VG ceph_vg1 finished
Jan 27 09:50:25 np0005597378 lvm[402995]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:50:25 np0005597378 lvm[402995]: VG ceph_vg2 finished
Jan 27 09:50:25 np0005597378 confident_shannon[402914]: {}
Jan 27 09:50:25 np0005597378 systemd[1]: libpod-ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24.scope: Deactivated successfully.
Jan 27 09:50:25 np0005597378 systemd[1]: libpod-ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24.scope: Consumed 1.365s CPU time.
Jan 27 09:50:25 np0005597378 podman[402898]: 2026-01-27 14:50:25.218873807 +0000 UTC m=+0.969682119 container died ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 09:50:25 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4f6981752976f56327be799036072a88301c7b47cd6c3f6db1bc5a8149df3c20-merged.mount: Deactivated successfully.
Jan 27 09:50:25 np0005597378 podman[402898]: 2026-01-27 14:50:25.31221449 +0000 UTC m=+1.063022792 container remove ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:50:25 np0005597378 systemd[1]: libpod-conmon-ee649ed813519662b2e53396c2c84a559eac03d00834b1a0d0856bfb3c9d9e24.scope: Deactivated successfully.
Jan 27 09:50:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:50:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:50:25 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:50:25 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:50:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:50:26 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:50:26 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:50:26 np0005597378 podman[403037]: 2026-01-27 14:50:26.755040948 +0000 UTC m=+0.092235515 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:50:27 np0005597378 nova_compute[238941]: 2026-01-27 14:50:27.685 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:50:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:50:29 np0005597378 nova_compute[238941]: 2026-01-27 14:50:29.172 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:29 np0005597378 nova_compute[238941]: 2026-01-27 14:50:29.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:50:29 np0005597378 nova_compute[238941]: 2026-01-27 14:50:29.426 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:50:29 np0005597378 nova_compute[238941]: 2026-01-27 14:50:29.427 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:50:29 np0005597378 nova_compute[238941]: 2026-01-27 14:50:29.427 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:50:29 np0005597378 nova_compute[238941]: 2026-01-27 14:50:29.427 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:50:29 np0005597378 nova_compute[238941]: 2026-01-27 14:50:29.427 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:50:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:50:29 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244242994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:50:29 np0005597378 nova_compute[238941]: 2026-01-27 14:50:29.986 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:50:30 np0005597378 nova_compute[238941]: 2026-01-27 14:50:30.129 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:50:30 np0005597378 nova_compute[238941]: 2026-01-27 14:50:30.130 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3468MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:50:30 np0005597378 nova_compute[238941]: 2026-01-27 14:50:30.130 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:50:30 np0005597378 nova_compute[238941]: 2026-01-27 14:50:30.131 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:50:30 np0005597378 nova_compute[238941]: 2026-01-27 14:50:30.190 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:50:30 np0005597378 nova_compute[238941]: 2026-01-27 14:50:30.190 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:50:30 np0005597378 nova_compute[238941]: 2026-01-27 14:50:30.207 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:50:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:50:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1522817235' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:50:30 np0005597378 nova_compute[238941]: 2026-01-27 14:50:30.783 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:50:30 np0005597378 nova_compute[238941]: 2026-01-27 14:50:30.792 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:50:30 np0005597378 nova_compute[238941]: 2026-01-27 14:50:30.809 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:50:30 np0005597378 nova_compute[238941]: 2026-01-27 14:50:30.811 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:50:30 np0005597378 nova_compute[238941]: 2026-01-27 14:50:30.811 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:50:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:50:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:32 np0005597378 nova_compute[238941]: 2026-01-27 14:50:32.686 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:34 np0005597378 nova_compute[238941]: 2026-01-27 14:50:34.181 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:50:36 np0005597378 nova_compute[238941]: 2026-01-27 14:50:36.811 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:50:37 np0005597378 nova_compute[238941]: 2026-01-27 14:50:37.690 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:38 np0005597378 nova_compute[238941]: 2026-01-27 14:50:38.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:50:38 np0005597378 nova_compute[238941]: 2026-01-27 14:50:38.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:50:39 np0005597378 nova_compute[238941]: 2026-01-27 14:50:39.183 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:50:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:42 np0005597378 nova_compute[238941]: 2026-01-27 14:50:42.695 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:43 np0005597378 nova_compute[238941]: 2026-01-27 14:50:43.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:50:43 np0005597378 nova_compute[238941]: 2026-01-27 14:50:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:50:43 np0005597378 nova_compute[238941]: 2026-01-27 14:50:43.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:50:43 np0005597378 nova_compute[238941]: 2026-01-27 14:50:43.521 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:50:43 np0005597378 nova_compute[238941]: 2026-01-27 14:50:43.522 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:50:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:44 np0005597378 nova_compute[238941]: 2026-01-27 14:50:44.184 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:50:46.362 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:50:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:50:46.362 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:50:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:50:46.363 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.382 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.383 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.385 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.385 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.385 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.385 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.459 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Jan 27 09:50:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.614 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.614 238945 WARNING nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.615 238945 WARNING nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.615 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Removable base files: /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.615 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/285e7430fe92ea66e9eadd94d86f83f43a584b0f#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.615 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/3912a4d8b71ba799f3af029b116f734f2c6341ea#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.616 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.616 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.616 238945 DEBUG nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Jan 27 09:50:46 np0005597378 nova_compute[238941]: 2026-01-27 14:50:46.616 238945 INFO nova.virt.libvirt.imagecache [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Jan 27 09:50:47 np0005597378 nova_compute[238941]: 2026-01-27 14:50:47.699 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:50:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:50:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:50:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:50:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:50:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:50:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:48 np0005597378 nova_compute[238941]: 2026-01-27 14:50:48.617 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:50:49 np0005597378 nova_compute[238941]: 2026-01-27 14:50:49.186 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:50:51 np0005597378 podman[403109]: 2026-01-27 14:50:51.722158552 +0000 UTC m=+0.060216346 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:50:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:52 np0005597378 nova_compute[238941]: 2026-01-27 14:50:52.702 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:54 np0005597378 nova_compute[238941]: 2026-01-27 14:50:54.187 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:50:57 np0005597378 nova_compute[238941]: 2026-01-27 14:50:57.706 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:57 np0005597378 podman[403127]: 2026-01-27 14:50:57.739594453 +0000 UTC m=+0.080386027 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 09:50:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:50:59 np0005597378 nova_compute[238941]: 2026-01-27 14:50:59.189 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:50:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:50:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3538391076' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:50:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:50:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3538391076' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:50:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:00 np0005597378 nova_compute[238941]: 2026-01-27 14:51:00.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:51:00 np0005597378 nova_compute[238941]: 2026-01-27 14:51:00.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:51:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:51:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:02 np0005597378 nova_compute[238941]: 2026-01-27 14:51:02.710 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:03 np0005597378 nova_compute[238941]: 2026-01-27 14:51:03.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:51:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:04 np0005597378 nova_compute[238941]: 2026-01-27 14:51:04.190 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:51:07 np0005597378 nova_compute[238941]: 2026-01-27 14:51:07.713 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:09 np0005597378 nova_compute[238941]: 2026-01-27 14:51:09.191 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:51:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:12 np0005597378 nova_compute[238941]: 2026-01-27 14:51:12.718 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:14 np0005597378 nova_compute[238941]: 2026-01-27 14:51:14.193 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:51:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:51:17
Jan 27 09:51:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:51:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:51:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', '.rgw.root', 'images', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 27 09:51:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:51:17 np0005597378 nova_compute[238941]: 2026-01-27 14:51:17.723 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:51:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:51:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:51:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:51:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:51:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:51:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:51:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:51:19 np0005597378 nova_compute[238941]: 2026-01-27 14:51:19.195 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:51:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:22 np0005597378 podman[403155]: 2026-01-27 14:51:22.72026963 +0000 UTC m=+0.060763550 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 27 09:51:22 np0005597378 nova_compute[238941]: 2026-01-27 14:51:22.727 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:24 np0005597378 nova_compute[238941]: 2026-01-27 14:51:24.197 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:51:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:51:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:51:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:51:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:51:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:51:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:51:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:51:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:51:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:51:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:51:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:51:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:51:26 np0005597378 podman[403318]: 2026-01-27 14:51:26.620934208 +0000 UTC m=+0.056903997 container create 32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:51:26 np0005597378 systemd[1]: Started libpod-conmon-32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116.scope.
Jan 27 09:51:26 np0005597378 podman[403318]: 2026-01-27 14:51:26.587138872 +0000 UTC m=+0.023108691 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:51:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:51:26 np0005597378 podman[403318]: 2026-01-27 14:51:26.801869471 +0000 UTC m=+0.237839290 container init 32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:51:26 np0005597378 podman[403318]: 2026-01-27 14:51:26.809205637 +0000 UTC m=+0.245175426 container start 32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cray, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 27 09:51:26 np0005597378 epic_cray[403335]: 167 167
Jan 27 09:51:26 np0005597378 systemd[1]: libpod-32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116.scope: Deactivated successfully.
Jan 27 09:51:26 np0005597378 conmon[403335]: conmon 32da031db137d176178a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116.scope/container/memory.events
Jan 27 09:51:26 np0005597378 podman[403318]: 2026-01-27 14:51:26.866080183 +0000 UTC m=+0.302049992 container attach 32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cray, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:51:26 np0005597378 podman[403318]: 2026-01-27 14:51:26.868129368 +0000 UTC m=+0.304099157 container died 32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cray, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 09:51:27 np0005597378 systemd[1]: var-lib-containers-storage-overlay-8bebc485295831fb31a027ada7c79a909d36205f3c754949295e9354063d1fae-merged.mount: Deactivated successfully.
Jan 27 09:51:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:51:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:51:27 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:51:27 np0005597378 podman[403318]: 2026-01-27 14:51:27.378675051 +0000 UTC m=+0.814644840 container remove 32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cray, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:51:27 np0005597378 systemd[1]: libpod-conmon-32da031db137d176178ac52bbd72a93bb315e4cb0de570f26dee1f0c83257116.scope: Deactivated successfully.
Jan 27 09:51:27 np0005597378 podman[403358]: 2026-01-27 14:51:27.56393147 +0000 UTC m=+0.071626582 container create 369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:51:27 np0005597378 podman[403358]: 2026-01-27 14:51:27.514909685 +0000 UTC m=+0.022604817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:51:27 np0005597378 systemd[1]: Started libpod-conmon-369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b.scope.
Jan 27 09:51:27 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:51:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888af3f92f2ed84329bad05c9bfc10966acc0a56dfb91738b261d48382dec49f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:51:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888af3f92f2ed84329bad05c9bfc10966acc0a56dfb91738b261d48382dec49f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:51:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888af3f92f2ed84329bad05c9bfc10966acc0a56dfb91738b261d48382dec49f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:51:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888af3f92f2ed84329bad05c9bfc10966acc0a56dfb91738b261d48382dec49f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:51:27 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888af3f92f2ed84329bad05c9bfc10966acc0a56dfb91738b261d48382dec49f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:51:27 np0005597378 nova_compute[238941]: 2026-01-27 14:51:27.731 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:27 np0005597378 podman[403358]: 2026-01-27 14:51:27.861947383 +0000 UTC m=+0.369642505 container init 369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 27 09:51:27 np0005597378 podman[403358]: 2026-01-27 14:51:27.869710441 +0000 UTC m=+0.377405543 container start 369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 27 09:51:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:27 np0005597378 podman[403358]: 2026-01-27 14:51:27.957351831 +0000 UTC m=+0.465046953 container attach 369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:51:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:51:28 np0005597378 mystifying_heisenberg[403374]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:51:28 np0005597378 mystifying_heisenberg[403374]: --> All data devices are unavailable
Jan 27 09:51:28 np0005597378 systemd[1]: libpod-369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b.scope: Deactivated successfully.
Jan 27 09:51:28 np0005597378 podman[403358]: 2026-01-27 14:51:28.358818529 +0000 UTC m=+0.866513751 container died 369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heisenberg, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 27 09:51:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-888af3f92f2ed84329bad05c9bfc10966acc0a56dfb91738b261d48382dec49f-merged.mount: Deactivated successfully.
Jan 27 09:51:29 np0005597378 podman[403358]: 2026-01-27 14:51:29.19960824 +0000 UTC m=+1.707303342 container remove 369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:51:29 np0005597378 nova_compute[238941]: 2026-01-27 14:51:29.198 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:29 np0005597378 systemd[1]: libpod-conmon-369e99d4b5e4d6732e0302aa41c544dbb32b718f337435e6ae6814d9bf52b24b.scope: Deactivated successfully.
Jan 27 09:51:29 np0005597378 podman[403394]: 2026-01-27 14:51:29.337092757 +0000 UTC m=+0.942665063 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 09:51:29 np0005597378 podman[403495]: 2026-01-27 14:51:29.62205356 +0000 UTC m=+0.023675536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:51:29 np0005597378 podman[403495]: 2026-01-27 14:51:29.727857338 +0000 UTC m=+0.129479294 container create 804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bassi, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 27 09:51:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:29 np0005597378 systemd[1]: Started libpod-conmon-804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331.scope.
Jan 27 09:51:29 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:51:30 np0005597378 podman[403495]: 2026-01-27 14:51:30.055014864 +0000 UTC m=+0.456636850 container init 804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bassi, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Jan 27 09:51:30 np0005597378 podman[403495]: 2026-01-27 14:51:30.063727927 +0000 UTC m=+0.465349883 container start 804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:51:30 np0005597378 zen_bassi[403511]: 167 167
Jan 27 09:51:30 np0005597378 systemd[1]: libpod-804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331.scope: Deactivated successfully.
Jan 27 09:51:30 np0005597378 podman[403495]: 2026-01-27 14:51:30.222773512 +0000 UTC m=+0.624395478 container attach 804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 27 09:51:30 np0005597378 podman[403495]: 2026-01-27 14:51:30.223214205 +0000 UTC m=+0.624836171 container died 804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bassi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 09:51:30 np0005597378 nova_compute[238941]: 2026-01-27 14:51:30.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:51:30 np0005597378 nova_compute[238941]: 2026-01-27 14:51:30.549 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:51:30 np0005597378 nova_compute[238941]: 2026-01-27 14:51:30.550 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:51:30 np0005597378 nova_compute[238941]: 2026-01-27 14:51:30.550 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:51:30 np0005597378 nova_compute[238941]: 2026-01-27 14:51:30.550 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:51:30 np0005597378 nova_compute[238941]: 2026-01-27 14:51:30.551 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:51:30 np0005597378 systemd[1]: var-lib-containers-storage-overlay-05bf180e07b0e524499dda8cfe85c84f083a3e0706cf5f2380bf492696bfce02-merged.mount: Deactivated successfully.
Jan 27 09:51:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:51:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2486264467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:51:31 np0005597378 nova_compute[238941]: 2026-01-27 14:51:31.127 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:51:31 np0005597378 nova_compute[238941]: 2026-01-27 14:51:31.270 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:51:31 np0005597378 nova_compute[238941]: 2026-01-27 14:51:31.271 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3456MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:51:31 np0005597378 nova_compute[238941]: 2026-01-27 14:51:31.271 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:51:31 np0005597378 nova_compute[238941]: 2026-01-27 14:51:31.271 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:51:31 np0005597378 podman[403495]: 2026-01-27 14:51:31.403071738 +0000 UTC m=+1.804693694 container remove 804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bassi, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:51:31 np0005597378 systemd[1]: libpod-conmon-804df86ac6ef5ea3c6571f5b89a5032b50b9e65e838072d36d6ee6cc37a7c331.scope: Deactivated successfully.
Jan 27 09:51:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:51:31 np0005597378 nova_compute[238941]: 2026-01-27 14:51:31.638 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:51:31 np0005597378 nova_compute[238941]: 2026-01-27 14:51:31.638 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:51:31 np0005597378 podman[403557]: 2026-01-27 14:51:31.555824815 +0000 UTC m=+0.025556086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:51:31 np0005597378 nova_compute[238941]: 2026-01-27 14:51:31.662 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:51:31 np0005597378 podman[403557]: 2026-01-27 14:51:31.794746403 +0000 UTC m=+0.264477654 container create e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_heisenberg, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:51:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:31 np0005597378 systemd[1]: Started libpod-conmon-e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b.scope.
Jan 27 09:51:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:51:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4006df002fe106c56b9b23c6ab6931462963194850348e252ac8d7b6954958af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:51:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4006df002fe106c56b9b23c6ab6931462963194850348e252ac8d7b6954958af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:51:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4006df002fe106c56b9b23c6ab6931462963194850348e252ac8d7b6954958af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:51:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4006df002fe106c56b9b23c6ab6931462963194850348e252ac8d7b6954958af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:51:32 np0005597378 podman[403557]: 2026-01-27 14:51:32.16188445 +0000 UTC m=+0.631615701 container init e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_heisenberg, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:51:32 np0005597378 podman[403557]: 2026-01-27 14:51:32.172247098 +0000 UTC m=+0.641978359 container start e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_heisenberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 09:51:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:51:32 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3000975515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:51:32 np0005597378 podman[403557]: 2026-01-27 14:51:32.246617633 +0000 UTC m=+0.716348884 container attach e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_heisenberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:51:32 np0005597378 nova_compute[238941]: 2026-01-27 14:51:32.273 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:51:32 np0005597378 nova_compute[238941]: 2026-01-27 14:51:32.280 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:51:32 np0005597378 nova_compute[238941]: 2026-01-27 14:51:32.365 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:51:32 np0005597378 nova_compute[238941]: 2026-01-27 14:51:32.367 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:51:32 np0005597378 nova_compute[238941]: 2026-01-27 14:51:32.367 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]: {
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:    "0": [
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:        {
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "devices": [
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "/dev/loop3"
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            ],
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_name": "ceph_lv0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_size": "21470642176",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "name": "ceph_lv0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "tags": {
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.cluster_name": "ceph",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.crush_device_class": "",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.encrypted": "0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.objectstore": "bluestore",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.osd_id": "0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.type": "block",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.vdo": "0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.with_tpm": "0"
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            },
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "type": "block",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "vg_name": "ceph_vg0"
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:        }
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:    ],
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:    "1": [
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:        {
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "devices": [
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "/dev/loop4"
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            ],
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_name": "ceph_lv1",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_size": "21470642176",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "name": "ceph_lv1",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "tags": {
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.cluster_name": "ceph",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.crush_device_class": "",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.encrypted": "0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.objectstore": "bluestore",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.osd_id": "1",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.type": "block",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.vdo": "0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.with_tpm": "0"
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            },
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "type": "block",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "vg_name": "ceph_vg1"
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:        }
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:    ],
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:    "2": [
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:        {
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "devices": [
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "/dev/loop5"
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            ],
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_name": "ceph_lv2",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_size": "21470642176",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "name": "ceph_lv2",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "tags": {
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.cluster_name": "ceph",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.crush_device_class": "",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.encrypted": "0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.objectstore": "bluestore",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.osd_id": "2",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.type": "block",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.vdo": "0",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:                "ceph.with_tpm": "0"
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            },
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "type": "block",
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:            "vg_name": "ceph_vg2"
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:        }
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]:    ]
Jan 27 09:51:32 np0005597378 gifted_heisenberg[403594]: }
Jan 27 09:51:32 np0005597378 systemd[1]: libpod-e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b.scope: Deactivated successfully.
Jan 27 09:51:32 np0005597378 podman[403557]: 2026-01-27 14:51:32.472802869 +0000 UTC m=+0.942534120 container died e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_heisenberg, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 27 09:51:32 np0005597378 nova_compute[238941]: 2026-01-27 14:51:32.737 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4006df002fe106c56b9b23c6ab6931462963194850348e252ac8d7b6954958af-merged.mount: Deactivated successfully.
Jan 27 09:51:33 np0005597378 podman[403557]: 2026-01-27 14:51:33.106415753 +0000 UTC m=+1.576147004 container remove e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 27 09:51:33 np0005597378 systemd[1]: libpod-conmon-e165d2e67e4cedf9fb93340f37d3bdde0550791e5061706b7f4442273ec5d09b.scope: Deactivated successfully.
Jan 27 09:51:33 np0005597378 podman[403678]: 2026-01-27 14:51:33.629740249 +0000 UTC m=+0.093513819 container create fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lalande, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:51:33 np0005597378 podman[403678]: 2026-01-27 14:51:33.560287406 +0000 UTC m=+0.024060996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:51:33 np0005597378 systemd[1]: Started libpod-conmon-fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6.scope.
Jan 27 09:51:33 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:51:33 np0005597378 podman[403678]: 2026-01-27 14:51:33.879818156 +0000 UTC m=+0.343591756 container init fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lalande, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 27 09:51:33 np0005597378 podman[403678]: 2026-01-27 14:51:33.888073217 +0000 UTC m=+0.351846787 container start fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lalande, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:51:33 np0005597378 peaceful_lalande[403695]: 167 167
Jan 27 09:51:33 np0005597378 systemd[1]: libpod-fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6.scope: Deactivated successfully.
Jan 27 09:51:33 np0005597378 conmon[403695]: conmon fdd34b62f3af1a8ca3fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6.scope/container/memory.events
Jan 27 09:51:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:33 np0005597378 podman[403678]: 2026-01-27 14:51:33.983814635 +0000 UTC m=+0.447588235 container attach fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lalande, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:51:33 np0005597378 podman[403678]: 2026-01-27 14:51:33.984490563 +0000 UTC m=+0.448264163 container died fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lalande, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 09:51:34 np0005597378 nova_compute[238941]: 2026-01-27 14:51:34.215 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:34 np0005597378 systemd[1]: var-lib-containers-storage-overlay-4f10117ddd7802c758f99fa18e096ff97da3fb83e0c3a94ee2eb4db71f14c823-merged.mount: Deactivated successfully.
Jan 27 09:51:34 np0005597378 podman[403678]: 2026-01-27 14:51:34.670324118 +0000 UTC m=+1.134097688 container remove fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lalande, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:51:34 np0005597378 systemd[1]: libpod-conmon-fdd34b62f3af1a8ca3fc4cac06b49379dc6f972515a5a1833d7624094c6ed3d6.scope: Deactivated successfully.
Jan 27 09:51:34 np0005597378 podman[403719]: 2026-01-27 14:51:34.85685085 +0000 UTC m=+0.056957739 container create a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_taussig, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:51:34 np0005597378 systemd[1]: Started libpod-conmon-a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39.scope.
Jan 27 09:51:34 np0005597378 podman[403719]: 2026-01-27 14:51:34.827947285 +0000 UTC m=+0.028054194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:51:34 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:51:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f630ee4e0dad69a6db62b6308f19c75b10ce45fcd19394f137ba4129b83d2b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:51:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f630ee4e0dad69a6db62b6308f19c75b10ce45fcd19394f137ba4129b83d2b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:51:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f630ee4e0dad69a6db62b6308f19c75b10ce45fcd19394f137ba4129b83d2b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:51:34 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f630ee4e0dad69a6db62b6308f19c75b10ce45fcd19394f137ba4129b83d2b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:51:34 np0005597378 podman[403719]: 2026-01-27 14:51:34.966640844 +0000 UTC m=+0.166747763 container init a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_taussig, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:51:34 np0005597378 podman[403719]: 2026-01-27 14:51:34.974758602 +0000 UTC m=+0.174865491 container start a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 09:51:34 np0005597378 podman[403719]: 2026-01-27 14:51:34.983101796 +0000 UTC m=+0.183208685 container attach a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_taussig, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:51:35 np0005597378 lvm[403815]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:51:35 np0005597378 lvm[403816]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:51:35 np0005597378 lvm[403816]: VG ceph_vg1 finished
Jan 27 09:51:35 np0005597378 lvm[403815]: VG ceph_vg0 finished
Jan 27 09:51:35 np0005597378 lvm[403818]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:51:35 np0005597378 lvm[403818]: VG ceph_vg2 finished
Jan 27 09:51:35 np0005597378 mystifying_taussig[403737]: {}
Jan 27 09:51:35 np0005597378 systemd[1]: libpod-a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39.scope: Deactivated successfully.
Jan 27 09:51:35 np0005597378 systemd[1]: libpod-a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39.scope: Consumed 1.516s CPU time.
Jan 27 09:51:35 np0005597378 podman[403719]: 2026-01-27 14:51:35.912950095 +0000 UTC m=+1.113057004 container died a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_taussig, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:51:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:35 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7f630ee4e0dad69a6db62b6308f19c75b10ce45fcd19394f137ba4129b83d2b3-merged.mount: Deactivated successfully.
Jan 27 09:51:36 np0005597378 podman[403719]: 2026-01-27 14:51:36.045831949 +0000 UTC m=+1.245938838 container remove a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_taussig, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:51:36 np0005597378 systemd[1]: libpod-conmon-a0a0132d5f7256c19e95872b4c3547cc5336c54c3b0de153a21fe11ecd465c39.scope: Deactivated successfully.
Jan 27 09:51:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:51:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:51:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:51:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:51:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:51:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:51:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:51:37 np0005597378 nova_compute[238941]: 2026-01-27 14:51:37.369 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:51:37 np0005597378 nova_compute[238941]: 2026-01-27 14:51:37.742 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:38 np0005597378 nova_compute[238941]: 2026-01-27 14:51:38.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:51:39 np0005597378 nova_compute[238941]: 2026-01-27 14:51:39.217 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:39 np0005597378 nova_compute[238941]: 2026-01-27 14:51:39.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:51:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:51:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:42 np0005597378 nova_compute[238941]: 2026-01-27 14:51:42.747 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:43 np0005597378 nova_compute[238941]: 2026-01-27 14:51:43.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:51:43 np0005597378 nova_compute[238941]: 2026-01-27 14:51:43.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:51:43 np0005597378 nova_compute[238941]: 2026-01-27 14:51:43.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:51:43 np0005597378 nova_compute[238941]: 2026-01-27 14:51:43.575 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:51:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:44 np0005597378 nova_compute[238941]: 2026-01-27 14:51:44.257 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:44 np0005597378 nova_compute[238941]: 2026-01-27 14:51:44.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:51:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:51:46.363 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:51:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:51:46.363 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:51:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:51:46.363 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:51:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:51:47 np0005597378 nova_compute[238941]: 2026-01-27 14:51:47.751 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:51:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:51:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:51:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:51:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:51:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:51:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:48 np0005597378 nova_compute[238941]: 2026-01-27 14:51:48.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:51:48 np0005597378 nova_compute[238941]: 2026-01-27 14:51:48.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 09:51:49 np0005597378 nova_compute[238941]: 2026-01-27 14:51:49.258 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:49 np0005597378 nova_compute[238941]: 2026-01-27 14:51:49.432 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:51:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:51:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:52 np0005597378 nova_compute[238941]: 2026-01-27 14:51:52.756 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:53 np0005597378 podman[403861]: 2026-01-27 14:51:53.744373626 +0000 UTC m=+0.080629724 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:51:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:54 np0005597378 nova_compute[238941]: 2026-01-27 14:51:54.261 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:54 np0005597378 nova_compute[238941]: 2026-01-27 14:51:54.529 238945 DEBUG oslo_concurrency.processutils [None req-eb85e36d-037b-4010-bb5e-4e5f5dd1d6fe eecb64df414c4658bbe0f0e4068a97ab 4b2d057bb74245b8be119fa9985925d6 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:51:54 np0005597378 nova_compute[238941]: 2026-01-27 14:51:54.571 238945 DEBUG oslo_concurrency.processutils [None req-eb85e36d-037b-4010-bb5e-4e5f5dd1d6fe eecb64df414c4658bbe0f0e4068a97ab 4b2d057bb74245b8be119fa9985925d6 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:51:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:51:57 np0005597378 nova_compute[238941]: 2026-01-27 14:51:57.759 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:51:59 np0005597378 nova_compute[238941]: 2026-01-27 14:51:59.263 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:51:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:51:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/37863354' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:51:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:51:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/37863354' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:51:59 np0005597378 podman[403881]: 2026-01-27 14:51:59.75631205 +0000 UTC m=+0.095626916 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 09:51:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:00 np0005597378 nova_compute[238941]: 2026-01-27 14:52:00.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:52:00 np0005597378 nova_compute[238941]: 2026-01-27 14:52:00.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.295507) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525521295590, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1743, "num_deletes": 251, "total_data_size": 2829841, "memory_usage": 2871328, "flush_reason": "Manual Compaction"}
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525521327681, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2768547, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66487, "largest_seqno": 68229, "table_properties": {"data_size": 2760599, "index_size": 4826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16242, "raw_average_key_size": 19, "raw_value_size": 2744708, "raw_average_value_size": 3371, "num_data_blocks": 215, "num_entries": 814, "num_filter_entries": 814, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525337, "oldest_key_time": 1769525337, "file_creation_time": 1769525521, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 32208 microseconds, and 9911 cpu microseconds.
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.327725) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2768547 bytes OK
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.327750) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.333537) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.333575) EVENT_LOG_v1 {"time_micros": 1769525521333565, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.333599) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2822405, prev total WAL file size 2822405, number of live WAL files 2.
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.334588) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2703KB)], [158(9929KB)]
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525521334629, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12936072, "oldest_snapshot_seqno": -1}
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8769 keys, 11171743 bytes, temperature: kUnknown
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525521419259, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11171743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11114850, "index_size": 33868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21957, "raw_key_size": 229895, "raw_average_key_size": 26, "raw_value_size": 10960180, "raw_average_value_size": 1249, "num_data_blocks": 1311, "num_entries": 8769, "num_filter_entries": 8769, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525521, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.419567) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11171743 bytes
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.434037) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.6 rd, 131.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.7 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(8.7) write-amplify(4.0) OK, records in: 9283, records dropped: 514 output_compression: NoCompression
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.434099) EVENT_LOG_v1 {"time_micros": 1769525521434077, "job": 98, "event": "compaction_finished", "compaction_time_micros": 84753, "compaction_time_cpu_micros": 26533, "output_level": 6, "num_output_files": 1, "total_output_size": 11171743, "num_input_records": 9283, "num_output_records": 8769, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525521434898, "job": 98, "event": "table_file_deletion", "file_number": 160}
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525521437081, "job": 98, "event": "table_file_deletion", "file_number": 158}
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.334480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.437225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.437233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.437236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.437238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:52:01.437240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:52:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:52:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:52:02.442 154802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:e8:8f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:38:34:79:e1:7f'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 27 09:52:02 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:52:02.444 154802 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 27 09:52:02 np0005597378 nova_compute[238941]: 2026-01-27 14:52:02.443 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:02 np0005597378 nova_compute[238941]: 2026-01-27 14:52:02.761 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:04 np0005597378 nova_compute[238941]: 2026-01-27 14:52:04.264 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:04 np0005597378 nova_compute[238941]: 2026-01-27 14:52:04.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:52:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:52:07 np0005597378 nova_compute[238941]: 2026-01-27 14:52:07.766 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:08 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:52:08.445 154802 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=65761215-e4d7-402d-90c8-18b025613da8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 27 09:52:09 np0005597378 nova_compute[238941]: 2026-01-27 14:52:09.266 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:09 np0005597378 nova_compute[238941]: 2026-01-27 14:52:09.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:52:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:11 np0005597378 nova_compute[238941]: 2026-01-27 14:52:11.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:52:11 np0005597378 nova_compute[238941]: 2026-01-27 14:52:11.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 27 09:52:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:52:11 np0005597378 nova_compute[238941]: 2026-01-27 14:52:11.856 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 27 09:52:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:12 np0005597378 nova_compute[238941]: 2026-01-27 14:52:12.769 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:14 np0005597378 nova_compute[238941]: 2026-01-27 14:52:14.269 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:15 np0005597378 nova_compute[238941]: 2026-01-27 14:52:15.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:52:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:52:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:52:17
Jan 27 09:52:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:52:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:52:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', '.rgw.root', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'vms']
Jan 27 09:52:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:52:17 np0005597378 nova_compute[238941]: 2026-01-27 14:52:17.774 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:52:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:52:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:52:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:52:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:52:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:52:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:52:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:52:19 np0005597378 nova_compute[238941]: 2026-01-27 14:52:19.271 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:52:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:22 np0005597378 nova_compute[238941]: 2026-01-27 14:52:22.777 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:24 np0005597378 nova_compute[238941]: 2026-01-27 14:52:24.273 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:24 np0005597378 podman[403905]: 2026-01-27 14:52:24.709436687 +0000 UTC m=+0.047150805 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent)
Jan 27 09:52:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:52:27 np0005597378 nova_compute[238941]: 2026-01-27 14:52:27.782 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:52:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:52:29 np0005597378 nova_compute[238941]: 2026-01-27 14:52:29.274 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:29 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:30 np0005597378 podman[403926]: 2026-01-27 14:52:30.742351005 +0000 UTC m=+0.085932886 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 09:52:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:52:31 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:32 np0005597378 nova_compute[238941]: 2026-01-27 14:52:32.524 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:52:32 np0005597378 nova_compute[238941]: 2026-01-27 14:52:32.556 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:52:32 np0005597378 nova_compute[238941]: 2026-01-27 14:52:32.557 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:52:32 np0005597378 nova_compute[238941]: 2026-01-27 14:52:32.557 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:52:32 np0005597378 nova_compute[238941]: 2026-01-27 14:52:32.557 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:52:32 np0005597378 nova_compute[238941]: 2026-01-27 14:52:32.558 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:52:32 np0005597378 nova_compute[238941]: 2026-01-27 14:52:32.784 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:52:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3110211035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:52:33 np0005597378 nova_compute[238941]: 2026-01-27 14:52:33.170 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:52:33 np0005597378 nova_compute[238941]: 2026-01-27 14:52:33.358 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:52:33 np0005597378 nova_compute[238941]: 2026-01-27 14:52:33.360 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3531MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:52:33 np0005597378 nova_compute[238941]: 2026-01-27 14:52:33.360 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:52:33 np0005597378 nova_compute[238941]: 2026-01-27 14:52:33.361 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:52:33 np0005597378 nova_compute[238941]: 2026-01-27 14:52:33.462 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:52:33 np0005597378 nova_compute[238941]: 2026-01-27 14:52:33.462 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:52:33 np0005597378 nova_compute[238941]: 2026-01-27 14:52:33.483 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:52:33 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:52:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3696338863' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:52:34 np0005597378 nova_compute[238941]: 2026-01-27 14:52:34.080 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:52:34 np0005597378 nova_compute[238941]: 2026-01-27 14:52:34.086 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:52:34 np0005597378 nova_compute[238941]: 2026-01-27 14:52:34.123 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:52:34 np0005597378 nova_compute[238941]: 2026-01-27 14:52:34.125 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:52:34 np0005597378 nova_compute[238941]: 2026-01-27 14:52:34.126 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:52:34 np0005597378 nova_compute[238941]: 2026-01-27 14:52:34.276 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:35 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:52:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:52:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:52:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:52:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:52:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:52:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:52:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:52:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:52:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:52:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:52:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:52:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:52:37 np0005597378 podman[404139]: 2026-01-27 14:52:37.354440785 +0000 UTC m=+0.024841007 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:52:37 np0005597378 podman[404139]: 2026-01-27 14:52:37.508203618 +0000 UTC m=+0.178603820 container create 8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclaren, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 09:52:37 np0005597378 systemd[1]: Started libpod-conmon-8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887.scope.
Jan 27 09:52:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:52:37 np0005597378 podman[404139]: 2026-01-27 14:52:37.768018327 +0000 UTC m=+0.438418559 container init 8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclaren, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:52:37 np0005597378 podman[404139]: 2026-01-27 14:52:37.775434396 +0000 UTC m=+0.445834598 container start 8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 09:52:37 np0005597378 systemd[1]: libpod-8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887.scope: Deactivated successfully.
Jan 27 09:52:37 np0005597378 heuristic_mclaren[404156]: 167 167
Jan 27 09:52:37 np0005597378 conmon[404156]: conmon 8c7747eeb17bdfc45e13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887.scope/container/memory.events
Jan 27 09:52:37 np0005597378 nova_compute[238941]: 2026-01-27 14:52:37.790 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:37 np0005597378 podman[404139]: 2026-01-27 14:52:37.838988181 +0000 UTC m=+0.509388383 container attach 8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclaren, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 27 09:52:37 np0005597378 podman[404139]: 2026-01-27 14:52:37.839307299 +0000 UTC m=+0.509707501 container died 8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclaren, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:52:37 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:52:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:52:37 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:52:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-bd69f3d83ab78e9792e85336943763dd9e94d74fd7f7365bd4ee0f50466a716d-merged.mount: Deactivated successfully.
Jan 27 09:52:38 np0005597378 podman[404139]: 2026-01-27 14:52:38.419615203 +0000 UTC m=+1.090015395 container remove 8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Jan 27 09:52:38 np0005597378 systemd[1]: libpod-conmon-8c7747eeb17bdfc45e13eba66fad3a543940137bb28ec86015a3bc7a58825887.scope: Deactivated successfully.
Jan 27 09:52:38 np0005597378 podman[404181]: 2026-01-27 14:52:38.555720924 +0000 UTC m=+0.023576454 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:52:38 np0005597378 podman[404181]: 2026-01-27 14:52:38.727922972 +0000 UTC m=+0.195778482 container create cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:52:38 np0005597378 systemd[1]: Started libpod-conmon-cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4.scope.
Jan 27 09:52:38 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:52:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5419065ff0d17fdf132b2b86623b718f68c3c4dff450d1220b7e95de1bc271cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:52:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5419065ff0d17fdf132b2b86623b718f68c3c4dff450d1220b7e95de1bc271cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:52:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5419065ff0d17fdf132b2b86623b718f68c3c4dff450d1220b7e95de1bc271cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:52:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5419065ff0d17fdf132b2b86623b718f68c3c4dff450d1220b7e95de1bc271cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:52:38 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5419065ff0d17fdf132b2b86623b718f68c3c4dff450d1220b7e95de1bc271cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:52:38 np0005597378 nova_compute[238941]: 2026-01-27 14:52:38.983 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:52:39 np0005597378 podman[404181]: 2026-01-27 14:52:39.07979068 +0000 UTC m=+0.547646220 container init cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:52:39 np0005597378 podman[404181]: 2026-01-27 14:52:39.087212518 +0000 UTC m=+0.555068018 container start cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:52:39 np0005597378 podman[404181]: 2026-01-27 14:52:39.234416966 +0000 UTC m=+0.702272556 container attach cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:52:39 np0005597378 nova_compute[238941]: 2026-01-27 14:52:39.278 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:39 np0005597378 nova_compute[238941]: 2026-01-27 14:52:39.377 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:52:39 np0005597378 agitated_cartwright[404197]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:52:39 np0005597378 agitated_cartwright[404197]: --> All data devices are unavailable
Jan 27 09:52:39 np0005597378 systemd[1]: libpod-cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4.scope: Deactivated successfully.
Jan 27 09:52:39 np0005597378 podman[404181]: 2026-01-27 14:52:39.575630318 +0000 UTC m=+1.043485848 container died cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 09:52:39 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5419065ff0d17fdf132b2b86623b718f68c3c4dff450d1220b7e95de1bc271cd-merged.mount: Deactivated successfully.
Jan 27 09:52:39 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:40 np0005597378 podman[404181]: 2026-01-27 14:52:40.206149959 +0000 UTC m=+1.674005469 container remove cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 27 09:52:40 np0005597378 systemd[1]: libpod-conmon-cbb59c446b1a527f22b5140c2b7d1849b6ae4292a9271a21e5a5f7038f8ea6b4.scope: Deactivated successfully.
Jan 27 09:52:40 np0005597378 podman[404289]: 2026-01-27 14:52:40.727751619 +0000 UTC m=+0.094150716 container create d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_burnell, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:52:40 np0005597378 podman[404289]: 2026-01-27 14:52:40.656637881 +0000 UTC m=+0.023036978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:52:40 np0005597378 systemd[1]: Started libpod-conmon-d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7.scope.
Jan 27 09:52:40 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:52:40 np0005597378 podman[404289]: 2026-01-27 14:52:40.968566188 +0000 UTC m=+0.334965285 container init d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_burnell, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:52:40 np0005597378 podman[404289]: 2026-01-27 14:52:40.975691719 +0000 UTC m=+0.342090816 container start d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_burnell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:52:40 np0005597378 zen_burnell[404305]: 167 167
Jan 27 09:52:40 np0005597378 systemd[1]: libpod-d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7.scope: Deactivated successfully.
Jan 27 09:52:40 np0005597378 podman[404289]: 2026-01-27 14:52:40.996274111 +0000 UTC m=+0.362673238 container attach d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 27 09:52:40 np0005597378 podman[404289]: 2026-01-27 14:52:40.997153424 +0000 UTC m=+0.363552511 container died d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_burnell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 09:52:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-eb88b01a5315d4974f838fcc881c6ceaad2cc7aa50157136472db3dc5c986fe1-merged.mount: Deactivated successfully.
Jan 27 09:52:41 np0005597378 podman[404289]: 2026-01-27 14:52:41.13902892 +0000 UTC m=+0.505428017 container remove d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 27 09:52:41 np0005597378 systemd[1]: libpod-conmon-d853f8268ae3461d346ca8986d48d91f6e27cc04d9c4a19002ff4b78c39d1bb7.scope: Deactivated successfully.
Jan 27 09:52:41 np0005597378 podman[404330]: 2026-01-27 14:52:41.310843037 +0000 UTC m=+0.046157629 container create a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:52:41 np0005597378 systemd[1]: Started libpod-conmon-a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5.scope.
Jan 27 09:52:41 np0005597378 podman[404330]: 2026-01-27 14:52:41.288129268 +0000 UTC m=+0.023443920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:52:41 np0005597378 nova_compute[238941]: 2026-01-27 14:52:41.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:52:41 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:52:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30acdcae8e8080fb361c884dc53474b4b43aaf6aed25927d9a5222fd3938dbd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:52:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30acdcae8e8080fb361c884dc53474b4b43aaf6aed25927d9a5222fd3938dbd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:52:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30acdcae8e8080fb361c884dc53474b4b43aaf6aed25927d9a5222fd3938dbd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:52:41 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30acdcae8e8080fb361c884dc53474b4b43aaf6aed25927d9a5222fd3938dbd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:52:41 np0005597378 podman[404330]: 2026-01-27 14:52:41.434801152 +0000 UTC m=+0.170115774 container init a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 27 09:52:41 np0005597378 podman[404330]: 2026-01-27 14:52:41.442413986 +0000 UTC m=+0.177728578 container start a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:52:41 np0005597378 podman[404330]: 2026-01-27 14:52:41.463208414 +0000 UTC m=+0.198523046 container attach a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:52:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]: {
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:    "0": [
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:        {
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "devices": [
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "/dev/loop3"
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            ],
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_name": "ceph_lv0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_size": "21470642176",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "name": "ceph_lv0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "tags": {
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.cluster_name": "ceph",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.crush_device_class": "",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.encrypted": "0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.objectstore": "bluestore",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.osd_id": "0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.type": "block",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.vdo": "0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.with_tpm": "0"
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            },
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "type": "block",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "vg_name": "ceph_vg0"
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:        }
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:    ],
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:    "1": [
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:        {
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "devices": [
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "/dev/loop4"
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            ],
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_name": "ceph_lv1",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_size": "21470642176",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "name": "ceph_lv1",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "tags": {
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.cluster_name": "ceph",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.crush_device_class": "",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.encrypted": "0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.objectstore": "bluestore",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.osd_id": "1",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.type": "block",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.vdo": "0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.with_tpm": "0"
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            },
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "type": "block",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "vg_name": "ceph_vg1"
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:        }
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:    ],
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:    "2": [
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:        {
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "devices": [
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "/dev/loop5"
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            ],
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_name": "ceph_lv2",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_size": "21470642176",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "name": "ceph_lv2",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "tags": {
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.cluster_name": "ceph",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.crush_device_class": "",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.encrypted": "0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.objectstore": "bluestore",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.osd_id": "2",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.type": "block",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.vdo": "0",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:                "ceph.with_tpm": "0"
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            },
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "type": "block",
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:            "vg_name": "ceph_vg2"
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:        }
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]:    ]
Jan 27 09:52:41 np0005597378 goofy_volhard[404346]: }
Jan 27 09:52:41 np0005597378 systemd[1]: libpod-a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5.scope: Deactivated successfully.
Jan 27 09:52:41 np0005597378 podman[404330]: 2026-01-27 14:52:41.76469082 +0000 UTC m=+0.500005432 container died a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 09:52:41 np0005597378 systemd[1]: var-lib-containers-storage-overlay-30acdcae8e8080fb361c884dc53474b4b43aaf6aed25927d9a5222fd3938dbd1-merged.mount: Deactivated successfully.
Jan 27 09:52:41 np0005597378 podman[404330]: 2026-01-27 14:52:41.904055898 +0000 UTC m=+0.639370490 container remove a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:52:41 np0005597378 systemd[1]: libpod-conmon-a35c8c03b6f5ef949eb255cdae1d669772d9d4a6dc0e31416d0a783eaf0e94f5.scope: Deactivated successfully.
Jan 27 09:52:41 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:42 np0005597378 podman[404433]: 2026-01-27 14:52:42.418422723 +0000 UTC m=+0.072713421 container create efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_curran, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 27 09:52:42 np0005597378 podman[404433]: 2026-01-27 14:52:42.370626941 +0000 UTC m=+0.024917659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:52:42 np0005597378 systemd[1]: Started libpod-conmon-efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f.scope.
Jan 27 09:52:42 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:52:42 np0005597378 podman[404433]: 2026-01-27 14:52:42.592647107 +0000 UTC m=+0.246937825 container init efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_curran, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:52:42 np0005597378 podman[404433]: 2026-01-27 14:52:42.600739574 +0000 UTC m=+0.255030272 container start efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_curran, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:52:42 np0005597378 intelligent_curran[404449]: 167 167
Jan 27 09:52:42 np0005597378 systemd[1]: libpod-efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f.scope: Deactivated successfully.
Jan 27 09:52:42 np0005597378 conmon[404449]: conmon efc79e20b727a92fc996 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f.scope/container/memory.events
Jan 27 09:52:42 np0005597378 podman[404433]: 2026-01-27 14:52:42.622091466 +0000 UTC m=+0.276382194 container attach efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:52:42 np0005597378 podman[404433]: 2026-01-27 14:52:42.622550868 +0000 UTC m=+0.276841576 container died efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:52:42 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3311b2b0cd5ac91edb2fa7d4121c22feffd3f567facb2ecd316a0e23c185f8c5-merged.mount: Deactivated successfully.
Jan 27 09:52:42 np0005597378 podman[404433]: 2026-01-27 14:52:42.783746372 +0000 UTC m=+0.438037070 container remove efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_curran, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:52:42 np0005597378 systemd[1]: libpod-conmon-efc79e20b727a92fc996d6bdf8850aab0f5ae11799679b2a3f9c5a3512b51b1f.scope: Deactivated successfully.
Jan 27 09:52:42 np0005597378 nova_compute[238941]: 2026-01-27 14:52:42.792 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:43 np0005597378 podman[404474]: 2026-01-27 14:52:42.933676023 +0000 UTC m=+0.025718431 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:52:43 np0005597378 podman[404474]: 2026-01-27 14:52:43.089825811 +0000 UTC m=+0.181868189 container create 29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_feynman, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:52:43 np0005597378 systemd[1]: Started libpod-conmon-29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c.scope.
Jan 27 09:52:43 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:52:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8b1e54efa531571b1dfc2741ee2b7caf72610a7269f68d0415d8a9c8639612/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:52:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8b1e54efa531571b1dfc2741ee2b7caf72610a7269f68d0415d8a9c8639612/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:52:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8b1e54efa531571b1dfc2741ee2b7caf72610a7269f68d0415d8a9c8639612/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:52:43 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8b1e54efa531571b1dfc2741ee2b7caf72610a7269f68d0415d8a9c8639612/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:52:43 np0005597378 podman[404474]: 2026-01-27 14:52:43.352072544 +0000 UTC m=+0.444114922 container init 29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 09:52:43 np0005597378 podman[404474]: 2026-01-27 14:52:43.359162614 +0000 UTC m=+0.451204992 container start 29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:52:43 np0005597378 nova_compute[238941]: 2026-01-27 14:52:43.385 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:52:43 np0005597378 nova_compute[238941]: 2026-01-27 14:52:43.386 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:52:43 np0005597378 nova_compute[238941]: 2026-01-27 14:52:43.386 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:52:43 np0005597378 podman[404474]: 2026-01-27 14:52:43.451945994 +0000 UTC m=+0.543988372 container attach 29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_feynman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:52:43 np0005597378 nova_compute[238941]: 2026-01-27 14:52:43.511 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:52:43 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:44 np0005597378 lvm[404568]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:52:44 np0005597378 lvm[404569]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:52:44 np0005597378 lvm[404568]: VG ceph_vg0 finished
Jan 27 09:52:44 np0005597378 lvm[404569]: VG ceph_vg1 finished
Jan 27 09:52:44 np0005597378 lvm[404571]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:52:44 np0005597378 lvm[404571]: VG ceph_vg2 finished
Jan 27 09:52:44 np0005597378 boring_feynman[404490]: {}
Jan 27 09:52:44 np0005597378 systemd[1]: libpod-29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c.scope: Deactivated successfully.
Jan 27 09:52:44 np0005597378 systemd[1]: libpod-29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c.scope: Consumed 1.273s CPU time.
Jan 27 09:52:44 np0005597378 podman[404474]: 2026-01-27 14:52:44.184999544 +0000 UTC m=+1.277041922 container died 29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 09:52:44 np0005597378 nova_compute[238941]: 2026-01-27 14:52:44.281 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:44 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1f8b1e54efa531571b1dfc2741ee2b7caf72610a7269f68d0415d8a9c8639612-merged.mount: Deactivated successfully.
Jan 27 09:52:44 np0005597378 nova_compute[238941]: 2026-01-27 14:52:44.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:52:44 np0005597378 podman[404474]: 2026-01-27 14:52:44.766139071 +0000 UTC m=+1.858181439 container remove 29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_feynman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 09:52:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:52:44 np0005597378 systemd[1]: libpod-conmon-29f46d53b6c5b11450a4c6b2dde2ab163a914059e6aa720b9bc6a5f9d85a9a3c.scope: Deactivated successfully.
Jan 27 09:52:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:52:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:52:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:52:45 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:52:45 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:52:45 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:52:46.364 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:52:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:52:46.364 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:52:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:52:46.364 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:52:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:52:47 np0005597378 nova_compute[238941]: 2026-01-27 14:52:47.797 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:52:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:52:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:52:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:52:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:52:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:52:47 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:49 np0005597378 nova_compute[238941]: 2026-01-27 14:52:49.283 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:49 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:50 np0005597378 nova_compute[238941]: 2026-01-27 14:52:50.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:52:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:52:51 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:52 np0005597378 nova_compute[238941]: 2026-01-27 14:52:52.800 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:53 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:54 np0005597378 nova_compute[238941]: 2026-01-27 14:52:54.314 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:55 np0005597378 podman[404613]: 2026-01-27 14:52:55.753495778 +0000 UTC m=+0.090298532 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 27 09:52:55 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:52:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:52:57 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 14K writes, 68K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1368 writes, 6264 keys, 1368 commit groups, 1.0 writes per commit group, ingest: 8.91 MB, 0.01 MB/s#012Interval WAL: 1368 writes, 1368 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     28.5      2.95              0.26        49    0.060       0      0       0.0       0.0#012  L6      1/0   10.65 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0     76.9     65.4      6.37              1.13        48    0.133    323K    25K       0.0       0.0#012 Sum      1/0   10.65 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     52.6     53.7      9.33              1.39        97    0.096    323K    25K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.7     53.5     54.2      1.08              0.16        10    0.108     45K   2547       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     76.9     65.4      6.37              1.13        48    0.133    323K    25K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     28.6      2.94              0.26        48    0.061       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.9      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 600.0 interval#012Flush(GB): cumulative 0.082, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.49 GB write, 0.08 MB/s write, 0.48 GB read, 0.08 MB/s read, 9.3 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ec4e4d38d0#2 capacity: 304.00 MB usage: 59.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000504 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3980,56.97 MB,18.7407%) FilterBlock(98,908.23 KB,0.291759%) IndexBlock(98,1.44 MB,0.474493%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 27 09:52:57 np0005597378 nova_compute[238941]: 2026-01-27 14:52:57.805 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:57 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:52:58 np0005597378 nova_compute[238941]: 2026-01-27 14:52:58.803 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:52:59 np0005597378 nova_compute[238941]: 2026-01-27 14:52:59.317 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:52:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:52:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2612611081' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:52:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:52:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2612611081' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:52:59 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:53:01 np0005597378 podman[404633]: 2026-01-27 14:53:01.763631794 +0000 UTC m=+0.101558175 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 09:53:01 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:02 np0005597378 nova_compute[238941]: 2026-01-27 14:53:02.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:53:02 np0005597378 nova_compute[238941]: 2026-01-27 14:53:02.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:53:02 np0005597378 nova_compute[238941]: 2026-01-27 14:53:02.808 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:03 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:04 np0005597378 nova_compute[238941]: 2026-01-27 14:53:04.347 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:04 np0005597378 nova_compute[238941]: 2026-01-27 14:53:04.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:53:05 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:53:07 np0005597378 nova_compute[238941]: 2026-01-27 14:53:07.812 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:07 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:09 np0005597378 nova_compute[238941]: 2026-01-27 14:53:09.349 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:09 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:53:11 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:12 np0005597378 nova_compute[238941]: 2026-01-27 14:53:12.817 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:13 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:14 np0005597378 nova_compute[238941]: 2026-01-27 14:53:14.354 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:15 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:53:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:53:17
Jan 27 09:53:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:53:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:53:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'images']
Jan 27 09:53:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:53:17 np0005597378 nova_compute[238941]: 2026-01-27 14:53:17.821 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:53:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:53:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:53:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:53:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:53:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:53:17 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:53:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:53:19 np0005597378 nova_compute[238941]: 2026-01-27 14:53:19.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:19 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:53:21 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:22 np0005597378 nova_compute[238941]: 2026-01-27 14:53:22.825 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:23 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:24 np0005597378 nova_compute[238941]: 2026-01-27 14:53:24.356 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:25 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:53:26 np0005597378 podman[404657]: 2026-01-27 14:53:26.757492145 +0000 UTC m=+0.096521340 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 27 09:53:27 np0005597378 nova_compute[238941]: 2026-01-27 14:53:27.847 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:27 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:53:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:53:29 np0005597378 nova_compute[238941]: 2026-01-27 14:53:29.359 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:53:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:32 np0005597378 podman[404675]: 2026-01-27 14:53:32.771907895 +0000 UTC m=+0.109011704 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 09:53:32 np0005597378 nova_compute[238941]: 2026-01-27 14:53:32.850 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:34 np0005597378 nova_compute[238941]: 2026-01-27 14:53:34.360 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:34 np0005597378 nova_compute[238941]: 2026-01-27 14:53:34.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:53:34 np0005597378 nova_compute[238941]: 2026-01-27 14:53:34.474 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:53:34 np0005597378 nova_compute[238941]: 2026-01-27 14:53:34.475 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:53:34 np0005597378 nova_compute[238941]: 2026-01-27 14:53:34.475 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:53:34 np0005597378 nova_compute[238941]: 2026-01-27 14:53:34.475 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:53:34 np0005597378 nova_compute[238941]: 2026-01-27 14:53:34.476 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:53:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:53:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1086512471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:53:35 np0005597378 nova_compute[238941]: 2026-01-27 14:53:35.095 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:53:35 np0005597378 nova_compute[238941]: 2026-01-27 14:53:35.352 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:53:35 np0005597378 nova_compute[238941]: 2026-01-27 14:53:35.353 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3522MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:53:35 np0005597378 nova_compute[238941]: 2026-01-27 14:53:35.353 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:53:35 np0005597378 nova_compute[238941]: 2026-01-27 14:53:35.354 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:53:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:36 np0005597378 nova_compute[238941]: 2026-01-27 14:53:36.319 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:53:36 np0005597378 nova_compute[238941]: 2026-01-27 14:53:36.319 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:53:36 np0005597378 nova_compute[238941]: 2026-01-27 14:53:36.342 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:53:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:53:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:53:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1416231754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:53:37 np0005597378 nova_compute[238941]: 2026-01-27 14:53:37.000 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:53:37 np0005597378 nova_compute[238941]: 2026-01-27 14:53:37.007 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:53:37 np0005597378 nova_compute[238941]: 2026-01-27 14:53:37.046 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:53:37 np0005597378 nova_compute[238941]: 2026-01-27 14:53:37.048 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:53:37 np0005597378 nova_compute[238941]: 2026-01-27 14:53:37.048 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:53:37 np0005597378 nova_compute[238941]: 2026-01-27 14:53:37.854 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:39 np0005597378 nova_compute[238941]: 2026-01-27 14:53:39.363 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:41 np0005597378 nova_compute[238941]: 2026-01-27 14:53:41.049 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:53:41 np0005597378 nova_compute[238941]: 2026-01-27 14:53:41.050 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:53:41 np0005597378 nova_compute[238941]: 2026-01-27 14:53:41.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:53:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:53:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:42 np0005597378 nova_compute[238941]: 2026-01-27 14:53:42.859 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:44 np0005597378 nova_compute[238941]: 2026-01-27 14:53:44.366 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:44 np0005597378 nova_compute[238941]: 2026-01-27 14:53:44.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:53:44 np0005597378 nova_compute[238941]: 2026-01-27 14:53:44.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:53:44 np0005597378 nova_compute[238941]: 2026-01-27 14:53:44.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:53:44 np0005597378 nova_compute[238941]: 2026-01-27 14:53:44.557 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:53:45 np0005597378 nova_compute[238941]: 2026-01-27 14:53:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:53:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:53:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:53:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:53:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:53:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:53:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:53:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:53:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:53:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:53:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:53:45 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:53:45 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:53:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:46 np0005597378 podman[404886]: 2026-01-27 14:53:46.236472184 +0000 UTC m=+0.024850118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:53:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:53:46.366 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:53:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:53:46.366 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:53:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:53:46.366 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:53:46 np0005597378 podman[404886]: 2026-01-27 14:53:46.378968806 +0000 UTC m=+0.167346710 container create 3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:53:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:53:46 np0005597378 systemd[1]: Started libpod-conmon-3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6.scope.
Jan 27 09:53:46 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:53:46 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:53:46 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:53:46 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:53:46 np0005597378 podman[404886]: 2026-01-27 14:53:46.851658123 +0000 UTC m=+0.640036057 container init 3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:53:46 np0005597378 podman[404886]: 2026-01-27 14:53:46.861396534 +0000 UTC m=+0.649774438 container start 3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:53:46 np0005597378 determined_babbage[404902]: 167 167
Jan 27 09:53:46 np0005597378 systemd[1]: libpod-3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6.scope: Deactivated successfully.
Jan 27 09:53:47 np0005597378 podman[404886]: 2026-01-27 14:53:47.484609279 +0000 UTC m=+1.272987203 container attach 3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:53:47 np0005597378 podman[404886]: 2026-01-27 14:53:47.485671297 +0000 UTC m=+1.274049221 container died 3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:53:47 np0005597378 systemd[1]: var-lib-containers-storage-overlay-7c70abed2db423384bb408dc63eb5ccb34e79a3938fe334dad95bcee47a169ad-merged.mount: Deactivated successfully.
Jan 27 09:53:47 np0005597378 podman[404886]: 2026-01-27 14:53:47.673053863 +0000 UTC m=+1.461431767 container remove 3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:53:47 np0005597378 systemd[1]: libpod-conmon-3d917db2519b04589b519437f893acd116503aec82b5e3ea9422ff7701427aa6.scope: Deactivated successfully.
Jan 27 09:53:47 np0005597378 nova_compute[238941]: 2026-01-27 14:53:47.864 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:53:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:53:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:53:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:53:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:53:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:53:47 np0005597378 podman[404925]: 2026-01-27 14:53:47.833797625 +0000 UTC m=+0.029982316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:53:47 np0005597378 podman[404925]: 2026-01-27 14:53:47.9279572 +0000 UTC m=+0.124141861 container create 63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 27 09:53:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:48 np0005597378 systemd[1]: Started libpod-conmon-63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f.scope.
Jan 27 09:53:48 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:53:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5593bea62a8ffea718f8b94ac95363775d8cae10624d725fdcb7cadbfee0f6b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:53:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5593bea62a8ffea718f8b94ac95363775d8cae10624d725fdcb7cadbfee0f6b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:53:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5593bea62a8ffea718f8b94ac95363775d8cae10624d725fdcb7cadbfee0f6b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:53:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5593bea62a8ffea718f8b94ac95363775d8cae10624d725fdcb7cadbfee0f6b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:53:48 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5593bea62a8ffea718f8b94ac95363775d8cae10624d725fdcb7cadbfee0f6b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:53:48 np0005597378 podman[404925]: 2026-01-27 14:53:48.160713582 +0000 UTC m=+0.356898273 container init 63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 27 09:53:48 np0005597378 podman[404925]: 2026-01-27 14:53:48.167503854 +0000 UTC m=+0.363688515 container start 63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:53:48 np0005597378 podman[404925]: 2026-01-27 14:53:48.178379336 +0000 UTC m=+0.374564027 container attach 63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:53:48 np0005597378 busy_greider[404942]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:53:48 np0005597378 busy_greider[404942]: --> All data devices are unavailable
Jan 27 09:53:48 np0005597378 systemd[1]: libpod-63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f.scope: Deactivated successfully.
Jan 27 09:53:48 np0005597378 podman[404925]: 2026-01-27 14:53:48.664943536 +0000 UTC m=+0.861128227 container died 63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 27 09:53:49 np0005597378 nova_compute[238941]: 2026-01-27 14:53:49.366 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:49 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5593bea62a8ffea718f8b94ac95363775d8cae10624d725fdcb7cadbfee0f6b0-merged.mount: Deactivated successfully.
Jan 27 09:53:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:50 np0005597378 podman[404925]: 2026-01-27 14:53:50.321511077 +0000 UTC m=+2.517695738 container remove 63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:53:50 np0005597378 systemd[1]: libpod-conmon-63ca868269f7105761936b36becc61670be167439a666438dd54bb440577e11f.scope: Deactivated successfully.
Jan 27 09:53:50 np0005597378 podman[405037]: 2026-01-27 14:53:50.820255433 +0000 UTC m=+0.026350328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:53:51 np0005597378 nova_compute[238941]: 2026-01-27 14:53:51.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:53:51 np0005597378 podman[405037]: 2026-01-27 14:53:51.477990144 +0000 UTC m=+0.684085019 container create 7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 09:53:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:53:51 np0005597378 systemd[1]: Started libpod-conmon-7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec.scope.
Jan 27 09:53:51 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:53:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:52 np0005597378 podman[405037]: 2026-01-27 14:53:52.022257911 +0000 UTC m=+1.228352816 container init 7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 27 09:53:52 np0005597378 podman[405037]: 2026-01-27 14:53:52.029343041 +0000 UTC m=+1.235437916 container start 7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:53:52 np0005597378 modest_wozniak[405053]: 167 167
Jan 27 09:53:52 np0005597378 systemd[1]: libpod-7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec.scope: Deactivated successfully.
Jan 27 09:53:52 np0005597378 podman[405037]: 2026-01-27 14:53:52.270766097 +0000 UTC m=+1.476860992 container attach 7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wozniak, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 09:53:52 np0005597378 podman[405037]: 2026-01-27 14:53:52.271216178 +0000 UTC m=+1.477311063 container died 7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 27 09:53:52 np0005597378 systemd[1]: var-lib-containers-storage-overlay-d962f14779601a753672cd5812c14269f9a083a9a2dc0e9576517fb8808cc498-merged.mount: Deactivated successfully.
Jan 27 09:53:52 np0005597378 nova_compute[238941]: 2026-01-27 14:53:52.867 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:53 np0005597378 podman[405037]: 2026-01-27 14:53:53.504574589 +0000 UTC m=+2.710669464 container remove 7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:53:53 np0005597378 systemd[1]: libpod-conmon-7bec26ecf1aaaa01f8e1a929a57032ba317499e20735c5b13ea4ba1dd15edcec.scope: Deactivated successfully.
Jan 27 09:53:53 np0005597378 podman[405077]: 2026-01-27 14:53:53.64927763 +0000 UTC m=+0.024763505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:53:53 np0005597378 podman[405077]: 2026-01-27 14:53:53.792413769 +0000 UTC m=+0.167899614 container create 33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:53:53 np0005597378 systemd[1]: Started libpod-conmon-33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2.scope.
Jan 27 09:53:53 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:53:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbe5400cc15076eba9b3c308242670331ea60098f2b8f8cc9f0d0c58adfc05a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:53:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbe5400cc15076eba9b3c308242670331ea60098f2b8f8cc9f0d0c58adfc05a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:53:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbe5400cc15076eba9b3c308242670331ea60098f2b8f8cc9f0d0c58adfc05a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:53:53 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbe5400cc15076eba9b3c308242670331ea60098f2b8f8cc9f0d0c58adfc05a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:53:53 np0005597378 podman[405077]: 2026-01-27 14:53:53.988610041 +0000 UTC m=+0.364095906 container init 33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mendel, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:53:53 np0005597378 podman[405077]: 2026-01-27 14:53:53.996512553 +0000 UTC m=+0.371998398 container start 33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mendel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:53:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:54 np0005597378 practical_mendel[405094]: {
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:    "0": [
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:        {
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "devices": [
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "/dev/loop3"
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            ],
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_name": "ceph_lv0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_size": "21470642176",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "name": "ceph_lv0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "tags": {
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.cluster_name": "ceph",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.crush_device_class": "",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.encrypted": "0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.objectstore": "bluestore",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.osd_id": "0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.type": "block",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.vdo": "0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.with_tpm": "0"
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            },
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "type": "block",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "vg_name": "ceph_vg0"
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:        }
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:    ],
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:    "1": [
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:        {
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "devices": [
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "/dev/loop4"
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            ],
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_name": "ceph_lv1",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_size": "21470642176",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "name": "ceph_lv1",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "tags": {
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.cluster_name": "ceph",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.crush_device_class": "",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.encrypted": "0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.objectstore": "bluestore",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.osd_id": "1",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.type": "block",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.vdo": "0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.with_tpm": "0"
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            },
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "type": "block",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "vg_name": "ceph_vg1"
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:        }
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:    ],
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:    "2": [
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:        {
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "devices": [
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "/dev/loop5"
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            ],
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_name": "ceph_lv2",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_size": "21470642176",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "name": "ceph_lv2",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "tags": {
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.cluster_name": "ceph",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.crush_device_class": "",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.encrypted": "0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.objectstore": "bluestore",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.osd_id": "2",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.type": "block",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.vdo": "0",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:                "ceph.with_tpm": "0"
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            },
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "type": "block",
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:            "vg_name": "ceph_vg2"
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:        }
Jan 27 09:53:54 np0005597378 practical_mendel[405094]:    ]
Jan 27 09:53:54 np0005597378 practical_mendel[405094]: }
Jan 27 09:53:54 np0005597378 podman[405077]: 2026-01-27 14:53:54.292027218 +0000 UTC m=+0.667513063 container attach 33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mendel, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:53:54 np0005597378 systemd[1]: libpod-33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2.scope: Deactivated successfully.
Jan 27 09:53:54 np0005597378 podman[405103]: 2026-01-27 14:53:54.357629068 +0000 UTC m=+0.028402893 container died 33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:53:54 np0005597378 nova_compute[238941]: 2026-01-27 14:53:54.368 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:55 np0005597378 systemd[1]: var-lib-containers-storage-overlay-fdbe5400cc15076eba9b3c308242670331ea60098f2b8f8cc9f0d0c58adfc05a-merged.mount: Deactivated successfully.
Jan 27 09:53:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:56 np0005597378 podman[405103]: 2026-01-27 14:53:56.339831012 +0000 UTC m=+2.010604857 container remove 33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:53:56 np0005597378 systemd[1]: libpod-conmon-33fc908be693cdb7e5a5f2f283a6ba46b5815c11d3515f7f352834e3e07435b2.scope: Deactivated successfully.
Jan 27 09:53:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:53:56 np0005597378 podman[405179]: 2026-01-27 14:53:56.877863422 +0000 UTC m=+0.024153029 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:53:57 np0005597378 podman[405179]: 2026-01-27 14:53:57.072759569 +0000 UTC m=+0.219049156 container create 6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_perlman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 09:53:57 np0005597378 systemd[1]: Started libpod-conmon-6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06.scope.
Jan 27 09:53:57 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:53:57 np0005597378 nova_compute[238941]: 2026-01-27 14:53:57.873 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:57 np0005597378 podman[405179]: 2026-01-27 14:53:57.898436814 +0000 UTC m=+1.044726431 container init 6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_perlman, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:53:57 np0005597378 podman[405179]: 2026-01-27 14:53:57.906014648 +0000 UTC m=+1.052304235 container start 6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_perlman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 27 09:53:57 np0005597378 friendly_perlman[405208]: 167 167
Jan 27 09:53:57 np0005597378 systemd[1]: libpod-6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06.scope: Deactivated successfully.
Jan 27 09:53:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:53:58 np0005597378 podman[405179]: 2026-01-27 14:53:58.348104884 +0000 UTC m=+1.494394501 container attach 6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 27 09:53:58 np0005597378 podman[405179]: 2026-01-27 14:53:58.348774793 +0000 UTC m=+1.495064390 container died 6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:53:58 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a3abd1b1ac444ff7b1a57f3adc4aa7008945f3da3675c44b403ffe116f45e54f-merged.mount: Deactivated successfully.
Jan 27 09:53:59 np0005597378 nova_compute[238941]: 2026-01-27 14:53:59.370 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:53:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:53:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/598555741' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:53:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:53:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/598555741' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:53:59 np0005597378 podman[405179]: 2026-01-27 14:53:59.967990831 +0000 UTC m=+3.114280418 container remove 6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_perlman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 27 09:53:59 np0005597378 systemd[1]: libpod-conmon-6a0891ac0143ab019ac2913692ad336f10a8e5f67f9ec8fc658b73c232b0fb06.scope: Deactivated successfully.
Jan 27 09:54:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:00 np0005597378 podman[405195]: 2026-01-27 14:54:00.021883467 +0000 UTC m=+2.907111843 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 09:54:00 np0005597378 podman[405239]: 2026-01-27 14:54:00.130444958 +0000 UTC m=+0.023395507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:54:00 np0005597378 podman[405239]: 2026-01-27 14:54:00.378719498 +0000 UTC m=+0.271670027 container create 93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 27 09:54:00 np0005597378 systemd[1]: Started libpod-conmon-93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8.scope.
Jan 27 09:54:00 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:54:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354c1b68973d29fc4ae18f7bc1584e6a1c06ccb81ad0b22db192016788a56b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:54:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354c1b68973d29fc4ae18f7bc1584e6a1c06ccb81ad0b22db192016788a56b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:54:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354c1b68973d29fc4ae18f7bc1584e6a1c06ccb81ad0b22db192016788a56b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:54:00 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354c1b68973d29fc4ae18f7bc1584e6a1c06ccb81ad0b22db192016788a56b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:54:00 np0005597378 podman[405239]: 2026-01-27 14:54:00.92431269 +0000 UTC m=+0.817263229 container init 93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:54:00 np0005597378 podman[405239]: 2026-01-27 14:54:00.932216683 +0000 UTC m=+0.825167212 container start 93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:54:01 np0005597378 podman[405239]: 2026-01-27 14:54:01.446729192 +0000 UTC m=+1.339679741 container attach 93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:54:01 np0005597378 lvm[405335]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:54:01 np0005597378 lvm[405335]: VG ceph_vg1 finished
Jan 27 09:54:01 np0005597378 lvm[405334]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:54:01 np0005597378 lvm[405334]: VG ceph_vg0 finished
Jan 27 09:54:01 np0005597378 lvm[405337]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:54:01 np0005597378 lvm[405337]: VG ceph_vg2 finished
Jan 27 09:54:01 np0005597378 crazy_ganguly[405256]: {}
Jan 27 09:54:01 np0005597378 systemd[1]: libpod-93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8.scope: Deactivated successfully.
Jan 27 09:54:01 np0005597378 systemd[1]: libpod-93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8.scope: Consumed 1.354s CPU time.
Jan 27 09:54:01 np0005597378 podman[405239]: 2026-01-27 14:54:01.760137957 +0000 UTC m=+1.653088496 container died 93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:54:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:54:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:02 np0005597378 systemd[1]: var-lib-containers-storage-overlay-5354c1b68973d29fc4ae18f7bc1584e6a1c06ccb81ad0b22db192016788a56b4-merged.mount: Deactivated successfully.
Jan 27 09:54:02 np0005597378 podman[405239]: 2026-01-27 14:54:02.71173949 +0000 UTC m=+2.604690019 container remove 93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:54:02 np0005597378 systemd[1]: libpod-conmon-93b926db2f99a2b2e8f29338351ed6482d35111fa43f0c87f2cb61948d99e1f8.scope: Deactivated successfully.
Jan 27 09:54:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:54:02 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:54:02 np0005597378 nova_compute[238941]: 2026-01-27 14:54:02.877 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:54:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:54:03 np0005597378 ceph-osd[85897]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 45K writes, 181K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.81 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 390 writes, 862 keys, 390 commit groups, 1.0 writes per commit group, ingest: 0.38 MB, 0.00 MB/s#012Interval WAL: 390 writes, 184 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5626346df8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtabl
Jan 27 09:54:03 np0005597378 nova_compute[238941]: 2026-01-27 14:54:03.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:54:03 np0005597378 nova_compute[238941]: 2026-01-27 14:54:03.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:54:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:04 np0005597378 podman[405352]: 2026-01-27 14:54:04.142113193 +0000 UTC m=+0.096862968 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 09:54:04 np0005597378 nova_compute[238941]: 2026-01-27 14:54:04.371 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:04 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:54:04 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:54:05 np0005597378 nova_compute[238941]: 2026-01-27 14:54:05.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:54:05 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:54:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:54:07 np0005597378 nova_compute[238941]: 2026-01-27 14:54:07.881 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:54:08 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.3 total, 600.0 interval#012Cumulative writes: 48K writes, 184K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 48K writes, 17K syncs, 2.73 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 448 writes, 1100 keys, 448 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s#012Interval WAL: 448 writes, 201 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.3 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.3 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.3 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Jan 27 09:54:09 np0005597378 nova_compute[238941]: 2026-01-27 14:54:09.372 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:54:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:12 np0005597378 nova_compute[238941]: 2026-01-27 14:54:12.885 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:13 np0005597378 nova_compute[238941]: 2026-01-27 14:54:13.376 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:54:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:54:14 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6001.0 total, 600.0 interval#012Cumulative writes: 38K writes, 156K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s#012Cumulative WAL: 38K writes, 13K syncs, 2.83 writes per sync, written: 0.16 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 465 writes, 1028 keys, 465 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s#012Interval WAL: 465 writes, 216 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6001.0 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6001.0 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6001.0 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Jan 27 09:54:14 np0005597378 nova_compute[238941]: 2026-01-27 14:54:14.374 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:54:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:54:17
Jan 27 09:54:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:54:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:54:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'volumes', 'cephfs.cephfs.data', 'images', 'backups']
Jan 27 09:54:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:54:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:54:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:54:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:54:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:54:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:54:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:54:17 np0005597378 nova_compute[238941]: 2026-01-27 14:54:17.890 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:54:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:54:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:54:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:54:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:54:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:54:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:54:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:54:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:54:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:54:19 np0005597378 nova_compute[238941]: 2026-01-27 14:54:19.379 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:54:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:22 np0005597378 nova_compute[238941]: 2026-01-27 14:54:22.894 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:24 np0005597378 nova_compute[238941]: 2026-01-27 14:54:24.380 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:54:27 np0005597378 ceph-mgr[75385]: [devicehealth INFO root] Check health
Jan 27 09:54:27 np0005597378 nova_compute[238941]: 2026-01-27 14:54:27.898 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:54:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:54:29 np0005597378 nova_compute[238941]: 2026-01-27 14:54:29.381 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:30 np0005597378 podman[405405]: 2026-01-27 14:54:30.74169152 +0000 UTC m=+0.067876101 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 09:54:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:54:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:32 np0005597378 nova_compute[238941]: 2026-01-27 14:54:32.902 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:34 np0005597378 nova_compute[238941]: 2026-01-27 14:54:34.383 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:34 np0005597378 podman[405422]: 2026-01-27 14:54:34.79341828 +0000 UTC m=+0.131939719 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:54:35 np0005597378 nova_compute[238941]: 2026-01-27 14:54:35.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:54:35 np0005597378 nova_compute[238941]: 2026-01-27 14:54:35.575 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:54:35 np0005597378 nova_compute[238941]: 2026-01-27 14:54:35.576 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:54:35 np0005597378 nova_compute[238941]: 2026-01-27 14:54:35.576 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:54:35 np0005597378 nova_compute[238941]: 2026-01-27 14:54:35.576 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:54:35 np0005597378 nova_compute[238941]: 2026-01-27 14:54:35.576 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:54:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/753348517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:54:36 np0005597378 nova_compute[238941]: 2026-01-27 14:54:36.332 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.756s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:54:36 np0005597378 nova_compute[238941]: 2026-01-27 14:54:36.638 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:54:36 np0005597378 nova_compute[238941]: 2026-01-27 14:54:36.639 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3517MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:54:36 np0005597378 nova_compute[238941]: 2026-01-27 14:54:36.639 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:54:36 np0005597378 nova_compute[238941]: 2026-01-27 14:54:36.640 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.961724) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525676962001, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1436, "num_deletes": 251, "total_data_size": 2326495, "memory_usage": 2372152, "flush_reason": "Manual Compaction"}
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525676990476, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1336004, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68230, "largest_seqno": 69665, "table_properties": {"data_size": 1331020, "index_size": 2315, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12934, "raw_average_key_size": 20, "raw_value_size": 1320146, "raw_average_value_size": 2108, "num_data_blocks": 106, "num_entries": 626, "num_filter_entries": 626, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525522, "oldest_key_time": 1769525522, "file_creation_time": 1769525676, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 28812 microseconds, and 5184 cpu microseconds.
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.990532) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1336004 bytes OK
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.990555) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.998604) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.998679) EVENT_LOG_v1 {"time_micros": 1769525676998665, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.998724) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 2320176, prev total WAL file size 2320176, number of live WAL files 2.
Jan 27 09:54:36 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.000088) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373538' seq:72057594037927935, type:22 .. '6D6772737461740033303130' seq:0, type:0; will stop at (end)
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1304KB)], [161(10MB)]
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525677000144, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12507747, "oldest_snapshot_seqno": -1}
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8953 keys, 10111428 bytes, temperature: kUnknown
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525677091564, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 10111428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10056014, "index_size": 31941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22405, "raw_key_size": 233836, "raw_average_key_size": 26, "raw_value_size": 9900720, "raw_average_value_size": 1105, "num_data_blocks": 1236, "num_entries": 8953, "num_filter_entries": 8953, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525677, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.091860) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10111428 bytes
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.107546) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.7 rd, 110.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.7 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(16.9) write-amplify(7.6) OK, records in: 9395, records dropped: 442 output_compression: NoCompression
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.107590) EVENT_LOG_v1 {"time_micros": 1769525677107573, "job": 100, "event": "compaction_finished", "compaction_time_micros": 91520, "compaction_time_cpu_micros": 31477, "output_level": 6, "num_output_files": 1, "total_output_size": 10111428, "num_input_records": 9395, "num_output_records": 8953, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525677108483, "job": 100, "event": "table_file_deletion", "file_number": 163}
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525677111536, "job": 100, "event": "table_file_deletion", "file_number": 161}
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:36.999864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.111665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.111671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.111673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.111674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:54:37 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:54:37.111676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:54:37 np0005597378 nova_compute[238941]: 2026-01-27 14:54:37.186 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:54:37 np0005597378 nova_compute[238941]: 2026-01-27 14:54:37.186 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:54:37 np0005597378 nova_compute[238941]: 2026-01-27 14:54:37.257 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing inventories for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 27 09:54:37 np0005597378 nova_compute[238941]: 2026-01-27 14:54:37.359 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating ProviderTree inventory for provider cc8b0052-0829-4cee-8aba-4745f236afe4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 27 09:54:37 np0005597378 nova_compute[238941]: 2026-01-27 14:54:37.360 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Updating inventory in ProviderTree for provider cc8b0052-0829-4cee-8aba-4745f236afe4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 27 09:54:37 np0005597378 nova_compute[238941]: 2026-01-27 14:54:37.385 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing aggregate associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 27 09:54:37 np0005597378 nova_compute[238941]: 2026-01-27 14:54:37.408 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Refreshing trait associations for resource provider cc8b0052-0829-4cee-8aba-4745f236afe4, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_BMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 27 09:54:37 np0005597378 nova_compute[238941]: 2026-01-27 14:54:37.425 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:54:37 np0005597378 nova_compute[238941]: 2026-01-27 14:54:37.905 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:54:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/316169089' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:54:38 np0005597378 nova_compute[238941]: 2026-01-27 14:54:38.095 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.671s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:54:38 np0005597378 nova_compute[238941]: 2026-01-27 14:54:38.104 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:54:38 np0005597378 nova_compute[238941]: 2026-01-27 14:54:38.164 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:54:38 np0005597378 nova_compute[238941]: 2026-01-27 14:54:38.167 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:54:38 np0005597378 nova_compute[238941]: 2026-01-27 14:54:38.167 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:54:39 np0005597378 nova_compute[238941]: 2026-01-27 14:54:39.385 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:54:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:42 np0005597378 nova_compute[238941]: 2026-01-27 14:54:42.908 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:43 np0005597378 nova_compute[238941]: 2026-01-27 14:54:43.161 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:54:43 np0005597378 nova_compute[238941]: 2026-01-27 14:54:43.161 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:54:43 np0005597378 nova_compute[238941]: 2026-01-27 14:54:43.162 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:54:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:44 np0005597378 nova_compute[238941]: 2026-01-27 14:54:44.423 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:45 np0005597378 nova_compute[238941]: 2026-01-27 14:54:45.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:54:45 np0005597378 nova_compute[238941]: 2026-01-27 14:54:45.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:54:45 np0005597378 nova_compute[238941]: 2026-01-27 14:54:45.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:54:45 np0005597378 nova_compute[238941]: 2026-01-27 14:54:45.505 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:54:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:54:46.367 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:54:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:54:46.368 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:54:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:54:46.368 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:54:46 np0005597378 nova_compute[238941]: 2026-01-27 14:54:46.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:54:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:54:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:54:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:54:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:54:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:54:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:54:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:54:47 np0005597378 nova_compute[238941]: 2026-01-27 14:54:47.913 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:49 np0005597378 nova_compute[238941]: 2026-01-27 14:54:49.424 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:54:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:52 np0005597378 nova_compute[238941]: 2026-01-27 14:54:52.916 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:53 np0005597378 nova_compute[238941]: 2026-01-27 14:54:53.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:54:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:54 np0005597378 nova_compute[238941]: 2026-01-27 14:54:54.427 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:54:57 np0005597378 nova_compute[238941]: 2026-01-27 14:54:57.920 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:54:59 np0005597378 nova_compute[238941]: 2026-01-27 14:54:59.429 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:54:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:54:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2503008954' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:54:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:54:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2503008954' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:55:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:55:01 np0005597378 podman[405490]: 2026-01-27 14:55:01.74282069 +0000 UTC m=+0.083605164 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 09:55:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:55:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:55:02 np0005597378 nova_compute[238941]: 2026-01-27 14:55:02.924 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:03 np0005597378 nova_compute[238941]: 2026-01-27 14:55:03.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:55:03 np0005597378 nova_compute[238941]: 2026-01-27 14:55:03.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:55:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Jan 27 09:55:04 np0005597378 nova_compute[238941]: 2026-01-27 14:55:04.481 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:55:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:55:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:55:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:55:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:55:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:55:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:55:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:55:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:55:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:55:05 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:55:05 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:55:05 np0005597378 podman[405616]: 2026-01-27 14:55:05.676697738 +0000 UTC m=+0.141949628 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:55:05 np0005597378 podman[405679]: 2026-01-27 14:55:05.861545126 +0000 UTC m=+0.024600830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:55:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Jan 27 09:55:06 np0005597378 podman[405679]: 2026-01-27 14:55:06.231433986 +0000 UTC m=+0.394489670 container create 85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 27 09:55:06 np0005597378 systemd[1]: Started libpod-conmon-85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4.scope.
Jan 27 09:55:06 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:55:06 np0005597378 nova_compute[238941]: 2026-01-27 14:55:06.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:55:06 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:55:06 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:55:06 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:55:06 np0005597378 podman[405679]: 2026-01-27 14:55:06.536082328 +0000 UTC m=+0.699138032 container init 85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 09:55:06 np0005597378 podman[405679]: 2026-01-27 14:55:06.545901951 +0000 UTC m=+0.708957645 container start 85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jennings, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:55:06 np0005597378 naughty_jennings[405695]: 167 167
Jan 27 09:55:06 np0005597378 systemd[1]: libpod-85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4.scope: Deactivated successfully.
Jan 27 09:55:06 np0005597378 podman[405679]: 2026-01-27 14:55:06.626302927 +0000 UTC m=+0.789358621 container attach 85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 27 09:55:06 np0005597378 podman[405679]: 2026-01-27 14:55:06.628903287 +0000 UTC m=+0.791959001 container died 85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 27 09:55:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:55:07 np0005597378 systemd[1]: var-lib-containers-storage-overlay-dc85b7e55989937fa2260f645a984a7412e2a4e2cd594e73dddeb0dda690590b-merged.mount: Deactivated successfully.
Jan 27 09:55:07 np0005597378 podman[405679]: 2026-01-27 14:55:07.806038359 +0000 UTC m=+1.969094043 container remove 85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 27 09:55:07 np0005597378 systemd[1]: libpod-conmon-85684f0dbff67844d8cfccd1f79636902c37c03b2de61366ec3ea0634524efc4.scope: Deactivated successfully.
Jan 27 09:55:07 np0005597378 nova_compute[238941]: 2026-01-27 14:55:07.927 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Jan 27 09:55:08 np0005597378 podman[405719]: 2026-01-27 14:55:07.953860434 +0000 UTC m=+0.026043710 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:55:08 np0005597378 podman[405719]: 2026-01-27 14:55:08.114872722 +0000 UTC m=+0.187055988 container create 00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 09:55:08 np0005597378 systemd[1]: Started libpod-conmon-00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a.scope.
Jan 27 09:55:08 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:55:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98552318bb8f4925560a0084a8ac5268e1521e707717c87f0839bca538f6dea4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:55:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98552318bb8f4925560a0084a8ac5268e1521e707717c87f0839bca538f6dea4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:55:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98552318bb8f4925560a0084a8ac5268e1521e707717c87f0839bca538f6dea4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:55:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98552318bb8f4925560a0084a8ac5268e1521e707717c87f0839bca538f6dea4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:55:08 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98552318bb8f4925560a0084a8ac5268e1521e707717c87f0839bca538f6dea4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:55:08 np0005597378 podman[405719]: 2026-01-27 14:55:08.420657713 +0000 UTC m=+0.492840999 container init 00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 09:55:08 np0005597378 podman[405719]: 2026-01-27 14:55:08.427922088 +0000 UTC m=+0.500105354 container start 00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 09:55:08 np0005597378 podman[405719]: 2026-01-27 14:55:08.508720475 +0000 UTC m=+0.580903771 container attach 00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jepsen, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 09:55:08 np0005597378 dreamy_jepsen[405736]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:55:08 np0005597378 dreamy_jepsen[405736]: --> All data devices are unavailable
Jan 27 09:55:08 np0005597378 systemd[1]: libpod-00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a.scope: Deactivated successfully.
Jan 27 09:55:08 np0005597378 podman[405756]: 2026-01-27 14:55:08.958615821 +0000 UTC m=+0.030770756 container died 00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jepsen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:55:09 np0005597378 systemd[1]: var-lib-containers-storage-overlay-98552318bb8f4925560a0084a8ac5268e1521e707717c87f0839bca538f6dea4-merged.mount: Deactivated successfully.
Jan 27 09:55:09 np0005597378 nova_compute[238941]: 2026-01-27 14:55:09.483 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:09 np0005597378 podman[405756]: 2026-01-27 14:55:09.597469286 +0000 UTC m=+0.669624211 container remove 00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jepsen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:55:09 np0005597378 systemd[1]: libpod-conmon-00a667f8a5b13d1804464a89e503288b1e4feed66ae002c2e054a4aa48f9506a.scope: Deactivated successfully.
Jan 27 09:55:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 27 09:55:10 np0005597378 podman[405833]: 2026-01-27 14:55:10.053369513 +0000 UTC m=+0.024019965 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:55:10 np0005597378 podman[405833]: 2026-01-27 14:55:10.259550523 +0000 UTC m=+0.230200945 container create 8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_faraday, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:55:10 np0005597378 systemd[1]: Started libpod-conmon-8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0.scope.
Jan 27 09:55:10 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:55:10 np0005597378 podman[405833]: 2026-01-27 14:55:10.584294643 +0000 UTC m=+0.554945095 container init 8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:55:10 np0005597378 podman[405833]: 2026-01-27 14:55:10.592416221 +0000 UTC m=+0.563066643 container start 8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:55:10 np0005597378 epic_faraday[405849]: 167 167
Jan 27 09:55:10 np0005597378 systemd[1]: libpod-8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0.scope: Deactivated successfully.
Jan 27 09:55:10 np0005597378 podman[405833]: 2026-01-27 14:55:10.662318156 +0000 UTC m=+0.632968578 container attach 8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_faraday, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 27 09:55:10 np0005597378 podman[405833]: 2026-01-27 14:55:10.663135858 +0000 UTC m=+0.633786280 container died 8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_faraday, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:55:11 np0005597378 systemd[1]: var-lib-containers-storage-overlay-1e8e21529ceb8056daeb66e065f68fd57fd28ae5d489705118a135a03092c296-merged.mount: Deactivated successfully.
Jan 27 09:55:11 np0005597378 podman[405833]: 2026-01-27 14:55:11.71650189 +0000 UTC m=+1.687152312 container remove 8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_faraday, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:55:11 np0005597378 systemd[1]: libpod-conmon-8665068e46197815133a198709d3de580252023e72e5ed3c4eeda0d132d069b0.scope: Deactivated successfully.
Jan 27 09:55:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:55:11 np0005597378 podman[405873]: 2026-01-27 14:55:11.861859878 +0000 UTC m=+0.025667080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:55:12 np0005597378 podman[405873]: 2026-01-27 14:55:12.009324453 +0000 UTC m=+0.173131635 container create aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:55:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 27 09:55:12 np0005597378 systemd[1]: Started libpod-conmon-aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d.scope.
Jan 27 09:55:12 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:55:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf27e8d791b92fac4d2ff57de6eadf89379e63ea657890c7f18ba5327ad9aff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:55:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf27e8d791b92fac4d2ff57de6eadf89379e63ea657890c7f18ba5327ad9aff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:55:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf27e8d791b92fac4d2ff57de6eadf89379e63ea657890c7f18ba5327ad9aff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:55:12 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf27e8d791b92fac4d2ff57de6eadf89379e63ea657890c7f18ba5327ad9aff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:55:12 np0005597378 podman[405873]: 2026-01-27 14:55:12.381219568 +0000 UTC m=+0.545026770 container init aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:55:12 np0005597378 podman[405873]: 2026-01-27 14:55:12.388242707 +0000 UTC m=+0.552049889 container start aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:55:12 np0005597378 podman[405873]: 2026-01-27 14:55:12.562539441 +0000 UTC m=+0.726346643 container attach aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 27 09:55:12 np0005597378 practical_knuth[405890]: {
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:    "0": [
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:        {
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "devices": [
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "/dev/loop3"
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            ],
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_name": "ceph_lv0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_size": "21470642176",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "name": "ceph_lv0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "tags": {
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.cluster_name": "ceph",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.crush_device_class": "",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.encrypted": "0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.objectstore": "bluestore",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.osd_id": "0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.type": "block",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.vdo": "0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.with_tpm": "0"
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            },
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "type": "block",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "vg_name": "ceph_vg0"
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:        }
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:    ],
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:    "1": [
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:        {
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "devices": [
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "/dev/loop4"
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            ],
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_name": "ceph_lv1",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_size": "21470642176",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "name": "ceph_lv1",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "tags": {
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.cluster_name": "ceph",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.crush_device_class": "",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.encrypted": "0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.objectstore": "bluestore",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.osd_id": "1",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.type": "block",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.vdo": "0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.with_tpm": "0"
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            },
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "type": "block",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "vg_name": "ceph_vg1"
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:        }
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:    ],
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:    "2": [
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:        {
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "devices": [
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "/dev/loop5"
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            ],
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_name": "ceph_lv2",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_size": "21470642176",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "name": "ceph_lv2",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "tags": {
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.cluster_name": "ceph",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.crush_device_class": "",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.encrypted": "0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.objectstore": "bluestore",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.osd_id": "2",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.type": "block",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.vdo": "0",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:                "ceph.with_tpm": "0"
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            },
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "type": "block",
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:            "vg_name": "ceph_vg2"
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:        }
Jan 27 09:55:12 np0005597378 practical_knuth[405890]:    ]
Jan 27 09:55:12 np0005597378 practical_knuth[405890]: }
Jan 27 09:55:12 np0005597378 systemd[1]: libpod-aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d.scope: Deactivated successfully.
Jan 27 09:55:12 np0005597378 podman[405873]: 2026-01-27 14:55:12.696624037 +0000 UTC m=+0.860431229 container died aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:55:12 np0005597378 nova_compute[238941]: 2026-01-27 14:55:12.930 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:13 np0005597378 systemd[1]: var-lib-containers-storage-overlay-0cf27e8d791b92fac4d2ff57de6eadf89379e63ea657890c7f18ba5327ad9aff-merged.mount: Deactivated successfully.
Jan 27 09:55:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Jan 27 09:55:14 np0005597378 podman[405873]: 2026-01-27 14:55:14.317543982 +0000 UTC m=+2.481351174 container remove aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:55:14 np0005597378 systemd[1]: libpod-conmon-aba9cd3b4445e24f1d644bccacc74de3ee51a483024da908e42902dc2f6b084d.scope: Deactivated successfully.
Jan 27 09:55:14 np0005597378 nova_compute[238941]: 2026-01-27 14:55:14.486 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:14 np0005597378 podman[405973]: 2026-01-27 14:55:14.745499039 +0000 UTC m=+0.023842480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:55:14 np0005597378 podman[405973]: 2026-01-27 14:55:14.977949513 +0000 UTC m=+0.256292934 container create 8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 27 09:55:15 np0005597378 systemd[1]: Started libpod-conmon-8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3.scope.
Jan 27 09:55:15 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:55:15 np0005597378 podman[405973]: 2026-01-27 14:55:15.222137993 +0000 UTC m=+0.500481424 container init 8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 27 09:55:15 np0005597378 podman[405973]: 2026-01-27 14:55:15.232149272 +0000 UTC m=+0.510492703 container start 8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 09:55:15 np0005597378 upbeat_golick[405988]: 167 167
Jan 27 09:55:15 np0005597378 systemd[1]: libpod-8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3.scope: Deactivated successfully.
Jan 27 09:55:15 np0005597378 conmon[405988]: conmon 8f6b18961d8576f22668 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3.scope/container/memory.events
Jan 27 09:55:15 np0005597378 podman[405973]: 2026-01-27 14:55:15.393038577 +0000 UTC m=+0.671382028 container attach 8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:55:15 np0005597378 podman[405973]: 2026-01-27 14:55:15.393690644 +0000 UTC m=+0.672034065 container died 8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 27 09:55:15 np0005597378 systemd[1]: var-lib-containers-storage-overlay-43832fea0a21fd86dd6fc77bb60b756f2bb33ad3f1328dac303a2feed589612c-merged.mount: Deactivated successfully.
Jan 27 09:55:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Jan 27 09:55:16 np0005597378 podman[405973]: 2026-01-27 14:55:16.278218378 +0000 UTC m=+1.556561799 container remove 8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 09:55:16 np0005597378 systemd[1]: libpod-conmon-8f6b18961d8576f2266810d47c63f1e66fcabd2491998e1a2e5c40934da7a7c3.scope: Deactivated successfully.
Jan 27 09:55:16 np0005597378 podman[406011]: 2026-01-27 14:55:16.429865695 +0000 UTC m=+0.029082702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:55:16 np0005597378 podman[406011]: 2026-01-27 14:55:16.648504819 +0000 UTC m=+0.247721796 container create fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:55:16 np0005597378 systemd[1]: Started libpod-conmon-fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3.scope.
Jan 27 09:55:16 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:55:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a789ba7669f30074b756d85e756fc578ca55d4338691647575b4860cb5f3e3b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:55:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a789ba7669f30074b756d85e756fc578ca55d4338691647575b4860cb5f3e3b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:55:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a789ba7669f30074b756d85e756fc578ca55d4338691647575b4860cb5f3e3b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:55:16 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a789ba7669f30074b756d85e756fc578ca55d4338691647575b4860cb5f3e3b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:55:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:55:17 np0005597378 podman[406011]: 2026-01-27 14:55:17.038459028 +0000 UTC m=+0.637676035 container init fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 27 09:55:17 np0005597378 podman[406011]: 2026-01-27 14:55:17.046054761 +0000 UTC m=+0.645271738 container start fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:55:17 np0005597378 podman[406011]: 2026-01-27 14:55:17.229977955 +0000 UTC m=+0.829194932 container attach fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cartwright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 27 09:55:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:55:17
Jan 27 09:55:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:55:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:55:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['images', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'vms', 'default.rgw.log']
Jan 27 09:55:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:55:17 np0005597378 lvm[406106]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:55:17 np0005597378 lvm[406107]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:55:17 np0005597378 lvm[406107]: VG ceph_vg1 finished
Jan 27 09:55:17 np0005597378 lvm[406106]: VG ceph_vg0 finished
Jan 27 09:55:17 np0005597378 lvm[406109]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:55:17 np0005597378 lvm[406109]: VG ceph_vg2 finished
Jan 27 09:55:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:55:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:55:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:55:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:55:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:55:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:55:17 np0005597378 naughty_cartwright[406028]: {}
Jan 27 09:55:17 np0005597378 nova_compute[238941]: 2026-01-27 14:55:17.935 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:17 np0005597378 systemd[1]: libpod-fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3.scope: Deactivated successfully.
Jan 27 09:55:17 np0005597378 systemd[1]: libpod-fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3.scope: Consumed 1.476s CPU time.
Jan 27 09:55:17 np0005597378 podman[406011]: 2026-01-27 14:55:17.956194681 +0000 UTC m=+1.555411678 container died fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:55:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Jan 27 09:55:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:55:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:55:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:55:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:55:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:55:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:55:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:55:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:55:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:55:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:55:18 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a789ba7669f30074b756d85e756fc578ca55d4338691647575b4860cb5f3e3b3-merged.mount: Deactivated successfully.
Jan 27 09:55:19 np0005597378 podman[406011]: 2026-01-27 14:55:19.413895428 +0000 UTC m=+3.013112405 container remove fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cartwright, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:55:19 np0005597378 systemd[1]: libpod-conmon-fb5541d6a416753551bc7f825365263212f9925f8a7c77e97ca36df412fb74e3.scope: Deactivated successfully.
Jan 27 09:55:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:55:19 np0005597378 nova_compute[238941]: 2026-01-27 14:55:19.487 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:55:19 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:55:19 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:55:19 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:55:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Jan 27 09:55:21 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:55:21 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:55:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Jan 27 09:55:22 np0005597378 nova_compute[238941]: 2026-01-27 14:55:22.939 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Jan 27 09:55:24 np0005597378 nova_compute[238941]: 2026-01-27 14:55:24.489 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Jan 27 09:55:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:55:27 np0005597378 nova_compute[238941]: 2026-01-27 14:55:27.943 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:55:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:55:29 np0005597378 nova_compute[238941]: 2026-01-27 14:55:29.490 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Jan 27 09:55:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:55:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Jan 27 09:55:32 np0005597378 podman[406148]: 2026-01-27 14:55:32.773127781 +0000 UTC m=+0.101755540 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 27 09:55:32 np0005597378 nova_compute[238941]: 2026-01-27 14:55:32.947 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Jan 27 09:55:34 np0005597378 nova_compute[238941]: 2026-01-27 14:55:34.492 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s
Jan 27 09:55:36 np0005597378 podman[406168]: 2026-01-27 14:55:36.078936675 +0000 UTC m=+0.085562686 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:55:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:55:37 np0005597378 nova_compute[238941]: 2026-01-27 14:55:37.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:55:37 np0005597378 nova_compute[238941]: 2026-01-27 14:55:37.414 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:55:37 np0005597378 nova_compute[238941]: 2026-01-27 14:55:37.414 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:55:37 np0005597378 nova_compute[238941]: 2026-01-27 14:55:37.414 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:55:37 np0005597378 nova_compute[238941]: 2026-01-27 14:55:37.414 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:55:37 np0005597378 nova_compute[238941]: 2026-01-27 14:55:37.415 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:55:37 np0005597378 nova_compute[238941]: 2026-01-27 14:55:37.949 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:55:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4013926238' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:55:38 np0005597378 nova_compute[238941]: 2026-01-27 14:55:38.060 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:55:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:55:38 np0005597378 nova_compute[238941]: 2026-01-27 14:55:38.357 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:55:38 np0005597378 nova_compute[238941]: 2026-01-27 14:55:38.358 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3491MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:55:38 np0005597378 nova_compute[238941]: 2026-01-27 14:55:38.358 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:55:38 np0005597378 nova_compute[238941]: 2026-01-27 14:55:38.359 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:55:38 np0005597378 nova_compute[238941]: 2026-01-27 14:55:38.423 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:55:38 np0005597378 nova_compute[238941]: 2026-01-27 14:55:38.423 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:55:38 np0005597378 nova_compute[238941]: 2026-01-27 14:55:38.445 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:55:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:55:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664164979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:55:39 np0005597378 nova_compute[238941]: 2026-01-27 14:55:39.117 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:55:39 np0005597378 nova_compute[238941]: 2026-01-27 14:55:39.125 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:55:39 np0005597378 nova_compute[238941]: 2026-01-27 14:55:39.145 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:55:39 np0005597378 nova_compute[238941]: 2026-01-27 14:55:39.147 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:55:39 np0005597378 nova_compute[238941]: 2026-01-27 14:55:39.148 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:55:39 np0005597378 nova_compute[238941]: 2026-01-27 14:55:39.494 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:55:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:55:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:55:42 np0005597378 nova_compute[238941]: 2026-01-27 14:55:42.142 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:55:42 np0005597378 nova_compute[238941]: 2026-01-27 14:55:42.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:55:42 np0005597378 nova_compute[238941]: 2026-01-27 14:55:42.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:55:42 np0005597378 nova_compute[238941]: 2026-01-27 14:55:42.952 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:55:44 np0005597378 nova_compute[238941]: 2026-01-27 14:55:44.533 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:55:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:55:46.369 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:55:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:55:46.370 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:55:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:55:46.370 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:55:46 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:55:47 np0005597378 nova_compute[238941]: 2026-01-27 14:55:47.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:55:47 np0005597378 nova_compute[238941]: 2026-01-27 14:55:47.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:55:47 np0005597378 nova_compute[238941]: 2026-01-27 14:55:47.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:55:47 np0005597378 nova_compute[238941]: 2026-01-27 14:55:47.401 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:55:47 np0005597378 nova_compute[238941]: 2026-01-27 14:55:47.402 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:55:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:55:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:55:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:55:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:55:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:55:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:55:47 np0005597378 nova_compute[238941]: 2026-01-27 14:55:47.975 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:55:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:55:49 np0005597378 nova_compute[238941]: 2026-01-27 14:55:49.535 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:55:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:55:51 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:55:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.381258) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525752381297, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 854, "num_deletes": 251, "total_data_size": 1146136, "memory_usage": 1166144, "flush_reason": "Manual Compaction"}
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525752405821, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1135194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69666, "largest_seqno": 70519, "table_properties": {"data_size": 1130914, "index_size": 1995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9447, "raw_average_key_size": 19, "raw_value_size": 1122316, "raw_average_value_size": 2318, "num_data_blocks": 89, "num_entries": 484, "num_filter_entries": 484, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525677, "oldest_key_time": 1769525677, "file_creation_time": 1769525752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 24637 microseconds, and 4181 cpu microseconds.
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.405889) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1135194 bytes OK
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.405914) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.421484) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.421530) EVENT_LOG_v1 {"time_micros": 1769525752421520, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.421559) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 1141947, prev total WAL file size 1141947, number of live WAL files 2.
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.422468) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1108KB)], [164(9874KB)]
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525752422539, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11246622, "oldest_snapshot_seqno": -1}
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8923 keys, 9497818 bytes, temperature: kUnknown
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525752517728, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9497818, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9443338, "index_size": 31071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22341, "raw_key_size": 233856, "raw_average_key_size": 26, "raw_value_size": 9289273, "raw_average_value_size": 1041, "num_data_blocks": 1192, "num_entries": 8923, "num_filter_entries": 8923, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.517982) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9497818 bytes
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.521187) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.1 rd, 99.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.6 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(18.3) write-amplify(8.4) OK, records in: 9437, records dropped: 514 output_compression: NoCompression
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.521214) EVENT_LOG_v1 {"time_micros": 1769525752521202, "job": 102, "event": "compaction_finished", "compaction_time_micros": 95254, "compaction_time_cpu_micros": 27316, "output_level": 6, "num_output_files": 1, "total_output_size": 9497818, "num_input_records": 9437, "num_output_records": 8923, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525752521559, "job": 102, "event": "table_file_deletion", "file_number": 166}
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525752523487, "job": 102, "event": "table_file_deletion", "file_number": 164}
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.422240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.523526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.523530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.523532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.523534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:55:52 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:55:52.523536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:55:52 np0005597378 nova_compute[238941]: 2026-01-27 14:55:52.979 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:55:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:55:54 np0005597378 nova_compute[238941]: 2026-01-27 14:55:54.537 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:55:55 np0005597378 nova_compute[238941]: 2026-01-27 14:55:55.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:55:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:55:56 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:55:57 np0005597378 nova_compute[238941]: 2026-01-27 14:55:57.983 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:55:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:55:59 np0005597378 nova_compute[238941]: 2026-01-27 14:55:59.539 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:55:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:55:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1556719735' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:55:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:55:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1556719735' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:56:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:01 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:56:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:02 np0005597378 nova_compute[238941]: 2026-01-27 14:56:02.986 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:56:03 np0005597378 podman[406236]: 2026-01-27 14:56:03.711278011 +0000 UTC m=+0.053613618 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 09:56:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:04 np0005597378 nova_compute[238941]: 2026-01-27 14:56:04.541 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:56:05 np0005597378 nova_compute[238941]: 2026-01-27 14:56:05.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:56:05 np0005597378 nova_compute[238941]: 2026-01-27 14:56:05.381 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 09:56:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:06 np0005597378 nova_compute[238941]: 2026-01-27 14:56:06.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:56:06 np0005597378 podman[406254]: 2026-01-27 14:56:06.72791526 +0000 UTC m=+0.073037400 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 09:56:06 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:56:07 np0005597378 nova_compute[238941]: 2026-01-27 14:56:07.988 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:56:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:09 np0005597378 nova_compute[238941]: 2026-01-27 14:56:09.543 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:56:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:11 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:56:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:12 np0005597378 nova_compute[238941]: 2026-01-27 14:56:12.994 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:56:13 np0005597378 nova_compute[238941]: 2026-01-27 14:56:13.375 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 09:56:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:14 np0005597378 nova_compute[238941]: 2026-01-27 14:56:14.546 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:56:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:16 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:56:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:56:17
Jan 27 09:56:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:56:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:56:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.meta']
Jan 27 09:56:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:56:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:56:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:56:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:56:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:56:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:56:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:56:17 np0005597378 nova_compute[238941]: 2026-01-27 14:56:17.998 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:56:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:56:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:56:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:56:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:56:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:56:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:56:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:56:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:56:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:56:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:56:19 np0005597378 nova_compute[238941]: 2026-01-27 14:56:19.548 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 09:56:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:56:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:56:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:56:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:56:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:56:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:56:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:56:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:56:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:56:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:56:20 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:56:20 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:56:20 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:56:21 np0005597378 podman[406428]: 2026-01-27 14:56:21.093561545 +0000 UTC m=+0.026860811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:56:21 np0005597378 podman[406428]: 2026-01-27 14:56:21.208256392 +0000 UTC m=+0.141555628 container create 4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_nash, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 27 09:56:21 np0005597378 systemd[1]: Started libpod-conmon-4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48.scope.
Jan 27 09:56:21 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:56:21 np0005597378 podman[406428]: 2026-01-27 14:56:21.447760975 +0000 UTC m=+0.381060261 container init 4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_nash, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 27 09:56:21 np0005597378 podman[406428]: 2026-01-27 14:56:21.458603756 +0000 UTC m=+0.391902992 container start 4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_nash, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 09:56:21 np0005597378 dazzling_nash[406444]: 167 167
Jan 27 09:56:21 np0005597378 systemd[1]: libpod-4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48.scope: Deactivated successfully.
Jan 27 09:56:21 np0005597378 podman[406428]: 2026-01-27 14:56:21.710980985 +0000 UTC m=+0.644280281 container attach 4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:56:21 np0005597378 podman[406428]: 2026-01-27 14:56:21.712422624 +0000 UTC m=+0.645721870 container died 4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_nash, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:56:21 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:56:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:56:22 np0005597378 systemd[1]: var-lib-containers-storage-overlay-a277dbac2fbe026609b518605b1d2fed95a301eecc3c8ba307c25cd591d68189-merged.mount: Deactivated successfully.
Jan 27 09:56:23 np0005597378 nova_compute[238941]: 2026-01-27 14:56:23.002 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:23 np0005597378 podman[406428]: 2026-01-27 14:56:23.555914567 +0000 UTC m=+2.489213803 container remove 4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_nash, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 27 09:56:23 np0005597378 systemd[1]: libpod-conmon-4c2d6f600a9528b5e76052663d88d05b8c222526283ffc4df239257e62a02b48.scope: Deactivated successfully.
Jan 27 09:56:23 np0005597378 podman[406467]: 2026-01-27 14:56:23.792478682 +0000 UTC m=+0.094591868 container create 8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_yalow, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:56:23 np0005597378 podman[406467]: 2026-01-27 14:56:23.732483293 +0000 UTC m=+0.034596479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:56:23 np0005597378 systemd[1]: Started libpod-conmon-8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50.scope.
Jan 27 09:56:23 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:56:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c988bd3eceea6b1d9ff9a703277effd044beb6d0e0fb6afaae3fb13757026f60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:56:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c988bd3eceea6b1d9ff9a703277effd044beb6d0e0fb6afaae3fb13757026f60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:56:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c988bd3eceea6b1d9ff9a703277effd044beb6d0e0fb6afaae3fb13757026f60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:56:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c988bd3eceea6b1d9ff9a703277effd044beb6d0e0fb6afaae3fb13757026f60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:56:23 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c988bd3eceea6b1d9ff9a703277effd044beb6d0e0fb6afaae3fb13757026f60/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:56:23 np0005597378 podman[406467]: 2026-01-27 14:56:23.945417774 +0000 UTC m=+0.247530990 container init 8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_yalow, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 09:56:23 np0005597378 podman[406467]: 2026-01-27 14:56:23.955017221 +0000 UTC m=+0.257130407 container start 8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Jan 27 09:56:23 np0005597378 podman[406467]: 2026-01-27 14:56:23.989029424 +0000 UTC m=+0.291142640 container attach 8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_yalow, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:56:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:24 np0005597378 reverent_yalow[406483]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:56:24 np0005597378 reverent_yalow[406483]: --> All data devices are unavailable
Jan 27 09:56:24 np0005597378 systemd[1]: libpod-8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50.scope: Deactivated successfully.
Jan 27 09:56:24 np0005597378 podman[406467]: 2026-01-27 14:56:24.447739936 +0000 UTC m=+0.749853122 container died 8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_yalow, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:56:24 np0005597378 nova_compute[238941]: 2026-01-27 14:56:24.551 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:24 np0005597378 systemd[1]: var-lib-containers-storage-overlay-c988bd3eceea6b1d9ff9a703277effd044beb6d0e0fb6afaae3fb13757026f60-merged.mount: Deactivated successfully.
Jan 27 09:56:25 np0005597378 podman[406467]: 2026-01-27 14:56:25.206062695 +0000 UTC m=+1.508175881 container remove 8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_yalow, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:56:25 np0005597378 systemd[1]: libpod-conmon-8f2bd1fcc10b1b8c2515f36fb7658b5f0d765a27221a185604ebbbaa3f983d50.scope: Deactivated successfully.
Jan 27 09:56:25 np0005597378 podman[406578]: 2026-01-27 14:56:25.778694793 +0000 UTC m=+0.108470521 container create 987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_volhard, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:56:25 np0005597378 podman[406578]: 2026-01-27 14:56:25.695316447 +0000 UTC m=+0.025092215 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:56:25 np0005597378 systemd[1]: Started libpod-conmon-987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e.scope.
Jan 27 09:56:25 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:56:25 np0005597378 podman[406578]: 2026-01-27 14:56:25.990970647 +0000 UTC m=+0.320746395 container init 987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 27 09:56:25 np0005597378 podman[406578]: 2026-01-27 14:56:25.999743832 +0000 UTC m=+0.329519550 container start 987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:56:26 np0005597378 kind_volhard[406594]: 167 167
Jan 27 09:56:26 np0005597378 systemd[1]: libpod-987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e.scope: Deactivated successfully.
Jan 27 09:56:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:26 np0005597378 podman[406578]: 2026-01-27 14:56:26.120057779 +0000 UTC m=+0.449833507 container attach 987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 09:56:26 np0005597378 podman[406578]: 2026-01-27 14:56:26.120578103 +0000 UTC m=+0.450353831 container died 987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:56:26 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e790fcf5a5e1ac732bbc58953e3578a4602815d5a5cd74de2e0f2834e28d27c7-merged.mount: Deactivated successfully.
Jan 27 09:56:26 np0005597378 podman[406578]: 2026-01-27 14:56:26.518136545 +0000 UTC m=+0.847912283 container remove 987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_volhard, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:56:26 np0005597378 systemd[1]: libpod-conmon-987c00d4303e81fba881b9c787430c7447f65649ca2699ca6366f75b4ec8f76e.scope: Deactivated successfully.
Jan 27 09:56:26 np0005597378 podman[406620]: 2026-01-27 14:56:26.73156646 +0000 UTC m=+0.074906630 container create b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_galois, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:56:26 np0005597378 podman[406620]: 2026-01-27 14:56:26.682547186 +0000 UTC m=+0.025887376 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:56:26 np0005597378 systemd[1]: Started libpod-conmon-b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c.scope.
Jan 27 09:56:26 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:56:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9e4f7affdc5b4f86be4b5586649e4a431953adee99c4c5f6351943c53aa023/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:56:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9e4f7affdc5b4f86be4b5586649e4a431953adee99c4c5f6351943c53aa023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:56:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9e4f7affdc5b4f86be4b5586649e4a431953adee99c4c5f6351943c53aa023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:56:26 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9e4f7affdc5b4f86be4b5586649e4a431953adee99c4c5f6351943c53aa023/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:56:26 np0005597378 podman[406620]: 2026-01-27 14:56:26.850259203 +0000 UTC m=+0.193599393 container init b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 27 09:56:26 np0005597378 podman[406620]: 2026-01-27 14:56:26.858860124 +0000 UTC m=+0.202200294 container start b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_galois, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 09:56:26 np0005597378 podman[406620]: 2026-01-27 14:56:26.888622002 +0000 UTC m=+0.231962202 container attach b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_galois, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:56:27 np0005597378 sad_galois[406636]: {
Jan 27 09:56:27 np0005597378 sad_galois[406636]:    "0": [
Jan 27 09:56:27 np0005597378 sad_galois[406636]:        {
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "devices": [
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "/dev/loop3"
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            ],
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_name": "ceph_lv0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_size": "21470642176",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "name": "ceph_lv0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "tags": {
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.cluster_name": "ceph",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.crush_device_class": "",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.encrypted": "0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.objectstore": "bluestore",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.osd_id": "0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.type": "block",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.vdo": "0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.with_tpm": "0"
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            },
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "type": "block",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "vg_name": "ceph_vg0"
Jan 27 09:56:27 np0005597378 sad_galois[406636]:        }
Jan 27 09:56:27 np0005597378 sad_galois[406636]:    ],
Jan 27 09:56:27 np0005597378 sad_galois[406636]:    "1": [
Jan 27 09:56:27 np0005597378 sad_galois[406636]:        {
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "devices": [
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "/dev/loop4"
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            ],
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_name": "ceph_lv1",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_size": "21470642176",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "name": "ceph_lv1",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "tags": {
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.cluster_name": "ceph",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.crush_device_class": "",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.encrypted": "0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.objectstore": "bluestore",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.osd_id": "1",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.type": "block",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.vdo": "0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.with_tpm": "0"
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            },
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "type": "block",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "vg_name": "ceph_vg1"
Jan 27 09:56:27 np0005597378 sad_galois[406636]:        }
Jan 27 09:56:27 np0005597378 sad_galois[406636]:    ],
Jan 27 09:56:27 np0005597378 sad_galois[406636]:    "2": [
Jan 27 09:56:27 np0005597378 sad_galois[406636]:        {
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "devices": [
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "/dev/loop5"
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            ],
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_name": "ceph_lv2",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_size": "21470642176",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "name": "ceph_lv2",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "tags": {
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.cluster_name": "ceph",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.crush_device_class": "",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.encrypted": "0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.objectstore": "bluestore",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.osd_id": "2",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.type": "block",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.vdo": "0",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:                "ceph.with_tpm": "0"
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            },
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "type": "block",
Jan 27 09:56:27 np0005597378 sad_galois[406636]:            "vg_name": "ceph_vg2"
Jan 27 09:56:27 np0005597378 sad_galois[406636]:        }
Jan 27 09:56:27 np0005597378 sad_galois[406636]:    ]
Jan 27 09:56:27 np0005597378 sad_galois[406636]: }
Jan 27 09:56:27 np0005597378 systemd[1]: libpod-b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c.scope: Deactivated successfully.
Jan 27 09:56:27 np0005597378 podman[406620]: 2026-01-27 14:56:27.178641821 +0000 UTC m=+0.521982011 container died b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_galois, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 27 09:56:27 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3b9e4f7affdc5b4f86be4b5586649e4a431953adee99c4c5f6351943c53aa023-merged.mount: Deactivated successfully.
Jan 27 09:56:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:56:27 np0005597378 podman[406620]: 2026-01-27 14:56:27.407068947 +0000 UTC m=+0.750409117 container remove b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_galois, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:56:27 np0005597378 systemd[1]: libpod-conmon-b3d6eb5db1778971a405cae3f9a67adafad1bc76c4f3ed787baa4bb55bb0316c.scope: Deactivated successfully.
Jan 27 09:56:27 np0005597378 podman[406717]: 2026-01-27 14:56:27.996760923 +0000 UTC m=+0.069305039 container create 23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_banzai, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:56:28 np0005597378 nova_compute[238941]: 2026-01-27 14:56:28.005 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:28 np0005597378 podman[406717]: 2026-01-27 14:56:27.954645783 +0000 UTC m=+0.027189929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:56:28 np0005597378 systemd[1]: Started libpod-conmon-23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0.scope.
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:56:28 np0005597378 podman[406717]: 2026-01-27 14:56:28.119541066 +0000 UTC m=+0.192085182 container init 23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 09:56:28 np0005597378 podman[406717]: 2026-01-27 14:56:28.132042432 +0000 UTC m=+0.204586548 container start 23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:56:28 np0005597378 heuristic_banzai[406734]: 167 167
Jan 27 09:56:28 np0005597378 systemd[1]: libpod-23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0.scope: Deactivated successfully.
Jan 27 09:56:28 np0005597378 podman[406717]: 2026-01-27 14:56:28.144205467 +0000 UTC m=+0.216749583 container attach 23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:56:28 np0005597378 podman[406717]: 2026-01-27 14:56:28.145520143 +0000 UTC m=+0.218064259 container died 23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 27 09:56:28 np0005597378 systemd[1]: var-lib-containers-storage-overlay-17117829f06994738883f0e8cf4e76b66729d44da68993956adcf3abbd54942d-merged.mount: Deactivated successfully.
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:56:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:56:28 np0005597378 podman[406717]: 2026-01-27 14:56:28.352229547 +0000 UTC m=+0.424773663 container remove 23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 27 09:56:28 np0005597378 systemd[1]: libpod-conmon-23c71785cf88c0a9cfc2a0569ba2177eafaf0220e19ed45f1886889c3cd697c0.scope: Deactivated successfully.
Jan 27 09:56:28 np0005597378 podman[406758]: 2026-01-27 14:56:28.557090981 +0000 UTC m=+0.052805446 container create 6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pascal, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:56:28 np0005597378 systemd[1]: Started libpod-conmon-6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501.scope.
Jan 27 09:56:28 np0005597378 podman[406758]: 2026-01-27 14:56:28.531872475 +0000 UTC m=+0.027586960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:56:28 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:56:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07324586f269ae07d32ee8cc9521a26ff717f0cad0590c9f8cd42a10f3d3d6b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:56:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07324586f269ae07d32ee8cc9521a26ff717f0cad0590c9f8cd42a10f3d3d6b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:56:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07324586f269ae07d32ee8cc9521a26ff717f0cad0590c9f8cd42a10f3d3d6b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:56:28 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07324586f269ae07d32ee8cc9521a26ff717f0cad0590c9f8cd42a10f3d3d6b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:56:28 np0005597378 podman[406758]: 2026-01-27 14:56:28.655896802 +0000 UTC m=+0.151611287 container init 6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pascal, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 27 09:56:28 np0005597378 podman[406758]: 2026-01-27 14:56:28.666161517 +0000 UTC m=+0.161876012 container start 6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pascal, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 27 09:56:28 np0005597378 podman[406758]: 2026-01-27 14:56:28.675274312 +0000 UTC m=+0.170988797 container attach 6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 27 09:56:29 np0005597378 nova_compute[238941]: 2026-01-27 14:56:29.554 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:29 np0005597378 lvm[406853]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:56:29 np0005597378 lvm[406851]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:56:29 np0005597378 lvm[406851]: VG ceph_vg0 finished
Jan 27 09:56:29 np0005597378 lvm[406853]: VG ceph_vg1 finished
Jan 27 09:56:29 np0005597378 lvm[406854]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:56:29 np0005597378 lvm[406854]: VG ceph_vg2 finished
Jan 27 09:56:29 np0005597378 fervent_pascal[406773]: {}
Jan 27 09:56:29 np0005597378 systemd[1]: libpod-6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501.scope: Deactivated successfully.
Jan 27 09:56:29 np0005597378 podman[406758]: 2026-01-27 14:56:29.719094537 +0000 UTC m=+1.214809022 container died 6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:56:29 np0005597378 systemd[1]: libpod-6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501.scope: Consumed 1.666s CPU time.
Jan 27 09:56:29 np0005597378 systemd[1]: var-lib-containers-storage-overlay-07324586f269ae07d32ee8cc9521a26ff717f0cad0590c9f8cd42a10f3d3d6b0-merged.mount: Deactivated successfully.
Jan 27 09:56:29 np0005597378 podman[406758]: 2026-01-27 14:56:29.888526551 +0000 UTC m=+1.384241016 container remove 6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pascal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:56:29 np0005597378 systemd[1]: libpod-conmon-6d95ca2b008999ca581be878d6d8d50e51df58840d5749e2f5fff940aa367501.scope: Deactivated successfully.
Jan 27 09:56:29 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:56:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:56:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:56:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:56:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:56:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:56:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.443405) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525792443487, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 580, "num_deletes": 257, "total_data_size": 604923, "memory_usage": 615400, "flush_reason": "Manual Compaction"}
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525792695540, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 599459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70520, "largest_seqno": 71099, "table_properties": {"data_size": 596298, "index_size": 1068, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7233, "raw_average_key_size": 18, "raw_value_size": 589956, "raw_average_value_size": 1520, "num_data_blocks": 47, "num_entries": 388, "num_filter_entries": 388, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769525753, "oldest_key_time": 1769525753, "file_creation_time": 1769525792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 252177 microseconds, and 3247 cpu microseconds.
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.695588) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 599459 bytes OK
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.695613) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.783652) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.783702) EVENT_LOG_v1 {"time_micros": 1769525792783691, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.783730) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 601686, prev total WAL file size 628504, number of live WAL files 2.
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.785231) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303133' seq:72057594037927935, type:22 .. '6C6F676D0033323636' seq:0, type:0; will stop at (end)
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(585KB)], [167(9275KB)]
Jan 27 09:56:32 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525792785306, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10097277, "oldest_snapshot_seqno": -1}
Jan 27 09:56:33 np0005597378 nova_compute[238941]: 2026-01-27 14:56:33.010 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8786 keys, 9992838 bytes, temperature: kUnknown
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525793276665, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 9992838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9938085, "index_size": 31664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 231935, "raw_average_key_size": 26, "raw_value_size": 9785249, "raw_average_value_size": 1113, "num_data_blocks": 1216, "num_entries": 8786, "num_filter_entries": 8786, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769519573, "oldest_key_time": 0, "file_creation_time": 1769525792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fa55fde-2af1-4194-a177-64db194a2554", "db_session_id": "RO2N8YJLBKD38EI8MLHU", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.276927) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 9992838 bytes
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.431282) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 20.5 rd, 20.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.1 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(33.5) write-amplify(16.7) OK, records in: 9311, records dropped: 525 output_compression: NoCompression
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.431316) EVENT_LOG_v1 {"time_micros": 1769525793431303, "job": 104, "event": "compaction_finished", "compaction_time_micros": 491433, "compaction_time_cpu_micros": 28177, "output_level": 6, "num_output_files": 1, "total_output_size": 9992838, "num_input_records": 9311, "num_output_records": 8786, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525793431704, "job": 104, "event": "table_file_deletion", "file_number": 169}
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769525793433624, "job": 104, "event": "table_file_deletion", "file_number": 167}
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:32.785043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.433738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.433744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.433746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.433748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:56:33 np0005597378 ceph-mon[75090]: rocksdb: (Original Log Time 2026/01/27-14:56:33.433750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 27 09:56:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:34 np0005597378 nova_compute[238941]: 2026-01-27 14:56:34.555 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:34 np0005597378 podman[406895]: 2026-01-27 14:56:34.731058991 +0000 UTC m=+0.066884545 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 09:56:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:56:37 np0005597378 podman[406915]: 2026-01-27 14:56:37.756021252 +0000 UTC m=+0.084371833 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 09:56:38 np0005597378 nova_compute[238941]: 2026-01-27 14:56:38.014 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:38 np0005597378 nova_compute[238941]: 2026-01-27 14:56:38.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:56:38 np0005597378 nova_compute[238941]: 2026-01-27 14:56:38.426 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:56:38 np0005597378 nova_compute[238941]: 2026-01-27 14:56:38.426 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:56:38 np0005597378 nova_compute[238941]: 2026-01-27 14:56:38.426 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:56:38 np0005597378 nova_compute[238941]: 2026-01-27 14:56:38.427 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:56:38 np0005597378 nova_compute[238941]: 2026-01-27 14:56:38.427 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:56:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:56:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372240222' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:56:39 np0005597378 nova_compute[238941]: 2026-01-27 14:56:39.052 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:56:39 np0005597378 nova_compute[238941]: 2026-01-27 14:56:39.233 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:56:39 np0005597378 nova_compute[238941]: 2026-01-27 14:56:39.234 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3463MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:56:39 np0005597378 nova_compute[238941]: 2026-01-27 14:56:39.234 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:56:39 np0005597378 nova_compute[238941]: 2026-01-27 14:56:39.234 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:56:39 np0005597378 nova_compute[238941]: 2026-01-27 14:56:39.296 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:56:39 np0005597378 nova_compute[238941]: 2026-01-27 14:56:39.296 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:56:39 np0005597378 nova_compute[238941]: 2026-01-27 14:56:39.328 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:56:39 np0005597378 nova_compute[238941]: 2026-01-27 14:56:39.555 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:56:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2893613467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:56:39 np0005597378 nova_compute[238941]: 2026-01-27 14:56:39.977 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:56:39 np0005597378 nova_compute[238941]: 2026-01-27 14:56:39.983 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:56:40 np0005597378 nova_compute[238941]: 2026-01-27 14:56:40.000 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:56:40 np0005597378 nova_compute[238941]: 2026-01-27 14:56:40.002 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:56:40 np0005597378 nova_compute[238941]: 2026-01-27 14:56:40.003 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:56:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:56:42 np0005597378 nova_compute[238941]: 2026-01-27 14:56:42.996 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:56:43 np0005597378 nova_compute[238941]: 2026-01-27 14:56:43.048 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:43 np0005597378 nova_compute[238941]: 2026-01-27 14:56:43.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:56:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:44 np0005597378 nova_compute[238941]: 2026-01-27 14:56:44.381 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:56:44 np0005597378 nova_compute[238941]: 2026-01-27 14:56:44.558 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:46 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:56:46.370 154802 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:56:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:56:46.371 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:56:46 np0005597378 ovn_metadata_agent[154797]: 2026-01-27 14:56:46.371 154802 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:56:47 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:56:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:56:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:56:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:56:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:56:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:56:47 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:56:48 np0005597378 nova_compute[238941]: 2026-01-27 14:56:48.050 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:48 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:48 np0005597378 nova_compute[238941]: 2026-01-27 14:56:48.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:56:48 np0005597378 nova_compute[238941]: 2026-01-27 14:56:48.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 27 09:56:48 np0005597378 nova_compute[238941]: 2026-01-27 14:56:48.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 27 09:56:48 np0005597378 nova_compute[238941]: 2026-01-27 14:56:48.405 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 27 09:56:49 np0005597378 nova_compute[238941]: 2026-01-27 14:56:49.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:56:49 np0005597378 nova_compute[238941]: 2026-01-27 14:56:49.559 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:50 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:52 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:52 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:56:53 np0005597378 nova_compute[238941]: 2026-01-27 14:56:53.089 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:54 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:54 np0005597378 nova_compute[238941]: 2026-01-27 14:56:54.560 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:56 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:56 np0005597378 nova_compute[238941]: 2026-01-27 14:56:56.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:56:57 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:56:58 np0005597378 nova_compute[238941]: 2026-01-27 14:56:58.093 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:58 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:56:59 np0005597378 nova_compute[238941]: 2026-01-27 14:56:59.562 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:56:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 27 09:56:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/89023703' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 27 09:56:59 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 27 09:56:59 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/89023703' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 27 09:57:00 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:02 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:02 np0005597378 nova_compute[238941]: 2026-01-27 14:57:02.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:57:02 np0005597378 nova_compute[238941]: 2026-01-27 14:57:02.382 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 27 09:57:02 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:57:03 np0005597378 nova_compute[238941]: 2026-01-27 14:57:03.134 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:04 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:04 np0005597378 nova_compute[238941]: 2026-01-27 14:57:04.564 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:05 np0005597378 nova_compute[238941]: 2026-01-27 14:57:05.507 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:57:05 np0005597378 nova_compute[238941]: 2026-01-27 14:57:05.507 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 27 09:57:05 np0005597378 podman[406985]: 2026-01-27 14:57:05.746677699 +0000 UTC m=+0.083509241 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 09:57:06 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:07 np0005597378 nova_compute[238941]: 2026-01-27 14:57:07.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:57:07 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:57:08 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:08 np0005597378 nova_compute[238941]: 2026-01-27 14:57:08.138 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:08 np0005597378 podman[407005]: 2026-01-27 14:57:08.789071498 +0000 UTC m=+0.122682431 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 09:57:09 np0005597378 nova_compute[238941]: 2026-01-27 14:57:09.565 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:10 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:12 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:12 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:57:13 np0005597378 nova_compute[238941]: 2026-01-27 14:57:13.175 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:14 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:14 np0005597378 nova_compute[238941]: 2026-01-27 14:57:14.567 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:16 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Optimize plan auto_2026-01-27_14:57:17
Jan 27 09:57:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 27 09:57:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] do_upmap
Jan 27 09:57:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups', 'volumes', 'images', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms']
Jan 27 09:57:17 np0005597378 ceph-mgr[75385]: [balancer INFO root] prepared 0/10 upmap changes
Jan 27 09:57:17 np0005597378 nova_compute[238941]: 2026-01-27 14:57:17.383 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:57:17 np0005597378 nova_compute[238941]: 2026-01-27 14:57:17.383 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 27 09:57:17 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:57:17 np0005597378 nova_compute[238941]: 2026-01-27 14:57:17.419 238945 DEBUG nova.compute.manager [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 27 09:57:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:57:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:57:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:57:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:57:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] scanning for idle connections..
Jan 27 09:57:17 np0005597378 ceph-mgr[75385]: [volumes INFO mgr_util] cleaning up connections: []
Jan 27 09:57:18 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:18 np0005597378 nova_compute[238941]: 2026-01-27 14:57:18.178 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 27 09:57:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:57:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 27 09:57:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 27 09:57:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:57:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:57:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 27 09:57:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:57:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 27 09:57:18 np0005597378 ceph-mgr[75385]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 27 09:57:19 np0005597378 nova_compute[238941]: 2026-01-27 14:57:19.382 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:57:19 np0005597378 nova_compute[238941]: 2026-01-27 14:57:19.570 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:20 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:22 np0005597378 systemd-logind[786]: New session 58 of user zuul.
Jan 27 09:57:22 np0005597378 systemd[1]: Started Session 58 of User zuul.
Jan 27 09:57:22 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:22 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:57:23 np0005597378 nova_compute[238941]: 2026-01-27 14:57:23.181 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:24 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:24 np0005597378 nova_compute[238941]: 2026-01-27 14:57:24.571 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:25 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23314 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:25 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23316 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:26 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:26 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 27 09:57:26 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/370301219' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 27 09:57:27 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:28 np0005597378 nova_compute[238941]: 2026-01-27 14:57:28.183 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] _maybe_adjust
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6337238407898844e-05 of space, bias 1.0, pg target 0.004901171522369653 quantized to 32 (current 32)
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.696707835131933e-06 of space, bias 1.0, pg target 0.0014090123505395799 quantized to 32 (current 32)
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0567608154122002e-06 of space, bias 4.0, pg target 0.0012681129784946402 quantized to 16 (current 16)
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 27 09:57:28 np0005597378 ceph-mgr[75385]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 27 09:57:29 np0005597378 nova_compute[238941]: 2026-01-27 14:57:29.573 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:29 np0005597378 ovs-vsctl[407317]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 27 09:57:30 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:57:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:57:30 np0005597378 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 27 09:57:30 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:57:30 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:57:30 np0005597378 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 27 09:57:30 np0005597378 virtqemud[238711]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 27 09:57:31 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: cache status {prefix=cache status} (starting...)
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:57:31 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: client ls {prefix=client ls} (starting...)
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:57:31 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 27 09:57:31 np0005597378 lvm[407866]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:57:31 np0005597378 lvm[407866]: VG ceph_vg2 finished
Jan 27 09:57:31 np0005597378 lvm[407873]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:57:31 np0005597378 lvm[407873]: VG ceph_vg0 finished
Jan 27 09:57:31 np0005597378 lvm[407876]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:57:31 np0005597378 lvm[407876]: VG ceph_vg1 finished
Jan 27 09:57:32 np0005597378 podman[407897]: 2026-01-27 14:57:32.039953128 +0000 UTC m=+0.050661059 container create 085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_northcutt, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 27 09:57:32 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23320 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:32 np0005597378 podman[407897]: 2026-01-27 14:57:32.016274103 +0000 UTC m=+0.026982054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:57:32 np0005597378 systemd[1]: Started libpod-conmon-085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9.scope.
Jan 27 09:57:32 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:57:32 np0005597378 podman[407897]: 2026-01-27 14:57:32.192468254 +0000 UTC m=+0.203176205 container init 085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:57:32 np0005597378 podman[407897]: 2026-01-27 14:57:32.20403811 +0000 UTC m=+0.214746031 container start 085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:57:32 np0005597378 podman[407897]: 2026-01-27 14:57:32.209429473 +0000 UTC m=+0.220137424 container attach 085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 27 09:57:32 np0005597378 confident_northcutt[407923]: 167 167
Jan 27 09:57:32 np0005597378 systemd[1]: libpod-085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9.scope: Deactivated successfully.
Jan 27 09:57:32 np0005597378 podman[407897]: 2026-01-27 14:57:32.215676397 +0000 UTC m=+0.226384338 container died 085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_northcutt, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:57:32 np0005597378 systemd[1]: var-lib-containers-storage-overlay-e8ff542957a243287cb8b512ad66e75010196cb081ec688d2b579458946e5260-merged.mount: Deactivated successfully.
Jan 27 09:57:32 np0005597378 podman[407897]: 2026-01-27 14:57:32.310583363 +0000 UTC m=+0.321291284 container remove 085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 09:57:32 np0005597378 systemd[1]: libpod-conmon-085d31b6b89d7098c56d4de4321138642151834a3e2f2bd588790f43d0cf70d9.scope: Deactivated successfully.
Jan 27 09:57:32 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: damage ls {prefix=damage ls} (starting...)
Jan 27 09:57:32 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:57:32 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump loads {prefix=dump loads} (starting...)
Jan 27 09:57:32 np0005597378 podman[407992]: 2026-01-27 14:57:32.461646272 +0000 UTC m=+0.020579435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:57:32 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23322 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:32 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 27 09:57:32 np0005597378 podman[407992]: 2026-01-27 14:57:32.707972236 +0000 UTC m=+0.266905369 container create e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 27 09:57:32 np0005597378 systemd[1]: Started libpod-conmon-e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5.scope.
Jan 27 09:57:32 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:57:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2c1990fdcfa70852301c3e252076c5767c6f531f8d009d88ffea9260caabef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:57:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2c1990fdcfa70852301c3e252076c5767c6f531f8d009d88ffea9260caabef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:57:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2c1990fdcfa70852301c3e252076c5767c6f531f8d009d88ffea9260caabef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:57:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2c1990fdcfa70852301c3e252076c5767c6f531f8d009d88ffea9260caabef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:57:32 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2c1990fdcfa70852301c3e252076c5767c6f531f8d009d88ffea9260caabef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 27 09:57:32 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 27 09:57:32 np0005597378 podman[407992]: 2026-01-27 14:57:32.848991529 +0000 UTC m=+0.407924682 container init e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_fermi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:57:32 np0005597378 podman[407992]: 2026-01-27 14:57:32.864737275 +0000 UTC m=+0.423670408 container start e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 27 09:57:32 np0005597378 podman[407992]: 2026-01-27 14:57:32.961714736 +0000 UTC m=+0.520647869 container attach e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 27 09:57:32 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 27 09:57:33 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23324 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:33 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 27 09:57:33 np0005597378 nova_compute[238941]: 2026-01-27 14:57:33.187 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 27 09:57:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315590473' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 27 09:57:33 np0005597378 pensive_fermi[408061]: --> passed data devices: 0 physical, 3 LVM
Jan 27 09:57:33 np0005597378 pensive_fermi[408061]: --> All data devices are unavailable
Jan 27 09:57:33 np0005597378 podman[407992]: 2026-01-27 14:57:33.389567522 +0000 UTC m=+0.948500665 container died e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_fermi, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 27 09:57:33 np0005597378 systemd[1]: libpod-e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5.scope: Deactivated successfully.
Jan 27 09:57:33 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 27 09:57:33 np0005597378 systemd[1]: var-lib-containers-storage-overlay-3a2c1990fdcfa70852301c3e252076c5767c6f531f8d009d88ffea9260caabef-merged.mount: Deactivated successfully.
Jan 27 09:57:33 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 27 09:57:33 np0005597378 podman[407992]: 2026-01-27 14:57:33.64665645 +0000 UTC m=+1.205589583 container remove e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:57:33 np0005597378 systemd[1]: libpod-conmon-e90bc80347afcb8027bafe2554212174d65d56febc014208747dd224cbcd19b5.scope: Deactivated successfully.
Jan 27 09:57:33 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 27 09:57:33 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3618790613' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 27 09:57:33 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23328 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:33 np0005597378 ceph-mgr[75385]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:57:33 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: 2026-01-27T14:57:33.729+0000 7f5469e41640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:57:33 np0005597378 auditd[704]: Audit daemon rotating log files
Jan 27 09:57:33 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: ops {prefix=ops} (starting...)
Jan 27 09:57:34 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 27 09:57:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1782604491' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 27 09:57:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 27 09:57:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1617927192' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 27 09:57:34 np0005597378 podman[408283]: 2026-01-27 14:57:34.274088607 +0000 UTC m=+0.031176954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:57:34 np0005597378 podman[408283]: 2026-01-27 14:57:34.412697327 +0000 UTC m=+0.169785654 container create 5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 27 09:57:34 np0005597378 systemd[1]: Started libpod-conmon-5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970.scope.
Jan 27 09:57:34 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:57:34 np0005597378 podman[408283]: 2026-01-27 14:57:34.562593445 +0000 UTC m=+0.319681782 container init 5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 09:57:34 np0005597378 podman[408283]: 2026-01-27 14:57:34.570617057 +0000 UTC m=+0.327705374 container start 5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 27 09:57:34 np0005597378 gallant_jackson[408329]: 167 167
Jan 27 09:57:34 np0005597378 nova_compute[238941]: 2026-01-27 14:57:34.575 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:34 np0005597378 systemd[1]: libpod-5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970.scope: Deactivated successfully.
Jan 27 09:57:34 np0005597378 podman[408283]: 2026-01-27 14:57:34.589275569 +0000 UTC m=+0.346363886 container attach 5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:57:34 np0005597378 podman[408283]: 2026-01-27 14:57:34.58965268 +0000 UTC m=+0.346741007 container died 5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 09:57:34 np0005597378 systemd[1]: var-lib-containers-storage-overlay-9bbd22afe0bfbc3be511746a6560f0aaa0f4c6cde4a0aaafbe3f111a8471b976-merged.mount: Deactivated successfully.
Jan 27 09:57:34 np0005597378 podman[408283]: 2026-01-27 14:57:34.749933171 +0000 UTC m=+0.507021488 container remove 5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:57:34 np0005597378 systemd[1]: libpod-conmon-5a624e239a7a75cfea7c920b68bb7ddd1ebd8e030bc84034e8dc1d23b0bfd970.scope: Deactivated successfully.
Jan 27 09:57:34 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: session ls {prefix=session ls} (starting...)
Jan 27 09:57:34 np0005597378 ceph-mds[95200]: mds.cephfs.compute-0.ukpmyo asok_command: status {prefix=status} (starting...)
Jan 27 09:57:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 27 09:57:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2420509848' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 27 09:57:34 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 27 09:57:34 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3871679363' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 27 09:57:35 np0005597378 podman[408374]: 2026-01-27 14:57:34.934795342 +0000 UTC m=+0.033265879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:57:35 np0005597378 podman[408374]: 2026-01-27 14:57:35.102634754 +0000 UTC m=+0.201105271 container create 2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 27 09:57:35 np0005597378 systemd[1]: Started libpod-conmon-2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d.scope.
Jan 27 09:57:35 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:57:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b025f94af42fed8bc22af0b734d68ce15fdbbd491a39deae0de8ec5c3c30d14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:57:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b025f94af42fed8bc22af0b734d68ce15fdbbd491a39deae0de8ec5c3c30d14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:57:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b025f94af42fed8bc22af0b734d68ce15fdbbd491a39deae0de8ec5c3c30d14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:57:35 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b025f94af42fed8bc22af0b734d68ce15fdbbd491a39deae0de8ec5c3c30d14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:57:35 np0005597378 podman[408374]: 2026-01-27 14:57:35.187832864 +0000 UTC m=+0.286303401 container init 2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaum, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 27 09:57:35 np0005597378 podman[408374]: 2026-01-27 14:57:35.198890926 +0000 UTC m=+0.297361443 container start 2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 27 09:57:35 np0005597378 podman[408374]: 2026-01-27 14:57:35.208839348 +0000 UTC m=+0.307309885 container attach 2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]: {
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:    "0": [
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:        {
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "devices": [
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "/dev/loop3"
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            ],
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_name": "ceph_lv0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_size": "21470642176",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7401de7e-4bb5-49b0-a16c-bddf5aaf400a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "name": "ceph_lv0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "tags": {
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.block_uuid": "VRNVCj-7Kiq-B209-IWXU-8GVK-UTzA-M8w77B",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.cluster_name": "ceph",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.crush_device_class": "",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.encrypted": "0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.objectstore": "bluestore",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.osd_fsid": "7401de7e-4bb5-49b0-a16c-bddf5aaf400a",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.osd_id": "0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.type": "block",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.vdo": "0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.with_tpm": "0"
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            },
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "type": "block",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "vg_name": "ceph_vg0"
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:        }
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:    ],
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:    "1": [
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:        {
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "devices": [
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "/dev/loop4"
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            ],
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_name": "ceph_lv1",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_size": "21470642176",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=534ad76f-0fe2-4925-988a-e0878f02e0e5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "name": "ceph_lv1",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "tags": {
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.block_uuid": "qV3ENI-DiRK-3lEm-4h72-f0Yu-ega0-Y0kCzE",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.cluster_name": "ceph",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.crush_device_class": "",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.encrypted": "0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.objectstore": "bluestore",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.osd_fsid": "534ad76f-0fe2-4925-988a-e0878f02e0e5",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.osd_id": "1",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.type": "block",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.vdo": "0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.with_tpm": "0"
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            },
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "type": "block",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "vg_name": "ceph_vg1"
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:        }
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:    ],
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:    "2": [
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:        {
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "devices": [
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "/dev/loop5"
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            ],
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_name": "ceph_lv2",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_size": "21470642176",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4d8fd694-f443-5fb1-b612-70034b2f3c6e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=37f85830-e66d-4c55-9f5f-5b8a8c68c8a4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "lv_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "name": "ceph_lv2",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "tags": {
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.block_uuid": "ToYGPl-zUZo-LFEr-Wtfr-td4v-vX2s-5mgYSb",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.cephx_lockbox_secret": "",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.cluster_fsid": "4d8fd694-f443-5fb1-b612-70034b2f3c6e",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.cluster_name": "ceph",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.crush_device_class": "",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.encrypted": "0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.objectstore": "bluestore",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.osd_fsid": "37f85830-e66d-4c55-9f5f-5b8a8c68c8a4",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.osd_id": "2",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.type": "block",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.vdo": "0",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:                "ceph.with_tpm": "0"
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            },
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "type": "block",
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:            "vg_name": "ceph_vg2"
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:        }
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]:    ]
Jan 27 09:57:35 np0005597378 vibrant_chaum[408408]: }
Jan 27 09:57:35 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23342 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:35 np0005597378 systemd[1]: libpod-2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d.scope: Deactivated successfully.
Jan 27 09:57:35 np0005597378 podman[408374]: 2026-01-27 14:57:35.545918369 +0000 UTC m=+0.644388886 container died 2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaum, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 27 09:57:35 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 27 09:57:35 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814597607' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 27 09:57:35 np0005597378 systemd[1]: var-lib-containers-storage-overlay-6b025f94af42fed8bc22af0b734d68ce15fdbbd491a39deae0de8ec5c3c30d14-merged.mount: Deactivated successfully.
Jan 27 09:57:36 np0005597378 podman[408374]: 2026-01-27 14:57:35.999987248 +0000 UTC m=+1.098457765 container remove 2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:57:36 np0005597378 systemd[1]: libpod-conmon-2cd60c83fe90e6b8707bff137f07b7aca5229e3ee1e87624a93230e52a3d940d.scope: Deactivated successfully.
Jan 27 09:57:36 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23344 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:36 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:36 np0005597378 podman[408524]: 2026-01-27 14:57:36.160873336 +0000 UTC m=+0.093858340 container health_status a629085e82a5d37ef65e31d678b61e1cf9d237624a54282e566da492ac5b876e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 09:57:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 27 09:57:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3932124911' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 27 09:57:36 np0005597378 podman[408658]: 2026-01-27 14:57:36.615864069 +0000 UTC m=+0.048002069 container create bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:57:36 np0005597378 systemd[1]: Started libpod-conmon-bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767.scope.
Jan 27 09:57:36 np0005597378 podman[408658]: 2026-01-27 14:57:36.595807369 +0000 UTC m=+0.027945399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:57:36 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:57:36 np0005597378 podman[408658]: 2026-01-27 14:57:36.728495402 +0000 UTC m=+0.160633422 container init bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:57:36 np0005597378 podman[408658]: 2026-01-27 14:57:36.740170301 +0000 UTC m=+0.172308301 container start bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 27 09:57:36 np0005597378 busy_joliot[408677]: 167 167
Jan 27 09:57:36 np0005597378 systemd[1]: libpod-bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767.scope: Deactivated successfully.
Jan 27 09:57:36 np0005597378 podman[408658]: 2026-01-27 14:57:36.746743354 +0000 UTC m=+0.178881384 container attach bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 09:57:36 np0005597378 podman[408658]: 2026-01-27 14:57:36.749290092 +0000 UTC m=+0.181428112 container died bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 27 09:57:36 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 27 09:57:36 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2227961189' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 27 09:57:36 np0005597378 systemd[1]: var-lib-containers-storage-overlay-71f13c7110f3a377e3aca33b5507695fb6336c80d2e33e2f5ea65fc8f7e25e52-merged.mount: Deactivated successfully.
Jan 27 09:57:36 np0005597378 podman[408658]: 2026-01-27 14:57:36.825977257 +0000 UTC m=+0.258115257 container remove bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 27 09:57:36 np0005597378 systemd[1]: libpod-conmon-bec5a4958bde1ff68050297a6ccc0b4ea65c12d3f6fb0685a71637d84e7f5767.scope: Deactivated successfully.
Jan 27 09:57:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 27 09:57:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/230711875' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 27 09:57:37 np0005597378 podman[408728]: 2026-01-27 14:57:37.043474879 +0000 UTC m=+0.054289495 container create 46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_haibt, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 27 09:57:37 np0005597378 systemd[1]: Started libpod-conmon-46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3.scope.
Jan 27 09:57:37 np0005597378 podman[408728]: 2026-01-27 14:57:37.018555981 +0000 UTC m=+0.029370617 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 27 09:57:37 np0005597378 systemd[1]: Started libcrun container.
Jan 27 09:57:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a48cc66304f0ea886b259f0a4e1c6efa454da01eea2cd7f9c804b22ec20b64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 27 09:57:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a48cc66304f0ea886b259f0a4e1c6efa454da01eea2cd7f9c804b22ec20b64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 27 09:57:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a48cc66304f0ea886b259f0a4e1c6efa454da01eea2cd7f9c804b22ec20b64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 27 09:57:37 np0005597378 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a48cc66304f0ea886b259f0a4e1c6efa454da01eea2cd7f9c804b22ec20b64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 27 09:57:37 np0005597378 podman[408728]: 2026-01-27 14:57:37.146345596 +0000 UTC m=+0.157160232 container init 46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_haibt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 27 09:57:37 np0005597378 podman[408728]: 2026-01-27 14:57:37.152834366 +0000 UTC m=+0.163648982 container start 46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 27 09:57:37 np0005597378 podman[408728]: 2026-01-27 14:57:37.160407867 +0000 UTC m=+0.171222513 container attach 46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_haibt, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 27 09:57:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:57:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 27 09:57:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2581558141' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 27 09:57:37 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 27 09:57:37 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224875917' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 27 09:57:37 np0005597378 lvm[408980]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 27 09:57:37 np0005597378 lvm[408982]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 27 09:57:37 np0005597378 lvm[408981]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 27 09:57:37 np0005597378 lvm[408981]: VG ceph_vg1 finished
Jan 27 09:57:37 np0005597378 lvm[408980]: VG ceph_vg0 finished
Jan 27 09:57:37 np0005597378 lvm[408982]: VG ceph_vg2 finished
Jan 27 09:57:38 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23356 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:38 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: 2026-01-27T14:57:38.030+0000 7f5469e41640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 09:57:38 np0005597378 ceph-mgr[75385]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 27 09:57:38 np0005597378 jovial_haibt[408751]: {}
Jan 27 09:57:38 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:38 np0005597378 systemd[1]: libpod-46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3.scope: Deactivated successfully.
Jan 27 09:57:38 np0005597378 systemd[1]: libpod-46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3.scope: Consumed 1.438s CPU time.
Jan 27 09:57:38 np0005597378 podman[408728]: 2026-01-27 14:57:38.141173362 +0000 UTC m=+1.151987998 container died 46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 27 09:57:38 np0005597378 nova_compute[238941]: 2026-01-27 14:57:38.245 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 27 09:57:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178043735' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 27 09:57:38 np0005597378 systemd[1]: var-lib-containers-storage-overlay-20a48cc66304f0ea886b259f0a4e1c6efa454da01eea2cd7f9c804b22ec20b64-merged.mount: Deactivated successfully.
Jan 27 09:57:38 np0005597378 podman[408728]: 2026-01-27 14:57:38.683996076 +0000 UTC m=+1.694810692 container remove 46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_haibt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 27 09:57:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 27 09:57:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/522767366' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 27 09:57:38 np0005597378 systemd[1]: libpod-conmon-46810e0ba09b45a9e274b74a9e61d74438ba687dd27ed322abed4f8c7d039df3.scope: Deactivated successfully.
Jan 27 09:57:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 27 09:57:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:57:38 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 27 09:57:38 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23362 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:38 np0005597378 ceph-mon[75090]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:57:39 np0005597378 podman[409215]: 2026-01-27 14:57:39.017547282 +0000 UTC m=+0.127918288 container health_status 71f420ef9a8a99e99b3aeb6580252417c3fea12212d6bca71991cd67cacd4882 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ccfe4e01d473c8ebde1fd76e0750e6e376bf817cb352cc887731cf4ff3741348-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68-000463ee3dfc871e462a055b3596fe18aeef15f6245df3c47d8ce46e74144b68'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 09:57:39 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:57:39 np0005597378 ceph-mon[75090]: from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' 
Jan 27 09:57:39 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23366 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 27 09:57:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1723295431' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 27 09:57:39 np0005597378 nova_compute[238941]: 2026-01-27 14:57:39.394 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115619 data_alloc: 218103808 data_used: 6116698
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ed097000/0x0/0x4ffc00000, data 0x18e44bf/0x1a75000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 47144960 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 43687936 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd51c5000 session 0x564bcd6728c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccd908c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bcc229500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bcea961c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2decc00 session 0x564bd37c0e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c4400 session 0x564bcea6a700
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151123 data_alloc: 218103808 data_used: 6116698
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 46833664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bcea6afc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 46833664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bcea96000
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbda000/0x0/0x4ffc00000, data 0x1da14bf/0x1f32000, compress 0x0/0x0/0x0, omap 0x46c3e, meta 0x110a93c2), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bccddda40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 46899200 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 46899200 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 46899200 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3213774 data_alloc: 234881024 data_used: 16147817
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 46694400 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 46694400 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.585988045s of 12.116201401s, submitted: 52
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccdaddc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbd9000/0x0/0x4ffc00000, data 0x1da14cf/0x1f33000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 46546944 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x1dc54cf/0x1f57000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 46301184 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf5c5800 session 0x564bd3846c40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccebae00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bd2688e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc6400 session 0x564bccf4c8c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 301932544 unmapped: 45146112 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x1dc54cf/0x1f57000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2cc000 session 0x564bd0fa68c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc730800 session 0x564bcd674a80
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc730800 session 0x564bccf4cfc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3282758 data_alloc: 234881024 data_used: 20738921
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bca8d5500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bcc2b4000
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ac800 session 0x564bccd956c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec517000/0x0/0x4ffc00000, data 0x24634cf/0x25f5000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec517000/0x0/0x4ffc00000, data 0x24634cf/0x25f5000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bceffac40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 45056000 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323618 data_alloc: 234881024 data_used: 20788073
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bccf4ca80
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303890432 unmapped: 43188224 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bcd6756c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 43180032 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.031028748s of 10.043293953s, submitted: 53
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ebf18000/0x0/0x4ffc00000, data 0x2a60502/0x2bf4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 43081728 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306823168 unmapped: 40255488 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 38076416 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3435075 data_alloc: 234881024 data_used: 25682281
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311992320 unmapped: 35086336 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb1d7000/0x0/0x4ffc00000, data 0x3799502/0x392d000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311320576 unmapped: 35758080 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311320576 unmapped: 35758080 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb15e000/0x0/0x4ffc00000, data 0x381a502/0x39ae000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3448499 data_alloc: 234881024 data_used: 27217257
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb15e000/0x0/0x4ffc00000, data 0x381a502/0x39ae000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311771136 unmapped: 35307520 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.520648003s of 11.550464630s, submitted: 124
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312623104 unmapped: 34455552 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313344000 unmapped: 33734656 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x3b6e502/0x3d02000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,0,19])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3496971 data_alloc: 234881024 data_used: 27254121
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313868288 unmapped: 33210368 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bd34cd180
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc201000 session 0x564bd37c0000
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcca88400 session 0x564bd37c08c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bd08d6700
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313876480 unmapped: 33202176 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313925632 unmapped: 33153024 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319258624 unmapped: 27820032 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea442000/0x0/0x4ffc00000, data 0x453353b/0x46c9000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,3,2])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bd37c0000
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314040320 unmapped: 33038336 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcfd84800 session 0x564bcd6756c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd400 session 0x564bd37c1dc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3543038 data_alloc: 234881024 data_used: 28028281
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea425000/0x0/0x4ffc00000, data 0x455053b/0x46e6000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314040320 unmapped: 33038336 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb7800 session 0x564bcb64ac40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bd0fa7a40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314048512 unmapped: 33030144 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bd26881c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314146816 unmapped: 32931840 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.908327103s of 10.164081573s, submitted: 122
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bcc88a1c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318701568 unmapped: 28377088 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd400 session 0x564bcefdbc00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcdfc7400 session 0x564bca9f8540
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bd3847880
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb5bd400 session 0x564bcc229500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb543000 session 0x564bcdb70540
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd27af800 session 0x564bcefdac40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e99d4000/0x0/0x4ffc00000, data 0x4fa159d/0x5138000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314507264 unmapped: 32571392 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bd37c0e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3617665 data_alloc: 234881024 data_used: 28233081
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314507264 unmapped: 32571392 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314507264 unmapped: 32571392 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315170816 unmapped: 31907840 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 nova_compute[238941]: 2026-01-27 14:57:39.417 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:57:39 np0005597378 nova_compute[238941]: 2026-01-27 14:57:39.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:57:39 np0005597378 nova_compute[238941]: 2026-01-27 14:57:39.418 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:57:39 np0005597378 nova_compute[238941]: 2026-01-27 14:57:39.418 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3400 session 0x564bd40a8a80
Jan 27 09:57:39 np0005597378 nova_compute[238941]: 2026-01-27 14:57:39.419 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318849024 unmapped: 28229632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a9800 session 0x564bccebb6c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318849024 unmapped: 28229632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e99d0000/0x0/0x4ffc00000, data 0x4fa45f9/0x513c000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3649761 data_alloc: 234881024 data_used: 33556857
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318849024 unmapped: 28229632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf138000 session 0x564bca8d41c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ac800 session 0x564bd08d7340
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a000 session 0x564bccf3b180
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc29bc00 session 0x564bccb4efc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 28090368 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315604992 unmapped: 31473664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.126866341s of 10.051178932s, submitted: 51
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315604992 unmapped: 31473664 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb0a9800 session 0x564bcc88ac40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315949056 unmapped: 31129600 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3579543 data_alloc: 251658240 data_used: 36066169
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea867000/0x0/0x4ffc00000, data 0x410e5c6/0x42a4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 26509312 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea868000/0x0/0x4ffc00000, data 0x410e5c6/0x42a4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3613059 data_alloc: 251658240 data_used: 36390742
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325566464 unmapped: 21512192 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325615616 unmapped: 21463040 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325099520 unmapped: 21979136 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325156864 unmapped: 21921792 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea368000/0x0/0x4ffc00000, data 0x460e5c6/0x47a4000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325156864 unmapped: 21921792 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3618871 data_alloc: 251658240 data_used: 36960086
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325156864 unmapped: 21921792 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.973931313s of 12.332912445s, submitted: 76
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 329621504 unmapped: 17457152 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 329662464 unmapped: 17416192 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330014720 unmapped: 17063936 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330039296 unmapped: 17039360 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e99e5000/0x0/0x4ffc00000, data 0x4f905c6/0x5126000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3684395 data_alloc: 251658240 data_used: 38274902
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330072064 unmapped: 17006592 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd28e000 session 0x564bd34ccfc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca5c5400 session 0x564bccf4c8c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 330072064 unmapped: 17006592 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccf70800 session 0x564bd34cd6c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328065024 unmapped: 19013632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328065024 unmapped: 19013632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ac800 session 0x564bd0fa6e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf138000 session 0x564bccddc700
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328065024 unmapped: 19013632 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea3ec000/0x0/0x4ffc00000, data 0x458d531/0x4720000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce92f800 session 0x564bcd672fc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406949 data_alloc: 234881024 data_used: 21413057
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ea3ec000/0x0/0x4ffc00000, data 0x458d531/0x4720000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 23830528 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.216124535s of 10.042542458s, submitted: 195
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd2decc00 session 0x564bcea6a540
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bd34cd500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eaef3000/0x0/0x4ffc00000, data 0x31ac4cf/0x333e000, compress 0x0/0x0/0x0, omap 0x46e09, meta 0x110a91f7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 23846912 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bccf4c700
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 25698304 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 25698304 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec876000/0x0/0x4ffc00000, data 0x210544e/0x2294000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77c400 session 0x564bcd674e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505000 session 0x564bccd95880
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 25698304 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3111213 data_alloc: 218103808 data_used: 6241262
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb1c000/0x0/0x4ffc00000, data 0xe6144e/0xff0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb504c00 session 0x564bccb4f880
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103637 data_alloc: 218103808 data_used: 6136814
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103637 data_alloc: 218103808 data_used: 6136814
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4edb40000/0x0/0x4ffc00000, data 0xe3d44e/0xfcc000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313614336 unmapped: 33464320 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103637 data_alloc: 218103808 data_used: 6136814
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311730176 unmapped: 35348480 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.195335388s of 20.409894943s, submitted: 46
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325222400 unmapped: 21856256 heap: 347078656 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcccb6800 session 0x564bccd94c40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bccebb6c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb504c00 session 0x564bd08d7340
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb505000 session 0x564bccb4efc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc77c400 session 0x564bccddc700
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece5c000/0x0/0x4ffc00000, data 0x1b204b0/0x1cb0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece5c000/0x0/0x4ffc00000, data 0x1b204b0/0x1cb0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184004 data_alloc: 218103808 data_used: 6136814
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcd87a800 session 0x564bcc229180
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 41041920 heap: 352862208 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ece5c000/0x0/0x4ffc00000, data 0x1b204b0/0x1cb0000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [0,0,0,0,0,0,0,0,5,2])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bca802800 session 0x564bcea6a540
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bccb78400 session 0x564bd34cc380
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311828480 unmapped: 48390144 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf030c00 session 0x564bd2688700
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4abc00 session 0x564bccd90380
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcf030000 session 0x564bd37c08c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 311582720 unmapped: 48635904 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338534 data_alloc: 234881024 data_used: 19637230
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bce2ccc00 session 0x564bcb0821c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb75a400 session 0x564bcb082e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bd12e3c00 session 0x564bcefdb880
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338534 data_alloc: 234881024 data_used: 19637230
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcb4ab400 session 0x564bca8d4e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 47521792 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 316588032 unmapped: 43630592 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4ec102000/0x0/0x4ffc00000, data 0x287a4b0/0x2a0a000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320102400 unmapped: 40116224 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320102400 unmapped: 40116224 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.074495316s of 19.268436432s, submitted: 41
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467488 data_alloc: 251658240 data_used: 33608686
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322863104 unmapped: 37355520 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb859000/0x0/0x4ffc00000, data 0x311b4b0/0x32ab000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb847000/0x0/0x4ffc00000, data 0x31274b0/0x32b7000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3478950 data_alloc: 251658240 data_used: 33842158
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323108864 unmapped: 37109760 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb847000/0x0/0x4ffc00000, data 0x31274b0/0x32b7000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326033408 unmapped: 34185216 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 heartbeat osd_stat(store_statfs(0x4eb064000/0x0/0x4ffc00000, data 0x39184b0/0x3aa8000, compress 0x0/0x0/0x0, omap 0x46fd4, meta 0x110a902c), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326049792 unmapped: 34168832 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3522832 data_alloc: 251658240 data_used: 33960942
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 ms_handle_reset con 0x564bcc2d2000 session 0x564bceffb180
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326049792 unmapped: 34168832 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 280 handle_osd_map epochs [280,281], i have 281, src has [1,281]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.465786934s of 10.867195129s, submitted: 144
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bcc2d2000 session 0x564bd08d7880
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bcb75a400 session 0x564bcea6bdc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bcb4ab400 session 0x564bcb50d6c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 344834048 unmapped: 15384576 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 281 ms_handle_reset con 0x564bce2ccc00 session 0x564bd08d76c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 18046976 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333619200 unmapped: 26599424 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 282 ms_handle_reset con 0x564bd12e3c00 session 0x564bccddda40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333627392 unmapped: 26591232 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bcb4ab400 session 0x564bccdaddc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3678431 data_alloc: 251658240 data_used: 42726398
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 283 heartbeat osd_stat(store_statfs(0x4e9b73000/0x0/0x4ffc00000, data 0x4e02cae/0x4f97000, compress 0x0/0x0/0x0, omap 0x47530, meta 0x110a8ad0), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bcb75a400 session 0x564bccddc1c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333643776 unmapped: 26574848 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bcc2d2000 session 0x564bd3846c40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 283 ms_handle_reset con 0x564bce2ccc00 session 0x564bd40a8c40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 26558464 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 26558464 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 26558464 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333676544 unmapped: 26542080 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3680133 data_alloc: 251658240 data_used: 42726398
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333676544 unmapped: 26542080 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6d000/0x0/0x4ffc00000, data 0x4e062e5/0x4f9d000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333676544 unmapped: 26542080 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333692928 unmapped: 26525696 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333692928 unmapped: 26525696 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd1152000 session 0x564bcb0836c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bcea96000
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bccb4f500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bccf4da40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd2dedc00 session 0x564bccf3a000
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce2ccc00 session 0x564bca9f8380
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce2ccc00 session 0x564bd40a9500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bccd956c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bd08d6c40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6d000/0x0/0x4ffc00000, data 0x4e062e5/0x4f9d000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 26517504 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.470180511s of 13.830414772s, submitted: 70
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bcd92ba40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd2dedc00 session 0x564bcea97340
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bcb50d880
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bca944c40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bd0fa6540
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3730339 data_alloc: 251658240 data_used: 42726398
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce2ccc00 session 0x564bccdac540
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933f000/0x0/0x4ffc00000, data 0x56362e5/0x57cd000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd12e3800 session 0x564bcd672a80
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bd12e3800 session 0x564bcd6728c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bd08d6e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bcea6aa80
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bd08d68c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3732101 data_alloc: 251658240 data_used: 42726398
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933e000/0x0/0x4ffc00000, data 0x56362f5/0x57ce000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 26468352 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb0a8800 session 0x564bd08d6000
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb4ab400 session 0x564bd3846700
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 23896064 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933d000/0x0/0x4ffc00000, data 0x5636305/0x57cf000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 344580096 unmapped: 15638528 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 11714560 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3862183 data_alloc: 268435456 data_used: 61940222
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 7340032 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 7340032 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933d000/0x0/0x4ffc00000, data 0x5636305/0x57cf000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 7307264 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 7307264 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 7307264 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.061373711s of 15.209449768s, submitted: 14
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3862183 data_alloc: 268435456 data_used: 61940222
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e933b000/0x0/0x4ffc00000, data 0x5637305/0x57d0000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 7233536 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3875583 data_alloc: 268435456 data_used: 64652286
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 6529024 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 6168576 heap: 360218624 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 358891520 unmapped: 4472832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359596032 unmapped: 3768320 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e892e000/0x0/0x4ffc00000, data 0x6044305/0x61dd000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 3522560 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3951443 data_alloc: 268435456 data_used: 66069502
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8908000/0x0/0x4ffc00000, data 0x606b305/0x6204000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 3489792 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.538804054s of 14.776687622s, submitted: 86
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb75a400 session 0x564bd2689500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc2d2000 session 0x564bca9f9500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 3950891 data_alloc: 268435456 data_used: 66069502
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 3481600 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb0a9800 session 0x564bccd90380
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8905000/0x0/0x4ffc00000, data 0x606e305/0x6207000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 3457024 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9942000/0x0/0x4ffc00000, data 0x4e072f5/0x4f9f000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3792688 data_alloc: 268435456 data_used: 57512958
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcf138000 session 0x564bd34cddc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcc154c00 session 0x564bccdacfc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 3448832 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9942000/0x0/0x4ffc00000, data 0x4e072f5/0x4f9f000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bcb0a9800 session 0x564bd3846fc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3597436 data_alloc: 251658240 data_used: 43418622
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb0c1000/0x0/0x4ffc00000, data 0x38b32f5/0x3a4b000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb0c1000/0x0/0x4ffc00000, data 0x38b32f5/0x3a4b000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598844 data_alloc: 251658240 data_used: 43578366
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 11558912 heap: 363364352 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 ms_handle_reset con 0x564bce92f800 session 0x564bccb4f880
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 284 handle_osd_map epochs [284,285], i have 285, src has [1,285]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.184745789s of 16.264896393s, submitted: 22
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bcc29bc00 session 0x564bcd92b6c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bca63e000 session 0x564bccd94c40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bd2830400 session 0x564bd3847dc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 285 heartbeat osd_stat(store_statfs(0x4eb0c1000/0x0/0x4ffc00000, data 0x38b32f5/0x3a4b000, compress 0x0/0x0/0x0, omap 0x47ac3, meta 0x110a853d), peers [0,1] op hist [0,0,0,1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 369352704 unmapped: 6619136 heap: 375971840 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 285 ms_handle_reset con 0x564bcb0a9800 session 0x564bcea97dc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49987584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 286 heartbeat osd_stat(store_statfs(0x4e84b7000/0x0/0x4ffc00000, data 0x64b9e91/0x6653000, compress 0x0/0x0/0x0, omap 0x47b9e, meta 0x110a8462), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 286 ms_handle_reset con 0x564bcc154c00 session 0x564bccf3aa80
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355352576 unmapped: 50036736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355352576 unmapped: 50036736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e84b4000/0x0/0x4ffc00000, data 0x64bba81/0x6656000, compress 0x0/0x0/0x0, omap 0x47c79, meta 0x110a8387), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3859240 data_alloc: 251658240 data_used: 48092158
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 50012160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 50012160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 50003968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 288 ms_handle_reset con 0x564bcb543000 session 0x564bccebbc00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 49954816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb0b3000/0x0/0x4ffc00000, data 0x38ba245/0x3a57000, compress 0x0/0x0/0x0, omap 0x481df, meta 0x110a7e21), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 49954816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3640010 data_alloc: 251658240 data_used: 48092158
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 289 ms_handle_reset con 0x564bce2ccc00 session 0x564bcc2b56c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 289 ms_handle_reset con 0x564bcb015400 session 0x564bcdb70e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.908384323s of 10.784473419s, submitted: 63
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 289 heartbeat osd_stat(store_statfs(0x4eb0b0000/0x0/0x4ffc00000, data 0x38bbce0/0x3a5a000, compress 0x0/0x0/0x0, omap 0x4866c, meta 0x110a7994), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 289 ms_handle_reset con 0x564bce92f000 session 0x564bcb082380
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 49946624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 289 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x38bbcd0/0x3a59000, compress 0x0/0x0/0x0, omap 0x4866c, meta 0x110a7994), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 49938432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 289 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x38bbcd0/0x3a59000, compress 0x0/0x0/0x0, omap 0x4866c, meta 0x110a7994), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3639513 data_alloc: 251658240 data_used: 48092771
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 49938432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 49938432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 290 ms_handle_reset con 0x564bcccb5000 session 0x564bccd94540
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 337772544 unmapped: 67616768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 337772544 unmapped: 67616768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec58e000/0x0/0x4ffc00000, data 0x23de84e/0x257b000, compress 0x0/0x0/0x0, omap 0x48749, meta 0x110a78b7), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 290 ms_handle_reset con 0x564bca5c5400 session 0x564bd34cc000
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 290 ms_handle_reset con 0x564bce92f400 session 0x564bd37c0540
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 67878912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 291 ms_handle_reset con 0x564bce2cdc00 session 0x564bcd674e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194344 data_alloc: 218103808 data_used: 3716593
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed76f000/0x0/0x4ffc00000, data 0xe50287/0xfed000, compress 0x0/0x0/0x0, omap 0x48bd9, meta 0x110a7427), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320028672 unmapped: 85360640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.671979904s of 12.600020409s, submitted: 72
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197054 data_alloc: 218103808 data_used: 3720591
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edb1a000/0x0/0x4ffc00000, data 0xe51d06/0xff0000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x110a7349), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197054 data_alloc: 218103808 data_used: 3720591
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edb1a000/0x0/0x4ffc00000, data 0xe51d06/0xff0000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x110a7349), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcca7ec00 session 0x564bcb0821c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bccf73000 session 0x564bd0fa6a80
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc200c00 session 0x564bd37c0380
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29bc00 session 0x564bd2689340
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 319979520 unmapped: 85409792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc200c00 session 0x564bcdb70a80
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29bc00 session 0x564bd2688700
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcca7ec00 session 0x564bcb50c1c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bccf73000 session 0x564bcb50c380
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bce2cdc00 session 0x564bcb64bdc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3233345 data_alloc: 218103808 data_used: 3720591
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: mgrc ms_handle_reset ms_handle_reset con 0x564bcb543800
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: mgrc handle_mgr_configure stats_period=5
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcb543c00 session 0x564bcc2b5500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec493000/0x0/0x4ffc00000, data 0x1338d78/0x14d9000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcb4ac400 session 0x564bcc967a40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 84811776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc200c00 session 0x564bcea97180
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.606670380s of 14.736115456s, submitted: 46
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29bc00 session 0x564bd34cc380
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 85032960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcc29b800 session 0x564bd08d7500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3233898 data_alloc: 218103808 data_used: 3720607
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320339968 unmapped: 85049344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263082 data_alloc: 218103808 data_used: 8572831
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263082 data_alloc: 218103808 data_used: 8572831
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec492000/0x0/0x4ffc00000, data 0x1338d9b/0x14da000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 320208896 unmapped: 85180416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.413036346s of 13.421483994s, submitted: 4
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 323371008 unmapped: 82018304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec365000/0x0/0x4ffc00000, data 0x1465d9b/0x1607000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [0,0,1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321503232 unmapped: 83886080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321150976 unmapped: 84238336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebda1000/0x0/0x4ffc00000, data 0x1a29d9b/0x1bcb000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [0,0,0,0,0,4])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3313756 data_alloc: 218103808 data_used: 9641375
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebd59000/0x0/0x4ffc00000, data 0x1a71d9b/0x1c13000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 83763200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314012 data_alloc: 218103808 data_used: 9649567
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321544192 unmapped: 83845120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebd59000/0x0/0x4ffc00000, data 0x1a71d9b/0x1c13000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321544192 unmapped: 83845120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.950139046s of 10.105495453s, submitted: 85
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321552384 unmapped: 83836928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321552384 unmapped: 83836928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314116 data_alloc: 218103808 data_used: 9674143
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebd18000/0x0/0x4ffc00000, data 0x1ab2d9b/0x1c54000, compress 0x0/0x0/0x0, omap 0x48cb7, meta 0x12247349), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcca7ec00 session 0x564bceffa380
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bccf73000 session 0x564bcefdaa80
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 ms_handle_reset con 0x564bcb543c00 session 0x564bca8d4e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321527808 unmapped: 83861504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321560576 unmapped: 83828736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc200c00 session 0x564bd08d7880
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3321978 data_alloc: 218103808 data_used: 9670047
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 321552384 unmapped: 83836928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ebd11000/0x0/0x4ffc00000, data 0x1ab49a9/0x1c59000, compress 0x0/0x0/0x0, omap 0x4914a, meta 0x12246eb6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc29bc00 session 0x564bccf3a1c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc29b800 session 0x564bccf3b880
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 70385664 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.576875687s of 10.103278160s, submitted: 32
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 327180288 unmapped: 78209024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 76718080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 293 ms_handle_reset con 0x564bcc29bc00 session 0x564bccd91a40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 75661312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424611 data_alloc: 234881024 data_used: 16704927
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322871296 unmapped: 82518016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 294 ms_handle_reset con 0x564bcb543c00 session 0x564bccebba40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 294 heartbeat osd_stat(store_statfs(0x4e9de4000/0x0/0x4ffc00000, data 0x2841537/0x29e6000, compress 0x0/0x0/0x0, omap 0x493eb, meta 0x133e6c15), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 294 heartbeat osd_stat(store_statfs(0x4e9de4000/0x0/0x4ffc00000, data 0x2841537/0x29e6000, compress 0x0/0x0/0x0, omap 0x493eb, meta 0x133e6c15), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcc200c00 session 0x564bccf3b500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 295 heartbeat osd_stat(store_statfs(0x4e9de0000/0x0/0x4ffc00000, data 0x2843151/0x29ea000, compress 0x0/0x0/0x0, omap 0x4975b, meta 0x133e68a5), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322879488 unmapped: 82509824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcca7ec00 session 0x564bcc88b180
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcb543c00 session 0x564bcb64a540
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429181 data_alloc: 234881024 data_used: 16704943
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 295 ms_handle_reset con 0x564bcc200c00 session 0x564bd0fa6700
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322887680 unmapped: 82501632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 295 heartbeat osd_stat(store_statfs(0x4e9de2000/0x0/0x4ffc00000, data 0x2843151/0x29ea000, compress 0x0/0x0/0x0, omap 0x4975b, meta 0x133e68a5), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.745933533s of 10.679224014s, submitted: 58
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314638336 unmapped: 90750976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3310157 data_alloc: 218103808 data_used: 3724703
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314646528 unmapped: 90742784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bcefdb880
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x1be3b5e/0x1d8a000, compress 0x0/0x0/0x0, omap 0x49c01, meta 0x133e63ff), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x1be3b3b/0x1d89000, compress 0x0/0x0/0x0, omap 0x49c01, meta 0x133e63ff), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3309609 data_alloc: 218103808 data_used: 3720607
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaa3f000/0x0/0x4ffc00000, data 0x1be3b3b/0x1d89000, compress 0x0/0x0/0x0, omap 0x49c01, meta 0x133e63ff), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314654720 unmapped: 90734592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29bc00 session 0x564bccb4f500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bccf73000 session 0x564bca3cd880
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb543c00 session 0x564bccddc1c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc200c00 session 0x564bd3846540
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.256450653s of 12.440944672s, submitted: 37
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29bc00 session 0x564bca8d4700
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bcc228540
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311357 data_alloc: 218103808 data_used: 3728764
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcccb6400 session 0x564bccd95500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb543c00 session 0x564bca85d6c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc200c00 session 0x564bccdaddc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bceffb180
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314580992 unmapped: 90808320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29bc00 session 0x564bcc88a540
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb4ab800 session 0x564bcd674c40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcb543c00 session 0x564bcb50c8c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 heartbeat osd_stat(store_statfs(0x4e9172000/0x0/0x4ffc00000, data 0x34b2bad/0x365a000, compress 0x0/0x0/0x0, omap 0x49f17, meta 0x133e60e9), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc200c00 session 0x564bcc229180
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 ms_handle_reset con 0x564bcc29b800 session 0x564bcc2b4380
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 77316096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 296 handle_osd_map epochs [296,297], i have 297, src has [1,297]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 297 ms_handle_reset con 0x564bcdf3d400 session 0x564bccd90fc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e916e000/0x0/0x4ffc00000, data 0x34b472b/0x365b000, compress 0x0/0x0/0x0, omap 0x4a2f8, meta 0x133e5d08), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422141 data_alloc: 218103808 data_used: 9147162
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e9efa000/0x0/0x4ffc00000, data 0x272972b/0x28d0000, compress 0x0/0x0/0x0, omap 0x4a2e3, meta 0x133e5d1d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467581 data_alloc: 234881024 data_used: 16831258
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e9efa000/0x0/0x4ffc00000, data 0x272972b/0x28d0000, compress 0x0/0x0/0x0, omap 0x4a2e3, meta 0x133e5d1d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.041012764s of 13.806105614s, submitted: 93
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3470355 data_alloc: 234881024 data_used: 16831258
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325271552 unmapped: 80117760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325451776 unmapped: 79937536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4e9ef7000/0x0/0x4ffc00000, data 0x272b1aa/0x28d3000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491451 data_alloc: 234881024 data_used: 19464474
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4e9ef7000/0x0/0x4ffc00000, data 0x272b1aa/0x28d3000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325574656 unmapped: 79814656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 325844992 unmapped: 79544320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326017024 unmapped: 79372288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326017024 unmapped: 79372288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.217539787s of 10.448942184s, submitted: 41
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29bc00 session 0x564bd0fa6fc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc72f800 session 0x564bcd675dc0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 326017024 unmapped: 79372288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcb543c00 session 0x564bca944540
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318504960 unmapped: 86884352 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250829 data_alloc: 218103808 data_used: 3728650
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb072000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318513152 unmapped: 86876160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318521344 unmapped: 86867968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318521344 unmapped: 86867968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc200c00 session 0x564bceffa380
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29b800 session 0x564bcd674380
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcdf3d400 session 0x564bd40a8380
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcb543c00 session 0x564bcc228e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.690319061s of 43.858875275s, submitted: 23
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 322838528 unmapped: 82550784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc200c00 session 0x564bca9f81c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29b800 session 0x564bca944e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc72f800 session 0x564bd0fa6c40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcca88800 session 0x564bccf3a1c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcb543c00 session 0x564bca85c8c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3290573 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 86491136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3290573 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3290573 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318906368 unmapped: 86482944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc200c00 session 0x564bccddda40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 86474752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 86474752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314125 data_alloc: 218103808 data_used: 7700648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb27f000/0x0/0x4ffc00000, data 0x13a6148/0x154d000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc29b800 session 0x564bcb64b880
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcc72f800 session 0x564bccf3b500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.886404037s of 19.264944077s, submitted: 16
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 87212032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 ms_handle_reset con 0x564bcd87ac00 session 0x564bccf3a8c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313548800 unmapped: 91840512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313556992 unmapped: 91832320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 91824128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 91815936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313581568 unmapped: 91807744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313589760 unmapped: 91799552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251917 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313597952 unmapped: 91791360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.420921326s of 58.578922272s, submitted: 6
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [0,0,0,0,0,1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313622528 unmapped: 91766784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eb7ca000/0x0/0x4ffc00000, data 0xe5c138/0x1002000, compress 0x0/0x0/0x0, omap 0x4a7a1, meta 0x133e585f), peers [0,1] op hist [0,0,0,0,1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eb7c5000/0x0/0x4ffc00000, data 0xe5dd28/0x1005000, compress 0x0/0x0/0x0, omap 0x4a87b, meta 0x133e5785), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 299 ms_handle_reset con 0x564bcb543c00 session 0x564bca944e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253803 data_alloc: 218103808 data_used: 3732648
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eb7c5000/0x0/0x4ffc00000, data 0xe5dbf4/0x1003000, compress 0x0/0x0/0x0, omap 0x4a19b, meta 0x133e5e65), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 91725824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 91709440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 91701248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313712640 unmapped: 91676672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 91668480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 91660288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312721408 unmapped: 92667904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312721408 unmapped: 92667904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312729600 unmapped: 92659712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312737792 unmapped: 92651520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312754176 unmapped: 92635136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312770560 unmapped: 92618752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4801.0 total, 600.0 interval
Cumulative writes: 37K writes, 153K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
Cumulative WAL: 37K writes, 13K syncs, 2.86 writes per sync, written: 0.16 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3476 writes, 15K keys, 3476 commit groups, 1.0 writes per commit group, ingest: 17.17 MB, 0.03 MB/s
Interval WAL: 3476 writes, 1293 syncs, 2.69 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312778752 unmapped: 92610560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312786944 unmapped: 92602368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312795136 unmapped: 92594176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312803328 unmapped: 92585984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312811520 unmapped: 92577792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 3736709
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 ms_handle_reset con 0x564bcc200c00 session 0x564bd40a9a40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 312827904 unmapped: 92561408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 ms_handle_reset con 0x564bcc29b800 session 0x564bcb64ba40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315449344 unmapped: 89939968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315449344 unmapped: 89939968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315449344 unmapped: 89939968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 8389765
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315457536 unmapped: 89931776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eb7c4000/0x0/0x4ffc00000, data 0xe5f673/0x1006000, compress 0x0/0x0/0x0, omap 0x4a65c, meta 0x133e59a4), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315465728 unmapped: 89923584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256497 data_alloc: 218103808 data_used: 8389765
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 123.139335632s of 125.560668945s, submitted: 63
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315473920 unmapped: 89915392 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 315490304 unmapped: 89899008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 301 ms_handle_reset con 0x564bcc72f800 session 0x564bcc967880
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313696256 unmapped: 91693056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 91684864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ebc34000/0x0/0x4ffc00000, data 0x9f1230/0xb97000, compress 0x0/0x0/0x0, omap 0x4a737, meta 0x133e58c9), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3228506 data_alloc: 218103808 data_used: 3740719
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 302 ms_handle_reset con 0x564bcd87ac00 session 0x564bccebb500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309805056 unmapped: 95584256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ec430000/0x0/0x4ffc00000, data 0x1f4865/0x39a000, compress 0x0/0x0/0x0, omap 0x4acd7, meta 0x133e5329), peers [0,1] op hist [0,0,0,0,1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305979392 unmapped: 99409920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ec430000/0x0/0x4ffc00000, data 0x1f4865/0x39a000, compress 0x0/0x0/0x0, omap 0x4acd7, meta 0x133e5329), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3173403 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305979392 unmapped: 99409920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305979392 unmapped: 99409920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.102717400s of 11.751947403s, submitted: 81
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305987584 unmapped: 99401728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 305987584 unmapped: 99401728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 304 heartbeat osd_stat(store_statfs(0x4ec42f000/0x0/0x4ffc00000, data 0x1f62e4/0x39d000, compress 0x0/0x0/0x0, omap 0x4adb3, meta 0x133e524d), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306003968 unmapped: 99385344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3175457 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306012160 unmapped: 99377152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 304 heartbeat osd_stat(store_statfs(0x4ec42f000/0x0/0x4ffc00000, data 0x1f62e4/0x39d000, compress 0x0/0x0/0x0, omap 0x4adb3, meta 0x133e524d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306012160 unmapped: 99377152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 306012160 unmapped: 99377152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 304 handle_osd_map epochs [304,305], i have 305, src has [1,305]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 ms_handle_reset con 0x564bcb543c00 session 0x564bd2689340
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307093504 unmapped: 98295808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307109888 unmapped: 98279424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread fragmentation_score=0.004046 took=0.000057s
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 98271232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307134464 unmapped: 98254848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307142656 unmapped: 98246656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307150848 unmapped: 98238464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307159040 unmapped: 98230272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307167232 unmapped: 98222080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307175424 unmapped: 98213888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307183616 unmapped: 98205696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 98197504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184218 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 98189312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 108.550910950s of 110.262329102s, submitted: 108
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec427000/0x0/0x4ffc00000, data 0x1f7fc4/0x3a3000, compress 0x0/0x0/0x0, omap 0x4b27b, meta 0x133e4d85), peers [0,1] op hist [0,0,0,0,1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307208192 unmapped: 98181120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307216384 unmapped: 98172928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 305 handle_osd_map epochs [305,306], i have 306, src has [1,306]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 306 ms_handle_reset con 0x564bcc200c00 session 0x564bcd675a40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307265536 unmapped: 98123776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3234062 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307273728 unmapped: 98115584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 307 ms_handle_reset con 0x564bcc29b800 session 0x564bcefdb500
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307298304 unmapped: 98091008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307306496 unmapped: 98082816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307314688 unmapped: 98074624 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307322880 unmapped: 98066432 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262948 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307331072 unmapped: 98058240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb7ad000/0x0/0x4ffc00000, data 0xe6b762/0x101d000, compress 0x0/0x0/0x0, omap 0x4b92e, meta 0x133e46d2), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307339264 unmapped: 98050048 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.985069275s of 32.682743073s, submitted: 30
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3265706 data_alloc: 218103808 data_used: 140300
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307363840 unmapped: 98025472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 308 ms_handle_reset con 0x564bcc72f800 session 0x564bccb4e700
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 308 heartbeat osd_stat(store_statfs(0x4eb7ac000/0x0/0x4ffc00000, data 0xe6d231/0x101e000, compress 0x0/0x0/0x0, omap 0x4be07, meta 0x133e41f9), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 308 heartbeat osd_stat(store_statfs(0x4eb7ac000/0x0/0x4ffc00000, data 0xe6d20e/0x101d000, compress 0x0/0x0/0x0, omap 0x4be07, meta 0x133e41f9), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263809 data_alloc: 218103808 data_used: 140284
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307412992 unmapped: 97976320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307421184 unmapped: 97968128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307437568 unmapped: 97951744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307445760 unmapped: 97943552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307453952 unmapped: 97935360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307462144 unmapped: 97927168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307470336 unmapped: 97918976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307478528 unmapped: 97910784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307478528 unmapped: 97910784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307478528 unmapped: 97910784 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307486720 unmapped: 97902592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307494912 unmapped: 97894400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307494912 unmapped: 97894400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307494912 unmapped: 97894400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307503104 unmapped: 97886208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307511296 unmapped: 97878016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307519488 unmapped: 97869824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307519488 unmapped: 97869824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307519488 unmapped: 97869824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307527680 unmapped: 97861632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307535872 unmapped: 97853440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307544064 unmapped: 97845248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307544064 unmapped: 97845248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307552256 unmapped: 97837056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307552256 unmapped: 97837056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307560448 unmapped: 97828864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307568640 unmapped: 97820672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307585024 unmapped: 97804288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307593216 unmapped: 97796096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307609600 unmapped: 97779712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307617792 unmapped: 97771520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307625984 unmapped: 97763328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307642368 unmapped: 97746944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307650560 unmapped: 97738752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307658752 unmapped: 97730560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307675136 unmapped: 97714176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307683328 unmapped: 97705984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307691520 unmapped: 97697792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307707904 unmapped: 97681408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307716096 unmapped: 97673216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307716096 unmapped: 97673216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307724288 unmapped: 97665024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 97656832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 97632256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307765248 unmapped: 97624064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 97615872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 97615872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 97599488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 97591296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 97583104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 97574912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307822592 unmapped: 97566720 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307822592 unmapped: 97566720 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307822592 unmapped: 97566720 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307830784 unmapped: 97558528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307838976 unmapped: 97550336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307847168 unmapped: 97542144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307855360 unmapped: 97533952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307863552 unmapped: 97525760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307871744 unmapped: 97517568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 97509376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 97492992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 nova_compute[238941]: 2026-01-27 14:57:39.576 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 97484800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 97484800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 97484800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307912704 unmapped: 97476608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 97468416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 97460224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 97452032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 97443840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 97435648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 97427456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307978240 unmapped: 97411072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 97402880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308002816 unmapped: 97386496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308002816 unmapped: 97386496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308011008 unmapped: 97378304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308011008 unmapped: 97378304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308019200 unmapped: 97370112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308027392 unmapped: 97361920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 97353728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308043776 unmapped: 97345536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308043776 unmapped: 97345536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308051968 unmapped: 97337344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308051968 unmapped: 97337344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308051968 unmapped: 97337344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308060160 unmapped: 97329152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308060160 unmapped: 97329152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308060160 unmapped: 97329152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308068352 unmapped: 97320960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 97312768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 97304576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308092928 unmapped: 97296384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308101120 unmapped: 97288192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308109312 unmapped: 97280000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308125696 unmapped: 97263616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308133888 unmapped: 97255424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 97247232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308142080 unmapped: 97247232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308150272 unmapped: 97239040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308158464 unmapped: 97230848 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 97206272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 97206272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 97206272 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5401.0 total, 600.0 interval#012Cumulative writes: 38K writes, 155K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s#012Cumulative WAL: 38K writes, 13K syncs, 2.85 writes per sync, written: 0.16 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 477 writes, 1189 keys, 477 commit groups, 1.0 writes per commit group, ingest: 0.64 MB, 0.00 MB/s#012Interval WAL: 477 writes, 216 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 97198080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308199424 unmapped: 97189888 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308207616 unmapped: 97181696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: mgrc ms_handle_reset ms_handle_reset con 0x564bce92f400
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: mgrc handle_mgr_configure stats_period=5
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308314112 unmapped: 97075200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 ms_handle_reset con 0x564bcb4ac400 session 0x564bcb50dc00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 97067008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266503 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 97058816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 97058816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 97058816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 434.266174316s of 434.955169678s, submitted: 42
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 309 handle_osd_map epochs [309,310], i have 310, src has [1,310]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 310 heartbeat osd_stat(store_statfs(0x4eb7aa000/0x0/0x4ffc00000, data 0xe6ec8d/0x1020000, compress 0x0/0x0/0x0, omap 0x4bee3, meta 0x133e411d), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 310 ms_handle_reset con 0x564bcc200c00 session 0x564bccd90e00
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 96952320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 96952320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227396 data_alloc: 218103808 data_used: 144345
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 96952320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 311 ms_handle_reset con 0x564bcc29b800 session 0x564bd1a7b6c0
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308461568 unmapped: 96927744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ec418000/0x0/0x4ffc00000, data 0x202407/0x3b2000, compress 0x0/0x0/0x0, omap 0x4c713, meta 0x133e38ed), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3206379 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308453376 unmapped: 96935936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ec419000/0x0/0x4ffc00000, data 0x20242a/0x3b3000, compress 0x0/0x0/0x0, omap 0x4c713, meta 0x133e38ed), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 314171392 unmapped: 91217920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309534720 unmapped: 95854592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 ms_handle_reset con 0x564bcc72f800 session 0x564bde13f180
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3238116 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebf9f000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3238116 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.222564697s of 18.768489838s, submitted: 94
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308658176 unmapped: 96731136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308658176 unmapped: 96731136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 96722944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308674560 unmapped: 96714752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308682752 unmapped: 96706560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308690944 unmapped: 96698368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236676 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.976013184s of 45.642684937s, submitted: 114
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ebfa3000/0x0/0x4ffc00000, data 0x675a45/0x829000, compress 0x0/0x0/0x0, omap 0x4ccdb, meta 0x133e3325), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 96690176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 314 ms_handle_reset con 0x564bcd87ac00 session 0x564bdd5f8c40
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ebf9e000/0x0/0x4ffc00000, data 0x207612/0x3bb000, compress 0x0/0x0/0x0, omap 0x4d4af, meta 0x133e2b51), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217266 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ebf9e000/0x0/0x4ffc00000, data 0x207612/0x3bb000, compress 0x0/0x0/0x0, omap 0x4d4af, meta 0x133e2b51), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217266 data_alloc: 218103808 data_used: 148406
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 96657408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308748288 unmapped: 96641024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308748288 unmapped: 96641024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308748288 unmapped: 96641024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308764672 unmapped: 96624640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308772864 unmapped: 96616448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308781056 unmapped: 96608256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308789248 unmapped: 96600064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308797440 unmapped: 96591872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308805632 unmapped: 96583680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 96575488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308822016 unmapped: 96567296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308822016 unmapped: 96567296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308822016 unmapped: 96567296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308830208 unmapped: 96559104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308838400 unmapped: 96550912 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308854784 unmapped: 96534528 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308862976 unmapped: 96526336 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308871168 unmapped: 96518144 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308903936 unmapped: 96485376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'config diff' '{prefix=config diff}'
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'config show' '{prefix=config show}'
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308920320 unmapped: 96468992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308756480 unmapped: 96632832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308789248 unmapped: 96600064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'log dump' '{prefix=log dump}'
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'perf dump' '{prefix=perf dump}'
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308789248 unmapped: 96600064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'perf schema' '{prefix=perf schema}'
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308944896 unmapped: 96444416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308944896 unmapped: 96444416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308953088 unmapped: 96436224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308953088 unmapped: 96436224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308961280 unmapped: 96428032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308961280 unmapped: 96428032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308961280 unmapped: 96428032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308961280 unmapped: 96428032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308961280 unmapped: 96428032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308961280 unmapped: 96428032 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308969472 unmapped: 96419840 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308977664 unmapped: 96411648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308977664 unmapped: 96411648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308977664 unmapped: 96411648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308985856 unmapped: 96403456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308985856 unmapped: 96403456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308985856 unmapped: 96403456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308985856 unmapped: 96403456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308985856 unmapped: 96403456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 308994048 unmapped: 96395264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 96387072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 96387072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 96387072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 96387072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 96387072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309002240 unmapped: 96387072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309010432 unmapped: 96378880 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 96370688 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 96370688 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309026816 unmapped: 96362496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309026816 unmapped: 96362496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309026816 unmapped: 96362496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309026816 unmapped: 96362496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309026816 unmapped: 96362496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309035008 unmapped: 96354304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309043200 unmapped: 96346112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309043200 unmapped: 96346112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309043200 unmapped: 96346112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309043200 unmapped: 96346112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 96329728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 96321536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 96321536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 96321536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 96321536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309067776 unmapped: 96321536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309075968 unmapped: 96313344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309075968 unmapped: 96313344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309075968 unmapped: 96313344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309075968 unmapped: 96313344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309075968 unmapped: 96313344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309075968 unmapped: 96313344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309092352 unmapped: 96296960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309092352 unmapped: 96296960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309092352 unmapped: 96296960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309092352 unmapped: 96296960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309092352 unmapped: 96296960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 96288768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 96288768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 96288768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309108736 unmapped: 96280576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309108736 unmapped: 96280576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309116928 unmapped: 96272384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309125120 unmapped: 96264192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309125120 unmapped: 96264192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309125120 unmapped: 96264192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309133312 unmapped: 96256000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309133312 unmapped: 96256000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309133312 unmapped: 96256000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309133312 unmapped: 96256000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309133312 unmapped: 96256000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 96247808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 96231424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309166080 unmapped: 96223232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309166080 unmapped: 96223232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309166080 unmapped: 96223232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309166080 unmapped: 96223232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309166080 unmapped: 96223232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309174272 unmapped: 96215040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309174272 unmapped: 96215040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309174272 unmapped: 96215040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309174272 unmapped: 96215040 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309190656 unmapped: 96198656 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 96190464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 96190464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 96190464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 96190464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 96190464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 96190464 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309215232 unmapped: 96174080 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309231616 unmapped: 96157696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309231616 unmapped: 96157696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309231616 unmapped: 96157696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309231616 unmapped: 96157696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309231616 unmapped: 96157696 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309239808 unmapped: 96149504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309239808 unmapped: 96149504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309239808 unmapped: 96149504 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309248000 unmapped: 96141312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309248000 unmapped: 96141312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309248000 unmapped: 96141312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309248000 unmapped: 96141312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309248000 unmapped: 96141312 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309256192 unmapped: 96133120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309256192 unmapped: 96133120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309256192 unmapped: 96133120 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309264384 unmapped: 96124928 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 96116736 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 96108544 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309297152 unmapped: 96092160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309297152 unmapped: 96092160 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 96083968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 96083968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 96083968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 96083968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 96083968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 96083968 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309313536 unmapped: 96075776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309313536 unmapped: 96075776 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 96067584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 96067584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 96067584 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 96059392 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 96059392 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 96059392 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309338112 unmapped: 96051200 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 96043008 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 96034816 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 96010240 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 96002048 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 95993856 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 95993856 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 95993856 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 95993856 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 95985664 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 95985664 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 95985664 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 95977472 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 95961088 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 95961088 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 95952896 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309444608 unmapped: 95944704 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309444608 unmapped: 95944704 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309444608 unmapped: 95944704 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309444608 unmapped: 95944704 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309444608 unmapped: 95944704 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 95936512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 95936512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 95936512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 95936512 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 95928320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 95928320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 95928320 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 95920128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 95920128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 95920128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 95920128 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 95911936 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309485568 unmapped: 95903744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309485568 unmapped: 95903744 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309493760 unmapped: 95895552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309493760 unmapped: 95895552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309493760 unmapped: 95895552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309493760 unmapped: 95895552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309493760 unmapped: 95895552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309493760 unmapped: 95895552 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309501952 unmapped: 95887360 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309510144 unmapped: 95879168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309510144 unmapped: 95879168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309510144 unmapped: 95879168 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 95870976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 95870976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 95870976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 95870976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 95870976 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309534720 unmapped: 95854592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309534720 unmapped: 95854592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309534720 unmapped: 95854592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309534720 unmapped: 95854592 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309542912 unmapped: 95846400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309542912 unmapped: 95846400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309542912 unmapped: 95846400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309542912 unmapped: 95846400 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309551104 unmapped: 95838208 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309559296 unmapped: 95830016 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309567488 unmapped: 95821824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309567488 unmapped: 95821824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309567488 unmapped: 95821824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309567488 unmapped: 95821824 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309575680 unmapped: 95813632 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309583872 unmapped: 95805440 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 95797248 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309633024 unmapped: 95756288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309641216 unmapped: 95748096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309649408 unmapped: 95739904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309649408 unmapped: 95739904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309665792 unmapped: 95723520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309673984 unmapped: 95715328 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309690368 unmapped: 95698944 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309706752 unmapped: 95682560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309723136 unmapped: 95666176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309747712 unmapped: 95641600 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309747712 unmapped: 95641600 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309764096 unmapped: 95625216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 95600640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309813248 unmapped: 95576064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309813248 unmapped: 95576064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309813248 unmapped: 95576064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309813248 unmapped: 95576064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309813248 unmapped: 95576064 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309821440 unmapped: 95567872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309821440 unmapped: 95567872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309600256 unmapped: 95789056 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309608448 unmapped: 95780864 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309616640 unmapped: 95772672 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309624832 unmapped: 95764480 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309633024 unmapped: 95756288 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309641216 unmapped: 95748096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309641216 unmapped: 95748096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309641216 unmapped: 95748096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309641216 unmapped: 95748096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309641216 unmapped: 95748096 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309649408 unmapped: 95739904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309649408 unmapped: 95739904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309649408 unmapped: 95739904 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309657600 unmapped: 95731712 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6001.0 total, 600.0 interval
Cumulative writes: 38K writes, 156K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
Cumulative WAL: 38K writes, 13K syncs, 2.83 writes per sync, written: 0.16 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 465 writes, 1028 keys, 465 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s
Interval WAL: 465 writes, 216 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.17              0.00         1    0.169       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.0 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.0 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564bc8d098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.0 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309665792 unmapped: 95723520 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309682176 unmapped: 95707136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309682176 unmapped: 95707136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309682176 unmapped: 95707136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309682176 unmapped: 95707136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309682176 unmapped: 95707136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309682176 unmapped: 95707136 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309698560 unmapped: 95690752 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309706752 unmapped: 95682560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309706752 unmapped: 95682560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309706752 unmapped: 95682560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309706752 unmapped: 95682560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309706752 unmapped: 95682560 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309714944 unmapped: 95674368 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309723136 unmapped: 95666176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309723136 unmapped: 95666176 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309731328 unmapped: 95657984 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309739520 unmapped: 95649792 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309747712 unmapped: 95641600 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309747712 unmapped: 95641600 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309747712 unmapped: 95641600 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309747712 unmapped: 95641600 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309755904 unmapped: 95633408 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309764096 unmapped: 95625216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309764096 unmapped: 95625216 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309772288 unmapped: 95617024 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309780480 unmapped: 95608832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219976 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309780480 unmapped: 95608832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309780480 unmapped: 95608832 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 95600640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40c000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 95600640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 554.941650391s of 555.255554199s, submitted: 41
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 95600640 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219343 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309796864 unmapped: 95592448 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309805056 unmapped: 95584256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [0,0,0,0,0,1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309805056 unmapped: 95584256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309805056 unmapped: 95584256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309805056 unmapped: 95584256 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309821440 unmapped: 95567872 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309829632 unmapped: 95559680 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309837824 unmapped: 95551488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309837824 unmapped: 95551488 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309846016 unmapped: 95543296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.659294128s of 11.130941391s, submitted: 38
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309846016 unmapped: 95543296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309846016 unmapped: 95543296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309846016 unmapped: 95543296 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309854208 unmapped: 95535104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309854208 unmapped: 95535104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309854208 unmapped: 95535104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309854208 unmapped: 95535104 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309903360 unmapped: 95485952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309903360 unmapped: 95485952 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309911552 unmapped: 95477760 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309919744 unmapped: 95469568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309919744 unmapped: 95469568 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309927936 unmapped: 95461376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309927936 unmapped: 95461376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309927936 unmapped: 95461376 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309936128 unmapped: 95453184 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309936128 unmapped: 95453184 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309936128 unmapped: 95453184 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309944320 unmapped: 95444992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309944320 unmapped: 95444992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309944320 unmapped: 95444992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309944320 unmapped: 95444992 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309952512 unmapped: 95436800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309952512 unmapped: 95436800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309952512 unmapped: 95436800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309952512 unmapped: 95436800 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309960704 unmapped: 95428608 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309968896 unmapped: 95420416 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309977088 unmapped: 95412224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309977088 unmapped: 95412224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309977088 unmapped: 95412224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309977088 unmapped: 95412224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 309977088 unmapped: 95412224 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310001664 unmapped: 95387648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310001664 unmapped: 95387648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310001664 unmapped: 95387648 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310009856 unmapped: 95379456 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310026240 unmapped: 95363072 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310042624 unmapped: 95346688 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310050816 unmapped: 95338496 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310059008 unmapped: 95330304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310059008 unmapped: 95330304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310059008 unmapped: 95330304 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310067200 unmapped: 95322112 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310075392 unmapped: 95313920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310075392 unmapped: 95313920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310075392 unmapped: 95313920 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310083584 unmapped: 95305728 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 95297536 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310099968 unmapped: 95289344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310099968 unmapped: 95289344 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310108160 unmapped: 95281152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310108160 unmapped: 95281152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310108160 unmapped: 95281152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310108160 unmapped: 95281152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310108160 unmapped: 95281152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310108160 unmapped: 95281152 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310116352 unmapped: 95272960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310116352 unmapped: 95272960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310116352 unmapped: 95272960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310116352 unmapped: 95272960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310116352 unmapped: 95272960 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310124544 unmapped: 95264768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310124544 unmapped: 95264768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310124544 unmapped: 95264768 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310132736 unmapped: 95256576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310140928 unmapped: 95248384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310140928 unmapped: 95248384 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310149120 unmapped: 95240192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310149120 unmapped: 95240192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310149120 unmapped: 95240192 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310157312 unmapped: 95232000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310157312 unmapped: 95232000 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310165504 unmapped: 95223808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310165504 unmapped: 95223808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310173696 unmapped: 95215616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310173696 unmapped: 95215616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310173696 unmapped: 95215616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310173696 unmapped: 95215616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310173696 unmapped: 95215616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310173696 unmapped: 95215616 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310181888 unmapped: 95207424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310181888 unmapped: 95207424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310181888 unmapped: 95207424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310181888 unmapped: 95207424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310181888 unmapped: 95207424 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310190080 unmapped: 95199232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310190080 unmapped: 95199232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310190080 unmapped: 95199232 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'config diff' '{prefix=config diff}'
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'config show' '{prefix=config show}'
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310132736 unmapped: 95256576 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310165504 unmapped: 95223808 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec40e000/0x0/0x4ffc00000, data 0x209091/0x3be000, compress 0x0/0x0/0x0, omap 0x4d58b, meta 0x133e2a75), peers [0,1] op hist [])
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: prioritycache tune_memory target: 4294967296 mapped: 310018048 unmapped: 95371264 heap: 405389312 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219256 data_alloc: 218103808 data_used: 152467
Jan 27 09:57:39 np0005597378 ceph-osd[88005]: do_command 'log dump' '{prefix=log dump}'
Jan 27 09:57:39 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23368 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:39 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 09:57:39 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 09:57:40 np0005597378 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 09:57:40 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:57:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4184313597' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:57:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 27 09:57:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155305212' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 27 09:57:40 np0005597378 nova_compute[238941]: 2026-01-27 14:57:40.160 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.742s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:57:40 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23374 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} v 0)
Jan 27 09:57:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2980848574' entity='mgr.compute-0.uujfpe' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dfjhvm", "name": "rgw_frontends"} : dispatch
Jan 27 09:57:40 np0005597378 nova_compute[238941]: 2026-01-27 14:57:40.342 238945 WARNING nova.virt.libvirt.driver [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 27 09:57:40 np0005597378 nova_compute[238941]: 2026-01-27 14:57:40.344 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3341MB free_disk=59.98730120714754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 27 09:57:40 np0005597378 nova_compute[238941]: 2026-01-27 14:57:40.344 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 27 09:57:40 np0005597378 nova_compute[238941]: 2026-01-27 14:57:40.345 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 27 09:57:40 np0005597378 nova_compute[238941]: 2026-01-27 14:57:40.438 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 27 09:57:40 np0005597378 nova_compute[238941]: 2026-01-27 14:57:40.438 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 27 09:57:40 np0005597378 nova_compute[238941]: 2026-01-27 14:57:40.460 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 27 09:57:40 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 27 09:57:40 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3418528761' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 27 09:57:40 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23378 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 27 09:57:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/77908740' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 27 09:57:41 np0005597378 nova_compute[238941]: 2026-01-27 14:57:41.086 238945 DEBUG oslo_concurrency.processutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 27 09:57:41 np0005597378 nova_compute[238941]: 2026-01-27 14:57:41.092 238945 DEBUG nova.compute.provider_tree [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed in ProviderTree for provider: cc8b0052-0829-4cee-8aba-4745f236afe4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 27 09:57:41 np0005597378 nova_compute[238941]: 2026-01-27 14:57:41.149 238945 DEBUG nova.scheduler.client.report [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Inventory has not changed for provider cc8b0052-0829-4cee-8aba-4745f236afe4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 27 09:57:41 np0005597378 nova_compute[238941]: 2026-01-27 14:57:41.151 238945 DEBUG nova.compute.resource_tracker [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 27 09:57:41 np0005597378 nova_compute[238941]: 2026-01-27 14:57:41.151 238945 DEBUG oslo_concurrency.lockutils [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 27 09:57:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 27 09:57:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2570865129' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 27 09:57:41 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23384 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:41 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23388 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 27 09:57:41 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 27 09:57:41 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1193368411' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 27 09:57:42 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 27 09:57:42 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23390 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:57:42 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 27 09:57:42 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751087795' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 27 09:57:43 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23394 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:57:43 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 27 09:57:43 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1572600449' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 27 09:57:43 np0005597378 nova_compute[238941]: 2026-01-27 14:57:43.248 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:43 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23398 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:57:44 np0005597378 ceph-mgr[75385]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 27 09:57:44 np0005597378 ceph-mgr[75385]: log_channel(audit) log [DBG] : from='client.23402 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 27 09:57:44 np0005597378 ceph-4d8fd694-f443-5fb1-b612-70034b2f3c6e-mgr-compute-0-uujfpe[75381]: 2026-01-27T14:57:44.132+0000 7f5469e41640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:57:44 np0005597378 ceph-mgr[75385]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 27 09:57:44 np0005597378 nova_compute[238941]: 2026-01-27 14:57:44.133 238945 DEBUG oslo_service.periodic_task [None req-cb54a3a9-cecb-43a4-8f74-88e40bac1068 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 27 09:57:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 27 09:57:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1483513658' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 27 09:57:44 np0005597378 nova_compute[238941]: 2026-01-27 14:57:44.577 238945 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.651613235s of 31.024972916s, submitted: 126
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b000 session 0x5640b7ca8c40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c0289c00 session 0x5640b92cefc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059400 session 0x5640b9681c00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b7ca9dc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b9450380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9b74000/0x0/0x4ffc00000, data 0x192a967/0x1ab8000, compress 0x0/0x0/0x0, omap 0x67481, meta 0x14568b7f), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9b74000/0x0/0x4ffc00000, data 0x192a967/0x1ab8000, compress 0x0/0x0/0x0, omap 0x67481, meta 0x14568b7f), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344563712 unmapped: 55828480 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efc800 session 0x5640b96aa380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b6b06380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bb168c00 session 0x5640b71fd180
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efc800 session 0x5640b96aa380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498350 data_alloc: 218103808 data_used: 11588030
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 52559872 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b7ca9dc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b97bac40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b9517180
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99fa800 session 0x5640b98b4000
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6efc800 session 0x5640b92cefc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54771712 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 54771712 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7df6000 session 0x5640b96ab6c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54468608 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54468608 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e972f000/0x0/0x4ffc00000, data 0x1d6e9c9/0x1efd000, compress 0x0/0x0/0x0, omap 0x67693, meta 0x1456896d), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533112 data_alloc: 218103808 data_used: 11687358
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 54468608 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 55836672 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 55836672 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.144528389s of 12.913942337s, submitted: 54
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344555520 unmapped: 55836672 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344432640 unmapped: 55959552 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e972f000/0x0/0x4ffc00000, data 0x1d6e9c9/0x1efd000, compress 0x0/0x0/0x0, omap 0x67693, meta 0x1456896d), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572908 data_alloc: 218103808 data_used: 18347454
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344432640 unmapped: 55959552 heap: 400392192 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e972f000/0x0/0x4ffc00000, data 0x1d6e9c9/0x1efd000, compress 0x0/0x0/0x0, omap 0x67693, meta 0x1456896d), peers [0,2] op hist [0,0,0,0,0,0,0,12])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640b9a81500
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d1000 session 0x5640b7d17c00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079000 session 0x5640b9a81dc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1400 session 0x5640b9450700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640ba1e3dc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8d7f000/0x0/0x4ffc00000, data 0x271da2b/0x28ad000, compress 0x0/0x0/0x0, omap 0x67c15, meta 0x145683eb), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633542 data_alloc: 218103808 data_used: 18347454
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e8d7f000/0x0/0x4ffc00000, data 0x271da2b/0x28ad000, compress 0x0/0x0/0x0, omap 0x67c15, meta 0x145683eb), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344702976 unmapped: 66191360 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 61710336 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dd400 session 0x5640b7e0f6c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 61349888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e86f9000/0x0/0x4ffc00000, data 0x2da3a2b/0x2f33000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e86dc000/0x0/0x4ffc00000, data 0x2dc0a2b/0x2f50000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [0,0,0,0,0,0,2])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.446774483s of 10.051395416s, submitted: 119
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 61341696 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 59219968 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3756251 data_alloc: 234881024 data_used: 29613817
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 58171392 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 56483840 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7e96000/0x0/0x4ffc00000, data 0x3600a2b/0x3790000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [0,0,0,0,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 56426496 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7e96000/0x0/0x4ffc00000, data 0x3600a2b/0x3790000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354656256 unmapped: 56238080 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823769 data_alloc: 234881024 data_used: 30526713
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7e74000/0x0/0x4ffc00000, data 0x3622a2b/0x37b2000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354664448 unmapped: 56229888 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.131614208s of 10.359419823s, submitted: 101
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357040128 unmapped: 53854208 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3842933 data_alloc: 234881024 data_used: 30547193
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 55320576 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x39eda2b/0x3b7d000, compress 0x0/0x0/0x0, omap 0x6776a, meta 0x14568896), peers [0,2] op hist [2,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 53493760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357400576 unmapped: 53493760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357408768 unmapped: 53485568 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364568576 unmapped: 46325760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3909324 data_alloc: 234881024 data_used: 31506169
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b800 session 0x5640ba1e3c00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357416960 unmapped: 53477376 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d4800 session 0x5640b92861c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640b9286fc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7100000/0x0/0x4ffc00000, data 0x439ca2b/0x452c000, compress 0x0/0x0/0x0, omap 0x67c56, meta 0x145683aa), peers [0,2] op hist [0,0,0,0,0,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7100000/0x0/0x4ffc00000, data 0x439ca2b/0x452c000, compress 0x0/0x0/0x0, omap 0x67c56, meta 0x145683aa), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357425152 unmapped: 53469184 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1400 session 0x5640b9287dc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b800 session 0x5640ba1e2540
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 53452800 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dd400 session 0x5640b98b5a40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b97e7800 session 0x5640ba1cb500
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6dfa000 session 0x5640b9450380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd1400 session 0x5640b7ca8c40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 53452800 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.177800179s of 10.095353127s, submitted: 95
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d3b800 session 0x5640b9107180
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3939380 data_alloc: 234881024 data_used: 31603961
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 53420032 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 357629952 unmapped: 53264384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 52322304 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3982520 data_alloc: 234881024 data_used: 38296825
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 52322304 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358572032 unmapped: 52322304 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca8000/0x0/0x4ffc00000, data 0x47f4a2b/0x4984000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b6f01c00 session 0x5640b9681180
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3880400 session 0x5640b6b3c8c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b71fce00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 358580224 unmapped: 52314112 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6ca7000/0x0/0x4ffc00000, data 0x47f4a4e/0x4985000, compress 0x0/0x0/0x0, omap 0x67ce2, meta 0x1456831e), peers [0,2] op hist [0,0,0,0,0,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 58081280 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.268082619s of 10.108275414s, submitted: 55
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3880400 session 0x5640b9287500
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7cf9000/0x0/0x4ffc00000, data 0x37a29ec/0x3932000, compress 0x0/0x0/0x0, omap 0x680cb, meta 0x14567f35), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3829731 data_alloc: 234881024 data_used: 27148025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3857131 data_alloc: 234881024 data_used: 31654551
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7cf7000/0x0/0x4ffc00000, data 0x37a39ec/0x3933000, compress 0x0/0x0/0x0, omap 0x68156, meta 0x14567eaa), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 58064896 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 55926784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354967552 unmapped: 55926784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e734d000/0x0/0x4ffc00000, data 0x41489ec/0x42d8000, compress 0x0/0x0/0x0, omap 0x68156, meta 0x14567eaa), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3924971 data_alloc: 234881024 data_used: 31896215
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356048896 unmapped: 54845440 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7330000/0x0/0x4ffc00000, data 0x41649ec/0x42f4000, compress 0x0/0x0/0x0, omap 0x68156, meta 0x14567eaa), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.040444374s of 12.756207466s, submitted: 88
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 54910976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3972453 data_alloc: 234881024 data_used: 32158359
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6b5d000/0x0/0x4ffc00000, data 0x493f9ec/0x4acf000, compress 0x0/0x0/0x0, omap 0x681c0, meta 0x14567e40), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355868672 unmapped: 55025664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e6b5d000/0x0/0x4ffc00000, data 0x493f9ec/0x4acf000, compress 0x0/0x0/0x0, omap 0x681c0, meta 0x14567e40), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9cd0000 session 0x5640b7ca8380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355794944 unmapped: 55099392 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7059400 session 0x5640b9517500
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 58834944 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e7c01000/0x0/0x4ffc00000, data 0x389b9ec/0x3a2b000, compress 0x0/0x0/0x0, omap 0x68afb, meta 0x14567505), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 58834944 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3831183 data_alloc: 234881024 data_used: 24707735
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9000 session 0x5640b786b500
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 58834944 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b98b3c00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 59375616 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.104784966s of 10.713698387s, submitted: 126
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b99f0400 session 0x5640b6b40700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b92cee00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 59375616 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640c3883400 session 0x5640ba1d6a80
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 63700992 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 63700992 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e917c000/0x0/0x4ffc00000, data 0x23219c9/0x24b0000, compress 0x0/0x0/0x0, omap 0x695bd, meta 0x14566a43), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3ccc00 session 0x5640b786ba40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642934 data_alloc: 218103808 data_used: 16325236
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 63700992 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e917c000/0x0/0x4ffc00000, data 0x23219c9/0x24b0000, compress 0x0/0x0/0x0, omap 0x695bd, meta 0x14566a43), peers [0,2] op hist [0,0,4])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b98b56c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531764 data_alloc: 218103808 data_used: 11587700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531764 data_alloc: 218103808 data_used: 11587700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 67198976 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531764 data_alloc: 218103808 data_used: 11587700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x16ad967/0x183b000, compress 0x0/0x0/0x0, omap 0x69205, meta 0x14566dfb), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9800 session 0x5640ba1e3a40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343703552 unmapped: 67190784 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d39c00 session 0x5640b97356c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d3c00 session 0x5640b9286700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640bd719800 session 0x5640b97bb500
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.670858383s of 19.665880203s, submitted: 79
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 65576960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7058800 session 0x5640b9286a80
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d39c00 session 0x5640b6fd16c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3c9800 session 0x5640b90776c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d3c00 session 0x5640b7d4d340
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dfc00 session 0x5640b6b408c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9c1e000/0x0/0x4ffc00000, data 0x187f977/0x1a0e000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548283 data_alloc: 218103808 data_used: 11591714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b92cefc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d9400 session 0x5640b6b3c540
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 70672384 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7078800 session 0x5640b7ca81c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3dd800 session 0x5640ba1e2c40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b920da40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7078800 session 0x5640b7d16fc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b96801c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d7800 session 0x5640b9734c40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340164608 unmapped: 70729728 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640ba3d9400 session 0x5640b938bc00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340246528 unmapped: 70647808 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3596850 data_alloc: 218103808 data_used: 11595826
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9579000/0x0/0x4ffc00000, data 0x1f229aa/0x20b3000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9579000/0x0/0x4ffc00000, data 0x1f229aa/0x20b3000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607218 data_alloc: 218103808 data_used: 13302322
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 70696960 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9579000/0x0/0x4ffc00000, data 0x1f229aa/0x20b3000, compress 0x0/0x0/0x0, omap 0x69769, meta 0x14566897), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.352157593s of 14.543478966s, submitted: 36
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b9d46c00 session 0x5640b9287880
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9555000/0x0/0x4ffc00000, data 0x1f469aa/0x20d7000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9555000/0x0/0x4ffc00000, data 0x1f469aa/0x20d7000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3645779 data_alloc: 218103808 data_used: 19124786
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 70385664 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343064576 unmapped: 67829760 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9249000/0x0/0x4ffc00000, data 0x22529aa/0x23e3000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679649 data_alloc: 218103808 data_used: 19853874
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343810048 unmapped: 67084288 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.882978439s of 11.147073746s, submitted: 50
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 66945024 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 heartbeat osd_stat(store_statfs(0x4e9228000/0x0/0x4ffc00000, data 0x22739aa/0x2404000, compress 0x0/0x0/0x0, omap 0x69946, meta 0x145666ba), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 66945024 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 63848448 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744277 data_alloc: 218103808 data_used: 20693554
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 63062016 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 ms_handle_reset con 0x5640b7079000 session 0x5640ba1e2e00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 63062016 heap: 410894336 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640b9568c00 session 0x5640b6b40380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640b99fb800 session 0x5640b9680a80
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640b9534000 session 0x5640b786ae00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 281 heartbeat osd_stat(store_statfs(0x4e8903000/0x0/0x4ffc00000, data 0x2b93546/0x2d25000, compress 0x0/0x0/0x0, omap 0x69d80, meta 0x14566280), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 356671488 unmapped: 57999360 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 281 ms_handle_reset con 0x5640c0286400 session 0x5640b7d16e00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349102080 unmapped: 65568768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349110272 unmapped: 65560576 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 282 heartbeat osd_stat(store_statfs(0x4e78f4000/0x0/0x4ffc00000, data 0x3ba6546/0x3d38000, compress 0x0/0x0/0x0, omap 0x6a58b, meta 0x14565a75), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 282 ms_handle_reset con 0x5640c0286400 session 0x5640b7108e00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3861772 data_alloc: 234881024 data_used: 23754802
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 282 heartbeat osd_stat(store_statfs(0x4e78ef000/0x0/0x4ffc00000, data 0x3ba8136/0x3d3b000, compress 0x0/0x0/0x0, omap 0x6a7b3, meta 0x1456584d), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 65544192 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640b995ec00 session 0x5640b6b06a80
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640b9cd7400 session 0x5640b6eeae00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640b99fbc00 session 0x5640b6fd0540
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 283 ms_handle_reset con 0x5640bd8f6800 session 0x5640b9a14540
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 65478656 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 283 heartbeat osd_stat(store_statfs(0x4e78e7000/0x0/0x4ffc00000, data 0x3bb0cee/0x3d45000, compress 0x0/0x0/0x0, omap 0x6a8dc, meta 0x14565724), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3863930 data_alloc: 234881024 data_used: 23762994
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.907689095s of 12.907720566s, submitted: 145
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 65462272 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 65454080 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e78e2000/0x0/0x4ffc00000, data 0x3bb276d/0x3d48000, compress 0x0/0x0/0x0, omap 0x72121, meta 0x1455dedf), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7155400 session 0x5640ba1e2a80
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640bd8f7c00 session 0x5640b94508c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9cd6000 session 0x5640b90776c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3c8000 session 0x5640b71fc700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3dd800 session 0x5640b7d16fc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3dd800 session 0x5640b97356c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869944 data_alloc: 234881024 data_used: 23762994
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7155400 session 0x5640b6b40380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9cd6000 session 0x5640b7c86380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3c8000 session 0x5640b92cee00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640bd8f7c00 session 0x5640b786ba40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 65454080 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7155400 session 0x5640ba1e3a40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9cd6000 session 0x5640b9450700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77e0000/0x0/0x4ffc00000, data 0x3cb57cf/0x3e4c000, compress 0x0/0x0/0x0, omap 0x72459, meta 0x1455dba7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b99f8800 session 0x5640b9a148c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880792 data_alloc: 234881024 data_used: 23762994
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9568400 session 0x5640b938a700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.016672134s of 10.084982872s, submitted: 23
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9d3bc00 session 0x5640ba1ca700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 65445888 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349249536 unmapped: 65421312 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72624, meta 0x1455d9dc), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9568400 session 0x5640b9b761c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 65413120 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 65396736 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 65355776 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3921622 data_alloc: 234881024 data_used: 29672498
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77bc000/0x0/0x4ffc00000, data 0x3cd97cf/0x3e70000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3928406 data_alloc: 234881024 data_used: 30782514
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 65101824 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.948271751s of 10.989070892s, submitted: 14
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 65085440 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3928734 data_alloc: 234881024 data_used: 30782514
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 61562880 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e77ba000/0x0/0x4ffc00000, data 0x3cda7cf/0x3e71000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 61431808 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 61259776 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 63668224 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 63668224 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4007460 data_alloc: 234881024 data_used: 32667186
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351010816 unmapped: 63660032 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6e39000/0x0/0x4ffc00000, data 0x465c7cf/0x47f3000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.841988564s of 10.036563873s, submitted: 48
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 63225856 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 63225856 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 63225856 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6dcb000/0x0/0x4ffc00000, data 0x46ca7cf/0x4861000, compress 0x0/0x0/0x0, omap 0x72afe, meta 0x1455d502), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 63217664 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4015500 data_alloc: 234881024 data_used: 32663090
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 63217664 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b995f000 session 0x5640b6b3d500
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b99f8800 session 0x5640b97bb6c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351076352 unmapped: 63594496 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3d1800 session 0x5640b97bba40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e74ff000/0x0/0x4ffc00000, data 0x3f967cf/0x412d000, compress 0x0/0x0/0x0, omap 0x73428, meta 0x1455cbd8), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3961220 data_alloc: 234881024 data_used: 31524914
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e74ff000/0x0/0x4ffc00000, data 0x3f967cf/0x412d000, compress 0x0/0x0/0x0, omap 0x73428, meta 0x1455cbd8), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b7078800 session 0x5640b938afc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.606713295s of 11.753137589s, submitted: 38
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640ba3d2c00 session 0x5640ba1e3c00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 63553536 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640b9568400 session 0x5640b90e1a40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 63528960 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3817836 data_alloc: 234881024 data_used: 24588338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 63528960 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e84d3000/0x0/0x4ffc00000, data 0x2fc07cf/0x3157000, compress 0x0/0x0/0x0, omap 0x73a55, meta 0x1455c5ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351141888 unmapped: 63528960 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3822060 data_alloc: 234881024 data_used: 25673778
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e84d3000/0x0/0x4ffc00000, data 0x2fc07cf/0x3157000, compress 0x0/0x0/0x0, omap 0x73a55, meta 0x1455c5ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 ms_handle_reset con 0x5640c0286400 session 0x5640b98b56c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351150080 unmapped: 63520768 heap: 414670848 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640ba3d7400 session 0x5640b786afc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640b6efe800 session 0x5640b9517dc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640b995f800 session 0x5640ba1d6a80
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 361463808 unmapped: 57409536 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 285 ms_handle_reset con 0x5640b9568400 session 0x5640b9a81c00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 361488384 unmapped: 57384960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.244441986s of 10.926932335s, submitted: 60
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 286 ms_handle_reset con 0x5640ba3d2c00 session 0x5640b786a1c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 57737216 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640ba3c9400 session 0x5640b9516000
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4009024 data_alloc: 234881024 data_used: 34502286
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362250240 unmapped: 56623104 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 54108160 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e6a93000/0x0/0x4ffc00000, data 0x49f8b85/0x4b95000, compress 0x0/0x0/0x0, omap 0x74419, meta 0x1455bbe7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640ba3c9c00 session 0x5640b98b2c40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640b9568400 session 0x5640b7ca9dc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 287 ms_handle_reset con 0x5640b995f800 session 0x5640b9516000
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 54042624 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 54034432 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 288 ms_handle_reset con 0x5640ba3c9400 session 0x5640b90e1a40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362643456 unmapped: 56229888 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3886545 data_alloc: 234881024 data_used: 34502270
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e829b000/0x0/0x4ffc00000, data 0x31f371f/0x338f000, compress 0x0/0x0/0x0, omap 0x744a3, meta 0x1455bb5d), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 56221696 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 289 ms_handle_reset con 0x5640b9d3bc00 session 0x5640b9b77c00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 289 ms_handle_reset con 0x5640b7155400 session 0x5640b9a816c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 56221696 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 55132160 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 289 ms_handle_reset con 0x5640b7155400 session 0x5640b9735500
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3830131 data_alloc: 234881024 data_used: 30469660
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e88ab000/0x0/0x4ffc00000, data 0x2be5158/0x2d81000, compress 0x0/0x0/0x0, omap 0x74ec9, meta 0x1455b137), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.118906975s of 12.735915184s, submitted: 117
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 55099392 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 289 handle_osd_map epochs [290,290], i have 290, src has [1,290]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 55091200 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 290 ms_handle_reset con 0x5640b97e7c00 session 0x5640b96aa700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 63946752 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 63946752 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 290 ms_handle_reset con 0x5640bb169800 session 0x5640b6b401c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 290 ms_handle_reset con 0x5640c3883800 session 0x5640b9517880
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681049 data_alloc: 218103808 data_used: 13614620
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 70680576 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 291 ms_handle_reset con 0x5640ba3d8800 session 0x5640b9286700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9dcd000/0x0/0x4ffc00000, data 0x16c07a0/0x185c000, compress 0x0/0x0/0x0, omap 0x75cc0, meta 0x1455a340), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 71811072 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 71811072 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 71811072 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 71729152 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617990 data_alloc: 218103808 data_used: 8318460
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 71729152 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9dcd000/0x0/0x4ffc00000, data 0x16c07a0/0x185c000, compress 0x0/0x0/0x0, omap 0x75cc0, meta 0x1455a340), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9dcb000/0x0/0x4ffc00000, data 0x16c221f/0x185f000, compress 0x0/0x0/0x0, omap 0x76803, meta 0x145597fd), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620700 data_alloc: 218103808 data_used: 8322521
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9dcb000/0x0/0x4ffc00000, data 0x16c221f/0x185f000, compress 0x0/0x0/0x0, omap 0x76803, meta 0x145597fd), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9dcb000/0x0/0x4ffc00000, data 0x16c221f/0x185f000, compress 0x0/0x0/0x0, omap 0x76803, meta 0x145597fd), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 71720960 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.065912247s of 18.099792480s, submitted: 113
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 68370432 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b9c40400 session 0x5640b7d16e00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b97e6000 session 0x5640b938b180
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b6efd000 session 0x5640ba1e2e00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640bd8f6000 session 0x5640b6b40380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640ba3cb400 session 0x5640ba1e3a40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662479 data_alloc: 218103808 data_used: 8322521
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: mgrc ms_handle_reset ms_handle_reset con 0x5640b6efdc00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: mgrc handle_mgr_configure stats_period=5
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9785000/0x0/0x4ffc00000, data 0x1d0a21f/0x1ea7000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 71507968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640ba3d1c00 session 0x5640b9a14540
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3664423 data_alloc: 218103808 data_used: 8322521
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 71499776 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b9535c00 session 0x5640b92cf180
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b99f1800 session 0x5640b786bc00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640bca50400 session 0x5640b9680fc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 71499776 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696939 data_alloc: 218103808 data_used: 13693401
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 73572352 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345309184 unmapped: 73564160 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696939 data_alloc: 218103808 data_used: 13693401
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 73555968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 73555968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 73555968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.409816742s of 18.537841797s, submitted: 24
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9761000/0x0/0x4ffc00000, data 0x1d2e21f/0x1ecb000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349388800 unmapped: 69484544 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 69459968 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3743019 data_alloc: 218103808 data_used: 14239193
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f3000/0x0/0x4ffc00000, data 0x229421f/0x2431000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744719 data_alloc: 218103808 data_used: 14058969
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f3000/0x0/0x4ffc00000, data 0x229421f/0x2431000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f3000/0x0/0x4ffc00000, data 0x229421f/0x2431000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f8000/0x0/0x4ffc00000, data 0x229721f/0x2434000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 70123520 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.511266708s of 10.248086929s, submitted: 77
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 69058560 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 69058560 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741927 data_alloc: 218103808 data_used: 14042585
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b63cd800 session 0x5640b95176c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f6000/0x0/0x4ffc00000, data 0x229921f/0x2436000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640b9ce7c00 session 0x5640b938bdc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640ba3de800 session 0x5640b6eea700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e91f6000/0x0/0x4ffc00000, data 0x229921f/0x2436000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 ms_handle_reset con 0x5640bfd03000 session 0x5640b920c540
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740759 data_alloc: 218103808 data_used: 14042585
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 292 handle_osd_map epochs [293,293], i have 293, src has [1,293]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e91f6000/0x0/0x4ffc00000, data 0x229921f/0x2436000, compress 0x0/0x0/0x0, omap 0x76a0b, meta 0x145595f5), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640b9cd7c00 session 0x5640b9a14000
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640b9ce7c00 session 0x5640b7d16540
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640ba3d1c00 session 0x5640ba1e3c00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 69050368 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 63897600 heap: 418873344 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.870169163s of 10.112051964s, submitted: 64
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 371507200 unmapped: 56066048 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 293 ms_handle_reset con 0x5640ba3de800 session 0x5640b92ce380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 73777152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3893632 data_alloc: 234881024 data_used: 20694489
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 73777152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e7f39000/0x0/0x4ffc00000, data 0x3550a0d/0x36f1000, compress 0x0/0x0/0x0, omap 0x76e61, meta 0x1455919f), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 73768960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 294 ms_handle_reset con 0x5640b9c69c00 session 0x5640b9516380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 73752576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 79953920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 294 handle_osd_map epochs [295,295], i have 295, src has [1,295]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 79945728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640c0288800 session 0x5640b9a14e00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3885340 data_alloc: 234881024 data_used: 20694489
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e7f36000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x76f88, meta 0x14559078), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 79945728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640b9c69c00 session 0x5640b98b56c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640b9ce7c00 session 0x5640ba1d6fc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 295 ms_handle_reset con 0x5640ba3d1c00 session 0x5640b9735dc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e7f38000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x76f88, meta 0x14559078), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.974083900s of 10.401470184s, submitted: 69
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e7f38000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x76f88, meta 0x14559078), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347660288 unmapped: 79912960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3884700 data_alloc: 234881024 data_used: 20694489
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e7f38000/0x0/0x4ffc00000, data 0x35525c5/0x36f4000, compress 0x0/0x0/0x0, omap 0x77129, meta 0x14558ed7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9d48c00 session 0x5640b6b46fc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8b0c000/0x0/0x4ffc00000, data 0x297d044/0x2b20000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760712 data_alloc: 218103808 data_used: 8322521
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8b0c000/0x0/0x4ffc00000, data 0x297d044/0x2b20000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8b0c000/0x0/0x4ffc00000, data 0x297d044/0x2b20000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760712 data_alloc: 218103808 data_used: 8322521
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640bfd03000 session 0x5640b786ba40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9c69c00 session 0x5640b786afc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9ce7c00 session 0x5640ba1d6c40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9d48c00 session 0x5640b71fce00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640ba3d1c00 session 0x5640b98b48c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9ce6400 session 0x5640ba1d7c00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.588809013s of 13.410536766s, submitted: 46
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640b9c69c00 session 0x5640b7ca81c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355983360 unmapped: 71589888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e7d9c000/0x0/0x4ffc00000, data 0x36ed044/0x3890000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355991552 unmapped: 71581696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e7d9c000/0x0/0x4ffc00000, data 0x36ed044/0x3890000, compress 0x0/0x0/0x0, omap 0x776a7, meta 0x14558959), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355991552 unmapped: 71581696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 ms_handle_reset con 0x5640c0288400 session 0x5640ba1d76c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3896309 data_alloc: 234881024 data_used: 27934169
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 355999744 unmapped: 71573504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 297 ms_handle_reset con 0x5640b9569800 session 0x5640b6fd1180
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243abf5/0x25df000, compress 0x0/0x0/0x0, omap 0x77732, meta 0x145588ce), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771885 data_alloc: 218103808 data_used: 13085758
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243abf5/0x25df000, compress 0x0/0x0/0x0, omap 0x77732, meta 0x145588ce), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.633598328s of 13.003002167s, submitted: 52
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3775315 data_alloc: 218103808 data_used: 13089756
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3782995 data_alloc: 218103808 data_used: 13831132
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 80052224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3785683 data_alloc: 218103808 data_used: 14646236
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640be673400 session 0x5640b7d17c00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 79921152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.810759544s of 10.873312950s, submitted: 14
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9048000/0x0/0x4ffc00000, data 0x243c674/0x25e2000, compress 0x0/0x0/0x0, omap 0x78284, meta 0x14557d7c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640c0289c00 session 0x5640b786a380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343949312 unmapped: 83623936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343957504 unmapped: 83615744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 83607552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 83599360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671339 data_alloc: 218103808 data_used: 8327132
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 83591168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343990272 unmapped: 83582976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343998464 unmapped: 83574784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343998464 unmapped: 83574784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.330020905s of 43.506050110s, submitted: 29
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347160576 unmapped: 80412672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9568c00 session 0x5640b97bac40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707310 data_alloc: 218103808 data_used: 8331193
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b7e6d400 session 0x5640b9516380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 83558400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640c0289c00 session 0x5640b786b880
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 83558400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9828000/0x0/0x4ffc00000, data 0x1c5e6b3/0x1e04000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 83558400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9828000/0x0/0x4ffc00000, data 0x1c5e6b3/0x1e04000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344023040 unmapped: 83550208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344023040 unmapped: 83550208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707326 data_alloc: 218103808 data_used: 8331193
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9828000/0x0/0x4ffc00000, data 0x1c5e6b3/0x1e04000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9828000/0x0/0x4ffc00000, data 0x1c5e6b3/0x1e04000, compress 0x0/0x0/0x0, omap 0x786e4, meta 0x1455791c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707326 data_alloc: 218103808 data_used: 8331193
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344031232 unmapped: 83542016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9ce6800 session 0x5640b9a80a80
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9cd0c00 session 0x5640b920c540
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.954597473s of 13.605207443s, submitted: 24
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344039424 unmapped: 83533824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b7e6d400 session 0x5640b9a14540
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344342528 unmapped: 83230720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9803000/0x0/0x4ffc00000, data 0x1c826d6/0x1e29000, compress 0x0/0x0/0x0, omap 0x78160, meta 0x14557ea0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344350720 unmapped: 83222528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3747496 data_alloc: 218103808 data_used: 14174137
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 82911232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 82911232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9ce6800 session 0x5640ba1d6fc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640b9568c00 session 0x5640b786a540
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 344670208 unmapped: 82903040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 85917696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680118 data_alloc: 218103808 data_used: 8335191
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 ms_handle_reset con 0x5640ba3c9000 session 0x5640b94501c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9d95000/0x0/0x4ffc00000, data 0x16f0674/0x1896000, compress 0x0/0x0/0x0, omap 0x7852d, meta 0x14557ad3), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341680128 unmapped: 85893120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 85868544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341721088 unmapped: 85852160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341721088 unmapped: 85852160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341721088 unmapped: 85852160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341721088 unmapped: 85852160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e9dba000/0x0/0x4ffc00000, data 0x16cc651/0x1871000, compress 0x0/0x0/0x0, omap 0x78a9b, meta 0x14557565), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678614 data_alloc: 218103808 data_used: 8335191
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341729280 unmapped: 85843968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341860352 unmapped: 85712896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 62.531997681s of 64.608131409s, submitted: 58
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e9db6000/0x0/0x4ffc00000, data 0x16ce21e/0x1873000, compress 0x0/0x0/0x0, omap 0x78b26, meta 0x145574da), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 299 ms_handle_reset con 0x5640b6f00c00 session 0x5640b98b5180
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680817 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e9db8000/0x0/0x4ffc00000, data 0x16ce20e/0x1872000, compress 0x0/0x0/0x0, omap 0x78b26, meta 0x145574da), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e9db8000/0x0/0x4ffc00000, data 0x16ce20e/0x1872000, compress 0x0/0x0/0x0, omap 0x78b26, meta 0x145574da), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680817 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e9db8000/0x0/0x4ffc00000, data 0x16ce20e/0x1872000, compress 0x0/0x0/0x0, omap 0x78b26, meta 0x145574da), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 299 handle_osd_map epochs [300,300], i have 300, src has [1,300]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341909504 unmapped: 85663744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 85630976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 85622784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 85622784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 85622784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 85590016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 85590016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 85590016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 85581824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 85565440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342032384 unmapped: 85540864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 4800.3 total, 600.0 interval
                                              Cumulative writes: 47K writes, 182K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                              Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.17 GB, 0.04 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 4239 writes, 16K keys, 4239 commit groups, 1.0 writes per commit group, ingest: 16.36 MB, 0.03 MB/s
                                              Interval WAL: 4239 writes, 1717 syncs, 2.47 writes per sync, written: 0.02 GB, 0.03 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 85475328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 85475328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 85475328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 85475328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db5000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c19, meta 0x145573e7), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683591 data_alloc: 218103808 data_used: 8339252
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 115.369064331s of 115.781822205s, submitted: 29
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 ms_handle_reset con 0x5640b6efd800 session 0x5640b90776c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 76038144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 ms_handle_reset con 0x5640ba3d4c00 session 0x5640b9516540
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 76038144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db7000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c4e, meta 0x145573b2), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 76038144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692215 data_alloc: 234881024 data_used: 17448756
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9db7000/0x0/0x4ffc00000, data 0x16cfc8d/0x1875000, compress 0x0/0x0/0x0, omap 0x78c4e, meta 0x145573b2), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692215 data_alloc: 234881024 data_used: 17448756
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 76029952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 76013568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.952626228s of 10.143486977s, submitted: 11
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 300 handle_osd_map epochs [301,301], i have 301, src has [1,301]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 86065152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 301 ms_handle_reset con 0x5640b6f01c00 session 0x5640b92868c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 301 heartbeat osd_stat(store_statfs(0x4eadb3000/0x0/0x4ffc00000, data 0x6d185a/0x877000, compress 0x0/0x0/0x0, omap 0x7905e, meta 0x14556fa2), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 86065152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 86065152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3582091 data_alloc: 218103808 data_used: 3882769
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 86065152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 302 ms_handle_reset con 0x5640bfd02c00 session 0x5640b9450c40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 302 heartbeat osd_stat(store_statfs(0x4eb220000/0x0/0x4ffc00000, data 0x26344a/0x40a000, compress 0x0/0x0/0x0, omap 0x7a3f7, meta 0x14555c09), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3555471 data_alloc: 218103808 data_used: 212753
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 303 heartbeat osd_stat(store_statfs(0x4eb21d000/0x0/0x4ffc00000, data 0x264ee5/0x40d000, compress 0x0/0x0/0x0, omap 0x7a4ed, meta 0x14555b13), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 303 heartbeat osd_stat(store_statfs(0x4eb21d000/0x0/0x4ffc00000, data 0x264ee5/0x40d000, compress 0x0/0x0/0x0, omap 0x7a4ed, meta 0x14555b13), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338657280 unmapped: 88915968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.462058067s of 11.051116943s, submitted: 95
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338690048 unmapped: 88883200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 303 heartbeat osd_stat(store_statfs(0x4eb21f000/0x0/0x4ffc00000, data 0x264ee5/0x40d000, compress 0x0/0x0/0x0, omap 0x7a4ed, meta 0x14555b13), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 88850432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557525 data_alloc: 218103808 data_used: 212753
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 88850432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338731008 unmapped: 88842240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 304 heartbeat osd_stat(store_statfs(0x4eb21c000/0x0/0x4ffc00000, data 0x266964/0x410000, compress 0x0/0x0/0x0, omap 0x7a5e3, meta 0x14555a1d), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338739200 unmapped: 88834048 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338755584 unmapped: 88817664 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 ms_handle_reset con 0x5640ba3cb800 session 0x5640b6b3c8c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338763776 unmapped: 88809472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread fragmentation_score=0.004502 took=0.000055s
Jan 27 09:57:44 np0005597378 ceph-mon[75090]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 27 09:57:44 np0005597378 ceph-mon[75090]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2834295586' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338771968 unmapped: 88801280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 88793088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 88793088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 88793088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338780160 unmapped: 88793088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338788352 unmapped: 88784896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 88776704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 88776704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338796544 unmapped: 88776704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338804736 unmapped: 88768512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338804736 unmapped: 88768512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338812928 unmapped: 88760320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338812928 unmapped: 88760320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338812928 unmapped: 88760320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338821120 unmapped: 88752128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338829312 unmapped: 88743936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338837504 unmapped: 88735744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338853888 unmapped: 88719360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564444 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eb216000/0x0/0x4ffc00000, data 0x268523/0x414000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338862080 unmapped: 88711168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 108.638061523s of 110.715545654s, submitted: 105
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 347275264 unmapped: 80297984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 heartbeat osd_stat(store_statfs(0x4eaa16000/0x0/0x4ffc00000, data 0xa68523/0xc14000, compress 0x0/0x0/0x0, omap 0x7a955, meta 0x145556ab), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338886656 unmapped: 88686592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628828 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 306 ms_handle_reset con 0x5640ba3d1c00 session 0x5640ba1cafc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 88670208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 88670208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 307 ms_handle_reset con 0x5640ba3d5c00 session 0x5640b7d17a40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 88653824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 88653824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 88653824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338919424 unmapped: 88653824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338927616 unmapped: 88645632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338935808 unmapped: 88637440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 88621056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338960384 unmapped: 88612864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9d9f000/0x0/0x4ffc00000, data 0x16dbc7e/0x188b000, compress 0x0/0x0/0x0, omap 0x7ad88, meta 0x14555278), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679845 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.541948318s of 32.050785065s, submitted: 18
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 338976768 unmapped: 88596480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 308 ms_handle_reset con 0x5640be673400 session 0x5640b9106fc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9d9d000/0x0/0x4ffc00000, data 0x16dd84b/0x188d000, compress 0x0/0x0/0x0, omap 0x7bc24, meta 0x145543dc), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9d9d000/0x0/0x4ffc00000, data 0x16dd84b/0x188d000, compress 0x0/0x0/0x0, omap 0x7bc24, meta 0x145543dc), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681409 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9d9d000/0x0/0x4ffc00000, data 0x16dd84b/0x188d000, compress 0x0/0x0/0x0, omap 0x7bc24, meta 0x145543dc), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339001344 unmapped: 88571904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339025920 unmapped: 88547328 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339034112 unmapped: 88539136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339042304 unmapped: 88530944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339042304 unmapped: 88530944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339042304 unmapped: 88530944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339050496 unmapped: 88522752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339058688 unmapped: 88514560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339066880 unmapped: 88506368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339066880 unmapped: 88506368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339066880 unmapped: 88506368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339075072 unmapped: 88498176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339083264 unmapped: 88489984 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339099648 unmapped: 88473600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 88465408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 88465408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 88465408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339107840 unmapped: 88465408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339116032 unmapped: 88457216 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339116032 unmapped: 88457216 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 88449024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 88449024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339132416 unmapped: 88440832 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339132416 unmapped: 88440832 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339132416 unmapped: 88440832 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339140608 unmapped: 88432640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 88424448 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339156992 unmapped: 88416256 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339156992 unmapped: 88416256 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339156992 unmapped: 88416256 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 88408064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 88408064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339165184 unmapped: 88408064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339173376 unmapped: 88399872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339181568 unmapped: 88391680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339189760 unmapped: 88383488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339189760 unmapped: 88383488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339189760 unmapped: 88383488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 88375296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 88375296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 88375296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339197952 unmapped: 88375296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339206144 unmapped: 88367104 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 88358912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 88358912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339214336 unmapped: 88358912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339222528 unmapped: 88350720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339230720 unmapped: 88342528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 88334336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 88326144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 88326144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 88326144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339247104 unmapped: 88326144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 88309760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 88309760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 88309760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 88309760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 88301568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 88285184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339288064 unmapped: 88285184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 88276992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 88276992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 88276992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339296256 unmapped: 88276992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 88268800 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339312640 unmapped: 88260608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339329024 unmapped: 88244224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339329024 unmapped: 88244224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339329024 unmapped: 88244224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339337216 unmapped: 88236032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339337216 unmapped: 88236032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339337216 unmapped: 88236032 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339345408 unmapped: 88227840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 88219648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 88219648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 88211456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 88203264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 88195072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 88186880 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 88178688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 88170496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 88170496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 88170496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 88170496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 88154112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 88154112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 88154112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 88154112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 88137728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 88129536 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 88121344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 88121344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 88121344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 88113152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 88113152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 88113152 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 88104960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 88096768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 88088576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 88080384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 88080384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 88080384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 88080384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 88072192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 88072192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 88072192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 88064000 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 88064000 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 88047616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339542016 unmapped: 88031232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 88023040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 88023040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 88006656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 88006656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 88006656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 88006656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 87998464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 87998464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 87998464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 87998464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 87990272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 87990272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 87990272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339582976 unmapped: 87990272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 87982080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 87982080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 87982080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 87982080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 87973888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 87973888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 87973888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 87973888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 87965696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 87965696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339607552 unmapped: 87965696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 87957504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 87949312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 87949312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 87949312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 87949312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 87941120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339640320 unmapped: 87932928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 87924736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 87924736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 87908352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 87908352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 87908352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 87908352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 87900160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 87900160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 87900160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 87900160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339681280 unmapped: 87891968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339689472 unmapped: 87883776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 87867392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 87859200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 87842816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 87842816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 87842816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339738624 unmapped: 87834624 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 87826432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 87826432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 87826432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 87818240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339763200 unmapped: 87810048 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 87801856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 87801856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 87801856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 87785472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339795968 unmapped: 87777280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 87769088 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 87760896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 87760896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 87760896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 87744512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 87736320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 87719936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 87719936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 87719936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 87711744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 87703552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 87695360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339886080 unmapped: 87687168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339886080 unmapped: 87687168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 87678976 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 87670784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.3 total, 600.0 interval
Cumulative writes: 47K writes, 183K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.17 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 507 writes, 1307 keys, 507 commit groups, 1.0 writes per commit group, ingest: 0.54 MB, 0.00 MB/s
Interval WAL: 507 writes, 229 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 87670784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 87670784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 87662592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 87662592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 87662592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 87662592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 87654400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 87654400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 87654400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339935232 unmapped: 87638016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339935232 unmapped: 87638016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339935232 unmapped: 87638016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 87629824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 87629824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 87629824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: mgrc ms_handle_reset ms_handle_reset con 0x5640ba3c9c00
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3361718077
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3361718077,v1:192.168.122.100:6801/3361718077]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: mgrc handle_mgr_configure stats_period=5
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 87613440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 87613440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 87613440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 87613440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 ms_handle_reset con 0x5640c0286400 session 0x5640b7e0ec40
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 ms_handle_reset con 0x5640b99f9400 session 0x5640b7e0e1c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 ms_handle_reset con 0x5640b9534c00 session 0x5640b96801c0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339968000 unmapped: 87605248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339968000 unmapped: 87605248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339968000 unmapped: 87605248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339968000 unmapped: 87605248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 87597056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684183 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 87588864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 434.687835693s of 435.170562744s, submitted: 49
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e9d9a000/0x0/0x4ffc00000, data 0x16df2ca/0x1890000, compress 0x0/0x0/0x0, omap 0x7bd1b, meta 0x145542e5), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 309 handle_osd_map epochs [310,310], i have 310, src has [1,310]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 310 ms_handle_reset con 0x5640b9cd1400 session 0x5640bc556380
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340025344 unmapped: 87547904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623591 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340025344 unmapped: 87547904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 310 heartbeat osd_stat(store_statfs(0x4eaa08000/0x0/0x4ffc00000, data 0xa70e97/0xc22000, compress 0x0/0x0/0x0, omap 0x7bda8, meta 0x14554258), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340189184 unmapped: 87384064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 311 ms_handle_reset con 0x5640b7079800 session 0x5640ba1cafc0
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340213760 unmapped: 87359488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 87351296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 311 heartbeat osd_stat(store_statfs(0x4eb205000/0x0/0x4ffc00000, data 0x272a87/0x425000, compress 0x0/0x0/0x0, omap 0x7c1ba, meta 0x14553e46), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 87351296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3585501 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 87351296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 87351296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 311 heartbeat osd_stat(store_statfs(0x4eaa07000/0x0/0x4ffc00000, data 0xa72a87/0xc25000, compress 0x0/0x0/0x0, omap 0x7c397, meta 0x14553c69), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 311 handle_osd_map epochs [312,312], i have 312, src has [1,312]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340213760 unmapped: 87359488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 312 ms_handle_reset con 0x5640bfd03400 session 0x5640ba1d6000
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 ms_handle_reset con 0x5640b7155400 session 0x5640be058700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340246528 unmapped: 87326720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340246528 unmapped: 87326720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636406 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea9fc000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea9fc000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636406 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea9fc000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea9fc000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.133428574s of 18.694644928s, submitted: 80
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 87277568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341401600 unmapped: 86171648 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 86130688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 86122496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 86114304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 86106112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 heartbeat osd_stat(store_statfs(0x4eaa00000/0x0/0x4ffc00000, data 0xa760c5/0xc2c000, compress 0x0/0x0/0x0, omap 0x7d460, meta 0x14552ba0), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635078 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.926372528s of 45.250740051s, submitted: 112
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 86097920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 313 handle_osd_map epochs [314,314], i have 314, src has [1,314]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 314 ms_handle_reset con 0x5640ba3dd000 session 0x5640b7d4c700
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 86032384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 314 heartbeat osd_stat(store_statfs(0x4eb1fc000/0x0/0x4ffc00000, data 0x277c92/0x42e000, compress 0x0/0x0/0x0, omap 0x7d884, meta 0x1455277c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3597094 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 86032384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 314 heartbeat osd_stat(store_statfs(0x4eb1fc000/0x0/0x4ffc00000, data 0x277c92/0x42e000, compress 0x0/0x0/0x0, omap 0x7d884, meta 0x1455277c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 86032384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 314 heartbeat osd_stat(store_statfs(0x4eb1fc000/0x0/0x4ffc00000, data 0x277c92/0x42e000, compress 0x0/0x0/0x0, omap 0x7d884, meta 0x1455277c), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3597094 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 86024192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.688928604s of 11.144803047s, submitted: 42
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341565440 unmapped: 86007808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341565440 unmapped: 86007808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 85999616 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 85983232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341598208 unmapped: 85975040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341598208 unmapped: 85975040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341598208 unmapped: 85975040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341606400 unmapped: 85966848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341606400 unmapped: 85966848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341606400 unmapped: 85966848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341606400 unmapped: 85966848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 85958656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 85958656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 85950464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 85942272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 85942272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 85942272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 85942272 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 85934080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 85934080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 85934080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 85934080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 85925888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 85917696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 85917696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 85909504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 85901312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 85884928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 85876736 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 85860352 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 85778432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'config diff' '{prefix=config diff}'
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'config show' '{prefix=config show}'
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'counter dump' '{prefix=counter dump}'
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'counter schema' '{prefix=counter schema}'
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341737472 unmapped: 85835776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341737472 unmapped: 85835776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'log dump' '{prefix=log dump}'
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341762048 unmapped: 85811200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'perf dump' '{prefix=perf dump}'
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'perf schema' '{prefix=perf schema}'
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 85688320 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 85680128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 85671936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341917696 unmapped: 85655552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 85647360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 85639168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 85622784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 85614592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 85606400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 85598208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 85590016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 85573632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 85565440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 85565440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 85565440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 85565440 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 85557248 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 85549056 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342032384 unmapped: 85540864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342032384 unmapped: 85540864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342032384 unmapped: 85540864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342032384 unmapped: 85540864 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 85532672 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 85524480 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 85516288 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 85516288 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 85508096 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342073344 unmapped: 85499904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342073344 unmapped: 85499904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342073344 unmapped: 85499904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342073344 unmapped: 85499904 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 85491712 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 85483520 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 85467136 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 85458944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 85458944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 85458944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 85458944 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 85450752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 85450752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 85450752 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 85442560 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342138880 unmapped: 85434368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342138880 unmapped: 85434368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342138880 unmapped: 85434368 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 85426176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 85426176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 85426176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 85426176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 85426176 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342163456 unmapped: 85409792 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 85401600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 85401600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 85401600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 85401600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 85401600 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 85393408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 85393408 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 85385216 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 85377024 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 85368832 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 85368832 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 85360640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 85360640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 85360640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 85360640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 85360640 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342237184 unmapped: 85336064 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 85327872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 85327872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 85327872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 85327872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 85327872 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342253568 unmapped: 85319680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342253568 unmapped: 85319680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342253568 unmapped: 85319680 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 85311488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 85311488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 85311488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 85311488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 85311488 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 85303296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 85303296 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 85295104 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 85295104 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 85295104 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 85295104 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 85286912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 85286912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 85286912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 85286912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 85286912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 85270528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 85270528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 85270528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 85262336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 85262336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 85262336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 85262336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 85262336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 85254144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 85237760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 85229568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 85221376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 85221376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 85221376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342360064 unmapped: 85213184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342360064 unmapped: 85213184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342368256 unmapped: 85204992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342368256 unmapped: 85204992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342368256 unmapped: 85204992 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 85188608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 85188608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 85188608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 85188608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 85188608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 85188608 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 85180416 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 85180416 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 85180416 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 85180416 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 85180416 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342401024 unmapped: 85172224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342401024 unmapped: 85172224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342401024 unmapped: 85172224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342401024 unmapped: 85172224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342401024 unmapped: 85172224 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342417408 unmapped: 85155840 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342433792 unmapped: 85139456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342433792 unmapped: 85139456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342433792 unmapped: 85139456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342433792 unmapped: 85139456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342433792 unmapped: 85139456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342433792 unmapped: 85139456 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 85131264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 85131264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 85131264 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 85123072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 85123072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 85123072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 85123072 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342458368 unmapped: 85114880 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 85106688 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 85098496 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342482944 unmapped: 85090304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342482944 unmapped: 85090304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342482944 unmapped: 85090304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342482944 unmapped: 85090304 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342491136 unmapped: 85082112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342491136 unmapped: 85082112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342491136 unmapped: 85082112 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342499328 unmapped: 85073920 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 85065728 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342515712 unmapped: 85057536 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 85049344 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 85032960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 85032960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 85032960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 85032960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 85032960 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342548480 unmapped: 85024768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342548480 unmapped: 85024768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342548480 unmapped: 85024768 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342556672 unmapped: 85016576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342556672 unmapped: 85016576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342556672 unmapped: 85016576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342556672 unmapped: 85016576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342556672 unmapped: 85016576 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 85008384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 85008384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 85008384 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 85000192 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 84992000 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 84992000 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 84992000 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342589440 unmapped: 84983808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342589440 unmapped: 84983808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342589440 unmapped: 84983808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342589440 unmapped: 84983808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342589440 unmapped: 84983808 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 84967424 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 84967424 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 84967424 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 84967424 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 84967424 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 84959232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 84959232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 84959232 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342622208 unmapped: 84951040 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 84942848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 84942848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 84942848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 84942848 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 84934656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 84934656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 84934656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 84934656 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342646784 unmapped: 84926464 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342663168 unmapped: 84910080 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 84901888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 84901888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 84901888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 84901888 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 84893696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342679552 unmapped: 84893696 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342687744 unmapped: 84885504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342687744 unmapped: 84885504 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342695936 unmapped: 84877312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342695936 unmapped: 84877312 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342704128 unmapped: 84869120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342704128 unmapped: 84869120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342704128 unmapped: 84869120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342704128 unmapped: 84869120 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342712320 unmapped: 84860928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342712320 unmapped: 84860928 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 84844544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 84844544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 84844544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 84844544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 84844544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 84844544 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342745088 unmapped: 84828160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342745088 unmapped: 84828160 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 84819968 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 84811776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 84811776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 84811776 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 84803584 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 84803584 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 84803584 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 84803584 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 84795392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 84795392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 84795392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 84795392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 84795392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 84795392 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 84787200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 84787200 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 84770816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 84770816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 84770816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 84770816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 84770816 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 84762624 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 84762624 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 84754432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 84754432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 84754432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 84754432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 84754432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 84754432 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 84746240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 84746240 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 84729856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.3 total, 600.0 interval
Cumulative writes: 48K writes, 184K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
Cumulative WAL: 48K writes, 17K syncs, 2.73 writes per sync, written: 0.17 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 448 writes, 1100 keys, 448 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
Interval WAL: 448 writes, 201 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.3 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.3 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5640b54d78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.3 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 84729856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 84729856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 84729856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 84729856 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 84721664 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 84721664 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342859776 unmapped: 84713472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342859776 unmapped: 84713472 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 84705280 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 84688896 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 84680704 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342900736 unmapped: 84672512 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 84656128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 84656128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 84656128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 84656128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 84656128 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342925312 unmapped: 84647936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342925312 unmapped: 84647936 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 84639744 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 84631552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 84631552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 84631552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 84631552 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342949888 unmapped: 84623360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342949888 unmapped: 84623360 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342958080 unmapped: 84615168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342958080 unmapped: 84615168 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 84598784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 84598784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599868 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 84598784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 544.517883301s of 544.549804688s, submitted: 13
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 84598784 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342982656 unmapped: 84590592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342982656 unmapped: 84590592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342982656 unmapped: 84590592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342982656 unmapped: 84590592 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342990848 unmapped: 84582400 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342999040 unmapped: 84574208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342999040 unmapped: 84574208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342999040 unmapped: 84574208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 342999040 unmapped: 84574208 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343007232 unmapped: 84566016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.140014648s of 11.336414337s, submitted: 38
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343007232 unmapped: 84566016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343007232 unmapped: 84566016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343007232 unmapped: 84566016 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599220 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343015424 unmapped: 84557824 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 343023616 unmapped: 84549632 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340238336 unmapped: 87334912 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340246528 unmapped: 87326720 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 87318528 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 87310336 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 87302144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 87302144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 87302144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 87302144 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 87293952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 87293952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 87293952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 87293952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 87293952 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 87285760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 87285760 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 87277568 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 87269376 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 87261184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 87261184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599148 data_alloc: 218103808 data_used: 213025
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 87261184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 87261184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 87261184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 87261184 heap: 427573248 old mem: 2845415832 new mem: 2845415832
Jan 27 09:57:44 np0005597378 ceph-osd[86941]: osd.1 315 heartbeat osd_stat(store_statfs(0x4eb1fb000/0x0/0x4ffc00000, data 0x279711/0x431000, compress 0x0/0x0/0x0, omap 0x7d97c, meta 0x14552684), peers [0,2] op hist [])
